14:00:16 #startmeeting networking
14:00:16 Meeting started Tue May 28 14:00:16 2024 UTC and is due to finish in 60 minutes. The chair is haleyb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:16 The meeting name has been set to 'networking'
14:00:18 Ping list: bcafarel, elvira, frickler, mlavalle, mtomaska, obondarev, slaweq, tobias-urdin, ykarel, lajoskatona, jlibosva, averdagu, amotoki, haleyb, ralonsoh
14:00:19 \o
14:00:22 o/
14:00:26 o/
14:00:28 o/
14:00:31 o/
14:00:39 o/
14:00:54 o/
14:00:59 o/
14:01:02 o/
14:01:21 #topic announcements
14:01:49 We are now in Dalmatian release week (R-18)
14:01:51 #link https://releases.openstack.org/dalmatian/schedule.html
14:02:35 so libraries have been released
14:03:01 Reminder: if you have a topic for the drivers meeting on Fridays, please add it to the wiki @ https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
14:03:55 I really had no other announcements
14:04:02 I have one small one
14:04:06 sure
14:04:14 Next Monday is OpenInfra Day Budapest: https://oideurope2024.openinfra.dev/hungary/?_ga=2.48816486.198551210.1716904953-1866571429.1693922736
14:04:34 if you have a chance to be there you are my guest for a beer :-)
14:05:24 I wish I could be there but no chance :/
14:05:33 i was just in Europe, but had to come back home, so no beer for me :(
14:05:58 timing is critical :-)
14:06:31 yes, enjoy the meet-up!
14:07:04 any other announcements?
14:07:28 not from me, thanks
14:07:54 i actually might be out some of the day if anyone is looking for me
14:08:16 #topic bugs
14:08:31 i was the deputy last week, here is a link to my report
14:08:39 #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/JJ5FDIAWXMLG5LGZC4G4E5Y72G26ECFB/
14:09:09 i had a couple of bugs to discuss
14:09:21 #link https://bugs.launchpad.net/neutron/+bug/2066989
14:09:29 Virtual function is being attached to port regardless of the exclude_devices configuration
14:09:55 i asked for more information last week but nothing yet
14:10:19 didn't know if someone had seen something like this before or maybe just an edge case?
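(For context on bug 2066989 above: exclude_devices in the SR-IOV agent configuration maps a network device to the VF PCI addresses that must never be handed out to ports. The sketch below only illustrates that expected filtering behaviour; the helper functions are hypothetical, not neutron's actual agent code, and the config value format is assumed from the documented sriov_agent.ini style.)

    # Hypothetical illustration of the expected exclude_devices semantics,
    # assuming a value like "eth3:0000:07:00.2;0000:07:00.3" (device name,
    # then a ';'-separated list of excluded VF PCI addresses).

    def parse_exclude_devices(value):
        """Map each device name to its set of excluded VF PCI addresses."""
        excluded = {}
        for entry in value.split(','):
            dev, _, pci_list = entry.partition(':')
            excluded[dev.strip()] = {p.strip() for p in pci_list.split(';') if p.strip()}
        return excluded

    def eligible_vfs(device, all_vfs, excluded):
        """Return only the VFs that are not excluded for this device."""
        return [vf for vf in all_vfs if vf not in excluded.get(device, set())]

    excluded = parse_exclude_devices('eth3:0000:07:00.2;0000:07:00.3')
    print(eligible_vfs('eth3', ['0000:07:00.1', '0000:07:00.2'], excluded))
    # -> ['0000:07:00.1']; per the bug report, 0000:07:00.2 gets attached anyway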
14:10:37 Takashi Kajinami proposed openstack/neutron-fwaas master: DNM: Run migration tool https://review.opendev.org/c/openstack/neutron-fwaas/+/920672
14:11:54 ok, i'll just wait for the submitter's response as i don't have any sriov equipment to test on at the moment
14:12:10 next one
14:12:21 #link https://bugs.launchpad.net/neutron/+bug/2067183
14:12:22 Properly Use Network's DNS Domain in Assignment and DHCP
14:13:22 i know we have broken things before when adding DNS domain support, so wanted to get some eyes on this
14:13:46 the proposed change is at https://review.opendev.org/c/openstack/neutron/+/920459
14:14:06 I'll take a look
14:15:08 mlavalle: thanks
14:16:19 there was one more bug from me
14:16:33 #link https://bugs.launchpad.net/nova/+bug/2065790
14:16:35 Nova fails to live migrate instance with dash separated MAC
14:17:17 neutron doesn't care if you use : or - for a MAC delimiter as both are OK for the netaddr.EUI format
14:17:29 but it causes an issue in live migration
14:17:29 this smells like it's coming from somewhere near netaddr
14:17:59 but just a quick tip
14:18:55 the ask was to have neutron always use : but i'd be worried we'd break something by doing it in the API
14:19:11 plus we'd have to update every port on a DB upgrade
14:19:42 imo it would be better to document around it by saying : is the way
14:20:28 any opinions on that?
14:22:17 I have a vague memory of an old neutron bug that made me open this: https://github.com/netaddr/netaddr/issues/250
14:22:31 but having a hard time finding the neutron bug itself
14:23:18 interesting, must have been neutron with the mac prefix
14:26:34 so no opinions, i will have to look at either documenting or forcing : in some local testing when i have time
14:27:22 i did have another bug to discuss from last week
14:27:31 #link https://bugs.launchpad.net/neutron/+bug/2065821
14:27:41 this is the cover job failing with "Killed"
14:28:08 i have two updates
14:28:30 1) reverting the sqlalchemy update makes it go away
14:28:50 2) running the cover job with --concurrency 4 makes it (usually) go away
14:29:11 i think it was successful 18/20 with that change
14:29:44 and i saw today ykarel proposed a change to use swap which also helped
14:29:54 should we tell mike?..
14:31:20 yes, in the swap patch 15/15 passed
14:31:34 (found the old mac address formatting bug (https://bugs.launchpad.net/neutron/+bug/1995732) but the wrongly formatted mac was never returned by the api at that time, so it's not a precedent in any way)
14:31:37 ihrachys: yes, we should, but i still don't know if it's the root cause and the job has little info
14:32:21 maybe he's aware of something that is expected to raise the memory consumption...
14:32:41 I'd recheck a bit more and then pull Mike in; he probably would want to know at least :)
14:32:49 i noticed even with older sqlalchemy memory consumption was high when running tests, and with newer versions it increased more and hit the limit
14:32:58 i know other jobs can generate much more info in the post stage, if we even had ps output it might help
14:32:59 ++ ihrachys
14:33:36 we have memory tracker logs in devstack jobs; wonder if there's an observable difference there.
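(A quick aside on the netaddr point raised in the bug 2065790 discussion above: netaddr.EUI parses both ':'- and '-'-delimited MAC addresses to the same value, which is why neutron accepts either form, and the output dialect can be forced to the colon-separated style. This is a minimal sketch of the library behaviour only, not neutron's actual normalization code.)

    from netaddr import EUI, mac_unix_expanded

    # Both delimiters parse to the same EUI value, so neutron sees them as
    # equal even though the stored string representation differs.
    dash = EUI('fa-16-3e-aa-bb-cc')
    colon = EUI('fa:16:3e:aa:bb:cc')
    assert dash == colon

    # Forcing the lower-case, colon-separated dialect on output would be one
    # way to always hand ':' to consumers such as nova during live migration.
    dash.dialect = mac_unix_expanded
    print(str(dash))  # fa:16:3e:aa:bb:cc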
14:33:40 ykarel: right, if we were near the edge and had 8 threads running it could have gone over
14:35:16 i at least have not checked the memory tracker logs in devstack jobs, but agree it's worth checking
14:35:55 i did not see it in the job logs, unless i looked in the wrong places
14:36:13 not in this job; in devstack jobs
14:36:14 also the issue is seen only in the cover job, not in other unit-test jobs, so the coverage run is adding to memory consumption
14:38:13 ihrachys: if you have a pointer i'd appreciate it, if i can add something to this job it might help (i'll look in the devstack repo)
14:39:03 haleyb, https://31eb70649c494f0b332d-b9c14e29079b515b3d9c294293d431fb.ssl.cf1.rackcdn.com/917951/2/check/neutron-tempest-plugin-ovn/0a24577/controller/logs/screen-memory_tracker.txt
14:39:20 ^ what ihar is referring to, and this is enabled with a devstack service
14:39:30 enable_service memory_tracker
14:39:42 ah, so maybe a simple change for our testing
14:39:48 ykarel++
14:39:59 no karma bot here :(
14:40:20 it would not work directly as the cover job doesn't involve devstack, but something similar could be done
14:40:27 not sure if you can enable it in the cover job easily; but we could compare mem consumption in tempest jobs with sqla 1 and 2 and see if there's a difference in baseline / peak for neutron-server.
14:40:43 +1
14:41:29 ok, will look later/tomorrow
14:41:37 thanks for the discussion
14:41:40 any other bugs?
14:42:16 #topic specs
14:42:37 think just one open
14:42:41 #link https://review.opendev.org/c/openstack/neutron-specs/+/899210
14:42:59 lajoskatona: did your comment get addressed?
14:43:24 I have to check it
14:43:29 will do it today
14:43:35 ack, thanks
14:43:49 #topic community-goals
14:44:31 not sure if any progress was made on our goals? didn't see any patches
14:44:47 not from me
14:45:56 ok, thanks for the update
14:46:09 nothing from me either this week
14:46:22 ack
14:46:25 #topic on-demand
14:46:49 i had one topic, but will let others go first
14:47:32 ok, i'll go then
14:48:59 i noticed last week there was a little frustration that some ML2/OVS changes were taking a long time to get merged
14:49:09 (sorry for being direct)
14:49:58 i wanted to make sure we do try and spend some time on these changes, even if OVN is the center of most of our attention
14:50:28 I asked Liu to add things to the wiki that he wanted help prioritizing
14:50:39 #link https://wiki.openstack.org/wiki/Network/Meetings
14:51:23 we could have a shared dashboard with prioritized patches... :)
14:51:25 while we can't go through them all in 10 minutes, if you have knowledge in the areas these reviews touch please spend a little time on them
14:51:49 ihrachys: i think we have a dashboard like that
14:52:07 i'll just never find it in all these tabs
14:52:21 but not all of these have priority +2
14:52:35 https://review.opendev.org/dashboard/?title=Neutron+Priorities+Dashboard&foreach=(project:openstack/neutron+OR+project:openstack/neutron-lib+OR+project:openstack/neutron-tempest-plugin+OR+project:openstack/python-neutronclient+OR+project:openstack/neutron-specs+OR+project:openstack/networking-bagpipe+OR+project:openstack/networking-bgpvpn+OR+project:openstack/networking-odl
14:52:37 +OR+project:openstack/networking-ovn+OR+project:openstack/networking-sfc+OR+project:openstack/neutron-dynamic-routing+OR+project:openstack/neutron-fwaas+OR+project:openstack/neutron-fwaas-dashboard+OR+project:openstack/neutron-vpnaas+OR+project:openstack/neutron-vpnaas-dashboard+OR+project:openstack/os-ken+OR+project:openstack/ovsdbapp)+status:open&High+Priority+Changes=lab
14:52:39 el:Review-Priority=2&Priority+Changes=label:Review-Priority=1&Blocked+Reviews=label:Review-Priority=-1
14:52:46 oops, too long
14:52:50 sorry
14:53:15 I can't find it in the docs, but perhaps my grep was leaky
14:54:44 I have it finally: https://docs.openstack.org/neutron/10.0.1/dashboards/index.html
14:55:07 we can try to review some of these; but long term, ml2/ovs support will necessarily lose a lot of eyeballs - most contributors are from redhat (reviews and commits), and we are dropping ml2/ovs support completely in the next redhat builds. there will be less expertise and less motivation.
14:56:02 ihrachys: yes, and canonical defaults to OVN as well, very few are running ML2/OVS
14:56:02 No, this is the correct page: https://docs.openstack.org/neutron/latest/contributor/dashboards/index.html
14:56:52 lajoskatona: yes, thanks, and i do see one of Liu's review topics there
14:57:37 At Ericsson we default to OVS mostly at the moment
14:57:39 the point was that we still have some large deployments running ml2/ovs in China, and they still find bugs there
14:59:20 i just wanted everyone to be aware of the issue
14:59:58 we are at the end of the hour
15:00:12 thanks haleyb, good to discuss such topics in the open
15:00:22 CI meeting will start soon, video this week i believe
15:00:33 #endmeeting