14:00:16 <haleyb> #startmeeting networking
14:00:16 <opendevmeet> Meeting started Tue May 28 14:00:16 2024 UTC and is due to finish in 60 minutes.  The chair is haleyb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:16 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:16 <opendevmeet> The meeting name has been set to 'networking'
14:00:18 <haleyb> Ping list: bcafarel, elvira, frickler, mlavalle, mtomaska, obondarev, slaweq, tobias-urdin, ykarel, lajoskatona, jlibosva, averdagu, amotoki, haleyb, ralonsoh
14:00:19 <mlavalle> \o
14:00:22 <elvira> o/
14:00:26 <obondarev> o/
14:00:28 <lajoskatona> o/
14:00:31 <ykarel> o/
14:00:39 <ihrachys> o/
14:00:54 <bcafarel> o/
14:00:59 <slaweq> o/
14:01:02 <rubasov> o/
14:01:21 <haleyb> #topic announcements
14:01:49 <haleyb> We are now in Dalmatian release week (R - 18)
14:01:51 <haleyb> #link https://releases.openstack.org/dalmatian/schedule.html
14:02:35 <haleyb> so libraries have been released
14:03:01 <haleyb> Reminder if you have a topic for the drivers meeting on Fridays, please add it to the wiki @ https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
14:03:55 <haleyb> I really had no other announcements
14:04:02 <lajoskatona> I have one small
14:04:06 <haleyb> sure
14:04:14 <lajoskatona> Next Monday is Openinfra Day Budapest: https://oideurope2024.openinfra.dev/hungary/?_ga=2.48816486.198551210.1716904953-1866571429.1693922736
14:04:34 <lajoskatona> if you have a chance to be there you are my guest to a beer :-)
14:05:24 <slaweq> I wish to be there but no chance :/
14:05:33 <haleyb> i was just in Europe, but had to come back home, so no beer for me :(
14:05:58 <lajoskatona> timing is critical :-)
14:06:31 <haleyb> yes, enjoy the meet-up!
14:07:04 <haleyb> any other announcements?
14:07:28 <lajoskatona> not from me, thanks
14:07:54 <haleyb> i actually might be out some of the day if anyone is looking for me
14:08:16 <haleyb> #topic bugs
14:08:31 <haleyb> i was the deputy last week, here is a link to my report
14:08:39 <haleyb> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/JJ5FDIAWXMLG5LGZC4G4E5Y72G26ECFB/
14:09:09 <haleyb> i had a couple of bugs to discuss
14:09:21 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2066989
14:09:29 <haleyb> Virtual function is being attached to port regardless of the exclude_devices configuration
14:09:55 <haleyb> i asked for more information last week but nothing yet
14:10:19 <haleyb> didn't know if someone had seen something like this before or maybe just an edge case?
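For context, exclude_devices is an option of the SR-IOV agent; a hypothetical sriov_agent.ini fragment (interface name and PCI addresses invented for illustration, not taken from the bug report):

```ini
[sriov_nic]
# Map the physical network to the SR-IOV-capable interface.
physical_device_mappings = physnet2:ens785f0
# VFs (by PCI address) the agent should never attach to ports;
# per the bug, an excluded VF was attached anyway.
exclude_devices = ens785f0:0000:19:0a.0;0000:19:0a.1
```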
14:10:37 <opendevreview> Takashi Kajinami proposed openstack/neutron-fwaas master: DNM: Run migration tool  https://review.opendev.org/c/openstack/neutron-fwaas/+/920672
14:11:54 <haleyb> ok, i'll just wait for the submitter's response as i don't have any sriov equipment to test on at the moment
14:12:10 <haleyb> next one
14:12:21 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2067183
14:12:22 <haleyb> Properly Use Network's DNS Domain in Assignment and DHCP
14:13:22 <haleyb> i know we have broken things adding DNS domains before, so wanted to get some eyes on this
14:13:46 <haleyb> the proposed change is at https://review.opendev.org/c/openstack/neutron/+/920459
14:14:06 <mlavalle> I'll take a look
14:15:08 <haleyb> mlavalle: thanks
14:16:19 <haleyb> there was one more bug from me
14:16:33 <haleyb> #link https://bugs.launchpad.net/nova/+bug/2065790
14:16:35 <haleyb> Nova fails to live migrate instance with dash separated MAC
14:17:17 <haleyb> neutron doesn't care if you use : or - for a MAC delimiter as both are Ok for netaddr.EUI format
14:17:29 <haleyb> but it causes an issue in live migration
14:17:29 <lajoskatona> this smells like it's coming from somewhere near netaddr
14:17:59 <lajoskatona> but just a quick tip
14:18:55 <haleyb> the ask was to have neutron always use : but i'd be worried we break something by doing it in the API
14:19:11 <haleyb> plus we'd have to update every port on a DB upgrade
14:19:42 <haleyb> imo it would be better to document around it by saying : is the way
14:20:28 <haleyb> any opinions on that?
14:22:17 <rubasov> I have a vague memory of an old neutron bug that made me open this: https://github.com/netaddr/netaddr/issues/250
14:22:31 <rubasov> but having a hard time finding the neutron bug itself
14:23:18 <haleyb> interesting, must have been neutron with the mac prefix
14:26:34 <haleyb> so no opinions, i will have to look at either documenting or forcing to : in some local testing when i have time
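For reference, netaddr.EUI parses both delimiters, which is why neutron accepts either form today. A minimal stdlib sketch of the kind of normalization being discussed (normalize_mac is a hypothetical helper, not neutron code):

```python
import re


def normalize_mac(mac: str) -> str:
    """Normalize a ':'- or '-'-delimited MAC to lowercase colon form."""
    octets = re.split(r'[:-]', mac.strip())
    if len(octets) != 6 or any(len(o) != 2 for o in octets):
        raise ValueError(f'unrecognized MAC format: {mac!r}')
    return ':'.join(o.lower() for o in octets)


# Both delimiter styles map to the same canonical string.
print(normalize_mac('FA-16-3E-AA-BB-CC'))  # fa:16:3e:aa:bb:cc
print(normalize_mac('fa:16:3e:aa:bb:cc'))  # fa:16:3e:aa:bb:cc
```

Forcing this at the API layer is exactly the compatibility risk raised above, which is why documenting ':' as the recommended delimiter may be the safer route.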
14:27:22 <haleyb> i did have another bug to discuss from last week
14:27:31 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2065821
14:27:41 <haleyb> this is the cover job failing with "Killed"
14:28:08 <haleyb> i have two updates
14:28:30 <haleyb> 1) reverting the sqlalchemy update makes it go away
14:28:50 <haleyb> 2) running the cover job with --concurrency 4 makes it (usually) go away
14:29:11 <haleyb> i think it was successful 18/20 with that change
14:29:44 <haleyb> and i saw today ykarel proposed a change to use swap which also helped
14:29:54 <ihrachys> should we tell mike?..
14:31:20 <ykarel> yes in the swap patch 15/15 passed
14:31:34 <rubasov> (found the old mac address formatting bug (https://bugs.launchpad.net/neutron/+bug/1995732) but the wrongly formatted mac was never returned by the api at that time, so it's not a precedent in any way)
14:31:37 <haleyb> ihrachys: yes, we should but i still don't know if it's the root cause and the job has little info
14:32:21 <ihrachys> maybe he's aware of something that is expected to raise the memory consumption...
14:32:41 <ihrachys> I'd recheck a bit more and then pull Mike in; he probably would want to know at least :)
14:32:49 <ykarel> i noticed even with older sqlalchemy memory consumption was high when running tests, and with newer versions it increased more and hit the limit
14:32:58 <haleyb> i know other jobs can generate much more info in the post state, if we even had ps output it might help
14:32:59 <ykarel> ++ ihrachys
14:33:36 <ihrachys> we have memory tracker logs in devstack jobs; wonder if there's an observable difference there.
14:33:40 <haleyb> ykarel: right, if we were near the edge and had 8 threads running could have gone over
14:35:16 <ykarel> i at least have not checked the memory tracker devstack logs, but agree it's worth checking
14:35:55 <haleyb> i did not see in job logs unless i looked in the wrong places
14:36:13 <ihrachys> not in this job; in devstack jobs
14:36:14 <ykarel> also the issue is seen only in the cover job, not in the other unit-test jobs, so the coverage run is adding to memory consumption
14:38:13 <haleyb> ihrachys: if you have a pointer i'd appreciate it, if i can add something to this job might help (i'll look in devstack repo)
14:39:03 <ykarel> haleyb, https://31eb70649c494f0b332d-b9c14e29079b515b3d9c294293d431fb.ssl.cf1.rackcdn.com/917951/2/check/neutron-tempest-plugin-ovn/0a24577/controller/logs/screen-memory_tracker.txt
14:39:20 <ykarel> ^ what ihar referring, and this enabled with a devstack service
14:39:30 <ykarel> enable_service memory_tracker
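For anyone wanting to try this, enabling the tracker is a one-line local.conf change (a sketch; the service writes screen-memory_tracker.txt into the job logs as in the example above):

```ini
[[local|localrc]]
# Enable devstack's memory_tracker service, which periodically logs
# system-wide and per-process memory usage during the job.
enable_service memory_tracker
```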
14:39:42 <haleyb> ah, so maybe a simple change for our testing
14:39:48 <haleyb> ykarel++
14:39:59 <haleyb> no karma bot here :(
14:40:20 <ykarel> directly it would not work as the cover job does not involve devstack, but something similar could be done
14:40:27 <ihrachys> not sure if you can enable it in cover job easily; but we could compare mem consumption in tempest jobs with sqla1 and 2 and see if there's a difference in base line / peak for neutron-server.
14:40:43 <ykarel> +1
14:41:29 <haleyb> ok, will look later/tomorrow
14:41:37 <haleyb> thanks for the discussion
14:41:40 <haleyb> any other bugs?
14:42:16 <haleyb> #topic specs
14:42:37 <haleyb> think just one open
14:42:41 <haleyb> #link https://review.opendev.org/c/openstack/neutron-specs/+/899210
14:42:59 <haleyb> lajoskatona: did your comment get addressed?
14:43:24 <lajoskatona> I have to check it
14:43:29 <lajoskatona> will do it today
14:43:35 <haleyb> ack, thanks
14:43:49 <haleyb> #topic community-goals
14:44:31 <haleyb> not sure if any progress was made on our goals? didn't see any patches
14:44:47 <lajoskatona> not from me
14:45:56 <haleyb> ok, thanks for update
14:46:09 <slaweq> nothing from me either this week
14:46:22 <haleyb> ack
14:46:25 <haleyb> #topic on-demand
14:46:49 <haleyb> i had one topic, but will let others go first
14:47:32 <haleyb> ok, i'll go then
14:48:59 <haleyb> i noticed last week there was a little frustration that some ML2/OVS changes were taking a long time to get merged
14:49:09 <haleyb> (sorry for being direct)
14:49:58 <haleyb> i wanted to make sure we do try and spend some time on these changes, even if OVN is the center of most of our attentions
14:50:28 <haleyb> I asked Liu to add things to the wiki that he wanted help prioritizing
14:50:39 <haleyb> #link https://wiki.openstack.org/wiki/Network/Meetings
14:51:23 <ihrachys> we could have a shared dashboard with prioritized patches... :)
14:51:25 <haleyb> while we can't go through them all in 10 minutes, if you have knowledge in the areas in these reviews please spend a little time on them
14:51:49 <haleyb> ihrachys: i think we have a dashboard like that
14:52:07 <haleyb> i'll just never find it in all these tabs
14:52:21 <haleyb> but not all of these have priority +2
14:52:35 <mlavalle> https://review.opendev.org/dashboard/?title=Neutron+Priorities+Dashboard&foreach=(project:openstack/neutron+OR+project:openstack/neutron-lib+OR+project:openstack/neutron-tempest-plugin+OR+project:openstack/python-neutronclient+OR+project:openstack/neutron-specs+OR+project:openstack/networking-bagpipe+OR+project:openstack/networking-bgpvpn+OR+project:openstack/networking-odl+OR+project:openstack/networking-ovn+OR+project:openstack/networking-sfc+OR+project:openstack/neutron-dynamic-routing+OR+project:openstack/neutron-fwaas+OR+project:openstack/neutron-fwaas-dashboard+OR+project:openstack/neutron-vpnaas+OR+project:openstack/neutron-vpnaas-dashboard+OR+project:openstack/os-ken+OR+project:openstack/ovsdbapp)+status:open&High+Priority+Changes=label:Review-Priority=2&Priority+Changes=label:Review-Priority=1&Blocked+Reviews=label:Review-Priority=-1
14:52:46 <mlavalle> oops, too long
14:52:50 <mlavalle> sorry
14:53:15 <lajoskatona> I can't find it in the docs, but perhaps my grep was leaky
14:54:44 <lajoskatona> I have it finally: https://docs.openstack.org/neutron/10.0.1/dashboards/index.html
14:55:07 <ihrachys> we can try to review some of these; but long term, ml2/ovs support will necessarily lose a lot of eyeballs - most contributors are from redhat (reviews and commits), and we are dropping ml2/ovs support completely in the next redhat builds. there will be less expertise and less motivation.
14:56:02 <haleyb> ihrachys: yes, and canonical defaults to OVN as well, very few running ML2/OVS
14:56:02 <lajoskatona> No this is the correct page: https://docs.openstack.org/neutron/latest/contributor/dashboards/index.html
14:56:52 <haleyb> lajoskatona: yes, thanks, and i do see one of Liu's review topics there
14:57:37 <lajoskatona> At Ericsson we default to OVS mostly at the moment
14:57:39 <haleyb> the point was that we still have some large deployments running ml2/ovs in China, and they still find bugs there
14:59:20 <haleyb> i just wanted everyone to be aware of the issue
14:59:58 <haleyb> we are at end of the hour
15:00:12 <lajoskatona> thanks haleyb, good to discuss such topics in the open
15:00:22 <haleyb> CI meeting will start soon, video this week i believe
15:00:33 <haleyb> #endmeeting