14:00:17 #startmeeting networking
14:00:17 Meeting started Tue Dec 10 14:00:17 2024 UTC and is due to finish in 60 minutes. The chair is haleyb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:17 The meeting name has been set to 'networking'
14:00:19 Ping list: bcafarel, elvira, frickler, mlavalle, mtomaska, obondarev, slaweq, tobias-urdin, ykarel, lajoskatona, jlibosva, averdagu, amotoki, haleyb, ralonsoh
14:00:20 \o
14:00:24 o/
14:00:25 o/
14:00:27 hello
14:00:31 o/
14:00:36 hi all
14:00:38 \o
14:00:56 o/
14:01:25 ok, let's get started
14:01:29 #announcements
14:01:29 o/
14:01:39 #link https://releases.openstack.org/epoxy/schedule.html
14:01:47 We are currently in week R-16
14:01:56 Epoxy-2 milestone is R-12 (week of Jan 6th)
14:02:37 Reminder: If you have a topic for the drivers meeting on Friday, please add it to the wiki @ https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
14:03:07 i see one item on the wiki for this week, unfortunately i might not be able to attend
14:03:52 it actually doesn't look like an RFE anyways
14:04:24 so maybe no meeting is required
14:04:38 +1, no meeting is good meeting :-)
14:04:46 Dong Ma proposed openstack/ovn-bgp-agent master: WIP: Add the support of create kubernetes resource https://review.opendev.org/c/openstack/ovn-bgp-agent/+/937457
14:04:59 late o/
14:05:06 i will ping him after the meeting for status
14:05:06 especially on Friday afternoon :-)
14:06:03 the other thing i wanted to make everyone aware of is the holiday break, a lot of people (me included) are taking time off the next few weeks
14:06:27 * mlavalle will be off starting tomorrow
14:07:08 i was going to cancel the meeting for the 24th
14:07:43 but with E-2 coming up shortly after, i'm not sure if we will have things to discuss on 12/31
14:08:18 I won't be there on the 24th nor the 31st :)
14:08:20 like mlavalle, i will also be taking a break, but will be working a bit to make sure the wheels don't fall off
14:09:07 I also will be on PTO
14:09:43 i will put my days off on the wiki, but essentially from the 18th to the 8th i will be off, completely unavailable the first week of January
14:09:58 first time in 40 years that I take such a long break
14:10:34 mlavalle: you deserve it, and enjoy your time off
14:10:38 Dong Ma proposed openstack/ovn-bgp-agent master: WIP: Add the support of create kubernetes resource https://review.opendev.org/c/openstack/ovn-bgp-agent/+/937457
14:10:42 mlavalle: enjoy it :-)
14:11:09 so should i also cancel the meeting on the 31st?
14:11:23 ++
14:11:27 ++
14:11:31 ++
14:11:32 yes! some locations / cultures consider NY eve as much a holiday as Christmas
14:11:33 +1
14:12:19 ok, done. Meeting next week on the 17th, then next on Jan 7th
14:12:48 I will still be on the beach on the 7th, can someone volunteer to lead that one? I'm back on the 8th
14:13:14 yes I can
14:13:53 mlavalle: thanks. and that might be a busy week, being E-2 with releases being proposed
14:14:55 i will send an email to the list regarding the meetings in case someone is not here
14:15:30 last announcement i have is my weekly reminder
14:15:33 Let's continue to use the priorities dashboard for patches in the "ready to merge" state (weekly reminder)
14:16:10 any other announcements?
14:16:47 #topic bugs
14:17:00 mlavalle was the deputy this week, his report is at
14:17:05 #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/3HIN2NODJM6S3BF3LKXQW6BMHSNQPKWP/
14:17:55 i see most have owners and/or patches
14:18:15 this one does not
14:18:30 #link https://bugs.launchpad.net/neutron/+bug/2091211
14:19:05 yeah that one I marked as incomplete, but the submitter responded, so it needs follow up
14:19:36 it's on yoga/zed
14:19:49 Fernando Royo proposed openstack/ovn-octavia-provider master: DNM: debug o-api <--> ovsdbapp timeout https://review.opendev.org/c/openstack/ovn-octavia-provider/+/936952
14:21:17 first step would be to reproduce on the master branch, not sure if anyone has cycles or a setup running ml2/ovs
14:21:45 haleyb: I can check, I'll try to spare some time for it
14:22:40 lajoskatona: ack thanks. this could also be a deployment issue, that was using ansible
14:22:54 isn't it normal what is in this bug?
14:23:09 when you run tcpdump your nic is in promiscuous mode, right?
14:23:13 or am I wrong?
14:23:37 it would be bad if there were packets from some other network
14:23:47 but from the same one I think it is kind of ok, no?
14:24:21 possible, I have to check
14:24:35 in OVS? TAP ports should receive VM traffic only
14:24:46 slaweq: it could be normal, all those things you describe seem valid, i guess with port security i would not have expected it
14:25:13 it is a flat provider network
14:25:17 but port security is not filtering incoming packets based on the dst mac
14:25:33 IIRC
14:26:01 we do, we accept packets only for the destination port, mac and IP
14:26:05 anyway, I don't have cycles to work on it but to me it looks like maybe it is not really a bug :)
14:26:25 ralonsoh: in each fw driver?
14:26:41 at least with the native one
14:27:31 i will ask that question in the bug, perhaps this is with iptables hybrid? i don't want lajoskatona wasting cycles with the wrong config
14:27:56 +1
14:28:05 haleyb: ack, the fw driver can be important, I defaulted to the ovs fw driver in my mind :-)
14:28:49 lajoskatona: i posted that question, and thanks for taking a look
14:28:58 any other bugs we should discuss?
14:29:44 ihrachys is the bug deputy this week
14:30:48 i see lucasgomes as next week, but didn't you mention he's not working on openstack any more ralonsoh? should i ping him about that so he doesn't do unnecessary work?
14:30:58 right
14:31:02 I can take his place
14:32:06 ralonsoh: ok, thanks. i will ping him later about the next rotation, which is way out in March
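[For reference on the bug 2091211 discussion above: a minimal sketch of one way to re-test whether frames addressed to other MACs actually reach a port's tap device. tcpdump's -p flag keeps the NIC out of promiscuous mode, which is the distinction slaweq raised; the tap device name and MAC below are hypothetical placeholders, and this is an illustration, not the reproducer from the bug.]

    # Illustrative only: capture on a Neutron port's tap device WITHOUT
    # promiscuous mode and report frames not addressed to the port's MAC.
    # Requires root; the tap name and MAC are placeholders, not from the bug.
    import subprocess

    TAP_DEV = "tap-example"          # hypothetical tap device of the port
    PORT_MAC = "fa:16:3e:00:00:01"   # hypothetical MAC of the port

    # -p: do NOT enable promiscuous mode, so we only see frames the kernel
    #     would deliver anyway; -e: print Ethernet headers (shows dst MAC)
    cap = subprocess.run(
        ["tcpdump", "-p", "-e", "-n", "-c", "50", "-i", TAP_DEV],
        capture_output=True, text=True)

    # Rough filter: broadcast and frames involving the port's own MAC are
    # expected; anything else hints at the behavior reported in the bug.
    stray = [line for line in cap.stdout.splitlines()
             if PORT_MAC not in line and "Broadcast" not in line]
    print("\n".join(stray) or "no unexpected unicast frames seen")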
14:32:44 hopefully things are quiet those weeks anyways
14:33:16 ok, moving on
14:33:30 #topic community goals
14:33:40 i see there was a little movement on neutronclient deprecation
14:33:43 #link https://review.opendev.org/q/topic:%22bug/1999774%22
14:34:33 yes, some progress, I am lost in some of the tests in horizon for quotas, so in the worst case I'll ask for help from the horizon team
14:34:40 but slow progress anyway
14:34:53 lajoskatona: ack, at least one has a +2
14:35:12 and i was surprised py39 is still being run in the other
14:36:18 doh, although we run it too, my brain has not had coffee yet
14:37:26 the other community goal is eventlet
14:37:36 #link https://bugs.launchpad.net/neutron/+bugs?field.tag=eventlet-deprecation
14:37:47 yes, I'm working on the ovn metadata migration
14:37:56 more particularly on the metadata handler
14:38:16 right now we have 2 methods to create the handler: in a thread and spawning processes
14:38:29 (metadata_workers=0 or other number)
14:38:37 all these methods use eventlet
14:39:00 so I'm going to start migrating only the default one (because metadata_workers=0 by default)
14:39:31 and that implies refactoring everything: the http handler, the embedded wsgi server and, probably, removing the haproxy
14:39:37 5 minutes, more or less
14:39:59 the question is: I'll start with this thread model
14:40:32 so no more haproxy?
14:40:34 instead of migrating the other one that spawns several processes, which would imply creating something like the Neutron API wsgi server (or nova metadata)
14:41:00 haleyb, we don't need it if we have one single embedded server reading from the port
14:41:18 so do you agree with this strategy?
14:41:34 (maybe this deserves a mail)
14:42:20 ralonsoh: i'm trying to think of other settings we put in the haproxy conf, and if they matter (i truly don't remember at the moment)
14:42:27 like rate limiting, etc
14:42:40 yes, that's something to be considered
14:42:48 so maybe haproxy can stay
14:42:49 yes, rate limiting is important IMO
14:43:07 but with haproxy we have it only for IPv4 or IPv6, not both at the same time
14:43:09 and that could also be compatible with a future implementation using multiple processes
14:43:21 slaweq, we don't need both
14:43:25 if we could have something else working for both IP versions, that would be great
14:43:27 at the same time, I mean
14:43:55 I would prefer to implement something compatible with the current code
14:44:06 more or less, rather than adding functionality
14:44:16 sure
14:44:23 (to be honest, I really don't even know how to start right now)
14:44:38 this requires a level of refactoring... a huge one
14:44:44 so that's all
14:45:01 "5 minutes, more or less"
14:45:08 :)
14:45:14 I'll send a mail today to provide a feedback channel
14:45:19 or 6 mins
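[To make ralonsoh's two serving models concrete: a minimal sketch, not Neutron code, of the metadata_workers=0 path he describes, i.e. one embedded WSGI server answering requests from a native thread, with no eventlet and no haproxy in front. The handler, address, and port are illustrative stand-ins.]

    # Minimal sketch of the single-threaded embedded-server model; the real
    # agent would proxy each request to the Nova metadata service instead of
    # returning a canned response.
    import threading
    from wsgiref.simple_server import make_server

    def metadata_app(environ, start_response):
        # Stand-in for the metadata handler: the real one identifies the
        # instance from the request and forwards it to Nova metadata.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"metadata response placeholder\n"]

    # One embedded server reading from the port, served by a native thread,
    # with no eventlet involved (the metadata_workers=0 model).
    server = make_server("127.0.0.1", 8080, metadata_app)
    worker = threading.Thread(target=server.serve_forever, daemon=True)
    worker.start()
    # The metadata_workers=N model would instead fork N processes sharing
    # the listening socket, like the Neutron API WSGI server.
    worker.join()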
14:46:39 ralonsoh: so my next question is what do you need help with? i was thinking once there was a patch for one, we could adopt it for the others
14:46:46 or is that a wrong assumption?
14:47:01 yes, I'm starting with ovn because I have the env
14:47:06 that should work for both
14:47:22 also ovn metadata only depends on eventlet in this part of the code
14:47:38 so maybe we'll be able to create a CI job with an eventlet-free ovn metadata
14:48:52 ralonsoh: i was even thinking of the other agents too
14:49:23 well, we'll need a new oslo.service release
14:49:54 ok, and i'm assuming that will happen by E-2
14:51:26 i will just wait for now, but would like to help with at least one of the other agents once we have a framework
14:52:06 alright, we can move on
14:52:10 #topic on-demand
14:52:15 floor is open for any other topics
14:52:19 I have another eventlet topic, which is slightly related
14:52:28 frickler: sure, go ahead
14:52:29 it is about making neutron work with the latest eventlet release. currently we are still pinned to 0.36.1 in reqs, see https://review.opendev.org/c/openstack/requirements/+/933257
14:53:09 I started checking the neutron failures in the tempest job there and came up with https://review.opendev.org/c/openstack/neutron/+/937353 but this seems to be complicated still
14:53:15 the issue in tempest is not related to this
14:53:36 we are investigating the issues in https://review.opendev.org/c/openstack/neutron/+/936429
14:53:50 there are 5 patches under this one that fix several issues
14:54:12 although we are still seeing problems with high numbers of API workers; with 2 Neutron API workers it works fine
14:54:25 so I would encourage all to review these patches
14:55:01 ralonsoh: did you test any of that with updated eventlet yet?
14:55:09 no
14:56:00 we can talk after this meeting
14:56:03 so I think there are more hidden issues uncovered by newer eventlet, but I'll check your patches and see if they help with that
14:56:27 (also the tempest job is not using wsgi...)
14:56:31 https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_3c1/933257/3/check/tempest-full-py3/3c1f7c1/controller/logs/screen-q-svc.txt
14:57:36 RuntimeError: cannot notify on un-acquired lock
14:58:04 yes, I thought that this was a bug in oslo.messaging at first
14:58:14 but those errors go away with my above patch
14:59:52 we can continue discussing, just let me close the meeting
15:00:02 thanks for attending everyone and have a good week
15:00:05 #endmeeting
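[A footnote on the traceback frickler quoted: "RuntimeError: cannot notify on un-acquired lock" is the standard library's own message, raised by threading.Condition.notify() whenever it is called without holding the condition's lock. One plausible way it surfaces in a half-migrated service is native and eventlet-patched primitives getting mixed, though the log alone does not prove that. A minimal repro, independent of Neutron or eventlet:]

    # The stock CPython behavior behind the quoted error message.
    import threading

    cond = threading.Condition()
    try:
        cond.notify()      # called outside "with cond:" -- lock not held
    except RuntimeError as exc:
        print(exc)         # -> cannot notify on un-acquired lock

    with cond:
        cond.notify()      # correct usage: notify while holding the lock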