14:00:17 <haleyb> #startmeeting networking
14:00:17 <opendevmeet> Meeting started Tue Dec 10 14:00:17 2024 UTC and is due to finish in 60 minutes.  The chair is haleyb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:17 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:17 <opendevmeet> The meeting name has been set to 'networking'
14:00:19 <haleyb> Ping list: bcafarel, elvira, frickler, mlavalle, mtomaska, obondarev, slaweq, tobias-urdin, ykarel, lajoskatona, jlibosva, averdagu, amotoki, haleyb, ralonsoh
14:00:20 <mlavalle> \o
14:00:24 <lajoskatona> o/
14:00:25 <obondarev> o/
14:00:27 <ralonsoh> hello
14:00:31 <ykarel> o/
14:00:36 <s3rj1k> hi all
14:00:38 <frickler> \o
14:00:56 <jlibosva> o/
14:01:25 <haleyb> ok, let's get started
14:01:29 <haleyb> #announcements
14:01:29 <slaweq> o/
14:01:39 <haleyb> #link https://releases.openstack.org/epoxy/schedule.html
14:01:47 <haleyb> We are currently in week R-16
14:01:56 <haleyb> Epoxy-2 milestone R-12 (week of Jan 6th)
14:02:37 <haleyb> Reminder: If you have a topic for the drivers meeting on Friday, please add it to the wiki @ https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
14:03:07 <haleyb> i see one item on the wiki for this week, unfortunately i might not be able to attend
14:03:52 <haleyb> it actually doesn't look like an RFE anyways
14:04:24 <haleyb> so maybe no meeting is required
14:04:38 <lajoskatona> +1, no meeting is good meeting :-)
14:04:46 <opendevreview> Dong Ma proposed openstack/ovn-bgp-agent master: WIP: Add the support of create kubernetes resource  https://review.opendev.org/c/openstack/ovn-bgp-agent/+/937457
14:04:59 <rubasov> late o/
14:05:06 <haleyb> i will ping him after meeting for status
14:05:06 <lajoskatona> especially on Friday afternoon :-)
14:06:03 <haleyb> the other thing i wanted to make everyone aware of is the holiday break, a lot of people (me included) are taking time off the next few weeks
14:06:27 * mlavalle will be off starting tomorrow
14:07:08 <haleyb> i was going to cancel the meeting for the 24th
14:07:43 <haleyb> but with E-2 coming up shortly after, i'm not sure if we will have things to discuss on 12/31
14:08:18 <slaweq> I won't be there on the 24th or the 31st :)
14:08:20 <haleyb> like mlavalle, i will also be taking a break, but will be working a bit to make sure the wheels don't fall off
14:09:07 <lajoskatona> I also will be on PTO
14:09:43 <haleyb> i will put my days off on the wiki, but essentially from the 18th to the 8th i will be off, completely unavailable the first week of January
14:09:58 <mlavalle> first time in 40 years that I take such a long break
14:10:34 <haleyb> mlavalle: you deserve it, and enjoy your time off
14:10:38 <opendevreview> Dong Ma proposed openstack/ovn-bgp-agent master: WIP: Add the support of create kubernetes resource  https://review.opendev.org/c/openstack/ovn-bgp-agent/+/937457
14:10:42 <lajoskatona> mlavalle: enjoy it :-)
14:11:09 <haleyb> so should i also cancel the meeting on the 31st?
14:11:23 <slaweq> ++
14:11:27 <ykarel> ++
14:11:31 <mlavalle> ++
14:11:32 <ihrachys> yes! some locations/cultures consider New Year's Eve as much a holiday as Christmas
14:11:33 <lajoskatona> +1
14:12:19 <haleyb> ok, done. Meeting next week on the 17th, then next on Jan 7th
14:12:48 <haleyb> I will still be on the beach on the 7th, can someone volunteer to lead that one? I'm back on the 8th
14:13:14 <mlavalle> yes I can
14:13:53 <haleyb> mlavalle: thanks. and that might be a busy week, being E-2, with releases being proposed
14:14:55 <haleyb> i will send an email to the list regarding the meetings in case someone is not here
14:15:30 <haleyb> last announcement i have is my weekly reminder
14:15:33 <haleyb> Let's continue to use the priorities dashboard for patches in the "ready to merge" state (weekly reminder)
14:16:10 <haleyb> any other announcements?
14:16:47 <haleyb> #topic bugs
14:17:00 <haleyb> mlavalle was the deputy this week, his report is at
14:17:05 <haleyb> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/3HIN2NODJM6S3BF3LKXQW6BMHSNQPKWP/
14:17:55 <haleyb> i see most have owners and/or patches
14:18:15 <haleyb> this one does not
14:18:30 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2091211
14:19:05 <mlavalle> yeah that one I marked as incomplete, but submitter responded, so it needs follow up
14:19:36 <haleyb> it's on yoga/zed
14:19:49 <opendevreview> Fernando Royo proposed openstack/ovn-octavia-provider master: DNM: debug o-api <--> ovsdbapp timeout  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/936952
14:21:17 <haleyb> first step would be to reproduce on master branch, not sure if anyone has cycles or a setup running ml2/ovs
14:21:45 <lajoskatona> haleyb: I can check, I try to spare some time for it
14:22:40 <haleyb> lajoskatona: ack thanks. this also could be a deployment issue, that was using ansible
14:22:54 <slaweq> isn't what's in this bug normal?
14:23:09 <slaweq> when you run tcpdump your nic is in promiscuous mode, right?
14:23:13 <slaweq> or am I wrong?
14:23:37 <slaweq> it would be bad if there were packets from some other network
14:23:47 <slaweq> but from the same one I think it is kind of ok, no?
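[note: slaweq's recollection matches tcpdump's default behavior: unless told otherwise it puts the interface into promiscuous mode, so frames addressed to other MACs on the same segment can legitimately show up in a capture. A quick way the reporter could check this (the interface name is just an example):

    # default capture: tcpdump enables promiscuous mode, so traffic for
    # other ports on the same flat network may appear in the output
    tcpdump -i eth0

    # -p / --no-promiscuous-mode: capture only traffic the NIC would accept
    # anyway (its own MAC, broadcast, subscribed multicast); if the "leaked"
    # packets disappear with -p, promiscuous mode explains the report
    tcpdump -p -i eth0
]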
14:24:21 <lajoskatona> possible, I have to check
14:24:35 <ralonsoh> in OVS? TAP ports should receive VM traffic only
14:24:46 <haleyb> slaweq: it could be normal, all those things you describe seem valid, i guess with port security i would not have expected that
14:25:13 <haleyb> it is a flat provider network
14:25:17 <slaweq> but port security is not filtering incoming packets based on the dst mac
14:25:33 <slaweq> IIRC
14:26:01 <ralonsoh> we do, we accept packets only for the destination port, mac and IP
14:26:05 <slaweq> anyway, I don't have cycles to work on it but for me it looks like maybe it is not really a bug :)
14:26:25 <slaweq> ralonsoh in each fw driver?
14:26:41 <ralonsoh> at least with native one
14:27:31 <haleyb> i will ask that question in the bug, perhaps this is with iptables_hybrid? i don't want lajoskatona wasting cycles on the wrong config
14:27:56 <mlavalle> +1
14:28:05 <lajoskatona> haleyb: ack, the fw driver can be important, I defaulted to the ovs fw driver in my mind :-)
14:28:49 <haleyb> lajoskatona: i posted that question, and thanks for taking a look
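[note: since the reported behavior may hinge on which security-group firewall driver the deployment uses, this is the setting to check before reproducing. The two values shown are the standard in-tree drivers; the file path varies by installer:

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini (path may differ per deployment)
    [securitygroup]
    # native OVS firewall: filtering implemented as OpenFlow rules on br-int,
    # the driver ralonsoh's dst port/mac/IP comment refers to
    firewall_driver = openvswitch
    # alternative: iptables rules on the hybrid veth/linuxbridge plumbing
    # firewall_driver = iptables_hybrid
]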
14:28:58 <haleyb> any other bugs we should discuss?
14:29:44 <haleyb> ihrachys is the bug deputy this week
14:30:48 <haleyb> i see lucasgomes as next week, but didn't you mention he's not working on openstack anymore, ralonsoh? should i ping him about that so he doesn't do unnecessary work?
14:30:58 <ralonsoh> right
14:31:02 <ralonsoh> I can take his place
14:32:06 <haleyb> ralonsoh: ok, thanks. i will ping him later about the next rotation which is way out in March
14:32:44 <haleyb> hopefully things are quiet those weeks anyways
14:33:16 <haleyb> ok, moving on
14:33:30 <haleyb> #topic community goals
14:33:40 <haleyb> i see there was a little movement on neutronclient deprecation
14:33:43 <haleyb> #link https://review.opendev.org/q/topic:%22bug/1999774%22
14:34:33 <lajoskatona> yes, some progress, I am lost in some of the tests in horizon for quotas, so in the worst case I'll ask for help from the horizon team
14:34:40 <lajoskatona> but slow progress anyway
14:34:53 <haleyb> lajoskatona: ack, at least one has a +2
14:35:12 <haleyb> and i was surprised py39 is still being run in the other
14:36:18 <haleyb> doh, although we run it too, my brain has not had coffee yet
14:37:26 <haleyb> other community goal is eventlet
14:37:36 <haleyb> #link https://bugs.launchpad.net/neutron/+bugs?field.tag=eventlet-deprecation
14:37:47 <ralonsoh> yes, I'm working on the ovn metadata migration
14:37:56 <ralonsoh> in particular on the metadata handler
14:38:16 <ralonsoh> right now we have 2 methods to create the handler: in a thread and spawning processes
14:38:29 <ralonsoh> (metadata_workers=0 or other number)
14:38:37 <ralonsoh> all these methods use eventlet
14:39:00 <ralonsoh> so I'm going to start migrating only the default one (because metadata_workers=0 by default)
14:39:31 <ralonsoh> and that implies refactoring everything: the http handler, the embedded wsgi server and, probably, removing the haproxy
14:39:37 <ralonsoh> 5 minutes, more or less
14:39:59 <ralonsoh> the question is: I'll start with this thread model
14:40:32 <haleyb> so no more haproxy?
14:40:34 <ralonsoh> instead of migrating the other one that spawns several processes, which would imply creating something like the Neutron API wsgi server (or nova metadata)
14:41:00 <ralonsoh> haleyb, we don't need it if we have a single embedded server listening on the port
14:41:18 <ralonsoh> so do you agree with this strategy?
14:41:34 <ralonsoh> (maybe this deserves a mail)
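[note: a sketch of the config knob ralonsoh is describing, showing the two modes; the file name is the one used by typical OVN deployments, and the default matches what he states above:

    # /etc/neutron/neutron_ovn_metadata_agent.ini
    [DEFAULT]
    # 0 (default): serve metadata from a thread inside the agent process;
    # per the plan above, this is the path being migrated off eventlet first
    # N > 0: spawn N separate metadata worker processes instead,
    # still eventlet-based today
    metadata_workers = 0
]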
14:42:20 <haleyb> ralonsoh: i'm trying to think of other settings we put in the haproxy conf, and if they matter (i truly don't remember at the moment)
14:42:27 <haleyb> like rate limiting, etc
14:42:40 <ralonsoh> yes, that's something to be considered
14:42:48 <ralonsoh> so maybe haproxy  can stay
14:42:49 <slaweq> yes, rate limiting is important IMO
14:43:07 <slaweq> but with haproxy we have it only for IPv4 or IPv6, not both at the same time
14:43:09 <ralonsoh> and that could also be compatible with a future implementation using multiple processes
14:43:21 <ralonsoh> slaweq, we don't need both
14:43:25 <slaweq> if we could have something else working for both IP versions, that would be great
14:43:27 <ralonsoh> at the same time, I mean
14:43:55 <ralonsoh> I would prefer to implement something compatible with the current code
14:44:06 <ralonsoh> more or less, rather than adding functionalities
14:44:16 <slaweq> sure
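[note: for reference, per-client rate limiting in haproxy is typically done with a stick table; a generic sketch of the idea, not the config the neutron agent actually generates (frontend/backend names and thresholds are illustrative):

    frontend metadata-in
        bind 169.254.169.254:80
        # track each source IP's HTTP request rate over a 10s window
        stick-table type ip size 10k expire 30s store http_req_rate(10s)
        http-request track-sc0 src
        # refuse clients above ~100 requests per window
        http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
        default_backend metadata-backend

a "type ip" table tracks IPv4 sources only, which echoes slaweq's point about haproxy covering only one address family at a time]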
14:44:23 <ralonsoh> (to be honest, I really don't even know how to start right now)
14:44:38 <ralonsoh> this requires a level of refactor... huge one
14:44:44 <ralonsoh> so that's all
14:45:01 <haleyb> "5 minutes, more or less"
14:45:08 <haleyb> :)
14:45:14 <ralonsoh> I'll send a mail today to provide a feedback channel
14:45:19 <ralonsoh> or 6 mins
14:46:39 <haleyb> ralonsoh: so my next question is what do you need help with? i was thinking once there was a patch for one, we could adopt it for the others
14:46:46 <haleyb> or is that a wrong assumption?
14:47:01 <ralonsoh> yes, I'm starting with ovn because I have the env
14:47:06 <ralonsoh> that should work for both
14:47:22 <ralonsoh> also ovn metadata only depends on eventlet in this part of the code
14:47:38 <ralonsoh> so maybe we'll be able to create a CI with an eventlet-free ovn metadata
14:48:52 <haleyb> ralonsoh: i was even thinking for the other agents too
14:49:23 <ralonsoh> well, we'll need a new oslo.service release
14:49:54 <haleyb> ok, and that i'm assuming will happen by E-2
14:51:26 <haleyb> i will just wait for now, but would like to help with at least one of the other agents once we have a framework
14:52:06 <haleyb> alright, we can move on
14:52:10 <haleyb> #topic on-demand
14:52:15 <haleyb> floor is open for any other topics
14:52:19 <frickler> I have another eventlet topic, which is slightly related
14:52:28 <haleyb> frickler: sure, go ahead
14:52:29 <frickler> it is about making neutron work with the latest eventlet release. currently we are still pinned to 0.36.1 in reqs see https://review.opendev.org/c/openstack/requirements/+/933257
14:53:09 <frickler> I started checking the neutron failures in the tempest job there and came up with https://review.opendev.org/c/openstack/neutron/+/937353 but this seems to be complicated still
14:53:15 <ralonsoh> the issue in tempest is not related to this
14:53:36 <ralonsoh> we are investigating the issues in https://review.opendev.org/c/openstack/neutron/+/936429
14:53:50 <ralonsoh> there are 5 patches under this one that fix several issues
14:54:12 <ralonsoh> although we are still seeing problems with a high number of API workers; with 2 Neutron APIs it works fine
14:54:25 <ralonsoh> so I would encourage all to review these patches
14:55:01 <frickler> ralonsoh: did you test any of that with updated eventlet yet?
14:55:09 <ralonsoh> no
14:56:00 <ralonsoh> we can talk after this meeting
14:56:03 <frickler> so I think there are more hidden issues uncovered by newer eventlet, but I'll check your patches and see if they help with that
14:56:27 <ralonsoh> (also the tempest job is not using wsgi...)
14:56:31 <ralonsoh> https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_3c1/933257/3/check/tempest-full-py3/3c1f7c1/controller/logs/screen-q-svc.txt
14:57:36 <haleyb> RuntimeError: cannot notify on un-acquired lock
14:58:04 <frickler> yes, I thought that this was a bug in oslo.messaging at first
14:58:14 <frickler> but those errors go away with my above patch
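[note: the error string quoted above is the standard-library guard in threading.Condition.notify(), which refuses to run unless the condition's lock is held; a minimal reproducer in plain Python, with no eventlet involved:

    import threading

    cond = threading.Condition()

    # correct usage: hold the condition's lock while notifying
    with cond:
        cond.notify()

    # incorrect: notify() without holding the lock raises the exact
    # error seen in the q-svc log
    try:
        cond.notify()
    except RuntimeError as exc:
        print(exc)  # cannot notify on un-acquired lock

under eventlet monkey-patching these primitives are replaced with green-thread versions, which is presumably why newer eventlet releases shift where this surfaces]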
14:59:52 <haleyb> we can continue discussing, just let me close the meeting
15:00:02 <haleyb> thanks for attending everyone and have a good week
15:00:05 <haleyb> #endmeeting