14:00:38 <haleyb> #startmeeting networking
14:00:38 <opendevmeet> Meeting started Tue Sep  3 14:00:38 2024 UTC and is due to finish in 60 minutes.  The chair is haleyb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:38 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:38 <opendevmeet> The meeting name has been set to 'networking'
14:00:40 <haleyb> Ping list: bcafarel, elvira, frickler, mlavalle, mtomaska, obondarev, slaweq, tobias-urdin, ykarel, lajoskatona, jlibosva, averdagu, amotoki, haleyb, ralonsoh
14:00:44 <ihrachys> o/
14:00:54 <obondarev> o/
14:00:56 <ralonsoh> hello
14:01:05 <mlavalle> \o
14:01:10 <lajoskatona> o/
14:01:13 <slaweq> o/
14:01:52 <rubasov> o/
14:02:19 <haleyb> #topic announcements
14:02:34 <haleyb> We are now in Dalmatian release week (R - 4)
14:03:18 <haleyb> If there are any feature requests that need merging we should discuss as an exception
14:03:41 <haleyb> cut/paste from elod's email
14:03:44 <haleyb> Focus should be on finding and fixing release-critical bugs, so that
14:03:44 <haleyb> release candidates and final versions of the 2024.2 Dalmatian
14:03:44 <haleyb> deliverables can be proposed, well ahead of the final 2024.2 Dalmatian
14:03:44 <haleyb> release date.
14:04:37 <haleyb> I am fine landing any other bug fixes as well
14:05:21 <haleyb> stable/2024.2 branches should be created soon for all not-already-branched libraries
14:05:51 <haleyb> RC1 deadline: September 13th, 2024 (R-3 week)
14:05:54 <haleyb> Final RC deadline: September 26th, 2024 (R-1 week)
14:06:06 <haleyb> Final 2024.2 Dalmatian release: October 2nd, 2024
14:06:50 <haleyb> that's a lot of info, any questions or comments on the schedule?
14:06:58 <ralonsoh> https://releases.openstack.org/dalmatian/schedule.html#d-cycle-highlights
14:07:04 <ralonsoh> I think you need to send this patch
14:09:05 <haleyb> ralonsoh: thanks for reminding me, i will propose it today or tomorrow. if anyone has a highlight feel free to ping me else i'll go through the commit messages, etc
14:10:26 <haleyb> as far as reviews, please add completed ones to the review dashboard by adding an RP +1
14:10:36 <haleyb> there are a few on the list at the moment
14:11:07 <haleyb> as it was a US holiday yesterday i have not looked since Friday
14:11:31 <lajoskatona> +1
14:11:53 <lajoskatona> I checked some from Liu but this week (and last one) I had no time for them
14:12:10 <lajoskatona> but those are big features
14:12:55 <haleyb> lajoskatona: thanks for checking those, i had been trying to make time for the metadata path extension changes as well
14:13:40 <haleyb> #link https://review.opendev.org/q/topic:%22distributed_metadata_data_path%22
14:13:54 <haleyb> the last three are the largest
14:14:59 <lajoskatona> yes not easy ones
14:15:39 <ralonsoh> I have one from me (I've increased the RP because the n-lib patch is merged and the API ref released): https://review.opendev.org/c/openstack/neutron/+/924724
14:15:57 <ralonsoh> not critical, of course
14:16:42 <slaweq> also https://review.opendev.org/q/topic:%22bug/2060916%22 are ready to review
14:17:09 <haleyb> only other announcement from me today is i will be unavailable this afternoon, not feeling great at the moment and appointment later
14:17:16 <haleyb> always seems to happen after a long weekend
14:17:36 <slaweq> haha, we call it hangover :P
14:17:50 <slaweq> I also often have it after long weekend :D
14:18:28 <slaweq> but seriously, I hope You will feel better soon haleyb :)
14:18:37 <lajoskatona> +1
14:19:50 <opendevreview> Ihar Hrachyshka proposed openstack/neutron-tempest-plugin master: Update snat_rules_apply_to_nested_networks with ovn details  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/927220
14:19:51 <haleyb> hah, you make a good point, but i did not drink at all yesterday...
14:20:31 <haleyb> any other announcements?
14:20:43 <opendevreview> Rodolfo Alonso proposed openstack/neutron master: Neutron quota engine checks the resource usage by default  https://review.opendev.org/c/openstack/neutron/+/926725
14:21:19 <haleyb> #topic bugs
14:21:30 <haleyb> mlavalle was the deputy last week, his report is at
14:21:38 <haleyb> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/ACLSJ2F6VBBBVOWDYNFZENSCEOTZACKO/
14:22:17 <haleyb> two of the highs have proposed fixes
14:22:44 <haleyb> the other is rbac related
14:22:47 <ihrachys> the second is not a fix it seems
14:22:51 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2078518
14:22:59 <slaweq> regarding the designate scenario, today I proposed a fix in the designate devstack plugin
14:23:14 <ralonsoh> no, the second is not a fix but an extra check, the bug is still open
14:23:19 <slaweq> I proposed patch https://review.opendev.org/c/openstack/designate/+/927792 which is tested in the https://review.opendev.org/c/openstack/neutron/+/926085 and neutron-tempest-plugin-designate-scenario job is passing now: https://zuul.opendev.org/t/openstack/build/092e8d0e2af24124bb873a4b9dc592aa
14:23:44 <slaweq> so now johnsom and other folks from designate team will need to review it
14:23:57 <johnsom> ack
14:24:09 <slaweq> it doesn't seem to be an issue in neutron really, so I marked it as invalid for neutron
14:24:13 <slaweq> thx johnsom
14:24:31 <haleyb> slaweq: ah, thanks for tracking that down, we should make the job voting again once it merges
14:24:36 <ihrachys> afaiu we disabled the job due to this. we'll need to re-enable when it's fixed in designate?
14:25:02 <slaweq> yes, I will propose revert of that patch which made it non-voting
14:25:09 <haleyb> ihrachys: yes, we made it non-voting to land the requirements bump
14:26:48 <opendevreview> Slawek Kaplonski proposed openstack/neutron-tempest-plugin master: Revert "Make neutron-tempest-plugin-designate-scenario non voting"  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/927834
14:27:03 <haleyb> ralonsoh: and getting back to the functional failure, i see that is assigned to you, let us know if you need any help looking
14:27:31 <opendevreview> Slawek Kaplonski proposed openstack/neutron-tempest-plugin master: Revert "Make neutron-tempest-plugin-designate-scenario non voting"  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/927834
14:27:34 <ralonsoh> I didn't close the LP bug, the patch is marked as related-bug
14:27:41 <slaweq> revert proposed ^^
14:27:50 <ralonsoh> If I see again this issue, I'll have more info about the problem
14:27:57 <ralonsoh> it is not easy to debug this error
14:28:05 <haleyb> ralonsoh: ack
14:28:23 <haleyb> brb
14:29:33 <ralonsoh> (during this break, please check this fix for postgresql: https://review.opendev.org/c/openstack/neutron/+/927801)
14:30:04 <haleyb> the last bug is another OVN nested router issue
14:30:11 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2077879
14:30:57 <haleyb> ihrachys: thanks for the notes there
14:31:43 <ihrachys> haleyb: this looks like an RFE for a different way of configuring nested snat
14:32:01 <ihrachys> not via subnets + static routes; but though gw_info on inner-router
14:33:51 <ihrachys> (this is an invitation to tag it as such and let drivers team to consider UX for these scenarios)
14:35:03 <haleyb> i will have to look at the commands he ran, but can probably tag it as an RFE and discuss friday, and hope the submitter can work on it
14:35:32 <ihrachys> thanks. fyi I don't have immediate plans to work on it :)
14:36:02 <haleyb> the other issue 2+ of us see now is that you can't add a FIP to a VM on a nested network, which looks like
14:36:07 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2072505
14:36:23 <haleyb> ihrachys: understood
14:37:40 <haleyb> those were the only bugs i had to discuss
14:38:26 <haleyb> ihrachys is the deputy this week, lucasgomes next week
14:38:46 <ihrachys> ack
14:40:36 <haleyb> is lucas around? i will look for him later this week
14:40:55 <mlavalle> I can ping him internally to make sure he is aware
14:41:18 <haleyb> mlavalle: thanks
14:41:37 <haleyb> Current bug count this week: 711, down 6 from last week
14:41:59 <haleyb> #topic community-goals
14:42:33 <haleyb> lajoskatona: i see a number of the horizon sdk patches merged?
14:42:38 <lajoskatona> yes :-)
14:43:02 <lajoskatona> and thanks for the reviews, and for the ones that are for bumping netaddr in Horizon and Cinder
14:44:11 <lajoskatona> https://review.opendev.org/q/topic:%22bump-netaddr%22
14:44:17 <haleyb> lajoskatona: np, i was surprised the netaddr bump hadn't merged, but maybe it's pushed to next cycle now?
14:44:46 <haleyb> ah, looks like it
14:44:47 <lajoskatona> I don't know, just saw the hanging patch in requirements repo and thought I can push it perhaps
14:45:16 <haleyb> it's marked -2
14:45:32 <lajoskatona> I don't feel the urge from Cinder and Horizon team (or anybody not that interested in networking) to fix something with netaddr
14:45:32 <frickler> reqs are frozen now, yes
14:45:43 <haleyb> lajoskatona: thanks for helping to fix those last projects
14:45:46 * mlavalle confirmed with lucasgomes he is ok being bugs deputy next week
14:45:52 <haleyb> mlavalle: thanks
14:46:41 <haleyb> lajoskatona: :-/ is all i can say to that
14:47:13 <lajoskatona> frickler: thanks, we come back to it in Epoxy I suppose :-)
14:48:49 <haleyb> the other community goal was eventlet removal, and i just see one patch left
14:48:55 <haleyb> #link https://review.opendev.org/c/openstack/neutron/+/924317
14:49:19 <haleyb> ralonsoh: is that ready to merge?
14:49:22 <ralonsoh> yes and that will be the last one until we have news from oslo.services and oslo.messaging
14:49:48 <ralonsoh> yes, it's ready (and then I'll need to update all the docs and names referring to the wsgi/eventlet server)
14:50:20 <mlavalle> was the failure in experimental expected?
14:50:42 <ralonsoh> which one?
14:51:06 <ralonsoh> we can discuss it in the patch
14:51:14 <ihrachys> sorry, I am clueless, but: what does it mean? are we ready to remove the dependency? or is it neutron-server specific?
14:51:35 <mlavalle> the build failed (several jobs)
14:51:40 <ralonsoh> ihrachys, I don't understand your question
14:51:50 <ralonsoh> mlavalle, I'll check them later again
14:52:32 <ihrachys> "the community goal was eventlet removal", "and i just see one patch left" I interpret it as we removed eventlet completely? or I misread?
14:52:32 <ralonsoh> this is not neutron-server specific, we are moving to wsgi to avoid using the eventlet server and because almost all projects have already moved to wsgi
14:52:49 <haleyb> #link https://review.opendev.org/c/openstack/governance/+/902585
14:52:51 <ihrachys> context of the question is `git grep eventlet` shows a lot of stuff in agents
14:52:56 <haleyb> that's the larger goal
14:52:57 <ralonsoh> no, there is one patch left for now, we are not even close to remove eventlet
14:53:26 <ralonsoh> the agent stuff should start with the oslo.service change, from using eventlet to other strategy (to be defined)
14:53:27 <ihrachys> ok ok. see, I'm clueless :) and I misread the "just one patch left" thing
14:54:04 <ihrachys> and thanks for clarification
14:54:07 <slaweq> yeah, this is much bigger goal and it will have multiple steps IIUC
14:54:21 <lajoskatona> ralonsoh, ihrachys: thanks for the details and the good questions
14:55:00 <haleyb> #topic on-demand
14:55:15 <opendevreview> Michel Nederlof proposed openstack/ovn-bgp-agent master: Fix cleanup of rules per evpn device  https://review.opendev.org/c/openstack/ovn-bgp-agent/+/927816
14:55:25 <haleyb> i did not see any topics in the agenda for this week, but if anyone has one can discuss now
14:55:35 <mlavalle> can I get another pair of eyes on https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/926503?
14:55:44 <mlavalle> it's an easy one
14:56:16 <ihrachys> i'll bite. we should promote ovs-lts jobs to gate, eventually. thoughts?
14:56:38 <ihrachys> (as in - jobs that build ovs and ovn from latest LTS branches and run tempest against it)
14:56:46 <lajoskatona> mlavalle: added to my list
14:56:55 <mlavalle> thanks!
14:57:05 <ihrachys> these are proposed as experimental here https://review.opendev.org/c/openstack/neutron/+/927221 but I'm looking at whether we should promote them to periodic and then check
14:57:58 <ralonsoh> we'll have 6 more CI hours per patch
14:58:23 <ihrachys> we'll have better coverage too
14:58:26 <ralonsoh> but makes sense to test OVN latest release
14:58:38 <ralonsoh> for sure, I'm OK with this
14:58:49 <slaweq> I'm fine adding them to the periodic queue but for gate/check I think we should maybe try to replace some existing jobs with those new ones?
14:59:11 <ralonsoh> could be another option, to avoid being warned by zuul folks again
14:59:16 <ihrachys> I am all for revisiting what we run in gate, or even in experimental (which seems like a graveyard for jobs)
14:59:33 <ihrachys> ralonsoh: were we warned? interesting
14:59:35 <slaweq> I'm not worried that much about resource usage but also about overall gate stability
14:59:44 <ralonsoh> ihrachys, we check the periodic executions every week in the CI meeting
15:00:04 <slaweq> we have seen that in the past - we had many jobs there and even when each of them was failing 10% of the time, it was almost impossible to merge anything
15:00:06 <ihrachys> what I hear is I should start with periodics and build confidence :)
15:00:37 <haleyb> i was going to say the same thing, is there something we can remove?
15:01:31 <ralonsoh> ihrachys, I think you can move these jobs to periodic too (actually by moving them to periodic we'll have them in experimental)
15:01:47 <slaweq> so my proposal for now is: let's add them to the periodic queue, and discuss in some time during the drivers meeting maybe
15:01:56 <ihrachys> ralonsoh: ack I'll do it first thing and we can revisit gate question in several months or weeks
15:01:56 <ralonsoh> +1
15:02:14 <haleyb> +1 from me
15:02:58 <haleyb> we are over time so i'll end the meeting
15:03:05 <haleyb> thanks for all the discussions
15:03:11 <ihrachys> fyi looked at a random patch and seeing 6 tempest jobs for ovs and 3 for ovn. priorities :)
15:03:52 <haleyb> ihrachys: right, and they all took 2 hours each
15:04:15 <haleyb> #endmeeting