14:01:16 <slaweq> #startmeeting networking
14:01:17 <openstack> Meeting started Tue May 11 14:01:16 2021 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:19 <slaweq> hi
14:01:20 <openstack> The meeting name has been set to 'networking'
14:01:22 <mlavalle> o/
14:01:38 <obondarev> hi
14:01:42 <njohnston> o/
14:01:45 <rubasov> hi
14:01:51 <ralonsoh> hi
14:02:06 <slaweq> welcome everyone
14:02:12 <slaweq> let's start with first topic
14:02:21 <slaweq> #topic announcements
14:02:50 <slaweq> According to the Xena calendar, the first milestone is May 24th
14:02:58 <slaweq> so in about 2 weeks from now
14:03:11 <slaweq> next one
14:03:14 <slaweq> We have a bunch of security bugs open http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022353.html
14:03:28 <slaweq> I will need some help with triaging those LPs
14:04:48 <mlavalle> I'll help
14:05:20 <slaweq> thx
14:05:34 <slaweq> and that's all announcements from me for today
14:05:34 <mlavalle> any pressing deadline?
14:05:45 <mlavalle> for the triaging I mean
14:05:58 <slaweq> mlavalle: no, but as those are security issues, we should probably do it in some reasonable time
14:06:11 <amotoki> they are public security bugs so there is no specific deadline, but it would be nice if we can triage them.
14:06:18 <amotoki> I will look into them too.
14:06:23 <slaweq> thx amotoki
14:06:25 <mlavalle> ok, I'll try to go over as many as possible by early next week
14:06:40 <slaweq> thx both of You
14:06:52 <slaweq> anything else You want to share with the team today?
14:07:44 <slaweq> if not, let's move on to the next topic
14:07:51 <slaweq> #topic Blueprints
14:08:05 <slaweq> BPs for Xena-1 are on the list https://bugs.launchpad.net/neutron/+milestone/xena-1
14:08:14 <slaweq> do You have any updates about them?
14:09:45 <slaweq> ok, from my side I just want to ask for reviews of patches related to secure-rbac https://review.opendev.org/q/topic:%2522secure-rbac%2522+(status:open+OR+status:merged)+project:openstack/neutron
14:09:56 <slaweq> there are still some of them which need to be checked
14:10:02 <slaweq> but fortunately not a lot :)
14:11:13 <slaweq> please also check the list of open specs and review some https://review.opendev.org/q/project:openstack/neutron-specs+status:open
14:11:20 <slaweq> I will try to review them too
14:11:37 <slaweq> especially those proposed by gibi, lajoskatona and rubasov :)
14:12:04 <gibi> slaweq: thanks :)
14:12:07 <rubasov> thank you
14:12:17 <slaweq> yw :)
14:12:57 <mlavalle> any of those specs we should prioritize?
14:13:23 <mlavalle> there are a few
14:13:51 <slaweq> IIUC what was said during the PTG, all of them are equally important
14:13:58 <slaweq> is that correct gibi, rubasov?
14:14:10 <rubasov> good question
14:14:28 <rubasov> right now I don't have an order in my mind
14:14:39 <gibi> rubasov: do we have some dependency order between the l3 specs?
14:14:53 <rubasov> but I'll come back if I realize some order
14:14:53 <gibi> at least the pps spec is independent from the rest
14:15:07 <mlavalle> the reason I ask is that in the list I got things like https://review.opendev.org/c/openstack/neutron-specs/+/601320
14:15:39 <slaweq> mlavalle: no, I listed all opened specs
14:15:41 <mlavalle> which I don't remember we discussed in the PTG
14:15:44 <slaweq> so let's do it that way
14:16:13 <rubasov> that one is not from us
14:16:15 <slaweq> I will go over all open specs tonight and set review-priority to +1 for those which are important for the Xena cycle
14:16:16 <mlavalle> rubasov, gibi: don't worry too much about prioritizing your specs
14:16:24 <slaweq> mlavalle: will that be ok?
14:16:25 <gibi> mlavalle: ack
14:16:49 <mlavalle> rubasov, gibi: I'll look at your specs that we recently discussed
14:16:59 <gibi> thank you
14:17:01 <rubasov> thank you
14:17:24 <mlavalle> slaweq: it would be helpful, but please don't sweat it too much. I was just surprised at some of the things that came out in the list
14:17:38 <slaweq> sure
14:17:52 <mlavalle> I have enough with this clarification
14:18:40 <slaweq> k
14:18:43 <slaweq> thx for asking
14:19:00 <lajoskatona> Hi, sorry for being late
14:19:40 <slaweq> if there are no other updates about BPs, I think we can move on
14:19:48 <slaweq> #topic Bugs
14:19:59 <slaweq> lucasgomes was bug deputy two weeks ago, report: http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022168.html
14:20:06 <slaweq> jlibosva was bug deputy last week but I don't see report from him
14:22:20 <slaweq> from the bugs which lucasgomes listed, there are a couple which don't have an assignee:
14:22:27 <slaweq> https://bugs.launchpad.net/neutron/+bug/1926109 - CI issue
14:22:27 <openstack> Launchpad bug 1926109 in neutron "SSH timeout (wait timeout) due to potential paramiko issue" [Critical,New]
14:22:29 <slaweq> https://bugs.launchpad.net/neutron/+bug/1926780 - CI issue
14:22:30 <openstack> Launchpad bug 1926780 in neutron "Multicast traffic scenario test is failing sometimes on OVN job" [High,Confirmed]
14:22:31 <slaweq> https://bugs.launchpad.net/neutron/+bug/1926531 - DVR issue
14:22:31 <openstack> Launchpad bug 1926531 in neutron "SNAT namespace prematurely created then deleted on hosts, resulting in removal of RFP/FPR link to FIP namespace" [Undecided,New]
14:22:33 <slaweq> https://bugs.launchpad.net/neutron/+bug/1926838 - OVN bug
14:22:33 <openstack> Launchpad bug 1926838 in neutron "[OVN] infinite loop in ovsdb_monitor" [High,New]
14:23:57 <slaweq> so if anyone has some time, it would be great if You could take a look at them :)
14:24:21 <ralonsoh> sure
14:24:27 <slaweq> any other bugs You want to discuss today?
14:26:12 <slaweq> ok, if not, let's move on
14:26:18 <slaweq> #topic On Demand Agenda
14:26:23 <slaweq> ralonsoh: You have one topic here
14:26:34 <ralonsoh> thanks
14:26:44 <ralonsoh> let me find the LP
14:26:53 <slaweq> https://bugs.launchpad.net/neutron/+bug/1926787
14:26:53 <openstack> Launchpad bug 1926787 in neutron "[DB] Neutron quota request implementation can end in a lock status" [Wishlist,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
14:26:54 <slaweq> that one?
14:26:56 <slaweq> :)
14:26:59 <ralonsoh> right!
14:27:04 <ralonsoh> and the POC: https://review.opendev.org/c/openstack/neutron/+/790060
14:27:31 <ralonsoh> in a nutshell: the current quota driver uses a global lock for each (resource, project_id)
14:27:59 <ralonsoh> as commented in the LP bug, that could lead to a lock state in the DB if the number of queries > the number of replies
14:28:23 <ralonsoh> what I'm proposing is an alternative driver 100% compatible with the current quota system
14:28:31 <ralonsoh> the main part is
14:28:49 <ralonsoh> --> https://review.opendev.org/c/openstack/neutron/+/790060/6/neutron/db/quota/driver_nolock.py
14:29:08 <ralonsoh> how the reservations are made and how the resource quota is calculated
14:29:45 <ralonsoh> instead of using a used_quota table, count the resources directly in the DB, along with the reserved ones, and make a new reservation if quota is available
14:29:56 <ralonsoh> that's all, thanks!
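A minimal sketch of the counting approach described above, assuming illustrative table and function names rather than Neutron's actual models (the real PoC is built on Neutron's SQLAlchemy/oslo.db layer): the usage count, the reservation check and the reservation insert all happen inside a single DB transaction, with no separate lock record.

```python
import sqlite3
import uuid


def make_reservation(conn, project_id, resource, amount, quota_limit):
    """Reserve `amount` units of `resource` for `project_id`, or return None.

    The usage count, the active reservations and the new reservation are all
    handled inside one transaction, so no external lock row is needed.
    """
    with conn:  # opens a transaction; commits on success, rolls back on error
        used = conn.execute(
            "SELECT COUNT(*) FROM resources"
            " WHERE project_id = ? AND resource = ?",
            (project_id, resource)).fetchone()[0]
        reserved = conn.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM reservations"
            " WHERE project_id = ? AND resource = ?",
            (project_id, resource)).fetchone()[0]
        if used + reserved + amount > quota_limit:
            return None  # over quota, nothing written
        reservation_id = str(uuid.uuid4())
        conn.execute(
            "INSERT INTO reservations (id, project_id, resource, amount)"
            " VALUES (?, ?, ?, ?)",
            (reservation_id, project_id, resource, amount))
        return reservation_id


# usage example with an in-memory DB
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resources (id TEXT, project_id TEXT, resource TEXT)")
conn.execute("CREATE TABLE reservations "
             "(id TEXT, project_id TEXT, resource TEXT, amount INTEGER)")
print(make_reservation(conn, "proj-a", "port", 2, quota_limit=5))   # a uuid
print(make_reservation(conn, "proj-a", "port", 10, quota_limit=5))  # None
```

In a real driver, concurrent requests racing past the quota check would be handled by the DB engine's transaction isolation plus a retry on conflict; that part is omitted from this sketch.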
14:30:21 <obondarev> ralonsoh, does lock state affect all DB engines or just specific ones?
14:30:38 <ralonsoh> that depends on the engine used
14:30:53 <ralonsoh> but by default, if you use InnoDB tables, you'll have this lock
14:31:02 <obondarev> got it
14:31:04 <obondarev> thanks
14:31:27 <ralonsoh> what I'm trying to do is to use the DB transactionality, instead of creating a global lock (in the DB)
14:31:40 <obondarev> so this new approach gives a performance improvement as well, right ralonsoh?
14:31:47 <ralonsoh> for sure!
14:31:51 <obondarev> nice!
14:31:52 <mlavalle> +1
14:32:40 <ralonsoh> thanks a lot, I'll push a new patch and a periodic CI job to use this driver
14:32:50 <slaweq> one question - what is the purpose of keeping both drivers in tree?
14:33:07 <ralonsoh> well, I think first we need to test it
14:33:13 <slaweq> do we want to deprecate old one at some point?
14:33:24 <ralonsoh> I think so
14:33:28 <slaweq> or keep both forever as they are maybe better/worse for some use cases?
14:33:37 <lajoskatona> If we need to keep both for some reason, a config option could be added
14:33:45 <ralonsoh> we already have it
14:34:01 <slaweq> lajoskatona: there is already a config option to specify which quota driver should be used
14:34:03 <lajoskatona> ok, that's done:-)
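For reference, the existing option lives in the [quotas] section of neutron.conf; the class path for the new driver below is only inferred from the PoC file neutron/db/quota/driver_nolock.py and may differ in the final patch.

```ini
[quotas]
# current default, lock-based driver
quota_driver = neutron.db.quota.driver.DbQuotaDriver

# the no-lock driver from the PoC (class name assumed from the file path,
# not yet confirmed by a merged patch)
# quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver
```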
14:34:11 <ralonsoh> but I don't see any use case where the old one is better
14:34:33 <ralonsoh> what I can do, if you want
14:34:39 <ralonsoh> is to change the default one
14:34:49 <slaweq> maybe we should simply replace old one with that new one?
14:34:51 <ralonsoh> that will test it in any CI job
14:35:03 <slaweq> of course it will be tested in all our ci jobs :)
14:35:08 <ralonsoh> as you wish
14:35:13 <ralonsoh> do you want me to replace it?
14:35:13 <slaweq> I'm just asking :)
14:35:26 <ralonsoh> I think we can keep both for now
14:35:31 <amotoki> the config quota driver can be deprecated but our UT depends on it.
14:35:34 <ralonsoh> and next release, when tested, remove the old one
14:35:49 <amotoki> before we can drop the config driver, we need to revisit our UT
14:35:49 <obondarev> I'd give some time for the new driver to prove its validity :)
14:36:02 <ralonsoh> ^^ exactly
14:36:19 <slaweq> my main concern is that if it is not the default, nobody will really test it
14:36:28 <slaweq> but maybe I'm wrong (hopefully)
14:36:34 <ralonsoh> I can change the default one to the new one
14:36:49 <ralonsoh> that will test this driver in all CI jobs
14:37:06 <ralonsoh> and in Y, deprecate the old one
14:37:19 <obondarev> +1
14:37:58 <slaweq> IMHO that sounds like a good plan - people can test the new driver and, in case of issues, there is always an easy way to get back to the old one
14:38:19 <slaweq> if all goes fine, we can simply deprecate and later delete the old one
14:38:43 <ralonsoh> thank you all
14:39:29 <mlavalle> +1
14:40:14 <slaweq> ralonsoh: thx for proposing that, and let's go with that :)
14:40:19 <ralonsoh> thanks
14:40:41 <slaweq> any other topics You want to discuss today?
14:40:59 <slaweq> if not, I will give You about 20 minutes back :)
14:41:41 <slaweq> ok, so thx for attending the meeting
14:41:44 <ralonsoh> bye
14:41:46 <mlavalle> o/
14:41:49 <slaweq> ahh, one more thing
14:41:53 <slaweq> sorry, I forgot
14:41:56 <slaweq> bug deputy
14:41:56 <mlavalle> LOL
14:42:06 <slaweq> this week it's obondarev's turn
14:42:14 <slaweq> and next week we are starting new round
14:42:18 <obondarev> ack
14:42:27 <slaweq> so please check https://wiki.openstack.org/wiki/Network/Meetings#Bug_deputy
14:42:52 <slaweq> and if it's not good for You, please let me know so I can change it if needed
14:42:59 <slaweq> now, it's all done :)
14:43:10 <mlavalle> \o/
14:43:12 <slaweq> thx for attending the meeting and see You all online
14:43:17 <slaweq> o/
14:43:21 <slaweq> #endmeeting