14:00:36 <slaweq> #startmeeting neutron_drivers
14:00:36 <opendevmeet> Meeting started Fri Aug 27 14:00:36 2021 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:36 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:36 <opendevmeet> The meeting name has been set to 'neutron_drivers'
14:00:38 <slaweq> hi
14:00:40 <mlavalle> o/
14:00:42 <ralonsoh> hello
14:00:47 <amotoki> hi
14:00:51 <thomasb06> hi
14:00:51 <njohnston> o/
14:00:56 <lajoskatona> Hi
14:01:20 <slaweq> I think we can start as we have quorum today
14:01:44 <slaweq> and I want to start today in a different way
14:01:47 <haleyb> hi
14:01:51 <slaweq> #topic On Demand Agenda
14:01:54 <slaweq> hi haleyb
14:01:57 <slaweq> welcome back :)
14:02:22 <slaweq> first - congratulations to Lajos, our new PTL!
14:02:31 <ralonsoh> congrats!!!
14:02:38 <lajoskatona> thanks
14:02:38 <amotoki> congrats, lajoskatona
14:02:40 <mlavalle> +1
14:02:50 <slaweq> and regarding that I have a question for all of You
14:02:50 <njohnston> Congratulations Lajos!
14:03:01 <slaweq> I think that lajoskatona should now be part of the drivers team
14:03:08 <ralonsoh> I think so, makes sense
14:03:13 <njohnston> +1
14:03:13 <slaweq> even if he weren't PTL, I think he deserves it
14:03:17 <slaweq> but now even more :)
14:03:32 <mlavalle> +1
14:03:41 <slaweq> do You think I should send an official email with the nomination for lajoskatona, or would it be enough if we vote here?
14:04:20 <ralonsoh> this meeting will be logged, so it is "legal"
14:04:36 <isabek> Congratulations lajoskatona !
14:04:37 <amotoki> most of the voters are here, so it looks like enough to vote here.
14:04:56 <ralonsoh> +1
14:05:04 <slaweq> so let's vote :)
14:06:11 <mlavalle> +1 again
14:06:17 <ralonsoh> +1
14:06:19 <haleyb> +1
14:06:21 <amotoki> +1
14:06:37 <slaweq> +1 of course :)
14:06:44 <slaweq> njohnston: and You?
14:07:02 <mlavalle> I would still send a message to the ML communicating the decision we just made here
14:07:17 <slaweq> mlavalle: yes, I will do it
14:07:19 <amotoki> mlavalle: totally agree
14:07:21 <mlavalle> not nominating, just communicating
14:07:39 <slaweq> but I was just hoping we won't need to wait e.g. 1 week for votes there
14:07:43 <slaweq> as we are all here :)
14:08:21 <lajoskatona> thanks for the trust :-)
14:08:48 <slaweq> I think njohnston already voted above, so - welcome to the drivers team, lajoskatona
14:08:54 <slaweq> and congratulations
14:09:13 <slaweq> let's move on
14:09:28 <slaweq> another quick topic
14:09:41 <slaweq> I already proposed a patch with the cycle highlights for Xena: https://review.opendev.org/c/openstack/releases/+/805610
14:10:01 <slaweq> please check it if You have time, especially from the English PoV :)
14:10:09 <slaweq> thx in advance
14:10:18 <slaweq> and now, let's move on to the RFEs
14:10:28 <slaweq> #topic RFEs
14:10:37 <slaweq> we have 2 RFEs for today
14:10:48 <slaweq> first one
14:10:48 <slaweq> https://bugs.launchpad.net/neutron/+bug/1936408
14:11:00 <ralonsoh> thanks
14:11:08 <ralonsoh> first of all, a link to POC: https://review.opendev.org/q/topic:%22bug%252F1936408%22+(status:open%20OR%20status:merged)
14:11:26 <ralonsoh> what I'm proposing here is *NOT* to change the default API behaviour
14:12:07 <ralonsoh> what I want here (a customer request) is to, somehow, raise an exception if we are lowering a quota limit below the current usage
14:12:31 <ralonsoh> and what I'm proposing is something like the Nova API's --force (but the opposite, because today we always force the quota update)
14:12:54 <ralonsoh> with this new parameter, --check-limit, the Neutron server will check the resource usage before updating it
14:12:57 <ralonsoh> that's all, thanks
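
For illustration, a minimal Python sketch of the server-side check ralonsoh describes above, assuming the update stays forced by default; the exception and function names below are invented for this example and are not the code from the linked POC.

# Illustrative sketch only -- not the actual Neutron implementation.
class QuotaLimitLowerThanUsage(Exception):
    """Raised when a requested quota limit is below the current usage."""


def update_quota_limit(project_id, resource, new_limit, current_usage,
                       check_limit=False):
    """Apply a quota update, optionally refusing to go below current usage.

    With check_limit=False the update is always forced, matching the
    current default API behaviour.
    """
    if check_limit and new_limit >= 0 and new_limit < current_usage:
        # -1 conventionally means "unlimited", so it is never rejected.
        raise QuotaLimitLowerThanUsage(
            "Requested limit %d for %s is below current usage %d in "
            "project %s" % (new_limit, resource, current_usage, project_id))
    # Proceed with the normal (forced) quota update, as today.
    return new_limit
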
14:14:10 <njohnston> I think this is fine; with the clarifications I am ok with the concept
14:15:08 <slaweq> ralonsoh: I was thinking about it yesterday, and about how nova behaves there
14:15:37 <slaweq> maybe You could add a "--force" flag like nova does, but for now default it to True so our API behaviour doesn't change
14:15:47 <amotoki> it sounds good as neutron API.
14:15:49 <amotoki> From the PoV of OSC, we will have three modes now: the API default behavior, --force (for nova) and --check-limit (for neutron)
14:15:57 <slaweq> in the future we can maybe communicate it more widely and change the behaviour to be the same as nova's
14:16:08 <ralonsoh> hmmmm I see slaweq's point
14:16:11 <amotoki> it may be nice if the OSC behavior converged on --force or --check-limit :)
14:16:22 <ralonsoh> to have only one parameter, force
14:16:32 <ralonsoh> and for network commands, always use "force"
14:16:42 <ralonsoh> ok ok, but this is a boolean parameter
14:16:47 <ralonsoh> we need the opposite
14:16:51 <ralonsoh> slaweq, we can't use this
14:16:59 <ralonsoh> we need something like --no-force
14:17:13 <ralonsoh> I think we should use --check-limit
14:17:35 <slaweq> ralonsoh: I'm talking about a server-side parameter which will by default behave like force=True
14:17:42 <ralonsoh> yes
14:17:43 <slaweq> in OSC it can be implemented as --no-force
14:17:58 <slaweq> I'm not exactly sure how nova does that on the server side, really
14:18:06 <ralonsoh> this is less intuitive than --check-limit
14:18:15 <slaweq> I know
14:18:21 <ralonsoh> but whatever you decide
14:18:33 <slaweq> my point was that at some point we may deprecate the old behaviour and be consistent with nova
14:18:39 <slaweq> but that's just an idea
14:18:45 <slaweq> I'm ok with --check-limit as well
14:19:20 <ralonsoh> so what do you vote here? 1) --check-limit or 2) --no-force ?
14:19:28 <njohnston> 1
14:20:05 <amotoki> --check-limit sounds better as it looks clearer to me
14:20:34 <mlavalle> overall I'm ok with the proposal. I don't like it though, from an OpenStack perspective. We are giving users contradictory behaviors among services. So at least let's try to be clear, and I think --check-limit is clearer
14:20:36 <lajoskatona> +1 on check-limit
14:21:13 <ralonsoh> thank you all for your ideas and votes
14:21:19 <slaweq> I'm fine with check-limit too :)
14:21:29 <mlavalle> and maybe, over time, try to converge to a consistent behavior community wide
14:21:41 <ralonsoh> for sure, we need that
14:22:02 <mlavalle> there are no neutron users. there are openstack users
14:24:36 <slaweq> ok, I will mark this rfe as approved, and let's go with the --check-limit option in the API for now
14:24:41 <slaweq> thx ralonsoh for proposing that
14:24:42 <ralonsoh> thanks!
14:25:06 <slaweq> next rfe
14:25:13 <slaweq> https://bugs.launchpad.net/neutron/+bug/1939602
14:25:16 <ralonsoh> thanks
14:25:23 <ralonsoh> first of all, the POC: https://review.opendev.org/c/openstack/neutron/+/803034
14:25:42 <ralonsoh> the idea is to have a memory profiler FOR TESTING ONLY
14:25:51 <ralonsoh> I say that because it will impact performance
14:26:08 <ralonsoh> this memory profiler could be useful to detect memory leaks in new modules
14:26:26 <ralonsoh> every x seconds, this service plugin will print memory stats
14:26:32 <ralonsoh> and the admin can filter by module
14:26:39 <ralonsoh> and choose the number of stats needed
14:26:51 <ralonsoh> --> https://review.opendev.org/c/openstack/neutron/+/803034/5/neutron/services/memory_profiler/plugin.py
14:27:31 <ralonsoh> that's basically what I'm proposing. This is not something for a production environment but could be useful for testing
14:27:40 <ralonsoh> that's all, thanks
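
For context, a minimal sketch of the general technique, assuming a tracemalloc-based periodic snapshot; whether the linked POC actually uses tracemalloc is not confirmed here, and the function and parameter names below are invented for this example.

# Illustration only -- not the code from the linked POC.
import threading
import time
import tracemalloc


def start_memory_profiler(interval=60, module_filter=None, top_n=10):
    """Print the top memory allocations every ``interval`` seconds.

    module_filter (hypothetical) keeps only stats whose file path contains
    that substring; top_n is how many entries to print per cycle.
    """
    tracemalloc.start()

    def _loop():
        while True:
            time.sleep(interval)
            snapshot = tracemalloc.take_snapshot()
            stats = snapshot.statistics('lineno')
            if module_filter:
                stats = [s for s in stats
                         if module_filter in s.traceback[0].filename]
            for stat in stats[:top_n]:
                print(stat)

    profiler_thread = threading.Thread(target=_loop, daemon=True)
    profiler_thread.start()
    return profiler_thread

# Example usage (hypothetical): start_memory_profiler(interval=30, module_filter='neutron/services')
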
14:28:30 <ralonsoh> (btw, CI is not passing but this patch works, you can test it)
14:30:02 <slaweq> I'm ok with that idea as optional plugin
14:30:25 <slaweq> but we will have to discuss if we want to add a new ci job with it, or maybe enable it in some (or all) of the existing jobs
14:30:40 <ralonsoh> periodic job?
14:30:56 <slaweq> could be, probably better than adding a new job to the check queue
14:31:05 <ralonsoh> I think so
14:32:07 <slaweq> but tbh I don't see a huge performance impact or job time difference between neutron-ovn-tempest-ovs-release and neutron-ovn-tempest-memory-profiler
14:32:07 <lajoskatona> it looks similar to loki
14:32:16 <slaweq> and both jobs run the same tests
14:32:25 <slaweq> so maybe it can simply be enabled in the existing jobs?
14:32:35 <slaweq> we can discuss that later in the ci meeting for sure :)
14:32:51 <lajoskatona> for loki we have an experimental job, as I see
14:33:09 <lajoskatona> but yeah, we can discuss it in the CI meeting
14:33:12 <ralonsoh> yes, I think we can add it to the experimental queue, same as loki
14:33:18 <ralonsoh> ok to discuss it in CI meeting
14:34:30 <njohnston> I think this is a very useful idea.  Is there any way to quantify how bad the performance penalty is?  Some guidance anywhere from "10% slowdown" to "4000% slowdown and cannot ever be activated even in light production use" would be helpful.  Someone facing a memory leak in prod is likely to at least think twice about possibly enabling this.
14:34:52 <ralonsoh> njohnston, yes, I need to benchmark it
14:35:11 <ralonsoh> this will be running in a parallel thread and memory snapshots are expensive
14:35:28 <ralonsoh> but it will affect only one worker
14:38:35 <ralonsoh> (actually, I can print the time spent collecting the memory stats; that would be very easy and helpful for benchmarking)
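
A tiny sketch of that benchmarking idea, timing one snapshot/report cycle so the overhead njohnston asks about can be quantified (illustration only, same tracemalloc assumption as the sketch above).

import time
import tracemalloc

tracemalloc.start()
start = time.monotonic()
snapshot = tracemalloc.take_snapshot()
stats = snapshot.statistics('lineno')[:10]
elapsed = time.monotonic() - start
print("collected %d memory stats in %.3f seconds" % (len(stats), elapsed))
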
14:38:48 <njohnston> +1
14:39:00 <slaweq> I'm +1 on this RFE
14:39:19 <lajoskatona> +1
14:39:25 <amotoki> +1 on the RFE
14:39:40 <ralonsoh> thank you all, I'll bring this topic again next week in the CI meeting
14:39:49 <haleyb> +1
14:40:16 <ralonsoh> thank you for your time
14:41:00 <slaweq> mlavalle: any thoughts on this one?
14:41:10 <mlavalle> +1
14:43:00 <slaweq> thx
14:43:08 <slaweq> so I will mark that RFE as approved too
14:43:16 <slaweq> thx ralonsoh for proposing it
14:43:19 <ralonsoh> thanks
14:43:28 <slaweq> and that's all for this week from me
14:43:38 <slaweq> do You have anything else You would like to discuss?
14:44:10 <amotoki> nothing from me
14:44:36 <mlavalle> not from me
14:44:42 <ralonsoh> nothing, thanks
14:46:14 <slaweq> thx for attending the meeting today
14:46:18 <njohnston> thanks slaweq, congrats again lajoskatona
14:46:25 <slaweq> have a great weekend and see You online next week :)
14:46:27 <slaweq> o/
14:46:29 <slaweq> #endmeeting