14:00:36 #startmeeting neutron_drivers
14:00:36 Meeting started Fri Aug 27 14:00:36 2021 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:36 The meeting name has been set to 'neutron_drivers'
14:00:38 hi
14:00:40 o/
14:00:42 hello
14:00:47 hi
14:00:51 hi
14:00:51 o/
14:00:56 Hi
14:01:20 I think we can start as we have quorum today
14:01:44 and I want to start today in a different way
14:01:47 hi
14:01:51 #topic On Demand Agenda
14:01:54 hi haleyb
14:01:57 welcome back :)
14:02:22 first - congratulations to Lajos, our new PTL!
14:02:31 congrats!!!
14:02:38 thanks
14:02:38 congrats, lajoskatona
14:02:40 +1
14:02:50 and regarding that I have a question for all of You
14:02:50 Congratulations Lajos!
14:03:01 I think that lajoskatona should now be part of the drivers team
14:03:08 I think so, makes sense
14:03:13 +1
14:03:13 even if he weren't PTL, I think he deserves that
14:03:17 but now even more :)
14:03:32 +1
14:03:41 do You think I should send an official nomination email for lajoskatona, or would it be enough if we vote here?
14:04:20 this meeting is being logged, so it is "legal"
14:04:36 Congratulations lajoskatona !
14:04:37 most of the voters are here, so it looks like enough to vote here.
14:04:56 +1
14:05:04 so let's vote :)
14:06:11 +1 again
14:06:17 +1
14:06:19 +1
14:06:21 +1
14:06:37 +1 of course :)
14:06:44 njohnston: and You?
14:07:02 I would still send a message to the ML communicating the decision we just made here
14:07:17 mlavalle: yes, I will do it
14:07:19 mlavalle: totally agree
14:07:21 not nominating, just communicating
14:07:39 but I was just hoping we don't need to wait e.g. 1 week for votes there
14:07:43 as we are all here :)
14:08:21 thanks for the trust :-)
14:08:48 I think njohnston already voted above, so - welcome to the drivers team, lajoskatona
14:08:54 and congratulations
14:09:13 let's move on
14:09:28 another quick topic
14:09:41 I already proposed a patch with the cycle highlights for Xena: https://review.opendev.org/c/openstack/releases/+/805610
14:10:01 please check it if You have time, especially from an English PoV :)
14:10:09 thx in advance
14:10:18 and now, let's move on to the RFEs
14:10:28 #topic RFEs
14:10:37 we have 2 RFEs for today
14:10:48 first one
14:10:48 https://bugs.launchpad.net/neutron/+bug/1936408
14:11:00 thanks
14:11:08 first of all, a link to the POC: https://review.opendev.org/q/topic:%22bug%252F1936408%22+(status:open%20OR%20status:merged)
14:11:26 what I'm proposing here is *NOT* to change the default API behaviour
14:12:07 what I want here (for a customer) is, somehow, to raise an exception if we are lowering a quota limit below the current usage
14:12:31 and what I'm proposing is something like the Nova API --force (but the opposite, because we always force the quota update)
14:12:54 with this new parameter, --check-limit, the Neutron server will check the resource usage before updating it
14:12:57 that's all, thanks
14:14:10 I think this is fine, with the clarifications I am ok with the concept
14:15:08 ralonsoh: I was thinking yesterday about it and about how nova behaves there
14:15:37 maybe You could add a "--force" flag like nova does, but for now default it to True so our API is unchanged
14:15:47 it sounds good as neutron API.
14:15:49 From the PoV of OSC, we will have three modes now: using the API default behavior, --force (for nova) and --check-limit (for neutron)
14:15:57 in the future we can maybe communicate it more widely and change the behaviour to be the same as nova
14:16:08 hmmmm I see slaweq's point
14:16:11 it may be nice if the OSC behavior is --force or --check-limit :)
14:16:22 to have only one parameter, force
14:16:32 and for network commands, always use "force"
14:16:42 ok ok, but this is a boolean parameter
14:16:47 we need the opposite
14:16:51 slaweq, we can't use this
14:16:59 we need something like --no-force
14:17:13 I think we should use --check-limit
14:17:35 ralonsoh: I'm talking about a server-side parameter which will by default be like force=True
14:17:42 yes
14:17:43 in OSC it can be implemented as --no-force
14:17:58 I'm not exactly sure how nova does that on the server side really
14:18:06 this is less intuitive than --check-limit
14:18:15 I know
14:18:21 but whatever you decide
14:18:33 my point was that at some point we may deprecate the old behaviour and be consistent with nova
14:18:39 but that's just an idea
14:18:45 I'm ok with --check-limit as well
14:19:20 so what do you vote here? 1) --check-limit or 2) --no-force ?
14:19:28 1
14:20:05 --check-limit sounds better as it looks clearer to me
14:20:34 overall I'm ok with the proposal. I don't like it though, from an OpenStack perspective. We are giving users contradictory behaviors among services. So at least let's try to be clear, and I think --check-limit is clearer
14:20:36 +1 on check-limit
14:21:13 thank you all for your ideas and votes
14:21:19 I'm fine with check-limit too :)
14:21:29 and maybe, over time, try to converge to a consistent behavior community-wide
14:21:41 for sure, we need that
14:22:02 there are no neutron users, there are openstack users
14:24:36 ok, I will mark this rfe as approved and let's go with the --check-limit option in the API for now
14:24:41 thx ralonsoh for proposing that
14:24:42 thanks!
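[editor's note] The approved semantics can be sketched in a few lines of Python. This is my own illustration of the agreed behaviour, not the actual code from the POC linked above: the function name, signature, and exception are hypothetical; only the rule (the update is forced by default, and with check-limit enabled a new limit below current usage is rejected, with -1 meaning "unlimited") comes from the discussion.

```python
class QuotaLimitCheckError(Exception):
    """Raised when a new quota limit is below the current resource usage."""


def update_quota_limit(current_usage, new_limit, check_limit=False):
    """Return the new quota limit for a resource.

    By default (check_limit=False) the update is always forced, matching
    Neutron's existing API behaviour.  With check_limit=True the server
    refuses to set a limit below the current usage.  A limit of -1 means
    "unlimited" and is never rejected.
    """
    if check_limit and new_limit != -1 and new_limit < current_usage:
        raise QuotaLimitCheckError(
            "New limit %d is below current usage %d"
            % (new_limit, current_usage))
    return new_limit


# Default (forced) behaviour: lowering below usage is allowed.
update_quota_limit(current_usage=5, new_limit=3)
# With check_limit=True the same call raises QuotaLimitCheckError.
```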
14:25:06 next rfe 14:25:13 https://bugs.launchpad.net/neutron/+bug/1939602 14:25:16 thanks 14:25:23 first of all, the POC: https://review.opendev.org/c/openstack/neutron/+/803034 14:25:42 the idea is to have a memory profiler FOR TESTING ONLY 14:25:51 I say that because that will impact in the performance 14:26:08 this memory profiler could be useful to detect memory leaks in new modules 14:26:26 every x seconds, this service plugin will print memory stats 14:26:32 and the admin can filter by module 14:26:39 and the number of stats needed 14:26:51 --> https://review.opendev.org/c/openstack/neutron/+/803034/5/neutron/services/memory_profiler/plugin.py 14:27:31 that's basically what I'm proposing. This is not something for a production environment but could be useful for testing 14:27:40 that's all, thanks 14:28:30 (btw, CI is not passing but this patch works, you can test it) 14:30:02 I'm ok with that idea as optional plugin 14:30:25 but we will have to discuss if we want to add new ci job with it or maybe enable it in some (all) of the existing jobs 14:30:40 periodic job? 14:30:56 could be, probably better than adding new job to check queue 14:31:05 I think so 14:32:07 but tbh I don't see huge performance impact and job time difference between neutron-ovn-tempest-ovs-release and neutron-ovn-tempest-memory-profiler 14:32:07 it looks similar to loki 14:32:16 and both jobs runs same tests 14:32:25 so maybe it can be enabled in existing jobs simply? 14:32:35 we can discuss that later in the ci meeting for sure :) 14:32:51 for loki we have experimental job as I see 14:33:09 but yeah we can discuss on CI meeting 14:33:12 yes, I think we can add it to experimental queue, same as loki 14:33:18 ok to discuss it in CI meeting 14:34:30 I think this is a very useful idea. Is there any way to quantify how bad the performance penalty is? Some guidance anywhere from "10% slowdown" to "4000% slowdown and cannot ever be activated even in light production use" would be helpful. 
Someone facing a memory leak in prod is likely to at least think twice about possibly enabling this. 14:34:52 njohnston, yes, I need to benchmark it 14:35:11 this will be running in a parallel thread and memory snapshots are expensive 14:35:28 but it will affect only one worker 14:38:35 (actually, I can print the time spent on printing the memory stats, that could be very easy and helpful for benchmarking) 14:38:48 +1 14:39:00 I'm +1 on this RFE 14:39:19 +1 14:39:25 +1 on the RFE 14:39:40 thank you all, I'll bring this topic again next week in the CI meeting 14:39:49 +1 14:40:16 thank you for your time 14:41:00 mlavalle: any thoughts on this one? 14:41:10 +1 14:43:00 thx 14:43:08 so I will mark that RFE as approved too 14:43:16 thx ralonsoh for proposing it 14:43:19 thanks 14:43:28 and that's all for this week from me 14:43:38 do You have anything else You would like to discuss? 14:44:10 nothing from me 14:44:36 not from me 14:44:42 nothing, thanks 14:46:14 thx for attending the meeting today 14:46:18 thanks slaweq, congrats again lajoskatona 14:46:25 have a great weekend and see You online next week :) 14:46:27 o/ 14:46:29 #endmeeting