14:00:25 #startmeeting neutron_drivers
14:00:25 Meeting started Fri Mar 18 14:00:25 2022 UTC and is due to finish in 60 minutes. The chair is lajoskatona. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:25 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:25 The meeting name has been set to 'neutron_drivers'
14:00:29 o/
14:00:35 o/
14:00:46 hi
14:01:13 hi
14:01:22 o/
14:01:40 hi
14:02:20 hi
14:03:06 I think we can start
14:03:46 The first topic from the list, "GARPs are not sent during HA router master state transition", we can postpone
14:04:00 So the first one for today:
14:04:04 [QoS][OvS] implement QoS bandwidth limitation by leveraging ovs meter (#link https://bugs.launchpad.net/neutron/+bug/1964342 )
14:04:04 Are we?
14:04:11 yeah, I think that's trident's topic
14:04:14 Because that's quite critical
14:04:20 o/
14:04:36 Basic functionality is quite broken and this topic has not been taken up in a meeting for 3 weeks now
14:04:57 noonedeadpunk: I just asked trident if we can discuss it today, but they had no time for testing the possible solutions this week
14:05:16 yeap
14:05:39 is it about ingress or egress traffic (from the vm point of view)?
14:05:42 or both?
14:06:07 noonedeadpunk: Yeah, me and damiandabrowski[m] discussed it a bit and there is more testing on our side that needs to be done before we really have anything new to report on it.
14:06:09 I'm not sure about egress
14:06:35 Ingress for sure is broken
14:07:19 https://review.opendev.org/c/openstack/neutron/+/832662 should fix the current implementation of the ingress bw limit in ml2/ovs
14:07:33 I'm not a neutron code expert, but I can try to help out. But eventually every solution I saw has its own flaws...
14:08:19 noonedeadpunk: could you sync with trident, and then next week we can check the topic
14:08:34 I don't think it will, as VIP is not functional and the mentioned patch is not touching that part?
14:08:37 noonedeadpunk: does that work for you?
14:08:44 lajoskatona: yes, thanks!
14:08:59 noonedeadpunk: cool
14:09:03 so are we talking now about QoS?
14:09:18 sorry for interrupting :) I just joined to check on the bug that was skipped again without a reason mentioned :)
14:09:24 yes, I think we can jump on it
14:09:57 noonedeadpunk: true, I forgot to mention that I asked trident about it
14:10:29 So the coming rfe: [QoS][OvS] implement QoS bandwidth limitation by leveraging ovs meter (#link https://bugs.launchpad.net/neutron/+bug/1964342 )
14:10:37 this is from liuyulong
14:11:42 If I understand correctly it is to have another QoS bandwidth limit driver built with os-ken and ovs meter rules
14:12:15 sorry, I thought we were talking about this one all the time :)
14:12:22 LOL
14:12:28 yeah, I figured
14:12:32 * slaweq is focused mostly on another meeting now
14:13:28 regarding this qos bw limit rfe - what's the point of having 2 different ways to configure it in the ovs agent? I would just go with the one which works better
14:13:44 agree
14:13:55 so if using meters is more accurate/efficient/better in some other way - let's go with it and replace the current implementation
14:14:11 if it's not better, what's the point of adding it to the code?
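(For context on the mechanism the RFE leverages: os-ken, neutron's maintained Ryu fork, can program OVS meters roughly as in the sketch below. This is only a minimal standalone illustration, not liuyulong's proposed driver: the app name, meter id 1, the 1000 kbps / 100 kb values, and in_port=1 are placeholder assumptions, and it presumes a bridge whose controller connection speaks OpenFlow 1.3 to this app.)

    # Illustrative sketch only, NOT the RFE's implementation.
    # Assumes an OVS bridge connected to this app over OpenFlow 1.3.
    from os_ken.base import app_manager
    from os_ken.controller import ofp_event
    from os_ken.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from os_ken.ofproto import ofproto_v1_3


    class MeterBwLimitApp(app_manager.OSKenApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def _install(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # Meter 1: drop everything above 1000 kbps (100 kb burst);
            # the values are placeholders for a rule's max-kbps/max-burst
            dp.send_msg(parser.OFPMeterMod(
                datapath=dp, command=ofp.OFPMC_ADD,
                flags=ofp.OFPMF_KBPS | ofp.OFPMF_BURST, meter_id=1,
                bands=[parser.OFPMeterBandDrop(rate=1000, burst_size=100)]))
            # Run the port's traffic through the meter, then forward normally
            dp.send_msg(parser.OFPFlowMod(
                datapath=dp, priority=10,
                match=parser.OFPMatch(in_port=1),
                instructions=[
                    parser.OFPInstructionMeter(1, ofp.OFPIT_METER),
                    parser.OFPInstructionActions(
                        ofp.OFPIT_APPLY_ACTIONS,
                        [parser.OFPActionOutput(ofp.OFPP_NORMAL)])]))

(Such an app can be run with osken-manager against a test bridge; neutron's ovs agent would instead wire the equivalent messages into its own OpenFlow plumbing.)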
14:14:15 slaweq: valid point
14:14:20 that's my opinion at least
14:14:29 I agree
14:14:48 but tbh I would like to see ralonsoh's thoughts on that one before approving/rejecting it
14:14:56 it seems he is trying to justify this from the offloading perspective
14:15:01 to smart-nic
14:15:16 but then he states that tc can also be offloaded
14:15:24 A bit confusing
14:15:41 the current driver uses ovsdb
14:15:55 sorry, half-finished sentence....
14:16:28 so that one was confusing for me also, as I don't see why offloading is better with the proposed solution
14:17:30 agree, we need to clarify what the benefits are
14:18:57 ok, I'll add the questions to the bug and ask liuyulong to present the requested details, is that ok?
14:19:11 +1
14:19:15 +1
14:20:18 perfect
14:20:20 +1
14:20:20 +1
14:20:40 ok, thanks
14:20:54 Next topic is:
14:21:05 DHCP agent scheduler filtering ignored when agent service restarted (#link https://bugs.launchpad.net/neutron/+bug/1964765 )
14:21:15 it's from BBC as I see
14:22:16 honestly I am not sure whether this is a bug or not....
14:23:15 the summary is to have the possibility to schedule a network on a group of dhcp agents specified by some regex filtering or similar
14:23:23 I think it's a fair expectation to have consistent filtering of dhcp agents when (auto)scheduling networks
14:23:47 amotoki: your decision to mark it as rfe was really good, thanks
14:24:24 obondarev: true, I can imagine usecases where it would be useful, like the one in the bug report
14:26:08 on the other hand, changing this behaviour (skipping filtering on net auto-schedule) might break things for someone
14:27:27 for this I think a new parameter in the Filter itself might be added - whether the filter should be applied on net auto-schedule (agent restart)
14:27:35 is it the right thing to skip filtering? I think it is better to apply consistent filtering if there is some inconsistency.
14:28:06 this will let us avoid a new config option
14:29:16 and what about a new scheduler, so if somebody needs this, with its possible traps, they select it in the config?
14:29:59 new scheduler or new filter?
14:30:55 I thought new scheduler, as that is configurable
14:31:15 I think two topics are mixed. one is about a new filter/scheduler and the other point is that filtering is skipped when the agent restarts.
14:31:39 i am not sure the second point only affects the new filtering mentioned in the bug.
14:34:29 afaiu only segment-related filtering is currently done on agent restart
14:34:49 all others are skipped, both old and new ones
14:35:55 ok, so the LP can be split into an RFE and an actual bug?
14:37:39 I think the bug is just to decide if we should enable filters on agent restart
14:38:19 obondarev: agree
14:38:26 and my point is - let's enable filtering on agent restart for those filters that explicitly say "apply me on restart" - then we can decide if existing filters should add this flag or not
14:39:57 +1
14:40:03 +1
14:40:15 it looks like the bug author does not request to add a new filter in neutron. the author just uses their new filter as an example to explain what happens.
14:40:37 so I totally agree with obondarev's point
14:41:26 amotoki: possible, I'm concerned more about the filter they created
14:42:50 not sure this new filter is even going to be upstreamed
14:43:12 I mean, if the author has such plans
14:43:45 ok, then the summary of this LP is: allow a new flag for filters to execute them on agent restart, and set it to False for already existing filters
14:44:05 obondarev: possible
14:44:26 or to accept it as False if a filter doesn't have this parameter
14:44:45 lajoskatona: (re question about smartnic_dpu docs, screenshot and possible marketing page reference) apologies for the tardy response, that is an excellent idea. Are you asking for expansion of the existing doc page or some oob screenshot of stuff in action?
14:45:32 obondarev: +1
14:46:01 fnordahl: just a sec, we are at the end of the meeting
14:46:38 ok, regarding DHCP scheduling/filtering: is it ok this way for everybody?
14:46:49 +1
14:47:35 I am still a bit unclear. I think the AZ scheduler does a similar thing so it might be affected, while network AZ and network segments are used together.
14:48:40 s/are not used together/
14:51:11 so do you suggest enabling it for all filters on agent restart?
14:51:52 what is "this way"? is it to accept filtering in dhcp RPC as False?
14:52:31 if so, I am okay with it.
14:52:38 If the current behavior affects the existing scheduler, we can fix it separately.
14:53:08 amotoki: by "this way" I meant this: "allow a new flag for filters to execute them on agent restart, and set it to False for already existing filters"
14:53:27 IOW nothing changes for current filters, new filters can specify if they should filter on agent restart
14:53:47 yes, that's my understanding also
14:53:49 lajoskatona: obondarev: thanks for the clarification. I got it.
14:54:47 amotoki: :-) this is a tricky LP bug, but really good discussion
14:55:12 ok, could you vote on it please if it is ok for you?
14:55:21 +1
14:55:28 +1
14:55:31 +1
14:55:32 +1
14:55:52 +1
14:56:12 ok, thanks
14:56:28 if there is nothing more to discuss, we can close the meeting
14:57:29 #endmeeting
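(For reference, the direction voted on above, "nothing changes for current filters, new filters can specify if they should filter on agent restart", could look roughly like the sketch below. All class and attribute names here are hypothetical illustrations, not neutron's actual scheduler code; the getattr() default implements amotoki's "accept it as False if a filter doesn't have this parameter".)

    # Hypothetical illustration of the agreed behaviour, not neutron code.

    class BaseDhcpAgentFilter:
        # Existing filters never set this, so auto-scheduling on agent
        # restart keeps today's behaviour (no filtering) for them.
        apply_on_agent_restart = False

        def filter_agents(self, plugin, context, network):
            raise NotImplementedError


    class HostnameRegexFilter(BaseDhcpAgentFilter):
        # An out-of-tree filter like the one in the bug report
        # could opt in explicitly.
        apply_on_agent_restart = True

        def filter_agents(self, plugin, context, network):
            ...  # match candidate agents against a configured regex


    def filters_for_auto_schedule(filters):
        # On agent restart, run only filters that explicitly opted in;
        # a missing attribute is treated as False.
        return [f for f in filters
                if getattr(f, 'apply_on_agent_restart', False)]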