14:01:12 #startmeeting neutron_drivers
14:01:13 Meeting started Fri Feb 28 14:01:12 2020 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:17 The meeting name has been set to 'neutron_drivers'
14:01:18 hi
14:01:18 o/
14:01:20 o/
14:01:21 sorry for being late
14:01:55 ok, we already have quorum but let's wait a few more minutes for amotoki and yamamoto
14:02:07 I believe yamamoto said he couldn't make it
14:02:26 hi
14:02:28 hi
14:02:29 we won't attend
14:02:34 he^^^
14:02:46 hi
14:02:56 ok, right
14:02:59 so we can start
14:03:01 #topic RFEs
14:03:08 we have 2 RFEs for today
14:03:16 the first one is a continuation from last week
14:03:24 https://bugs.launchpad.net/neutron/+bug/1861529
14:03:26 Launchpad bug 1861529 in neutron "[RFE] A port's network should be changable" [Wishlist,New]
14:04:15 last week I proposed 2 possible ways to go with this:
14:04:17 1. we will continue the discussion about that in the review of a spec where more details will be shared with us,
14:04:22 2. we don't accept this RFE
14:05:14 so I would like to ask You which of those 2 options You would vote for
14:05:27 or maybe there is some 3rd possible option?
14:05:29 link to the previous conversation: http://eavesdrop.openstack.org/meetings/neutron_drivers/2020/neutron_drivers.2020-02-21-14.00.log.html#l-119
14:05:39 thx njohnston
14:06:46 I'll restate my previous opinion: I would vote for 2 unless we have a very clear user story that means this will have a real benefit for today's cloud customer, with modern guest OSes that notice PCI plug/unplug events
14:07:32 stephen-ma gave MS-DOS as an example of a guest OS that would need this feature, for example
14:07:33 njohnston: I would also be for option 2) as I said last week
14:07:55 I am leaning to option 2 to deny it. per the discussion on the background last week, it is to support older OSes from before the PCI hot plug era.
14:08:41 yes, and I don't think that making many complicated changes in the code is worth doing only to support e.g. MS-DOS and other older systems
14:08:54 I am open to being persuaded otherwise but I think we have to focus ourselves on serving code that has been created since the turn of the century or so
14:09:58 mlavalle: haleyb: any thoughts?
14:10:09 Even if we allow changing the network of a port when it has no L3 address, what happens to applications on an OS like MS-DOS? Do such apps take into account a case where no IP address is assigned?
14:10:24 I would go with option 2 for the same reasons, i don't think the use case has been fully explained, and unplug/plug works for most today
14:10:58 2 unless we have a convincingly explained use case
14:11:14 thx
14:11:20 I think we all agreed here
14:11:22 I was looking for a spec in Gerrit
14:11:29 and didn't find any
14:11:31 The user will need to use the VM's console to have the new IP take effect.
14:11:46 I will update the RFE after the meeting
14:11:48 The user needs to set a new fixed-ip after changing the network.
14:12:13 stephen-ma: welcome :)
14:13:12 stephen-ma: hi, so do You have a user story for that, as njohnston asked?
14:13:15 stephen-ma: but releasing the IP address means the service is stopped while the VM has no IP. What is the difference from a case where you deploy a new instance on the new network?
14:13:18 ?
14:13:51 stephen-ma: or e.g. stop the vm, unplug the port, plug a new port and start the vm again
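
A minimal openstacksdk sketch of the stop/unplug/plug workaround mentioned just above. The cloud, server and network names are placeholders, and the proxy method names are written from memory, so verify them against the SDK release in use:

    # Swap a server's interface onto a different network by detaching the
    # old interface and attaching a new port (openstacksdk sketch).
    # "mycloud", "vm1" and "net2" are hypothetical names.
    import openstack

    conn = openstack.connect(cloud="mycloud")
    server = conn.compute.find_server("vm1", ignore_missing=False)
    network = conn.network.find_network("net2", ignore_missing=False)

    # Stop the guest first if it cannot cope with PCI hot-unplug.
    conn.compute.stop_server(server)
    conn.compute.wait_for_server(server, status="SHUTOFF")

    # Detach the existing interfaces.
    for iface in list(conn.compute.server_interfaces(server)):
        conn.compute.delete_server_interface(iface, server)

    # Create a port on the new network, attach it, and boot the guest again.
    port = conn.network.create_port(network_id=network.id)
    conn.compute.create_server_interface(server, port_id=port.id)
    conn.compute.start_server(server)
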
14:14:23 The only difference is that you can continue to use the SAME server instance. If this is the only difference, the RFE doesn't convince me.
14:15:08 The VM is owned by a customer of a cloud service provider. The cloud service provider cannot shut down the customer VM.
14:16:34 still, this use case doesn't convince me to approve this rfe, as it would mean changes in basic concepts which we have had in neutron for years
14:16:42 it is a balance among users and consumers at various levels. there is no reason that a cloud provider needs to consider all cases.
14:16:43 and probably it will not be a trivial change
14:17:34 yes it will end up being a big change.
14:17:50 stephen-ma: exactly :/
14:18:02 so I'm still voting for not approving this rfe
14:18:17 and I think we all agreed on that, right?
14:18:31 slaweq, I agree too in not approving it
14:18:32 yes
14:18:34 yes
14:18:50 slaweq: yes
14:19:47 haleyb: You're also still for not approving it, right?
14:20:03 slaweq: yes, -1
14:20:11 thx all
14:20:22 I will write a comment in this rfe
14:20:27 stephen-ma: we are not rejecting you. if you need to explain the situation we can help.
14:20:49 @slaweq can this RFE be rediscussed if there are new use cases?
14:20:57 stephen-ma: of course
14:21:16 stephen-ma: yes, the motivations and backgrounds are important
14:21:22 feel free to update it if You have any "more modern" use cases for that
14:21:29 if more information about the use case(s) surfaces, why don't we start with a spec?
14:21:48 good idea.
14:21:52 mlavalle: it can be given in the spec also
14:21:59 both would work for me
14:22:16 agreed
14:22:28 ok, let's move on to the next one now
14:22:33 https://bugs.launchpad.net/neutron/+bug/1858610
14:22:35 Launchpad bug 1858610 in neutron "[RFE] Qos policy not supporting sharing bandwidth between several nics of the same vm." [Undecided,New]
14:22:55 JieLi: hi, it's Your RFE, right?
14:23:10 yes
14:25:21 Interesting idea. I would think that we'd only want to aggregate bandwidth for nics that are on the same network, right? In other words if a VM has a port on an administrative network and a port on an application network, they shouldn't share bandwidth limitations.
14:25:26 JieLi, can you explain a bit the backend implementation?
14:26:41 njohnston, not only the same network, but you also need to consider whether you want to limit this only to ports bound to the same VM or host
14:27:08 but this is server logic, how to decide to share the BW
14:27:10 njohnston we want to limit the total bandwidth of the vm, for restricting system resource usage.
14:27:24 my concern is how this is going to be implemented in any backend
14:28:22 yes, the backend implementation is a very interesting thing in this case as limiting bw isn't a trivial thing :)
14:28:25 ralonsoh's point is really good and important to me too.
14:28:33 in reality, confining bandwidth to a collection of ports could have many uses, not necessarily restricting just to a single host. For example if you have 5 app servers you could use such a shared policy to say "Don't exceed X bandwidth for these 5 ports" and limit the whole app server farm together.
14:29:10 ralonsoh currently, we'd like to use the linux tc stack to redirect a nic's traffic to an ifb interface, and limit the bandwidth on the ifb interface
14:29:24 njohnston: but that would be far more complicated as such ports could be on different hosts :)
14:29:42 JieLi, so this is limited, for now, to the LB backend
14:30:18 and another question: direction?? ingress? egress? both?
14:30:27 ralonsoh you mean loadbalancer?
14:30:35 JieLi, ?
14:30:39 no no, Linux Bridge
14:30:49 there are 4 backends in the neutron repo
14:30:50 direction: both
14:30:57 sriov, LB, ovs and OVN
14:31:01 ok
14:31:44 backends: we want to support ovs and LB
14:31:59 with an IFB block?
14:32:11 that means you need to set up a hybrid deployment
14:32:37 implications: no DPDK support, maybe no HW offload support
14:32:53 yes, no DPDK support, maybe no HW offload support
14:33:08 and the OVS firewall could be affected
14:33:23 because you are redirecting all those ports to a unique IFB
14:33:57 is there a non-IFB alternative?
14:34:24 * mlavalle will have to leave 15 minutes before the hour
14:34:47 so far in OVS we don't have something like this, a shared qos policy
14:34:49 traffic will be sent back to the tapXXX interface from the ifb after network shaping; the firewall needs to be tested, I can't be sure now
14:34:52 so there is no native support
14:34:59 what kind of a performance impact would such an architecture have?
14:35:24 @mla
14:36:11 let me be frank: OVS presents too many possible incompatibilities
14:36:14 mlavalle, at present, we have no non-IFB alternative
14:36:25 and the architecture could be quite complex
14:36:52 I say let's do a PoC
14:36:57 perfect for me
14:36:59 and explore the idea
14:37:25 (btw, the trunk plugin could help on this, just an idea)
14:37:59 IMO the idea of this is interesting but there are many things to clarify, so maybe a PoC and spec would be needed first
14:38:10 but as for the rfe I tend to approve it
14:38:16 +1
14:39:13 +1
14:39:28 +1
14:39:39 when do we visit the possibility of the proposed idea after approving it?
14:39:56 I tend to +1 but it hits me
14:40:09 yes I would like to see what the PoC results are
14:40:17 amotoki: I was thinking to work out all the possibilities during spec review
14:40:25 and maybe the PoC too
14:40:40 sounds good
14:40:45 +1
14:40:58 with QoS we already have a situation where not all backends support every rule type
14:41:09 so this one can also be supported only by some backends, not all of them
14:41:14 exactly
14:41:20 it is interesting enough and it is worth exploring impls.
14:42:09 to sum up: we all agreed to approve the RFE as an idea and continue exploring the details of it in spec and PoC patches
14:42:28 JieLi: is it fine for You?
14:43:17 yes
14:43:32 ok
14:43:40 thx for proposing it
14:43:48 we are done with RFEs for today
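
A rough sketch of the tc/IFB redirection JieLi describes above: traffic from each tap device of the VM is mirrored onto one ifb device and a single bandwidth limit is applied there. The device names and the rate are placeholders, root privileges and a loaded ifb module (modprobe ifb) are assumed, and only one direction is shown:

    # Redirect traffic from two (hypothetical) tap devices of the same VM
    # to ifb0 and enforce one shared token-bucket limit on ifb0.
    import subprocess

    TAPS = ["tap-aaaa1111", "tap-bbbb2222"]   # placeholder tap device names
    IFB = "ifb0"
    RATE = "10mbit"                           # placeholder shared limit

    def sh(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    sh("ip", "link", "set", "dev", IFB, "up")

    # One shared limit for everything that ends up on the ifb device.
    sh("tc", "qdisc", "replace", "dev", IFB, "root",
       "tbf", "rate", RATE, "burst", "64kb", "latency", "400ms")

    for tap in TAPS:
        # Ingress qdisc on the tap, then redirect all of its traffic to the ifb.
        sh("tc", "qdisc", "add", "dev", tap, "handle", "ffff:", "ingress")
        sh("tc", "filter", "add", "dev", tap, "parent", "ffff:",
           "protocol", "all", "u32", "match", "u32", "0", "0",
           "action", "mirred", "egress", "redirect", "dev", IFB)
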
14:43:59 #topic On demand agenda
14:44:05 I have one more topic from ralonsoh
14:44:12 https://bugs.launchpad.net/neutron/+bug/1863423. The service plugin network_segment_ranges has some implementation issues: users can create networks outside the project segment ranges and those segmentation IDs can belong to other project segment ranges.
14:44:13 Launchpad bug 1863423 in neutron "Method "build_segment_queries_for_tenant_and_shared_ranges" returning empty query" [Undecided,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
14:44:18 It's described in detail in http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012736.html
14:44:19 oh yes
14:44:20 one sec
14:44:29 #link https://review.opendev.org/#/c/710090/
14:44:31 * mlavalle leaves now, see you next week
14:44:32 but maybe we can use this couple of minutes to discuss it here
14:44:34 discussed in the ML
14:44:37 mlavalle: see You
14:44:46 this is the possible architecture solution
14:44:50 as commented in the mail
14:45:07 1) a user (in a project) has network segment ranges
14:45:22 --> will retrieve the seg_id from those ranges first
14:45:29 and then from the shared one
14:45:40 never from other project ranges
14:45:58 2) if the project does not have ranges, directly from the shared pool
14:46:07 that's all
14:46:27 I replied to Your email some time ago
14:46:36 IMHO 2) would be better to backport to stable branches
14:46:40 yes, that's why I implemented this solution
14:46:49 as the API behaviour wouldn't change too much
14:47:04 no, no API change
14:47:10 just fix the queries
14:47:21 (and a couple of extra things)
14:49:23 (for reviewers: the core of the patch is here https://review.opendev.org/#/c/710090/4/neutron/objects/network_segment_range.py)
14:50:45 ralonsoh: sure, I will review it ASAP
14:50:52 thanks!
14:53:29 is still reading the thread
14:54:06 1 and 2 above here look exclusive so I am a bit confused.
14:54:36 no sorry, not the same
14:54:50 in the ML, 1) and 2) are different options
14:54:53 ralonsoh: I don't mean 1 and 2 in your email
14:55:06 here 1) and 2) are just possible situations
14:55:16 ralonsoh: I see
14:55:35 in the patch, I implemented option (2) of the mail
14:57:44 atm option 2 of the mail sounds good to me (but I am still trying to understand the situation more)
14:58:10 ok, we are almost out of time; amotoki please reply to the email if You have anything to add regarding this topic
14:58:13 amotoki, sure, ping me via IRC or mail if you have questions
14:58:15 and also others :)
14:58:23 slaweq: ralonsoh: sure
14:58:25 thx for attending the meeting
14:58:33 and have a great weekend
14:58:35 bye!
14:58:35 o/
14:58:38 o/
14:58:39 #endmeeting
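
To make the two situations ralonsoh lists above concrete, a toy illustration (not neutron's actual objects or queries) of the intended ordering: a project draws segmentation IDs from its own ranges first, then from shared ranges, and never from another project's ranges; a project with no ranges of its own goes straight to the shared pool:

    # Toy model of the segmentation-ID selection order described above.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SegmentRange:
        minimum: int
        maximum: int
        shared: bool = False
        project_id: Optional[str] = None

    def candidate_segment_ids(ranges, project_id, used):
        """Yield free segmentation IDs in the order the project may use them."""
        own = [r for r in ranges if r.project_id == project_id and not r.shared]
        shared = [r for r in ranges if r.shared]
        for r in own + shared:                 # own ranges first, then shared
            for seg_id in range(r.minimum, r.maximum + 1):
                if seg_id not in used:
                    yield seg_id

    ranges = [
        SegmentRange(100, 110, project_id="proj-a"),
        SegmentRange(200, 300, shared=True),
        SegmentRange(400, 410, project_id="proj-b"),  # never offered to proj-a
    ]
    print(next(candidate_segment_ids(ranges, "proj-a", used={100, 101})))  # -> 102
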