14:01:12 <slaweq> #startmeeting neutron_drivers
14:01:13 <openstack> Meeting started Fri Feb 28 14:01:12 2020 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:17 <openstack> The meeting name has been set to 'neutron_drivers'
14:01:18 <slaweq> hi
14:01:18 <njohnston> o/
14:01:20 <haleyb> o/
14:01:21 <slaweq> sorry for being late
14:01:55 <slaweq> ok, we already have quorum but let's wait a few more minutes for amotoki and yamamoto
14:02:07 <njohnston> I believe yamamoto said he couldn't make it
14:02:28 <ralonsoh> hi
14:02:29 <mlavalle> he won't attend
14:02:46 <amotoki> hi
14:02:56 <slaweq> ok, right
14:02:59 <slaweq> so we can start
14:03:01 <slaweq> #topic RFEs
14:03:08 <slaweq> we have 2 RFEs for today
14:03:16 <slaweq> first one is continuation from last week
14:03:24 <slaweq> https://bugs.launchpad.net/neutron/+bug/1861529
14:03:26 <openstack> Launchpad bug 1861529 in neutron "[RFE] A port's network should be changable" [Wishlist,New]
14:04:15 <slaweq> last week I proposed 2 possible ways to go with this:
14:04:17 <slaweq> 1. we continue the discussion about it in the review of a spec, where more details will be shared with us,
14:04:22 <slaweq> 2. we don't accept this RFE
14:05:14 <slaweq> so I would like to ask You which option of those 2 You would vote for
14:05:27 <slaweq> or maybe there is some 3rd possible option?
14:05:29 <njohnston> link to the previous conversation: http://eavesdrop.openstack.org/meetings/neutron_drivers/2020/neutron_drivers.2020-02-21-14.00.log.html#l-119
14:05:39 <slaweq> thx njohnston
14:06:46 <njohnston> I'll restate my previous opinion: I would vote for 2 unless we have a very clear user story that means this will have a real benefit for today's cloud customer, with modern guest OSes that notice PCI plug/unplug events
14:07:32 <njohnston> stephen-ma gave MS-DOS as an example of a guest OS that would need this feature, for example
14:07:33 <slaweq> njohnston: I would also be for option 2) as I said last week
14:07:55 <amotoki> I am leaning toward option 2, to deny it. Per the discussion on the background last week, this is about supporting older OSes from before the PCI hotplug era.
14:08:41 <slaweq> yes, and I don't think that making many complicated changes in the code is worth doing only to support e.g. MS-DOS and other older systems
14:08:54 <njohnston> I am open to being persuaded otherwise but I think we have to focus ourselves on serving code that has been created since the turn of the century or so
14:09:58 <slaweq> mlavalle: haleyb: any thoughts?
14:10:09 <amotoki> Even if we allow changing the network for a port when it has no L3 address, what happens to applications on an OS like MS-DOS? Do such apps handle the case where no IP address is assigned?
14:10:24 <haleyb> I would go with option 2 for the same reasons; I don't think the use case has been fully explained, and unplug/plug works for most today
14:10:58 <mlavalle> 2 unless we have a convincingly explained use case
14:11:14 <slaweq> thx
14:11:20 <slaweq> I think we all agreed here
14:11:22 <mlavalle> I was looking for a spec in Gerrit
14:11:29 <mlavalle> and didn't find any
14:11:31 <stephen-ma> The user will need to use the VM's console to have the new IP take effect.
14:11:46 <slaweq> I will update RFE after the meeting
14:11:48 <stephen-ma> The user needs to set a new fixed-ip after changing the network.
14:12:13 <amotoki> stephen-ma: welcome :)
14:13:12 <slaweq> stephen-ma: hi, so do You have a user story for that, as njohnston asked?
14:13:15 <amotoki> stephen-ma: but releasing the IP address means the service is stopped while the VM has no IP. What is the difference from deploying a new instance on the new network?
14:13:51 <slaweq> stephen-ma: or e.g. stop the VM, unplug the port, plug a new port and start the VM again
14:14:23 <amotoki> The only difference is that you can continue to use the SAME server instance. If this is the only difference, the RFE doesn't convince me.
14:15:08 <stephen-ma> The VM is owned by a customer of a cloud service provider. The cloud service provider cannot shut down the customer's VM.
14:16:34 <slaweq> still, this use case doesn't convince me to approve this RFE, as it would mean changes to basic concepts we have had in neutron for years
14:16:42 <amotoki> it is a balance among users and consumers at various levels. there is no reason a cloud provider needs to consider all cases.
14:16:43 <slaweq> and it will probably not be a trivial change
14:17:34 <stephen-ma> yes it will end up being a big change.
14:17:50 <slaweq> stephen-ma: exactly :/
14:18:02 <slaweq> so I'm still voting for not approving this rfe
14:18:17 <slaweq> and I think we all agreed on that, right?
14:18:31 <ralonsoh> slaweq, I agree too in not approving it
14:18:32 <mlavalle> yes
14:18:34 <njohnston> yes
14:18:50 <amotoki> slaweq: yes
14:19:47 <slaweq> haleyb: You're also still for not approving it, right?
14:20:03 <haleyb> slaweq: yes, -1
14:20:11 <slaweq> thx all
14:20:22 <slaweq> I will write a comment in this rfe
14:20:27 <amotoki> stephen-ma: we are not rejecting you. if you need to explain the situation we can help.
14:20:49 <stephen-ma> @slaweq can this RFE be rediscussed if there are new use cases?
14:20:57 <slaweq> stephen-ma: of course
14:21:16 <amotoki> stephen-ma: yes, the motivations and backgrounds are important
14:21:22 <slaweq> feel free to update it if You have any "more modern" use cases for that
14:21:29 <mlavalle> if more information about the use case(s) surfaces, why don't we start with a spec?
14:21:48 <stephen-ma> good idea.
14:21:52 <slaweq> mlavalle: it can be given in the spec too
14:21:59 <slaweq> both would work for me
14:22:16 <njohnston> agreed
14:22:28 <slaweq> ok, let's move on to the next one now
14:22:33 <slaweq> https://bugs.launchpad.net/neutron/+bug/1858610
14:22:35 <openstack> Launchpad bug 1858610 in neutron "[RFE] Qos policy not supporting sharing bandwidth between several nics of the same vm." [Undecided,New]
14:22:55 <slaweq> JieLi: hi, it's Your RFE, right?
14:23:10 <JieLi> yes
14:25:21 <njohnston> Interesting idea.  I would think that we'd only want to aggregate bandwidth for nics that are on the same network, right?  In other words if a VM has a port on an administrative network and a port on an application network, they shouldn't share bandwidth limitations.
14:25:26 <ralonsoh> JieLi, can you explain a bit the backend implementation?
14:26:41 <ralonsoh> njohnston, not only the same network; you also need to consider whether to limit this only to ports bound to the same VM or host
14:27:08 <ralonsoh> but this is server logic: how to decide how to share the BW
14:27:10 <JieLi> njohnston: we want to limit the total bandwidth of the VM, to restrict system resource usage.
14:27:24 <ralonsoh> my concern is how this is going to be implemented in any backend
14:28:22 <slaweq> yes, the backend implementation is the really interesting thing in this case, as limiting BW isn't trivial :)
14:28:25 <amotoki> ralonsoh's point is really good and important to me too.
14:28:33 <njohnston> in reality, confining bandwidth to a collection of ports could have many uses, not necessarily restricted to a single host. For example, if you have 5 app servers you could use such a shared policy to say "don't exceed X bandwidth for these 5 ports" and limit the whole app server farm together
14:29:10 <JieLi> ralonsoh: currently, we'd like to use the Linux tc stack to redirect the NICs' traffic to an IFB interface, and limit the bandwidth on the IFB interface
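For reference, a minimal sketch of the IFB approach JieLi describes, assuming the Linux ifb module and the ip/tc tools are available; the device names, rate and helper function are illustrative, not the proposed Neutron implementation, and only the direction of traffic sent by the VM (which arrives on the host as ingress on the tap devices) is covered:

```python
# Sketch only: redirect each tap's ingress traffic to one shared IFB device
# and enforce a single token-bucket limit there. Names and rates are examples.
import subprocess


def run(cmd):
    """Run an ip/tc command, raising if it fails."""
    subprocess.run(cmd.split(), check=True)


def share_vm_bandwidth(tap_devices, ifb="ifb0", rate="100mbit"):
    run(f"ip link add {ifb} type ifb")
    run(f"ip link set {ifb} up")
    # One TBF qdisc on the IFB limits the aggregate bandwidth of all taps.
    run(f"tc qdisc add dev {ifb} root tbf rate {rate} burst 32k latency 400ms")
    for tap in tap_devices:
        # Mirror everything arriving on the tap (i.e. sent by the VM) to the IFB.
        run(f"tc qdisc add dev {tap} handle ffff: ingress")
        run(f"tc filter add dev {tap} parent ffff: protocol all u32 "
            f"match u32 0 0 action mirred egress redirect dev {ifb}")


if __name__ == "__main__":
    share_vm_bandwidth(["tap-vm1-a", "tap-vm1-b"])
```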
14:29:24 <slaweq> njohnston: but that would be far more complicated as such ports could be on different hosts :)
14:29:42 <ralonsoh> JieLi, so this is limited, for now, to LB backend
14:30:18 <ralonsoh> and another question: direction?? ingress? egress? both?
14:30:27 <JieLi> ralonsoh you mean loadbalancer?
14:30:35 <ralonsoh> JieLi, ?
14:30:39 <ralonsoh> no no Linux Bridge
14:30:49 <ralonsoh> there are 4 backends in the neutron repo
14:30:50 <JieLi> direction: both
14:30:57 <ralonsoh> sriov, LB, ovs and OVN
14:31:01 <ralonsoh> ok
14:31:44 <JieLi> backends: we want to support ovs and LB
14:31:59 <ralonsoh> with an IFB block?
14:32:11 <ralonsoh> that means you need to set up a hybrid deployment
14:32:37 <ralonsoh> implications: no DPDK support, maybe no HW offload support
14:32:53 <JieLi> yes,  no DPDK support, maybe no HW offload support
14:33:08 <ralonsoh> and the OVS firewall could be affected
14:33:23 <ralonsoh> because you are redirecting all those ports to a unique IFB
14:33:57 <mlavalle> is there a non IFB alternative?
14:34:24 * mlavalle will have to leave 15 minutes before the hour
14:34:47 <ralonsoh> so far in OVS we don't have something like this, a shared qos policy
14:34:49 <JieLi> traffic will be sent back to the tapXXX interface from the IFB after traffic shaping; the firewall needs to be tested, I can't be sure now
14:34:52 <ralonsoh> so there is no native support
14:34:59 <njohnston> what kind of a performance impact would such an architecture have?
14:36:11 <ralonsoh> let me be frank: the OVS case presents too many possible incompatibilities
14:36:14 <JieLi> mlavalle, at present we have no non-IFB alternative
14:36:25 <ralonsoh> and the architecture could be quite complex
14:36:52 <mlavalle> I say let's do a PoC
14:36:57 <ralonsoh> perfect for me
14:36:59 <mlavalle> and explore the idea
14:37:25 <ralonsoh> (btw, trunk plugin could help on this, just an idea)
14:37:59 <slaweq> IMO the idea is interesting but there are many things to clarify, so maybe a PoC and spec would be needed first
14:38:10 <slaweq> but as for the RFE, I tend to approve it
14:38:16 <ralonsoh> +1
14:39:13 <mlavalle> +1
14:39:28 <haleyb> +1
14:39:39 <amotoki> when will we revisit the feasibility of the proposed idea after approving it?
14:39:56 <amotoki> I tend to +1 but that question hits me
14:40:09 <njohnston> yes I would like to see what the PoC results are
14:40:17 <slaweq> amotoki: I was thinking we would work out all the possibilities during the spec review
14:40:25 <slaweq> and maybe PoC too
14:40:40 <amotoki> sounds good
14:40:45 <njohnston> +1
14:40:58 <slaweq> with QoS we already have a situation where not all backends support every rule type
14:41:09 <slaweq> so this one can also be supported only by some backends, not all of them
14:41:14 <ralonsoh> exactly
14:41:20 <amotoki> it is interesting enough and it is worth exploring implementations.
14:42:09 <slaweq> to sum up: we all agreed to approve the RFE as an idea and continue exploring its details in the spec and PoC patches
14:42:28 <slaweq> JieLi: is it fine for You?
14:43:17 <JieLi> yes
14:43:32 <slaweq> ok
14:43:40 <slaweq> thx for proposing it
14:43:48 <slaweq> we are done with RFEs for today
14:43:59 <slaweq> #topic On demand agenda
14:44:05 <slaweq> I have one more topic from ralonsoh
14:44:12 <slaweq> https://bugs.launchpad.net/neutron/+bug/1863423. The service plugin network_segment_ranges has some implementation issues: users can create networks outside the project segment ranges and those segmentation IDs can belong to other project segment ranges.
14:44:13 <openstack> Launchpad bug 1863423 in neutron "Method "build_segment_queries_for_tenant_and_shared_ranges" returning empty query" [Undecided,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
14:44:18 <slaweq> It's described in details in http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012736.html
14:44:19 <ralonsoh> oh yes
14:44:20 <ralonsoh> one sec
14:44:29 <ralonsoh> #link https://review.opendev.org/#/c/710090/
14:44:31 * mlavalle leaves now, see you next week
14:44:32 <slaweq> but maybe we can use this couple of minutes to discuss it here
14:44:34 <ralonsoh> discussed in the ML
14:44:37 <slaweq> mlavalle: see You
14:44:46 <ralonsoh> this is the possible architecture solution
14:44:50 <ralonsoh> as commented in the mail
14:45:07 <ralonsoh> 1) a user (in a project) has network segment ranges
14:45:22 <ralonsoh> --> will retrieve the seg_id from those ranges first
14:45:29 <ralonsoh> and then from the shared one
14:45:40 <ralonsoh> never from other project ranges
14:45:58 <ralonsoh> 2) if the project does not have ranges, directly from the shared pool
14:46:07 <ralonsoh> that's all
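A minimal sketch of the selection order ralonsoh outlines above; the function, arguments and range representation are hypothetical, not the actual Neutron query code being fixed:

```python
# Sketch only: prefer segmentation IDs from the project's own ranges, then the
# shared pool, and never from another project's ranges. Ranges are (start, end)
# tuples; used_ids is the set of IDs already allocated.
def pick_segmentation_id(project_ranges, shared_ranges, used_ids):
    for start, end in list(project_ranges) + list(shared_ranges):
        for seg_id in range(start, end + 1):
            if seg_id not in used_ids:
                return seg_id
    raise RuntimeError("no segmentation ID available in project or shared ranges")


# e.g. with the project range exhausted, allocation falls through to the shared
# pool; ranges owned by other projects are simply never passed in.
assert pick_segmentation_id([(100, 101)], [(200, 300)], {100, 101}) == 200
```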
14:46:27 <slaweq> I replied to Your email some time ago
14:46:36 <slaweq> IMHO 2) would be better to backport to stable branches
14:46:40 <ralonsoh> yes, that's why I implemented this solution
14:46:49 <slaweq> as API behaviour wouldn't change too much
14:47:04 <ralonsoh> no, no API change
14:47:10 <ralonsoh> just fix the queries
14:47:21 <ralonsoh> (and a couple of extra things)
14:49:23 <ralonsoh> (for reviewers: the core of the patch is here https://review.opendev.org/#/c/710090/4/neutron/objects/network_segment_range.py)
14:50:45 <slaweq> ralonsoh: sure, I will review it ASAP
14:50:52 <ralonsoh> thanks!
14:53:29 * amotoki is still reading the thread
14:54:06 <amotoki> 1 and 2 above look mutually exclusive, so I am a bit confused.
14:54:36 <ralonsoh> no sorry, not the same
14:54:50 <ralonsoh> in the ML 1) and 2) are different options
14:54:53 <amotoki> ralonsoh: I don't mean 1 and 2 in your email
14:55:06 <ralonsoh> here 1) and 2) are just possible situations
14:55:16 <amotoki> ralonsoh: I see
14:55:35 <ralonsoh> in the patch, I implemented option (2) of the mail
14:57:44 <amotoki> atm option 2 of the mail sounds good to me (but I am still trying to understand the situation more)
14:58:10 <slaweq> ok, we are almost out of time; amotoki, please reply to the email if You have anything to add regarding this topic
14:58:13 <ralonsoh> amotoki, sure, ping me via IRC or mail if you have questions
14:58:15 <slaweq> and also others :)
14:58:23 <amotoki> slaweq: ralonsoh: sure
14:58:25 <slaweq> thx for attending the meeting
14:58:33 <slaweq> and have a great weekend
14:58:35 <ralonsoh> bye!
14:58:35 <slaweq> o/
14:58:38 <amotoki> o/
14:58:39 <slaweq> #endmeeting