15:00:58 #startmeeting neutron_qos
15:00:59 Meeting started Tue Jun 19 15:00:58 2018 UTC and is due to finish in 60 minutes. The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:03 The meeting name has been set to 'neutron_qos'
15:01:21 hi again
15:01:33 o/
15:03:06 rubasov, gibi: today slaweq won't be around. He is watching the Poland game. Go Poland!!!!!
15:03:15 so you are stuck with me
15:03:21 go Poland!
15:03:27 slaweq: go Poland!
15:03:45 give me a couple of minutes to stretch my legs
15:03:51 sure
15:03:53 I'll be right back
15:03:59 mlavalle: of course
15:06:46 gibi, rubasov: I'm back. My wife and daughter are watching the game. So if something happens, I might be able to update you
15:07:31 mlavalle: I'm not that deep into soccer but thanks :)
15:07:33 mlavalle: cool
15:08:10 mlavalle: I'll have to leave around :35-:40 but until then I have my full attention here
15:08:21 ok, let's get going
15:08:40 the agenda for today is here: https://etherpad.openstack.org/p/neutron_qos_meeting_chair
15:09:24 let's start with you rubasov
15:09:54 https://bugs.launchpad.net/neutron/+bug/1578989
15:09:55 Launchpad bug 1578989 in neutron "[RFE] Strict minimum bandwidth support (egress)" [Wishlist,In progress] - Assigned to Bence Romsics (bence-romsics)
15:10:06 mlavalle: thank you
15:10:41 there's this patch with my thinking on how to drive binding from placement: https://review.openstack.org/#/c/574783
15:10:42 patch 574783 - neutron - WIP Drive binding by placement allocation
15:11:22 I took a look yesterday
15:11:36 the basic idea is simple: instead of calling each configured mechanism driver, restrict the list according to placement's decision
15:11:41 mlavalle: thank you
15:12:33 but the qos-min-bw work is supposed to be limited to the bottom binding level (otherwise the scope would be larger)
15:12:58 and I was stuck on how to know if the current binding level is the bottom binding level
15:13:01 I also thought of taking advantage of the assumption that bind_port is side effect free
15:13:09 or of finding an alternative
15:13:35 just before this meeting I added a comment on the patch, I guess you have seen it
15:14:25 the alternative that I thought of was that we know we are at the bottom at this point: https://review.openstack.org/#/c/574783/5/neutron/plugins/ml2/managers.py@834
15:14:26 mlavalle: if bind_port is side effect free by design (not just by chance) we could use that
15:14:26 patch 574783 - neutron - WIP Drive binding by placement allocation
15:15:28 what if, at that point, we have 'allocation' in context.current['binding:profile']
15:15:49 we just re-execute the last binding level with the restricted set of drivers
15:15:51 ?
15:16:20 mlavalle: thinking...
15:16:25 up to that point, we just let everything run as normal
15:16:34 until we detect the bottom
15:17:49 it's another way to backtrack
15:17:50 rubasov: let me rephrase to see if I understand: when done, go back and overwrite the last level according to the allocation
15:18:24 when done, let's execute the last level again with the restricted set of drivers
15:18:37 and rewrite the binding
15:19:00 until we find the bottom, we just let everything proceed as normal
15:19:07 as today
15:19:20 mlavalle: okay I think I understand
15:20:08 and we only do the backtracking if we have 'allocation' in context.current['binding:profile']
15:20:17 mlavalle: I like your idea, it looks workable to me
15:20:33 so instead of adding code at L778
15:21:01 you add code in the else branch at L832
15:21:09 when you know you are at the bottom
15:21:44 the in-tree drivers are side effect free
15:21:48 we know that
15:21:56 mlavalle: one question: what if the two drivers (drv1: the one that first called set_binding, drv2: the one that follows from the allocation) think differently about which level should be the bottom?
15:22:10 mlavalle: can that ever happen?
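[editor's note] A minimal, self-contained Python sketch of the idea discussed above. All names here (MechDriver, bind_bottom_level, rp_uuid) are illustrative stand-ins, not the actual ml2 manager code: once the bottom binding level is detected and the port's binding:profile carries an 'allocation' key, the level is re-run with the candidate drivers restricted to the one matching placement's decision. This relies on the assumption, stated above, that bind_port is side effect free and can safely run a second time.

```python
class MechDriver:
    """Illustrative stand-in for an ml2 mechanism driver (not real neutron code)."""

    def __init__(self, name, rp_uuid):
        self.name = name
        self.rp_uuid = rp_uuid  # resource provider this driver is responsible for

    def bind(self, segment):
        # Assumed side effect free, so it is safe to call a second time.
        return (self.name, segment)


def bind_bottom_level(binding_profile, drivers, segment):
    """Bind the bottom level. If placement put an 'allocation' key into
    the port's binding:profile, restrict the candidate drivers to those
    whose resource provider matches the allocation, then run the level."""
    allocation = binding_profile.get('allocation')
    candidates = drivers
    if allocation is not None:
        candidates = [d for d in drivers if d.rp_uuid == allocation]
    for driver in candidates:
        result = driver.bind(segment)
        if result is not None:
            return result
    return None
```

Without an allocation the level binds exactly as today (first willing driver wins); with one, only the allocated driver is consulted.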
15:22:38 at least in the case of the in-tree drivers, that cannot happen
15:22:47 drv2 should never say continue_binding because that contradicts the allocation
15:23:16 because those drivers actually don't touch the _next_segments_to_bind
15:23:16 mlavalle: so it's theoretically possible, but practically not?
15:23:46 I checked last night
15:23:55 and I will double check
15:24:23 mlavalle: thanks a lot for that
15:24:27 and we can document these assumptions as part of this implementation
15:25:06 mlavalle: do I understand correctly that the proposed change might restrict what an out of tree driver can do?
15:25:30 mlavalle: I think now with your idea I can go ahead and upload a new patchset doing that
15:26:07 mlavalle: compared to what an out of tree driver can do today
15:26:08 gibi: we are going to make the assumption that out of tree drivers that bind at the bottom are side effect free
15:26:32 to tell you the truth I don't even know if that is an issue
15:26:38 mlavalle: more like: 'because those drivers actually don't touch the _next_segments_to_bind', what if out of tree drivers touch it?
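[editor's note] A hypothetical Python sketch of the corner case raised above (FakeDriver and rebind_last_level are made-up names, not neutron API): when the bottom level is re-executed with the allocation-selected driver, that driver asking to continue binding to a further level would contradict placement's decision, so the contradiction is flagged instead of silently accepted.

```python
class FakeDriver:
    """Hypothetical driver stub; 'continue' models a driver calling
    continue_binding instead of completing the binding."""

    def __init__(self, name, outcome):
        self.name = name
        self.outcome = outcome  # 'complete' or 'continue'

    def bind(self, segment):
        return self.outcome


def rebind_last_level(levels, driver, segment):
    """Re-run only the last (bottom) binding level with the driver that
    follows from the placement allocation. A driver that tries to
    continue binding there disagrees about where the bottom is."""
    if driver.bind(segment) == 'continue':
        raise RuntimeError(
            'driver %s disagrees with placement about the bottom level'
            % driver.name)
    return levels[:-1] + [(driver.name, segment)]
```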
15:27:20 it doesn't depend on not touching the _next_segments_to_bind attribute in the context
15:27:28 I know they do
15:27:54 it depends on the fact that the driver that binds at the bottom should be side effect free
15:28:02 so we can run it a second time
15:28:10 once we know we are at the bottom
15:28:31 OK, thanks
15:29:30 the current binding code allows a lot of freedom, but based on this discussion it feels that the drivers are not using that freedom
15:29:35 mlavalle: thanks a lot for the input, I'll go ahead accordingly with the patch
15:29:58 gibi: in the case of the in-tree drivers, they certainly are not
15:30:27 and there aren't a lot of out of tree drivers
15:30:29 I'm fine going forward :)
15:30:35 so we can probably work with them
15:30:49 agree
15:31:14 ok, we have a plan to go forward
15:31:47 will look for the next revision of the patch
15:32:14 mlavalle: will get back to it this week
15:32:26 rubasov: Thanks!
15:32:43 mlavalle: thank you
15:32:49 mlavalle: thanks
15:33:19 gibi, rubasov: I will continue going through the agenda to leave it on the record. But feel free to leave at any moment ;-)
15:33:34 mlavalle: I'll have to very soon, sorry
15:34:02 bye everyone
15:34:11 o/
15:34:48 you can go and enjoy the game of your Visegrad fellow country ;-)
15:35:17 #topic RFEs
15:35:43 In regards to https://bugs.launchpad.net/neutron/+bug/1727578
15:35:44 Launchpad bug 1727578 in neutron "[RFE] Support apply qos policy in VPN service" [Wishlist,Triaged]
15:36:21 we have a patch up for review https://review.openstack.org/#/c/558986/
15:36:22 patch 558986 - neutron-vpnaas - WIP: Support vpn service Qos
15:36:56 mlavalle thinks he brought this up to the attention of the VPNaaS team, but will try again this week
15:37:17 #action mlavalle to talk to the VPNaaS team about https://review.openstack.org/#/c/558986/
15:37:18 patch 558986 - neutron-vpnaas - WIP: Support vpn service Qos
15:37:56 * njohnston is still here
15:38:20 njohnston: nice!
Thanks :-)
15:38:41 in regards to https://bugs.launchpad.net/neutron/+bug/1578989
15:38:42 Launchpad bug 1578989 in neutron "[RFE] Strict minimum bandwidth support (egress)" [Wishlist,In progress] - Assigned to Bence Romsics (bence-romsics)
15:39:21 We discussed alternatives to move forward with the WIP patches in the first part of the meeting with rubasov and gibi
15:39:26 Patches are here:
15:40:40 #link https://review.openstack.org/#/c/574781/
15:40:41 patch 574781 - neutron-lib - WIP Mechanism driver API: responsible_for_rsc_prov...
15:40:56 #link https://review.openstack.org/#/c/574783/
15:40:57 patch 574783 - neutron - WIP Drive binding by placement allocation
15:41:46 In regards to https://bugs.launchpad.net/neutron/+bug/1560963
15:41:47 Launchpad bug 1560963 in neutron "[RFE] Minimum bandwidth support (egress)" [Wishlist,In progress]
15:42:06 slaweq is working on a PoC
15:43:00 https://bugs.launchpad.net/neutron/+bug/1757044 has been discussed twice by the drivers team. Waiting for clarification from the submitter
15:43:01 Launchpad bug 1757044 in neutron "[RFE] neutron L3 router gateway IP QoS" [Wishlist,In progress] - Assigned to LIU Yulong (dragon889)
15:44:00 https://bugs.launchpad.net/neutron/+bug/1708460. The spec is up for review
15:44:00 Launchpad bug 1708460 in neutron "[RFE] Reaction to network congestion for qos" [Wishlist,Triaged] - Assigned to Fouad Benamrane (ftreqah)
15:44:54 #link https://review.openstack.org/#/c/445762
15:44:55 patch 445762 - neutron-specs - Spec for Explicit Congestion Notification
15:45:02 #action mlavalle to review it
15:45:45 I'll review it as well. I thought that ECN bits were autonegotiated during transit, and all that we could really do is enable processing of ECN in the network stack.
15:46:07 Thanks!
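[editor's note] For context on the ECN remark above: per RFC 3168, the ECN field is the two low-order bits of the IPv4 TOS / IPv6 traffic-class byte; endpoints negotiate ECN capability and mark packets as ECN-capable, and a congested router in transit rewrites that marking to "congestion experienced". A tiny illustrative decoder:

```python
# ECN codepoints per RFC 3168, carried in the two low-order bits of the
# IPv4 TOS / IPv6 traffic-class byte.
ECN_CODEPOINTS = {
    0b00: 'Not-ECT',  # transport is not ECN-capable
    0b01: 'ECT(1)',   # ECN-capable transport
    0b10: 'ECT(0)',   # ECN-capable transport
    0b11: 'CE',       # congestion experienced (set by a router in transit)
}

def decode_ecn(tos_byte):
    """Return the name of the ECN codepoint in a TOS/traffic-class byte."""
    return ECN_CODEPOINTS[tos_byte & 0b11]
```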
15:46:13 #topic Bugs
15:46:42 We have a recently filed bug: https://bugs.launchpad.net/neutron/+bug/1776160
15:46:43 Launchpad bug 1776160 in neutron "'burst' does not take effect for neutron egress qos bindlimit by ovs" [Undecided,New]
15:46:58 It requires further triaging / investigation
15:47:18 feel free to take a look
15:47:26 #topic Open Discussion
15:47:35 any other topics we should cover today?
15:48:24 dang! Senegal just scored a few minutes ago. Hang in there Poland. Go Poland!
15:48:34 #endmeeting