15:00:16 #startmeeting neutron_qos
15:00:16 Meeting started Tue Jul 31 15:00:16 2018 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:20 The meeting name has been set to 'neutron_qos'
15:00:23 hello on qos meeting now :)
15:00:23 o/
15:00:32 o/
15:00:40 o/
15:00:43 o/
15:00:45 oh, it's next ??? :)
15:00:56 o/
15:01:02 #topic RFEs
15:01:18 we don't have any new rfe reported
15:01:31 so let's check briefly the existing ones
15:01:37 #link https://bugs.launchpad.net/neutron/+bug/1560963
15:01:37 Launchpad bug 1560963 in neutron "[RFE] Minimum bandwidth support (egress)" [Wishlist,In progress]
15:01:48 o/
15:01:49 here I still haven't made any progress
15:02:14 I suggest adding this topic to the PTG etherpad
15:02:37 so we can give it visibility
15:02:51 ok, I will add it there when the etherpad is ready :)
15:02:55 thx mlavalle
15:03:07 next one is:
15:03:10 #link https://bugs.launchpad.net/neutron/+bug/1727578
15:03:10 Launchpad bug 1727578 in neutron "[RFE]Support apply qos policy in VPN service" [Wishlist,Triaged]
15:03:37 here, as I checked today, there is also no progress in the WIP patch: https://review.openstack.org/#/c/558986/
15:04:01 yeah, I think I mentioned in a previous meeting that we are not going to see progress on this
15:04:09 yes, I remember :)
15:04:15 until we finish Rocky
15:04:22 just mentioning that we have it on the list
15:04:42 so moving to the next one
15:04:44 #link https://bugs.launchpad.net/neutron/+bug/1757044
15:04:44 Launchpad bug 1757044 in neutron "[RFE] neutron L3 router gateway IP QoS" [Wishlist,In progress] - Assigned to LIU Yulong (dragon889)
15:04:44 cool, zhaobo will get back to it :-)
15:05:01 it is already approved by the drivers team
15:05:06 thx mlavalle
15:05:09 we approved this RFE in the latest meeting
15:05:24 patches are also waiting for reviews:
15:05:26 BTW...
15:05:30 https://review.openstack.org/#/q/topic:bug/1757044+(status:open+OR+status:merged)
15:06:00 it's not urgent for sure, as now we have RC-1 as top prio, but please take a look at it if You have a minute :)
15:06:10 I mentioned this topic and "Support apply qos policy in VPN service" in my PTL self nomination
15:06:28 precisely to give them visibility in the Stein cycle
15:07:12 thx
15:08:43 oh, you are up for election? :)
15:08:44 ok, and the last rfe which we have is
15:08:48 #link https://bugs.launchpad.net/neutron/+bug/1578989
15:08:48 Launchpad bug 1578989 in neutron "[RFE] Strict minimum bandwidth support (egress)" [Wishlist,In progress] - Assigned to Bence Romsics (bence-romsics)
15:09:00 lajoskatona_: rubasov: any updates?
15:09:13 yep, there's quite a lot going on
15:10:04 yeah, I've been feeling a tad overwhelmed seeing the revisions and patches pushed
15:10:17 being bogged down trying to finish rocky
15:10:23 I had to merge the binding related and placement-reporting patches into a single change series to keep me sane while working with them
15:10:24 Thanks a lot :-)
15:11:35 I will also go back to review those patches soon :)
15:12:07 I hope to get the placement reporting stuff out of WIP in a week maybe
15:12:28 slaweq, mlavalle: actually we are waiting for the rocky release, as we see that the team is now working mostly to finish the release process, etc., this is why we are "sitting on patches" for neutron-lib for example
15:12:32 I found one thing that made me think
15:12:56 slaweq: thanks in advance, and also for the reviews we've already got
15:13:29 I just realized that the ovs mech driver reports supporting the direct vnic_type
15:13:43 therefore both ovs and sr-iov report the same
15:14:04 and I'm not sure how placement will be able to decide which one to choose
15:14:22 mhhhh
15:14:49 I have a better explanation of it here: https://review.openstack.org/#/c/580672/5/neutron/services/placement_report/plugin.py@93
15:15:38 it feels like this is the case that was probably solved previously by setting the mech driver order, but that won't work with placement
15:16:25 yeah I see that
15:17:22 and VIF_TYPE is not known in placement, right?
15:17:56 yep, placement has no idea of vif types
15:18:51 so maybe, to enable our case, we might need to give the admin the power to configure which vnic_type the mechanism driver advertises
15:19:04 at least for the OVS case
15:19:12 ok, but is it a common use case that someone has the SRIOV mech_driver and also the OVS mech_driver used to manage SR-IOV ports?
15:19:44 mlavalle: you're reading my thoughts :-) how do you do that?
15:20:08 we do need OVS and SRIOV drivers to monitor the PFs
15:20:36 OVS might be needed when one of the VFs is required to talk via FDB, isn't it?
15:21:29 can it be something like: "if mech_sriov is loaded then mech_openvswitch supports only VNIC_NORMAL"?
15:21:55 rbanerje: I have to admit I didn't get to complete some of my homework till now and I don't fully understand what it means when ovs says it supports sr-iov
15:22:53 don't worry, it took me a long time to understand this, when I was trying to implement it practically
15:23:33 slaweq: I agree, if we need support for direct from ovs and sr-iov at the same time, that's the hardest to solve, so better to avoid it
15:24:03 yeah, let's explore this option
15:24:07 slaweq: but can we do that, or does somebody use it like that?
15:24:28 rubasov: I don't know? :)
15:24:58 we cannot foresee all the possible cases. I suggest that we state clearly what combinations we support and document them clearly
15:25:16 mlavalle++
15:25:17 let the feature run in the wild for a while
15:25:30 and see what feedback we get from users / deployers
15:26:20 mlavalle: by "this" do you mean the config override of supported_vnic_types, or that ovs does not report direct if sr-iov is also present?
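(The config-override option being weighed above could look roughly like the following minimal sketch: an operator-supplied prohibit list trims what a mechanism driver advertises. The function and constant names are illustrative only, not Neutron's actual API.)

```python
# Sketch of the "config override of supported_vnic_types" idea:
# an operator-configured prohibit list removes vnic_types from a
# mechanism driver's advertised set, so e.g. the ovs driver can be
# told to stop reporting 'direct' when sr-iov handles those ports.
# All names here are illustrative, not real Neutron code.

VNIC_NORMAL = 'normal'
VNIC_DIRECT = 'direct'


def filter_supported_vnic_types(supported, prohibited):
    """Drop operator-prohibited vnic_types from a driver's supported list."""
    unknown = set(prohibited) - set(supported)
    if unknown:
        # Guard against typos in the operator's config.
        raise ValueError('cannot prohibit unsupported vnic_types: %s'
                         % sorted(unknown))
    remaining = [v for v in supported if v not in prohibited]
    if not remaining:
        raise ValueError('at least one vnic_type must remain supported')
    return remaining
```

With this, a deployment running both drivers could keep `direct` out of the ovs driver's advertisement (`filter_supported_vnic_types([VNIC_NORMAL, VNIC_DIRECT], [VNIC_DIRECT])` leaves only `normal`), while a default, empty prohibit list changes nothing.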
15:27:04 I lean towards the config option
15:27:15 but could be talked into the other option
15:27:24 mlavalle: ack, I'll go that way then
15:27:26 as you all very well know ;-)
15:27:51 lajoskatona_: anything to add?
15:28:09 rubasov: let's go this way
15:28:35 ok
15:28:41 mlavalle, slaweq, rbanerje: thank you guys
15:28:51 thx for the update rubasov and lajoskatona_
15:28:57 let's move to bugs then
15:29:02 #topic Bugs
15:29:20 there are a few bugs on the list
15:29:23 #link https://bugs.launchpad.net/neutron/+bug/1783559
15:29:23 Launchpad bug 1783559 in neutron "Qos binding network causes ovs-agent to send a lot of rpc" [Medium,In progress] - Assigned to Chengqian Liu (liuchengqian90)
15:29:37 the patch for this bug is ready for review:
15:29:39 #link https://review.openstack.org/585752
15:30:28 but I think that we should now wait for RC-1 to be done and then maybe backport it to stable/rocky, right mlavalle?
15:30:42 right
15:30:55 thx for the confirmation
15:30:59 LOL
15:31:09 so next one is
15:31:14 #link https://bugs.launchpad.net/neutron/+bug/1784006
15:31:15 Launchpad bug 1784006 in neutron "Instances miss neutron QoS on their ports after unrescue and soft reboot" [Medium,Confirmed]
15:31:51 the good thing is that it looks easily reproducible
15:32:26 if anyone wants to work on it, that would be great :)
15:33:28 assign it to me please
15:34:56 mlavalle: done, thx :)
15:35:20 TBH I never did anything like a soft/hard reboot and then checked if QoS is restored
15:35:41 maybe we could include such an operation in a scenario test?
15:36:17 yeah
15:36:21 I'll test it
15:36:31 thx mlavalle a lot
15:36:36 next one is
15:36:39 #link https://bugs.launchpad.net/neutron/+bug/1778666
15:36:39 Launchpad bug 1778666 in neutron "QoS - “port” parameter is required in CLI in order to set/unset QoS policy to floating IP" [Low,Confirmed]
15:37:00 lajoskatona_ tried to reproduce it and he couldn't, AFAIR
15:37:16 so we are now waiting for info about the OSC version from the reporter
15:37:19 right lajoskatona_ ?
15:37:34 slaweq: yes, there was no answer to my questions
15:37:50 ok, so we are waiting with this one
15:37:59 thx lajoskatona_
15:38:01 slaweq: yep
15:38:18 next is related to docs:
15:38:21 #link https://bugs.launchpad.net/neutron/+bug/1778740
15:38:21 Launchpad bug 1778740 in neutron "Quality of Service (QoS) in Neutron - associating QoS policy to Floating IP" [Low,Confirmed]
15:38:45 I will try to take a look at it when I have a few minutes
15:39:09 another bug related to docs is:
15:39:12 #link https://bugs.launchpad.net/neutron/+bug/1779052
15:39:12 Launchpad bug 1779052 in neutron "Quality of Service (QoS) in Neutron (documentation) - only different types rules can be combined in QoS policy" [Low,In progress] - Assigned to yanpuqing (ycx)
15:39:23 and here the patch is waiting for review: https://review.openstack.org/581941
15:39:55 btw. mlavalle what is the policy for such documentation changes now? should they also wait for RC-1 to be done, or can such patches be merged?
15:40:08 no, documents are fine
15:40:43 ok, thx. So please review this patch if You have a few minutes :)
15:40:58 next one is #link https://bugs.launchpad.net/neutron/+bug/1781892
15:40:58 Launchpad bug 1781892 in neutron "QoS – Neutron port is not effected after association “Floating IP” with “QoS policy” enabled" [Low,In progress] - Assigned to Slawek Kaplonski (slaweq)
15:41:02 and this is also related to docs only
15:41:08 patch: https://review.openstack.org/#/c/583967/
15:41:14 so please review it also
15:41:25 ok
15:41:29 thx
15:41:35 and the last one on my list is:
15:41:38 #link https://bugs.launchpad.net/neutron/+bug/1777866
15:41:39 Launchpad bug 1777866 in neutron "QoS CLI – Warning in case when provided burst is lower than 80% BW" [Wishlist,New]
15:41:48 And there are 2 things here:
15:42:10 1. docs clarification - I will check if we can update them somehow to make things clearer
15:42:28 2. a request to display some kind of "warning message" in the CLI
15:42:50 regarding 2, I don't know if something like that is possible/doable in OSC
15:42:57 what do You think?
15:43:24 slaweq: I can check that in osc
15:45:24 ok, thx lajoskatona_
15:46:13 if You find something, please update the bug report in launchpad :)
15:46:22 ok, that's all from my side for today
15:46:32 do You have anything else You want to talk about?
15:46:47 I have a question for rubasov and lajoskatona_
15:46:57 mlavalle: shoot
15:47:09 mlavalle: sure
15:47:29 The other day hongbin_ asked me if he can help with the work you are doing on bandwidth based scheduling
15:47:43 o/
15:47:47 and my response was let's ask the guys who know....
15:47:55 hongbin_: welcome
15:48:11 hongbin_: yeah, welcome
15:48:12 sure, absolutely
15:48:14 rubasov: thanks, just looking to see if there's anything i can help with
15:48:41 so if some items pop up, i will pick them up
15:48:47 hongbin_: do you favor some part of this work?
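(The CLI warning requested in bug 1777866 above boils down to a simple threshold check; the sketch below assumes the 80%-of-bandwidth rule of thumb from the bug title, and the function name and message text are illustrative, not actual OSC code.)

```python
# Sketch of the warning from bug 1777866: flag a burst value that is
# lower than 80% of the bandwidth limit, since such a burst may keep
# the configured limit from ever being reached. Illustrative only.

BURST_RATIO = 0.8  # rule of thumb quoted in the bug title


def burst_warning(max_kbps, max_burst_kbps):
    """Return a warning string if the burst looks too small, else None."""
    if max_burst_kbps < BURST_RATIO * max_kbps:
        return ('burst %d kb is lower than 80%% of the %d kbps bandwidth '
                'limit; the limit may not be reachable'
                % (max_burst_kbps, max_kbps))
    return None
```

A CLI could print the returned string on stderr after creating the rule; for `max_kbps=1000` a burst of 500 would trigger the warning while 800 would not.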
15:49:05 * mlavalle has yet another question for rubasov and lajoskatona_
15:49:17 rubasov: some beginner level tasks would be good for me to get started
15:50:09 hongbin_: let me think about it and I'll get back to you with a few ideas
15:50:16 hongbin_: we shall sync on the way forward
15:50:19 rubasov: thanks
15:50:21 hongbin_: which time zone do you work in?
15:50:29 lajoskatona_: ack
15:50:33 he is UTC-4
15:50:39 Toronto
15:50:40 yes :)
15:51:40 hongbin_: cool, shall we sync tomorrow maybe?
15:51:50 rubasov: sure, thanks
15:52:56 rubasov, lajoskatona_: do we have anything we need in placement?
15:52:57 hongbin_: I'll ping you tomorrow then
15:53:24 mlavalle: you mean 'everything'?
15:53:50 is there anything that needs to be implemented in placement for us to continue our work?
15:54:14 mlavalle: nested RP support is in place
15:54:24 mlavalle: on the placement side
15:54:24 that was my question
15:54:33 thanks gibi ;-)
15:54:49 any-traits is for the next cycle
15:54:56 mlavalle: from the nova perspective we still cannot use the nested RPs
15:55:06 ahhh, yeah
15:55:07 mlavalle: that still needs a couple of steps
15:55:16 the "any" stuff is what I had in mind
15:55:22 so I think we're not blocked by placement progress
15:55:29 cool
15:55:31 thanks
15:55:44 mlavalle: that is also missing. I will re-propose the any-traits spec for Stein
15:56:07 mlavalle: #link https://review.openstack.org/#/c/565730/
15:56:19 gibi: cool. Thanks
15:56:19 * mlavalle will shut up now
15:56:28 * gibi follows
15:56:33 LOL
15:56:41 thx for the update gibi :)
15:57:22 so I think we can finish now
15:57:27 yeap
15:57:27 we have our agent embedded in the placement development team :-))
15:57:36 thx for attending guys :)
15:57:43 thank you
15:57:46 #endmeeting