15:01:07 #startmeeting neutron_qos
15:01:08 Meeting started Tue Aug 28 15:01:07 2018 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:09 slaweq: the room is all yours...
15:01:09 hi
15:01:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:12 The meeting name has been set to 'neutron_qos'
15:01:14 mlavalle: thx :)
15:01:23 hi again
15:01:30 o/
15:01:31 #topic RFEs
15:01:44 fwiw.... I didn't want to trigger the fury of the Hulk
15:01:52 LOL
15:02:04 You will not trigger it for sure :)
15:02:38 let's talk about RFEs which are in progress somehow, and about new ones later
15:02:44 LOL
15:02:51 let's do that
15:02:58 #link https://bugs.launchpad.net/neutron/+bug/1757044
15:02:58 Launchpad bug 1757044 in neutron "[RFE] neutron L3 router gateway IP QoS" [Wishlist,In progress] - Assigned to LIU Yulong (dragon889)
15:03:14 We now have the API def patch merged: https://review.openstack.org/#/c/567497/
15:03:48 o/
15:03:52 yes, I pushed it a few days ago
15:04:24 there are no other patches in progress currently, but I hope it will be going forward as it's approved by drivers now
15:04:41 ahhh
15:04:43 hang on
15:04:48 yes?
15:05:00 let me find the patches
15:05:44 https://review.openstack.org/#/q/topic:bp/router-gateway-ip-qos,n,z
15:06:05 There are server and agent side patches
15:06:11 ahh, so there are patches already but with a different topic
15:06:15 thx mlavalle
15:06:26 I will review them this week
15:06:37 yeah, what happened is that I created the topic after I pushed the API definition
15:07:04 and also created the blueprint, so we can track it in our weekly meeting
15:07:16 https://blueprints.launchpad.net/neutron/+spec/router-gateway-ip-qos
15:07:28 slaweq: do you want to be the approver?
15:07:37 I can
15:07:43 done
15:08:03 thx
15:08:30 ok, next RFE is #link https://bugs.launchpad.net/neutron/+bug/1578989
15:08:31 Launchpad bug 1578989 in neutron "[RFE] Strict minimum bandwidth support (egress)" [Wishlist,In progress] - Assigned to Lajos Katona (lajos-katona)
15:08:34 Hi, the neutron patches now need a new neutron-lib release for the testing, since I updated the extension to the merged patch here: https://review.openstack.org/#/c/567497/
15:08:39 rubasov: any updates?
15:08:49 slaweq: yes
15:08:54 I know that the patches are going forward :)
15:08:59 liuyulong: ack, I will discuss with boden
15:09:04 we have landed many neutron-lib patches
15:09:16 thanks a lot to everyone helping
15:09:26 thanks, and sorry for the interruption
15:10:20 many of the open neutron changes are actually ready for review, the Zuul -1 is only because of the neutron-lib changes not being released yet
15:10:53 mlavalle: will it be possible to release a new neutron-lib soon then?
15:11:06 rubasov: I will include the neutron-lib patches in my conversation with boden about releasing neutron-lib again
15:11:11 slaweq: yeap
15:11:16 mlavalle: thx
15:11:21 we clearly need it
15:11:36 probably next week
15:11:36 that would be cool
15:11:39 thank you
15:11:52 once we get rocky completely out of the door
15:12:06 that way we know we can break things without too much impact
15:12:10 yes, that was what I thought :)
15:12:18 cool
15:12:29 this week I was putting together a new end-to-end integration of the whole feature (including the open nova and placement changes)
15:12:39 I think that rocky is already bound to some released version of neutron-lib, so a new version shouldn't break it, right?
15:12:55 I hope to show that to you at the PTG
15:13:13 rubasov: sounds good
15:13:16 slaweq: right. let's see how boden feels about it
15:13:26 mlavalle: ok, thx :)
15:13:38 because neutron-lib impacts a number of projects
15:13:52 lajoskatona couldn't be here today, but I know he is working on tests for neutron-tempest-plugin
15:14:44 I am working on pushing a new version of the placement sync service plugin with a plan for how to handle transient placement API errors
15:15:35 that's the current status I think
15:15:49 ok, thx for the update rubasov :)
15:15:55 rubasov: great progress! Thanks for your hard work...
15:16:05 and lajoskatona's as well
15:16:13 +1
15:16:30 I'll forward it to lajoskatona
15:16:49 let's not forget gibi ;-)
15:17:01 right
15:17:10 our infiltrated man in the Nova team ;-)
15:17:35 LOL
15:17:46 and in the Placement team :-D
15:19:33 ok, let's move forward to the next RFEs now :)
15:19:47 we have 2 new RFEs reported recently
15:19:51 first one is:
15:19:54 https://bugs.launchpad.net/neutron/+bug/1787792
15:19:55 Launchpad bug 1787792 in neutron "[RFE] Does not support ipv6 N-S qos" [Wishlist,New]
15:20:47 This one actually came from my trip to China
15:20:55 IMO it may be achieved by implementing a classful bandwidth limit, e.g. using the Common Classifier Framework
15:21:08 mlavalle: You brought it from China? :)
15:21:12 lol
15:21:18 I asked Na Zhu to file the RFE
15:21:31 and we are ready to dedicate a resource to it
15:21:44 that sounds good :)
15:21:52 it could be me
15:22:07 I'm perfectly fine with improving our qos rules for such things
15:22:25 but I think that it can be something more than only support for IPv6
15:22:55 if You e.g. use CCF then it may also be classified by different things
15:23:51 well, if you give me some guidance, I am willing to pursue it
15:24:31 mlavalle: I don't know this CCF very well, I just know there is (was) something like that
15:24:46 slaweq: ok, we can investigate
15:24:49 regarding the backend implementation, I can help for sure
15:25:03 mlavalle: sounds good
15:25:28 and we can elaborate at the PTG
15:25:56 yes
15:26:29 I will try to read a bit more about this classifier framework and check if we can use it here
15:26:42 ok
15:26:45 If you can find davidsha, he is a good resource on it
15:27:00 ahhh, ok, I'll ping him
15:27:07 njohnston: thx
15:28:34 ok, moving on
15:28:41 we also have one more new RFE
15:28:43 https://bugs.launchpad.net/neutron/+bug/1787793
15:28:43 Launchpad bug 1787793 in neutron "[RFE] Does not support shared N-S qos per-tenant" [Wishlist,New]
15:29:17 This is a similar case as the previous one
15:29:30 I asked Na Zhu to file the RFE
15:29:43 IIUC this one, it's about making a shared bandwidth limit for a tenant's ports/floating IPs, right?
15:29:53 yes
15:30:16 I wanted first to see what slaweq and liuyulong think about it
15:30:25 whether it is doable or not?
15:30:40 the idea is fine, but I don't know how we could realize it
15:31:07 I'm not sure how it should work at the API level
15:31:21 I'm not very sure about the tc IPv6 status, is it supported now?
15:31:46 liuyulong: I don't know, it should be checked
15:32:20 so if you tell me where to go and check, I can take that as homework
15:33:47 are You asking about IPv6 in tc?
15:34:08 yeah
15:34:31 but that's ok, I can search for that information
15:35:16 I don't know exactly tbh, I simply need to search :)
15:35:26 ok
15:36:50 ok, getting back to this QoS per tenant
15:37:00 do You have any idea how You would see it?
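[For reference, on the tc IPv6 question raised above: tc has long been able to match IPv6 traffic via its classifiers (e.g. `u32` and the newer `flower`), and the qdisc/class layer such as HTB is address-family agnostic. A minimal sketch — the device name `eth0`, the addresses, and the rates are all illustrative, not anything from Neutron:]

```shell
# Illustrative sketch only: eth0, addresses and rates are placeholders.
# HTB itself does not care about the address family; IPv6 matching
# happens in the filters attached to it.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit ceil 10mbit

# u32 classifier matching an IPv6 destination address:
tc filter add dev eth0 parent 1: protocol ipv6 prio 1 u32 \
    match ip6 dst 2001:db8::1/128 flowid 1:10

# the same kind of match with the flower classifier (newer kernels):
tc filter add dev eth0 parent 1: protocol ipv6 prio 2 flower \
    dst_ip 2001:db8::2/128 classid 1:10
```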
15:37:36 No, I wanted to get ideas from you and liuyulong
15:38:00 my first idea was to use Quota and treat bandwidth as any other resource
15:38:09 but I don't know if that will work fine
15:38:18 and probably there will be some problems with that
15:38:57 so, this is not something that we can enforce at the dataplane, right?
15:39:25 I don't know
15:39:36 how would You enforce it at the dataplane only?
15:39:43 any thoughts liuyulong?
15:39:51 IMO, DVR will block the dataplane bandwidth sharing.
15:40:19 but a bandwidth quota can be introduced to neutron.
15:41:18 so let me ask you a question
15:41:32 let's forget about DVR
15:41:45 let's say the fips are centralized
15:41:59 on the same network server
15:42:30 can we set up something in TC whereby two or more floating IPs can share a bandwidth limit?
15:43:49 Yes, in such a centralized router, it can.
15:44:19 The tc classifier rules can do such work.
15:44:30 let's start from there. could you post the tc commands that would do that in the RFE?
15:45:07 ok, but as an operator, how should I tell neutron that this limit should be shared between FIPs?
15:45:33 in my mind that is something that we can fix in the API
15:46:01 I suggested in the RFE adding an attribute to the policy resource
15:46:03 mlavalle: ok
15:46:26 then we can definitely start with something supported only by L3HA and not supported in DVR
15:46:33 indicating that that policy is meant to share limits at the tenant level
15:46:33 I don't have it right now, let me google one.
15:46:51 liuyulong: I didn't mean right now
15:47:08 let's start brainstorming in the RFE as to what the TC commands would be
15:47:26 mlavalle: sounds good to me
15:47:27 OK, I will paste it to the RFE LP bug.
15:47:41 and thx liuyulong for working on this
15:47:49 slaweq, liuyulong: thank you very much!!!! I think we made progress on this
15:48:06 and liuyulong thanks for responding so quickly to my mail message. Much appreciated
15:48:09 I think Zhaobo's VPN qos patch has something helpful.
15:48:32 https://review.openstack.org/#/c/558986/
15:48:47 on top of that you are attending this meeting at midnight your time ;-)
15:49:25 He already has a new tc-wrapper for the classifier tc rules.
15:50:02 liuyulong: ok, I'll take a look
15:50:54 thx liuyulong for the info :)
15:51:49 ok, are there any other questions related to RFEs?
15:51:56 not from me
15:52:01 or can we move forward to bugs now?
15:52:08 let's move on
15:52:16 #topic Bugs
15:52:30 #link https://bugs.launchpad.net/neutron/+bug/1783559
15:52:30 Launchpad bug 1783559 in neutron "Qos binding network causes ovs-agent to send a lot of rpc" [Medium,In progress] - Assigned to s10 (vlad-esten)
15:52:40 Patch https://review.openstack.org/#/c/585752/ - IMO it's good to go but someone needs to check it as well
15:52:53 mlavalle: please add it to Your queue :)
15:52:57 done
15:53:11 thx
15:53:15 next one is:
15:53:18 #link https://bugs.launchpad.net/neutron/+bug/1784006
15:53:18 Launchpad bug 1784006 in neutron "Instances miss neutron QoS on their ports after unrescue and soft reboot" [Medium,Confirmed] - Assigned to Miguel Lavalle (minsel)
15:53:22 any updates mlavalle on this one?
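[For reference, a rough sketch of the "tc classifier rules" approach liuyulong described earlier for sharing one bandwidth limit across several floating IPs on a centralized router. The device name, FIP addresses, and the 50 Mbit rate are all made up for illustration; the actual commands are what liuyulong agreed to post in the RFE:]

```shell
# Illustrative sketch only: device name, addresses and rate are placeholders.
# The key idea: both floating IPs are steered into the SAME HTB class,
# so they compete for one shared limit instead of each getting its own.
DEV=qg-gateway0   # centralized router gateway port (hypothetical name)

tc qdisc add dev "$DEV" root handle 1: htb
# one class for the whole tenant: 50 Mbit shared total
tc class add dev "$DEV" parent 1: classid 1:42 htb rate 50mbit ceil 50mbit

# both FIPs map to classid 1:42, sharing the same limit
tc filter add dev "$DEV" parent 1: protocol ip prio 1 u32 \
    match ip src 203.0.113.10/32 flowid 1:42
tc filter add dev "$DEV" parent 1: protocol ip prio 1 u32 \
    match ip src 203.0.113.11/32 flowid 1:42
```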
15:53:44 sorry, this one slipped off my todo list
15:53:57 I'll work on it this week
15:54:09 ok, no problem at all :)
15:54:21 next one on the list:
15:54:23 #link https://bugs.launchpad.net/neutron/+bug/1778666
15:54:23 Launchpad bug 1778666 in python-openstackclient "QoS - “port” parameter is required in CLI in order to set/unset QoS policy to floating IP" [Low,Confirmed] - Assigned to Lajos Katona (lajos-katona)
15:54:26 lajoskatona pushed a cherry-pick: https://review.openstack.org/#/c/594244/
15:54:42 so it's almost fixed now
15:55:05 next:
15:55:08 #link https://bugs.launchpad.net/neutron/+bug/1778740
15:55:08 Launchpad bug 1778740 in neutron "Quality of Service (QoS) in Neutron - associating QoS policy to Floating IP" [Low,In progress] - Assigned to Slawek Kaplonski (slaweq)
15:55:14 Patch is ready for review: https://review.openstack.org/591619 - I need to address comments in it
15:55:20 ok
15:55:32 I will do it this week
15:55:52 same for:
15:55:55 #link https://bugs.launchpad.net/neutron/+bug/1777866
15:55:55 Launchpad bug 1777866 in neutron "QoS CLI – Warning in case when provided burst is lower than 80% BW" [Wishlist,New]
15:56:10 patch for docs: https://review.openstack.org/591623 - I will have to address comments there
15:56:30 and lajoskatona did a patch with an updated help message in OSC: https://review.openstack.org/#/c/588168/
15:57:02 last one on the list:
15:57:06 #link https://bugs.launchpad.net/neutron/+bug/1779052
15:57:06 Launchpad bug 1779052 in neutron "Quality of Service (QoS) in Neutron (documentation) - only different types rules can be combined in QoS policy" [Low,In progress] - Assigned to yanpuqing (ycx)
15:57:18 patch is waiting: https://review.openstack.org/581941 - also some comments waiting to be addressed there
15:57:35 and that's all from me about bugs
15:57:44 any questions/something to add?
15:58:02 not from me?
15:58:27 mlavalle: are You asking me? :D
15:58:32 no
15:58:37 LOL
15:58:41 the question mark shouldn't be there
15:58:43 LOL
15:59:00 ok, let's finish the meeting then
15:59:05 thx for attending guys
15:59:08 ahh, one more thing
15:59:10 o/
15:59:16 next meeting will be cancelled as it's during the PTG
15:59:21 yeap
15:59:22 I will send an email about that
15:59:31 #endmeeting