14:03:05 #startmeeting neutron_qos
14:03:06 Meeting started Wed Dec 9 14:03:05 2015 UTC and is due to finish in 60 minutes. The chair is moshele. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:03:09 ajo: sorry about that, got carried away :)
14:03:10 The meeting name has been set to 'neutron_qos'
14:03:18 alexpilotti: np :D it happens sometimes :D
14:03:26 good morning
14:03:28 hi
14:03:29 hi
14:03:32 hi
14:03:38 hi
14:03:41 cheers everybody, moshele will lead this meeting, ad I need to leave around :20
14:03:46 ad->as
14:03:59 moshele, do you want me to start, and you take it from :20?
14:04:02 I have a conflicting meeting, sorry for the low participation level in advance
14:04:14 sure
14:04:19 ack, thanks
14:04:31 ok, we have a few items to track, thanks moshele for putting the agenda in shape
14:04:41 https://etherpad.openstack.org/p/qos-mitaka
14:05:00 for the rpc-callbacks upgrades dependency, it seems we have agreement in the devref,
14:05:03 the agenda ^
14:05:08 and I shall push code before leaving for xmas
14:05:35 ajo: when will you be heading off for christmas?
14:05:48 ajo: if you don't mind me asking...
14:05:56 davidsha: from Dec 21 to Jan 4, I think
14:06:07 I'm back Jan 4th
14:06:20 if there's any urgent matter, please write to my personal email: miguelangel@ajo.es
14:06:28 alexpilotti: will do for sure.
14:06:34 I will be watching the company one, but my personal one is less pulluted
14:06:38 polluted
14:06:49 ajo: enjoy ;) take a rest
14:06:51 ok
14:06:53 thanks :)
14:06:54 ajo: Thanks for doing that
14:06:58 ajo: enjoy!
14:07:12 njohnston: of course, I know we have important stuff we must complete
14:07:22 you're welcome
14:07:23 about RBAC
14:07:39 I know hdaniel posted a spec, since we were required to publish one
14:07:57 due to the suspicion that we were going to modify 'shared', and therefore render the API backwards incompatible
14:08:25 hdaniel posted it here: https://review.openstack.org/#/c/254224/ , and this is the blueprint: https://blueprints.launchpad.net/neutron/+spec/rbac-qos
14:08:32 if you have some time for review, that would be nice
14:08:42 hdaniel, am I missing something?
14:08:54 ajo: nope, you got it all alright :)
14:09:04 * njohnston will review today
14:09:31 hdaniel: already started
14:09:36 ok
14:09:42 linux bridge integration
14:09:52 slaweq around?
14:10:00 or is it yours, davidsha?
14:10:16 I didn't find the linuxbridge fullstack test patches
14:10:21 it's slaweq, I'm dscp!
14:10:23 did he upload something?
14:10:34 moshele: not yet
14:10:46 ok
14:10:46 moshele: he will by this week, I believe
14:11:10 (sorry for the confusion, davidsha :) )
14:11:29 no problem!
14:11:34 ok, let's keep watching LB as it goes
14:11:38 vikram: can you update the etherpad once he has something?
14:11:47 moshele: +1
14:11:56 moshele: sure
14:12:02 #topic DSCP
14:12:09 it ignores me :D
14:12:24 davidsha: can you give us an update on DSCP?
14:12:26 #topic DCP
14:12:36 thanks moshele :D
14:12:47 the power of being a chairman :)
14:13:30 moshele: DSCP
14:13:31 Ya, it's dependent on ihar, but I have tested it with vhost-user and it works. I'm working at the moment on trying to move the code to ovs-ofctl
14:13:40 #topic DSCP
14:14:02 thanks irenab
14:14:06 davidsha, if it's going to be waiting on ihar's details, maybe we could use some sort of temporary hack to be able to test it?
14:14:15 like setting the uuid or the OVSBridge object in a global?
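[Editor's note: a minimal sketch of the temporary hack ajo floats above — stashing the agent's OVSBridge object in a module-level global so the DSCP extension can install its marking flows while the l2-agent work is pending. All names here are hypothetical, and the bridge is assumed to be a neutron ovs_lib.OVSBridge; mod_nw_tos takes the full ToS byte, so the DSCP value is shifted left by two bits.]

    _QOS_BRIDGE = None  # TODO(hack): remove once the agent hands the bridge to extensions

    def register_bridge(bridge):
        # Called once by the agent; 'bridge' is assumed to be an
        # ovs_lib.OVSBridge, whose add_flow() wraps ovs-ofctl add-flow.
        global _QOS_BRIDGE
        _QOS_BRIDGE = bridge

    def mark_dscp(ofport, dscp):
        # Rewrite the ToS byte of the port's IP traffic; DSCP is the top 6 bits.
        _QOS_BRIDGE.add_flow(table=0, priority=65535, in_port=ofport, proto='ip',
                             actions='mod_nw_tos:%d,normal' % (dscp << 2))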
14:14:20 I'm also trying to get the CLI to work; I wasn't in line with the spec, so I'm working under the assumption that the CLI is right and my code isn't.
14:14:29 with a #TODO(...) hack — what is this depending on?
14:15:04 davidsha: if you're using the client change that vhoward dropped, I think that he is still working on that
14:15:19 ajo: the hack is to use networking-ovs-dpdk, it doesn't seem to clear the flows.
14:15:43 davidsha :)
14:15:54 but does that work for regular ovs ports? (non-dpdk?)
14:16:09 so I was pinging between VMs and checking if the bit was set, and it sets them.
14:16:10 I don't have dpdk knowledge, so testing it would be a bit hard
14:17:11 I'll make a change and put it in the next patch up; I'll see if commenting out the refresh works.
14:18:24 njohnston: Ok, I'll hack it to work properly, I'm just having problems with the variable name.
14:19:19 While davidsha has been doing that, I am working on unit tests, then functional tests, and vhoward is working on python-neutronclient changes, so we can hopefully drop everything reasonably quickly but in reviewable chunks.
14:19:21 davidsha, can you put up some brief instructions on how to test it?
14:19:30 any special ovs for using the ovs-dpdk agent?
14:19:44 njohnston++
14:19:45 davidsha+
14:19:48 ++
14:19:52 +1 for testing instructions
14:20:28 ok, I have to drop from the meeting, moshele, it's all yours.
14:20:42 thanks ajo
14:20:46 cheers everybody,
14:20:50 ajo: For ovs-dpdk, it is already merged with the ovs-agent in neutron, you need to set the ML2 agent to just ovs I think.
14:20:54 ajo: thanks!
14:20:55 ajo: bye
14:20:58 bye
14:21:00 ajo: cya
14:21:02 bye
14:21:03 thanks ajo
14:23:04 so it looks like the change Ihar outlined is a dependency for the DSCP stuff getting merged; do we know what the status of that is? Not sure if there is a spec yet.
14:23:10 njohnston: also thanks for putting in the valid_dscp_nums, was a bit overwhelmed at the time and was slow to upload.
14:24:02 np, happy to help!
14:24:04 njohnston: what dependency? the upgrade?
14:24:46 moshele: http://lists.openstack.org/pipermail/openstack-dev/2015-December/081264.html
14:25:09 ok, the l2 agent
14:25:24 I don't think work has started on that
14:25:42 but we need to follow up with Ihar on that
14:25:54 This is also a dependency for vikram on his 801.1p (I think)
14:26:04 I will add it to the etherpad
14:26:23 His blueprint would be implemented the exact same way as dscp is.
14:26:41 802.1p*
14:26:43 I am unclear on whether that is a hard dependency on Ihar's stuff before DSCP can be merged, or if there is a scenario where DSCP can be merged with a work-around before Ihar's proposed change.
14:27:08 davidsha: +1
14:27:39 Well, Ihar's change is actually what I was thinking as well: take a table other than 0 for qos flows.
14:28:07 because tables other than 0 are not cleared
14:28:38 I guess we will need to sync with ihar on that
14:29:16 anything else on DSCP?
14:29:41 nope
14:29:51 Not atm, I'm still working through Ihar's comments and have to do the TODOs I marked.
14:30:23 ok cool
14:31:04 so let's move to the slow-moving issues
14:31:35 did we mention the python client yet? I WF-1'd a patch: https://review.openstack.org/#/c/254280/
14:31:42 sorry for being late and stuff
14:31:59 vhoward: Yup
14:32:09 cool, then I'm good for now, thx for all your help
14:32:30 vhoward: no prob, thanks for the cli.
14:32:39 yw
14:33:00 so I don't think anyone is working on BWG or an integration with the nova scheduler, right?
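[Editor's note: a sketch of the approach discussed in the DSCP topic above — keeping the QoS flows in a table other than 0, since only table 0 gets cleared. The table number and function names are assumptions for illustration; the actual table allocation would come from Ihar's proposed l2-agent change, and the bridge is again assumed to be an ovs_lib.OVSBridge.]

    QOS_DSCP_TABLE = 90  # hypothetical table number

    def install_dscp_flows(bridge, ofport, dscp):
        # Steer the port's IP traffic into the dedicated QoS table...
        bridge.add_flow(table=0, priority=100, in_port=ofport, proto='ip',
                        actions='resubmit(,%d)' % QOS_DSCP_TABLE)
        # ...and do the marking there, where a flush of table 0 won't remove it.
        bridge.add_flow(table=QOS_DSCP_TABLE, priority=100, in_port=ofport,
                        proto='ip',
                        actions='mod_nw_tos:%d,normal' % (dscp << 2))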
14:34:23 ok, I just wanted to let you know that there is QoS integration with Horizon and with Heat
14:34:49 so if you have time you can review the patches; they are listed on the etherpad
14:35:10 * njohnston has been reviewing the heat patches, thinks they are close
14:35:38 njohnston: cool, thanks for the update
14:36:17 irenab: do you have something to say regarding the port flavors proposal?
14:36:39 I'll help with that also
14:36:47 moshele: I need to catch up on this
14:37:16 I am not aware of any progress
14:37:33 ok, thanks
14:37:55 let's move to open bugs
14:38:04 #topic Opened Bugs
14:38:29 this is the link for all the qos bugs: https://bugs.launchpad.net/neutron/+bugs?field.tag=qos
14:39:03 moshele: I have updated the etherpad with a patch for review
14:39:03 Some of those are RFEs
14:39:16 https://review.openstack.org/#/c/253928/
14:39:27 we have one high priority bug regarding SR-IOV and network update; I think it's close, ready for core reviewers
14:39:45 bug https://launchpad.net/bugs/1509232
14:39:45 Launchpad bug 1509232 in neutron "If we update a QoSPolicy description, the agents get notified and rules get rewired for nothing" [Medium,In progress] - Assigned to Irena Berezovsky (irenab)
14:39:47 davidsha: yes, you are right
14:39:53 irenab: thanks
14:40:34 davidsha: I did know how to filter them out :(
14:40:40 I didn't
14:41:04 moshele: no problem, I don't either ;)
14:41:49 irenab: do you have any update on your patch https://review.openstack.org/#/c/253928/ ?
14:43:00 moshele: please review it
14:43:16 irenab: I will :)
14:43:43 the patch is small, the main issue is method naming :-)
14:44:30 irenab: method naming is not my strong side :)
14:45:34 all the other bugs are low priority, so unless someone has something to say about them I think we can skip them
14:46:21 moshele: I want to discuss https://bugs.launchpad.net/neutron/+bug/1521194
14:46:21 Launchpad bug 1521194 in neutron "Qos Aggregated Bandwidth Rate Limiting" [Wishlist,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
14:46:59 Need some input
14:47:27 it is more like a feature, no?
14:47:59 yes.. it's an RFE bug
14:48:14 I want to discuss the implementation idea
14:48:28 ok, please go ahead
14:48:35 My Idea: "This proposal talks about the support for aggregated bandwidth rate limiting, where all the ports in the network should together attain the specified network bandwidth limit. To start with, the easiest implementation could be dividing the overall bandwidth value by the number of ports in the network. With this there might be a case of over- and under-utilization which might need more thought (maybe we have to monitor all the ports and have
14:48:36 a notion of a threshold to decide whether to increase or decrease the bandwidth)"
14:49:14 In the current QoS implementation, network QoS bandwidth limits are applied uniformly to all the ports in the network. This may be wrong. Consider a case where the user just wants 20 mbps for the entire network, but the current implementation will end up allowing 20 * num_of_ports_in_the_network mbps.
14:49:38 Does my proposal sound correct..
14:49:44 or any suggestions from the team?
14:50:31 When you apply a policy to a network, does it not just apply the qos_policy to all the ports in the network?
14:50:40 I think so
14:50:41 so how do you plan to specify the new behavior for network qos?
14:50:50 yes
14:51:00 except router and dhcp
14:51:13 to start with, uniformly divide..
14:51:20 for all the compute ports
14:51:23 Oh sorry, I misunderstood, kk.
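[Editor's note: a toy illustration of vikram's proposal above — splitting an aggregate network bandwidth limit evenly across the network's compute ports, skipping router and DHCP ports as discussed. Function and field names are illustrative only; in Neutron, router and DHCP ports carry device_owner values starting with 'network:'. The over/under-utilization concern raised above is not addressed here.]

    def split_network_limit(total_kbps, ports):
        # Exclude ports owned by the 'network:' services (router, DHCP),
        # matching the "except router and dhcp" point above.
        compute_ports = [p for p in ports
                         if not p['device_owner'].startswith('network:')]
        if not compute_ports:
            return {}
        share = total_kbps // len(compute_ports)
        return {p['id']: share for p in compute_ports}

    # e.g. a 20000 kbps network limit over 4 VM ports gives 5000 kbps each,
    # instead of the current behavior of 20000 kbps per port (80000 aggregate).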
14:51:30 moshele: and sometimes you may want to limit the router port too
14:51:58 but this may have bigger issues of over- and under-utilization
14:52:07 so you can apply it on the port itself
14:52:12 yes
14:52:28 divide equally and apply
14:52:38 vikram: Can the user specify the bandwidth?
14:52:55 reedip: they specify the total for the network
14:52:56 reedip_: I didn't get you
14:53:14 reedip_: For the network it can be specified
14:53:19 vikram: for a particular port
14:53:29 reedip_: yes we can
14:53:46 reedip_: 2 ways we can apply now .. port / network
14:53:55 reedip_: the issue is with the network
14:54:24 load balancing (not LB) would have to be considered here
14:54:42 does dividing it equally and applying it to all ports sound good?
14:55:10 not if you are sure that you have high data traffic for a specific type of data
14:55:20 reedip_: that's why I said it's a harder problem to solve ;)
14:55:26 like for example, if you have pretty high HTTP requests
14:55:26 vikram: just for clarification, this is a new rule, not modifying the existing one?
14:55:53 then would the new implementation assist us?
14:56:01 I think he wants to modify the existing rate limit rule
14:56:09 moshele: ++
14:56:22 vikram: now the policy doesn't care if it's attached to a port or a network, and you're requesting that for qos bound to a network there will be 2 behaviors
14:56:24 ok.
14:56:27 Can we provide this as a new option?
14:56:42 so I wonder how to do it at the api and model level of qos
14:56:52 do you have a spec?
14:57:03 moshele: nope..
14:57:13 moshele: since it's an RFE
14:57:31 moshele: if we need to write one.. I can
14:57:41 vikram: can you post a link?
14:57:49 I think you should, it makes it clearer
14:57:53 will we make different behavior for the network and port versions the norm?
14:58:20 irenab, moshele: Let me do some write-up then
14:58:45 cool, update the etherpad when you have something
14:58:55 moshele: done
14:58:56 and then we can discuss it in gerrit :)
14:59:02 moshele: ;)
14:59:26 so we're about to end the meeting
14:59:35 anything else?
14:59:45 nope. thanks moshele!
14:59:57 I'm going to be trying a PoC for hierarchical rules
15:00:08 ok cool, bye everyone
15:00:09 #endmeeting