14:03:05 <moshele> #startmeeting neutron_qos
14:03:06 <openstack> Meeting started Wed Dec  9 14:03:05 2015 UTC and is due to finish in 60 minutes.  The chair is moshele. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:03:09 <alexpilotti> ajo: sorry about that got carried away :)
14:03:10 <openstack> The meeting name has been set to 'neutron_qos'
14:03:18 <ajo> alexpilotti : np :D it happens sometimes :D
14:03:26 <njohnston> good morning
14:03:28 <moshele> hi
14:03:29 <irenab> hi
14:03:32 <vikram> hi
14:03:38 <davidsha> hi
14:03:41 <ajo> cheers everybody, moshele will lead this meeting, as I need to leave around :20
14:03:59 <ajo> moshele , do you want me to start, and you take it from :20 ?
14:04:02 <irenab> I have a conflicting meeting, sorry for the low participation level in advance
14:04:14 <moshele> sure
14:04:19 <ajo> ack, thanks
14:04:31 <ajo> ok, we have a few items to track, thanks moshele for putting the agenda in shape
14:04:41 <moshele> https://etherpad.openstack.org/p/qos-mitaka
14:05:00 <ajo> for the rpc-callback upgrades dependency, it seems we have agreement in devref,
14:05:03 <moshele> the agenda  ^
14:05:08 <ajo> and I shall push code before leaving for xmas
14:05:35 <davidsha> ajo: when will you be heading off for christmas?
14:05:48 <davidsha> ajo: if you don't mind me asking...
14:05:56 <ajo> davidsha : from Dec 21 to Jan 4th, I think
14:06:07 <ajo> I'm back Jan 4th
14:06:20 <ajo> if there's any urgent matter, please write to my personal email: miguelangel@ajo.es
14:06:28 <jaypipes> alexpilotti: will do for sure.
14:06:34 <ajo> I will be watching the company one, but my personal one is less polluted
14:06:49 <vikram> ajo: enjoy ;) take rest
14:06:51 <ajo> ok
14:06:53 <ajo> thanks :)
14:06:54 <njohnston> ajo: Thanks for doing that
14:06:58 <davidsha> ajo: enjoy!
14:07:12 <ajo> njohnston : of course, I know we have important stuff we must complete
14:07:22 <ajo> you're welcome
14:07:23 <ajo> about RBAC
14:07:39 <ajo> I know hdaniel  posted a spec, since we were required to publish one
14:07:57 <ajo> due to the suspicion that we were going to modify 'shared', and therefore render the API backwards-incompatible
14:08:25 <ajo> hdaniel posted it here: https://review.openstack.org/#/c/254224/ , and this is the blueprint https://blueprints.launchpad.net/neutron/+spec/rbac-qos
14:08:32 <ajo> if you have some time for review, that would be nice
14:08:42 <ajo> hdaniel , am I missing something?
14:08:54 <hdaniel> ajo: nope, you got it all alright :)
14:09:04 * njohnston will review today
14:09:31 <irenab> hdaniel: already started
14:09:36 <ajo> ok
14:09:42 <ajo> linux bridge integration
14:09:52 <ajo> slaweq around ?
14:10:00 <ajo> or is it yours, davidsha ?
14:10:16 <moshele> I didn't find the linuxbridge fullstack tests patches
14:10:21 <davidsha> it's slaweq, I'm dscp!
14:10:23 <moshele> did he upload something?
14:10:34 <vikram> moshele: not yet
14:10:46 <moshele> ok
14:10:46 <vikram> moshele: he will by this week, I believe
14:11:10 <ajo> (sorry for the confusion davidsha  :) )
14:11:29 <davidsha> no problem!
14:11:34 <ajo> ok, let's keep watching LB as it goes
14:11:38 <moshele> vikram: can you update the etherpad once he has something
14:11:47 <ajo> moshele : +1
14:11:56 <vikram> moshele: sure
14:12:02 <ajo> #topic DSCP
14:12:09 <ajo> it ignores me :D
14:12:24 <ajo> davidsha: can you update on DSCP?
14:12:26 <moshele> #topic DCP
14:12:36 <ajo> thanks moshele  :D
14:12:47 <moshele> the power of being a chairman :)
14:13:30 <irenab> moshele: DSCP
14:13:31 <davidsha> Ya, it's dependent on ihar, but I have tested it with vhost-user and it works. I'm working at the moment on trying to move the code to ovs-ofctl
14:13:40 <moshele> #topic DSCP
14:14:02 <moshele> thanks irenab
14:14:06 <ajo> davidsha , if it's going to be waiting on ihar's details, maybe we could use some sort of temporary hack to be able to test it?
14:14:15 <ajo> like setting the uuid or the OVSBridge object in a global?
14:14:20 <davidsha> I'm also trying to get the cli to work, I wasn't in line with the spec so I'm working under the assumption that the cli is right and my code isn't.
14:14:29 <ajo> with a #TODO(...): hack, this is depending on <url> ?
14:15:04 <njohnston> davidsha: if you're using the client change that vhoward dropped, I think that he is still working on that
14:15:19 <davidsha> ajo: the hack is to use networking-ovs-dpdk, it doesn't seem to clear the flows.
14:15:43 <ajo> davidsha  :)
14:15:54 <ajo> but does that work for regular ovs ports? (non-dpdk?)
14:16:09 <davidsha> so I was pinging between VMs and checking if the bit was set, and it sets them.
14:16:10 <ajo> I don't have dpdk knowledge, so testing it would be a bit hard
14:17:11 <davidsha> I'll make a change and put it in the next patch up, I'll see if commenting out the refresh works.
14:18:24 <davidsha> njohnston: Ok, I'll hack it to work properly, I'm just having problems with the variable name.
14:19:19 <njohnston> While davidsha has been doing that, I am working on unit tests then functional tests. and vhoward is working on python-neutronclient changes, so we can hopefully drop everything reasonably quickly but in reviewable chunks.
14:19:21 <ajo> davidsha, can you put some brief instructions on how to put it to test?
14:19:30 <ajo> any special ovs for using the ovs-dpdk agent?
14:19:44 <ajo> njohnston++
14:19:45 <ajo> davidsha+
14:19:48 <ajo> ++
14:19:52 <njohnston> +1 for testing instructions
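(A minimal sketch of the kind of check davidsha describes -- ping between VMs and verify the DSCP bits on the received packets. The helper below only decodes a DSCP value from an IP header's ToS byte; it is illustrative, not part of the Neutron patch:)

    # Sketch only: DSCP is the top 6 bits of the IP ToS byte.
    def dscp_from_tos(tos_byte):
        return tos_byte >> 2

    # A ToS byte of 0xb8 on the wire corresponds to DSCP 46 (Expedited Forwarding).
    assert dscp_from_tos(0xb8) == 46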
14:20:28 <ajo> ok, I have to drop from the meeting, moshele , it's all yours.
14:20:42 <moshele> thanks ajo
14:20:46 <ajo> cheers everybody,
14:20:50 <davidsha> ajo: For ovs-dpdk, it is already merged with the ovs-agent in neutron, you need to set the ML2 agent to just ovs I think.
14:20:54 <davidsha> ajo: thanks!
14:20:55 <irenab> ajo: bye
14:20:58 <vikram> bye
14:21:00 <davidsha> ajo: cya
14:21:02 <moshele> bye
14:21:03 <njohnston> thanks ajo
14:23:04 <njohnston> so it looks like the change Ihar outlined is a dependency for the DSCP stuff getting merged, do we know what the status of that is?  Not sure if there is a spec yet.
14:23:10 <davidsha> njohnston: also thanks for putting in the valid_dscp_nums, was a bit overwhelmed at the time and was slow to upload.
14:24:02 <njohnston> np, happy to help!
14:24:04 <moshele> njohnston: what dependency? the upgrade?
14:24:46 <njohnston> moshele: http://lists.openstack.org/pipermail/openstack-dev/2015-December/081264.html
14:25:09 <moshele> ok the l2 agent
14:25:24 <moshele> I don't think work has started on that
14:25:42 <moshele> but we need to follow up with  Ihar  on that
14:25:54 <davidsha> This is also a dependency for vikram on his 802.1p (I think)
14:26:04 <moshele> I will add it to the etherpad
14:26:23 <davidsha> His blueprint would be implemented the exact same way as dscp is.
14:26:43 <njohnston> I am unclear on whether that is a hard dependency on Ihar's stuff before DSCP can be merged, or if there is a scenario where DSCP can be merged with a work-around before Ihar's proposed change.
14:27:08 <vikram> davidsha: +1
14:27:39 <davidsha> Well, Ihar's change is actually what I was thinking as well: use a table other than 0 for qos flows.
14:28:07 <davidsha> because tables other than 0 are not cleared
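(A rough illustration of the idea being discussed -- keep the DSCP-marking flow in a reserved non-zero table so that a table-0 refresh does not wipe it. This is a sketch under assumptions, not Ihar's actual design or davidsha's patch; the bridge name, table number and port number are made up:)

    # Illustrative only: 'br-int', table 30 and in_port=5 are assumptions.
    import subprocess

    QOS_TABLE = 30   # hypothetical table reserved for QoS flows
    DSCP = 46        # EF; mod_nw_tos takes the whole ToS byte, i.e. DSCP << 2

    # table 0 is the one the agent refreshes; it only jumps to the QoS table
    subprocess.check_call(['ovs-ofctl', 'add-flow', 'br-int',
        'table=0,priority=10,in_port=5,actions=resubmit(,%d)' % QOS_TABLE])
    # the marking flow lives in the non-zero table, which is not cleared
    subprocess.check_call(['ovs-ofctl', 'add-flow', 'br-int',
        'table=%d,priority=10,ip,actions=mod_nw_tos:%d,NORMAL' % (QOS_TABLE, DSCP << 2)])
    # non-IP traffic falls through to normal switching
    subprocess.check_call(['ovs-ofctl', 'add-flow', 'br-int',
        'table=%d,priority=0,actions=NORMAL' % QOS_TABLE])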
14:28:38 <moshele> I guess we will need to sync with ihar on that
14:29:16 <moshele> anything else on the  DSCP ?
14:29:41 <njohnston> nope
14:29:51 <davidsha> Not atm, I'm still working through Ihar's comments and have to do the TODO's I marked.
14:30:23 <moshele> ok cool
14:31:04 <moshele> so let's move on to the slow-moving issues
14:31:35 <vhoward> did we mention python client yet? I wf-1d a patch https://review.openstack.org/#/c/254280/
14:31:42 <vhoward> sorry for being late and stuff
14:31:59 <davidsha> vhoward: Yup
14:32:09 <vhoward> cool, then i'm good for now, thx for all your help
14:32:30 <davidsha> vhoward: no prob, thanks for the cli.
14:32:39 <vhoward> yw
14:33:00 <moshele> so I don't think anyone is working on BWG or on integration with the nova scheduler, right?
14:34:23 <moshele> ok, I just wanted to let you know that there is QoS integration with Horizon and with Heat
14:34:49 <moshele> so if you have time you can review the patches, they are listed on the etherpad
14:35:10 * njohnston has been reviewing the heat patches, think they are close
14:35:38 <moshele> njohnston: cool thanks for the update
14:36:17 <moshele> irenab: do you have something to say regarding the port flavors proposal?
14:36:39 <vhoward> i'll help with that also
14:36:47 <irenab> moshele: I need to catch up on this
14:37:16 <irenab> I am not aware of any progress
14:37:33 <moshele> ok thanks
14:37:55 <moshele> let's move to open bugs
14:38:04 <moshele> #topic Opened Bugs
14:38:29 <moshele> this is the link for all the qos bugs  https://bugs.launchpad.net/neutron/+bugs?field.tag=qos
14:39:03 <irenab> moshele: I have updated etherpad with patch for review
14:39:03 <davidsha> Some of those are rfe
14:39:16 <irenab> https://review.openstack.org/#/c/253928/
14:39:27 <moshele> we have one high-priority bug regarding SR-IOV and network update, I think it's close to ready for core reviewers
14:39:45 <irenab> bug  https://launchpad.net/bugs/1509232
14:39:45 <openstack> Launchpad bug 1509232 in neutron "If we update a QoSPolicy description, the agents get notified and rules get rewired for nothing" [Medium,In progress] - Assigned to Irena Berezovsky (irenab)
14:39:47 <moshele> davidsha: yes you are right
14:39:53 <moshele> irenab: thanks
14:40:34 <moshele> davidsha: I didn't know how to filter them out :(
14:41:04 <davidsha> moshele: no problem, I don't either ;)
14:41:49 <moshele> irenab: do you have any update on your patch https://review.openstack.org/#/c/253928/ ?
14:43:00 <irenab> moshele: please review it
14:43:16 <moshele> irenab: I will :)
14:43:43 <irenab> the patch is small, the main issue is method naming :-)
14:44:30 <moshele> irenab: method naming is not my strong side  :)
14:45:34 <moshele> all the other bugs are low priority, so unless someone has something to say about them I think we can skip them
14:46:21 <vikram> moshele: I want to discuss https://bugs.launchpad.net/neutron/+bug/1521194
14:46:21 <openstack> Launchpad bug 1521194 in neutron "Qos Aggregated Bandwidth Rate Limiting" [Wishlist,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
14:46:59 <vikram> Need some inputs
14:47:27 <moshele> it is more like a feature, no?
14:47:59 <vikram> yes.. it's an RFE bug
14:48:14 <vikram> I want to discuss the implementation idea
14:48:28 <moshele> ok please go ahead
14:48:35 <vikram> My Idea: "This proposal talks about support for aggregated bandwidth rate limiting, where all the ports in the network should together attain the specified network bandwidth limit. To start with, the easiest implementation could be dividing the overall bandwidth value by the number of ports in the network. In this there might be cases of over- and under-utilization which might need more thought (maybe we have to monitor all the ports and have
14:48:36 <vikram> a notion of a threshold to decide whether to increase or decrease the bandwidth)"
14:49:14 <vikram> In the current QoS implementation, network QoS bandwidth limits are applied uniformly to all the ports in the network. This may be wrong. Consider a case where the user just wants 20 mbps speed for the entire network, but the current implementation will end up allowing 20 * num_of_ports_in_the_network mbps.
14:49:38 <vikram> Does my proposal sound correct,
14:49:44 <vikram> or any suggestions from the team?
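(A toy sketch of the equal-split strategy vikram proposes -- the names are made up for illustration, not a proposed Neutron API:)

    # Toy illustration only: split an aggregate network limit evenly across ports.
    def per_port_limit_kbps(network_limit_kbps, port_ids):
        if not port_ids:
            return {}
        share = network_limit_kbps // len(port_ids)
        return {port: share for port in port_ids}

    # A 20000 kbps (20 mbps) network limit over 4 ports -> 5000 kbps per port;
    # uneven traffic is exactly the over/under-utilization problem mentioned above.
    print(per_port_limit_kbps(20000, ['p1', 'p2', 'p3', 'p4']))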
14:50:31 <davidsha> When you apply a policy to a network does it not just apply the qos_policy to all the ports in the network?
14:50:40 <vikram> i think so
14:50:41 <moshele> so how do you plan to specify the new behavior for network qos
14:50:50 <moshele> yes
14:51:00 <moshele> except router and dhcp
14:51:13 <vikram> to start with uniformly divide..
14:51:20 <moshele> for all the compute ports
14:51:23 <davidsha> Oh sorry I misunderstood, kk.
14:51:30 <irenab> moshele: and sometimes you may want to limit router port too
14:51:58 <vikram> but this may have bigger issues of over and under utilization
14:52:07 <moshele> so you can apply it  on the port it self
14:52:12 <vikram> yes
14:52:28 <vikram> divide equally and apply
14:52:38 <reedip_> vikram : Can the user specify the bandwidth ?
14:52:55 <davidsha> reedip: they specify the total for the network
14:52:56 <vikram> reedip_: i didn't get you
14:53:14 <vikram> reedip_: For network it can be specified
14:53:19 <reedip_> vikram : for a particular port
14:53:29 <vikram> reedip_: yes we can
14:53:46 <vikram> reedip_: 2 ways we can apply now .. port / network
14:53:55 <vikram> reedip_: the issue is with network
14:54:24 <reedip_> load balancing ( not LB ) would have to be considered here
14:54:42 <vikram> does dividing it equally and applying it to all ports sound good?
14:55:10 <reedip_> not if you are sure that you have high data traffic for a specific type of data
14:55:20 <vikram> reedip_: that's why I said it's a harder problem to solve ;)
14:55:26 <reedip_> like for example, if you have a pretty high HTTP request rate
14:55:26 <davidsha> vikram: just for clarification, this is a new rule, not modifying the existing one?
14:55:53 <reedip_> then would the new implementation assist us ?
14:56:01 <moshele> I think he want to modify the existing rate limit rule
14:56:09 <vikram> moshele: ++
14:56:22 <moshele> vikram: right now the policy doesn't care whether it is attached to a port or a network, and you are requesting that for qos bound to a network there will be 2 behaviors
14:56:24 <davidsha> ok.
14:56:27 <reedip_> Can we provide this as a new option?
14:56:42 <moshele> so I wonder how to do it in the api and model level of qos
14:56:52 <moshele> do you have a spec ?
14:57:03 <vikram> moshele: nope..
14:57:13 <vikram> moshele: since it's an rfe
14:57:31 <vikram> moshele: if we need to write one .. i can
14:57:41 <irenab> vikram: can you post a link?
14:57:49 <moshele> I think you should, it makes it clearer
14:57:53 <davidsha> will we make it the norm for different behavior for network and port versions?
14:58:20 <vikram> irenab, moshele: Let me do some write-up then
14:58:45 <moshele> cool, update the etherpad when you have something
14:58:55 <vikram> moshele: done
14:58:56 <moshele> and then we can discuss it in gerrit :)
14:59:02 <vikram> moshele: ;)
14:59:26 <moshele> so we are about to end the meeting
14:59:35 <moshele> anything else?
14:59:45 <njohnston> nope.  thanks moshele!
14:59:57 <davidsha> I'm going to be trying a PoC for hierarchical rules
15:00:08 <moshele> ok cool bye everyone
15:00:09 <moshele> #endmeeting