14:01:30 #startmeeting neutron_qos
14:01:32 Meeting started Wed Apr 29 14:01:30 2015 UTC and is due to finish in 60 minutes. The chair is ajo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:35 The meeting name has been set to 'neutron_qos'
14:01:47 Hello!
14:01:51 hi
14:01:52 #topic Announcements
14:01:52 #link https://wiki.openstack.org/wiki/Meetings/NeutronQoS
14:01:54 #link https://etherpad.openstack.org/p/neutron-liberty-qos-code-sprint
14:02:15 :-) I wanted to spam as usual about the sprint, if anybody wants to join and didn't sign up yet, do it :)
14:02:27 #topic High level api/cmdline overview, to share with operators
14:02:27 #link https://etherpad.openstack.org/p/neutron-qos-api-preview
14:02:45 hi
14:02:49 hi
14:02:51 hi
14:02:55 hi garyk, sorry, maybe I was running too fast :)
14:03:05 I just posted: https://etherpad.openstack.org/p/neutron-qos-api-preview
14:03:20 just a suggestion, as i wrote in the message: make the unit a field and then "max", "max_burst"...
14:03:33 and unit can be Kb, Mb...
14:03:45 gsagie, yes I thought of it, but I believe we shall not complicate ourselves by adding unit management
14:03:53 hi
14:03:56 ajo: is that egress or ingress or both directions?
14:04:00 if we pick one, it's uniform across all
14:04:00 hey ajo
14:04:22 garyk, the current definition is egress,
14:04:37 but we could extend it in the future, as we plan to do with protocol / port matching, etc
14:04:44 ok, thanks, any reason why we do not allow for ingress?
14:05:30 garyk, I didn't have time to experiment with the ref implementation to provide ingress, but we could extend it to support that
14:05:41 ajo: ok, thanks
14:05:51 sorry for jumping in very late… but it does not harm to ask
14:05:57 I guess it's basically just adding another queue, and moving traffic going to the mac into that queue
14:05:58 ajo: shall we have direction as part of the API?
14:06:17 irenab, maybe we should consider that field in this first iteration
14:06:20 to make it clear
14:06:30 I believe it's a good suggestion
14:06:55 +1
14:06:57 #action ajo add direction field to QoSRules (ingress/egress)
14:07:02 ajo: irenab: one more question (which maybe i missed): is this per port or per network?
14:07:10 garyk per port
14:07:19 ok, thanks
14:07:27 garyk, you could attach a policy to a network, but that means the policy will be applied to all ports in that network
14:07:37 no consideration for aggregated bandwidth at this time
14:07:43 per-network qos will just be applied on each port that does not have its own qos policy associated
14:07:44 would it be possible to define it globally on the network and then have each port inherit the global policy?
14:07:59 irenab: ok, thanks, that answers the question :)
14:07:59 i have a doubt..
14:08:05 garyk, that's the plan, but bandwidth is accounted locally for each port.
14:08:14 ok
14:08:31 if ratelimiting for network=100mbps and for port=200mbps, which will be considered?
14:08:42 garyk, maybe at a later time we can come up with a way to have "aggregated network bw limits", but I'm not sure how to do it at the low level; I guess it's a complicated topic
14:08:44 port overrides network policy
14:09:00 ajo: ok
14:09:06 the spec currently says port config overrides network config
14:09:14 but the overall n/w is configured to support only 100
14:09:28 is this a misconfiguration?
14:09:32 do we need to handle it?
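To make the shape under discussion a bit more concrete, here is a minimal sketch (in Python, purely illustrative) of a bandwidth-limit rule with the fields mentioned above; the field names, the fixed kbps unit, and the egress default are assumptions drawn from the conversation, not the final API.

```python
# Hypothetical sketch only: field names and the fixed kbps unit are assumptions.
from dataclasses import dataclass

@dataclass
class BandwidthLimitRule:
    """One QoS rule limiting egress (and, possibly later, ingress) bandwidth."""
    max_kbps: int              # hard limit, expressed in kbps so no unit field is needed
    max_burst_kbps: int = 0    # allowed burst above the limit
    direction: str = 'egress'  # 'ingress' could be added later, per the discussion

# Example: a 100 Mbps cap with a 10 Mbps burst on egress traffic
rule = BandwidthLimitRule(max_kbps=100000, max_burst_kbps=10000)
```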
14:09:33 vikram__: so every port on the network except that port gets 100
14:09:33 vikram__: i think that is per virtual port
14:09:48 the port with the 200 gets 200
14:10:26 In some cases we could think of blending rules...
14:10:28 if that makes sense
14:10:29 I think the idea of the per-network setting is some sort of 'use the network setting as the port default setting if not specified otherwise'
14:10:39 I considered it at the start, but I had negative comments in the spec
14:10:54 for the bw limiting case, it's easy to do, just take the min of all parameters
14:11:08 for other cases, for example dscp marking, it couldn't be blended
14:11:34 what do you think about blending policies?
14:11:34 ok
14:11:44 I'm ok with port setting overrides net setiting
14:11:52 setting
14:12:27 any feedback related to blending?
14:13:06 if anybody thinks blending is important, please think about use cases, and how to handle conflicts in other rule types
14:13:10 ok, let's move on
14:13:22 #topic Update on latest changes to models
14:13:32 I have modified the models to be
14:13:38 QoSPolicy and QoSRule
14:13:48 before they were QoSProfile and QoSPolicy
14:14:01 I believe this is more similar to the language of security groups
14:14:22 and thus more usable, because we're sharing the concepts
14:14:28 easy to understand as well :)
14:14:36 :-)
14:14:39 ajo: sounds fine
14:14:48 ajo: sounds good
14:14:51 sounds good to me
14:14:54 thanks :)
14:14:59 #topic Ideas for generic RPC mechanisms
14:15:11 ok, this is still an immature idea
14:15:22 based on the ML feedback on the topic,
14:15:39 we keep adding more RPC messages/calls to/from agents with every new thing
14:15:54 ajo: it may be too early to jump into implementation details if the api is still in discussion
14:15:58 what if we were able to have a common interface to send updates on different types of objects
14:16:03 garyk, you're probably right
14:16:23 I just wanted to drop the idea to let others think about it and see if it makes some sense
14:16:26 ajo: I think I see where you are going, I agree with garyk, it's big enough to warrant its own spec
14:16:27 ajo: i think that it would also be a driver implementation detail
14:16:43 sc68cal, that's right,
14:16:50 ajo: but I see the merit in it, for sure!
14:17:26 ajo: are we considering using a no-agent approach or is that currently off topic?
14:17:43 basically I was thinking of something similar to callbacks but via agents/rpc,
14:17:45 but
14:17:53 let's move on
14:17:55 ajo: would this be part of the ref implementation spec discussion?
14:18:22 irenab, I guess if we propose such a thing, as sc68cal was saying, it deserves its own spec
14:18:26 ajo: can you share the details about your idea
14:18:36 but if we're going to add new rpc methods and calls... well.. maybe it's a good moment
14:18:53 vikram__, this is just the high level overview, I didn't have more time to look at it
14:19:05 just to understand the way to proceed: we have the API/Data Model spec, with no real backend implementation
14:19:15 ok
14:19:25 just have to be careful about how many dependencies we insert before we start the actual work on QoS, otherwise we'll be in the Zeta release ;)
14:19:40 then there will be a ref implementation spec (OVS based, I guess)
14:19:47 :-)
14:19:52 sc68cal, I totally agree on that, I was just trying to address the concern of "we keep adding more rpc calls for every single feature" :)
14:20:10 yes, I guess it's time to move onto that topic
14:20:10 so you think there should be another spec? or just some work to be done related to one of these two specs?
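A rough sketch of the two points just discussed, the QoSPolicy/QoSRule naming and "port policy overrides network policy"; the class shapes and the effective_policy helper are illustrative assumptions, not the actual Neutron models. Blending (e.g. taking the min of bandwidth limits) is left as a comment because it does not generalize to rule types such as DSCP marking.

```python
# Illustrative assumptions only, not the real Neutron data model.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QoSRule:
    type: str                  # e.g. 'bandwidth_limit'
    max_kbps: int = 0
    max_burst_kbps: int = 0
    direction: str = 'egress'

@dataclass
class QoSPolicy:
    name: str
    rules: List[QoSRule] = field(default_factory=list)

def effective_policy(port_policy: Optional[QoSPolicy],
                     network_policy: Optional[QoSPolicy]) -> Optional[QoSPolicy]:
    """Per the current spec: a policy attached to the port wins outright;
    the network policy is only a default for ports without their own policy.
    A 'blending' variant could instead min() bandwidth parameters of both,
    but that was deferred, since it has no clear meaning for other rule types."""
    return port_policy if port_policy is not None else network_policy
```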
14:20:29 #topic inter-related specs which should happen after the API
14:20:36 ajo: if this is bundled in as part of a neutron port then we do not need additional calls. just the payload needs to change
14:20:52 garyk, not exactly,
14:20:56 why?
14:20:59 garyk, this is similar to the security groups
14:21:01 garyk: still need RPC for fetching policy stuff
14:21:12 garyk, for example, if a policy is updated, we need the changes applied to the ports
14:21:16 unless we're going to pack *more* data onto rabbitmq
14:21:20 I thought we agreed not to talk about implementation details yet :-)
14:21:26 :D
14:21:37 ok, now i understand. i was just thinking of the port creation :)
14:21:41 ok, let's talk about implementation in the future :)
14:21:43 irenab +1
14:22:03 a few dumb questions
14:22:04 ok, specs we need for sure...
14:22:16 ml2-ovs, ml2-sriov, ml2-lb implementations
14:22:17 will the api be in the neutron code base or an external project?
14:22:44 garyk, that's still up for debate,
14:22:54 irenab and sc68cal believe that's going to be faster,
14:23:07 if we go the service-plugin way, that's probably an option
14:23:08 as an external project?
14:23:22 garyk, what do you think about the topic?
14:23:28 similar to advanced services
14:23:36 +1
14:23:49 garyk, my opinion is that it's something which should be available to all plugin implementors...
14:23:51 Regardless - we're going to need a patch that adds an extension in neutron/extensions
14:23:53 i feel that it is part of the core
14:23:59 or a shim
14:24:00 if it's well tested in coordination with neutron core stuff, I'm ok
14:24:11 but being able to develop out of tree is certainly a lot faster
14:24:25 the glue is what concerns me. i am not sure about the service hook
14:24:32 garyk it's my feeling also that this is part of the core, as we're defining port & net properties
14:24:37 that is more something that uses the ports etc.
14:24:46 garyk, that's my feeling too
14:25:01 i also think that the sec group model is very messy and in retrospect we should have done that differently
14:25:04 garyk, but I'm not sure we have better mechanisms to decouple the code at this moment
14:25:09 agree
14:25:22 will there be a session at the summit so that maybe we can bash this through?
14:25:23 I think it's more a DB Mixin versus Service plugin discussion, not a totally independent service discussion
14:25:31 garyk, if we find a better way we can follow that path
14:25:42 ok
14:26:19 garyk, I'm still trying to understand the internals of neutron at this level so I'll be able to provide a better idea; right now... I'm not able :D
14:26:20 this is an area where a design session at the summit could help?
14:26:38 but I agree, it feels like it's not a service making use of ports, it's just the best way to decouple that we have now
14:26:52 sadasu, could be,
14:27:00 ajo: +1
14:27:06 ajo: I can teach you the ways of the force ;)
14:27:15 we have a slot for QoS, but probably such a topic would take a whole session :)
14:27:22 mestery ^
14:27:25 i am not familiar enough with the services to be able to have an opinion here. i just know that they create and delete ports.
14:27:30 ajo: ++
14:27:48 here we need to be part of the port creation (and update)
14:28:14 ajo: agree with your service dilemma
14:28:22 I guess the idea with the service is to register for the port creation/update callbacks, being able to notify where necessary..
14:28:38 sorry, have to leave. Will catch up later
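For the generic update-notification idea touched on above (a policy update propagating to the ports that already use it), a purely hypothetical sketch; none of these names exist in Neutron, and as noted in the discussion a real design would need its own spec.

```python
# Hypothetical only: a tiny fan-out registry for "resource of type T changed" events.
from collections import defaultdict
from typing import Any, Callable, Dict, List

Handler = Callable[[str, Any], None]

class ResourceUpdateNotifier:
    """Server side: fan out resource-change events to whatever transport
    (RPC topic, callback, ...) a backend registered for that resource type."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Handler]] = defaultdict(list)

    def subscribe(self, resource_type: str, handler: Handler) -> None:
        self._subscribers[resource_type].append(handler)

    def notify(self, resource_type: str, event: str, payload: Any) -> None:
        # e.g. notify('qos_policy', 'updated', policy_dict) when a policy that
        # is already applied to ports gets modified
        for handler in self._subscribers[resource_type]:
            handler(event, payload)

# An agent-side backend would subscribe once and reapply the policy to its ports:
# notifier.subscribe('qos_policy', lambda event, policy: reapply_on_ports(policy))
# (reapply_on_ports is a made-up placeholder for the backend's own logic)
```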
14:28:53 garyk, sadasu, please feel free to add ideas to the ML thread bout the QoS service plugin or not ;)
14:29:00 bout->about :)
14:29:07 ok, so
14:29:10 about specs
14:29:27 or devrefs ;)
14:29:36 ;)
14:29:44 we may need to define how we're going to do it at the low level
14:29:48 ajo: will do
14:29:52 -ovs is clear to me
14:30:06 I think moshele will take care of -sriov
14:30:08 ajo: will do, have a bit of catching up to do regarding what has happened on the ML
14:30:10 anybody for -lb?
14:30:23 what is -lb?
14:30:27 linux bridge :)
14:30:39 that is so 80's
14:30:45 the new black is ovn
14:30:55 ajo: yes I will take sriov
14:30:57 i can take it up
14:30:58 :) yeah, but in previous iterations people had an interest in QoS/LB support :)
14:30:59 then again retro is in
14:31:08 garyk +1
14:31:10 :D
14:32:04 I guess -OVN work will come later in time, when they have a working implementation
14:32:09 ajo: linux bridge i can take up
14:32:16 vikram__, that would be nice
14:32:39 vikram__, I was planning to use ovs queues with htb, which is what ovs supports
14:33:22 not sure what the plan was for LB, or what's the best plan
14:33:35 ok. I used linux bridge long back, need to refresh :D
14:33:36 you can leverage iptables for the rate limit
14:33:48 in linux bridge i believe
14:34:09 I was planning to take on the -ovs low levels, but help is accepted
14:34:28 talking of which... gsagie was proposing to try an agent-less approach to do the QoS
14:34:31 via the ovsdb interface
14:34:32 gsagie: +1
14:35:05 heh, no comments sounds bad... ;)
14:35:14 :-)
14:35:28 gsagie, I believe it's better if we have the same approach at the different levels instead of mixing :)
14:35:31 * sc68cal hopes he can assist ajo on the ovs low levels
14:35:42 agent-less? what's that..
14:35:56 also it would require good coordination with the current ovs-agent :)
14:36:18 ajo: not sure what you mean by different levels, i believe some backends will implement this agentless as well
14:36:19 gsagie, maybe it's where OVN is going to do better than the current ovs implementation :)
14:36:43 true gsagie, but I mean, for ml2-ovs we would be mixing
14:36:54 for ml2-ovn, probably that's the way to go
14:37:01 np
14:37:37 vikram__, agentless means you control the ovsdb + openflow rules from your neutron-server
14:37:52 instead of sending rpc commands to an agent
14:37:55 ok
14:38:02 i can probably provide help as well for the ovs and maybe also the linux bridge if needed
14:38:02 gsagie, am I correct?
14:38:27 gsagie, thanks a lot
14:38:42 i haven't looked at the QoS settings, but i believe it's only with OVSDB, but yeah that's the concept
14:39:02 gsagie, it could involve some rules if we want to control ingress too
14:39:22 for egress it's just attaching a queue to the port
14:39:52 and of course, it requires some rules to do the same with different types of traffic
14:40:02 ok, to summarize
14:40:33 #action vikram__ will take care of the ml2-linuxbridge implementation
14:40:52 #action moshele will take care of the ml2-sriov implementation
14:41:24 #action ajo will take care of the ml2-ovs implementation, gsagie and sc68cal offer help
14:41:40 also gsagie you offered help to vikram__ regarding linux bridge
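As a rough illustration of the "ovs queues with htb" approach mentioned above, a sketch that attaches a linux-htb QoS with one queue to an OVS port via the documented ovs-vsctl transaction syntax; the helper name, port name, and rate values are made up, and this is not the Neutron agent code.

```python
# Illustration only: apply an egress cap to an OVS port with a linux-htb QoS + queue.
import subprocess

def set_egress_limit(port, max_bps, burst_bits):
    """Create a linux-htb QoS record with a single queue and attach it to the port.
    burst is expressed in bits, per the OVS vswitchd documentation."""
    subprocess.check_call([
        'ovs-vsctl',
        '--', 'set', 'port', port, 'qos=@newqos',
        '--', '--id=@newqos', 'create', 'qos', 'type=linux-htb',
        'other-config:max-rate=%d' % max_bps, 'queues:0=@q0',
        '--', '--id=@q0', 'create', 'queue',
        'other-config:max-rate=%d' % max_bps,
        'other-config:burst=%d' % burst_bits,
    ])

# Example: cap a (hypothetical) port 'tap1234' at 100 Mbps with a 10 Mbit burst
# set_egress_limit('tap1234', 100000000, 10000000)
```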
14:41:41 What QoS policy types will we be implementing in the first go?
14:41:52 rate control
14:41:59 but, with the new more open RFE process
14:42:09 we could propose other types as we finish rate control
14:42:23 ok
14:42:24 I hope that goes forward ;) mestery++
14:43:06 ok, we had a conflicting proposal so far (but this is looking into the future of supported types)
14:43:22 I'd like to comment on it now at least, since we have time
14:43:29 #topic conflicting type proposals
14:43:48 vikram__, I believe it was you who proposed to use IPv6 flow labels to carry QoS levels, right?
14:43:55 yes
14:44:04 there's some RFC about using IPv6 flow labels for QoS, do you have the reference?
14:44:38 There are other RFCs proposing to use the flow labels for tenant isolation in virtual networks
14:44:39 i can send it after the meeting
14:44:55 ianwells was proposing that... let me look for it
14:46:00 https://www.ietf.org/rfc/rfc3697.txt
14:46:46 #link https://etherpad.openstack.org/p/IPv6Networks
14:46:48 yes, but he was using them for tenant isolation
14:46:52 #link https://www.ietf.org/rfc/rfc3697.txt
14:47:08 #link https://tools.ietf.org/html/rfc6437
14:47:22 My concern about using packet headers is contention over how they are used, and whether we are adding new meanings
14:47:22 correct sc68cal, this is why I think it's conflicting
14:47:39 yes
14:47:54 yes, I think we can decide at a later time,
14:48:01 based on the status of the different RFCs
14:48:25 I just wanted to raise the issue so you're able to discuss it with the right people
14:48:47 vikram__, if you'll be able to come to the Summit, I can put you in contact with Ian
14:49:02 #topic free discussion
14:49:02 Thanks. I am travelling
14:49:15 ok, anything important you believe we should talk about?
14:49:27 or ideas for the next agenda?
14:49:39 how about packet priority marking and ecn
14:49:52 vikram__, I believe we can discuss those in the future
14:49:56 when we have the basic building blocks
14:50:00 it's written in the current spec
14:50:03 as future ideas
14:52:05 A comment I had from irena was to standardize the parameters dict on the command line
14:52:14 as some other cmdline commands do
14:52:22 I need to look into that
14:52:46 ah, and I was thinking
14:52:55 whether it made sense to have a low level "qos_driver"
14:53:03 as we have for the "firewall_driver" in security groups
14:53:23 sc68cal, what do you think about such a thing?
14:53:47 for example, in the OVSHybrid case, we could use lb or ovs to do QoS
14:53:57 so.. having some sort of modular driver would help there
14:54:05 ajo: yeah, my old code had an abstract driver class that the dscp driver inherited from
14:54:08 if one works better than the other, people just switch
14:54:46 sc68cal, ok, that sounds like we thought the same way
14:55:05 ajo: https://github.com/netoisstools/neutron/tree/comcast_milestone_proposed/neutron/services/qos/drivers
14:55:41 sc68cal, exactly
14:56:02 #link https://github.com/netoisstools/neutron/tree/comcast_milestone_proposed/neutron/services/qos/drivers
14:56:15 this way, if this is finally developed as a service, we can keep the driver implementation in the same place
14:56:25 and extend the agent in a modular way
14:56:31 via the same interface
14:56:56 ok, anything else, or shall we end the meeting? :)
14:57:52 #endmeeting
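A small sketch of the modular "qos_driver" idea discussed near the end, mirroring the firewall_driver pattern from security groups: an abstract base class the agent could load by config option so that OVS- and Linux-bridge-based backends can be swapped. The class and method names are illustrative only, not an agreed interface.

```python
# Illustrative only: a possible agent-side QoS driver interface.
import abc

class QoSDriverBase(abc.ABC):
    """Interface a backend (ovs, linux bridge, sriov, ...) would implement."""

    @abc.abstractmethod
    def create_policy(self, port_id: str, policy: dict) -> None:
        """Apply a freshly attached policy to the given port."""

    @abc.abstractmethod
    def update_policy(self, port_id: str, policy: dict) -> None:
        """Reapply a policy whose rules have changed."""

    @abc.abstractmethod
    def delete_policy(self, port_id: str) -> None:
        """Remove any QoS configuration previously applied to the port."""
```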