14:03:42 #startmeeting neutron_qos
14:03:43 Meeting started Wed Feb 24 14:03:42 2016 UTC and is due to finish in 60 minutes. The chair is ajo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:44 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:03:47 The meeting name has been set to 'neutron_qos'
14:03:47 Hi everybody! ;)
14:03:48 o/
14:03:53 \o/
14:04:02 hi
14:04:06 hi!
14:04:06 #link http://eavesdrop.openstack.org/#Neutron_QoS_Meeting
14:04:12 o/
14:04:15 o/ :)
14:04:46 I wanted to start by raising the topic of our roadmap
14:05:14 at the last drivers meeting there were concerns about our roadmap, status, and the number of RFEs they were finding
14:05:28 #link http://eavesdrop.openstack.org/meetings/neutron_drivers/2016/neutron_drivers.2016-02-18-22.01.log.html#l-52
14:05:51 ajo: so qos features are in high demand
14:05:56 So I thought we could clarify that
14:05:59 #link http://lists.openstack.org/pipermail/openstack-dev/2016-February/087360.html
14:06:08 I sent this email to the mailing list
14:06:24 irenab: yeah, but it's not like you post an RFE and it magically happens
14:06:46 and, to be fair, armax was partly right, because I haven't been doing a good review of new RFEs; I was focused on the mitaka bits
14:06:47 we should consider available resources, current roadmap...
14:07:12 ihrachys: the intent should be approved so the one who proposes can move on
14:07:18 and I guess they felt overwhelmed by RFEs they didn't understand how exactly fit into the architecture we designed
14:07:24 ajo: I am always 100% right!
14:07:24 :)
14:07:30 ajo: jokes aside, I saw your email…but I haven’t had the chance to reply yet…I’ll do that today
14:07:30 armax++
14:07:31 lol
14:07:32 irenab: you can't effectively move forward without having reviewers on board
14:07:50 that's why we have approvers for blueprints
14:08:00 ihrachys: so intent and review commitment, right?
14:08:02 (we don't have them for RFEs and I believe that's a bug)
14:08:19 armax: I wanted to discuss the current status in the meeting, and then send a detailed report. I'm sorry I haven't been communicating with you properly and actively reviewing new RFEs; consider that changed from now on
14:08:19 armax: btw do we plan to have approvers for RFEs?
14:08:58 ihrachys: something to consider/experiment with next cycle. We’ll do a postmortem once mitaka is out of the way
14:09:11 So I think we take the list of all QoS features that could be implemented - BW limiting, DSCP, ECN, 802.1p/q, and minimum bandwidth guarantees - and the possible implementations for each - OVS, LB, SR-IOV - and we can provide a matrix of all the QoS items between us and full implementation. Some of them will be empty spots - DSCP on SR-IOV is an impossibility - but at least we can say "this is the complete roadmap", and show which 1-3 items we're targeting this cycle and next cycle
14:09:33 yes,
14:09:40 I have brought up this:
14:09:42 #link https://etherpad.openstack.org/p/qos-roadmap
14:09:47 to discuss during the meeting
14:09:48 the matrix idea seems like a good one
14:09:55 And I was thinking of the same
14:10:08 one important thing is that we don't need to file RFEs for specific ref-arch implementations
14:10:25 probably a bug is enough
14:10:41 +
14:10:57 only if the implementation is a huge change to the specific backend could it be a matter for a spec/rfe/devref
14:11:14 to have a better understanding of how it is going to be implemented
14:11:33 That sounds fair to me.
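To make the feature/backend matrix proposed above concrete, here is a minimal sketch; the statuses are illustrative only, inferred from this meeting's discussion (e.g. DSCP on SR-IOV being ruled out, the LB bandwidth-limit patch still in review), not an authoritative support table:

```python
# Illustrative sketch of the QoS roadmap matrix njohnston proposes above.
# Statuses are drawn only from this meeting; not an official support table.
QOS_ROADMAP_MATRIX = {
    # feature:           {backend: status}
    'bandwidth_limit':   {'ovs': 'done', 'linuxbridge': 'in review', 'sriov': 'done'},
    'dscp_marking':      {'ovs': 'in review', 'linuxbridge': 'todo', 'sriov': 'n/a'},  # "an impossibility"
    '802.1p_marking':    {'ovs': 'rfe', 'linuxbridge': 'rfe', 'sriov': 'rfe'},
    'ecn':               {'ovs': 'rfe', 'linuxbridge': 'rfe', 'sriov': 'rfe'},
    'minimum_bandwidth': {'ovs': 'rfe', 'linuxbridge': 'rfe', 'sriov': 'rfe'},
}
```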
14:11:47 So, in the tiny etherpad,
14:11:54 I have detailed our current status,
14:11:57 the documentation we have,
14:12:04 and what we're doing for mitaka
14:12:23 basically, we don't have a *lot* of things in mitaka because, to do things right, we need to cover a lot of related dependencies
14:12:38 irenab: my understanding is that posting an RFE without having anyone to implement and approve the code is a waste of drivers' time
14:12:49 yeah
14:12:49 * jschwarz thinks that aside from listing features, etc, you guys may want to assign them to people (so drivers will feel comfortable and will know who to ask when things go south)
14:12:51 I agree too
14:12:57 we can discuss new ideas in the meeting
14:13:18 jschwarz: and that's where approvers for RFEs should come in to help
14:13:23 but I'd say let's only file RFEs if we have people willing, and able, to implement them
14:13:47 ajo: ihrachys: sounds reasonable
14:14:11 We still won't have control over the RFEs people file, but I will monitor that
14:14:16 on a weekly basis
14:14:37 #action ajo sets a calendar reminder for himself before the drivers meeting to check any new QoS-related RFEs
14:15:24 Initially I thought that RFEs were for users to express requirements
14:15:51 yes, in fact I understand that's the thing
14:16:16 but if we can globally do some filtering ourselves here (we're developers proposing features), we can discuss it in advance
14:16:19 to have more filtered RFEs
14:16:31 +1
14:16:35 or higher quality RFEs, knowing that we have backers to write the code, etc.
14:16:35 irenab: well, kinda. are users posting the RFEs in question though? for the most part, it's people who are in the community, so a more informal means of tracking ideas could be less harsh for drivers. but maybe it's just me ranting and we should post more RFEs.
14:16:51 I guess the general workflow is
14:17:16 customer -> openstack-related-company -> developer -> RFE
14:17:20 and in some cases
14:17:29 the thing is, I see that some RFEs are actually closed in drivers meetings because there is no one to back up the implementation.
14:17:33 openstack-user/contributor-company -> developer -> RFE
14:17:44 hi
14:18:28 njohnston and vhoward seem to be in a good TZ for the drivers meeting
14:18:41 and they have helped so far by being there and answering :)
14:18:50 We're happy to represent :)
14:18:51 I usually try to join too, but that time I was off
14:18:57 so I guess we could pre-analyze here, and if they can represent us, that's great
14:19:23 I used to, and I will try from now on, but I will be quite random
14:19:28 njohnston++
14:19:30 thanks
14:19:43 yes, let's have US folks on board with representing the group there :)
14:20:04 I guess that from now on we could have a meeting section for qos-related RFEs
14:20:45 sounds good
14:20:55 if you find anything missing or you believe something is wrong, please update https://etherpad.openstack.org/p/qos-roadmap when you have time
14:21:55 I think VLAN marking and ingress QoS rate limiting are probably quite straightforward
14:22:20 in fact, when we implemented the low levels of vm-egress, we did vm-ingress by mistake
14:22:31 and had to switch the implementation
14:22:37 :D we could close two features in one go
14:22:45 VLAN marking works the same way as DSCP for openvswitch.
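A minimal sketch of why the two rule types look alike on the OVS side, as noted above: each is a single OpenFlow header-rewrite action on the port's flows. The helper name is hypothetical; `add_flow` is the existing `neutron.agent.common.ovs_lib.OVSBridge` call, and the exact action strings would depend on the final implementation:

```python
# Hypothetical helper: DSCP (L3) and 802.1p (L2) marking each reduce to one
# OpenFlow rewrite action on the port's traffic.
def install_marking_flow(bridge, ofport, dscp=None, vlan_pcp=None):
    actions = []
    if dscp is not None:
        # mod_nw_tos rewrites the IP ToS byte, so shift the DSCP value left
        # by 2 bits (the low 2 bits are the ECN field)
        actions.append('mod_nw_tos:%d' % (dscp << 2))
    if vlan_pcp is not None:
        # mod_vlan_pcp rewrites the 802.1p priority bits of the VLAN tag
        actions.append('mod_vlan_pcp:%d' % vlan_pcp)
    actions.append('normal')  # then forward as usual
    bridge.add_flow(in_port=ofport, actions=','.join(actions))
```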
14:22:54 gal-sagie's implementation is still there in gerrit
14:23:11 davidsha, exactly, it's almost the same
14:23:26 one tackles L3, and the other tackles L2
14:23:38 still two separate rule types
14:23:42 exactly
14:24:05 then we have the ECN RFE, which makes sense IMO, but there are a few things to clarify
14:24:36 basically, ECN seems to be a mechanism that can be used in combination with TCP/IP to throttle the other host end dynamically
14:24:43 if your ingress is getting congested
14:25:14 but I believe we need to clarify how congestion is detected, and how we model the rules
14:25:28 ajo: I have a suggestion
14:25:37 would that be something to use with traffic classification then?
14:25:39 irenab, shoot :D
14:25:46 "Conventionally, TCP/IP networks signal congestion by dropping packets. When ECN is successfully negotiated, an ECN-aware router may set a mark in the IP header instead of dropping a packet in order to signal impending congestion. The receiver of the packet echoes the congestion indication to the sender, which reduces its transmission rate as if it detected a dropped packet."
14:25:53 I think each RFE should present relevant use case(s)
14:26:06 davidsha, no, that's more related to your other RFE :)
14:26:19 so it will be clear how the requested functionality is used
14:26:31 irenab: +1
14:26:42 ajo: Ah ok.
14:26:50 I see use cases for ECN now that I understand it,
14:26:56 but yes, that's not well addressed
14:27:10 we should ask vikram and reedip for that
14:27:11 I think the neutron implementation details are much less important and can be resolved later
14:27:30 is there a case where you have ECN supported but you want to disable it?
14:27:30 yes
14:27:47 ihrachys, like filtering ECN flags?
14:28:07 I believe this can be the case, since it should be across the fabric
14:28:09 My question is, ECN is negotiated; it isn't something that is supposed to be administratively enabled or disabled. And the thing doing the negotiation won't be neutron, it will be the TCP stack implementation itself.
14:28:46 njohnston, as far as I understood, switches and mid-point network devices can modify the flags
14:28:48 in flight
14:29:03 ajo: yeah, but who's going to decide the flag to be set?
14:29:03 so they're able to throttle traffic going through them
14:29:12 but I'm not 100% sure, we may ask vikram and reedip
14:29:13 "When both endpoints support ECN they mark their packets with ECT(0) or ECT(1). If the packet traverses an active queue management (AQM) queue (e.g., a queue that uses random early detection (RED)) that is experiencing congestion and the corresponding router supports ECN, it may change the codepoint to CE instead of dropping the packet."
14:29:21 I'm not an ECN expert by any means; this is totally new to me
14:29:44 njohnston, ahh, exactly, I got it right then
14:29:55 ihrachys: that's one of the questions I had for them
14:30:16 ihrachys, it could be the agent, inspecting the port's sustained BW, the host load, the host br-* interfaces' bw... etc
14:30:28 it's clearly not well understood in the team. let's do some homework reading docs before we decide anything on its feasibility.
14:30:33 +1
14:30:38 +1
14:30:39 yeah,
14:30:48 it's time to read, and to ask the RFE proposers
14:30:52 I'm still in that process
14:31:03 I see possible value in it
14:31:22 as something softer/more effective than policing
14:31:29 but policing is fully automatic
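Since the team notes ECN is new territory, a minimal sketch of the RFC 3168 codepoint mechanics quoted above may help; `mark_congestion` is purely illustrative, not anything in neutron:

```python
# ECN codepoints per RFC 3168: the low two bits of the IP ToS byte.
# DSCP occupies the high six bits, which is why the two features interact.
ECN_NOT_ECT = 0b00  # endpoint did not negotiate ECN
ECN_ECT_1 = 0b01    # ECN-capable transport
ECN_ECT_0 = 0b10    # ECN-capable transport
ECN_CE = 0b11       # "congestion experienced"

def mark_congestion(tos):
    """What an ECN-aware AQM queue does instead of dropping an ECT packet."""
    if tos & 0b11 in (ECN_ECT_0, ECN_ECT_1):
        return (tos & ~0b11) | ECN_CE  # rewrite the codepoint to CE
    return None  # not ECN-capable: the queue falls back to dropping
```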
14:31:33 ajo: ihrachys: a general question regarding RFEs. Let's say there's something that cannot be implemented by the reference implementation; should it not be proposed?
14:32:06 irenab, it's my understanding that "no", but, well, we have things that are only cisco-implemented
14:32:12 what was the name of it...
14:32:13 hmm
14:32:26 irenab: I think otherwise, I believe it can be proposed.
14:32:51 though I really wonder what can't be implemented in ovs.
14:32:57 I believe it could be proposed if some SDN vendor implements it, but we'd have to discuss with drivers & the core team
14:33:09 got it, thanks
14:33:20 ihrachys, I'm starting to grasp the limits sometimes,
14:33:21 yes, that would require some exception process, but I believe it may have a place
14:33:50 yes, if the use case is well founded, as you said, and it can be modeled, we may try
14:34:07 Then
14:34:29 we have davidsha's RFE #link https://etherpad.openstack.org/p/qos-roadmap about neutron QoS priority queuing rules
14:34:44 davidsha, if I didn't get it wrong
14:34:46 ajo: wrong link
14:34:50 sorry
14:34:50 :/
14:35:04 #link https://bugs.launchpad.net/neutron/+bug/1527671
14:35:04 Launchpad bug 1527671 in neutron "[RFE]Neutron QoS Priority Queuing rule" [Wishlist,Triaged]
14:35:12 If I didn't get it wrong,
14:35:27 you propose to have filters for traffic, so different traffic can be limited in different ways
14:35:28 right?
14:35:36 correct
14:35:39 ok
14:35:52 does it rely on the traffic classifier?
14:35:55 I assume yes
14:36:05 that was foreseen in our initial brainstorms
14:36:16 and we thought we could model such a thing
14:36:20 I don't see the dep mentioned there
14:36:23 by attaching rules to traffic classifiers
14:36:30 it was originally going to use ovs flows and then I was looking into tc
14:36:33 ihrachys, I commented in #12
14:36:39 oh I see
14:36:46 I was looking at the original description
14:36:48 I believe
14:36:54 the use case is clear,
14:37:11 and the modeling needs some eyes on this: http://git.openstack.org/cgit/openstack/neutron-classifier
14:37:14 #link http://git.openstack.org/cgit/openstack/neutron-classifier
14:37:33 the RFE should probably be refactored into something like that
14:37:35 davidsha: Can we talk after this meeting about this? I would like to understand how "the least likelihood of being rejected due to a queue reaching its maximum capacity" is different from DSCP. That's kind of what DSCP is all about.
14:38:08 njohnston: kk, I'm free to talk.
14:38:12 thanks
14:38:20 njohnston, the idea is that you assign different bw limits to different kinds of traffic
14:38:28 so you have different likelihoods
14:38:30 but let's expand later :)
14:38:53 ajo: what is the state of neutron-classifier?
14:39:03 davidsha, does it seem reasonable to you to change that RFE into integrating QoS rules with neutron-classifier?
14:39:10 that is something we should investigate, definitely
14:39:21 I thought we'd end up with some common REST API to manage the classifiers
14:39:25 but I don't see that, just libs
14:39:26 irenab: I believe it's on hold
14:39:29 and DB models
14:39:42 ok, maybe they need help on that
14:39:48 irenab: probably starving for implementers
14:39:57 ajo: would it be ok if I looked into neutron-classifier a bit more first?
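To make "attaching rules to traffic classifiers" concrete, a purely hypothetical payload shape; none of these field names exist yet, since neutron-classifier currently ships only libraries and DB models, as noted above:

```python
# Hypothetical shape only, no such API exists: a QoS rule scoped by a traffic
# classifier, so different kinds of traffic on one port get different limits.
classifier = {'protocol': 'tcp',
              'destination_port_range_min': 80,
              'destination_port_range_max': 80}
rule = {'type': 'bandwidth_limit',
        'max_kbps': 5000,
        'classifier': classifier}  # the limit applies only to matching traffic
```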
14:40:11 davidsha, makes total sense
14:40:25 #action davidsha to look into neutron-classifier's state
14:40:30 I do not remember seeing anything on the mailing list or any dedicated sub-team
14:41:02 irenab: I believe it was just an experiment from Sean Collins that never delivered much
14:41:08 let's investigate and bring up the topic to see how it is.
14:41:19 I suspect Sean would appreciate help
14:41:27 ihrachys, when it was proposed, the call was to make it a separate library
14:41:54 yes
14:42:03 ok
14:42:23 and the last RFE(s) in place are for bandwidth guarantees
14:42:44 when we talk about BW guarantees, it's about minimum bandwidth on ports
14:42:54 we can have strict, or best-effort
14:43:11 strict requires coordination with the nova scheduler, so no interface is oversubscribed...
14:43:24 I'm trying to fight that battle, but... to be fair, I'm far from success
14:43:50 there's a spec from jaypipes which could satisfy what we need in that regard, but it doesn't look to me as dynamic as I think it could be
14:44:06 if we could use that mechanism they're designing, it would be awesome
14:44:11 (generic resource pools)
14:44:13 * ajo looks for the link
14:44:26 ajo: meaning nova will manage BW counting?
14:44:27 #link https://review.openstack.org/#/c/253187/
14:44:32 irenab, nope
14:44:38 not by itself
14:44:54 I mean yes, sorry
14:45:02 I'm referring to the counting being done by a 3rd party (neutron?)
14:45:17 but I'm unsure our dynamic way of modifying policies plays well with that
14:45:30 we may need some sort of process to sync
14:45:36 any of our policy changes to that API
14:45:44 so the nova database is always up to date
14:46:05 what bugs me is that to make that possible, we might need to create one resource pool (or several) per compute node
14:46:19 because those resources are consumed in the compute nodes themselves
14:46:54 I guess that could also help model things like TOR switch bandwidth, and things like that
14:47:03 but ok, I'm trying to explore that
14:47:10 ..
14:47:16 On the other hand, and then I'll finish,
14:47:22 there is best-effort
14:47:31 which, basically, is... do what we can within the hosts/hypervisors
14:47:33 to guarantee that
14:47:42 ovs and TC have mechanisms for that
14:48:01 I explored them, and I think davidsha did too
14:48:11 they seem to work
14:48:21 ajo: any summary you can share on your findings?
14:48:25 the OVS/OF ones require a total refactor of our openflow rules
14:48:48 because NORMAL rules don't work to queue traffic (we need to use queues)
14:48:49 and...
14:49:19 TC mixes technologies for filtering traffic (TC and OF...) (a bit like mixing linuxbridge and iptables with openflow)
14:49:22 so
14:49:26 there's no golden path
14:49:28 it can be done
14:49:51 maybe we could start with TC and then upgrade to something better in the future (OF only)
14:50:03 it worked pretty well in my testing
14:50:13 but ok
14:50:28 I can dive into the details at another meeting, probably not important now
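For reference, a minimal sketch of the TC primitive behind the best-effort guarantees ajo describes: an HTB class guarantees its `rate` and may borrow idle capacity up to `ceil`. The helper is illustrative only and assumes an HTB root qdisc (handle 1:) was already installed on the device; wiring this into the agent is exactly the open question above:

```python
import subprocess

# Illustrative only. Assumes `tc qdisc add dev <dev> root handle 1: htb`
# was already issued for the device.
def add_htb_guarantee(dev, classid, rate_kbps, ceil_kbps):
    subprocess.check_call(
        ['tc', 'class', 'add', 'dev', dev, 'parent', '1:',
         'classid', '1:%x' % classid, 'htb',
         'rate', '%dkbit' % rate_kbps,    # guaranteed minimum for this class
         'ceil', '%dkbit' % ceil_kbps])   # may borrow idle bandwidth up to this
```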
14:50:49 I will switch to checking the status of the ongoing patches if there's no objection
14:51:00 YES
14:51:05 no objection
14:51:07 #topic status
14:51:21 njohnston, how's DSCP? and the L2 API, any blockers?
14:51:46 I made a comment on the RPC patch so you can test the upgrade mechanism
14:51:47 The L2 agent patch has Ihar's +2 and just needs another: https://review.openstack.org/#/c/267591/
14:51:56 I'm not sure if I fully clarified it
14:52:04 #action ajo review L2 agent patch!!! :]
14:52:07 njohnston: there is a concern from yamamoto there. are we going to handle that?
14:52:11 The RPC rolling upgrades patch has some concerns: https://review.openstack.org/#/c/268040/
14:52:32 njohnston, yes, I will address it tonight I guess; I was focusing on the roadmap this morning :)
14:52:59 ajo: I just wanted to raise the discussion from the neutron channel today regarding the misleading name of the max_burst parameter in the bw_limit rule
14:53:03 ihrachys: I don't know how we can properly account for that
14:53:07 ihrachys: I don't think it's a problem; any project that used the agent_uuid_stamp was using it for flows.
14:53:22 +1 ^^
14:53:45 davidsha: I am good. just wanted to clarify.
14:54:02 So once those 2 patches get merged, the main DSCP patch looks good; it only has one nit from Vikram: https://review.openstack.org/#/c/251738
14:54:06 irenab, I agree it's misleading :/ can we talk about it after the meeting? :)
14:54:30 if it is agreed to be a bug, slawek mentioned he would like to fix it
14:54:46 ajo: sure
14:54:48 and then the python-neutronclient change for DSCP also looks to be in good shape: https://review.openstack.org/#/c/254280
14:55:21 ok, that's great :)
14:55:25 The documentation changes associated with DSCP already have two +2s, so I think they can go in as soon as the patch they depend on merges
14:55:39 https://review.openstack.org/#/c/273638
14:55:58 njohnston, great. side note: the QoS API docs got re-injected; it seems the coauthor removed them by error
14:56:13 https://review.openstack.org/#/c/284059/1
14:56:39 njohnston, we need to contribute it to the common API guide ^
14:56:43 d'oh
14:56:58 it's a hell, XML :)
14:57:05 ajo: There is an API guide change as well: https://review.openstack.org/#/c/275253 with one +2 already
14:57:09 I must admit that for QoS somebody from the doc team helped
14:57:24 ohhh
14:57:29 awesome njohnston !!!
14:57:33 good work
14:57:46 ajo: All of the gaggle of DSCP changes are listed explicitly in the main patch's commit message: https://review.openstack.org/#/c/251738
14:57:56 and they all depend on the main patch
14:58:14 yikes
14:58:15 nice work
14:58:21 2 minutes to the end of the hour :/
14:58:26 any other important updates?
14:58:38 I saw the LB support was making good progress too
14:58:42 we have fullstack tests now :)
14:58:46 I believe it's ready to merge
14:58:50 ok
14:58:54 so another action for me
14:59:01 #action ajo review Linux bridge related patches for QoS
14:59:09 I wonder whether everyone is fine with two redhat cores merging stuff
14:59:21 * ajo tries to clone himself: raise CloneError()
14:59:30 ihrachys, that's a good question
14:59:51 ajo: maybe it's fine to review and then ask someone else to rubber-stamp it
14:59:51 given that LB is not our main thing at redhat
14:59:58 we're doing it for the community mostly :)
15:00:03 link for the LB change: https://review.openstack.org/236210
15:00:12 ok, we need to wrap up
15:00:15 yeah, maybe ask for a third +2 and +W
15:00:16 yes
15:00:31 ok, wrapping up. thanks everybody
15:00:33 * njohnston has no religion on +2s from y'all
15:00:49 let's keep discussing on #openstack-neutron (whoever can)
15:00:53 #endmeeting