18:01:39 <sc68cal> #startmeeting neutron_qos
18:01:40 <openstack> Meeting started Tue May 27 18:01:39 2014 UTC and is due to finish in 60 minutes.  The chair is sc68cal. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:44 <openstack> The meeting name has been set to 'neutron_qos'
18:01:49 <sc68cal> #link https://wiki.openstack.org/wiki/Meetings#Neutron_Quality_of_Service_API_Sub_Team_Meeting
18:02:19 <Kanzhe> hi Sean
18:02:29 <sc68cal> hey how's it going?
18:03:17 <sc68cal> going to wait to 5 after to see who else joins for the meeting
18:03:18 <kevinbenton> hi
18:03:29 <sc68cal> kevinbenton: hey!
18:03:57 <smonov> Hello
18:04:28 <sc68cal> smonov: hello
18:05:28 <kevinbenton> i’m on free airport wifi so my connection might be unstable
18:05:39 <sc68cal> kevinbenton: ok - no worries :)
18:05:47 <sc68cal> So it's 5 after, let's go ahead and start introductions - I'm Sean Collins and I work at Comcast
18:05:57 <sc68cal> #topic introductions
18:06:39 <smonov> Hi guys. I'm Simeon Monov and work at IBM.
18:07:17 <kevinbenton> I’m Kevin Benton and I work at Big Switch Networks
18:08:25 <Kanzhe> Kanzhe at Big Switch Networks
18:08:47 <pcarver> Paul Carver @ AT&T
18:09:30 <sc68cal> anyone else?
18:10:18 <sc68cal> OK - I'll just lay out what I've got so far for an agenda
18:10:29 <sc68cal> #topic agenda
18:11:00 <sc68cal> Really the two things to talk about so far are a recap of the meeting we had in the neutron pod at the summit, and discussion of the current spec for the API in neutron-specs
18:11:30 <sc68cal> Then turn it over to open discussion, to give people time to digest the spec
18:12:00 <sc68cal> any questions?
18:12:02 <kevinbenton> sc68cal: do you have a link handy for the spec?
18:12:17 <sc68cal> kevinbenton: yup - https://review.openstack.org/#/c/88599/
18:12:36 <sc68cal> #topic ATL summit recap
18:12:57 <sc68cal> So for those that didn't attend, we had a well attended meeting at the networking pod at the ATL summit
18:13:41 <sc68cal> Lots of really good discussions about where the API is, and what people are doing around QoS currently, and how we can bind everything together with a vendor neutral API
18:15:00 <sc68cal> So currently I am working on improving the spec that was submitted to Neutron-specs, since the QoS extension API was mostly designed in Launchpad, so I am working to pull things into a single document
18:15:34 <sc68cal> #topic spec
18:15:42 <sc68cal> #link https://review.openstack.org/#/c/88599/
18:15:45 <sc68cal> #undo
18:15:46 <openstack> Removing item from minutes: <ircmeeting.items.Link object at 0x235d5d0>
18:15:52 <sc68cal> #link https://review.openstack.org/#/c/88599/ QoS API extension specification
18:16:00 <kevinbenton> sc68cal: so after a quick look at it. where is the association of a policy to a port/network going to be stored?
18:16:29 <sc68cal> kevinbenton: the API extension adds a new attribute to Ports and Networks
18:16:35 <sc68cal> qos_id
18:16:43 <sc68cal> to link back to a QoS object
18:17:30 <sc68cal> The REST API impact section shows the code, but it probably needs to be made more explicit
18:17:46 <pcarver> The biggest issue in relation to the prototype we've been working on is that we're trying to make bandwidth guarantees between pairs of points. That doesn't fit exactly onto having the QoS be tied to a single port or network.
18:18:10 <sc68cal> it's a bit too low-level, tied directly to the WSGI interface of Neutron
18:18:24 <sc68cal> (the EXTENDED_ATTRIBUTES_2_0 piece)
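[Editor's note: a minimal sketch of what the extension attribute map under discussion might look like, following Neutron's EXTENDED_ATTRIBUTES_2_0 convention. The attribute name `qos_id` comes from the conversation above; the exact dict keys and validators shown here are illustrative assumptions, not the merged spec.]

```python
# Hypothetical sketch of the QoS extension's attribute map. It adds a
# nullable qos_id UUID attribute to both ports and networks, linking
# each resource back to a QoS policy object. Key names mirror the
# common Neutron extension convention; details are assumptions.
EXTENDED_ATTRIBUTES_2_0 = {
    'ports': {
        'qos_id': {'allow_post': True,       # settable on create
                   'allow_put': True,        # settable on update
                   'default': None,          # no policy by default
                   'is_visible': True,
                   'validate': {'type:uuid_or_none': None}},
    },
    'networks': {
        'qos_id': {'allow_post': True,
                   'allow_put': True,
                   'default': None,
                   'is_visible': True,
                   'validate': {'type:uuid_or_none': None}},
    },
}
```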
18:18:40 <sc68cal> pcarver: Are the pairs of ports on the same neutron net
18:19:19 <sc68cal> or between different networks in Neutron
18:19:28 <pcarver> Typically I think they will be, although that includes traffic out to WAN via Q-router
18:19:42 <pcarver> But it wouldn't be 100% of traffic on that neutron network
18:20:09 <kevinbenton> sc68cal: i see. so it will just be QOS: UUID in a network object
18:20:18 <sc68cal> kevinbenton: correct
18:20:31 <kevinbenton> sc68cal: can we have a notion of a default QOS policy applied to networks and ports without one explicitly set?
18:21:06 <kevinbenton> sc68cal: i’m thinking of a completely admin-driven workflow in this case where tenants aren’t allowed to set their own QoS policies
18:21:09 <Kanzhe> sc68cal: Would it make sense to have a separate mapping table, where QOS can be mapped to port, network, or other objects if other use cases pop up?
18:22:03 <sc68cal> pcarver: I see - the difficulty with that is how do you define a QoS where the destination is on the WAN
18:22:12 <sc68cal> and out of the purview of Neutron
18:22:46 <sc68cal> You'd either create a network that goes out to the WAN, and associate a QoS with that entire network
18:22:48 <sc68cal> or go per-port
18:23:11 <pcarver> sc68cal: yes, the WAN is outside Neutron. We're assuming a DSCP honoring WAN that the OpenStack environment can hand off to
18:23:15 <sc68cal> you'd just have to make a net that only egress traffic goes on, so you don't have the qos apply to traffic that is not egressing
18:23:30 <sc68cal> kevinbenton: Kanzhe: will get to your questions :)
18:23:43 <sc68cal> pcarver: excellent - that is very close to our usecase as well
18:24:00 <sc68cal> kevinbenton: we have a similar thought
18:24:13 <sc68cal> We create networks that are owned by the admin tenant, and shared = true
18:24:20 <pcarver> we're focussing on guaranteeing bandwidth across the datacenter/LAN and getting the marked traffic out to the WAN
18:24:23 <sc68cal> then we set a QoS policy on that, and tenants attach to it
18:24:41 <Kanzhe> pcarver: sc68cal , If the QOS mapping is in a separate table. One can construct a port-pair, then map qos to the port-pair.
18:25:40 <sc68cal> pcarver: OK - so looks like we're good at least on that if you do a network dedicated to just getting out to the WAN
18:25:49 <sc68cal> pcarver: we just need to figure out how we want to mark + ratelimit
18:26:03 <pcarver> another part of what we're doing is making sure that the underlying physical network has sufficient bandwidth on the required paths across physical links
18:26:05 <sc68cal> Kanzhe: Correct, you could create a QOS policy and just apply the two ports
18:26:16 <sc68cal> instead of the network
18:26:38 <sc68cal> but it would apply to all traffic leaving those ports, unless your driver supports doing by destination address
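[Editor's note: a rough sketch of Kanzhe's separate-mapping-table idea, where QoS policies associate with arbitrary objects (port, network, or a future port-pair) instead of living as a `qos_id` column on each resource. All names and the in-memory representation here are illustrative assumptions, not anything from the spec.]

```python
# Hypothetical association-table sketch: each row maps a QoS policy id
# to an (object_type, object_id) pair. New object types, like the
# port-pair pcarver's use case needs, reuse the same table without
# schema changes to ports or networks.
qos_associations = []  # rows of (qos_id, object_type, object_id)

def associate(qos_id, object_type, object_id):
    """Record that a QoS policy applies to the given object."""
    qos_associations.append((qos_id, object_type, object_id))

def policies_for(object_type, object_id):
    """Return all QoS policy ids bound to the given object."""
    return [q for (q, t, o) in qos_associations
            if t == object_type and o == object_id]

# A single-port binding and a port-pair binding coexist in one table:
associate('qos-1', 'port', 'port-a')
associate('qos-2', 'port-pair', ('port-a', 'port-b'))
```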
18:26:50 <pcarver> We're interested in figuring out more about the discussion that has been going on the mailing list about physical topology
18:27:04 <sc68cal> pcarver: There was also a post that someone from the climate team posted
18:27:24 <sc68cal> since climate is supposed to handle capacity and resource allocation - the context was for reserving IP addresses
18:27:31 <pcarver> just marking traffic isn't sufficient if the underlying physical network doesn't have sufficient bandwidth to meet the guarantees
18:27:35 <sc68cal> but it might also apply to bandwidth capacity
18:27:45 <pcarver> that's why we've viewed it as an "admission control" problem
18:28:26 <pcarver> a key capability is to be able to deny reservations if they would exceed the physical capacity
18:28:52 <sc68cal> pcarver: To me, that sounds like right up the alley of Climate
18:29:04 <pcarver> because if you can't deny reservations then eventually you'll reach a point where all the traffic is highly marked and still gets poor throughput
18:29:14 <sc68cal> we may need to get Neutron to expose bandwidth capacity so that Climate can make those decisions
18:29:46 <pcarver> Climate is yet another project that I haven't had enough time to do all the reading on. Definitely on the "to read" list
18:30:00 <sc68cal> Still spitballing, but also Ceilometer
18:30:10 <sc68cal> that would provide the real time counters as to utilization
18:30:25 <sc68cal> Probably does packet/byte counters already, or should
18:30:54 <sc68cal> so between Neutron, Climate, and Ceilometer you could get a good idea of how much BW is in use, vs. how much total
18:31:12 <DinaBelova> pcarver - not Climate, Blazar is the new name)
18:31:26 <sc68cal> Anyway, that's pretty deep in the weeds :)
18:31:28 <pcarver> sc68cal: I think you're talking about after the fact.
18:31:58 <pcarver> sc68cal: That's not a bad idea but doesn't make guarantees.
18:32:08 <pcarver> We're thinking of VoIP use cases
18:32:19 <sc68cal> Comcast has similar needs ;)
18:32:31 <pcarver> Though video is certainly an area where you'd also want guarantees
18:33:07 <sc68cal> honestly guarantees of BW is a huge space - it probably warrants *at least* its own spec
18:33:46 <sc68cal> but before we get too carried away, any other q's about the qos api ext as it exists currently?
18:33:58 <pcarver> Does the QoS subteam need a sub-subteam?
18:34:11 <pcarver> :-)
18:34:16 <sc68cal> :-)
18:34:43 <sc68cal> if someone puts a spec together, let's see where it goes
18:35:02 <kevinbenton> for bandwidth guarantees?
18:35:06 <sc68cal> yeah
18:35:10 <sc68cal> and capacity
18:35:24 <sc68cal> off the top of my head, it also probably has some overlap with group based policies
18:35:34 <kevinbenton> yeah, i agree that this patch gives us a good starting point and at least a high-level object to start expressing these needs in
18:36:04 <sc68cal> maybe if we get some good pieces into this API, we could piggy back on GBPO to make those guarantees
18:36:33 <sc68cal> where you say - guarantee X bandwidth via GBPO and GBPO drives that via the qos api
18:36:51 <sc68cal> plus all the other pieces I mentioned ;)
18:37:20 <sc68cal> oh, the review finally rendered
18:37:27 <sc68cal> #link http://docs-draft.openstack.org/99/88599/3/check/gate-neutron-specs-docs/f246385/doc/build/html/specs/juno/qos-api-extension.html QoS API extension (rendered)
18:39:12 <kevinbenton> i have to leave early, plane is boarding now. sc68cal: thanks for putting this on
18:39:40 <sc68cal> kevinbenton: have a safe flight - thanks for joining!
18:40:19 <Kanzhe> kevinbenton: See u later. :-)
18:41:26 <sc68cal> If there aren't any other questions - I'll give everyone back 15 minutes to let people digest the spec, review + add comments, and such
18:42:20 <pcarver> sc68cal: I'm all in favor of moving forward. I'll continue to work with my peers to formulate more input but not to slow anything down.
18:42:53 <pcarver> Our view of QoS may be in addition to or complementary to the current spec.
18:43:05 <Kanzhe> sc68cal: This is a good starting point for QOS.
18:43:09 <sc68cal> pcarver: perfect - please do continue to discuss use cases
18:43:23 <sc68cal> Kanzhe: thank you :)
18:44:06 <sc68cal> OK - until next week, thank you everyone for attending!
18:44:22 <Kanzhe> Thanks, bye.
18:44:36 <smonov> thanks
18:44:56 <sc68cal> I am also on #openstack-neutron during USA EST
18:45:03 <sc68cal> as well as the ML
18:45:10 <sc68cal> take care everyone!
18:45:14 <sc68cal> #endmeeting