14:01:37 <ajo> #startmeeting neutron_qos
14:01:38 <openstack> Meeting started Wed Jun 10 14:01:37 2015 UTC and is due to finish in 60 minutes.  The chair is ajo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:39 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:41 <openstack> The meeting name has been set to 'neutron_qos'
14:02:02 <ajo> #topic Where are we?
14:02:10 <ajo> #link https://etherpad.openstack.org/p/neutron-qos-jun-10-2015
14:02:16 <mohankumar> Hi Ajo... Mohan here from vikram's team
14:02:16 <ajo> an interactive agenda ^
14:02:21 <ajo> hi mohankumar , welcome too
14:02:51 <ajo> So, past monday we got the feature branch setup
14:02:56 <ajo> it's feature/qos
14:03:05 <ajo> https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:feature/qos,n,z
14:03:08 <ajo> #link https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:feature/qos,n,z
14:03:17 <Ramanjaneya> We are doing the neutron client changes... and have posted the qos policy CLI commands
14:03:18 <Ramanjaneya> https://review.openstack.org/#/c/189655/
14:03:41 <Ramanjaneya> and we are working on the other neutron client commands too.
14:03:42 <ajo> ah, thanks Ramanjaneya, please note it on the etherpad; I will make a comment when we get there :)
14:03:48 <ajo> that's very nice! ;)
14:04:19 <ajo> any question about the feature branch thing?
14:04:23 <Ramanjaneya> ok..sure
14:04:28 <sc68cal> Can we organize all our patches under one topic?
14:04:29 <ajo> there are some notes on the etherpad about what the plan is
14:04:50 <ajo> sc68cal, do you mean, a general topic to aggregate client + api + everything?
14:04:58 <irenab> +1
14:05:01 <sc68cal> anteaya showed me something neat the other day, you can use git review -t <topic name> so you can keep everything organized
14:05:02 <ihrachyshka> sc68cal, why not filtering using branch now that we have it?
14:05:40 <sc68cal> for example: https://review.openstack.org/#/q/topic:neutron-linuxbridge,n,z
14:05:45 <irenab> lets have both
14:06:02 <ihrachyshka> ah right, it's not just neutron
14:06:04 <irenab> we may switch branch to master at some point
14:06:16 <ajo> sounds good to me, we can try it
14:06:18 <ihrachyshka> + for both then
14:06:37 <ajo> if it gets messy at some point, we can think of changing strategy, but I guess that won't happen :)
14:06:42 <ajo> +1
14:06:43 <ihrachyshka> topic:neutron-qos?
14:06:51 <sc68cal> ^ works for me
14:06:54 <ajo> ihrachyshka +1 wfm too
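A quick sketch of the git-review invocation sc68cal describes, with the topic name just agreed (the trailing branch argument is standard git-review usage for targeting the feature branch):

    git review -t neutron-qos feature/qos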
14:07:40 <ajo> ok Ramanjaneya, change the topic of your reviews when you can; I'll change mine. moshele, you too please :)
14:07:54 <moshele> ajo: sure
14:08:09 <ajo> ok, now that we are on a similar topic, I'm going to jump over to it first
14:08:21 <ajo> #topic CI cross/projects/feature branch
14:08:34 <Ramanjaneya> ok
14:08:39 <ajo> I'm unsure how we're going to put together the changes to neutron-client + the CI tests from the feature branch
14:08:56 <ajo> shall we create separate CI jobs?
14:09:09 <ajo> we need to enable: service_plugins=neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.qos.qos_plugin.QoSPlugin
14:09:11 <ajo> from devstack
14:09:28 <ajo> the last bit: neutron.services.qos.qos_plugin.QoSPlugin
14:09:37 <ajo> otherwise neither our extension nor our plugin is loaded, and we can't test
14:09:42 <ajo> and also, we have the same issue with neutron client
14:10:29 <ajo> we need to investigate whether our devstack initialization can be made (somehow) conditional on the feature branch
14:10:32 <ajo> to set that
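A minimal sketch of what the devstack side could look like, reusing the exact service_plugins value quoted above via devstack's post-config mechanism (whether and how to make this conditional on the feature branch is the open question here):

    [[post-config|$NEUTRON_CONF]]
    [DEFAULT]
    service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.qos.qos_plugin.QoSPlugin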
14:11:02 <sc68cal> we can create experimental jobs
14:11:05 <ajo> any comments or ideas about this? or shall I move on?
14:11:20 <sc68cal> we can take a similar approach to what we're doing now with the linux bridge testing initiative
14:11:28 <ajo> sc68cal, ahaa
14:11:38 <ajo> sc68cal, experimental are run on every patch?
14:11:49 <ajo> then we could just skip it if branch is not "feature/qos"
14:11:57 <sc68cal> ajo: no, you check with a special comment
14:11:58 <ajo> and set the right neutron client when feature/qos is on
14:12:08 <ajo> ahh sc68cal , ack
14:12:18 <ajo> that could work for me
14:12:37 <ajo> sc68cal, any link/review about how to setup an experimental CI job?
14:12:41 <ajo> any volunteer? :)
14:12:59 <sc68cal> I can take it on - here's how I did it for LB - https://review.openstack.org/184830
14:13:04 <sc68cal> so it'll be similar
14:13:12 <ajo> #link https://review.openstack.org/184830
14:13:37 <ajo> sc68cal, thank you very much
14:13:52 <ajo> #action sc68cal to setup an experimental job for testing QoS plugin + ext
14:14:19 <ajo> #topic poc mixin-less core resource extension in service plugin
14:14:32 <ajo> more details on the etherpad
14:14:37 <ajo> but basically
14:14:42 <ajo> #link  https://review.openstack.org/#/c/189628/
14:15:23 <ajo> I introduced the AFTER_READ callback notification in neutron/db/db_base_plugin_v2.py
14:15:38 <dhruv> Ajo: can you post the etherpad link
14:15:42 <ajo> and we register for it on the service plugin, being able to extend core resources (PORT in this case)
14:15:51 <ajo> sure dhruv : https://etherpad.openstack.org/p/neutron-qos-jun-10-2015
14:16:15 <dhruv> ajo: thanks
14:16:44 <ajo> I need to talk to armax to get some review on the way I did it
14:16:49 <ajo> I hope it looks reasonable
14:17:37 <ajo> as a strategy, for example, you don't have control over how callbacks are called, so future dependent extensions using the same mechanism may need some way to handle that
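As a rough illustration of the mechanism described above (AFTER_READ is the event the patch proposes; the handler name and the kwargs it receives are assumptions, not the actual code under review):

    from neutron.callbacks import events
    from neutron.callbacks import registry
    from neutron.callbacks import resources

    def _extend_port_dict_qos(resource, event, trigger, **kwargs):
        # hypothetical handler: decorate the port dict the core plugin is
        # about to return with the QoS field the service plugin owns
        port = kwargs.get('port')
        if port is not None:
            port.setdefault('qos_policy_id', None)

    # registered from the QoS service plugin, so the core plugin needs no mixin
    registry.subscribe(_extend_port_dict_qos, resources.PORT, events.AFTER_READ)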
14:18:09 <ajo> moshele, do you want to jump on what you've been doing?
14:18:25 <irenab> I wonder if the patch should be split into a few patches, to make it easier to review?
14:18:34 <irenab> ^ajo
14:18:46 <moshele> ajo: I started on the agent extension driver
14:19:01 <irenab> I think the callbacks modification is a self-standing change
14:19:11 <ajo> irenab, probably it could be split, yes... extension in one, service plugin in other,
14:19:21 <moshele> I talked to irenab and I need to make some changes to the current patch
14:19:26 <ajo> irenab, the callback notify in other
14:19:34 <irenab> I think not too granular, but generic versus qos-specific
14:19:49 <ajo> irenab, I can split, yes
14:20:15 <ajo> moshele, thanks :)
14:20:42 <moshele> basically I plan to add an L2Agent interface class
14:20:48 <ajo> irenab, I can split now that the approach seems to work :)
14:20:58 <irenab> ajo: great
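A loose sketch of what such an interface class could look like (the class and method names are assumptions for illustration, not moshele's actual patch):

    import abc

    import six

    @six.add_metaclass(abc.ABCMeta)
    class L2AgentExtension(object):
        """Hypothetical base class for modular L2 agent extensions (e.g. QoS)."""

        @abc.abstractmethod
        def initialize(self):
            """Set up whatever backend/driver state the extension needs."""

        @abc.abstractmethod
        def handle_port(self, context, port_details):
            """Apply the extension's configuration when a port is wired."""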
14:22:00 <irenab> following our discussion with moshele, there are some questions regarding RPC between the server and the agent
14:22:09 <ajo> yes
14:22:27 <ajo> I have some debt in that regard, I need to put up a devref or devref+POC on a generic RPC mechanism
14:22:29 <irenab> ajo: do you have some details regarding generic RPC approach you mentioned last meeting?
14:22:53 <ajo> sadly not :( I invested all my time in the other patch
14:23:29 <ajo> moshele, I can have something by tomorrow for review
14:23:40 <ajo> to see if it does make sense
14:23:44 <moshele> that will be great
14:24:08 <ajo> #action ajo write the generic RPC mechanism doc for review
14:24:08 <irenab> as for the workflow between agent and server, I think we should discuss how it's bound to the current L2 agent RPC calls, if at all
14:24:43 <ajo> irenab, those may be independent calls...
14:25:09 <ajo> irenab, do you mean, current L2, or current on moshele's patch? (I didn't have time to review)
14:25:28 <irenab> ajo: current, we may still reuse the update_port and get_device_details
14:25:58 <ajo> irenab, I guess you're right
14:26:12 <irenab> to add qos_policy_id, and qos_agent will have to query for details
14:26:19 <ajo> I'd try to avoid cluttering get_device_info with more info
14:26:57 <irenab> we should try to keep the number of RPC calls to a minimum
14:27:05 <ajo> and probably include an alternate call to retrieve specific info + subscribe
14:27:38 <ajo> irenab, yes, I agree on that, I guess it's my fault for not presenting my plan
14:27:55 <ajo> my plan includes, later on in time, migrating security groups (the user of get_device_info) to the new mechanism
14:27:55 <irenab> ajo: right, but we probably need to go through a generic loop to apply changes
14:28:12 <irenab> ajo: completely with you on this
14:28:47 <ajo> the idea is that subscription via fanouts will distribute the info to the right extensions as the information changes...
14:28:53 <irenab> we need to define this part to do the agent-side refactoring
14:28:57 <ajo> extensions : modular L2 agent extensions
14:29:15 <ajo> irenab yes
14:29:37 <ajo> ok, I can push on that, while we get progress in the DB models, the neutron client, etc..
14:30:02 <irenab> there are two different types of updates, one on the port level (updating the policy mapping) and the other an update of the policy itself
14:30:13 <ajo> irenab, correct
14:30:30 <irenab> I guess the first one can be treated via update_port workflow
14:30:39 <ajo> correct
14:30:45 <ajo> update_port shall provide the qos_profile_id
14:30:51 <ajo> and then, the agent can subscribe to such profile_id
14:30:51 <irenab> the second is proposed RPC pub/sub
14:30:58 <ajo> correct
14:31:02 <irenab> ajo:+1
14:31:26 <ajo> then unsub when no port is using such qos_profile_id anymore on the host
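Roughly, the per-host bookkeeping being described could be sketched like this (all names are illustrative; the point is refcount-style subscribe/unsubscribe per qos_profile_id):

    import collections

    class PolicySubscriptionTracker(object):
        """Hypothetical helper tracking which local ports use which QoS policy."""

        def __init__(self, subscribe_cb, unsubscribe_cb):
            # subscribe_cb/unsubscribe_cb would e.g. join/leave a fanout topic
            self._ports_by_policy = collections.defaultdict(set)
            self._subscribe = subscribe_cb
            self._unsubscribe = unsubscribe_cb

        def port_bound(self, port_id, qos_profile_id):
            if not self._ports_by_policy[qos_profile_id]:
                self._subscribe(qos_profile_id)
            self._ports_by_policy[qos_profile_id].add(port_id)

        def port_gone(self, port_id, qos_profile_id):
            ports = self._ports_by_policy[qos_profile_id]
            ports.discard(port_id)
            if not ports:
                self._unsubscribe(qos_profile_id)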
14:32:09 <irenab> get_device_details is now used for first-time configuration; it may be useful to propagate the qos_policy there as well
14:32:25 <irenab> then agent will subscribe for updates
14:32:43 <ajo> we need to explore that, yes
14:32:57 <ajo> IMO it's going to be more modular if we can do it via the same pub/sub mechanism,
14:33:04 <ajo> send the info on subscription
14:33:20 <ajo> then we don't depend on two different RPC calls
14:33:37 <ajo> when there are changes to make, or new pub/sub consumer extensions
14:34:00 <irenab> this may work, but we need to synchronize the configuration on the agent side
14:34:22 <irenab> both l2 connectivity and qos policy
14:34:33 <irenab> to report device as UP
14:34:42 <ajo> irenab, correct, as it happens now in the get_device_info, right? :)
14:34:48 <irenab> yes
14:34:49 <moshele> yes
14:34:57 <ajo> we'd just be doing one call per different resource type
14:35:01 <ajo> may be that can be coalesced to one
14:35:13 <ajo> (deferred subscription or something...)
14:35:47 <ajo> all extensions come in at port creation... and subscribe to what they need... once they finish, we do the whole subscription/info retrieval... and we send the info back to all the extensions
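One way to read the "coalesce it into one call" idea as a sketch (purely illustrative, no real RPC here): collect what each extension asks for while the port is being set up, do a single retrieval, then hand the results back to the extensions.

    class DeferredSubscriptions(object):
        """Hypothetical: batch extension subscriptions into one retrieval."""

        def __init__(self, fetch_resources):
            # fetch_resources({(type, id), ...}) -> {(type, id): data},
            # e.g. implemented as a single RPC round trip
            self._fetch = fetch_resources
            self._requests = []

        def subscribe(self, resource_type, resource_id, callback):
            self._requests.append((resource_type, resource_id, callback))

        def flush(self):
            wanted = {(rtype, rid) for rtype, rid, _ in self._requests}
            results = self._fetch(wanted)
            for rtype, rid, callback in self._requests:
                callback(results.get((rtype, rid)))
            self._requests = []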
14:36:03 <irenab> I guess we agree on the requirements, so let's just put down the RPC mechanism details and the agent workflow
14:36:37 <ajo> irenab, moshele, tomorrow I'll put my time into the RPC devref, and we can probably discuss it on the review
14:37:06 <irenab> ajo: great
14:37:13 <ajo> ok, shall we move on to the list of things we need progress on?
14:38:37 <ajo> #topic TO-DO
14:38:45 <ajo> I'm afraid I lost almost everybody :D
14:38:47 <ajo> lol :)
14:39:37 <ajo> ok, we need to advance on the ovs/sr-iov low level details, maybe just as devref + code?
14:39:42 <ajo> moshele ^ how does that sound?
14:39:57 <irenab> ajo: I am a bit overloaded this week, but I hope to get some work started next week
14:40:05 <ajo> irenab thanks :)
14:40:15 <moshele> I will try
14:40:38 <ajo> we also have the neutron client: Ramanjaneya and mohankumar , thanks :)
14:40:49 <ihrachyshka> I started looking into supported qos rule type detection, not much done yet, but hopefully I will have something on Mon. not a priority, I know ;)
14:40:50 <ajo> keep up the work on this, and ping me if you need any help
14:41:16 <mohankumar> :)
14:41:17 <ajo> ihrachyshka: that will be very important at some point, thanks for working on this
14:41:48 <ajo> We need to progress on the DB models
14:42:09 <ajo> Vikram signed up for it, but afaik he's now on a 2-week PTO if I didn't get him wrong
14:42:22 <ajo> somebody has time / can look into it?
14:42:37 <ajo> I was thinking of using https://github.com/openstack/oslo.versionedobjects
14:42:42 <ajo> even if there is not too much doc on it
14:43:05 <ajo> we could benefit from out-of-the-box RPC serialization/deserialization that works across different versions of the models
14:43:49 <ihrachyshka> and you would work with objects, not dicts.
14:43:58 <ajo> yey!
14:44:05 <ajo> that's another nice advantage
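For context, a minimal oslo.versionedobjects sketch (the object name and fields are placeholders for illustration, not an agreed model):

    from oslo_versionedobjects import base as obj_base
    from oslo_versionedobjects import fields as obj_fields

    @obj_base.VersionedObjectRegistry.register
    class QosPolicy(obj_base.VersionedObject, obj_base.VersionedObjectDictCompat):
        # Version 1.0: initial version
        VERSION = '1.0'

        fields = {
            'id': obj_fields.UUIDField(),
            'tenant_id': obj_fields.StringField(),
            'name': obj_fields.StringField(),
            'description': obj_fields.StringField(nullable=True),
        }

The obj_to_primitive()/obj_from_primitive() pair on such objects is what would give the versioned over-the-wire serialization mentioned above.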
14:44:38 <ihrachyshka> to make it clear: do you anticipate it db side?
14:44:48 <ajo> ihrachyshka, what do you mean?
14:45:28 <ihrachyshka> I mean, how/when will you deserialize into sqla dict?
14:45:49 <ihrachyshka> meh... better 'convert'
14:46:07 <ajo> ihrachyshka, for the RPC pub/sub and the API calls, I guess? I still need to look into it
14:46:26 <ajo> ihrachyshka I just know the "wows" of versioned objects, but didn't dig into the details yet
14:47:08 <ihrachyshka> ack, let's postpone
14:47:10 <ajo> ok, if no volunteers, I will start looking into it if Vikram doesn't arrive first
14:47:44 <Ramanjaneya> ajo: mohan and I will work on the DB model on vikram's behalf till he comes back
14:48:37 <ajo> Ramanjaneya, thanks, probably the best is to start with the db migrations; then we can look into the DB models with oslo.versionedobjects. let me find some references about it (I'll get back to you with the link)
14:49:15 <Ramanjaneya> ok. thank you ajo
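Since the suggestion is to start with the migrations, a hypothetical alembic sketch (table and column names are placeholders, not the agreed schema):

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        op.create_table(
            'qos_policies',
            sa.Column('id', sa.String(length=36), primary_key=True),
            sa.Column('tenant_id', sa.String(length=255), index=True),
            sa.Column('name', sa.String(length=255)),
            sa.Column('description', sa.String(length=255)))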
14:49:33 <ihrachyshka> ajo, you still need to define models, so there is no change in that regard afaik
14:49:45 <ajo> ihrachyshka, they remain the same? I didn't know
14:50:08 <ajo> ok, that's good, I guess
14:50:10 <ihrachyshka> I thought so! you define a middle layer that mostly reflects what is in schemas, but still
14:50:27 <ihrachyshka> that said, I may talk nonsense now, I will recheck better
14:50:46 <ajo> ihrachyshka, ack, we will recheck , np :)
14:50:54 <ajo> ok
14:51:17 <ajo> if somebody else has time, I may need to look into:  https://review.openstack.org/#/c/88599/14/specs/liberty/qos-api-extension.rst  L283
14:51:37 <ajo> and replace that with something that really resembles our proposed API structure on the spec
14:51:38 <ajo> this sets
14:51:46 <ajo> /v2.0/qos/qos-profiles
14:52:01 <ajo> and /v2.0/qos/qos-bandwidthlimiting-rules
14:52:03 <ajo> or something like that
14:52:15 <ajo> while we want: /v2.0/qos/rules/bandwidthlimiting
14:52:16 <ajo> ...
14:52:17 <ajo> etc
14:52:43 <ajo> I guess it's just a matter of not using the helper to create the resources and controller, and doing it yourself (probably talking nonsense here...)
14:53:21 <ajo> for the last two days I've been wearing my "reverse-engineer" t-shirt for a reason ;D, lots of debugging and getting to know how the API is built in neutron
14:53:52 <ajo> I shall change it now, but... lots of reverse engineering ahead ;D
14:54:17 <ajo> if anybody has time to look at it while I wrap my head around RPC and VO's ^
14:54:19 <ajo> it'd be great
14:55:15 <ajo> ok, anything to add?
14:55:28 <ajo> or shall I endmeeting/sleep ? :)
14:56:28 <ihrachyshka> probably #endmeeting
14:56:39 <ajo> #sleep
14:56:40 <ajo> hmm :D
14:56:44 <ajo> #endmeeting