14:01:37 #startmeeting neutron_qos
14:01:38 Meeting started Wed Jun 10 14:01:37 2015 UTC and is due to finish in 60 minutes. The chair is ajo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:39 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:41 The meeting name has been set to 'neutron_qos'
14:02:02 #topic Where are we?
14:02:10 #link https://etherpad.openstack.org/p/neutron-qos-jun-10-2015
14:02:16 Hi ajo .. Mohan here from vikram's team
14:02:16 an interactive agenda ^
14:02:21 hi mohankumar , welcome too
14:02:51 So, this past Monday we got the feature branch set up
14:02:56 it's feature/qos
14:03:05 https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:feature/qos,n,z
14:03:08 #link https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:feature/qos,n,z
14:03:17 We are doing neutron client changes... and posted qos policy CLI commands
14:03:18 https://review.openstack.org/#/c/189655/
14:03:41 and we are working on other neutron client commands too.
14:03:42 ah, thanks Ramanjaneya , please note it on the etherpad, I will make a comment when we get there :)
14:03:48 that's very nice! ;)
14:04:19 any questions about the feature branch thing?
14:04:23 ok..sure
14:04:28 Can we organize all our patches under one topic?
14:04:29 there are some notes on the etherpad about what's the plan
14:04:50 sc68cal, do you mean, a general topic to aggregate client + api + everything?
14:04:58 +1
14:05:01 anteaya showed me something neat the other day, you can use git review -t so you can keep everything organized
14:05:02 sc68cal, why not filter using the branch now that we have it?
14:05:40 for example: https://review.openstack.org/#/q/topic:neutron-linuxbridge,n,z
14:05:45 let's have both
14:06:02 ah right, it's not just neutron
14:06:04 we may switch the branch to master at some point
14:06:16 sounds good to me, we can try it
14:06:18 + for both then
14:06:37 if it gets messy at some point, we can think of changing strategy, but I guess that won't happen :)
14:06:42 +1
14:06:43 topic:neutron-qos?
14:06:51 ^ works for me
14:06:54 ihrachyshka +1 wfm too
14:07:40 ok Ramanjaneya change the topic of your reviews when you can, I'll change mine, moshele , you too please :)
14:07:54 ajo: sure
14:08:09 ok, now that we are on a similar topic, I'm going to jump over it first
14:08:21 #topic CI cross/projects/feature branch
14:08:34 ok
14:08:39 I'm unsure how we're going to put together the changes to neutron-client + CI tests from the feature branch
14:08:56 shall we create separate CI jobs?
14:09:09 we need to enable: service_plugins=neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.qos.qos_plugin.QoSPlugin
14:09:11 from devstack
14:09:28 the last bit: neutron.services.qos.qos_plugin.QoSPlugin
14:09:37 otherwise neither our extension nor our plugin is loaded, and we can't test
14:09:42 and also, we have the same issue with the neutron client
14:10:29 we need to investigate if our devstack initialization can be made (somehow) conditional on the feature branch
14:10:32 to set that
14:11:02 we can create experimental jobs
14:11:05 any comments or ideas about this? or shall I move on?
14:11:20 we can take a similar approach to what we're doing now with the linux bridge testing initiative
14:11:28 sc68cal, ahaa
14:11:38 sc68cal, experimental jobs are run on every patch?
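For reference, the devstack change discussed above would look roughly like the following in local.conf; this is a sketch using devstack's standard [[post-config|...]] mechanism, not a snippet agreed in the meeting. The topic convention can likewise be applied with `git review -t neutron-qos` when pushing a change.

    # local.conf sketch: append the QoS service plugin to service_plugins
    [[post-config|$NEUTRON_CONF]]
    [DEFAULT]
    service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.qos.qos_plugin.QoSPlugin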
14:11:49 then we could just skip it if the branch is not "feature/qos"
14:11:57 ajo: no, you check with a special comment
14:11:58 and set the right neutron client when feature/qos is on
14:12:08 ahh sc68cal , ack
14:12:18 that could work for me
14:12:37 sc68cal, any link/review about how to set up an experimental CI job?
14:12:41 any volunteer? :)
14:12:59 I can take it on - here's how I did it for LB - https://review.openstack.org/184830
14:13:04 so it'll be similar
14:13:12 #link https://review.openstack.org/184830
14:13:37 sc68cal, thank you very much
14:13:52 #action sc68cal to set up an experimental job for testing QoS plugin + ext
14:14:19 #topic poc mixin-less core resource extension in service plugin
14:14:32 more details on the etherpad
14:14:37 but basically
14:14:42 #link https://review.openstack.org/#/c/189628/
14:15:23 I introduced the AFTER_READ callback notification in neutron/db/db_base_plugin_v2.py
14:15:38 Ajo: can you post the etherpad link
14:15:42 and we register for it on the service plugin, being able to extend core resources (PORT in this case)
14:15:51 sure dhruv : https://etherpad.openstack.org/p/neutron-qos-jun-10-2015
14:16:15 ajo: thanks
14:16:44 I need to talk to armax to get some review on the way I did it
14:16:49 I hope it looks reasonable
14:17:37 as a strategy, for example, you don't have control over how callbacks are called, so future dependent extensions using the same mechanism may need some way to handle that
14:18:09 moshele, do you want to jump on what you've been doing?
14:18:25 I wonder if the patch should be split into a few patches, to make it easier to review?
14:18:34 ^ajo
14:18:46 ajo: I started on the agent extension driver
14:19:01 I think the callbacks modification is a self-standing change
14:19:11 irenab, probably it could be split, yes... extension in one, service plugin in another,
14:19:21 I talked to irenab and I need to change the current patch a bit
14:19:26 irenab, the callback notify in another
14:19:34 I think not too granular, but generic versus qos specific
14:19:49 irenab, I can split, yes
14:20:15 moshele, thanks :)
14:20:42 basically I plan to add an L2Agent interface class
14:20:48 irenab, I can split now that the approach seems to work :)
14:20:58 ajo: great
14:22:00 following our discussion with moshele, there are some questions regarding RPC between server and agent
14:22:09 yes
14:22:27 I have some debt in that regard, I need to put up a devref or devref+POC on a generic RPC mechanism
14:22:29 ajo: do you have some details regarding the generic RPC approach you mentioned last meeting?
14:22:53 sadly not :( I invested all my time in the other patch
14:23:29 moshele, I can have something by tomorrow for review
14:23:40 to see if it does make sense
14:23:44 that will be great
14:24:08 #action ajo write the generic RPC mechanism doc for review
14:24:08 as for the workflow between agent and server, I think we should discuss how it's bound to the current L2 agent RPC calls, if at all
14:24:43 irenab, those may be independent calls...
14:25:09 irenab, do you mean, current L2, or current on moshele's patch? (I didn't have time to review)
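A rough sketch of the mixin-less mechanism described under the PoC topic above: the core plugin emits an AFTER_READ notification while building the port dict, and the QoS service plugin subscribes to it to graft its field onto the result. AFTER_READ is the event the patch under review introduces; the handler signature, the kwarg name and the _lookup_policy_for_port helper below are illustrative assumptions, not the actual code:

    # Sketch: extending the core port resource from a service plugin,
    # without a DB mixin, via the callbacks registry
    from neutron.callbacks import events, registry, resources

    def _lookup_policy_for_port(port_id):
        # hypothetical placeholder for the service plugin's DB lookup
        return None

    def _extend_port_dict_qos(resource, event, trigger, **kwargs):
        # AFTER_READ fires once the core plugin has built the port dict;
        # the service plugin adds its own field to the result
        port = kwargs['port']  # hypothetical kwarg name
        port['qos_policy_id'] = _lookup_policy_for_port(port['id'])

    class QoSPlugin(object):
        def __init__(self):
            # events.AFTER_READ is the new event introduced by the PoC patch
            registry.subscribe(_extend_port_dict_qos,
                               resources.PORT, events.AFTER_READ)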
14:25:28 ajo: current, we may still reuse the update_port and get_device_details
14:25:58 irenab, I guess you're right
14:26:12 to add qos_policy_id, and the qos_agent will have to query for details
14:26:19 I'd try to avoid cluttering get_device_info with more info
14:26:57 we should try to keep the number of RPC calls to a minimum
14:27:05 and probably include an alternate call to retrieve specific info + subscribe
14:27:38 irenab, yes, I agree on that, I guess it's my fault for not presenting my plan
14:27:55 my plan includes, later on, migrating security groups (the user of get_device_info) to the new mechanism
14:27:55 ajo: right, but we probably need to go through a generic loop to apply changes
14:28:12 ajo: completely with you on this
14:28:47 the idea is that subscription via fanouts will distribute the info to the right extensions as the information changes...
14:28:53 we need to define this part to make the agent-side refactoring
14:28:57 extensions : modular L2 agent extensions
14:29:15 irenab yes
14:29:37 ok, I can push on that, while we get progress on the DB models, the neutron client, etc.
14:30:02 there are two different types of updates, one on the port level (update policy mapping) and the other an update of the policy itself
14:30:13 irenab, correct
14:30:30 I guess the first one can be treated via the update_port workflow
14:30:39 correct
14:30:45 update_port shall provide the qos_profile_id
14:30:51 and then, the agent can subscribe to such profile_id
14:30:51 the second is the proposed RPC pub/sub
14:30:58 correct
14:31:02 ajo: +1
14:31:26 then unsub when no port is using such qos_profile_id anymore on the host
14:32:09 get_device_details is now used for first-time configuration, it may be useful to propagate qos_policy there as well
14:32:25 then the agent will subscribe for updates
14:32:43 we need to explore that, yes
14:32:57 IMO it's going to be more modular if we can do it via the same pub/sub mechanism,
14:33:04 send the info on subscription
14:33:20 then we don't depend on two different RPC calls
14:33:37 when there are changes to make, or new pub/sub consumer extensions
14:34:00 this may work, but we need to synchronize the configuration on the agent side
14:34:22 both l2 connectivity and qos policy
14:34:33 to report the device as UP
14:34:42 irenab, correct, as it happens now in the get_device_info, right? :)
14:34:48 yes
14:34:49 yes
14:34:57 we'd just be doing one call per different resource type
14:35:01 maybe that can be coalesced into one
14:35:13 (deferred subscription or something...)
14:35:47 all extensions come in at port creation... and subscribe to what they need... they finish, we do the whole subscription/info retrieval... we send the info back to all the extensions
14:36:03 I guess we agree on requirements, so let's just put down the RPC mech details and agent workflow
14:36:37 irenab, moshele , tomorrow I'll put my time on the RPC devref, and probably we can discuss on the review
14:37:06 ajo: great
14:37:13 ok, shall we move on to the list of things we need progress on?
14:38:37 #topic TO-DO
14:38:45 I'm afraid I lost almost everybody :D
14:38:47 lol :)
14:39:37 ok, we need to advance on the ovs/sr-iov low-level details, maybe just as devref + code?
14:39:42 moshele ^ how does that sound?
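A minimal sketch of the agent-side workflow agreed above: subscribe to a qos_profile_id when the first local port starts using it, and unsubscribe once no port on the host uses it anymore. The rpc_client object and its subscribe()/unsubscribe() methods are hypothetical stand-ins for the generic RPC mechanism still to be written up as a devref:

    # Sketch of the agent-side subscribe/unsubscribe bookkeeping:
    # one fanout subscription per qos_profile_id in use on this host
    import collections

    class QosProfileSubscriptions(object):
        def __init__(self, rpc_client):
            self._rpc = rpc_client  # hypothetical generic RPC client
            self._ports_by_profile = collections.defaultdict(set)

        def port_updated(self, port_id, qos_profile_id):
            # update_port provides the qos_profile_id; the first user of
            # a profile on this host triggers a fanout subscription
            if not self._ports_by_profile[qos_profile_id]:
                self._rpc.subscribe('qos-profile', qos_profile_id)
            self._ports_by_profile[qos_profile_id].add(port_id)

        def port_deleted(self, port_id, qos_profile_id):
            ports = self._ports_by_profile[qos_profile_id]
            ports.discard(port_id)
            if not ports:
                # no local port uses the profile anymore: unsubscribe
                self._rpc.unsubscribe('qos-profile', qos_profile_id)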
14:39:57 ajo: I am a bit overloaded this week, but hope to get some work started next week
14:40:05 irenab thanks :)
14:40:15 I will try
14:40:38 we also have the neutron client: Ramanjaneya and mohankumar , thanks :)
14:40:49 I started looking into supported qos rule type detection, not much done yet, but hopefully will have smth on Mon. not a prio, I know ;)
14:40:50 keep up the work on this, and ping me if you need any help
14:41:16 :)
14:41:17 ihrachyshka: that will be very important at some point, thanks for working on this
14:41:48 We need to progress on the DB models
14:42:09 Vikram signed up for it, but afaik he's now on a 2-week PTO, if I didn't get him wrong
14:42:22 does somebody have time / can look into it?
14:42:37 I was thinking of using https://github.com/openstack/oslo.versionedobjects
14:42:42 even if there is not too much doc on it
14:43:05 we could benefit from out-of-the-box rpc serialization/deserialization that works across different versions of the models
14:43:49 and you would work with objects, not dicts.
14:43:58 yey!
14:44:05 that's another nice advantage
14:44:38 to make it clear: do you anticipate it db side?
14:44:48 ihrachyshka, what do you mean?
14:45:28 I mean, how/when will you deserialize into the sqla dict?
14:45:49 meh... better 'convert'
14:46:07 ihrachyshka for the rpc pub/sub and the API calls I guess?, I still need to look into it
14:46:26 ihrachyshka I just know the "wows" of versioned objects, but didn't dig into the details yet
14:47:08 ack, let's postpone
14:47:10 ok, if no volunteers, I will start looking into it if Vikram doesn't arrive first
14:47:44 ajo: mohan and I will work on the DB model on behalf of vikram till he comes back
14:48:37 Ramanjaneya, thanks, probably the best is to start with the db migrations, then we can look into the DB models with oslo.versionedobjects, let me find some references about it (I'll get back to you with the link)
14:49:15 ok. thank you ajo
14:49:33 ajo, you still need to define models, so there is no change in that regard afaik
14:49:45 ihrachyshka, they remain the same? I didn't know
14:50:08 ok, that's good, I guess
14:50:10 I thought so! you define a middle layer that mostly reflects what is in the schemas, but still
14:50:27 that said, I may be talking nonsense now, I will recheck
14:50:46 ihrachyshka, ack, we will recheck , np :)
14:50:54 ok
14:51:17 if somebody else has time, I may need to look into: https://review.openstack.org/#/c/88599/14/specs/liberty/qos-api-extension.rst L283
14:51:37 and replace that with something that really resembles our proposed API structure in the spec
14:51:38 this sets
14:51:46 /v2.0/qos/qos-profiles
14:52:01 and /v2.0/qos/qos-bandwidthlimiting-rules
14:52:03 or something like that
14:52:15 while we want: /v2.0/qos/rules/bandwidthlimiting
14:52:16 ...
14:52:17 etc
14:52:43 I guess it's just a matter of not using the helper to create the resources and controller, and doing it yourself (probably talking nonsense here...)
14:53:21 the last two days I've been wearing my "reverse-engineer" t-shirt for a reason ;D, lots of debugging and getting to know how the API is built in neutron
14:53:52 I shall change it now, but ... lots of reverse engineering ahead ;D
14:54:17 if anybody has time to look at it while I wrap my head around RPC and VO's ^
14:54:19 it'd be great
14:55:15 ok, anything to add?
14:55:28 or shall I endmeeting/sleep ? :)
14:56:28 probably #endmeeting
14:56:39 #sleep
14:56:40 hmm :D
14:56:44 #endmeeting
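For the DB models discussion above, a QoS policy expressed with oslo.versionedobjects might look roughly like the sketch below. The registry decorator, base class and field types are the real oslo.versionedobjects API; the class name, VERSION and field set are illustrative guesses, since the actual models were still to be defined:

    # Sketch: a QoS policy as a versioned object, giving versioned RPC
    # serialization/deserialization out of the box
    from oslo_versionedobjects import base as obj_base
    from oslo_versionedobjects import fields as obj_fields

    @obj_base.VersionedObjectRegistry.register
    class QosPolicy(obj_base.VersionedObject):
        # bump VERSION on schema changes; older consumers can be fed
        # an obj_to_primitive() backported to the version they speak
        VERSION = '1.0'

        fields = {
            'id': obj_fields.UUIDField(),
            'name': obj_fields.StringField(),
            'shared': obj_fields.BooleanField(default=False),
        }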