18:02:17 #startmeeting networking_policy
18:02:17 Meeting started Thu Jan 28 18:02:17 2016 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:21 The meeting name has been set to 'networking_policy'
18:02:33 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Jan_28th.2C_2016
18:02:44 #topic Bugs
18:03:02 we have one critical bug in UI for liberty
18:03:17 the javascript files are not getting loaded correctly
18:03:55 ank is looking at it and has a patch: #link https://review.openstack.org/273496
18:04:24 the patch is failing because of a version number in the setup.cfg
18:04:40 I'll post another patch to remove the version number, will need help to get it merged quickly
18:04:56 * tbachman steps in
18:05:02 any other high priority bugs to discuss?
18:05:09 tbachman: hi!
18:05:13 SumitNaiksatam: hi!
18:05:49 no new updates on integration tests or docs
18:06:01 #topic Packaging
18:06:09 rkukura: anything to update here?
18:06:23 liberty tars are posted
18:06:44 nothing new from me on this yet, but should be able to do liberty soon
18:06:50 the other stable releases are not, waiting a little bit more to see if there are any more changes
18:06:54 rkukura: okay, thanks
18:07:04 do we plan to move our masters to use neutron, etc. masters?
18:07:19 rkukura: yes
18:07:48 I think that will be important for getting RDO delorean setup for our stuff
18:08:07 sure, I will post a WIP patch soon
18:08:10 great
18:08:36 I will try to get a plan for this at least by next week's meeting
18:08:44 rkukura: nice, thanks
18:09:04 so while we are on that topic of syncing up the master branch
18:09:11 #topic DB migrations
18:09:53 in the liberty cycle neutron moved to a new scheme of DB migrations
18:10:48 which allows additive migrations to be performed online (without having to restart the server)
18:11:07 we will need to decide if we want to move to such a scheme as well
18:12:05 for reference #link https://github.com/openstack/neutron/commit/c7acfbabdc13ed2a73bdbc6275af8063b8c1eb2f
18:13:14 we can give it some thought and discuss further in subsequent meetings
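For context on the neutron scheme referenced above: under the split-branch model, purely additive "expand" migrations can be applied while the server is running, while destructive "contract" migrations still require downtime. A minimal sketch of an expand-style migration follows; the revision IDs, table, and column are hypothetical, for illustration only.

```python
# Hypothetical additive migration in the style of neutron's "expand"
# branch; revision IDs and table/column names are made up.
"""add status to service chain instance

Revision ID: 1234abcd5678
Revises: 8765dcba4321
"""

from alembic import op
import sqlalchemy as sa

revision = '1234abcd5678'
down_revision = '8765dcba4321'


def upgrade():
    # Purely additive, so it is safe to run online: existing rows and
    # running code that ignores the new column are unaffected.
    op.add_column(
        'sc_instances',
        sa.Column('status', sa.String(length=16), nullable=True))
```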
18:14:02 #topic Design Specs
18:14:12 #link https://review.openstack.org/#/c/239743/
18:14:30 hemanthravi: songole: any update on the above?
18:14:53 SumitNaiksatam: hemanth is updating it.
18:14:59 I'm behind on the network func platform spec, will post the update by end of the week
18:15:08 songole: hemanthravi: okay thanks
18:15:31 hemanthravi: we planned to post a separate spec for “nfp”?
18:16:03 no, will use the same spec
18:16:03 or nsf -> nfp?
18:16:08 hemanthravi: okay
18:16:27 I think I had the name mixed up with the many we discussed
18:16:29 nsf
18:16:29 hemanthravi: it will spec out both the northbound and the southbound APIs?
18:17:05 the northbound will take some time, I'll start with the southbound
18:17:12 hemanthravi: okay
18:17:58 hemanthravi: having the southbound API will allow service developers to plug into this framework, right?
18:18:21 yes,
18:19:26 the idea is to have an infra component and a service-tenant component, and the southbound api
18:19:38 will allow the service-tenant component to be pluggable
18:20:04 hemanthravi: okay, thanks for the update
18:20:37 #topic Status field for resources
18:20:54 field -> attribute
18:21:15 we have discussed this in the past in the context of the “service_chain_instance"
18:22:07 however this requirement applies to other resources as well
18:22:26 SumitNaiksatam: Is there a concrete proposal for this?
18:22:35 rkukura: no
18:23:03 just wanted to bring it up here before putting up the proposal
18:23:03 Do we want just an enum, or also something more verbose like a text description?
18:23:23 rkukura: a description can certainly be very helpful for the user
18:23:28 we do need this, else the calls to instantiate service chain instances end up as blocking calls
18:23:48 Since we can have multiple drivers, we also need some way to coalesce status from them into a single enum. This is something we want to do in ML2, but don't have a proposal yet.
18:25:04 rkukura: I would imagine that it would be the responsibility of the plugin to coalesce the states, but it would be specific to the drivers that are supported by that particular plugin
18:25:19 right
18:25:22 rkukura: admittedly this is difficult to do in a generic way (like in the case of ML2)
18:25:55 So the driver API would need a way for each driver to provide its own status, and would have to figure out how to compose these into the overall status
18:26:20 I also recall a similar issue with fwaas, where the state needed to be constructed from distributed components
18:26:23 rkukura: right
18:27:05 if anyone wants to post a proposal for this, please feel free to do so, else I will post one
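To make the coalescing question concrete, here is a hedged sketch of one way a plugin could fold per-driver statuses into a single enum plus the more verbose description discussed above. All names are hypothetical; this is not an actual GBP or ML2 API, just an illustration of the composition problem.

```python
# Hypothetical status coalescing: the plugin asks each driver for its
# status and reports the "worst" one, keeping the per-driver detail
# as a human-readable description.

# Lower value = worse; unknown statuses are treated as ERROR.
_PRECEDENCE = {'ERROR': 0, 'BUILD': 1, 'ACTIVE': 2}


def coalesce_status(driver_statuses):
    """driver_statuses: iterable of (driver_name, status) pairs."""
    driver_statuses = list(driver_statuses)
    worst_name, worst_status = min(
        driver_statuses,
        key=lambda pair: _PRECEDENCE.get(pair[1], 0))
    description = '; '.join(
        '%s: %s' % (name, status) for name, status in driver_statuses)
    return worst_status, description


# One driver still building keeps the whole instance in BUILD:
status, details = coalesce_status(
    [('driver_a', 'ACTIVE'), ('driver_b', 'BUILD')])
assert status == 'BUILD'
```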
18:27:46 #topic Cross-project clean-up patches
18:28:01 rkukura and I discussed some of these offline
18:28:03 #link https://review.openstack.org/264642
18:28:12 (use of assertions)
18:28:22 #link https://review.openstack.org/#/c/263188/
18:28:26 (use of log.warn)
18:28:41 the first one is interesting, since it removes the testing that a boolean is actually returned
18:28:53 rkukura: right, and hence some projects have abandoned it
18:29:25 let's decide one way or the other today whether we want to accept these patches
18:30:09 probably makes sense to try to be consistent in validating the exact values returned, so I think I'd vote to abandon it
18:30:27 rkukura: okay, I tend to agree too
18:30:37 the latter one seems sane, so I guess we can move forward with that?
18:30:54 for example, I'd hate to see returning None not break an assertFalse() test
18:31:03 rkukura: agreed
18:31:09 the latter has not passed CI, right?
18:31:27 non-voting, but is there an issue?
18:31:45 rkukura: it's a transient failure, I just rechecked it
18:32:13 ok
18:32:43 so once the gate-group-based-policy-dsvm-functional job comes back successfully I think we can merge this
18:32:56 agreed
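A small illustration of the assertFalse() concern above, using a made-up function: a regression from returning False to returning None still satisfies assertFalse(), while asserting the exact value catches it. The log.warn patch is the simpler cousin: warn() is just a deprecated alias of warning() in Python's logging module.

```python
import logging
import unittest

LOG = logging.getLogger(__name__)


def is_enabled():
    # Hypothetical regression: this should return False, not None.
    return None


class TestExactReturn(unittest.TestCase):
    def test_truthiness_only(self):
        self.assertFalse(is_enabled())      # still passes: None is falsy

    def test_exact_value(self):
        self.assertIs(False, is_enabled())  # fails, exposing the bug


# The logging cleanup, for comparison:
LOG.warning('preferred spelling')           # rather than LOG.warn(...)

if __name__ == '__main__':
    unittest.main()
```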
18:33:15 #topic Open Discussion
18:33:42 anything related to the summit session proposal deadline on Monday?
18:33:43 I think igordcard_ you ran into some problems earlier with the liberty devstack setup, but you are past those now?
18:33:53 rkukura: ah good one
18:34:01 yes, I was trying to stack master but I was running into some problems
18:34:09 with the liberty release it's fine
18:34:40 igordcard_: okay, I haven't tried master of late, the upstream gate job seems to work, so I imagined it might have been okay
18:34:50 igordcard_: what was the problem you were seeing?
18:35:57 so on the topic of submissions, there is a plan to submit one hands-on lab session showing how services work in our framework
18:36:12 I was able to stack in the end, but the CLI wouldn't work (I figured it was because loading the creds from accrc doesn't work anymore)
18:36:17 if it gets accepted I imagine we will all work on this together as a team
18:36:28 and I was also getting too many connections in mysql because of heat-engine
18:36:36 igordcard_: oh okay, interesting
18:37:03 the stacking issue, let me check here in the old logs
18:37:39 any other talks or labs or demos we want to plan for submission?
18:38:14 oh it wasn't loading the group_policy plugin (not found), I think there was some conflict of requirements (wrong instructions followed probably)
18:38:24 igordcard_: okay
18:38:54 hemanthravi: did you decide whether you wanted to submit a talk?
18:39:27 not yet, looking at coming up with something before Mon
18:40:13 hemanthravi: okay
18:40:24 any submissions other than the lab?
18:40:36 could we submit something about the qos work?
18:41:08 igordcard_: sure, happy to support you if you want to submit something
18:41:43 it would be nice to have something for that, even though we still need to figure out the scope of qos for mitaka
18:42:38 igordcard_: if nothing else it will focus us on making progress, it's a forcing function :-)
18:43:03 and it's something I wanted to discuss a bit today: what's best for now, allow qos policies to be applied to policy targets, or allow qos policies to be applied across PTGs with the drawback of not being able to use a classifier?
18:43:21 #topic QoS support
18:43:24 SumitNaiksatam: yeah :)
18:44:12 igordcard_: sorry, did not understand the latter part: “with the drawback of not being able to use a classifier?”
18:44:56 ideally we should be doing things at the granularity of the group, right?
18:45:00 SumitNaiksatam: the current QoS API only allows qos policies to be set on ports (or networks, which in turn assign them to each port of the network)
18:45:17 igordcard_: right, right
18:45:20 so we cannot say, for this PRS with this classifier, apply this qos rule
18:47:21 igordcard_: but a PRS always participates in a relationship with PTGs
18:47:29 I don't know if the new classifier will be in mitaka
18:47:40 igordcard_: ah the issue is with the classifier
18:47:57 yes, we cannot scope to specific traffic
18:48:16 can we perhaps put a constraint that the resource mapping driver can only support QoS for “any” traffic?
18:48:33 and later relax this constraint, as and when support for the classifier is available?
18:48:42 that is one of the possibilities I see for now
18:49:11 the only possibility for PRS
18:49:19 ok
18:49:34 seems reasonable to me to start with this
18:49:42 rkukura: hemanthravi: songole: what do you think?
18:49:45 makes sense to me
18:50:09 rkukura: I think we would prefer that policy always be applied at the group level, right?
18:50:17 also, the policy will in reality be applied to each of the PTs
18:51:07 so if the PRS says that the bandwidth limit is 1Mb, it will be 1Mb for each of the PTs of the consuming PTG
18:51:17 igordcard_: yes, and that is fine
18:51:22 okay
18:51:24 PTG vs. PT, or PTG vs PRS?
18:51:27 igordcard_: oh wait
18:51:46 because it is at the neutron port level
18:52:01 igordcard_: hmmm, I think we will need to think through that
18:53:00 I don't think applying policy to a PTG implies that we are allocating a cumulative “quota” for that PTG (like bandwidth in this case)
18:53:16 hence my initial reaction was that it seems fine
18:53:59 however, in this case it seems like we would have to think of the cumulative implications
18:54:29 rkukura: I am not sure I understood your question
18:55:01 could we make a distinction between PTG-level bandwidth being per-PT vs. total?
18:55:42 SumitNaiksatam: I was just trying to understand your “applied at the group level” question
18:56:38 rkukura: okay, yeah I think “with the drawback of not being able to use a classifier?” makes sense
18:56:41 rkukura: at an implementation level it would not be possible by simply using what neutron provides today: qos rules per port
18:57:07 rkukura: sorry, copy paste error
18:57:25 I meant, “with the drawback of not being able to use a classifier?” makes sense
18:57:35 we have three minutes
18:57:55 igordcard_: how about, at least, you put this level of thought into a spec and post it?
18:57:56 I can make a PoC for the qos action in the PRS
18:58:10 sounds good to me
18:58:13 SumitNaiksatam: okay I can create a spec with the thoughts
18:58:20 might be much easier to discuss on the spec, and we can certainly follow up in the weekly meetings
18:58:28 igordcard_: yes sure
18:58:28 but it won't yet dive into how it would work internally
18:58:54 after the PoC I can update it with more detail
18:59:09 igordcard_: np, we understand that this is WIP
18:59:51 but I think having the spec will get the whole team on the same page and up to speed with the thinking so far
18:59:51 that's all from my side then
18:59:58 igordcard_: thanks!
19:00:02 SumitNaiksatam: I see, yeah
19:00:22 igordcard_: assuming you are going to be available, we can make this a weekly discussion topic
19:00:36 alright, thanks everyone for your time today!
19:00:38 bye
19:00:43 bye
19:00:48 bye!
19:00:54 #endmeeting
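For reference on the QoS discussion above: a minimal sketch of the port-scoped workflow igordcard_ described, i.e. what a PRS qos action would have to fan out to each PT's port. It assumes Liberty-era python-neutronclient method names and uses placeholder credentials and IDs; verify against the client version in use, as this is an illustration rather than the proposed implementation.

```python
from neutronclient.v2_0 import client

# Placeholder credentials (keystone v2-style auth).
neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# A QoS policy with a bandwidth-limit rule; there is no classifier, so
# the limit applies to all traffic on whatever port it is attached to:
# the "any traffic only" constraint discussed above.
policy = neutron.create_qos_policy(
    {'policy': {'name': 'prs-bw-limit'}})['policy']
neutron.create_bandwidth_limit_rule(
    policy['id'], {'bandwidth_limit_rule': {'max_kbps': 1000}})

# Attaching per port means each PT gets its own 1 Mbps cap, not a
# shared PTG-wide aggregate, which is the cumulative-quota question
# raised in the meeting.
pt_port_ids = ['<port-uuid-1>', '<port-uuid-2>']  # ports backing the PTs
for port_id in pt_port_ids:
    neutron.update_port(port_id, {'port': {'qos_policy_id': policy['id']}})
```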