18:02:17 <SumitNaiksatam> #startmeeting networking_policy
18:02:17 <openstack> Meeting started Thu Jan 28 18:02:17 2016 UTC and is due to finish in 60 minutes.  The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:21 <openstack> The meeting name has been set to 'networking_policy'
18:02:33 <SumitNaiksatam> #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Jan_28th.2C_2016
18:02:44 <SumitNaiksatam> #topic Bugs
18:03:02 <SumitNaiksatam> we have one critical bug in UI for liberty
18:03:17 <SumitNaiksatam> the javascript files are not getting loaded correctly
18:03:55 <SumitNaiksatam> ank is looking at it and has a patch: #link https://review.openstack.org/273496
18:04:24 <SumitNaiksatam> the patch is failing because of a version number in the setup.cfg
18:04:40 <SumitNaiksatam> i posted another patch to remove the version number, will need help to get it merged quickly
18:04:56 * tbachman steps in
18:05:02 <SumitNaiksatam> any other high priority bugs to discuss?
18:05:09 <SumitNaiksatam> tbachman: hi!
18:05:13 <tbachman> SumitNaiksatam: hi!
18:05:49 <SumitNaiksatam> no new updates on integration test or docs
18:06:01 <SumitNaiksatam> #topic Packaging
18:06:09 <SumitNaiksatam> rkukura: anything to update here?
18:06:23 <SumitNaiksatam> liberty tars are posted
18:06:44 <rkukura> nothing new from me on this yet, but should be able to do liberty soon
18:06:50 <SumitNaiksatam> the tars for the other stable releases are not posted yet, waiting a little bit more to see if there are any more changes
18:06:54 <SumitNaiksatam> rkukura: okay, thanks
18:07:04 <rkukura> do we plan to move our masters to use neutron, etc. masters?
18:07:19 <SumitNaiksatam> rkukura: yes
18:07:48 <rkukura> I think that will be important for getting RDO delorean setup for our stuff
18:08:07 <SumitNaiksatam> sure, i will post a WIP patch soon
18:08:10 <rkukura> great
18:08:36 <rkukura> I will try to get a plan for this at least by next week’s meeting
18:08:44 <SumitNaiksatam> rkukura: nice, thanks
18:09:04 <SumitNaiksatam> so while we are on that topic of syncing up the master branch
18:09:11 <SumitNaiksatam> #topic DB migrations
18:09:53 <SumitNaiksatam> in the liberty cycle neutron moved to a new scheme of DB migrations
18:10:48 <SumitNaiksatam> which allows additive migrations to be performed online (without having to restart the server)
18:11:07 <SumitNaiksatam> we will need to decide if we want to move to such a scheme as well
18:12:05 <SumitNaiksatam> for reference #link https://github.com/openstack/neutron/commit/c7acfbabdc13ed2a73bdbc6275af8063b8c1eb2f
18:13:14 <SumitNaiksatam> we can give it some thought and discuss further in subsequent meetings
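For context on the scheme referenced above: Neutron's Liberty migrations are split into "expand" (additive, safe to apply online) and "contract" (destructive) alembic branches, driven by neutron-db-manage upgrade --expand / --contract. Below is a minimal sketch of what an additive migration could look like if GBP adopted the same layout; the revision identifiers, table, and column names are hypothetical placeholders, not real GBP revisions.

```python
# Sketch of an "expand" (additive-only) alembic migration, assuming GBP
# adopts the Neutron Liberty branch layout. All identifiers below are
# illustrative placeholders.
from alembic import op
import sqlalchemy as sa

# revision identifiers used by alembic (placeholders)
revision = '1234abcd5678'
down_revision = 'abcd12345678'


def upgrade():
    # Purely additive: a new nullable column can be added while the
    # server keeps running, which is the point of the expand branch.
    op.add_column(
        'gp_servicechain_instances',
        sa.Column('status', sa.String(length=16), nullable=True))
```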
18:14:02 <SumitNaiksatam> #topic Design Specs
18:14:12 <SumitNaiksatam> #link https://review.openstack.org/#/c/239743/
18:14:30 <SumitNaiksatam> hemanthravi: songole: any update on the above?
18:14:53 <songole> SumitNaiksatam: hemanth is updating it.
18:14:59 <hemanthravi> I'm behind on the network func platform spec, will post the update by end of the week
18:15:08 <SumitNaiksatam> songole: hemanthravi: okay thanks
18:15:31 <SumitNaiksatam> hemanthravi: we planned to post a separate spec for “nfp”?
18:16:03 <hemanthravi> no, will use the same spec
18:16:03 <SumitNaiksatam> or nsf -> nfp?
18:16:08 <SumitNaiksatam> hemanthravi: okay
18:16:27 <hemanthravi> I think I had the name mixed up with the many we discussed
18:16:29 <hemanthravi> nsf
18:16:29 <SumitNaiksatam> hemanthravi: it will spec out both the northbound and the southbound APIs?
18:17:05 <hemanthravi> the northbound will take some time, i'll start with the southbound
18:17:12 <SumitNaiksatam> hemanthravi: okay
18:17:58 <SumitNaiksatam> hemanthravi: having the southbound API will allow service developers to plug into this framework, right?
18:18:21 <hemanthravi> yes,
18:19:26 <hemanthravi> the idea is to have an infra component and a service-tenant component and the southbound api
18:19:38 <hemanthravi> will allow the service-tenant component to be pluggable
18:20:04 <SumitNaiksatam> hemanthravi: okay, thanks for the update
18:20:37 <SumitNaiksatam> #topic Status field for resources
18:20:54 <SumitNaiksatam> field -> attribute
18:21:15 <SumitNaiksatam> we have discussed this in the past in the context of the “service_chain_instance”
18:22:07 <SumitNaiksatam> however this requirement applies to other resources as well
18:22:26 <rkukura> SumitNaiksatam: Is there a concrete proposal for this?
18:22:35 <SumitNaiksatam> rkukura: no
18:23:03 <SumitNaiksatam> just wanted to bring it up here before putting the proposal
18:23:03 <rkukura> Do we want just an enum, or also something more verbose like a text description?
18:23:23 <SumitNaiksatam> rkukura: a description can certainly be very helpful for the user
18:23:28 <hemanthravi> we do need this, else the calls to instantiate service chain instances end up as blocking calls
18:23:48 <rkukura> Since we can have multiple drivers, we also need some way to coalesce status from them into a single enum. This is something we want to do in ML2, but don’t have a proposal yet.
18:25:04 <SumitNaiksatam> rkukura: i would imagine that it would be the responsibility of the plugin to coalesce the states, but it would be specific to the drivers that are supported by that particular plugin
18:25:19 <rkukura> right
18:25:22 <SumitNaiksatam> rkukura: admittedly this is difficult to do in a generic way (like in the case of ML2)
18:25:55 <rkukura> So the driver API would need a way for each driver to provide its own status, and would have to figure out how to compose these into the overall status
18:26:20 <SumitNaiksatam> i also recall a similar issue with fwaas, where the state needed to be constructed from distributed components
18:26:23 <SumitNaiksatam> rkukura: right
18:27:05 <SumitNaiksatam> if anyone wants to post a proposal for this, please feel free to do so, else i will post one
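For whoever drafts that proposal, here is a rough illustration of the shape it might take, using the Neutron-style attribute-map convention GBP extensions already follow; the attribute names, values, and the verbose description field are assumptions for discussion, not an agreed design.

```python
# Rough illustration only, not the actual proposal: a read-only status
# enum plus a verbose description, declared in the Neutron attribute-map
# style. Names and values are assumptions.
STATUS_VALUES = ['ACTIVE', 'BUILD', 'ERROR']

STATUS_ATTRIBUTES = {
    'status': {'allow_post': False, 'allow_put': False,
               'validate': {'type:values': STATUS_VALUES},
               'is_visible': True},
    'status_details': {'allow_post': False, 'allow_put': False,
                       'is_visible': True},
}
```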
18:27:46 <SumitNaiksatam> #topic Cross-project clean-up patches
18:28:01 <SumitNaiksatam> rkukura and I discussed some of these offline
18:28:03 <SumitNaiksatam> #link https://review.openstack.org/264642
18:28:12 <SumitNaiksatam> (use of assertions)
18:28:22 <SumitNaiksatam> #link https://review.openstack.org/#/c/263188/
18:28:26 <SumitNaiksatam> (use of log.warn)
18:28:41 <rkukura> the first one is interesting, since it removes the testing that a boolean is actually returned
18:28:53 <SumitNaiksatam> rkukura: right, and hence some projects have abandoned it
18:29:25 <SumitNaiksatam> lets decide one way or the other today whether we want to accept these patches
18:30:09 <rkukura> probably makes sense to try to be consistent in validating the exact values returned, so I think I’d vote to abandon it
18:30:27 <SumitNaiksatam> rkukura: okay, i tend to agree too
18:30:37 <SumitNaiksatam> the latter one seems sane, so i guess we can move forward with that?
18:30:54 <rkukura> for example, I’d hate to see returning None not break an assertFalse() test
18:31:03 <SumitNaiksatam> rkukura: agreed
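To make the concern concrete, a small standard-library example of the difference the cleanup patch would erase: assertFalse() accepts anything falsy, while assertEqual(False, ...) insists on the exact boolean.

```python
import unittest


class AssertionSemantics(unittest.TestCase):
    def test_falsy_versus_exact_false(self):
        result = None  # imagine a buggy method returning None instead of False
        self.assertFalse(result)            # passes: None is merely falsy
        self.assertNotEqual(False, result)  # i.e. assertEqual(False, result) would fail


if __name__ == '__main__':
    unittest.main()
```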
18:31:09 <rkukura> the latter has not passed CI, right?
18:31:27 <rkukura> non-voting, but is there an issue?
18:31:45 <SumitNaiksatam> rkukura: it's a transient failure, i just rechecked it
18:32:13 <rkukura> ok
18:32:43 <SumitNaiksatam> so once the gate-group-based-policy-dsvm-functional comes back successfully i think we can merge this
18:32:56 <rkukura> agreed
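For reference, the second patch is the usual mechanical switch away from the deprecated warn() alias in the standard logging module (and its oslo.log wrapper); roughly:

```python
import logging

LOG = logging.getLogger(__name__)

# Before: warn() is only a deprecated alias for warning().
LOG.warn("transient gate failure, rechecking")

# After: the cleanup patch switches callers to the canonical name.
LOG.warning("transient gate failure, rechecking")
```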
18:33:15 <SumitNaiksatam> #topic Open Discussion
18:33:42 <rkukura> anything related to the summit session proposal deadline on Monday?
18:33:43 <SumitNaiksatam> i think igordcard_ you ran into some problems earlier with liberty devstack setup, but you are past those now?
18:33:53 <SumitNaiksatam> rkukura: ah good one
18:34:01 <igordcard_> yes, I was trying to stack master but I was running into some problems
18:34:09 <igordcard_> with the liberty release it's fine
18:34:40 <SumitNaiksatam> igordcard_: okay, i haven't tried master of late, the upstream gate job seems to work, so i imagined it might have been okay
18:34:50 <SumitNaiksatam> igordcard_: what was the problem you were seeing?
18:35:57 <SumitNaiksatam> so on the topic of submissions, there is a plan to submit a hands-on lab session showing how services work in our framework
18:36:12 <igordcard_> I was able to stack in the end, but the CLI wouldn't work (I figured it was because loading the creds from accrc doesn't work anymore)
18:36:17 <SumitNaiksatam> if it gets accepted i imagine we will all work on this together as a team
18:36:28 <igordcard_> and I was also getting too many connections in mysql because of heat-engine
18:36:36 <SumitNaiksatam> igordcard_: oh okay, interesting
18:37:03 <igordcard_> as for the stacking issue, let me check here in the old logs
18:37:39 <SumitNaiksatam> any other talks or labs or demos we want to plan for submission?
18:38:14 <igordcard_> oh it wasn't loading the group_policy plugin (not found), I think there was some conflict of requirements (wrong instructions followed probably)
18:38:24 <SumitNaiksatam> igordcard_: okay
18:38:54 <SumitNaiksatam> hemanthravi: did you decide whether you wanted to submit a talk?
18:39:27 <hemanthravi> not yet, looking at coming up with something before mon
18:40:13 <SumitNaiksatam> hemanthravi: okay
18:40:24 <hemanthravi> any submissions other than the lab?
18:40:36 <igordcard_> could we submit something about the qos work?
18:41:08 <SumitNaiksatam> igordcard_: sure, happy to support you if you want to submit something
18:41:43 <igordcard_> it would be nice to have something for that, even though we still need to figure out the scope of qos for mitaka
18:42:38 <SumitNaiksatam> igordcard_: if nothing else it will focus us on making progress, it's a forcing function :-)
18:43:03 <igordcard_> and it's something I wanted to discuss a bit today: what's the best for now, allow qos policies to be applied to policy targets, or allow qos policies to be applied across PTGs with the drawback of not being able to use a classifier?
18:43:21 <SumitNaiksatam> #topic QoS support
18:43:24 <igordcard_> SumitNaiksatam: yeah :)
18:44:12 <SumitNaiksatam> igordcard_: sorry, did not understand the latter part - “with the drawback of not being able to use a classifier?”
18:44:56 <SumitNaiksatam> ideally we should be doing things at the granularity of the group, right?
18:45:00 <igordcard_> SumitNaiksatam: the current QoS API only allows qos policies to be set on ports (or networks, which in turn apply to each port of the network)
18:45:17 <SumitNaiksatam> igordcard_: right, right
18:45:20 <igordcard_> so we cannot say, for this PRS with this classifier, apply this qos rule
18:47:21 <SumitNaiksatam> igordcard_: but a PRS always participates in a relationship with PTGs
18:47:29 <igordcard_> I don't know if the new classifier will be in mitaka
18:47:40 <SumitNaiksatam> igordcard_: ah the issue is with the classifier
18:47:57 <igordcard_> yes, we cannot scope to specific traffic
18:48:16 <SumitNaiksatam> can we perhaps put a constraint that the resource mapping driver can only support QoS for “any” traffic?
18:48:33 <SumitNaiksatam> and later relax this constraint, as and when support for classifier is available?
18:48:42 <igordcard_> that is one of the possibilities I see for now
18:49:11 <igordcard_> the only possibility for PRS
18:49:19 <SumitNaiksatam> ok
18:49:34 <SumitNaiksatam> seems reasonable to me to start with this
18:49:42 <SumitNaiksatam> rkukura: hemanthravi songole: what do you think?
18:49:45 <rkukura> makes sense to me
18:50:09 <SumitNaiksatam> rkukura: i think we would prefer that policy always be applied at the group level, right?
18:50:17 <igordcard_> also, the policy will in reality be applied to each of the PTs
18:51:07 <igordcard_> so if the PRS says that the bandwidth limit is 1Mb, it will be 1Mb for each of the PTs of the consuming PTG
18:51:17 <SumitNaiksatam> igordcard_: yes, and that is fine
18:51:22 <igordcard_> okay
18:51:24 <rkukura> PTG vs. PT, or PTG vs PRS?
18:51:27 <SumitNaiksatam> igordcard_: oh wait
18:51:46 <igordcard_> because it is at the neutron port level
18:52:01 <SumitNaiksatam> igordcard_: hmmm, i think we will need to think through that
18:53:00 <SumitNaiksatam> i don't think applying policy to a PTG implies that we are allocating a cumulative “quota” for that PTG (like bandwidth in this case)
18:53:16 <SumitNaiksatam> hence my initial reaction was that it seems fine
18:53:59 <SumitNaiksatam> however, in this case it seems like we would have to think of the cumulative implications
18:54:29 <SumitNaiksatam> rkukura: i am not sure i understood your question
18:55:01 <rkukura> could we make a distinction between PTG-level bandwidth being per-PT vs. total?
18:55:42 <rkukura> SumitNaiksatam: I was just trying to understand your “applied at the group level” question
18:56:38 <SumitNaiksatam> rkukura: okay, yeah i think “with the drawback of not being able to use a classifier?” makes sense
18:56:41 <igordcard_> rkukura: at an implementation level it would not be possible by simply using what neutron provides today.. qos rules per port
18:57:07 <SumitNaiksatam> rkukura: sorry, copy paste error
18:57:25 <SumitNaiksatam> i meant, “with the drawback of not being able to use a classifier?” makes sense
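To make the port-level mechanics concrete, here is a rough sketch of what a PoC could do with the Liberty Neutron QoS API through python-neutronclient, under the "any traffic" constraint discussed above. The helper itself is hypothetical, and the client method names are believed to match the Liberty QoS client support but should be double-checked against the PoC.

```python
# Hypothetical PoC-level helper (not the actual resource mapping driver
# code): create one Neutron QoS policy with a bandwidth limit rule and
# attach it to each policy target's port.
from neutronclient.v2_0 import client as neutron_client


def apply_prs_bandwidth_limit(neutron, pt_port_ids, max_kbps):
    """Apply a per-port bandwidth limit to every PT in the consuming PTG.

    Note the semantics discussed above: because Neutron applies QoS per
    port, the limit is per policy target, not cumulative for the PTG.
    """
    policy = neutron.create_qos_policy(
        {'policy': {'name': 'gbp-prs-bw-limit'}})['policy']
    neutron.create_bandwidth_limit_rule(
        policy['id'], {'bandwidth_limit_rule': {'max_kbps': max_kbps}})
    for port_id in pt_port_ids:
        neutron.update_port(port_id,
                            {'port': {'qos_policy_id': policy['id']}})
    return policy['id']

# Usage sketch (authentication details omitted):
# neutron = neutron_client.Client(session=keystone_session)
# apply_prs_bandwidth_limit(neutron, pt_port_ids, max_kbps=1000)
```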
18:57:35 <SumitNaiksatam> we have three minutes
18:57:55 <SumitNaiksatam> igordcard_: how about you at least put these thoughts in a spec and post it?
18:57:56 <igordcard_> I can make a PoC for the qos action in the PRS
18:58:10 <rkukura> sounds good to me
18:58:13 <igordcard_> SumitNaiksatam: okay I can create a spec with the thoughts
18:58:20 <SumitNaiksatam> might be much easier to discuss on the spec, and we can certainly follow up in the weekly meetings
18:58:28 <SumitNaiksatam> igordcard_: yes sure
18:58:28 <igordcard_> but it won't yet dive into how it would work internally
18:58:54 <igordcard_> after the PoC I can update it with more detail
18:59:09 <SumitNaiksatam> igordcard_: np, we understand that this is WIP
18:59:51 <SumitNaiksatam> but i think having the spec will get the whole team on the same page and up to speed with the thinking so far
18:59:51 <igordcard_> it's all from my side then
18:59:58 <SumitNaiksatam> igordcard_: thanks!
19:00:02 <igordcard_> SumitNaiksatam: I see , yeah
19:00:22 <SumitNaiksatam> igordcard_: assuming you are going to be available, we can make this a weekly discussion topic
19:00:36 <SumitNaiksatam> alright, thanks everyone for your time today!
19:00:38 <SumitNaiksatam> bye
19:00:43 <rkukura> bye
19:00:48 <tbachman> bye!
19:00:54 <SumitNaiksatam> #endmeeting