18:01:36 #startmeeting networking_policy
18:01:37 Meeting started Thu Apr 30 18:01:36 2015 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:38 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:40 The meeting name has been set to 'networking_policy'
18:01:47 hello all
18:01:55 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#April_30th.2C_23rd.2C_2015
18:02:11 #topic Bugs
18:03:04 i think only the backport for this is not merged: #link https://bugs.launchpad.net/group-based-policy/+bug/1432779
18:03:04 Launchpad bug 1432779 in Group Based Policy "redirect actions don't work with external policies" [Critical,In progress] - Assigned to Ivar Lazzaro (mmaleckk)
18:03:31 other than that, i don't think we have any reported outstanding critical bugs
18:04:04 any other bugs to discuss today?
18:04:47 okay, perhaps not
18:04:58 #topic Functional/Integration Tests
18:05:10 #link https://review.openstack.org/#/c/174267
18:05:54 the above patch starts running jishnu’s test suite, which performs integration testing by exercising the GBP REST and client interfaces
18:06:06 nice!
18:06:09 at this point all tests are passing in that test suite
18:06:21 do we know what is covered by the tests?
18:06:22 we have to fix some bugs which this test suite caught
18:06:46 it covers all the resources we had in juno
18:07:03 and some scenarios
18:07:05 are tests disabled due to bugs?
18:07:16 rkukura: no tests are currently disabled
18:07:29 i mean no tests in that test suite are disabled
18:07:49 if they all pass, why do we have to fix some bugs?
18:07:59 rkukura: they were not passing before
18:08:10 OK, so they have already been fixed?
18:08:19 yeah, as of yesterday
18:08:35 so you see the job passing today
18:08:41 great, I was confused by “we have to fix” implying they weren’t already fixed
18:08:49 rkukura: +1 :)
18:09:09 oh, sorry, i meant to say “had” not “have”…my bad
18:09:22 no problem
18:09:36 in some cases the bugs were fixed but the tests were not updated
18:09:50 in the sense that the tests were not checking for the right result
18:09:57 so a combination of things
18:10:08 but anyway, right now it's clean
18:10:34 it would have been good to have jishnu in this meeting to answer specific questions on the extent of coverage
18:10:45 but he is traveling right now
18:11:31 the patch itself is just a shell script enabling the tests
18:12:14 SumitNaiksatam: is the test suite published anywhere?
18:12:20 yapeng: songole: hi, thanks for joining
18:12:29 hello
18:12:33 SumitNaiksatam: It would be good to have a feeling of which datapath scenarios are covered
18:12:36 songole: hi
18:12:41 do the same tests run for both master and stable/juno?
18:12:47 ivar-lazzaro: hi
18:13:38 ivar-lazzaro: firstly, the test suite is not doing data path testing
18:14:06 ivar-lazzaro: the tests are published as a pypi package
18:14:31 rkukura: the same tests run for both master and stable/juno
18:15:29 SumitNaiksatam: What will we do when master diverges from stable/juno so that we need different tests? Can we branch the test repo too?
18:15:37 rkukura: yes
18:15:54 rkukura: right now the test suite mostly covers the juno features
18:17:04 any other questions on this test suite?
18:18:52 hopefully we can get the patch merged soon
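To give an idea of what "exercising the GBP REST interfaces" looks like, the sketch below shows the kind of check such an integration test might issue. The endpoint URL, token handling, and helper names are illustrative assumptions and are not taken from jishnu's suite; the "grouppolicy" URI prefix reflects how GBP extends the Neutron API as far as this discussion goes.

    import requests

    NEUTRON_URL = "http://127.0.0.1:9696/v2.0"   # assumed devstack Neutron endpoint
    HEADERS = {
        "X-Auth-Token": "<keystone-token>",      # placeholder; obtain from keystone
        "Content-Type": "application/json",
    }

    def create_policy_target_group(name):
        # POST a minimal policy target group through the GBP extension of the
        # Neutron API and return its id.
        body = {"policy_target_group": {"name": name}}
        resp = requests.post(NEUTRON_URL + "/grouppolicy/policy_target_groups",
                             json=body, headers=HEADERS)
        resp.raise_for_status()
        return resp.json()["policy_target_group"]["id"]

    def delete_policy_target_group(ptg_id):
        # Clean up the group created above.
        resp = requests.delete(NEUTRON_URL + "/grouppolicy/policy_target_groups/"
                               + ptg_id, headers=HEADERS)
        resp.raise_for_status()

    if __name__ == "__main__":
        ptg_id = create_policy_target_group("smoke-test-ptg")
        delete_policy_target_group(ptg_id)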
18:18:56 #topic Packaging Update
18:19:13 rkukura: anything to update here?
18:20:02 I have not made any progress on fedora/RDO packages (was on PTO last week).
18:20:15 I checked today and it looks like all the tarballs on launchpad are at least a month old. Are we waiting for new ones?
18:20:29 rkukura: we haven't cut kilo-3 yet
18:20:36 that’s what I thought
18:20:45 if that's what you are looking for
18:20:59 that, and a new stable/juno release I think
18:21:07 or were you looking for another stable/juno?
18:21:13 ah, just asking that
18:21:38 I can certainly update to the stable/juno tarballs we have, but thought we had lots of fixes since then
18:21:39 rkukura: okay, let's sync up after this meeting on this
18:21:42 sure
18:22:04 #topic Kilo Sync
18:22:05 I’d like to get the fedora packages updated soon enough that the RDO packages will be updated by the summit.
18:22:23 rkukura: okay, let's work towards getting you what you need for that
18:22:32 SumitNaiksatam: thanks
18:22:42 rkukura: i will ping you after this meeting
18:23:04 regarding kilo sync
18:23:42 the last vestiges of kilo sync are complete, meaning the GBP-UI and GBP-Automation projects are in sync with kilo
18:24:00 also the kilo-gbp devstack branch is functional with kilo
18:24:35 SumitNaiksatam, nice
18:24:38 by functional with kilo I mean it uses stable/kilo branches for openstack projects
18:24:48 SumitNaiksatam, was the kilo-gbp devstack branch broken last saturday?
18:25:02 igordcard: yes, it was broken at different times :-)
18:25:08 SumitNaiksatam, okay
18:25:18 i have tested this, but please let me know if you see any issues
18:25:22 mac died, sorry about that
18:25:30 ivar-lazzaro: np
18:25:47 did I miss anything exciting?
18:25:58 the kilo-gbp branch itself is based on upstream devstack stable/juno
18:26:06 ivar-lazzaro: no, same old boring stuff! ;-)
18:26:26 last night (your afternoon) kilo-gbp ran without any issues here
18:26:27 at this point i consider the kilo-sync activity to be complete
18:26:34 ivar-lazzaro: ah okay, good to know
18:27:11 one thing to note, especially rkukura - the gbp client has also progressed to sync with kilo
18:27:31 it means that for using GBP stable/juno we have to use the 0.9.1 version of the client
18:28:03 SumitNaiksatam: For fedora and RDO, we’ll need to be compatible with the client versions included in those releases
18:28:06 by version, i mean it's a 0.9.1 tag in the repo
18:28:29 will python-openstackclient be extended to support gbp?
18:28:44 rkukura: so 0.9.1 should work with openstack stable/juno
18:29:34 igordcard: my understanding is that python-openstackclient has a modular architecture such that we could extend it without us having to push anything into python-openstackclient, is that correct?
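For context on the plugin mechanism being referred to: python-openstackclient discovers out-of-tree clients through setuptools entry points, and a plugin module typically exposes the small interface sketched below. The module, package, and entry-point names used here (e.g. "grouppolicy", gbpclient.v2_0.client.Client) are hypothetical placeholders for what a GBP plugin might look like, not an existing implementation.

    # Hypothetical GBP plugin module for python-openstackclient. It would be
    # registered in setup.cfg under the "openstack.cli.extension" entry-point
    # group, with the individual command classes registered under a group such
    # as "openstack.grouppolicy.v2_0".

    API_NAME = "grouppolicy"                      # hypothetical plugin name
    API_VERSION_OPTION = "os_grouppolicy_api_version"
    API_VERSIONS = {
        "2.0": "gbpclient.v2_0.client.Client",    # hypothetical client class path
    }

    def make_client(instance):
        # Called by openstackclient to build the versioned client for this API.
        from gbpclient.v2_0 import client as gbp_client   # hypothetical module
        return gbp_client.Client(session=instance.session)

    def build_option_parser(parser):
        # Hook for adding plugin-specific global CLI options.
        parser.add_argument(
            "--os-grouppolicy-api-version",
            metavar="<grouppolicy-api-version>",
            default="2.0",
            help="Group Based Policy API version (default: 2.0)")
        return parser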
18:30:27 SumitNaiksatam, I haven't seen the internals of python-openstackclient but the individual clients seem to be getting deprecated, so I asked
18:31:07 igordcard: i believe this discussion came up in the context of neutron as well, and the last i heard they were not planning to move immediately (or at least were not being forced to)
18:31:18 but i have not been up to date with that
18:31:24 that said, i don't think we are targeting any of this activity for kilo
18:31:32 unless we are forced to
18:31:43 of course, it's definitely something we do plan to do
18:31:53 igordcard: in case you want to scope out what it involves, that would be great
18:32:20 SumitNaiksatam, I can do that
18:32:20 just to assess the amount of work, both from a technical and resource perspective
18:32:25 igordcard: great, thanks
18:32:56 #topic Floating IP
18:33:07 i think magesh is on PTO
18:33:28 but i did notice that he addressed the review comments and has posted a new version of the spec
18:33:37 Spec #link https://review.openstack.org/#/c/167174
18:33:48 Implementation #link https://review.openstack.org/#/c/167174/
18:34:31 at this point i believe whatever was discussed in the numerous discussions has been captured and mostly implemented
18:34:55 since magesh is not here, comments will have to go to the review
18:35:53 #topic Service Chain provider refactor
18:36:08 Spec #link https://review.openstack.org/#/c/174118
18:36:17 Impl #link https://review.openstack.org/#/q/status:open+project:stackforge/group-based-policy+branch:master+topic:bp/sc-refactor,n,z
18:36:46 so there were a few revs on the spec, and ivar-lazzaro has posted WIP patches as well per the current state of the spec
18:36:54 #link https://review.openstack.org/#/q/status:open+project:stackforge/group-based-policy+branch:master+topic:bp/node-centric-chain-plugin,n,z
18:37:34 The topic is different, just to be more specific
18:37:49 there was no blueprint yet in launchpad
18:38:15 ivar-lazzaro: okay i will add it
18:38:48 SumitNaiksatam: it's there now, just with a different topic #link https://blueprints.launchpad.net/openstack/?searchtext=node-centric-chain-plugin
18:39:16 SumitNaiksatam: I arbitrarily chose it, but we can change it at any time if we want to
18:40:30 ivar-lazzaro: nit comment - i prefer calling this a “nodes composition plugin” NCP
18:41:02 SumitNaiksatam: I like node composition better too
18:41:13 SumitNaiksatam: however I would use "Chain" instead of "Plugin"
18:41:31 ivar-lazzaro: i would intentionally avoid “chain"
18:41:39 SumitNaiksatam: plugin would be implicit in this case, but it's at least clear that it's a servicechain implementation
18:41:43 other than that, the ability to pass service-specific key-value pairs also needs to be there
18:42:21 SumitNaiksatam: about that, I was wondering if we need to introduce the concept of "tag"
18:42:35 It would be a different URI, something like /tags
18:42:56 in which you could define metadata for all the GBP objects (and SC) without changing them directly
18:43:33 The main point of Tags is that they will be for consumption of entities outside of GBP itself
18:43:39 (eg. UI, Automation, etc...)
18:43:39 that's definitely an option, as we discussed
18:44:17 i would prefer to not call it tags, since we also have the notion of policy “labels” and it can be confusing
18:44:29 So that every "internal" API (eg. service_profile, that needs to be consumed by the Node Drivers) can be defined by APIs and Extensions
18:44:30 but names aside, it would be good to hear from the rest of the team on this
18:45:20 SumitNaiksatam: Difference between labels and tags is that the latter won't be understood by internal GBP components
18:45:39 SumitNaiksatam: but we can choose a different name for that nonetheless
18:45:59 ivar-lazzaro: i don't think it will ever be the case that a node driver does not use a service-specific meta-data attribute - that is an implementation detail for the node driver
18:46:37 SumitNaiksatam: right, but since the node understands it, then it could as well just be an extension of service_profile
18:46:47 SumitNaiksatam: instead of a generic key-value pair
18:47:07 the benefit of having this common metadata resource approach is that it allows us to associate arbitrary meta-data with any GBP resource
18:47:13 SumitNaiksatam: while for externally consumed metadata, we can use tags (of course the driver could tag a node)
18:47:21 This is not sounding very intent-oriented :(
18:47:58 the downside is that it requires the user to correlate information
18:48:05 rkukura: tags are not intended to change the resource behavior, they are useful for external automation more than anything
18:48:22 rkukura: for example: which UI should I use to manage this Node?
18:49:00 I don’t see how “nodes” have much to do with “intent”
18:49:17 rkukura: i agree that at the level of the node driver it's not an intent abstraction any more
18:49:51 rkukura: on that I agree. However keep in mind that we are talking about an operator API
18:49:54 I admit I have not been following previous discussions on this, or the BP, so I’m not even sure what a “node driver” is.
18:50:08 but the user does not need to deal with a Node directly
18:50:12 rkukura: however it's not the “nodes”, it's the “node driver”
18:50:28 ivar-lazzaro: If it's purely an operator API, I’m not as concerned.
18:50:36 igordcard: +1
18:50:43 rkukura: yes, per igordcard, the node driver is not visible to the user
18:50:49 thanks
18:50:49 igordcard: that's for operator consumption
18:51:07 rkukura: and yes, this configuration is for operational reasons
18:51:37 the confusion is happening because so far we have been dealing only with tenant-facing resources
18:51:46 rkukura: Service chain Specs/Nodes/Instances are purely Operator oriented, mostly like L2/L3 policies
18:51:53 Just to clarify, we mean “cloud operator” here, not the application deployment role, right?
18:51:55 the user simply chooses a service to be put into the service chain, eventually assigning some meaningful tags that the system supports, which could then be used internally to, e.g. as ivar-lazzaro said, show a different UI
18:52:02 rkukura: yes
18:52:16 however a new resource is being proposed here called “service profile”
18:52:33 this resource is managed by the operator (cloud operator)
18:52:45 igordcard: the user would simply choose the policy rule (REDIRECT to some chain)
18:52:46 but is visible to the user
18:53:04 ivar-lazzaro, and the tags are predefined for the nodes that compose the chain, right?
18:53:14 igordcard: how the chain is implemented and which metadata are associated with it is a Cloud Operator's concern
18:53:52 i would have preferred to not immediately model these as a resource, but hide them as operational details (driven via configuration)
18:54:02 igordcard: this may be one use, yes. But the tag resource I have in mind actually covers *all* the GBP resources for API consistency
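To make the "/tags" idea being debated above concrete, here is a rough sketch of what such a request could look like. Everything here is invented for illustration (the endpoint, the resource name "tag", and its fields); no API of this kind was agreed on in the meeting.

    import requests

    NEUTRON_URL = "http://127.0.0.1:9696/v2.0"   # assumed devstack Neutron endpoint
    HEADERS = {"X-Auth-Token": "<keystone-token>",
               "Content-Type": "application/json"}

    # A key-value record attached to an existing GBP resource without changing
    # that resource's own schema; intended for external consumers (UI,
    # automation), not for GBP's internal policy logic.
    tag = {
        "tag": {
            "resource_type": "servicechain_node",   # which kind of object is tagged
            "resource_id": "<node-uuid>",            # placeholder id
            "key": "ui_panel",
            "value": "firewall-advanced",
        }
    }

    resp = requests.post(NEUTRON_URL + "/grouppolicy/tags", json=tag,
                         headers=HEADERS)
    resp.raise_for_status()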
18:54:57 SumitNaiksatam: but this would mean adding key-value attributes to certain resources, so an API change is still required, right?
18:55:20 has any thought been given to allowing users to influence the behaviour of resources by applying tags themselves?
18:55:54 ivar-lazzaro: in that case the service profile is completely internal, you only expose its description to the user
18:55:55 igordcard: I think those are "labels", and no, we haven't discussed them yet
18:56:00 in this case the tag resource would probably be something else
18:56:11 ivar-lazzaro, yeah
18:56:30 let's not call them tags, since it causes confusion, i think meta-data is more appropriate here :-)
18:56:46 otherwise we will go around in circles even in this small group
18:57:04 SumitNaiksatam: I say we call them Endpoints!
18:57:36 ivar-lazzaro: amen
18:58:07 SumitNaiksatam: eheh :)
18:58:08 so, just to clarify, you intend to have service profiles defined by operators and used by tenants
18:58:19 igordcard: yes
18:58:33 perhaps we might not expose all attributes to users
18:59:23 and "config" is the field where the user can input additional constraints/metadata/configs for the service?
18:59:48 per https://review.openstack.org/#/c/174118/7/specs/kilo/gbp-service-chain-driver-refactor.rst : "Node Driver": "This configures the service based on the “config” provided in the Service Node definition."
18:59:49 igordcard: I actually think that the user should never interact with the services
18:59:53 igordcard: we already have a config in the “node” definition
19:00:42 igordcard: the meta-data i am referring to goes beyond that “config"
19:01:04 The final user should only think "intent"
19:01:18 ivar-lazzaro: true, the user should never directly interact with the service, that doesn't happen today either
19:01:54 okay we are a minute over
19:02:04 #topic Vancouver Summit prep
19:02:14 ivar-lazzaro: rkukura: anything quick to discuss?
19:02:18 ivar-lazzaro, yes, yes, but if the intent can be expressed in a richer way, it may be beneficial
19:02:22 igordcard: you are coming to the summit?
19:02:30 yapeng: you are coming to the summit?
19:02:31 SumitNaiksatam, unfortunately not :(
19:02:38 igordcard: oh bummer!
19:02:42 SumitNaiksatam: I'm trying to figure out a way to implement shared PRS for the RMD
19:02:50 yes i will come
19:02:54 ivar-lazzaro: ah correct
19:02:55 SumitNaiksatam: I have a couple of ideas but I'm still trying to validate them
19:03:07 ivar-lazzaro: that part was challenging as you mentioned
19:03:10 Should we discuss them in the gbp channel?
19:03:14 ivar-lazzaro: thanks for taking that up
19:03:19 ivar-lazzaro: sure
19:03:26 yapeng: good
19:03:35 I'd like to hear from the rest of the team, especially those with a richer Neutron background
19:03:50 SumitNaiksatam: unfortunately, I cannot
19:04:06 I won't be able to go to the summit
19:04:10 Yi: sorry, i did not notice that you had joined; earlier i checked and you were not here
19:04:17 Yi: damn, that's a bummer
19:04:22 was stuck in another meeting
19:04:24 ivar-lazzaro: Is there something for us to look at to understand your ideas?
19:04:26 Yi: let's sync up offline on that
19:04:33 sure
19:04:59 rkukura: I have nothing besides some code, but it's not published yet... Maybe we can discuss it on IRC and then I can publish a WIP?
19:05:14 ivar-lazzaro: sure
19:05:18 rkukura: I wanted to write a spec but I don't know yet what will work! So it's not easy
19:05:50 one last thing
19:05:51 ivar-lazzaro: IRC, email, hangout, whatever works best
19:05:55 #topic Refactor feature branch
19:06:05 yi and yapeng’s patches are finally all in
19:06:20 cool :)
19:06:21 rkukura: anything is good for me, the sooner we discuss the better. Do you have time after the meeting?
19:06:22 Yi: thanks so much for pursuing the last patch
19:06:37 we will work towards merging them from the feature branch once we are done with the summit
19:06:38 my pleasure
19:06:45 yapeng: Yi: Thanks guys!
19:06:51 okay let's move to -gbp
19:07:00 thanks all, and apologies for going over
19:07:02 bye!
19:07:06 cya
19:07:08 bye
19:07:08 ciaooo
19:07:09 #endmeeting