18:02:12 #startmeeting networking_policy
18:02:13 Meeting started Thu Oct 1 18:02:12 2015 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:17 The meeting name has been set to 'networking_policy'
18:02:43 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Oct_1st_2015
18:03:01 #topic Stable Release
18:03:27 per rkukura’s request, i put out stable releases for both juno and kilo
18:03:36 SumitNaiksatam: thanks!
18:03:42 a couple of days back, signed tarballs were posted
18:03:50 rkukura: np
18:04:37 just wanted the team to note, the GBP service release version is one minor version ahead of the gbp automation and gbp UI
18:04:50 so 2014.2.3 versus 2014.2.2
18:05:40 perhaps it might be better to get these back in sync, just so that it's easier to track
18:05:48 but for now that's where it's at
18:05:58 #topic Packaging
18:06:24 so the main follow up for the stable release is with regards to the packaging
18:06:36 rkukura: any hiccups with using the tarballs?
18:07:10 OK, I’m working on updating the Fedora 22 packages to the latest stable/juno releases and the Fedora 23 packages to the latest stable/kilo releases.
18:07:29 No issues so far, but I haven’t done any testing of the packages yet.
18:07:39 rkukura: okay sure
18:07:56 The plan is for someone from Red Hat to then update the RDO juno and kilo packages based on these Fedora packages.
18:08:06 hi, sorry I'm late
18:08:17 rkukura: were you able to deploy any of the newly generated packages in your local setup?
18:08:23 ivar-lazzaro: hi, np
18:08:44 ivar-lazzaro: just discussing the stable release and packaging follow up
18:08:57 k
18:09:00 SumitNaiksatam: I have not yet tried installing any of the packages. I’ll do that once they are all ready, on fresh F22 and F23 VMs.
18:09:25 rkukura: okay, mainly asking because of the neutronclient version issue that you were mentioning earlier
18:09:40 Once the Fedora kilo packages are ready, these will be moved into Red Hat’s new Delorean build system.
18:10:00 rkukura: okay good to know
18:10:13 rkukura: do the packages need to be built differently for this new system?
18:10:28 SumitNaiksatam: I’m currently using the previous juno client that is compatible with the python-neutronclient that is in F22.
18:10:47 rkukura: okay
18:10:49 There will be some tweaks to the spec files for Delorean, but nothing major.
18:11:09 rkukura: okay
18:11:22 The nice thing is that Delorean monitors the git repo and triggers new packages on every commit, and I think runs some CI on them.
18:11:56 rkukura: nice, so in our case what would the CI do?
18:12:02 And RDO is using these.
18:12:26 I think the CI would be the same as or similar to the functional tests.
18:12:45 I’ll find out more details when we are ready.
18:13:30 rkukura: okay, i imagine we would have to tell their CI what it should exercise at our end, and if that's the case we can perhaps leverage our existing gate jobs
18:13:31 My understanding is that the OpenStack services will no longer be included in Fedora as of F24, only the clients.
18:13:44 rkukura: okay
18:14:27 Is our current plan to update the GBP service to work with liberty neutron as a service plugin?
18:14:57 rkukura: yes, that would probably be the first step
18:15:44 So that version would probably only be packaged through Delorean, not manually packaged for Fedora.
18:16:01 Any questions on the Fedora/RDO packaging?
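As an aside, a minimal sketch of one way to spot the version skew mentioned above (service at 2014.2.3 versus automation/UI at 2014.2.2) on an installed system. The distribution names listed are assumptions for illustration, not confirmed package names:

    # Report installed GBP-related package versions so a skew like
    # 2014.2.3 vs 2014.2.2 is easy to spot. Distribution names below
    # are assumptions, not confirmed package names.
    import pkg_resources

    ASSUMED_DISTS = [
        'group-based-policy',             # GBP service (assumed name)
        'group-based-policy-automation',  # GBP automation (assumed name)
        'group-based-policy-ui',          # GBP UI (assumed name)
        'gbpclient',                      # GBP CLI client (assumed name)
    ]

    for name in ASSUMED_DISTS:
        try:
            print(name, pkg_resources.get_distribution(name).version)
        except pkg_resources.DistributionNotFound:
            print(name, 'not installed')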
18:16:23 Oh, if anyone is aware of any packaging-related bugs, please let me know.
18:16:45 I did fix https://bugzilla.redhat.com/show_bug.cgi?id=1193642
18:16:45 bugzilla.redhat.com bug 1193642 in python-gbpclient "GBP bash-completion file not installed" [Unspecified,New] - Assigned to rk
18:16:45 rkukura: how frequently can we update these packages once you are done with the current iteration of fixing them?
18:17:27 rkukura: nice, not having bash completion was inconvenient
18:17:30 SumitNaiksatam: As frequently as we need to. With Delorean, this will happen automatically on every stable branch commit.
18:17:50 rkukura: okay
18:17:55 rkukura: thanks for the update
18:18:20 #topic Integration tests
18:18:44 SumitNaiksatam: in treeeee, lalalalalalalalaaa
18:19:08 ivar-lazzaro: yes indeed, and catching issues
18:19:15 thanks for pep8ifying the integration tests
18:19:41 i believe jishnu is trying to merge a few more tests
18:20:41 so for everyone else, you could add integration tests with your patch if you wanted to
18:21:15 * ivar-lazzaro thinks we should make that mandatory at some point
18:21:46 ivar-lazzaro: yes, i was going to say
18:22:04 ivar-lazzaro: for now what is mandatory is that every patch should pass the existing suite
18:22:20 so if the tests break with your patch then you have to fix them
18:22:29 SumitNaiksatam: yeah, still need to do some things with the tests before making them mandatory imho
18:22:37 ivar-lazzaro: right
18:22:53 for instance, a "How to write functional tests" guide
18:22:58 subsequently, we will require some test coverage with every patch
18:23:01 ivar-lazzaro: absolutely
18:23:12 and a way to organize tests by type (smoke, grenade, etc.)
18:23:36 we don't want to run every test every time I think... 30 min is already enough time spent
18:23:55 ivar-lazzaro: right, and we would have separate jobs for those as well
18:24:01 we need Jishnu's wisdom on this matter :)
18:24:22 #topic Bugs
18:24:44 as a higher priority we were tracking some bugs over the past few weeks
18:25:43 rkukura: any chance you got time to look at #link https://bugs.launchpad.net/group-based-policy/+bug/1470646 ?
18:25:43 Launchpad bug 1470646 in Group Based Policy "Deleting network associated with L2P results in infinite loop" [High,Triaged] - Assigned to Robert Kukura (rkukura)
18:26:10 Not yet, but I will today or tomorrow
18:26:35 rkukura: okay
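Picking up ivar-lazzaro's earlier point about organizing tests by type (smoke, grenade, etc.) so that separate jobs can run only a subset, here is a minimal, hypothetical sketch of one tagging scheme; the decorator and environment variable are assumptions, not the project's actual mechanism:

    # Hypothetical sketch: tag integration tests by type so a job can
    # select, e.g., only smoke tests via an assumed GBP_TEST_TYPE env var.
    import os
    import unittest


    def test_type(*types):
        """Skip the test unless its type matches the selected type."""
        def wrapper(func):
            selected = os.environ.get('GBP_TEST_TYPE')  # e.g. 'smoke'
            if selected and selected not in types:
                return unittest.skip('not a %s test' % selected)(func)
            return func
        return wrapper


    class TestPolicyTargetGroup(unittest.TestCase):

        @test_type('smoke')
        def test_ptg_create(self):
            # placeholder assertion standing in for a real integration check
            self.assertTrue(True)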
18:27:04 mageshgv: how about #link https://bugs.launchpad.net/group-based-policy/+bug/1489090 and #link https://bugs.launchpad.net/group-based-policy/+bug/1418839 ?
18:27:04 Launchpad bug 1489090 in Group Based Policy "GBP: Resource intergrity fails between policy-rule-set & external-policy" [High,Confirmed] - Assigned to Magesh GV (magesh-gv)
18:27:05 Launchpad bug 1418839 in Group Based Policy "Service config in yaml cannot be loaded" [High,In progress] - Assigned to Magesh GV (magesh-gv)
18:27:31 SumitNaiksatam: I did not get a chance to do this, will do it soon
18:27:37 mageshgv: ok thanks
18:27:56 i also noticed this issue yesterday #link https://bugs.launchpad.net/group-based-policy/+bug/1501657
18:27:56 Launchpad bug 1501657 in Group Based Policy "Explicit router provided for L3 Policy creation is not used" [High,New]
18:28:27 i am not sure what a good use case for this would be, but there was someone in the community who wanted to do this
18:28:54 I can look into that if you’d like
18:29:06 when i tried doing this, the router ID i provided while creating the L3P was not used; instead a new one got created
18:29:12 rkukura: that would be great
18:29:27 also i realized that our CLI does not even support this option of passing the routers list
18:29:42 ugh
18:29:45 rkukura: so i will fix the CLI and perhaps you can look at it after that
18:29:51 ok
18:29:57 so that you can actually reproduce the issue :-)
18:30:06 makes it easier
18:30:13 i fixed the CLI in my local setup to test it
18:30:22 okay, any other critical bugs?
18:30:39 there is a vendor bug reported as critical, which i think should be de-prioritized
18:30:52 Is Jishnu going to retest https://bugs.launchpad.net/group-based-policy/+bug/1417312?
18:30:52 Launchpad bug 1417312 in Group Based Policy "GBP: Existing L3Policy results in failure to create PTG with default L2Policy" [High,In progress] - Assigned to Robert Kukura (rkukura)
18:31:07 rkukura: yeah, i just remembered that when i saw the list
18:31:21 rkukura: i will request him to do that
18:31:27 thanks
18:31:43 I’ll be there Monday and can work with him on it if needed
18:31:52 rkukura: yeah that would be great
18:32:02 #topic Node Composition Plugin enhancements
18:32:42 ivar-lazzaro: thanks for posting the new spec #link https://review.openstack.org/#/c/228693
18:32:54 for traffic stitching and proxy groups
18:33:19 SumitNaiksatam: a spec on the PT HA will come as well
18:33:28 ivar-lazzaro: nice!!
18:33:35 will make mageshgv happy i guess :-)
18:34:01 SumitNaiksatam :-)
18:34:46 mageshgv: thanks for the review on those two patches and getting them merged earlier today
18:34:46 SumitNaiksatam: implementation for mageshgv is there already ;)
18:34:55 ivar-lazzaro: you rock!
18:35:01 ivar-lazzaro: nice!
18:35:12 #link https://review.openstack.org/#/c/229673/
18:35:22 ivar-lazzaro: i believe you wanted to discuss the context elevation issue?
18:35:24 failing py27 apparently, but still :)
18:35:37 ivar-lazzaro: thanks, will take a look
18:35:57 i was referring to patch #link https://review.openstack.org/222704
18:36:07 SumitNaiksatam: more than that, we have an issue with notifications for added/removed consumers to a chain
18:36:22 ivar-lazzaro: okay, so first things first
18:36:38 ivar-lazzaro: is the update for service nodes now working?
18:36:57 yes and no
18:37:09 ivar-lazzaro: okay, so we need to solve the other problem first?
18:37:09 it is working for new PTs
18:37:40 that's part of it
18:37:44 ivar-lazzaro: okay
18:37:45 so, to give some context:
18:37:50 ivar-lazzaro: yes please
18:38:18 The RMD implements a way to decide *who* owns the SCI
18:38:30 it can either be an admin, or the provider
18:39:16 However, when you are in *admin* mode, there's a conflict around who can modify the Specs and the Nodes
18:40:01 more specifically, in our initial design, the owner of the SCI would also be the owner of Specs and Nodes (except for shared stuff)
18:40:54 The use case we want to solve with this is: what if I want the provider to own the SCN and SCS, but an admin to own the SCIs?
18:41:08 ivar-lazzaro: right
18:41:29 This wouldn't work today! Because a normal tenant modifying the SCN wouldn't be able to affect instances he doesn't own
18:41:56 This is an RBAC problem,
18:42:09 a bit convoluted to solve with our current infrastructure
18:42:34 So my proposal is to simplify the problem as follows:
18:43:19 Why do you want an admin to own the SCI? The SCI per se is just a DB object, it doesn't contain any dangerous info or anything... In fact, it's pretty useful for debug and UI purposes
18:43:44 What we are really looking for (IMHO) is a way to hide the Nova/Neutron resources
18:43:50 ivar-lazzaro: i agree, in fact i would say we should make the SCI visible perhaps
18:43:59 ivar-lazzaro +1
18:44:07 So my proposal is to not change anything at all :)
18:44:08 ivar-lazzaro: right, hide the actual manifestation of the services
18:44:08 which means
18:44:24 That we can have provider-owned chains (supported already)
18:44:43 and the Node Driver can choose to use a different tenant (configured)
18:45:03 for deploying actual manifestations of the service
18:45:51 ivar-lazzaro: So we will have to replace the TScP as well then?
18:45:55 So all we need is basically a validation of this model
18:46:40 mageshgv: TScP could be extended to use a different tenant indeed
18:46:50 ivar-lazzaro: okay, got it
18:46:51 but we can even decide that this approach is so useful
18:47:05 that we want a standard way to make it possible internally
18:47:21 ivar-lazzaro: right
18:47:23 like we have get_plumbing_info, we could have a get_tenant_context API or something
18:47:38 ivar-lazzaro: okay
18:47:42 so that the node driver can decide who owns what
18:48:14 Before doing that, however, I would like to see a working implementation of this model (by extending the TScP for instance)
18:48:27 and then we can merge this approach in the reference implementation
18:49:23 we earlier had trouble deciding whether the consumer or provider should own the SCI
18:49:44 hence we decided to have the option of creating it as admin
18:49:55 SumitNaiksatam: we still have it
18:50:39 I believe there are two cases... The case where an omnipotent operator wants to own ALL THE THINGS (admin mode)
18:51:07 ivar-lazzaro: yes sure, we still have that option
18:51:32 and the case where the application deployer is given the power to decide what services to put in front of the application (way less concern-separated... but still useful)
18:52:30 From these two cases, there's a huge number of subcases that we obviously can't support
18:53:03 but we can implement almost anything starting from those 2 points... All we need is a proper workflow to show users how to achieve a certain thing
18:54:02 Pluggable drivers are good for this reason! They can experiment and create new ways of owning resources.
And we can integrate it in the NCP if we agree on some of them
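To make the idea above concrete, here is a rough, hypothetical sketch of what a get_tenant_context hook on a node driver might look like; get_plumbing_info is the existing hook being analogized, while the hook name, the mixin, and the config option below are all assumptions, not an existing GBP interface:

    # Hypothetical sketch only: a node driver hook, analogous in spirit to
    # get_plumbing_info, letting the driver decide which tenant owns the
    # actual Nova/Neutron manifestation of a service node.
    from oslo_config import cfg

    OPTS = [
        cfg.StrOpt('service_owner_tenant_id',
                   help='Tenant that owns deployed service resources '
                        '(assumed option name)'),
    ]
    cfg.CONF.register_opts(OPTS, group='node_driver')  # assumed group name


    class TenantOwnershipMixin(object):
        """Hypothetical mixin; not part of the current NCP node driver API."""

        def get_tenant_context(self, plugin_context, node_context):
            """Return (tenant_id, is_admin) to use when deploying the node."""
            tenant_id = cfg.CONF.node_driver.service_owner_tenant_id
            if tenant_id:
                # Deploy under the configured service-owner tenant, hiding
                # the concrete resources from the chain provider.
                return tenant_id, True
            # Default: provider-owned, matching today's provider-owned chains.
            return plugin_context.tenant_id, False

As noted in the discussion, the same result can be prototyped first by extending the TScP driver before promoting any such hook into the reference implementation.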
18:55:29 ivar-lazzaro: okay, so in the context of mageshgv’s requirement your recommendation is to use provider-owned SCIs?
18:56:16 SumitNaiksatam: that seems like the cleaner way to reach what he's looking for
18:56:25 That said, we had an extra problem
18:56:34 ivar-lazzaro: okay
18:56:45 Which is that notifications didn't follow the right ownership
18:56:54 (new PTs, new consumers)
18:57:26 The reason is that the notifications used to happen in the main GBP plugin (since they are always needed) and the plugin has no knowledge of the chain owner
18:57:55 So what I did was create a chain_mapping driver for service chain operations (more modularity, less code in one single class)
18:58:16 and I put the notification burden on this class, which will in turn use the right context
18:58:39 Then a third and much worse problem arose :)
18:59:01 In notifying a new consumer, we need to specify which SCI is affected
18:59:33 however, I just realized we have no way of unambiguously knowing which SCI is created by which PRS!
18:59:48 ivar-lazzaro: hmmm
19:00:20 ivar-lazzaro: i believe we only have tables to store the association of service chain specs to SCIs
19:00:28 so, even if we remove some PRSs from a consumer, we don't know which SCI is affected
19:00:33 but i guess not for PRSs
19:00:38 ivar-lazzaro: Can we retrieve PRS -> Providing PTG -> SCI?
19:00:40 We also have classifier_id
19:00:44 but that's not enough
19:01:16 mageshgv: how can we go from PRS to Providing PTG?
19:01:22 mageshgv: that's not enough either... Or rather, it is today but it won't be tomorrow
19:01:36 mageshgv: if we have multiple chains provided by the same PTG
19:01:49 mageshgv: just knowing the provider won't be enough
19:01:52 SumitNaiksatam: Today, the PRS DB gets populated with the providing and consuming PTGs
19:01:53 ok, just noticed we're over the stipulated time
19:01:58 ivar-lazzaro: okay
19:02:13 let's continue the discussion in #openstack-gbp
19:02:19 thanks all for joining
19:02:25 ok
19:02:26 bye
19:02:27 ivar-lazzaro: sorry to cut off in this channel
19:02:27 ok, bye
19:02:32 bye
19:02:38 #endmeeting
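For reference, a hypothetical sketch of the kind of mapping table that would remove the ambiguity discussed just before the close (which SCI was created for which PRS); none of the model, table, or column names below are taken from the actual GBP schema:

    # Hypothetical sketch only: an association table recording which policy
    # rule set (PRS) triggered which servicechain instance (SCI), so that
    # consumer add/remove notifications can name the affected SCI
    # unambiguously. Names are illustrative, not the real GBP schema.
    import sqlalchemy as sa
    from sqlalchemy.ext import declarative

    Base = declarative.declarative_base()


    class PRSServicechainInstanceMapping(Base):
        __tablename__ = 'gbp_prs_sci_mappings_sketch'

        servicechain_instance_id = sa.Column(sa.String(36), primary_key=True)
        policy_rule_set_id = sa.Column(sa.String(36), primary_key=True)
        provider_ptg_id = sa.Column(sa.String(36), nullable=False)
        # classifier_id alone stops being enough once a single PTG provides
        # multiple chains, which is why the PRS id is part of the key.
        classifier_id = sa.Column(sa.String(36), nullable=True)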