18:02:12 <SumitNaiksatam> #startmeeting networking_policy
18:02:13 <openstack> Meeting started Thu Oct 1 18:02:12 2015 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:17 <openstack> The meeting name has been set to 'networking_policy'
18:02:43 <SumitNaiksatam> #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Oct_1st_2015
18:03:01 <SumitNaiksatam> #topic Stable Release
18:03:27 <SumitNaiksatam> per rkukura’s request, i put out stable releases for both juno and kilo
18:03:36 <rkukura> SumitNaiksatam: thanks!
18:03:42 <SumitNaiksatam> a couple of days back, signed tarballs were posted
18:03:50 <SumitNaiksatam> rkukura: np
18:04:37 <SumitNaiksatam> just wanted the team to note, the GBP service release version is one minor version ahead of the gbp automation and gbp UI
18:04:50 <SumitNaiksatam> so 2014.2.3 versus 2014.2.2
18:05:40 <SumitNaiksatam> perhaps it might be better to get these back in sync, just so that it's easier to track
18:05:48 <SumitNaiksatam> but for now that's where it's at
18:05:58 <SumitNaiksatam> #topic Packaging
18:06:24 <SumitNaiksatam> so the main follow-up for the stable release is with regards to the packaging
18:06:36 <SumitNaiksatam> rkukura: any hiccups with using the tarballs?
18:07:10 <rkukura> OK, I’m working on updating the Fedora 22 packages to the latest stable/juno releases and the Fedora 23 packages to the latest stable/kilo releases.
18:07:29 <rkukura> No issues so far, but I haven’t done any testing of the packages yet.
18:07:39 <SumitNaiksatam> rkukura: okay sure
18:07:56 <rkukura> The plan is for someone from Red Hat to then update the RDO juno and kilo packages based on these Fedora packages.
18:08:06 <ivar-lazzaro> hi, sorry I'm late
18:08:17 <SumitNaiksatam> rkukura: were you able to deploy any of the newly generated packages in your local setup?
18:08:23 <SumitNaiksatam> ivar-lazzaro: hi, np
18:08:44 <SumitNaiksatam> ivar-lazzaro: just discussing the stable release and packaging follow-up
18:08:57 <ivar-lazzaro> k
18:09:00 <rkukura> SumitNaiksatam: I have not yet tried installing any of the packages. I’ll do that once they are all ready, on fresh F22 and F23 VMs.
18:09:25 <SumitNaiksatam> rkukura: okay, mainly asking because of the neutronclient version issue that you were mentioning earlier
18:09:40 <rkukura> Once the Fedora kilo packages are ready, these will be moved into Red Hat’s new Delorean build system.
18:10:00 <SumitNaiksatam> rkukura: okay good to know
18:10:13 <SumitNaiksatam> rkukura: do the packages need to be built differently for this new system?
18:10:28 <rkukura> SumitNaiksatam: I’m currently using the previous juno client that is compatible with the python-neutronclient that is in F22.
18:10:47 <SumitNaiksatam> rkukura: okay
18:10:49 <rkukura> There will be some tweaks to the spec files for Delorean, but nothing major.
18:11:09 <SumitNaiksatam> rkukura: okay
18:11:22 <rkukura> The nice thing is that Delorean monitors the git repo and triggers new packages on every commit, and I think runs some CI on them.
18:11:56 <SumitNaiksatam> rkukura: nice, so in our case what would the CI do?
18:12:02 <rkukura> And RDO is using these.
18:12:26 <rkukura> I think the CI would be the same as or similar to the functional tests.
18:12:45 <rkukura> I’ll find out more details when we are ready.
18:13:30 <SumitNaiksatam> rkukura: okay, i imagine we would have to tell their CI what it should exercise at our end, and if that's the case we can perhaps leverage our existing gate jobs
18:13:31 <rkukura> My understanding is that the OpenStack services will no longer be included in Fedora as of F24, only the clients.
18:13:44 <SumitNaiksatam> rkukura: okay
18:14:27 <rkukura> Is our current plan to update GBP service to work with liberty neutron as a service plugin?
18:14:57 <SumitNaiksatam> rkukura: yes, that would probably be the first step
18:15:44 <rkukura> So that version would probably only be packaged through Delorean, not manually packaged for Fedora.
18:16:01 <rkukura> Any questions on the Fedora/RDO packaging?
18:16:23 <rkukura> Oh, if anyone is aware of any packaging-related bugs, please let me know.
18:16:45 <rkukura> I did fix https://bugzilla.redhat.com/show_bug.cgi?id=1193642
18:16:45 <openstack> bugzilla.redhat.com bug 1193642 in python-gbpclient "GBP bash-completion file not installed" [Unspecified,New] - Assigned to rk
18:16:45 <SumitNaiksatam> rkukura: how frequently can we update these packages once you are done with the current iteration of fixing them
18:16:46 <SumitNaiksatam> ?
18:17:27 <SumitNaiksatam> rkukura: nice, not having bash completion was inconvenient
18:17:30 <rkukura> SumitNaiksatam: As frequently as we need to. With Delorean, this will happen automatically on every stable branch commit.
18:17:50 <SumitNaiksatam> rkukura: okay
18:17:55 <SumitNaiksatam> rkukura: thanks for the update
18:18:20 <SumitNaiksatam> #topic Integration tests
18:18:44 <ivar-lazzaro> SumitNaiksatam: in treeeee, lalalalalalalalaaa
18:19:08 <SumitNaiksatam> ivar-lazzaro: yes indeed, and catching issues
18:19:15 <SumitNaiksatam> thanks for pep8ifying the integration tests
18:19:41 <SumitNaiksatam> i believe jishnu is trying to merge a few more tests
18:20:41 <SumitNaiksatam> so for everyone else, you could add integration tests with your patch if you wanted to
18:21:15 * ivar-lazzaro thinks we should make that mandatory at some point
18:21:46 <SumitNaiksatam> ivar-lazzaro: yes, i was going to say
18:22:04 <SumitNaiksatam> ivar-lazzaro: for now what is mandatory is that every patch should pass the existing suite
18:22:20 <SumitNaiksatam> so if the tests break with your patch then you have to fix it
18:22:29 <ivar-lazzaro> SumitNaiksatam: yeah, still need to do some things with the tests before making them mandatory imho
18:22:37 <SumitNaiksatam> ivar-lazzaro: right
18:22:53 <ivar-lazzaro> for instance, a "How to write functional tests" guide
18:22:58 <SumitNaiksatam> subsequently, we will require some test coverage with every patch
18:23:01 <SumitNaiksatam> ivar-lazzaro: absolutely
18:23:12 <ivar-lazzaro> and a way to organize tests by type (smoke, grenade etc...)
18:23:36 <ivar-lazzaro> we don't want to run every test every time I think... 30 min is already enough time spent
18:23:55 <SumitNaiksatam> ivar-lazzaro: right, and we would have separate jobs for those as well
18:24:01 <ivar-lazzaro> we need Jishnu's wisdom on this matter :)
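
A minimal sketch of the test-typing idea above: tagging integration tests as 'smoke' vs 'full' so separate gate jobs can run subsets. The decorator, class, and test names are hypothetical and not the project's actual test infrastructure.

import unittest


def test_type(*types):
    """Attach type labels such as 'smoke' or 'full' to a test method."""
    def decorator(func):
        func.test_types = set(types)
        return func
    return decorator


class PolicyTargetGroupLifecycleTest(unittest.TestCase):
    """Illustrative only; not an existing GBP test class."""

    @test_type('smoke')
    def test_create_and_delete_ptg(self):
        # Fast sanity check, cheap enough to run on every patch.
        self.assertTrue(True)

    @test_type('full')
    def test_end_to_end_chain_instantiation(self):
        # Slower scenario, better suited to a periodic or separate job.
        self.assertTrue(True)


def suite_of_type(wanted):
    """Build a suite containing only the tests tagged with the wanted type."""
    suite = unittest.TestSuite()
    loader = unittest.TestLoader()
    for name in loader.getTestCaseNames(PolicyTargetGroupLifecycleTest):
        method = getattr(PolicyTargetGroupLifecycleTest, name)
        if wanted in getattr(method, 'test_types', set()):
            suite.addTest(PolicyTargetGroupLifecycleTest(name))
    return suite


if __name__ == '__main__':
    # A "smoke" gate job would run only the cheap subset.
    unittest.TextTestRunner(verbosity=2).run(suite_of_type('smoke'))
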
18:24:22 <SumitNaiksatam> #topic Bugs
18:24:44 <SumitNaiksatam> as a higher priority we were tracking some bugs over the past few weeks
18:25:43 <SumitNaiksatam> rkukura: any chance you got time to look at #link https://bugs.launchpad.net/group-based-policy/+bug/1470646 ?
18:25:43 <openstack> Launchpad bug 1470646 in Group Based Policy "Deleting network associated with L2P results in infinite loop" [High,Triaged] - Assigned to Robert Kukura (rkukura)
18:26:10 <rkukura> Not yet, but I will today or tomorrow
18:26:35 <SumitNaiksatam> rkukura: okay
18:27:04 <SumitNaiksatam> mageshgv: how about #link https://bugs.launchpad.net/group-based-policy/+bug/1489090 and #link https://bugs.launchpad.net/group-based-policy/+bug/1418839 ?
18:27:04 <openstack> Launchpad bug 1489090 in Group Based Policy "GBP: Resource intergrity fails between policy-rule-set & external-policy" [High,Confirmed] - Assigned to Magesh GV (magesh-gv)
18:27:05 <openstack> Launchpad bug 1418839 in Group Based Policy "Service config in yaml cannot be loaded" [High,In progress] - Assigned to Magesh GV (magesh-gv)
18:27:31 <mageshgv> SumitNaiksatam: I did not get a chance to do this, will do it soon
18:27:37 <SumitNaiksatam> mageshgv: ok thanks
18:27:56 <SumitNaiksatam> i also noticed this issue yesterday #link https://bugs.launchpad.net/group-based-policy/+bug/1501657
18:27:56 <openstack> Launchpad bug 1501657 in Group Based Policy "Explicit router provided for L3 Policy creation is not used" [High,New]
18:28:27 <SumitNaiksatam> i am not sure what a good use case for this would be, but there was someone in the community who wanted to do this
18:28:54 <rkukura> I can look into that if you’d like
18:29:06 <SumitNaiksatam> when i tried doing this, the router ID i provided while creating the L3P was not used, instead a new one got created
18:29:12 <SumitNaiksatam> rkukura: that would be great
18:29:27 <SumitNaiksatam> also i realized that our CLI does not even support the option of passing the routers list
18:29:42 <rkukura> ugh
18:29:45 <SumitNaiksatam> rkukura: so i will fix the CLI and perhaps you can look at it after that
18:29:51 <rkukura> ok
18:29:57 <SumitNaiksatam> so that you can actually reproduce the issue :-)
18:30:06 <rkukura> makes it easier
18:30:13 <SumitNaiksatam> i fixed the CLI in my local setup to test it
18:30:22 <SumitNaiksatam> okay, any other critical bugs?
18:30:39 <SumitNaiksatam> there is a vendor bug reported as critical, which i think should be de-prioritized
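
For reference on the explicit-routers issue above (bug 1501657), the API call the CLI should eventually expose looks roughly like the sketch below. This assumes python-gbpclient mirrors neutronclient's keystone auth arguments and that the l3_policy resource accepts a 'routers' list as in the resource-mapping extension; the credentials and router UUID are placeholders.

from gbpclient.v2_0 import client as gbp_client

gbp = gbp_client.Client(username='admin',           # placeholder credentials
                        password='secret',
                        tenant_name='demo',
                        auth_url='http://127.0.0.1:5000/v2.0')

# Ask for the L3P to map to an existing Neutron router instead of letting
# GBP implicitly create a new one (the behavior bug 1501657 reports as broken).
l3p = gbp.create_l3_policy({
    'l3_policy': {
        'name': 'l3p-with-explicit-router',
        'ip_pool': '10.0.0.0/8',
        'routers': ['<existing-router-uuid>'],
    }})
print(l3p['l3_policy']['routers'])
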
18:30:52 <rkukura> Is Jishnu going to retest https://bugs.launchpad.net/group-based-policy/+bug/1417312?
18:30:52 <openstack> Launchpad bug 1417312 in Group Based Policy "GBP: Existing L3Policy results in failure to create PTG with default L2Policy" [High,In progress] - Assigned to Robert Kukura (rkukura)
18:31:07 <SumitNaiksatam> rkukura: yeah, i just remembered that when i saw the list
18:31:21 <SumitNaiksatam> rkukura: i will request him to do that
18:31:27 <rkukura> thanks
18:31:43 <rkukura> I’ll be there Monday and can work with him on it if needed
18:31:52 <SumitNaiksatam> rkukura: yeah, that would be great
18:32:02 <SumitNaiksatam> #topic Node Composition Plugin enhancements
18:32:42 <SumitNaiksatam> ivar-lazzaro: thanks for posting the new spec #link https://review.openstack.org/#/c/228693
18:32:54 <SumitNaiksatam> for traffic stitching and proxy groups
18:33:19 <ivar-lazzaro> SumitNaiksatam: a spec on the PT HA will come as well
18:33:28 <SumitNaiksatam> ivar-lazzaro: nice!!
18:33:35 <SumitNaiksatam> will make mageshgv happy i guess :-)
18:34:01 <mageshgv> SumitNaiksatam :-)
18:34:46 <SumitNaiksatam> mageshgv: thanks for the review on those two patches and getting them merged earlier today
18:34:46 <ivar-lazzaro> SumitNaiksatam: implementation for mageshgv is there already ;)
18:34:55 <SumitNaiksatam> ivar-lazzaro: you rock!
18:35:01 <mageshgv> ivar-lazzaro: nice!
18:35:12 <ivar-lazzaro> #link https://review.openstack.org/#/c/229673/
18:35:22 <SumitNaiksatam> ivar-lazzaro: i believe you wanted to discuss the context elevation issue?
18:35:24 <ivar-lazzaro> failing py27 apparently, but still :)
18:35:37 <mageshgv> ivar-lazzaro: thanks, will take a look
18:35:57 <SumitNaiksatam> i was referring to patch #link https://review.openstack.org/222704
18:36:07 <ivar-lazzaro> SumitNaiksatam: more than that, we have an issue with notifications for added/removed consumers to a chain
18:36:22 <SumitNaiksatam> ivar-lazzaro: okay, so first things first
18:36:38 <SumitNaiksatam> ivar-lazzaro: is the update for service nodes now working?
18:36:57 <ivar-lazzaro> yes and no
18:37:09 <SumitNaiksatam> ivar-lazzaro: okay, so we need to solve the other problem first?
18:37:09 <ivar-lazzaro> it is working for new PTs
18:37:40 <ivar-lazzaro> that's part of it
18:37:44 <SumitNaiksatam> ivar-lazzaro: okay
18:37:45 <ivar-lazzaro> so, to give some context:
18:37:50 <SumitNaiksatam> ivar-lazzaro: yes please
18:38:18 <ivar-lazzaro> The RMD implements a way to decide *who* owns the SCI
18:38:30 <ivar-lazzaro> it can either be an admin, or the provider
18:39:16 <ivar-lazzaro> However, when you are in *admin* mode, there's a conflict around who can modify the Specs and the Nodes
18:40:01 <ivar-lazzaro> more specifically, in our initial design, the owner of the SCI would also be the owner of Specs and Nodes (except for shared stuff)
18:40:54 <ivar-lazzaro> The use case we want to solve with this is: what if I want the provider to own SCN and SCS, but an admin to own SCIs?
18:41:08 <SumitNaiksatam> ivar-lazzaro: right
18:41:29 <ivar-lazzaro> This wouldn't work today! Because a normal tenant modifying the SCN wouldn't be able to affect instances he doesn't own
18:41:56 <ivar-lazzaro> This is an RBAC problem,
18:42:09 <ivar-lazzaro> a bit convoluted to solve with our current infrastructure
18:42:34 <ivar-lazzaro> So my proposal is to simplify the problem as follows:
18:43:19 <ivar-lazzaro> Why do you want an admin to own the SCI? The SCI per se is just a DB object, it doesn't contain any dangerous info or anything... In fact, it's pretty useful for debug and UI purposes
18:43:44 <ivar-lazzaro> What we are really looking for (IMHO) is a way to hide the Nova/Neutron resources
18:43:50 <SumitNaiksatam> ivar-lazzaro: i agree, in fact i would say we should make SCI visible perhaps
18:43:59 <mageshgv> ivar-lazzaro +1
18:44:07 <ivar-lazzaro> So my proposal is to not change anything at all :)
18:44:08 <SumitNaiksatam> ivar-lazzaro: right, hide the actual manifestation of the services
18:44:08 <ivar-lazzaro> which means
18:44:24 <ivar-lazzaro> That we can have Provider-owned chains (supported already)
18:44:43 <ivar-lazzaro> and the Node Driver can choose to use a different tenant (configured)
18:45:03 <ivar-lazzaro> for deploying actual manifestations of the service
18:45:51 <mageshgv> ivar-lazzaro: So we will have to replace the TScP as well then?
18:45:55 <ivar-lazzaro> So all we need is basically a validation of this model
18:46:40 <ivar-lazzaro> mageshgv: TScP could be extended to use a different tenant indeed
18:46:50 <mageshgv> ivar-lazzaro: okay, got it
18:46:51 <ivar-lazzaro> but we can even decide that this approach is so useful
18:47:05 <ivar-lazzaro> that we want a standard way to make it possible internally
18:47:21 <mageshgv> ivar-lazzaro: right
18:47:23 <ivar-lazzaro> like we have get_plumbing_info, we could have a get_tenant_context API or something
18:47:38 <SumitNaiksatam> ivar-lazzaro: okay
18:47:42 <ivar-lazzaro> so that the node can decide who owns what
18:47:45 <ivar-lazzaro> node driver*
18:48:14 <ivar-lazzaro> Before going there, however, I would like to see a working implementation of this model (by extending the TScP for instance)
18:48:27 <ivar-lazzaro> and then we can merge this approach in the reference implementation
18:49:23 <SumitNaiksatam> we earlier had trouble deciding whether the consumer or provider should own the SCI
18:49:44 <SumitNaiksatam> hence we decided to have the option of creating it as admin
18:49:55 <ivar-lazzaro> SumitNaiksatam: we still have it
18:50:39 <ivar-lazzaro> I believe there are two cases... The case where an omnipotent operator wants to own ALL THE THINGS (admin mode)
18:51:07 <SumitNaiksatam> ivar-lazzaro: yes sure, we still have that option
18:51:32 <ivar-lazzaro> and the case where the application deployer is given the power to decide what services to put in front of the application (way less concern-separated... But still useful)
18:52:30 <ivar-lazzaro> From these two cases, there's a huge number of subcases that we obviously can't support
18:53:03 <ivar-lazzaro> but we can implement almost anything starting from those 2 points... All we need is a proper workflow to show users how to achieve a certain thing
18:54:02 <ivar-lazzaro> Pluggable drivers are good for this reason! They can experiment and create new ways of owning resources. And we can integrate it in the NCP if we agree on some of them
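
A rough illustration of the get_tenant_context idea mentioned above: alongside get_plumbing_info, a node driver could tell the plugin which tenant should own the concrete Nova/Neutron manifestations of the service, while the chain itself stays provider-owned. The class, method signatures, and OwnerContext type below are hypothetical, not the actual NodeDriverBase interface.

import collections

# Hypothetical container for "who should own the deployed resources".
OwnerContext = collections.namedtuple('OwnerContext', ['tenant_id', 'is_admin'])


class ExampleNodeDriver(object):
    """Illustrative only; loosely mirrors the shape of an NCP node driver."""

    def __init__(self, service_owner_tenant_id=None):
        # Optionally configured tenant that should own the service's
        # Nova/Neutron manifestations, hiding them from the provider.
        self._service_owner = service_owner_tenant_id

    def get_plumbing_info(self, context):
        # Existing-style hook: describe the policy targets this node needs.
        return {'provider': [{}], 'consumer': [{}]}

    def get_tenant_context(self, context):
        # Proposed hook: pick the owner of the deployed resources; fall back
        # to the provider (chain owner) when no service owner is configured.
        if self._service_owner:
            return OwnerContext(tenant_id=self._service_owner, is_admin=False)
        # 'provider_tenant_id' is an assumed attribute of the node context.
        return OwnerContext(tenant_id=context.provider_tenant_id, is_admin=False)
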
18:55:29 <SumitNaiksatam> ivar-lazzaro: okay, so in the context of mageshgv’s requirement your recommendation is to use provider-owned SCIs?
18:56:16 <ivar-lazzaro> SumitNaiksatam: that seems like the cleaner way to reach what he's looking for
18:56:25 <ivar-lazzaro> That said, we had an extra problem
18:56:34 <SumitNaiksatam> ivar-lazzaro: okay
18:56:45 <ivar-lazzaro> Which is that notifications didn't follow the right ownership
18:56:54 <ivar-lazzaro> (new PTs, new Consumers)
18:57:26 <ivar-lazzaro> The reason is that the notifications used to happen in the main GBP plugin (since they are always needed) and the plugin has no knowledge of the chain owner
18:57:55 <ivar-lazzaro> So what I did was to create a chain_mapping driver for service chain operations (more modularity, less code in one single class)
18:58:16 <ivar-lazzaro> and I put the notification burden on this class, which will in turn use the right context
18:58:39 <ivar-lazzaro> Then a third and much worse problem arose :)
18:59:01 <ivar-lazzaro> In notifying a new consumer, we need to specify which SCI is affected
18:59:33 <ivar-lazzaro> however, I just realized we have no way of unambiguously knowing which SCI is created by which PRS!
18:59:48 <SumitNaiksatam> ivar-lazzaro: hmmm
19:00:20 <SumitNaiksatam> ivar-lazzaro: i believe we only have tables to store the association of service chain specs to SCIs
19:00:28 <ivar-lazzaro> so, even if we remove some PRSs from a consumer, we don't know which SCI is affected
19:00:33 <SumitNaiksatam> but i guess not for PRSs
19:00:38 <mageshgv> ivar-lazzaro: Can we retrieve PRS -> Providing PTG -> SCI?
19:00:40 <ivar-lazzaro> We also have classifier_id
19:00:44 <ivar-lazzaro> but that's not enough
19:01:16 <SumitNaiksatam> mageshgv: how can we go from PRS to Providing PTG?
19:01:22 <ivar-lazzaro> mageshgv: that's not enough either... Or rather, it is today but it won't be tomorrow
19:01:36 <ivar-lazzaro> mageshgv: if we have multiple chains provided by the same PTG
19:01:49 <ivar-lazzaro> mageshgv: just knowing the provider won't be enough
19:01:52 <mageshgv> SumitNaiksatam: Today, the PRS DB gets populated with the providing and consuming PTGs
19:01:53 <SumitNaiksatam> ok, just noticed we're over the stipulated time
19:01:58 <mageshgv> ivar-lazzaro: okay
19:02:13 <SumitNaiksatam> let's continue the discussion in #openstack-gbp
19:02:19 <SumitNaiksatam> thanks all for joining
19:02:25 <ivar-lazzaro> ok
19:02:26 <ivar-lazzaro> bye
19:02:27 <SumitNaiksatam> ivar-lazzaro: sorry to cut it off in this channel
19:02:27 <mageshgv> ok, bye
19:02:32 <SumitNaiksatam> bye
19:02:38 <SumitNaiksatam> #endmeeting
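
A rough illustration of the chain_mapping approach described above: service-chain notifications move out of the core GBP plugin into a driver that knows (or is configured with) the chain owner, so added/removed consumers and new PTs are notified under the right tenant. All class, method, and parameter names below are hypothetical, not the actual gbpservice code.

class ChainMappingDriver(object):
    """Illustrative only; not the actual chain_mapping policy driver."""

    def __init__(self, notifier, chain_owner_tenant_id=None):
        # 'notifier' is any object exposing a notify(**kwargs) method.
        self._notifier = notifier
        # None means provider-owned chains; otherwise a configured owner.
        self._chain_owner = chain_owner_tenant_id

    def _owner_tenant(self, provider_tenant_id):
        return self._chain_owner or provider_tenant_id

    def notify_consumer_added(self, provider_tenant_id, chain_instance_id,
                              policy_rule_set_id):
        # Notify under the chain owner's tenant, not the caller's, so node
        # drivers can actually see and update the affected chain instance.
        self._notifier.notify(
            tenant_id=self._owner_tenant(provider_tenant_id),
            event='consumer_added',
            chain_instance_id=chain_instance_id,
            policy_rule_set_id=policy_rule_set_id)

    def notify_policy_target_added(self, provider_tenant_id,
                                   chain_instance_id, policy_target_id):
        self._notifier.notify(
            tenant_id=self._owner_tenant(provider_tenant_id),
            event='policy_target_added',
            chain_instance_id=chain_instance_id,
            policy_target_id=policy_target_id)
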