18:02:03 #startmeeting networking_policy
18:02:03 Meeting started Thu Sep 3 18:02:03 2015 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:07 The meeting name has been set to 'networking_policy'
18:02:22 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Sept_3rd_2015
18:02:34 songole: hi
18:02:45 Hi SumitNaiksatam
18:03:13 i think we decided that over the next few days our focus is going to be cleaning up bugs left over from kilo
18:03:41 in parallel we will review the long series of NCP-related enhancements that ivar-lazzaro has been plugging away at
18:04:40 i dont have any updates on the functional/integration testing, except that we are seeing some failures in the rally tests
18:04:44 which we discussed before
18:04:58 so i will skip that topic
18:05:30 i was waiting for rkukura to join since he might have been able to fill us in on the packaging update
18:05:56 but i just received an email from him that he is away and is having issues connecting to freenode
18:06:07 so we will skip packaging as well ;-)
18:06:24 did anyone want to bring up anything on those items?
18:06:41 ok
18:06:45 #topic Bugs
18:07:02 #link https://bugs.launchpad.net/group-based-policy/+bug/1488652
18:07:03 Launchpad bug 1488652 in Group Based Policy "Implicit L3Policy Creation fails when multiple subnet are configured for an external network" [High,Confirmed]
18:07:39 ivar-lazzaro: any blockers with proceeding on this one?
18:07:59 not yet
18:08:31 still figuring out the best way of approaching this, without messing too much with the API
18:08:40 ivar-lazzaro: right
18:08:41 ivar-lazzaro: i know you have been busy with #link https://bugs.launchpad.net/group-based-policy/+bug/1432816
18:08:42 Launchpad bug 1432816 in Group Based Policy "inconsistent template ownership during chain creation" [High,In progress] - Assigned to Ivar Lazzaro (mmaleckk)
18:08:51 yep
18:09:00 now the functional tests are failing, and I'm not sure why
18:09:00 thanks for the summary over email
18:09:07 hi - sorry I’m late
18:09:09 rkukura: ah nice, glad you were able to join
18:09:11 (no host match found... Is that transient?)
18:09:21 * tbachman waves to rkukura
18:09:24 ivar-lazzaro: absolutely transient
18:09:39 ok, because the whole chain failed so I was worried
18:09:39 ivar-lazzaro: just recheck
18:09:53 Actually, I know for sure that the provider centric chain patch fails
18:09:58 ivar-lazzaro: ok
18:10:11 #link https://review.openstack.org/#/c/215811/
18:10:16 ivar-lazzaro: do we need to update gbpfunc?
18:10:27 I think the reason is that I removed the chain agnostic plumber
18:10:51 and somehow the environment is using that in the conf file
18:11:06 I can't be sure though, the error is "neutron did not start" but I don't see Neutron logs
18:11:11 ivar-lazzaro: yeah, the default was the chain agnostic plumber i believe
18:11:20 yeah, but I've modified it
18:11:41 ivar-lazzaro: these are the logs from the rally job run: http://logs.openstack.org/11/215811/10/check/gate-group-based-policy-dsvm-rally/9ab0410/logs/q-svc.txt.gz
18:11:44 now it's tscp
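For context, a sketch of the kind of NCP plumber setting the devstack job would have written into the Neutron server's conf file; the section, option, and driver names here are illustrative assumptions, not the exact gate configuration. If the conf still names the removed plumber, Neutron would fail to start exactly as the job reports.

    # Hypothetical conf excerpt; the option and driver names are assumptions.
    [node_composition_plugin]
    # node_plumber = agnostic_plumber   <- removed by the patch under review
    node_plumber = stitching_plumber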
18:12:12 and these from the devstack:
18:12:13 http://logs.openstack.org/11/215811/10/check/gate-group-based-policy-dsvm-functional/7f2bf7e/logs/q-svc.txt.gz
18:12:15 oh ok I was looking at the functional
18:12:34 ivar-lazzaro: i would imagine the devstack install itself might have failed
18:12:35 mmmh Is that in the latest patchset?
18:12:39 but i have not looked
18:12:46 it may be a different error
18:12:49 also, a very weird one
18:13:03 probably coming from the tenant ownership patch
18:13:05 yeah seems like
18:13:13 ivar-lazzaro: I think Neutron already registers the auth_uri elsewhere
18:13:28 mageshgv: but I don't think I did
18:13:49 ivar-lazzaro: ok
18:13:54 unless something went wrong in the rebase
18:14:33 ivar-lazzaro: #link https://bugs.launchpad.net/group-based-policy/+bug/1485656
18:14:35 Launchpad bug 1485656 in Group Based Policy "GBP EXT SEG: Destination Routes with valid nexthop fails" [High,Confirmed] - Assigned to Ivar Lazzaro (mmaleckk)
18:14:44 the above is an apic_mapping driver issue?
18:15:35 I haven't looked at it yet
18:15:43 ivar-lazzaro: okay
18:15:56 but it's very likely
18:16:05 given that the information should come from the conf file
18:16:48 ivar-lazzaro: yeah, thats what it seemed like to me
18:17:03 ivar-lazzaro: i am guessing you probably did not have time to look at this: #link https://bugs.launchpad.net/group-based-policy/+bug/1486740
18:17:04 Launchpad bug 1486740 in Group Based Policy "GBP Ext-Seg: L3Policy cannot be updated to a new Ext-Seg" [High,Confirmed] - Assigned to Ivar Lazzaro (mmaleckk)
18:18:57 rkukura: over to you
18:19:03 any updates on #link https://bugs.launchpad.net/group-based-policy/+bug/1462024
18:19:04 Launchpad bug 1462024 in Group Based Policy "Concurrent create_policy_target_group call fails" [High,Confirmed] - Assigned to Robert Kukura (rkukura)
18:19:26 not yet
18:19:30 thats a tricky one, it first needs to be reproduced
18:20:30 SumitNaiksatam: that one is still related to apic mapping, probably an issue in the REST call we make for that event
18:20:50 ivar-lazzaro: ah, okay
18:21:25 ivar-lazzaro: perhaps we should add tags to these or maybe move these to the openstack-apic LP (we can discuss offline)
18:21:55 yeah
18:21:58 rkukura: did you get a chance to look at either #link https://bugs.launchpad.net/group-based-policy/+bug/1417312 or #link https://bugs.launchpad.net/group-based-policy/+bug/1470646
18:22:00 Launchpad bug 1417312 in Group Based Policy "GBP: Existing L3Policy results in failure to create PTG with default L2Policy" [High,Confirmed] - Assigned to Robert Kukura (rkukura)
18:22:01 Launchpad bug 1470646 in Group Based Policy "Deleting network associated with L2P results in infinite loop" [High,Triaged] - Assigned to Robert Kukura (rkukura)
18:22:20 no,
18:22:38 i’ll be digging into these later today and tomorrow
18:22:54 rkukura: okay sure
18:23:01 mageshgv: over to you :-)
18:23:12 #link https://bugs.launchpad.net/group-based-policy/+bug/1489090 and #link https://bugs.launchpad.net/group-based-policy/+bug/1418839
18:23:14 Launchpad bug 1489090 in Group Based Policy "GBP: Resource intergrity fails between policy-rule-set & external-policy" [High,Confirmed] - Assigned to Magesh GV (magesh-gv)
18:23:15 Launchpad bug 1418839 in Group Based Policy "Service config in yaml cannot be loaded" [High,In progress] - Assigned to Magesh GV (magesh-gv)
18:23:54 SumitNaiksatam: I’m going to have to drop off in a few minutes. Is there anything else we should cover before then?
18:24:12 SumitNaiksatam: For yaml configuration support, we only have to enhance the heat-based node driver
18:24:51 rkukura: you were going to validate ivar-lazzaro’s changes to the tenant ownership patch?
18:25:02 mageshgv: yes, only the heat node driver
18:25:10 SumitNaiksatam: yes
18:25:16 mageshgv: you can add the empty conf string check as well ;-)
18:25:16 I havent taken a look at the other one. Will look at it today
18:25:19 rkukura: great thanks
18:25:25 mageshgv: okay
18:25:42 #topic NCP enhancements
18:26:07 this spec has been sitting for a long time: #link https://review.openstack.org/#/c/203226/
18:26:32 if no one has objections, mageshgv, rkukura, we should move forward with it
18:27:00 I have one question
18:27:02 ok, I’ll give it another look
18:27:09 rkukura: thanks
18:27:13 mageshgv: go ahead
18:27:15 I will review as well
18:27:15 #link https://review.openstack.org/#/c/203226/3/specs/kilo/traffic-stitching-plumber-placement-type.rst#L63
18:28:25 When we say a driver needs two PTs and the plumbing type is GATEWAY, does it mean that one of the PTs will behave as a Gateway and the other one is just an Endpoint?
18:28:48 *two PTs on provider and two on consumer
18:29:05 mageshgv: that needs to be defined by the Plumber
18:29:34 ivar-lazzaro: okay, will there be another update to the spec later?
18:29:39 currently we expect GWs to have 2-3 Ifaces (Management is optional)
18:29:56 2 per group is a good case for HA, but it's not defined yet
18:30:12 mageshgv: good point to bring up
18:31:07 we should definitely start the discussion around that
18:31:25 ivar-lazzaro: i believe there was some update related to notifying the consumer PTG as well?
18:31:36 ivar-lazzaro: okay, do we close on the current spec and the patches and then get started on a model for HA? or get it all done in one stretch?
18:31:52 mageshgv: I would go for that later
18:32:13 mageshgv: I mean, I would really like to see chains working with the new model, and then iterate from there :)
18:32:17 I need to drop off now - I’ll read the log later. Bye.
18:32:18 ivar-lazzaro: okay, sounds good. Just wanted to get an idea
18:32:32 rkukura: ok bye
18:32:38 unless we see that the current interface is completely unusable for that specific use case ofc
18:32:38 rkukura: bye!
18:32:42 rkukura: bye
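To illustrate the GATEWAY placement type under discussion, a minimal sketch of the plumbing info a gateway-style node driver might return; the method name matches the NCP node driver interface, but the dict keys and the 'gateway' value come from the draft spec and should be treated as assumptions, not the final API.

    # A minimal sketch, assuming the draft spec's plumbing_type field.
    class GatewayNodeDriver(object):  # stand-in for the NCP NodeDriverBase
        def get_plumbing_info(self, context):
            # One data-path PT on each side, plus an optional management
            # interface; how GATEWAY maps to actual placement is left to
            # the plumber, per the discussion above.
            return {
                'provider': [{}],
                'consumer': [{}],
                'management': [{}],
                'plumbing_type': 'gateway',
            }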
18:33:12 SumitNaiksatam: that would require a further step in the "provider centric chain"
18:33:20 SumitNaiksatam: and a backward incompatible API change
18:33:36 SumitNaiksatam: as in: Each SCI has a list of consumers and 1 provider
18:34:00 ivar-lazzaro: i am not clear about the API change part, but we can probably take that offline
18:34:19 Each SCI has one provider
18:34:24 but can have multiple consumers
18:34:37 so we need a read API change in the SCI model
18:34:42 ivar-lazzaro: can we just have a notification mechanism rather than an API change?
18:35:04 ivar-lazzaro: That might preserve the API
18:35:04 mageshgv: you still need to keep the information about who is consuming the chain
18:35:38 ivar-lazzaro: i believe mageshgv is suggesting that the driver can keep that information in today’s model
18:35:38 mageshgv: that could be retrieved from the PRS
18:35:44 ivar-lazzaro: Ah, yes, without the API the user wont be able to keep track
18:35:46 ivar-lazzaro: as long as it gets the notification
18:35:59 mageshgv: correct me if that is not what you meant
18:36:26 ivar-lazzaro, SumitNaiksatam: For a node driver it should not matter as long as it gets notified when a consumer is added or removed
18:36:39 mageshgv: yeah
18:36:40 yeah that is doable, also quite important to implement to make auto scaling work
18:36:47 ivar-lazzaro: ah good point
18:37:13 however, I believe we are at the point where NCP and TScP are too big
18:37:23 ivar-lazzaro: okay
18:37:30 at least the part that is not reviewed
18:38:02 mageshgv: can you close the loop with ivar-lazzaro if you need this notification from the get-go?
18:38:05 we have to make progress and find issues with the current code; everything that is "needed but clearly feasible" should be pushed to when we know that what we have makes sense
18:38:14 SumitNaiksatam: okay
18:38:18 TScP can be tested without autoscaling
18:38:23 mageshgv: ivar-lazzaro: thanks
18:38:41 okay i think we covered all the major items today, and rkukura dropped off as well
18:38:51 ivar-lazzaro: what do you mean by auto-scaling?
18:39:18 adding members to a group?
18:39:29 songole: to an LB pool for instance
18:39:47 songole: for each new PT in the group, add it to the LB pool
18:39:59 today that still works since it's on the provider side
18:40:01 ok. got it
18:40:13 but similar things on the *consumer* side may be useful as well
18:40:57 we need to identify the use cases, and then add the feature... But we should not keep implementing blindly before we know that what we have done so far works
18:41:21 as in: let's focus on reviews :)
18:41:58 mageshgv: was your use case related to auto-scaling?
18:42:20 mageshgv: i thought you had a different use case with regards to configuring the firewall rules which needed the consumer PTG information?
18:42:29 SumitNaiksatam: No, it was related to firewall rule addition based on consumer CIDRs
18:42:34 mageshgv: right
18:42:52 ivar-lazzaro: so yeah, its not a blind wishlist that we are discussing
18:43:16 SumitNaiksatam: that's not what I meant :) I know there are use cases
18:44:02 I'm saying that I think it's inadvisable to add non-critical stuff before we have at least some coverage on the current implementation
18:44:37 (by non-critical I mean that at least one chain can be created without it)
18:45:12 ivar-lazzaro: it seemed like a non-starter for mageshgv’s firewall support without this
18:45:57 ivar-lazzaro: but i dont think its blocking the current review
18:46:01 SumitNaiksatam, ivar-lazzaro: right, we can have a service instantiated, but no configuration applied
18:46:20 SumitNaiksatam: I think both notifications and HA should be targeted for our first release
18:46:30 in that order then! :)
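A hypothetical sketch of the consumer notifications and the auto-scaling hook discussed above; the hook names are illustrative assumptions, not an agreed NCP interface, and no user-facing API change is implied.

    # Hypothetical node driver hooks; the names are assumptions.
    class FirewallNodeDriver(object):  # stand-in for the NCP NodeDriverBase
        def update_node_consumer_ptg_added(self, context, ptg):
            # mageshgv's use case: rebuild firewall allow rules from the
            # new consumer group's CIDRs
            pass

        def update_node_consumer_ptg_removed(self, context, ptg):
            # drop the rules that matched the departed consumer's CIDRs
            pass

        def update_policy_target_added(self, context, policy_target):
            # auto-scaling use case: e.g. add the new provider-side PT as
            # a member of an LB pool
            pass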
18:46:36 ivar-lazzaro: but since we were discussing the spec, i would imagine this is something we should be adding to the design
18:47:02 ivar-lazzaro: implementation can follow in subsequent patches
18:47:05 SumitNaiksatam: if that's a blocker I agree
18:47:11 ivar-lazzaro: and i agree the current patches should be reviewed as they are
18:47:20 Ok
18:47:24 sounds good to me
18:47:30 ivar-lazzaro: so i was not suggesting changes to the implementation of the current patches ;-)
18:47:46 We can discuss the notifications in the current spec (since they're critical) and HA in a subsequent one
18:47:55 ivar-lazzaro: awesome!
18:47:59 alright, i think we have a lot of different pieces in the air
18:48:29 we will continue our offline meetings as well to follow up on the bugs we discussed today
18:48:47 #topic Open Discussion
18:48:56 anything else anyone wanted to bring up?
18:49:11 I have a question about the admin chain ownership patch
18:49:27 #chair ivar-lazzaro mageshgv songole
18:49:28 Current chairs: SumitNaiksatam ivar-lazzaro mageshgv songole
18:49:33 ivar-lazzaro: go ahead
18:49:43 i need to step away for a few minutes
18:49:44 Can we clarify what the requirement for Keystone V3 is at this point?
18:49:59 ivar-lazzaro: mageshgv, songole, can you close the meeting?
18:50:23 by that I mean: is it a blocker for testing? Is it doable by the Node Driver alone?
18:50:47 ivar-lazzaro: I cross-checked the workflow. I think we are fine with v2 for now. v3 is only required if the admin user is in a non "default" domain
18:51:03 I would like to find a way of seamlessly using V2 and V3 for something as simple as retrieving an ID... But it doesn't look very easy
18:51:40 mageshgv: ok, that gives us some time to add it as a subsequent patch
18:51:49 otherwise the whole chain would have been blocked by this
18:51:51 ivar-lazzaro: right, the major difference is that v2 apis cannot see the v3 projects in different domains
18:52:14 ivar-lazzaro: I can give a proper confirmation in a day or two, but I think it looks okay
18:52:54 is a V2 "tenant" readable as a V3 "project"?
18:53:06 or are those just completely separate?
18:53:15 ivar-lazzaro: yes, v2 tenants are readable in V3
18:53:35 so could the V3 client work with the V2 URI?
18:53:53 like, it figures out how to make the right call?
18:53:56 ivar-lazzaro: That I am not sure of
18:54:27 ivar-lazzaro: right, maybe we can address that in a subsequent patch when required
18:54:39 ok sounds good to me
18:54:44 Anything else?
18:55:07 ivar-lazzaro: nothing else from my side :-)
18:55:26 If not, we can spare these 5 minutes and move back to our lovely python :)
18:55:31 lol
18:55:38 :)
18:56:07 ok guys thanks for joining!
18:56:11 #endmeeting
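For reference, a minimal sketch of the v2/v3 ID lookup discussed above, assuming a pre-obtained admin token and endpoint; the fallback heuristic and function name are illustrative, and error handling and domain scoping are elided.

    # A minimal sketch, not the patch under review.
    from keystoneclient.v2_0 import client as ks_v2
    from keystoneclient.v3 import client as ks_v3

    def resolve_project_id(name, endpoint, token):
        """Resolve a project/tenant ID by name via Keystone v3 or v2.0."""
        if '/v3' in endpoint:
            keystone = ks_v3.Client(token=token, endpoint=endpoint)
            matches = [p for p in keystone.projects.list() if p.name == name]
        else:
            # per the discussion, v2.0 only sees tenants in the "default"
            # domain
            keystone = ks_v2.Client(token=token, endpoint=endpoint)
            matches = [t for t in keystone.tenants.list() if t.name == name]
        return matches[0].id if matches else None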