18:02:03 <SumitNaiksatam> #startmeeting networking_policy
18:02:03 <openstack> Meeting started Thu Sep  3 18:02:03 2015 UTC and is due to finish in 60 minutes.  The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:07 <openstack> The meeting name has been set to 'networking_policy'
18:02:22 <SumitNaiksatam> #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Sept_3rd_2015
18:02:34 <SumitNaiksatam> songole: hi
18:02:45 <songole> Hi SumitNaiksatam
18:03:13 <SumitNaiksatam> i think we decided that over the next few days our focus is going to be cleaning up bugs left over from kilo
18:03:41 <SumitNaiksatam> in parallel we will review the long series of NCP related enhancements that ivar-lazzaro has been plugging away at
18:04:40 <SumitNaiksatam> i dont have any updates on the functional/integration testing, except that we are seeing some failures in the rally tests
18:04:44 <SumitNaiksatam> which we discussed before
18:04:58 <SumitNaiksatam> so i will skip that topic
18:05:30 <SumitNaiksatam> i was waiting for rkukura to join; he might have been able to fill us in on the packaging update
18:05:56 <SumitNaiksatam> but i just received an email from him that he is away and is having issues connecting to freenode
18:06:07 <SumitNaiksatam> so we will skip packaging as well ;-)
18:06:24 <SumitNaiksatam> anyone wanted to bring up anything on those items?
18:06:41 <SumitNaiksatam> ok
18:06:45 <SumitNaiksatam> #topic Bugs
18:07:02 <SumitNaiksatam> #link https://bugs.launchpad.net/group-based-policy/+bug/1488652
18:07:03 <openstack> Launchpad bug 1488652 in Group Based Policy "Implicit L3Policy Creation fails when multiple subnet are configured for an external network" [High,Confirmed]
18:07:39 <SumitNaiksatam> ivar-lazzaro: any blockers with proceeding on this one?
18:07:59 <ivar-lazzaro> not yet
18:08:31 <ivar-lazzaro> still figuring out the best way of approaching this, without messing too much with the API
18:08:40 <SumitNaiksatam> ivar-lazzaro: right
18:08:41 <SumitNaiksatam> ivar-lazzaro: i know you have been busy with #link https://bugs.launchpad.net/group-based-policy/+bug/1432816
18:08:42 <openstack> Launchpad bug 1432816 in Group Based Policy "inconsistent template ownership during chain creation" [High,In progress] - Assigned to Ivar Lazzaro (mmaleckk)
18:08:51 <ivar-lazzaro> yep
18:09:00 <ivar-lazzaro> now the functional tests are failing, and I'm not sure why
18:09:00 <SumitNaiksatam> thanks for the summary over emails
18:06:07 <rkukura> hi - sorry I’m late
18:06:09 <SumitNaiksatam> rkukura: ah nice, glad you were able to join
18:09:11 <ivar-lazzaro> (no host match found... Is that transient?)
18:09:21 * tbachman waves to rkukura
18:09:24 <SumitNaiksatam> ivar-lazzaro: absolutely transient
18:09:39 <ivar-lazzaro> ok, because the whole chain failed so I was worried
18:09:39 <SumitNaiksatam> ivar-lazzaro: just recheck
18:09:53 <ivar-lazzaro> Actually, I know for sure that provider centric chain patch fails
18:09:58 <SumitNaiksatam> ivar-lazzaro: ok
18:10:11 <ivar-lazzaro> #link https://review.openstack.org/#/c/215811/
18:10:16 <SumitNaiksatam> ivar-lazzaro: do we need to update gbpfunc?
18:10:27 <ivar-lazzaro> I think the reason is that I removed the chain agnostic plumber
18:10:51 <ivar-lazzaro> and somehow the environment is using that in the conf file
18:11:06 <ivar-lazzaro> I can't be sure though, the error is "neutron did not start" but I don't see Neutron logs
18:11:11 <SumitNaiksatam> ivar-lazzaro: yeah, the default was the chain agnostic plumber i believe
18:11:20 <ivar-lazzaro> yeah, but I've modified it
18:11:41 <SumitNaiksatam> ivar-lazzaro: these are the logs from the rally job run: http://logs.openstack.org/11/215811/10/check/gate-group-based-policy-dsvm-rally/9ab0410/logs/q-svc.txt.gz
18:11:44 <ivar-lazzaro> now it's TScP
18:12:12 <SumitNaiksatam> and these from the devstack:
18:12:13 <SumitNaiksatam> http://logs.openstack.org/11/215811/10/check/gate-group-based-policy-dsvm-functional/7f2bf7e/logs/q-svc.txt.gz
18:12:15 <ivar-lazzaro> oh ok I was looking at the functional
18:12:34 <SumitNaiksatam> ivar-lazzaro: i would imagine the devstack install itself might have failed
18:12:35 <ivar-lazzaro> mmmh Is that in the latest patchset?
18:12:39 <SumitNaiksatam> but i have not looked
18:12:46 <ivar-lazzaro> it may be a different error
18:12:49 <ivar-lazzaro> also, a very weird one
18:13:03 <ivar-lazzaro> probably coming from the tenant ownership patch
18:13:05 <SumitNaiksatam> yeah seems like
18:13:13 <mageshgv> ivar-lazzaro: I think Neutron already registers the auth_uri elsewhere
18:13:28 <ivar-lazzaro> mageshgv: but I don't think I did
18:13:49 <mageshgv> ivar-lazzaro: ok
18:13:54 <ivar-lazzaro> unless something went wrong in the rebase
18:14:33 <SumitNaiksatam> ivar-lazzaro: #link https://bugs.launchpad.net/group-based-policy/+bug/1485656
18:14:35 <openstack> Launchpad bug 1485656 in Group Based Policy "GBP EXT SEG: Destination Routes with valid nexthop fails" [High,Confirmed] - Assigned to Ivar Lazzaro (mmaleckk)
18:14:44 <SumitNaiksatam> the above is an apic_mapping driver issue?
18:15:35 <ivar-lazzaro> I haven't looked at it yet
18:15:43 <SumitNaiksatam> ivar-lazzaro: okay
18:15:56 <ivar-lazzaro> but it's very likely
18:16:05 <ivar-lazzaro> given the information should come from the conf file
18:16:48 <SumitNaiksatam> ivar-lazzaro: yeah, thats what it seemed like to me
18:17:03 <SumitNaiksatam> ivar-lazzaro: i am guessing you probably did not have time to look at this: #link https://bugs.launchpad.net/group-based-policy/+bug/1486740
18:17:04 <openstack> Launchpad bug 1486740 in Group Based Policy "GBP Ext-Seg: L3Policy cannot be updated to a new Ext-Seg" [High,Confirmed] - Assigned to Ivar Lazzaro (mmaleckk)
18:18:57 <SumitNaiksatam> rkukura: over to you
18:19:03 <SumitNaiksatam> any updates on #link https://bugs.launchpad.net/group-based-policy/+bug/1462024
18:19:04 <openstack> Launchpad bug 1462024 in Group Based Policy "Concurrent create_policy_target_group call fails" [High,Confirmed] - Assigned to Robert Kukura (rkukura)
18:19:26 <rkukura> not yet
18:19:30 <SumitNaiksatam> thats a tricky one, first needs to be reproduced
18:20:30 <ivar-lazzaro> SumitNaiksatam: that one is still related to apic mapping, probably an issue in the REST call we make for that event
18:20:50 <SumitNaiksatam> ivar-lazzaro: ah, okay
18:21:25 <SumitNaiksatam> ivar-lazzaro: perhaps we should add tags to these or may be move these to openstack-apic LP (we can discuss offline)
18:21:55 <ivar-lazzaro> yeah
18:21:58 <SumitNaiksatam> rkukura: did you get a chance to look at either #link https://bugs.launchpad.net/group-based-policy/+bug/1417312 or #link https://bugs.launchpad.net/group-based-policy/+bug/1470646
18:22:00 <openstack> Launchpad bug 1417312 in Group Based Policy "GBP: Existing L3Policy results in failure to create PTG with default L2Policy" [High,Confirmed] - Assigned to Robert Kukura (rkukura)
18:22:01 <openstack> Launchpad bug 1470646 in Group Based Policy "Deleting network associated with L2P results in infinite loop" [High,Triaged] - Assigned to Robert Kukura (rkukura)
18:22:20 <rkukura> no,
18:22:38 <rkukura> i’ll be digging into these later today and tomorrow
18:22:54 <SumitNaiksatam> rkukura: okay sure
18:23:01 <SumitNaiksatam> mageshgv: over to you :-)
18:23:12 <SumitNaiksatam> #link https://bugs.launchpad.net/group-based-policy/+bug/1489090 and #link https://bugs.launchpad.net/group-based-policy/+bug/1418839
18:23:14 <openstack> Launchpad bug 1489090 in Group Based Policy "GBP: Resource intergrity fails between policy-rule-set & external-policy" [High,Confirmed] - Assigned to Magesh GV (magesh-gv)
18:23:15 <openstack> Launchpad bug 1418839 in Group Based Policy "Service config in yaml cannot be loaded" [High,In progress] - Assigned to Magesh GV (magesh-gv)
18:23:54 <rkukura> SumitNaiksatam: I’m going to have to drop of in a few minutes. Is there anything else we should cover before then?
18:24:12 <mageshgv> SumitNaiksatam: For configuration in yaml support, we only have to enhance the heat based node driver
18:24:51 <SumitNaiksatam> rkukura: you were going to validate ivar-lazzaro’s changes to the tenant ownership patch?
18:25:02 <SumitNaiksatam> mageshgv: yes, only heat node driver
18:25:10 <rkukura> SumitNaiksatam: yes
18:25:16 <SumitNaiksatam> mageshgv: you can add the empty conf string check as well ;-)
18:25:16 <mageshgv> I havent taken a look at the other one. Will look at it  today
18:25:19 <SumitNaiksatam> rkukura: great thanks
18:25:25 <SumitNaiksatam> mageshgv: okay
18:25:42 <SumitNaiksatam> #topic NCP enhancements
18:26:07 <SumitNaiksatam> this spec has been sitting for a long time: #link https://review.openstack.org/#/c/203226/
18:26:32 <SumitNaiksatam> if no one has objections, mageshgv, rkukura we should move forward with it
18:27:00 <mageshgv> I have one question
18:27:02 <rkukura> ok, I’ll give it another look
18:27:09 <SumitNaiksatam> rkukura: thanks
18:27:13 <SumitNaiksatam> mageshgv: go ahead
18:27:15 <songole> I will review as well
18:27:15 <mageshgv> #link https://review.openstack.org/#/c/203226/3/specs/kilo/traffic-stitching-plumber-placement-type.rst#L63
18:28:25 <mageshgv> When we say a driver needs two PTs and the plumbing type is GATEWAY, does it mean that one of the PTs will be behaving as a Gateway and the other one is just an Endpoint ?
18:28:48 <mageshgv> *two PTs on provider and two on consumer
18:29:05 <ivar-lazzaro> mageshgv: that needs to be defined by the Plumber
18:29:34 <mageshgv> ivar-lazzaro: okay, will there be another update to the spec later ?
18:29:39 <ivar-lazzaro> currently we expect GWs to have 2-3 Ifaces (Management is optional)
18:29:56 <ivar-lazzaro> 2 per group is a good case for HA, but it's not defined yet
18:30:12 <SumitNaiksatam> mageshgv: good point to bring up
18:31:07 <ivar-lazzaro> we should definitely start the discussion around that
18:31:25 <SumitNaiksatam> ivar-lazzaro: i believe there was some update related to notifying the consumer PTG as well?
18:31:36 <mageshgv> ivar-lazzaro: okay, do we close on the current spec and the patches and then get started on a model for HA ? or get it done at one stretch ?
18:31:52 <ivar-lazzaro> mageshgv: I would go on that later
18:32:13 <ivar-lazzaro> mageshgv: I mean, I would really like to see chains working with the new model, and then iterate from there :)
18:32:17 <rkukura> I need to drop off now - I’ll read the log later. Bye.
18:32:18 <mageshgv> ivar-lazzaro: okay, sounds good. Just wanted to get an idea
18:32:32 <SumitNaiksatam> rkukura: ok bye
18:32:38 <ivar-lazzaro> unless we see that the current interface is completely unusable for that specific use case ofc
18:32:38 <ivar-lazzaro> rkukura: bye!
18:32:42 <mageshgv> rkukura: bye
18:33:12 <ivar-lazzaro> SumitNaiksatam: that would require a further step in the "provider centric chain"
18:33:20 <ivar-lazzaro> SumitNaiksatam: and a backward incompatible API change
18:33:36 <ivar-lazzaro> SumitNaiksatam: as in: Each SCI has a list of consumers and 1 provider
18:34:00 <SumitNaiksatam> ivar-lazzaro: i am not clear about the API change part, but we can probably take that offline
18:34:19 <ivar-lazzaro> Each SCI has one provider
18:34:24 <ivar-lazzaro> but can have multiple consumers
18:34:37 <ivar-lazzaro> so we need a read API change in the SCI model
18:34:42 <mageshgv> ivar-lazzaro: can we just have a notification mechanism rather than API change ?
18:35:04 <mageshgv> ivar-lazzaro: That might preserve the API
18:35:04 <ivar-lazzaro> mageshgv: you still need to keep the information about who is consuming the chain
18:35:38 <SumitNaiksatam> ivar-lazzaro: i believe mageshgv is suggesting that the driver can keep that information in today’s model
18:35:38 <ivar-lazzaro> mageshgv: that could be retrieved from the PRS
18:35:44 <mageshgv> ivar-lazzaro: Ah, yes, without the API the user wont be able to keep track
18:35:46 <SumitNaiksatam> ivar-lazzaro: as long as it gets the notification
18:35:59 <SumitNaiksatam> mageshgv: correct me if that is not what you meant
18:36:26 <mageshgv> ivar-lazzaro, SumitNaiksatam: For a node driver it should not matter as long as it gets notified when a consumer is added or removed
18:36:39 <SumitNaiksatam> mageshgv: yeah
18:36:40 <ivar-lazzaro> yeah that is doable, also quite important to implement to make auto scaling work
18:36:47 <SumitNaiksatam> ivar-lazzaro: ah good point
18:37:13 <ivar-lazzaro> however, I believe we are at the point where NCP and TScP is too big
18:37:23 <SumitNaiksatam> ivar-lazzaro: okay
18:37:30 <ivar-lazzaro> at least the part that is not reviewed
18:38:02 <SumitNaiksatam> mageshgv: can you close the loop with ivar-lazzaro on whether you need this notification from the get-go?
18:38:05 <ivar-lazzaro> we have to make progress and find issues with the current code, everything that is "needed but clearly feasible" should be pushed to when we know that what we have makes sense
18:38:14 <mageshgv> SumitNaiksatam: okay
18:38:18 <ivar-lazzaro> TScP can be tested without autoscaling
18:38:23 <SumitNaiksatam> mageshgv: ivar-lazzaro: thanks
18:38:41 <SumitNaiksatam> okay i think we covered all the major items today, and rkukura dropped off as well
18:38:51 <songole> ivar-lazzaro: what do you mean by auto-scaling?
18:39:18 <songole> adding members to group?
18:39:29 <ivar-lazzaro> songole: to a LB pool for instance
18:39:47 <ivar-lazzaro> songole: for each new PT in the group, add it to the LB pool
18:39:59 <ivar-lazzaro> today that still works since it's on the provider side
18:40:01 <songole> ok. got it
18:40:13 <ivar-lazzaro> but similar things on the *consumer* side may be useful as well
18:40:57 <ivar-lazzaro> we need to identify the use cases, and then add the feature... But we should not keep implementing blindly before we know that what we have done so far works
18:41:21 <ivar-lazzaro> as in: let's focus on reviews :)
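The auto-scaling behavior ivar-lazzaro describes above — for each new PT in the provider group, add it to the LB pool — can be sketched roughly as below. The class and method names (`LoadBalancerNodeDriver`, `policy_target_added`, `PoolManager`) are illustrative assumptions, not the actual GBP NCP node-driver interface:

```python
class PoolManager:
    """Tracks load-balancer pool members for a provider group.

    Hypothetical helper, standing in for the real LB backend calls.
    """
    def __init__(self):
        self.members = []

    def add_member(self, address):
        # Idempotent add: ignore an address that is already a member.
        if address not in self.members:
            self.members.append(address)

    def remove_member(self, address):
        if address in self.members:
            self.members.remove(address)


class LoadBalancerNodeDriver:
    """Hypothetical node driver: scales the LB pool as policy targets
    (PTs) are added to or removed from the provider group."""
    def __init__(self):
        self.pool = PoolManager()

    def policy_target_added(self, pt):
        # For each new PT in the provider group, add it to the LB pool.
        self.pool.add_member(pt['ip_address'])

    def policy_target_removed(self, pt):
        self.pool.remove_member(pt['ip_address'])


driver = LoadBalancerNodeDriver()
driver.policy_target_added({'id': 'pt1', 'ip_address': '10.0.0.5'})
driver.policy_target_added({'id': 'pt2', 'ip_address': '10.0.0.6'})
driver.policy_target_removed({'id': 'pt1', 'ip_address': '10.0.0.5'})
print(driver.pool.members)  # ['10.0.0.6']
```

As noted in the meeting, today this works on the provider side because the driver is notified of provider PT changes; the open design question is delivering the equivalent notifications for the *consumer* side.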
18:41:58 <SumitNaiksatam> mageshgv: was your use case related to auto-scaling?
18:42:20 <SumitNaiksatam> mageshgv: i thought you had different use case with regards to configuring the firewall rules which needed the consumer PTG information?
18:42:29 <mageshgv> SumitNaiksatam: No, it was related to firewall rule addition based on consumer CIDRs
18:42:34 <SumitNaiksatam> mageshgv: right
18:42:52 <SumitNaiksatam> ivar-lazzaro: so yeah, its not a blind wishlist that we are discussing
18:43:16 <ivar-lazzaro> SumitNaiksatam: that's not what I meant :) I know there are use cases
18:44:02 <ivar-lazzaro> I'm saying that I think it's unadvisable to add non critical stuff before we have at least some coverage on the current implementation
18:44:37 <ivar-lazzaro> (by non critical I mean that at least one chain can be created without it)
18:45:12 <SumitNaiksatam> ivar-lazzaro: it seemed like a non-starter for mageshgv’s firewall support without this
18:45:57 <SumitNaiksatam> ivar-lazzaro: but i dont think its blocking the current review
18:46:01 <mageshgv> SumitNaiksatam, ivar-lazzaro: right, we can have a service instantiated, but no configuration applied
18:46:20 <ivar-lazzaro> SumitNaiksatam: I think both notifications and HA should be targeted for our first release
18:46:30 <ivar-lazzaro> in that order then! :)
18:46:36 <SumitNaiksatam> ivar-lazzaro: but since we were discussing the spec, i would imagine this is something we should be adding to the design
18:47:02 <SumitNaiksatam> ivar-lazzaro: implementation can follow in subsequent patches
18:47:05 <ivar-lazzaro> SumitNaiksatam: if that's a blocker I agree
18:47:11 <SumitNaiksatam> ivar-lazzaro: and i agree the current patches should be reviewed as they are
18:47:20 <ivar-lazzaro> Ok
18:47:24 <ivar-lazzaro> sounds good to me
18:47:30 <SumitNaiksatam> ivar-lazzaro: so was not suggesting changes to the implementation of the current patches ;-)
18:47:46 <ivar-lazzaro> We can discuss the notifications in the current spec (since it's critical) and HA in a subsequent one
18:47:55 <SumitNaiksatam> ivar-lazzaro: awesome!
18:47:59 <SumitNaiksatam> alright, i think we have a lot of different pieces in the air
18:48:29 <SumitNaiksatam> we will continue our offline meetings as well to follow up on the bugs we discussed today
18:48:47 <SumitNaiksatam> #topic Open Discussion
18:48:56 <SumitNaiksatam> anything else anyone wanted to bring up?
18:49:11 <ivar-lazzaro> I have a question about the admin chain ownership patch
18:49:27 <SumitNaiksatam> #chair ivar-lazzaro mageshgv songole
18:49:28 <openstack> Current chairs: SumitNaiksatam ivar-lazzaro mageshgv songole
18:49:33 <SumitNaiksatam> ivar-lazzaro: go ahead
18:49:43 <SumitNaiksatam> i need to step away for a few minutes
18:49:44 <ivar-lazzaro> Can we clarify what is the requirement for Keystone V3 at this point?
18:49:59 <SumitNaiksatam> ivar-lazzaro: mageshgv songole can you close the meeting?
18:50:23 <ivar-lazzaro> by that I mean: is it a blocker for testing? Is it doable by the Node Driver alone?
18:50:47 <mageshgv> ivar-lazzaro: I cross checked the workflow. I think we are fine with v2 for now. v3 is only required if the admin user is in a non "default" domain
18:51:03 <ivar-lazzaro> I would like to find a way to seamlessly use V2 and V3 for something as simple as retrieving an ID... but it doesn't look very easy
18:51:40 <ivar-lazzaro> mageshgv: ok, that gives us some time to add it as a subsequent patch
18:51:49 <ivar-lazzaro> otherwise the whole chain would have been blocked by this
18:51:51 <mageshgv> ivar-lazzaro: right, the major difference is that v2 apis cannot see the v3 projects in different domains
18:52:14 <mageshgv> ivar-lazzaro: I can give a proper confirmation in a day or two , but I think it looks okay
18:52:54 <ivar-lazzaro> is a V2 "tenant" readable in a V3 "project" ?
18:53:06 <ivar-lazzaro> or are those just completely separated?
18:53:15 <mageshgv> ivar-lazzaro: yes, v2 tenants are readable in V3
18:53:35 <ivar-lazzaro> so could the V3 client work with the V2 URI?
18:53:53 <ivar-lazzaro> like, it figures out how to make the right call?
18:53:56 <mageshgv> ivar-lazzaro: That I am not sure of
18:54:27 <mageshgv> ivar-lazzaro: right, maybe we can address that in a subsequent patch at a later point when required
18:54:39 <ivar-lazzaro> ok sounds good to me
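The "seamless V2/V3" idea discussed above could start with something as simple as inferring the Keystone API version from the configured auth URI. The helper below is only a sketch of that idea — the function name is hypothetical, and the real keystoneclient/keystoneauth session plumbing is omitted:

```python
def pick_identity_version(auth_uri):
    """Guess the Keystone API version from the auth URI path suffix.

    Illustrative only. Defaults to v3 for unversioned endpoints: v3
    can also see tenants created via the v2.0 API, while v2.0 cannot
    see v3 projects in non-default domains (per the meeting notes).
    """
    path = auth_uri.rstrip('/').rsplit('/', 1)[-1]
    if path == 'v2.0':
        return 2
    if path in ('v3', 'v3.0'):
        return 3
    # No version in the URI: prefer v3.
    return 3


print(pick_identity_version('http://keystone:5000/v2.0'))  # 2
print(pick_identity_version('http://keystone:5000/v3'))    # 3
print(pick_identity_version('http://keystone:5000'))       # 3
```

This matches mageshgv's observation: v2 is sufficient for now because the admin user lives in the "default" domain, and the v3-aware path can land in a subsequent patch.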
18:54:44 <ivar-lazzaro> Anything else?
18:55:07 <mageshgv> ivar-lazzaro: nothing else from my side :-)
18:55:26 <ivar-lazzaro> If not, we can spare these 5 minutes and move back to our lovely python :)
18:55:31 <tbachman> lol
18:55:38 <mageshgv> :)
18:56:07 <ivar-lazzaro> ok guys thanks for joining!
18:56:11 <ivar-lazzaro> #endmeeting