18:01:05 <SumitNaiksatam> #startmeeting networking_policy
18:01:05 <openstack> Meeting started Thu Jun 15 18:01:05 2017 UTC and is due to finish in 60 minutes.  The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:09 <openstack> The meeting name has been set to 'networking_policy'
18:01:22 <annak> hi
18:01:25 <SumitNaiksatam> #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#June_15th_2017
18:01:44 <SumitNaiksatam> #topic Subnetpools in resource_mapping driver
18:01:51 <SumitNaiksatam> #link https://review.openstack.org/469681
18:01:55 * tbachman runs and hides
18:01:57 <SumitNaiksatam> we have been discussing this offline
18:02:02 <SumitNaiksatam> tbachman: lol
18:02:35 <SumitNaiksatam> annak: any new updates beyond what you mentioned in the emails?
18:03:01 <annak> No
18:03:05 <SumitNaiksatam> annak: ok
18:03:33 <annak> Currently the patch implements the "ugly" solution
18:03:48 <annak> with delete issue unsolved
18:03:49 <SumitNaiksatam> rkukura: any new ideas besides what we discussed, since you mentioned that relaxing the one-subnetpool-per-network constraint would be required?
18:04:07 <SumitNaiksatam> annak: right, we'll come to that delete issue in a minute
18:04:26 <rkukura> so why can’t proxy groups use subnets from the same subnetpool as the PTG?
18:04:57 <SumitNaiksatam> rkukura: they can, and that is what we are doing, but we need to carve out a dedicated prefix for them
18:05:18 <rkukura> do we definitely need a dedicated prefix?
18:05:26 <SumitNaiksatam> seems like
18:06:05 <SumitNaiksatam> because assignments from that prefix are only significant in the traffic-stitched path
18:06:44 <SumitNaiksatam> it's not meant to eat into the “ip_pool” that the user has set for the l3p
18:07:51 <annak> is there any significance in separating the two address spaces - ip pool and proxy ip pool?
18:08:03 <SumitNaiksatam> besides there might be routing/connectivity implications which i am probably forgetting here
18:08:22 <rkukura> annak: That is what I’m asking - after the proxy subnet is allocated, is it treated any differently?
18:08:45 <SumitNaiksatam> annak: rkukura: i think routes might be set in the stitched path
18:08:46 <rkukura> Like is some routing based on the proxy prefix?
18:09:06 <SumitNaiksatam> yeah, which is easier to do with a completely separate prefix
18:09:34 <SumitNaiksatam> and is how its most likely currently implemented
18:10:05 <SumitNaiksatam> songole: hi
18:10:12 <songole> Hi SumitNaiksatam
18:10:26 <SumitNaiksatam> songole: discussing proxy_pool again :-)
18:10:39 <songole> ok. good timing
18:11:23 <SumitNaiksatam> annak: rkukura: one thing that came up while discussing with tbachman yesterday: i was wondering if GBP could extend the subnetpool API to take a list of prefixes for assignment
18:11:58 <tbachman> SumitNaiksatam: should we bring songole up to speed?
18:12:05 <SumitNaiksatam> (as opposed to the current API of a single prefix or everything)
18:12:11 <SumitNaiksatam> tbachman: yes sure
18:12:22 <rkukura> So do you mean we’d have a single subnetpool, but be able to allocate from different sets of prefixes for normal vs. proxy PTGs?
18:12:31 <SumitNaiksatam> rkukura: yeah
18:12:44 <tbachman> songole: we’re struggling a bit with the prefixes for the proxy
18:13:14 <tbachman> right now, the implicit workflow allows the user to provide a prefix via the .ini file (or just use a default) to use for the proxy_ip_pool
18:13:15 <songole> tbachman: ipv4 & ipv6 - separate prefixes?
18:13:16 <annak> SumitNaiksatam: I think the problem with passing a cidr to subnetpool is that it has to be a complete cidr with prefix len, not just "give me something 192.168-ish"
18:13:29 <tbachman> songole: yes, but I think we have this issue even with single stack
18:13:42 <rkukura> I think the fundamental question is whether we really need the proxy subnets under a single distinct prefix, or if we could simply allocate them from the same subnetpool as regular PTGs use?
18:14:07 <tbachman> when subnetpools are used to allocate subnets to a network, neutron requires that all subnets on that network also be allocated from that specific subnetpool
18:14:32 <SumitNaiksatam> rkukura: to the extent i recall, it needs to be distinct, at least in the current implementation; that's the whole reason we defined this extension
18:14:54 * tbachman quiets down to minimize thread switching
18:15:11 <SumitNaiksatam> tbachman: sorry for talking over you
18:15:20 <tbachman> SumitNaiksatam: I think it was the other way around ;)
18:15:29 <tbachman> SumitNaiksatam: please continue
18:15:29 <songole> rkukura: the subnet size for proxy pool is very small. that is a reason why it is a separate pool
18:15:58 <SumitNaiksatam> songole: yes, definitely one of the reasons, we did not want IPs to be wasted
18:16:03 <rkukura> OK, but I think we decided not to use the default_prefixlen of the subnetpool anyway
18:16:19 <rkukura> different sized subnets can easily be allocated from the same subnetpool
18:17:02 <annak> yes, we can respect the proxy prefix length parameter and still allocate from regular pool
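[Editor's note: a minimal sketch of the point above, using python-neutronclient against the standard subnet API; the client session, network/pool IDs, and prefix lengths are illustrative placeholders, not values from this discussion.]

    # Both the regular PTG subnet and the much smaller proxy subnet are carved
    # out of the same subnetpool; Neutron chooses the CIDRs, the caller only
    # passes the desired prefix length per subnet.
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(session=keystone_session)  # assumed auth session

    regular_subnet = neutron.create_subnet({'subnet': {
        'network_id': net_id,
        'subnetpool_id': pool_id,
        'ip_version': 4,
        'prefixlen': 24,   # normal PTG-sized subnet
    }})
    proxy_subnet = neutron.create_subnet({'subnet': {
        'network_id': net_id,
        'subnetpool_id': pool_id,
        'ip_version': 4,
        'prefixlen': 28,   # small subnet for the proxy group
    }})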
18:17:16 <rkukura> Is there a difference in need for public IPs? For example, might publicly routable IPs be needed for regular subnets but not for proxy subnets?
18:17:22 <songole> SumitNaiksatam: the other reason was to keep it quite distinct from the regular subnet.. doesn't have to be
18:17:33 <songole> makes it easy to distinguish
18:17:54 <SumitNaiksatam> songole: the “doesn't have to be” part is what we are debating about
18:18:47 <songole> annak, rkukura: if we can allocate different sized subnets, I suppose it should be ok.
18:19:09 <SumitNaiksatam> songole: i was also mentioning earlier that it was identified at that time that we don't want to waste the “ip_pool” addresses for internal assignment
18:20:10 <rkukura> SumitNaiksatam: Does “internal assignment” mean the proxy subnets don’t need to be publicly routable even when the regular subnets are (i.e. no-NAT)?
18:20:16 <SumitNaiksatam> songole: this, along with the complexity of assigning routes, was the major motivating factor for separating the two
18:20:23 <SumitNaiksatam> rkukura: yes
18:21:08 <annak> rkukura: "dont need to be" or "must not be" publicly routable?
18:21:55 <rkukura> annak: I’d like to understand whether its “don’t need to be”, “must not be”, or “must be”.
18:22:16 <songole> I believe we assign fips to proxy ips in the case of vpn
18:22:36 <SumitNaiksatam> songole: no
18:22:41 <rkukura> songole: That would mean “must be”, right?
18:23:03 <annak> i understood proxy ips are only internal, is that wrong?
18:23:04 <songole> SumitNaiksatam: I will get back on that.
18:23:05 <SumitNaiksatam> songole: proxy_pool is only for internal stitching assignment
18:23:11 <rkukura> or maybe not, since fips imply NAT
18:23:32 <SumitNaiksatam> only IPs from “ip_pool” are NAT'ed
18:24:15 <songole> when fw is inserted, traffic is routed to stitching ip
18:24:30 <songole> which is allocated from proxy pool
18:24:56 <rkukura> SumitNaiksatam: thanks - I guess one argument would be to not waste public IP space for proxy subnets when normal subnets are public (no-NAT)
18:25:28 <SumitNaiksatam> rkukura: it might not just be a public space issue
18:26:10 <SumitNaiksatam> rkukura: even within the organization, they might not want IPs to be wasted for internal assignment
18:26:42 <SumitNaiksatam> when it's possible to use a locally scoped space
18:28:01 <SumitNaiksatam> songole: routing to proxy_ips only happens within the stitched path
18:28:25 <SumitNaiksatam> songole: no entity outside the stitched path is aware of the proxy_ip assignments
18:29:19 <annak> sounds like the intuitive thing would be to configure two subnetpools.
18:29:31 <SumitNaiksatam> songole: the proxy_group takes over the original PTG’s identity on one side, and connects to the original PTG via the proxy IPs and the stitched path (with the inserted service) on the other side
18:30:07 <rkukura> annak, even if we did configure two subnetpools, these would need to reference the same address_scope, right, to ensure no overlap
18:30:21 <annak> rkukura: yes
18:30:30 <songole> SumitNaiksatam: right. vpn gets terminated on the fw. The only way you can reach vpn from outside would be by way of attaching a fip
18:31:14 <songole> Again, I will get back on the fip part
18:31:30 <SumitNaiksatam> songole: but the FIP would be associated with a PT’s IP, not an IP from the proxy_pool, right?
18:31:35 <SumitNaiksatam> songole: okay
18:31:50 <rkukura> And, assuming we need the proxy and normal subnets on the same L2 network, using separate subnetpool would require relaxing neutron’s restriction.
18:32:05 <SumitNaiksatam> because no one else knows about that IP from proxy_pool
18:32:31 <annak> rkukura: yes
18:33:24 <SumitNaiksatam> i guess they could belong to separate address spaces and we can enforce that they dont overlap, but they would have to map to same backend VRF
18:33:38 <SumitNaiksatam> so that they are all reachable
18:33:53 <annak> rkukura: I asked salvatore about the limitation and he thinks it's because subnet pools were introduced before address scopes.
18:34:27 <rkukura> I’m not fundamentally opposed to relaxing that restriction (as long as the same address_scope is required for all subnetpools on a network), but I think it would be difficult to change in Neutron. The subnetpool’s address_scope_id attribute is mutable, and there would be no way to atomically associate a set of subnetpools with a new address_scope.
18:35:21 <annak> rkukura: we can try to relax the limitation only for subnetpools that are already within same address scope
18:35:58 <rkukura> annak: That might make sense on its own, but what if one of those subnetpools’ address_scope_id value is changed?
18:36:39 <rkukura> Our mechanism driver currently blocks changing the address_scope_id of a subnetpool, but there may be valid use cases for changing this.
18:37:37 <SumitNaiksatam> rkukura: in the apic_mapping mechanism driver, is it possible to associate multiple address_scopes with the same VRF?
18:37:57 <SumitNaiksatam> address_scopes of the same family that is
18:38:05 <annak> rkukura: i guess we should disallow such a change if subnets are already allocated.. maybe similar checks are already in place, i can investigate that
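[Editor's note: a rough sketch of the kind of guard annak suggests here; the function name and the get_subnets callable are hypothetical, not existing Neutron or GBP code.]

    def validate_subnetpool_update(pool_id, new_attrs, get_subnets):
        # Disallow moving a subnetpool to a different address scope once
        # subnets have been allocated from it, since that could silently
        # violate the one-scope-per-network assumption discussed above.
        if 'address_scope_id' not in new_attrs:
            return
        if get_subnets(filters={'subnetpool_id': [pool_id]}):
            raise ValueError('cannot change address_scope_id of a subnetpool '
                             'that already has allocated subnets')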
18:38:33 <rkukura> SumitNaiksatam: In the aim_mapping MD, at most one v4 and at most one v6 scope can reference the same VRF
18:38:56 <SumitNaiksatam> annak: that validation would have to be in whichever Neutron mechanism driver or monolithic plugin is deployed
18:39:25 <SumitNaiksatam> rkukura: okay, so i guess multiple address_scopes is a non-starter (even as a hack)
18:39:41 <rkukura> I think the current one-subnetpool-per-family-per-network validation is done in neutron code, not in plugin or driver code, but I could be wrong.
18:40:06 <SumitNaiksatam> rkukura: ah you mean the DB level code?
18:40:12 <tbachman> I think there are two things here
18:40:19 <tbachman> one is a mapping to VRFs, which is ACI specific
18:40:31 <annak> rkukura: you're right, in neutron
18:40:31 <tbachman> the other is mapping of subnets belonging to address scopes to networks
18:40:34 <annak> (NetworkSubnetPoolAffinityError)
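[Editor's note: a rough illustration of the constraint cited above, seen from the API consumer's side; 'neutron' is a python-neutronclient handle as in the earlier sketch, and the pool IDs are placeholders.]

    # The first allocation on the network establishes pool_a as "the" pool
    # for that network.
    neutron.create_subnet({'subnet': {
        'network_id': net_id,
        'subnetpool_id': pool_a_id,
        'ip_version': 4,
        'prefixlen': 24,
    }})
    # A second allocation from a different pool on the same network is
    # rejected by Neutron with NetworkSubnetPoolAffinityError.
    try:
        neutron.create_subnet({'subnet': {
            'network_id': net_id,
            'subnetpool_id': pool_b_id,
            'ip_version': 4,
            'prefixlen': 28,
        }})
    except Exception as exc:
        print(exc)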
18:41:03 <rkukura> SumitNaiksatam: yes, plus the code that populates network dicts with subnetpool and address scope IDs.
18:41:24 <SumitNaiksatam> rkukura: annak: but that is something that can be overridden at the plugin level (in theory)
18:41:55 <annak> I think the scenario we're struggling with is not unique to GBP
18:42:05 <SumitNaiksatam> tbachman: i agree, but I brought VRFs, because I am sure the same/similar concepts apply for other backends as well
18:42:19 <SumitNaiksatam> annak: agree
18:42:30 <SumitNaiksatam> *brought up
18:43:07 <SumitNaiksatam> just a quick time check, we are 42 mins into the meeting :-)
18:43:15 <tbachman> :)
18:43:20 <tbachman> SumitNaiksatam: it’s a tough problem
18:43:26 <SumitNaiksatam> a couple of other topics to quickly touch on
18:43:30 <SumitNaiksatam> tbachman: yes definitely
18:43:31 <rkukura> ok
18:43:46 <SumitNaiksatam> should we quickly bring up those topics and come back to this discussion in the open discussion section?
18:44:09 <annak> sorry I'll need to leave 5 min early
18:44:09 <rkukura> sure
18:44:14 <SumitNaiksatam> annak: oops
18:44:25 <SumitNaiksatam> #topic Ocata sync
18:44:28 <SumitNaiksatam> #link https://review.openstack.org/#/q/topic:ocata_sync
18:44:36 <SumitNaiksatam> annak: we will try to do this quickly
18:44:49 <SumitNaiksatam> annak: i know you are blocked on the dependent repos
18:44:57 <annak> (I mean, 11:55)
18:45:05 <SumitNaiksatam> annak: ah okay
18:45:10 <SumitNaiksatam> annak: i am working on those
18:45:22 <annak> SumitNaiksatam: thanks!
18:45:26 <SumitNaiksatam> i finished the python-opflex-agent
18:45:35 <SumitNaiksatam> which was fairly straightforward
18:45:46 <SumitNaiksatam> but i am having trouble with the apic-ml2 repo
18:46:02 <SumitNaiksatam> in trying to get the tests to be aligned with the refactored code
18:46:05 <tbachman> SumitNaiksatam: anything I can help with?
18:46:16 <SumitNaiksatam> tbachman: yes, i will let you know, thanks
18:46:24 <tbachman> SumitNaiksatam: np!
18:46:37 <tbachman> SumitNaiksatam: are you doing these under a topic?
18:46:42 * tbachman goes to gerrit
18:46:44 <SumitNaiksatam> annak: i will keep you updated, i was hoping i would have been done with this long ago
18:46:58 <SumitNaiksatam> tbachman: no actually its a github repo
18:47:03 <tbachman> ah, right
18:47:09 <SumitNaiksatam> and i have not posted a new branch
18:47:11 <SumitNaiksatam> but will do so
18:47:22 <SumitNaiksatam> i thought i would fix the issue and do it
18:47:37 <SumitNaiksatam> i did some wholesale changes, and i can't figure out what is causing a weird failure
18:47:59 <SumitNaiksatam> i think i got some mocking wrong somewhere
18:48:04 <SumitNaiksatam> anyway
18:48:11 <tbachman> SumitNaiksatam: which branch ?
18:48:18 <SumitNaiksatam> tbachman: not posted yet
18:48:24 <SumitNaiksatam> annak: anything else you wanted to bring up on this?
18:48:38 <annak> SumitNaiksatam: no, I don't think so
18:48:54 <SumitNaiksatam> annak: one question we had for you was whether the two patches which have been merged today, should be backported to stable/newton
18:49:03 <SumitNaiksatam> i guess not required to be
18:49:07 <rkukura> SumitNaiksatam: Do either of annak’s patches that merged to master today need to be back-ported to stable/newton?
18:49:26 <SumitNaiksatam> rkukura: yeah, just asking
18:49:30 <rkukura> should read as I type ;)
18:49:33 <annak> i think we wanted this for the ease of merging? I can backport them
18:49:42 <SumitNaiksatam> annak: ok cool
18:49:46 <rkukura> makes sense to me
18:49:54 <annak> np, will do
18:50:01 <SumitNaiksatam> rkukura: i was just about to type “makes sense” :-)
18:50:13 <SumitNaiksatam> #topic NFP patches
18:50:30 <SumitNaiksatam> songole: hi, we can merge the patches now, my only problem is that the NFP gate job keeps failing
18:50:51 <SumitNaiksatam> songole: is it because something is wrong with the job, or is it the patches that are breaking the job?
18:51:08 <songole> SumitNaiksatam: I believe it is the test issue.
18:51:38 <songole> I will check
18:51:43 <songole> again
18:51:53 <SumitNaiksatam> songole: so please +2 the patches that you are comfortable with, and the rest of the reviewers will check accordingly
18:52:03 <SumitNaiksatam> songole: thanks
18:52:10 <SumitNaiksatam> #topic Open Discussion
18:52:21 <SumitNaiksatam> coming back to the subnetpools discussion
18:53:25 <SumitNaiksatam> in my mind one of the priorities is to unblock annak with respect to the patch
18:53:57 <rkukura> For that purpose, I don’t think we can really consider any changes to neutron itself, right?
18:54:20 <SumitNaiksatam> rkukura: we could patch neutron in GBP, no?
18:54:42 <rkukura> Even if apic_aim or vmware drivers/plugins want to change Neutron, don’t we need the RMD to work with arbitrary neutron back-ends?
18:55:16 <SumitNaiksatam> rkukura: overriding by
18:55:26 <SumitNaiksatam> sorry, hit enter too soon
18:55:36 <SumitNaiksatam> annak: i guess its time for you to take off
18:55:39 <annak> sorry i have to leave, will read the discussion afterwards, thanks everyone!
18:55:47 <tbachman> Let’s presume we aren’t patching neutron — is there another solution other than either using the existing pool or adding the prefix to it?
18:56:15 <tbachman> can we have a separate network, as annak was asking?
18:56:26 <rkukura> My point was simply that if GBP were to monkey-patch neutron to relax the restriction, that could break neutron plugins or drivers that expect that restriction to be enforced.
18:56:27 <SumitNaiksatam> tbachman: i dont think so
18:56:52 <tbachman> and can we entertain other options for services and/or the proxy?
18:57:06 <tbachman> (i.e. how much are we open to changing here)
18:57:11 <SumitNaiksatam> rkukura: yes I agree, although in this case, GBP is being less restrictive
18:57:53 <SumitNaiksatam> tbachman: that becomes a longer discussion, and to my earlier point, it does not allow annak to make progress in the near term
18:58:23 <SumitNaiksatam> hence i was suggesting that we add an additional API option to the subnetpools
18:58:35 <SumitNaiksatam> one that we implement in GBP by extending subnetpools
18:58:54 <SumitNaiksatam> and that allows taking in one or more prefixes for assignment
18:59:19 <SumitNaiksatam> in the easiest implementation, it would just iterate through the prefixes until an allocation is successful
18:59:57 <SumitNaiksatam> this would most likely be less optimal than letting the neutron subnetpool implementation handle choosing the prefix
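[Editor's note: a minimal sketch of the "iterate through the prefixes" idea described above; allocate_from_prefix() stands in for whatever allocation call GBP would end up making and is not an existing API.]

    import netaddr

    def allocate_subnet(candidate_prefixes, prefixlen, allocate_from_prefix):
        """Try each candidate prefix in order until one yields a subnet.

        candidate_prefixes: CIDR strings the caller is allowed to use
        prefixlen: desired subnet size
        allocate_from_prefix: callable(prefix, prefixlen) -> subnet, or raises
        """
        for prefix in candidate_prefixes:
            # Skip prefixes too small to hold a subnet of the requested size.
            if netaddr.IPNetwork(prefix).prefixlen > prefixlen:
                continue
            try:
                return allocate_from_prefix(prefix, prefixlen)
            except Exception:
                continue  # prefix exhausted or unusable; try the next one
        raise RuntimeError('no candidate prefix could satisfy the request')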
19:00:05 * tbachman notes the time
19:00:09 <SumitNaiksatam> tbachman: thanks
19:00:16 <SumitNaiksatam> yes we got to end it there for today
19:00:22 <rkukura> Is it worth looking at just using the same subnetpool and a different prefix_len? Do we know what other code would need to change (setting up routes, etc.)?
19:00:23 <SumitNaiksatam> back to the offline discussion on this
19:00:41 <rkukura> thanks SumitNaiksatam!
19:00:44 <tbachman> SumitNaiksatam: thanks for leading this!
19:00:49 <SumitNaiksatam> rkukura: yes, i believe that would require rewriting the traffic stitching implementation
19:00:56 <SumitNaiksatam> thanks all for attending
19:00:58 <SumitNaiksatam> bye!
19:01:00 <rkukura> bye
19:01:01 <tbachman> SumitNaiksatam: bye!
19:01:01 <SumitNaiksatam> #endmeeting