18:01:05 #startmeeting networking_policy
18:01:05 Meeting started Thu Jun 15 18:01:05 2017 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:09 The meeting name has been set to 'networking_policy'
18:01:22 hi
18:01:25 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#June_15th_2017
18:01:44 #topic Subnetpools in resource_mapping driver
18:01:51 #link https://review.openstack.org/469681
18:01:55 * tbachman runs and hides
18:01:57 we have been discussing this offline
18:02:02 tbachman: lol
18:02:35 annak: any new updates beyond what you mentioned in the emails?
18:03:01 No
18:03:05 annak: ok
18:03:33 Currently the patch implements the "ugly" solution
18:03:48 with the delete issue unsolved
18:03:49 rkukura: any new ideas besides what we discussed, since you mentioned that relaxing the one-subnetpool-per-network constraint is required
18:04:07 annak: right, we'll come to that delete issue in a minute
18:04:26 so why can’t proxy groups use subnets from the same subnetpool as the PTG?
18:04:57 rkukura: they can, and that is what we are doing, but we need to carve out a dedicated prefix for them
18:05:18 do we definitely need a dedicated prefix?
18:05:26 seems like it
18:06:05 because assignments from that prefix are only significant in the stitched traffic path
18:06:44 it's not meant to eat into the “ip_pool” that the user desires and has set for the l3p
18:07:51 is there any significance in separating the two address spaces - ip pool and proxy ip pool?
18:08:03 besides, there might be routing/connectivity implications which i am probably forgetting here
18:08:22 annak: That is what I’m asking - after the proxy subnet is allocated, is it treated any differently?
18:08:45 annak: rkukura: i think routes might be set in the stitched path
18:08:46 Like, is some routing based on the proxy prefix?
18:09:06 yeah, which is easier to do with a completely separate prefix
18:09:34 and that is how it's most likely currently implemented
18:10:05 songole: hi
18:10:12 Hi SumitNaiksatam
18:10:26 songole: discussing proxy_pool again :-)
18:10:39 ok. good timing
18:11:23 annak: rkukura: one thing came up while discussing with tbachman yesterday, i was thinking whether GBP can extend the subnetpool API to take a list of prefixes for assignment
18:11:58 SumitNaiksatam: should we bring songole up to speed?
18:12:05 (as opposed to the current API of a single prefix or everything)
18:12:11 tbachman: yes sure
18:12:22 So do you mean we’d have a single subnetpool, but be able to allocate from different sets of prefixes for normal vs. proxy PTGs?
18:12:31 rkukura: yeah
18:12:44 songole: we’re struggling a bit with the prefixes for the proxy
18:13:14 right now, the implicit workflow allows the user to provide a prefix via the .ini file (or just use a default) to use for the proxy_ip_pool
18:13:15 tbachman: ipv4 & ipv6 - separate prefixes?
18:13:16 SumitNaiksatam: I think the problem with passing a cidr to a subnetpool is that it has to be a complete cidr with prefix len, not just "give me something 192.168-ish"
18:13:29 songole: yes, but I think we have this issue even with single stack
18:13:42 I think the fundamental question is whether we really need the proxy subnets under a single distinct prefix, or if we could simply allocate them from the same subnetpool as regular PTGs use?
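[Editor's note: for readers following the allocation discussion above, a minimal sketch of the two subnet-allocation modes Neutron offers against a subnetpool, which is the gap annak describes at 18:13:16. This is not GBP code; the client construction, IDs, and prefixes are placeholders, and python-neutronclient is only assumed to be the tool at hand.]

    # Assumes `sess` is an already-authenticated keystoneauth1 session.
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(session=sess)
    network_id = 'NETWORK-UUID'          # placeholder
    pool_id = 'REGULAR-SUBNETPOOL-UUID'  # placeholder

    # "Any" allocation: ask the pool for a /28 for the proxy group and let
    # the pool pick which prefix it comes from.
    proxy_subnet = neutron.create_subnet(
        {'subnet': {'network_id': network_id,
                    'ip_version': 4,
                    'subnetpool_id': pool_id,
                    'prefixlen': 28}})

    # "Specific" allocation: the cidr must be complete, prefix length and
    # all; there is no "give me something 192.168-ish" middle ground.
    ptg_subnet = neutron.create_subnet(
        {'subnet': {'network_id': network_id,
                    'ip_version': 4,
                    'subnetpool_id': pool_id,
                    'cidr': '10.0.4.0/24'}})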
18:14:07 when subnetpools are used to allocate subnets to a network, neutron requires that all subnets on that network also be allocated from that specific subnetpool
18:14:32 rkukura: to the extent i recall it needs to be distinct, at least in the current implementation, that's the whole reason we defined this extension
18:14:54 * tbachman quiets down to minimize thread switching
18:15:11 tbachman: sorry for talking over
18:15:20 SumitNaiksatam: I think it was the other way around ;)
18:15:29 SumitNaiksatam: please continue
18:15:29 rkukura: the subnet size for the proxy pool is very small. that is a reason why it is a separate pool
18:15:58 songole: yes, definitely one of the reasons, we did not want IPs to be wasted
18:16:03 OK, but I think we decided not to use the default_prefix_len of the subnetpool anyway
18:16:19 different sized subnets can easily be allocated from the same subnetpool
18:17:02 yes, we can respect the proxy prefix length parameter and still allocate from the regular pool
18:17:16 Is there a difference in need for public IPs? For example, might publicly routable IPs be needed for regular subnets but not for proxy subnets?
18:17:22 SumitNaiksatam: the other reason being to keep it quite distinct from the regular subnet.. doesn't have to be
18:17:33 makes it easy to distinguish
18:17:54 songole: the “doesn't have to be” part is what we are debating
18:18:47 annak, rkukura: if we can allocate different sized subnets, I suppose it should be ok.
18:19:09 songole: i was also mentioning earlier that it was identified at that time that we don't want to waste the “ip_pool” addresses for internal assignment
18:20:10 SumitNaiksatam: Does “internal assignment” mean the proxy subnets don’t need to be publicly routable even when the regular subnets are (i.e. no-NAT)?
18:20:16 songole: this, along with the complexity of assigning routes, was the major motivating factor for separating the two
18:20:23 rkukura: yes
18:21:08 rkukura: "don't need to be" or "must not be" publicly routable?
18:21:55 annak: I’d like to understand whether it's “don’t need to be”, “must not be”, or “must be”.
18:22:16 I believe we assign fips to proxy ips in the case of vpn
18:22:36 songole: no
18:22:41 songole: That would mean “must be”, right?
18:23:03 i understood proxy ips are only internal, is that wrong?
18:23:04 SumitNaiksatam: I will get back on that.
18:23:05 songole: proxy_pool is only for internal stitching assignment
18:23:11 or maybe not, since fips imply NAT
18:23:32 only IPs from “ip_pool” are NAT'ed
18:24:15 when a fw is inserted, traffic is routed to the stitching ip
18:24:30 which is allocated from the proxy pool
18:24:56 SumitNaiksatam: thanks - I guess one argument would be to not waste public IP space for proxy subnets when normal subnets are public (no-NAT)
18:25:28 rkukura: it might not just be a public space issue
18:26:10 rkukura: even within the organization, they might not want IPs to be wasted for internal assignment
18:26:42 when it's possible to use a locally scoped space
18:28:01 songole: routing to proxy_ips only happens within the stitched path
18:28:25 songole: no entity outside the stitched path is aware of the proxy_ip assignments
18:29:19 sounds like the intuitive thing would be to configure two subnetpools.
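[Editor's note: a rough sketch of the "two subnetpools" idea songole floats at 18:29:19, with both pools tied to one address scope as rkukura requires just below. This is illustrative only, not existing GBP code; it assumes `neutron` is an authenticated python-neutronclient v2.0 client, and the names and prefixes are made up.]

    # One shared address scope guarantees the two pools cannot overlap.
    scope = neutron.create_address_scope(
        {'address_scope': {'name': 'l3p-scope', 'ip_version': 4}})
    scope_id = scope['address_scope']['id']

    # Regular pool backing the l3p's ip_pool.
    regular_pool = neutron.create_subnetpool(
        {'subnetpool': {'name': 'l3p-ip-pool',
                        'prefixes': ['10.0.0.0/8'],
                        'default_prefixlen': 24,
                        'address_scope_id': scope_id}})

    # Proxy pool: separate prefix, small allocations for stitching only.
    proxy_pool = neutron.create_subnetpool(
        {'subnetpool': {'name': 'l3p-proxy-ip-pool',
                        'prefixes': ['192.168.254.0/24'],
                        'default_prefixlen': 28,
                        'address_scope_id': scope_id}})

    # The catch discussed next: once a network has a subnet from
    # regular_pool, Neutron rejects subnets on that same network from
    # proxy_pool (NetworkSubnetPoolAffinityError).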
18:29:31 songole: the proxy_group takes over the original PTG’s identity on one side, and connects to the original PTG via the proxy IPs and the stitched path (with the inserted service) on the other side
18:30:07 annak, even if we did configure two subnetpools, these would need to reference the same address_scope, right, to ensure no overlap
18:30:21 rkukura: yes
18:30:30 SumitNaiksatam: right. vpn gets terminated on the fw. The only way you can reach the vpn from outside would be by way of attaching a fip
18:31:14 Again, I will get back on the fip part
18:31:30 songole: but the FIP would be associated with a PT’s IP, not an IP from the proxy_pool, right?
18:31:35 songole: okay
18:31:50 And, assuming we need the proxy and normal subnets on the same L2 network, using separate subnetpools would require relaxing neutron’s restriction.
18:32:05 because no one else knows about that IP from the proxy_pool
18:32:31 rkukura: yes
18:33:24 i guess they could belong to separate address spaces and we can enforce that they don't overlap, but they would have to map to the same backend VRF
18:33:38 so that they are all reachable
18:33:53 rkukura: I asked salvatore about the limitation and he thinks it's because subnet pools were introduced before address scopes.
18:34:27 I’m not fundamentally opposed to relaxing that restriction (as long as the same address_scope is required for all subnetpools on a network), but I think it would be difficult to change in Neutron. The subnetpool’s address_scope_id attribute is mutable, and there would be no way to atomically associate a set of subnetpools with a new address_scope.
18:35:21 rkukura: we can try to relax the limitation only for subnetpools that are already within the same address scope
18:35:58 annak: That might make sense on its own, but what if one of those subnetpools' address_scope_id value is changed?
18:36:39 Our mechanism driver currently blocks changing the address_scope_id of a subnetpool, but there may be valid use cases for changing this.
18:37:37 rkukura: in the apic_mapping mechanism driver, is it possible to associate multiple address_scopes with the same VRF?
18:37:57 address_scopes of the same family, that is
18:38:05 rkukura: i guess we should disallow such a change if subnets are already allocated.. maybe similar checks are already in place, i can investigate that
18:38:33 SumitNaiksatam: In the aim_mapping MD, at most one v4 and at most one v6 scope can reference the same VRF
18:38:56 annak: that validation would have to be in whichever Neutron mechanism driver or monolithic plugin is deployed
18:39:25 rkukura: okay, so i guess multiple address_scopes is a non-starter (even as a hack)
18:39:41 I think the current one-subnetpool-per-family-per-network validation is done in neutron code, not in plugin or driver code, but I could be wrong.
18:40:06 rkukura: ah you mean the DB level code?
18:40:12 I think there are two things here
18:40:19 one is a mapping to VRFs, which is ACI specific
18:40:31 rkukura: you're right, in neutron
18:40:31 the other is the mapping of subnets belonging to address scopes to networks
18:40:34 (NetworkSubnetPoolAffinityError)
18:41:03 SumitNaiksatam: yes, plus the code that populates network dicts with subnetpool and address scope IDs.
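[Editor's note: a hypothetical sketch of the relaxed check annak suggests at 18:35:21: allow more than one subnetpool per family on a network only when every pool shares the same address scope. This is not actual Neutron or GBP code; the dict fields mirror what the subnetpool API exposes, and rkukura's caveat about address_scope_id being mutable still applies.]

    def allocation_allowed(existing_pools, candidate_pool):
        """existing_pools: subnetpools already backing this family's
        subnets on the network; candidate_pool: pool for the new subnet.
        """
        if not existing_pools:
            return True
        scope_id = candidate_pool.get('address_scope_id')
        if scope_id is None:
            # No address scope to guarantee non-overlap: keep today's
            # one-subnetpool-per-family-per-network rule.
            return all(p['id'] == candidate_pool['id']
                       for p in existing_pools)
        # Relaxed rule: any pool is acceptable as long as all pools on the
        # network share the candidate's address scope.
        return all(p.get('address_scope_id') == scope_id
                   for p in existing_pools)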
18:41:24 rkukura: annak: but that is something that can be overridden at the plugin level (in theory)
18:41:55 I think the scenario we're struggling with is not unique to GBP
18:42:05 tbachman: i agree, but I brought VRFs, because I am sure the same/similar concepts apply for other backends as well
18:42:19 annak: agree
18:42:30 *brought up
18:43:07 just a quick time check, we are 42 mins into the meeting :-)
18:43:15 :)
18:43:20 SumitNaiksatam: it’s a tough problem
18:43:26 couple of other topics to quickly touch on
18:43:30 tbachman: yes definitely
18:43:31 ok
18:43:46 should we quickly bring up those topics and return to this discussion in the open discussion section?
18:44:09 sorry I'll need to leave 5 min early
18:44:09 sure
18:44:14 annak: oops
18:44:25 #topic Ocata sync
18:44:28 #link https://review.openstack.org/#/q/topic:ocata_sync
18:44:36 annak: we will try to do this quickly
18:44:49 annak: i know you are blocked on the dependent repos
18:44:57 (I mean, 11:55)
18:45:05 annak: ah okay
18:45:10 annak: i am working on those
18:45:22 SumitNaiksatam: thanks!
18:45:26 i finished the python-opflex-agent
18:45:35 which was fairly straightforward
18:45:46 but i am having trouble with the apic-ml2 repo
18:46:02 in trying to get the tests to be aligned with the refactored code
18:46:05 SumitNaiksatam: anything I can help with?
18:46:16 tbachman: yes, i will let you know, thanks
18:46:24 SumitNaiksatam: np!
18:46:37 SumitNaiksatam: are you doing these under a topic?
18:46:42 * tbachman goes to gerrit
18:46:44 annak: i will keep you updated, i was hoping that i would have been long done with this
18:46:58 tbachman: no, actually it's a github repo
18:47:03 ah, right
18:47:09 and i have not posted a new branch
18:47:11 but will do so
18:47:22 i thought i would fix the issue and do it
18:47:37 i did some wholesale changes, and i can't figure out what is causing a weird failure
18:47:59 i think i got some mocking wrong somewhere
18:48:04 anyway
18:48:11 SumitNaiksatam: which branch?
18:48:18 tbachman: not posted yet
18:48:24 annak: anything else you wanted to bring up on this?
18:48:38 SumitNaiksatam: no, I don't think so
18:48:54 annak: one question we had for you was whether the two patches which were merged today should be backported to stable/newton
18:49:03 i guess they are not required to be
18:49:07 SumitNaiksatam: Do either of annak’s patches that merged to master today need to be back-ported to stable/newton?
18:49:26 rkukura: yeah, just asking
18:49:30 should read as I type ;)
18:49:33 i think we wanted this for the ease of merging? I can backport them
18:49:42 annak: ok cool
18:49:46 makes sense to me
18:49:54 np, will do
18:50:01 rkukura: i was just about to type “makes sense” :-)
18:50:13 #topic NFP patches
18:50:30 songole: hi, we can merge the patches now, my only problem was that the NFP gate job keeps failing
18:50:51 songole: is it because something is wrong with the job, or is it the patches that are breaking the job?
18:51:08 SumitNaiksatam: I believe it is a test issue.
18:51:38 I will check
18:51:43 again
18:51:53 songole: so please +2 the patches that you are comfortable with, and the rest of the reviewers will check accordingly
18:52:03 songole: thanks
18:52:10 #topic Open Discussion
18:52:21 going back to the subnetpools discussion
18:53:25 in my mind one of the priorities is to unblock annak with respect to the patch
18:53:57 For that purpose, I don’t think we can really consider any changes to neutron itself, right?
18:54:20 rkukura: we could patch neutron in GBP, no?
18:54:42 Even if apic_aim or vmware drivers/plugins want to change Neutron, don’t we need the RMD to work with arbitrary neutron back-ends?
18:55:16 rkukura: overriding by
18:55:26 sorry, hit enter too soon
18:55:36 annak: i guess it's time for you to take off
18:55:39 sorry i have to leave, will read the discussion afterwards, thanks everyone!
18:55:47 Let’s presume we aren’t patching neutron — is there another solution other than either using the existing pool or adding the prefix to it?
18:56:15 can we have a separate network, as annak was asking?
18:56:26 My point was simply that if GBP were to monkey-patch neutron to relax the restriction, that could break neutron plugins or drivers that expect that restriction to be enforced.
18:56:27 tbachman: i don't think so
18:56:52 and can we entertain other options for services and/or the proxy?
18:57:06 (i.e. how much are we open to changing here)
18:57:11 rkukura: yes I agree, although in this case, GBP is being less restrictive
18:57:53 tbachman: that becomes a longer discussion, and to my earlier point, it does not allow annak to make progress in the near term
18:58:23 hence i was suggesting that we add an additional API option to the subnetpools
18:58:35 one that we implement in GBP by extending subnetpools
18:58:54 and that allows taking in one or more prefixes for assignment
18:59:19 in the easiest implementation, it would just iterate through the prefixes until an allocation is successful
18:59:57 this would most likely be less optimal than allowing the neutron subnetpool implementation to handle choosing the prefix
19:00:05 * tbachman notes the time
19:00:09 tbachman: thanks
19:00:16 yes, we've got to end it there for today
19:00:22 Is it worth looking at just using the same subnetpool and a different prefix_len? Do we know what other code would need to change (setting up routes, etc.)?
19:00:23 back to the offline discussion on this
19:00:41 thanks SumitNaiksatam!
19:00:44 SumitNaiksatam: thanks for leading this!
19:00:49 rkukura: yes, i believe that would require rewriting the traffic stitching implementation
19:00:56 thanks all for attending
19:00:58 bye!
19:01:00 bye
19:01:01 SumitNaiksatam: bye!
19:01:01 #endmeeting
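[Editor's note: a rough sketch of the "iterate through the prefixes until an allocation is successful" idea outlined at 18:58:23-18:59:19. This is not the proposed GBP extension itself, just one way the simple fallback could look; it assumes python-neutronclient, specific-cidr allocations against an existing pool, and that overlap failures surface as Conflict/BadRequest errors. All names are illustrative.]

    import ipaddress

    from neutronclient.common import exceptions as n_exc

    def allocate_from_prefixes(neutron, network_id, pool_id, prefixes,
                               prefixlen=28, ip_version=4):
        """Try specific-cidr allocations from each candidate prefix in
        turn, skipping cidrs that fail, until one succeeds."""
        for prefix in prefixes:
            candidates = ipaddress.ip_network(prefix).subnets(
                new_prefix=prefixlen)
            for cidr in candidates:
                try:
                    return neutron.create_subnet(
                        {'subnet': {'network_id': network_id,
                                    'ip_version': ip_version,
                                    'subnetpool_id': pool_id,
                                    'cidr': str(cidr)}})
                except (n_exc.Conflict, n_exc.BadRequest):
                    # cidr already allocated or overlapping; try the next
                    continue
        raise RuntimeError('no free /%d left in %s' % (prefixlen, prefixes))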