18:00:46 #startmeeting networking_policy
18:00:47 Meeting started Thu May 25 18:00:46 2017 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:48 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:50 The meeting name has been set to 'networking_policy'
18:01:07 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#May_25th.2C_18th_2017
18:01:15 hi!
18:01:26 i think we are still dealing with the issues from last week (at least i am)
18:01:37 #topic IP v4, v6 dual-stack support
18:01:59 #link https://review.openstack.org/#/c/459823/
18:02:08 :|
18:02:28 tbachman: :-) thanks for continuing to work on this
18:02:31 SumitNaiksatam: there’s at least one fix to make to that patch
18:02:35 tbachman: oh
18:02:36 I need to change the defaults to use SLAAC
18:02:44 ah okay
18:02:55 tbachman: that's the neutron default?
18:02:57 I can resubmit that, but I was hoping to be further along than I am with functional/integration testing
18:03:00 Alas no
18:03:01 tbachman: Should that be configurable in GBP?
18:03:09 rkukura: good question
18:03:16 rkukura: good point
18:03:31 we could add a config option
18:03:40 but I worry about endless config
18:03:46 This seems similar to default_prefix_len, etc.
18:03:51 tbachman: only for those who need it
18:03:55 true
18:04:00 it’s easily added
18:04:02 tbachman: is it the neutron default?
18:04:12 I don’t believe so
18:04:18 I believe the neutron default is none
18:04:22 * tbachman goes to check
18:06:25 rkukura: are you good with the patch otherwise?
18:06:53 annak: how about you, in case you got a chance to look at the latest patchset?
18:07:14 yes, although I want to take a quick look at the changes since the last version I reviewed
18:07:31 rkukura: sure
18:07:43 SumitNaiksatam: not the latest, but it looked good. I'll look at latest
18:07:49 annak: thanks
18:08:20 given that we are heading into the long weekend, i would hope that we can attempt to merge this sooner rather than later
18:08:31 we have to merge the backports as well
18:08:37 SumitNaiksatam: ack
18:08:45 SumitNaiksatam: I have 4 or 5 of those to do to stable/mitaka
18:08:47 I can’t find a reference, but I’m pretty sure the default is None
18:08:50 you can expect some maintenance activity and downtime over the weekend
18:09:04 rkukura: yes, we will come to that in a min
18:09:06 SumitNaiksatam: I also noticed that igordcard’s QoS patch wasn’t backported. Is that b/c it’s a new feature?
18:09:07 tbachman: np
18:09:11 Or should that be backported as well?
18:09:29 tbachman: we decided that we will not backport the QoS patch to newton
18:09:32 k
18:09:35 it will be available starting Ocata
18:09:41 SumitNaiksatam: got it. thx
18:09:52 actually we attempted to backport
18:10:04 but ran into some issues, and decided it was best to leave it out
18:10:08 SumitNaiksatam: got it
18:10:24 if someone has a real need for this in mitaka, we can definitely reconsider the decision
18:10:24 rkukura: SumitNaiksatam: should I add a config var for IPv6 RA mode, etc. for GBP?
18:11:00 tbachman: both ra_mode and address_mode?
18:11:14 songole: hi
18:11:22 SumitNaiksatam: hello
18:11:24 rkukura: ack
18:11:39 SumitNaiksatam: since songole is here, we could talk about the policy ip_pool
18:11:40 tbachman: these will be used as defaults for implicit creation, right?
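The config var rkukura raises here would most likely be a pair of oslo.config string options registered by the GBP plugin. A minimal sketch of that shape, with hypothetical option names and the [group_policy_implicit_policy] group assumed; the eventual patch may differ on both:

    from oslo_config import cfg

    # Hypothetical option names -- illustrating the shape of the change
    # being discussed, not taken from the actual patch.
    ipv6_mode_opts = [
        cfg.StrOpt('default_ipv6_ra_mode',
                   default=None,
                   choices=[None, 'slaac', 'dhcpv6-stateful',
                            'dhcpv6-stateless'],
                   help='Default ipv6_ra_mode for implicitly created '
                        'subnets; None keeps the Neutron default.'),
        cfg.StrOpt('default_ipv6_address_mode',
                   default=None,
                   choices=[None, 'slaac', 'dhcpv6-stateful',
                            'dhcpv6-stateless'],
                   help='Default ipv6_address_mode for implicitly created '
                        'subnets; None keeps the Neutron default.'),
    ]

    cfg.CONF.register_opts(ipv6_mode_opts,
                           group='group_policy_implicit_policy')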
18:11:43 SumitNaiksatam: ack
18:11:50 my thinking is that our default would be SLAAC
18:11:53 tbachman: you mean proxy ip_pool
18:12:00 unless we want our defaults to be the neutron defaults
18:12:05 (with “our” meaning GBP)
18:12:10 SumitNaiksatam: ack on proxy ip_pool
18:12:12 tbachman: yeah, i wanted to dig a little more on that
18:12:20 tbachman: why SLAAC?
18:12:34 It seems to be a preferred mode, from what I can tell
18:12:39 but it doesn’t have to be
18:12:48 it may make more sense for our default to be the same as neutron's
18:12:53 tbachman: Are the neutron defaults usable for IPv6, or is it always necessary to specify something?
18:13:00 tbachman: yeah that's what i wanted to clarify
18:13:13 rkukura: define usable ;)
18:13:35 I think their defaults mean that neutron doesn’t take any responsibility for that behavior
18:13:48 tbachman: You create an IPv6 subnet, attach it to a router, and you get connectivity over IPv6
18:13:51 the documentation actually does a good job of explaining this, if I remember correctly
18:13:57 let me go find that link
18:14:53 This link has a helpful table:
18:15:00 #link https://docs.openstack.org/liberty/networking-guide/adv-config-ipv6.html <== this link has a helpful table
18:15:04 My argument would be that the GBP defaults used in implicit workflows should result in usable IPv6 connectivity when ip_version == 46 or 6
18:15:16 note the first entry in the table
18:15:25 “Backwards compatibility with pre-Juno IPv6 behavior.”
18:15:49 This is the part titled “ipv6_ra_mode and ipv6_address_mode combinations”
18:15:56 N/S N/S Off Not Defined Backwards compatibility with pre-Juno IPv6 behavior.
18:16:15 Looks like the defaults are for backwards compatibility rather than usability
18:16:16 rkukura: agree, assuming neutron is not usable without that configuration
18:16:42 rkukura: ack on the backwards-compatibility
18:17:23 there’s also SLAAC from an openstack router, and SLAAC from a non-openstack router
18:17:43 but I’ve found that openstack doesn’t let you attach subnets to a router if you’ve elected SLAAC from a non-openstack router
18:17:55 so, I’d say it should be the SLAAC from an openstack router :)
18:18:03 (note: that’s for internal interfaces, and not the gateway)
18:18:16 tbachman: agree
18:18:56 but with pure-SLAAC, VMs don’t get a default gateway, right?
18:19:03 also note that most of these are “not implemented” by the reference implementation
18:19:08 SLAAC is just for address assignment
18:19:16 the router advertisements provide the default GW
18:19:48 So dhcpv6-stateless is what uses SLAAC and has RAs for the GW, right?
18:20:10 tbachman: rkukura: i still think the defaults should be what neutron does, a specific driver can override the defaults
18:20:39 SumitNaiksatam: I could go along with that, as long as it's configurable for the deployment
18:20:53 SumitNaiksatam: I’m fine with that. I’ll amend my patch to include these config vars
18:20:53 and by driver, i mean the GBP policy driver
18:21:22 SumitNaiksatam: should these be provided in the resource mapping driver then?
18:21:25 okay, we can follow up on this discussion on the gerrit review
18:21:27 Do we need config at the PD level? What do we do for default_prefix_len?
18:21:41 For IPv6, we’ve agreed it’s 64
18:21:50 rkukura: i meant the PD can override it
18:22:07 tbachman: I meant for IPv4
18:22:21 ah — that’s part of the L3 config now, right?
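For reference, the ipv6_ra_mode/ipv6_address_mode pair discussed above is set per subnet in Neutron. A minimal sketch of the "SLAAC from an OpenStack router" combination, assuming python-neutronclient; the credentials and network ID are placeholders:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # Both modes set to 'slaac': the OpenStack router sends RAs and the
    # instance autoconfigures its address. Leaving both unset gives the
    # pre-Juno backwards-compatibility row in the table linked above.
    neutron.create_subnet(body={'subnet': {
        'network_id': 'NETWORK_UUID',   # placeholder
        'ip_version': 6,
        'cidr': '2001:db8::/64',
        'ipv6_ra_mode': 'slaac',
        'ipv6_address_mode': 'slaac',
    }})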
18:22:26 I think it's best if these sorts of config variables all work similarly
18:22:39 hmmm
18:22:52 rkukura: are you referring to implicit workflow (i.e. where L3P is created implicitly)?
18:23:11 rkukura: default_prefix_len is part of the GBP API
18:23:45 Both implicit L3P creation, and explicit where the value is not passed
18:24:21 I guess the explicit creation uses API default values
18:24:40 rkukura: your point is just that the defaults should all be consistent then, right?
18:24:43 So do these need to be added as L3P API attributes?
18:24:48 (and if so, I should double-check that’s the case)
18:25:09 rkukura: i was hoping you would not bring that up! :-(
18:25:11 lol
18:25:51 My point is more about how a deployment sets defaults (via config) and how a user overrides those defaults than about whether the specific values are consistent between Neutron and GBP
18:26:49 well without representation in the API there is no way the user can override it
18:27:14 but — we don’t provide all the information about subnets when we create an L3P
18:27:18 the only way to override it is to create a neutron object with those overrides and associate it with a GBP object (i am glossing over the details here)
18:27:37 for example, default GW
18:28:09 I guess it’s a question of whether this level of detail is warranted in the API
18:28:13 during L3P creation
18:28:25 right
18:28:28 tbachman: true, in such cases, where neutron models that detail, we expect the neutron object to be created directly
18:28:40 and is this something someone would want to change during deployment, w/o having to restart neutron
18:28:43 and then be associated with the relevant GBP resource
18:29:03 I would think this would be consistent across a deployment
18:29:12 (meaning the same mode is used throughout)
18:29:27 tbachman: good point
18:29:47 SumitNaiksatam: but with L3Ps, we still create the subnets, even if the user just provided a prefix
18:30:00 and we don’t create them during L3P creation
18:30:32 I can see an argument for exposing this
18:30:33 tbachman: the resource_mapping driver would allow you to associate subnets with a PTG
18:30:56 b/c it’s kind of a “mode” for deployment
18:31:00 and you can create the subnets in Neutron the way you want them
18:31:21 SumitNaiksatam: true
18:31:29 but in our subnetpools model, we have no way of providing this
18:31:34 it’s only specified during subnet creation
18:32:18 tbachman: we will perhaps need to revisit that in the resource_mapping context :-)
18:32:22 :)
18:32:37 so, I just add a default in my driver for now, and we revisit later?
18:32:54 I’m just worried about changing a behavior/default at some later point
18:33:20 At the very least, I say we add config vars, and then we can examine later if we want to specify this in the REST
18:33:29 and if we expose it via REST, the defaults should be what’s in the config
18:33:34 (and not defined in the REST itself)
18:33:35 tbachman: would it not be a matter of promoting this configuration in the future (as opposed to changing behavior)?
18:33:43 adding config now and API later is fine with me
18:33:54 okay good, so we have agreement :-)
18:33:57 :)
18:34:04 cool — I’ll add that to my respin as well
18:34:10 assuming annak is on board too :-)
18:34:19 and thanks to everyone for reviewing and approving the spec, too!
18:34:44 i think rkukura deserves a special pat on the back for that!
18:34:47 tbachman: i need to catch up a bit :) i understood subnets are always created implicitly for all drivers using subnetpools?
18:34:49 SumitNaiksatam: ack
18:35:02 annak: only in the implicit workflows :)
18:35:14 ah
18:35:17 annak: my bad
18:35:31 yes — subnets are created implicitly when needed, and not when the L3P is created
18:35:38 (from the subnetpools)
18:35:55 SumitNaiksatam: now that I’ve taken up 1/2 of our IRC meeting, do we want to cover the proxy ip_pool still?
18:36:20 tbachman: yes
18:36:24 songole: still there?
18:36:28 yes
18:36:42 tbachman: go ahead, we should try to close this
18:36:56 songole: not sure if you’ve followed the dual-stack development, but we’ve expanded the use of the ip_pool parameter
18:37:09 as you’re well aware, there’s also a proxy ip_pool parameter
18:37:26 tbachman: I need to catch up on the spec..
18:37:35 our expansion of this parameter: instead of being just a single subnet prefix, it’s now a string, which we interpret as a comma-separated list of prefixes
18:37:43 and the prefixes can be a mix of IPv4 and IPv6
18:37:54 right now, we haven’t changed any behavior for proxy ip_pool
18:38:11 but if you want to take advantage of dual-stack, there is a change we should probably make
18:38:23 songole: so the summary is: do we need to do the same thing for proxy_ip_pool that we did for the ip_pool of the L3P?
18:38:25 right now, there is no ip_version provided with the proxy workflow
18:38:36 the ip_version is implied as IPv4
18:39:02 we should consider adding an ip_version config var, and allow it to use the same values as are used with the L3P
18:39:06 (4, 6, and now 46 to mean “dual-stack”)
18:39:07 SumitNaiksatam: I would think it is required
18:39:25 songole: okay, tbachman’s current spec and implementation do not address this
18:39:43 songole: we would need some follow-up work to get to that
18:39:46 SumitNaiksatam: it wouldn’t take much, I think
18:40:08 songole: until then, when using NFP and service chains, perhaps only ipv4 is going to work
18:40:12 tbachman: okay
18:40:19 I think it would just be adding the ip_version config var
18:40:21 the proposal to make it a string - is it part of the spec?
18:40:30 songole: yes
18:40:31 songole: it is — for L3P
18:40:38 but not the proxy ip_pool
18:40:41 tbachman: ah, got it
18:41:02 * tbachman has a terrible memory — we may have also changed the type for proxy ip_pool to be a string
18:41:19 tbachman: could we potentially consider using the same ip_version for both ip_pool and proxy_ip_pool?
18:41:36 SumitNaiksatam: we could
18:41:46 songole: it is a string in the proxy ip_pool as well
18:41:51 (that’s in my patch)
18:42:01 I did that to keep things consistent
18:42:02 tbachman: ah good, so that change is already done
18:42:07 tbachman: ok. cool
18:42:10 SumitNaiksatam: ack. But we still need ip_version
18:42:19 the question is whether we would want a separate one for proxy ip_pool
18:42:29 I can see the argument for a separate config file var
18:42:32 we could use the same version param with proxy ip_pool as well
18:42:35 since there is a separate ip_pool
18:42:36 tbachman: yeah, i meant the ip_version stands for both pools
18:43:18 In any case, this is something we can address in a follow-on patch, if needed
18:43:24 we already have to do one for the DB schema change
18:43:35 (in order to increase the ip_pool string from 64 to 256 characters)
18:44:07 if we use the same ip_version, then perhaps additional changes are only required in the plumber
18:44:19 tbachman: ack
18:44:20 tbachman: So if ip_version enables IPv6 but proxy_ip_pool does not have any IPv6 prefixes, does this disable IPv6 for proxies?
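To make the expanded semantics concrete: ip_pool is now a comma-separated string of prefixes, constrained by ip_version (4, 6, or 46 for dual-stack). A minimal validation sketch using the stdlib ipaddress module; the function name and error handling are illustrative, not from tbachman's patch:

    import ipaddress

    def split_ip_pool(ip_pool, ip_version):
        """Split a comma-separated ip_pool string into v4/v6 prefix lists."""
        pools = {4: [], 6: []}
        for prefix in ip_pool.split(','):
            net = ipaddress.ip_network(prefix.strip())
            pools[net.version].append(net)
        # ip_version 4 or 6 forbids prefixes of the other family;
        # 46 (dual-stack) allows any mix.
        if ip_version == 4 and pools[6]:
            raise ValueError('IPv6 prefixes not allowed with ip_version=4')
        if ip_version == 6 and pools[4]:
            raise ValueError('IPv4 prefixes not allowed with ip_version=6')
        return pools

    # Dual-stack example, as allowed with ip_version=46:
    split_ip_pool('10.0.0.0/8, 2001:db8::/48', 46)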
18:44:34 tbachman: would nfp break with this change unless we handle v6 addressing changes?
18:44:52 rkukura: right now, there is no checking of ip_version against proxy ip_pool values
18:44:57 songole: it depends on how it's being used
18:45:03 songole: IPv4 works as-is
18:45:10 the goal was to not break existing workflows
18:45:30 so — you don’t get anything new, but nothing should be broken
18:45:44 tbachman: nice, thanks for confirming that
18:45:44 * tbachman admits he has a limited grasp of the proxy workflows
18:46:01 songole: hopefully our gate jobs will catch if anything breaks
18:46:09 songole: the NFP gate jobs, that is
18:46:15 SumitNaiksatam: ok
18:46:17 tbachman: right, so it should be possible to have ip_version=46, and proxy_ip_pool contain v4 and/or v6 prefixes, providing control over which IP versions are available to proxy
18:46:19 * tbachman notes that those passed in the current iteration of his patch
18:46:37 tbachman: nice
18:46:44 rkukura: I’m not sure what happens if there is an IPv6 prefix in proxy_ip_pool
18:46:51 we don’t have a test for that
18:47:01 but IPv4 is currently tested, of course
18:47:06 (and still works)
18:47:27 okay, perhaps we can switch to the other two topics on the agenda
18:47:35 ok
18:48:02 songole: perhaps you can look at the spec: #link https://review.openstack.org/#/c/459823/
18:48:07 * tbachman notes he probably should have an exception to catch that for now
18:48:11 SumitNaiksatam: thanks for linking that!
18:48:15 and we can follow up if you have concerns
18:48:30 tbachman: thanks again for being on top of this!
18:48:42 #topic stable/mitaka timeout issues
18:49:02 SumitNaiksatam: np!
18:49:15 the summary here is that the stable/mitaka py27 job times out more often than not
18:49:23 SumitNaiksatam: will do.
18:49:24 and i have been trying to get to the bottom of it
18:49:37 so far with limited success
18:50:00 i identified a couple of places where we were trying to connect to external servers/processes
18:50:06 and waiting for them to time out
18:50:33 on both occasions we could just get rid of the tests since these are legacy
18:50:45 unfortunately the problem is still not solved
18:51:10 i have this patch: #link https://review.openstack.org/#/c/467391/
18:51:33 it also cleans up a lot of the logging
18:52:07 so if you see this:
18:52:08 http://logs.openstack.org/26/463626/19/check/gate-group-based-policy-python27-ubuntu-trusty/6dc1861/console.html
18:52:17 a lot of the redundant noisy logging is gone
18:52:44 makes it easier to narrow down the problems
18:53:07 i think we should try to merge this patch and a couple of others in the queue:
18:53:22 #link https://review.openstack.org/#/q/status:open+project:openstack/group-based-policy+branch:stable/mitaka
18:53:30 rkukura: your patches can also be posted
18:54:39 that's it from me, unless there are questions/comments
18:54:39 SumitNaiksatam: Which patches of mine? I haven’t done any of the backlogged stable/mitaka back-ports yet
18:54:57 rkukura: yes, i meant you can post them
18:55:14 rkukura: i feel more confident that they will not perpetually time out
18:55:29 even while i try to fix the situation further
18:55:33 SumitNaiksatam: I haven’t been waiting for the timeout fix - I’ve been trying to finish the data migration patch for apic_aim
18:55:49 I will review the patches mentioned.
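The external-server connections mentioned above are a classic cause of this kind of test timeout: the test blocks until the TCP timeout fires. Where such a test has to stay, stubbing the connection out is the usual fix; a minimal sketch with unittest.mock, where fetch_status stands in for the legacy code and is not an actual GBP function:

    import socket
    import unittest
    from unittest import mock


    def fetch_status(host, port):
        # Stand-in for legacy code that dials an external server and
        # blocks until the TCP timeout fires when the host is unreachable.
        with socket.create_connection((host, port), timeout=60) as conn:
            return conn.recv(16)


    class TestFetchStatus(unittest.TestCase):
        @mock.patch('socket.create_connection')
        def test_does_not_touch_the_network(self, mock_connect):
            # The stub returns a context manager whose recv() is canned,
            # so the test completes instantly instead of waiting to time out.
            conn = mock_connect.return_value.__enter__.return_value
            conn.recv.return_value = b'OK'
            self.assertEqual(fetch_status('203.0.113.1', 4242), b'OK')


    if __name__ == '__main__':
        unittest.main()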
18:55:50 rkukura: i know that, but the reality is that we wouldn't have been able to merge them
18:55:58 even if you had posted them
18:56:04 SumitNaiksatam: understood
18:56:15 I guess that is important for manual backports
18:56:35 i feel a little more optimistic now (rechecks will probably be needed)
18:56:45 for these timeout work-arounds, I’d rather see fixes merged to master, then newton, then mitaka when possible
18:57:18 rkukura: yes, but these might be branch-specific
18:57:34 unless the code causing the issue is already gone/changed/fixed on the newer branches
18:57:35 the log cleanup which i am doing here will definitely be helpful in the other branches as well
18:57:44 right
18:57:49 i will post the relevant patches separately
18:58:16 some of the patching might be different for those branches
18:58:41 okay switching topics
18:58:43 ok
18:58:46 #topic Ocata sync
18:58:52 annak_: still there?
18:58:58 yep
18:59:06 SumitNaiksatam: 1 min
18:59:12 sorry, we got delayed getting to this
18:59:22 np
18:59:23 so i was hoping to have started merging the patches this week
18:59:40 but might take a little longer since there is still some backlog
18:59:52 meanwhile annak_ thanks for keeping up the work on this
19:00:01 and let us know if you are hitting any issues
19:00:12 np, sure
19:00:14 we can discuss on the GBP IRC channel or over email
19:00:19 annak_: thanks!
19:00:33 alrighty, thanks all for joining!
19:00:35 bye
19:00:36 SumitNaiksatam: thanks!
19:00:37 I had a few qs - I'll ask you offline
19:00:38 #endmeeting