18:02:34 #startmeeting networking_policy
18:02:35 Meeting started Thu Jul 13 18:02:34 2017 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:38 The meeting name has been set to 'networking_policy'
18:02:54 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#July_13th.2C_6th_2017
18:03:05 :)
18:03:06 sorry for being late
18:03:17 tbachman: np
18:03:33 #topic Proposal to add Anna K and Thomas B to core team
18:03:42 :)
18:03:48 annak: tbachman: a warm welcome once again! :-)
18:03:57 * tbachman throws hat in the air
18:03:57 thanks again for the honor!
18:04:02 thanks to all!
18:04:21 annak: equally an honor and pleasure to have you on the team
18:04:34 tbachman: annak: thanks for agreeing to take on the responsibility
18:04:51 #topic Ocata sync
18:04:52 SumitNaiksatam: thanks for bringing us aboard!
18:05:01 tbachman: np
18:05:01 * tbachman ^5’s annak
18:05:07 #link https://review.openstack.org/#/q/topic:ocata_sync
18:05:31 tbachman: :)
18:05:45 so our main patches are merged
18:05:55 SumitNaiksatam: annak: thanks for that huge effort!
18:05:58 i started posting the client, UI and heat patches
18:06:14 yeah big thanks to annak on that
18:06:53 i just posted a new revision on the client patch, and i have also posted a test patch in the GBP repo which points to the client patch (and will eventually also to the UI and heat patch)
18:07:12 if the client patch works, we would need to merge that first
18:07:18 since the other repos depend on client
18:07:33 client changes were minor
18:07:41 waiting for the devstack gate test result
18:08:13 tbachman: at some point were you planning to look at getting rid of the dependency of some of the apic repos?
18:08:22 SumitNaiksatam: ack
18:08:26 SumitNaiksatam: Assuming that passes, any reason to hold off before merging the client patch?
18:08:29 tbachman: okay thanks
18:09:07 rkukura: i dont think so, but i myself need to validate, i will +1 the client patch, and that should serve as an indication that we are good to go
18:09:11 rkukura: sound okay?
18:09:31 ok, I’ll give all an initial review ASAP
18:09:39 rkukura: yes please, thanks
18:10:21 tbachman: based on how we decide to proceed, some of the changes which i made in the sumit/* branches (in say the apicapi repo) may not be required at all
18:10:32 tbachman: so we can remove the dependency and just delete that branch
18:10:37 k
18:10:49 I think we have a patch to move some of the agent stuff
18:11:03 and then there’s the topic name
18:11:11 but aside from that, I think that was it
18:11:15 hopefully not too much
18:11:16 tbachman: perfect, so you have it tracked :-)
18:11:26 i was just typing that up
18:11:39 SumitNaiksatam: ack. More or less ;)
18:11:48 tbachman: that's my recollection as well
18:12:00 tbachman: i know you are busy with the release effort
18:12:05 no worries
18:12:13 I think there was a question on the allowed-address-pairs stuff, too
18:12:15 tbachman: so this is obviously less priority than that
18:12:45 SumitNaiksatam: ack
18:13:36 tbachman: need to refresh my memory on that, but i believe it was a matter of moving a table schema definition or something like that
18:13:43 right
18:13:56 okay anything else on ocata_sync?
18:14:12 moving on
18:14:21 #topic VMware NSX Policy Driver
18:14:28 #link https://review.openstack.org/453925
18:14:49 i went through the patch
18:14:50 SumitNaiksatam: thanks for your comments, I'm working on them
18:14:58 annak: np
18:15:03 i am not familiar with the backend
18:15:08 so i cant validate that part
18:15:33 yes, of course
18:15:51 but i had some questions based on the configuration, and because of that i was having a hard time trying to reconcile how this works
18:16:06 but its quite possible i am misunderstanding the test configuration
18:16:13 other than that the patch looked good to me
18:16:34 if annak can validate that it works, i am good to move forward
18:16:41 rkukura: tbachman: any thoughts?
18:16:57 i know you guys also looked at it at different times
18:17:00 SumitNaiksatam: I gave some initial comments (just nits). I’ll have a look at the recently-updated patch set
18:17:06 tbachman: sure
18:17:14 and i assume this is going to evolve over time
18:17:17 I have not given it a detailed review yet, but will today
18:17:25 rkukura: thanks
18:17:53 if annak is comfortable, hopefully we can merge this by the end of the week?
18:18:11 (assuming review comments are provided and addressed)
18:18:15 regarding devstack - some config is missing indeed, and its a good idea to spawn all possible config from the gbp side (such as core plugin)
18:18:35 annak: yeah
18:18:43 SumitNaiksatam: yes, once the config side is done
18:18:46 annak: you also mentioned you would be setting up a gate job
18:19:07 annak: it of course will be a follow up
18:19:34 but the devstack configuration would be helpful there
18:19:43 SumitNaiksatam: yes, but it will take some time - maybe a couple of weeks.
the backend is still in development and some of the infra is missing
18:20:01 annak: yes absolutely understandable
18:20:39 annak: if you dont mind could you summarize in a few sentences how using the policy driver here would be different from, say, using the NSX core plugin with the Neutron API?
18:20:57 hopefully that would set a good context for the reviewers
18:22:59 annak: still there?
18:23:02 SumitNaiksatam: assuming I understood the q correctly: the new driver configures all security stuff directly on the policy manager, bypassing neutron security groups, which is more effective
18:23:15 annak: okay
18:23:49 nsx manager and policy manager are two different endpoints, and policy manager in turn configures the nsx manager
18:24:06 annak: ah okay, that is good to know
18:24:19 nsx core plugin works against nsx manager
18:24:45 okay, good to know
18:25:00 policy manager is a new solution that is intent-based, same idea basically as GBP
18:25:14 annak: okay
18:25:37 intent is the new policy ;)
18:25:44 annak: perhaps in the devref you can add a pointer to any public document that may be available on this
18:26:12 annak: that will help anyone who wants to use this to readily understand what components are at play
18:26:13 but at the current stage only the grouping objects and the security maps between them are supported, so I inherited the connectivity part from the resource mapping driver
18:26:20 SumitNaiksatam: yes, will do
18:26:33 annak: sure, you have mentioned that in the patch, makes sense
18:26:44 rkukura: tbachman do you have any more questions for annak on this?
18:26:50 SumitNaiksatam: not at this moment
18:26:58 will put any q’s I have in the gerrit
18:26:59 SumitNaiksatam, annak: that was very helpful
18:27:10 rkukura: ack on that
18:27:15 tbachman: sure
18:27:21 no immediate questions from me
18:27:25 rkukura: thanks
18:27:28 moving on
18:27:41 #topic Subnetpools in resource_mapping driver
18:27:52 #link https://review.openstack.org/469681
18:28:20 sorry, i dont have any new input on this yet, didnt spend any more time after the last discussion
18:28:27 SumitNaiksatam: same
18:28:42 perhaps we can get back to this after the NSX driver is merged and the ocata_sync is completed
18:29:08 rkukura: i believe you have been thinking about this in the background (in the context of the migration effort)
18:29:08 ok
18:29:17 I started looking at the patch again
18:29:51 rkukura: okay
18:30:01 can someone remind me what the main issue was that seemed difficult to resolve, or am I totally confused?
18:30:37 rkukura: inability to configure subnets on the same network from different address scopes
18:31:03 annak: nicely summarized
18:31:04 annak: even worse, different subnetpools
18:31:11 tbachman: ack
18:31:24 sorry, different subnetpools :)
18:31:35 and we need to do that since we have to deal with the proxy_ip_pool (which is currently a separate pool from the l3p)
18:31:56 thanks. And whats the scenario where this would happen, given all subnets in an L2P should use the L3P’s subnetpool?
18:32:06 rkukura: ^^^
18:32:13 right
18:32:19 its all coming back now
18:32:40 annak: just to fill you in on the context of “migration”
18:32:49 since we might reference this
18:33:12 we are trying to figure out a way to migrate the old apic driver to the new one (aim_mapping)
18:33:37 ok
18:33:47 the migration requires preserving the entire GBP/Neutron model
18:33:59 and just adjusting the mapping
18:34:41 but in a similar vein, we might even need to think about migrating older GBP users (prior to the use of subnetpools and address-scopes)
18:35:01 to now, where every subnet will always have a subnetpool associated
18:35:02 yes, makes sense
18:35:43 rkukura has been mostly leading the discussion on this
18:36:00 but i thought it will be good to share the context here
18:36:09 rkukura: anything you want to add?
18:36:22 SumitNaiksatam: thanks for the context!
18:36:32 annak: you might see some patches around this, so hopefully you wont be surprised :-)
18:36:49 I’m hoping we can build a set of tools to handle the AIM plugin migration, and that one of these that takes care of the addition of subnetpools and address scopes might be usable for other PDs as well
18:37:04 rkukura: nice
18:37:28 I do have a shorter term question on this
18:38:29 many projects (nova, neutron, …) try to support deployments that are based directly off the git repos, without any commit breaking anything for running deployments
18:39:08 rkukura: you mean stable branches or trunk-chasers?
18:39:11 if we merge the switch to using pools and scopes, without a migration solution, we obviously wouldn’t be supporting these sorts of users
18:39:26 trunk-chasers is the term I was trying to recall
18:39:47 but i guess in the GBP context my question is not as relevant
18:39:54 since we aggressively backport
18:40:02 is it safe to assume GBP has no trunk-chasers?
I don’t think we’ve worried too much about this in the past, but want to bring it up since the project has evolved
18:40:25 rkukura: good question, i could have said with confidence in the past
18:40:35 I guess there could be stable branch trunk chasers too
18:40:46 rkukura: yeah, and which is the case
18:41:36 well let me backtrack slightly
18:41:55 i am not aware of anyone building their own GBP packages, and deploying them
18:42:16 my view is we should not delay merging the subnetpool patch to our master, but just want to make sure
18:42:23 there are obviously people deploying from packages that are built from the stable branches
18:42:46 i dont know of anyone using a package that is built from the master
18:43:02 we support devstack-like deployments for the master
18:43:21 but we dont support or recommend deploying anything production from the master
18:44:29 rkukura: i imagine i am just restating/reinforcing your assumptions?
18:44:52 I noticed there is a use_subnetpools config item in the patch. If setting this to false keeps the old behavior, maybe we should default it to false initially
18:45:13 rkukura: okay, makes sense to me
18:46:04 rkukura: i think its only in the resource_mapping patch, not in the aim patch?
18:46:04 i dont think songole is here
18:46:35 so we can skip the next agenda topic on NFP (i am looking at those patches)
18:47:08 annak: I don’t believe its configurable in AIM, and would prefer not to keep supporting the old way any longer than necessary
18:47:53 rkukura: ok, I can change the default
18:47:57 rkukura: yeah, to the best of my recollection, aim_mapping will always use subnetpools
18:48:53 It would be ideal if new deployments defaulted to True, but upgrading an existing deployment to a new commit would default to False - maybe we should code the default as False and have devstack set it to True?
18:49:49 rkukura: ok (assuming there are any existing deployments..)
18:50:06 annak: good point :-)
18:50:40 annak: maybe we don’t need to worry about trunk-chasers
18:50:41 there are, but rkukura’s earlier comment regarding migrating the apic drivers will take care of those deployments
18:51:23 so maybe we keep this configurable until we have a migration tool, then phase out support for non-subnetpool usage
18:51:34 i meant deployments that depend on resource_mapping
18:52:25 annak: right, i am not aware of any, but we might need to check with nfp because they were supporting some folks using the resource_mapping driver earlier
18:52:46 ok, lets move this discussion to the review
18:53:08 rkukura: ack
18:53:16 anything else for today?
18:53:16 ok
18:53:36 not from me
18:53:43 is anyone planning to go to PTG/Sydney?
18:53:51 :)
18:54:01 * tbachman would love a free trip to Australia :)
18:54:19 tbachman: +1
18:54:30 lets all go
18:54:32 annak: not planning, too far! :-(
18:54:43 rkukura: i like the spirit
18:54:54 annak: you planning to go?
18:55:11 I'm thinking whether I should ask :)
18:56:01 annak: yes its good to think before you ask, but you might get what you want :-)
18:56:44 so anyone going to Sydney, please update the team
18:56:49 SumitNaiksatam: :)
18:56:55 will be nice for the others to know
18:57:10 maybe those on the fence will change their minds :-)
18:57:13 alrighty thanks all
18:57:17 see you next week
18:57:19 SumitNaiksatam: thanks!
18:57:23 thanks SumitNaiksatam!
18:57:25 thanks, bye!
18:57:27 #endmeeting
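
(Editor's sketch, for context on the `use_subnetpools` discussion above: the plan was to code the default as False, so trunk-chasing upgrades keep the legacy non-subnetpool behavior, while devstack opts new deployments in. A devstack local.conf fragment expressing that might look roughly like the following; the `[group_policy]` section name is an assumption, not taken from the patch under review — only the `use_subnetpools` option name comes from the meeting.)

```ini
# Hypothetical devstack local.conf fragment -- the [group_policy] section
# name is assumed; use_subnetpools is the option named in the discussion.
[[post-config|$NEUTRON_CONF]]
[group_policy]
# The patch would default this to False so upgraded deployments keep the
# legacy (non-subnetpool) behavior; devstack enables it for new setups.
use_subnetpools = True
```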