18:01:30 #startmeeting networking_policy
18:01:31 Meeting started Thu Jul 7 18:01:30 2016 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:34 The meeting name has been set to 'networking_policy'
18:01:52 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#July_7th_2016
18:02:00 #topic Bugs
18:02:16 #link https://bugs.launchpad.net/group-based-policy/+bug/1590390
18:02:16 Launchpad bug 1590390 in Group Based Policy "Cross tenancy issue with neutron" [Medium,In progress] - Assigned to ashu (mca-ashu4)
18:02:27 hemanthravi: you mentioned that this was blocking you
18:02:54 yes, this patch is needed for NFP when services are launched by non-admin tenants
18:03:10 the fix is in this patch https://review.openstack.org/#/c/327033
18:03:22 need review on this one
18:03:36 hemanthravi: so you are relaxing the tenant validation
18:03:43 hi
18:04:09 yes, only when the user is admin, allowing that user access to the l3p of another tenant
18:04:22 this is for the stitching port
18:05:00 hemanthravi: is the comment in the code accurate? i found it a little confusing and have been kind of stuck at that!
18:05:29 hi - sorry I'm late
18:06:17 rkukura: hi, np
18:06:59 hemanthravi: i was wondering if this relaxation is a bit too open
18:07:01 i'll talk to ashutosh and make the comment clearer
18:07:11 hemanthravi: is there no check for admin required?
18:07:18 hemanthravi: okay
18:07:38 hemanthravi: i am sure it works this way, but just want to make sure that we are not relaxing more than we need to
18:07:41 from what i understood, the left side of the condition makes it an admin, but i will check on that
18:08:01 and will add to the comment to make sure it's restrictive; will do this tonight
18:08:27 hemanthravi: okay, thanks
18:08:49 i didn't have any other blocking bug on my radar
18:09:20 we are also running into another issue during testing of network functions
18:09:21 hemanthravi: there were some new bugs created by venkat; i have marked them as incomplete, more information is needed
18:09:37 this is an old one: https://bugs.launchpad.net/group-based-policy/+bug/1509458
18:09:37 Launchpad bug 1509458 in Group Based Policy "[apic mapping] Create group failed due to overlapping IP on external segment" [Medium,Incomplete] - Assigned to Robert Kukura (rkukura)
18:09:53 need to figure out a narrower test case to replicate this
18:10:14 hemanthravi: okay
18:10:21 in general, if you see here #link https://bugs.launchpad.net/group-based-policy/+bugs?orderby=-importance&memo=75&start=75
18:10:22 it happens during a run with all the nfp ATF test cases
18:10:52 we have about 100 pending bugs, but more than half of them are in incomplete state
18:11:31 it would help if the folks who posted the bugs are either able to provide more information, or close these bugs
18:12:05 eventually, we will have to just go and close these if there is no follow-up
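The relaxation discussed above allows an admin user to reference another tenant's l3p when plumbing the stitching port; the actual change is in https://review.openstack.org/#/c/327033. Below is a minimal sketch of the kind of check being debated, with all names (validate_l3p_tenant, CrossTenantL3PError, plugin_context) hypothetical, not the patch's actual code:

```python
class CrossTenantL3PError(Exception):
    """Illustrative error: non-admin referenced another tenant's L3P."""


def validate_l3p_tenant(plugin_context, l3_policy, port_tenant_id):
    """Allow cross-tenant use of an L3 policy only for admin users.

    A non-admin user may only reference an L3P owned by its own tenant;
    an admin (e.g. when attaching a stitching port for a service VM
    launched on behalf of a non-admin tenant) may reference any L3P.
    """
    if plugin_context.is_admin:
        # Admin context: the relaxation applies, cross-tenant L3P is ok.
        return
    if l3_policy['tenant_id'] != port_tenant_id:
        # Non-admin: keep the original strict tenant validation.
        raise CrossTenantL3PError()
```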
18:12:30 #topic Packaging
18:12:33 some of these are bugs posted from the NFP review last week
18:12:41 will track these
18:13:01 hemanthravi: oh yeah, that is right, i will triage and assign them; i was not referring to those
18:13:11 hemanthravi: thanks
18:13:21 rkukura: anything to discuss today on packaging?
18:13:29 nothing new from me on this
18:13:37 rkukura: okay
18:13:42 #topic Testing
18:14:27 so on account of the recent additions to the integration gate job, our test run time has dramatically increased
18:14:41 it takes almost an hour and a half to run the integration test
18:15:01 so we are in the process of splitting that gate job: #link https://review.openstack.org/#/c/338577/
18:15:35 but even if we split, i think we need to profile some of the exercise scripts and check if we can optimize
18:16:03 as discussed, will look at the exec time of the nfp gate scripts
18:16:09 haven't gotten around to it yet
18:16:13 hemanthravi: cool, thanks
18:16:37 hemanthravi: it will help for someone to look at the integration gate job logs
18:16:51 ok
18:16:59 #topic NFP "configurator" patches
18:17:08 so a quick update on NFP first
18:17:28 on Monday, we merged the core NFP patches
18:18:03 thanks to the reviews from the team, and sumit in particular for the extensive reviews to make the merge happen
18:18:05 there is a chain of complementary patches which help to perform the services' configuration over the cloud
18:18:11 hemanthravi: np
18:18:25 hemanthravi: you want to summarize the "configurator" patches?
18:19:58 the scope of the config patches is to listen to config rpcs (starting with advanced services) and render them onto the service vms launched by the orchestrator
18:20:18 it involves 2 main components, under the cloud and over the cloud
18:20:59 all the code has been submitted, and we are in the process of resolving the gate test failures and adding additional tests
18:21:04 hemanthravi: thanks
18:21:24 hemanthravi: so this will start as "contrib" code, right?
18:21:53 yes, we'll be submitting this under gbpservice/service
18:22:07 and will be making the required changes over the next few days
18:22:12 hemanthravi: okay
18:22:32 so i believe the plan is to create a new "contrib" directory under gbpservice
18:22:44 yes
18:23:03 we originally had a contrib directory under gbpservice/test which was meant for testing artifacts
18:23:31 rkukura: igordcard tbachman: any objections to the plan to land this code as "contrib" code?
18:24:08 over time, we can evaluate the code, and promote it to the main part of the tree as we see fit
18:24:14 * tbachman is fine with that
18:24:25 tbachman: okay
18:25:35 rkukura: there?
18:25:59 SumitNaiksatam: none
18:26:05 igordcard: okay thanks
18:26:05 yes - had an interruption
18:26:44 was just trying to figure out exactly what code is going to be considered contrib
18:27:12 rkukura: you mean in the case of this series of patches?
18:27:53 yes - what is different about these patches that makes them less supported?
18:28:31 rkukura: these have kind of been developed out of the tree
18:28:58 rkukura: they have been tested to work, and they conform to the general design that went into the NFP spec
18:29:03 ok
18:29:15 rkukura: however, i see them as complementary to the core framework itself
18:29:57 rkukura: but not yet part of the core framework
18:30:11 is this the only current in-tree way to use the core framework?
18:30:14 and the idea is to get them out to be deployed, and move them to the main part of the tree as usage grows
18:30:31 hemanthravi: that's right, good point
18:30:33 rkukura: no
18:30:59 So the framework itself cannot be used without these contrib patches?
18:31:09 rkukura: the current in-tree NFP framework is self-sufficient; in fact all of it is running in the gate jobs
18:31:28 but is it useful alone?
18:31:31 rkukura: yes
18:31:48 Or are these patches the reference implementation for using the framework?
18:32:08 the current framework can be used by anyone developing their own services; the orchestrator will provide the lifecycle management, with the service vm providing the required api
18:32:21 rkukura: no, they are not the reference implementation, else we would not have merged the patches to begin with
18:32:25 sort of a bring-your-own network function
18:32:36 OK, so I am fine with merging them as contrib for now
18:32:48 rkukura: okay
18:33:15 the configurator patches will enable the deployment of the currently available advanced services using the nfp framework
18:34:07 hemanthravi: i agree, it's a much better way to do it
18:34:23 rkukura: so the gbpservice/contrib directory structure is fine?
18:34:34 rkukura: i am asking from the packaging perspective
18:35:17 I don't see any issue with using gbpservice/contrib
18:35:25 rkukura: okay, thanks
18:35:48 hemanthravi: is there anything else you need to discuss on this front?
18:36:31 no, we'll make the changes and post the patches
18:36:41 hemanthravi: okay, thanks
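The configurator summarized above listens for config RPCs and renders them onto the service VMs launched by the orchestrator. A minimal oslo.messaging sketch of that listener pattern follows; the topic name, endpoint class, and method signature are all hypothetical, since the actual NFP configurator code is in the patches still under review:

```python
# Sketch only: an RPC server that receives service-configuration requests
# and applies them to service VMs. Topic 'nfp-configurator' and the
# endpoint method below are illustrative, not the real NFP interfaces.
import oslo_messaging as messaging
from oslo_config import cfg


class ConfiguratorEndpoint(object):
    """Receives config RPCs and renders them onto service VMs."""

    def apply_service_config(self, context, service_vm_id, config):
        # Hypothetical method: push 'config' (e.g. firewall or LB rules)
        # to the agent running inside the service VM identified by
        # service_vm_id.
        pass


def main():
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='nfp-configurator',
                              server='configurator-host')
    server = messaging.get_rpc_server(transport, target,
                                      [ConfiguratorEndpoint()],
                                      executor='blocking')
    server.start()
    server.wait()


if __name__ == '__main__':
    main()
```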
18:36:50 #topic PTG grouping with an "application" construct
18:37:01 i was hoping to have a spec in place before the meeting
18:37:16 i don't, but we have time, so i guess we can discuss anyway
18:37:58 the idea is to define an application grouping resource, say an Application Policy Group (APG), which has a 1-to-many relationship with PTGs
18:38:09 any PTG can belong to exactly one APG
18:39:07 the claim is that this higher-level grouping construct can further help in orchestration and intent specification
18:39:18 SumitNaiksatam: Can L3Ps and L2Ps be shared across APGs?
18:39:35 rkukura: good question
18:39:45 rkukura: i think they could be
18:40:05 I don't see why not
18:40:09 rkukura: right
18:41:03 at this point i am not sure that this APG needs to have any more properties than a list of PTGs
18:41:13 So what new does the APG provide that a simple PTG naming convention couldn't provide?
18:41:39 rkukura: :-), yeah, you could use a naming convention, but that is fragile
18:42:15 rkukura: however, over time, we can envision adding more properties to this APG, which in turn can be orchestrated over the associated PTGs
18:42:25 Did we intend at some point to have hierarchical EPGs? Would that be a more general grouping mechanism?
18:42:30 rkukura: you can always argue that the APG is just a label
18:42:56 rkukura: we definitely had hierarchical PRS
18:43:08 Maybe that's what I'm thinking of
18:43:39 rkukura: but you are right, the APG is effectively a hierarchical EPG with one level of nesting
18:43:55 oh, and every tenant will have a default APG for backward compatibility
18:44:37 well, not just for backward compat, but that will serve for backward compat
18:45:03 anyway, i will put this in a short spec, so that you all can read and comment
18:45:12 OK
18:45:20 #topic Open Discussion
18:45:30 anything else we missed that we want to discuss?
18:46:12 if not, thanks all for joining!
18:46:22 will need some review time on the config patches soon
18:46:25 next week will be more exciting with rkukura and tbachman in person! :-P
18:46:35 :)
18:46:37 hemanthravi: yes
18:46:45 * tbachman puts on his party hat
18:46:55 tbachman: lol
18:47:01 all right, thanks all
18:47:06 thanks SumitNaiksatam!
18:47:06 bye until next week!
18:47:08 bye
18:47:11 bye
18:47:15 #endmeeting
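No spec existed for the APG at meeting time, so the following is a hypothetical sketch of the resource as discussed: an APG carrying little more than a name and a list of PTGs, with every PTG belonging to exactly one APG (the tenant's default APG when unspecified). All attribute names are illustrative, written in the style of GBP/Neutron extension attribute maps:

```python
# Hypothetical Application Policy Group (APG) resource definition, as
# sketched in the discussion above; names are illustrative, not from a spec.
APPLICATION_POLICY_GROUPS = 'application_policy_groups'

RESOURCE_ATTRIBUTE_MAP = {
    APPLICATION_POLICY_GROUPS: {
        'id': {'allow_post': False, 'allow_put': False,
               'validate': {'type:uuid': None}, 'is_visible': True},
        'name': {'allow_post': True, 'allow_put': True,
                 'validate': {'type:string': None}, 'default': '',
                 'is_visible': True},
        'description': {'allow_post': True, 'allow_put': True,
                        'validate': {'type:string': None}, 'default': '',
                        'is_visible': True},
        'tenant_id': {'allow_post': True, 'allow_put': False,
                      'required_by_policy': True, 'is_visible': True},
        # 1-to-many side of the relationship: the list of PTGs grouped
        # under this APG (read-only; membership is set on the PTG).
        'policy_target_groups': {'allow_post': False, 'allow_put': False,
                                 'is_visible': True},
    },
}
```

The PTG side of the relationship would then be a single new attribute on the existing PTG resource, say an application_policy_group_id defaulting to the tenant's default APG, which is what would preserve backward compatibility for existing PTGs.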