18:01:58 #startmeeting networking_policy
18:01:59 Meeting started Thu May 19 18:01:58 2016 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:03 The meeting name has been set to 'networking_policy'
18:02:10 wanted to check with you if it'll be possible to move the call up for a few weeks
18:02:26 to accommodate the folks in India
18:02:38 earlier in the day?
18:02:39 #link https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#May_19th.2C_2016
18:02:45 yes
18:02:49 songole: hi
18:02:55 SumitNaiksatam: hello
18:02:59 hemanthravi: would work for me
18:03:05 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#May_19th.2C_2016
18:03:28 hemanthravi: i will check, lets do it when we absolutely have to
18:03:39 hemanthravi: otherwise its confusing to go back and forth
18:03:44 ok
18:04:38 were there any critical bugs that needed to be looked at?
18:04:44 #topic Bugs
18:04:57 SumitNaiksatam: would it be possible to have a convenient timeslot for this IRC so India folks can also join?
18:05:07 songole: we just discussed that
18:05:12 ok
18:05:55 one update - the new functionality for providing a fixed IP when launching a member has been merged to the UI as well
18:06:08 thanks to Ank in hemanthravi’s team
18:06:28 that created an issue with member launch though in the stable branches
18:06:38 the backport was incorrect, and it has been fixed
18:06:45 SumitNaiksatam: Can we discuss the issue with neutron flavors?
18:06:54 thats the one critical issue i can think of in the last week, but its fixed now
18:07:13 songole: sure, we are discussing Bugs right now
18:07:47 the issue is with the service_profile resource
18:07:57 any other critical or high priority bugs that we need to look at?
18:08:11 songole: okay, bug ID?
18:08:12 we need to find a way to have both coexist..
18:08:56 1560717
18:09:44 songole: can you post the link, the bot will automatically provide the bug title
18:09:47 I suggest that we rename the gbp service_profile
18:10:29 https://bugs.launchpad.net/group-based-policy/+bug/1560717
18:10:31 Launchpad bug 1560717 in Group Based Policy "Flavors framework plugin is not loaded in gate job" [High,In progress] - Assigned to Subrahmanyam Ongole (osms69)
18:11:43 songole: looking
18:12:34 songole: for the patch i posted, the devstack integration job seems to have worked fine
18:12:47 #link https://review.openstack.org/#/c/296098
18:13:36 songole: i have rebased the patch, lets see if it passes the gate
18:14:15 songole: but if the devstack job passed the gate earlier, it means the loading and coexistence of the flavors framework was successful, right?
18:14:20 How is it supposed to work? does it give higher precedence to the gbp resource?
18:14:32 would I be able to use both?
18:15:01 neutron and gbp?
18:15:07 songole: i would hope that the appropriate namespace for the resources should be able to deal with it
18:15:47 songole: the gate devstack job will tell us if you can use the gbp service profile when the neutron flavors plugin is loaded
18:16:13 and the other way as well?
18:16:17 songole: you can add an exercise script to that same patch, and in that you can run neutron commands for the service profile and see if it works
18:16:36 SumitNaiksatam: ok. I will verify
18:17:28 songole: that particular bug i created was merely to track loading of the neutron flavors framework, to validate that it does not in any way affect the GBP service profile
18:18:11 songole: if we find that the two are not compatible (i.e. you cannot create a neutron service profile and/or use the flavors framework), we should open a different bug to track that
18:18:18 since its a bigger issue
18:18:32 SumitNaiksatam: ok
18:19:25 songole: was there any other issue that you wanted to discuss?
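[Editor's aside: the coexistence question above boils down to whether the two `service_profile` resources live in distinct API namespaces, so neither plugin shadows the other. The following toy Python sketch illustrates that idea only — the class and namespace names are hypothetical, and this is not neutron or GBP code.]

```python
# Toy registry showing why two plugins can each define a
# "service_profile" resource without clashing: the registry key is
# (namespace, resource_name), not the bare resource name.
# All names here are hypothetical illustrations.

class ResourceRegistry:
    def __init__(self):
        self._resources = {}

    def register(self, namespace, name):
        key = (namespace, name)
        if key in self._resources:
            raise ValueError("duplicate resource: %s/%s" % key)
        self._resources[key] = {"namespace": namespace, "name": name}

    def lookup(self, namespace, name):
        return self._resources[(namespace, name)]

registry = ResourceRegistry()
# GBP's service profile and the flavors framework's service profile
# register under different namespaces, so loading both succeeds.
registry.register("grouppolicy", "service_profile")
registry.register("flavors", "service_profile")
assert registry.lookup("grouppolicy", "service_profile") is not \
       registry.lookup("flavors", "service_profile")
```

If, on the other hand, both plugins tried to claim the same key, the second `register` call would fail — which is roughly the breakage the gate exercise script is meant to rule out.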
18:20:10 https://bugs.launchpad.net/group-based-policy-ui/+bug/1582457
18:20:12 Launchpad bug 1582457 in Group Based Policy UI "Member create UI should match nova instance create page" [High,Confirmed] - Assigned to ank (ank.b)
18:20:16 songole: i think you posted something regarding the mitaka UI (inconsistency of the member launch)
18:20:22 songole: ah, right on cue
18:20:52 songole: yeah, we need to fix that
18:21:18 moving on
18:21:22 #topic Packaging
18:21:28 rkukura: i believe you had an update
18:22:17 I’ve been trying to track down the status of GBP integration into DeLorean, which seems to have been renamed DLRN, but am not having much luck
18:23:05 rkukura: okay
18:23:17 songole: i believe you had earlier asked questions regarding this
18:23:24 I can’t find any evidence the work has been done, but I’m not sure I’m looking in the right places
18:23:47 we are building GBP rpms ourselves for now
18:24:20 rkukura: okay, and there was no pending action item at our end (that they might be waiting on)?
18:25:05 songole: did you get past the entry point issue?
18:25:10 Other than specifying a 2nd maintainer, I don’t think so
18:26:00 yeah, we installed liberty and we built rpms for mitaka. that was the issue
18:26:33 we are able to proceed with our testing
18:26:47 songole: nice :-)
18:26:52 rkukura: okay
18:27:05 rkukura: so you are the second maintainer, right?
18:27:32 SumitNaiksatam: I think I’m the 1st, and I said you would be the 2nd. Hope you don’t mind ;)
18:27:48 rkukura: oh okay, thats fine
18:28:09 SumitNaiksatam: In general, ignoring details like DLRN, how important is it to get GBP into RDO?
18:28:12 rkukura: i thought you said they were building the package, so someone in their team was the first maintainer
18:28:41 rkukura: i thought our plan was always to get into RDO, no?
18:28:52 I think they are setting it up and want us to take over maintaining it
18:28:55 rkukura: but i dont know the relative priority, perhaps an offline discussion
18:29:12 rkukura: sure, its good to control our destiny in that sense
18:31:01 ok moving on
18:31:06 #topic NFP
18:31:22 hemanthravi: shall we assess the progress since last week’s meeting?
18:32:08 there have been review comments
18:32:26 and, responses to the comments?
18:32:43 the team was busy fixing some issues and posted patches for those. they will work on the review comments fri/sat
18:33:25 from my side i could only add the patch list to indicate the order, will be working on the desc and the devref
18:33:37 hope to finish these by mon
18:34:03 hemanthravi: thanks much for working on that
18:34:19 did go through the review comments and most of them should be addressed, will address the pending ones next week
18:34:37 hemanthravi: i had a question in the second patch about the duplicated db mixins (copied from neutron)
18:34:51 hemanthravi: magesh responded, but i dont see any changes in the code
18:35:18 SumitNaiksatam: will ping him today, he did say he'll address that
18:35:26 hemanthravi: okay thanks
18:36:30 hemanthravi: at what point could we take the series and deploy using devstack?
18:36:47 not the whole series, but the minimal number of patches (as we discussed in the last meeting)
18:38:13 SumitNaiksatam: the current set can be deployed with devstack, the only issue is that we won't be using a couple of the patches which are related to configuring service vms
18:38:53 do we want to make it deployable without those patches, i would rather avoid it unless absolutely necessary
18:39:47 hemanthravi: but i thought we were anyway working on a gate job which is going to use the namespace haproxy lbaas impl (no VMs)
18:40:21 the gate job will work with the current devstack installing all the patches
18:40:41 the functionality in 2 of the current set will not be used for the minimal service vm
18:41:30 hemanthravi: hmmm, i thought we were adding an exercise script to the gate job which would test “base mode” (no service VMs involved)
18:41:51 i meant base mode...
18:42:27 not service vm
18:42:54 hemanthravi: okay, so i am asking at what point can we test that base mode in our developer environments (i.e. using devstack)?
18:43:22 or can we already do it (and if so it will be good to get those instructions posted on the wiki page)?
18:43:35 we can do that now, only missing the gate job
18:43:40 for merging
18:44:11 currently base mode (namespace haproxy) and minimal service vm work with the submitted patches
18:45:03 hemanthravi: okay, so keeping the gate job aside for a min, can you post the instructions for the above so that reviewers can try it out in their environments?
18:45:33 will do
18:45:36 thanks
18:46:13 hemanthravi: i would imagine that the gate job would also implement something similar to the instructions you plan to post, and will be ready in due course
18:46:43 hemanthravi: apart from that addition to the gate job, the current series is breaking the devstack integration job
18:46:47 yes, will keep these in sync
18:47:12 hemanthravi: i posted a comment to that effect in the first patch, and what needs to be done to fix it
18:47:27 ok, will address that one. haven't seen it yet
18:47:41 hemanthravi: that fix should go into the first patch, so that all the patches at least have a clean gate job run
18:47:49 ok
18:48:32 anyone have any questions for hemanthravi this week regarding NFP?
18:49:09 nothing from here yet
18:50:09 igordcard: okay
18:50:32 igordcard: lets discuss your QoS patch next week, we might run out of time today
18:50:41 SumitNaiksatam: okay
18:51:31 it's up now, able to create max-rate bandwidth limiting qos policies mapped to each of the PTs inside a PTG, by attaching a new kind of NSP to the PTG
18:51:33 meanwhile FYI - #link https://review.openstack.org/#/c/301701/
18:51:58 thats the QoS support patch on the feature/qos branch
18:52:01 igordcard: thanks
18:52:15 SumitNaiksatam: thanks for the link
18:52:19 #topic Intra-PTG communication
18:52:40 so there is a use case for disallowing communication between PTs in the same PTG
18:53:35 this typically happens when you want to provide a common service, like data backup, to all VMs
18:54:39 so if you can create a new group which has intra-PTG communication disallowed, you can add all members to this backup group (in addition to their original groups)
18:55:18 SumitNaiksatam: I’m a bit confused - I thought a PT could only belong to one PTG
18:55:26 to facilitate this a boolean attribute has been requested in the PTG definition, to disallow intra-PTG communication
18:56:02 rkukura: thats true, but a member can be associated with multiple PTGs
18:56:16 what is “a member”?
18:56:26 a PT?
18:56:35 rkukura: good question
18:57:19 rkukura: today the member definition resides in the clients which exercise the GBP API (its not within the GBP model)
18:57:57 so if you go to the GBP UI today, you create a member in the context of a group (the user doesnt create a PT)
18:58:06 SumitNaiksatam: Is the member the VM? And do you mean it can have multiple NICs attached to ports belonging to different PTs?
18:58:14 rkukura: right
18:58:33 and when you create the member, you are obviously in the context of one PTG, but you can associate additional PTGs as well
18:58:59 rkukura: which in turn plays out like you described
18:59:32 I will put the details in a spec and send it out for review
18:59:40 just wanted to set the context here
18:59:50 OK, hopefully with the right one providing the VM’s default route
19:00:10 rkukura: yes, that needs to be handled correctly
19:00:24 all right, we are out of time, thanks all for joining
19:00:27 see you next week
19:00:30 bye!
19:00:34 #endmeeting
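[Editor's aside: the QoS behaviour igordcard describes at 18:51:31 — attaching a QoS-type Network Service Policy (NSP) to a PTG so that every PT in the group receives a max-rate bandwidth limit — can be sketched as follows. This is a toy model, not code from the feature/qos patch; the field names such as `max_rate_kbps` are hypothetical.]

```python
# Toy sketch of mapping a QoS NSP onto each policy target (PT) of a
# policy target group (PTG).  Names are hypothetical illustrations,
# not the actual GBP QoS driver API.

def apply_qos_nsp(ptg, nsp):
    """Return a per-PT QoS policy derived from a max-rate QoS NSP."""
    if nsp.get("type") != "qos_maxrate":
        raise ValueError("not a QoS NSP")
    # Every PT inside the PTG gets the same bandwidth cap.
    return {pt: {"max_rate_kbps": nsp["max_rate_kbps"]}
            for pt in ptg["policy_targets"]}

ptg = {"name": "web", "policy_targets": ["pt1", "pt2"]}
nsp = {"type": "qos_maxrate", "max_rate_kbps": 1000}
policies = apply_qos_nsp(ptg, nsp)
assert all(p["max_rate_kbps"] == 1000 for p in policies.values())
```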
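[Editor's aside: the intra-PTG discussion — a boolean on the PTG to disallow communication between its own PTs, combined with members that are associated with multiple PTGs — can be modelled with a small toy. The attribute name `intra_ptg_allow` is hypothetical; the spec SumitNaiksatam promised to post will define the real name and semantics, and contracts between groups are out of scope here.]

```python
# Toy model of the backup-group use case: every member joins a common
# "backup" PTG that disallows intra-PTG traffic, while keeping its
# original PTG.  Attribute and group names are hypothetical.

ptgs = {
    "app-tier": {"intra_ptg_allow": True},
    "backup":   {"intra_ptg_allow": False},  # common-service group
}

# member -> set of PTGs the member is associated with
members = {
    "vm1": {"app-tier", "backup"},
    "vm2": {"app-tier", "backup"},
    "vm3": {"backup"},
}

def intra_allowed(m1, m2):
    """Two members may talk intra-group only via a shared PTG that
    permits intra-PTG communication (contracts are not modelled)."""
    shared = members[m1] & members[m2]
    return any(ptgs[g]["intra_ptg_allow"] for g in shared)

assert intra_allowed("vm1", "vm2")      # share app-tier, which allows it
assert not intra_allowed("vm1", "vm3")  # only share the backup group
```

In this sketch vm1 and vm2 still reach each other through their original group, while vm3, whose only group is the backup group, is isolated from both — which matches the stated goal of the proposed boolean.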