18:01:58 #startmeeting networking_policy
18:01:58 Meeting started Thu Dec 18 18:01:58 2014 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:59 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:02 The meeting name has been set to 'networking_policy'
18:02:03 s3wong: hi
18:02:07 hello
18:02:31 #info agenda https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy#Dec_18th.2C_2014
18:02:53 we are closing in on the release ;-)
18:03:36 so we are on track to wrap up the features and bugs by tomorrow
18:03:50 lets go over the details
18:04:14 #topic Bugs
18:04:25 there is one open critical bug: #link https://bugs.launchpad.net/group-based-policy/+bug/1403577
18:04:57 mageshgv: i believe this is addressed
18:04:59 ?
18:05:39 SumitNaiksatam: Yes, right now this is done in the same patch as hierarchical redirects. Maybe we want to separate them?
18:06:11 mageshgv: i believe those are related (as far as your implementation is concerned)
18:06:17 mageshgv: so i think its fine
18:06:24 mageshgv: thanks for working on this
18:06:31 SumitNaiksatam: yes, thats right. ok
18:06:38 we have a bunch of high and medium priority bugs
18:07:04 i have a few high on my plate
18:07:12 rkukura: you have some, right?
18:07:25 some might be targeted for the next release though
18:07:37 SumitNaiksatam: Only one that I think is worth fixing now.
18:07:49 Or maybe.
18:07:52 rkukura: ok, link?
18:08:11 Lets make sure deferring the other two is OK
18:08:24 https://bugs.launchpad.net/group-based-policy/+bug/1158684
18:08:43 This is the nova bug where pre-created ports get deleted on VM delete
18:09:09 ivar-lazzaro had committed a workaround, and I don’t think we need to do anything more right now.
18:09:09 rkukura: yes, there is a pending patch in Nova
18:09:18 rkukura: agree
18:09:43 We need to pressure nova to rebase and review the fix
18:09:54 rkukura: yeah its been pending for a long time
18:10:00 The other I think we can defer is https://bugs.launchpad.net/group-based-policy/+bug/1383947
18:10:00 across releases
18:10:12 rkukura: maybe just the delete subnet when BadRequest occurs?
18:11:07 This is the one where subnets get created for overlapping IPs
18:11:29 rkukura: oops :)
18:11:33 ivar-lazzaro did a workaround, and the real fix is to use the subnet pool feature planned for neutron in kilo
18:11:46 rkukura: got it
18:12:11 the one thing we might do now on this one is to delete the subnet when adding the interface to the router results in BadRequest
18:12:32 with the workaround, this could still occur with concurrent threads, but is not likely
18:12:39 rkukura: ah ok
18:12:53 so I think we could do that on the stable/juno branch at some point if needed
18:13:11 rkukura: can we have a separate bug to track this?
18:13:24 rkukura: just the delete subnet
18:13:46 SumitNaiksatam: I think we can use the current bug, since the subnet pools are addressed in the “does not scale” bug
18:14:13 rkukura: okay
18:14:24 SumitNaiksatam: Does that make sense? We’d target stable/juno I guess
18:14:42 Last high priority bug is https://bugs.launchpad.net/group-based-policy/+bug/1398674
18:14:55 rkukura: yes, for critical bug fixes we plan to backport
18:14:56 This is where updating the L2 policy fails
18:15:11 rkukura: yes, you are planning to fix that?
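[editor's note] The delete-subnet-on-BadRequest cleanup discussed above could look roughly like the sketch below. This is not the actual GBP patch; the client API is a simplified stand-in for python-neutronclient, and the `BadRequest` class and `attach_subnet_with_cleanup` name are assumptions for illustration.

```python
class BadRequest(Exception):
    """Stand-in for the neutron client's BadRequest (HTTP 400) exception."""


def attach_subnet_with_cleanup(client, router_id, network_id, cidr):
    """Create a subnet and attach it to a router; undo the create on 400.

    If the router rejects the interface (typically because a concurrent
    thread already attached an overlapping CIDR), delete the just-created
    subnet so it is not left orphaned, then re-raise.
    """
    subnet = client.create_subnet(network_id, cidr)
    try:
        client.add_interface_router(router_id, subnet['id'])
    except BadRequest:
        # Roll back the subnet creation before propagating the error.
        client.delete_subnet(subnet['id'])
        raise
    return subnet
```

As noted in the discussion, this only narrows the race window left by the workaround; the real fix is the kilo subnet pool feature.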
18:15:21 I could do this one today if needed, I think
18:15:40 rkukura: okay great, we will have quick review turnaround on this
18:16:02 I’m not clear on whether we really want to allow updating a PTG’s L2P at all for the RMD
18:16:03 ideally i would like to clear up the review queue for any non-vendor related patches by EoD (today)
18:16:20 tomorrow we can do vendor patches (ODL and Nuage)
18:16:20 We cannot update it if any PTs exist in the PTG
18:16:37 rkukura: true since we cannot move the PTs
18:16:44 Is it OK to simply reject any changes to the PTG’s L2P?
18:17:05 yes, perhaps better to codify that in the driver though?
18:17:20 in case some other backend is able to support this?
18:17:59 Or do we need to check whether any PTs have been created, and if not, we’d need to tear everything down and recreate the PTG
18:18:09 This check would be in the resource_mapping driver
18:18:24 rkukura: ok great, i think the former would suffice for now
18:18:37 OK, I’ll whip up a patch today
18:18:47 rkukura: perhaps a comment in the code to the effect of the latter (as a possibility)
18:19:04 SumitNaiksatam: agreed
18:19:10 rkukura: thanks
18:19:34 SumitNaiksatam: Are we concerned with the medium priority bugs at this point?
18:20:04 rkukura: depends, some were classified as medium but turned out to be high
18:20:16 rkukura: for the ones you have on your plate i will leave it to your judgement
18:20:22 mageshgv: how does it look on your plate in terms of the high priority bugs?
18:20:35 SumitNaiksatam: OK, I’ll look them over
18:20:41 Have a question related to https://review.openstack.org/#/c/142643/. What is the right way to check if the router id belongs to the tenant during L3P creation?
18:20:44 rkukura: great, thanks
18:20:55 KrishnaK: one sec, we will come to that
18:20:59 mageshgv: any high priority ones for which patches are not posted yet?
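[editor's note] The check rkukura agrees to implement — reject any change to a PTG's l2_policy_id in the resource_mapping driver — could be sketched as below. The method name and the `original`/`current` context attributes loosely follow the GBP policy driver API but are partly assumed; this is an illustration, not the merged patch.

```python
class L2PolicyUpdateNotSupported(Exception):
    """Raised when an update would move a PTG to a different L2 policy."""


class ResourceMappingDriverSketch:
    """Minimal sketch of the relevant precommit hook only."""

    def update_policy_target_group_precommit(self, context):
        old = context.original['l2_policy_id']
        new = context.current['l2_policy_id']
        if old != new:
            # The mapped Neutron ports cannot be moved to another network,
            # so the RMD rejects this outright. The check lives in the
            # driver (not the plugin) so that some other backend able to
            # support the move could still allow it.
            raise L2PolicyUpdateNotSupported(
                "cannot change l2_policy_id from %s to %s" % (old, new))
```

Per the discussion, a code comment would note the alternative of tearing down and recreating the PTG when no PTs exist yet.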
18:21:01 thx
18:21:18 KrishnaK: thanks for joining though, i know you are fighting another release in parallel! ;-)
18:21:33 SumitNaiksatam: No, patches are posted for them all, just need some modifications for hierarchical redirect
18:21:38 SumitNaiksatam: I think all the medium and low bugs assigned to me can be deferred
18:21:47 rkukura: ok thanks
18:21:53 mageshgv: great
18:22:13 mageshgv: i believe we have a plan on the hierarchical redirect based on the discussion this morning
18:22:21 ?
18:22:42 SumitNaiksatam: yes, will have to factor in those changes
18:23:37 mageshgv: ok thanks, looking forward to that patch, since it has the critical bug fix as well
18:23:54 mageshgv: at worst we might have to break those down (leave it to your judgement)
18:24:09 mageshgv: but we need to merge the critical fix by EoD (PST)
18:24:25 SumitNaiksatam: ok, will see what can be done
18:24:48 mageshgv: thanks for your effort fixing the numerous bugs in quick time
18:24:56 KrishnaK: your question now
18:25:21 “Have a question related to https://review.openstack.org/#/c/142643/. What is the right way to check if the router id belongs to the tenant during L3P creation?”
18:25:39 SumitNaiksatam: thx.
18:26:00 KrishnaK: i was thinking that if you just checked for the router existence by doing a get and filtering on the tenant_id and the router_id
18:26:04 that would not work?
18:26:45 rkukura: this is the bug you created for validating the resources that are explicitly provided when creating PT/PTG/L2P/L3P
18:26:58 PT and PTG are done
18:27:07 KrishnaK is working on L2P and L3P
18:27:09 for the router id, the tenant id is different in some cases
18:27:25 KrishnaK: really, i dont think that would be the case
18:27:46 KrishnaK: where did you see that happening?
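[editor's note] The filtered get SumitNaiksatam suggests might look like the sketch below. The function name is invented and the plugin call is a simplified stand-in for the Neutron L3 plugin's `get_routers()`; note that, as KrishnaK observes next, a naive tenant_id filter can fail when the router's tenant differs (e.g. admin-created or test setups), which is exactly what the UT failure below shows.

```python
def validate_explicit_router(l3_plugin, context, router_id, tenant_id):
    """Check that an explicitly supplied router exists and is owned
    by the given tenant, by filtering the lookup on both fields.

    An empty result means the router either does not exist or belongs
    to another tenant; either way the L3P create should be rejected.
    """
    routers = l3_plugin.get_routers(
        context,
        filters={'id': [router_id], 'tenant_id': [tenant_id]})
    if not routers:
        raise ValueError(
            "Router id %s does not belong to the tenant id %s"
            % (router_id, tenant_id))
    return routers[0]
```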
18:28:49 2014-12-18 10:28:35,249 INFO [neutron.api.v2.resource] create failed (client error): Error while creating L3 Policy : Router id a54b6371-c996-4ab7-8bae-e37fe1e797c6 does not belong to the tenant id test-tenant.
======================================================================
FAIL: gbp.neutron.tests.unit.services.grouppolicy.test_resource_mapping.TestL3Policy.test_explicit_router_lifecycle
18:29:23 rkukura: any chance that you can help KrishnaK with this one?
18:29:41 SumitNaiksatam: I can try
18:29:53 i think his changes are breaking the existing UTs
18:30:02 rkukura: thanks
18:30:14 rkukura: thx
18:30:16 KrishnaK: can you do a quick follow up after this meeting with rkukura?
18:30:27 Are these UTs that break related to shared L3Ps?
18:30:28 SumitNaiksatam: Thanks.
18:30:58 rkukura: I didn't see shared for that router
18:31:03 ok
18:31:22 Let me gather more debug data and email you or post in the review.
18:31:36 rkukura: i believe the UTs are breaking only in krishnak’s patch where he is trying to do a validation
18:32:06 #topic Pending feature merges
18:32:21 Hierarchical PRS composition for redirects: #link https://review.openstack.org/140286
18:32:35 some of us met this morning to review this
18:32:46 mageshgv: rkukura ivar, thanks for your time
18:33:46 if you are not familiar with the above feature, essentially what we are trying to do is allow the admin to introduce redirect constraints for a user’s PRS
18:34:18 an example being, the admin can introduce a firewall to inspect traffic
18:34:27 yapeng: hi, good to see you here
18:34:42 we will come to the ODL part in just a bit
18:34:55 mageshgv: thanks for working on the above feature, and to ivar for reviewing it
18:34:59 hi
18:35:17 #topic ODL and Vendor drivers
18:35:53 yapeng and yi and the rest of the ODL team are working furiously to get the ODL integration done
18:36:01 yapeng: any update for the team here?
18:36:15 i believe you are mostly done on the openstack policy driver side?
18:36:19 ok, single compute node is working now.
18:36:21 yes
18:36:28 yapeng: woohoo!
18:36:36 i think Yi is working on the test case part now.
18:36:41 yapeng: awesome
18:36:45 I am testing the multi-compute node setup.
18:36:52 yapeng: sweet
18:37:08 yapeng: what about the part to raise a not supported error for neutron operations?
18:37:30 I coded it up, will test this afternoon.
18:37:36 ah ok
18:37:40 if it works, I will submit my patch
18:37:44 is that a new patch, i dont see one posted yet
18:37:54 i have not posted it yet.
18:38:00 should be today
18:38:18 yapeng: ah ok, you might want to post it anyway as WIP, so its on people’s review radar
18:38:24 we are still targeting to get the ODL policy driver merged by tonight (PST)?
18:38:35 yapeng: thanks to you and yi for working on this
18:38:40 s3wong: tomorrow is fine
18:38:40 sure Sumit.
18:39:01 we can have a session tomorrow morning
18:39:10 to review the workflow and the code
18:39:40 there is a weekly ODL GBP status meeting tomorrow morning
18:39:51 s3wong: okay
18:40:09 i think keith can relay the update there
18:40:45 i dont believe ronak is here, but i provided some high level comments on his Nuage driver
18:40:48 s3wong: can you pls send me a link if there is one, if handy, otherwise i will find it
18:41:01 banix: a link to the patch?
18:41:16 banix: there are two patches
18:41:17 s3wong: no, the odl gbp call
18:41:20 banix: or to the webex for the weekly ODL GBP meeting?
18:41:41 #topic Packaging
18:41:45 rkukura: over to you
18:42:36 Ok, the openstack-neutron-gbp, python-gbpclient, and python-django-horizon-gbp packages are all officially in fedora, and are based on recent commits
18:42:48 rkukura: awesome!!
18:43:03 The openstack-heat-gbp package is still waiting for review, but that should be done tomorrow morning
18:43:09 rkukura: ok
18:43:15 rkukura: so are you able to deploy and test from the UI?
18:43:28 SumitNaiksatam: I have not been able to test the UI yet
18:43:34 rkukura: okay
18:44:11 So we need to do more testing, and there are a few small fixes to the packaging to include in the next round of updates
18:44:21 rkukura: some symlinks have to be created to the gbp horizon files, did you take that into consideration?
18:44:45 SumitNaiksatam: No, will need to add the symlinks to the packaging, if that is possible.
18:44:56 banix: I forwarded the ODL weekly meeting invite to your gmail account
18:45:10 s3wong: thank you
18:45:16 SumitNaiksatam: Is the plan to do a juno-rc1 label?
18:45:35 rkukura: see this for what i do in devstack: #link https://github.com/group-policy/devstack/commit/d52f4e7d24f2f733842593a26387ba569d7a85f5#diff-b75b6ca41d002e9482bd7ff12eda0875R136
18:46:06 rkukura: there wasnt a plan, but we can discuss
18:46:07 SumitNaiksatam: Thanks
18:46:16 rkukura: ideally we should have been in RC1 by now
18:46:45 SumitNaiksatam: OK, whether we do an RC1 or go right to the official release, do we have a process to create official tarballs on launchpad?
18:47:10 rkukura: i thought you were looking at the tarball part? ;-)
18:47:30 rkukura: i figured out adding the tags and creating the branch
18:47:41 i have tested adding tags
18:47:46 have created a branch
18:47:51 SumitNaiksatam: I’ll work with you on the tarball part
18:47:56 rkukura: okay
18:48:19 rkukura: i think uploading the tarball is straightforward, but for creating it i guess we have to follow the right process
18:48:33 rkukura: there are some release scripts available, but i havent tested those
18:48:38 We really should test all of this with an RC1, because its possible we’ll need to make changes to setup.cfg or something
18:48:45 rkukura: ok
18:48:57 lets discuss that as a follow up to this meeting
18:49:01 ok
18:49:18 please ping me if anyone else has thoughts or suggestions on how we want to go about this (else i will follow up with rkukura)
18:49:24 #topic Open Discussion
18:49:42 sorry we havent been able to discuss Kilo specs for a while now
18:49:52 The Fedora packages will then be the basis for RDO and RHEL-OSP packages
18:49:56 hopefully once we wrap up this release we can restart
18:50:09 rkukura: great, good to know the process there
18:50:56 rkukura: is this still the right page: #link https://openstack.redhat.com/Neutron_GBP
18:51:06 or is there more?
18:51:19 SumitNaiksatam: Yes, but there are some newer package versions on my fedorapeople account
18:52:22 rkukura: okay, perhaps when you get some time, can we have a new openstack wiki page for all the packages information, and that can in turn point to the above and other pages?
18:53:30 anyone else have anything else to bring up for discussion?
18:53:37 SumitNaiksatam: how's the status of the GBP heat part? I have not had a chance to integrate heat with OS and ODL GBP yet. If possible, I would like to give it a try.
18:53:40 SumitNaiksatam: That probably makes sense, and should include common stuff like configuring vendor drivers and a usage tutorial
18:53:43 oh btw, needless to say, no meeting next week
18:53:47 guys a quick and perhaps off-topic question
18:53:52 rkukura: perfect, thanks!
18:54:02 yapeng_: yes its functional
18:54:12 yapeng_: the same devstack you are using has heat support
18:54:26 banix: sure, go for it
18:54:40 are we still planning to have gbp under the networking program or is that not going to be possible
18:54:41 SumitNaiksatam: do you have some instructions on how to verify?
18:55:00 banix: we are currently a stackforge project
18:55:12 banix: we need to get input from the community on this point
18:55:24 banix: until then we continue to function as stackforge
18:55:45 banix: in general, the stackforge option is always ongoing since it allows us to experiment
18:55:51 SumitNaiksatam, banix: I think having something the community can actually use will help us get useful input.
18:56:07 SumitNaiksatam: we cannot be in stackforge and under networking? just verifying
18:56:15 banix: in parallel, we will continually evaluate in concert with the community as to what goes where
18:56:34 banix: we could
18:56:54 rkukura: i agree, and great effort to get here
18:57:18 banix: but again i think thats a community call
18:57:19 SumitNaiksatam: ok thanks
18:57:54 community includes us as well :-)
18:58:00 SumitNaiksatam: yes we can talk again; for a project to be under an openstack project, is there a process… going off topic so pls ignore if out of time
18:58:16 banix: np
18:58:40 banix: actually there was only the “integrated” criteria up until now
18:58:55 banix: so you were either integrated (like nova, neutron, etc.) or not
18:59:11 banix: and AFAIK, programs were integrated
18:59:23 however i believe those policies are being reworked
18:59:36 there is also a notion of a “defcore”
18:59:42 some kind of a validated core
18:59:54 yeah wondering for example where lbaas and the services will be, and if we can be somewhere similar … we’ll talk more later thanks
19:00:21 banix: for lbaas, the code was in neutron and it was split into a new repo
19:00:29 banix: but still the neutron program
19:00:46 so it did not require a new incubation process
19:00:58 ok we are at the hour
19:01:05 SumitNaiksatam: ok thx
19:01:10 happy holidays and happy new year to everyone in advance
19:01:17 see you next year, if not before
19:01:17 bye
19:01:22 thanks all
19:01:23 bye!
19:01:23 SumitNaiksatam: same to you and the rest of the team
19:01:24 bye
19:01:28 bye
19:01:29 bye
19:01:31 bye
19:01:36 #endmeeting