18:02:19 #startmeeting networking_policy
18:02:20 Meeting started Thu Jun 18 18:02:19 2015 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:23 The meeting name has been set to 'networking_policy'
18:02:39 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#June_18th_2015
18:02:51 #topic Bugs
18:03:17 #link https://bugs.launchpad.net/group-based-policy/+bug/1462024
18:03:18 Launchpad bug 1462024 in Group Based Policy "Concurrent create_policy_target_group call fails" [High,Confirmed] - Assigned to Robert Kukura (rkukura)
18:03:27 rkukura: did you have a chance to look at the above?
18:03:55 I have an idea of what's needed, but haven't looked yet
18:04:28 rkukura: ok great, does this need taskflow or is there a shorter-term fix?
18:05:11 shorter term, I hope - create the implicit resources in separate transactions and make sure to use the ones that get committed
18:05:23 rkukura: ah ok
18:05:25 whoever wins
18:05:56 rkukura: do you mean the DB creation?
18:06:02 rkukura: or the full plugin call?
18:06:29 I'm hoping it's possible to suspend the current transaction while making the plugin call that creates the implicit resource
18:07:21 rkukura: as opposed to a sub-transaction?
18:07:59 I think these are different - I need the create transaction to commit before the original transaction commits. Maybe subtransactions can do that, but I need to check.
18:08:13 rkukura: okay
18:08:34 This will of course introduce new failure modes - using TaskFlow will be a better long-term solution
18:09:20 rkukura: so would you say that the ETA for this fix will be a week's time?
18:09:45 Once I finish the nova node driver, I should be able to work on this, so hopefully
18:09:52 rkukura: thanks
18:10:43 any other bugs that we need to discuss here?
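A minimal sketch of the "create in a separate transaction and use whoever wins" approach discussed above. This is not the actual GBP plugin code: the table, helper name, and use of plain sqlite3 are stand-ins chosen so the pattern is self-contained; the real fix would go through SQLAlchemy sessions inside the plugin.

```python
# Hypothetical illustration: commit the implicit resource in its own
# short transaction; if a concurrent caller committed first, reuse the
# row that won the race instead of failing.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE default_l3_policy ("
    "  id INTEGER PRIMARY KEY,"
    "  tenant_id TEXT UNIQUE NOT NULL)")

def get_or_create_default(tenant_id):
    """Try to create the implicit resource; on a unique-constraint
    violation, a racer already committed it, so reuse that row."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO default_l3_policy (tenant_id) VALUES (?)",
                (tenant_id,))
    except sqlite3.IntegrityError:
        pass  # "whoever wins" - fall through and use the committed row
    return conn.execute(
        "SELECT id FROM default_l3_policy WHERE tenant_id = ?",
        (tenant_id,)).fetchone()[0]

first = get_or_create_default("tenant-a")
second = get_or_create_default("tenant-a")  # the losing caller's path
```

The unique constraint is what makes the race detectable at commit time; as rkukura notes, this still introduces new failure modes (an orphaned resource if the outer operation later fails), which is why TaskFlow is the better long-term answer.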
18:11:01 yes
18:11:35 I don't recall the bug number, but we've planned to use the new neutron subnet pool feature for our subnet allocation. Do we want to get that into our kilo release?
18:12:02 rkukura: yes
18:12:05 This one is likely to require schema changes, so it would be harder to do on a proper stable branch
18:12:13 #link https://bugs.launchpad.net/group-based-policy/+bug/1383947
18:12:14 Launchpad bug 1383947 in Group Based Policy "subnet mapping broken with overlapping ips" [High,Confirmed] - Assigned to Robert Kukura (rkukura)
18:12:27 rkukura: i believe that one ^^^?
18:13:00 that's related, but there is a different bug focused on performance and scalability issues with the current algorithm.
18:13:22 #link https://bugs.launchpad.net/group-based-policy/+bug/1382149
18:13:24 Launchpad bug 1382149 in Group Based Policy "subnet allocation does not scale" [Medium,Confirmed] - Assigned to Robert Kukura (rkukura)
18:13:33 rkukura: yeah, just saw that too
18:13:46 i was looking for the high priority ones
18:13:47 It's referenced in the one you linked
18:14:18 rkukura: would the usual process of migration scripts not work?
18:14:59 It would be a very difficult migration - the current allocations would need to somehow be transferred to a neutron subnet pool.
18:15:13 rkukura: ah, you mean data migration
18:15:23 Migrating just the schema would be easy - migrating the data would be practically impossible
18:15:43 And I'd prefer not to have to maintain both mechanisms (i.e. as a driver)
18:15:50 rkukura: agree
18:16:08 rkukura: perhaps we can do some investigation and see if it is going to affect any users
18:16:43 I'm thinking this might be higher priority to get in before kilo is released than the race condition we were just discussing, which should not have migration issues
18:16:55 rkukura: okay
18:17:37 I'll try to prototype it at least ASAP
18:17:48 rkukura: perhaps it would be good to have the patch posted on master, and we can subsequently make a call on how to handle the backport
18:17:54 just to make sure the neutron feature is really usable
18:18:01 rkukura: okay
18:18:26 when do we create the stable/kilo branch?
18:18:45 This cannot be backported to juno because subnet pools aren't in juno
18:18:52 rkukura: the original plan has been the end of this month
18:18:59 rkukura: ah okay
18:19:41 alternatively, we could push this to Liberty, which we already know will be incompatible with former releases
18:19:42 rkukura: perhaps we can investigate if we can make the logic configurable
18:20:24 SumitNaiksatam: That would be needed if we want to implement it after releasing our kilo, but I think we can get it in by then
18:21:26 rkukura: thanks for bringing that up
18:21:33 any other bugs that we need to discuss today?
18:22:32 #topic Functional/Integration tests
18:22:38 is jishnu here?
18:23:00 i believe he is doing some refactoring to make it easy for others to add tests
18:23:24 This would be useful for an end-to-end nova node driver test
18:23:49 rkukura: true
18:24:23 for now, a devstack exercise seems the only option, or am I missing something?
18:24:38 rkukura: that is the easier one to do
18:25:04 rkukura: and provides reasonable coverage, at least sanity testing the basic workflow
18:25:08 hi
18:25:18 SumitNaiksatam: I'll stick with that for now
18:25:18 Yi_: hi!
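Returning briefly to the scalability concern from the Bugs discussion (bug 1382149): the advantage of pool-based allocation is that the pool hands out the next free prefix rather than comparing a candidate CIDR against every existing subnet. A toy illustration of that abstraction (not neutron's implementation; class and names invented for this sketch):

```python
# Hypothetical stand-in for a neutron subnet pool: allocation is a
# lookup within one supernet, not a scan of all tenants' subnets.
import ipaddress

class SubnetPool:
    def __init__(self, cidr):
        self.supernet = ipaddress.ip_network(cidr)
        self.allocated = set()

    def allocate(self, prefixlen):
        """Return the first prefix of the requested length that does
        not overlap anything already handed out from this pool."""
        for candidate in self.supernet.subnets(new_prefix=prefixlen):
            if not any(candidate.overlaps(used) for used in self.allocated):
                self.allocated.add(candidate)
                return candidate
        raise ValueError("subnet pool exhausted")

pool = SubnetPool("10.0.0.0/8")
first = pool.allocate(24)
second = pool.allocate(24)  # guaranteed not to overlap the first
```

Migrating existing free-form allocations into such a pool is exactly the "practically impossible" data migration rkukura describes, since prior subnets were never constrained to one supernet.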
18:25:22 rkukura: ok
18:25:44 there were some failures on the integration job yesterday
18:26:13 they now seem transient, since after repeated rechecking all patches are passing (on the stable/juno patch)
18:26:29 ivar-lazzaro: i believe you didn't change anything on the patches, right?
18:26:38 hi
18:26:51 SumitNaiksatam: right
18:26:58 jishnu: hi
18:27:02 ivar-lazzaro: ok
18:27:20 SumitNaiksatam: I didn't notice they fixed themselves!
18:27:33 Our code is smarter than us. Scary
18:27:38 ivar-lazzaro: :-)
18:27:59 magesh rechecked everything in the morning, and they all seem to be passing now
18:28:24 i noticed that his patch was failing yesterday with a completely different error (related to passwords)
18:28:55 i did some refactoring on the gate hook script yesterday to make it more robust to errors
18:29:19 so the expectation is that even if stack.sh fails, the logs from the openstack services will now be archived
18:29:34 we will know when something fails if this works correctly ;-P
18:29:53 jishnu: joined in the nick of time :-)
18:29:55 SumitNaiksatam: nice
18:30:21 jishnu: i believe you are making updates to the gbpfunc test package?
18:30:29 any update/guidance for the team?
18:31:30 Yes, they are done... in that I have not added new testcases, only the structuring of the suite has been changed... I was planning to add two more testcases, but backed them out
18:32:00 jishnu: any ETA on a high-level document/wiki page on this?
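The gate-hook change mentioned above (archive service logs even when stack.sh fails) is a shell script in the GBP repo; the pattern it relies on can be sketched in Python with a try/finally. Everything here (function name, file names, directories) is hypothetical:

```python
# Illustration of the gate-hook idea: the archive step runs in a
# finally block, so logs survive even when the setup command fails.
import pathlib
import shutil
import subprocess
import tempfile

def run_and_archive(cmd, log_dir, archive_dir):
    """Run a setup command (think stack.sh); always copy service logs
    to the archive directory, whether or not the command succeeded."""
    try:
        return subprocess.run(cmd, check=False).returncode
    finally:
        dest = pathlib.Path(archive_dir)
        dest.mkdir(parents=True, exist_ok=True)
        for log in pathlib.Path(log_dir).glob("*.log"):
            shutil.copy(log, dest)

# Even a failing command leaves its logs behind for debugging.
log_dir = tempfile.mkdtemp()
archive_dir = tempfile.mkdtemp()
pathlib.Path(log_dir, "q-svc.log").write_text("neutron service log\n")
rc = run_and_archive(["false"], log_dir, archive_dir)
archived = sorted(p.name for p in pathlib.Path(archive_dir).glob("*.log"))
```

This is what makes the "we will know when something fails" expectation hold: the failure path and the success path both end at the archive step.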
18:32:07 jishnu: i mean how to add more tests
18:32:09 Wrt usage of the suite: currently we should do the following
18:33:02 jishnu: let's add the info here: #link https://wiki.openstack.org/wiki/GroupBasedPolicy/Testing
18:33:16 and also any documentation on developing the tests
18:34:13 @Sumit: For what you are asking, I need to put a wiki/doc in place, and in all likelihood I can do that around the last week of the month
18:35:06 jishnu: thanks, earlier would be better, but we trust that you will do whatever is possible
18:35:39 #topic Packaging
18:35:45 Surely Sumit..
18:35:49 jishnu: thanks
18:36:28 rkukura: so the issue regarding the gbpautomation milestone package in fedora
18:36:53 Nothing new on my end - I'd still like to update the master and kilo fedora/RDO packages, but it's lower priority than the coding for kilo
18:37:08 rkukura: did you get a chance to respond to the person who raised that issue?
18:37:25 SumitNaiksatam: Two issues really - broken dependencies, and moving our heat add-on into their repo
18:37:43 I haven't responded to Zane, but can if you'd like.
18:37:45 rkukura: the latter cannot be achieved retroactively for kilo
18:37:53 SumitNaiksatam: Good point
18:38:00 rkukura: i suspect he is going to send another email if we don't
18:38:27 rkukura: my suggestion was that we just pull that milestone package out, since i am not sure anyone is using it anymore; is that an option?
18:38:30 I'll let him know I'll fix the kilo packages ASAP, and we'll try to do the right thing on master
18:38:54 We haven't packaged the milestone yet - doing so should fix the first issue
18:39:24 rkukura: i mean the milestone k-2 package that is perhaps causing him pain
18:39:37 rkukura: that will buy us some time
18:39:41 I don't think we've ever updated to any kilo packages - still juno
18:40:09 I could be wrong, and will check
18:40:15 rkukura: okay, then i am confused why there would be an issue with juno packages
18:40:38 rkukura: our juno packages rely on the other openstack juno packages (including heat)
18:40:49 because fedora is still trying to build juno packages on fedora versions that have moved on to kilo
18:41:23 so there is no juno heat on fedora 21+, AFAIK, except in RDO, which is separate from the fedora packages
18:42:12 rkukura: ok, i did not realize that older releases are not maintained in fedora
18:42:33 fedora 20 is still getting updates, and is juno-based
18:43:34 rkukura: okay, so is it possible to make our juno package fedora 20-specific only, and yank it out of 21+?
18:43:45 you know best what to do here
18:44:08 we could yank it from 21+, but I'd prefer to just update the packaging to use our k-2 or k-3
18:44:27 rkukura: okay, so we can't use k-2 there since that would still rely on juno
18:44:33 we would need to use k-3
18:45:06 rkukura: you can check with amit if he can help you with this
18:45:23 I think I'm way off on the fedora versions - juno is either 21 or 22
18:45:30 he can build the k-3 packages and you can post them
18:45:37 rkukura: okay
18:45:48 The builds are done by fedora infrastructure
18:46:30 humans never touch the rpms, just the tarballs and spec from which they are automatically produced
18:46:43 rkukura: okay, so i guess we have to update the tarballs
18:47:02 We can certainly use the k-2 tarballs until k-3 are ready
18:47:36 wait, sorry - k-3 is ready, and we are working towards k-4
18:47:52 rkukura: yeah, k-3 has been ready since 5/26
18:48:03 ok, 12 mins left
18:48:14 rkukura: thanks for the discussion here
18:48:20 If I could spend a day on it, I'd update all 4 fedora packages to k-3 for the appropriate fedora versions
18:48:43 rkukura: ok, thanks
18:48:47 team logistics
18:48:54 #topic Core reviewers
18:49:23 after the summit i reached out to the then-current set of core reviewers to check if they were planning to be active with reviews going forward
18:50:35 based on the responses, the current set of core reviewers stands at: ivar-lazzaro, rkukura, magesh, myself, subramanyam, hemanth, s3wong
18:51:19 so kevin, ronak, and banix have moved out of the core reviewers
18:51:50 any concerns or thoughts on the above changes?
18:53:16 going forward, based on review contributions we will add more folks (and also the outgoing cores once they get more time to review)
18:53:53 so thanks to everyone for their effort on the reviews
18:54:25 ++
18:55:27 and especially to banix and ronak, who have been founding members of this team
18:55:28 +2
18:56:36 #topic Kilo features
18:56:40 running short on time
18:57:26 ivar-lazzaro: on #link https://review.openstack.org/179327 and #link https://review.openstack.org/166424 do you need to bring up anything, or is it a longer discussion (perhaps for #openstack-gbp)?
18:57:54 SumitNaiksatam: yeah, on the second one
18:58:05 I think it's time to converge on a decision
18:58:20 whether we want the services to be owned by the provider or by an admin tenant :)
18:58:25 "Admin tenant owns service chain instances"
18:58:38 we could use the NSP to regulate this behavior
18:58:45 but we need a default nonetheless
18:59:00 ivar-lazzaro: let's check with rukhsana and magesh since they had concerns
18:59:06 yes
18:59:10 i will take that as an action item
18:59:24 Their concern was about single-tenant services
18:59:45 and service sharding based on tenant
19:00:13 "service sharding" - that's a new one :-)
19:00:29 ivar-lazzaro: yeah, let's follow up
19:00:35 we are at the hour
19:00:42 ok
19:00:45 #topic Open Discussion
19:00:57 igordcard_: sorry, did not get time to discuss your patch
19:01:07 any update from mcard on the UI?
19:01:34 SumitNaiksatam: he's finishing writing his dissertation, so he's been moving slowly
19:01:41 igordcard_: how is the steering proceeding? Let me know if anything is blocking you in the refactor
19:01:52 SumitNaiksatam: but he started integrating with the GBP codebase
19:02:08 igordcard_: okay, let's follow up on the email thread
19:02:30 ivar-lazzaro: thanks, I do not yet have relevant updates on that :(
19:02:30 igordcard_: per ivar-lazzaro's comment, let us know if there is any blocker for you
19:02:41 okay, 3 minutes over
19:02:45 SumitNaiksatam: ivar-lazzaro: thanks
19:02:49 let's carry over the discussion to -gbp
19:02:52 thanks SumitNaiksatam!
19:02:53 thanks all for joining today
19:02:56 bye!
19:02:58 bye
19:03:00 bye
19:03:03 #endmeeting