18:02:19 <SumitNaiksatam> #startmeeting networking_policy
18:02:20 <openstack> Meeting started Thu Jun 18 18:02:19 2015 UTC and is due to finish in 60 minutes.  The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:21 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:23 <openstack> The meeting name has been set to 'networking_policy'
18:02:39 <SumitNaiksatam> #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#June_18th_2015
18:02:51 <SumitNaiksatam> #topic Bugs
18:03:17 <SumitNaiksatam> #link https://bugs.launchpad.net/group-based-policy/+bug/1462024
18:03:18 <openstack> Launchpad bug 1462024 in Group Based Policy "Concurrent create_policy_target_group call fails" [High,Confirmed] - Assigned to Robert Kukura (rkukura)
18:03:27 <SumitNaiksatam> rkukura: did you have a chance to look at the above?
18:03:55 <rkukura> I have an idea of what’s needed, but haven’t looked yet
18:04:28 <SumitNaiksatam> rkukura: ok great, does this need taskflow or is there a shorter term fix?
18:05:11 <rkukura> shorter term I hope - create the implicit resources in separate transactions and make sure to use the ones that get committed
18:05:23 <SumitNaiksatam> rkukura: ah ok
18:05:25 <rkukura> whoever wins
18:05:56 <ivar-lazzaro> rkukura: do you mean the DB creation?
18:06:02 <ivar-lazzaro> rkukura: or the full plugin call?
18:06:29 <rkukura> I’m hoping it's possible to suspend the current transaction while making the plugin call that creates the implicit resource
18:07:21 <SumitNaiksatam> rkukura: as opposed to a sub-transaction?
18:07:59 <rkukura> I think these are different - I need the create transaction to commit before the original transaction commits. Maybe subtransactions can do that, but I need to check.
18:08:13 <SumitNaiksatam> rkukura: okay
18:08:34 <rkukura> This will of course introduce new failure modes - using TaskFlow will be a better long term solution
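The shorter-term fix rkukura describes — create the implicit resource in its own transaction and use whichever row "wins" the race — can be sketched as follows. This is an illustration only, not actual GBP code: the table name, columns, and `get_or_create_implicit` helper are all made up, and an in-memory SQLite database stands in for the real Neutron/GBP schema.

```python
import sqlite3

# Illustrative sketch of the separate-transaction pattern: each caller tries to
# create the implicit resource in an independent transaction; the loser of a
# concurrent race reads back the row the winner committed.

DB = "file:implicit?mode=memory&cache=shared"  # shared in-memory DB for the demo

def setup():
    conn = sqlite3.connect(DB, uri=True)
    conn.execute("CREATE TABLE IF NOT EXISTS implicit_l2p "
                 "(tenant_id TEXT PRIMARY KEY, name TEXT)")
    conn.commit()
    return conn  # keep this open so the shared in-memory DB persists

def get_or_create_implicit(tenant_id):
    # A dedicated connection == a transaction independent of any caller's
    # outer transaction (the "suspend and commit separately" idea).
    conn = sqlite3.connect(DB, uri=True)
    try:
        conn.execute("INSERT INTO implicit_l2p VALUES (?, ?)",
                     (tenant_id, "default"))
        conn.commit()  # we won the race
    except sqlite3.IntegrityError:
        conn.rollback()  # a concurrent caller committed first; use its row
    row = conn.execute("SELECT tenant_id, name FROM implicit_l2p "
                       "WHERE tenant_id = ?", (tenant_id,)).fetchone()
    conn.close()
    return row

keeper = setup()
print(get_or_create_implicit("t1"))  # first caller creates the row
print(get_or_create_implicit("t1"))  # second caller reuses the committed row
```

As rkukura notes, this introduces new failure modes (the separately committed resource can leak if the outer operation later fails), which is why TaskFlow is the better long-term solution.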
18:09:20 <SumitNaiksatam> rkukura: so would you say that the ETA for this fix will be a week’s time?
18:09:45 <rkukura> Once I finish the nova node driver, I should be able to work on this, so hopefully
18:09:52 <SumitNaiksatam> rkukura: thanks
18:10:43 <SumitNaiksatam> any other bugs that we need to discuss here?
18:11:01 <rkukura> yes
18:11:35 <rkukura> I don’t recall the bug number, but we’ve planned to use the new neutron subnet pool feature for our subnet allocation. Do we want to get that in our kilo release?
18:12:02 <SumitNaiksatam> rkukura: yes
18:12:05 <rkukura> This one is likely to require schema changes, so would be harder to do on a proper stable branch
18:12:13 <SumitNaiksatam> #link https://bugs.launchpad.net/group-based-policy/+bug/1383947
18:12:14 <openstack> Launchpad bug 1383947 in Group Based Policy "subnet mapping broken with overlapping ips" [High,Confirmed] - Assigned to Robert Kukura (rkukura)
18:12:27 <SumitNaiksatam> rkukura: i believe that one ^^^?
18:13:00 <rkukura> that’s related, but there is a different bug focused on performance and scalability issues with the current algorithm.
18:13:22 <rkukura> #link https://bugs.launchpad.net/group-based-policy/+bug/1382149
18:13:24 <openstack> Launchpad bug 1382149 in Group Based Policy "subnet allocation does not scale" [Medium,Confirmed] - Assigned to Robert Kukura (rkukura)
18:13:33 <SumitNaiksatam> rkukura: yeah just saw that too
18:13:46 <SumitNaiksatam> i was looking for the high priority ones
18:13:47 <rkukura> It's referenced in the one you linked
18:14:18 <SumitNaiksatam> rkukura: would the usual process of migration scripts not work?
18:14:59 <rkukura> It would be a very difficult migration - the current allocations would need to somehow be transferred to a neutron subnet-pool.
18:15:13 <SumitNaiksatam> rkukura: ah you mean data migration
18:15:23 <rkukura> Migrating just schema would be easy - migrating data would be practically impossible
18:15:43 <rkukura> And I’d prefer not to have to maintain both mechanisms (i.e. as a driver)
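For context on the feature being discussed: moving GBP's subnet allocation onto Neutron subnet pools would mean creating a pool and then creating subnets without an explicit CIDR, letting Neutron allocate from the pool. The sketch below only builds the Neutron v2 API request bodies; all IDs and names are made up, and in real code these would be sent via python-neutronclient rather than printed.

```python
# Hedged sketch of the Neutron v2 request bodies a subnet-pool-based
# allocation would use. Field names follow the Neutron subnetpools API;
# the specific values and helper functions are illustrative only.

def subnetpool_body(name, prefixes, default_prefixlen=24):
    # POST /v2.0/subnetpools -- e.g. one pool per L3 policy
    return {"subnetpool": {
        "name": name,
        "prefixes": prefixes,            # CIDRs Neutron may carve subnets from
        "default_prefixlen": default_prefixlen,
    }}

def subnet_from_pool_body(network_id, subnetpool_id, ip_version=4):
    # POST /v2.0/subnets -- note: no explicit "cidr" key; Neutron allocates
    # one from the pool, replacing GBP's own allocation algorithm
    return {"subnet": {
        "network_id": network_id,
        "subnetpool_id": subnetpool_id,
        "ip_version": ip_version,
    }}

pool = subnetpool_body("l3p-default-pool", ["10.0.0.0/8"])
subnet = subnet_from_pool_body("net-uuid", "pool-uuid")
print(pool)
print(subnet)
```

This also makes the migration difficulty concrete: existing GBP-allocated CIDRs have no record in any Neutron subnet pool, so a data migration would have to reconstruct pool state from allocations that Neutron never made.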
18:15:50 <SumitNaiksatam> rkukura: agree
18:16:08 <SumitNaiksatam> rkukura: perhaps we can do some investigation and see if it is going to affect any users
18:16:43 <rkukura> I’m thinking this might be higher priority to get in before kilo is released than the race condition we were just discussing, which should not have migration issues
18:16:55 <SumitNaiksatam> rkukura: okay
18:17:37 <rkukura> I’ll try to prototype it at least ASAP
18:17:48 <SumitNaiksatam> rkukura: perhaps good to have the patch in the master posted, and we can subsequently make a call on how to handle the backport
18:17:54 <rkukura> just to make sure the neutron feature is really usable
18:18:01 <SumitNaiksatam> rkukura: okay
18:18:26 <rkukura> when do we create the stable/kilo branch?
18:18:45 <rkukura> This cannot be backported to juno because the subnet-pools aren’t in juno
18:18:52 <SumitNaiksatam> rkukura: the original plan has been the end of this month
18:18:59 <SumitNaiksatam> rkukura: ah okay
18:19:41 <ivar-lazzaro> alternatively we could push this to Liberty, which we already know will be incompatible with former releases
18:19:42 <SumitNaiksatam> rkukura: perhaps we can investigate if we can make the logic configurable
18:20:24 <rkukura> SumitNaiksatam: That would be needed if we want to implement after releasing our kilo, but I think we can get it in by then
18:21:26 <SumitNaiksatam> rkukura: thanks for bringing that up
18:21:33 <SumitNaiksatam> any other bugs that we need to discuss today
18:22:32 <SumitNaiksatam> #topic Functional/Integration tests
18:22:38 <SumitNaiksatam> is jishnu here?
18:23:00 <SumitNaiksatam> i believe he is doing some refactoring for making it easy for others to add tests
18:23:24 <rkukura> This would be useful for an end-to-end nova node driver test
18:23:49 <SumitNaiksatam> rkukura: true
18:24:23 <rkukura> for now, a devstack exercise seems the only option, or am I missing something?
18:24:38 <SumitNaiksatam> rkukura: that is the easier one to do
18:25:04 <SumitNaiksatam> rkukura: and provides reasonable coverage at least with sanity testing the basic workflow
18:25:08 <Yi_> hi
18:25:18 <rkukura> SumitNaiksatam: I’ll stick with that for now
18:25:18 <SumitNaiksatam> Yi_: hi!
18:25:22 <SumitNaiksatam> rkukura: ok
18:25:44 <SumitNaiksatam> there were some failures on the integration job yesterday
18:26:13 <SumitNaiksatam> what now seems like a transient issue, since after repeated rechecks all patches are passing (on the stable/juno patch)
18:26:29 <SumitNaiksatam> ivar-lazzaro: i believe you didn't change anything on the patches, right?
18:26:38 <jishnu> hi
18:26:51 <ivar-lazzaro> SumitNaiksatam: right
18:26:58 <SumitNaiksatam> jishnu: hi
18:27:02 <SumitNaiksatam> ivar-lazzaro: ok
18:27:20 <ivar-lazzaro> SumitNaiksatam: I didn't notice they fixed themselves!
18:27:33 <ivar-lazzaro> Our code is smarter than us. Scary
18:27:38 <SumitNaiksatam> ivar-lazzaro: :-)
18:27:59 <SumitNaiksatam> magesh rechecked everything in the morning, and they all seem to be passing now
18:28:24 <SumitNaiksatam> i noticed that his patch was failing yesterday on a completely different error (related to passwords)
18:28:55 <SumitNaiksatam> i did some refactoring on the gate hook script yesterday to make it more robust to errors
18:29:19 <SumitNaiksatam> so the expectation is that even if stack.sh fails, the logs from the openstack services will now be archived
18:29:34 <SumitNaiksatam> we will know when something fails if this works correctly ;-P
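The gate-hook change described above — archive the OpenStack service logs even when stack.sh itself fails — is the classic "cleanup in finally" pattern. A minimal sketch, with all paths and names hypothetical (the real hook is a shell script in the GBP gate, not this code):

```python
# Sketch of the "always archive logs" pattern: the archiving step runs in a
# finally block, so it happens whether or not the setup step (stack.sh in the
# real gate hook) succeeds, keeping failures debuggable.
import shutil
import subprocess
import tempfile
from pathlib import Path

def run_gate(setup_cmd, log_dir, archive_dir):
    Path(archive_dir).mkdir(parents=True, exist_ok=True)
    try:
        subprocess.run(setup_cmd, check=True)  # raises if setup fails
        return True
    except subprocess.CalledProcessError:
        return False
    finally:
        # Runs on success AND on failure: copy whatever logs exist.
        for log in Path(log_dir).glob("*.log"):
            shutil.copy(log, archive_dir)

# demo: a failing "stack.sh" (simulated by `false`) still leaves logs archived
work = Path(tempfile.mkdtemp())
(work / "logs").mkdir()
(work / "logs" / "q-svc.log").write_text("neutron started\n")
ok = run_gate(["false"], work / "logs", work / "archive")
print(ok, sorted(p.name for p in (work / "archive").iterdir()))
```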
18:29:53 <SumitNaiksatam> jishnu: joined in the nick of time :-)
18:29:55 <rkukura> SumitNaiksatam: nice
18:30:21 <SumitNaiksatam> jishnu:  i believe you are making updates to the gbpfunc test package?
18:30:29 <SumitNaiksatam> any update/guidance for the team?
18:31:30 <jishnu> Yes, they are done... I have not added new testcases, only the structure of the suite has changed... I was planning to add two more testcases, but backed them out
18:32:00 <SumitNaiksatam> jishnu: any ETA on some high level document/wiki page on this?
18:32:07 <SumitNaiksatam> jishnu: i mean how to add more tests
18:32:09 <jishnu> Wrt usage of the suite: Currently we should do the following
18:33:02 <SumitNaiksatam> jishnu: lets add the info here: #link https://wiki.openstack.org/wiki/GroupBasedPolicy/Testing
18:33:16 <SumitNaiksatam> and also any documentation on developing the tests
18:34:13 <jishnu> @Sumit: For what you are asking I need to put a wiki/doc in place, and in all likelihood I can do that around the last week of the month
18:35:06 <SumitNaiksatam> jishnu: thanks, earlier would be better, but we trust that you will do whatever is possible
18:35:39 <SumitNaiksatam> #topic Packaging
18:35:45 <jishnu> Surely Sumit..
18:35:49 <SumitNaiksatam> jishnu: thanks
18:36:28 <SumitNaiksatam> rkukura: so the issue regarding the gbpautomation milestone package in fedora
18:36:53 <rkukura> Nothing new on my end - I’d still like to update master and kilo fedora/RDO packages, but lower priority than the coding for kilo
18:37:08 <SumitNaiksatam> rkukura: did you get a chance to respond to the person who raised that issue?
18:37:25 <rkukura> SumitNaiksatam: Two issues really - broken dependencies, and moving our heat add-on into their repo
18:37:43 <rkukura> I haven’t responded to Zane, but can if you’d like.
18:37:45 <SumitNaiksatam> rkukura: the latter cannot be achieved retroactively for kilo
18:37:53 <rkukura> SumitNaiksatam: Good point
18:38:00 <SumitNaiksatam> rkukura: i suspect he is going to send another email if we dont
18:38:27 <SumitNaiksatam> rkukura: my suggestion was that we just pull that milestone package out, since i am not sure anyone is using it anymore, is that an option?
18:38:30 <rkukura> I’ll let him know I’ll fix the kilo packages ASAP, and we’ll try to do the right thing on master
18:38:54 <rkukura> We haven’t packaged the milestone yet - doing so should fix the first issue
18:39:24 <SumitNaiksatam> rkukura: i mean the milestone k2 package that is perhaps causing him pain
18:39:37 <SumitNaiksatam> rkukura: that will buy us some time
18:39:41 <rkukura> I don’t think we’ve ever updated to any kilo packages - still juno
18:40:09 <rkukura> I could be wrong, and will check
18:40:15 <SumitNaiksatam> rkukura: okay, then i am confused about why there would be an issue with juno packages
18:40:38 <SumitNaiksatam> rkukura: our juno packages rely on the other openstack juno packages (including heat)
18:40:49 <rkukura> because fedora is still trying to build out juno packages on fedora versions that have moved on to kilo
18:41:23 <rkukura> so there is no juno heat on fedora 21+, AFAIK, except in RDO, which is separate from the fedora packages
18:42:12 <SumitNaiksatam> rkukura: ok, i did not realize that older releases are not maintained in fedora
18:42:33 <rkukura> fedora 20 is still getting updates, and is juno-based
18:43:34 <SumitNaiksatam> rkukura: okay, so is it possible to make our juno package only fedora 20 specific, and yank it out of the 21+?
18:43:45 <SumitNaiksatam> you know what to do best here
18:44:08 <rkukura> we could yank from 21+, but I’d prefer to just update the packaging to use our k-2 or k-3
18:44:27 <SumitNaiksatam> rkukura: okay, so we can't use k-2 there since that would still rely on juno
18:44:33 <SumitNaiksatam> we would need to use k-3
18:45:06 <SumitNaiksatam> rkukura: you can check with amit if he can help you with this
18:45:23 <rkukura> I think I’m way off on the fedora versions - juno is either 21 or 22
18:45:30 <SumitNaiksatam> he can build the k-3 packages and you can post them
18:45:37 <SumitNaiksatam> rkukura: okay
18:45:48 <rkukura> The builds are done by fedora infrastructure
18:46:30 <rkukura> humans never touch the rpms, just the tarballs and spec from which they are automatically produced
18:46:43 <SumitNaiksatam> rkukura: okay, so i guess we have to update the tarballs
18:47:02 <rkukura> We can certainly use the k-2 tarballs until k-3 are ready
18:47:36 <rkukura> wait, sorry - k-3 are ready, and we are working towards k-4
18:47:52 <SumitNaiksatam> rkukura: yeah k3 has been ready since 5/26
18:48:03 <SumitNaiksatam> ok 12 mins left
18:48:14 <SumitNaiksatam> rkukura: thanks for the discussion here
18:48:20 <rkukura> If I could spend a day on it, I’d update all 4 fedora packages to k-3 for the appropriate fedora versions
18:48:43 <SumitNaiksatam> rkukura: ok thanks
18:48:47 <SumitNaiksatam> team logistics
18:48:54 <SumitNaiksatam> #topic Core reviewers
18:49:23 <SumitNaiksatam> after the summit i reached out to the then set of core reviewers to check if they were planning to be active with the reviews going forward
18:50:35 <SumitNaiksatam> based on the responses, the current set of core reviewers stands at: ivar-lazzaro, rkukura, magesh, myself, subramanyam, hemanth, s3wong
18:51:19 <SumitNaiksatam> so kevin, ronak, and banix have moved out of the core reviewers
18:51:50 <SumitNaiksatam> any concerns or thoughts on the above changes?
18:53:16 <SumitNaiksatam> going forward based on the review contributions we will add more folks (and also the outgoing cores once they get more time to review)
18:53:53 <SumitNaiksatam> so thanks to everyone for their effort on the reviews
18:54:25 <ivar-lazzaro> ++
18:55:27 <SumitNaiksatam> and especially to banix and ronak who have been the founding members on this team
18:55:28 <rkukura> +2
18:56:36 <SumitNaiksatam> #topic Kilo features
18:56:40 <SumitNaiksatam> running short on time
18:57:26 <SumitNaiksatam> ivar-lazzaro: on #link https://review.openstack.org/179327 and #link https://review.openstack.org/166424 do you need to bring up anything, or is it a longer discussion (perhaps for #openstack-gbp)
18:57:28 <SumitNaiksatam> ?
18:57:54 <ivar-lazzaro> SumitNaiksatam: yeah on the second one
18:58:05 <ivar-lazzaro> I think it's time to converge on a decision
18:58:20 <ivar-lazzaro> whether we want the services to be owned by the provider or by an admin tenant :)
18:58:25 <SumitNaiksatam> “Admin tenant owns service chain instances”
18:58:38 <ivar-lazzaro> we could use the NSP to regulate this behavior
18:58:45 <ivar-lazzaro> but we need a default nonetheless
18:59:00 <SumitNaiksatam> ivar-lazzaro: lets check with rukhsana, magesh since they had concerns
18:59:06 <ivar-lazzaro> yes
18:59:10 <SumitNaiksatam> i will take that as an action item
18:59:24 <ivar-lazzaro> Their concern was about single tenant services
18:59:45 <ivar-lazzaro> and service sharding based on tenant
19:00:13 <SumitNaiksatam> “service sharding” thats a new one :-)
19:00:29 <SumitNaiksatam> ivar-lazzaro: yeah, lets follow up
19:00:35 <SumitNaiksatam> we are the hours
19:00:39 <SumitNaiksatam> * hour
19:00:42 <ivar-lazzaro> ok
19:00:45 <SumitNaiksatam> #topic Open Discussion
19:00:57 <SumitNaiksatam> igordcard_: sorry did not get time to discuss your patch
19:01:07 <SumitNaiksatam> any update from mcard on the UI?
19:01:34 <igordcard_> SumitNaiksatam: he's finishing writing his dissertation, so he's been moving slowly
19:01:41 <ivar-lazzaro> igordcard_: how is the steering proceeding? Let me know if anything is blocking you in the refactor
19:01:52 <igordcard_> SumitNaiksatam: but he started integrating with the GBP codebase
19:02:08 <SumitNaiksatam> igordcard_: okay, lets followup on the email thread
19:02:30 <igordcard_> ivar-lazzaro: thanks, I do not yet have relevant updates on that :(
19:02:30 <SumitNaiksatam> igordcard_: per ivar-lazzaro’s comment, let us know if there is any blocker for you
19:02:41 <SumitNaiksatam> okay 3 minutes over
19:02:45 <igordcard_> SumitNaiksatam: ivar-lazzaro: thanks
19:02:49 <SumitNaiksatam> lets carry over the discussion to -gbp
19:02:52 <rkukura> thanks SumitNaiksatam!
19:02:53 <SumitNaiksatam> thanks all for joining today
19:02:56 <SumitNaiksatam> bye!
19:02:58 <rkukura> bye
19:03:00 <Yi_> bye
19:03:03 <SumitNaiksatam> #endmeeting