21:00:33 <mikal> #startmeeting nova
21:00:33 <openstack> Meeting started Thu Mar 12 21:00:33 2015 UTC and is due to finish in 60 minutes.  The chair is mikal. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:34 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:36 <openstack> The meeting name has been set to 'nova'
21:00:37 <bauzas> _o
21:00:40 <edleafe> \o
21:00:41 <mikal> Hey everyone
21:00:41 <bauzas> \o
21:00:42 <melwitt> o/
21:00:53 <mriedem> o/
21:00:53 <mikal> So, as usual the agenda is at https://wiki.openstack.org/wiki/Meetings/Nova
21:00:58 <mikal> Let's get started
21:01:06 <mikal> #topic Kilo release status
21:01:17 <mikal> So, feature proposal freeze has now hit
21:01:18 <sdague> o/
21:01:26 <mikal> This shouldn't be a big surprise as it's the same process as Juno
21:01:37 <mikal> We have about a week before we hit dependency freeze and string freeze
21:01:49 <mikal> So we need to make sure things which change strings or deps land before then
21:02:14 <mikal> The same applies for features in general, but especially for things which change deps and translatable strings
21:02:17 <mriedem> are we cutting a novaclient release before dep freeze?
21:02:31 <mikal> mriedem: has there been change in the client to justify it?
21:02:36 <mriedem> idk
21:02:39 <mriedem> :)
21:02:41 <mikal> mriedem: I'm not opposed
21:02:51 <alaski> o/
21:02:52 <mriedem> melwitt: had a bug on the agenda for novaclient
21:02:54 <mikal> Just not sure if anything worth the work has happened there in the last couple of weeks
21:02:58 <melwitt> there's a bug in the client that I put under Bugs on the agenda
21:03:06 <sdague> we should cut a release, if nothing else there have been some docs changes that make our examples actually work
21:03:20 <mikal> Ok, if there's a need we should do it
21:03:26 <melwitt> but yes, there have been bug fixes, doc fixes
21:03:41 <sdague> nothing makes me more ragey than code examples that don't work :)
21:03:41 <mikal> melwitt: should we delay that release until your bug fix lands?
21:04:35 <mikal> So, the bug melwitt is referring to is 1431154
21:04:40 <melwitt> mikal: it's been broken for a while, the volumes.* python apis are broken. it would be best if we fix that first I think
21:04:44 <mikal> Sounds like a big deal to me, and is marked critical
21:04:58 <mikal> Well, fixing a critical should trigger a release anyways
21:05:06 <mikal> So that gives us two excuses to do a release next week
21:05:22 <mikal> melwitt: the bug doesn't have a code review associated with it
21:05:30 <mikal> melwitt: and no assignee
21:05:49 <sdague> melwitt: they aren't exactly broken, they route differently. fixing that should be careful because it would be easy to regress the cli in the process
21:05:57 <tonyb> Do we also need to land the microversions support in novaclient?
21:06:17 <claudiub> o/
21:06:25 <melwitt> mikal: yeah, I opened it last night when I was reminded it was broken. and by broken I mean they 404 because they route to the wrong place
21:06:43 <sdague> tonyb: on the api side, yes. it seems like the cli should use best available.
21:06:50 <mikal> tonyb: I don't see any client reviews in the priority etherpad, is that an oversight?
21:07:15 <melwitt> and to fix it, I wasn't sure if I should just fix the urls to point at the nova volumes api proxy or to make them use the same magic the cli does to make it go direct to cinder
21:07:29 <sdague> mikal: it's an oversight
21:07:39 <mikal> melwitt: not all deployments expose cinder endpoints...
21:07:42 <sdague> melwitt: honestly, I vote magic
21:08:00 <sdague> mikal: then nova volume-list doesn't work for them
21:08:04 <mikal> sdague: we should fix that
21:08:05 <dims> list of unreleased changes in python-novaclient - http://paste.openstack.org/show/191940/
21:08:06 <sdague> and hasn't for 2 years
21:08:29 <sdague> this behavior has existed like this for > 2 years
21:08:34 <mikal> #action Do a python-novaclient release next week, but fix 1431154 first
21:08:38 <sdague> changing that... is something that requires a lot of thought
21:08:56 <mikal> Oh, I am ok with providing incentive to deployers to expose cinder
21:09:15 <mikal> So, we're decided we should do a release next week
21:09:21 <sdague> ++
21:09:29 <mikal> I can ping openstack-dev and ask for a list of the microversion patches we need to land first
21:09:30 <melwitt> yeah, magic would at least make volumes.list() do the same thing that 'nova volume-list' does
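The two fix options melwitt is weighing — repointing the broken URLs at nova's volume proxy, versus re-routing to cinder the way the `nova volume-list` CLI already does — can be sketched in plain Python. This is an illustration only: the helper function and URL shapes are assumptions made up for this sketch, not actual python-novaclient code.

```python
# Illustrative sketch only -- not python-novaclient code. The function name
# and URL paths are assumptions chosen to contrast the two proposed fixes.

def volumes_url(strategy, project_id):
    """Return the URL a volumes.list() call would hit under each fix."""
    if strategy == "nova-proxy":
        # Option 1: fix the hardcoded paths so they point at nova's
        # volume proxy API instead of 404ing.
        return "/v2/%s/os-volumes" % project_id
    elif strategy == "cinder-magic":
        # Option 2: re-route to the cinder endpoint from the service
        # catalog, mirroring what the 'nova volume-list' CLI does.
        return "/v2/%s/volumes" % project_id
    raise ValueError("unknown strategy: %s" % strategy)

print(volumes_url("nova-proxy", "demo"))    # /v2/demo/os-volumes
print(volumes_url("cinder-magic", "demo"))  # /v2/demo/volumes
```

The meeting leans toward option 2 ("magic"), with the caveat sdague raises that not all deployments expose a cinder endpoint.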
21:09:54 <mikal> #action Mikal to chase microversion changes for the client before release
21:10:13 <mikal> Is there anything else we need to remember to do for novaclient before release?
21:10:18 <melwitt> I'll add the novaclient microversion review to the etherpad now, I put it under Open Discussion for today
21:10:32 <dansmith> melwitt: I just moved it up to priorities
21:10:34 <mikal> melwitt: there's only one?
21:10:50 <dansmith> melwitt: because: microversions, hope that's okay
21:10:57 <dims> microversion review - https://review.openstack.org/#/c/152569/
21:11:06 <melwitt> mikal: I think so. I'm not a microversion expert though, so I didn't know if it does everything needed to support
21:11:20 <mikal> Ok, sounds like I am off the hook for that email then
21:11:37 <mikal> So, I feel like we've got novaclient under control
21:11:44 <melwitt> dansmith: cool, thanks!
21:11:45 <mikal> Is there anything else about the release status we need to cover?
21:11:56 <mikal> There was a thread overnight clarifying that you can still fix bugs
21:12:00 <mikal> Even after string freeze
21:12:05 <mriedem> wut?!
21:12:07 <mikal> (as long as they don't change strings)
21:12:20 <dansmith> what if the bug is in the string?
21:12:22 <mikal> mriedem: is that sarcasm?
21:12:26 <mikal> Sigh
21:12:32 <mriedem> mikal: yes
21:12:37 <mikal> dansmith: IIRC in the past we've punted on those after freeze
21:12:42 <dansmith> mikal: sarcasm
21:12:44 <bauzas> mikal: code freeze is on Thursday? I remember some things happening on Tuesdays at FF
21:12:44 <mikal> mriedem: please be gentle, still on my first coffee
21:12:52 <dansmith> or just "dansmith being an ass"
21:12:55 <mikal> Heh
21:13:12 <mikal> bauzas: dunno, that date was picked by Mr TTX
21:13:19 <dansmith> why is he yelling?
21:13:25 <mikal> Have you met him?
21:13:29 <mikal> Ok, moving on
21:13:35 <mikal> #topic Priorities
21:13:57 <mikal> So, we've discovered we care about the microversions stuff in novaclient, so I think that's covered apart from remembering to review it?
21:14:14 <mikal> Looking at the tracking etherpad there is a lot of strikethrough
21:14:19 <mikal> Which I personally find distracting
21:14:24 <jogo> mikal: me too
21:14:28 <mikal> Are people ok with me doing a cleanup to remove complete things?
21:14:32 <bauzas> +1
21:14:42 <edleafe> +1
21:14:42 <sdague> yeh, lets delete
21:14:48 <dims> +1
21:14:54 <dansmith> sure
21:14:55 <sdague> keep the strikethroughs for the bug list at the bottom though
21:14:56 <mikal> Cool
21:15:02 <dansmith> although it makes it look like we've done a lot :)
21:15:03 <sdague> that's good for metrics on merged stuff
21:15:04 <dansmith> so one thing on priorities
21:15:08 <mikal> Yeah, those are after the open ones so that's ok with me
21:15:18 <mikal> dansmith: do tell
21:15:18 <dims> sdague: we switched it into a counter
21:15:22 <sdague> dims: oh, ok
21:15:41 <dansmith> I'm halfway done with my context-ectomy set, which is all merged, but I have a couple more I need to put up yet
21:15:55 <dansmith> it's not really a feature, but also not a bug, it's cleanup
21:16:05 <dansmith> however, I want to get them all squared away in kilo,
21:16:22 <dansmith> because in lemming when we move to oslo.versionedobjects, the syntax will change slightly
21:16:22 <mikal> dansmith: will the remaining ones be invasive?
21:16:34 <dansmith> and if we clean up in kilo then backports will be easier, and kilo will be fully clean
21:16:40 <dansmith> nope, it's almost removing dead code
21:16:51 <mikal> I think its ok to keep going with those then
21:16:56 <dansmith> I made it dead code, now it needs cleanup
21:16:59 <sdague> yep, +1
21:17:00 <mikal> It feels like a bug fix to me
21:17:02 <dansmith> okay, cool, just wanted to make sure we were okay with that
21:17:06 <mikal> Even if it's not a bug
21:17:08 <dansmith> mikal: it kinda is, but..
21:17:16 <dansmith> anyway, sounds uncontroversial
21:17:29 <bauzas> mikal: maybe it would be worth mentioning a FeatureFreezeException process before FF ?
21:17:32 <mikal> Yeah, in the same way that I am ok with continued cleanup of API unit tests
21:17:43 <mriedem> FFE before FF?
21:17:45 <mriedem> we've had 2 FFEs
21:18:02 <mikal> bauzas: you mean for priority features?
21:18:05 <bauzas> mriedem: nah, explain the process, not acting on it
21:18:09 <dansmith> FFEs at this point would only be for priorities, right?
21:18:10 <sdague> well, I thought we were always good for test work until we go into deep freeze
21:18:12 <bauzas> mikal: I mean for anyone
21:18:21 <mikal> bauzas: that's been and gone
21:18:26 <mriedem> there shouldn't be FFEs after 3/19
21:18:31 <mikal> bauzas: the non-priority FFE process was weeks ago
21:18:38 <bauzas> oh ok
21:18:53 <mikal> :)
21:19:06 <mikal> OK, back to priorities
21:19:16 <mikal> Do we have any that we think are at risk of not finishing their kilo work?
21:19:29 <dims> do we add any priorities based on the ops meetup feedback?
21:19:39 <bauzas> some have already been deferred thanks to john
21:19:41 <mikal> dims: not for kilo I wouldn't think
21:19:48 <mriedem> dims: not unless they are bugs
21:19:57 <dims> cool mikal mriedem
21:19:59 <mriedem> that's business as usual
21:20:26 <mikal> Ok, it sounds like we can move on again
21:20:32 <mikal> #topic Gate status
21:20:37 <mriedem> a-ok
21:20:40 <mikal> So, I've been travelling this week so a bit distracted
21:20:43 <mikal> Tell me of the gates
21:20:46 <mriedem> ^
21:20:50 <mikal> I am disappointed nothing blew up
21:20:53 <sdague> dims: I think the only kilo priority is the scheduler reporting bug, which I'll work on tomorrow
21:20:58 <mriedem> things blow up, just not our problem right now
21:21:04 <mikal> Ok, good
21:21:05 <dansmith> seems to be pretty good aside from being jammed full of 300 things racing towards freeze
21:21:07 <dansmith> but stable
21:21:11 <mikal> If everything was really ok I'd be confused
21:21:22 <dims> sdague: there was a quota one that jogo dug up too
21:21:24 <bauzas> sdague: which bug ?
21:21:26 <mikal> So, on that gate wedging thing...
21:21:33 <sdague> bauzas: the one we talked about this morning
21:21:35 <anteaya> oh infra is having issues with hosts, does that make you feel better?
21:21:35 <dansmith> sdague: that's a bug, not a new priority though right?
21:21:38 <bauzas> sdague: you mean the one we discussed ?
21:21:38 <bauzas> ok
21:21:44 <sdague> dansmith: correct
21:21:45 <mikal> If I knew what patches to babysit through merge I'd be happy to sneak things in over the weekend while everyone sleeps
21:21:57 <mriedem> if it's wedging the gate, i think that's covered
21:21:59 <mikal> So, if you're sitting on an approval that is in gate hell, ping me and I can keep an eye on it
21:22:01 <dansmith> and the quota thing is a can of worms
21:22:23 <mikal> anteaya: it does, thanks
21:22:30 <anteaya> mikal: happy to help
21:22:38 <sdague> yeh, the quotas thing is mostly figuring out a test strategy I think
21:22:47 <anteaya> and ++ for the nocturnal +A's they are the best
21:22:49 <sdague> at least in the short term
21:22:53 <mikal> So yeah, if you have an approved patch you deeply care about merging, email me and I'll keep an eye on it
21:23:03 <mikal> Moving on again I suspect
21:23:04 <dansmith> I think jogo was thinking on that yesterday
21:23:19 <mikal> #topic Bugs
21:23:28 <mikal> Nova is finally bug free, yes?
21:23:38 <jogo> dansmith: yup, right now trying to reproduce the issues
21:23:47 <dansmith> jogo: cool
21:24:12 <mriedem> so on bugs
21:24:14 <mriedem> this came up last week https://bugs.launchpad.net/nova/+bug/1323658
21:24:16 <openstack> Launchpad bug 1323658 in OpenStack Compute (nova) "Nova resize/restart results in guest ending up in inconsistent state with Neutron" [Critical,Confirmed]
21:24:16 <dims> mikal: i am trying to keep New/Undecided around 30
21:24:17 <mikal> This cpu time thing looks valid, but has a proposed fix
21:24:33 <mriedem> i put up several changes to help debug which are merged, and a revert of the skipped tests in tempest
21:24:38 <mriedem> after like 9 rechecks those haven't failed
21:24:42 <mriedem> so i'm not sure that's critical for k-3
21:24:54 <mikal> mriedem: this is for 1323658, yes?
21:24:59 <mriedem> yes
21:25:08 <mriedem> sdague: which reminds me, wanna drop the -2 here? https://review.openstack.org/#/c/161768/
21:25:15 <mikal> mriedem: so you're saying it might not be a critical bug any more?
21:25:24 <mriedem> i think that's exactly what i said :)
21:25:40 <sdague> mriedem: done
21:25:43 <melwitt> okay, thanks. I put that bug there because I was concerned if it was a serious issue we needed to workaround asap
21:25:47 <mikal> mriedem: can you uncritical it in LP then please?
21:25:50 <mriedem> sure
21:25:53 <mikal> Ta
21:26:23 <mriedem> otherwise https://launchpad.net/nova/+milestone/kilo-3
21:26:25 <sdague> mriedem: though you failed unit tests in tempest now
21:26:39 <mriedem> sdague: that's a thing
21:26:42 <mriedem> known issue
21:27:27 <mriedem> i'm not sure why https://bugs.launchpad.net/nova/+bug/1383465 is still critical for k-3 when it's been around since last october?
21:27:29 <openstack> Launchpad bug 1383465 in OpenStack Compute (nova) "[pci-passthrough] nova-compute fails to start" [Critical,In progress] - Assigned to Yongli He (yongli-he)
21:27:38 <melwitt> (ignore my earlier comment, got mixed up about which bug we were talking)
21:27:40 <dansmith> where is the patch for that?
21:27:45 <dansmith> it's a pretty important issue
21:27:48 <mriedem> https://review.openstack.org/#/c/131321/
21:28:07 <dims> mriedem: since it has the potential for nova-compute to bail out totally
21:28:20 <mriedem> yeah, i mean, it *sounds* bad
21:28:26 <dansmith> okay, we need to get eyes on that one I think
21:28:32 <dansmith> but yeah, maybe not critical
21:28:39 <dansmith> but we broke that in juno
21:28:47 <dansmith> and it would be real bad if we don't merge the fix in kilo :/
21:29:24 <dims> agree
21:29:29 <mikal> I will add it to my review list for later today unless people beat me to it
21:29:45 <melwitt> this one I put https://bugs.launchpad.net/nova/+bug/1372670 libvirt bug fixed upstream in newer release, but the proposed fix involves a workaround config option and can't seem to get an agreement whether to have it be configurable or not
21:29:46 <openstack> Launchpad bug 1372670 in OpenStack Compute (nova) "libvirtError: operation failed: cannot read cputime for domain" [High,In progress] - Assigned to Eduardo Costa (ecosta)
21:30:17 <mriedem> this is the only other critical k-3 bug https://bugs.launchpad.net/nova/+bug/1431201
21:30:18 <openstack> Launchpad bug 1431201 in OpenStack Compute (nova) "kilo controller can't conduct juno compute nodes" [Critical,In progress] - Assigned to Sylvain Bauza (sylvain-bauza)
21:30:32 <dansmith> mriedem: we're on that one
21:30:35 <mriedem> k
21:30:42 <dansmith> mriedem: it is a recent regression
21:30:51 <dansmith> I dunno why,
21:31:06 <jogo> sdague: FYI for that one, if we can put 'old' nova in a venv we can gate on restarting old nova-compute with new nova-* running
21:31:10 <bauzas> mriedem: I'm on that bug, fairly good progress with the help of dansmith
21:31:11 <dansmith> but I always find it funny when people use "conduct" as the verbular description of what conductor does :)
21:31:15 <jamielennox> Can i request targeting https://review.openstack.org/136931 / bug 1424462 to k-3 as well, the code's been up for a while; it just needs another +2
21:31:17 <openstack> bug 1424462 in OpenStack Compute (nova) "Nova/Neutron v3 authentication" [Medium,In progress] https://launchpad.net/bugs/1424462 - Assigned to Jamie Lennox (jamielennox)
21:31:20 <sdague> jogo: or we could look at the logs
21:31:41 <dansmith> sdague: well, restarting compute will find a whole other class of issues
21:31:49 <mikal> jamielennox: I will add that to my review list too
21:31:49 <jogo> sdague: well we don't test the startup code for old nova-compute in partial grenade
21:31:49 <dansmith> sdague: but for this one, looking in the logs is good
21:32:03 <bauzas> dansmith: +1
21:32:09 <jogo> but if this one was in the logs all along ... sigh
21:32:10 <sdague> jogo: yeh, that's just a lot of infrastructure that we don't have ready yet
21:32:18 <dansmith> jogo: right, and it breaks too often
21:32:26 <dansmith> but it's non-trivial
21:32:32 <dansmith> I think with multinode it would be easier right?
21:32:34 <jogo> sdague: yup, just pointing that out for when that infra is in place
21:32:46 <mriedem> there is multinode now
21:32:48 <bauzas> jogo: yeah it wasn't noticed because I was not aware that grenade was not covering n-cpu startups nor periodic tasks
21:32:51 <mriedem> in experimental
21:32:55 <jogo> dansmith: actually yeah ... wouldn't be hard to throw something together for that but may be overkill honestly
21:33:06 <dansmith> mriedem: right, I mean trying to do a test that restarted it would be easier in multinode vs. adding a venv supported thing
21:33:13 <jogo> bauzas: well it should cover periodic tasks, but tempest means they don't always fail
21:33:24 <dansmith> jogo: well, it'd be cool I think, but yeah
21:33:43 <bauzas> jogo: I mean that it was something unnoticeable by Tempest
21:33:46 <sdague> yeh, lets take this offline. Honestly, for this case, log inspection will nail this regression (and others like it)
21:33:55 <jogo> sdague: agreed
21:33:56 <bauzas> sdague: agreed
21:34:01 <mikal> So moving on now?
21:34:09 <sdague> and in L with some venv support, we can do other things
21:34:15 <sdague> but that's all post release
21:34:15 <dansmith> you mean lemming
21:34:37 <mikal> #topic Stuck reviews
21:34:45 * mdbooth has one
21:34:50 <mdbooth> https://review.openstack.org/#/c/158269/
21:34:50 <mikal> So, we have  https://review.openstack.org/#/c/158269/ as a candidate for discussion
21:35:20 <mdbooth> My argument on this one is the same as it was before.
21:35:32 <mdbooth> It's a serious bug, and this is the only fix on the table for it.
21:35:41 <dansmith> well, there are other ways to fix this
21:35:45 <dansmith> like the one you proposed first
21:35:49 <dansmith> which I'd rather see than this
21:35:51 <mdbooth> The proposed fix simply adds an assertion around existing code assumptions.
21:35:53 * jogo doesn't want to have the exact same discussion again
21:35:57 <dansmith> jogo: +1
21:35:59 <mdbooth> So it's good.
21:36:25 <mdbooth> The objection to it seems to be that it makes a change to an unpopular feature without removing it.
21:36:41 <mdbooth> I can rebut the objections specifically if that would help anybody.
21:37:05 <mikal> So, I think there are two issues here
21:37:17 <mikal> We should delete all of the instances for unlucky deployers
21:37:22 <mikal> That seems ... bad
21:37:41 <mikal> I think we can discuss the underlying design problem, but that's probably better at the summit
21:37:46 <dansmith> so, the evacuate code is broken, and broken for other hypervisors that this doesn't address. I've got a spec up for lemming
21:37:51 <dansmith> and I'm going to commit to fixing it
21:37:51 <mikal> And not something we're going to fix in kilo
21:38:55 <dansmith> spec: https://review.openstack.org/#/c/161444/
21:39:32 <mriedem> summit topic?
21:39:35 <mriedem> at least meetup day?
21:39:41 <mikal> For the larger problem, yes
21:39:43 <mdbooth> dansmith: I'm sure that's very good, but it's not good to go now, and it doesn't fix any of the other problems.
21:39:45 <dansmith> we need a summit topic around this and the larger problem for sure
21:40:02 <mikal> Do we think we can land something to stop the instance loss problem in kilo that wont cause us heaps of pain later?
21:40:03 <dansmith> mdbooth: it will fix *ever* deleting instances when we shouldn't
21:40:03 <mdbooth> Also, there is nothing in my fix which would prejudice fixing evacuate.
21:40:24 <mdbooth> mikal: This fix is good to go.
21:40:41 <dansmith> mikal: we landed a workaround flag that lets you disable the destructive behavior if you want
21:40:44 <mikal> mdbooth: well, except that three cores -1'ed it
21:41:02 <sdague> dansmith: did that land? I thought that was blocked?
21:41:08 <mdbooth> mikal: You have to read why, though.
21:41:19 <dansmith> mikal: which is a stopgap which is good enough for me in the short term, and for particularly susceptible hypervisors they can fix it in their own drivers
21:41:20 <mdbooth> mikal: As I said, this is an unpopular feature.
21:41:22 <dansmith> sdague: nope, it landed
21:41:32 <mdbooth> There seems to be a kneejerk reaction against anything relating to it.
21:41:34 <dansmith> sdague: t'was unblocked
21:42:04 <dims> 2 conversations? :)
21:42:11 <mriedem> https://review.openstack.org/#/c/159890/
21:42:19 <mriedem> that's the config option added to disable
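The stopgap dansmith and mriedem reference (review 159890) is a nova.conf option that lets operators opt out of the destructive evacuate cleanup. The group and option names below are my assumption from the Kilo-era discussion, not quoted from the merged change — check the released configuration reference before relying on them.

```ini
# nova.conf -- illustrative only; option and group names are assumed.
[workarounds]
# Disable the destructive cleanup that deletes instances on a compute
# host which the host believes were evacuated while it was down.
destroy_after_evacuate = False
```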
21:42:20 <mikal> I feel like we're not going to resolve this before we run out of time
21:42:23 <jogo> so we have now just had the exact same conversation as we did last week
21:42:24 <bauzas> mdbooth: I honestly think this deserves a discussion at the Summit
21:42:26 <jogo> can we just move on
21:42:42 <mikal> So...
21:42:47 <dims> mdbooth: is it related to this? https://blueprints.launchpad.net/nova/+spec/vmware-one-nova-compute-per-vc-cluster please add it there if so
21:42:48 <mdbooth> bauzas: This is a bugfix. Summit is for design discussion.
21:42:55 <mdbooth> dims: No.
21:43:03 <mikal> The workaround flag stops the instance loss situation if the deployer uses it with vmware or ironic, yes?
21:43:09 <dansmith> mikal: yes
21:43:11 <mdbooth> mikal: No
21:43:19 <dansmith> wtf? of course it does
21:43:32 <mdbooth> Firstly, the user would have to manually enable it, which they never would.
21:43:37 <dansmith> that's what he just said
21:43:45 <mriedem> let's move on
21:43:47 <bauzas> mdbooth: IMHO that's more than a fix, but rather something directly hitting operators
21:43:49 <mdbooth> Secondly, if anybody enables it, it breaks evacuate.
21:44:05 <mdbooth> Ok, I thought I'd give it one more go.
21:44:09 <mdbooth> I'll abandon it.
21:44:11 <mikal> So, we have many flags where we expect the operator to know what to do
21:44:15 <mikal> That's what documentation is for
21:44:16 <mriedem> operators will probably read the release notes before upgrading
21:44:18 <mdbooth> Thanks for listening.
21:44:24 <mriedem> this should be a flag called out in the release notes
21:44:33 <dansmith> mriedem: I'll add it
21:44:37 <mikal> Its got a docimpact on it
21:44:47 <mdbooth> If you enable it, it breaks evacuate
21:45:00 <mdbooth> My fix doesn't need to be enabled, and doesn't break evacuate.
21:45:03 <dansmith> evacuate is broken for those drivers
21:45:05 <mdbooth> FWIW
21:45:07 <mriedem> config is always redone per release; docimpact doesn't mean much, I don't think, unless you follow up yourself with release notes
21:45:22 <mdbooth> My fix doesn't need to be enabled by anybody, and doesn't break evacuate by anybody.
21:45:23 <jogo> MOVING ON
21:45:35 <mikal> jogo: breath deeply
21:45:36 <dansmith> +1 for moving on
21:45:42 <mikal> Ok, let's move on
21:45:52 <mikal> #topic Open Discussion
21:46:00 <mikal> Nothing on the agenda
21:46:00 <sdague> unrelated to this - https://review.openstack.org/#/c/150929/ - ec2 deprecate
21:46:04 <dansmith> ooh
21:46:07 <dansmith> a much better topic
21:46:10 <mikal> Heh'
21:46:17 <mriedem> sdague: tempest seems pretty unhappy
21:46:17 <bknudson> I wanted to bring up https://review.openstack.org/#/c/157501/ at one of these meetings.
21:46:18 <dansmith> sdague: I tried to +2 that eight times, but gerrit wouldn't let me
21:46:20 <sdague> though I have to fix code one more time because of oslo_log
21:46:21 <mikal> sdague: so it seems like ops at the meetup were not freaked out?
21:46:21 <mriedem> something with that s3server change
21:46:34 <sdague> they were less freaked out for sure
21:46:48 <sdague> they want a story forward about ec2 being available in the ecosystem
21:46:49 <anteaya> this is targeted for L now so I didn't want to take time in priorities but obondarev_ has a new patchset up for the nova-net to neutron migration proxy: https://review.openstack.org/#/c/150490/
21:46:59 <mikal> sdague: I haven't gotten around to tweaking the review to meet ttx's requirements, have you?
21:47:06 <sdague> I did not get yelled at when I talked about the path into the stackforge project
21:47:19 <sdague> I talked with ttx at the meetup, he's ok with deprecation now
21:47:25 <mikal> Pardon?
21:47:29 <sdague> it was a word definition issue
21:47:32 <mikal> Did you talk to him while holding a knife?
21:47:37 <sdague> nope
21:47:37 <dansmith> lol
21:47:38 <jogo> umm http://logs.openstack.org/29/150929/3/check/gate-nova-python27/062df96/console.html#_2015-03-12_19_07_42_323
21:47:47 <sdague> jogo: yes, oslo_log
21:47:48 <mriedem> jogo: yeah, needs work a bit
21:47:50 <sdague> fixing now
21:48:02 <dansmith> they were in philly, so "at gunpoint" could have been a thing
21:48:14 <mikal> All I know about philly is the cheese
21:48:23 <anteaya> cheese steak
21:48:26 <sdague> hmmm... cheese steak
21:48:28 <bknudson> cream cheese
21:48:32 <dansmith> cheeeeese
21:48:32 <mikal> Yeah, that one
21:48:39 <dims> sdague: thanks!
21:48:47 <mikal> So apart from discussing our love of cheese are we done here?
21:48:50 * bauzas heard the word 'cheese' ?
21:48:52 <sdague> anyway, ttx is supportive of this being called deprecation, I asked him to add comments there
21:48:58 <jogo> sdague: was that a trick to allow dansmith to +2 it again?
21:48:59 <tonyb> mikal: bknudson had a review to look at
21:49:03 <bknudson> wanted to bring up https://review.openstack.org/#/c/157501/ quickly...
21:49:05 <dansmith> jogo: :D
21:49:05 <sdague> heh
21:49:05 <mikal> sdague: that is the exact opposite of what he told me
21:49:10 <mikal> sdague: so I will proceed to be confused
21:49:15 <bknudson> essentially the start of a long project, so wanted some quick feedback on it.
21:49:19 <dansmith> mikal: let's stack up some +2s on there, let ttx comment and you can +W, okay?
21:49:30 <bknudson> as in, if it's a non-starter.
21:49:31 <mikal> dansmith: works for me
21:49:43 * dansmith drops the mic and walks out
21:50:00 <tonyb> bknudson: If it's part of a long project that feels like an L spec to me
21:50:19 <sdague> yeh, this seems more spec like to me
21:50:21 <bknudson> tonyb: yes, I'll put a spec up for L.
21:50:22 <mikal> bknudson: yeah, I think a discussion of why its needed would be good
21:50:29 <mikal> bknudson: and a spec is the right framework for that
21:50:33 <mikal> Cool
21:50:46 <dansmith> mikal: can haz early mark?
21:50:48 <tonyb> bknudson: FWIW I think the change is okay but there is no context on why and what it gets us?
21:50:58 <sdague> tonyb: ++
21:50:58 <mikal> For example, if we were going to do that, how does it map into perhaps using a rootwrap daemon in the future?
21:51:06 <anteaya> anyone able to review https://review.openstack.org/#/c/150490/ next week at all? or way too busy?
21:51:26 <mikal> anteaya: no promises at this point in the release cycle unfortunately
21:51:31 <sdague> also, unless we actually parameter filter in our rootwrap I think it's pointless for compute
21:51:39 <anteaya> mikal: figured as much, just need to tell obondarev_ that I tried
21:51:54 <bknudson> the goal of this is to get a start on doing rootwrap or whatever correctly.
21:52:00 <mikal> Sounds like we have a plan for the priv separation thing though
21:52:09 <sdague> bknudson: right, but you've looked at the compute policy right?
21:52:18 <mikal> bknudson: I know neutron has been doing stuff around that, so it would be interesting to learn from what they've done as well
21:52:29 <sdague> it's got at least half a dozen own your box escalations
21:52:34 <mikal> But yeah, I think this meeting is done?
21:52:40 <dansmith> soo done
21:52:44 <mikal> Heh
21:52:46 <bknudson> we looked at the policy over the OSSG meetup. There's going to be a lot of work to get it all cleaned up.
21:52:46 <mikal> Going...
21:52:51 <mriedem> can i talk about evacuate some more with jogo?!
21:52:56 <mikal> ...going...
21:53:03 <dims> haha
21:53:05 <mikal> mriedem: sure, just not here
21:53:05 <bknudson> will spec it.
21:53:06 * dansmith smacks mriedem with a fish
21:53:10 <mikal> ...gone
21:53:13 <mikal> #endmeeting