21:00:59 <russellb> #startmeeting nova
21:01:00 <openstack> Meeting started Thu Sep 12 21:00:59 2013 UTC and is due to finish in 60 minutes.  The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:02 <russellb> hello, everyone!
21:01:04 <openstack> The meeting name has been set to 'nova'
21:01:04 <mriedem1> hi
21:01:09 <dansmith> yo
21:01:24 <russellb> anyone else here?  :-)
21:01:27 <dripton> hi
21:01:27 <alaski> hi
21:01:39 <russellb> cool.
21:01:44 <melwitt> hi
21:01:48 <hartsocks> hey
21:01:58 <russellb> #topic havana
21:02:08 <russellb> congratulations to everyone on getting havana-3 released!
21:02:22 <russellb> 91 blueprints and 200+ bug fixes
21:02:35 <russellb> err, 47 blueprints in h3 i mean
21:02:38 <russellb> 91 across all of H
21:02:49 <russellb> our next target is to produce a release candidate (RC1)
21:03:02 <russellb> #link https://launchpad.net/nova/+milestone/havana-rc1
21:03:09 <russellb> there is no set date for rc1, it's whenever we're ready
21:03:29 <russellb> so we basically need to produce the list of everything that we want to complete/fix before that point, and get it done
21:03:42 <russellb> any bug we want to make sure gets fixed should be targeted to havana-rc1 in launchpad
21:03:44 <mriedem1> so the 26th is a soft date?
21:03:44 <mriedem1> https://wiki.openstack.org/wiki/Havana_Release_Schedule
21:03:49 <russellb> yes
21:04:02 <russellb> just a guideline
21:04:08 <russellb> it could be today if we thought it was ready
21:04:19 <russellb> but it's not :-)
21:04:31 <russellb> so let's start by going over the blueprints still pending
21:04:38 <russellb> the first is encrypted ephemeral storage
21:04:46 <russellb> we got the encrypted volumes one merged already, and this is related
21:04:55 <russellb> jog0: you around?  know the status of that one?
21:05:05 <russellb> i believe it's close, just needs some last changes
21:05:17 <russellb> but it needs to get in either this week, or in the first couple days of next week
21:05:39 <russellb> mikal: you around?  you have the libvirt-console-logging blueprint
21:06:05 <russellb> which is kind of a bug anyway ... so doesn't have as strict of a deadline
21:06:16 <russellb> last i checked it looked like that one mainly needed review attention
21:06:23 <russellb> so we could use some help there
21:06:27 <russellb> hartsocks: yours is the last blueprint
21:06:32 <hartsocks> yeah.
21:06:51 <russellb> i don't see any problem getting that in
21:06:56 <russellb> just haven't had a chance to look recently
21:07:04 <russellb> hartsocks: what's the status?
21:07:31 <hartsocks> There's two lines in one test that I added that I *could* remove. They aren't necessary but they aren't hurting anything either.
21:07:51 <russellb> ok
21:08:02 <jog0> russellb: just got in
21:08:10 <jog0> I did that one, that is
21:08:19 <russellb> jog0: cool
21:08:37 <russellb> jog0: was just trying to check on the ephemeral encryption status
21:08:47 <jog0> russellb: dansmith hit some issues with the keymgr patch and that is being fixed now (using code from nova/tests in deployment)
21:08:53 <russellb> jog0: dprince?
21:08:59 <jog0> err yeah
21:09:01 <russellb> hartsocks: i just pinged dansmith to take a look at yours
21:09:20 <hartsocks> thanks
21:10:21 <russellb> ok, so on the bugs list
21:10:42 <russellb> the biggest issue with the bug list is that we need to catch up on triage
21:10:52 <russellb> so i proposed a bug triage date for tuesday, september 17 to try to catch back up
21:11:07 <russellb> #link http://lists.openstack.org/pipermail/openstack-dev/2013-September/014921.html
21:11:31 <russellb> we need to get through all of these new bugs to find any of the major stuff that we really need to fix before RC1
21:11:58 <johnthetubaguy> +1 sounds like a good idea
21:12:04 <mriedem1> russellb: how have you been tagging bugs strictly for unit test cleanup?
21:12:05 <cyeoh> may not matter, but tempest is having a bug day on the same day
21:12:06 <russellb> so we do that, and then attack the list until everything is done, and that's the trigger for RC1
21:12:13 <russellb> mriedem1: "testing"
21:12:17 <russellb> is the bug tag i've used
21:12:33 <mriedem1> russellb: ah, ok, see that now in the nova tag table, nice, thanks
21:12:42 <russellb> cyeoh: yeah, i wasn't sure if that was a good or bad thing, but having multiple projects attack bug lists at the same time seemed like it could be good
21:12:46 <russellb> mriedem1: np
21:12:57 <cyeoh> russellb: ok
21:13:15 <russellb> if you want to help focus on a sub-set of the bugs, check out our bug tags
21:13:17 <russellb> #link https://wiki.openstack.org/wiki/Nova/BugTriage
21:13:40 <russellb> but really that's the biggest thing we need to accomplish in the next week as a project - producing our target RC1 list
21:13:51 <russellb> (and progress on fixing things)
21:14:00 <russellb> that's pretty much all i've got for today
21:14:06 <russellb> any comments or questions on all of that?
21:14:30 <jog0> russellb: I assume we will prioritize bugs on the recheck list
21:14:34 <jog0> for rc1
21:14:48 <johnthetubaguy> +1, will look at what xenserver ones need RC1 tags
21:15:03 <russellb> jog0: yeah, we should target the ones that are making a bigger impact on the rc1 list
21:15:32 <russellb> jog0: that's something you could definitely help with ... targeting the most important ones we should be looking at
21:15:49 <jog0> russellb: on it
21:15:55 <russellb> awesome
21:16:05 <hartsocks> I'm trying to compile a bug list for our subteam and figure out how to prioritize them. I will post that on the ML when I get it put together. Any tips on that would be helpful.
21:16:11 <jog0> that means please don't run "recheck no bug" unless you are sure
21:16:37 <lifeless> this seems like a machine learning problem
21:16:53 <lifeless> throw good logs + failed logs at a classifier
21:17:59 <jog0> lifeless: ++ are you volunteering?
21:18:04 <russellb> ha
21:18:37 <lifeless> jog0: sure, once I finish the deployment problem :P
21:18:47 <russellb> sounds like a good plan
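[Editor's note: a rough sketch of the classifier idea lifeless floats above — train a text classifier on console logs from known-good runs versus runs that failed on known gate bugs, then use it to suggest a matching bug before anyone runs "recheck no bug". The data layout and labels here are hypothetical illustration, not an existing OpenStack tool.]

    # Sketch only: assumes scikit-learn and a hand-labeled corpus of logs.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def train_log_classifier(logs, labels):
        """logs: raw console-log strings; labels: e.g. bug numbers or 'good'."""
        clf = make_pipeline(
            TfidfVectorizer(max_features=50000, ngram_range=(1, 2)),
            LogisticRegression(max_iter=1000),
        )
        clf.fit(logs, labels)
        return clf

    # Usage sketch:
    # clf = train_log_classifier(training_logs, training_labels)
    # print(clf.predict([new_failed_log]))  # best-guess bug for the new failure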
21:18:50 <dripton> Do we have a maintained list of feature freeze exceptions somewhere?
21:18:53 <lifeless> jog0: speaking of, I have a few nova q's for tripleo for you, when you have a minute :>
21:19:02 <russellb> dripton: it's basically the list of blueprints targeted at havana-rc1
21:19:26 <russellb> the only blueprints listed there were given exceptions to be there
21:19:46 <dripton> That's obvious only in hindsight; thanks.  I was digging through emails.
21:20:30 <russellb> all good
21:20:30 <russellb> #topic open discussion
21:20:35 <russellb> anything else?
21:21:01 <hartsocks> That one multi-process bug got reopened...
21:21:04 <hartsocks> I posted https://review.openstack.org/#/c/46184/
21:21:55 <hartsocks> I think there's some process table weirdness when under load.
21:21:56 <russellb> that darn test :)
21:22:24 <hartsocks> I wrote a small book of a comment explaining why this patch is different.
21:23:04 <russellb> ok cool
21:23:28 <geekinutah> I have a random item of discussion regarding sync_power_states periodic task
21:23:40 <russellb> sure
21:23:41 <geekinutah> I'm not sure this is a bug, but it seems a little off to me
21:24:07 <geekinutah> so in most cases sync_power_states treats the hypervisor as authoritative and updates the DB with the hypervisor status
21:24:28 <geekinutah> except in the case that the DB shows the instance as being shutdown but the hypervisor is running
21:24:33 <geekinutah> in that case it shuts down the instance
21:25:11 <geekinutah> I don't know why in only that one case we sync from DB to hypervisor
21:25:55 <geekinutah> seems like that is out of the scope of the periodic task, or if anything, should be left up to the deployer
21:25:59 <geekinutah> any thoughts?
21:26:23 <mrodden1> i have heard quite a few complaints about that behavior
21:26:26 <jog0> geekinutah: what does git blame say?
21:26:32 <jog0> history wise
21:26:34 <russellb> it actually makes sense to me, i think
21:26:58 <dansmith> is this shutdown like stop or delete?
21:26:58 <russellb> because that means the user tried to shut it down via nova, but it didn't work for whatever reason
21:27:00 <geekinutah> jog0: it was in the original implementation, I can't remember who it was, yao I think
21:27:11 <russellb> dansmith: stop pretty sure ...
21:27:17 <geekinutah> dansmith: yeah
21:27:33 <russellb> the other way around, the user can stop within the VM
21:27:33 <dansmith> russellb: right, agreed
21:27:38 <russellb> and then nova needs to catch up
21:27:48 <geekinutah> russellb: it also means that if an instance crashes and an operator starts it back up manually and not through the API then nova comes along and kills it again
21:27:56 <russellb> so it may seem odd, but i think it's right ... nova needs to go with whatever the user intent was (as best we can tell)
21:27:59 <geekinutah> really they should start it through the api
21:28:07 <geekinutah> but, yeah, there it is
21:28:45 <russellb> right, nova assumes (as i think it should be able to) that it's authoritatively in control of the VM lifecycle
21:28:55 <dansmith> definitely
21:29:11 <geekinutah> the comments in the task actually treat the hypervisor as authoritative
21:29:25 <geekinutah> let me get some line numbers out...
21:29:41 <geekinutah> I agree that something should be authoritative all the time
21:29:48 <mrodden1> i think it means that hypervisor is authoritative on "power_state"
21:29:50 <geekinutah> as it stands it has an exception for one case
21:30:03 <mrodden1> but shutdown is a vm_state
21:30:09 <mrodden1> iirc
21:30:09 <russellb> i think it's that user intent is authoritative
21:30:16 <johnthetubaguy> +1
21:30:28 <russellb> which happens to be the HV power state most of the time ... except in this case
21:30:34 <russellb> where the user tried to stop it from the API but it didn't work
21:30:36 <russellb> (yet)
21:30:39 <geekinutah> okay
21:30:52 <geekinutah> so our instance crashes, and at the same time sync_power_states runs
21:31:01 <geekinutah> now we update the DB to show the instance as being shutdown
21:31:11 <dansmith> or does a poweroff in the instance
21:31:31 <dansmith> but you can't go from off->on inside the instance, only by violating the rules and doing it at the hypervisor itself
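[Editor's note: the behavior geekinutah and russellb work through above can be condensed into a few lines. This is a minimal sketch of that decision logic, not nova's actual sync_power_states code — the helper names (db_api, compute_api) and state constants are illustrative stand-ins.]

    RUNNING, SHUTDOWN = 'running', 'shutdown'   # power states reported by the hypervisor
    ACTIVE, STOPPED = 'active', 'stopped'       # vm states recorded in the DB

    def sync_instance_power_state(context, db_api, compute_api,
                                  db_instance, hv_power_state):
        """Reconcile one instance's DB record with the hypervisor (sketch)."""
        if db_instance['vm_state'] == STOPPED and hv_power_state == RUNNING:
            # User intent (a stop issued through the API) wins: the guest was
            # started behind nova's back or the earlier stop never took effect,
            # so re-issue the stop instead of marking it active.
            compute_api.stop(context, db_instance)
        elif db_instance['power_state'] != hv_power_state:
            # In every other case the hypervisor is authoritative on power_state:
            # record what it reports so the DB catches up (e.g. the user powered
            # the guest off from inside the VM).
            db_api.instance_update(context, db_instance['uuid'],
                                   {'power_state': hv_power_state})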
21:33:11 <russellb> any other topics?
21:34:54 <geekinutah> I feel sated :-D
21:35:00 <geekinutah> ty all
21:35:05 <russellb> cool :)
21:35:07 <russellb> thanks everyone!
21:35:09 <annegentle> russellb: how's the v3 API doc?
21:35:16 <annegentle> I just can't type fast enough! :)
21:35:19 <russellb> ooh, just caught me!
21:35:19 <russellb> cyeoh: ^ ?
21:35:32 <annegentle> whew!
21:35:38 <cyeoh> annegentle: the api sample patches are all in the review queue
21:35:57 <annegentle> cyeoh: ok I've been watching them, does Kersten need our help to figure out the WADL?
21:35:59 <cyeoh> annegentle: still need to do the work to convert that to a spec document though
21:36:17 <annegentle> cyeoh: ah okay, through automation or writing by hand?
21:37:07 <cyeoh> annegentle: going the automation route, I've given Kersten a sample of the meta files we intend on generating (that stuff isn't in the review queue)
21:37:20 <annegentle> cyeoh: ok sounds like a plan, I like plans
21:37:31 <annegentle> cyeoh: let us know if we can help
21:37:47 <cyeoh> annegentle: ok, thanks!
21:38:30 <russellb> alright, thanks everyone!  (for real this time)
21:38:33 <russellb> #endmeeting