21:00:59 #startmeeting nova
21:01:00 Meeting started Thu Sep 12 21:00:59 2013 UTC and is due to finish in 60 minutes. The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:02 hello, everyone!
21:01:04 The meeting name has been set to 'nova'
21:01:04 hi
21:01:09 yo
21:01:24 anyone else here? :-)
21:01:27 hi
21:01:27 hi
21:01:39 cool.
21:01:44 hi
21:01:48 hey
21:01:58 #topic havana
21:02:08 congratulations to everyone on getting havana-3 released!
21:02:22 91 blueprints and 200+ bug fixes
21:02:35 err, 47 blueprints in h3 i mean
21:02:38 91 across all of H
21:02:49 our next target is to produce a release candidate (RC1)
21:03:02 #link https://launchpad.net/nova/+milestone/havana-rc1
21:03:09 there is no set date for rc1, it's whenever we're ready
21:03:29 so we basically need to produce the list of everything that we want to complete/fix before that point, and get it done
21:03:42 any bug we want to make sure gets fixed should be targeted to havana-rc1 in launchpad
21:03:44 so the 26th is a soft date?
21:03:44 https://wiki.openstack.org/wiki/Havana_Release_Schedule
21:03:49 yes
21:04:02 just a guideline
21:04:08 it could be today if we thought it was ready
21:04:19 but it's not :-)
21:04:31 so let's start by going over the blueprints still pending
21:04:38 the first is encrypted ephemeral storage
21:04:46 we got the encrypted volumes one merged already, and this is related
21:04:55 jog0: you around? know the status of that one?
21:05:05 i believe it's close, just needs some last changes
21:05:17 but it needs to get in either this week, or in the first couple days of next week
21:05:39 mikal: you around? you have the libvirt-console-logging blueprint
21:06:05 which is kind of a bug anyway ... so doesn't have as strict of a deadline
21:06:16 last i checked it looked like that one mainly needed review attention
21:06:23 so we could use some help there
21:06:27 hartsocks: yours is the last blueprint
21:06:32 yeah.
21:06:51 i don't see any problem getting that in
21:06:56 just haven't had a chance to look recently
21:07:04 hartsocks: what's the status?
21:07:31 There's two lines in one test that I added that I *could* remove. They aren't necessary but they aren't hurting anything either.
21:07:51 ok
21:08:02 russellb: just got in
21:08:10 I did, that is
21:08:19 jog0: cool
21:08:37 jog0: was just trying to check on the ephemeral encryption status
21:08:47 russellb: dansmith hit some issues with the keymgr patch and that is being fixed now (using code from nova/tests in deployment)
21:08:53 jog0: dprince?
21:08:59 err yeah
21:09:01 hartsocks: i just pinged dansmith to take a look at yours
21:09:20 thanks
21:10:21 ok, so on the bugs list
21:10:42 the biggest issue with the bug list is that we need to catch up on triage
21:10:52 so i proposed a bug triage date for tuesday, september 17 to try to catch back up
21:11:07 #link http://lists.openstack.org/pipermail/openstack-dev/2013-September/014921.html
21:11:31 we need to get through all of these new bugs to find any of the major stuff that we really need to fix before RC1
21:11:58 +1 sounds like a good idea
21:12:04 russellb: how have you been tagging bugs strictly for unit test cleanup?
21:12:05 may not matter, but tempest is having a bug day on the same day
21:12:06 so we do that, and then attack the list until everything is done, and that's the trigger for RC1
21:12:13 mriedem1: "testing"
21:12:17 is the bug tag i've used
21:12:33 russellb: ah, ok, see that now in the nova tag table, nice, thanks
21:12:42 cyeoh: yeah, i wasn't sure if that was a good or bad thing, but having multiple projects attack bug lists at the same time seemed like it could be good
21:12:46 mriedem1: np
21:12:57 russellb: ok
21:13:15 if you want to help focus on a sub-set of the bugs, check out our bug tags
21:13:17 #link https://wiki.openstack.org/wiki/Nova/BugTriage
21:13:40 but really that's the biggest thing we need to accomplish in the next week as a project - producing our target RC1 list
21:13:51 (and progress on fixing things)
21:14:00 that's pretty much all i've got for today
21:14:06 any comments or questions on all of that?
21:14:30 russellb: I assume we will prioritize bugs on the recheck list
21:14:34 for rc1
21:14:48 +1, will look at what xenserver ones need RC1 tags
21:15:03 jog0: yeah, we should target the ones that are making a bigger impact on the rc1 list
21:15:32 jog0: that's something you could definitely help with ... targeting the most important ones we should be looking at
21:15:49 russellb: on it
21:15:55 awesome
21:16:05 I'm trying to compile a bug list for our subteam and figure out how to prioritize them. I will post that on the ML when I get it put together. Any tips on that would be helpful.
21:16:11 that means please don't run "recheck no bug" unless you are sure
21:16:37 this seems like a machine learning problem
21:16:53 throw good logs + failed logs at a classifier
21:17:59 lifeless: ++ are you volunteering?
21:18:04 ha
21:18:37 jog0: sure, once I finish the deployment problem :P
21:18:47 sounds like a good plan
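A minimal sketch of the "throw good logs + failed logs at a classifier" idea floated above, assuming scikit-learn is installed and that console logs from passing and failing gate runs have already been collected locally; the directory layout, file naming, and helper names are hypothetical, not anything the project actually ships:

```python
# Sketch only: train a bag-of-words classifier on known-good vs failed run logs.
# Assumes logs/good/*.log and logs/failed/*.log exist locally (hypothetical layout).
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline


def load_logs(directory):
    """Return the text of every *.log file under a directory."""
    return [p.read_text(errors="replace") for p in Path(directory).glob("*.log")]


def train_classifier(good_dir="logs/good", failed_dir="logs/failed"):
    """Fit a TF-IDF + Naive Bayes model distinguishing good runs from failed ones."""
    good, failed = load_logs(good_dir), load_logs(failed_dir)
    texts = good + failed
    labels = ["good"] * len(good) + ["failed"] * len(failed)
    model = make_pipeline(TfidfVectorizer(max_features=50000), MultinomialNB())
    model.fit(texts, labels)
    return model


if __name__ == "__main__":
    clf = train_classifier()
    # Score a fresh console log; a confident "failed" prediction suggests a known
    # gate bug rather than a candidate for "recheck no bug".
    new_log = Path("logs/new/console.log").read_text(errors="replace")
    print(clf.predict([new_log])[0], clf.predict_proba([new_log]))
```

This is only one way to cut the problem; in practice, matching failures against known bug signatures may be simpler and easier to explain than a statistical classifier.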
21:18:50 Do we have a maintained list of feature freeze exceptions somewhere?
21:18:53 jog0: speaking of, I have a few nova q's for tripleo for you, when you have a minute :>
21:19:02 dripton: it's basically the list of blueprints targeted at havana-rc1
21:19:26 the only blueprints listed there were given exceptions to be there
21:19:46 That's obvious only in hindsight; thanks. I was digging through emails.
21:20:30 all good
21:20:30 #topic open discussion
21:20:35 anything else?
21:21:01 That one multi-process bug got reopened...
21:21:04 I posted https://review.openstack.org/#/c/46184/
21:21:55 I think there's some process table weirdness when under load.
21:21:56 that darn test :)
21:22:24 I wrote a small book of a comment explaining why this patch is different.
21:23:04 ok cool
21:23:28 I have a random item of discussion regarding sync_power_states periodic task
21:23:40 sure
21:23:41 I'm not sure this is a bug, but it seems a little off to me
21:24:07 so in most cases sync_power_states treats the hypervisor as authoritative and updates the DB with the hypervisor status
21:24:28 except in the case that the DB shows the instance as being shutdown but the hypervisor is running
21:24:33 in that case it shuts down the instance
21:25:11 I don't know why in only that one case we sync from DB to hypervisor
21:25:55 seems like that is out of the scope of the periodic task, or if anything, should be left up to the deployer
21:25:59 any thoughts?
21:26:23 i have heard quite a few complaints on that behavior
21:26:26 geekinutah: what does git blame say?
21:26:32 history wise
21:26:34 it actually makes sense to me, i think
21:26:58 is this shutdown like stop or delete?
21:26:58 because that means the user tried to shut it down via nova, but it didn't work for whatever reason
21:27:00 jog0: it was in the original implementation, I can't remember who it was, yao I think
21:27:11 dansmith: stop pretty sure ...
21:27:17 dansmith: yeah
21:27:33 the other way around, the user can stop within the VM
21:27:33 russellb: right, agreed
21:27:38 and then nova needs to catch up
21:27:48 russellb: it also means that if an instance crashes and an operator starts it back up manually and not through the API then nova comes along and kills it again
21:27:56 so it may seem odd, but i think it's right ... nova needs to go with whatever the user intent was (as best we can tell)
21:27:59 really they should start it through the api
21:28:07 but, yeah, there it is
21:28:45 right, nova assumes (as i think it should be able to) that it's authoritatively in control of the VM lifecycle
21:28:55 definitely
21:29:11 the comments in the task actually treat the hypervisor as authoritative
21:29:25 let me get some line numbers out...
21:29:41 I agree that something should be authoritative all the time
21:29:48 i think it means that hypervisor is authoritative on "power_state"
21:29:50 as it stands it has an exception for one case
21:30:03 but shutdown is a vm_state
21:30:09 iirc
21:30:09 i think it's that user intent is authoritative
21:30:16 +1
21:30:28 which happens to be the HV power state most of the time ... except in this case
21:30:34 where the user tried to stop it from the API but it didn't work
21:30:36 (yet)
21:30:39 okay
21:30:52 so our instance crashes, at the same time sync_power_states runs
21:31:01 now we update the DB to show the instance as being shutdown
21:31:11 or does a poweroff in the instance
21:31:31 but you can't go from off->on inside the instance, only by violating the rules and doing it at the hypervisor itself
21:33:11 any other topics?
21:34:54 I feel sated :-D
21:35:00 ty all
21:35:05 cool :)
21:35:07 thanks everyone!
21:35:09 russellb: how's the v3 API doc?
21:35:16 I just can't type fast enough! :)
21:35:19 ooh, just caught me!
21:35:19 cyeoh: ^ ?
21:35:32 whew!
21:35:38 annegentle: the api sample patches are all in the review queue
21:35:57 cyeoh: ok I've been watching them, does Kersten need our help to figure out the WADL?
21:35:59 annegentle: still need to do the work to convert that to a spec document though
21:36:17 cyeoh: ah okay, through automation or writing by hand?
21:37:07 annegentle: going the automation route, I've given Kersten a sample of the meta files we intend on generating (that stuff isn't in the review queue)
21:37:20 cyeoh: ok sounds like a plan, I like plans
21:37:31 cyeoh: let us know if we can help
21:37:47 annegentle: ok, thanks!
21:38:30 alright, thanks everyone! (for real this time)
21:38:33 #endmeeting
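A simplified sketch of the sync_power_states behaviour debated above: the hypervisor is treated as authoritative for power_state, but recorded user intent (vm_state) wins in the one exceptional case where the database says the instance was stopped and the hypervisor reports it running. This is not Nova's actual code; the state constants and the driver, compute_api, and db helpers are placeholders standing in for the real objects.

```python
# Sketch of the behaviour discussed in the meeting, not Nova's implementation.
RUNNING, SHUTDOWN = "running", "shutdown"   # power states (simplified)
ACTIVE, STOPPED = "active", "stopped"       # vm states (simplified)


def sync_instance_power_state(db_instance, driver, compute_api, db):
    hv_state = driver.get_power_state(db_instance)  # what the hypervisor reports

    if db_instance["power_state"] != hv_state:
        # In general the hypervisor is authoritative for power_state, so the
        # database is updated to match what the hypervisor reports.
        db.instance_update(db_instance["uuid"], {"power_state": hv_state})

    if db_instance["vm_state"] == STOPPED and hv_state == RUNNING:
        # The one exception: the user asked Nova to stop the instance, so user
        # intent wins and the instance is stopped again, even if an operator
        # restarted it at the hypervisor behind Nova's back.
        compute_api.stop(db_instance)
    elif db_instance["vm_state"] == ACTIVE and hv_state == SHUTDOWN:
        # Instance crashed or was powered off inside the guest: Nova catches up
        # by recording the new power_state; it does not restart the instance.
        pass


if __name__ == "__main__":
    # Minimal stand-ins so the sketch can be exercised without a deployment.
    class _Stub(object):
        def get_power_state(self, inst):
            return RUNNING  # hypervisor says the guest is running

        def stop(self, inst):
            print("stopping %s to honour user intent" % inst["uuid"])

        def instance_update(self, uuid, values):
            print("updating %s with %s" % (uuid, values))

    stub = _Stub()
    instance = {"uuid": "fake-uuid", "power_state": SHUTDOWN, "vm_state": STOPPED}
    sync_instance_power_state(instance, driver=stub, compute_api=stub, db=stub)
```

The design point from the discussion is that Nova assumes it is authoritative over the VM lifecycle: vm_state captures what the user asked for through the API, power_state captures what the hypervisor reports, and the periodic task reconciles the two in favour of user intent for the stopped-but-running case.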