21:02:35 #startmeeting nova
21:02:36 Meeting started Thu Aug 14 21:02:35 2014 UTC and is due to finish in 60 minutes. The chair is mikal. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:02:37 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:02:39 The meeting name has been set to 'nova'
21:02:41 o/
21:02:43 hi
21:02:50 o/
21:02:53 Oh, we say hello in the logged bit do we?
21:02:57 o/
21:02:57 :P
21:03:03 mikal: yes :)
21:03:06 it doesn't count otherwise
21:03:08 Heh
21:03:17 The agenda is at https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting as always
21:03:18 o/
21:03:28 #topic Juno-3 status, Feature Proposal Freeze August 21st, September 4th Juno-3, FF and SF
21:03:34 #note mikal sent an email out with review "priorities" an hour or so ago
21:03:38 #link http://lists.openstack.org/pipermail/openstack-dev/2014-August/043098.html
21:03:50 dansmith has already replied with some status updates there
21:04:03 * jogo walks in late
21:04:05 But the punch line is that we need to spend some time reviewing important BPs for j-3
21:04:16 The ironic one seems to be under control
21:04:24 And the vmware one is suffering under the reign of minesweeper
21:04:39 But helping out with reviews on those would be much appreciated
21:04:43 mrda and I are slaughtering the ironic one
21:04:47 i think the CI system formerly known as minesweeper is fixed
21:04:55 dansmith: in a good way?
21:05:01 mikal: yes, like "killing it"
21:05:03 thanks dansmith, your reviewing prowess is greatly appreciated!
21:05:04 hmm
21:05:09 yes, in a good way
21:05:17 guess I need some non-violent references
21:05:18 dansmith: do you need another core to hand hold with you?
21:05:38 im late, i joined the openstack meeting but no one was there
21:05:38 mikal: I can stir up some reviews once I think it's really ready if you want
21:05:49 dansmith: that would work
21:05:49 * mriedem resists dance in gym class w/ sweaty hands joke
21:05:52 mikal: but if anyone else wants to jump on that's cool too
21:06:17 Yeah, I think dansmith is doing a good job, but helping him out would be cool too
21:06:25 So... I am going to regret this...
21:06:32 What else is urgent that we should be reviewing for j-3?
21:06:43 Are there things that should be high that we have the priority wrong on?
21:07:37 link to the bp list?
21:07:39 w/ priorities?
21:07:45 #link https://launchpad.net/nova/+milestone/juno-3
21:07:50 thanks
21:07:53 NP
21:08:01 Maybe we can come back to that in open discussion
21:08:05 To give people time to think it over
21:08:08 Instead of stalling now
21:08:35 The other thing that a few people have raised with me is the idea of bringing back some form of "review day"
21:08:44 Be it a day, or a week rotation
21:08:51 how does that work?
21:08:54 I think its a good idea
21:09:09 adrian_otto: there would be a roster of nova-cores, who would devote their time while rostered to just doing code reviews
21:09:22 Its something we used to do when vishy was PTL, but it fell out of fashion
21:09:37 why? were the cores exhausted by it?
21:09:49 I think we all just got busy
21:09:53 And sort of forgot about it
21:10:01 I don't really find them all that useful to be honest
21:10:08 I think it depends on your workflow
21:10:12 sounds like a good idea to me.
21:10:18 Some cores do reviews every day
21:10:25 So it wouldn't change much for them
21:10:33 For other cores it helps to have the external motivation
21:10:38 So... Let me rephrase.
21:10:48 I think its a good idea, but I want someone else to run it
21:11:06 So, does anyone think it's a good enough idea to volunteer to come up with a roster and encourage people to actually do reviews on their days?
21:11:38 be the official nag
21:11:48 Yeah
21:11:59 Perhaps report in this meeting who did a great job etc etc
21:12:01 so i don't get it, it's like 'office hours'?
21:12:10 so people can just bug the cores that are in during those office hours to review their patches?
21:12:16 mriedem: not really
21:12:22 Its more carving time out of your schedule to do reviews
21:12:29 oh ok
21:12:39 If you were rostered today we'd expect you to clear your schedule for the day and just do code reviews all day
21:12:40 I think that should probably be an arrangement between the cores and the PTL
21:12:43 for us undisciplined folk
21:13:03 Pretty much
21:13:10 not a bad idea for me, i've been pretty sporadic with reviews lately
21:13:25 rather than something published and trolled by all contributors to get things to land
21:13:26 so it might help forcing me personally
21:13:32 Well, if you review the things from that email I sent out today, then that has a similar effect overall
21:13:45 yeah
21:14:00 So, it sounds like no volunteer to run it though
21:14:20 #action mikal to ask openstack-dev if anyone wants to give running a review day roster a go
21:14:26 shocking no one wants to be the official nag
21:14:32 jogo: yeah, I know right?
21:14:41 I think you mean "shocking that jogo doesn't want to be the official nag" :P
21:14:50 Ok, anything else on j-3 and release status?
21:15:31 #topic Bugs
21:15:37 when does the culling of the blueprints happen?
21:15:44 proposal deadline
21:15:53 jogo: bps without code should be being culled now
21:16:08 jogo: bps with code will hit the FF eventually
21:16:23 (Well, without code and with no chance of proposing all the code by FPF)
21:16:23 sounds like an action item for johnthetubaguy
21:16:34 Yeah, he's at a cold meat themed music festival this week
21:16:40 So I expect that to pick up next week
21:16:46 Anyways, bugs
21:16:53 It seems our bug fix rate has leveled out
21:17:00 it sure has :-(
21:17:01 I say based on zero data, just my impression
21:17:18 tjones: do we need to start doing a weekly mail out of the most important bugs?
21:17:27 tjones: like we are for j-3 bp reviews?
21:17:29 http://54.201.139.117/nova-bugs.html
21:17:30 yeah good idea
21:17:34 i can start that
21:17:41 1059 open bugs
21:17:42 tjones: that would be appreciated
21:17:49 ++
21:17:52 That's a lot less than earlier in the cycle, but its still 1,000 too many
21:18:04 So, if each of you closes 100 bugs over the weekend...
21:18:09 ...then the code wouldn't merge
21:18:13 there are 3 critical bugs - the 1st one has patches merged for ironic and oslo and could use reviews for nova https://blueprints.launchpad.net/nova/+spec/use-oslo-vmware
21:18:36 its a test issue
21:18:41 tjones: that's a blueprint not a bug?
21:18:52 LOL
21:18:57 Or is it a big bug or something?
21:18:58 https://bugs.launchpad.net/nova/+bug/1328997
21:19:00 Launchpad bug 1328997 in nova "Unit test failure: openstack_citest" is being accessed by other users\nDETAIL: There are 1 other session(s) using the database." [Critical,In progress]
21:19:04 Oh, ok
21:19:04 sorry
21:19:14 NP
21:19:14 that was something else i wanted to discuss later
21:19:43 is there a nova patch to review for that?
21:19:47 or does it need to be created still?
21:19:57 Yeah, I'm scrolling through looking for it
21:20:11 um - wait i see 1 merged now
21:20:37 like a while ago
21:20:57 So do we think its fixed and just in the wrong state?
21:20:59 ok this is a left over that should be closed then. sorry was looking from the bottom
21:21:01 yep
21:21:08 Yay! One less bug!
21:21:09 excellent
21:21:11 I like those
21:21:22 the other 2 arguably are not critical as they only affect the vmwareapi
21:21:22 tjones: is the source for your webpage open?
21:21:33 tjones: because maybe we can move it to status.openstack.org somewhere
21:21:39 tjones: didn't we say that no bug can be critical if it's only affecting one driver?
21:21:45 yes it is on gitup - i can send the link to you
21:21:55 gitup?
21:21:55 dansmith: yes and i knew you were going to say that
21:21:57 tjones: thanks that would be great
21:22:03 github ;-P
21:22:05 is that like github for cowboys?
21:22:08 dansmith: I assume it depends on how bad it is?
21:22:10 yahooo!
21:22:16 yes i will decrease prio
21:22:17 dansmith: "deletes all instances" would probably still be critical
21:22:23 mikal: the definition of critical on the wiki says it has to affect all users I think
21:22:32 mikal: which by definition doesn't apply to a single HV issue
21:22:39 Fair point
21:22:48 But I suspect we'd argue about it at the time if it happened
21:22:58 But that's a hypothetical, so let's not dig into it too hard
21:23:12 tjones: any other bug stuff we should know apart from "please fix them"?
21:23:13 https://wiki.openstack.org/wiki/BugTriage#Task_2:_Prioritize_confirmed_bugs_.28bug_supervisors.29
21:23:18 in any case - they could use a review
21:23:28 yeah there are 179 bugs that need review
21:23:46 Is there a dashboard of those?
21:23:54 so when people are reviewing stuff - please review bugs too
21:24:12 click the "ready for review" radio button on http://54.201.139.117/nova-bugs.html
21:24:25 dims gave me a fix to sort the columns which i'll put up today
21:24:26 Cool.
21:24:39 then you can see them ordered by age as well
21:24:40 I will include that in the next review priorities email as well then
21:24:49 great
21:25:12 We should probably move on unless there's something else in bugs land
21:25:28 that's pretty much it - there are about 30 that could be knocked off as they are abandoned. that's it from me
21:25:36 #topic Gate status
21:25:40 mriedem: did you add this item?
21:25:45 mikal: yeah
21:25:50 gate is more or less ok
21:25:50 Want to talk to it then?
21:25:59 but, InstanceInfoCacheNotFound traces are pervasive
21:26:11 like hundreds of thousands per week, 90% successful runs
21:26:21 That's... a lot
21:26:36 sounds like a stacktrace we should fix
21:26:38 yeah i thought it was just stable/icehouse, so was glad to see this https://review.openstack.org/#/c/112520/1
21:26:47 but logstash says it's still on master
21:26:53 Is there a bug filed for it?
21:27:09 the old bug was https://bugs.launchpad.net/nova/+bug/1304968
21:27:10 Launchpad bug 1304968 in nova "Nova cpu full of instance_info_cache stack traces due to attempting to send events about deleted instances" [High,Fix released]
21:27:13 so,
21:27:22 that error is just a symptom of an instance being deleted,
21:27:33 and the code looking it up not being careful about it
21:27:39 so that fix stands on its own,
21:27:48 but there are just other cases on master that trigger the same behavior I think
21:28:08 what's the hit rate on master? less than icehouse, I hope/assume?
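[Editor's note: a minimal sketch of the defensive pattern dansmith describes above, not the actual fix in review 112520. The callables lookup_info_cache and send_event are hypothetical stand-ins for the real Nova lookup and event-sending paths, and the exception class below substitutes for nova.exception.InstanceInfoCacheNotFound so the snippet is self-contained.]

    # A deleted instance should make the event a no-op, not a stack trace
    # in the n-cpu log.
    class InstanceInfoCacheNotFound(Exception):
        """Stand-in for nova.exception.InstanceInfoCacheNotFound."""

    def emit_network_event(instance_uuid, lookup_info_cache, send_event, log):
        """Send an event for an instance, tolerating a concurrent delete."""
        try:
            cache = lookup_info_cache(instance_uuid)
        except InstanceInfoCacheNotFound:
            # The instance (and its info cache) was deleted between deciding
            # to send the event and looking it up; this is a normal race, so
            # log quietly and move on instead of letting the exception escape.
            log('Instance %s vanished before its event was sent; skipping'
                % instance_uuid)
            return None
        return send_event(instance_uuid, cache)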
21:28:19 dansmith: so its a case of working through them all and cleaning them up one at a time?
21:28:26 http://goo.gl/hHMGdO
21:28:31 mikal: yeah
21:28:48 Kibana will mostly be master right?
21:28:51 mriedem: is that limited to master?
21:28:54 master is 212K in 7 days
21:28:56 And it says 235,362 hits in the last seven days
21:28:56 dansmith: nope
21:29:21 dansmith: so we want this for stable/icehouse https://review.openstack.org/#/c/112520/1
21:29:23 But stable runs are relatively rate comparatively, right?
21:29:40 mriedem: the first few I see are the lifecycle thing
21:29:41 s/rate/rare/
21:29:49 oh
21:29:54 hits on stable/icehouse in 7 days are 19K
21:30:03 but don't we see this for any grenade job as long as it's still present in icehouse?
21:30:04 so yeah, stable/icehouse is far less, and we probably just need to merge that backport
21:30:11 right
21:30:11 yeah
21:30:17 old side grenade logs are rife with these
21:30:20 as long as this isn't in icehouse, then we'll see it charged against master
21:30:22 yeah
21:30:24 which makes debugging grenade hard
21:30:36 So we want two things?
21:30:38 so dansmith can you hit that backport
21:30:39 Merge that stable fix
21:30:45 And then walk through master cleaning up the others?
21:30:50 yeah
21:30:59 if it's mostly grenade, the stable fix should bring the rate way down
21:31:00 So... Who wants to give the master stuff a go?
21:31:08 Oh, true
21:31:17 mikal: i can open a new bug for it on master to investigate
21:31:18 mriedem: done
21:31:26 mriedem: that would be good
21:31:27 dansmith: cool, i'll bug mtreinish to hit the +A
21:31:29 lets see what this does to our rate when we get it merged
21:31:37 but obviously any that persist still need chasing
21:31:38 mtreinish: https://review.openstack.org/#/c/112520/1
21:31:40 I can review the stable change after the meeting if no one beats me to it
21:31:59 Anything else on gate stuff?
21:32:01 nope
21:32:05 Cool
21:32:14 #topic Mid-cycle meetup summary
21:32:23 This is mostly informational... I was asked to write a summary up.
21:32:28 I've been doing that, but its long.
21:32:42 So I've started posting the bits I've done, instead of blocking on perfection
21:32:42 mriedem: done
21:32:50 #link http://www.stillhq.com/openstack/juno/000005.html -- social stuff
21:32:53 #link http://www.stillhq.com/openstack/juno/000006.html -- containers
21:32:56 #link http://www.stillhq.com/openstack/juno/000007.html -- ironic
21:32:58 ^-- what I've got done so far
21:33:10 There will be more later today, but its going to take me a while to get through it all
21:33:21 Topics I've written up but need proof reading before posting are db2 and cells, there will be more after that as well
21:34:04 I think its likely that there are cases where my recollection of what we agreed differs from other people's, so feel free to ping me if you're concerned
21:34:31 #topic Open Discussion
21:34:43 mriedem: did you add the powerkvm thing?
21:34:48 so what happens with unapproved juno specs?
21:34:52 mikal: yeah, we sorted that out before the meeting
21:34:57 mikal: powerkvm ci was disabled on monday
21:34:59 bknudson: please hold
21:35:03 krtaylor updated the pkvm ci wiki page
21:35:12 mriedem: ok, so its mostly informational?
21:35:16 mriedem: no action required?
21:35:18 mikal: yeah, we can move on
21:35:20 Cool
21:35:29 bknudson: so, there's been some talk in the release meeting about that
21:35:37 e.g., https://review.openstack.org/#/c/103617/
21:35:43 I think what is likely to happen is that K specs will open soonish
21:35:51 But with the expectation of no review until we ship J
21:35:53 I didn't get around to working on it as I had hoped... too much other stuff
21:36:02 So we'll be asking people to effectively rebase to the new directory
21:36:10 People who don't rebase after a timeout we assume have lost interest
21:36:17 And we can abandon those reviews
21:36:29 mikal: oh I have one thing: https://review.openstack.org/114044
21:36:32 adrian_otto: ^
21:36:40 mikal: sounds easy enough
21:36:41 #action mikal to propose a K spec plan on the mailing list
21:36:41 * adrian_otto perks up
21:36:51 But I want to wait for John to get back so we're on the same page
21:37:03 jogo: go!
21:37:06 jogo: I will cover that in the subteam update
21:37:20 but now it fine too
21:37:26 *is
21:37:28 so adrian_otto posted https://review.openstack.org/114044 which is a kilo spec for a container service
21:37:32 adrian_otto: we don't do subteams as an agenda item any more
21:37:36 So do it now
21:37:42 yeah, I thought we were going to want to see this in stackforge first
21:37:52 I thought at the mid-cycle we agreed to send this to stackforge first
21:37:54 Oh, I didn't notice that
21:38:01 We did agree to go to stackforge first
21:38:05 Are you asserting that means no spec?
21:38:09 the rationale for starting it in nova-specs was to be sure the concept was well socialized among Nova devs
21:38:12 Or that the plan in the spec needs to reflect that?
21:38:19 and that we will otu the compute program in a style consistent with Nova
21:38:49 I'd be fine if we don't merge that spec
21:38:51 adrian_otto: so I like that idea, do it in stackforge but have a design doc
21:38:58 adrian_otto: cool, that works
21:39:00 but I'd like us to collaborate on it where we are comfortable
21:39:03 So... I think we should have all the compute specs in one place
21:39:07 To be less confusing to our victims
21:39:10 and since there is no compute program spec repo, that made the most sense
21:39:28 adrian_otto: ++ sounds like we are all on the same page (I think)
21:39:40 how about this:
21:39:51 just put this doc in the new repo, formatted like our spec,
21:40:00 so that when we want to adopt it, we propose that doc as a spec
21:40:12 because the point of it being in stackforge, I thought, was to let it iterate quickly,
21:40:18 which might change some of this ascii art, I'd think :P
21:40:19 dansmith: but we should also get nova feedback on the idea before they actually do anything
21:40:25 sure, sure
21:40:36 We could review a thing in stackforge though
21:40:38 just doesn't seem worth having an uncommitted spec for us
21:40:40 so they iterate in the right direction
21:40:44 It would be good to not bring up another specs repo right now
21:40:48 Its a distraction
21:40:52 mikal: works for me
21:40:55 But if it was in the project repo that would work
21:41:16 adrian_otto: thoughts?
21:41:22 I'm happy to land a derivative of this proposal in the project repo
21:41:28 I think the mechanics of this are worth a mailing list thread to make sure people know what's happening
21:41:36 I think the important thing now is that we refine it together
21:41:39 adrian_otto: do you want to do that or shall I?
21:41:47 yeah, and ping us when it's up so we can go review it over there
21:42:08 #link http://lists.openstack.org/pipermail/openstack-dev/2014-August/043113.html ML THREAD About Containers Service
21:42:21 mikal: done
21:42:27 Heh, thanks
21:42:41 adrian_otto: well that has the spec in our repo
21:42:52 jogo: its ok, we can reply to the mail once its moved
21:43:02 yup
21:43:09 so dansmith and mikal are apparently not aligned yet on where the spec should begin.
21:43:25 I thought we were
21:43:33 I think we agreed in the end
21:43:40 if I submit a review for a stackforge repo it will take about 9 days for that to happen
21:43:42 But I can reply to that email
21:43:47 and I want to make progress in the mean time
21:43:48 And then Dan can argue with me if he disagrees
21:44:12 adrian_otto: what's the slow bit? Infra getting to the review?
21:44:12 adrian_otto: well no matter what the outcome you will need a stackforge repo
21:44:25 yes, I will make a project repo, and put the spec in there
21:44:40 Cool
21:44:40 I am suggesting we iterate on https://review.openstack.org/114044 until then
21:44:46 adrian_otto: thanks
21:44:53 with the understanding that I will abandon that
21:45:03 So, we also deferred discussion of possible bp priority errors until now
21:45:04 and make a link to its successor
21:45:20 adrian_otto: yep, but I think that one can stay up for discussion while you wait for your shiny new repo
21:45:34 mikal: tx.
21:45:50 I'll submit a review for the new repo
21:46:08 adrian_otto: ping me with the details when you do that and I will see what I can do to speed it up
21:46:16 mikal: ok
21:46:23 We should move on...
21:46:28 mikal: can you get me out of parking tickets too?
21:46:38 So, does anyone feel we've gotten bp priority for j-3 horribly wrong?
21:46:42 dansmith: yes, yes I can
21:46:46 sweet
21:47:10 there are 77 bp's and i haven't looked through the list, so no idea
21:47:13 ok not horribly - but this one https://blueprints.launchpad.net/nova/+spec/use-oslo-vmware is medium and it is very closely related to the spawn refactor which is high. they are both very critical for us
21:47:32 tjones: does that one block the spawn refactor?
21:47:35 tjones: or is it just related?
21:47:44 i.e. do we have a critical bp depending on a medium bp?
21:48:15 they are very closely related - but not blocked. i almost would have said the oslo one is higher than the refactor in terms of cleaning up the driver
21:48:32 And the code is out for the oslo one?
21:48:34 tjones: I think what is holding things up is minesweeper
21:48:46 the code is out. minesweeper is limping.
21:48:56 the hardware gets here next week - hurrah!
21:49:08 of course then it has to be deployed....
21:49:10 Stupid physical constraints
21:49:13 lol
21:49:38 Ok, so how about this...
21:49:49 We can track that one as important, but let's leave it at medium for now
21:49:59 Until we land some of the other open highs, I worry about diluting attention and getting nothing done
21:50:21 ok sure
21:50:24 thanks
21:50:33 Thanks for being understanding
21:50:37 np
21:50:46 So... Anything else for open discussion?
21:50:51 Or do you want an early mark?
21:51:40 Going...
21:51:54 ...going...
21:52:07 gone
21:52:08 see you next time
21:52:08 ...gone
21:52:13 Thanks for your time peeps
21:52:19 #endmeeting