21:02:24 #startmeeting nova
21:02:25 Meeting started Thu May 22 21:02:24 2014 UTC and is due to finish in 60 minutes. The chair is mikal. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:02:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:02:29 The meeting name has been set to 'nova'
21:02:36 Who is here for the nova meeting?
21:02:37 o/
21:02:40 hi
21:02:41 o/
21:02:43 hi
21:02:46 hi
21:02:48 o/
21:03:03 Cool
21:03:11 #topic IRC nick ping service
21:03:24 So, first off a quick announcement
21:03:41 Stealing an idea from Keystone, if you add your IRC nick to the list on https://wiki.openstack.org/wiki/Meetings/Nova
21:03:43 present
21:03:47 Then I will ping that list at the start of meetings
21:03:56 Which might help people remember to appear here
21:04:24 I won't bother this week because I'd just be pinging myself...
21:04:33 Ok, moving on
21:04:44 #topic Juno mid-cycle meetup date and location
21:05:02 devananda and I talked about this a fair bit during the summit
21:05:16 I was talking to him because I'd like to see the ironic guys more in attendance at the meetup than last time
21:05:33 The suggestion we've come up with is the week of 14 July, which is the week before OSCON
21:05:58 Intel has offered their Portland campus if we want it, which I think makes sense for all the people who are likely to be going on to OSCON
21:06:11 I'm hoping co-locating with OSCON will make it easier for some people to get travel approval
21:06:12 portland where?
21:06:14 hillsboro?
21:06:23 Beaverton IIRC
21:06:31 Which I am told is about 20 miles out of Portland
21:06:39 But I've never been there
21:06:40 yeah, it's definitely a relocation from OSCON
21:06:49 like, you'd want to move hotels for sure
21:06:56 Yeah, I think that's ok though
21:07:08 i think the train goes to beaverton doesn't it?
21:07:09 For people flying in though, it makes the flights free for the meetup if they're doing both
21:07:10 well, it'll be really annoying for me, but alas :)
21:07:15 or subway or whatever it is called
21:07:32 tjones: it does, but it isn't really walkable from there to the campus, depending on which campus it is
21:07:37 Hey, no connections dansmith.
21:07:57 No connections?
21:08:23 dan has no connecting flights. right?
21:08:29 Oh, I see
21:08:46 it's going to be seriously inconvenient for me for a different reason, but not much we can do about that
21:08:59 or are we voting?
21:09:11 The current proposal is that the ironic people have a room, we have a room, and when we feel we need to talk we all pile into one of the rooms.
21:09:13 could it be the week after oscon?
21:09:25 Hmmm, one sec
21:09:29 Let me check the etherpad of doom
21:09:56 https://etherpad.openstack.org/p/juno-nova-midcycle-options was the original brainstorming etherpad
21:10:59 * mikal is now finding the juno release dates
21:11:06 There's a deadline in there somewhere too
21:12:05 mikal: https://wiki.openstack.org/wiki/Juno_Release_Schedule
21:12:07 Juno FPF is August 21st.
21:12:22 cburgess: thanks
21:12:29 So... juno-2 is the week of OSCON
21:12:32 You will be close to the j2 date.
21:12:47 So the week after SOCON is the first week of juno-3, instead of the last week of juno-2
21:12:54 s/SOCON/OSCON/
21:13:11 I think we could make that work though
21:13:15 Do other people have an opinion?
21:14:12 ...silence...?
21:14:23 week after oscon it is!
:)
21:14:28 Heh
21:14:37 Well, let me see if the venue / ironic people are available
21:14:43 But it sounds like an option
21:14:51 I have no opinion either way atm
21:15:00 It would mean a three day meetup instead of something longer though, because I'd have to leave on the Wednesday to get to AU in time for pyconau
21:15:07 (Or you have a hackfest without me)
21:15:12 three days is okay, right?
21:15:15 yes
21:15:17 Yeah, that was the plan
21:15:18 we almost ran out of stuff on the third day last time
21:15:21 yup
21:15:26 With a possible hang around hackfest for people who wanted one
21:15:34 that's fine
21:15:43 Ok, I will investigate
21:15:46 thanks
21:16:08 #action mikal to find out if the week of 28 July is available instead
21:16:13 Not bad having it short for folks who are also at OSCON, means we aren't gone for 2 full weeks.
21:16:26 cburgess: this is true, especially for people doing CLS as well
21:16:34 cburgess: which I think devananda wanted to do
21:16:42 mikal: did we talk about benue?
21:16:47 venue even
21:16:57 geekinutah: yeah, Intel in Portland
21:17:04 okay, I missed that part
21:17:07 NP
21:17:13 Anything else on midcycle meetup?
21:17:48 Ok, moving on
21:17:55 #topic Post summit spec status
21:18:07 I have an etherpad for this at https://etherpad.openstack.org/p/juno-nova-summit-specs
21:18:31 you want us to update status?
21:18:34 Basically I spent some time yesterday trying to work out which specs tied to things we'd decided at the summit
21:18:54 I think the spec reviewers should be giving priority to those specs to unblock things we now have consensus on
21:18:56 would this be better reflected by updating the status/targets on http://blueprints.launchpad.net/ ?
21:19:10 sgordon: well, that happens after we've approved the spec
21:19:15 not really
21:19:21 ?
21:19:28 it has statuses for in discussion, review, etc
21:19:34 and targeting happens before approval
21:19:37 Oh, I see
21:19:45 Well, step 1 is I am sure that this etherpad list is wrong
21:19:49 yeah
21:19:55 that's mainly what i am getting at i guess :)
21:20:00 So, for people who presented something at the summit, can you please make sure your spec is in this list?
21:20:12 We can then review those / tweak LP statuses
21:20:31 I suspect many of them also need edits based on the outcome of the summit, so marking that would be helpful too
21:20:36 question: what about https://review.openstack.org/#/c/86947/ (Use libvirt storage pools)? It's been going on since before summit, but we discussed it at summit, and I'd like to get some reviews in
21:20:48 #action mikal to email openstack-dev and say these things there as well
21:21:17 directxman12: yep, you should add that to the etherpad please, presumably under "needs review"
21:21:46 I think as we get more practice at specs we can be a bit more organized with this
21:22:06 Perhaps next time we should _require_ a spec proposal in gerrit before we accept the session at the summit for example
21:22:21 that sounds like a worthy goal
21:22:31 #action Specs reviewers, please be paying attention to https://etherpad.openstack.org/p/juno-nova-summit-specs
21:22:44 mikal: +1
21:23:10 So, I guess this part of the meeting is mostly just a call to action, unless anyone has any specs they feel we need to discuss right now?
21:23:14 I'd prefer just summit specs
21:23:18 We can do others at the end
21:24:05 Nothing else?
21:24:22 Ok, moving on
21:24:26 #topic Bugs
21:24:41 tjones has kindly agreed to stay on as bug triage leader person
21:24:55 I've asked her if she can do an update in each of our meetings about the current state of bugs
21:25:07 tjones: have you managed to get one for this week, or do you need more prep time?
21:25:14 nope - im ready
21:25:19 Go for it!
21:25:39 mikal: and i talked about having a top ten bug list to review at this meeting each week.
21:26:02 since i don't have enough visibility into every subteam - i am going to need to depend on the subteam leaders to help me out with this
21:26:39 i've created an etherpad for tracking each week, i'd like you guys to help out by putting bugs on this and i can help push them if needed
21:26:40 https://etherpad.openstack.org/p/NovaTopTenBugs
21:27:06 currently there are only 2 - the 1 critical bug and something that is blocking minesweeper (since i am the subteam lead for vmwareapi too)
21:27:26 Is someone assigned to those bugs at the moment?
21:27:28 next week - i'd like to see more. comments or concerns with this?
21:27:42 for the 2nd yes. for the 1st no
21:28:06 but it may have disappeared
21:28:21 mriedem: was looking into it
21:28:23 why is that tempest timeout one a top ten?
21:28:30 we have A LOT of tempest timeout bugs
21:28:38 they are all intermittent
21:28:40 so it is being cared for, it is the only bug marked critical
21:28:47 due to load on the single node host when running tempest
21:28:55 it probably shouldn't be critical
21:29:00 it's been around since at least february
21:29:01 i'm assuming a critical bug always should be here
21:29:08 let's drop the severity
21:29:23 ok sure.
21:29:24 tjones: you're right that we should be tracking critical bugs
21:29:29 http://lists.openstack.org/pipermail/openstack-dev/2014-May/035253.html
21:29:31 But it sounds like we can just tweak the severity on this one
21:29:33 if anyone has ideas
21:29:58 also during triage i've noticed we have a number of issues with resize - several new bugs on it. I don't have the #s handy but this is something to look out for
21:30:31 tjones: yeah, fixing a few resize/migration issues this week
21:30:40 at least i've seen a few bug fixes related to that this week
21:31:00 * devananda catches up on scrollback
21:31:04 we need multi-node tempest to flush those out
21:31:07 Yeah, there's also heaps of older live migration bugs if someone is playing in that area
21:31:17 mriedem: agreed
21:31:25 #action mikal to ping on multinode devstack status
21:31:30 that's all i wanted to say today. we'll see if this etherpad idea works and if not we can change things
21:31:41 tjones: thanks!
21:32:04 tjones and I also talked about having a bug day sometime in a couple of weeks, we'll let people know when we have a more specific plan
21:32:06 i'll send something out on the ML too about this
21:32:24 ah yes - and that too.
21:32:29 Anything else on bugs anyone?
21:32:42 i was hoping next week - but maybe the week after on wednesday. i'll send out info on the ML
21:33:17 Sounds like a plan
21:33:30 Moving on
21:33:42 #topic Subteam reports
21:33:52 We seem to be growing subteams quite a bit at the moment
21:34:01 There's now a NFV team, and a libvirt team
21:34:06 Have I missed any other new subteams?
21:34:26 there is a containers team which would like to register.
21:34:35 Oh yeah, I saw an email about that one too
21:34:38 the NFV one is a bit...different
21:34:39 adrian_otto is leading that charge, although I don’t see him present
21:34:41 in that it's cross project
21:34:59 sgordon: agreed, but I think we should ask to be kept informed because it will affect us
21:35:04 though reporting into the impacted projects is expected
21:35:11 sgordon: which I am sure russellb will do anyways
21:35:19 atm i think that would be nova/neutron/heat
21:35:23 probably others down the line
21:35:36 mikal: I was going to ask if we can/should make the docker driver a subteam as well… in order to report status and keep everyone else in the loop — or if we should just stay quiet over in stackforge? ;-)
21:36:00 ewindisch: I have no problem with docker being a subteam if they'd like to be
21:36:36 mikal: +1
21:36:39 ewindisch: I think being quiet over on stackforge is a poor way to keep us aware of what's happening
21:36:44 So yeah, let's do that
21:36:47 mikal: precisely.
21:36:55 Although I don't know how that's different from a containers subteam?
21:37:08 Unless the containers subteam is about a new service, and docker is about a driver
21:37:14 Which is possible
21:37:25 mikal: docker subteam == driver; containers subteam == api, service, and a bit of shared-code-DRY
21:37:52 Ok, well as long as you don't think you're doubling up
21:38:02 not at all.
21:38:05 Cool
21:38:10 So, who wants to do a subteam report?
21:38:52 ...sound of crickets...
21:39:05 ok i'll go
21:39:08 Heh
21:39:10 Go!
21:39:45 vmwareapi - just recovering from the summit. updating specs based on discussions there. spawn refactor moving along - 2 more to go with phase 1 (1 has a +2 hint, hint)
21:39:57 and done
21:40:13 tjones: what's the review number for the first unmerged one in the chain?
21:40:29 https://review.openstack.org/86443
21:40:46 thanks for asking :-)
21:40:58 Ok, I will take a look after this meeting unless someone beats me to it
21:41:08 mriedem: gave the 1st +2
21:41:13 Although, this one has only had 59 revisions, is it ready yet?!?
21:41:23 - many many many are rebases
21:41:23 mikal: you had a +2 one so get it again
21:41:40 tjones: I know, but it's still pretty impressive
21:41:43 cosmetic change so re-+W
21:41:49 yea - i
21:41:54 mriedem: cool
21:41:54 it has been a journey
21:42:06 Ok, any other subteams? Scheduler is the most obvious one I think.
21:42:12 fwiw, i think the hyper-v driver is going to be going in the same direction soon with how the unit tests are structured
21:42:18 from what i was looking at today
21:42:25 same issues with testing driver/vmops/utils from the top level
21:42:26 mriedem: as in a big refactoring?
21:42:46 hyper-v has far fewer patches, but when there is one i have the same concerns about how the UT is designed
21:42:59 Fair enough
21:43:07 But it sounds like they already agree?
21:43:12 mikal: not that it's a subteam, but i'll be posting two specs for ironic by EOD
21:43:13 no idea
21:43:29 mriedem: should we talk to them about it?
21:43:35 hyper-v guys were refactoring tests to use mock in icehouse i thought but not sure what happened with that
21:43:36 devananda: cool, let me know the review numbers when you do
21:43:42 no one wants to review mox->mock refactoring
21:44:02 containers subteam - just figuring out what we are / what our mission is… all kittens and self-loathing from there. Should have more next week.
21:44:27 Ok, it sounds like we're done with subteam reports
21:44:30 Going...
21:44:35 docker subteam - rampaging through bugfiling…
21:44:41 and investigating cinder support
21:44:52 going...
21:44:56 big blocker there, actually, is that most of the useful code is in libvirt/volume.py
21:45:08 ewindisch: you're allowed to refactor
21:45:08 and that’s it.
21:45:20 mikal: yeah, I think I’ll need to - and I’m planning a spec for that refactor
21:45:33 ewindisch: sounds like a plan to me
21:45:36 but we might need to discuss if it’s better to put that stuff into oslo or cinder itself at this point?
21:45:36 Anything else on subteams?
21:45:50 ^- done
21:46:01 ewindisch: well, I guess it depends on what stuff. Let's wait for the spec and then discuss.
21:46:35 agreed
21:46:40 #topic Open Discussion
21:46:50 Ok, so what else do people want to cover?
21:47:10 sooooo, about the no-db-scheduler stuff
21:47:28 Is there a spec?
21:47:39 there is, I stuck it on the etherpad under contentious
21:48:08 I have a question to server state/status and get no response from ML/IRC, just asking here. Currently the server API will present both state and status, what's the exact difference of these two?
21:48:11 is it fair to say we came out of that session mostly concluding that work can go on, but it needs to be optional?
21:48:20 yjiang5: hang ten a sec
21:48:31 geekinutah: definitely optional
21:48:40 mikal: are we surfing?
21:48:42 geekinutah: there also seemed to be a fair bit of concern about the table design in memcache
21:48:59 geekinutah: i.e. there might be existing design patterns which could be reused
21:49:14 geekinutah: is boris-42 the one working on this, or is it someone else at Mirantis?
21:49:18 mikal: yah, but at least we can take the big -2 off the spec and start arguing about implementation
21:49:35 mikal: it's boris-42's baby, but I want to help it along a bit
21:49:50 I feel like arguing in a spec about that bit is going to be long and painful
21:50:05 Would it be possible to try and get the people who had concerns to chat to boris-42 and you more directly?
21:50:24 Just to speed the process up
21:50:43 i had a lot of concern about the proposed way to maintain distributed state
21:50:57 yeah, that sounds doable
21:51:03 happy to chat with folks -- also, boris-42 is going to be in seattle next week IIRC
21:51:12 devananda: would you be willing to do an IRC chat with the Mirantis guys to try and address those concerns?
21:51:16 I know jaypipes was concerned too
21:51:29 he and I were planning to meet up while he's here, so we can work on that too
21:51:38 Ok, cool
21:51:45 I have concerns about requiring a full host state to be sent every time, but I can detail that on the spec
21:51:49 So yeah, let's try and get some agreement on that before we worry too much about the spec
21:51:58 alaski: sounds good to me
21:52:21 do you want to revisit the results in next meeting or keep it offline till we have something simpler to propose?
21:52:25 geekinutah: and maybe we can aim for a status update in the next meeting?
21:52:32 Heh
21:52:34 sounds good :-)
21:52:39 geekinutah: so, a quick status update sounds like a good plan
21:52:55 #action geekinutah to try and get people talking about the design of the no db scheduler data store and report back next week
21:53:20 Anything else on no db scheduler?
21:53:45 let's move on, that's good for now
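(For readers who weren't in the no-db-scheduler session: the idea being discussed is, roughly, to have compute nodes publish their state somewhere cheaper than the nova database so the scheduler can read it without DB round trips. The sketch below is purely illustrative and is NOT the design from boris-42's spec; the key layout, helper names, and memcached backend are assumptions made for the example. It shows the naive "resend the whole host state blob every period" pattern, which is exactly the concern alaski raises above.)

    import json
    import time

    import memcache  # python-memcached, assumed available for this sketch

    mc = memcache.Client(['127.0.0.1:11211'])

    def report_host_state(host, state, ttl=60):
        # Compute-node side: resend the *entire* host state blob on every
        # periodic update, with a TTL so hosts that stop reporting age out.
        state = dict(state, updated_at=time.time())
        mc.set('compute_state/%s' % host, json.dumps(state), time=ttl)

    def get_host_states(hosts):
        # Scheduler side: bulk-fetch whichever host states are still fresh;
        # anything expired or never reported simply doesn't come back.
        keys = ['compute_state/%s' % h for h in hosts]
        return {k: json.loads(v) for k, v in mc.get_multi(keys).items()}

Whether state should be sent in full or as deltas, and what the memcache "table" layout should look like, is precisely what the #action above asks people to hash out.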
21:53:52 yjiang5: back to you
21:53:55 mikal: sure
21:54:01 yjiang5: can you repost your question?
21:54:07 I have a question to server state/status and get no response from ML/IRC, just asking here. Currently the server API will present both state and status, what's the exact difference of these two?
21:54:50 What was the email subject line?
21:54:50 sorry, I mean service state/status
21:55:16 I sent it before the summit, let me try to find it.
21:55:33 yjiang5: you mean "nova service-list"
21:55:34 ?
21:55:39 dansmith: yes
21:55:50 yjiang5: and do you mean state/reason?
21:56:10 dansmith: no, I mean the "status" and "state" of the nova service-list.
21:56:32 oh, the smiley face
21:56:40 status is the admin status enabled/disabled,
21:56:57 the state is :-) if the node has checked in in the last 60 seconds or whatever, and :-( if not
21:57:22 you can disable a service in :-) state so that it won't be scheduled to
21:57:38 it's an admin state, whereas the smiley tells you if the service appears to be alive/online
21:57:56 dansmith: got it. So a service disabled can still be :-) , right?
21:58:01 I'm trying to find a good reference to this in the docs, but not succeeding
21:58:01 yes
21:58:32 dansmith: frankly speaking, it's really ......... not clear. I know the underlying code implementation, but IMHO, it's confusing.
21:58:49 dansmith: thanks for clarification.
21:58:56 yjiang5: on the other hand, it seems so self-explanatory to me, I've never even thought about it needing to be documented :)
21:59:02 Given I can't find anything in the docs, I think it could certainly be documented better
21:59:03 mikal: thanks.
21:59:19 #action mikal to chase better documentation for "nova service-list"
21:59:33 Which probably means I file a docs bug
21:59:40 So, we're out of time
21:59:46 Anything else super urgent which can't be done in email?
21:59:48 mikal: thanks.
22:00:23 Ok, thanks for your time peoples
22:00:30 #endmeeting
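(To pin down the distinction dansmith explains above: in "nova service-list", Status is the admin-set enabled/disabled flag on the service record, while State -- the smiley column -- only reflects whether the service has checked in recently. The snippet below is a hedged sketch of that logic, not nova's actual code; the field names and the 60-second window are taken from the discussion above and may differ from the real implementation.)

    import datetime

    SERVICE_DOWN_TIME = 60  # seconds; "checked in in the last 60 seconds or whatever"

    def service_status(service):
        # "Status": the admin flag, toggled with nova service-disable/enable.
        return 'disabled' if service['disabled'] else 'enabled'

    def service_state(service, now=None):
        # "State": liveness only -- :-) if the service heartbeated recently.
        now = now or datetime.datetime.utcnow()
        age = (now - service['updated_at']).total_seconds()
        return 'up' if age <= SERVICE_DOWN_TIME else 'down'

    # Note: a disabled service can still be 'up' -- it keeps heartbeating,
    # the scheduler just won't place new instances on it.

This is the point yjiang5 confirms above: disabling a service changes Status but not State, because the two columns answer different questions ("may it take new work?" vs. "is it alive?").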