21:00:31 #startmeeting nova
21:00:32 Meeting started Thu Oct 23 21:00:31 2014 UTC and is due to finish in 60 minutes. The chair is mikal. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:36 The meeting name has been set to 'nova'
21:00:40 o/
21:00:40 o/
21:00:42 mriedem appears pumped for a meeting
21:00:46 o/
21:00:53 Oh, pings
21:00:56 mikal tjones cburgess jgrimm adrian_otto mzoeller funzo mjturek jcook ekhugen irina_pov krtaylor: le ping
21:00:59 i'm not really all that excited
21:01:06 #topic Kilo Specs
21:01:14 Specs are open, as you should all know
21:01:19 * jogo walks in late
21:01:20 We've already approved 15, which is pretty cool
21:01:25 o/
21:01:30 o/
21:01:31 o/
21:01:35 o/
21:01:39 mikal: we can use a lot more spec reviews
21:01:42 Just a reminder that if you're blocked on a procedural -2, you should ping the reviewer and ask for it to be removed
21:01:52 jogo: yep, that's not unexpected
21:02:00 So...
21:02:11 Are there any BPs that want to request a fast track this week?
21:02:19 (The blueprints too small to require a spec, that is)
21:02:23 o/
21:02:27 o/
21:02:37 Is that hello, or please fast track me?
21:03:04 Ok, no fast tracks
21:03:09 Anything else for specs?
21:03:14 hello!
21:03:14 Apart from that we need to do more reviews?
21:03:16 hello for me
21:03:25 mikal: there is a spec for a new virt driver
21:03:37 jogo: oh, haven't seen that one
21:03:41 jogo: what hypervisor?
21:03:52 err I misread it slightly https://review.openstack.org/130447
21:04:00 system Z
21:04:06 libvirt/kvm on there
21:04:08 my bad
21:04:11 zkvm
21:04:14 dansmith: ^ you should impliment that spec
21:04:15 * mriedem hasn't read it
21:04:19 Heh
21:04:23 implement
21:04:30 Ok, well it will get reviewed in due course I imagine
21:04:35 they should already know about the required virt driver APIs from the meetup
21:04:40 Shall we move on from specs?
21:05:04 #topic Kilo Summit
21:05:16 oui, oui
21:05:20 So, we had that brainstorming etherpad, which has turned into this proposed schedule
21:05:28 == Wed ==
21:05:28 9:00 - 10:30 Cells
21:05:28 11:00 - 11:40 Functional testing
21:05:28 11:50 - 12:30 Glance client library
21:05:28 1:50 - 2:30 Unconference
21:05:30 2:40 - 3:20 CI status checkpoint
21:05:31 comstud has been around more since he left openstack than he was before, I think
21:05:33 3:30 - 4:10 Virt drivers: Consistency and Minimum requirements
21:05:35 4:30 - 5:10 Objects Status and Deadlines
21:05:38 5:20 - 6:00 NFV Proposed Specs (need specs)
21:05:40 == Thu ==
21:05:43 9:00 - 9:40 API microversions
21:05:45 9:50 - 10:30 nova-network migration
21:05:48 11:00 - 12:30 Scheduler / Resource tracker
21:05:50 1:40 - 2:20 Unconference
21:05:53 2:30 - 3:10 Upgrades / DB migration plan
21:05:55 3:20 - 4:00 Containers Service
21:05:58 4:30 - 6:00 Workload management
21:06:00 == Fri ==
21:06:03 "Meetup day"
21:06:05 We only have 18 slots this time, because of the meetup day
21:06:08 But we've bumped things which aren't easy to timebox to the meetup day
21:06:11 Which I think makes sense
21:06:26 I need to enter that schedule into the app that publishes it real soon now, so I will do that todayish
21:06:28 dansmith: i was thinking the same thing
21:06:37 dansmith: loose ends and no other work!
21:06:44 heh
21:06:46 Anything else we need to discuss about the summit at this point?
21:07:18 Ok, moving on again
21:07:25 #topic Gate Status
21:07:29 http://status.openstack.org/elastic-recheck/gate.html
21:07:33 Are these notes in the wiki from last week or this?
21:07:35 jogo probably knows latest status
21:07:45 mriedem: pretty healthy actually
21:07:49 Nice
21:08:01 gate queue has been low all week
21:08:08 the gate status notes in the meeting agenda are old
21:08:14 we can fix that
21:08:15 i'm not working anything i don't think, i.e.
the quota stuff
21:08:21 everyone approve more of my patches :)
21:08:26 Heh
21:08:33 Ok, it's good it's not busted at the moment
21:08:41 Move on again again?
21:08:45 cinder and neutron appear to have the biggest weirdo bugs
21:08:59 move on
21:09:06 #topic Open Discussion
21:09:10 Ok, I only have one thing
21:09:16 I think we should skip the meeting next week
21:09:19 Anyone disagree?
21:09:24 I'll be on a plane
21:09:30 so, I don't care either way :D
21:09:30 ++ to skip
21:09:32 I imagine most people will be travelling, packing, or avoiding human contact
21:09:46 is next week supposed to be the 'week off'?
21:10:00 mriedem: that might be this week? I'd have to check.
21:10:12 doesn't matter
21:10:36 work 'em if you got 'em
21:10:40 Hmm, there's nothing on the release schedule
21:10:45 Anything else for open discussion?
21:10:52 I have a few bugs I'd like to point to
21:10:53 Or can we set a record for shortest meeting ever?
21:11:00 dansmith: go for it
21:11:22 there are two really ugly pci related bugs that I think snuck in during the rushed reviews of that stuff during FFE
21:11:25 both are pretty major, IMHO
21:11:30 and want to make sure other people agree
21:11:36 first is a config format breakage:
21:11:40 https://bugs.launchpad.net/nova/+bug/1383345
21:11:43 Launchpad bug 1383345 in nova "PCI-Passthrough : TypeError: pop() takes at most 1 argument (2 given)" [Medium,Confirmed]
21:11:58 second prevents starting compute service if certain pci devices aren't available for claiming at startup:
21:12:04 https://bugs.launchpad.net/nova/+bug/1383465
21:12:06 Launchpad bug 1383465 in nova "[pci-passthrough] nova-compute fails to start" [Critical,Confirmed]
21:12:25 both seem pretty bad to me and need fixing quickly
21:12:45 I think there are people on top of both of them, but wanted to raise them for visibility of the reviews that I hope will show up soon
21:12:49 The first one is being worked on it seems
21:12:54 I agree it's a problem
21:13:35 Ditto the second one
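[Editor's note: the traceback quoted in bug 1383345 is the error Python raises when the two-argument dict form of pop(key, default) is used on a list. A minimal sketch of that failure mode, with hypothetical PCI data rather than nova's actual structures:]

```python
# Hypothetical illustration of the bug 1383345 symptom (not nova's code):
# calling dict-style pop(key, default) on a list raises the quoted
# TypeError, because list.pop() accepts only an optional index.
pci_info = [{'vendor_id': '8086', 'product_id': '10fb', 'count': 2}]

try:
    pci_info.pop('count', 0)   # wrong: pci_info is a list, not a dict
    error = None
except TypeError as exc:
    error = str(exc)           # "... at most 1 argument ..."

# The two-argument form is valid only on the dict *inside* the list:
count = pci_info[0].pop('count', 0)
```

[This kind of confusion typically appears when a config value that used to be parsed as a dict starts arriving as a list of dicts, which matches the "config format breakage" description above.]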
21:13:44 the second one is likely harder to resolve, but since there seemed to be some feelings that it was okay to block compute starting,
21:13:50 So, I guess you're saying we should keep an eye out for fixes and review them when they happen?
21:13:54 I just wanted to make sure nobody else thinks that's reasonable :)
21:14:11 mikal: well, that and want to make sure that we're on the same page re: severity and desired behavior
21:14:31 I feel like it's good when nova-compute starts
21:14:34 not able to start compute seems...not ok?
21:14:37 yeah
21:14:40 It would be unexpected that you can't manage any instances on that hypervisor
21:15:00 The second bug also doesn't have an assignee
21:15:22 we should require a pci CI system
21:15:22 I think maybe yjiang5 is going to work up a fix for that one?
21:15:30 for further work on this stuff
21:15:39 jogo: sadly, very little if any of this is covered by testing at all :(
21:15:42 at least have it run periodically
21:15:43 dansmith: Robert and I are looking into it.
21:15:51 yjiang5: okay then, thanks :)
21:15:53 dansmith: well until that is fixed, block new pci features
21:16:06 dansmith: this is what happens when we don't test
21:16:19 jogo: no argument from me.
21:16:19 That sounds like something we should talk over in the CI summit session
21:16:25 But I agree that CI is important here
21:17:12 the third thing is kinda minor:
21:17:15 jogo: we have an internal system for testing, but not sure why it missed the first issue. For the second issue, it's a not-so-common scenario and I don't think we have a test case for it.
21:17:17 https://review.openstack.org/#/c/130601/
21:17:29 but it changes the way we dispatch our build_and_run method in compute a little
21:17:36 so thought it would be good to get some more eyes on it
21:18:02 yjiang5: if we used pci devices in the grenade tests upstream, we would have caught it I think, because we restart compute, right?
21:18:24 or at least, could have had a chance of hitting it, I guess
21:18:33 since we restart compute with some instances defined
21:18:45 dansmith: seems that issue happens when the compute node reboots and the PCI devices on the system change; some device that was assigned to an instance before the reboot disappeared.
21:19:07 yjiang5: I can see that stopping the instance from starting, but not all of nova-compute
21:19:12 I dunno, I don't think joe changed his physical system
21:19:13 yjiang5: ahh didn't realize it was an uncommon case
21:19:14 I feel like we should add a test case for that
21:19:20 And talk CI for PCI too
21:19:37 mikal: sure.
21:19:53 So, step 1. Let's fix those bugs and backport them.
21:20:00 Step 2 can be better CI for this.
21:20:05 yep
21:20:21 anyway, that's it from me
21:20:26 Cool
21:20:29 Anything else from anyone?
21:20:30 dansmith: I don't know why the device disappeared, but from the log, it's not shown after the reboot. But I totally agree we should not block the nova-compute service in such a situation.
21:20:35 are we going to talk about ceph CI at some CI talk thingy?
21:20:39 yjiang5: okay
21:20:42 mriedem: good question
21:20:46 russellb started that but i haven't heard progress
21:20:48 russellb: ^ I think you worked on that
21:21:06 We have a CI session, but I am not sure how much time we'll have
21:21:09 i'd feel a lot better about reviewing ceph related patches if we had CI
21:21:13 I think we have at least another person in redhat interested in finishing that up, but I don't know the status
21:21:16 Ceph being an intra-libvirt CI thing
21:21:27 well, as a storage backend
21:21:30 for shared storage
21:21:32 If we don't get to it in that session we'll add it to the meetup agenda
21:21:49 this https://review.openstack.org/#/c/121745/
21:22:06 mriedem: yeah, same :/
21:22:20 I've said several times I don't feel equipped to review that
21:22:21 like....maybe we can skip some known test failures for now if it's a handful in an rbd job?
21:22:31 Yeah, there was also that security vulnerability that might have been picked up by CI
21:22:33 like what we've talked about with cells
21:22:54 The sharing config drives between instances one
21:23:00 on https://review.openstack.org/#/c/121745/ i basically asked a billion questions and then haven't looked back in a while
21:23:19 and i have to just review it based on how it looks since i'm probably not going to set up a ceph backend to test it
21:23:26 and i have to take the other reviewers' word for it working
21:23:48 I agree CI is important here
21:23:55 Basically I think CI is always imortant
21:23:56 anyway, food for thought
21:23:57 important even
21:24:19 i see there is also a fedora job lurking somewhere
21:24:27 maybe we can get newer libvirt tested on that as non-voting
21:24:37 We talked about that in Juno
21:24:42 I thought someone at RH was working on that
21:24:45 But then it went quiet
21:24:50 So yeah, that's another thing to chase as well
21:24:51 yeah there is actually a fedora job somewhere i saw this week
21:24:55 on the experimental queue
21:25:04 the queue of lost toys
21:25:09 heh
21:25:25 So, anything else?
21:25:52 mikal: I have something
21:25:53 have we heard any progress on a neutron v2 api cleanup?
21:26:00 from beagles
21:26:06 mriedem: I have not
21:26:09 dansmith: do tell
21:26:10 or a spec or some such
21:26:18 I haven't seen any mention of progress or a spec
21:27:26 okay then
21:27:33 mikal: I was going to ask if I could have an early mark
21:27:42 Heh
21:27:44 but mriedem messed it up
21:27:50 Well, if you people would stop bringing things up
21:27:52 And making me work
21:27:56 Then yes, early marks for all
21:28:07 mriedem: I think pinging beagles in an email might be a good idea?
21:28:20 mikal: yeah i'll hound him somehow
21:28:25 when i care next :)
21:28:26 mriedem: thanks
21:28:50 Anything else?
21:29:44 Ok, sounds like we're done
21:29:45 bye
21:29:48 Last call...
21:30:00 Ok, see you in Paris!
21:30:07 #endmeeting