21:00:31 <mikal> #startmeeting nova
21:00:32 <openstack> Meeting started Thu Oct 23 21:00:31 2014 UTC and is due to finish in 60 minutes.  The chair is mikal. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:33 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:36 <openstack> The meeting name has been set to 'nova'
21:00:40 <dansmith> o/
21:00:40 <mriedem> o/
21:00:42 <mikal> mriedem appears pumped for a meeting
21:00:46 <n0ano> o/
21:00:53 <mikal> Oh, pings
21:00:56 <mikal> mikal tjones cburgess jgrimm adrian_otto mzoeller funzo mjturek jcook ekhugen irina_pov krtaylor: le ping
21:00:59 <mriedem> i'm not really all that excited
21:01:06 <mikal> #topic Kilo Specs
21:01:14 <mikal> Specs are open, as you should all know
21:01:19 * jogo walks in late
21:01:20 <mikal> We've already approved 15, which is pretty cool
21:01:25 <jgrimm> o/
21:01:30 <thangp> o/
21:01:31 <alaski> o/
21:01:35 <alex_xu> o/
21:01:39 <jogo> mikal: we can use a lot more spec reviews
21:01:42 <mikal> Just a reminder that if you're blocked on a procedural -2, you should ping the reviewer and ask for it to be removed
21:01:52 <mikal> jogo: yep, that's not unexpected
21:02:00 <mikal> So...
21:02:11 <mikal> Are there any BPs that want to request a fast track this week?
21:02:19 <mikal> (The blueprints too small to require a spec that is)
21:02:23 <krtaylor> o/
21:02:27 <edleafe> o/
21:02:37 <mikal> Is that hello, or please fast track me?
21:03:04 <mikal> Ok, no fast tracks
21:03:09 <mikal> Anything else for specs?
21:03:14 <edleafe> hello!
21:03:14 <mikal> Apart from that we need to do more reviews?
21:03:16 <krtaylor> hello for me
21:03:25 <jogo> mikal: there is a spec for a new virt driver
21:03:37 <mikal> jogo: oh, haven't seen that one
21:03:41 <mikal> jogo: what hypervisor?
21:03:52 <jogo> err I misread it slightly https://review.openstack.org/130447
21:04:00 <jogo> system Z
21:04:06 <jogo> libvirt/kvm on there
21:04:08 <jogo> my bad
21:04:11 <mriedem> zkvm
21:04:14 <jogo> dansmith: ^ you should impliment that spec
21:04:15 * mriedem hasn't read it
21:04:19 <mikal> Heh
21:04:23 <jogo> implement
21:04:30 <mikal> Ok, well it will get reviewed in due course I imagine
21:04:35 <mriedem> they should already know about the required virt driver APIs from the meetup
21:04:40 <mikal> Shall we move on from specs?
21:05:04 <mikal> #topic Kilo Summit
21:05:16 <comstud> oui, oui
21:05:20 <mikal> So, we had that brainstorming etherpad, which has turned into this proposed schedule
21:05:28 <mikal> == Wed ==
21:05:28 <mikal> 9:00 - 10:30	Cells
21:05:28 <mikal> 11:00 - 11:40	Functional testing
21:05:28 <mikal> 11:50 - 12:30	Glance client library
21:05:28 <mikal> 1:50 - 2:30	Unconference
21:05:30 <mikal> 2:40 - 3:20	CI status checkpoint
21:05:31 <dansmith> comstud has been around more since he left openstack than he was before, I think
21:05:33 <mikal> 3:30 - 4:10	Virt drivers: Consistency and Minimum requirements
21:05:35 <mikal> 4:30 - 5:10	Objects Status and Deadlines
21:05:38 <mikal> 5:20 - 6:00	NFV Proposed Specs (need specs)
21:05:40 <mikal> == Thu ==
21:05:43 <mikal> 9:00 - 9:40	API microversions
21:05:45 <mikal> 9:50 - 10:30	nova-network migration
21:05:48 <mikal> 11:00 - 12:30	Scheduler / Resource tracker
21:05:50 <mikal> 1:40 - 2:20	Unconference
21:05:53 <mikal> 2:30 - 3:10	Upgrades / DB migration plan
21:05:55 <mikal> 3:20 - 4:00	Containers Service
21:05:58 <mikal> 4:30 - 6:00	Workload management
21:06:00 <mikal> == Fri ==
21:06:03 <mikal> "Meetup day"
21:06:05 <mikal> We only have 18 slots this time, because of the meetup day
21:06:08 <mikal> But we've bumped things which aren't easy to timebox to the meetup day
21:06:11 <mikal> Which I think makes sense
21:06:26 <mikal> I need to enter that schedule into the app that publishes it real soon now, so I will do that todayish
21:06:28 <mriedem> dansmith: i was thinking the same thing
21:06:37 <comstud> dansmith: loose ends and no other work!
21:06:44 <dansmith> heh
21:06:46 <mikal> Anything else we need to discuss about the summit at this point?
21:07:18 <mikal> Ok, moving on again
21:07:25 <mikal> #topic Gate Status
21:07:29 <mriedem> http://status.openstack.org/elastic-recheck/gate.html
21:07:33 <mikal> Are these notes in the wiki from last week or this?
21:07:35 <mriedem> jogo probably knows latest status
21:07:45 <jogo> mriedem: pretty healthy actually
21:07:49 <mikal> Nice
21:08:01 <jogo> gate queue has been low all week
21:08:08 <mriedem> the gate status notes in the meeting agenda are old
21:08:14 <dansmith> we can fix that
21:08:15 <mriedem> i'm not working anything i don't think, i.e. the quota stuff
21:08:21 <dansmith> everyone approve more of my patches :)
21:08:26 <mikal> Heh
21:08:33 <mikal> Ok, it's good it's not busted at the moment
21:08:41 <mikal> Move on again again?
21:08:45 <mriedem> cinder and neutron appear to have the biggest weirdo bugs
21:08:59 <mriedem> move on
21:09:06 <mikal> #topic Open Discussion
21:09:10 <mikal> Ok, I only have one thing
21:09:16 <mikal> I think we should skip the meeting next week
21:09:19 <mikal> Anyone disagree?
21:09:24 <dansmith> I'll be on a plane
21:09:30 <dansmith> so, I don't care either way :D
21:09:30 <jogo> ++ to skip
21:09:32 <mikal> I imagine most people will be travelling, packing, or avoiding human contact
21:09:46 <mriedem> is next week supposed to be the 'week off'?
21:10:00 <mikal> mriedem: that might be this week? I'd have to check.
21:10:12 <mriedem> doesn't matter
21:10:36 <mriedem> work'em if you got'em
21:10:40 <mikal> Hmm, there's nothing on the release schedule
21:10:45 <mikal> Anything else for open discussion?
21:10:52 <dansmith> I have a few bugs I'd like to point to
21:10:53 <mikal> Or can we set a record for shortest meeting ever?
21:11:00 <mikal> dansmith: go for it
21:11:22 <dansmith> there are two really ugly pci related bugs that I think snuck in during the rushed reviews of that stuff during FFE
21:11:25 <dansmith> both are pretty major, IMHO
21:11:30 <dansmith> and want to make sure other people agree
21:11:36 <dansmith> first is a config format breakage:
21:11:40 <dansmith> https://bugs.launchpad.net/nova/+bug/1383345
21:11:43 <uvirtbot> Launchpad bug 1383345 in nova "PCI-Passthrough : TypeError: pop() takes at most 1 argument (2 given" [Medium,Confirmed]
21:11:58 <dansmith> second prevents starting compute service if certain pci devices aren't available for claiming at startup:
21:12:04 <dansmith> https://bugs.launchpad.net/nova/+bug/1383465
21:12:06 <uvirtbot> Launchpad bug 1383465 in nova "[pci-passthrough] nova-compute fails to start" [Critical,Confirmed]
21:12:25 <dansmith> both seem pretty bad to me and need fixing quickly
21:12:45 <dansmith> I think there are people on top of both of them, but wanted to raise them for visibility of the reviews that I hope will show up soon
21:12:49 <mikal> The first one is being worked on it seems
21:12:54 <mikal> I agree it's a problem
21:13:35 <mikal> Ditto the second one
21:13:44 <dansmith> the second one is likely harder to resolve, but since there seemed to be some feelings that it was okay to block compute starting,
21:13:50 <mikal> So, I guess you're saying we should keep an eye out for fixes and review them when they happen?
21:13:54 <dansmith> I just wanted to make sure nobody else thinks that's reasonable :)
21:14:11 <dansmith> mikal: well, that and want to make sure that we're on the same page re: severity and desired behavior
21:14:31 <mikal> I feel like it's good when nova-compute starts
21:14:34 <mriedem> not able to start compute seems...not ok?
21:14:37 <mriedem> yeah
21:14:40 <mikal> It would be unexpected that you can't manage any instances on that hypervisor
21:15:00 <mikal> The second bug also doesn't have an assignee
21:15:22 <jogo> we should require a pci CI system
21:15:22 <dansmith> I think maybe yjiang5 is going to work up a fix for that one?
21:15:30 <jogo> for further work on this stuff
21:15:39 <dansmith> jogo: sadly, very little if any of this is covered by testing at all :(
21:15:42 <jogo> at least have it run periodically
21:15:43 <yjiang5> dansmith: Robert and I are looking on it.
21:15:51 <dansmith> yjiang5: okay, thanks :)
21:15:53 <jogo> dansmith: well until that is fixed, block new pci features
21:16:06 <jogo> dansmith: this is what happens when we don't test
21:16:19 <dansmith> jogo: no argument from me.
21:16:19 <mikal> That sounds like something we should talk over in the CI summit session
21:16:25 <mikal> But I agree that CI is important here
21:17:12 <dansmith> the third thing is kinda minor:
21:17:15 <yjiang5> jogo: we have an internal system for testing, but not sure why it missed the first issue. For the second issue, it's a not-so-common scenario and I don't think we have a test case for it.
21:17:17 <dansmith> https://review.openstack.org/#/c/130601/
21:17:29 <dansmith> but it changes the way we dispatch our build_and_run method in compute a little
21:17:36 <dansmith> so thought it would be good to get some more eyes on it
21:18:02 <dansmith> yjiang5: if we used pci devices in the grenade tests upstream, we would have caught it I think, because we restart compute, right?
21:18:24 <dansmith> or at least, could have had a chance of hitting it, I guess
21:18:33 <dansmith> since we restart compute with some instances defined
21:18:45 <yjiang5> dansmith: seems that issue happens when the compute node reboots and the PCI devices on the system change; some device that was assigned to an instance before the reboot disappeared.
21:19:07 <mikal> yjiang5: I can see that stopping the instance from starting, but not all of nova-compute
21:19:12 <dansmith> I dunno, I don't think joe changed his physical system
21:19:13 <jogo> yjiang5: ahh didn't realize it was an uncommon case
21:19:14 <mikal> I feel like we should add a test case for that
21:19:20 <mikal> And talk CI for PCI too
21:19:37 <yjiang5> mikal: sure.
21:19:53 <mikal> So, step 1. Let's fix those bugs and backport them.
21:20:00 <mikal> Step 2 can be better CI for this.
21:20:05 <dansmith> yep
21:20:21 <dansmith> anyway, that's it from me
21:20:26 <mikal> Cool
21:20:29 <mikal> Anything else from anyone?
21:20:30 <yjiang5> dansmith: I don't know why the device disappeared, but from the log, it's not shown after reboot. But I totally agree we should not block the nova-compute service in such a situation.
21:20:35 <mriedem> are we going to talk about ceph CI at some CI talk thingy?
21:20:39 <dansmith> yjiang5: okay
21:20:42 <jogo> mriedem: good question
21:20:46 <mriedem> russellb started that but i haven't heard progress
21:20:48 <jogo> russellb: ^ I think you worked on that
21:21:06 <mikal> We have a CI session, but I am not sure how much time we'll have
21:21:09 <mriedem> i'd feel a lot better about reviewing ceph related patches if we had CI
21:21:13 <dansmith> I think we have at least another person in redhat interested in finishing that up, but I don't know the status
21:21:16 <mikal> Ceph being an intra-libvirt CI thing
21:21:27 <mriedem> well, as a storage backend
21:21:30 <mriedem> for shared storage
21:21:32 <mikal> If we don't get to it in that session we'll add it to the meetup agenda
21:21:49 <mriedem> this https://review.openstack.org/#/c/121745/
21:22:06 <dansmith> mriedem: yeah, same :/
21:22:20 <dansmith> I've said several times I don't feel equipped to review that
21:22:21 <mriedem> like....maybe we can skip some known test failures for now if it's a handful in an rbd job?
21:22:31 <mikal> Yeah, there was also that security vulnerability that might have been picked up by CI
21:22:33 <mriedem> like what we've talked about with cells
21:22:54 <mikal> The sharing config drives between instances one
21:23:00 <mriedem> on https://review.openstack.org/#/c/121745/ i basically asked a billion questions and then haven't looked back in a while
21:23:19 <mriedem> and i have to just review it based on how it looks since i'm probably not going to set up a ceph backend to test it
21:23:26 <mriedem> and i have to take the other reviewers' word for it working
21:23:48 <mikal> I agree CI is important here
21:23:55 <mikal> Basically I think CI is always imortant
21:23:56 <mriedem> anyway, food for thought
21:23:57 <mikal> important even
21:24:19 <mriedem> i see there is also a fedora job lurking somewhere
21:24:27 <mriedem> maybe we can get newer libvirt tested on that as non-voting
21:24:37 <mikal> We talked about that in Juno
21:24:42 <mikal> I thought someone at RH was working on that
21:24:45 <mikal> But then it went quiet
21:24:50 <mikal> So yeah, that's another thing to chase as well
21:24:51 <mriedem> yeah there is actually a fedora job somewhere i saw this week
21:24:55 <mriedem> on the experimental queue
21:25:04 <mriedem> the queue of lost toys
21:25:09 <dansmith> heh
21:25:25 <mikal> So, anything else?
21:25:52 <dansmith> mikal: I have something
21:25:53 <mriedem> have we heard any progress on a neutronv2 api cleanup?
21:26:00 <mriedem> from beagles
21:26:06 <mikal> mriedem: I have not
21:26:09 <mikal> dansmith: do tell
21:26:10 <mriedem> or a spec or some such
21:26:18 <dansmith> I haven't seen any mention of progress or a spec
21:27:26 <dansmith> okay then
21:27:33 <dansmith> mikal: I was going to ask if I could have an early mark
21:27:42 <mikal> Heh
21:27:44 <dansmith> but mriedem messed it up
21:27:50 <mikal> Well, if you people would stop bringing things up
21:27:52 <mikal> And making me work
21:27:56 <mikal> Then yes, early marks for all
21:28:07 <mikal> mriedem: I think pinging beagles in an email might be a good idea?
21:28:20 <mriedem> mikal: yeah i'll hound him somehow
21:28:25 <mriedem> when i care next :)
21:28:26 <mikal> mriedem: thanks
21:28:50 <mikal> Anything else?
21:29:44 <mikal> Ok, sounds like we're done
21:29:45 <mriedem> bye
21:29:48 <mikal> Last call...
21:30:00 <mikal> Ok, see you in Paris!
21:30:07 <mikal> #endmeeting