21:00:40 <russellb> #startmeeting nova
21:00:41 <openstack> Meeting started Thu Aug 29 21:00:40 2013 UTC and is due to finish in 60 minutes.  The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:42 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:44 <russellb> Hello, everyone!
21:00:45 <openstack> The meeting name has been set to 'nova'
21:00:48 <mriedem> hi
21:00:51 <melwitt> hi
21:00:52 <alaski> hello
21:00:54 <n0ano> o/
21:00:54 <jog0> o/
21:00:55 <cyeoh_> hi
21:00:59 <russellb> #link https://wiki.openstack.org/wiki/Meetings/Nova
21:01:00 <hartsocks> o/
21:01:04 <n0ano> ls -ltr /rpm/
21:01:12 <russellb> n0ano: orly?  :)
21:01:14 <bnemec> \o
21:01:21 <russellb> #topic havana-3 status
21:01:30 <n0ano> russellb, too many windows and the pointer doesn't follow my eyes yet :-)
21:01:31 <llu-laptop> o/
21:01:33 <russellb> #link https://launchpad.net/nova/+milestone/havana-3
21:01:49 <russellb> 17 blueprints implemented, 60 others up for review
21:01:58 <russellb> feature freeze merge deadline is in just over one week
21:02:01 <russellb> err, under one week
21:02:01 <mikal> Hi
21:02:16 <russellb> for reference, we completed 25 blueprints in h2, and 17 in h1
21:02:34 <russellb> so ... i suspect a large number will have to get bumped due to capacity, unfortunately
21:03:07 <russellb> if you need something to help you prioritize reviews, i suggest looking at http://status.openstack.org/reviews/
21:03:20 <russellb> if you haven't before.  it factors in bug and/or blueprint priority into the sorting
21:03:47 <russellb> any specific items anyone would like to cover regarding havana-3?
21:04:23 <russellb> other than ... for review in open_reviews: print 'please review: %s' % review
21:04:24 <russellb> :-)
21:04:32 <mikal> Heh
21:04:43 <mikal> The libvirt console stuff doesn't rate well in that review list
21:04:47 <mikal> Not sure why, it's pretty important
21:04:48 <russellb> but if there are particular road blocks we should talk about, now is a good time ...
21:04:59 <russellb> mikal: not sure why either, it's one of the top priority blueprints
21:05:20 <russellb> mikal: oh, you have it linked to the bug, not blueprint
21:05:21 <mriedem> i've got a patch with a +2 that's in the dep chain for the powervm-configdrive blueprint...
21:05:28 <mriedem> kind of blocking that bp
21:05:32 <mikal> russellb: hmmm, ok I will fix that
21:05:42 <russellb> mikal: not sure how the review dashboard will react if you link to both
21:06:04 <mikal> russellb: new plan then. Just review it!
21:06:04 <mikal> :P
21:06:09 <russellb> mikal: heh, k
21:06:35 <russellb> a lot of people seem to be busy with either travel, or pushing their own patches
21:06:44 <mriedem> or furlough...
21:06:48 <russellb> i really don't feel like there's more than usual review bandwidth right now
21:06:50 <russellb> mriedem: yes :-(
21:06:58 <russellb> they should have taken our feature freeze into account, gah
21:07:22 <mriedem> oh well, i rebased one of dansmith's topic branches :)
21:07:32 <russellb> mriedem: awesome, i'm sure he'll appreciate it
21:07:50 <russellb> any other havana-3 comments or questions?
21:08:03 <russellb> after the feature freeze, there is a process for requesting a feature freeze exception
21:08:04 <jog0> russellb: just an FYI, I am working on getting a fake scale jenkins test up to check for major performance regressions like we have had in the past (https://review.openstack.org/#/c/43779/). It will try spawning 100 VMs at once or so with the fake virt driver.  Want to get it up after H3
21:08:17 <russellb> fancy
21:08:28 <russellb> definitely interested in how that comes along
21:08:33 <jog0> and another test for compute rpcapi compat
21:08:52 <jog0> russellb: early results are really good actually (for perf)
21:09:02 <russellb> as in, haven't made it worse since grizzly?
21:09:08 <russellb> or made it better?
21:10:10 <yjiang5> jog0: will it have any performance impact on jenkins itself to create 45 VMs?
21:10:26 <jog0> russellb:  didn't compare with grizzly actually but much better with the iptables and rootwrap fixes
21:10:32 <jog0> yjiang5: nope
21:10:43 <yjiang5> jog0: great.
21:10:44 <jog0> yjiang5: using the fakevirt driver so its even less work
21:11:35 <clarkb> jog0: there is the super micro flavor size if you do want to try spinning up actual VMs
21:11:49 <clarkb> I think we can fit a bunch of those on the 8GB test nodes
21:12:22 <russellb> alright, let's jump to subteams
21:12:25 <jog0> clarkb: that's an option, but the test can't stress the test node or it will be flaky itself
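For reference, a minimal sketch (Python 2, to match the era's codebase) of the kind of scale check jog0 describes: boot a large batch of instances against a nova that is configured with the fake virt driver and time how long they take to go ACTIVE. This is not the change under review at https://review.openstack.org/#/c/43779/; the credentials, image, and flavor IDs below are placeholders.

    import time

    from novaclient.v1_1 import client

    # Placeholder credentials for a devstack-style setup; adjust as needed.
    nova = client.Client('demo', 'secret', 'demo', 'http://127.0.0.1:5000/v2.0')

    N = 100
    start = time.time()
    servers = [nova.servers.create('fake-scale-%d' % i,
                                   'IMAGE_UUID',  # placeholder image id
                                   '1')           # placeholder flavor id
               for i in range(N)]

    # Poll until every instance is ACTIVE (or one goes to ERROR).
    while True:
        states = [nova.servers.get(s.id).status for s in servers]
        if any(st == 'ERROR' for st in states):
            raise RuntimeError('an instance went to ERROR')
        if all(st == 'ACTIVE' for st in states):
            break
        time.sleep(2)

    print 'booted %d instances against the fake driver in %.1fs' % (
        N, time.time() - start)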
21:12:25 <russellb> #topic sub-team reports
21:12:32 <russellb> any subteams want to give a report?
21:12:35 * hartsocks waves
21:12:39 <russellb> jog0: sorry, we can come back to it in open discussion
21:12:44 <n0ano> scheduler - nothing to report, no meeting this week.
21:12:53 <russellb> n0ano: alrighty
21:12:55 <russellb> hartsocks: go for it
21:13:07 <jog0> russellb: I am done, np
21:13:25 <hartsocks> I've got 6 reviews for blueprints in front of me…
21:13:46 <hartsocks> I'm tempted to whine for reviews right now … but I do mail the mailing lists...
21:13:55 <hartsocks> #link https://review.openstack.org/#/c/30282/
21:13:56 * russellb nods
21:14:04 <hartsocks> That's the one blueprint I'll risk some ire for.
21:14:25 <hartsocks> Everything is up and either in revision or waiting on reviewer attention.
21:14:34 <hartsocks> That's about it.
21:14:42 <russellb> alright, i know you guys are dying for that to go in :)
21:14:48 <hartsocks> totally.
21:15:02 <russellb> if for some reason it doesn't happen, apply for an exception
21:15:03 * mriedem thinks about looking for more vmware tests...
21:15:09 <johnthetubaguy> not much from xenapi, doc work and bug work starting to pick up soon, few reviews pending, more work towards gating tests
21:15:22 <russellb> things contained to the drivers should be easier to grant exceptions for as opposed to more core changes
21:15:28 <hartsocks> BTW: you should start to see some vmware CI people posting on reviews.
21:15:33 <russellb> johnthetubaguy: cool
21:15:34 <russellb> hartsocks: nice
21:15:50 <hartsocks> Right now we're just using paste.openstack.org while we square away our infrastructure...
21:16:00 <hartsocks> getting public IPs, things like that.
21:16:01 <russellb> sure, but a great step in the right direction
21:16:47 <russellb> hartsocks: anything else?
21:16:53 <mriedem> i guess i'm a subteam of one, for powervm CI we're working on hw capital requests...
21:16:59 <hartsocks> that's it.
21:17:19 <russellb> mriedem: cool, good to hear ... about 6 months to have that sorted ...
21:17:46 <russellb> #topic open discussion
21:17:52 <russellb> alright folks, anything else you want to chat about?
21:18:02 <mriedem> any news on mid cycle meetup?
21:18:13 <russellb> nope, still collecting interest data
21:18:21 <mikal> Hawaii!
21:18:22 <russellb> but seems there's enough interest to have it
21:18:33 <geekinutah> I'm curious about refactoring test code
21:18:35 <russellb> i'm holding off on any further planning until after the next election
21:18:41 <cyeoh_> mikal: +1!
21:18:47 <geekinutah> a few things, like instance_get_by_uuid
21:18:52 <mikal> russellb: the location of the meetup should be part of your election platform
21:19:17 <russellb> mikal: ha, if only money weren't an issue :-)
21:19:20 <russellb> Hawaii +1
21:19:45 <geekinutah> so we have all sort of stubs defined for instance_get_by_uuid, same for get _by_host and a few others
21:20:09 <russellb> geekinutah: wanting to centralize some of that?
21:20:22 <geekinutah> I'm wondering if I should try and refactor into a common fake, or just fakes per test directory...
21:20:32 <russellb> common fake if it makes sense
21:20:33 <geekinutah> yeah, it's all over the place
21:20:34 <mriedem> geekinutah: take a look at this: https://review.openstack.org/#/c/42283/
21:20:42 <russellb> i'm sure some cases make different assumptions about what the fake provides
21:20:45 <mriedem> geekinutah: that's one that's doing instance dict to object conversion in fakes
21:20:52 <russellb> but as much as possible, i think a common one makes sense
21:21:01 <mriedem> geekinutah: and i keep -1'ing it to make it more centralized
21:21:11 <geekinutah> it's a big problem
21:21:24 <geekinutah> I found 300 or so fakes scattered throughout everything
21:21:41 <russellb> fun
21:21:42 <yjiang5> geekinutah: I think with the object changes, we can simply provide a fake for each object?
21:21:46 <mriedem> yeah, everyone creates their own stubs or just does copy/paste from existing tests
21:22:08 <geekinutah> yjiang5: objects do help a lot
21:22:21 <geekinutah> what I was thinking is that I would start tackling specific test directories, moving them to their own fakes
21:22:30 <russellb> baby steps +1
21:22:31 <geekinutah> then I will actually get code submitted instead of killing myself
21:22:47 <russellb> especially when it would be moving under your feet like crazy otherwise
21:22:56 <bnemec> ISTR the neutron api tests have some fake factory methods that can return a default, but also allow you to override certain parts of the fake if necessary.
21:23:34 <geekinutah> k, sounds like that's a good way to go then
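A minimal sketch of the "default plus overrides" fake factory bnemec recalls from the neutron api tests: one shared fake returns a sensible default instance dict, and each test overrides only the fields it cares about. The field list here is illustrative, not nova's full instance schema.

    import uuid


    def fake_db_instance(**updates):
        """Return a default fake instance dict; tests override what they need."""
        instance = {
            'id': 1,
            'uuid': str(uuid.uuid4()),
            'host': 'fake-host',
            'vm_state': 'active',
            'task_state': None,
            'metadata': {},
            'system_metadata': {},
        }
        instance.update(updates)
        return instance

    # In a test, stub the DB layer with the shared fake instead of a local copy,
    # e.g. (assuming the usual stubout helper on nova's TestCase):
    #   self.stubs.Set(db, 'instance_get_by_uuid',
    #                  lambda ctxt, instance_uuid: fake_db_instance(
    #                      uuid=instance_uuid))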
21:25:03 <russellb> any other topics?
21:25:10 <clarkb> eyes on https://bugs.launchpad.net/nova/+bug/1218133 would be awesome. That particular test file has potential for being very flaky
21:25:12 <uvirtbot> Launchpad bug 1218133 in nova "MultiprocessWSGITest.test_terminate_sigterm fails because the wait for workers to die is unreliable." [Undecided,New]
21:25:52 <geekinutah> ummmm, so that being said about the fakes and all
21:26:21 <russellb> clarkb: ACK, don't think that test is new, are failures just cropping up?
21:26:29 <russellb> clarkb: or has it been a consistent nagging problem?
21:26:53 <russellb> i don't see it on http://status.openstack.org/rechecks/ ...
21:26:58 <geekinutah> some kind of stub duplication detection stuff might be interesting to look into
21:26:58 <clarkb> russellb: I don't think it is a consistent nagging problem yet, but there was one failure yesterday and as I started digging into it realized that it was just flaky
21:27:09 <clarkb> russellb: so probably lower priority than the things causing most of the gate resets
21:27:11 <russellb> gotcha
21:27:12 <geekinutah> I did it with a bunch of grepping, but we could probably automate that and avoid some of this pain in the future
21:27:21 <mriedem> clarkb: maybe showing up due to parallel runs now?
21:27:35 <russellb> it's a unit test though
21:27:41 <russellb> so, no different than it has been for a good while
21:28:01 <mriedem> oh right, tempest is parallel, nova isn't
21:28:02 <mriedem> right?
21:28:07 <clarkb> both are parallel
21:28:18 <hartsocks> hrm… this looks like a lot of IPC stuff I did a long time ago.
21:28:33 <clarkb> I have a hunch that 99% of the time it works, but when a slave has a bad neighbor, scheduling delays mean 5 seconds isn't long enough for the workers to die
21:28:46 <russellb> clarkb: appreciate the analysis in the bug
21:29:17 <hartsocks> I used to use SIGUSR1 and SIGUSR2 to have processes check on each other's health… would that help here?
21:30:27 <russellb> hartsocks: if you fix this, you earn 3 code reviews from me on the reviews of your choice, haha
21:30:39 <hartsocks> Okay. Now I'm motivated.
21:31:00 <russellb> my offer is logged, i have to actually do it now
21:31:23 <russellb> anyway, let's keep looking into this ...
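A minimal sketch of the more robust wait hinted at in the bug discussion: rather than sleeping a fixed 5 seconds and checking once, poll for worker death until a deadline. This is only the general pattern, not the actual fix for bug 1218133.

    import errno
    import os
    import time


    def wait_for_workers_to_die(pids, timeout=30, interval=0.1):
        """Return True once none of the pids exist, False on timeout."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            alive = []
            for pid in pids:
                try:
                    os.kill(pid, 0)  # signal 0: existence check only
                    alive.append(pid)
                except OSError as e:
                    if e.errno != errno.ESRCH:  # ESRCH: no such process
                        raise
            if not alive:
                return True
            time.sleep(interval)
        return False

    # Caveat: if the workers are direct children of the caller they must also
    # be reaped (os.waitpid), since os.kill(pid, 0) still succeeds for zombies.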
21:31:28 <russellb> any other topics before i shut down the meeting?
21:32:04 <russellb> thanks everyone!  good luck in the final push for h3!
21:32:07 <russellb> #endmeeting