21:00:40 #startmeeting nova
21:00:41 Meeting started Thu Aug 29 21:00:40 2013 UTC and is due to finish in 60 minutes. The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:42 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:44 Hello, everyone!
21:00:45 The meeting name has been set to 'nova'
21:00:48 hi
21:00:51 hi
21:00:52 hello
21:00:54 o/
21:00:54 o/
21:00:55 hi
21:00:59 #link https://wiki.openstack.org/wiki/Meetings/Nova
21:01:00 o/
21:01:04 ls -ltr /rpm/
21:01:12 n0ano: orly? :)
21:01:14 \o
21:01:21 #topic havana-3 status
21:01:30 russellb, too many windows and the pointer doesn't follow my eyes yet :-)
21:01:31 o/
21:01:33 #link https://launchpad.net/nova/+milestone/havana-3
21:01:49 17 blueprints implemented, 60 others up for review
21:01:58 feature freeze merge deadline is in just over one week
21:02:01 err, under one week
21:02:01 Hi
21:02:16 for reference, we completed 25 blueprints in h2, and 17 in h1
21:02:34 so ... i suspect a large number will have to get bumped due to capacity, unfortunately
21:03:07 if you need something to help you prioritize reviews, i suggest looking at http://status.openstack.org/reviews/
21:03:20 if you haven't before. it factors in bug and/or blueprint priority into the sorting
21:03:47 any specific items anyone would like to cover regarding havana-3?
21:04:23 other than ... for review in open_reviews: print 'please review: %s' % review
21:04:24 :-)
21:04:32 Heh
21:04:43 The libvirt console stuff doesn't rate well in that review list
21:04:47 Not sure why, it's pretty important
21:04:48 but if there are particular roadblocks we should talk about, now is a good time ...
21:04:59 mikal: not sure why either, it's one of the top priority blueprints
21:05:20 mikal: oh, you have it linked to the bug, not the blueprint
21:05:21 i've got a patch with a +2 that's in the dep chain for the powervm-configdrive blueprint...
21:05:28 kind of blocking that bp
21:05:32 russellb: hmmm, ok I will fix that
21:05:42 mikal: not sure how this will react if you link to both
21:06:04 russellb: new plan then. Just review it!
21:06:04 :P
21:06:09 mikal: heh, k
21:06:35 a lot of people seem to be busy with either travel, or pushing their own patches
21:06:44 or furlough...
21:06:48 i really don't feel like there's more than usual review bandwidth right now
21:06:50 mriedem: yes :-(
21:06:58 they should have taken our feature freeze into account, gah
21:07:22 oh well, i rebased one of dansmith's topic branches :)
21:07:32 mriedem: awesome, i'm sure he'll appreciate it
21:07:50 any other havana-3 comments or questions?
21:08:03 after the feature freeze, there is a process for requesting a feature freeze exception
21:08:04 russellb: just an FYI I am working on getting a fake scale jenkins test up to check for major performance regressions like we have had in the past (https://review.openstack.org/#/c/43779/). It will try spawning 100 VMs at once or so with the fake virt driver. Want to get it up after H3
21:08:17 fancy
21:08:28 definitely interested in how that comes along
21:08:33 and another test for compute rpcapi compat
21:08:52 russellb: early results are really good actually (for perf)
21:09:02 as in, haven't made it worse since grizzly?
21:09:08 or made it better?
21:10:10 jog0: will it have any performance impact on jenkins itself to create 45 VMs?
21:10:26 russellb: didn't compare with grizzly actually but much better with the iptables and rootwrap fixes
21:10:32 yjiang5: nope
21:10:43 jog0: great.
21:10:44 yjiang5: using the fake virt driver so it's even less work
21:11:35 jog0: there is the super micro flavor size if you do want to try spinning up actual VMs
21:11:49 I think we can fit a bunch of those on the 8GB test nodes
21:12:22 alright, let's jump to subteams
21:12:25 clarkb: that's an option, but the test can't stress the test node, otherwise it will be flaky itself
21:12:25 #topic sub-team reports
21:12:32 any subteams want to give a report?
21:12:35 * hartsocks waves
21:12:39 jog0: sorry, we can come back to it in open discussion
21:12:44 scheduler - nothing to report, no meeting this week.
21:12:53 n0ano: alrighty
21:12:55 hartsocks: go for it
21:13:07 russellb: I am done, np
21:13:25 I've got 6 reviews for blueprints in front of me…
21:13:46 I'm tempted to whine for reviews right now … but I do mail the mailing lists...
21:13:55 #link https://review.openstack.org/#/c/30282/
21:13:56 * russellb nods
21:14:04 That's the one blueprint I'll risk some ire for.
21:14:25 Everything is up and either in revision or waiting on reviewer attention.
21:14:34 That's about it.
21:14:42 alright, i know you guys are dying for that to go in :)
21:14:48 totally.
21:15:02 if for some reason it doesn't happen, apply for an exception
21:15:03 * mriedem thinks about looking for more vmware tests...
21:15:09 not much from xenapi, doc work and bug work starting to pick up soon, few reviews pending, more work towards gating tests
21:15:22 things contained to the drivers should be easier to grant exceptions for as opposed to more core changes
21:15:28 BTW: you should start to see some vmware CI people posting on reviews.
21:15:33 johnthetubaguy: cool
21:15:34 hartsocks: nice
21:15:50 Right now we're just using paste.openstack.org while we square away our infrastructure...
21:16:00 getting public IPs, things like that.
21:16:01 sure, but a great step in the right direction
21:16:47 hartsocks: anything else?
21:16:53 i guess i'm a subteam of one; for powervm CI we're working on hw capital requests...
21:16:59 that's it.
21:17:19 mriedem: cool, good to hear ... about 6 months to have that sorted ...
21:17:46 #topic open discussion
21:17:52 alright folks, anything else you want to chat about?
21:18:02 any news on the mid-cycle meetup?
21:18:13 nope, still collecting interest data
21:18:21 Hawaii!
21:18:22 but seems there's enough interest to have it
21:18:33 I'm curious about refactoring test code
21:18:35 i'm holding off on any further planning until after the next election
21:18:41 mikal: +1!
21:18:47 a few things, like instance_get_by_uuid
21:18:52 russellb: the location of the meetup should be part of your election platform
21:19:17 mikal: ha, if only money weren't an issue :-)
21:19:20 Hawaii +1
21:19:45 so we have all sorts of stubs defined for instance_get_by_uuid, same for get_by_host and a few others
21:20:09 geekinutah: wanting to centralize some of that?
21:20:22 I'm wondering if I should try and refactor into a common fake, or just fakes per test directory...
21:20:32 common fake if it makes sense
21:20:33 yeah, it's all over the place
21:20:34 geekinutah: take a look at this: https://review.openstack.org/#/c/42283/
21:20:42 i'm sure some cases make different assumptions about what the fake provides
21:20:45 geekinutah: that's one that's doing instance dict to object conversion in fakes
21:20:52 but as much as possible, i think a common one makes sense
21:21:01 geekinutah: and i keep -1'ing it to make it more centralized
21:21:11 it's a big problem
21:21:24 I found 300 or so fakes scattered throughout everything
21:21:41 fun
21:21:42 geekinutah: I think with the object changes, we can simply provide a fake for each object?
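[editor's sketch] The common fake being discussed above, in the override-able factory style, could look roughly like this. This is a hedged illustration, not nova's actual test code; the function name and default fields are hypothetical:

```python
import uuid

# Hypothetical centralized fake factory: return a sensible default
# instance dict, and let each test override only the fields it cares
# about, instead of every test directory carrying its own stub.

def fake_instance(**overrides):
    """Build a fake instance dict; keyword args override the defaults."""
    instance = {
        'uuid': str(uuid.uuid4()),
        'host': 'fake-host',
        'vm_state': 'active',
        'task_state': None,
    }
    instance.update(overrides)
    return instance

# A test that only cares about migration state stays short:
migrating = fake_instance(host='other-host', task_state='migrating')
```

The point of the `**overrides` signature is that differing assumptions across tests stop being a reason to fork the fake: each caller customizes the one field it depends on and inherits the rest.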
21:21:46 yeah, everyone creates their own stubs or just does copy/paste from existing tests
21:22:08 yjiang5: objects do help a lot
21:22:21 what I was thinking is that I would start tackling specific test directories, moving them to their own fakes
21:22:30 baby steps +1
21:22:31 then I will actually get code submitted instead of killing myself
21:22:47 especially when it would be moving under your feet like crazy otherwise
21:22:56 ISTR the neutron api tests have some fake factory methods that can return a default, but also allow you to override certain parts of the fake if necessary.
21:23:34 k, sounds like that's a good way to go then
21:25:03 any other topics?
21:25:10 eyes on https://bugs.launchpad.net/nova/+bug/1218133 would be awesome. That particular test file has potential for being very flaky
21:25:12 Launchpad bug 1218133 in nova "MultiprocessWSGITest.test_terminate_sigterm fails because the wait for workers to die is unreliable." [Undecided,New]
21:25:52 ummmm, so that being said about the fakes and all
21:26:21 clarkb: ACK, don't think that test is new, are failures just cropping up?
21:26:29 clarkb: or has it been a consistent nagging problem?
21:26:53 i don't see it on http://status.openstack.org/rechecks/ ...
21:26:58 some kind of stub duplication detection stuff might be interesting to look into
21:26:58 russellb: I don't think it is a consistent nagging problem yet, but there was one failure yesterday and as I started digging into it I realized that it was just flaky
21:27:09 russellb: so probably lower priority than the things causing most of the gate resets
21:27:11 gotcha
21:27:12 I did it with a bunch of grepping, but we could probably automate that and avoid some of this pain in the future
21:27:21 clarkb: maybe showing up due to parallel runs now?
21:27:35 it's a unit test though
21:27:41 so, no different than it has been for a good while
21:28:01 oh right, tempest is parallel, nova isn't
21:28:02 right?
21:28:07 both are parallel
21:28:18 hrm… this looks like a lot of IPC stuff I did a long time ago.
21:28:33 I have a hunch that 99% of the time it works, but when a slave has a bad neighbor, scheduling happens such that 5 seconds isn't long enough for the workers to die
21:28:46 clarkb: appreciate the analysis in the bug
21:29:17 I used to use SIGUSR1 and SIGUSR2 to have processes check on each other's health… would that help here?
21:30:27 hartsocks: if you fix this, you earn 3 code reviews from me on the reviews of your choice, haha
21:30:39 Okay. Now I'm motivated.
21:31:00 my offer is logged, i have to actually do it now
21:31:23 anyway, let's keep looking into this ...
21:31:28 any other topics before i shut down the meeting?
21:32:04 thanks everyone! good luck in the final push for h3!
21:32:07 #endmeeting
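[editor's sketch] Per clarkb's hunch above, the flakiness in bug 1218133 comes from a fixed wait: when a test node has a noisy neighbor, 5 seconds may not be enough for the workers to die. One robustness option (illustrative only; function name and timeouts are hypothetical, not nova's actual fix) is to poll for worker exit up to a generous deadline instead of sleeping a fixed interval:

```python
import os
import time

# Poll until every worker pid is gone or the deadline passes. Signal 0
# does no delivery; os.kill(pid, 0) merely checks that the process still
# exists, raising OSError once it has exited.

def wait_for_workers_to_die(pids, timeout=30.0, interval=0.1):
    """Return True if all pids exited within `timeout` seconds."""
    deadline = time.time() + timeout
    remaining = set(pids)
    while remaining and time.time() < deadline:
        for pid in list(remaining):
            try:
                os.kill(pid, 0)  # existence check only
            except OSError:
                remaining.discard(pid)  # worker has exited
        if remaining:
            time.sleep(interval)
    return not remaining
```

The fast case still returns in one poll interval, while a slow node gets the full budget, so the test only fails when workers genuinely refuse to die.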