14:02:22 <russellb> #startmeeting nova
14:02:23 <openstack> Meeting started Thu Feb  6 14:02:22 2014 UTC and is due to finish in 60 minutes.  The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:24 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:27 <openstack> The meeting name has been set to 'nova'
14:02:33 <mriedem> hi
14:02:40 <hartsocks> \o
14:02:42 <russellb> hi everyone!
14:02:47 <PhilD> Hi
14:02:47 <n0ano> o/
14:02:57 <russellb> #topic general
14:03:05 <russellb> first, the schedule
14:03:12 <russellb> #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
14:03:23 <baoli> rkukura, #openstack-meeting is open now
14:03:32 <russellb> blueprint approval deadline was this week
14:03:43 <russellb> code for blueprints must be posted in 2 weeks from today
14:03:54 <russellb> i3 quickly closing in on us, so be aware :)
14:04:08 <russellb> other big thing ... nova meetup next week
14:04:15 <russellb> #link https://wiki.openstack.org/wiki/Nova/IcehouseCycleMeetup
14:04:29 <johnthetubaguy> sounds good
14:04:31 <russellb> if you have any questions, let me know
14:04:42 <russellb> we've been collecting topics on https://etherpad.openstack.org/p/nova-icehouse-mid-cycle-meetup-items
14:05:02 <russellb> i'm going to start organizing that into a very rough cut at a schedule for some of the big things
14:05:02 <n0ano> anyone know what time we start on Mon?
14:05:06 <russellb> n0ano: 9am
14:05:14 <n0ano> too civilized :-)
14:05:26 <russellb> i'm not a great morning person
14:05:50 <russellb> i want to tag specific big topics to cover in the mornings, at least
14:06:10 <russellb> like, tasks ... want to see if we can bang through review/approval of design, at least
14:06:17 <russellb> and finalizing plans for taking v3 to final
14:06:32 <russellb> neutron interaction is another big one
14:06:35 <johnthetubaguy> +1 for task discussion
14:06:42 <russellb> process discussion would be another
14:06:50 <russellb> so if you have any topic requests, please record them
14:07:09 <garyk> hi
14:07:12 <ndipanov> russellb, in the etherpad?
14:07:17 <russellb> ndipanov: yeah etherpad
14:07:33 <russellb> we can discuss more in open discussion if we have time
14:07:36 <russellb> #topic sub-teams
14:07:40 <hartsocks> \0
14:07:45 * n0ano gantt
14:07:53 <russellb> hartsocks: you're up
14:07:56 * johnthetubaguy raises xenapi hand
14:08:04 <hartsocks> #link https://wiki.openstack.org/wiki/NovaVMware/Minesweeper
14:08:32 <hartsocks> Just a quick note there. The CI guys are putting notes on that Wiki. Some status update info is linked on that page as well.
14:08:50 <russellb> cool
14:08:56 <russellb> do you have your tempest config published?
14:09:00 <russellb> what tests you exclude, if any?
14:09:23 <hartsocks> We had some infra trouble earlier in the week. And, we'll link that info and info about configs, excludes etc. on that wiki.
14:09:27 <garyk> features that are not supported are not tested - for example security groups
14:09:41 * russellb nods
14:09:42 <mriedem> hartsocks: gary: i'd like to see more vmware people scoring on this before the rest of us are approving it: https://review.openstack.org/#/c/70137/
14:09:54 <russellb> just would be good to have that reference, i think all the driver CI systems should
14:10:05 <garyk> mriedem: sure
14:10:14 <mriedem> it kind of blew up
14:10:18 <mriedem> and it's blocking your other fixes
14:10:31 <garyk> mriedem: what do you mean it blew up?
14:10:38 <hartsocks> we are going to go through a set of reviews as a team.
14:10:43 <mriedem> just got bigger than the original couple of patch sets
14:10:50 <mriedem> lots of back and forth between vmware people it seemed
14:11:01 <mriedem> so i'm waiting for that to settle before getting  back on it
14:11:03 <garyk> mriedem: correct - it was a result of comments on the patch sets
14:11:10 <mriedem> yeah, i know
14:11:10 <garyk> mriedem: understood
14:11:43 <garyk> if possible i'd like to talk about the deferral of the image caching
14:12:07 <garyk> i sent a mail to the list. not sure if you guys have seen it. not sure why it was pushed out of I? the code has been ready since december
14:12:29 <johnthetubaguy> garyk: blueprint wasn't approved I guess?
14:12:37 <garyk> johnthetubaguy: bp was approved
14:12:49 <russellb> if it was approved, then i shouldn't have deferred it
14:12:50 <russellb> link?
14:12:57 <russellb> i didn't think it was
14:13:01 <garyk> it was approved, then made pending on a BP that was partially implemented, and the proposer of that BP went missing
14:13:09 <garyk> sec
14:13:26 <garyk> https://blueprints.launchpad.net/openstack/?searchtext=vmware-image-cache-management
14:13:40 <russellb> not approved
14:13:42 <russellb> "Review"
14:14:04 <garyk> it was approved then moved due to https://blueprints.launchpad.net/nova/+spec/multiple-image-cache-handlers which is completely unrelated
14:14:44 <garyk> so what do we need to move it from review to approved? this is a parity feature. as part of it i implemented the generic aging code
14:15:05 <johnthetubaguy> I think I stated it needed more info for the docs team, or something like that?
14:15:17 <russellb> the reason i put it in "Review" originally was i wanted the plan to be around making existing code generic where possible
14:15:23 <russellb> and i never got back to review it again
14:15:35 <russellb> looks like johnthetubaguy also wanted docs added
14:15:51 <russellb> but perhaps more as a note before code merges
14:15:57 <garyk> russellb: thanks for the clarification. basically it is exactly the same as libvirt's aging. no new config var.
14:16:10 <russellb> sharing some code now?
14:16:19 <russellb> where it makes sense?
14:16:22 <garyk> can you please clarify what i need to add?
14:16:46 <russellb> or did it just end up being sharing config?
14:16:47 <garyk> the code that is possible to share is shared. can you please look at the mail on the list. i guess we can take it from there
14:17:06 <russellb> heh ok if you wish, i mean you have my attention now
14:17:18 <russellb> anyway it's probably fine
14:17:25 <garyk> the config is the same as libvirt's. the aging is just implemented in the driver - based on the generic code
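[Editor's note: a minimal sketch of the image-cache aging being described here — drop cached images that no instance references once they exceed a configured maximum age. This is not the actual patch; the function names are illustrative, though the threshold constant mirrors the libvirt-style aging option mentioned above.]

```python
import time

# Illustrative stand-in for the libvirt-style aging option discussed above
# (not a claim about the exact option name used in the actual patch).
REMOVE_UNUSED_ORIGINAL_MINIMUM_AGE_SECONDS = 24 * 3600


def age_image_cache(cached_images, used_image_ids, now=None):
    """Return the ids of cached images that should be removed.

    cached_images: dict mapping image id -> last-used unix timestamp
    used_image_ids: set of image ids still referenced by instances
    """
    now = now or time.time()
    to_remove = []
    for image_id, last_used in cached_images.items():
        if image_id in used_image_ids:
            continue  # still referenced, never age out
        if now - last_used >= REMOVE_UNUSED_ORIGINAL_MINIMUM_AGE_SECONDS:
            to_remove.append(image_id)
    return to_remove


# Example: one stale unused image, one fresh one, one still in use.
cache = {'img-a': time.time() - 2 * 24 * 3600,
         'img-b': time.time() - 3600,
         'img-c': time.time() - 5 * 24 * 3600}
print(age_image_cache(cache, used_image_ids={'img-c'}))  # ['img-a']
```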
14:17:31 <russellb> ok.
14:17:36 <russellb> anything else on vmware?
14:17:49 <hartsocks> We could go on about our BP? :-)
14:17:50 <garyk> i do not want to take too much of the meeting time and i think i interrupted the minesweeper update
14:18:07 <russellb> hartsocks: open discussion if you'd like
14:18:12 <russellb> johnthetubaguy: let's talk xenapi
14:18:18 <russellb> johnthetubaguy: i'm very interested to check in on CI status ...
14:18:30 <BobBall> I'll talk to that.  CI is annoyingly close.
14:18:39 <johnthetubaguy> BobBall: go for it
14:18:55 <BobBall> Our initial efforts were on getting something integrated with the -infra stuff
14:19:07 <johnthetubaguy> the key thing for me is that we are putting up an external system now, since the -infra stuff has stalled I guess?
14:19:16 <BobBall> and we've got a bunch of patches up to enable that - but getting review time on that has been tricky
14:19:44 <BobBall> For the last two? weeks we've been focused on setting up our own 3rd party system rather than using -infra
14:19:46 <russellb> yeah they've been pretty slammed
14:19:48 <BobBall> which is annoyingly close
14:20:03 <BobBall> Got nodepool running and deploying xenserver nodes and another script that runs the tests on them
14:20:06 <russellb> any timeframe for turning it on?
14:20:08 <BobBall> joining the dots ATM
14:20:23 <BobBall> If you'd asked me at the beginning of this week I would have said "Friday" ;)
14:20:46 <BobBall> So it really could be any day now.
14:20:46 <russellb> how about confidence level that you'll have it running by icehouse-3?
14:20:54 <BobBall> High
14:20:58 <russellb> ok, great
14:21:12 <johnthetubaguy> we have tempest running outside of the automation right?
14:21:23 <johnthetubaguy> and seen it run in VMs that nodepool should be creating
14:21:35 <johnthetubaguy> so it really is just joining the dots now, we hope
14:21:36 <BobBall> We have full tempest running + passing in VMs in RAX using the -infra scripts
14:21:52 <BobBall> yes, indeed.
14:21:52 <johnthetubaguy> yeah, so almost, but no cigar
14:22:03 <johnthetubaguy> thats all from xenapi
14:22:06 <russellb> ok, well cutting it close
14:22:09 <russellb> but sounds like good progress
14:22:19 <BobBall> Too close, yes :/
14:22:37 <russellb> while we're still on driver CI ... hyper-v CI caught a bug yesterday, so that was cool to see
14:22:37 <johnthetubaguy> yeah, we aimed for gate integration, and that gamble didn't quite work this time, but at least that's much closer for Juno
14:22:47 <russellb> johnthetubaguy: great goal though
14:23:04 <russellb> johnthetubaguy: bet you can get more attention early cycle
14:23:05 <johnthetubaguy> yeah, should be awesome soon :)
14:23:16 <BobBall> So, yes, we distracted ourselves from integration to get the 3rd party stuff working so we don't miss the Icehouse deadline.
14:23:26 <russellb> sounds like a good plan
14:23:42 <russellb> n0ano: alright, gantt / scheduler stuff
14:23:51 <n0ano> tnx, couple of things...
14:23:53 <russellb> n0ano: i saw your mail about devstack support, i assumed that means devstack support merged?
14:24:18 <n0ano> russellb, yep, it's there and I sent an email out on how to configure it (with a typo that just got corrected)
14:24:25 <russellb> great
14:24:31 <n0ano> couple of things from the meeting...
14:25:11 <n0ano> no_db scheduler - still working on it, turns out removing the compute_node table from the DB is harder than originally thought but they'll get it eventually
14:25:32 <russellb> it's starting to feel a bit risky for icehouse anyway
14:25:41 <russellb> not sure how others feel about it ...
14:25:59 <russellb> but massive architectural change to critical component better early in the cycle than toward the end
14:25:59 <n0ano> for icehouse - as an option maybe, not as a default I hope
14:26:06 <russellb> ok, perhaps
14:26:14 <russellb> guess i was assuming we'd do it as *the* way it works
14:26:24 <johnthetubaguy> what about a little caching, so there's no db access during a user call to the scheduler?
14:26:36 <russellb> johnthetubaguy: your bp?
14:26:40 * johnthetubaguy feels dirty for plugging his patch
14:26:43 <johnthetubaguy> yeah...
14:26:51 <russellb> johnthetubaguy: i was going to say, i think that got deferred, but based on your description, i'd be happy to sponsor it
14:26:55 <johnthetubaguy> https://review.openstack.org/#/c/67855/
14:27:03 <russellb> johnthetubaguy: because your approach sounds simple / low risk
14:27:15 <n0ano> I think the current plan is a complete cut to the no_db way, but if you think there's a way to develop it stepwise, that would be good
14:27:17 <johnthetubaguy> yeah, that one is approved, for a single host solution
14:27:24 <johnthetubaguy> it just races with multiple hosts
14:27:32 <garyk> i have a few issues with that one when the scheduling deals with data that is not homogeneous, but we can take it offline
14:27:34 <russellb> n0ano: i'm not sure there's a sane way to make it optional
14:27:56 <johnthetubaguy> we need to find one, I had a rough idea, but now is not the time
14:27:58 <n0ano> russellb, note I said `if', I'm not sure there's a way either.
14:27:58 <russellb> johnthetubaguy: ah, you approved it yourself :-p
14:28:08 <johnthetubaguy> basically status update from compute, and stats fetch driver
14:28:27 <johnthetubaguy> russellb: yeah, probably...
14:28:39 <russellb> that's fine, in this case anyway, heh
14:29:29 <russellb> i guess we can leave the full no_db thing targeted for now
14:29:39 <russellb> but the closer we get, the less comfortable i will be with it merging
14:29:44 <russellb> we'll see
14:29:47 <n0ano> anyway, we also had a bit of a debate about scheduler request rate vs. compute node update rate, I don't expect that to be resolved until we have a working no_db scheduler with performance numbers.
14:30:02 <russellb> yes, performance numbers are key
14:30:29 <n0ano> code forklift - devstack changes merged in, still working on getting the changes that make the unit tests pass merged
14:30:30 <mspreitz> the debate was about how those things scale
14:30:32 <johnthetubaguy> yeah, performance testing showed the complex cache didn't work, so I stripped it down to what is up for review now
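[Editor's note: a rough sketch of the caching idea johnthetubaguy describes above — refresh host state on a background interval and serve scheduling requests from the in-memory copy, so the user-facing call does no DB work. This is not the code in the linked review; the class and method names are invented, and as noted in the discussion a per-process cache like this can race once multiple scheduler hosts run.]

```python
import threading
import time


class CachingHostStateProvider(object):
    """Keep an in-memory copy of host state, refreshed in the background."""

    def __init__(self, fetch_host_states, refresh_interval=60):
        # fetch_host_states: callable doing the expensive DB/compute query
        self._fetch = fetch_host_states
        self._interval = refresh_interval
        self._lock = threading.Lock()
        self._cache = []
        self._refresh()  # warm the cache before serving any requests

    def _refresh(self):
        states = self._fetch()  # the only place the DB is touched
        with self._lock:
            self._cache = states

    def start(self):
        def loop():
            while True:
                time.sleep(self._interval)
                self._refresh()
        thread = threading.Thread(target=loop)
        thread.daemon = True  # don't keep the process alive for the refresher
        thread.start()

    def get_host_states(self):
        # Called on the user-facing scheduling path: no DB access here, just
        # the (possibly slightly stale) cached view of the hosts.
        with self._lock:
            return list(self._cache)


# Example usage with a fake fetch function standing in for the DB query.
provider = CachingHostStateProvider(
    lambda: [{'host': 'node1', 'free_ram_mb': 2048}])
provider.start()
print(provider.get_host_states())
```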
14:30:48 * n0ano fighting gerrit gets interesting
14:30:56 <russellb> so on the forklift, couple of things
14:31:06 <russellb> 1) i responded to your devstack message with some next steps on getting gate testing running
14:31:18 <russellb> i'd love to see an experimental job against nova that runs tempest with gantt instead of nova-scheduler
14:31:28 <russellb> it's not actually that much work, just the right tweaks in the right places
14:31:35 <n0ano> saw that, after the unit test patches are in I intend to work on that
14:31:42 <russellb> OK great
14:31:48 <russellb> ok, the unit test patches ....
14:31:57 <russellb> my only issue was whether or not we should be including scheduler/rpcapi.py
14:32:04 <russellb> longer term, that needs to be in its own repo
14:32:24 <garyk> we also need to have the scheduler to use objects
14:32:26 <russellb> we could merge it now, and do that later, but that makes the change history against the current gantt more complicated to replay on regenerating the repo later
14:32:35 <n0ano> russellb, yeah, I think I dealt with that this morning but, due to gate testing, I had to add that file in the first patch and then delete it in the second
14:32:57 <garyk> if the scheduler uses objects then the forklift will be a lot easier in the long run and it will be backward compatible with nova
14:33:06 <garyk> it will help with the database parts
14:33:26 <russellb> anyone working on that?
14:33:34 <n0ano> garyk we're just doing what the current scheduler does
14:33:39 <garyk> objects?
14:33:48 <garyk> i started. initial patches got blocked
14:33:59 <garyk> i need to go back and speak with dan smith about those
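[Editor's note: a small illustration of why "objects" help the forklift garyk mentions — data crossing RPC carries a version and handles its own (de)serialization, so a split-out scheduler can stay backward compatible with nova without sharing its DB models. This is plain Python rather than the real nova objects framework; the class and field names are made up for the example.]

```python
class ComputeNodeState(object):
    """Versioned, self-serializing view of a compute node for RPC."""

    VERSION = '1.1'
    fields = ('host', 'vcpus', 'memory_mb', 'local_gb')  # illustrative fields

    def __init__(self, **kwargs):
        for name in self.fields:
            setattr(self, name, kwargs.get(name))

    def to_primitive(self):
        # What actually goes over RPC: the data plus the version that made it.
        return {'version': self.VERSION,
                'data': {name: getattr(self, name) for name in self.fields}}

    @classmethod
    def from_primitive(cls, primitive):
        obj = cls(**primitive['data'])
        # A peer can downgrade or ignore newer fields based on 'version',
        # which is what keeps a split-out scheduler compatible with nova
        # without direct access to its database schema.
        obj.received_version = primitive['version']
        return obj


node = ComputeNodeState(host='node1', vcpus=8, memory_mb=16384, local_gb=200)
wire = node.to_primitive()
print(ComputeNodeState.from_primitive(wire).host)  # node1
```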
14:34:23 <russellb> n0ano: ok i'll review again today to see what you updated
14:34:53 <johnthetubaguy> as an aside, I guess the scheduler rpcapi and cells rpcapi both need to become public APIs, similar issues
14:35:06 <russellb> why cells?
14:35:06 <n0ano> btw, I'm not too worried about regenerating, now that I've done it once the second time should be easier, even if the history is a little convoluted
14:35:23 <russellb> in the scheduler case, it's just the public API to gantt
14:36:21 <johnthetubaguy> cells, we wanted to make that public, so people can plug in there too
14:36:38 <russellb> ah, i see ...
14:36:50 <russellb> i think the public part would be the manager side, right?
14:36:58 <johnthetubaguy> but let's ignore that comment for now… it's irrelevant really
14:37:02 <russellb> ok :)
14:37:11 <russellb> related: i want to talk about the future of cells next week
14:37:11 <johnthetubaguy> yes, possibly
14:37:29 <russellb> #topic bugs
14:37:34 <russellb> bug day tomorrow!
14:37:36 <russellb> #link http://lists.openstack.org/pipermail/openstack-dev/2014-February/026320.html
14:37:49 <russellb> thanks johnthetubaguy for putting message together
14:37:57 <johnthetubaguy> np
14:37:58 <russellb> shall we just collaborate in #openstack-nova?
14:38:03 <johnthetubaguy> I think that works best
14:38:11 <johnthetubaguy> people spot that it's happening that way
14:38:11 <russellb> agreed
14:38:45 <russellb> 196 new bugs
14:38:59 <johnthetubaguy> well, that's under the 200 we had before
14:39:09 <russellb> yeah, someone(s) have been working on it apparently :)
14:39:18 <johnthetubaguy> email works, who knew
14:39:26 <russellb> johnthetubaguy: :-)
14:39:38 <russellb> i'll be very pleased if we can hit 150 tomorrow
14:39:43 <russellb> and i'll buy cake if we hit 100
14:39:50 <russellb> or something
14:39:53 <a-gorodnev> beer
14:39:58 <garyk> russellb: are you talking about triage or fixes?
14:40:00 <n0ano> a-gorodnev, +1
14:40:02 <russellb> triage
14:40:09 <russellb> fixes are great too of course
14:40:22 <russellb> but we have to stay on top of triage to make sure we're catching the most critical reports and getting attention on them
14:40:23 <garyk> ack
14:40:55 <russellb> so we'll talk bugs tomorrow ... so on for now
14:40:57 <russellb> #topic blueprints
14:41:06 <mriedem> o/
14:41:10 <russellb> #link https://blueprints.launchpad.net/nova/+milestone/icehouse-3
14:41:23 <russellb> my biggest ask right now is to please make sure the status is accurate on your blueprints
14:41:34 <russellb> because it's honestly too many for me to keep on top of and keep updated
14:41:35 <mriedem> jog0 has a tool scanning those now
14:41:48 <russellb> mriedem: nice
14:41:51 <russellb> script all the things
14:41:53 <mriedem> patches with unapproved/wrong blueprint links
14:42:07 <johnthetubaguy> cool, I should try get hold of that
14:42:08 <russellb> time to bring out the -2 hammer
14:42:11 <russellb> johnthetubaguy: :-)
14:42:24 <mriedem> status?
14:42:26 <johnthetubaguy> I was going to take on keeping the blueprints honest, I will try to do a loop through those soon
14:42:45 <russellb> johnthetubaguy: hugely appreciated
14:42:59 <russellb> and this is another big topic for next week ... revisiting our process around blueprints
14:43:16 <mriedem> i wanted to point out that i've got my patches up for db2 support after working out the kinks locally, but of course it's going to be blocked on CI: https://review.openstack.org/#/c/69047/
14:43:25 <mriedem> which is on hold until after the chinese new year...
14:43:27 <garyk> would it be possible for you guys to review BP's next week - implementations that is
14:43:37 <russellb> garyk: some i'm sure
14:43:41 <russellb> mriedem: CI?
14:43:45 <garyk> if there are a lot of people in the same room then it can be done in groups
14:43:48 <mriedem> 3rd party CI with a db2 backend
14:44:09 <mriedem> that was contingent on the blueprint
14:44:12 <russellb> mriedem: yeah, that's going to be a blocker ... but if it's running in time, ping
14:44:22 <mriedem> i don't have high hopes...
14:44:36 <russellb> k, well saves me from having to break bad news to you then, you're fully aware :)
14:44:50 <mriedem> i've already been setting low expectations internally :)
14:45:02 <russellb> the next cut on the icehouse-3 list is stuff still "not started"
14:45:16 <russellb> PhilD: you have one "Not Started"
14:45:22 <russellb> https://blueprints.launchpad.net/nova/+spec/tenant-id-based-auth-for-neutron
14:45:41 <russellb> not sure assignees of the others are here
14:46:03 <PhilD> Code is up for review: https://review.openstack.org/#/c/69972/
14:46:16 <russellb> heh, k
14:46:25 * russellb sets needs review
14:46:34 <johnthetubaguy> OK, so I will try to make sure we have all the stats straight on those, I see joe's tool now
14:46:55 <PhilD> thanks
14:46:56 <russellb> there's probably some that are dead
14:47:03 <russellb> abandoned patches that haven't been touched in weeks
14:47:06 <johnthetubaguy> also, abandoned patches, etc, should get detected
14:47:07 <russellb> should just defer those too
14:47:10 <johnthetubaguy> yeah
14:47:19 <russellb> need to aggressively prune this list toward reality
14:47:26 <johnthetubaguy> time to deploy some scripting!
14:48:06 <russellb> wheeee
14:48:10 <russellb> the blueprint API is not great.
14:48:18 <a-gorodnev> I'm interested in one BP that hasn't been approved
14:48:23 <russellb> better than screen scraping
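[Editor's note: a hypothetical example of the scripting discussed above — query Gerrit's REST API for open nova changes that mention a blueprint, then check the blueprint through the Launchpad web API rather than screen scraping. This is not jog0's actual tool; the Launchpad URL pattern and field name are assumptions, and parsing only the change subject is a simplification.]

```python
import json
import re

import requests

GERRIT = 'https://review.openstack.org'
# Assumed Launchpad API pattern for specifications (not verified here).
LP_SPEC = 'https://api.launchpad.net/devel/nova/+spec/{name}'


def gerrit_json(path):
    # Gerrit prefixes JSON responses with ")]}'" to defeat XSSI; strip it.
    text = requests.get(GERRIT + path).text
    return json.loads(text.split('\n', 1)[1])


def open_changes_mentioning_blueprints(limit=25):
    query = 'project:openstack/nova status:open message:blueprint'
    return gerrit_json('/changes/?q=%s&n=%d'
                       % (requests.utils.quote(query), limit))


def blueprint_status(name):
    resp = requests.get(LP_SPEC.format(name=name))
    if resp.status_code != 200:
        return 'unknown blueprint'
    # 'definition_status' is an assumed field name on the spec entry.
    return resp.json().get('definition_status', 'unknown')


for change in open_changes_mentioning_blueprints():
    # Simplified: look for "blueprint <name>" in the subject line only.
    match = re.search(r'blueprint[:\s]+([a-z0-9][a-z0-9-]*)',
                      change['subject'], re.I)
    if match:
        name = match.group(1)
        print('%s -> blueprint %s is %s'
              % (change['_number'], name, blueprint_status(name)))
```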
14:48:31 <russellb> a-gorodnev: which one is that
14:48:38 <a-gorodnev> https://blueprints.launchpad.net/nova/+spec/glance-snapshot-tasks
14:48:49 <a-gorodnev> there's tons of text there
14:48:57 <a-gorodnev> but nothing special
14:49:03 <russellb> yeah that's not going to happen
14:49:19 <a-gorodnev> I understand that it's already too late
14:49:41 <johnthetubaguy> we already delete snapshots that error out, so we are doing better now
14:50:04 <a-gorodnev> of course, the target is out of i3
14:50:46 <russellb> #topic open discussion
14:50:54 <russellb> 10 minutes to go, can talk blueprints or anything else
14:51:45 <mriedem> any major gate blockers?
14:51:58 <mriedem> this week seems quiet except for the one alexpilotti pointed out yesterday
14:52:14 <ndipanov> russellb, https://review.openstack.org/#/c/49395/ it's been sitting there for months, and is a nice fix
14:52:16 <mriedem> which reminds me to check if that quieted another one down
14:52:32 <ndipanov> probably needs a rebase tho
14:52:38 <russellb> mriedem: yeah, we reverted it ...
14:52:59 <russellb> i saw one gate bug related to shelving?  though the timeframe sounded like it was probably caused by the same thing alex reverted yesterday
14:53:11 <mriedem> russellb: that's what i needed to correlate today
14:53:19 <mriedem> because yeah https://review.openstack.org/#/c/71105/
14:53:38 <mriedem> the shelving fail and the nwfilter in use fail show up around the same time
14:53:49 <russellb> k let me know what you find
14:53:54 <mriedem> yar!
14:54:12 <mriedem> ah cool, looks like that revert fixed the nwfilter in use fail
14:54:59 <russellb> cool, makes sense
14:55:12 <mriedem> the patch that won't go away :)
14:55:42 <sahid> :)
14:56:03 <mriedem> trends look the same with the shelving error, so must have been the root cause
14:56:06 <mriedem> jog0 was right!
14:57:02 <garyk> ndipanov: i had some comments on https://review.openstack.org/#/c/49395/
14:57:29 <russellb> alright, well thanks for your time everyone
14:57:33 <russellb> bye!
14:57:35 <russellb> #endmeeting