14:02:27 <russellb> #startmeeting nova
14:02:28 <irenab> can we discuss next steps on openstack-dev for a few mins?
14:02:29 <openstack> Meeting started Thu Jan  9 14:02:27 2014 UTC and is due to finish in 60 minutes.  The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:30 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:33 <openstack> The meeting name has been set to 'nova'
14:02:37 <russellb> Hello everyone!
14:02:39 <hartsocks> whee!
14:02:44 <russellb> and welcome to the first 1400 UTC nova meeting
14:02:45 <llu-laptop> ah, nova is here :)
14:02:45 <johnthetubaguy> hi
14:02:50 <dripton> hi
14:02:59 <garyk> hi
14:03:02 <PaulMurray> hi
14:03:04 <russellb> #topic general
14:03:04 <shanewang> hi
14:03:11 <llu-laptop> hi
14:03:12 <russellb> so, we're alternating meeting times for now
14:03:18 <mriedem> hi
14:03:23 <russellb> we'll see how it goes.  as long as this is attended well, we'll keep it
14:03:24 <shanewang> and meeting channel.
14:03:29 <russellb> yeah and channel, oops
14:03:32 <russellb> i'll update the wiki about that
14:03:57 <russellb> some general things first ...
14:04:02 <russellb> nova meetup next month
14:04:05 <russellb> #link https://wiki.openstack.org/wiki/Nova/IcehouseCycleMeetup
14:04:11 <russellb> be sure to sign up so we know you're coming
14:04:15 <russellb> if you're able
14:04:22 * johnthetubaguy is looking forward to it
14:04:36 <russellb> my thinking on the schedule for that was to ... not really have a strict one
14:04:46 <russellb> perhaps having unconference in the mornings, and open afternoons to just hack on things together
14:04:58 <russellb> that could be working through tough bugs, design, whatever
14:05:06 <johnthetubaguy> that sounds quite good
14:05:09 <hartsocks> I like that.
14:05:16 <russellb> ok great
14:05:19 <russellb> so we'll plan on that for now then
14:05:24 <russellb> which isn't much of a plan
14:05:27 <russellb> more of a plan to not plan too much
14:05:28 <johnthetubaguy> in unconference u can gather friends for the afternoon I guess
14:05:31 <russellb> sure
14:05:40 <johnthetubaguy> maybe an etherpad to collect ideas?
14:05:45 <russellb> that'd be good
14:05:53 <russellb> if anyone creates one before me, just link it from that wiki page
14:05:57 <mriedem> yeah, i could move my notes to etherpad
14:06:05 <russellb> mriedem: perfect
14:06:11 <garyk> how many people are expected?
14:06:25 <russellb> i'll check
14:07:04 <russellb> 20 registered so far
14:07:13 <mriedem> +1 hopefully soon
14:07:15 <russellb> we have space for a lot more than that :)
14:07:34 <russellb> should be fun
14:07:37 <russellb> ok, another thing ....
14:07:44 <russellb> please put january 20th on your schedule
14:07:46 <russellb> gate bug fix day
14:07:54 <garyk> thanks, and one last question: will there be any chance to connect from remote?
14:07:55 <russellb> we really, really, really need to step it up on that front
14:08:10 <russellb> garyk: maybe ... hadn't thought about it.  we could try hangouts or something
14:08:12 <shanewang> garyk:+1
14:08:27 <garyk> a hangout is a great idea
14:08:31 <russellb> we can do that at least for the unconference part
14:08:34 <russellb> the rest may not work well
14:08:46 <johnthetubaguy> well, should be on IRC for the rest I guess
14:08:50 <russellb> true
14:08:54 <hartsocks> Is this Foo Camp (or Bar Camp) style unconference?
14:09:04 <russellb> well ...
14:09:09 <russellb> i don't know about doing pitches and voting
14:09:13 <russellb> other than maybe on etherpad
14:09:25 <hartsocks> okay.
14:09:33 <russellb> i'm not sure we'll have *that* much demand for the time that we have to formally rank stuff
14:09:40 <russellb> just a guess though
14:09:55 <russellb> i may just be a dictator on the schedule based on proposals
14:09:57 <russellb> :-p
14:10:17 <johnthetubaguy> +1 we give you a veto on a session
14:10:25 <russellb> heh
14:10:29 <russellb> ok onward
14:10:31 <russellb> #topic sub-teams
14:10:35 <llu-laptop> is Jan 20th appropriate for a gate bug squash? isn't the 23rd the icehouse-2 date?
14:10:43 <russellb> anyone want to give a report on what a sub-group has been working on?
14:10:46 <russellb> llu-laptop: we'll come back to that in a bit
14:10:56 * johnthetubaguy raises hand a little
14:11:05 <russellb> johnthetubaguy: go for it
14:11:23 <garyk> will you bring denis rodman as a side kick?
14:11:24 <johnthetubaguy> So we are working on getting XenServer images into the node pool
14:11:36 <johnthetubaguy> then getting Zuul testing stuff on XenServer
14:11:47 <russellb> johnthetubaguy: that's great to hear, good progress?
14:11:48 <johnthetubaguy> slow progress, and infra-review welcome
14:12:07 <johnthetubaguy> yeah, its really just not getting reviewed at the moment, but it was over christmas I guess
14:12:07 <russellb> confident that it's still good icehouse timeframe material?
14:12:12 <johnthetubaguy> matel is working on it
14:12:21 <russellb> ok, yeah, holidays put a big delay on everything
14:12:27 <russellb> nova queue is still recovering from holidays, too
14:12:30 <johnthetubaguy> russellb: it should be / better be
14:12:44 <russellb> cool
14:12:56 <johnthetubaguy> at least tempest is running well on XenServer
14:13:00 <russellb> excellent
14:13:02 <johnthetubaguy> Citrix did some good work on that
14:13:17 <johnthetubaguy> so, its more a wire up into Zuul exercise
14:13:31 <johnthetubaguy> and get an image in rackspace that has XenServer + Devstack domU
14:13:38 <johnthetubaguy> which is done, just needs more testing, etc
14:13:59 <russellb> i think that's a cool approach btw, hooking your nodes into infra
14:14:04 <russellb> instead of running your own zuul/jenkins/etc
14:14:30 <johnthetubaguy> yeah, fingers crossed
14:14:54 <russellb> so, i saw you in a PCI discussion before this meeting :)
14:15:01 <johnthetubaguy> yeah, thats true
14:15:07 <russellb> what's going on with that
14:15:12 <johnthetubaguy> I think we agreed on what the user requests could be
14:15:23 <johnthetubaguy> waiting to hear from ijw on confirming that though
14:15:29 <johnthetubaguy> I wrote up some stuff here:
14:15:36 <johnthetubaguy> https://wiki.openstack.org/wiki/Meetings/Passthrough#New_Proposal_for_admin_view
14:15:55 <johnthetubaguy> The final interactions between Nova and Neutron are less clear to be honest
14:16:13 <johnthetubaguy> basically we keep this:
14:16:17 <johnthetubaguy> nova boot --image some_image --flavor flavor_that_has_big_GPU_attached some_name
14:16:20 <johnthetubaguy> we add this:
14:16:20 <garyk> are there people from neutron involved here?
14:16:42 <johnthetubaguy> nova boot --flavor m1.large --image <image_id> --nic net-id=<net-id>,nic-type=<slow | fast | foobar> <vm-name>
14:16:57 <johnthetubaguy> garyk: possibly, but not enough I don't think
14:17:19 <johnthetubaguy> (where slow is a virtual connection, fast is a PCI passthrough, and foobar is some other type of PCI passthrough)
14:17:22 <garyk> i know that there are a lot of discussions that mellanox, intel, cisco etc are having
14:17:25 <johnthetubaguy> I kinda see it like nova volumes
14:17:27 <shanewang> I believe cisco guys are from neutron?
14:17:41 <johnthetubaguy> garyk: those are the folks I was talking to
14:17:55 <johnthetubaguy> just didn't see any non-vendor types
14:17:55 <garyk> johnthetubaguy: thanks.
14:18:13 <garyk> will the nic_type have a 'type of service' or be a set of k,v pairs?
14:18:23 <johnthetubaguy> thats all TBC
14:18:34 <garyk> ok, thanks
14:18:41 <johnthetubaguy> just wanted to make sure the user didn't need to know about macvtap, or whatever it is
14:18:56 <johnthetubaguy> I see it like the user requesting volume types
14:18:59 <russellb> glad to see some stuff written down, and thanks a bunch for helping with this
14:19:00 <garyk> maybe it is something worth looking into placing it in a flavor - for example someone gets gold service
14:19:09 <johnthetubaguy> no worries, seems to be getting somewhere
14:19:21 <garyk> they could have better connectivity, storage etc.
14:19:23 <johnthetubaguy> we have some guys from bull.net working on adding PCI passthrough into XenAPI too
14:19:45 <johnthetubaguy> the reason I don't like flavor is due to this:
14:19:52 <johnthetubaguy> nova boot --flavor m1.large --image <image_id> --nic net-id=<net-id-1> --nic net-id=<net-id-2>,nic-type=fast --nic net-id=<net-id-3>,nic-type=faster <vm-name>
14:19:58 <garyk> that is, we introduce a notion of service levels
14:20:12 <johnthetubaguy> yeah, I flip flop on this
14:20:27 <johnthetubaguy> I like flavor being the main thing that gives you what you charge
14:20:39 <johnthetubaguy> but you can dynamically pick how many nics you want anyways
14:20:51 <johnthetubaguy> lets see how it works out anyways
14:21:03 <johnthetubaguy> one question on all this...
14:21:03 <russellb> ok, any other sub-team reports?
14:21:06 * russellb waits
14:21:14 <hartsocks> I have a short one.
14:21:16 <johnthetubaguy> I wonder about moving PCI alias to host aggregates
14:21:34 <johnthetubaguy> so its more dynamic, rather than in nova.conf
14:21:46 <russellb> yeah, definitely prefer things to be API driven than config driven where we can
14:22:06 <johnthetubaguy> OK, I was thinking the same, just wanted to check
14:22:20 <russellb> sometimes config is a much easier first revision
14:22:22 <russellb> and that's OK
14:22:42 <garyk> it could be: nova boot --flavor m1.large … --service_type gold
14:22:42 <garyk> then the scheduler could take this into account and assign a host that can provide that service
14:22:42 <garyk> yeah, i can give a scheduler update
14:22:42 <garyk> scheduler update if n0ano is not around
14:22:42 <garyk> johnthetubaguy: yeah, host aggregates could be a soln
14:22:44 <russellb> but makes sense to move to an API later
14:22:54 <johnthetubaguy> yeah, they have some stuff on config already
14:23:07 <garyk> i am in favor of the api.
14:23:13 <johnthetubaguy> I am kinda still trying to just agree on what the config should be, so that's not a big deal just yet :)
14:23:22 * russellb nods
14:23:27 <russellb> ok, hartsocks go for it
14:23:28 <johnthetubaguy> cool, sorry
14:23:31 <johnthetubaguy> I am all done...
14:23:33 <russellb> all good :)
14:23:37 <russellb> good stuff
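(For context on the PCI alias discussion above: a rough sketch of the two places this data could live. The pci_alias option and the aggregate CLI commands are real, but the values, the metadata key, and the aggregate name below are illustrative assumptions, not an agreed design.)

    # nova.conf today: a static alias mapping a friendly name to PCI device IDs
    # (vendor/product IDs here are made up for illustration)
    pci_alias={"name": "fastnic", "vendor_id": "8086", "product_id": "10fb"}

    # the idea floated above: carry the same grouping in host aggregate
    # metadata so it can be changed via the API instead of editing config
    nova aggregate-create pci-fastnic-hosts
    nova aggregate-set-metadata pci-fastnic-hosts pci_alias=fastnic
    nova aggregate-add-host pci-fastnic-hosts compute-01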
14:24:05 <hartsocks> Just wanted to say we're looking to get a few bug fixes in for our driver.
14:24:16 <hartsocks> Those affect the CI stability on Minesweeper.
14:24:29 <hartsocks> I've spammed the ML about priority order on these.
14:24:32 <hartsocks> Also
14:24:40 <hartsocks> http://162.209.83.206/logs/58598/7/
14:25:07 <russellb> logs!
14:25:12 <hartsocks> tada
14:25:25 <hartsocks> http://162.209.83.206/logs/
14:25:26 <russellb> excellent
14:25:47 <garyk> There is a etherpad that has all of the vmware I2 issues - https://etherpad.openstack.org/p/vmware-subteam-icehouse-2
14:25:49 <hartsocks> We're not confident enough in the infra's stability yet to do the -1 votes.
14:26:07 <hartsocks> garyk: thank you, I posted those in the ML post earlier too.
14:26:27 <russellb> hartsocks: are you planning to move to testing all nova changes at some point?
14:26:40 <hartsocks> When we can plug the session management issues...
14:26:50 <russellb> ok, so eventually, that's fine
14:27:02 <hartsocks> … and an inventory issue we have when adding ESX hosts.
14:27:13 <johnthetubaguy> is it worth targeting those bugs at I-2, or did you do that already?
14:27:18 <hartsocks> Eventually will be sooner if we can get more reviews?
14:27:22 <russellb> heh
14:27:32 <mriedem> the donkey and the carrot
14:27:34 <russellb> like i said earlier, nova queue seems to be still recovering from the holidays
14:27:35 <hartsocks> I'll double check all the listed bugs today.
14:27:45 <hartsocks> :-)
14:27:51 <hartsocks> I've said my bit.
14:27:51 <garyk> minesweeper is doing the following: nova and neutron
14:27:51 <garyk> here is a list - https://review.openstack.org/#/dashboard/9008
14:28:06 <russellb> you guys do seem to be high on these lists ... http://russellbryant.net/openstack-stats/nova-openreviews.html
14:28:46 <russellb> ok, garyk did you have scheduler notes?
14:28:50 <hartsocks> yeah, kind of aware of that.
14:29:00 <garyk> russellb: yes
14:29:44 <garyk> 1. the gantt tree for the forklift is ready but we are still waiting to do development there
14:29:55 <garyk> the goal will be to cherry pick changes
14:30:31 <russellb> great
14:30:51 <garyk> for all those that are not aware, gantt is the forklift of the nova scheduler code to a separate tree
14:30:51 <garyk> 2. we spoke yesterday about moving the existing scheduler code to support objects (i am posting patches on this) so that the transition to an external scheduler may be easier
14:30:54 <russellb> so priorities: 1) keep in sync with nova, 2) get it running as a replacement for nova-scheduler, with no new features
14:31:18 <russellb> yeah that'd be nice
14:32:02 <garyk> thats about it at the moment.
14:32:02 <garyk> don dugger did great work with gantt and all the others in infra. kudos to them
14:32:11 <johnthetubaguy> or use the object support to help with (1), keeping in sync with nova?
14:32:34 <garyk> yeah. hopefully. we still have instance groups in development - pending API support and a few extra filters in the works
14:32:55 <garyk> the idea of the object support is to remove the db access from the scheduler. this can hopefully leverage the objects that can work with different versions
14:33:20 <russellb> direct use of the db API at least?
14:33:25 <garyk> at the moment the changes we are making is in nova. hopefully these few patches may be approved and then cherry picked
14:33:28 <russellb> not using objects to talk to db through conductor right?
14:34:09 * russellb assumes so
14:34:13 <garyk> yes, to talk to the db via conductor.
14:34:24 <russellb> well, we don't want gantt calling back to nova
14:34:39 <russellb> so that won't work ...
14:34:39 <shanewang> garyk: can you point me to the patchset?
14:34:52 <russellb> besides, i thought we were going with the no-db-scheduler blueprint for that
14:35:00 <russellb> basically not using the db at all anymore
14:35:17 <russellb> and just giving it a cache and sending it data to update the cache over time
14:35:39 <shanewang> that is what boris-42 is doing.
14:35:43 <russellb> right
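(For context on the objects work garyk describes: a minimal sketch, assumed rather than taken from the patches under review, of the kind of change involved: a direct DB-API call in scheduler code is replaced with a versioned-object call, which nova-conductor can service when the process runs without direct database access. fetch_instance is a hypothetical helper name.)

    # minimal sketch, not the actual patches under review
    from nova.objects import instance as instance_obj

    def fetch_instance(context, instance_uuid):
        # before: from nova import db; db.instance_get_by_uuid(context, instance_uuid)
        # after: the object layer does the lookup; in a db-less service the
        # call is remoted to nova-conductor, which versions the data crossing
        # the RPC boundary and eases the later gantt split
        return instance_obj.Instance.get_by_uuid(context, instance_uuid)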
14:36:20 <russellb> ok, onward for now
14:36:21 <russellb> #topic bugs
14:36:38 <russellb> 193 new bugs
14:36:51 <russellb> been staying roughly level lately
14:37:11 <johnthetubaguy> do we have any cool stats of the bugs yet?
14:37:11 <russellb> well, last month or so anyway
14:37:12 <russellb> http://webnumbr.com/untouched-nova-bugs
14:37:12 <garyk> why? can you please clarify
14:37:12 <garyk> those patch sets for the 'no db' support have drivers where the one example is a sql alchemy one (unless i am misunderstanding)
14:37:12 <garyk> that is for the host data.
14:37:12 <garyk> there is the instance data that needs to be updated
14:37:22 <ndipanov> russellb, so we are cool with those patches
14:37:22 <ndipanov> russellb, oookay
14:37:55 <russellb> not sure if my connection is messed up or what, i just caught a bunch of stuff from garyk and ndipanov ... sorry, wasn't trying to ignore you guys
14:38:06 <johnthetubaguy> +1
14:38:24 <russellb> so, also on bugs, https://launchpad.net/nova/+milestone/icehouse-2
14:38:31 <russellb> lots of stuff targeted to icehouse-2
14:38:38 <russellb> the most concerning parts are all of the critical ones
14:38:45 <russellb> we're the biggest gate failure offender right now
14:38:49 <russellb> and pitchforks are coming out
14:39:07 <russellb> so we really need to put time into this
14:39:23 <russellb> january 20 was proposed as a gate bug fix day by sdague
14:39:28 <russellb> which is great, but we shouldn't wait until then
14:39:53 <russellb> i'm trying to clear off my plate so i can start focusing on these bugs
14:40:11 <russellb> anyone interested in working with me and others on these?
14:40:18 <johnthetubaguy> I am traveling next week I am afraid :(
14:40:36 <russellb> i'll allow it :)
14:40:46 <russellb> well if anyone has some time available, please talk to me
14:40:57 <russellb> i'm going to try to start organizing a team around these bugs
14:41:04 <garyk> i am happy to work on bugs
14:41:23 <russellb> these gate bugs are starting to *massively* impact the gate for everyone
14:41:49 <russellb> gate queue got over 100 yesterday, approaching over 24 hours for patches to go through
14:41:51 <russellb> because of so many resets
14:41:56 <russellb> 65 patches deep right now
14:42:29 <russellb> #link http://lists.openstack.org/pipermail/openstack-dev/2014-January/023785.html
14:42:35 <russellb> garyk: great, i'll be in touch
14:42:58 <ndipanov> I'll look into some too next week I hope
14:43:00 <garyk> regarding the bugs - i wanted us to try and formalize the VM diagnostics and then try and log this information when there is a gate failure - https://wiki.openstack.org/wiki/Nova_VM_Diagnostics
14:43:29 <mriedem> well, any tempest failure
14:43:30 <garyk> that is, looking at VM diagnostics may help isolate the cause of issues - at least let us know if it was related to the VM, network or storage
14:43:37 <garyk> yeah, tempest failures
14:43:46 <russellb> garyk: that's a good idea
14:43:59 <russellb> the more data we can collect on failures the better, really
14:44:50 <russellb> ok, next topic
14:44:51 <mriedem> dansmith had some ideas on debugging the nova-network related large ops one yesterday
14:44:55 <mriedem> we should talk to him today about that
14:44:58 <russellb> mriedem: sounds good!
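(For reference on the diagnostics idea garyk raises: the existing per-server diagnostics call already exposes hypervisor-specific counters. The command below is real; the sample fields are illustrative, since the output varies by driver, which is exactly what the wiki proposal wants to standardize.)

    nova diagnostics <server-id>
    # returns driver-specific stats such as cpu time, memory usage and
    # per-nic rx/tx counters; the proposal is to formalize these fields and
    # dump them whenever a tempest/gate run fails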
14:45:01 <russellb> #topic blueprints
14:45:08 <russellb> #link https://launchpad.net/nova/+milestone/icehouse-2
14:45:17 <russellb> if you have a blueprint on that list, please make sure the status is accurate
14:45:25 <russellb> or rather, "Delivery"
14:45:33 <mriedem> instance type -> flavor will move to i3
14:45:38 <russellb> we're going to start deferring "Not Started" blueprints soon to icehouse-3
14:45:43 <russellb> mriedem: ok go ahead and bump it
14:46:07 <mriedem> done
14:46:14 <russellb> so another blueprint issue ... i've greatly appreciated the team effort on blueprint reviews, that has worked well
14:46:23 <russellb> however, our idea for doing nova-core sponsors for blueprints has been a flop
14:46:25 <russellb> nobody is doing it
14:46:29 <russellb> and so virtually everything is Low
14:46:38 <russellb> and that's not really much better than before
14:46:44 <ndipanov> russellb, what if the status is inaccurate?
14:46:52 <russellb> ndipanov: change it :-)
14:47:05 <russellb> if it's yours you should be able to change it, if not, ask me (or someone on nova-drivers)
14:47:18 <mriedem> it takes 2 cores to move from Low, i see some bps with 1 sponsor - maybe move to 1 sponsor?
14:47:21 <ndipanov> russellb, done thanks
14:47:37 <mriedem> 1 +2 doesn't get the patches merged though
14:47:42 <russellb> right
14:47:46 <russellb> that's why we were requiring 2
14:47:54 <johnthetubaguy> yeah, I think 2 is correct
14:48:04 <johnthetubaguy> I sponsor the odd patch in the hope someone else joins me
14:48:04 <russellb> we can either stick with this plan, and try to promote it better
14:48:15 <russellb> or just punt the whole thing and start trying to sort them based on opinion
14:48:29 <russellb> johnthetubaguy: yeah, i think you and dansmith have done some of that, not many others
14:48:46 <garyk> how can we get core people interested in blueprints? there are ~25 bp's that are waiting review and are all low.
14:49:03 <johnthetubaguy> well, I think it reflects the current reality of the review rate though
14:49:09 <garyk> so that means none are sponsored…. i just feel that a very small percentage of these may even get review cycles
14:49:10 <russellb> johnthetubaguy: perhaps
14:49:31 <russellb> if the vast amount of Low is actually a reflection of our review bandwidth, then we have a whole different problem
14:49:36 <russellb> i guess the question is ... how many Low blueprints land
14:49:36 <johnthetubaguy> maybe worth a quick reminder email to nova-core?
14:49:51 <russellb> here's the icehouse-1 list https://launchpad.net/nova/+milestone/icehouse-1
14:50:08 <russellb> johnthetubaguy: yeah, guess we can try that and see
14:50:17 <russellb> because i still really like the theory behind it :)
14:50:36 <russellb> better tooling could help too
14:50:43 <russellb> if looking over these was a more natural part of dev workflow
14:50:46 <russellb> but that's not a short term fix
14:51:03 <johnthetubaguy> yeah, the tools could be a lot better, auto adding people to reviews, etc
14:51:03 <mriedem> i wish the gd whiteboard textarea had timestamps
14:51:13 <mriedem> and audited for who left the comment
14:51:14 <russellb> yes the whiteboard sucks
14:51:18 <garyk> i must be honest it is concerning. ~10 landed and some were just minor issues like configuration options
14:51:20 <johnthetubaguy> +1
14:51:33 <johnthetubaguy> +1 to the whiteboard
14:51:40 <russellb> icehouse-1 isn't the best example ... icehouse-1 snuck up really fast
14:51:45 <russellb> so it just happened to be what could land very fast
14:51:49 <johnthetubaguy> true, it was short
14:51:52 <mriedem> i1 was also summit time
14:51:57 <johnthetubaguy> havana-2?
14:51:57 <russellb> right
14:52:09 <mriedem> h2 didn't have the new model
14:52:13 <russellb> well havana wasn't using this approach, yeah
14:52:15 <garyk> and i-2 had xmas and new years. i guess that also plays a part. but we are a couple of weeks away and there is a meetup in the middle
14:52:22 <russellb> that was prioritized based on my opinion largely :-)
14:52:34 <johnthetubaguy> indeed, just curious on general throughput
14:52:40 <hartsocks> personally, my excuse is we had a baby.
14:52:44 <hartsocks> :-)
14:52:58 <russellb> hartsocks: heh, that'll be me for Juno
14:53:00 <mriedem> hartsocks: plan  9 months ahead next time
14:53:02 <russellb> life happens
14:53:10 <hartsocks> mriedem: lol
14:53:14 <mriedem> sorry honey....
14:53:38 <russellb> johnthetubaguy: unfortunately we can't see the h2 list now ...
14:53:50 <russellb> johnthetubaguy: after the final release everything gets moved to the big havana list
14:53:55 <johnthetubaguy> yeah, didn't work for me either, old reassignment thing
14:53:59 <mriedem> i think to get eyes on blueprints, and sponsors, people need to show up to the meeting and bring them up
14:54:07 <russellb> heh
14:54:11 <russellb> well on that note ...
14:54:11 <mriedem> the new meeting time should help that
14:54:14 <russellb> #topic open discussion
14:54:22 <russellb> open discussion is a good time to ask for eyes on things
14:54:24 <mriedem> meetup etherpad: https://etherpad.openstack.org/p/nova-icehouse-mid-cycle-meetup-items
14:54:32 <russellb> mriedem great, add it to the wiki?
14:54:38 <mriedem> sounds like hyper-v CI is coming along: http://eavesdrop.openstack.org/meetings/hyper_v/2014/hyper_v.2014-01-07-16.01.log.html
14:54:41 <mriedem> russellb: sure
14:54:43 <garyk> If possible could people please comment on https://wiki.openstack.org/wiki/Nova_VM_Diagnostics
14:54:44 <mriedem> i also posted to the ML
14:54:44 <russellb> thanks
14:54:55 <russellb> cool
14:55:16 <mriedem> i'd like eyes on the 3 patches i have in this i2 bp that is moving to i3, these just need another +2: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/flavor-instance-type-dedup,n,z
14:55:22 <mriedem> well, 2 patches, one is approved
14:55:27 <mriedem> they are just refactor
14:55:41 <russellb> wonder how many blueprints aren't approved yet ... *looks*
14:55:52 <russellb> https://blueprints.launchpad.net/nova/icehouse
14:55:59 <garyk> regarding the meetup - would it be possible that a day be devoted to reviews of BPs? that is, there will be a bunch of people together. Why not all get heads down and review BPs
14:56:13 <russellb> not bad, 7 waiting on a blueprint reviewer
14:56:14 <mriedem> garyk: i have bp review on the etherpad
14:56:20 <garyk> not the proposed BP but the code
14:56:32 <garyk> mriedem: thanks!
14:56:34 <mriedem> russellb: also looking for approval on this https://blueprints.launchpad.net/nova/+spec/aggregate-api-policy
14:56:36 <mriedem> code looks ready
14:56:37 <russellb> ah, group code reviews, that could work
14:57:10 <russellb> mriedem: seems fine, approved
14:57:16 <mriedem> thanks
14:57:27 <mriedem> anyone hear any status/progress on docker CI?
14:57:35 <johnthetubaguy> maybe in utah we could get through the backlog a little?
14:57:48 <mriedem> since they are adding a sub-driver for LXC
14:58:00 <russellb> well, docker folks are not adding that
14:58:04 <russellb> that's zul
14:58:14 <russellb> johnthetubaguy: yeah hope so, that'd be cool
14:58:23 <russellb> re: docker CI, i've been in touch with them
14:58:27 <russellb> they are fully aware of the requirement
14:58:33 <russellb> and want to meet it, but haven't seen movement yet
14:58:46 <russellb> sounds like eric w. is taking over maintaining the docker driver in nova
14:58:56 <russellb> don't see him here
14:59:17 <mriedem> ok, i'm hoping to be pleasantly surprised with the hyper-v CI since it's been so quiet
14:59:30 <russellb> yeah, but it was a nice surprise to see something get spun up
14:59:40 <mriedem> yup
14:59:42 <garyk> a change in neutron broke their ci - they are out there :)
14:59:42 <russellb> nice to see this all seem to come together
14:59:57 <russellb> oh right, i saw that email
15:00:00 <russellb> darn windows
15:00:05 <mriedem> garyk: yeah, i saw alex in -neutron the other day
15:00:10 <mriedem> talking about it
15:00:48 <russellb> alright we're out of time
15:00:56 <russellb> #openstack-nova is always open for nova chatter :)
15:00:58 <russellb> thank you everyone!
15:01:04 <russellb> next week we'll be back to 2100 UTC
15:01:08 <garyk> thanks for time! have a good weekend
15:01:08 <russellb> and alternating from there
15:01:13 <russellb> #endmeeting