14:00:34 #startmeeting nova
14:00:35 Meeting started Thu Jun 12 14:00:34 2014 UTC and is due to finish in 60 minutes. The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:38 The meeting name has been set to 'nova'
14:00:50 hello everyone
14:00:53 good morning
14:00:57 nope, nobody
14:00:58 hi
14:01:02 hi
14:01:05 hello
14:01:10 hi
14:01:17 * johnthetubaguy attempts australian accent
14:01:19 afternoon
14:01:31 #topic Juno mid-cycle meet up
14:01:33 hi folks
14:01:33 G'day Blue
14:01:42 so this is mostly informational
14:01:44 johnthetubaguy, just say g'day a lot
14:01:55 #link https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint
14:02:03 n0ano: sure :)
14:02:14 anyone got anything about the meet up
14:02:20 * danpb will sadly miss the mid-cycle meetup due to holiday plans
14:02:21 not seen an etherpad started for topics yet
14:02:47 johnthetubaguy: there isn't an etherpad but the wiki has some things under 'nova specifics'
14:02:48 #link https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-11878128803
14:02:52 is the registration
14:03:19 #link https://etherpad.openstack.org/p/juno-mid-cycle-meetup
14:03:26 well I created one, let's put topics on there I guess
14:03:53 any more for any more?
14:04:11 #topic Juno-1
14:04:24 well again, this is just to say it's released, effectively
14:04:31 #link https://github.com/openstack/nova/releases/tag/2014.2.b1
14:04:36 sorry, back to meetup, there's a pad at https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint
14:04:47 or a link to one anyway
14:05:07 i'll put the etherpad link in the wiki
14:05:10 n0ano: just added that, but yes, feel free to add to it :)
14:05:22 ah, oops
14:05:26 two links
14:05:28 never mind
14:05:35 anyways, any more on juno-1?
14:05:38 hopefully not
14:05:40 nope
14:05:45 #topic Juno-2
14:06:00 OK, so we are unfreezing the approving of blueprints now
14:06:20 johnthetubaguy: Hi
14:06:26 #info it's time to start reviewing blueprints again
14:06:33 I have a pending review https://review.openstack.org/#/c/86606/
14:06:50 baoli: going to push those to the Open discussion if that's OK
14:06:57 sure
14:06:58 just thinking process-wise for now, any issues?
14:07:10 we really didn't get much into juno-1
14:07:19 so we kinda need to fix that in juno-2
14:07:21 johnthetubaguy: i felt like we had 2 weeks last week for juno-1 bp reviews
14:07:35 like june 20-something was the deadline
14:07:38 but might be confused
14:08:11 hmm, OK, some miscommunication there, sorry about that
14:08:18 anyway, gate has been bad for the last week
14:08:19 two weeks ago at this time we had two weeks ish
14:08:22 ha
14:08:30 personally i'm fine with fewer approved blueprints
14:08:30 but the gate has hampered the push a bit, for sure
14:08:38 we should get more people to help with that, if possible
14:08:42 we have ~1500 bugs or something crazy
14:08:45 be nice to get a priority on spec reviews, otherwise we end up with a worse merge crunch for juno-2 which will then flow on to juno-3
14:08:56 mriedem: yeah more bugs was good
14:08:59 cyeoh: agreed
14:09:07 johnthetubaguy: i mean a backlog
14:09:08 I think we should have a blueprint review push this next week
14:09:11 so != good :)
14:09:29 mriedem: oh yeah, I misread lol
14:09:37 so we have a shitload of bugs
14:09:45 we have reverts for gate issues on a seemingly daily basis
14:09:53 I meant we fixed more bugs than blueprints, but not sure we moved the needle too far
14:09:54 poor attendance at the bug day last week
14:10:16 yeah, I had a bad clash with other stuff myself, which was screwy
14:10:18 i feel like our priorities aren't in the right place... so adding spec reviews and features on top of all that is counterproductive
14:10:28 mriedem: do we have lots of reverts because we're simply merging stuff we shouldn't have in the first place? (sorry I haven't been able to keep up with them)
14:10:40 cyeoh: some yes - we'll get to that in the bug discussion
14:10:53 yeah… let's sit on that one for now
14:11:12 anyway we can move on
14:11:14 * dansmith slinks in late
14:11:14 mriedem: so my thinking is we try and get through the blueprint backlog
14:11:33 then we draw a soft line in the sand, to say no more, we have all we can do for juno
14:11:38 let's do more bugs
14:11:56 what is 'getting through the blueprint backlog' though?
14:12:02 the specs that are up for juno?
14:12:20 mriedem: I think it's more decide what blueprints currently up for review we really need
14:12:29 to get the list of high and medium sorted
14:12:40 does that make any sense?
14:12:53 then we kinda shut the floodgates, so we can get some bugs fixed
14:12:56 sort of yeah, i know what you're saying
14:13:41 I am thinking of a fixed-size funnel, and wanting to leave room for bug fixes and structural work
14:14:19 one last thing, any blueprints people don't want to leave till juno-3 that we need in juno
14:14:24 it's worth a think
14:14:42 stuff that will be too risky for juno-3 but we want it
14:14:53 some of the scheduler rework feels like that
14:14:54 Perhaps we need to have a cut-off date for new BPs - and then prioritise the ones that are in nova-specs at that point (both for spec review and implementation)
14:15:13 PhilD: agree with that
14:15:21 PhilD: yeah, I think that is what I am saying, just not very well
14:15:25 like -2 the ones currently proposed with "we're not going to spend time on this, sorry"?
14:15:26 basically if you don't have a spec filed by mid juno-2 you're out
14:15:34 dansmith: +1
14:15:36 the ones that we don't want to prioritize I mean
14:15:41 then prioritize the rest that are available for review
14:15:45 mriedem: +1
14:15:52 otherwise they'll just keep coming
14:15:56 so..
14:15:58 I think it's important that we look at what we have, prioritize, and not just assume anything that is proposed gets the same amount of attention
14:16:05 3rd July sounds good, right?
14:16:17 well we'd want some flexibility in there IMHO
14:16:26 yeah, it's that prioritising we need to do better
14:16:27 because sometimes blueprints are really very trivial
14:16:28 danpb: sure there can be exceptions
14:16:37 danpb: yeah, always exceptions possible
14:16:48 trivial low-risk bps or late-in-the-release changes, like nova/neutron events for example
14:16:54 johnthetubaguy: hi! I'd like to point out the ironic driver spec which has been up for ~3 weeks
14:17:16 We could say that after a date you need to start putting entries into the "K" directory for specs - maybe moving things there rather than -2 would be better too
14:17:17 devananda: we stopped reviewing for two weeks, to get juno-1 sorted, apologies
14:17:27 johnthetubaguy: let's not leave it until the last minute :)
14:17:31 devananda: yours is in that high priority list in your head!
14:17:47 PhilD: that's fair
14:17:54 so… the proposal
14:18:09 July 3rd, no new proposals for juno specs
14:18:23 allow proposals for K, but low priority for reviews
14:18:31 exceptions to be raised in nova-meeting for review
14:18:35 johnthetubaguy: "in my head"?
14:18:35 does that work?
14:19:02 Works for me - and also identify specs currently in juno that should be moved to K?
14:19:08 devananda: yeah, there is an etherpad of priorities from the summit, but it's a bit neglected right now
14:19:24 PhilD: that's a point...
14:19:38 that sounds fine to me, but definitely want an email to openstack-dev so people know about the deadline well in advance
14:19:59 By July 10, nova-drivers to agree high and medium priority items, and stuff that must not be in Juno as it's too late
14:20:09 Feels that it would be better to move a spec to K rather than -2 it if we mean "not yet".
14:20:19 cyeoh: +1
14:20:40 PhilD: well, -2 till you re-propose this for K
14:20:41 yeah, "-2" is a very negative thing to say to a contributor and should be avoided unless we don't want it ever
14:20:56 PhilD: I think -2 means no forever, while move to K is next cycle.
14:21:06 danpb: +1
14:21:08 Is that "nova-drivers propose a list that we confirm in a nova meeting"?
14:21:20 PhilD: yeah, that's fair
14:21:28 or just FFE
14:21:33 seems like the same process
14:21:34 philthefairguy
14:21:39 nova-drivers say they want to defer
14:21:47 if you have a strong case for keeping it in juno, FFE
14:21:49 #action johnthetubaguy to send process for Juno blueprints to dev list for review
14:22:00 then PTL decides i guess - with core team backing for review
14:22:25 we're really just moving the FFE thing way to the left
14:22:41 well, either way, shout up if you feel like we have the wrong end of the stick
14:23:00 let them shout in the ML, let's move on :)
14:23:13 yeah
14:23:20 #topic bugs
14:23:20 Well FF for specs being earlier than FF for implementation makes sense to me
14:24:04 OK, not seen a hot bug list, but we have a few on the meeting agenda from mikal
14:24:04 tjones doesn't appear to be around
14:24:23 http://lists.openstack.org/pipermail/openstack-dev/2014-June/037304.html
14:24:31 lp1328694
14:24:35 that's mine
14:24:38 oh goody
14:24:45 so when we talk about things that merged which shouldn't have, ^
14:24:53 that was a feature that got in via bug report
14:25:00 crap
14:25:12 api and db api changes, cli changes, doc impacts, potential performance impacts, etc
14:25:32 the question now is do we fix the bug where ceilometer is spamming the n-api logs,
14:25:43 or do we revert the nova change and make this go through nova-specs
14:25:45 johnthetubaguy: I don't see a link in scrollback to the priorities etherpad you mention, or in the list of Juno Summit etherpads. would you mind sharing that?
14:26:15 devananda: I will have to dig that up, was a few nova meetings ago
14:26:23 mriedem: how long ago did the api change merge?
14:26:29 mriedem: I am tempted to say revert, still thinking it through
14:26:29 cyeoh: couple weeks
14:26:38 johnthetubaguy: ack, thanks for the pointer. i'll look in the archives
14:26:41 cyeoh: https://review.openstack.org/#/c/81429/
14:27:00 merged on 5/22
14:27:04 cli change was after that
14:27:10 and ceilometer change after that which exposed the bug we have now
14:27:17 introduced by the api change above
14:27:20 revert if it's after juno start?
14:27:33 my bigger concern is the polling
14:27:33 leifz: people deploy off trunk though, it's not always that simple
14:27:47 PhilD: what do you reckon about this specific instance?
14:28:10 it seems like we should revert this and do this properly, but we did just release it in Juno-1
14:28:13 ceilometer is hitting the nova db and api server every time it polls on all servers and all floating ips
14:28:21 so maybe we revert, and re-spin Juno-1
14:28:28 idk about juno-1
14:28:42 johnthetubaguy: what does that mean?
14:28:46 the only consumer is ceilometer
14:28:47 johnthetubaguy: "respin j1"
14:29:03 dansmith: I was just thinking about that too, it doesn't really mean anything :(
14:29:07 :)
14:29:08 remove the tag?
14:29:11 no
14:29:14 not worth that
14:29:15 Sorry - got distracted. What was the question?
14:29:18 johnthetubaguy: okay, I was wondering if you had a delorean I didn't know about :)
14:29:35 delorean won't fit the tuba
14:29:35 dansmith: I have a lot of junk in my loft, but sadly no :(
14:29:40 johnthetubaguy: heh
14:29:43 mriedem: true, true
14:29:47 anyway...
14:29:53 to revert or not to revert
14:29:55 the question is if we revert this https://review.openstack.org/#/c/81429/
14:29:56 yuck
14:30:01 along with the related novaclient change
14:30:08 and then make this go through bp review process
14:30:08 If it's a small window,
14:30:12 and we know it's bad,
14:30:15 I'd rather revert ASAP
14:30:21 yeah, +1
14:30:26 especially if it was a special interface to be consumed by a machine and not a user
14:30:27 +1 to revert
14:30:38 OK, anyone against a revert?
14:30:39 20 day window is small in my book +1
14:30:46 would be good if some of the nova core team will read the related ML and respond with opinions there
14:30:46 mriedem: are you cool to propose the revert?
14:30:48 since that has the defaults
14:30:51 *details
14:31:16 Revert sounds OK to me
14:31:16 #help please respond to http://lists.openstack.org/pipermail/openstack-dev/2014-June/037304.html
14:31:18 johnthetubaguy: i'm more than cool, but want informed consensus first, i.e. responses in the ML so i know people read it
14:31:31 mriedem: yeah, totally
14:31:41 so, lp1323658
14:31:41 i am more or less worried about allowing precedent for ceilometer to change nova apis for its polling needs
14:31:53 http://lists.openstack.org/pipermail/openstack-dev/2014-June/037221.html
14:32:00 and just a reminder to reviewers that any api change even if it's backwards compatible *has* to go through nova-specs
14:32:02 ssh timeout bug, a request for nova help
14:32:19 cyeoh: agreed, maybe I should send an email about that
14:32:42 #action johnthetubaguy to ensure we restate that all api changes need a nova-spec
14:32:53 so about the ssh timeout help
14:33:06 was that neutron and nova-network?
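[Editor's note: the revert agreed above keeps history intact by adding a new commit that undoes the change, which is then pushed up for normal Gerrit review. The sketch below illustrates that mechanism in a throwaway repository; the file names and commit messages are hypothetical stand-ins, not the actual Nova change 81429.]

```python
# Illustrative sketch of the "propose a revert" step: git revert creates a
# new commit undoing an earlier one, rather than rewriting history.
# Repo contents and messages here are hypothetical.
import subprocess
import tempfile


def git(*args, cwd):
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(
        ("git",) + args, cwd=cwd, check=True,
        capture_output=True, text=True,
    ).stdout


repo = tempfile.mkdtemp()
git("init", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
git("config", "user.name", "Dev", cwd=repo)

# First commit: the original API.
with open(f"{repo}/api.py", "w") as f:
    f.write("# original API\n")
git("add", "api.py", cwd=repo)
git("commit", "-m", "Initial API", cwd=repo)

# Second commit: the change we now want to undo.
with open(f"{repo}/api.py", "a") as f:
    f.write("# floating-ip polling extension\n")
git("commit", "-am", "Add floating-ip polling API", cwd=repo)

# Revert the most recent commit; the result is a new commit that would be
# pushed to Gerrit for review like any other change.
git("revert", "--no-edit", "HEAD", cwd=repo)
print(git("log", "-1", "--pretty=%s", cwd=repo).strip())
```

After the revert, the tree matches the first commit again while both the original change and its undo remain in the log, which is why people deploying off trunk get a clean fix rather than a rewritten branch.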
14:33:44 #help need someone to help with lp 1323658 as it would help the gate a lot
14:33:46 the email from kyle was for neutron
14:33:46 Launchpad bug 1323658 in nova "Nova resize/restart results in guest ending up in inconsistent state" [Undecided,New] https://launchpad.net/bugs/1323658
14:34:00 it's a set of neutron scenario tests
14:34:16 sounds like a problem when the instance goes through resize/restart
14:34:20 mriedem: yeah, just with it being nova side, I was curious, but I guess we need to dig
14:34:22 yeah
14:34:30 there was a separate gate blocker on ssh timeouts with nova-network
14:34:43 we reverted a tempest change on monday morning to get past that, but the bug is still open
14:34:45 ah, that's probably why I am confused
14:34:53 we're thinking there is a leak in nova-network somewhere
14:35:06 i'm suspicious of the ec2 3rd party tempest tests and/or the ec2 api
14:35:16 given those don't get much love and run concurrently with the scenario tests that were failing
14:35:24 Could the bug here be that stop/resize do a power off that could lead to data corruption?
14:35:44 PhilD: good sell for your blueprint there, but yeah, you have a point
14:35:45 (I have that fix pending to do a controlled shutdown instead)
14:36:08 PhilD: not sure, could be that networking isn't associated correctly when the instance comes back up
14:36:10 fails to come back up would mean no ssh
14:36:21 they suggest no console output
14:36:25 like the VM failed to boot
14:36:25 If it's reproducible it would be easy for someone to pull in my proposed fix and see if it helps
14:36:54 PhilD: well it's kinda flaky rather than always, at least that was my impression, but it's worth a whirl
14:37:03 oh wait
14:37:10 mriedem: what does leak mean here, leaking floating ips or something?
14:37:17 were we going to skip the graceful shutdown in the gate, because it gets too slow?
14:37:31 dansmith: yeah i think so
14:37:44 * johnthetubaguy is worried about the time, hoped to start sub teams at half past
14:37:55 anyway, the nova-network ssh timeout bug is discussed here http://lists.openstack.org/pipermail/openstack-dev/2014-June/037002.html
14:37:59 yeah
14:38:02 With the new fix it only adds ~5 minutes to the gate
14:38:03 mikal has a patch to add more trace logging to nova-network
14:38:21 mriedem: did you say there was something else you wanted to cover here?
14:38:26 but we seemed to be hitting the problem around 250 instances
14:38:30 johnthetubaguy: no
14:38:46 mriedem: cool, we covered it now
14:38:47 johnthetubaguy: oh wait
14:38:53 sure
14:38:55 from agenda
14:38:56 "spotting bug "themes", like force_config_drive and resize/migrate (mostly due to those not being tested with multi-node hosts in the gate)"
14:38:57 any more on bugs
14:39:02 tjones had that, i added resize/migrate
14:39:13 i've been tagging resize/migrate bugs even though it's not an official tag
14:39:14 yeah
14:39:18 but in the hopes that we avoid duplicates
14:39:29 since we don't have multi-node testing in the gate to test migrations
14:39:40 mriedem: that was my reason to add that idea there, yeah
14:39:51 i'm going to attend the qa meeting today, see what needs to be done for that
14:40:00 so mikal wanted me to raise ideas about how we improve our bug triage
14:40:07 basically a call to put thinking caps on
14:40:12 and send proposals to the ML
14:40:15 I wouldn't mind some answers on https://bugs.launchpad.net/nova/+bug/1327406
14:40:17 Launchpad bug 1327406 in nova "The One And Only network is variously visible" [Undecided,In progress]
14:41:00 mriedem: as you said, I think picking a bug theme, then chasing it down, looking for duplicates, etc, might be a good way to go
14:41:22 attendance at the bug meeting yesterday wasn't great from what i could tell, it was my first time though :)
14:41:25 so next time you pick up a bug, or triage a bug, maybe check for duplicates, and maybe we start tagging some common themes
14:41:27 johnthetubaguy: we should probably put bug discussion as the last agenda item next time as it expands to consume all available time :-)
14:41:41 yeah let's move on
14:41:49 mriedem: yeah, I need to make myself free for that again, it's a slightly awkward time for me
14:41:57 yeah
14:42:06 it's basically tagging, not really triage
14:42:08 #topic Gate status
14:42:18 I think we actually covered this in bugs
14:42:20 "sucky"
14:42:22 let's move on
14:42:24 :P
14:42:25 quite
14:42:26 yeah
14:42:28 it's better
14:42:31 from friday
14:42:39 help please help, ssh bug is one
14:42:50 http://status.openstack.org/elastic-recheck/
14:42:50 #topic Sub team reports
14:43:00 * n0ano gantt
14:43:13 Libvirt: nothing especially notable to report this week
14:43:42 xenapi: same, nothing to report, general CI progress
14:43:48 n0ano: fire away
14:43:51 nova-api: just looking for nova-spec reviews so we can start merging stuff
14:44:10 ironic is finally unblocked after the revert to the HostState.__init__ landed.
14:44:16 cyeoh: ack, they are in the top priority list in my head too
14:44:19 johnthetubaguy: is this the right time to talk about sriov?
14:44:26 biggest thing is we've decided to abandon the no-db BP (for now), given recent improvement it's a premature optimization for the moment
14:44:28 johnthetubaguy: thx!
14:44:43 we were able to get in a lot of bug fixes this week. otherwise, just looking for our spec to be reviewed so we can start planning to merge the driver around the time of the mid-cycle
14:44:49 baoli: yep, I have added you to the queue,
14:44:56 thx!
14:45:03 n0ano: any more on scheduler
14:45:07 So the nova-spec for sriov is pending
14:45:11 I see some progress on the split out, just proving hard
14:45:29 forklift is still WIP (work in progress), hope for some concrete results by juno-2, we'll see
14:45:41 that's about all
14:45:54 we also need core reviewers for this bug: https://review.openstack.org/#/c/81954/
14:46:55 baoli: keep bugging me about your spec, we should get to that in the blueprint push that's coming up
14:47:03 OK, any more sub team reports?
14:47:26 johnthetubaguy: sure.
14:47:36 #topic Open Discussion
14:47:39 I'm working with several folks on getting the nova-docker hypervisor plugin squared away for feature parity and tempest test failures fixed. I'd like to have the plugin be considered for merging back into the nova tree (probably K?). does anyone have thoughts about this?
14:48:15 i have a question about our policy wrt changes which help Python3 portability
14:48:18 eg https://review.openstack.org/#/c/98573/
14:48:42 Joe rejected that saying we shouldn't do python3 port work
14:49:05 so i have a question about nova.virt.baremetal. how much do y'all want to deprecate it?
14:49:06 IMHO if people wish to contribute patches to help Nova code portability to Python3 we should welcome it
14:49:25 danpb: yeah, that response confuses me
14:49:26 so that when our external deps do finally support python3, nova code will mostly be ready
14:49:35 danpb: yeah, I didn't think we were going to block that, just not put it as high priority
14:50:11 #action johnthetubaguy to reach out to jogo about https://review.openstack.org/#/c/98573/
14:50:19 Ok, so we have some agenda items here
14:50:24 alaski: you added some?
14:50:28 anyone know if there's a wiki page mentioning nova's python3 status?
14:50:36 if so i could edit it to make this policy clearer
14:50:46 danpb: I don't remember one, worth proposing I guess
14:50:53 johnthetubaguy: I didn't... maybe something was carried over from last week?
14:50:58 danpb: maybe on the code review page as well?
14:51:09 alaski: oops, probably my bad
14:51:35 danpb: https://wiki.openstack.org/wiki/Python3
14:51:38 cyeoh: I put down your v3 API specs, but I guess we should discuss those in the review
14:51:41 johnthetubaguy: well I can ask about tasks here
14:51:42 or is everyone content leaving nova.virt.baremetal in its current semi-frozen state in the tree for ever?
14:52:02 mriedem: ah, thanks
14:52:04 devananda: i'd like the nova bugs tagged with baremetal to be triaged/moved if that's the case
14:52:20 devananda: if they aren't critical for the nova bm driver, let's move them from nova to ironic
14:52:34 devananda: if we leave it in forever, then someone still has to deal with any security issues that may arise with it
14:52:46 and it's a burden for people when they want to refactor internal code
14:52:54 mriedem: so, it /should/ be deprecated and replaced by ironic, but that won't happen if no one reviews it
14:52:55 yeah, I feel bad about leaving it in tree for ever
14:53:00 johnthetubaguy: yep I'm happy to discuss that in review - v2.1 on v3 is the priority, but also the API policy check one (by alex xu, which is half merged anyway) and tasks (alaski)
14:53:08 danpb: exactly
14:53:18 so IMHO, we should be aiming to replace it with ironic, not let it rot forever
14:53:19 devananda: that shouldn't prevent the ironic team from triaging those bugs though right?
14:53:21 danpb: i don't want to see that. it's unmaintained code. it shouldn't be there
14:53:28 mriedem: totally different code.
14:53:36 mriedem: the baremetal driver is untested and unmaintained
14:53:52 i hope we have a warning in the driver to that effect...
14:53:56 mriedem: with a slight exception -- tripleo was, until recently, using it and, well, filing tons of bugs
14:54:02 mriedem: we don't
14:54:05 johnthetubaguy: also although we've slated microversions for the mid cycle I do think we need to move it along a bit beforehand because tasks is probably a bit dependent on it in practice (if we want to merge it fully in Juno)
14:54:25 mriedem: it's tagged Tier-3 isn't it
14:54:25 cyeoh: yeah, that makes sense
14:54:47 danpb: it never really got deprecated officially yet though
14:54:49 danpb: yeah https://wiki.openstack.org/wiki/HypervisorSupportMatrix#Group_C
14:54:56 it's in limbo
14:55:09 devananda: so just to be clear, what you want is more reviews?
14:55:09 right ^ but there's not an actual Deprecation warning in the logs, is what I mean
14:55:10 cyeoh: yes. at this point I'm still aiming for v3 as is, but am happy to use tasks to make progress on our new direction
14:55:12 any thoughts about the nova-docker driver moving back into the tree provided the tempest tests are passing?
14:55:33 johnthetubaguy: one thing I want, which we talked *at length* about at the summit, is landing the nova.virt.ironic driver
14:55:42 johnthetubaguy: which requires getting reviews on the spec as a first step
14:55:55 johnthetubaguy: and, once that's agreeable and approved, then getting reviews on the driver code
14:55:56 devananda: i'm adding one now
14:56:12 alaski: yep I'm ok with that - either will let us expose it as experimental and get some real world testing on it.
14:56:14 johnthetubaguy: which will be a forklift from the ironic tree of several thousand lines.
14:56:17 mriedem: ++
14:56:26 devananda: agreed, just checking, are you proposing that baremetal lives for ever?
14:56:28 once Ironic lands, we should definitely add a deprecation warning in Baremetal
14:56:33 johnthetubaguy: absolutely not :)
14:56:46 devananda: sorry, totally misread what you put then, good good
14:56:53 johnthetubaguy: was just playing devil's advocate, since that is what would happen if we don't merge nova.virt.ironic (or you guys don't simply kick baremetal out)
14:57:04 three minute warning
14:57:04 devananda: I was hoping that's true :)
14:57:22 dansmith: ack
14:57:30 dansmith: and thank you :)
14:57:36 there's a fairly large team from lots of companies committed to maintaining ironic (and the nova driver for it) at this point
14:57:39 * dansmith has another meeting to get to
14:57:45 and almost no one looking at baremetal
14:58:12 devananda: I couldn't agree more to removing it, but would be good to have the transition plan sorted first
14:58:17 anyways we should review the spec
14:58:21 johnthetubaguy: well, turns out we do have it sorted
14:58:35 devananda: awesomeness
14:58:39 any more for any more?
14:58:49 thoughts about nova-docker?
14:58:50 we are out of agenda items I think
14:58:54 and time
14:58:57 haha
14:59:06 fungi, just do it :-)
14:59:13 funzo: well I think it follows the usual patter, prove stability, then propose the spec with the details
14:59:19 pattern^
14:59:31 johnthetubaguy: ok
14:59:33 thank you
15:00:08 funzo: the bigger discussion is feature compatibility
15:00:12 cool, so we are done
15:00:18 thanks all for attending
15:00:22 #endmeeting
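[Editor's note: the "actual Deprecation warning in the logs" discussed for nova.virt.baremetal above might look like the minimal sketch below, using Python's standard warnings machinery. The class name and message are illustrative, not the actual Nova code.]

```python
# Hypothetical sketch of an in-driver deprecation warning for the
# semi-frozen baremetal driver; names and wording are assumptions.
import warnings


class BareMetalDriver:
    """Stand-in for nova.virt.baremetal's driver class."""

    def __init__(self):
        # stacklevel=2 points the warning at the caller that instantiated
        # the driver, rather than at this __init__ itself.
        warnings.warn(
            "nova.virt.baremetal is deprecated and will be replaced by "
            "the Ironic driver (nova.virt.ironic)",
            DeprecationWarning,
            stacklevel=2,
        )


# Demonstrate that instantiating the driver emits the warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    BareMetalDriver()

print(caught[0].category.__name__)  # → DeprecationWarning
```

In a real deployment the warning would typically be routed into the service log via the logging integration (`logging.captureWarnings(True)`), so operators still tracking the old driver see it at startup.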