16:00:09 <ildikov> #startmeeting cinder-nova-api-changes
16:00:10 <openstack> Meeting started Thu Sep 28 16:00:09 2017 UTC and is due to finish in 60 minutes.  The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:14 <openstack> The meeting name has been set to 'cinder_nova_api_changes'
16:00:16 <ildikov> johnthetubaguy jaypipes e0ne jgriffith hemna mriedem patrickeast smcginnis diablo_rojo xyang1 raj_singh lyarwood jungleboyj stvnoyes
16:00:19 <jgriffith> oh... ildikov is here :)
16:00:28 <jungleboyj> @!
16:00:28 <_pewp_> jungleboyj (ه’́⌣’̀ه )/
16:00:40 <ildikov> couldn't start the meeting earlier :)
16:00:44 <mriedem> o/
16:01:03 <ildikov> my availability is a bit hectic today
16:01:10 <ildikov> who would like to co-chair?
16:01:13 <stvnoyes> o/
16:01:20 <jgriffith> ildikov: happy to help out if needed
16:02:00 <ildikov> topics are: the uses_shared_connection flag, the live_migrate patch if there's anything on it, figuring out how R/W and R/O work in Cinder, and then agreeing on policies and our strategy on them
16:02:06 <ildikov> #chair jgriffith
16:02:07 <openstack> Current chairs: ildikov jgriffith
16:02:18 <smcginnis> o/
16:02:19 <ildikov> jgriffith: floor is yours on the flag :)
16:02:26 <jgriffith> I'll start perhaps this will be short
16:02:39 <jgriffith> I have a Spec and a patch started for the flag....
16:03:03 <jgriffith> https://review.openstack.org/#/c/507670/
16:03:17 <jgriffith> I'm going to rework that a bit but the general idea remains the same
16:03:34 <jungleboyj> jgriffith:  Cool.  I will review.  Hadn't looked as I wasn't sure the status.
16:03:36 <jgriffith> I've since proposed:  https://review.openstack.org/#/c/508209/
16:03:52 <jgriffith> which I'll use as the foundation for updating the shared-targets flag patch
16:04:15 <jgriffith> The gist of this is that we'll have a column that indicates shared_targets are used by a backend (defaults to true)
16:04:48 <jgriffith> and we'll have a column that indicates a backend unique identifier (that's where the UUID column in services comes in)
16:05:21 <jgriffith> Because it's possible to have a single backend serving multiple c-vol services, I'll also add a config option for an admin to set that identifier if needed
16:05:34 <jgriffith> I need to update the spec for that
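(A rough sketch of the schema change jgriffith describes, in the sqlalchemy-migrate style Cinder's db migrations used at the time. The columns follow the discussion — a shared_targets boolean on volumes defaulting to True, and a uuid on services as the backend identifier — but this is not the actual migration from the linked reviews.)

```python
# Sketch only: relies on sqlalchemy-migrate's changeset extensions, as Cinder
# migrations of this era did. Not the patch under review.
from sqlalchemy import Boolean, Column, MetaData, String, Table


def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)

    volumes = Table('volumes', meta, autoload=True)
    # Default True: an unnecessary lock is harmless, a missing lock is not.
    volumes.create_column(Column('shared_targets', Boolean, default=True))

    services = Table('services', meta, autoload=True)
    # Backend identifier that consumers can use as a lock name.
    services.create_column(Column('uuid', String(36), nullable=True))
```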
16:06:01 <jgriffith> I believe this addresses the concerns we had around the attach/detach races with shared targets
16:06:08 <jungleboyj> Oh ... That is why we need that.
16:06:11 <jungleboyj> I get it now.
16:06:17 <jgriffith> well, at least it provides mechanisms to deal with it
16:06:43 <jgriffith> jungleboyj: yeah, so refresher;  The proposal was to use the backend-identifier to create a lock if needed
16:06:53 <jungleboyj> Right.
16:06:57 <jgriffith> kk
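(A minimal sketch of the locking scheme described above, assuming the consumer — e.g. Nova/os-brick — can see the volume's shared_targets flag and the backend's service UUID. The helper and field names are illustrative, not the actual patch.)

```python
from oslo_concurrency import lockutils


def guarded_connect(volume, connector, connect_volume):
    """Connect a volume, serializing on the backend UUID when targets are shared."""
    if volume.get('shared_targets'):
        # One external lock per backend, so concurrent attach/detach requests
        # against the same shared target can't race each other.
        lock_name = 'connect_volume-%s' % volume['service_uuid']
        with lockutils.lock(lock_name, external=True):
            return connect_volume(volume, connector)
    # Non-shared backends skip the lock entirely.
    return connect_volume(volume, connector)
```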
16:07:18 <jgriffith> Does anybody have any questions about that?
16:07:20 <mriedem> "indicates shared_targets are used by a backend (defaults to true)"
16:07:24 <mriedem> why not default to None?
16:07:35 <mriedem> defaulting to saying all backends are shared targets seems bad
16:07:50 <jgriffith> mriedem: so default to True seemed "safest" because it won't *hurt* if a backend isn't actually using them
16:08:01 <jgriffith> The converse however is not the case
16:08:06 <mriedem> what sets the flag?
16:08:18 <jgriffith> the migration sets it initially
16:08:29 <mriedem> sure but at runtime
16:08:37 <jgriffith> and then service startup verifies it's correct... and additionally volume-create sets it
16:08:54 <jgriffith> it utilizes capability reporting from the driver
16:08:55 <mriedem> ok, so existing volumes might have it wrong based on their type/backend,
16:09:09 <mriedem> but worst case is we're unnecessarily locking, yes?
16:09:12 <jgriffith> mriedem: that's correct, but they'll have it wrong in such a way that it won't cause problems
16:09:18 <jgriffith> just slight inefficiencies
16:09:21 <jgriffith> mriedem: correct
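(A tiny sketch of the "safe default" just agreed on, assuming the driver reports a shared_targets capability — the key name is an assumption. Missing or unknown information falls back to True, so the worst case is an unneeded lock rather than a missed one.)

```python
def shared_targets_from_capabilities(capabilities):
    """Derive the flag from driver-reported capabilities, defaulting to True."""
    if not capabilities:
        return True
    return capabilities.get('shared_targets', True)
```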
16:09:31 <mriedem> ok, maybe point that out in the spec to note we've thought about it :)
16:09:36 <mriedem> when we ask in a year
16:09:52 <jgriffith> mriedem: will do, thought I did but that may have just been a comment in the code :)
16:10:09 <jgriffith> I'll make sure it's noted in the spec and also written up in dev docs
16:10:13 <mriedem> i haven't read any of it, so idk
16:10:24 <jgriffith> mriedem: so you're saying there's a chance :)
16:10:30 <jungleboyj> That all makes sense to me.
16:10:36 <jgriffith> ie that I didn't skip something
16:10:45 <mriedem> sure
16:10:57 <jungleboyj> And we are giving the users a way to set what they use to lock if necessary.
16:11:10 <jgriffith> One suggestion was to use a single column: if None, don't use locks; if not None, use what's there
16:11:13 <jgriffith> I didn't like that
16:11:19 <jgriffith> seemed too obtuse
16:11:47 <jungleboyj> ++
16:11:56 <jgriffith> besides, I think the UUID in the services column is something we should've had already anyway
16:12:08 <jgriffith> useful for other things
16:12:20 <jgriffith> So that's about all I have on that front for now
16:13:03 <jgriffith> anybody have a burning topic they want to discuss here?
16:13:09 <jgriffith> Or questions about anything?
16:13:38 <jgriffith> oh... ildikov has an agenda  :)
16:13:49 <jgriffith> #topic live_migrate patch
16:13:56 <jgriffith> stvnoyes: ?
16:13:57 <stvnoyes> fyi - I'll be away starting tomorrow and all next week.
16:14:11 <jgriffith> stvnoyes: Unacceptable!!
16:14:12 <jgriffith> :)
16:14:26 <stvnoyes> i have a review up that mriedem is reviewing now. hopefully I can update it (if needed) before I leave
16:14:58 <stvnoyes> boy, when they make you chair, it really goes to your head ;-)
16:15:11 <jgriffith> stvnoyes: you have no idea!
16:15:12 <jgriffith> :)
16:15:17 <jungleboyj> stvnoyes:  jgriffith  Doesn't believe in vacation
16:15:46 <jgriffith> stvnoyes: mriedem anything else to talk about on live-migrate?
16:15:55 <mriedem> just posted some comments,
16:16:01 <mriedem> i need to run through the test changes in this patch,
16:16:05 <stvnoyes> ok great. i'll take a look
16:16:07 <mriedem> and re-run the live migration patch that hits this code
16:16:17 <stvnoyes> ...after lunch (if that's ok with john)
16:16:28 <jgriffith> stvnoyes: I suppose :)
16:16:33 <johnthetubaguy> I do still hope to review that soon... just sadly hasn't happened yet
16:17:18 <jgriffith> Ok, sounds like we're moving forward still, just need review time
16:17:27 <jgriffith> anything else on live-migrate?
16:17:37 <jgriffith> Going once...
16:17:46 <stvnoyes> good here
16:17:51 <jgriffith> Going twice...
16:17:53 <jgriffith> Cool
16:18:08 <jgriffith> #topic handling R/O, R/W on Cinder side
16:18:20 <jgriffith> yeah... umm, that
16:18:27 <jungleboyj> Yeah ...
16:18:40 <jungleboyj> So, I have talked to smcginnis  and tommylikehu  a little on this.
16:19:12 <jgriffith> jungleboyj: share your findings with us brave ambassador
16:19:21 <jungleboyj> It seems like the default policy we were leaning towards there is that, if possible, the first attachment is R/W and subsequent ones are only allowed to be R/O.
16:19:40 <jungleboyj> Not sure if it is possible to do that, but that, I think, is the direction we want to go.
16:19:53 <johnthetubaguy> that sounds bad for the folks with the cluster management tools
16:20:02 <jungleboyj> If the admin is smarter than us then he can enable multiple R/W attaches.
16:20:09 <jungleboyj> johnthetubaguy:  How so?
16:20:18 <johnthetubaguy> they don't have any hooks to call into cinder to do the enable, they just expect two r/w volumes, as I understand it
16:20:25 <jgriffith> johnthetubaguy: +1
16:20:46 <johnthetubaguy> just like moving an IP between two hosts using vrrp
16:20:48 <johnthetubaguy> or some such
16:20:48 <jgriffith> FWIW I don't like config as the controller for that either
16:20:55 <jungleboyj> Well, wouldn't they just set the policy for their cloud to allow multiple R/W attaches?
16:21:00 <jgriffith> Seems like we could bake it into the attachment request?
16:21:22 <jgriffith> and as far as backends that don't support it, fall back to our types model like we've done in the past
16:21:28 <johnthetubaguy> so the attachment request means we change the Nova API so you say read only vs read/write?
16:21:41 <jgriffith> ie  If user wants a multi-attach volume they need to specify that
16:21:56 <jungleboyj> jgriffith:  So an option with the attach?
16:22:10 <jgriffith> johnthetubaguy: yeah, I'm thinking something like an additional option to attach that lets one request what they want (RW or RO)
16:22:11 <johnthetubaguy> my worry is the concept of "first" seems very racy
16:22:13 <jungleboyj> If it is a second attach and the backend doesn't support it we fail?
16:22:18 <smcginnis> Add additional attachment r/o, add additional attachment r/w, etc?
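(Purely for illustration, a request along the lines smcginnis and jgriffith sketch: the attachments-create body carrying an access mode of 'rw' or 'ro'. The mode field is the hypothetical new option under discussion here, not an existing parameter of the attachments API.)

```python
import requests


def create_attachment(endpoint, token, volume_id, instance_uuid,
                      connector=None, mode='rw'):
    """Ask Cinder for a new attachment in read/write or read-only mode."""
    body = {
        'attachment': {
            'volume_uuid': volume_id,
            'instance_uuid': instance_uuid,
            'connector': connector,
            'mode': mode,  # 'rw' or 'ro' -- the knob being proposed
        }
    }
    resp = requests.post('%s/attachments' % endpoint, json=body,
                         headers={'X-Auth-Token': token})
    resp.raise_for_status()
    return resp.json()['attachment']
```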
16:22:22 <jgriffith> johnthetubaguy: +1
16:22:39 <jungleboyj> Duh, that seems so much more sensible.  :-)
16:22:53 <jungleboyj> Good thing we have smart people to think of such things.  :-)
16:23:06 <johnthetubaguy> right now, all attachments are r/w basically
16:23:11 <smcginnis> jgriffith: That means another update to the attach api. And another MV bump, right?
16:23:17 <jungleboyj> Right.
16:23:19 <jgriffith> jungleboyj: well, it seems simple until we start implementing and find all the things we're not thinking of :)
16:23:28 * jungleboyj laughs
16:23:28 <jgriffith> smcginnis: yes it does
16:23:31 <johnthetubaguy> so we could just explicitly request that for everything from Nova when it creates an attachment
16:23:49 <jgriffith> smcginnis: or I could wrap it up in the existing bump that's in flight
16:23:55 <smcginnis> jgriffith: ++
16:24:02 <johnthetubaguy> so... can't we ignore this fight for the first version?
16:24:31 <jgriffith> johnthetubaguy: typically yes, but MV's make everybody think "oh... how do I do this without having to bump again"
16:24:37 <johnthetubaguy> like v1 everything is r/w like we have today
16:24:55 <johnthetubaguy> jgriffith: they were invented with the exact opposite intent
16:25:11 <mriedem> heh yeah was going to say that
16:25:13 <jgriffith> johnthetubaguy: I know.. but psychology and dev laziness are amazing forces
16:25:20 * johnthetubaguy nods
16:25:27 <johnthetubaguy> people forgot how bad creating extensions was
16:25:35 <johnthetubaguy> ...anyways
16:25:55 <jgriffith> johnthetubaguy: so I'm certainly not opposed to just treating R/O volumes as an additional feature later, independent of multi-attach
16:26:15 <jgriffith> but I think smcginnis and others had some concerns about people corrupting all of their data
16:26:20 <johnthetubaguy> right, it feels like we just leave the R/O stuff for next cycle, focus on multi-attach for v1
16:26:39 <johnthetubaguy> so some users will not want to use multi-attach till we do the whole R/O attachment thingy
16:26:49 <jungleboyj> jgriffith:  Yeah, how big a gun do we want to give people?
16:26:53 <johnthetubaguy> but I am saying we just punt that, for now?
16:26:57 <mriedem> didn't we say at the ptg that people can already corrupt their data today? or was that just specific to boot from volume with the wrong volume?
16:26:58 <jgriffith> I'm EZ/PZ on this and will let others decide what's best.  I can see strong arguments for either approach
16:27:14 <jgriffith> mriedem: yeah, we may have
16:27:28 <jungleboyj> I kind of remember that.
16:27:29 <johnthetubaguy> so people should be able to turn off multi-attach, I think we all agree that right?
16:27:31 <jgriffith> mriedem: BFV did come up as a special case here
16:27:34 <mriedem> johnthetubaguy: yes
16:27:37 <jungleboyj> johnthetubaguy:  Yes
16:27:40 <jgriffith> johnthetubaguy: def
16:27:41 <mriedem> by policy, that could go into cinder today
16:27:51 <johnthetubaguy> so this is really about how many folks will turn on the crazy shotgun?
16:27:58 <johnthetubaguy> mriedem++
16:28:11 <johnthetubaguy> I think we add the crazy but simple thing now
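(A hedged sketch of the "turn it off by policy" idea on the Cinder side: an oslo.policy rule that gates adding a second attachment to a volume. The rule name volume:multiattach is illustrative only, not an existing policy.)

```python
from oslo_config import cfg
from oslo_policy import policy

MULTIATTACH_RULE = policy.RuleDefault(
    name='volume:multiattach',
    check_str='rule:admin_or_owner',
    description='Allow a volume to be attached more than once.')

_enforcer = policy.Enforcer(cfg.CONF)
_enforcer.register_default(MULTIATTACH_RULE)


def second_attachment_allowed(context, volume):
    """Return True if this volume may take another attachment."""
    if not volume['multiattach']:
        return False
    target = {'project_id': volume['project_id']}
    return _enforcer.authorize('volume:multiattach', target,
                               context.to_policy_values(), do_raise=False)
```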
16:28:22 <jgriffith> smcginnis: jungleboyj sound ok to you?
16:28:27 <johnthetubaguy> and we come back, with user feedback, and work out the read only thing in SYD?
16:28:28 <smcginnis> Yep
16:28:41 <jgriffith> honestly I'm almost 100% certain we'll revisit this before it's all said and done anyway :)
16:28:46 * johnthetubaguy notes he is not in SYD, and giggles a little
16:28:47 <stvnoyes> fyi - oracle's RAC (a clustered FS) needs multiattach R/W, but not bfv
16:29:13 <jungleboyj> Ok, I think that sounds like a reasonable approach.
16:29:14 <mriedem> we also said we could put a policy in nova for bfv with multiattach
16:29:14 <johnthetubaguy> so bfv really only makes sense with RO, that's a good point
16:29:27 <smcginnis> jgriffith: We revisit everything before it's all said and done most of the time.
16:29:29 <johnthetubaguy> mriedem: +1 as long as we don't check it 2k times
16:29:41 <jgriffith> stvnoyes: FYI back at you; RAC is one of the number one use cases I've been asked about for this over the years
16:29:44 <jgriffith> just FWIW
16:29:46 <mriedem> johnthetubaguy: this isn't listing instances so we should be fine :)
16:29:53 <johnthetubaguy> mriedem :)
16:30:04 <jgriffith> stvnoyes: which means I'm kinda counting on you for insight on this :)
16:30:09 <smcginnis> Yep, same for me with RAC.
16:30:24 <johnthetubaguy> ++ on RAC (and whatever the MS thing is called)
16:30:50 * jungleboyj sighs about MS .
16:30:57 <smcginnis> ASM, Veritas Storage Manager, MS clustering...
16:31:02 <jgriffith> Ok, so seems like we have a consensus on strategy
16:31:04 <johnthetubaguy> so.. policy in Nova for BFV, policy in Cinder to turn it off
16:31:22 <stvnoyes> i'm no RAC expert (I was on oracle virt product prior to this) but I can certainly find things out about it from other people at the company
16:31:23 <jgriffith> johnthetubaguy: not sure if we'd need that in Nova FWIW
16:31:37 <jgriffith> johnthetubaguy: Cinder has the bootable flag and knows if it's bootable or not
16:31:48 <jgriffith> might be cleaner to leave all of those concerns to the Cinder side
16:31:53 <jgriffith> maybe?
16:31:58 <johnthetubaguy> oh... even better
16:32:05 <mriedem> check the recap notes from the ptg, we waffled on this a bit there
16:32:08 <johnthetubaguy> although I wonder if nova sets that flag
16:32:19 <smcginnis> Wait, we want a policy for multiattach too, right? Not just bfv.
16:32:22 <johnthetubaguy> we said it was needed, overlay tempfs
16:32:37 <johnthetubaguy> smcginnis: I was meaning bfv with multi-attach
16:32:39 <johnthetubaguy> my bad
16:32:51 <jgriffith> ok... so for right now here's my proposal (which I kinda hate that I'm saying it)
16:32:54 <smcginnis> johnthetubaguy: OK, just making sure it's clear.
16:32:59 <johnthetubaguy> smcginnis++
16:33:03 <jgriffith> 1.  Multiattach for data volumes only
16:33:17 <jgriffith> 2.  No consideration of R/O (that's a new and separate thing)
16:33:23 <johnthetubaguy> (data = bootable=false?)
16:33:30 <jgriffith> That means no multi-attach BFV for V1
16:33:39 * johnthetubaguy nods
16:33:41 <mriedem> this is the recap email btw http://lists.openstack.org/pipermail/openstack-dev/2017-September/122238.html
16:33:44 <jgriffith> unless as we go it falls nicely into place
16:34:09 <jungleboyj> jgriffith:  ++
16:34:17 <jgriffith> Nothing's written in stone, except OVs and MVs :)
16:34:28 <smcginnis> jgriffith: +1
16:34:33 <stvnoyes> jgriffith:  +1
16:34:42 <jgriffith> mriedem: johnthetubaguy sound like a sane plan?
16:34:56 <johnthetubaguy> I think so... should check some things
16:35:08 <johnthetubaguy> (2) means all attachments assumed to be r/w attachments?
16:35:12 <mriedem> might want to also float whatever policy change you're going to make by the operators and dev MLs
16:35:23 <jgriffith> mriedem: good idea
16:35:40 <jgriffith> johnthetubaguy: correct on (2)
16:35:43 <johnthetubaguy> I can take this to the scientific WG meeting, or get oneswig too
16:35:55 <johnthetubaguy> jgriffith: cool, I think I am happy then
16:35:59 <jgriffith> johnthetubaguy: that would be great
16:36:13 <jungleboyj> johnthetubaguy:  Yeah, the more feedback we get the better.
16:36:29 <jgriffith> and of course if folks have more feedback or thoughts we can discuss again in next week's (or the following week's...) meeting
16:36:36 <johnthetubaguy> so I know people want the BFV thing, and want the read_only thing, but the idea here is to keep moving
16:36:57 <smcginnis> Definitely a good phase II goal to add those.
16:37:02 <ildikov> the good thing is that we can evolve on those easily
16:37:04 <jgriffith> johnthetubaguy: yeah, I want it too; and although I'm willing to say it's out of scope here I'm not counting it out
16:37:16 <smcginnis> But functionality that covers 80% of the needs is better than 0%.
16:37:22 <johnthetubaguy> I think it should be phase II, like next cycle
16:37:31 <jgriffith> What I mean is I'd like to set a goal and if we exceed it then nothing wrong with that right?
16:37:32 <ildikov> well, easier than some other things anyway
16:37:32 <smcginnis> johnthetubaguy: ++
16:37:34 <johnthetubaguy> I think this is more like 40% coverage, but I could be wrong
16:37:40 <smcginnis> jgriffith: Agree
16:37:42 <johnthetubaguy> but thats way better than 0%
16:37:43 <jungleboyj> Right.  Stick with the simpler cases to start with and then iterate.
16:38:13 <jgriffith> also attach has taught me that trying to get 100% in one shot is painful :)
16:38:19 <johnthetubaguy> so.. I am tempted to see if we can get a spec up for phase II, if someone is willing?
16:38:20 <jgriffith> not that there was a choice there
16:38:25 <ildikov> jgriffith: +1 :)
16:39:06 <jgriffith> johnthetubaguy: I'm willing to draft something for the Cinder side, but honestly knowing how I am with specs and the fact that there are 2 or 3 in front of it I'm not going to promise anything in terms of timelines
16:39:19 <johnthetubaguy> that's fair enough
16:39:39 <ildikov> jgriffith: johnthetubaguy: I can look into it too, I need to clean up mine in Nova based on the above anyway
16:39:40 <johnthetubaguy> I guess all we need for feedback is
16:39:53 <johnthetubaguy> phase II enables BFV multi-attach and RO multi-attach
16:40:20 <johnthetubaguy> phase I is all R/W attachments, and no BFV
16:40:29 <johnthetubaguy> ... so we should dig into the BFV thing
16:40:42 <johnthetubaguy> mriedem I think nova doesn't check the bootable flag right now?
16:41:14 <jungleboyj> johnthetubaguy:  That sounds right.
16:41:24 <mriedem> would have to look
16:41:26 <ildikov> johnthetubaguy: in my old multi-attach patch I check whether BFV tries to boot from the multiattach volume and fail if it does
16:41:50 <johnthetubaguy> ildikov: I think we probably want that version of the policy for now
16:42:11 <jungleboyj> ildikov: ++
16:42:13 <johnthetubaguy> I would love this to be all on the cinder side, but I don't think we respect it enough today to make that possible
16:42:13 <ildikov> johnthetubaguy: I guess it only means code cosmetics by putting the logic elsewhere
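(A minimal sketch of the check ildikov describes: reject boot-from-volume when the root volume is flagged multiattach. The helper and field names illustrate the idea rather than the actual patch.)

```python
from nova import exception


def validate_bfv_volume(volume, is_root_bdm):
    """Refuse to boot an instance from a multiattach volume (phase I rule)."""
    if is_root_bdm and volume.get('multiattach'):
        raise exception.InvalidVolume(
            reason='Booting from a multiattach volume is not supported.')
```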
16:42:14 <jgriffith> johnthetubaguy: mriedem it does but I don't know if it's the context you're looking for...
16:42:36 <johnthetubaguy> jgriffith: it does?
16:42:53 <jgriffith> https://review.openstack.org/#/c/508209/
16:42:55 <jgriffith> oops
16:43:01 <jgriffith> https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1034
16:43:07 <ildikov> johnthetubaguy: well, in the BFV case it's not necessarily on the Cinder side
16:44:26 <jgriffith> obviously some modifications are needed but the basic flow is there at least
16:44:37 <jgriffith> and there is the concept of using and checking said flags
16:44:48 <johnthetubaguy> that's all promising
16:44:52 <mriedem> oh interesting,
16:44:54 <mriedem> however,
16:44:55 <mriedem> NOTE
16:45:02 <mriedem> that's only when bfv from an image-defined bdm
16:45:14 <mriedem> which is,
16:45:19 <mriedem> a volume-backed instance snapshot image
16:45:23 <mriedem> that has bdm info in it
16:45:51 <johnthetubaguy> oh, rather than a bdm passed on the boot command line
16:46:10 <mriedem> oh hold the phone
16:46:15 <mriedem> i think it's just a really poorly named method
16:46:17 <johnthetubaguy> actually, that looks badly named
16:46:24 <jgriffith> mriedem: that's what I thought
16:46:29 <jgriffith> and a confusing comment
16:46:47 <jgriffith> re "poorly named method"
16:46:55 <johnthetubaguy> I assume the volumes nova creates from an image correctly are marked as bootable?
16:46:56 <jgriffith> because if you trace back to who calls it
16:47:29 <johnthetubaguy> (not that it matters in that case)
16:47:43 <mriedem> johnthetubaguy: doesn't say here https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L513
16:47:44 <johnthetubaguy> so seems like cinder policy on bootable=True could work
16:47:55 <jgriffith> anyway... might need some tweaking of things but there's a foundation at least
16:48:25 <mriedem> cinder must default bootable to true for a new volume
16:48:26 <johnthetubaguy> mriedem: yeah, doesn't seem promising there, but it would also never be multi-attach, thankfully
16:48:42 <jgriffith> mriedem: ?
16:49:17 <jgriffith> mriedem: negatory, unless I'm misunderstanding you.  Cinder defaults volumes to bootable=False until it actually lays down an image on them
16:49:45 <mriedem> oh, well we pass an image_id here https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L514
16:49:48 <johnthetubaguy> ah, so passing an image would be the difference
16:49:52 <johnthetubaguy> phew
16:50:10 <mriedem> how does this work then? https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L535
16:50:14 * johnthetubaguy thinks this was all very educational
16:50:16 <mriedem> creates a blank volume
16:50:24 <mriedem> maybe that's not the root bdm
16:50:30 <johnthetubaguy> thats ephemerals
16:50:34 <mriedem> ok
16:50:59 <johnthetubaguy> no I have no clue on the API syntax for that
16:51:40 <johnthetubaguy> anyways, sounds like phase I plan A is intact
16:51:41 <jgriffith> Not sure if this is what you guys are looking for:  https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1079
16:51:51 <jgriffith> sorry, not completely following the dialogue
16:51:54 <jgriffith> but anyway
16:52:10 <johnthetubaguy> that's what I found earlier tracing back, it's all good
16:52:15 <jgriffith> IIRC that's the default BFV path there
16:52:36 <jgriffith> Ok
16:53:02 <jgriffith> Okie Dokie... anything else?
16:53:11 <jgriffith> #topic open-discussion
16:53:33 <jgriffith> ildikov: still with us?  Did we miss anything?
16:53:50 <ildikov> I think we're good
16:53:51 <jgriffith> smcginnis: jungleboyj ?
16:53:54 <johnthetubaguy> spoke to a scientific working group in london
16:53:59 <jungleboyj> I think we are good
16:54:03 <johnthetubaguy> they seemed happy we are making progress
16:54:14 <jungleboyj> That is good.
16:54:25 <ildikov> johnthetubaguy:  +1
16:54:26 <jgriffith> johnthetubaguy: cool!  Our mission is to make the world happy :)
16:54:35 <johnthetubaguy> I think I convinced them its complicated, hence the delay
16:54:39 <jgriffith> Or at least the OpenStack world
16:54:58 <jungleboyj> :-)
16:55:16 <johnthetubaguy> cool, next steps is review the code and specs?
16:55:20 <jgriffith> http://www.allmusic.com/album/give-the-people-what-they-want-mw0000196357
16:55:48 <jungleboyj> johnthetubaguy:  I think so and then start thinking about what we do for Phase II.
16:55:52 <ildikov> And get the flag into Cinder to get the microversion bump
16:55:57 * jgriffith realizes his presentation slides are sooo old he can start recycling them
16:56:05 <ildikov> And merge stuff :)
16:56:24 <jungleboyj> And make people happy
16:56:28 <johnthetubaguy> jgriffith: s/kilo/queens/
16:56:41 <jgriffith> johnthetubaguy: hehe... close enough :)
16:57:03 <jgriffith> johnthetubaguy: most people haven't upgraded from Kilo yet anyway :)
16:58:22 <jgriffith> Ok... let's wrap this up then?
16:58:23 * johnthetubaguy is happy he's working with folks running pike clouds
16:58:37 <smcginnis> :)
16:58:39 <jgriffith> johnthetubaguy: +1
16:58:50 <jgriffith> You're a lucky lucky man!
16:59:01 <jungleboyj> Sounds good.
16:59:15 <jgriffith> alrighty then, until next week.  Thanks all!!!
16:59:19 <jgriffith> #endmeeting