17:00:14 <ildikov> #startmeeting cinder-nova-api-changes
17:00:15 <openstack> Meeting started Thu Mar 23 17:00:14 2017 UTC and is due to finish in 60 minutes.  The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:19 <openstack> The meeting name has been set to 'cinder_nova_api_changes'
17:00:29 <ildikov> DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo gsilvis  xyang1 raj_singh lyarwood breitz
17:00:29 <jungleboyj> o/
17:00:30 <smcginnis> o/
17:00:45 <jungleboyj> smcginnis:  Jinx
17:00:52 <mriedem> o/
17:01:03 <jungleboyj> ildikov: Can you add me to your ping list please?
17:01:19 <ildikov> jungleboyj: sure
17:01:21 <johnthetubaguy> o/
17:01:25 <jungleboyj> Thank you.
17:01:40 <ildikov> jungleboyj: done
17:01:45 <ildikov> hi all
17:01:49 <ildikov> let's start :)
17:02:25 <ildikov> so we have lyarwood's patches out
17:02:50 <ildikov> I saw debates on get_attachment_id
17:03:11 <ildikov> is there a chance to get to a conclusion on what to do with that?
17:03:31 <ildikov> johnthetubaguy: did you have a chance to catch mdbooth?
17:03:55 <mdbooth> He did
17:04:09 <johnthetubaguy> yeah, I think we are on the same page
17:04:17 <johnthetubaguy> it shouldn't block anything
17:04:18 <mriedem> i'm +2 on the bdm.attachment_id changes
17:04:22 <mriedem> or was
17:04:58 <johnthetubaguy> yeah, I am basically +2 as well, but mdbooth suggested waiting to see the first patch that reads the attachment_id, which I am fine with
17:04:58 <ildikov> mdbooth: johnthetubaguy: what's the conclusion?
17:05:23 <johnthetubaguy> basically waiting to get the patch on top of that that uses the attachment id
17:05:31 <johnthetubaguy> which is the detach volume patch, I think
17:05:34 * mdbooth thought they looked fine, but saw no advantage merging them until they're used. Merging early risks a second db migration if we didn't get it right first time.
17:06:18 * mdbooth is uneasy about not having an index on attachment_id, but can't be specific about that without seeing a user.
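(For reference, a minimal sketch of what an index on attachment_id could look like as a nova-style sqlalchemy-migrate migration, if one turned out to be needed; the index name and placement are assumptions, not anything in the patches under review.)

    # Illustrative only: add an index on block_device_mapping.attachment_id,
    # assuming the column already exists. Only worth doing once a query
    # actually filters on attachment_id directly.
    from sqlalchemy import Index, MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        bdm = Table('block_device_mapping', meta, autoload=True)
        index = Index('block_device_mapping_attachment_id_idx',
                      bdm.c.attachment_id)
        index.create(migrate_engine)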
17:06:41 <ildikov> is it a get_by_attachment_id that the detach patch uses? I think it's not
17:07:25 <ildikov> it seems like we're getting ourselves into a catch 22 with two cores on the +2 side
17:07:37 <mriedem> we need jgriffith's detach patch restored and rebased
17:07:49 <jgriffith> sorry... late
17:07:52 <mriedem> https://review.openstack.org/#/c/438750/
17:08:02 <mriedem> https://review.openstack.org/#/c/438750/2/nova/compute/manager.py
17:08:12 <mriedem> that checks if the bdm.attachment_id is set and makes the detach call decision based on that
17:08:21 <mriedem> we list the BDMs by instance, not attachment id
17:08:33 <mriedem> but we check the attachment_id to see if we detach with the new api or old api
17:08:37 <johnthetubaguy> mriedem: +1
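(A minimal sketch of the branch mriedem describes, assuming a bdm object that carries attachment_id and a thin Cinder client wrapper; the names are illustrative, not the code in the review linked above.)

    # Sketch: BDMs are listed by instance, then each detach branches on
    # whether attachment_id was set at attach time.
    def detach_volume(volume_api, context, bdm, connector):
        if bdm.attachment_id is None:
            # Attached with the old flow: legacy terminate_connection/detach.
            volume_api.terminate_connection(context, bdm.volume_id, connector)
            volume_api.detach(context, bdm.volume_id)
        else:
            # Attached with the new Cinder v3 flow: deleting the attachment
            # does the cleanup on the Cinder side.
            volume_api.attachment_delete(context, bdm.attachment_id)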
17:09:00 <jgriffith> mriedem that's wrong :)
17:09:34 <ildikov> mriedem: but that will not get the get_by_attachment_id usage for us
17:09:48 <mriedem> ildikov: we don't need get_by_attachment_id
17:09:54 <mriedem> jgriffith: why is that wrong?
17:10:03 <jgriffith> mriedem never mind
17:10:20 <ildikov> mriedem: I thought that's why we hold the attachment_id patches
17:10:21 <jgriffith> mriedem I don't think I'm following your statement... going through it again
17:10:47 <mriedem> jgriffith: so are you planning to restore and rebase that on top of https://review.openstack.org/#/c/437665/
17:10:57 <mriedem> ildikov: that's why mdbooth wanted to hold them,
17:10:59 <jgriffith> mriedem when it looks like it's going to land yes
17:11:00 <mriedem> which i don't agree with
17:11:12 <jgriffith> mriedem wait... don't agree with what?
17:11:14 <mriedem> jgriffith: i'm ready to go with those bdm.attachment_id changes
17:11:20 <mriedem> sorry,
17:11:28 <mriedem> i don't agree that we need to hold the bdm.attachment_id changes
17:11:29 <ildikov> mriedem: I'm on that page too
17:11:31 <mriedem> they are ready to ship
17:11:39 <mriedem> https://review.openstack.org/#/c/437665/ and below
17:11:40 <ildikov> johnthetubaguy: ^^?
17:11:45 <jgriffith> mriedem then yes, I'll do that today, I would LOVE to do it after it merges if possible :)
17:11:50 <mdbooth> mriedem: I don't necessarily disagree, btw. They're just not being used yet, so I thought we might as well merge them later.
17:11:59 <jgriffith> so in an hour or so :)
17:12:03 <mriedem> mdbooth: they aren't being used b/c jgriffith is waiting for us to merge them
17:12:08 <mriedem> hence the catch-22
17:12:15 <mriedem> so yeah we agree, let's move forward
17:12:23 <mdbooth> mriedem: Does jgriffith have a nova patch which uses them?
17:12:29 <mdbooth> These aren't exposed externally yet.
17:12:35 <jgriffith> mdbooth  I did yes
17:12:37 <ildikov> mriedem: thanks for confirming :)
17:12:44 <mriedem> mdbooth: https://review.openstack.org/#/c/438750/ will yes
17:12:50 <johnthetubaguy> I could go either way, merge now, or rebase old patch on top. Doing neither I can't subscribe to.
17:13:32 <ildikov> johnthetubaguy: mriedem: let's ship it
17:13:43 <ildikov> johnthetubaguy: mriedem: one item done, move to the next one
17:13:46 * mdbooth is confused. That patch can't use attachment_id because it passed tests.
17:13:58 <mdbooth> And it doesn't have attachment_id
17:14:09 <mriedem> mdbooth: jgriffith and lyarwood originally wrote duplicate changes
17:14:12 <mriedem> at the PTG
17:14:14 <mriedem> or shortly after
17:14:26 <mriedem> jgriffith's patch ^ relied on his version of bdm.attachment_uuid
17:14:28 <ildikov> mdbooth: to be honest the detach flow is currently built on the assumption that we don't have attachment_id set and therefore it uses the old detach flow
17:14:32 <mriedem> we dropped those and went with lyarwood's
17:14:50 <mdbooth> Ok :)
17:14:53 <mriedem> so jgriffith will have to rebase and change attachment_uuid to attachment_id in his change
17:15:37 <mriedem> what's next?
17:15:49 <ildikov> mriedem: so we ship or hold now?
17:16:00 <mriedem> we merge lyarwood's bdm.attachment_id changes
17:16:10 <ildikov> mriedem: awesome, thanks
17:16:16 <hemna> sweet
17:16:30 <mriedem> i plan on going over johnthetubaguy's spec again this afternoon but honestly,
17:16:37 <mriedem> at this point, barring any calamity in there,
17:16:39 <mriedem> i'm going to just approve it
17:16:48 <jungleboyj> mriedem: +2
17:16:48 <ildikov> mriedem: I would just ship that too
17:17:19 <ildikov> mriedem: looks sane to me and we cannot figure everything out on paper anyhow
17:17:34 <johnthetubaguy> we all agree on the general direction now, so yeah, lets steam ahead and deal with the icebergs as they come
17:17:44 <ildikov> johnthetubaguy: +1
17:18:14 <ildikov> we need to get the cinderclient version discovery patch back to life
17:18:21 * ildikov looks at jgriffith :)
17:18:52 <jgriffith> ildikov johnthetubaguy so my understanding is you guys wanted that baked in to the V3 detach patch
17:19:07 <jgriffith> so I plan on that being part of the refactor against lyarwood 's changes
17:19:26 <mriedem> i'm not sure we need that,
17:19:31 <mriedem> if the bdm has attachment_id set,
17:19:36 <mriedem> it means we attached with new enough cinder
17:19:57 <mriedem> so on detach, if we have bdm.attachment_id, i think we assume cinder v3 is there and just call the delete_attachment method or whatever
17:19:59 <johnthetubaguy> so I am almost +2 on that refactor patch, FWIW, there is just a tiny nit left
17:20:00 <ildikov> mriedem: we need that some time before implementing attach
17:20:04 <mriedem> the version checking happens in the attach method
17:20:09 <mriedem> s/method/patch/
17:20:12 <jgriffith> mriedem that works
17:20:19 <mriedem> ildikov:
17:20:23 <mriedem> yes it comes before the attach patch
17:20:25 <mriedem> which is the end
17:20:42 <ildikov> mriedem: ok, then we're kind of on the same page I think
17:20:43 <johnthetubaguy> yeah, attach is version negotiation time, everyone else just checks attachment_id to see if they have to use the new version or not
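(A sketch of that split, purely illustrative: version negotiation happens once in the attach path, and everything after that only looks at bdm.attachment_id. The helper and the 3.27 microversion are assumptions, not settled API.)

    # Assumed: get_max_microversion() stands in for Cinder's standard version
    # discovery and returns a (major, minor) tuple; 3.27 is assumed to be the
    # microversion that carries the new attachment API.
    NEW_ATTACH_MIN_VERSION = (3, 27)

    def use_new_attach_flow(get_max_microversion):
        """Decide once, at attach time, whether to use the new attachment API."""
        return get_max_microversion() >= NEW_ATTACH_MIN_VERSION

    # If the new flow was used, bdm.attachment_id gets set, and later operations
    # (detach, swap, migrate) just check that field instead of re-negotiating.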
17:20:47 <jgriffith> and just hope something with the stupid migrate or HA stuff didn't happen :)
17:21:11 <mriedem> well, i think we'd get a 400 back if the microversion isn't accepted
17:21:16 <mriedem> and we fail the detach
17:21:18 <jgriffith> mriedem correct
17:21:28 <mriedem> we can make the detach code smarter wrt version handling later if needed
17:21:28 <johnthetubaguy> thats all legit, seems fine
17:21:32 <jgriffith> mriedem and I can revert to the old call in the cinder method itself
17:21:32 <mriedem> i just wouldn't worry about that right now
17:21:39 <johnthetubaguy> no revert, just fail hard
17:21:44 <mriedem> right hard fail
17:21:53 <jgriffith> ummm... ok, you's the bosses
17:22:07 <mriedem> we want to eventually drop the old calls,
17:22:11 <johnthetubaguy> so we only get in that mess if you downgraded Cinder after creating the attachment
17:22:12 <mriedem> so once we attach new path, we detach new path
17:22:13 <mriedem> or fail
17:22:22 <mriedem> yeah if you downgrade cinder, well,
17:22:26 <mriedem> that's on you
17:22:30 <johnthetubaguy> right, dragons
17:22:44 <ildikov> johnthetubaguy: if someone does that then I would guess they might have other issues too anyhow...
17:23:05 <johnthetubaguy> ildikov: right, its just not... allowed
17:23:15 <jungleboyj> johnthetubaguy:  I think that is a very unlikely edge case.
17:23:28 <jungleboyj> Something has really gone wrong in that case.
17:23:37 <mriedem> ok what's next?
17:23:37 <ildikov> certainly not something we would want to support or encourage people to do
17:23:38 <johnthetubaguy> ... so we fail hard in detach, if attachment_id is present, but the microversion request goes bad, simples
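(Sketch of the agreed hard-fail behaviour: when attachment_id is present but the new call is rejected, for example after a Cinder downgrade, the detach fails rather than falling back to the old call. The wrapper name is illustrative.)

    import logging

    LOG = logging.getLogger(__name__)

    def detach_new_flow(volume_api, context, bdm):
        # attachment_id being set means the volume was attached with the new
        # flow, so only the new call is attempted.
        try:
            volume_api.attachment_delete(context, bdm.attachment_id)
        except Exception:
            # A 400 here most likely means Cinder no longer accepts the
            # required microversion (e.g. it was downgraded after the
            # attachment was created): no fallback to the old os-detach
            # path, just fail the detach.
            LOG.exception('attachment_delete failed for attachment %s',
                          bdm.attachment_id)
            raise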
17:23:57 <mriedem> i think the detach patch is the goal for next week yes?
17:24:08 <ildikov> I think we need to get detach working
17:24:12 <mriedem> is there anything else?
17:24:20 <johnthetubaguy> the refactor is the question
17:24:39 <mriedem> this? https://review.openstack.org/#/c/439520/
17:24:51 <johnthetubaguy> we could try to merge lyarwood's refactor first, I think it's crazy close to being ready after mdbooth's hard work reviewing it
17:25:02 <johnthetubaguy> thats the one
17:25:19 <ildikov> johnthetubaguy: ah, right, the detach refactor
17:25:36 <johnthetubaguy> I think it will make the detach patch much simpler if we get the refactor in first
17:25:50 <johnthetubaguy> mdbooth: whats your take on that ^
17:25:57 <mriedem> ok i haven't looked at the refactor at all yet
17:26:09 * mdbooth reads
17:26:15 <johnthetubaguy> to be clear, its just the above patch, and not the follow on patch
17:26:22 <mriedem> jgriffith's detach patch is pretty simple, it's just a conditional on the bdm.attachment_id being set or not
17:26:47 <ildikov> johnthetubaguy: I think lyarwood is out of office this week, so if that's the only thing missing we can add that and get this one done too
17:27:31 <johnthetubaguy> mriedem: hmm, I should double check why I thought it would be a problem
17:27:42 <mdbooth> I haven't had a chance to look at lyarwood's updated refactor in detail yet (been buried downstream for a few days), but when I glanced at it it looked like a very simple code motion which brings the code in line with attach.
17:28:40 <mdbooth> It looked low risk and obvious (barring bugs). Don't think I've seen the other patches.
17:28:51 <johnthetubaguy> so, I remember now...
17:28:53 <johnthetubaguy> https://review.openstack.org/#/c/438750/2/nova/compute/manager.py
17:28:58 <johnthetubaguy> we have many if statements
17:29:21 <johnthetubaguy> I think after lyarwood's patch we have many less, or at least, they are in a better place
17:29:57 <johnthetubaguy> if thats not the case, then I think the order doesn't matter
17:30:11 <mdbooth> johnthetubaguy: Right. All the mess is in one place.
17:30:19 <mdbooth> Centralised mess > distributed mess
17:30:28 <johnthetubaguy> yeah, mess outside the compute manager
17:30:33 <mriedem> johnthetubaguy: yeah that's true
17:30:51 <mriedem> ok so i'm cool with that
17:30:56 <mriedem> if you guys think the refactor is close
17:31:31 <ildikov> +1
17:32:46 <johnthetubaguy> so in summary, I guess that means jgriffith to rebase on top of https://review.openstack.org/#/c/439520
17:32:47 <johnthetubaguy> ?
17:32:54 <mriedem> johnthetubaguy: mdbooth: are one of you going to address the remaining nit?
17:33:03 <mriedem> it's just defining the method in the base class right?
17:33:16 <mriedem> we should be using ABCs there...but that's another story
17:33:19 <mdbooth> mriedem: It's open in my browser. I'll probably look in the morning.
17:33:26 <mriedem> mdbooth: ok
17:33:33 <mriedem> otherwise i'll address johnthetubaguy's comment
17:33:37 <mriedem> and he can +2 if he's happy
17:33:47 <johnthetubaguy> I was going to do that now
17:33:53 <mriedem> ok
17:34:07 <mriedem> yes so let's rebase https://review.openstack.org/#/c/438750/ on top of https://review.openstack.org/#/c/439520
17:34:12 <mriedem> and the goal for next week is detach being done
17:34:36 <johnthetubaguy> #info let's rebase https://review.openstack.org/#/c/438750/ on top of https://review.openstack.org/#/c/439520 and the goal for next week is detach being done
17:34:39 <jgriffith> mriedem johnthetubaguy thanks, that's the one I pointed out this morning when I tried to hijack your other meeting :)
17:35:06 <ildikov> mriedem: johnthetubaguy: +1, thanks
17:36:19 <ildikov> ok, it seems we're done for today
17:36:29 <ildikov> anything else from anyone?
17:36:31 <mriedem> ok, so let's be done with multiattach talk for this meeting.
17:36:32 <mriedem> yes
17:36:39 <mriedem> at the ptg, we talked about a cinder imagebackend in nova
17:36:45 <mriedem> mdbooth is interested in that as well,
17:36:49 <mriedem> but we need a spec,
17:36:57 <mriedem> so i'm wondering if mdbooth or jgriffith would be writing a spec
17:37:19 <jgriffith> mriedem so I have some grievances with that approach :)
17:37:34 <smcginnis> Time for the airing of grievances.
17:37:34 <mriedem> air them out
17:37:44 <jgriffith> bring out the aluminum pole
17:37:46 <mriedem> i'll get the festivus pole
17:37:52 <smcginnis> ;)
17:38:04 <jgriffith> Here's the thing... I know you all hate flavor extra-specs, but lord it's simple
17:38:04 <mdbooth> jgriffith: I missed the PTG unfortunately. Do you want to have a detailed chat tomorrow?
17:38:26 <jgriffith> more importantly, there's a side effect of the cinder image-backend that I didn't think of
17:38:29 <jgriffith> it's all or nothing
17:38:46 <jgriffith> so you do that and you lose the ability to do ephemerals if you wanted to
17:38:51 <mriedem> i thought your whole use case was "i have cinder for storage, i want to use it for everything"
17:38:59 <mriedem> "i don't have any local storage on my computes"
17:39:00 <jgriffith> which meh... ok, I could work around that
17:39:09 <jgriffith> That's one use case yes
17:39:10 <johnthetubaguy> is this per compute node? I think thats OK?
17:39:19 <mriedem> yes it's per compute node
17:39:22 <mriedem> so i think that's fine
17:39:33 <jgriffith> so that's my question :)
17:39:43 <jgriffith> that addresses my customers that have ceph...
17:39:47 <mriedem> if you really want ephemeral, then you have hosts with ephemeral
17:39:52 <johnthetubaguy> thats totally fine right, some flavors go to a subset of the boxes
17:39:53 <mriedem> and put them in aggregates
17:39:56 <mriedem> right
17:40:00 <jgriffith> BUT, how do I schedule... because it seems you don't like scheduler hints either ;)
17:40:10 <mriedem> you have your cheap flavors go to the ephemeral aggregates
17:40:18 <mriedem> and your "enterprise" flavors go to the cinder-backend hosts
17:40:27 <jgriffith> mriedem so we use flavors?
17:40:30 <mriedem> jgriffith: the flavors tie to the aggregates
17:40:30 <mriedem> yes
17:40:34 <jgriffith> isn't that how I started this?
17:40:37 <mriedem> no
17:40:41 <johnthetubaguy> jgriffith: the approach we are talking about uses the normal APIs, no hints required
17:40:49 <mriedem> you started by baking the bdm into extra specs which we're not doing
17:40:57 <mriedem> right, this is all normal stuff
17:41:04 <jgriffith> mriedem flavor extra-specs TBC
17:41:04 <mdbooth> Incidentally, I've been trying to get the virtuozzo folks to implement this for their stuff: I'd like imagebackend to be stored in metadata attached to the bdm
17:41:24 <mdbooth> To me, this is the most sane way for this stuff to co-exist
17:41:26 <mriedem> jgriffith: but standard extra specs that are tied to aggregate metadata
17:41:45 <mdbooth> Having it hard-coded in nova.conf is extremely limiting
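(Illustration of the flavor/aggregate tie-in mriedem describes, using the existing AggregateInstanceExtraSpecsFilter convention; the storage key and values below are examples only.)

    # "Enterprise" flavors carry a scoped extra spec...
    flavor_extra_specs = {'aggregate_instance_extra_specs:storage': 'cinder'}

    # ...and host aggregates carry matching metadata.
    aggregate_metadata = {
        'cinder-backed-hosts': {'storage': 'cinder'},
        'ephemeral-hosts': {'storage': 'local'},
    }

    def host_passes(host_aggregate, extra_specs, metadata):
        # Simplified version of what the scheduler filter checks: every scoped
        # extra spec on the flavor must match the host's aggregate metadata.
        scope = 'aggregate_instance_extra_specs:'
        wanted = {k[len(scope):]: v for k, v in extra_specs.items()
                  if k.startswith(scope)}
        return all(metadata[host_aggregate].get(k) == v
                   for k, v in wanted.items())

    # A host in 'cinder-backed-hosts' passes for this flavor; one in
    # 'ephemeral-hosts' does not.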
17:41:57 <mriedem> we also want the cinder imagebackend for other storage drivers, like scaleio
17:42:03 <mriedem> and i'm sure veritas at some point
17:42:54 <johnthetubaguy> mdbooth: so there is a thing where we should support multiple ephemeral backends on a single box at some point, and we totally need that, I just don't think thats this (I could be totally wrong)
17:43:01 <mdbooth> I believe there are going to be mixed use cases. For example, HPC where you want some of your storage to be local and really fast, even at the cost of persistence.
17:43:29 <mriedem> we talked about multiple ephemeral backends at the ptg didn't we?
17:43:44 <mriedem> the vz guys were talking about something like that, but i might be misremembering the use case
17:43:55 <mdbooth> Well the cinder imagebackend would be all or nothing as it stands
17:44:07 <mdbooth> Because it's always all or nothing right now
17:44:09 <johnthetubaguy> so the HPC folks care about resource usage being optimal, so they would segregate the workloads that want SSD, and just use volumes for the extra storage, I think...
17:44:23 <mriedem> right,
17:44:27 <johnthetubaguy> but thats probably a rat hole we might want to dig out of
17:44:31 <mdbooth> mriedem: Yes, the vz guys want to add a new hard-coded knob to allow multiple backends
17:44:34 <mriedem> i think if you have different classes of storage needs you have aggregates and flavors
17:44:38 <mdbooth> I think that's just adding mess to mess
17:44:39 <johnthetubaguy> jgriffith: do you still support local volumes?
17:44:48 <mriedem> johnthetubaguy: they don't
17:44:57 <mriedem> that came up again in the ML recently
17:44:59 <johnthetubaguy> ah, that got taken out
17:45:00 <mriedem> i can dig up the link
17:45:00 <jgriffith> johnthetubaguy I don't know what you mean
17:45:07 <mriedem> LocalBlockDevice
17:45:10 <mriedem> or whatever cinder called it
17:45:19 <johnthetubaguy> yeah, like a qcow2 on local filesystem
17:45:19 <jgriffith> Oh!
17:45:23 <jgriffith> No, we removed that
17:45:42 <jgriffith> I'm happy to entertain a better way to do it, but not for Pike
17:45:48 <smcginnis> Technically, it's still there and deprecated and will be removed in Q.
17:45:51 <johnthetubaguy> so... I kinda see a world where all storage is in Cinder only, once we add that back in (properly)
17:45:52 <jgriffith> when I say entertain, I mean implement
17:45:59 <jgriffith> I already know how to do it
17:46:06 <johnthetubaguy> but the cinder ephemeral backend seems a good starting place
17:46:15 <jgriffith> johnthetubaguy I agree with that
17:46:18 <smcginnis> johnthetubaguy: +1
17:46:20 <mdbooth> +1
17:46:27 <jgriffith> I'm just concerned WRT if we'll get there or not
17:46:49 <johnthetubaguy> so if we don't write the spec, we're certain not to get there
17:46:53 <mriedem> http://lists.openstack.org/pipermail/openstack-dev/2016-September/104479.html is the thread
17:46:59 <jgriffith> Looking at the *tricks* some backends have used to leverage Nova instances is disturbing
17:47:09 <jgriffith> johnthetubaguy touché
17:47:27 <johnthetubaguy> jgriffith: I had to say that, the urge was too strong :)
17:47:38 <jgriffith> I was hoping to make one last ditch effort at a plea for flavor extra-specs... but I see that ship has certainly sailed and I fell off the pier
17:47:39 <johnthetubaguy> mriedem: thanks thats the one
17:47:40 <mriedem> http://lists.openstack.org/pipermail/openstack-dev/2017-February/112847.html was picking it back up during the PTG
17:47:58 <mriedem> i've also asked jaypipes to chime in on that thread,
17:47:59 <jgriffith> johnthetubaguy I would've thought less of you if you hadn't said it ;)
17:48:03 <mriedem> because it came up in the nova channel again yesterday
17:48:08 <mriedem> and the answer was traits and placement,
17:48:13 <mriedem> but i'm not sure how that works yet in that scenario
17:49:14 <mriedem> i need the answer to this http://lists.openstack.org/pipermail/openstack-dev/2017-February/112903.html
17:49:20 <johnthetubaguy> mriedem: yeah, we get into ensemble scheduler that was proposed at one point, but lets not go there, but worth finding the notes on that
17:50:06 <mriedem> i think the answer during the PTG was,
17:50:12 <mriedem> you have a resource provider aggregate,
17:50:20 <mriedem> and the compute and storage pool are in the aggregate,
17:50:32 <johnthetubaguy> mriedem: I think we need the aggregate to have both resource providers in them
17:50:33 <johnthetubaguy> and a resource type that matches the volume provider
17:50:35 <mriedem> and there is a 'distance' attribute on the aggregate denoting the distance between providers,
17:50:49 <mriedem> and if you want compute and storage close, then you request the server with distance=0
17:50:59 <johnthetubaguy> so there is a spec about mapping out the DC "nearness" in aggregates
17:51:01 <jgriffith> good lord
17:51:06 <johnthetubaguy> we can't escape that one forever
17:51:15 <jgriffith> Look.. there's another idea you might want to consider
17:51:19 <mriedem> johnthetubaguy: i think that's essentially what jay told me this would be
17:51:24 <jgriffith> We land the cinder image-backend
17:51:30 <johnthetubaguy> mriedem: yeah, sounds like the same plan
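(A purely hypothetical sketch of the placement idea being described: an aggregate holding both the compute node and the storage pool as resource providers, plus a 'distance' attribute, so a request can ask for distance=0 to get co-located storage. None of these structures exist yet.)

    # Hypothetical data only -- nothing like this exists in placement today.
    aggregate = {
        'uuid': 'agg-1',
        'resource_providers': ['compute-node-1', 'cinder-pool-1'],
        'distance': 0,  # 0 = co-located, larger = further away
    }

    server_request = {
        'flavor': 'enterprise',
        'required': {'distance': 0},  # compute and storage on/near the same box
    }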
17:51:42 <jgriffith> Have a cinder-agent or full cinder-volume service on each compute node...
17:52:00 <jgriffith> If you want something fast, place on said node with cinder capacity, skip the iscsi target and use it
17:52:24 <jgriffith> but the fact is, all this talk about "iscsi isn't fast enough" has proven false in most cases
17:52:34 <johnthetubaguy> we are really just talking about how that gets modelled in the scheduler, we are kinda 60% there right now I think
17:52:59 <jgriffith> johnthetubaguy I know, but I'm suggesting that before doing the other 40% we consider an alternative
17:53:08 <mriedem> should we suggest this as a forum topic?
17:53:12 <johnthetubaguy> jgriffith: depends on how much you spend on networking vs the speed you want, agreed its possible right now
17:53:22 <mriedem> the "i want to use cinder for everything" thing has come up a few different ways
17:53:28 <johnthetubaguy> mriedem: we might want to draw this up a bit more crisply first, but yeah, I think we should
17:53:31 <jgriffith> mriedem I'm willing to try and discuss it
17:53:48 <jgriffith> mriedem but honestly if it's like other topics that's already decided I don't want to waste anyones time, including my own
17:53:57 * johnthetubaguy sits on his hands so he doesn't say he will try to write this up
17:54:05 <mriedem> well the point of the forum session is to get ops and users in the room too
17:54:20 <jungleboyj> mriedem: ++
17:54:31 <mriedem> anyway, i can take a todo to try and flesh out something for that
17:54:41 <johnthetubaguy> mriedem: I am happy to review that
17:54:54 <johnthetubaguy> so the baby step towards that is Cinder backend ephemeral driver
17:55:01 <mriedem> yeah
17:55:06 <johnthetubaguy> which helps the "diskless" folks
17:55:21 <jgriffith> johnthetubaguy the good thing is that certainly does open up many more long term possibilities
17:55:46 <johnthetubaguy> it seems aligned with our strategic direction
17:55:49 <jungleboyj> There have been enough requests around the idea that we should keep trying to move something forward.
17:55:59 * johnthetubaguy puts down management books
17:56:13 <jgriffith> jungleboyj which request?
17:56:23 <jgriffith> or... I mean "which idea"
17:56:41 <mriedem> using cinder for everything
17:56:43 <jgriffith> local disk?  Ephemeral Cinder?  Cinder image-backend
17:56:44 <mriedem> always boot from volume
17:56:49 <jgriffith> mriedem thank you
17:57:02 <mriedem> 3 minutes left
17:57:02 <jgriffith> I get distracted easily
17:57:04 <jungleboyj> Ephemeral Cinder
17:57:04 <johnthetubaguy> root_disk = -1, etc
17:57:19 <johnthetubaguy> so next steps...
17:57:33 <johnthetubaguy> forum session, and nova spec on cinder ephemeral back end?
17:57:40 <johnthetubaguy> mriedem takes the first one
17:57:44 <jungleboyj> So, the plan is that the question of being able to co-locate volumes and instances will be handled by the placement service?
17:57:44 <johnthetubaguy> the second one?
17:57:44 <mriedem> yup
17:58:01 <mriedem> jungleboyj: that will be part of the forum discussion i think
17:58:12 <jungleboyj> mriedem: Great
17:58:16 <mriedem> who's going to start the nova spec for the cinder image backend?
17:58:24 <jgriffith> mriedem I'll do it
17:58:26 <jgriffith> I promise
17:58:32 <mriedem> ok, work with mdbooth
17:58:37 <mriedem> he's our imagebackend expert
17:58:43 <jgriffith> mdbooth Lucky you :)
17:58:47 <johnthetubaguy> I want to review both of those things when they exist
17:59:02 <jgriffith> mdbooth I meant that you get to work with me, not an enviable task
17:59:18 <mdbooth> jgriffith: Hehe
18:00:05 <johnthetubaguy> #action mriedem to raise forum session about cinder managing all storage, scheduling, etc
18:00:22 <johnthetubaguy> #action jgriffith to work with mdbooth on getting a spec together for the cinder image backend
18:00:28 <ildikov> and a timeout warning
18:00:29 <johnthetubaguy> something like that?
18:00:32 <mriedem> yes
18:00:34 <mriedem> works for me
18:00:37 <johnthetubaguy> sweet
18:00:38 <mriedem> times up
18:00:45 <johnthetubaguy> I guess we are done done
18:00:53 <jungleboyj> :-)
18:01:02 <ildikov> awesome, thank you all!
18:01:07 <smcginnis> thanks
18:01:09 <ildikov> great meeting today :)
18:01:23 <ildikov> see you next week :)
18:01:26 <ildikov> #endmeeting