17:00:30 <ildikov> #startmeeting cinder-nova-api-changes
17:00:30 <openstack> Meeting started Thu Mar  2 17:00:30 2017 UTC and is due to finish in 60 minutes.  The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:34 <openstack> The meeting name has been set to 'cinder_nova_api_changes'
17:00:43 <jungleboyj> o/
17:00:45 <lyarwood> o/
17:00:48 <ildikov> DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo gsilvis  xyang1 raj_singh lyarwood
17:00:49 <breitz> o/
17:00:59 <mriedem> lyarwood:
17:01:02 <mriedem> oh he's here
17:01:03 <mriedem> o/
17:01:10 <ildikov> :)
17:01:22 <ildikov> hi all :)
17:01:35 <lyarwood> mriedem: I can leave if you want to say something ;)
17:01:40 <jungleboyj> ildikov: You survived MWC?
17:01:59 <mriedem> lyarwood: no i want you to be very involved
17:02:09 <mriedem> ildikov: i'm out in about a half hour
17:02:10 <mriedem> fyi
17:02:18 <ildikov> jungleboyj: I have very small brain activity right now, but it's enough to keep me alive and breathing :)
17:02:34 <jungleboyj> ildikov: :-)
17:02:40 <ildikov> mriedem: thanks for the note, then let's start with the activities in Nova
17:03:21 <ildikov> the remove check_attach patch got merged, thanks to everyone, who helped me out!
17:03:30 <ildikov> one small thing is out of the way
17:03:54 <ildikov> we have a bunch of things up for review under the Nova bp: https://review.openstack.org/#/q/topic:bp/cinder-new-attach-apis
17:04:22 <ildikov> lyarwood is working on some refactoring on the BDM and detach
17:04:40 * johnthetubaguy wanders into the room a touch late
17:04:42 <lyarwood> yeah, just trying to get things cleaned up before we introduce v3 code
17:05:07 <lyarwood> the bdm UUID stuff isn't directly related btw, we've been trying to land it for a few cycles
17:05:18 <lyarwood> but I can drop it if it isn't going to be used directly in the end here
17:05:31 <ildikov> johnthetubaguy: mriedem: is there anything in that refactor code that would need to be discussed? Or the overall direction looks good?
17:06:00 <johnthetubaguy> moving detach into the BDM makes sense to me
17:06:21 <johnthetubaguy> I would love to see the if use_new_api checks limited to a single module, ideally
17:06:31 <mriedem> move detach into the bdm?
17:06:35 <mriedem> i haven't seen that
17:06:37 <lyarwood> into the driver bdm
17:06:42 <mriedem> oh
17:06:44 <lyarwood> https://review.openstack.org/#/c/439520/
17:06:46 <johnthetubaguy> yeah, sorry, that
17:06:54 <mriedem> i saw the change to call detach before destroying the bdm, and left a comment
17:07:02 <mriedem> makes sense, but i feel there are hidden side effects
17:07:07 <lyarwood> there's still a load of volume_api code in the compute api that I haven't looked at yet
17:07:07 <mriedem> because that seems *too* obvious
17:07:28 <johnthetubaguy> it needs splitting into two patches for sure
17:07:40 <mriedem> on the whole i do agree that having the attach code in the driver bdm and the detach code in the compute manager separately has always been confusing
17:07:43 <lyarwood> johnthetubaguy: https://review.openstack.org/#/c/440693/1
17:07:57 <mriedem> yes https://review.openstack.org/#/c/440693/ scares me
17:08:12 <mriedem> not for specific reasons
17:08:14 <mriedem> just voodoo
17:08:19 <lyarwood> bdm voodoo
17:08:27 <johnthetubaguy> heh, yeah
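(For context, a toy sketch — not the real nova code — of the symmetry this refactor is after: attach() already lives on the driver BDM while the detach logic sits in the compute manager, and https://review.openstack.org/#/c/439520/ moves detach next to attach. The volume_api calls stand in for nova's cinder proxy; the names and bodies here are illustrative assumptions.)

    class DriverVolumeBlockDevice:
        """Toy stand-in for nova.virt.block_device's driver BDM."""

        def __init__(self, volume_id):
            self.volume_id = volume_id
            self.connection_info = None

        def attach(self, volume_api, connector):
            # initialize the cinder connection and remember the result
            # on the BDM itself
            self.connection_info = volume_api.initialize_connection(
                self.volume_id, connector)

        def detach(self, volume_api, connector):
            # the refactor moves this logic here from the compute
            # manager so it mirrors attach()
            volume_api.terminate_connection(self.volume_id, connector)
            self.connection_info = None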
17:09:11 <mriedem> the cinder v3 patch from scottda seems to have stalled a bit
17:09:26 <mriedem> i think that's an easy add which everyone agrees on,
17:09:32 <mriedem> and that should also make cinder v3 the default config
17:09:46 <johnthetubaguy> yeah, +1 keeping that one moving
17:09:51 <johnthetubaguy> thats the one with the context change, etc
17:09:55 <johnthetubaguy> well, on top of that
17:10:04 <mriedem> yeah the context change was merged in ocata i think
17:10:10 <mriedem> it's already merged anyway
17:10:25 <johnthetubaguy> yeah, that should keep moving
17:10:39 <lyarwood> should we track the v3 by default patches against the bp?
17:10:40 <ildikov> mriedem: I can pick the v3 switch up
17:10:58 <johnthetubaguy> lyarwood: probably
17:11:13 <ildikov> lyarwood: we have them on this etherpad: https://etherpad.openstack.org/p/cinder-nova-api-changes
17:11:27 <lyarwood> ildikov: thanks
17:11:32 <ildikov> I wouldn't track it with the BP as my impression was that we want that in general
17:11:37 <mriedem> lyarwood: we want the cinder v3 thing regardless
17:11:39 <mriedem> ildikov: right +1
17:12:05 <lyarwood> understood
17:12:25 <mriedem> so to keep things moving at a minimum this week,
17:12:35 <mriedem> i think we do the bdm uuid and attachment_id changes
17:12:36 <ildikov> microversions were/are the question
17:12:43 <mriedem> and then also cinder v3
17:12:50 <mriedem> those are pretty easy changes to get done this week
17:12:54 <mriedem> as nothing is using them yet
17:12:57 <johnthetubaguy> we just request 3.0 to start with I assume?
17:12:58 <ildikov> I wonder whether we can switch to v3 and then add the negotiation for the mv
17:13:28 <ildikov> mriedem: +1
17:13:55 <mriedem> mv?
17:14:04 <ildikov> microversion :)
17:14:04 <johnthetubaguy> yeah, I thought we said just use the base version for now, and get that passing in the gates right?
17:14:16 <mriedem> johnthetubaguy: sure yeah
17:14:17 * ildikov is lazy to type the whole word all the time... :)
17:14:20 <mriedem> let's not overcomplicate the base change
17:14:32 <ildikov> my thinking as well
17:14:34 <johnthetubaguy> mriedem: +1
17:14:35 <mriedem> default to cinder v3, make sure it's there
17:14:51 <mriedem> release note
17:14:52 <mriedem> etc etc
17:14:56 <johnthetubaguy> sounds like we have lyarwood's BDM schema changes that can keep going also?
17:15:00 <mriedem> yes
17:15:01 <ildikov> and the earlier we switch the more testing we get even if 3.0 should be the same as v2
17:15:12 <mriedem> those patches are what we should get done this week
17:15:20 <lyarwood> I can get that done
17:15:20 <johnthetubaguy> yeah, sounds good
17:15:23 <mriedem> bdm schema changes (attachment_id and uuid) and cinder v3
17:15:31 <johnthetubaguy> yup, yup
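(For context, the "default to cinder v3" switch is essentially a config default flip; a minimal sketch of the relevant nova option, assuming the existing catalog_info layout — the exact help text is illustrative:)

    from oslo_config import cfg

    CONF = cfg.CONF
    cinder_group = cfg.OptGroup('cinder')
    cinder_opts = [
        cfg.StrOpt('catalog_info',
                   # previously 'volumev2:cinderv2:publicURL'
                   default='volumev3:cinderv3:publicURL',
                   help='Info to match when looking for cinder in the '
                        'service catalog: '
                        '<service_type>:<service_name>:<endpoint_type>'),
    ]
    CONF.register_group(cinder_group)
    CONF.register_opts(cinder_opts, group=cinder_group)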
17:15:53 <mriedem> i think we want to deprecate nova using cinder v2 also, but that can be later and separate
17:16:04 <johnthetubaguy> so, next week (still ignoring the spec), whats the next bit?
17:16:09 <mriedem> but start the timer in pike
17:16:21 <johnthetubaguy> I guess thats lyarwood's refactor stuff?
17:16:35 <mriedem> probably, and version negotiation
17:16:39 <mriedem> to make sure 3.27 is available
17:16:42 <johnthetubaguy> deciding yes or no to the BDM voodoo worries
17:16:57 <johnthetubaguy> true, we can put the ground work in for doing detach
17:18:22 <johnthetubaguy> so if we are good with that, I have spec open questions to run through?
17:18:25 <ildikov> we can move forward with the attach/detach changes
17:18:32 <johnthetubaguy> no... only detach
17:18:47 <johnthetubaguy> leaving attach to the very, very end I think
17:18:48 <ildikov> and have the microversion negotiation as a separate small change that needs to make it before anything else anyhow
17:19:14 <ildikov> johnthetubaguy: I meant the code to see how it works and not to merge everything right now
17:19:30 <ildikov> johnthetubaguy: but agree on finalizing detach first
17:19:46 <johnthetubaguy> yeah, could do attach WIP on the end, so we can test detach for real
17:19:53 <johnthetubaguy> we probably should actually
17:20:01 <ildikov> my point was mainly that we should not stop coding detach just because the mv negotiation is not fully baked yet
17:20:18 <johnthetubaguy> mv negotiation I thought was easy though
17:20:28 <ildikov> johnthetubaguy: jgriffith has a WIP up that does attach
17:20:31 <johnthetubaguy> either 3.27 is available, or it's not, right?
17:20:43 <jungleboyj> Right.
17:21:11 <jungleboyj> And the agreement was we would fall back to base V3 if 3.27 isn't available.
17:21:13 <ildikov> johnthetubaguy: I need to check whether the cinderclient changes are merged to get the highest supported version
17:21:27 <johnthetubaguy> is_new_attach_flow_available = True or False
17:21:43 <ildikov> something like that, that's easy
17:21:49 <johnthetubaguy> ildikov: we don't want the highest supported version though, we want 3.27, or do you mean we need cinderclient to support 3.27?
17:21:56 <ildikov> just currently Nova tells Cinder what version it wants
17:22:17 <johnthetubaguy> right, we currently always want the base version
17:22:21 <jungleboyj> Right, need cinderclient to say what it can support.  I am not sure if that code is in yet.
17:22:35 <ildikov> jungleboyj: me neither
17:22:44 <johnthetubaguy> we can get the list of versions and see if 3.27 is available
17:22:49 <johnthetubaguy> ah, right, we need that I guess
17:23:14 <johnthetubaguy> I mean its a simple REST call, so we could hold that patch if it stops us moving forward I guess
17:23:27 <johnthetubaguy> anyways, that seems OK
17:23:35 <johnthetubaguy> thats next weeks thing to chase
17:23:58 <ildikov> johnthetubaguy: it's just a matter of agreeing how we get what version is supported by Cinder and then act accordingly in Nova
17:24:20 <johnthetubaguy> mriedem: can you remember what we do in novaclient for that discovery bit?
17:24:23 <mriedem> https://review.openstack.org/#/c/425785/
17:24:31 <ildikov> johnthetubaguy: so for now just go with base v3 and see what made it into Cinder and add the rest as soon as we can
17:24:34 <mriedem> the cinder client version discovery is merged, but not released
17:24:39 <mriedem> smcginnis: need ^ released
17:24:53 <johnthetubaguy> cool, thats simple then
17:25:13 <ildikov> ok, I remember now, it needed a few rounds of rechecks to make it in
17:25:13 <jungleboyj> Argh, mriedem beat me to it.
17:25:17 <mriedem> ildikov: so add that to the list of things to do - cinderclient release
17:25:31 <jungleboyj> mriedem: +2
17:25:48 <jungleboyj> mriedem: You want me to take that over to Cinder.
17:25:55 <johnthetubaguy> #action need new cinder client release so we have access to get_highest_client_server_version https://review.openstack.org/#/c/425785/
17:26:20 <mriedem> jungleboyj: sure, anyone can propose the release
17:26:26 <mriedem> smcginnis has to sign off
17:26:31 <jungleboyj> I'll take that.
17:26:36 <mriedem> make sure you get the semver correct
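(A minimal sketch of the negotiation being agreed here, assuming the get_highest_client_server_version helper from the cinderclient change above — it returns the highest microversion both client and server support, as a float; the flag name echoes johnthetubaguy's suggestion and is not nova's actual API:)

    from cinderclient import client as cinder_client

    NEW_ATTACH_FLOW_MIN = 3.27  # microversion adding the attachment_* APIs

    def is_new_attach_flow_available(cinder_endpoint_url):
        """True if the new attach flow can be used; else fall back to 3.0."""
        highest = cinder_client.get_highest_client_server_version(
            cinder_endpoint_url)
        return highest >= NEW_ATTACH_FLOW_MIN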
17:27:12 <ildikov> mriedem: added a short list to the etherpad too with the small items in the queue we agreed on
17:27:50 <johnthetubaguy> #link https://etherpad.openstack.org/p/cinder-nova-api-changes
17:29:31 <johnthetubaguy> I would love to go over the TODOs in the spec
17:30:19 <johnthetubaguy> does that make sense to do now?
17:30:31 <ildikov> johnthetubaguy: I have to admit I couldn't get to the very latest version of the spec yet
17:30:51 <johnthetubaguy> thats fine, this is basically loose ends that came up post ptg
17:30:54 <johnthetubaguy> #link https://review.openstack.org/#/c/373203/17/specs/pike/approved/cinder-new-attach-apis.rst
17:31:02 <ildikov> johnthetubaguy: but we can start to go over the TODOs
17:31:04 <johnthetubaguy> the first one is the shared storage connection stuff
17:31:25 <johnthetubaguy> I am tempted to say we delay that until after we get all the other things sorted
17:31:32 <johnthetubaguy> lyarwood: do you think thats possible? ^
17:31:44 <lyarwood> +1
17:31:45 <lyarwood> yeah
17:32:01 <johnthetubaguy> there just seem to be complications we should look at separately there
17:32:18 <jungleboyj> That makes sense.  We need everything else stabilized first.
17:32:22 <johnthetubaguy> cool, we can do that then, split that out
17:32:33 <ildikov> johnthetubaguy: that will only be problematic with multi-attach only, right?
17:32:54 <johnthetubaguy> ildikov: I believe so, I think its new attachments, shared connections, multi-attach
17:33:10 <johnthetubaguy> #action split out shared host connections from use new attachment API spec
17:33:17 <johnthetubaguy> I think I am happy the correct things are possible
17:33:38 <johnthetubaguy> (which is probably a bad sign, but whatever)
17:33:44 <johnthetubaguy> so the next TODO
17:33:48 <johnthetubaguy> evacuate
17:33:56 <johnthetubaguy> how do you solve a problem like... evacuate
17:34:07 * jungleboyj runs for the fire exit
17:34:16 <johnthetubaguy> heh
17:34:29 <johnthetubaguy> so mdbooth added a great comment
17:34:39 <johnthetubaguy> if you evacuate an instance, thats cool
17:34:46 <johnthetubaguy> now detach a few volumes
17:34:55 <johnthetubaguy> then a bit later delete the instance
17:35:07 <johnthetubaguy> some time later the old host comes back from the dead and needs to get cleaned up
17:35:15 <johnthetubaguy> we kinda have a few options:
17:35:50 <johnthetubaguy> (1) leave attachments around so we can try to detect them (although finding them could be quite hard from just a migration object, when the instance has been purged from the DB)
17:36:38 <johnthetubaguy> (2) delete attachments when we have success on the evacuate on the new destination host, and leave the hypervisor driver to be able to find the unexpected VMs (if possible) and do the right thing with the backend connections where it can
17:37:07 <johnthetubaguy> (3) just don't allow an evacuated host to restart until the admin has manually tidied things up (aka re-imaged the whole box)
17:37:32 <lyarwood> there's another option
17:37:49 <lyarwood> update the attachment with connector=None
17:37:53 <jungleboyj> johnthetubaguy:  I think number two makes the most sense to me.  If the instance has successfully evacuated to the new host those connections aren't needed on the original.  Right?
17:37:57 <lyarwood> that's the same as terminate-connection right?
17:38:11 <johnthetubaguy> lyarwood: but why not just delete it?
17:38:22 <jungleboyj> johnthetubaguy:  ++
17:38:28 <lyarwood> johnthetubaguy: so the source host is aware that it has clean up
17:38:31 <lyarwood> has to*
17:38:37 <lyarwood> if it ever comes back
17:38:48 <johnthetubaguy> lyarwood: but the source host is only stored in the connector I believe?
17:38:54 <ildikov> I think attachment-delete will clean up the whole thing on the Cinder side
17:39:25 <ildikov> so no more terminate-connection
17:39:36 <johnthetubaguy> so the problem, not sure if I stated it, is finding the volumes
17:39:48 <johnthetubaguy> if the volume is detached...
17:39:51 <johnthetubaguy> after evacuate
17:40:07 <ildikov> is the problem here that when the host comes up then it tries to re-initiate the connection, etc?
17:40:08 <johnthetubaguy> we get the instance, we know it was evacuated, but we only check half of the volumes to clean up
17:40:17 <johnthetubaguy> its trying to kill the connection
17:40:42 <johnthetubaguy> the vm has moved elsewhere, so to stop massive data corruption, we have to kill the connection
17:41:55 <johnthetubaguy> so I got distracted there
17:42:08 <johnthetubaguy> yeah, we have the migration object and instances to tell the evacuate happened
17:42:12 <ildikov> attachment-delete should take care of the target, the question is what happens when the host comes up again, I think that was raised last week, but I will read the comment properly now in the spec :)
17:42:34 <johnthetubaguy> but its hard to always get the full list of volume attachments we have pre-evacuate at that point
17:43:00 <lyarwood> johnthetubaguy: that's why I suggested only updating the attachment
17:43:04 <ildikov> on the other hand I guess if the original host is not really down, nor fenced properly, then we just look the other way like nothing happened and it's surely not our fault :)
17:43:11 <lyarwood> johnthetubaguy: each attachment is unique to an instance, host and volume right?
17:43:24 <johnthetubaguy> lyarwood: you mean reuse the same attachment on the new host?
17:43:27 <lyarwood> not instance sorry
17:44:02 * johnthetubaguy is failing to visualise the change
17:44:05 <lyarwood> johnthetubaguy: no, just keep the old ones with a blank connector that I assumed would kill any export / connection
17:44:28 <johnthetubaguy> so I don't think that fixes the problem we are facing
17:44:37 <ildikov> lyarwood: currently you cannot update without connector
17:44:41 <johnthetubaguy> if the user does detach, we can't find that volume on the evacuated host
17:44:57 <ildikov> lyarwood: and attachment-delete will kill the export/connection
17:45:40 <lyarwood> johnthetubaguy: but that would be a detach of the volume on the new host using a new attachment
17:46:08 <johnthetubaguy> lyarwood: thats fine, the problem is we still need to tidy up on the old host, but we didn't know we had to
17:46:29 <lyarwood> johnthetubaguy: right so there's additional lookup logic needed there
17:46:33 <lyarwood> johnthetubaguy: that we don't do today
17:46:34 <johnthetubaguy> I guess we might need an API to list all valid attachments in Cinder, then get all attachments on the host from os.brick, and delete all the ones that shouldn't be there
17:46:43 <johnthetubaguy> lyarwood: the problem is we don't have that data right now
17:46:51 <johnthetubaguy> or there is no way to get that data, I mean
17:46:58 <johnthetubaguy> because we delete the records we wanted
17:47:06 <johnthetubaguy> I thinking about this function: https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L624
17:47:11 <jungleboyj> johnthetubaguy: Wasn't that proposed at the PTG?
17:47:30 <johnthetubaguy> jungleboyj: we thought we could get the data at the PTG, but I don't think we can
17:47:49 <johnthetubaguy> if you delete the instance, or detach volumes, we lose the information we were going to use to look up what needs cleaning up
17:47:52 <johnthetubaguy> (I think...)
17:48:03 <ildikov> johnthetubaguy: how does it work today?
17:48:44 <lyarwood> johnthetubaguy: the data within nova yes, but we can reach out to cinder and ask for any additional attachments not associated with a bdm (from being detached etc)
17:49:06 <johnthetubaguy> lyarwood: I don't think cinder has those APIs today
17:49:29 <johnthetubaguy> lyarwood: if we delete the data in cinder it doesn't have it either
17:49:38 <johnthetubaguy> so... lets back up to what ildikov said, and I forgot
17:49:46 <johnthetubaguy> step 1: be as rubbish as today
17:49:52 <johnthetubaguy> step 2: be less rubbish
17:50:04 <johnthetubaguy> I was jumping to step 2 again, my bad
17:50:18 <johnthetubaguy> so lets leave step 2 for another spec
17:50:21 <ildikov> johnthetubaguy: I'm not saying not to be less rubbish
17:50:48 <johnthetubaguy> yep, yep
17:50:56 <ildikov> I just hoped we're not that rubbish today and might see something in the old flow we're missing in the new...
17:51:08 <johnthetubaguy> yeah, me too, thats my new proposal
17:51:23 <johnthetubaguy> I forgot, we have the instances on the machine
17:51:30 <jungleboyj> :-)  One step at a time.
17:51:31 <johnthetubaguy> we destroy those
17:51:50 <johnthetubaguy> so, worst case we just have dangling connections on the host
17:52:08 <johnthetubaguy> like stuff we can't call os.brick for
17:52:19 <johnthetubaguy> but I think we can always get a fresh connector, which is probably good enough
17:52:52 <breitz> it seems dangerous to rely on any post evacuate cleanup (for things outside of nova - ie a cinder attachment) on the orig source host.  it seems like those things need to be cleaned up as part of the evacuate itself.  but perhaps i'm not understanding this correctly.
17:53:18 <johnthetubaguy> breitz: evacuate is when the source host is dead, turned off, and possibly in a skip
17:53:24 <breitz> right
17:53:43 <johnthetubaguy> but sometimes, the host is brought back from the dead
17:53:49 <breitz> but the world moves on - so when that host comes back - we can't rely on it.
17:53:54 <johnthetubaguy> if we just kill all the instances, we avoid massive data loss
17:54:09 <breitz> sure - can't allow the instances to come back
17:54:17 <johnthetubaguy> we keep migration records in the DB, so we can tell what has been evacuated, even if the instances are destroyed
17:54:23 <johnthetubaguy> breitz: totally, we do that today
17:54:31 <breitz> right
17:54:45 <johnthetubaguy> my worry is we don't have enough data about the instance to clean up the volumes using os.brick disconnect
17:54:58 <johnthetubaguy> but if we don't have that today, then whatever I guess
17:55:05 <johnthetubaguy> so this goes back to
17:55:12 <breitz> that I get - that cleanup is what I'm saying needs to be done when moving to the new dest.
17:55:30 <ildikov> johnthetubaguy: if we remove the target on the Cinder side that should destroy the connection or it does not?
17:55:39 <johnthetubaguy> breitz: but it can't be done, the host is dead? I am just worrying about the clean up on that host
17:55:39 <breitz> and somehow that info needs to be presented, not wait until the orig source comes back up to do it.
17:56:05 <johnthetubaguy> I think we just delete the attachments in cinder right away, to do the cinder tidy up
17:56:07 <lyarwood> ildikov: we terminate the original hosts connections in cinder today
17:56:20 <johnthetubaguy> lyarwood: I see what you are saying now, we should keep doing that
17:56:31 <johnthetubaguy> which means delete the attachments now, I think
17:56:37 <breitz> yes - do the delete attachments right away.
17:56:47 <ildikov> johnthetubaguy: lyarwood: yep, that's what I said too
17:56:49 <lyarwood> johnthetubaguy: right, update would allow some cleanup but delete would be in line with what we do today
17:56:57 <johnthetubaguy> lyarwood: the bit I was missing is we do terminate today
17:57:05 <lyarwood> johnthetubaguy: yeah we do
17:57:11 <lyarwood> johnthetubaguy: via detach in rebuild_instance
17:57:17 <ildikov> johnthetubaguy: delete is supposed to do that for you in the new API
17:57:25 <johnthetubaguy> it makes sense, I just forgot that
17:57:31 <johnthetubaguy> yeah, so right now that means delete attachment
17:57:46 <johnthetubaguy> which isn't ideal, but doesn't make things any worse
17:57:47 <johnthetubaguy> lets do that
17:58:01 <ildikov> lyarwood: update at the moment is more about finalizing attachment_create
17:58:22 <lyarwood> understood
17:58:41 <ildikov> lyarwood: you cannot update without a connector as that would also mean putting the volume back to the 'reserved' state and you don't want to do that here
17:58:56 <lyarwood> ah
17:59:07 <johnthetubaguy> ildikov: we kinda do that by creating a second attachment instead
17:59:10 <ildikov> lyarwood: and most probably neither in general
17:59:32 <ildikov> johnthetubaguy: yes, but that reserves the volume for the new host at least
17:59:36 <johnthetubaguy> so during the evacuate we don't want someone else "stealing" the volume
17:59:43 <johnthetubaguy> the new attachment does that fine
18:00:02 <johnthetubaguy> we just need to create the new one (for the new host) before we delete the old one
18:00:10 <johnthetubaguy> lyarwood: is that the order today?
18:00:16 <ildikov> that will work
18:00:47 <lyarwood> johnthetubaguy: hmmm I think we terminate first
18:02:06 <lyarwood> johnthetubaguy: yeah, in _rebuild_default_impl we detach, which in turn terminates the connections first, before spawn then initializes the connection on the new host
18:02:28 <johnthetubaguy> lyarwood: I wonder about creating the attachment in the API, and adding it into the migration object?
18:02:53 <johnthetubaguy> ah, right, there it is: https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2651
18:02:55 <lyarwood> johnthetubaguy: yeah, we could update after that
18:03:28 <johnthetubaguy> lyarwood: thats probably simpler
18:04:10 <johnthetubaguy> lyarwood: I was actually wondering in the live-migrate case, we have two sets of attachment_ids, when do we update the BDM, I guess on success, so maybe keep the pending ones in the migration object?
18:04:43 <lyarwood> johnthetubaguy: yup, we almost did the same with connection_info in the last cycle
18:05:00 <johnthetubaguy> lyarwood: yeah, thats why I was thinking in evacuate we could copy that
18:05:09 <johnthetubaguy> but I am probably over thinking that one
18:05:19 <johnthetubaguy> your idea sounds simpler
18:05:25 <johnthetubaguy> works on both code paths too
18:05:57 <ildikov> +1 on similar solutions
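(A sketch of the ordering agreed here, with hypothetical helper names — attachment_create/attachment_delete are the new Cinder 3.27 calls, while the surrounding BDM handling is an assumption: create the destination attachment first so the volume stays reserved, then delete the source attachment so Cinder tears down the old export.)

    def handoff_attachment(volume_api, context, bdm, instance):
        # 1. reserve the volume for the destination host before touching
        #    the old attachment, so nothing can "steal" it mid-evacuate
        new_attachment = volume_api.attachment_create(
            context, bdm.volume_id, instance.uuid)
        # 2. delete the source host's attachment; in the new flow this
        #    also performs the terminate-connection/export cleanup
        volume_api.attachment_delete(context, bdm.attachment_id)
        # 3. only now point the BDM at the new attachment
        bdm.attachment_id = new_attachment['id']
        bdm.save()
        # the destination host later completes the attach by calling
        # attachment_update with its own connector
        return new_attachment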
18:06:52 <lyarwood> cool, I need to drop now, I'll follow up with the BDM uuid and attachment_id patches in the morning \o_
18:07:08 <ildikov> we're out of time for today
18:07:25 <johnthetubaguy> yeah, I need to run too
18:07:27 <johnthetubaguy> thanks all
18:07:29 <ildikov> johnthetubaguy: can we consider evacuate good for now?
18:07:43 <ildikov> johnthetubaguy: or it will need more chats on the next meeting?
18:07:44 <johnthetubaguy> yep, thats both of my TODOs covered
18:08:00 <ildikov> johnthetubaguy: great, thanks for confirming
18:08:27 <ildikov> ok let's focus on the smaller items to merge this week and until the next meeting
18:08:33 <ildikov> thank you all!
18:08:43 <johnthetubaguy> would love +1s or -1s on the spec too :)
18:09:05 <ildikov> #action everyone to review the Nova spec!
18:09:10 <jungleboyj> ++
18:09:11 <ildikov> johnthetubaguy: ack :)
18:09:28 <ildikov> #endmeeting