16:00:22 <ildikov> #startmeeting cinder-nova-api-changes
16:00:23 <openstack> Meeting started Thu Dec 21 16:00:22 2017 UTC and is due to finish in 60 minutes.  The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:25 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:28 <openstack> The meeting name has been set to 'cinder_nova_api_changes'
16:00:35 <ildikov> johnthetubaguy jaypipes e0ne jgriffith hemna mriedem patrickeast smcginnis diablo_rojo xyang1 raj_singh lyarwood jungleboyj stvnoyes
16:00:40 <mriedem> o/
16:00:55 <jungleboyj> @!
16:00:55 <_pewp_> jungleboyj (。・д・)ノ゙
16:01:13 <ildikov> mriedem: jungleboyj: morning :)
16:01:44 <stvnoyes> o/
16:01:47 <jungleboyj> Snowy morning.  :-)
16:02:00 <ildikov> I don't expect a crowd today and only have a few topics
16:02:04 <ildikov> stvnoyes: hi :)
16:02:23 <stvnoyes> hi :-)
16:03:08 <ildikov> ok, let's start
16:03:43 <ildikov> so we need to stamp the Cinder spec on multi-attach: https://review.openstack.org/#/c/523608/
16:04:08 <ildikov> I haven't seen an update from jgriffith as of yet since the last round of comments
16:04:56 <jungleboyj> ildikov:  Me either.  I looked at that earlier.  jgriffith seems to be AWOL.  ;-)
16:05:29 <ildikov> I think we are kind of clear on most of the items
16:05:59 <ildikov> I hope we can clean up the shelved offloaded stuff at some point with a Nova microversion bump
16:06:23 <ildikov> running this late in the cycle, it won't happen now though
16:06:36 <mriedem> that's probably an easy cleanup for rocky
16:06:47 <ildikov> I think the volume type aspect is clear now
16:06:52 <ildikov> mriedem: +1
16:07:02 <jungleboyj> mriedem:  ++
16:07:27 <ildikov> and we need some client clean up too on the Cinder side as pointed out by hemna
16:07:56 <jungleboyj> Yeah, the --multi-attach option?
16:08:05 <ildikov> yep, I meant that one
16:08:20 <jungleboyj> ildikov:  Ok, will have to go through deprecation and stuff on that, but not a big deal.
16:09:01 <ildikov> jungleboyj: sure, we can look into that
16:09:18 <ildikov> jungleboyj: it's a bit of a corner case, but we can figure out how to handle it once we get there
16:10:13 <ildikov> jungleboyj: do you want to try updating the spec?
16:10:56 <jungleboyj> ildikov:  Looking at it, I was really hoping that jgriffith would jump in there but I can try to address the comments and see if they satisfy people.
16:11:43 <ildikov> jungleboyj: let's give it a try and hopefully we can get it in soon now
16:11:56 <ildikov> we can always update it later if needed
16:12:18 <jungleboyj> Ok.  I will give that a shot after the meeting here.
16:12:24 <ildikov> jungleboyj: thank you
16:12:29 <jungleboyj> ildikov: Welcome.
16:13:26 <ildikov> I will hunt down jgriffith after the spec lands to see where he got with the code parts in the meantime
16:14:19 <ildikov> on the Nova side I cleaned up the libvirt patch: https://review.openstack.org/#/c/267587/
16:15:07 <ildikov> I know that we have pseudo code in the Nova spec for the lock, but if someone has a ton of time I could use some 'how to do locks' lessons
16:15:35 <ildikov> as I'm not sure I got the full picture of what it should look like in the end...
16:15:47 <mriedem> well the first thing there is we need the shared_targets flag off the volume
16:16:24 <ildikov> yeah, that'll decide on whether or not we will have to lock
16:16:26 <mriedem> so we have to get the volume at the new microversion, i assume when we know we're detaching from a volume that has multiple attachments?
16:16:44 <mriedem> would have to look at the spec again
16:17:00 <ildikov> didn't we kind of want to have that lock for shared_targets overall?
16:17:09 <ildikov> https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/cinder-volume-multi-attach.html
16:18:00 <mriedem> so that can be attachments from different volumes?
16:18:06 <mriedem> i.e. it's an existing problem?
16:18:29 <jungleboyj> ildikov:  The locking was for shared_targets, yes.
16:18:40 <ildikov> I think we said that there's an existing problem with shared_targets; it just gets worse with multi-attach
16:18:56 <mriedem> ok i missed that part
16:19:33 <jungleboyj> Right.
16:19:38 <mriedem> so if we're going to do that regardless of the volume saying it's multiattach or not, i guess that patch comes in before the multiattach one right?
16:19:49 <ildikov> we had a chat about this at the PTG with the Cinder guys saying that brick is smarter now, etc, but there's in general a shared_targets issue that locks would help with
16:20:17 <ildikov> yeah, in that case it should
16:20:47 <ildikov> jungleboyj: is there any bug report about this issue, or is it just an urban legend that we believe is true?
16:21:29 <jungleboyj> ildikov:  Just a legend.
16:21:43 <ildikov> jungleboyj: never mind then :)
16:22:11 <mriedem> ok, so i can maybe help a bit here on the plumbing
16:23:19 <ildikov> I have very basic questions on how locks work in general
16:23:47 <ildikov> like I saw instance lock in the code but that doesn't fully answer how exactly I can lock the host
16:24:00 <mriedem> depends on if the lock is external or not
16:24:07 <ildikov> or well, the attach/detach volume operations to get them serialized
16:24:13 <mriedem> we use https://github.com/openstack/nova/blob/master/nova/utils.py#L71
16:24:26 <jgriffith> Yes, that mv bump to expose shared_targets and lock name should be before multiattach
16:24:35 <jgriffith> (sorry, joining the party late)
16:24:39 <mriedem> https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L291
16:24:56 <mriedem> you can pass external=True here https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L231
16:24:57 <ildikov> external from what perspective?
16:25:02 <mriedem> by default, the lock is just per-process
16:25:09 <mriedem> an external lock is on the file system
16:25:10 * ildikov hides after that question now...
16:25:13 <mriedem> so across processes
16:25:21 <ildikov> oh, ok, makes sense
16:25:22 <mriedem> that's only per-host though,
16:25:27 <mriedem> not across all hosts, that would be a DLM
16:25:29 <mriedem> like etcd
16:25:33 <ildikov> we want it per host, so that's fine
16:25:49 <mriedem> now,
16:25:57 <mriedem> you can only run one nova-compute worker,
16:26:01 <mriedem> so i'm not sure we need an external lock,
16:26:10 <mriedem> unless someone is running multiple nova-compute processes on the same host
16:26:19 <mriedem> which is possible, but i'm not sure how many people actually do that
16:26:39 <ildikov> is it a huge extra burden to add 'external'?
16:27:07 <mriedem> no, but it could unnecessarily impose a semaphore where one isn't needed
16:27:15 <mriedem> which would impact performance
16:27:44 <mriedem> i think starting with a non-external lock is fine,
16:27:47 <ildikov> jgriffith: we were wondering if there's any explanation on the shared targets issue that exists independently from multi-attach that we discussed at the PTG
16:27:50 <jgriffith> my 2 cents is that I didn't think we needed the external in this case
16:28:00 <mriedem> if someone comes forward saying "omfg this breaks my crazy multi-nova-compute on single host deployment" then we can change it
16:28:06 <ildikov> ok, fair enough
16:28:20 <ildikov> just wanted to see the options here
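[The local-vs-external lock distinction discussed above can be sketched with the Python stdlib. This is only an illustration of the semantics — a per-process semaphore keyed by name versus a file-based lock visible to every process on the host — not the actual oslo.concurrency API; all names here are made up for the sketch.]

```python
import fcntl
import os
import tempfile
import threading

# Process-local named locks: one semaphore per name, visible only inside
# this process (the default behaviour being discussed above).
_local_locks = {}
_registry_guard = threading.Lock()


def local_lock(name):
    # Return the same Lock object for the same name within this process.
    with _registry_guard:
        return _local_locks.setdefault(name, threading.Lock())


def acquire_external_lock(name, lock_dir=None):
    # An "external" lock lives on the filesystem, so it is shared by
    # every process on the host (roughly what external=True adds).
    lock_dir = lock_dir or tempfile.gettempdir()
    fd = os.open(os.path.join(lock_dir, name + '.lock'),
                 os.O_CREAT | os.O_RDWR)
    fcntl.flock(fd, fcntl.LOCK_EX)  # blocks until no other process holds it
    return fd


def release_external_lock(fd):
    fcntl.flock(fd, fcntl.LOCK_UN)
    os.close(fd)
```

[With a single nova-compute process per host, the process-local variant already serializes the attach/detach paths, which is why the external lock is treated as optional above.]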
16:28:42 <mriedem> ok so i could probably whip something up
16:29:05 <jgriffith> ildikov: the shared-targets issues outside of multi-attach?  Just the "unsafe" removal of a target that's still in use
16:29:21 <jgriffith> brick deals with it today but there are races there that can be problematic
16:29:29 <ildikov> mriedem: that would be pretty great :)
16:29:37 <jgriffith> although currently my understanding from folks is that it's pretty solid now
16:29:40 <ildikov> jgriffith: I think I remembered the races discussion
16:30:19 <jgriffith> ildikov: the conclusion we came to was that it was a nice improvement/enhancement for the non-multiattach world, and a requirement for multiattach
16:30:46 <ildikov> jgriffith: thanks for confirming
16:30:57 <jungleboyj> jgriffith:  Welcome!
16:31:26 <ildikov> jgriffith: I guess there isn't anything that would describe the need for this nice improvement/enhancement? :)
16:32:30 <mriedem> https://specs.openstack.org/openstack/cinder-specs/specs/queens/add-shared-targets-to-volume-ref.html
16:32:33 <mriedem> probably there ^
16:33:29 <ildikov> mriedem: ah, right, I forgot that it was written without any multi-attach poisoning in it...
16:33:47 <ildikov> then we can document that change as well
16:34:10 <mriedem> we should get this updated https://developer.openstack.org/api-ref/block-storage/v3/#show-a-volume-s-details
16:34:12 <mriedem> for the new fields
16:34:31 <ildikov> I can look into that
16:35:34 <ildikov> mriedem: if you upload a WIP on the lock stuff, I will look into the API microversion bump and the needed Cinder changes
16:35:36 <mriedem> https://bugs.launchpad.net/cinder/+bug/1739640
16:35:38 <openstack> Launchpad bug 1739640 in Cinder "3.48 volume response parameters are not in the v3 API reference" [Undecided,New]
16:35:38 <mriedem> here you go
16:35:53 <mriedem> i'll crank something out quick on the lock thing
16:35:59 <ildikov> mriedem: sweet, tnx
16:36:35 <ildikov> mriedem: is there anything in the libvirt patch that I should add/change?
16:37:01 <mriedem> i haven't looked at it lately
16:37:45 <ildikov> jgriffith: jungleboyj will update your spec on multi-attach; will you be around to answer questions and/or check the updated version?
16:38:05 <ildikov> mriedem: no problem, I was just wondering
16:38:12 <jungleboyj> jgriffith:  Or can you make the updates so I can then approve it?
16:38:28 <jgriffith> jungleboyj: will do
16:38:50 <jungleboyj> jgriffith:  You will make updates?  If so, can we do that today?
16:39:02 <jgriffith> jungleboyj: yes and yes
16:39:10 <ildikov> jgriffith: even better, thank you so much
16:39:18 <jungleboyj> jgriffith:  You the man!  Thank you.
16:40:12 <ildikov> I haven't gotten to the Tempest test yet
16:40:34 <ildikov> I hope our expert, stvnoyes will be able to help out with that part in January :)
16:41:10 <mriedem> i assume stvnoyes is frantically working on backporting all of this to kilo for oracle
16:41:42 <ildikov> I hope he found slaves for that :)
16:41:56 <mriedem> motion to adjourn
16:42:08 <jgriffith> I second
16:42:12 <jungleboyj> I third
16:42:25 <ildikov> I give up :)
16:42:31 <jgriffith> jungleboyj: no no no, procedure after a second is "all in favor"
16:42:37 <jgriffith> :)
16:42:39 <jungleboyj> all in favor?
16:42:43 <mriedem> aye
16:42:43 <ildikov> for today only though :)
16:42:44 <jgriffith> aye
16:42:56 <jungleboyj> The ayes have it.
16:43:18 <ildikov> Happy Holidays All!
16:43:22 <ildikov> that was my last topic
16:43:27 <ildikov> no meeting next week
16:43:36 <jungleboyj> Hope everyone has a Merry Christmas and a safe New Year.
16:43:37 <jgriffith> Indeed Merry Christmas everyone
16:43:46 <ildikov> I will check who's around on the 4th
16:43:54 <jungleboyj> jgriffith:  give me a ping when the spec is ready in case I don't see that come through.
16:44:14 <jungleboyj> ildikov: I am in Workshops in Raleigh so I will probably only be peripherally there.
16:44:34 <ildikov> jungleboyj: no worries, I'm an expert in talking to myself
16:44:47 * jungleboyj laughs
16:45:14 <ildikov> with all that said
16:45:25 <ildikov> Thank You All for everything!
16:45:54 <jungleboyj> ildikov:  Thank you!
16:45:59 <ildikov> it's been a hell of a ride, but hopefully we will get there now
16:46:45 * jungleboyj sings "All I want for Christmas is multi-attach"
16:46:51 <ildikov> have fun and some rest and whatever makes you happy with family, friends, cats, etc next week(s)
16:47:12 <ildikov> jungleboyj: that makes me nauseous...
16:47:33 <ildikov> That's all Folks!
16:47:37 <ildikov> :)
16:47:41 <ildikov> #endmeeting