16:00:22 #startmeeting cinder-nova-api-changes
16:00:23 Meeting started Thu Dec 21 16:00:22 2017 UTC and is due to finish in 60 minutes. The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:25 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:28 The meeting name has been set to 'cinder_nova_api_changes'
16:00:35 johnthetubaguy jaypipes e0ne jgriffith hemna mriedem patrickeast smcginnis diablo_rojo xyang1 raj_singh lyarwood jungleboyj stvnoyes
16:00:40 o/
16:00:55 @!
16:00:55 <_pewp_> jungleboyj (。・д・)ノ゙
16:01:13 mriedem: jungleboyj: morning :)
16:01:44 o/
16:01:47 Snowy morning. :-)
16:02:00 I don't expect a crowd today and only have a few topics
16:02:04 stvnoyes: hi :)
16:02:23 hi :-)
16:03:08 ok, let's start
16:03:43 so we need to stamp the Cinder spec on multi-attach: https://review.openstack.org/#/c/523608/
16:04:08 I haven't seen an update from jgriffith yet since the last round of comments
16:04:56 ildikov: Me either. I looked at that earlier. jgriffith seems to be AWOL. ;-)
16:05:29 I think we are kind of clear on most of the items
16:05:59 I hope we can clean up the shelved offloaded stuff at some point with a Nova microversion bump
16:06:23 running this late in the cycle it won't happen now though
16:06:36 that's probably an easy cleanup for rocky
16:06:47 I think the volume type aspect is clear now
16:06:52 mriedem: +1
16:07:02 mriedem: ++
16:07:27 and we need some client cleanup too on the Cinder side, as pointed out by hemna
16:07:56 Yeah, the --multi-attach option?
16:08:05 yep, I meant that one
16:08:20 ildikov: Ok, we will have to go through deprecation and stuff on that, but not a big deal.
16:09:01 jungleboyj: sure, we can look into that
16:09:18 jungleboyj: it's a bit of a corner case though, but we can figure out how to handle it once we get there
16:10:13 jungleboyj: do you want to give updating the spec a try?
16:10:56 ildikov: Looking at it, I was really hoping that jgriffith would jump in there, but I can try to address the comments and see if they satisfy people.
16:11:43 jungleboyj: let's give it a try and hopefully we can get it in soon now
16:11:56 we can always update it later if needed
16:12:18 Ok. I will give that a shot after the meeting here.
16:12:24 jungleboyj: thank you
16:12:29 ildikov: Welcome.
16:13:26 I will hunt down jgriffith after the spec lands to see where he got with the code parts in the meantime
16:14:19 on the Nova side I cleaned up the libvirt patch: https://review.openstack.org/#/c/267587/
16:15:07 I know that we have pseudo code in the Nova spec for the lock, but if someone has some time I could use some 'how to do locks' lessons
16:15:35 as I'm not sure I got the full picture on how it should look by the end...
16:15:47 well the first thing there is we need the shared_targets flag off the volume
16:16:24 yeah, that'll decide whether or not we have to lock
16:16:26 so we have to get the volume at the new microversion, i assume when we know we're detaching from a volume that has multiple attachments?
16:16:44 would have to look at the spec again
16:17:00 didn't we kind of want to have that lock for shared_targets overall?
16:17:09 https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/cinder-volume-multi-attach.html
16:18:00 so that can be attachments from different volumes?
16:18:06 i.e. it's an existing problem?
16:18:29 ildikov: The locking was for shared_targets, yes.
16:18:40 I think we said that there's an existing problem with shared_targets, it just gets worse with multi-attach
16:18:56 ok i missed that part
16:19:33 Right.
16:19:38 so if we're going to do that regardless of the volume saying it's multiattach or not, i guess that patch comes in before the multiattach one, right?
16:19:49 we had a chat about this at the PTG, with the Cinder guys saying that brick is smarter now, etc, but in general there's a shared_targets issue that locks would help with
16:20:17 yeah, in that case it should
16:20:47 jungleboyj: is there any bug report about this issue, or is it just an urban legend that we believe is true?
16:21:29 ildikov: Just a legend.
16:21:43 jungleboyj: never mind then :)
16:22:11 ok, so i can maybe help a bit here on the plumbing
16:23:19 I have very basic questions on how locks work in general
16:23:47 like I saw the instance lock in the code, but that doesn't fully answer how exactly I can lock the host
16:24:00 depends on whether the lock is external or not
16:24:07 or well, how to lock the attach/detach volume operations to get them serialized
16:24:13 we use https://github.com/openstack/nova/blob/master/nova/utils.py#L71
16:24:26 Yes, that microversion bump to expose shared_targets and the lock name should be before multiattach
16:24:35 (sorry, joining the party late)
16:24:39 https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L291
16:24:56 you can pass external=True here https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L231
16:24:57 external from what perspective?
16:25:02 by default, the lock is just per-process
16:25:09 an external lock is on the file system
16:25:10 * ildikov hides after that question now...
16:25:13 so across processes
16:25:21 oh, ok, makes sense
16:25:22 that's only per-host though,
16:25:27 not across all hosts, that would be a DLM
16:25:29 like etcd
16:25:33 we want it per host, so that's fine
16:25:49 now,
16:25:57 you can only run one nova-compute worker,
16:26:01 so i'm not sure we need an external lock,
16:26:10 unless someone is running multiple nova-compute processes on the same host
16:26:19 which is possible, but i'm not sure how many people actually do that
16:26:39 is it a huge extra burden to add 'external'?
16:27:07 no, but it could unnecessarily impose a semaphore where one isn't needed
16:27:15 which would impact performance
16:27:44 i think starting with a non-external lock is fine,
16:27:47 jgriffith: we were wondering if there's any explanation of the shared targets issue that exists independently from multi-attach, which we discussed at the PTG
16:27:50 my 2 cents is that I didn't think we needed the external lock in this case
16:28:00 if someone comes forward saying "omfg this breaks my crazy multi-nova-compute on single host deployment" then we can change it
16:28:06 ok, fair enough
16:28:20 just wanted to see the options here
16:28:42 ok so i could probably whip something up
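A minimal sketch of the plumbing discussed above, under the assumption that the volume is fetched at Cinder API microversion 3.48 (which exposes shared_targets and service_uuid) and that the per-process oslo.concurrency lock the discussion settled on is sufficient; guarded_disconnect, the disconnect callable, and the overall shape are illustrative, not the eventual Nova patch:

    # Illustrative sketch only, not the actual Nova change: serialize
    # connect/disconnect per host when the backend reports shared targets.
    # Assumes a python-cinderclient Client created at microversion 3.48, e.g.
    #   cinder = cinderclient.client.Client('3.48', session=keystone_session)
    from oslo_concurrency import lockutils

    def guarded_disconnect(cinder, volume_id, disconnect):
        # shared_targets and service_uuid only appear in the volume
        # detail response at microversion >= 3.48.
        volume = cinder.volumes.get(volume_id)
        if getattr(volume, 'shared_targets', False):
            # Lock on the cinder-volume service UUID so every attach/detach
            # touching the same backend on this host is serialized. The
            # default external=False is the per-process lock discussed
            # above; pass external=True (a file lock) only if multiple
            # nova-compute processes share one host.
            with lockutils.lock(volume.service_uuid):
                disconnect(volume)
        else:
            # No shared targets, so no serialization is needed.
            disconnect(volume)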
16:29:05 ildikov: the shared-targets issues outside of multi-attach? Just the "unsafe" removal of a target that's still in use
16:29:21 brick deals with it today but there are races there that can be problematic
16:29:29 mriedem: that would be pretty great :)
16:29:37 although currently my understanding from folks is that it's pretty solid now
16:29:40 jgriffith: I think I remembered the races discussion
16:30:19 ildikov: the conclusion we came to was that it was a nice improvement/enhancement for the non-multiattach world, and a requirement for multiattach
16:30:46 jgriffith: thanks for confirming
16:30:57 jgriffith: Welcome!
16:31:26 jgriffith: I guess there isn't anything that would describe the need for this nice improvement/enhancement? :)
16:32:30 https://specs.openstack.org/openstack/cinder-specs/specs/queens/add-shared-targets-to-volume-ref.html
16:32:33 probably there ^
16:33:29 mriedem: ah, right, I forgot that it was written without any multi-attach poisoning in it...
16:33:47 then we can document that change as well
16:34:10 we should get this updated https://developer.openstack.org/api-ref/block-storage/v3/#show-a-volume-s-details
16:34:12 for the new fields
16:34:31 I can look into that
16:35:34 mriedem: if you upload a WIP on the lock stuff, I will look into the API microversion bump and the needed Cinder changes
16:35:36 https://bugs.launchpad.net/cinder/+bug/1739640
16:35:38 Launchpad bug 1739640 in Cinder "3.48 volume response parameters are not in the v3 API reference" [Undecided,New]
16:35:38 here you go
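For reference while updating the api-ref, this is roughly how the two fields added in 3.48 show up in a volume-detail response; unrelated fields are omitted and all values are invented for illustration:

    # Abbreviated "show volume details" response body at microversion 3.48,
    # written as a Python dict; every value here is made up.
    volume_show_3_48 = {
        "volume": {
            "id": "f81d4fae-7dec-11d0-a765-00a0c91e6bf6",
            "status": "in-use",
            "multiattach": True,
            # New in 3.48: whether the volume's backend shares targets
            # between volumes, and the UUID of its cinder-volume service,
            # which can serve as the per-host lock name.
            "shared_targets": True,
            "service_uuid": "abcdef01-2345-6789-abcd-ef0123456789",
        }
    }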
16:35:53 i'll crank something out quick on the lock thing
16:35:59 mriedem: sweet, tnx
16:36:35 mriedem: is there anything in the libvirt patch that I should add/change?
16:37:01 i haven't looked at it lately
16:37:45 jgriffith: jungleboyj will update your spec on multi-attach; will you be around to answer questions and/or check the updated version?
16:38:05 mriedem: no problem, I was just wondering
16:38:12 jgriffith: Or can you make the updates so I can then approve it?
16:38:28 jungleboyj: will do
16:38:50 jgriffith: You will make updates? If so, can we do that today?
16:39:02 jungleboyj: yes and yes
16:39:10 jgriffith: even better, thank you so much
16:39:18 jgriffith: You the man! Thank you.
16:40:12 I haven't gotten to the Tempest test yet
16:40:34 I hope our expert, stvnoyes, will be able to help out with that part in January :)
16:41:10 i assume stvnoyes is frantically working on backporting all of this to kilo for oracle
16:41:42 I hope he found slaves for that :)
16:41:56 motion to adjourn
16:42:08 I second
16:42:12 I third
16:42:25 I give up :)
16:42:31 jungleboyj: no no no, procedure after a second is "all in favor"
16:42:37 :)
16:42:39 all in favor?
16:42:43 aye
16:42:43 for today only though :)
16:42:44 aye
16:42:56 The ayes have it.
16:43:18 Happy Holidays All!
16:43:22 that was my last topic
16:43:27 no meeting next week
16:43:36 Hope everyone has a Merry Christmas and a safe New Year.
16:43:37 Indeed, Merry Christmas everyone
16:43:46 I will check who's around on the 4th
16:43:54 jgriffith: give me a ping when the spec is ready in case I don't see it come through.
16:44:14 ildikov: I am in workshops in Raleigh so I will probably only be peripherally there.
16:44:34 jungleboyj: no worries, I'm an expert in talking to myself
16:44:47 * jungleboyj laughs
16:45:14 with all that said
16:45:25 Thank You All for everything!
16:45:54 ildikov: Thank you!
16:45:59 it's a hell of a ride, but hopefully we will get there now
16:46:45 * jungleboyj sings "All I want for Christmas is multi-attach"
16:46:51 have fun and some rest and whatever makes you happy with family, friends, cats, etc. next week(s)
16:47:12 jungleboyj: that makes me nauseous...
16:47:33 That's all Folks!
16:47:37 :)
16:47:41 #endmeeting