17:00:08 #startmeeting cinder-nova-api-changes
17:00:08 Meeting started Thu Mar 9 17:00:08 2017 UTC and is due to finish in 60 minutes. The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:11 The meeting name has been set to 'cinder_nova_api_changes'
17:00:24 DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo gsilvis xyang1 raj_singh lyarwood breitz
17:00:30 * smcginnis can only lurk for a bit
17:00:32 \o
17:00:37 o/
17:00:39 0/
17:00:39 o/
17:00:59 smcginnis: noted, tnx
17:01:07 hi all :)
17:01:45 o/
17:01:49 o/
17:01:58 let's start with the smaller action items before jumping into a philosophical topic
17:02:24 mriedem: johnthetubaguy: have you checked the switch to Cinder v3 patch?
17:02:27 how did last week's work items go, I guess?
17:02:54 I haven't had a chance
17:03:02 i haven't seen the latest yet no
17:03:03 johnthetubaguy: yep, basically switch to Cinder v3 and the BDM work
17:03:05 busy with other fires
17:03:06 https://review.openstack.org/#/c/438744/
17:03:16 i reviewed the bdm data model patches of lyarwood's last week,
17:03:18 and https://review.openstack.org/#/c/438750/
17:03:22 and they still have some outstanding issues i think
17:03:33 #link https://etherpad.openstack.org/p/pike-nova-priorities-tracking
17:03:35 oh... ok; I guess I'll abandon mine
17:03:44 mriedem: johnthetubaguy: https://review.openstack.org/#/c/442004/
17:03:48 did we get everything in that etherpad^
17:03:59 lyarwood link to your patch (refresh my memory)?
17:03:59 ildikov: wrong patch
17:04:09 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/cinder-new-attach-apis
17:04:27 lyarwood oh... my work is done here :)
17:04:28 hmm the uuid changes are dropped from that bp
17:04:31 thanks!
17:04:59 mriedem: johnthetubaguy: https://review.openstack.org/#/c/420201/
17:05:16 mriedem: I respun the db and object changes this morning but I think we still have some more work to do
17:05:25 mriedem: tnx for pointing that out
17:05:49 OK I did wonder, that's the one
17:05:57 lyarwood: yeah you updated the attachment_id patch but not the bottom uuid patch
17:06:08 so for v3, did we say we would deprecate v2 support, or is that coming later?
17:06:37 So why do we have multiple patches overlapping?
17:07:12 so I went ahead and abandoned my versions, I'm happy to hand this off :)
17:07:24 I just don't want to continue duplicating efforts
17:07:27 smcginnis: the attachment_id duplicates were just from ptg
17:07:34 people didn't know
17:07:46 pretty much
17:07:57 mriedem I think there's some other overlap in there, but lyarwood did point out that he had started them and was *changing* a few things
17:07:58 johnthetubaguy: i'd like to say that we're going to deprecate nova's support for cinder v2 in pike
17:08:07 lyarwood: did you fix the comments on the BDM patches? is it ready for review?
17:08:19 johnthetubaguy: but in a separate patch from the one that adds the support for cinder v3 and makes it the default
17:08:47 mriedem are you (or somebody) working on that?
17:08:47 ildikov: the attachment_id changes are ready for more reviews yes, I missed mriedem's comments in the original uuid changes
17:08:53 mriedem: That is a good thing I think.
17:08:55 mriedem: what would that deprecation mean besides the reno?
17:09:04 mriedem the "make v3 default in Nova" thing
17:09:04 jgriffith: on which part?
17:09:15 mriedem "make v3 default in Nova"
17:09:15 jgriffith: that's ildikov/scottda's patch above
17:09:19 mriedem thanks
17:09:21 https://review.openstack.org/#/c/420201/
17:09:31 Cool.
17:09:34 mriedem: sounds good
17:09:56 jgriffith: yeap, I uploaded all the 20 lines of change, 18 of which is the reno :)
17:10:04 ildikov :)
17:10:18 ildikov: it means that if nova is configured to use cinder v2, we log a warning
17:10:19 plus the reno
17:10:23 jgriffith: I need to figure out the microversion stuff now
17:10:40 just like how we used to log a warning if nova was configured to use cinder v1
17:10:41 mriedem: ok cool, I'll upload a follow-up patch for that
17:10:52 Oh, I see.. there's the WIP for that (as per mriedem's comment)
17:10:58 mriedem: got it, I was just wondering what else I might forget
17:11:05 where is the microversion patch?
17:11:21 https://review.openstack.org/#/c/385682/
17:11:43 but that's not doing what we agreed we would do previously (yet)
17:11:46 johnthetubaguy: it was a previous patch set of the switch one
17:11:48 do we have that patch up?
17:11:57 johnthetubaguy: the latest non-abandoned one
17:12:16 johnthetubaguy: I need to get the microversion part up
17:12:25 OK, that's cool, that's still a TODO
17:12:27 johnthetubaguy: we need a cinderclient release for that one
17:12:36 right
17:12:47 * ildikov looks at smcginnis for an estimate of when that might happen :)
17:12:50 are we looking close to getting the client released?
17:13:06 * jungleboyj was just pinging smcginnis about that.
17:13:13 lyarwood: cool, sounds good thanks!
17:13:20 He was waiting for https://review.openstack.org/#/c/385629/ to merge.
17:13:30 It isn't making much progress with scottda being pulled away.
17:13:35 lyarwood: is there anything in that chain that needs further discussions/work?
17:13:59 ildikov: db/object wise no, just more reviews once I've addressed the remaining comments from mriedem
17:14:13 don't we need this one: https://review.openstack.org/#/c/385641/2
17:14:14 ?
17:14:52 johnthetubaguy: yes, that's what they are talking about
17:15:01 Yes, but smcginnis also wanted the other one in before doing a new client release.
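[editor's note] The deprecation mriedem describes at 17:10:18 is just a startup check that logs a warning when nova's configured Cinder endpoint is still v2, as was done for v1 before. A minimal sketch of that idea — the `warn_if_cinder_v2` helper is hypothetical, though the `volumev2:cinderv2:publicURL` shape mirrors nova's `[cinder] catalog_info` option:

```python
import logging

LOG = logging.getLogger(__name__)


def warn_if_cinder_v2(catalog_info):
    """Log a deprecation warning when nova is pointed at the Cinder v2 API.

    `catalog_info` is assumed to look like 'volumev2:cinderv2:publicURL'
    (service_type:service_name:endpoint_type), as in nova's config.
    """
    service_type = catalog_info.split(':')[0]
    if service_type == 'volumev2':
        LOG.warning('Support for the Cinder v2 API is deprecated; '
                    'configure nova to use the v3 endpoint instead.')
        return True
    return False
```

The actual patch would live in nova's cinder client module; this only illustrates the check's shape.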
17:15:01 unless that's old
17:15:15 jungleboyj: has anyone noticed the state of https://review.openstack.org/#/c/385641/ though?
17:15:15 jungleboyj: but I think we need this one in the release: https://review.openstack.org/#/c/385641/2
17:15:17 lyarwood: cool; I guess you're working on getting those addressed and that will be up shortly, right?
17:15:32 #link https://review.openstack.org/#/c/385641
17:15:36 ildikov: yup I'm working on it now
17:15:40 heh, the cinderclient target for nova is https://review.openstack.org/#/c/385641/ which needs work
17:15:44 jungleboyj: I see some activity on it, do we need to look more into who can take care of it?
17:15:48 mriedem
17:15:50 lyarwood: coolio, tnx!
17:15:51 +1
17:16:02 unless https://review.openstack.org/#/c/385629/ supersedes https://review.openstack.org/#/c/385641/ ?
17:16:14 johnthetubaguy: Good point. Yes, that needs to get addressed as well I think.
17:16:15 nvm it's a series
17:16:21 so you need both
17:16:35 ildikov: I thought we had someone on it but then it got quiet again.
17:16:38 so can the cinder team work on getting https://review.openstack.org/#/c/385641/ merged and released?
17:16:58 hmm, didn't we have another get_highest_version patch merged some time before the last client release?
17:17:08 #info need https://review.openstack.org/#/c/385641/ merged and released in cinderclient before nova can use cinder microversions
17:17:25 mriedem: Yeah, I will corner smcginnis and get it worked out.
17:17:34 jungleboyj: tnx!
17:17:46 Do we just need https://review.openstack.org/#/c/385641/ ?
17:17:52 ildikov: Welcome.
17:18:03 jungleboyj: it's a series
17:18:22 so you need both
17:18:29 we need to know if we can use the client to talk v3.27 or not
17:18:29 mriedem: so we have this merged: https://review.openstack.org/#/c/425785/
17:18:53 ildikov: i'm not sure why we can't use that then
17:18:55 is scottda around?
17:19:04 can he explain the state of the changes for cinderclient and what nova needs?
17:19:07 mriedem: so I think we can use that
17:19:14 then what are the other patches for?
17:19:20 mriedem: and this is the bugfix that holds back the client release: https://review.openstack.org/#/c/385629/
17:19:28 mriedem: so I think we only need the bugfix to land
17:19:33 https://bugs.launchpad.net/python-cinderclient/+bug/1632872
17:19:34 Launchpad bug 1632872 in python-cinderclient "discover_versions is broken" [Undecided,In progress] - Assigned to Justin A Wilson (justin-wilson)
17:19:40 yeah, i pointed that bug out a long time ago i think
17:19:45 october
17:19:47 geez
17:19:52 via code inspection
17:20:03 yeah so we can't use https://review.openstack.org/#/c/425785/ because it's broken
17:20:07 anyway
17:20:14 i think you guys know what you need to focus on for the client release
17:20:25 what a mess
17:20:43 * ildikov is in tears :(
17:20:54 So, get https://review.openstack.org/#/c/385629/ and https://review.openstack.org/#/c/385641/2 landed
17:21:13 jungleboyj we can talk in the Cinder channel, I don't think it's that simple
17:21:23 and those actually have some issues anyway
17:21:29 jgriffith: :-( Ok.
17:21:34 so let's just leave that with you all
17:21:41 Sorry, just getting up to speed with that stuff.
17:21:41 like they'll blow up if you load them :)
17:21:46 call them I mean
17:21:47 I think you understand what we need right?
17:21:56 jgriffith: Halt and Catch Fire
17:22:05 jungleboyj: get the first one landed and see what the other one is; I believe that's obsolete, and we need to figure out whether the ideas for version discovery are good enough or not
17:22:28 can I pass "3.27" or should I pass "3.0" into the client factory method thingy
17:22:37 I think that's the aim
17:22:45 based on what the server and client both support
17:22:47 johnthetubaguy: I believe we do know that we need a cinderclient that can tell Nova what version Cinder can offer to us
17:23:14 Yep.
17:23:29 johnthetubaguy: on the Nova side we should get the switch to base v3 landed as that's a VERY small change and seems to be working
17:23:38 ildikov: it's in the gate
17:23:44 johnthetubaguy: I will upload the v2 deprecation patch this week
17:24:07 mriedem: \o/ :) thanks!
17:24:39 so once we've worked out how the microversion stuff will finally be handled I'll upload the Nova side changes for that
17:25:27 we should also start to look into the detach case code-wise
17:25:39 jgriffith has a PoC up for that
17:25:56 and get the BDM patches landed that lyarwood is working on right now
17:26:03 version = 3.27 ? not client.server_supports_version(3.27) : 3.0 right?
17:26:17 I just abandoned them, but: https://review.openstack.org/#/c/438744/ and https://review.openstack.org/#/c/438750/
17:26:42 jgriffith: i'm not sure why you dropped https://review.openstack.org/#/c/438750/
17:26:51 jgriffith: i don't think there was a duplicate for that?
17:26:53 because it's part of a chain on the first one :)
17:27:05 mriedem: johnthetubaguy: is there anything with the detach refactor from lyarwood that's questionable?
17:27:05 but you can just rebase that onto the other series from lyarwood right?
17:27:05 I'll redo it and lose the dep
17:27:18 ildikov: i haven't looked at it
17:27:20 and I did it slightly differently than lyarwood so the follow-on wouldn't work anyway
17:27:37 ildikov: just needs more review I think
17:27:41 mriedem yes, I can/will rework it to rebase against lyarwood's patches
17:27:55 mriedem: it would be good to figure out whether that will work out or not so that we don't get delayed for nothing
17:27:56 jgriffith: you could rebase on my series and just move everything down into the driver_bdm detach call
17:28:15 lyarwood yeah, just waiting to see movement on those before doing so :)
17:28:16 lyarwood: has mdbooth gone through your refactor?
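[editor's note] The pseudo-ternary at 17:26:03 boils down to: use microversion 3.27 when both the client and the server can speak it, otherwise fall back to the 3.0 base. `server_supports_version` in the quote is not a real python-cinderclient call; the sketch below is purely illustrative, using version tuples to stand in for whatever the client's version discovery returns:

```python
# Hedged sketch of the microversion negotiation discussed above.
DESIRED = (3, 27)  # microversion carrying the new attach/detach APIs
BASE = (3, 0)      # lowest v3 microversion, always safe to request


def pick_microversion(client_max, server_max, desired=DESIRED, base=BASE):
    """Return `desired` if both sides support it, else fall back to `base`."""
    if client_max >= desired and server_max >= desired:
        return desired
    return base
```

So `pick_microversion((3, 27), (3, 40))` gives `(3, 27)`, while against an older server `pick_microversion((3, 27), (3, 10))` falls back to `(3, 0)` — which is exactly the "can we use 3.27 or not" question the client release needs to answer.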
17:28:24 mriedem: Nope
17:28:31 ah there he is
17:28:39 ok honestly i need more eyes from nova people on this stuff, b/c i'm spread thin right now
17:28:42 jgriffith: kk understood
17:29:04 mriedem lyarwood: I can look hard at this next week
17:29:08 mriedem: do you have a list of Nova people who you would like to see these?
17:29:09 thanks
17:29:13 ildikov: mdbooth for one
17:29:15 so I added these patches into https://etherpad.openstack.org/p/pike-nova-priorities-tracking
17:29:20 btw the changes lyarwood proposed will greatly simplify a few things for us FWIW
17:29:22 mriedem: I can start to get the patches to their attention
17:29:32 he knows
17:29:46 johnthetubaguy: I saw, looks great, thank you!
17:29:49 so before we get ahead of ourselves,
17:30:01 i'd be happy if the cinder team can get the client stuff done, and the nova team can get the data model changes done
17:30:10 i don't want to lose focus on the first steps here
17:30:13 +1
17:30:25 because "review this and this other thing and you probably want to look at this too right now" is just overwhelming atm
17:30:26 mriedem understood, I'm trying to look at that now; but I'm not sure how *quickly* that's going to go
17:30:37 mriedem: +1, I think we have those items listed as first ones everywhere
17:30:48 mriedem: will annoy the Cinder team with the client part
17:30:50 mriedem smcginnis ildikov shall we create a sequence list?
17:31:04 something like:
17:31:05 1. cinderclient fixes
17:31:10 2. lyarwood's refactor patches
17:31:13 i believe the sequence on the nova side is just the data model changes, detach refactor, then jgriffith's detach with v3 stuff
17:31:14 right?
17:31:16 3. nova default to v3
17:31:22 jgriffith: it's roughly on top of this etherpad: https://etherpad.openstack.org/p/cinder-nova-api-changes
17:31:37 jgriffith: yeah we're on the same page
17:31:48 mriedem correct, but I'm calling out everything so we're all on the same page
17:31:55 ok
17:31:55 mriedem: right, the v3 change is desired independently from the new attach/detach API
17:32:04 yeah, the micro-version stuff is needed before detach will work, I guess that could be two patches there
17:32:05 because there's sort of a lot of irons in the fire right now
17:32:08 jgriffith: +1
17:32:41 johnthetubaguy: how close do you feel the spec is at this point? i know mdbooth has been reviewing, but i haven't gone through it since before the ptg
17:32:46 so the list is in the spec work items list: https://review.openstack.org/373203
17:32:49 #action jgriffith take a shot at the version discovery stuff in the client
17:32:55 can't believe I'm saying that
17:33:03 it feels almost there, but mdbooth and I found a... nit shall we say
17:33:10 micro-versions mumble mumble grumble
17:33:25 jgriffith: from scratch, or take over scottda's changes?
17:33:25 jgriffith: all we need is: can we use 3.27 or not
17:33:35 johnthetubaguy: Probably something to be consciously swept under the rug, I think
17:33:36 jgriffith: I'm blushing reading that :)
17:33:43 mriedem we'll see, right now considering "loosely" based on scottda's stuff
17:33:55 jgriffith: ok whatever works for you guys
17:33:55 mdbooth: yeah, my update basically did just that
17:34:02 johnthetubaguy: I'd be happy documenting that we can't do that
17:34:09 Cool, haven't looked yet
17:34:23 I'd like to model it more closely on what exists on the Nova side... anyway, we'll see
17:34:39 so I don't feel the spec has had many cinder eyes on it post PTG
17:34:49 I feel like lyarwood and mdbooth have had a good look at many of the details in the spec
17:35:03 johnthetubaguy: I remember you were asking about volume migration or something like that on the Cinder channel today
17:35:06 feels more like polish and re-writing now
17:35:24 ildikov: it's part of swap volume in the spec
17:35:37 johnthetubaguy: ok cool
17:35:53 johnthetubaguy: is there anything worth discussing here so you can progress?
17:36:12 ildikov: basically there is no way migrate works with multi-attach, but it doesn't block the new API
17:36:27 the key point is attachments I think
17:36:36 johnthetubaguy: fair enough, that should not block the new API work
17:36:45 I am assuming that migrate_volume_completion deletes attachments as required
17:36:47 we can write up a separate spec with multi-attach corner cases and things that are not going to work
17:36:54 as long as that is true, I think we are good
17:37:18 ildikov: yup
17:37:23 johnthetubaguy: and are we sure that's deleting?
17:37:38 johnthetubaguy: Have you proposed any of the changes to the cinder api we discussed in the nova spec?
17:37:55 the migration completion callback
17:38:13 mdbooth: no, I am basically going to ignore the problem for now, or try to
17:38:26 let's take that conversation to a separate spec
17:38:27 johnthetubaguy: Ok, will look later. That's going to require changes on both sides in any case.
17:38:40 ildikov: that's my question really
17:38:56 johnthetubaguy: if there's anything we could/should do in Cinder we can look into that
17:39:03 so basically, how does migrate work with the new API
17:39:08 I wrote up in the spec how I think it works
17:39:09 jgriffith: jungleboyj: smcginnis: do you have an answer to that migrate_volume_completion question?
17:39:10 but I dunno if that's how it's implemented
17:39:34 line 251 in here: https://review.openstack.org/#/c/373203/19/specs/pike/approved/cinder-new-attach-apis.rst
17:39:45 sorry..
17:39:49 scrolling back
17:39:54 ildikov: tl;dr we want to pass attachment ids instead of volume ids everywhere in the volume migration flow
17:39:59 mdbooth: sweet, that would be good, I added a comment on why I put the instance into the ERROR state
17:40:11 ildikov I'll have to look, that was vish and avishay that put those together
17:40:29 I am hoping basically it does the delete attachment stuff for us
17:40:33 mdbooth: I think in general that should be the desire from the Cinder side as well to use attachment_id
17:40:39 but something tells me it just breaks in a heap right now
17:40:40 ildikov: Sweet
17:40:52 mdbooth: need to look into the current state of things though
17:41:07 yeah, I don't think we should change the API as part of this spec, it covers enough changes as it is
17:41:19 johnthetubaguy I'll look at your spec today as well if that helps
17:41:31 jgriffith: that would be great, need lots of eyes on that
17:42:15 jgriffith: I think it's definitely good to have the Cinder side view on it
17:42:47 Incidentally, I still think that for multi-attach with shared connections we're going to end up locking exclusively on the Nova side
17:42:52 mriedem: to answer your question, my main worry is how we keep swap volume working, other than that we are good, so it's tempting to merge it with open questions at this point
17:42:53 Don't know if that's worth bringing up here
17:43:18 mdbooth: even when it's the API saying what should happen, the lock is still held on the nova side
17:43:26 mdbooth: eventually we will need to talk about that
17:43:28 I want to leave that till after this API move stuff
17:44:04 it's totally a requirement to get multi-attach working
17:44:07 johnthetubaguy: Ok, sure. But basically the problem is ordering. The only thing which can know everything about call order is the nova compute process.
17:44:26 mdbooth: +!
17:44:28 oops
17:44:30 +1
17:44:48 So the api can say: "... and you're the last one", but by the time it executes it might not be.
17:45:05 mdbooth: if you could list concerns, ideas, anything here that would be great: https://etherpad.openstack.org/p/cinder-nova-api-changes
17:45:17 mdbooth: correct, that's why I proposed to re-query that fact with the lock held
17:45:30 mdbooth: the lock has to be nova side
17:45:38 johnthetubaguy: I might have to squint at that again too.
17:45:48 * johnthetubaguy always fails at not talking about something
17:46:01 * mdbooth too, sorry
17:46:01 mdbooth: that's the etherpad for getting multi-attach to work, so we're trying to add tasks there and action items too so we don't miss anything
17:46:17 mdbooth: no worries, it's so tempting to try to fix this stuff
17:46:35 so... can we get folks to review the spec
17:46:47 make sure it says what we said at the PTG, and matches the cinder view of the world?
17:46:57 mdbooth: I agree on that part re the last attachment, we need to figure that out
17:47:11 #help please review the nova spec: https://review.openstack.org/#/c/373203
17:47:11 johnthetubaguy: +1
17:47:32 smcginnis: jungleboyj: please take a look at the Nova spec ^^
17:47:42 * mdbooth is still trying to pimp this change: https://review.openstack.org/#/c/383859/
17:47:50 as we have jgriffith signed up already :)
17:47:55 It will require reworking for the new api, but I think the approach is solid
17:48:37 mdbooth: reworking the new API?
17:48:52 ildikov "reworking *for* the new API"
17:49:07 johnthetubaguy: Adding it to my list.
17:49:09 mad rush to get every possible change related to attach/detach in NOW before we change the API version, it seems
17:49:13 jgriffith: ah, ok, tnx :)
17:49:25 not a great idea IMO
17:50:02 but haven't looked at the patch yet either so...
17:50:14 jgriffith: do you mean from a testing perspective?
17:50:30 mdbooth: that's not applying to cinder volumes yet, it's just nova ephemeral? or did I misread that?
17:50:43 johnthetubaguy: No, that's cinder volumes on shared filesystems
17:50:51 I mean from a too-many-moving-levers perspective, we'll never get things settled enough to actually finish the new API work if we keep adding "one more thing"
17:51:03 mdbooth: OK, so I guess I don't understand it then, I need to read more
17:51:22 jgriffith: ah, ok
17:51:35 johnthetubaguy: I believe it's race free and supports multi-attach (at least notionally).
17:51:53 jgriffith: knowing about issues and having them on the radar is good IMHO
17:52:04 mdbooth: my worry is the locking is too technology-specific, the previous proposal at the PTG was technology agnostic, I should get on a hangout one morning to work out what's different between the two approaches
17:52:06 ildikov sure, always
17:52:08 Effectively does reference counting with additional sanity checks on the compute side.
17:52:11 mdbooth: they might actually be the same
17:52:23 jgriffith: but we need to get progress on the items you listed earlier before moving on to solve others
17:52:31 johnthetubaguy: Sounds like a plan.
17:52:31 mdbooth: both approaches did that
17:52:39 mdbooth: you around Monday?
17:53:03 johnthetubaguy: Tuesday is better. Let's take that offline.
17:53:05 #action mdbooth and johnthetubaguy to work out how their approaches are the same and how they are different
17:53:11 mdbooth: ack
17:54:04 mdbooth: so the instance going into ERROR state, I think that was the only other thing you mentioned
17:54:07 in the spec
17:54:19 we have 6 minutes left of today's meeting
17:54:22 johnthetubaguy: Yeah, I think that's a detail that will come out in the wash.
17:54:27 I forgot to say the migration object should also go into the error state, as needed
17:54:35 but anyways, updated the spec with that
17:54:37 I was more observing the current behaviour, rather than commenting on what it should be.
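[editor's note] The ordering problem mdbooth and johnthetubaguy discuss — "you're the last one" can be stale by the time it's acted on, so the fact must be re-queried with the Nova-side lock held — can be sketched as below. Everything here is illustrative: `list_attachments`, `do_disconnect`, and `delete_attachment` are hypothetical callables standing in for the real compute/Cinder plumbing.

```python
import threading
from collections import defaultdict

# Hypothetical per-volume locks held on this compute host.
_volume_locks = defaultdict(threading.Lock)


def detach(volume_id, attachment_id,
           list_attachments, do_disconnect, delete_attachment):
    """Detach, tearing down the shared connection only for the last attachment.

    The 'am I the last one?' answer is re-queried *under the lock*: any
    answer fetched before acquiring it may be stale by the time we act.
    """
    with _volume_locks[volume_id]:
        remaining = list_attachments(volume_id)
        if remaining == [attachment_id]:
            # We are the last attachment on this host: safe to disconnect
            # the shared backend connection before deleting the attachment.
            do_disconnect(volume_id)
        delete_attachment(attachment_id)
```

This is effectively the reference counting jgriffith mentions, with the count checked only inside the critical section so concurrent detaches serialize per volume.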
17:54:49 ah, got ya
17:55:03 I didn't see where we set the instance to an error state.
17:55:11 yeah, I think we need to do better there, and put the instance into ERROR if we have a failed rollback; if rollback works, then you go back to ACTIVE
17:55:25 we already do that for certain other exceptions
17:55:39 like say you can't talk to the other compute node over RPC and it never responds, etc
17:56:05 basically any failed rollback should go into the error state
17:56:15 (there is a bug where sometimes it gets stuck in a migrating state, but it should be in the ERROR state)
17:56:32 ildikov: what else do we need to cover, we have the work for this week I think
17:56:40 let's recap that I guess
17:56:52 1) cinder client release
17:56:52 johnthetubaguy: I think we're good with the action items
17:57:05 johnthetubaguy: please review the v3 switch patch, it's really tiny
17:57:10 2) BDM patches
17:57:18 ildikov: it's already +W'd and in the gate I think
17:57:39 johnthetubaguy: I'm not myself today :)
17:57:47 ildikov: no worries
17:57:48 so yep
17:58:04 3) review nova spec
17:58:18 and 4) is detach refactor
17:58:24 4) patches to use new cinder client release, + detach
17:58:28 yeah,
17:58:35 detach depends on the new cinder client release
17:58:42 I think
17:58:44 the refactor work should not
17:58:58 right, I guess I called them BDM patches
17:59:09 so +1
17:59:25 ah ok, I thought those are more the attachment_id and UUID changes
17:59:36 it looks good then
18:00:02 there's a few, it's in the etherpad
18:00:09 I guess we are out of time
18:00:32 yep, I think we have enough work to get done until the next meeting
18:00:50 anything else from anyone that's urgent for today?
18:01:32 I take this as a no :)
18:01:35 * johnthetubaguy goes for his dinner
18:01:48 thanks everyone! please do reviews!
18:02:09 have a good rest of the day!
18:02:14 #endmeeting