*** ducttape_ has quit IRC | 00:07 | |
*** lamt has joined #openstack-meeting-cp | 00:12 | |
*** ducttape_ has joined #openstack-meeting-cp | 00:29 | |
*** lamt has quit IRC | 00:31 | |
*** ducttape_ has quit IRC | 00:52 | |
*** ducttape_ has joined #openstack-meeting-cp | 00:53 | |
*** ducttape_ has quit IRC | 01:14 | |
*** ducttape_ has joined #openstack-meeting-cp | 01:34 | |
*** ducttape_ has quit IRC | 01:51 | |
*** ducttape_ has joined #openstack-meeting-cp | 01:52 | |
*** ducttape_ has quit IRC | 02:01 | |
*** ducttape_ has joined #openstack-meeting-cp | 02:02 | |
*** diablo_rojo has quit IRC | 02:38 | |
*** ducttape_ has quit IRC | 02:46 | |
*** ducttape_ has joined #openstack-meeting-cp | 02:47 | |
*** ducttape_ has quit IRC | 02:52 | |
*** gouthamr has quit IRC | 02:53 | |
*** lamt has joined #openstack-meeting-cp | 05:02 | |
*** lamt has quit IRC | 05:05 | |
*** rderose has quit IRC | 05:21 | |
*** rarcea has joined #openstack-meeting-cp | 07:24 | |
*** rarcea has quit IRC | 07:53 | |
*** rarcea has joined #openstack-meeting-cp | 08:05 | |
*** DFFlanders has joined #openstack-meeting-cp | 08:41 | |
*** DFFlanders has quit IRC | 10:51 | |
*** sdague has joined #openstack-meeting-cp | 11:46 | |
*** ducttape_ has joined #openstack-meeting-cp | 13:52 | |
*** breton has quit IRC | 14:07 | |
*** ducttape_ has quit IRC | 14:08 | |
*** lamt has joined #openstack-meeting-cp | 14:11 | |
*** lamt has quit IRC | 14:12 | |
*** gouthamr has joined #openstack-meeting-cp | 14:19 | |
*** lamt has joined #openstack-meeting-cp | 14:25 | |
*** breton has joined #openstack-meeting-cp | 14:32 | |
*** ducttape_ has joined #openstack-meeting-cp | 14:57 | |
*** lamt has quit IRC | 15:01 | |
*** lamt has joined #openstack-meeting-cp | 15:19 | |
*** lamt has quit IRC | 15:33 | |
*** bswartz has quit IRC | 15:34 | |
*** lamt has joined #openstack-meeting-cp | 15:34 | |
*** rakhmerov has quit IRC | 15:43 | |
*** ativelkov has quit IRC | 15:44 | |
*** ativelkov has joined #openstack-meeting-cp | 15:46 | |
*** rakhmerov has joined #openstack-meeting-cp | 15:49 | |
*** diablo_rojo has joined #openstack-meeting-cp | 15:57 | |
*** lamt has quit IRC | 15:58 | |
*** diablo_rojo has quit IRC | 15:59 | |
*** ayoung has quit IRC | 16:00 | |
*** diablo_rojo has joined #openstack-meeting-cp | 16:01 | |
*** markvoelker has quit IRC | 16:07 | |
*** ducttape_ has quit IRC | 16:08 | |
*** ducttape_ has joined #openstack-meeting-cp | 16:09 | |
*** ayoung has joined #openstack-meeting-cp | 16:10 | |
*** markvoelker has joined #openstack-meeting-cp | 16:12 | |
*** lamt has joined #openstack-meeting-cp | 16:18 | |
*** lamt has quit IRC | 16:19 | |
*** lamt has joined #openstack-meeting-cp | 16:31 | |
*** ducttape_ has quit IRC | 16:39 | |
*** ducttape_ has joined #openstack-meeting-cp | 16:39 | |
*** mriedem has joined #openstack-meeting-cp | 16:58 | |
ildikov | #startmeeting cinder-nova-api-changes | 17:00 |
openstack | Meeting started Thu Mar 2 17:00:30 2017 UTC and is due to finish in 60 minutes. The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot. | 17:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 17:00 |
*** openstack changes topic to " (Meeting topic: cinder-nova-api-changes)" | 17:00 | |
openstack | The meeting name has been set to 'cinder_nova_api_changes' | 17:00 |
jungleboyj | o/ | 17:00 |
lyarwood | o/ | 17:00 |
ildikov | DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo gsilvis xyang1 raj_singh lyarwood | 17:00 |
breitz | o/ | 17:00 |
mriedem | lyarwood: | 17:00 |
mriedem | oh he's here | 17:01 |
mriedem | o/ | 17:01 |
ildikov | :) | 17:01 |
ildikov | hi all :) | 17:01 |
lyarwood | mriedem: I can leave if you want to say something ;) | 17:01 |
jungleboyj | ildikov: You survived MWC ? | 17:01 |
mriedem | lyarwood: no i want you to be very involved | 17:01 |
mriedem | ildikov: i'm out in about a half hour | 17:02 |
mriedem | fyi | 17:02 |
ildikov | jungleboyj: I have very small brain activity right now, but it's enough to keep me alive and breathig :) | 17:02 |
jungleboyj | ildikov: :-) | 17:02 |
ildikov | mriedem: thanks for the note, then let's start with the activities in Nova | 17:02 |
ildikov | the remove check_attach patch got merged, thanks to everyone who helped me out! | 17:03 |
ildikov | one small thing is out of the way | 17:03 |
ildikov | we have a bunch of things up for review under the Nova bp: https://review.openstack.org/#/q/topic:bp/cinder-new-attach-apis | 17:03 |
ildikov | lyarwood is working on some refactoring on the BDM and detach | 17:04 |
* johnthetubaguy wanders into the room a touch late | 17:04 | |
lyarwood | yeah, just trying to get things cleaned up before we introduce v3 code | 17:04 |
lyarwood | the bdm UUID stuff isn't directly related btw, we've been trying to land it for a few cycles | 17:05 |
lyarwood | but I can drop it if it isn't going to be used directly in the end here | 17:05 |
ildikov | johnthetubaguy: mriedem: is there anything in that refactor code that would need to be discussed? Or the overall direction looks good? | 17:05 |
johnthetubaguy | moving detach into the BDM makes sense to me | 17:06 |
johnthetubaguy | I would love to see the number of if use_new_api things very limited to a single module, ideally | 17:06 |
mriedem | move detach into the bdm? | 17:06 |
mriedem | i haven't seen that | 17:06 |
lyarwood | into the driver bdm | 17:06 |
mriedem | oh | 17:06 |
lyarwood | https://review.openstack.org/#/c/439520/ | 17:06 |
johnthetubaguy | yeah, sorry, that | 17:06 |
mriedem | i saw the change to call detach before destroying the bdm, and left a comment | 17:06 |
mriedem | makes sense, but i feel there are hidden side effects | 17:07 |
lyarwood | there's still a load of volume_api code in the compute api that I haven't looked at yet | 17:07 |
mriedem | because that seems *too* obvious | 17:07 |
johnthetubaguy | it needs splitting into two patches for sure | 17:07 |
mriedem | on the whole i do agree that having the attach code in the driver bdm and the detach code in the compute manager separately has always been confusing | 17:07 |
lyarwood | johnthetubaguy: https://review.openstack.org/#/c/440693/1 | 17:07 |
mriedem | yes https://review.openstack.org/#/c/440693/ scares me | 17:07 |
mriedem | not for specific reasons | 17:08 |
mriedem | just voodoo | 17:08 |
lyarwood | bdm voodoo | 17:08 |
johnthetubaguy | heh, yeah | 17:08 |
mriedem | the cinder v3 patch from scottda seems to have stalled a bit | 17:09 |
mriedem | i think that's an easy add which everyone agrees on, | 17:09 |
mriedem | and that should also make cinder v3 the default config | 17:09 |
johnthetubaguy | yeah, +1 keeping that one moving | 17:09 |
johnthetubaguy | thats the one with the context change, etc | 17:09 |
johnthetubaguy | well, on top of that | 17:09 |
mriedem | yeah the context change was merged in ocata i think | 17:10 |
mriedem | it's already merged anyway | 17:10 |
johnthetubaguy | yeah, that should keep moving | 17:10 |
lyarwood | should we track the v3 by default patches against the bp? | 17:10 |
ildikov | mriedem: I can pick the v3 switch up | 17:10 |
johnthetubaguy | lyarwood: probably | 17:10 |
ildikov | lyarwood: we have them on this etherpad: https://etherpad.openstack.org/p/cinder-nova-api-changes | 17:11 |
lyarwood | ildikov: thanks | 17:11 |
ildikov | I wouldn't track it with the BP as my impression was that we want that in general | 17:11 |
mriedem | lyarwood: we want the cinder v3 thing regardless | 17:11 |
mriedem | ildikov: right +1 | 17:11 |
lyarwood | understood | 17:12 |
mriedem | so to keep things moving at a minimum this week, | 17:12 |
mriedem | i think we do the bdm uuid and attachment_id changes | 17:12 |
ildikov | microversions were/are the question | 17:12 |
mriedem | and then also cinder v3 | 17:12 |
mriedem | those are pretty easy changes to get done this week | 17:12 |
mriedem | as nothing is using them yet | 17:12 |
johnthetubaguy | we just request 3.0 to start with I assume? | 17:12 |
ildikov | I wonder whether we can switch to v3 and then add the negotiation for the mv | 17:12 |
ildikov | mriedem: +1 | 17:13 |
*** stvnoyes has joined #openstack-meeting-cp | 17:13 | |
mriedem | mv? | 17:13 |
ildikov | microversion :) | 17:14 |
johnthetubaguy | yeah, I thought we said just use the base version for now, and get that passing in the gates right? | 17:14 |
mriedem | johnthetubaguy: sure yeah | 17:14 |
* ildikov is lazy to type the whole word all the time... :) | 17:14 | |
mriedem | let's not overcomplicate the base change | 17:14 |
ildikov | my thinking as well | 17:14 |
johnthetubaguy | mriedem: +1 | 17:14 |
mriedem | default to cinder v3, make sure it's there | 17:14 |
mriedem | release note | 17:14 |
mriedem | etc etc | 17:14 |
johnthetubaguy | sounds like we have lyarwood's BDM schema changes that can keep going also? | 17:14 |
mriedem | yes | 17:15 |
ildikov | and the earlier we switch the more testing we get even if 3.0 should be the same as v2 | 17:15 |
mriedem | those patches are what we should get done this week | 17:15 |
lyarwood | I can get that done | 17:15 |
johnthetubaguy | yeah, sounds good | 17:15 |
mriedem | bdm schema changes (attachment_id and uuid) and cinder v3 | 17:15 |
johnthetubaguy | yup, yup | 17:15 |
mriedem | i think we want to deprecate nova using cinder v2 also, but that can be later and separate | 17:15 |
johnthetubaguy | so, next week (still ignoring the spec), whats the next bit? | 17:16 |
mriedem | but start the timer in pike | 17:16 |
johnthetubaguy | I guess thats lyarwood's refactor stuff? | 17:16 |
mriedem | probably, and version negotiation | 17:16 |
mriedem | to make sure 3.27 is available | 17:16 |
johnthetubaguy | deciding yes or no to the BDM voodoo worries | 17:16 |
johnthetubaguy | true, we can put the ground work in for doing detach | 17:16 |
johnthetubaguy | so if we are good with that, I have spec open questions to run through? | 17:18 |
ildikov | we can move forward with the attach/detach changes | 17:18 |
johnthetubaguy | no... only detach | 17:18 |
johnthetubaguy | leaving attach to the very, very end I think | 17:18 |
ildikov | and have the microversion negotiation as a separate small change that needs to make it before anything else anyhow | 17:18 |
ildikov | johnthetubaguy: I meant the code to see how it works and not to merge everything right now | 17:19 |
ildikov | johnthetubaguy: but agree on finalizing detach first | 17:19 |
johnthetubaguy | yeah, could do attach WIP on the end, so we can test detach for real | 17:19 |
johnthetubaguy | we probably should actually | 17:19 |
ildikov | my point was mainly that we should not stop coding detach just because the mv negotiation is not fully baked yet | 17:20 |
johnthetubaguy | mv negotiation I thought was easy though | 17:20 |
ildikov | johnthetubaguy: jgriffith has a WIP up that does attach | 17:20 |
johnthetubaguy | either 3.27 is available, or its not right? | 17:20 |
jungleboyj | Right. | 17:20 |
jungleboyj | And the agreement was we would fall back to base V3 if 3.27 isn't available. | 17:21 |
ildikov | johnthetubaguy: I need to check whether the cinderclient changes are merged to get the highest supported version | 17:21 |
johnthetubaguy | is_new_attach_flow_available = True or False | 17:21 |
ildikov | something like that, that's easy | 17:21 |
johnthetubaguy | ildikov: we don't want the highest supported version though, we want 3.27, or do you mean we need cinderclient to support 3.27? | 17:21 |
ildikov | just currently Nova tells Cinder what version it wants | 17:21 |
johnthetubaguy | right, we currently always want the base version | 17:22 |
jungleboyj | Right, need cinderclient to say what it can support. I am not sure if that code is in yet. | 17:22 |
ildikov | jungleboyj: me neither | 17:22 |
johnthetubaguy | we can get the list of versions and see if 3.27 is available | 17:22 |
johnthetubaguy | ah, right, we need that I guess | 17:22 |
johnthetubaguy | I mean its a simple REST call, so we could hold that patch if it stops us moving forward I guess | 17:23 |
johnthetubaguy | anyways, that seems OK | 17:23 |
johnthetubaguy | thats next weeks thing to chase | 17:23 |
ildikov | johnthetubaguy: it's just the matter of agreeing how we get what version is supported by Cinder and then act accordingly in Nova | 17:23 |
johnthetubaguy | mriedem: can you remember what we do in novaclient for that discovery bit? | 17:24 |
mriedem | https://review.openstack.org/#/c/425785/ | 17:24 |
ildikov | johnthetubaguy: so for now just go with base v3 and see what made it into Cinder and add the rest as soon as we can | 17:24 |
mriedem | the cinder client version discovery is merged, but not released | 17:24 |
mriedem | smcginnis: need ^ released | 17:24 |
johnthetubaguy | cool, thats simple then | 17:24 |
ildikov | ok, I remember now, it needed a few rounds of rechecks to make it in | 17:25 |
jungleboyj | Argh, mriedem beat me to it. | 17:25 |
mriedem | ildikov: so add that to the list of things to do - cinderclient release | 17:25 |
jungleboyj | mriedem: +2 | 17:25 |
jungleboyj | mriedem: You want me to take that over to Cinder? | 17:25 |
johnthetubaguy | #action need new cinder client release so we have access to get_highest_client_server_version https://review.openstack.org/#/c/425785/ | 17:25 |
mriedem | jungleboyj: sure, anyone can propose the release | 17:26 |
mriedem | smcginnis has to sign off | 17:26 |
jungleboyj | I'll take that. | 17:26 |
mriedem | make sure you get the semver correct | 17:26 |
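The fallback rule agreed above (request 3.27 when it's available, otherwise stay on base v3) is simple enough to sketch. The following is a hypothetical illustration only; names like `pick_request_version` are made up for the example and are not the actual Nova implementation:

```python
# Sketch of the negotiation discussed above: given the highest microversion
# both cinderclient and the Cinder server support, decide whether the new
# attach/detach flow (3.27) can be used, falling back to base v3 otherwise.
NEW_FLOW_MICROVERSION = (3, 27)

def parse_version(version_str):
    """Turn a version string like '3.27' into a comparable tuple."""
    return tuple(int(part) for part in version_str.split('.'))

def is_new_attach_flow_available(highest_supported):
    """True when both client and server support at least 3.27."""
    return parse_version(highest_supported) >= NEW_FLOW_MICROVERSION

def pick_request_version(highest_supported):
    """Request 3.27 when possible, otherwise the base v3 version."""
    if is_new_attach_flow_available(highest_supported):
        return '3.27'
    return '3.0'
```

In practice the `highest_supported` input would come from the cinderclient version-discovery helper (`get_highest_client_server_version`) referenced in the action item above.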
ildikov | mriedem: added a short list to the etherpad too with the small items in the queue we agreed on | 17:27 |
johnthetubaguy | #link https://etherpad.openstack.org/p/cinder-nova-api-changes | 17:27 |
johnthetubaguy | I would love to go over the TODOs in the spec | 17:29 |
johnthetubaguy | does that make sense to do now? | 17:30 |
ildikov | johnthetubaguy: I have to admit I couldn't get to the very latest version of the spec yet | 17:30 |
johnthetubaguy | thats fine, this is basically loose ends that came up post ptg | 17:30 |
johnthetubaguy | #link https://review.openstack.org/#/c/373203/17/specs/pike/approved/cinder-new-attach-apis.rst | 17:30 |
ildikov | johnthetubaguy: but we can start to go over the TODOs | 17:31 |
johnthetubaguy | the first one is the shared storage connection stuff | 17:31 |
johnthetubaguy | I am tempted to say we delay that until after we get all the other things sorted | 17:31 |
johnthetubaguy | lyarwood: do you think thats possible? ^ | 17:31 |
lyarwood | +1 | 17:31 |
lyarwood | yeah | 17:31 |
johnthetubaguy | there just seem to be complications we should look at separately there | 17:32 |
jungleboyj | That makes sense. We need everything else stabilized first. | 17:32 |
johnthetubaguy | cool, we can do that then, split that out | 17:32 |
ildikov | johnthetubaguy: that will only be problematic with multi-attach, right? | 17:32 |
johnthetubaguy | ildikov: I believe so, I think its new attachments, shared connections, multi-attach | 17:32 |
johnthetubaguy | #action split out shared host connections from use new attachment API spec | 17:33 |
johnthetubaguy | I think I am happy the correct things are possible | 17:33 |
johnthetubaguy | (which is probably a bad sign, but whatever) | 17:33 |
johnthetubaguy | so the next TODO | 17:33 |
johnthetubaguy | evacuate | 17:33 |
johnthetubaguy | how do you solve a problem like... evacuate | 17:33 |
* jungleboyj runs for the fire exit | 17:34 | |
johnthetubaguy | heh | 17:34 |
johnthetubaguy | so mdbooth added a great comment | 17:34 |
johnthetubaguy | if you evacuate an instance, thats cool | 17:34 |
johnthetubaguy | now detach a few volumes | 17:34 |
johnthetubaguy | then a bit later delete the instance | 17:34 |
johnthetubaguy | some time later the old host comes back from the dead and needs to get cleaned up | 17:35 |
johnthetubaguy | we kinda have a few options: | 17:35 |
johnthetubaguy | (1) leave attachments around so we can try detect them (although finding them could be quite hard from just a migration object, when the instance has been purged from the DB) | 17:35 |
johnthetubaguy | (2) delete attachments when we have success on the evacuate on the new destination host, and leave the hypervisor driver to be able to find the unexpected VMs (if possible) and do the right thing with the backend connections where it can | 17:36 |
johnthetubaguy | (3) just don't allow an evacuated host to restart until the admin has manually tidied things up (aka re-imaged the whole box) | 17:37 |
lyarwood | there's another option | 17:37 |
lyarwood | update the attachment with connector=None | 17:37 |
jungleboyj | johnthetubaguy: I think number two makes the most sense to me. If the instance has successfully evacuated to the new host those connections aren't needed on the original. Right? | 17:37 |
lyarwood | that's the same as terminate-connection right? | 17:37 |
johnthetubaguy | lyarwood: but why not just delete it? | 17:38 |
jungleboyj | johnthetubaguy: ++ | 17:38 |
lyarwood | johnthetubaguy: so the source host is aware that it has clean up | 17:38 |
lyarwood | has to* | 17:38 |
lyarwood | if it ever comes back | 17:38 |
johnthetubaguy | lyarwood: but the source host is only stored in the connector I believe? | 17:38 |
ildikov | I think attachment-delete will clean up the whole thing on the Cinder side | 17:38 |
ildikov | so no more terminate-connection | 17:39 |
johnthetubaguy | so the problem, not sure if I stated it, is finding the volumes | 17:39 |
johnthetubaguy | if the volume is detached... | 17:39 |
johnthetubaguy | after evacuate | 17:39 |
ildikov | is the problem here that when the host comes up then it tries to re-initiate the connection, etc? | 17:40 |
johnthetubaguy | we get the instance, we know it was evacuated, but we only check half of the volumes to clean up | 17:40 |
johnthetubaguy | its trying to kill the connection | 17:40 |
johnthetubaguy | the vm has moved elsewhere, so to stop massive data corruption, we have to kill the connection | 17:40 |
*** antwash has left #openstack-meeting-cp | 17:41 | |
johnthetubaguy | so I got distracted there | 17:41 |
johnthetubaguy | yeah, we have the migration object and instances to tell the evacuate happened | 17:42 |
ildikov | attachment-delete should take care of the target, the question is what happens when the host comes up again, I think that was raised last week, but I will read the comment properly now in the spec :) | 17:42 |
johnthetubaguy | but its hard to always get the full list of volume attachments we have pre-evacuate at that point | 17:42 |
lyarwood | johnthetubaguy: that's why I suggested only updating the attachment | 17:43 |
ildikov | on the other hand I guess if the original host is not really down, nor fenced properly, then we just look in the other direction like nothing happened and it's surely not our fault :) | 17:43 |
lyarwood | johnthetubaguy: each attachment is unique to an instance, host and volume right? | 17:43 |
johnthetubaguy | lyarwood: you mean reuse the same attachment on the new host? | 17:43 |
lyarwood | not instance sorry | 17:43 |
* johnthetubaguy is failing to visualise the change | 17:44 | |
lyarwood | johnthetubaguy: no, just keep the old ones with a blank connector that I assumed would kill any export / connection | 17:44 |
johnthetubaguy | so I don't think that fixes the problem we are facing | 17:44 |
ildikov | lyarwood: currently you cannot update without connector | 17:44 |
johnthetubaguy | if the user does detach, we can't find that volume on the evacuated host | 17:44 |
ildikov | lyarwood: and attachment-delete will kill the export/connection | 17:44 |
*** ducttape_ has quit IRC | 17:45 | |
lyarwood | johnthetubaguy: but that would be a detach of the volume on the new host using a new attachment | 17:45 |
johnthetubaguy | lyarwood: thats fine, the problem is we still need to tidy up on the old host, but we didn't know we had to | 17:46 |
lyarwood | johnthetubaguy: right so there's additional lookup logic needed there | 17:46 |
lyarwood | johnthetubaguy: that we don't do today | 17:46 |
johnthetubaguy | I guess we might need an API to list all valid attachments in Cinder, then get all attachments on the host from os.brick, and delete all the ones that shouldn't be there | 17:46 |
johnthetubaguy | lyarwood: the problem is we don't have that data right now | 17:46 |
johnthetubaguy | or there is no way to get that data, I mean | 17:46 |
johnthetubaguy | because we delete the records we wanted | 17:46 |
johnthetubaguy | I thinking about this function: https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L624 | 17:47 |
jungleboyj | johnthetubaguy: Wasn't that proposed at the PTG? | 17:47 |
johnthetubaguy | jungleboyj: we thought we could get the data at the PTG, but I don't think we can | 17:47 |
johnthetubaguy | if you delete the instance, or detach volumes, we lose the information we were going to use to look up what needs cleaning up | 17:47 |
johnthetubaguy | (I think...) | 17:47 |
ildikov | johnthetubaguy: how does it work today? | 17:48 |
lyarwood | johnthetubaguy: the data within nova yes, but we can reach out to cinder and ask for any additional attachments not associated with a bdm (from being detached etc) | 17:48 |
johnthetubaguy | lyarwood: I don't think cinder has those APIs today | 17:49 |
johnthetubaguy | lyarwood: if we delete the data in cinder it doesn't have it either | 17:49 |
johnthetubaguy | so... lets back up to what ildikov said, and I forgot | 17:49 |
johnthetubaguy | step 1: be as rubbish as today | 17:49 |
johnthetubaguy | step 2: be less rubbish | 17:49 |
johnthetubaguy | I was jumping to step 2 again, my bad | 17:50 |
johnthetubaguy | so lets leave step 2 for another spec | 17:50 |
ildikov | johnthetubaguy: I'm not saying not to be less rubbish | 17:50 |
johnthetubaguy | yep, yep | 17:50 |
ildikov | I just hoped we're not that rubbish today and might see something in the old flow we're missing in the new... | 17:50 |
johnthetubaguy | yeah, me too, thats my new proposal | 17:51 |
johnthetubaguy | I forgot, we have the instances on the machine | 17:51 |
jungleboyj | :-) One step at a time. | 17:51 |
johnthetubaguy | we destroy those | 17:51 |
johnthetubaguy | so, worst case we just have dangling connections on the host | 17:51 |
johnthetubaguy | like stuff we can't call os.brick for | 17:52 |
johnthetubaguy | but I think we can always get a fresh connector, which is probably good enough | 17:52 |
breitz | it seems dangerous to rely on any post evacuate cleanup (for things outside of nova - ie a cinder attachment) on the orig source host. it seems like those things need to be cleaned up as part of the evacuate itself. but perhaps i'm not understanding this correctly. | 17:52 |
johnthetubaguy | breitz: evacuate is when the source host is dead, turned off, and possibly in a skip | 17:53 |
breitz | right | 17:53 |
johnthetubaguy | but sometimes, the host is brought back from the dead | 17:53 |
breitz | but the world moves on - so when that host comes back - we can't rely on it. | 17:53 |
johnthetubaguy | if we just kill all the instances, we avoid massive data loss | 17:53 |
breitz | sure - can't allow the instances to come back | 17:54 |
johnthetubaguy | we keep migration records in the DB, so we can tell what has been evacuated, even if the instances are destroyed | 17:54 |
johnthetubaguy | breitz: totally, we do that today | 17:54 |
breitz | right | 17:54 |
johnthetubaguy | my worry is we don't have enough data about the instance to clean up the volumes using os.brick disconnect | 17:54 |
johnthetubaguy | but if we don't have that today, then whatever I guess | 17:54 |
johnthetubaguy | so this goes back to | 17:55 |
breitz | that I get - that cleanup is what I'm saying needs to be done when moving to the new dest. | 17:55 |
ildikov | johnthetubaguy: if we remove the target on the Cinder side that should destroy the connection or it does not? | 17:55 |
johnthetubaguy | breitz: but it can't be done, the host is dead? I am just worrying about the clean up on that host | 17:55 |
breitz | and somehow that info needs to be presented. not wait until the orig source comes back up to do. | 17:55 |
johnthetubaguy | I think we just delete the attachments in cinder right away, to do the cinder tidy up | 17:56 |
lyarwood | ildikov: we terminate the original hosts connections in cinder today | 17:56 |
johnthetubaguy | lyarwood: I see what you are saying now, we should keep doing that | 17:56 |
johnthetubaguy | which means delete the attachments now, I think | 17:56 |
breitz | yes - do the delete attachments right away. | 17:56 |
ildikov | johnthetubaguy: lyarwood: yep, that's what I said too | 17:56 |
lyarwood | johnthetubaguy: right, update would allow some cleanup but delete would be in-line with what we do today | 17:56 |
johnthetubaguy | lyarwood: the bit I was missing is we do terminate today | 17:56 |
lyarwood | johnthetubaguy: yeah we do | 17:57 |
lyarwood | johnthetubaguy: via detach in rebuild_instance | 17:57 |
ildikov | johnthetubaguy: delete is supposed to do that for you in the new API | 17:57 |
*** mriedem has quit IRC | 17:57 | |
johnthetubaguy | it makes sense, I just forgot that | 17:57 |
johnthetubaguy | yeah, so right now that means delete attachment | 17:57 |
johnthetubaguy | which isn't ideal, but doesn't make things any worse | 17:57 |
johnthetubaguy | lets do that | 17:57 |
ildikov | lyarwood: update at the moment is more finalizing attachment_create | 17:58 |
lyarwood | understood | 17:58 |
ildikov | lyarwood: you cannot update without connector as that would also mean you're putting back the volume to 'reserved' state and you don't want to do that here | 17:58 |
lyarwood | ah | 17:58 |
johnthetubaguy | ildikov: we kinda do that by creating a second attachment instead | 17:59 |
ildikov | lyarwood: and most probably neither in general | 17:59 |
ildikov | johnthetubaguy: yes, but that reserves the volume for the new host at least | 17:59 |
johnthetubaguy | so during the evacuate we don't want someone else "stealing" the volume | 17:59 |
johnthetubaguy | the new attachment does that fine | 17:59 |
johnthetubaguy | we just need to create the new one (for the new host) before we delete the old one | 18:00 |
johnthetubaguy | lyarwood: is that the order today? | 18:00 |
ildikov | that will work | 18:00 |
lyarwood | johnthetubaguy: hmmm I think we terminate first | 18:00 |
lyarwood | johnthetubaguy: yeah in _rebuild_default_impl we detach, which in turn terminates the connections first before spawn, then initializes the connection on the new host | 18:02 |
johnthetubaguy | lyarwood: I wonder about creating the attachment in the API, and adding it into the migration object? | 18:02 |
johnthetubaguy | ah, right, there it is: https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2651 | 18:02 |
lyarwood | johnthetubaguy: yeah, we could update after that | 18:02 |
johnthetubaguy | lyarwood: thats probably simpler | 18:03 |
johnthetubaguy | lyarwood: I was actually wondering in the live-migrate case, we have two sets of attachment_ids, when do we update the BDM, I guess on success, so maybe keep the pending ones in the migration object? | 18:04 |
lyarwood | johnthetubaguy: yup, we almost did the same with connection_info in the last cycle | 18:04 |
johnthetubaguy | lyarwood: yeah, thats why I was thinking in evacuate we could copy that | 18:05 |
johnthetubaguy | but I am probably over thinking that one | 18:05 |
johnthetubaguy | your idea sounds simpler | 18:05 |
johnthetubaguy | works on both code paths too | 18:05 |
ildikov | +1 on similar solutions | 18:05 |
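The ordering just agreed (create the destination host's attachment before deleting the source host's, so the volume stays reserved throughout the evacuate) can be sketched as below; `volume_api` and the function name are illustrative stand-ins, not real Nova or Cinder interfaces:

```python
# Sketch of the create-before-delete ordering for evacuate discussed above.
# Creating the new attachment first keeps the volume reserved, so nothing
# else can "steal" it while the old attachment is being torn down.
def switch_attachment_on_evacuate(volume_api, volume_id, instance_uuid,
                                  old_attachment_id):
    # 1. Reserve the volume for the destination host.
    new_attachment = volume_api.attachment_create(volume_id, instance_uuid)
    # 2. Only now remove the source host's attachment; in the new Cinder
    #    API, attachment-delete also tears down the export/connection.
    volume_api.attachment_delete(old_attachment_id)
    return new_attachment['id']
```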
lyarwood | cool, I need to drop now, I'll follow up with the BDM uuid and attachment_id patches in the morning \o_ | 18:06 |
ildikov | we're out of time for today | 18:07 |
johnthetubaguy | yeah, I need to run too | 18:07 |
johnthetubaguy | thanks all | 18:07 |
ildikov | johnthetubaguy: can we consider evacuate good for now? | 18:07 |
ildikov | johnthetubaguy: or it will need more chats on the next meeting? | 18:07 |
johnthetubaguy | yep, thats both of my TODOs covered | 18:07 |
ildikov | johnthetubaguy: great, thanks for confirming | 18:08 |
ildikov | ok let's focus on the smaller items to merge this week and until the next meeting | 18:08 |
ildikov | thank you all! | 18:08 |
johnthetubaguy | would love +1s or -1s on the spec too :) | 18:08 |
ildikov | #action everyone to review the Nova spec! | 18:09 |
jungleboyj | ++ | 18:09 |
ildikov | johnthetubaguy: ack :) | 18:09 |
ildikov | #endmeeting | 18:09 |
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings" | 18:09 | |
openstack | Meeting ended Thu Mar 2 18:09:28 2017 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 18:09 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-03-02-17.00.html | 18:09 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-03-02-17.00.txt | 18:09 |
openstack | Log: http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-03-02-17.00.log.html | 18:09 |
jungleboyj | Thanks all! | 18:10 |
*** MarkBaker has joined #openstack-meeting-cp | 18:15 | |
*** lamt has quit IRC | 18:16 | |
*** ducttape_ has joined #openstack-meeting-cp | 18:45 | |
*** ducttape_ has quit IRC | 18:50 | |
*** MarkBaker has quit IRC | 19:02 | |
*** rarcea has quit IRC | 19:31 | |
*** Rockyg has joined #openstack-meeting-cp | 19:34 | |
*** lamt has joined #openstack-meeting-cp | 19:35 | |
*** lamt has quit IRC | 19:36 | |
*** lamt has joined #openstack-meeting-cp | 19:37 | |
*** ayoung has quit IRC | 19:53 | |
*** ducttape_ has joined #openstack-meeting-cp | 20:16 | |
*** ducttape_ has quit IRC | 20:21 | |
*** lamt has quit IRC | 20:23 | |
*** lamt has joined #openstack-meeting-cp | 20:25 | |
*** rocky_g has joined #openstack-meeting-cp | 20:38 | |
*** DFFlanders has joined #openstack-meeting-cp | 20:47 | |
*** lamt has quit IRC | 21:22 | |
*** ducttape_ has joined #openstack-meeting-cp | 21:47 | |
*** ducttape_ has quit IRC | 21:52 | |
*** anteaya has quit IRC | 21:54 | |
*** anteaya has joined #openstack-meeting-cp | 22:06 | |
*** diablo_rojo_phon has joined #openstack-meeting-cp | 22:14 | |
*** gouthamr has quit IRC | 22:33 | |
*** gouthamr has joined #openstack-meeting-cp | 22:56 | |
*** breitz has quit IRC | 23:01 | |
*** breitz has joined #openstack-meeting-cp | 23:02 | |
*** ducttape_ has joined #openstack-meeting-cp | 23:04 | |
*** rocky_g has quit IRC | 23:05 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!