16:00:09 <ildikov> #startmeeting cinder-nova-api-changes
16:00:10 <openstack> Meeting started Thu Jul 13 16:00:09 2017 UTC and is due to finish in 60 minutes.  The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:13 <openstack> The meeting name has been set to 'cinder_nova_api_changes'
16:00:19 <ildikov> johnthetubaguy jaypipes e0ne jgriffith hemna mriedem patrickeast smcginnis diablo_rojo xyang1 raj_singh lyarwood jungleboyj stvnoyes
16:00:38 <stvnoyes> o/
16:01:23 <mriedem> o/
16:01:32 <jungleboyj> Lurking.  In two meetings at the moment.  :-)
16:01:52 <ildikov> let's wait half a minute and then we can start :)
16:02:10 <smcginnis> Maybe I'm here, maybe I'm not. The world may never know.
16:02:33 <ildikov> ok, let's dive in
16:02:59 <ildikov> all the open reviews are here: https://review.openstack.org/#/q/topic:bp/cinder-new-attach-apis
16:03:07 <ildikov> Cinder changes are merged
16:03:20 <ildikov> we're discussing some live_migration related items on the review at the moment
16:03:38 <stvnoyes> i am going thru matt's comments
16:03:43 <ildikov> mriedem: stvnoyes: is there anything to discuss here to sort it out quicker?
16:03:57 <stvnoyes> i think I understand his points
16:04:10 <stvnoyes> so no need here unless Matt wants to clarify anything
16:05:10 <ildikov> mriedem: is there anything for live_migrate to discuss here?
16:05:25 <mriedem> i've only gone through the compute manager parts and pointed out 2 issues,
16:05:30 <mriedem> and as stvnoyes said i think he gets it
16:05:54 <ildikov> ok, sounds good
16:06:31 <ildikov> moving on then
16:06:45 <ildikov> swap_volume is annoyingly close at this point
16:07:02 <mriedem> yes i'm working on an etherpad of focus areas to send to the ML
16:07:21 <ildikov> mriedem: sounds great, thank you
16:07:37 <ildikov> and we have the remove check_detach and attach patches besides the two we mentioned above already
16:07:55 <ildikov> stvnoyes has Grenade running and started to work on the test already
16:08:15 <ildikov> stvnoyes: anything we should discuss about that one?
16:09:23 <stvnoyes> i am looking at the test I am going to do. attach a volume pre, and then detach post. The most obvious place to do that is in cinder resource.sh as that already has code to ssh into the guest to check the volume. anyone see any issues with that?
16:09:54 <stvnoyes> vs doing it in nova's resource.sh
16:09:56 <mriedem> i don't think we need to worry about sshing into the guest
16:10:33 <stvnoyes> so no verification? that will make it simpler. then just attach/detach and see that it works?
16:10:57 <mriedem> yes
16:11:01 <stvnoyes> ok
16:11:09 <mriedem> i think you have to do it in cinder's resource.sh because the cinder upgrade scripts run after the nova ones
16:11:21 <mriedem> so the nova server instance will be created by the time the cinder resource.sh runs
16:11:30 <stvnoyes> yep
16:11:59 <mriedem> ah i see it already boots a server from volume https://github.com/openstack-dev/grenade/blob/master/projects/70_cinder/resources.sh#L122
16:12:05 <stvnoyes> yes
16:12:39 <mriedem> and then on the post upgrade it deletes that server
16:12:53 <stvnoyes> i was going to add a volume to that bfv server in cinder
16:13:11 <mriedem> it already does
16:13:17 <mriedem> openstack server create --volume $id
16:13:20 <mriedem> that's why an image isn't specified
16:13:50 <mriedem> the image is used from the volume in the root disk
16:13:50 <mriedem> openstack volume create --image $DEFAULT_IMAGE_NAME --size 1 $CINDER_VOL
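(For reference, the existing boot-from-volume setup mriedem is quoting from grenade's projects/70_cinder/resources.sh looks roughly like this; a simplified sketch, with the --flavor and --wait flags added for illustration rather than quoted verbatim from the script:)

```bash
# create a bootable volume from the default image
openstack volume create --image $DEFAULT_IMAGE_NAME --size 1 $CINDER_VOL

# boot the server from that volume; no --image is passed because the
# root disk comes from the volume itself
openstack server create --flavor $DEFAULT_INSTANCE_TYPE \
    --volume $CINDER_VOL --wait $CINDER_SERVER
```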
16:13:56 <stvnoyes> ok then i'll need to stop the server to detach the boot disk
16:14:14 <mriedem> well, maybe this already covers what we needed to test
16:14:28 <mriedem> you can't detach the boot disk i don't think, nova won't let you
16:14:28 <stvnoyes> but it doesn't do a detach?
16:14:58 <stvnoyes> in the test, that is. or does that happen when the server is deleted?
16:15:25 <mriedem> by default delete_on_termination is going to be False,
16:15:26 <smcginnis> Yeah, only way to detach a boot disk is to blow away the instance and create a new one.
16:15:34 <stvnoyes> i was going to add a new volume pre, and then detach it post (on that bfv instance)
16:15:40 <mriedem> so grenade first deletes the instance and then deletes the volume
16:16:40 <mriedem> this is where we start dealing with that in the compute on instance delete https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2223
16:16:56 <mriedem> so we destroy the guest https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2263
16:17:13 <mriedem> we would detach the volume here https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2295
16:17:26 <mriedem> which is just updating the cinder db state for the volume
16:17:45 <smcginnis> stvnoyes: If you're looking at doing this with a separate volume (not the boot volume) then that should be fine.
16:18:10 <mriedem> once that's done we get here https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2336
16:18:24 <mriedem> bdm.delete_on_termination is False so we don't attempt to delete the volume
16:18:27 <mriedem> so grenade deletes it
16:18:57 <mriedem> so we have one flow covered already https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2285-L2296
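(The delete flow mriedem walks through above can be sketched from the CLI side; the resource names here are illustrative, not from the actual script. A volume attached after boot defaults to delete_on_termination=False, so deleting the server detaches the volume in cinder but does not delete it:)

```bash
# attach a volume to a running server (delete_on_termination defaults to False)
openstack server add volume myserver myvol

# deleting the server destroys the guest and detaches the volume,
# but leaves the volume itself in cinder
openstack server delete --wait myserver

# the volume should be back to 'available', ready for grenade to delete it
openstack volume show -f value -c status myvol
```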
16:19:00 <stvnoyes> I think Matt is saying a detach is implicit in the shutdown code which will get called by delete so we don't need to create a second volume
16:19:18 <mriedem> the flow we don't have covered is the non-bfv case
16:19:21 <mriedem> where you do
16:19:23 <mriedem> 1. create server
16:19:25 <mriedem> 2. create volume
16:19:29 <mriedem> 3. attach volume on old side
16:19:32 <mriedem> 4. detach volume on new side
16:19:36 <mriedem> 5. delete server and volume
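(A rough sketch of that five-step flow as grenade-style script snippets; the create phase runs on the old side and the destroy phase on the new side. The resource names are illustrative, and any state carried across the upgrade would go through grenade's resource_save/resource_get helpers:)

```bash
# create phase (old side): steps 1-3
openstack server create --flavor $DEFAULT_INSTANCE_TYPE \
    --image $DEFAULT_IMAGE_NAME --wait attach_test_server
openstack volume create --size 1 attach_test_vol
openstack server add volume attach_test_server attach_test_vol

# destroy phase (new side): steps 4-5
openstack server remove volume attach_test_server attach_test_vol
openstack server delete --wait attach_test_server
openstack volume delete attach_test_vol
```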
16:19:51 <smcginnis> stvnoyes: Yep. Just if you were saying you wanted to explicitly test it in a test, then that could work with a non-boot vol. But implicitly, it should all get deleted in the end anyway.
16:20:19 <mriedem> the nova resource script will create a server here https://github.com/openstack-dev/grenade/blob/master/projects/60_nova/resources.sh#L85
16:20:39 <mriedem> so we could create a 2nd volume in the cinder resource script to attach to that server
16:21:05 <mriedem> it wouldn't be a bootable volume
16:21:12 <mriedem> no ssh
16:21:14 <mriedem> nothing like that
16:21:39 <stvnoyes> ok, I got it. I will add something to cover the non-bfv case. I think I can just use the bfv server already in the cinder resource and add a volume to that. Or do you think it's important to add the disk to the nova non-bfv server?
16:22:02 <mriedem> i think either is fine
16:22:15 <mriedem> probably easier to contain in the single cinder resource script
16:22:29 <stvnoyes> agree
16:22:34 <mriedem> so right before this https://github.com/openstack-dev/grenade/blob/master/projects/70_cinder/resources.sh#L198
16:22:38 <mriedem> you'd detach the 2nd volume
16:22:44 <mriedem> then server delete, then bootable volume delete
16:22:58 <mriedem> easy peasy
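(Put together, the destroy side of the cinder resource script would end up in roughly this order; a sketch, with CINDER_VOL2 as a hypothetical name for the 2nd volume:)

```bash
# detach the 2nd (non-boot) volume first, exercising detach on the new side
openstack server remove volume $CINDER_SERVER $CINDER_VOL2

# then delete the server; the boot volume has delete_on_termination=False,
# so nova only detaches it
openstack server delete --wait $CINDER_SERVER

# finally delete the volumes from cinder
openstack volume delete $CINDER_VOL2
openstack volume delete $CINDER_VOL
```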
16:23:57 <ildikov> :)
16:26:30 <ildikov> are we all in agreement on this plan?
16:26:45 <smcginnis> Ship it.
16:26:46 <ildikov> or any further questions/aspects to discuss?
16:26:59 <jungleboyj> :-)
16:27:05 <stvnoyes> i'm good
16:27:19 <ildikov> ok, sounds good then :)
16:28:07 <ildikov> mriedem: any hints on who to start or stop annoying for reviews, or what would be the best strategy to get this whole thing done in Pike?
16:28:25 <mriedem> i'm working on it
16:28:37 <ildikov> mriedem: or anything else you see uncovered?
16:28:49 <mriedem> not right now
16:29:34 <ildikov> ok, then I think we're good for today unless someone has something to bring up we haven't touched already
16:29:55 <ildikov> mriedem: let me know if I can do anything to make some progress
16:30:11 <stvnoyes> thanks all :-)
16:30:17 <ildikov> mriedem: if I have to test quotas then I might end up testing quotas, just let me know :)
16:30:49 <jungleboyj> Thanks ildikov
16:31:45 <ildikov> ok, that's all folks for today then
16:31:49 <ildikov> thanks everyone!
16:31:51 <smcginnis> o/
16:31:56 <johnthetubaguy> I am hoping I will get a chance to look at those reviews!
16:32:13 <ildikov> we are so very close, so thanks for all the efforts so far and let's make this happen! :)
16:32:18 <ildikov> johnthetubaguy: hey :)
16:32:48 * johnthetubaguy waves
16:33:12 <ildikov> johnthetubaguy: I hope everything's good with you, we all missed you!
16:33:39 <johnthetubaguy> not too bad, still trying to tie down my next OpenStack job, but hoping I am getting there
16:34:25 <ildikov> johnthetubaguy: I hope for the best! Fingers crossed!
16:35:07 <ildikov> johnthetubaguy: I guess there's not too much I can help with, but let me know if there would be!
16:35:48 <ildikov> johnthetubaguy: it would also be great if you could take a look at the remaining reviews
16:36:06 <johnthetubaguy> yeah, hoping to take a look at those again
16:36:25 <johnthetubaguy> need to get my eye back in, I suspect
16:36:56 <ildikov> if you have any questions feel free to ping me anytime
16:37:17 <ildikov> otherwise the few of us taking care of these babies are very responsive in the reviews
16:37:58 <ildikov> are there any questions/topics you would want to bring up now?
16:40:30 <ildikov> I guess that's a no :)
16:40:52 <ildikov> johnthetubaguy: I hope your checking in means I can say Welcome back! :)
16:41:15 <ildikov> johnthetubaguy: and I'm also hoping the job hunting goes well for you!
16:42:05 <ildikov> alright, that's all for the meeting for today
16:42:21 <ildikov> see you all here next week at the latest!
16:42:36 <mriedem> bye
16:42:41 <ildikov> #endmeeting