16:00:02 #startmeeting cinder-nova-api-changes
16:00:02 Meeting started Thu Oct 26 16:00:02 2017 UTC and is due to finish in 60 minutes. The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:06 The meeting name has been set to 'cinder_nova_api_changes'
16:00:08 johnthetubaguy jaypipes e0ne jgriffith hemna mriedem patrickeast smcginnis diablo_rojo xyang1 raj_singh lyarwood jungleboyj stvnoyes
16:00:21 o/
16:00:23 o/
16:00:38 @!
16:00:39 <_pewp_> jungleboyj (✧∇✧)╯
16:01:02 * smcginnis head explodes
16:01:11 hi All :)
16:01:44 smcginnis: after one minute? :)
16:01:45 o/
16:01:48 No smcginnis Nooo!!!!!!
16:02:45 ok, let's start
16:03:13 so we had a milestone last week by merging an updated version of the multi-attach spec
16:03:36 Yay!
16:03:45 special thanks to mriedem for the last round of cleanups on it!
16:03:55 and all of you for contributing to figuring it out again
16:05:18 we talked about capturing some of the policy bits and pieces and other relevant parts on the Cinder side
16:05:38 I will consult with jgriffith on this
16:05:51 note there were 2 todos in the spec
16:05:59 i've got a test up for one of those
16:06:06 https://review.openstack.org/#/c/515426/
16:06:19 to see what happens if you try to attach an attached volume to the same instance with the new v3 attach flow
16:06:28 regardless of the multiattach flag on the volume
16:06:36 i have a feeling cinder will be cool with that
16:06:40 but not sure
16:07:08 maybe not though - nova will still call os-reserve i think
16:07:19 and looking at the db conditional update code in cinder, that might block it
16:07:21 it should
16:07:36 it won't call that with the new flow
16:07:48 ok that would be a problem then
16:08:08 but, let's see what happens
16:08:46 the other todo in the spec was determining if nova would require a microversion for the change to support multiattach, and i think we would,
16:08:57 and we have an example to follow (2.49 for tagged attach)
16:09:35 I was emotionally prepared for the latter
16:09:45 I guess we should have a spec update too
16:09:51 we can do that later
16:09:54 just fyi
16:10:02 unless someone has objections to the idea
16:10:22 sounds good!
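For readers unfamiliar with the "new v3 attach flow" being discussed, the scenario mriedem's test exercises amounts to calling the Cinder 3.27 attachments API twice for the same volume and instance. The sketch below is illustrative only: it uses python-cinderclient with placeholder credentials and UUIDs, and the exact argument order of attachments.create() should be verified against the client version in use.

```python
# Illustrative sketch only: whether the second create is accepted or rejected
# (the open question in the discussion above) is decided by Cinder, not here.
from keystoneauth1 import loading, session
from cinderclient import client as cinder_client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller/identity/v3',   # placeholder endpoint
    username='demo', password='secret',          # placeholder credentials
    project_name='demo',
    user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)

# 3.27 is the volume API microversion that introduced the attachments API.
cinder = cinder_client.Client('3.27', session=sess)

volume_id = 'VOLUME-UUID'        # placeholder
instance_id = 'INSTANCE-UUID'    # placeholder

# First attachment: roughly what Nova does to "reserve" the volume in the new
# flow (no connector yet, so the attachment starts out unconnected).
first = cinder.attachments.create(volume_id, None, instance_id)

# Second attachment for the *same* volume and instance, regardless of the
# volume's multiattach flag -- the case the tempest test above probes.
second = cinder.attachments.create(volume_id, None, instance_id)
```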
16:10:27 otherwise it looks like next up is https://review.openstack.org/#/c/514853/ and the backend_id and shared_targets stuff on the cinder side
16:10:34 i need to re-review ^ the latest
16:10:51 yep, jgriffith has just uploaded a fixed version
16:11:06 and the new attach patch is in good shape too
16:11:15 I'll be rewriting the shared_targets changes now that the online migration stuff is there
16:11:29 mriedem: dansmith thanks for the help on that BTW
16:12:03 np
16:12:07 one little comment inline
16:12:09 otherwise lgtm
16:12:15 yessir
16:12:55 mriedem: yeah, not following; but lemme run it and see
16:13:07 jgriffith: you run the migration once with a limit of 2 and you have 3 total
16:13:09 just saying,
16:13:09 mriedem: assert passed
16:13:19 right
16:13:20 run it again with count=2 and assert total 1 and updated 1
16:13:23 so run the migration twice
16:13:27 mriedem: total is the total number of entries in the table
16:13:33 Oh
16:13:34 got ya
16:13:38 total w/o a uuid set
16:13:44 yes
16:13:47 cool
16:14:13 and then find an intern to write some fixtures for you guys :)
16:14:39 :)
16:14:43 if you're doing a contributor thing at the summit, might be something to put in a list of stuff people can work on
16:14:43 lol :)
16:14:44 with examples
16:14:58 cleaning up the low level db api mocks i mean
16:15:35 there are occasions to point people to tasks like this
16:16:04 both the training and the onboarding room, so I will let jungleboyj and smcginnis give it some extra thought :)
16:16:44 ok, so the online migration is on track and I would expect that the shared_targets patch will be an easier bit to fix up
16:16:48 ildikov: Good thought.
16:17:02 but jgriffith can correct me if I'm overly ambitious :)
16:17:04 ildikov: should be
16:17:18 jgriffith: coolio
16:17:31 but then again I thought fixing up the UUID one was going to be *simple* so... who knows :)
16:18:03 jgriffith: it's almost the weekend here, so give it a bit more positivity :)
16:18:31 johnthetubaguy: mriedem: any chance you can look at the new attach patch?
16:18:33 That trick never works :)
16:18:44 not right now
16:19:04 jgriffith: you're so cruel to me today :)
16:19:11 i'm waiting on those tempest test results too
16:19:49 ok, let's see those and then if any Nova cores could take a look at that patch that would be great
16:19:59 so we can have some progress in parallel
16:20:43 as a side note, I also put the libvirt patch for multi-attach on top of the new attach patch
16:21:37 I know we're not there yet, however once we are that one should be an easy one to land
16:22:12 it's also small, so a quick look is also appreciated to see whether the concept used there is acceptable or not
16:23:12 that's mainly what I have for today
16:23:16 I noticed that tempest doesn't allow for multiple servers being validatable. It's a floating IP issue. So I was going to update tempest to allow multiple servers to be validatable so we can check multi-attachments from inside the vms. It will be useful when we start writing MA tempest tests. Seem reasonable?
16:23:24 I would love to see the new attach patch landed as soon as possible
16:24:04 stvnoyes: probably ask the qa team
16:24:18 bring it up in a weekly meeting maybe, or just ask around in the -qa channel
16:24:21 andreaf or mtreinish
16:24:28 stvnoyes: I like your proactivity :)
16:24:32 ok will do
16:24:50 stvnoyes: tnx
16:26:07 ok, anything else from anyone for today?
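The total/updated exchange above follows the convention Cinder and Nova use for online data migrations: each migration is invoked with a max count and reports back how many rows still needed migrating and how many were actually updated in that run. The self-contained toy below (not the real Cinder code; the service_uuid field name is just a stand-in) shows why two passes with a limit of 2 over three unmigrated rows assert (3, 2) and then (1, 1).

```python
import uuid

# Toy stand-in for a table where some rows still lack a service UUID;
# not the real Cinder schema, just enough to show the counting convention.
rows = [{'id': i, 'service_uuid': None} for i in range(3)]


def migrate_service_uuid(rows, max_count):
    """Mimic an online data migration: return (found, done).

    found = rows that still needed the migration when this run started,
    done  = rows actually updated in this run (capped at max_count).
    """
    todo = [r for r in rows if r['service_uuid'] is None]
    for row in todo[:max_count]:
        row['service_uuid'] = str(uuid.uuid4())
    return len(todo), min(len(todo), max_count)


# First pass: 3 rows still need the migration, only 2 get updated.
assert migrate_service_uuid(rows, max_count=2) == (3, 2)
# Second pass: 1 row left, and it gets updated.
assert migrate_service_uuid(rows, max_count=2) == (1, 1)
# Third pass: nothing left, which is how the runner knows it can stop.
assert migrate_service_uuid(rows, max_count=2) == (0, 0)
```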
16:26:27 nope
16:27:04 stvnoyes: I'm pretty sure you can create 2 servers that you can ssh into in tempest, there are definitely tests for neutron stuff doing that
16:27:15 we can talk about it later in -qa
16:27:44 mtreinish: ok, thanks
16:27:47 mtreinish: sounds good, thanks!
16:28:22 then that's it for today
16:28:50 let's keep in touch on the tempest patch and get the Cinder bits and the new attach patch landed as soon as we can
16:29:09 oh, one more thing
16:29:33 does next week work for most of us or is it "travel day" already?
16:30:02 * jungleboyj will be over the ocean somewhere.
16:30:19 won't work for me
16:30:28 cancel next week
16:30:36 ok, cancel then
16:30:52 so don't forget about our forum session: https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20457/cindernova-cross-project-session-on-multi-attach?BackURL=https%3A%2F%2Fwww.openstack.org%2Fsummit%2Fsydney-2017%2Fsummit-schedule%2Fglobal-search%3Ft%3Dmulti-attach%23eventid%3D20457
16:31:52 the point is to collect feedback on how people intend to use the thing with regard to the follow-up plans we touched on earlier
16:32:44 please make it there if you can so we can answer questions and ensure we cover all the aspects we were discussing earlier
16:33:27 we can sync up on the Forum session on the project channels or the ML if needed
16:33:28 about that,
16:33:34 who is seeding the agenda?
16:33:47 like, what questions are going to be asked? assuming policy and r/w r/o stuff
16:33:55 maybe some background on the sticky parts in the spec
16:33:57 like boot from volume
16:33:58 mriedem: Good question.
16:34:14 the schedule has jay's face on it so i'm deferring to him
16:34:27 who will likely defer to ildikov
16:34:44 mriedem: those two were the main questions/concerns
16:35:00 mriedem: Ouch ... but you are kind-of right.
16:35:45 So, what we talked about last time:
16:35:49 mriedem: beyond that we can share current limitations on libvirt
16:36:00 How do people want to use multi-attach?
16:36:11 What are the expectations for the functionality?
16:36:17 i'm not sure i'd ask that,
16:36:22 with 40 minutes,
16:36:23 and depending on the audience raise the flag for people on the Cinder backend side
16:36:27 i'd be as specific as possible
16:36:31 mriedem: Ok ...
16:36:35 like, this is what we're going to do in queens
16:36:37 these are the limitations
16:36:51 is that cool with people (yes/no)
16:37:01 I would rather tell them the bare minimum people can expect and recruit as many people as possible to test it out
16:37:03 then if there is time, get into whatever future stuff you want to find out
16:37:14 ildikov: Ok. That makes sense.
16:37:30 asking "what do you want?" will be a mes
16:37:32 *mess
16:37:34 So, we tell them ...
16:37:35 mriedem: +1, I think we're on the same page
16:37:42 "i want to pass volume type to nova!"
16:38:03 We support one r/w volume and other volumes are r/o.
16:38:08 Sorry, attachments.
16:38:22 We are not supporting boot from volume with multi-attach.
16:38:46 we support all of them being r/w and make people aware that they can very easily screw things up if they are not careful
16:38:56 and then wish them good luck and smile :)
16:39:06 ildikov: Did we agree to that?
16:39:17 ildikov: Or are you messing with me?
16:39:48 jungleboyj: we didn't figure out the R/O part when I last checked, but I might be missing something here
16:40:13 jgriffith: mriedem ?
16:40:27 jungleboyj: you need to re-read the nova spec
16:40:31 :)
16:40:38 mriedem: Saw that coming.
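On the tempest point stvnoyes and mtreinish discuss above, the test they have in mind needs two servers that are each reachable over SSH, so the multi-attach volume can be checked from inside both guests. The sketch below is hypothetical, not working tempest code: create_test_server(validatable=True), attach_volume() and RemoteClient are existing tempest helpers, but the multiattach volume flag and the per-server ip_a, ip_b and private_key are placeholders for whatever per-server validation resources the proposed tempest change would provide.

```python
# Hypothetical sketch, not working tempest code: today tempest wires up SSH
# validation resources (keypair / floating IP) once per test class, which is
# exactly the limitation being discussed.  ip_a, ip_b and private_key are
# placeholders for the per-server resources the proposed change would add.
from tempest.api.compute import base
from tempest.lib.common.utils.linux import remote_client


class VolumeMultiAttachSSHTest(base.BaseV2ComputeTest):

    def test_volume_visible_in_both_guests(self):
        # Assumes a multiattach-capable Cinder backend.
        volume = self.create_volume(multiattach=True)
        server_a = self.create_test_server(validatable=True, wait_until='ACTIVE')
        server_b = self.create_test_server(validatable=True, wait_until='ACTIVE')

        # Attach the same volume to both servers via Nova.
        self.attach_volume(server_a, volume)
        self.attach_volume(server_b, volume)

        # Check from inside each guest that the new block device showed up.
        ssh_a = remote_client.RemoteClient(ip_a, 'cirros', pkey=private_key)
        ssh_b = remote_client.RemoteClient(ip_b, 'cirros', pkey=private_key)
        self.assertIn('vdb', ssh_a.exec_command('lsblk'))
        self.assertIn('vdb', ssh_b.exec_command('lsblk'))
```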
16:40:43 * jungleboyj head explodes
16:41:16 Ok, so, I am going to go read the Cinder and Nova Specs and put together an etherpad for you guys to approve.
16:42:08 jungleboyj: I can take the burden of MC-ing that session if you want
16:42:29 ildikov: That would be good. :-) People will be nicer to you.
16:42:37 jungleboyj: and thanks in advance for the etherpad :)
16:43:11 jungleboyj: haha, not 100% sure about that, but we will see :)
16:43:12 ildikov: Yep, I think that is a fair split of work. Will get me caught up.
16:43:24 jungleboyj: +1, thanks
16:43:55 mriedem: any remaining concerns about the session?
16:44:05 or anyone else?
16:45:10 no
16:45:32 cool
16:45:41 then I think we're good for today
16:46:09 please review the two Cinder patches and the new attach patch in Nova before the Summit
16:46:30 that would at least make me very happy, I know it doesn't matter that much, but still
16:46:46 and safe travels for next week and see you soon!!!
16:46:58 thanks everyone!
16:47:09 #endmeeting