13:00:03 #startmeeting nova api
13:00:04 Meeting started Wed Sep 28 13:00:03 2016 UTC and is due to finish in 60 minutes. The chair is alex_xu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:08 The meeting name has been set to 'nova_api'
13:00:11 who is here today?
13:00:35 o/
13:01:13 sdague: johnthetubaguy gmann are you around for the api meeting
13:01:21 o/
13:01:45 let us wait one min?
13:02:29 ok, just three of us
13:02:44 let us start the meeting
13:02:56 #topic action from previous meeting
13:03:04 action: alex_xu to work on generic diagnostics spec
13:03:15 #link https://review.openstack.org/357884
13:03:30 Sergey already helped on this spec
13:03:57 * edleafe wanders in late
13:04:06 i haven't seen the latest on that one yet
13:04:11 there is one hot point in the spec
13:04:37 which value to use to identify the disk
13:05:11 for libvirt i thought we can't rely on the device name
13:05:13 there are three options here: bdm_id, device_name, and disk local path
13:05:16 so that's why people wanted tags
13:05:25 mriedem: yea, +1
13:05:53 mriedem: the tags are really for the normal user: defined by the normal user, and used by the normal user
13:05:54 i don't like the idea of leaking the bdm id
13:05:55 o/
13:06:16 exposing the tags in an API for debug sounds duplicated
13:06:23 bdm id is per cell, and if we're talking about the id field in the db table it could collide with bdms in other cells
13:06:59 so the disk local path is the last choice
13:07:00 mriedem: it's not a uuid?
13:07:25 or we create a uuid for this API
13:07:27 it is not
13:07:39 johnthetubaguy: but we don't have an API exposing the bdm either
13:08:04 sdague: yeah we don't have a uuid on the bdm table
13:08:10 we've talked about having one forever
13:08:15 but never had use cases
13:08:26 dansmith has the patches to make that happen though
13:08:47 disk local path is the path on the host?
13:09:11 mriedem: for a network device, it can be a URI, very virt driver specific
13:09:16 * johnthetubaguy scratches head
13:09:48 hmm, if the only point is having a unique identifier per disk/bdm, then i think i'd just go with a uuid
13:10:01 and we revive dansmith's patches to add a uuid field to the bdm
13:10:13 yeh, uuid seems better than a uri
13:10:38 the point for disk path is that this is a debug API, so it should be ok for a debug user
13:11:25 https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bug/1489581+owner:dms@danplanet.com
13:11:37 it feels like we should do it properly and add a uuid anyway
13:11:43 johnthetubaguy: ++
13:12:13 so do we need an API exposing that uuid?
13:13:06 i'd think so
13:13:39 something like /servers/{uuid}/bdm?
13:14:09 so we spoke about the VIF APIs I think, pointing to neutron; feels like a unified list of disks that has all bdms, not just volumes, might be what we want?
13:14:30 johnthetubaguy: yeh, a GET call for that as a sub resource probably makes some sense
13:14:36 like alex_xu just said
13:14:51 yeah, I just wonder if we already have something close...
13:15:29 oh, we call it os-volume_attachments, that's clearly not a bdm
13:15:34 we have the vol-attachment API, but it won't include swap and ephemeral disks
13:16:02 nor a bdm uuid since that doesn't exist yet
13:16:09 brb
13:16:19 yeh, so /servers/{uuid}/bdm seems like a good approach there
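For illustration, here is one possible shape for the GET /servers/{server_id}/bdm sub-resource floated above, assuming the bdm uuid field from dansmith's revived patches exists; every field name below is an assumption for discussion, not anything the spec has settled (sketched as a Python literal):

    # Hypothetical response for GET /servers/{server_id}/bdm: one entry per
    # block device mapping (volumes, swap, ephemeral), unlike
    # os-volume_attachments which only covers volumes. All field names are
    # illustrative assumptions.
    bdm_list_response = {
        "block_device_mappings": [
            {
                "uuid": "11111111-2222-3333-4444-555555555555",  # proposed bdm uuid
                "source_type": "volume",
                "destination_type": "volume",
                "volume_id": "66666666-7777-8888-9999-000000000000",
                "device_name": "/dev/vda",  # unreliable for libvirt, per above
                "tag": "database",
            },
            {
                "uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
                "source_type": "blank",
                "destination_type": "local",  # swap/ephemeral disks show up too
                "guest_format": "swap",
                "device_name": "/dev/vdb",
                "tag": None,
            },
        ]
    }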
13:16:37 is it a little duplicated with the vol-attachment API?
13:16:47 I guess, is the disk device info the only holdout here? Could we split that out?
13:17:19 so diagnostics minus disk on the first go, get that in. Sort out consistent bdm exposure in parallel, then diag + bdm as a second pass.
13:17:37 just so that the rest of the cleanup work can be sorted
13:18:15 sdague: that sounds like a good idea
13:18:22 we have microversions, so that is cool for making progress :)
13:19:03 another idea is, if we don't have a clear use-case for /servers/uuid/bdm, we can put a bdm attribute with uuid in the diag API
13:19:49 so, honestly, it feels like dealing with bdms more consistently, and not just in the attachment api, seems sensible
13:19:52 honestly, it feels like BDMs need some thought, generally. Where do we want to take disks and flavors.
13:20:14 it feels a bit wrong we don't expose bdms in a GET api
13:20:27 johnthetubaguy: yeh
13:20:47 ok, so plan forward. Split the diagnostics spec so there is a non-disk version
13:20:49 ah, i see the point
13:20:58 get that sorted and landed
13:21:22 then there is probably a bdm api spec (which includes uuid exposure) and diag + bdm
13:21:24 right?
13:22:07 +1
13:22:13 yea
13:23:42 that would be a regression from the existing diagnostics api,
13:23:50 so if you wanted disk stuff you'd have to use v2.1 or something lower
13:24:11 the existing diagnostics api doesn't expose a disk id though, but it does expose some disk stuff
13:24:16 or just no id, but with disk info
13:24:23 mriedem: yea
13:24:29 ok, well, maybe that then
13:24:53 the thing is, I'd hate to have the standardization get held up on the disk id thing that is going to take a while to sort out
13:26:10 if long-term you want both disk path and id in the diag api,
13:26:13 then you could do disk path now
13:26:15 and add id later
13:26:28 do we want disk path?
13:26:32 idk
13:26:42 it's really hard to tell what should be in this thing w/o input from someone that uses it
13:27:24 we could query the ops list to see if anyone uses the api and if so, what they think about the options we're talking about
13:28:46 yes, although releasing a smaller, reduced but standardized API feels like an improvement
13:29:02 they may just tell us to delete it, I guess
13:30:12 so, with the microversion it would let us replace this with something standard, even if missing info
13:30:58 i've got to get my kid on the bus so back in a while
13:31:31 so, what should we do?
13:31:33 ok, maybe another topic, because we seem to not be agreed here
13:31:45 yeah, let's come back to this later
13:31:51 I don't think we're going to get reasonable feedback off of the ops list for a thing like this
13:32:06 and I think standardization is more important than missing info
13:32:27 +1 for just getting this standardized
13:32:30 +1
13:32:32 even with missing info
13:33:47 so let me give feedback on the spec, then we revisit the id problem later?
13:35:11 alex_xu: sounds good
13:35:50 #action alex_xu to give feedback on the diag API spec about just standardizing the API first
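As a rough sketch of what "standardized, even with missing info" could look like: a virt-driver-agnostic diagnostics payload with the contested disk identifier simply left out of the first microversion. The field names are assumptions for illustration, not what https://review.openstack.org/357884 finally settles on:

    # Sketch of a standardized diagnostics payload (Python literal). Disk
    # entries carry stats but no identifier; a bdm uuid could be added in a
    # later microversion once it exists.
    diagnostics = {
        "state": "running",
        "driver": "libvirt",
        "uptime": 46664,
        "cpu_details": [{"id": 0, "time": 17300000000}],
        "nic_details": [{"mac_address": "fa:16:3e:00:00:01",
                         "rx_octets": 1024, "tx_octets": 2048}],
        "memory_details": {"maximum": 524288, "used": 8192},
        "disk_details": [{"read_bytes": 262144, "read_requests": 512,
                          "write_bytes": 5778432, "write_requests": 244}],
    }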
13:36:09 so let's go to the next one
13:36:16 action: johnthetubaguy to sketch out what an ideal security group workflow looks like in the nova api now with neutron as the presumed backend
13:36:33 #link https://etherpad.openstack.org/p/ocata-nova-security-groups
13:36:37 so that etherpad I had includes some ideas
13:37:09 nova boot --nic net-id=uuid_net1 security-group=db --nic net-id=uuid_net2 security-group=api --flavor 1 --image test-image test-server
13:37:17 is where I was thinking we could go
13:37:25 to add a security group
13:38:00 then you just need to look at the port in neutron to find out or modify any details around that
13:38:16 johnthetubaguy: so we just need to fail when it's used with nova-network?
13:38:17 johnthetubaguy: yeh, that makes a lot of sense.
13:38:39 alex_xu: I am thinking nova-network dies in a few weeks, let's just worry about neutron
13:38:39 alex_xu: well nova-net calls after 2.35 are pretty suspect anyway
13:38:55 right, use the older version of the API if you must use nova-network still
13:38:58 ok, cool
13:39:14 johnthetubaguy: can you turn that into a spec? That seems like it would solve a bunch of things
13:39:27 I'm thinking, should we at some point stop what still works for nova-network after 2.35
13:39:53 yeah, I can make that into a spec
13:40:36 alex_xu: well, deleting nova-network is going to make it stop after 2.35
13:40:39 and before 2.35 for that matter
13:40:57 right, this becomes simpler once we nuke nova-net
13:41:20 ah, I see now
13:41:21 #action johnthetubaguy to create a spec out of ideas in https://etherpad.openstack.org/p/ocata-nova-security-groups
13:41:46 ok, so let us go to the next one
13:41:49 the bit I like about --nic is that it already means nothing for nova-net
13:42:04 johnthetubaguy: yea
13:42:10 action: mriedem to write up spec for os-virtual-interface deprecation
13:42:11 not really
13:42:21 --nic can pass the network id or fixed ip for nova-net
13:42:26 you just can't pass a port for nova-net
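To make the boot command above concrete, a hypothetical server-create request body it might map to; the per-network security group key is an assumption from the etherpad idea, not an existing Nova API field:

    # Hypothetical request body for the proposed per-NIC security groups
    # (Python literal). Today "security_groups" only exists at the server
    # level; attaching it to each requested network is the new idea, and
    # any further detail would be read/changed on the neutron port.
    server_create_body = {
        "server": {
            "name": "test-server",
            "imageRef": "test-image",
            "flavorRef": "1",
            "networks": [
                {"uuid": "uuid_net1", "security_groups": ["db"]},
                {"uuid": "uuid_net2", "security_groups": ["api"]},
            ],
        }
    }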
13:42:39 alex_xu: i didn't get that done
13:42:59 mriedem: oh, I didn't know that was a thing
13:43:13 mriedem: it's fine, so let us talk about that when it is ready
13:43:40 so next one
13:43:45 action: alex_xu to write a spec for deprecating the proxy api to set/delete image metadata
13:43:57 #link https://review.openstack.org/377528
13:44:06 mriedem already gave some review, I need to update the spec
13:44:53 the only highlight in the spec is that there is a quota check in the server create_image API, which I thought we should remove
13:45:35 i think the quota check should be gone, it's actually kind of silly that nova has that at all given glance could blow up even if you pass the nova quota check for image metadata properties
13:45:38 if the glance quota is lower
13:45:58 mriedem: yea
13:46:09 the thing i'm nervous about is moving the image create before the volume snapshot
13:46:25 not for any known reason right now except dragons
13:46:36 yea, I'll double check that, probably test it in my local env to ensure it is safe
13:46:52 note also,
13:46:59 cinder probably has a quota on volume snapshots
13:47:14 so you are fixing one thing by moving image create before vol snapshot, but could be breaking the quota check on vol snapshot
13:47:18 well, not breaking, but you could still fail
13:47:46 fwiw this is probably why nova-api checks port quota with neutron before casting to the compute to create new ports
13:47:49 racey as that is
13:48:30 mriedem: I have a spec in the works to possibly change that, but yeah
13:48:44 # Number of volume snapshots allowed per project (integer value) #quota_snapshots = 10
13:48:47 ^ from cinder.conf
13:49:04 alex_xu: so i think i'd rather not even move the image create before the vol snapshot
13:49:08 since either could fail a quota check
13:49:08 honestly, leaving the order the same is probably a good idea, just to keep the API semantics the same
13:49:18 yeah, because either could fail
13:49:23 yes i don't want to introduce some new weirdness we don't know about if we don't have to
13:49:30 but if we do the vol snapshot first, then the create image on glance fails on quota, that sounds wasteful. on the glance side, it's just a db call.
13:49:59 well, the vol snapshot quota check in cinder should also be a db call,
13:50:09 or do you mean, create image is a db call, then you still have to upload the data to glance
13:50:13 mriedem: even if we don't move that, it sounds like we should have some rollback code to remove the vol snapshot when quota fails on glance?
13:50:15 which you won't have to do if you fail the image create
13:50:57 they could both fail for other reasons though, feels like we should fix that anyway
13:51:00 we'd probably want that, yeah
13:51:07 right, it sounds like a bug exposure today
13:51:30 I mean, treat it separately outside this spec, I guess
13:51:31 we could test it by creating a volume-backed instance, setting the cinder snapshot quota to 0 and trying to snapshot
13:51:35 and make sure everything is cleaned up
13:51:44 johnthetubaguy: +1 - i think it's just a bug
13:51:51 yeah
13:52:03 ok, I can move that to a bug
13:52:29 well, we have the rest of the spec, and a separate bug, to be clear
13:53:16 yes, the rest of the spec is pretty clear
13:53:23 #action alex_xu is going to double check moving the create image, then separate the spec into a spec and a bug
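A minimal sketch of the rollback idea that is being split out as a bug, assuming hypothetical cinder/glance client handles and a volume_bdms list; the point is only that snapshots already taken get deleted if the glance image create fails (e.g. on quota):

    # Minimal cleanup-on-failure sketch, not Nova's actual createImage code.
    # `cinder`, `glance`, `volume_bdms` and `image_meta` are hypothetical.
    def snapshot_volume_backed(cinder, glance, volume_bdms, image_meta):
        snapshots = []
        try:
            for bdm in volume_bdms:
                # may fail on cinder's quota_snapshots
                snapshots.append(cinder.volume_snapshots.create(bdm.volume_id))
            # may fail on glance's quota for image properties
            return glance.images.create(**image_meta)
        except Exception:
            # roll back so a failed quota check doesn't leak snapshots
            for snap in snapshots:
                cinder.volume_snapshots.delete(snap.id)
            raise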
13:54:01 ok, so let us go to the next one?
13:54:17 yeah, 6 min
13:54:37 action: mriedem to follow up with gmann about testing swap-volume in tempest
13:54:56 looks like we can't finish all the items today
13:55:24 gmann is out for a while so i'll handle that tempest test,
13:55:28 it just needs to move from scenario to api
13:55:46 #action mriedem to move swap volume test from scenario to api https://review.openstack.org/#/c/299830/
13:55:54 mriedem: cool, thanks
13:56:06 action: sdague to start in on a capabilities spec
13:56:18 #link https://review.openstack.org/#/c/377756/
13:56:31 it's pretty early, but hopefully a reasonable starting point
13:56:58 yea, sounds like it is related to the qualitative part of placement
13:57:18 the key thing I wanted to capture is that we're going to need a way to query *all* possible capabilities, as well as what's allowed on any particular resource
13:57:31 otherwise we'll be in versioning hell
13:57:57 +1
13:58:05 I'm going to be out until the next meeting, so feedback there is probably worthwhile
13:58:20 oops, I'm on holiday next week also
13:58:32 there are a couple of good feedback points
13:58:46 do we still want to cancel this meeting next week?
13:59:04 oops, you mean out before the next meeting
13:59:13 one is what degree of machine consumption vs. doc consumption we should end up with
13:59:28 sdague: I like the idea of ensuring we can see the full list of possible capabilities, to help with versioning, as you say
13:59:29 the other is about granularity, though I honestly think that's mostly a "comes later"
13:59:49 1 min left
13:59:53 granularity gets decided independently of the mechanism to expose it, I think
13:59:59 I am quite keen on very coarse grained, so a human could understand things
14:00:03 so a
14:00:14 so let us go back to the nova channel
14:00:18 thanks all
14:00:18 sure
14:00:20 #endmeeting