13:00:02 <alex_xu> #startmeeting nova api
13:00:02 <openstack> Meeting started Wed May 24 13:00:02 2017 UTC and is due to finish in 60 minutes.  The chair is alex_xu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:05 <openstack> The meeting name has been set to 'nova_api'
13:00:08 <alex_xu> who is here today?
13:00:22 <gmann> o/
13:00:43 <cdent> o/
13:01:29 <alex_xu> not sure if sdague or johnthetubaguy are around too
13:01:30 <Kevin_Zheng> o/
13:02:07 <alex_xu> #topic priorities
13:02:25 <alex_xu> #link https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/policy-docs
13:02:40 <alex_xu> the only priority item, and it looks like we are super close
13:03:19 <gmann> alex_xu: yea, seems like the gate behaved badly, will recheck on those
13:03:34 <alex_xu> yea, I saw the live-migration job failing constantly
13:03:46 <alex_xu> gmann: thanks for those patches also
13:04:18 <gmann> yea
13:04:36 <alex_xu> gmann: only have a question about this https://review.openstack.org/#/c/461728/, i think we should mark it as deprecated, since the cfg opt "CONF.api.hide_server_address_states" is deprecated too
13:04:37 <sdague> o/
13:05:10 <alex_xu> without that cfg opt, that rule doesn't work either, I think
13:05:42 <gmann> alex_xu: so with the cfg deprecation we will always show addresses?
13:06:10 <alex_xu> gmann: i guess so
13:06:35 <gmann> alex_xu: but the policy may serve a separate purpose, hiding addresses from some users... though I do not know if that makes sense or not
13:06:48 <gmann> sdague: johnthetubaguy  ^^ what do you say?
13:06:58 <alex_xu> yea, i'm not sure why people want to hide the addresses
13:07:20 <gmann> yea, me too, it's their own server address after all
13:07:24 <alex_xu> and that configuration is undiscoverable from the API response?
13:07:51 <sdague> I honestly have no idea, do we have the commit where that was added?
13:10:00 <sdague> this is the base commit b3bbd09131e127e7540f4ccdb1376c10bace8b7a
13:10:40 <sdague> so, the reason for this is that until you are in ACTIVE the addresses could change (late fail that triggers a reschedule, possibility that you get different addresses)
13:10:54 <sfinucan> o/
13:11:01 <sdague> I think the point of deprecating was that we'd always do the hide until ACTIVE, right?
13:11:04 <gmann> this one https://review.openstack.org/#/c/18414/
13:11:09 <sdague> gmann: yeh
13:11:49 <sdague> alex_xu: I agree, that policy point should go away, and that should just be the default
13:11:58 <gmann> sdague: yea before ACTIVE, its not stable
13:12:12 <alex_xu> sdague: cool
13:12:19 <sdague> gmann: right, but by default, if we have any content in there it's just passed up
13:12:29 <gmann> yea
13:12:39 <alex_xu> but I made a mistake, that cfg isn't deprecated, just its old name is deprecated... but is it also worth deprecating that rule?
13:12:40 <sdague> someone should probably write down the desired behavior here, it honestly could be a lightweight spec
13:13:06 <sdague> alex_xu: yeh, I feel like that's one of those extensions that was added when it should have just been sane default behavior
13:14:22 <alex_xu> sdague: got it
13:14:29 <gmann> ok
13:15:03 * alex_xu writes a note, will bring this up when we review the api work items in the next release
13:15:09 <gmann> sdague: you mean write a spec for the cfg and policy deprecation with the expected behavior?
13:15:43 <sdague> gmann: yeh
13:15:54 <sdague> honestly, it should be really straightforward
13:16:05 <gmann> sdague: ok, will add one to the queue for the next release
13:16:07 <sdague> I think we all have it in our head, but it should be written down
13:16:08 <gmann> yea
13:16:27 <sdague> honestly, you could do it this week to clarify, and we could get all the deprecations set now
13:16:42 <gmann> sdague: sure.
13:17:08 <alex_xu> gmann: thanks
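[editor's note: for context on the deprecation discussed above, a minimal standalone sketch of how an oslo.config option can be marked for removal; the option name comes from the discussion, while the default, help text, and reason strings are illustrative rather than nova's actual code.]

```python
# A minimal sketch, not the actual nova code: marking an option in the
# [api] group as deprecated-for-removal with oslo.config.
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_group(cfg.OptGroup('api'))
CONF.register_opt(
    cfg.ListOpt('hide_server_address_states',
                default=['building'],
                deprecated_for_removal=True,
                deprecated_reason='Addresses should always be shown once '
                                  'the instance reaches ACTIVE.',
                help='Instance states for which address information is '
                     'hidden in API responses.'),
    group='api')

CONF([])  # initialize the config object (normally done at service startup)
# The option keeps working while deprecated; operators who set it in
# nova.conf get a deprecation warning in the logs.
hidden_states = CONF.api.hide_server_address_states
```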
13:17:34 <alex_xu> cdent: any update on uwsgi?
13:18:06 <cdent> no, at this stage I think the devstack change needs reviews, but that's about it: https://review.openstack.org/457715
13:18:18 <alex_xu> cdent: ah, thanks
13:19:27 <alex_xu> sdague: any updates from the summit, is there anything related to the nova api?
13:20:02 <cdent> I want to ask about some of the service catalog stuff
13:20:09 <cdent> there was a thread recently:
13:20:17 <sdague> alex_xu: honestly the only thing I've been working on since was the cross service request ids
13:20:22 <sdague> which is api related
13:20:36 <cdent> http://lists.openstack.org/pipermail/openstack-dev/2017-May/116574.html
13:20:40 <alex_xu> sdague: got it
13:20:43 <sdague> http://lists.openstack.org/pipermail/openstack-dev/2017-May/117367.html
13:20:55 * mriedem joins late
13:21:01 <cdent> in which some people said they'd like to continue to be able to not use the service catalog
13:21:12 <cdent> for inter-service interactions
13:21:29 <cdent> if we agree with that we'll need to make some adjustments to how the resource tracker (and others) call placement
13:21:36 <cdent> to allow endpoint_override
13:21:51 <sdague> cdent: how I read that issue was different
13:22:08 <sdague> there are a class of data intensive APIs (glance is in that camp)
13:22:55 <mriedem> this sounds like the spec efried is working on https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/use-service-catalog-for-endpoints.html
13:22:59 <cdent> it's not certain/clear from the second message that data intensity is the only reason. it appears some people just want explicit control
13:23:11 <mriedem> get endpoints in a generic way from the catalog but it allows for a backdoor override
13:23:11 <sdague> direct addressing, and more specifically server lists, allows for flimsy HA and network path optimization
13:23:19 <cdent> but since the thread didn't really resolve I thought I would point out that that was the case
13:23:53 <sdague> the thing that triggered it was getting rid of the bypassing of the service catalog for glance
13:24:00 <gmann> this is to have different endpoints based on a different catalog right? like the v2 and v2.1 catalog on the devstack side
13:24:01 <sdague> up until that point, people mostly didn't care
13:24:21 <sdague> gmann: no, it's the fact that for something like glance many folks want to
13:24:39 <sdague> 1) optimize their paths from computes to glance, possibly very much at the rack level
13:25:09 <sdague> 2) turn off https for nova -> glance (but only for that case) which is a 35% performance boost
13:25:27 <alex_xu> wow
13:25:32 <gmann> humm
13:25:41 <sdague> 3) have some level of HA, which the api_servers list provides, as if we fail an op we pop to the next server and try again
13:26:13 <sdague> which is way less infrastructure than HAProxy, which is a performance bottleneck when you stick it in front of something trying to stream 100GB images
13:26:16 <alex_xu> so it is about different compute nodes accessing different api servers
13:26:30 <sdague> well, specifically the glance ones
13:26:41 <sdague> and it might be the same servers, but a different network path
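[editor's note: the "pop to the next server and try again" behaviour sdague describes is roughly the following; the [glance]/api_servers option name is real, but this helper is a toy illustration, not nova's actual image download code.]

```python
# A toy sketch of round-robin failover over a configured glance api_servers
# list; endpoints are placeholders and no auth is shown.
import itertools
import requests

API_SERVERS = ['http://glance-rack1:9292', 'http://glance-rack2:9292']
_servers = itertools.cycle(API_SERVERS)

def fetch_image_meta(image_id, attempts=len(API_SERVERS)):
    """Try each configured glance endpoint in turn before giving up."""
    for _ in range(attempts):
        endpoint = next(_servers)
        try:
            resp = requests.get('%s/v2/images/%s' % (endpoint, image_id),
                                timeout=30)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            continue  # failed op: pop to the next server and try again
    raise RuntimeError('all glance api_servers failed for %s' % image_id)
```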
13:27:09 <cdent> sdague: all that you say is true, but also that thread specifically says "In addition to the Glance to nova-compute use case, the feedback during the LDT session surfaced potential use cases for other services."
13:27:19 <sdague> cdent: sure
13:27:22 <cdent> so at the moment there's some ambiguity that we'll need to resolve
13:27:57 <sdague> right, and that message didn't help clarify
13:28:07 <sdague> it would be good to figure out what "other use cases are"
13:28:15 <sdague> I totally get and understand the glance case
13:28:44 <cdent> sorry, I'm clearly not getting my point across: we, nova, did not respond to that thread, so have left people hanging, we need to fix that. If nobody else wants to I can bump the thread with some additional questions
13:29:04 <sdague> cdent: sure, that sounds reasonable
13:29:10 <cdent> k, will do
13:29:20 <sdague> it did also hit in the middle of summit
13:29:20 <cdent> #action cdent to bump the service catalog philosophy thread
13:29:28 <mriedem> was this mdorman's thread?
13:29:32 <sdague> yes
13:29:34 <mriedem> which was a revival of the old one
13:29:34 <mriedem> ok
13:30:26 <mriedem> i can reply with a simple question,
13:30:32 <mriedem> have you read https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/use-service-catalog-for-endpoints.html and if so, does it suit your needs?
13:31:01 <cdent> goferit
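[editor's note: the spec mriedem links proposes catalog-based endpoint discovery with an explicit override escape hatch; below is a minimal sketch of what that looks like with keystoneauth1. URLs and credentials are placeholders.]

```python
# A minimal sketch, assuming keystoneauth1 is the client plumbing
# (as the linked spec proposes).
from keystoneauth1 import adapter, loading, session

auth = loading.get_plugin_loader('password').load_from_options(
    auth_url='http://keystone.example.com/identity/v3',
    username='nova', password='secret', project_name='service',
    user_domain_name='Default', project_domain_name='Default')
sess = session.Session(auth=auth)

# Normal case: let the service catalog pick the placement endpoint.
placement = adapter.Adapter(session=sess, service_type='placement')

# Operator override ("backdoor"): bypass the catalog, e.g. to pin a
# rack-local network path or talk plain http on an internal hop.
placement_direct = adapter.Adapter(
    session=sess, service_type='placement',
    endpoint_override='http://10.0.0.5:8778')

resp = placement.get('/resource_providers')  # uses the catalog-discovered URL
```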
13:32:07 <alex_xu> cool
13:32:12 <alex_xu> anymore question?
13:32:51 <alex_xu> ok
13:32:55 <alex_xu> #topic open
13:33:04 <alex_xu> gmann: you are an agenda?
13:33:09 <gmann> yea
13:33:13 <alex_xu> s/are/have/
13:33:18 <gmann> https://bugs.launchpad.net/nova/+bug/1693168
13:33:19 <openstack> Launchpad bug 1693168 in OpenStack Compute (nova) "Different APIs 'os-quota-class-sets' Response between v2 and v2.1" [Undecided,New] - Assigned to jichenjc (jichenjc)
13:33:35 <gmann> we have a difference bug between v2 and v2.1 for the quota class APIs
13:34:01 <alex_xu> i guess nobody uses the quota class API?
13:34:21 <gmann> that happened due to 'os-server-group-quotas' extensions which i never knew before investigating this bug
13:34:22 <mriedem> no one uses microversions :)
13:34:26 <gmann> heh
13:34:42 <alex_xu> hmmm
13:34:54 <gmann> nobody has complained about this yet so I do not know how important the quota class API is :)
13:35:07 <alex_xu> mriedem: feedback from summit? still no one uses microversions?
13:35:22 <mriedem> alex_xu: clouds are just rolling up to or past mitaka,
13:35:26 <gmann> or has no one switched to v2.1 from v2?
13:35:29 <mriedem> no one seems to know how users are using microversions
13:35:36 <mriedem> mordred is just starting to look at microversions
13:35:50 <mriedem> the quota class API is how you get default quota information
13:35:51 <gmann> mriedem: it's in base v2.1
13:35:56 <mriedem> since there is only one quota class, which is 'default'
13:36:00 <alex_xu> ok, so just waiting for a while...
13:36:23 <mriedem> but i think if you also use os-quota-sets for a tenant id, you get default quotas too if there aren't any overrides
13:36:26 <mriedem> for the tenant i mean
13:36:39 <mriedem> so,
13:36:43 <gmann> yea, if not overridden
13:36:51 <mriedem> to break this in v2.1 and then fix it in 2.46 or whatever seems wrong
13:37:35 <mriedem> although anyone past 2.1 using this api would start getting different results
13:37:40 <gmann> we had the same case with vif's net-id: broken in v2.1 and fixed in, i think, 2.8
13:38:12 <mriedem> we dropped the 2.0 code in newton?
13:38:25 <sdague> so, the bug isn't super clear to me. This is that server_groups isn't in the key list?
13:38:27 <gmann> mriedem: yes
13:38:32 <mriedem> i'm just thinking of a case where the cloud isn't new enough to get this fix, but is new enough that you can't use 2.0
13:38:55 <mriedem> "if extension 'os-server-group-quotas' was enabled in v2, then 'server_groups' and 'server_group_members' were present in APIs response."
13:39:08 <alex_xu> sdague: yes
13:39:13 <sdague> but we're not including those now
13:39:26 <gmann> sdague:  yea, 2.1 never included those
13:39:28 <sdague> and half the world hates server groups?
13:39:38 <mriedem> but NFV loves them
13:39:47 <mriedem> which is 75% of openstack revenue :)
13:40:08 <alex_xu> see the thread about dropping retries, there are people using it outside NFV
13:40:18 <mriedem> that's the 25%
13:40:31 <sdague> alex_xu: that's assuming that all affinity is done with server groups
13:40:43 <sdague> you can also do affinity at boot time
13:40:59 <mriedem> affinity with server groups is at boot time
13:41:12 <mriedem> anyway,
13:41:25 <sdague> mriedem: a server group is created if you pass a scheduler hint during boot?
13:41:50 <mriedem> sdague: i'd have to dig, but it's irrelevant there isn't it
13:41:56 <mriedem> *here
13:42:00 <sdague> yeh, sure
13:42:07 <sdague> just trying to make sure I understand the scope of the problem
13:42:12 <gmann> users can still add/update the server groups quota but the API does not show that back in the quota class set APIs
13:42:23 <alex_xu> how many people use quota-class for setting default values?
13:42:24 <gmann> the quota set APIs have no issue
13:42:34 <sdague> gmann: right, that feels like a bug fix to go fix and backport
13:42:56 <gmann> sdague: but we can backport to newton only, and we have had this since kilo
13:42:57 <sdague> and a note in api-ref that some versions had a bug where those keys weren't shown, it was not intentional
13:43:05 <sdague> gmann: right, I get that
13:43:19 <gmann> i see, clear api-ref docs can help
13:43:28 <sdague> yeh, make it as right as we can
13:43:33 <sdague> and explain in docs where we screwed up
13:43:40 <mriedem> hold up,
13:43:49 <mriedem> so we're going to start introducing new keys in the 2.1 response?
13:44:09 <sdague> mriedem: if by new you mean the keys that people already have
13:44:35 <mriedem> already have as of 2.0
13:44:35 <gmann> could it be an interop issue? a few clouds return those keys and a few do not
13:44:43 <sdague> that we document are there - https://developer.openstack.org/api-ref/compute/?expanded=show-a-quota-detail
13:44:58 <mriedem> that link doesn't work
13:45:14 <sdague> https://developer.openstack.org/api-ref/compute/?expanded=show-a-quota-detail#show-a-quota
13:45:23 <mriedem> that's a different api
13:45:46 <mriedem> alex_xu: to answer your question, we don't know how uses the quota classes api
13:45:54 <mriedem> we don't know how uses any api, or how much, or in what ways
13:45:56 <mriedem> *who
13:46:06 <alex_xu> yea...i see
13:46:15 <gmann> seems like there is no api-ref for quota class sets
13:46:17 <mriedem> this is definitely a regression for anyone that had code that worked on 2.0 and now uses 2.1
13:46:19 <sdague> ok, I guess I didn't understand the bug
13:46:21 <mriedem> if they needed those values
13:46:33 <mriedem> yes there is no api-ref for quota classes apparently
13:46:53 <mriedem> os-quota-sets is the tenant/user specific api
13:47:22 <mriedem> quota classes sets the global defaults
13:47:34 <mriedem> which sets a value in the db which overrides the default quotas in config
13:48:01 <mriedem> the way we check for quota limits is (1) project-specific limits, (2) default quota class in db (3) config
13:48:22 <mriedem> https://docs.openstack.org/developer/nova/quotas.html#checking-quota
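[editor's note: the lookup order mriedem describes, as a rough runnable sketch; the dicts below are hypothetical stand-ins for nova's db tables and config defaults, not the actual quota driver.]

```python
# A rough sketch of the limit lookup order: project override, then the
# 'default' quota class in the db, then the config defaults.
PROJECT_QUOTAS = {('proj-a', 'instances'): 20}   # set via os-quota-sets
DEFAULT_QUOTA_CLASS = {'cores': 40}              # set via os-quota-class-sets
CONFIG_DEFAULTS = {'instances': 10, 'cores': 20, 'ram': 51200}

def get_quota_limit(project_id, resource):
    # 1) a per-project override wins if one exists
    if (project_id, resource) in PROJECT_QUOTAS:
        return PROJECT_QUOTAS[(project_id, resource)]
    # 2) otherwise the 'default' quota class stored in the db
    if resource in DEFAULT_QUOTA_CLASS:
        return DEFAULT_QUOTA_CLASS[resource]
    # 3) otherwise fall back to the config defaults
    return CONFIG_DEFAULTS[resource]

print(get_quota_limit('proj-a', 'instances'))  # 20 (project override)
print(get_quota_limit('proj-b', 'cores'))      # 40 (default quota class)
print(get_quota_limit('proj-b', 'ram'))        # 51200 (config default)
```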
13:48:52 <mriedem> anyway,
13:49:06 <mriedem> we seem to be inconsistent about saying when something needs a microversion or not
13:49:23 <mriedem> as noted, we regressed the device tags stuff after 2.32
13:49:49 <mriedem> that was fixed in 2.42 https://docs.openstack.org/developer/nova/api_microversion_history.html#maximum-in-ocata
13:49:59 <mriedem> rather than bug fixing it into the regressed microversions
13:50:19 <mriedem> it makes for terrible ux, yes
13:50:20 <alex_xu> if something breaks a critical feature, then we fix it without a microversion
13:50:38 <mriedem> alex_xu: do you have an example of that?
13:51:16 <gmann> so we had a similar issue in the past with vif net-id which we missed adding in v2.1 and fixed in 2.12 #link Ic8b26df8d7e69bd71d23dfbc983fa3449c16fa7d
13:51:43 <gmann> and that was mainly for interop things
13:52:11 <sdague> the fact that it's for this API (which isn't documented) instead of the one I thought it was on the user side means I'm less inclined to backport
13:52:24 <sdague> it's an admin API, the only change is the get
13:52:43 <mriedem> right quota classes is only GET and PUT
13:52:49 <gmann> yea
13:53:00 <sdague> and there is a way to update with the config and set things that way
13:53:07 <sdague> it's annoying that the read path is dumb
13:53:08 <mriedem> https://github.com/openstack/nova/blob/master/nova/policies/quota_class_sets.py
13:53:20 <sdague> I'd say that any fix should come with API docs
13:53:29 <mriedem> i don't know what 'is_admin:True or quota_class:%(quota_class)s', is
13:53:52 <mriedem> does that mean, you can GET if you're admin or the quota_class is something you've defined in policy?
13:54:06 <sdague> mriedem: I honestly have no idea, it's policy, it's madness
13:54:22 <mriedem> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/quota_classes.py#L66
13:54:40 <mriedem> so the id there is like 'default'
13:54:44 <mriedem> that's the only quota class we have
13:54:51 <gmann> mriedem: i think so, admin or user defined ids for quota class in policy
13:54:59 <mriedem> there is no user-defined id
13:55:12 <mriedem> btw, there is no way to list quota classes,
13:55:13 <mriedem> there is only 1
13:55:17 <mriedem> it is the alpha and omega
13:55:25 <mriedem> unless you're rax probably
13:55:25 * alex_xu can't find that example, as I remember there is strict validation for the instance name
13:55:47 <mriedem> anyway i don't know how that policy works, if it even does anything
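[editor's note: for reference, a standalone sketch of how oslo.policy evaluates a check string of that shape; the check string is the one quoted above, while the rule name, target, and credential values are illustrative.]

```python
# A minimal sketch of oslo.policy evaluating
# "is_admin:True or quota_class:%(quota_class)s".
from oslo_config import cfg
from oslo_policy import policy

cfg.CONF([])  # initialize config so the enforcer can read its options
enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_default(policy.RuleDefault(
    'os_compute_api:os-quota-class-sets',
    'is_admin:True or quota_class:%(quota_class)s'))

# The target comes from the request, e.g. the id in GET /os-quota-class-sets/{id}.
target = {'quota_class': 'default'}
creds = {'is_admin': False, 'project_id': 'proj-a'}

# Fails: not admin and no 'quota_class' attribute in the credentials
# (which a normal nova request context does not carry).
print(enforcer.enforce('os_compute_api:os-quota-class-sets', target, creds))
creds['quota_class'] = 'default'
# Passes: the credential attribute now matches the target's quota_class.
print(enforcer.enforce('os_compute_api:os-quota-class-sets', target, creds))
```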
13:55:59 <mriedem> so,
13:56:09 <mriedem> i think if you can still get the quota limits for those keys via os-quota-sets,
13:56:13 <alex_xu> 4 mins left
13:56:15 <mriedem> which i think you can, but it would be good to confirm,
13:56:23 <mriedem> then we fix this in os-quota-class-sets via a microversion
13:56:29 <mriedem> and document the workaround
13:56:45 <gmann> mriedem: yea, i confirmed os-quota-sets returns those
13:57:16 <mriedem> the only difference is os-quota-sets is per-project
13:57:22 <mriedem> but if you're the admin project, then that should be ok
13:57:26 <gmann> #link https://github.com/openstack/nova/blob/master/doc/api_samples/os-quota-sets/quotas-show-defaults-get-resp.json
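[editor's note: a rough sketch of the workaround being discussed, assuming python-novaclient with a keystoneauth1 session; credentials and the project id are placeholders, and the exact keys returned depend on the microversion in use.]

```python
# Compare the quota class API with the os-quota-sets defaults.
from keystoneauth1 import loading, session
from novaclient import client as nova_client

auth = loading.get_plugin_loader('password').load_from_options(
    auth_url='http://keystone.example.com/identity/v3',
    username='admin', password='secret', project_name='admin',
    user_domain_name='Default', project_domain_name='Default')
nova = nova_client.Client('2.1', session=session.Session(auth=auth))

# os-quota-class-sets: the v2.1 response is missing the server group keys.
print(nova.quota_classes.get('default').to_dict())

# Workaround: the os-quota-sets defaults for a project do include
# server_groups and server_group_members when nothing is overridden.
print(nova.quotas.defaults('some-project-id').to_dict())
```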
13:57:39 <mriedem> ok good
13:57:48 <mriedem> i really need a devstack so i can poke at things like this
13:58:22 <mriedem> nova quota-class-show default
13:58:23 <alex_xu> 2 mins left
13:58:24 <mriedem> is the other thing to test
13:58:26 <mriedem> as non-admin
13:58:29 <gmann> yea, it took me a while to confirm all this by taking the mitaka branch and enabling that extension in v2
13:58:31 <mriedem> if someone has a devstack
13:58:38 <alex_xu> so.. what is our decision?
13:58:55 <mriedem> alex_xu: i think we document the workaround and fix os-quota-class-sets in a microversion
13:58:59 <mriedem> sdague: gmann: ^ agreed?
13:59:14 <alex_xu> 50 sec for vote
13:59:21 <sdague> mriedem: yep, works for me
13:59:23 <gmann> yea, ok for microversion
13:59:25 <mriedem> i'll update the bug report
13:59:36 <gmann> mriedem: thanks.
13:59:45 <alex_xu> ok...great
14:00:05 <alex_xu> it is time to close the meeting, thanks all
14:00:07 <alex_xu> #endmeeting