18:00:02 <lbragstad> #startmeeting keystone
18:00:03 <openstack> Meeting started Tue Dec 19 18:00:02 2017 UTC and is due to finish in 60 minutes.  The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:07 <openstack> The meeting name has been set to 'keystone'
18:00:07 <lbragstad> #link https://etherpad.openstack.org/p/keystone-weekly-meeting
18:00:09 <raildo> o/
18:00:09 <lbragstad> agenda ^
18:00:11 <cmurphy> o/
18:00:15 <lbragstad> ping ayoung, breton, cmurphy, dstanek, edmondsw, gagehugo, henrynash, hrybacki, knikolla, lamt, lbragstad, lwanderley, kmalloc, rderose, rodrigods, samueldmq, spilla, aselius, dpar, jdennis, ruan_he
18:00:20 <lbragstad> davidalles: ^
18:00:21 <hrybacki> o/
18:00:24 <lbragstad> o/
18:00:26 <gagehugo> o/
18:00:27 <ruan__he> o/
18:00:32 <davidalles_> hello
18:00:36 <lbragstad> we have all sorts of stuff to talk about today
18:00:42 <cleong> hi
18:01:04 <lbragstad> just a quick couple of announcements, gagehugo is gonna give a quick vmt update, then we'll get into the specs
18:01:36 <lbragstad> we'll give folks a couple minutes to join
18:03:14 <lbragstad> alright - let's get started
18:03:28 <lbragstad> #topic announcements: feature proposal freeze
18:03:32 <lbragstad> #info Feature proposal freeze for Queens is this week
18:03:45 <lbragstad> for what we've accepted for this release, i don't think anything is in jeopardy
18:04:03 <lbragstad> unified limits, application credentials, project tags, and system scope all have implementations that are up and ready for review
18:04:26 <lbragstad> which means we will have a busy milestone for reviews, but glad to see things proposed
18:04:39 <lbragstad> in addition to that
18:04:40 <lbragstad> #info All targeted/expected specifications have merged
18:04:46 <hrybacki> good job to all the folks doing hard work to make that happen
18:04:53 <lbragstad> +2
18:05:10 <gagehugo> yay
18:05:29 <lbragstad> so - as far as what we discussed at the PTG and the Forum/Summit, we're consistent in our roadmap and our specification repository
18:05:34 <lbragstad> which is a really good thing
18:05:53 <lbragstad> especially now that unified limits merged
18:05:54 <lbragstad> #link https://review.openstack.org/#/c/455709/
18:06:11 <lbragstad> thanks cmurphy kmalloc wxy for pushing that through and capturing the details we got from sdake
18:06:16 <lbragstad> sdague*
18:06:21 <lbragstad> (sorry sdake!)
18:06:32 <sdake> lbragstad it happens
18:06:39 <sdake> (all the time:)
18:06:48 <gagehugo> lol
18:07:03 <lbragstad> :)
18:07:05 <cmurphy> woot
18:07:46 <lbragstad> tons of things are up for review for those looking to review some code
18:07:55 <lbragstad> new features too that need to be played with
18:08:21 <lbragstad> and without a doubt, some client stuff that still needs to be done based on the new feature work
18:08:34 <lbragstad> (unified limits, application credentials, for sure)
18:08:58 <lbragstad> there's no shortage of work, that's for sure :)
18:09:21 <lbragstad> thanks again to folks for shaping up the release, things are looking good
18:09:31 <lbragstad> #topic VMT coverage
18:09:33 <lbragstad> gagehugo: o/
18:09:46 <gagehugo> o/
18:10:13 <gagehugo> so fungi and lhinds have looked over the vmt doc for keystonemiddleware and documented their findings here
18:10:23 <gagehugo> #link https://etherpad.openstack.org/p/keystonemiddleware-ta
18:10:44 <gagehugo> so now I think the idea is that the keystone team and them meet to finalize it
18:11:00 <lbragstad> sounds like we need to review their findings
18:11:05 <gagehugo> yeah
18:11:14 <gagehugo> s/finalize/review and finalize/
18:11:33 <gagehugo> we can set up a meeting after the holidays if that works
18:11:51 <lbragstad> that sounds good to me
18:11:53 <gagehugo> also there are drafts for keystoneauth and oslo.cache
18:12:08 <gagehugo> #link https://review.openstack.org/#/q/topic:keystone-vmt
18:12:11 <kmalloc> gagehugo: thanks! I appreciate you working so hard on this
18:12:26 <gagehugo> kmalloc :) anytime
18:12:34 <gagehugo> it's been an interesting process
18:12:48 <gagehugo> that's all I got for that update
18:12:49 <lbragstad> ++
18:12:54 <lbragstad> cool
18:12:57 <lbragstad> thanks gagehugo
18:13:12 <lbragstad> #topic specification review and discussion
18:13:21 <lbragstad> hrybacki:  ruan__he davidalles_ o/
18:13:24 <hrybacki> o/
18:13:28 <lbragstad> #link https://review.openstack.org/#/c/323499/
18:13:34 <davidalles> here we are
18:13:49 <davidalles> Enterprise objective in telecom Cloud is to be able to Audit and Remediate the list of users and their roles on multiple openstack deployments inside one country, via dedicated processes run outside the openstack IaaS
18:13:54 <kmalloc> uhm.
18:13:58 <davidalles> As a consequence we want to implement the Identity and Access Management features inside the IaaS regions, so that local keystones are the source of truth on a ‘per-openstack deployment’ basis
18:14:01 <kmalloc> my -2 disappeared from a patch.
18:14:12 <kmalloc> hold on i need to go chat w/ infra. brb
18:14:12 <lbragstad> kmalloc:  doh!
18:14:20 <hrybacki> still there kmalloc
18:14:24 <davidalles> currently what is missing is the proper way to synchronize the reference for projects
18:14:45 <davidalles> on the various openstack deployments
18:15:00 <jdennis> this is essentially an attempt to federate diverse clouds running different versions of openstack
18:15:01 <kmalloc> nvm - terrible gerrit UI, old patches don't show scores.
18:15:12 <davidalles> ==>synchro is run from one external tool
18:15:22 <davidalles> which will use the standard keystone APIs
18:15:46 <davidalles> except for the keystone/createProject where we would like to 'force' the project's ID:
18:16:02 <davidalles> same openstack ID on all deployment
18:16:09 <ruan__he> when we talk about an external tool, it's a module which coordinates multiple Openstack deployments
18:16:21 <davidalles> correct
18:16:24 * knikolla sorry for being late
18:16:45 <ruan__he> if we have this feature, the module will be able to create projects on different openstack deployments with the same id
18:17:02 <ruan__he> this is what we call "synchronization"
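For context, the change being requested here is, roughly, letting the external coordination tool pass an explicit project ID when creating the "same" project on each deployment. The sketch below is purely illustrative: the v3 API does not accept a caller-supplied project ID today (that is exactly the point of contention that follows), and the endpoint URLs, tokens, and the "id" field in the request body are hypothetical, showing the proposed behaviour rather than an existing feature.

```python
# Hypothetical illustration of the request the external sync tool would like
# to make. Keystone's v3 API does NOT currently accept a caller-supplied "id"
# on project create; this shows the behaviour proposed in the spec.
import requests

PROJECT_ID = "d0c0ffee2b6f4b3c9c0ffee000000001"  # ID already allocated on site 1

for keystone_url, token in [
    ("https://site1.example.com:5000/v3", "TOKEN_SITE_1"),   # placeholder endpoints/tokens
    ("https://site2.example.com:5000/v3", "TOKEN_SITE_2"),
]:
    resp = requests.post(
        keystone_url + "/projects",
        headers={"X-Auth-Token": token},
        json={"project": {
            "id": PROJECT_ID,          # <-- the contested part: forcing the ID
            "name": "customer-abc",
            "domain_id": "default",
            "enabled": True,
        }},
    )
    resp.raise_for_status()
```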
18:17:27 <jdennis> shouldn't this be an upstream project to work out all the details of federated disjoint clouds?
18:17:48 <ruan__he> it could be, I've already presented this work to LOCC
18:18:10 <ruan__he> but, don't we need to do that step by step?
18:18:25 <lbragstad> davidalles: ruan__he when we talked, it sounded like you had some restrictions you had to adhere to?
18:19:16 <davidalles> my request to the keystone team is to start step-by-step: allowing the project’s ID to be synchronized across sites
18:19:40 <davidalles> (then we may open a refactoring for this external tool for IAM & ACM)
18:19:41 <kmalloc> i am against the standard APIs being used for that.
18:19:42 <kmalloc> ftr
18:19:51 <kmalloc> this should be a system-scoped/admin only option IMO
18:20:01 <ruan__he> kmalloc, this feature existed before
18:20:07 <kmalloc> not in v3
18:20:16 <kmalloc> it has never existed in v3
18:20:36 <davidalles> sounds before noone fully expressed a need to 'port it' to v3?
18:20:52 <davidalles> sounds no one fully..
18:21:00 <kmalloc> no, we explicitly decided that we didn't want it in v3, it was brought up
18:21:09 <kmalloc> and it was decided that v3 100% controls all ids.
18:21:23 <davidalles> ah, it is different!
18:21:39 <ruan__he> for what reason do you want 100% control?
18:22:12 <kmalloc> for a lot of reasons, mostly centering around the fact that we had issues with guaranteeing no conflicts for generated/derived resources
18:22:18 <kmalloc> aka, mapping ending for users.
18:23:15 <davidalles> [I understand that CERN also has the same request, driven by the same requirement]: how do you identify an alternative approach
18:23:23 <kmalloc> it prevents malicious use / issues: creating a project that could then generate a known set of IDs for mapped users from a different IdP, or surreptitiously owning a domain/project prior to the IdP being linked
18:23:27 <davidalles> to comply with the requirement?
18:23:33 <ruan__he> if it is only an option, it will not change the standard usage
18:23:35 <rmascena> I believe that the revocation events + role assignment controls will become much more difficult to deal with if we enable that change, so I believe we should specify in that spec how we are going to deal with that kind of sync event in that scenario
18:23:52 <kmalloc> so, if you want this, I am going to tell you it cannot be in the standard APIs.
18:24:06 <kmalloc> it must be done as a privileged (cloud admin/system scope)
18:24:26 <kmalloc> it means a cloud admin/system scope is needed to do this work, not a "domain admin"
18:24:33 <kmalloc> the issue is around domain-admins who can create projects.
18:24:39 <lbragstad> even so - we should still think about the ramifications of the revocation argument
18:24:43 <kmalloc> and similar effects.
18:24:52 <cmurphy> why does this need to be part of the API instead of synchronizing the project table?
18:25:14 <davidalles> Agreed: let's enforce this via 'system scoping' only because this is a Cloud Service Provider privilege
18:25:25 <jdennis> cmurphy: because they are entirely different clouds
18:25:35 <knikolla> why not use k2k with a mapping which guarantees project -> project mappings.
18:25:39 <hrybacki> ^^ does everyone understand this is their use case?
18:25:51 <kmalloc> now. that said, the best option is to navigate allowing an external authority on project IDs similar to federated auth, that can be sync'd.
18:25:56 <kmalloc> but that is new tech.
18:25:58 <davidalles> ... I understand that K2K is synchronization for authentication, not authz
18:26:01 <kmalloc> and i don't know how that works.
18:26:09 <hrybacki> davidalles: ruan__he can you explain (high level) your project and how it differs from the way the cores are used to working with OpenStack?
18:26:38 <kmalloc> k2k is basically SAML2 between keystones.
18:26:42 <ruan__he> ok, we set up an upper layer to coordinate multiple independent openstack deployments
18:26:58 <kmalloc> it is AuthN with AuthZ still being based upon the SP side of the handshake (RBAC)
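For reference, this is roughly what the K2K flow kmalloc describes looks like from a client using keystoneauth1. The endpoint URLs, credentials, service-provider id ("sp-site2"), and project id are placeholders, and the exact plugin signature should be checked against the keystoneauth1 release in use; a minimal sketch:

```python
# Rough sketch of keystone-to-keystone (K2K) federated auth with keystoneauth1.
# "sp-site2" is a hypothetical service provider registered in the IdP keystone.
from keystoneauth1 import session
from keystoneauth1.identity import v3

# Authenticate against the "home" (identity provider) keystone first.
local_auth = v3.Password(
    auth_url="https://site1.example.com:5000/v3",
    username="admin", password="secret",
    user_domain_id="default",
    project_name="admin", project_domain_id="default",
)

# Exchange that for a token issued by the remote (service provider) keystone,
# scoped to a project that exists on the SP side.
k2k_auth = v3.Keystone2Keystone(local_auth, "sp-site2",
                                project_id="d0c0ffee2b6f4b3c9c0ffee000000001")
remote = session.Session(auth=k2k_auth)

# The resulting token is issued (and revocable) by the SP-side keystone,
# so RBAC/authz decisions stay local to that cloud.
print(remote.get_token())
```

Once obtained, that SP-side token can be reused for its whole lifetime, which is the point knikolla raises below about performance.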
18:27:34 <davidalles> I understand that the standard usage is that openstack deployments are independent clouds and customers consume various clouds in a 'hybrid' way (from one or multiple CSPs)
18:27:35 <ruan__he> the admin operates directly on the upper layer, and the upper layer is in charge of sending requests to each openstack
18:27:59 <lbragstad> but it's expected to be able to use the same token between each cloud?
18:28:11 <lbragstad> so it's more like different regions in a single deployment, right?
18:28:14 <hrybacki> lbragstad: yes, that is what they are going for. I don't know of anyone else that has tried this
18:28:23 <kmalloc> things we also need to explicitly cover: what happens if cloudX has the same ID already as CloudY for a project. -- this is something we can't really fix (nor should we) but it is a major security concern [in your upper layer]
18:28:25 <hrybacki> single openstack per region
18:28:40 <hrybacki> being synced/controlled by an 'over the top layer' of custom software
18:28:47 <kmalloc> if you're using the same token across different clouds, i am a hard -2 on all this.
18:28:55 <kmalloc> we're not adding that support upstream in keystone.
18:28:59 <davidalles> Here for Telco cloud with multiple sites, the CSP is responsible for the creation of the customer accounts (projects on all sites, allocating the users and configuring their permissions)
18:29:08 <knikolla> with k2k you can trivially get a token for the other side.
18:29:09 <kmalloc> it is just way way way too much exposure.
18:29:24 <ruan__he> as a security expert, I see the same project having different project IDs in each openstack as a security concern!
18:29:30 <kmalloc> k2k is explicitly setup and you're getting a token issued by the otherside.
18:29:50 <kmalloc> so revocations/project checks/etc are all managed on the SP side.
18:30:12 <davidalles> and because the CSP is the Telco operator it is forced to comply with local regulation, with strict IAM processes and strong Audit & Compliance management
18:30:19 <hrybacki> kmalloc lbragstad -- how do we handle very large consumers of OpenStack that want to push bounds? I think the 'opt in' approach here makes this an 'enter at your own risk'. Perhaps there is a process around 'experimental' features?
18:31:18 <lbragstad> we used to mark APIs as experimental, but we haven't done that in a while
18:31:24 <lbragstad> we also don't do extensions
18:31:30 <kmalloc> hrybacki: simply put, I'm going to say if you want to use the same token on multiple clouds, you're doing a ton of synchronization already, all IDs must be in sync, so my stance is I am not conceding anything besides project IDs via system scope
18:31:44 <davidalles> 'experimental' for CSP usage at 'system scope' would be a very nice balance for us: and our openstack vendor will support this with or without a support exception.
18:32:10 <davidalles> but on my side I am not requesting to synchronize all the IDs on all the resources
18:32:16 <ruan__he> kmalloc, we have tested K2K, it has a performance issue for our requirement
18:32:33 <lbragstad> is it possible to get bugs opened for those performance issues?
18:32:35 <kmalloc> you're probably moving into realms where you're solving how to sync the backends for keystone behind the scenes. i don't want every resource to have the "submit an API request with an ID to make a special resource"
18:32:49 <davidalles> reminder: we only need to be able to control the assignment on a 'per-keystone deployment' basis
18:33:02 <kmalloc> ruan__he: how about proper SSO [OIDC] or SAML to the individual keystones
18:33:03 <kmalloc> ?
18:33:15 <kmalloc> rather than "just take a token issued by keystone A to keystone B"
18:33:15 <knikolla> ruan__he: only when you get a token, you can reuse it for the lifetime of it with zero impact on performance
18:33:28 <lbragstad> davidalles: that can be done with role assignments on shadow-users, right?
18:33:40 <knikolla> kmalloc: i agree, this sounds like a federation scenario
18:33:50 <jdennis> I am really concerned about all the other IDs, this just feels like an opportunity for difficult-to-debug synchronization issues.
18:34:17 <davidalles> all the other resources (and their IDs) are in the 'project scoping'
18:34:20 <kmalloc> jdennis: i think this is something that needs to be built into the backend in the write phase, a driver. not on top of the APIs.
18:34:24 <davidalles> then no request on them
18:34:59 <lbragstad> davidalles: ruan__he is it possible to see what happened with your performance findings?
18:35:17 <lbragstad> if there are things that aren't performant from a federation perspective upstream, we should fix those
18:35:44 <ruan__he> if we make this feature an option, we can show you later how we achieve this federation
18:36:11 <lbragstad> it sounded like you tried k2k already but hit performance road blocks?
18:36:17 <ruan__he> lbragstad, we worked on k2k 2 years ago, and then we decided to go with another approach
18:36:50 <lbragstad> were the issues that drove you to another approach performance related?
18:36:59 <jdennis> but you're asking to expose an API that may have significant issues, what do we do when the API is present and the problems are found?
18:37:13 <ruan__he> another important point is that it's not only about the federation of 2-3 openstack deployments, but maybe about more than 10!
18:38:03 <davidalles> This requirement is on the shoulders of the CSP, to comply with regulations and strict OPS processes
18:38:13 <kmalloc> jdennis: i really feel like this whole thing needs to be a driver.
18:38:29 <davidalles> and indeed we are dealing with tens of telco datacenters
18:38:29 <kmalloc> jdennis: so it can coordinate centrally and provide acks/etc.
18:38:50 <kmalloc> rather than a soft "we'll try and do it via public APIs"
18:39:03 <lbragstad> davidalles: who is the CSP in this case? just so everyone can level set on terms
18:39:36 <lbragstad> ruan__he:  if we're federating 10 different deployments and it trips up because performance is bad, then we should try to fix those performance issues
18:39:58 <davidalles> The CSP is the Telco itself: both the IaaS provider
18:40:00 <ruan__he> kmalloc, maybe a driver would be a good choice, but at least we should start by making it an option?
18:40:16 <davidalles> of the multiple Telco DCs and the customer who deploys the workloads
18:40:27 <kmalloc> the benefit is a driver can be developed out of tree; you can build in all the logic to coordinate the project-id sync.
18:40:39 <kmalloc> and if you're liking it, we can consider it for in-tree.
18:40:49 <kmalloc> or it could be on a feature branch
18:41:28 <ruan__he> we can contribute to this feature first, and take charge of developing a driver later
18:41:37 * lbragstad thinks that could lead to interop issues
18:41:51 <kmalloc> so, let me be really clear, a "new API" is a big hurdle
18:42:06 <kmalloc> we cannot ever remove an API
18:42:10 <kmalloc> so.
18:42:24 <davidalles> to ALL: is driver logic acceptable, given that you asserted that 'V3 must control the generation of IDs'; the driver is installed by the CSP
18:42:42 <davidalles> ... the 'system scoping' in other way
18:42:48 <ruan__he> it's not a new API, but an option on an existing API
18:42:49 <kmalloc> the driver is below the API, so - the "make a project" generates an ID in the manager/controller layer
18:42:57 <hrybacki> kmalloc: are modifications to an existing api considered a new api?
18:42:58 <jdennis> no new API's, especially one that could be problematic
18:43:01 <kmalloc> ruan__he: it is NOT an option for an existing API
18:43:28 <kmalloc> hrybacki: i'm going to -2 changes to the project_api for specifying the ID.
18:43:50 <kmalloc> this really needs to be a system-scope thing if we're doing it.
18:43:57 <kmalloc> i'm willing to be open to that.
18:44:11 <kmalloc> just not a modification of the current project_api.
18:44:12 <kmalloc> now....
18:44:17 <hrybacki> let's help ruan__he and davidalles develop a plan that is congruent with the core's thoughts
18:44:22 <davidalles> a driver could be acceptable: but could this be done as experimental in Queens?
18:45:02 <davidalles> because Red Hat is our distro vendor and they work on a Long Term Support basis: Queens, then the 'T' release
18:45:04 <kmalloc> davidalles: it is something that would be best on a feature branch or out of tree as experimental - we've largely moved away from in-tree experimental (though lbragstad could say I'm wrong and we can do that again, i'd be ok with it)
18:45:27 <kmalloc> so in driver logic, as a quick rundown of how it would work:
18:45:42 <kmalloc> an API request to make a new project is received; the id is generated at the manager/controller layer
18:46:32 <kmalloc> the driver would check/sync the project ID to the participating clouds (with ACKs) -- this is not via public APIs but through another mechanism, to guarantee uniqueness and acks and to handle the "cloud was down and needs to get the created ids after it comes back up" case
18:46:39 <kmalloc> driver would then store the project id locally.
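A minimal sketch of the driver approach being described here, assuming the Queens-era resource driver layout (keystone.resource.backends.sql.Resource) and a hypothetical external ID coordination service ("idsync"); the base class, method signatures, and coordinator API are assumptions to be verified against the targeted keystone release, not an agreed design:

```python
# Sketch only: a custom resource driver that asks an external coordinator which
# project ID to use, so all participating clouds end up storing the same ID.
# The coordinator service and its API are hypothetical; the base class name
# reflects the Queens-era layout (keystone/resource/backends/sql.py).
import requests

from keystone.resource.backends import sql

COORDINATOR_URL = "https://idsync.example.com"  # hypothetical central ID store


class SyncedResource(sql.Resource):

    def create_project(self, project_id, project):
        # Ask the coordinator for the canonical ID for this (domain, name)
        # pair; it either returns the ID already allocated on another site
        # or records the locally generated one as the canonical value.
        resp = requests.post(
            COORDINATOR_URL + "/v1/project-ids",
            json={"name": project["name"],
                  "domain_id": project["domain_id"],
                  "proposed_id": project_id},
            timeout=5,
        )
        resp.raise_for_status()
        canonical_id = resp.json()["id"]

        # Store the canonical ID locally; the API response then shows it.
        project["id"] = canonical_id
        return super(SyncedResource, self).create_project(canonical_id, project)
```

If something like this were used, the class would be selected via keystone's config (the [resource] driver option, per the developing-drivers documentation linked later in the meeting), and the catch-up/retry logic kmalloc mentions for sites that were down would live in the external coordinator rather than in keystone itself.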
18:47:06 <cmurphy> ++
18:47:15 <lbragstad> so - we developed k2k specifically for this
18:47:19 <kmalloc> you need to handle the sync logic, and the "catchup from downtime" logic in the external service / keystone-comes-back-on-line.
18:47:20 <jdennis> kmalloc: acks being "yes, I can accept this id"?
18:47:31 <kmalloc> jdennis: correct. or "I accepted and stored locally"
18:47:33 <lbragstad> if there are performance issues with k2k, then it'd be good to see what those are and go fix those
18:47:42 <kmalloc> lbragstad: ++
18:47:47 <knikolla> lbragstad: ++
18:48:00 <kmalloc> jdennis: probably a combination of "i can and did accept"
18:48:04 <lbragstad> because that means there are still issues in that implementation and it's not useable for what we designed it for
18:48:21 <kmalloc> and I 100% get the desire for the same project_ids across deployments
18:48:47 <kmalloc> i think that is a fine thing to strive towards, i worry about public apis to do the sync
18:48:55 <ruan__he> I haven't understood how the driver can guarantee the same project ID?
18:49:26 <kmalloc> ruan__he: the driver can modify the values stored. we only display the id in the response
18:49:34 <cmurphy> at the manager layer it would have to query the other cloud to get the ID and then use that instead of generating its own
18:49:49 <kmalloc> so you could even ask "hey external service, issue me a unique-uuid-that-is-not-conflicted" you can do that
18:50:42 <ruan__he> it seems to be a good temporary solution in the short term
18:50:45 <davidalles> yep: seems possible: on site 'Y' or 'X', this request would result in 'the centralized ID generator knows the ID to be given because it was already allocated on site 1'
18:51:26 <kmalloc> you're still dealing with something that is sync'd between sites, but if it is just a store of IDs you can use a high-performance lookup table
18:51:42 <kmalloc> and that should avoid the legal troubles of syncing the actual keystone data
18:51:58 <kmalloc> since it's all random (cryptographically generated) IDs and nothing else.
18:52:34 <kmalloc> now, that said, it would still be a requirement that the IDs are UUID-like in form, since other projects require that.
18:52:37 <kmalloc> (e.g. nova)
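For reference on the "UUID-like in form" point: keystone today generates project IDs as bare uuid4 hex strings, so an external allocator would want to hand out the same shape; a tiny illustration (the claim about keystone's generator reflects current behaviour and should be re-checked per release):

```python
# Keystone generates project IDs as 32-character uuid4 hex strings (no dashes).
# A central/external ID allocator should produce the same shape so that other
# services that expect UUID-like IDs (e.g. nova) keep working.
import uuid

project_id = uuid.uuid4().hex
print(project_id, len(project_id))  # e.g. 1f9c1c9f4a3e4d2b8c7a6e5d4c3b2a10 32
```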
18:53:00 <kmalloc> i am very very concerned with changing the APIs and/or adding a new API.
18:53:21 <lbragstad> is it possible to get the performance metrics where k2k starts to become unusable?
18:53:26 <knikolla> also look into https://github.com/openstack/mixmatch
18:53:30 <kmalloc> but i would like to also fix k2k
18:53:31 <knikolla> if you decide to go k2k
18:53:40 <kmalloc> if there are real performance issues.
18:53:50 <kmalloc> it seems like this is exactly what k2k is meant to fix.
18:53:53 <ruan__he> but if the modification of api doesn't impact others, I don't think that it will be an issue
18:53:54 <knikolla> it's a k2k proxy with caching.
18:54:19 <hrybacki> I would still like to hear an ack/nack on the idea of running this as an experimental feature (for the record)
18:54:19 <lbragstad> that's my concern... we built something to solve this already, but we're going to try and solve it again by changing our API
18:54:24 <kmalloc> ruan__he: you're missing the point. APIs are not allowed to be "config based behavior" in keystone.
18:54:33 <kmalloc> so, once we land a change it's forever
18:54:41 <kmalloc> and we need to support it
18:54:46 <knikolla> and we pledged to never remove v3.
18:54:58 <kmalloc> we already built things for this, if they aren't working for perf reasons, i want to fix that
18:55:00 <ruan__he> kmalloc, i get the point, that's why I called it an option
18:55:03 <kmalloc> if they don't work for functional reasons.
18:55:46 <kmalloc> then i want to supply more functional support, but adding/changing APIs is a large hurdle if it could be done behind the scenes cleanly and not impacting interop/security of the APIs (user-facing UX)
18:55:49 <jdennis> ruan__he: there is no such thing as an option, it's either in the API or not
18:55:50 <lbragstad> the concern with an optional solution is that it can change across deployments (which is something we've been trying not to do openstack-wide)
18:55:54 <kmalloc> jdennis: ++
18:56:25 <lbragstad> it just makes interoperability really hard for consumers moving between clouds
18:56:30 <kmalloc> i've -2'd a number of things that are "optional" proposed to keystone. Keystone should work 100% the same (API wise) across all deployments of that release.
18:56:53 <davidalles> ... OK... because I am not a source code specialist: is this driver mechanism already available: can we use it inside the existing Queens code
18:57:09 <ruan__he> kmalloc, but at the same time, we should improve keystone for new use cases
18:57:19 <davidalles> without any source code modification ... ? or even on previous versions?
18:57:35 <kmalloc> davidalles: you can load any driver via keystone's config
18:57:44 <kmalloc> you could subclass the current one and write your logic over the top
18:57:57 <kmalloc> so supply a driver and say "load this driver"
18:58:13 <ruan__he> kmalloc, the way we use openstack is becoming much broader than before
18:58:21 <cmurphy> there is some minimal documentation here https://docs.openstack.org/keystone/latest/contributor/developing-drivers.html
18:58:28 <lbragstad> #link https://docs.openstack.org/keystone/latest/contributor/developing-drivers.html
18:58:36 <lbragstad> bah - cmurphy beat me to it!
18:58:56 <davidalles> understood: thanks.... Ruan, how do we measure the work effort to develop the driver? staff-weeks, and what level of knowledge is needed to do it?
18:59:08 <lbragstad> one minute remaining
18:59:18 <davidalles> and how do we share it with our partner?
18:59:18 <lbragstad> we can move to -keystone to finish this up if needed
18:59:24 <hrybacki> apologies for the lack of open discussion
18:59:26 <davidalles> brauld: are you there?
18:59:30 <lbragstad> but the infra folks need the meeting room soon :)
18:59:40 <cmurphy> i wrote an assignment driver myself, it only took a couple of days
18:59:49 <cmurphy> a resource driver would not take that much work
18:59:54 <kmalloc> drivers are easy imo
18:59:55 <lbragstad> let's move to openstack-keystone
19:00:00 <lbragstad> #endmeeting