16:00:19 <lbragstad> #startmeeting keystone
16:00:19 <openstack> Meeting started Tue Jun  5 16:00:19 2018 UTC and is due to finish in 60 minutes.  The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:22 <openstack> The meeting name has been set to 'keystone'
16:00:26 <lbragstad> #link https://etherpad.openstack.org/p/keystone-weekly-meeting
16:00:28 <lbragstad> agenda ^
16:00:34 <lbragstad> ping ayoung, breton, cmurphy, dstanek, gagehugo, hrybacki, knikolla, lamt, lbragstad, lwanderley, kmalloc, rodrigods, samueldmq, spilla, aselius, dpar, jdennis, ruan_he, wxy, sonuk
16:00:52 <knikolla> o/
16:01:04 <hrybacki> o/
16:01:29 <gagehugo> o/
16:02:23 <sonuk> o/
16:03:21 <cmurphy> o/
16:04:01 <lbragstad> #topic proxy calls between projects
16:04:03 <lbragstad> knikolla: o/
16:04:06 <knikolla> o/
16:04:34 <knikolla> what i'm working on (and what pays my bills) is resource federation in openstack
16:04:52 <knikolla> currently i have a proxy called mixmatch which proxies api calls between services in different deployments
16:05:10 <knikolla> handling k2k saml exchange, and scoping the token to the correct project on the remote deployment
16:05:30 <kmalloc> o/
16:05:35 <knikolla> i was thinking: if we forget about k2k, the proxy could be used to allow nova to access things in other projects
16:05:46 <knikolla> for example, attaching a volume in a different project
16:05:59 <knikolla> essentially allowing granular ACLs, since you can divide things in many projects.
16:06:14 <knikolla> this is the elevator pitch.
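(For context, a minimal sketch of the k2k exchange and remote scoping knikolla describes, using keystoneauth1's Keystone2Keystone plugin; the auth URL, credentials, and service provider id below are placeholder assumptions, not mixmatch's actual configuration:)

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    # authenticate against the local keystone first
    local = v3.Password(auth_url='https://local.example:5000/v3',
                        username='demo', password='secret',
                        user_domain_id='default',
                        project_name='demo', project_domain_id='default')

    # exchange the local token for a SAML assertion and scope it to a
    # project on the remote deployment (the k2k flow)
    remote = v3.Keystone2Keystone(local, 'remote-sp',
                                  project_name='demo',
                                  project_domain_id='default')

    sess = session.Session(auth=remote)  # requests made with this hit the remote cloud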
16:06:43 * kmalloc holds the "door close button" to give knikolla more time to pitch.
16:07:07 <knikolla> thanks, haha.
16:07:55 <knikolla> so basically you'd have a project with subprojects. people who should have access to everything are granted a role on the parent; people who just need specific objects get a role on one or more subprojects.
16:08:22 <knikolla> you'd still be able to use the stuff as if it were in a single project though
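(A rough illustration of the layout knikolla describes, with hypothetical project and user names, using standard openstackclient commands:)

    # parent project plus subprojects for finer-grained carve-outs
    openstack project create engineering
    openstack project create --parent engineering eng-volumes
    openstack project create --parent engineering eng-networks

    # alice gets everything: an inherited assignment on the parent propagates
    # down the subtree; bob only gets the volumes subproject
    openstack role add --user alice --project engineering --inherited member
    openstack role add --user bob --project eng-volumes member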
16:08:33 <knikolla> this was brought up in a discussion with ayoung
16:08:40 <knikolla> if he's around
16:08:44 <kmalloc> so the proxy only affects a single hierarchy
16:08:45 <kmalloc> ?
16:09:07 <kmalloc> as in, the resource must be in a peer project or the local project.
16:09:14 <kmalloc> not "any project configured"
16:09:21 <knikolla> potentially.
16:09:32 <knikolla> currently it's less restricted (it looks at all the projects you have access to)
16:09:36 <kmalloc> ah
16:09:41 <kmalloc> ok, so it could be any project
16:09:44 <knikolla> since it uses your token to get a list of projects
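(The "list of projects" lookup knikolla mentions maps to keystone's GET /v3/auth/projects API; a small sketch with a placeholder endpoint and token:)

    import requests

    KEYSTONE_URL = 'https://keystone.example:5000'  # placeholder
    TOKEN = 'gAAAA...'                               # the user's existing token

    resp = requests.get(KEYSTONE_URL + '/v3/auth/projects',
                        headers={'X-Auth-Token': TOKEN})
    # the projects this user can scope to -- the candidate set the proxy searches
    projects = resp.json()['projects']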
16:10:01 <kmalloc> but realistically, it could be limited with enhancements to mixmatch
16:10:14 <knikolla> yes
16:10:15 <kmalloc> ok cool, thanks, was curious on the "current state of the world"
16:10:26 * kmalloc is looking at the mixmatch code >.>
16:12:07 <knikolla> thoughts, opinions, questions?
16:13:04 <lbragstad> knikolla: would you be using this internally for k2k and resource sharing across projects?
16:13:36 <knikolla> lbragstad: its main use case right now is cross-attaching a volume between multiple clouds
16:13:43 <knikolla> the proxy is registered as the cinder endpoint
16:13:48 <wxy|> o/
16:13:49 <knikolla> nova makes an attach call
16:14:06 <knikolla> and the proxy figures out where the volume is, does k2k, gets a token, and forwards the request to the appropriate cloud
16:14:16 <knikolla> and sends the response back
16:14:23 <knikolla> nova is happy without knowing it went to another cloud
16:14:34 <knikolla> potentially, it can be used without k2k, just rescoping inside a cloud
16:14:47 <knikolla> allowing use of resources from one project you have access to in another
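(A rough sketch of that forwarding flow; this is not mixmatch's actual code, and locate_volume()/get_scoped_token() are hypothetical stand-ins for its lookup and k2k logic:)

    from flask import Flask, Response, request
    import requests

    app = Flask(__name__)

    def locate_volume(path):
        """Placeholder: figure out which cloud/project owns the volume."""
        raise NotImplementedError

    def get_scoped_token(local_token, cloud_endpoint, project_id):
        """Placeholder: do the k2k exchange and scope to the remote project."""
        raise NotImplementedError

    @app.route('/volume/v3/<path:path>', methods=['GET', 'POST', 'PUT', 'DELETE'])
    def forward(path):
        local_token = request.headers['X-Auth-Token']
        cloud_endpoint, project_id = locate_volume(path)
        remote_token = get_scoped_token(local_token, cloud_endpoint, project_id)
        resp = requests.request(request.method, cloud_endpoint + '/' + path,
                                headers={'X-Auth-Token': remote_token},
                                data=request.get_data())
        # nova sees a normal cinder response, unaware it came from another cloud
        return Response(resp.content, status=resp.status_code)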
16:15:05 <lbragstad> i want to say i've heard that use case before
16:15:18 <lbragstad> but i can't remember who it was
16:15:53 <kmalloc> it was related to the more fine-grained permissions [splitting resources in the same project]
16:16:08 <kmalloc> same concept but without needing to add extra bits everywhere for access control
16:16:14 <lbragstad> that too
16:16:20 <knikolla> the topic today is related to fine-grained, yes.
16:16:48 <lbragstad> would it be possible to use this to do fine-grained access within the same project?
16:17:15 <knikolla> lbragstad: if you make the endpoint only accessible via proxy, yes
16:17:26 <knikolla> otherwise a user could bypass it and talk straight to the service.
16:17:28 <kmalloc> might be weird.
16:17:31 <lbragstad> sure
16:17:35 <kmalloc> and you could still bypass the proxy
16:17:41 <kmalloc> which defeats the purpose
16:17:47 <knikolla> yes.
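(Registering the proxy as the cinder endpoint, as mentioned earlier, is just a catalog change; a hedged example with an assumed proxy URL and port:)

    openstack endpoint create --region RegionOne volumev3 public http://mixmatch-proxy:5001/volume/v3

Cinder's own endpoint would also have to be unreachable by users, which is the bypass concern kmalloc raises.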
16:18:06 <kmalloc> ayoung had a concept (blog post) about access tags [think k8s] in openstack to solve in-the-same-project fine-grained control
16:18:23 <kmalloc> honestly, i would be unhappy trying to wedge fine grained control via the proxy in a single project
16:18:39 <kmalloc> as it doesn't really buy security in any way, and is easily bypassed
16:18:40 <knikolla> ++
16:18:55 <lbragstad> so the primary use cases is sharing resources across projects
16:19:01 <kmalloc> across projects is no different than across clouds [really]
16:19:12 <knikolla> this is more like: use keystone hmt and role assignments to half-ass access tags.
16:19:40 <kmalloc> it's the inverse of what adam proposed
16:19:53 <kmalloc> it's across project, vs internal project
16:20:22 <kmalloc> we would need something like mixmatch if we wanted access tags intra-project anyway [due to model of data and RBAC in openstack in general]
16:20:27 <knikolla> yes, though if the projects are subprojects of a parent project, it's easier to wrap your brain around.
16:21:12 <kmalloc> so, silly question: can you horizontally scale mixmatch, one per service?
16:21:22 <knikolla> yes.
16:21:23 <kmalloc> one for cinder, one for nova, one for neutron [or a mix of those]
16:21:25 <kmalloc> cool.
16:21:41 <kmalloc> i worry about a single-point-of-failure that does everything(tm) from a design perspective
16:21:48 <kmalloc> and "recommended method of deployment"
16:22:07 <knikolla> you can even have multiple per service.
16:22:28 <knikolla> it's a flask app with a sqlite db for caching where things are. and memcached for caching credentials.
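(A minimal sketch of that caching split, with assumed table and key names rather than mixmatch's real schema: sqlite remembers where a resource lives, memcached holds short-lived credentials:)

    import sqlite3
    from pymemcache.client.base import Client

    # sqlite: map of resource id -> owning cloud/project
    db = sqlite3.connect('mapping.db')
    db.execute('CREATE TABLE IF NOT EXISTS resource_location '
               '(resource_id TEXT PRIMARY KEY, cloud TEXT, project_id TEXT)')

    # memcached: short-lived scoped tokens, keyed by user and remote cloud
    mc = Client(('127.0.0.1', 11211))
    mc.set('token:demo@remote-sp', 'gAAAA...', expire=300)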
16:22:35 <kmalloc> load balanced or do you mean, CinderX and CinderY, the same way you'd have multiple cinders today [does that even work?]
16:22:53 <knikolla> load balanced
16:22:56 <kmalloc> ok.
16:23:18 <kmalloc> so you'd have HA, but at 10000 cinders we have issues [extreme case]
16:23:26 <kmalloc> but that is a scale issue not a "functional" issue
16:24:01 <kmalloc> ok, so back to the meeting topic
16:24:11 <kmalloc> what is the pitch besides "hey i have cool tech"?
16:24:31 <kmalloc> is this "accept this as a keystone-owned project under openstack" [official + working with the TC] or something else?
16:25:09 <knikolla> the pitch is "this could solve one use case people have been screaming for with small changes"
16:25:16 <knikolla> does it make sense to pursue that use case
16:25:23 * kmalloc puts down that he has a vested interest due to his employer in this kind of project.
16:25:28 <kmalloc> full disclosure
16:25:59 <knikolla> there's the edge use case, where it would allow an edge cloud to fetch images from a "home" cloud
16:26:02 <knikolla> through k2k
16:26:06 <kmalloc> right.
16:26:08 <lbragstad> knikolla: would it require keystone to own the code? would it be a new feature? would it be a new project under the identity umbrella?
16:26:11 <knikolla> without having a glance server there
16:26:37 <kmalloc> lbragstad: i would assume it remains a separate repo, it's isolated code, just under our umbrella if we drive down this path
16:26:47 <kmalloc> i wouldn't want it "in" keystone.
16:27:06 <knikolla> ++
16:27:11 <kmalloc> keystone server, that is, because the security concerns can then be isolated
16:27:20 <lbragstad> ok - are there docs on mixmatch?
16:27:22 <kmalloc> it is a consumer of keystone, just like anything else.
16:27:26 <lbragstad> like - how it works with keystone?
16:27:27 <knikolla> i know ayoung was excited about a general purpose service mesh proxy like istio
16:27:34 <knikolla> but this is probably not THAT general purpose
16:27:53 <lbragstad> like how it fits architecturally with everything else?
16:27:54 <knikolla> mixmatch.readthedocs.io
16:28:12 <lbragstad> #link http://mixmatch.readthedocs.io/en/latest/reference.html
16:28:26 <ayoung> knikolla, I wasn't excited. I never get excited.
16:28:42 <knikolla> there he is
16:29:01 <ayoung> I'm not 100% sure that access tags would really be either necessary or sufficient
16:29:06 <knikolla> (assume the docs have faults, i have been running low on interns to update them)
16:29:17 <ayoung> what I am learning is that there are a lot of rules people want to be able to put on the resources
16:29:18 <kmalloc> ayoung: right, but it was highlighted as a similar case (brain thinking wise)
16:29:30 <kmalloc> regardless if we drive down that path
16:29:37 <lbragstad> knikolla: has this idea been socialized anywhere besides irc?
16:29:37 <ayoung> one thing to evaluate is OPA
16:29:42 <lbragstad> mailing lists or whatnot?
16:30:14 <ayoung> Istio + OPA is a viable approach for a microservice mesh, might be something we make optional but supported?
16:30:28 <knikolla> lbragstad: the proxy in general or the fine grained permissions use case?
16:30:38 <lbragstad> the use case we are discussing here
16:30:39 <hrybacki> we are in the very early stages of exploring OPA fwiw
16:31:11 <hrybacki> we being my downstream team
16:31:35 <knikolla> lbragstad: no.
16:32:20 <lbragstad> ok
16:32:35 <lbragstad> it seems interesting, and i can see the use case
16:32:46 <lbragstad> but it would be nice to gauge interest, too
16:32:58 <lbragstad> especially if keystone is going to own it
16:33:27 <knikolla> if keystone owns it i'm by implication almost 100% on keystone, haha
16:33:52 <lbragstad> well - that's a good reason to own it, but i'm biased :)
16:34:43 <lbragstad> i'd like to know a little more about how it works and how it fits in with various pieces
16:34:46 <lbragstad> but i'll go read the docs
16:34:52 <lbragstad> and probably come back with questions
16:35:00 <kmalloc> ++
16:35:04 <kmalloc> i agree i need more time with it
16:35:06 <lbragstad> does anyone else have questions on this?
16:35:12 <knikolla> please do, anything just ping me
16:35:16 <kmalloc> but tentatively i think it could fit within keystone's umbrella
16:35:24 <lbragstad> knikolla:  do you have anything else to share?
16:35:41 <kmalloc> or potentially closer to KSM's umbrella.
16:35:47 <kmalloc> but same thing
16:36:00 <knikolla> i was thinking about KSM and how it could interact with it.
16:36:30 * knikolla hands over the mic.
16:36:46 <knikolla> ping me on -keystone anytime about questions.
16:36:50 <lbragstad> sounds good
16:36:54 <lbragstad> thanks for bringing it up
16:36:58 <kmalloc> it also [might] be something that doesn't need to be a full-fledged service
16:37:17 <kmalloc> but something we could wrap into a library if we wanted to bake it in [something around ksa]
16:37:24 <lbragstad> i'm going to jump into the next topic since we have several other things to get through
16:37:29 <kmalloc> but that requires code changes in the other services.
16:37:31 <lbragstad> we can circle back in open discussion if that's cool
16:37:33 * kmalloc lets the topic change
16:37:40 <lbragstad> #topic auditing improvements
16:37:45 <lbragstad> i dont see rm_work around
16:37:55 <kmalloc> yeah.
16:38:00 <lbragstad> but the context is that he wants to propose adding soft deletes to the keystone API
16:38:07 <lbragstad> #link http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-05-29.log.html#t2018-05-29T20:42:15
16:38:17 <ayoung> soft deletes good
16:38:18 <lbragstad> in case anyone is interested or curious ^
16:38:30 <lbragstad> but if he shows up, i'll let him make his case
16:38:31 <kmalloc> i still hate soft deletes as a concept
16:38:44 <kmalloc> i would accept the code/spec if someone wants to move forward with it
16:39:14 <lbragstad> i pinged him in -keystone, but we'll see if we can catch him later
16:39:15 <cmurphy> is this related to resource cleanup?
16:39:24 <kmalloc> cmurphy: i think it is.
16:39:25 <ayoung> kmalloc, once someone claims a project ID, that ID is a fact that we should not delete.  At least not for a long, long while
16:39:26 <lbragstad> kind of
16:39:28 <kmalloc> from the eavesdrop
16:39:37 <lbragstad> i asked about that, and i had a hard time getting a straight answer
16:39:43 <lbragstad> it sounds like it was, but it wasn't
16:39:48 <ayoung> think of it like the requirement that a company keep its files for X number of years
16:39:50 <kmalloc> ayoung: i expect we never will have a project id collision with uuid4
16:40:20 <kmalloc> FTR, i am 100% against issuing tokens for "deleted" [soft or otherwise] projects
16:40:23 <ayoung> not collision
16:40:25 <lbragstad> #link http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-05-29.log.html#t2018-05-29T21:13:18
16:40:42 <kmalloc> but i don't really care if we keep records around.
16:40:45 <lbragstad> #link http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-05-29.log.html#t2018-05-29T21:21:52
16:40:54 <ayoung> it is a record that user X had project Y and that we can recreate, and if needs be, perform operations in the cloud based on that ID in the future
16:41:22 <kmalloc> recreation is unlikely to be reasonably supportable in many cases [without implicit renaming]
16:41:31 <kmalloc> well, s/recreation/undelete
16:41:36 <kmalloc> lets not say it's being recreated
16:41:47 <kmalloc> because it really isn't
16:42:18 <kmalloc> if we want to add a "deleted" column to projects and make the DELETE call just set that value, i'm really not terribly unhappy with the idea.
16:42:33 <gagehugo> I assume a "deleted" project would be considered "disabled" as well? so we wouldn't issue tokens for it
16:42:34 <kmalloc> and let cloud ops undelete/look into history.
16:42:34 <lbragstad> as i thought about this, the bit that felt weird is that we have enabled/disabled, and now we'd be adding soft deletes to the mix
16:42:53 <kmalloc> lbragstad: nah, we supplant our hard delete with the "soft delete"
16:43:25 <kmalloc> and track deleted time, with a "cleanup" option to remove records for projects marked deleted before <datetime>
16:43:25 <lbragstad> so what's the difference between soft delete and disabled?
16:43:34 <kmalloc> disabled doesn't free the name
16:44:06 <kmalloc> disabled is "the project exists but is unusable"
16:44:14 <kmalloc> deleted is "the project effectively doesn't exist"
16:44:23 <kmalloc> i would still purge roles/access etc
16:44:26 <kmalloc> with deleted
16:44:41 <kmalloc> you just maintain the project record and we allow cloud admin to introspect projects that are deleted.
16:44:47 <kmalloc> [when, by whom, etc]
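(A minimal sketch of the behavior kmalloc outlines, using SQLAlchemy; the table and column names are assumptions for illustration, not keystone's actual schema or a real migration:)

    import datetime
    import sqlalchemy as sa

    metadata = sa.MetaData()
    project = sa.Table(
        'project', metadata,
        sa.Column('id', sa.String(64), primary_key=True),
        sa.Column('name', sa.String(64)),
        sa.Column('deleted_at', sa.DateTime, nullable=True),  # NULL means not deleted
        sa.Column('deleted_by', sa.String(64), nullable=True),
    )

    engine = sa.create_engine('sqlite://')
    metadata.create_all(engine)

    with engine.begin() as conn:
        # DELETE becomes "mark the row"; the record and its id stay visible to operators
        conn.execute(project.update()
                     .where(project.c.id == 'abc123')
                     .values(deleted_at=datetime.datetime.utcnow(),
                             deleted_by='admin-user-id'))
        # a later purge removes records deleted before a cutoff
        cutoff = datetime.datetime(2018, 1, 1)
        conn.execute(project.delete()
                     .where(project.c.deleted_at.isnot(None))
                     .where(project.c.deleted_at < cutoff))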
16:44:54 <hrybacki> have to drop early to make an appt, sorry all!
16:45:00 <kmalloc> hrybacki: have a good one man
16:45:02 <gagehugo> couldn't we emit a cadf notification with that info?
16:45:17 <hrybacki> thanks kmalloc !
16:45:17 <hrybacki> o/
16:45:18 <gagehugo> project deleted [who, when, other]
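(For reference, gagehugo's suggestion maps to keystone's existing identity.project.deleted notification; below is a heavily abridged, illustrative CADF-style payload, and the field values are made up:)

    {
      "event_type": "identity.project.deleted",
      "payload": {
        "typeURI": "http://schemas.dmtf.org/cloud/audit/1.0/event",
        "action": "deleted.project",
        "outcome": "success",
        "eventTime": "2018-06-05T16:45:00Z",
        "initiator": {"id": "<user-id>", "typeURI": "service/security/account/user"},
        "target": {"id": "<project-id>", "typeURI": "data/security/project"},
        "observer": {"typeURI": "service/security"}
      }
    }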
16:45:48 <kmalloc> we can. but maintaining the project record could be useful
16:46:06 <kmalloc> cadf notifications are ... let's say ephemeral, to be nice
16:46:11 <kmalloc> they sometimes miss, or are missed
16:46:38 <knikolla> ++ agree with kmalloc
16:46:38 <lbragstad> well - YMMV with notifications
16:46:52 <kmalloc> the change we're looking at here is a db migration for a couple new columns and changing the internal behavior of delete
16:46:54 <lbragstad> one of the use cases rm_work brought up was around doing something with orphaned resources
16:47:16 <lbragstad> once something consumes a notification for a deleted project, the project is already gone
16:47:19 <gagehugo> true
16:47:43 <knikolla> can't an admin token effectively just work for deleting them regardless of project deletion?
16:47:49 <kmalloc> delete would still do everything you'd expect, except it wouldn't remove records.
16:47:59 <kmalloc> knikolla: not in all services
16:48:15 <kmalloc> it's hit and miss
16:48:34 <kmalloc> i'm less worried [right now] about the resource cleanup, i'll want proposals on how to address security concerns
16:49:03 <kmalloc> but i'm fine with maintaining records in the db [with a purge action/api, based upon "datetime"] if that is what folks want
16:49:07 <kmalloc> for audit/etc
16:49:18 <kmalloc> the key is you'll be able to see the ids and whatnot
16:49:35 <kmalloc> right now, you may not know how long VM X was supposed to be dead because the project is gone
16:49:39 <kmalloc> when was it deleted?
16:49:57 <kmalloc> or if there was a network event that caused you to miss the "delete project" event
16:50:15 <lbragstad> thats another concern with notifications
16:50:26 <kmalloc> the way i'd accept this proposal is if we are changing the delete behavior to maintain rows
16:50:28 <kmalloc> and records
16:50:33 <kmalloc> not "add another form of delete":
16:50:37 <kmalloc> that is special and magical
16:50:56 <lbragstad> either way - i wanted to share the context
16:51:11 <lbragstad> because it sounds like rm_work would be interested in doing this depending on our reactions
16:51:13 <kmalloc> and if rm_work or someone else wants to do it, i'll happily review spec/code
16:52:01 <lbragstad> but - something to think about if you think this is something that is useful
16:52:32 <lbragstad> #topic default roles work has begun
16:52:40 <lbragstad> thanks hrybacki for getting this stuff rolling
16:52:56 <lbragstad> #link https://review.openstack.org/#/c/572243/
16:53:05 <lbragstad> nice to see implementations coming through
16:53:14 <lbragstad> i think he just wanted to get this on people's radar
16:53:28 <knikolla> awesome!
16:53:41 <lbragstad> #topic open discussion
16:53:54 <lbragstad> we have a few minutes left if folks have questions
16:55:16 <gagehugo> what should I eat for lunch?
16:55:29 <knikolla> teriyaki spicy chicken
16:55:37 <knikolla> that's what i just ordered for delivery
16:56:02 <kmalloc> gagehugo: pho
16:56:12 <kmalloc> gagehugo: it's a pho kind of day
16:56:19 <gagehugo> both sound good
16:56:26 <lbragstad> everyday is a pho kind of day
16:56:29 <kmalloc> lbragstad: ++
16:57:11 <lbragstad> if we don't have anything else - we can call it a few minutes early
16:57:25 * kmalloc goes and submits a new laptop refresh to red hat... so the nag emails stop =/
16:57:30 <kmalloc> with my few extra minutes
16:57:34 <lbragstad> thanks for coming, i appreciate it!
16:57:44 <lbragstad> #endmeeting