17:02:15 #startmeeting keystone-office-hours
17:02:16 Meeting started Tue May 29 17:02:15 2018 UTC and is due to finish in 60 minutes. The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:02:20 The meeting name has been set to 'keystone_office_hours'
17:02:57 * knikolla goes to grab lunch
17:03:52 * gagehugo ditto
17:05:08 tritto
17:17:29 quaditto
17:55:02 Harry Rybacki proposed openstack/keystone-specs master: Follow-up -- replace 'auditor' role with 'reader' https://review.openstack.org/570990
17:55:25 lbragstad: ^^
17:55:29 sweet
18:22:34 knikolla, I think your proxy and Istio are covering similar ground. What I am wondering is what the API would look like for Proxy to consume
18:23:25 lbragstad, did you go to https://www.youtube.com/watch?time_continue=143&v=x9PhSDg4k6M ? It's pretty much Dynamic Policy reborn...how many years ago was that?
18:23:49 i didn't go to that one
18:23:54 i had a conflict with something else i think
18:24:45 it was on my schedule to watch later though
18:25:43 lbragstad, just watched through it. Basically, a service prior to Keystone that updates multiple un-synced keystones
18:25:46 ayoung: what API are you referring to?
18:25:49 hub and spoke model
18:26:00 knikolla, the cross-project access thing
18:26:21 if a user from one project needs to access a resource in another and has to get a new token, it's kinda yucky
18:26:28 ayoung: the normal openstack APIs. the proxy is transparent.
18:26:53 knikolla, right now it is K2K, but using the user's creds
18:27:17 ayoung: the proxy just goes through all the projects the user has access to
18:27:26 I guess that would be more like get the resource, find what project it is in, and request a token for that project..all done by the proxy?
18:27:37 ayoung: yes.
18:28:00 might have some scale issues there. I would rather know which project a-priori....somehow
18:28:42 ayoung: caching works
18:28:48 go where it was last time
18:29:13 or there might be a push model by listening through the message bus for notifications of creations
18:29:31 knikolla, like a symlink
18:29:46 knikolla, let's use the volume mount as the example
18:29:51 P1 holds the VM
18:29:56 P2 holds the volume
18:30:06 Ideally, I would add a symlink in P1 to the volume
18:30:24 a placeholder that says "when you get this resource, go to P2 to get it"
18:30:42 so explicit instead of implicit by searching for it?
18:30:47 but...it should be at the keystone level
18:30:54 knikolla, what if we tagged the P1 project itself
18:31:04 "additional resources located in P2"
18:31:52 ayoung: maybe do this at the level above in the project hierarchy
18:32:11 knikolla, it's not a strict hierarchy thing
18:32:25 should be a hint: not enforcing RBAC
18:33:29 it's almost like a shadow service catalog
18:33:32 ayoung: but it makes things easier to understand. and provides a cleaner way to implement granularity by subdividing a project.
18:33:42 "get Network from PN, Storage from PS, Image from PI"
18:34:02 and...yes, you should be able to tag that on a parent project and have it inherited down
18:34:27 ayoung: same thing but with different clouds and you have the open cloud exchange we want.
18:34:52 knikolla, ooooooh
18:35:11 so...part of it could be the Auth URL for the remote project
18:35:41 ayoung: it's in the keystone service catalog. all service providers are there.
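The "proxy goes through all the projects the user has access to" flow knikolla describes at 18:27:17 could be sketched roughly as below. This is a hedged illustration, not the actual mixmatch code: the AUTH_URL value, the find_volume() helper, and the choice of cinder as the target service are assumptions; the keystone /v3/auth/projects call and the keystoneauth1 token rescoping it relies on are standard.

```python
# Minimal sketch of "try every project the user can scope to until the
# resource turns up" -- an illustration only, not mixmatch's implementation.
import requests
from keystoneauth1 import session
from keystoneauth1.identity import v3

AUTH_URL = 'https://keystone.example.com/v3'  # assumed endpoint


def projects_for_user(unscoped_token):
    """List the projects an unscoped token can be rescoped to."""
    resp = requests.get(AUTH_URL + '/auth/projects',
                        headers={'X-Auth-Token': unscoped_token})
    resp.raise_for_status()
    return resp.json()['projects']


def scoped_session(unscoped_token, project_id):
    """Trade the unscoped token for a session scoped to one project."""
    auth = v3.Token(auth_url=AUTH_URL, token=unscoped_token,
                    project_id=project_id)
    return session.Session(auth=auth)


def find_volume(unscoped_token, volume_id):
    """Probe each reachable project until the volume is found (or not)."""
    for project in projects_for_user(unscoped_token):
        sess = scoped_session(unscoped_token, project['id'])
        resp = sess.get('/volumes/' + volume_id,
                        endpoint_filter={'service_type': 'volumev3'},
                        raise_exc=False)
        if resp.status_code == 200:
            return project['id'], resp.json()['volume']
    return None, None
```

This also makes ayoung's scale concern at 18:28:00 concrete: without a hint, the search costs one token request and one probe per project the user can reach.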
18:35:59 knikolla, but in this case it would be a pointer to the SP
18:36:22 like "on this project, for network, us SP1:PN"
18:36:25 use
18:36:39 project level hints
18:36:40 like a local project symlinking to a remote cloud's project?
18:36:47 'zactly!
18:37:15 i've called these sister-projects during presentations.
18:38:54 knikolla, do you have a formal proposal for how to annotate the sister-projects?
18:40:02 ayoung: no I don't. In my notes I have "scope to a project with the same name as the local one, on the domain assigned to the IdP".
18:40:16 knikolla, OK...starting another etherpad for this
18:40:29 https://etherpad.openstack.org/p/sister-projects
18:44:38 ayoung: minus the annotation stuff (proxy goes everywhere searching for stuff), the cross-attaching thing works already.
18:45:29 knikolla, ++
18:45:59 knikolla, this could be big
18:46:37 knikolla, I think we have the topic for our Berlin presentation
18:46:42 ayoung: what's different this time from the other times I proposed this?
18:46:56 "We've done unspeakable things with Keystone"
18:47:11 knikolla, the fact that we can use it inside a single openstack deployment for one
18:47:17 the annotations for second
18:47:34 and constant repetition to beat it through people's heads, of course
18:47:45 we call it keystone-istio to get people's attention, too
18:47:58 it's real service mesh type stuff
19:13:27 ayoung: istio is more about connecting apps though, right?
19:14:47 knikolla, it's about any app to app communication, and used for multiple use cases. pretty much all cross-cutting concerns
19:15:14 access control, Denial of Service control, blue/green deployments
19:15:31 it is a proxy layer. those are typically used for 3 things
19:15:43 security, lazy load, remote access
19:16:01 https://en.wikipedia.org/wiki/Proxy_pattern#Possible_Usage_Scenarios
19:16:23 logging is often done that way, too
19:17:53 i have concerns about performance for a generic app proxy with python. the openstack-service to openstack-service use case is slightly different since they are terribly slow anyway.
19:18:14 knikolla, Istio is in Go
19:18:28 kmalloc, who makes your 1/4 rack?
19:19:02 ayoung: you want to adopt istio or make what we have more similar to istio?
19:19:20 ayoung: startach
19:19:27 ayoung: or something like that, sec
19:19:43 https://www.amazon.com/12U-4-Post-Open-Rack/dp/B0037ECAJA kmalloc
19:19:49 ayoung: https://www.amazon.com/gp/product/B00P1RJ9LS/ref=oh_aui_search_detailpage?ie=UTF8&psc=1
19:20:11 same thing, different seller
19:20:16 kmalloc, ah even better price tho
19:20:20 yup
19:20:42 they make a few options, up to 42U
19:21:11 do not get the 2-post or the 2-post-HD. won't work for you
19:21:43 kmalloc, are these the shelf rails?
19:21:45 https://www.amazon.com/NavePoint-Adjustable-Mount-Server-Shelves/dp/B0060RUVBA/ref=pd_lutyp_sspa_dk_typ_pt_comp_1_6?_encoding=UTF8&pd_rd_i=B0060RUVBA&pd_rd_r=736717d5-d9cf-40f1-a796-f73d9ba525bc&pd_rd_w=4OmZr&pd_rd_wg=wiOng&pf_rd_i=desktop-typ-carousels&pf_rd_m=ATVPDKIKX0DER&pf_rd_p=8337014667200814173&pf_rd_r=8M47S57ND2AEMDDDBMQF&pf_rd_s=desktop-typ-carousels&pf_rd_t=40701&psc=1&refRID=8M47S57ND2AEMDDDBMQF
19:22:49 ayoung: i used https://www.amazon.com/gp/product/B00TCELZTK for the UPS, you can also get https://www.amazon.com/gp/product/B0013KCLQC for heavier items
19:22:59 the full shelf is VERY nice.
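One possible encoding of the "project level hints" discussed above (18:30:54 through 18:36:39) would be keystone's existing project tags. The "hint:&lt;service&gt;=&lt;sp&gt;:&lt;project&gt;" convention below is invented purely for illustration; neither keystone nor mixmatch defines anything like it, and only the tags API itself is real.

```python
# Hypothetical "sister project hint" convention layered on keystone's
# project tags API -- an illustration of the idea, not an existing feature.
import requests

KEYSTONE = 'https://keystone.example.com/v3'  # assumed endpoint


def add_hint(token, project_id, service_type, sp_id, remote_project):
    """Tag a local project with a pointer to a sister project on an SP."""
    tag = 'hint:{}={}:{}'.format(service_type, sp_id, remote_project)
    resp = requests.put('{}/projects/{}/tags/{}'.format(KEYSTONE, project_id, tag),
                        headers={'X-Auth-Token': token})
    resp.raise_for_status()


def read_hints(token, project_id):
    """Return {service_type: (sp_id, remote_project)} parsed from tags."""
    resp = requests.get('{}/projects/{}/tags'.format(KEYSTONE, project_id),
                        headers={'X-Auth-Token': token})
    resp.raise_for_status()
    hints = {}
    for tag in resp.json()['tags']:
        if tag.startswith('hint:'):
            service, _, target = tag[len('hint:'):].partition('=')
            sp_id, _, remote_project = target.partition(':')
            hints[service] = (sp_id, remote_project)
    return hints
```

A proxy that found such a hint could skip the exhaustive search and go straight to the named service provider and project.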
19:23:39 I think for the poweredges I want the rail version
19:24:15 sure, be wary though, some of the rail versions don't play well with server cases, they consume just enough (~1-2mm) space that the servers scrape
19:24:35 so measure your servers and make sure you have a few mm on either side where the rails would normally go
19:24:57 shouldn't really be an issue with any "real" server with rail mount points
19:24:58 but....
19:25:01 ymmv
19:25:07 understood
19:25:36 what about these:
19:25:42 https://www.amazon.com/dp/B00JQYUI7G/ref=sspa_dk_detail_6?psc=1&pd_rd_i=B00JQYUI7G&pd_rd_wg=yrH6s&pd_rd_r=XHT079H16NRJYSZAQ9ER&pd_rd_w=hzj5S
19:26:08 i don't see how those would work for anything
19:26:18 not sure what the heck those even are
19:26:23 yeah...thought they were rails at first
19:31:25 ayoung: ping again, you are thinking of adopting istio or morphing what we already have in mixmatch to be more like istio?
19:31:42 knikolla, I'm still digesting what I saw at the summit
19:31:48 I think we need something like Istio
19:31:59 whether that is Istio or your proxy or something else yet is unclear
19:32:27 ack
19:33:48 knikolla, I think that the proxy technology is one question, and what APIs Keystone needs to support it is a second related one
19:34:41 ayoung: it depends on how many birds you are trying to hit
19:34:49 i have something that fits the openstack-service to openstack-service case
19:35:13 which probably won't work with app to app.
19:36:14 knikolla, take some time to look at Istio, and tell me if it is an effort you could support.
19:37:29 ayoung: i'll play around with it.
19:37:37 knikolla, TYVM
19:45:50 it was about time i learned Go. :/
20:42:15 keystone seems to do hard-deletes on projects in the DB -- is that a correct assessment? and if so, is there any way to make it do soft-deletes, or any specific reason it wasn't done that way?
20:42:55 rm_work: we support disabling projects, which does just about the same thing you'd expect a soft delete to do
20:42:59 ok
20:43:06 so it may just be a "using it wrong" issue
20:43:22 if you disable a project, users can't authenticate to it, use it, etc...
20:43:34 k
21:13:18 lbragstad: the issue we're trying to solve is around orphaned objects -- keystone projects get deleted and we have servers and stuff that we now can't see who owned them
21:13:45 yeah - that's a problem
21:13:48 but if we can't control exactly what users do -- i feel like we should be able to enforce soft-delete (disable) only
21:13:59 one thing that might help
21:14:10 like i'd be tempted to locally patch the delete call to just set the disabled flag instead
21:14:20 if `soft_delete = True` or something in config
21:14:41 what if your delete flow does a disable first?
21:14:57 i mean this is like
21:15:02 end-users delete a project
21:15:15 it's not really something we control, unless we refuse project deletes based on policy
21:15:17 then consume the notification from keystone about the disabled project and clean things up before you delete it
21:15:20 which is just confusing for everyone involved
21:16:05 that was one of the main reasons we implemented notification support in keystone
21:16:15 ok well isn't that still a patch to keystone we'd have to do?
21:16:21 to change the "delete" call to do a disable first?
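The notification-driven approach lbragstad suggests at 21:15:17 and 21:16:05 (and expands on just below) might look roughly like the following oslo.messaging listener. The transport URL and the audit_log() helper are placeholders; the identity.project.deleted / identity.project.disabled event types and the resource_info payload field match keystone's basic notification format, while deployments using the CADF format will see a different payload shape.

```python
# Sketch of a consumer that records project delete/disable notifications so
# orphaned resources can still be traced to an owner -- illustration only.
from oslo_config import cfg
import oslo_messaging as messaging


def audit_log(event_type, project_id):
    # placeholder: persist to a database table, log aggregator, etc.
    print(event_type, project_id)


class ProjectAuditEndpoint(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        if event_type in ('identity.project.deleted',
                          'identity.project.disabled'):
            # with the basic notification format the project id arrives
            # in the 'resource_info' field of the payload
            audit_log(event_type, payload.get('resource_info'))


def main():
    transport = messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@rabbit.example.com:5672/')
    targets = [messaging.Target(topic='notifications')]
    listener = messaging.get_notification_listener(
        transport, targets, [ProjectAuditEndpoint()], executor='threading')
    listener.start()
    listener.wait()


if __name__ == '__main__':
    main()
```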
21:16:35 no - more like horizon, but still a patch somewhere, yes
21:16:48 I can't control what John Doe CloudUser does with his projects
21:16:49 we don't use horizon, just API
21:17:13 and the issue is when random end-users create projects, use them, and then delete them with resources still on them
21:17:17 via the API
21:17:19 the idea was that keystone would emit notifications about state changes for projects, then other services would subscribe to the queue
21:17:47 it could see the notification come in via the message bus (which still isn't ideal... but)
21:17:58 pull the project id out of the payload
21:18:06 and clean up instances/volumes accordingly
21:18:07 so we should be listening to the keystone notifications and deleting everything that exists for projects based on their ID? (this sounds like a Reaper related thing)
21:18:27 but that's ... really not what we want, I think. what we want is just a soft-delete <_<
21:19:02 even if you have a soft delete, something has to do the clean up
21:19:05 I guess we could have something listen to the notifications, and for each deleted project it sees, just archive that to another table or something
21:19:06 right?
21:19:10 not necessarily
21:19:31 sometimes it's because someone left the company and we need to reassign their stuff to another project, or deal with it intelligently at least
21:19:36 rather than blindly wipe everything out
21:19:54 or just someone does something dumb
21:19:59 and we need to undo it
21:20:15 and it's a lot easier to undo an accidental project delete than wiping out all resources in the cloud for that project :P
21:20:48 or rather
21:21:04 it's a lot easier to undo an accidental project delete *when all it did is remove one DB record*, as opposed to issuing cascading deletes to all services in the cloud for all objects
21:21:52 i'm hearing two different use cases here
21:22:02 you're not wrong i guess
21:22:09 1.) you want to clean up orphaned objects in certain cases
21:22:16 2.) and transfer of ownership
21:22:18 well, we don't want it automated in ANY case
21:22:24 we want to be able to deal with it later
21:22:27 in all cases
21:22:37 sure
21:22:47 just that the way projects get deleted might be different
21:22:54 but in all cases, what we want is them to be soft-deleted
21:23:11 and not clean up anything
21:23:16 the issue is not that the orphans exist
21:23:23 it's that we can't tell who they used to belong to
21:23:40 for auditing purposes, or making a decision on cleanup
21:24:22 kmalloc: has opinions on this, and we were going to discuss it in YVR but i'm not sure we did
21:25:07 just seems like soft-delete is done in most places, except keystone (and maybe neutron?)
21:25:36 if you had a soft delete capability in keystone, how would you expect it to work differently from disable?
21:25:46 i'm not sure i would
21:26:07 i mean i would probably literally implement it as "if CONF.soft_delete: disable; else: delete"
21:26:57 you COULD go a little further and have a deleted flag... and just use that as a sort of explicit filter (?show_deleted=true)
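rm_work's "if CONF.soft_delete: disable; else: delete" idea at 21:26:07 is purely hypothetical, but spelled out it would amount to something like the sketch below. None of this exists in keystone; the option name, the driver argument, and the method names are placeholders for illustration.

```python
# Hypothetical local patch: disable projects on delete instead of removing
# the row, behind a config flag. Not keystone's actual code path.
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([
    cfg.BoolOpt('soft_delete', default=False,
                help='Disable projects on delete instead of removing them.'),
])


def delete_project(driver, project_id):
    if CONF.soft_delete:
        # keep the row so ownership can still be audited later; the project
        # becomes unusable, exactly as if it had been explicitly disabled
        driver.update_project(project_id, {'enabled': False})
    else:
        driver.delete_project(project_id)
```

The ?show_deleted=true filter floated just above would then be the read-side counterpart, hiding such rows from normal list calls by default.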
21:27:00 so - why not restrict project deletion to system administrators and just leave disable available to customers
21:27:05 but i don't know if that's necessary
21:27:18 lbragstad: that's what i mentioned earlier as the only solution i could think of
21:27:27 right
21:27:44 but it seems like a bad solution just because as an outlier it is very confusing to people
21:27:57 but yes, we could do that
21:27:59 if your users can disable/enable and not delete - then you can manually do whatever you need to as a system admin
21:28:04 not sure how many thousands of workflows we'd break
21:28:39 would those workflows still break if you had CONF.soft_delete?
21:28:47 which seems like the main blocker, because if we did that there's a good chance whoever ok'd it would be fired :P
21:28:48 no
21:28:57 because it would still say "204 OK" or whatever
21:29:06 and then ideally be filtered from API lists
21:29:17 (by default)
21:29:28 the same as how every other soft-delete that i'm aware of works
21:29:43 basically it just pretends to delete, unless you really go digging
21:30:48 so from a typical user's perspective, they couldn't tell the difference
21:30:51 but it doesn't remove the DB entry and throw a wrench in auditing
21:31:50 a quick fix for us could be like, throw a delete-trigger on the project table and have it archive -- at least we could look them up later if we HAD to <_< right now even that isn't possible. sometimes we get lucky looking through backups if the project was long-lived...
21:32:05 ^^ but that is dumb and i would never actually do that (it's just an example)
21:32:28 I'm honestly surprised this hasn't come up frequently
21:33:25 it has
21:33:36 very often actually
21:33:37 https://www.lbragstad.com/blog/improving-auditing-in-keystone
21:35:45 k
21:35:50 basically yes, that seems right
21:36:04 but I wouldn't say it's *too* heavy handed
21:38:46 it would be a lot of work to our API
21:39:05 it seems like the work would be more on the backends side
21:39:20 for the API wouldn't you just have to add another query param?
21:39:27 like "show_deleted"?
21:39:41 yeah - we'd probably need to support something like that
21:39:53 and implement soft deletes for all keystone resources, mainly for consistency
21:40:10 yeah that expands the scope of things a little, but i don't think you're wrong
21:40:12 (i can imagine it being frustrating to have projects soft delete but not something else like users or groups)
21:40:48 i still think it's something that's needed.
21:40:52 we'd also need to double check the api with HMT
21:41:02 but i guess maybe there aren't enough people that agree with my opinion for it to have happened
21:41:29 which means it probably won't any time soon, unless I go do it :P (and then get agreement from enough cores to accept the patches)
21:41:30 i don't think people is disagreeing with you, but no one has really stepped up to do the work
21:41:46 s/is/are/
21:41:49 so you think if it was done, no one would object to merging?
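For reference, the "disable instead of delete" operation lbragstad keeps pointing at (21:27:59) is a single call against the standard v3 API; the same request is what `openstack project set --disable <project>` issues. TOKEN and PROJECT_ID are assumed inputs here. Pairing it with a policy override that limits the identity:delete_project rule to administrators is the no-code version of the restriction floated at 21:27:00.

```python
# Minimal sketch of disabling a project via the Keystone v3 API.
import requests

KEYSTONE = 'https://keystone.example.com/v3'  # assumed endpoint


def disable_project(token, project_id):
    resp = requests.patch(
        '{}/projects/{}'.format(KEYSTONE, project_id),
        headers={'X-Auth-Token': token},
        json={'project': {'enabled': False}})
    resp.raise_for_status()
    return resp.json()['project']
```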
21:42:19 the last time i discussed it around the Newton time frame, people were only opposed to the dev resource aspect of it
21:42:26 k
21:42:32 and making sure if we did it, it was done consistently
21:42:39 afaik
21:42:48 noted
21:42:59 i don't think people had super strong opinions on saying absolutely not to soft-deletes
21:43:07 s/not/no/
21:43:15 wow - typing is really hard
21:43:27 it can be, yes :P
21:43:33 that was the main purpose of the post that i wrote
21:43:52 i think the use case for auditing is important, but at the time those were the three options that were clear to me
21:44:01 based on my discussions with various people
21:46:12 but - yeah... it's an important use case and I get it, but i also know kmalloc and ayoung have a bunch of thoughts on this
21:47:10 i wouldn't be opposed to discussing it again, and seeing if we can do something for Stein or T
21:47:22 discussing it as a larger group*
21:47:48 yeah, I mean, I'll be in Denver
21:47:55 for the PTG?
21:48:01 yeah
21:48:06 if we want to discuss it then
21:48:09 sure
21:48:20 we can throw it on the meeting agenda for next week
21:48:39 if you feel like getting more feedback sooner than september
21:50:48 what time are your meetings?
21:51:15 https://etherpad.openstack.org/p/keystone-weekly-meeting
21:51:24 1600 UTC on tuesdays
21:51:32 so - 11:00 AM central
21:51:59 rm_work: are you based in texas?
21:52:10 not anymore
21:52:16 kinda ... nomadic
21:52:19 ack - i wasn't sure
21:52:28 yeah after I left castle, I go all over :P
21:52:36 cool
21:53:16 well - we can throw it on the agenda for next week if you'll be around
21:53:29 otherwise, the use case seems straight-forward enough to kickstart on the mailing list
21:57:13 yeah we could do a quick topic on it I suppose -- I can try to show up for that
21:57:25 lbragstad, I suppose we don't support directly mapping a federated user into a domain admin (domain-scoped token) do we? It's been a while since I looked at that piece of code. Just curious if anything has changed.
21:57:36 just for feedback purposes -- though whether or not it is important enough to us to get resources on it anytime soon is another question
21:57:46 which is why i figured PTG would be easier timing
21:58:40 gyee: ummm
21:59:03 you could map a user into a group with an admin role assignment on a domain
21:59:21 but are you asking if trading a SAML assertion for a domain-scoped token works?
21:59:29 but do we directly issue a domain-scoped token as the result of that?
21:59:33 right
21:59:40 hnmmm
21:59:47 I don't remember us ever supporting that
22:01:25 gyee: https://github.com/openstack/keystone/blob/master/keystone/tests/unit/test_v3_federation.py#L3861 ?
22:01:39 oh - wait...
22:01:40 nevermind
22:01:44 that's an IDP test case
22:02:34 yeah
22:02:58 all these tests seem to authenticate for an unscoped token before trading it for a domain-scoped token
22:02:59 https://github.com/openstack/keystone/blob/master/keystone/tests/unit/test_v3_federation.py#L3147
22:03:29 right, that's what I thought
22:03:45 but part of that flow with horizon is asking which project you want
22:03:47 to work on
22:04:01 so if it lists domains, horizon might support building a domain-scoped authentication request
22:04:28 let me dive into that code again, someone told me today you can get a domain-scoped token for a federated user
22:04:31 i feel like this was on the list of things we wanted to improve with horizon a few releases back
22:05:03 but I don't remember ever seeing that functionality
22:05:27 cmurphy: _might_ know off the top of her head?
22:05:45 i remember she was working on some of that stuff during those joint team meetings between keystone and horizon
22:06:03 k, let me check with her as well
22:06:04 thanks man
22:06:15 gyee: no problem, let me know if you hit anything weird
22:06:32 #endmeeting
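Two hedged illustrations of the pieces discussed in this last exchange. The first shows the shape of a federation mapping that drops federated users into a group, as lbragstad suggests at 21:59:03; granting that group an admin role assignment on the target domain is a separate step, and the group id below is a placeholder. The second shows the "trade an unscoped token for a domain-scoped token" rescope the referenced test cases exercise, using keystoneauth1.

```python
# Illustrative only: a group-based federation mapping plus a domain rescope.
from keystoneauth1 import session
from keystoneauth1.identity import v3

# Shape of a keystone mapping rule that puts federated users into a group;
# DOMAIN_ADMINS_GROUP_ID is a placeholder for a real group id.
MAPPING_RULES = [
    {
        'local': [
            {'user': {'name': '{0}'}},
            {'group': {'id': 'DOMAIN_ADMINS_GROUP_ID'}},
        ],
        'remote': [
            {'type': 'REMOTE_USER'},
        ],
    },
]


def domain_scoped_session(auth_url, unscoped_token, domain_id):
    """Trade an unscoped (e.g. federated) token for a domain-scoped session."""
    auth = v3.Token(auth_url=auth_url, token=unscoped_token,
                    domain_id=domain_id)
    return session.Session(auth=auth)
```

Whether horizon builds such a domain-scoped request during the federated login flow is the open question gyee and lbragstad leave for cmurphy above.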