17:56:56 #startmeeting keystone-office-hours
17:56:57 Meeting started Tue Apr 24 17:56:56 2018 UTC and is due to finish in 60 minutes. The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:56:58 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:57:01 The meeting name has been set to 'keystone_office_hours'
17:58:13 lbragstad: email sent to -dev
17:58:35 re stable
17:59:03 checking
17:59:16 nice
17:59:32 fyi - https://review.openstack.org/#/c/563974/ and https://review.openstack.org/#/c/562716/ close bugs
18:01:00 lbragstad looking
18:01:23 the one about enforcement limits needs some feedback on the API bits
18:01:38 GET /v3/limit_model versus GET /v3/limit/model
18:01:48 i left comments in review and i can respin that pretty easy
18:31:20 Lance Bragstad proposed openstack/keystone-specs master: Add idea for alternative service catalogs https://review.openstack.org/564042
18:31:47 mnaser: ^ relevant to your discussion in -tc the other day
19:13:49 lbragstad with that double token provider use-case, we should avoid doing this: https://review.openstack.org/#/c/558918/
19:13:50 right?
19:15:16 gagehugo: yeah - it was a far-fetched use case
19:15:32 i was just trying to think of things that would require the location of both repositories
19:17:03 then again - if someone is rolling their own token providers, they might not have an issue just exposing new configuration values
19:17:04 * lbragstad shrugs
19:17:35 hmm
19:20:44 leave it up for now, we will likely discuss it once dev work begins on jwt, I guess
19:21:01 yeah - that works
19:28:30 * knikolla finally booked flights/hotel for vancouver.
19:38:13 \o/
20:26:48 Lance Bragstad proposed openstack/keystone master: Add conceptual overview of the service catalog https://review.openstack.org/563974
20:36:43 kmalloc: do you happen to know where the translation you speak of happens? https://review.openstack.org/#/c/530509/4/oslo_context/context.py
20:39:01 i see some stuff in keystonemiddleware/audit/_api.py
20:39:12 but that doesn't seem right
20:44:30 oh...
20:45:05 https://github.com/openstack/keystonemiddleware/blob/686f7a5b0b13a7ef4c7ce6721e6c9e601816ad45/keystonemiddleware/auth_token/_request.py#L201-L217
20:50:20 Not off the top of my head
20:50:32 Will look when done with lunch.
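For context on the translation being hunted for above: WSGI itself renames headers on the way in, storing an incoming X-Project-Id as the environ key HTTP_X_PROJECT_ID, and webob's header view maps between the two spellings. A minimal sketch of that round trip, assuming only webob:

```python
# WSGI stores incoming headers in the environ as HTTP_<NAME> with dashes
# replaced by underscores; webob's EnvironHeaders translates between the
# friendly header view and the raw environ key.
from webob import Request

req = Request.blank('/v3/projects')
req.headers['X-Project-Id'] = 'abc123'

# Same value, seen through the raw WSGI environ key...
assert req.environ['HTTP_X_PROJECT_ID'] == 'abc123'
# ...and back through the header view.
assert req.headers['X-Project-Id'] == 'abc123'
```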
20:53:21 i think i found a clue
21:17:41 this is weird, i see where ksm scrubs the headers when it receives a request
21:17:58 and then it sets them appropriately if the user and service tokens are valid
21:18:03 which makes total sense
21:18:15 the request object trucks along through middleware
21:18:23 following the wsgi pipeline
21:19:15 and then in the case of nova, it reaches a different piece of middleware called NovaKeystoneContext that processes the headers using oslo.context to build a context object for nova
21:20:08 where it gets strange is that ksm sets the headers like X-Project-Id
21:20:39 but oslo.context looks for request.headers['HTTP_X_PROJECT_ID']
21:20:59 * lbragstad goes to dig in webob
21:28:14 baha - https://github.com/Pylons/webob/blob/4e8c7ecc20bed6ce6c64daa3dcb97cc328058e8c/src/webob/headers.py#L111-L115
22:15:42 Lance Bragstad proposed openstack/keystonemiddleware master: Introduce new header for system-scoped tokens https://review.openstack.org/564072
22:43:47 lbragstad: aha
03:13:03 wangxiyuan proposed openstack/keystone master: Update IdP sql model https://review.openstack.org/559676
03:13:04 wangxiyuan proposed openstack/keystone master: Fix the test for unique IdP https://review.openstack.org/563812
03:39:54 wangxiyuan proposed openstack/keystone master: Invalidate the shadow user cache when deleting a user https://review.openstack.org/561908
06:27:48 Monty Taylor proposed openstack/keystoneauth master: Turn normalize_status into a class https://review.openstack.org/564110
06:55:02 wangxiyuan proposed openstack/keystonemiddleware master: Follow the new PTI for document build https://review.openstack.org/562951
08:32:35 is the unified limits thing used by services already? (in pike)
08:37:07 how do unified limits interact with keystone federation?
12:40:25 Monty Taylor proposed openstack/keystoneauth master: Turn normalize_status into a class https://review.openstack.org/564110
12:41:19 kmalloc, lbragstad, ^^ I think that should address the review comments
12:41:31 kmalloc: ping me when you're up and we can talk about your compat concern and stuff
14:01:41 mordred: o/
14:01:44 Here
14:02:41 Drinking absurdly strong coffee
14:02:46 But here.
14:04:03 hi there. what does the keystone policy service actually do? as far as i understand, policies are handled by every openstack component individually (that is by using oslo.policy). or is the keystone architecture as described at https://docs.openstack.org/keystone/latest/getting-started/architecture.html not up-to-date?
14:05:45 rabel: that was an API that was introduced shortly after the v3 API but never really fully realized
14:11:30 kmalloc: \o/
14:12:35 So, my only concern is that the returned catalog will be different between releases of ksa with this for the same unchanged OpenStack (in theory)
14:13:02 And that, in itself, would be a behavior change in ksa / might break someone using it
14:13:08 kmalloc: so - in general, the whole aliases thing is aimed at letting deployers start to deploy things with better service types without breaking their users - at the moment nobody should be doing it because it would break everyone. it SHOULD return all the same data for the existing clouds
14:13:32 of course, I could totally be wrong
14:13:41 If it was anything but KSA I would have already +2'd it
14:13:48 FTR
14:13:48 yah - totally
14:14:24 I am ok with it if this should just work and not break anyone... But you get my concern :)
14:14:53 I'm wondering what we should do to validate whether the concern is a thing ... oh, and YES - I totally agree on not wanting to do the thing you are concerned with
14:14:57 But I want to be 3x sure we aren't really changing any behavior that someone might be using.
14:15:19 Yeah, I don't know how to be sure.
14:15:28 the one thing I could come up with in my head that could be different ...
14:15:32 Which is why I wanted to chat ;)
14:15:53 lbragstad: so keystone in reality only consists of 5 components instead of 6?
14:15:57 is that if a deployer deployed cinder using service-type block-storage and didn't use volumev2 or volumev3 - and the user requested service-type volumev2
14:16:18 Yep, that is the only real example I can come up with.
14:16:20 rabel: i guess it depends on your use case
14:16:31 kmalloc: in that case, today the user would get "can't find endpoint" and after this change they'd get the block storage endpoint
14:16:54 but as far as the policy stuff is concerned, that direction never really panned out because we were going to be able to pull all policies across all of openstack into a single service
14:17:24 How likely is that though.
14:18:35 mordred: I am inclined to approve this patch as is.
14:19:22 kmalloc: cool. I think as a second verification, I'll write the sdk consumption patch so we can see it do all of shade/sdk functional tests too
14:19:34 Heck, I kind of want to make KSA2 and just dump some stuff and clean up things. :P
14:19:40 kmalloc: and we should make sure that's happy before we cut the release
14:19:47 kmalloc: dude. srrsly
14:19:50 Yeah, sounds good.
14:19:55 the number of flags we've grown
14:20:01 lbragstad: is this going to happen in the future? i mean having a central policy service? should we add a note to https://docs.openstack.org/keystone/latest/getting-started/architecture.html that keystone's policy service is not actually used and the corresponding api is deprecated?
14:20:08 You know, it means we are successful with it.
14:20:23 And we have been careful to make it stable.
14:20:36 We did a good job (tm) on ksa
14:20:42 rabel: i doubt it
14:20:58 but there have been variants of the idea proposed
14:21:17 that don't include a full fledged policy mapping for every service in keystone
14:21:49 mordred: we can chat about ksa2 (strictly API compat, but behavioral cleanup) next release
14:22:22 Let's get a sign off from lbragstad on this change.
14:22:55 And we can roll forward (sdk stuff as a dependent change)
14:23:10 * mordred nudges lbragstad
14:23:26 * lbragstad looks for a link
14:24:14 lbragstad: service aliases. Sorry on mobile (+_+)
14:24:30 https://review.openstack.org/564110 ?
14:25:08 https://review.openstack.org/#/c/462218/4
14:25:23 But yes, that stack
14:25:38 See my comments and the quick chat mordred and I had here just now.
14:25:55 But I'm inclined to accept this, but want other eyes first.
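Roughly what the alias fallback under review amounts to, as a hypothetical helper (not keystoneauth's actual API): try the requested service type in the catalog first, then fall back through its official type, so existing clouds keep returning the same data.

```python
# Hypothetical sketch of the fallback semantics being reviewed; the real
# alias data comes from service-types-authority / os-service-types.
OFFICIAL_TYPES = {'volumev2': 'block-storage', 'volumev3': 'block-storage'}

def resolve_service(catalog, requested_type):
    """Find a catalog entry, falling back from an alias to the official type."""
    if requested_type in catalog:
        return catalog[requested_type]  # exact match: today's behavior
    official = OFFICIAL_TYPES.get(requested_type)
    if official in catalog:
        return catalog[official]        # e.g. volumev2 -> block-storage
    raise LookupError('no endpoint for %s' % requested_type)

# kmalloc's edge case: cinder registered only as block-storage, user asks
# for volumev2. Before the change: "can't find endpoint"; after: a match.
catalog = {'block-storage': 'https://cinder.example.com'}
print(resolve_service(catalog, 'volumev2'))
```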
14:26:27 lbragstad: thank you! what do you think about adding a note to the docs?
14:26:45 rabel: yeah - i don't see an issue with that
14:27:16 fwiw - the API reference has a warning https://developer.openstack.org/api-ref/identity/v3/index.html#policies
14:27:48 as far as the architecture document is concerned, i wouldn't be opposed to just removing the policy section
14:28:12 it has no real significance on the overall architecture of keystone
14:28:25 and is mostly boilerplate code
14:28:52 thinking about it from someone trying to digest that document, it's just an extra thing they have to hold in their mental map
14:28:53 lbragstad: ok, I will create a change for deleting that section
14:29:31 rabel: thanks
14:31:03 kmalloc: you're good with the test added at the end of that change?
14:33:02 lbragstad, kmalloc: cmurphy also often has good eyes for spotting when I do something crazy
14:33:16 heh
14:33:53 trying to finish something else up, can look later tonight if it isn't already taken care of
14:35:15 lbragstad: further down in that same document there are some keystone internals including more stuff about policy. i don't touch that stuff, right?
14:36:00 rabel: yeah - i would leave that because it's describing the relationship we have with oslo.policy to do policy enforcement
14:38:41 David Rabel proposed openstack/keystone master: Remove policy service from architecture.rst https://review.openstack.org/564239
15:04:34 David Rabel proposed openstack/keystone master: Remove policy service from architecture.rst https://review.openstack.org/564239
15:07:01 David Rabel proposed openstack/keystone master: Remove policy service from architecture.rst https://review.openstack.org/564239
15:37:24 Lance Bragstad proposed openstack/keystoneauth master: Use Status variables in tests https://review.openstack.org/564258
15:53:23 Lance Bragstad proposed openstack/keystoneauth master: Reference class variable in Status https://review.openstack.org/564262
16:01:46 kmalloc: for that ksm change, wouldn't we want service users who have roles on the system to also have that header populated?
16:02:22 uhm...
16:02:25 maybe?
16:02:39 it wasn't clear to me.
16:03:56 i mean - that might be a little bit into the future
16:04:16 but, i figured it might be possible to have service users that have roles on the system
16:05:51 trying to figure that out
16:06:45 also do we want to call this: openstack-system-scope: system
16:06:51 or .. openstack-scope-type ?
16:07:12 that's a good question
16:07:21 we already have the project-id domain-id bits
16:07:27 which relay scope
16:07:37 annnd i wouldn't want to call it openstack-auth-system-scope and openstack-service-system-scope
16:07:46 which is why i don't think it belongs in the template
16:08:06 (that's what the template does)
16:09:32 got it
16:09:36 that's fair
16:10:04 Lance Bragstad proposed openstack/keystonemiddleware master: Introduce new header for system-scoped tokens https://review.openstack.org/564072
16:10:05 new version ^
16:10:08 cool.
16:10:11 no tests yet
16:10:18 but just working through the feel of it
16:10:43 yah
16:27:33 i'm going to take lunch quick, but i'll get the ksm patch tested this afternoon
16:40:09 Merged openstack/keystoneauth master: Expose version status in EndpointData https://review.openstack.org/559125
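A rough sketch of what the ksm patch above is shaping, not the actual 564072 code: after validating the token, auth_token-style middleware scrubs inbound copies of its headers and repopulates them from the token. The header name here follows the openstack-system-scope candidate from the naming debate; the final spelling was still open.

```python
def set_scope_headers(headers, token):
    """Populate scope headers from a validated token body (a dict)."""
    for name in ('X-Project-Id', 'X-Domain-Id', 'Openstack-System-Scope'):
        headers.pop(name, None)  # never trust inbound copies of these

    if 'system' in token:
        # system-scoped token, e.g. token['system'] == {'all': True}
        headers['Openstack-System-Scope'] = 'all'
    elif 'project' in token:
        headers['X-Project-Id'] = token['project']['id']
    elif 'domain' in token:
        headers['X-Domain-Id'] = token['domain']['id']

hdrs = {}
set_scope_headers(hdrs, {'system': {'all': True}})
print(hdrs)  # {'Openstack-System-Scope': 'all'}
```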
17:31:32 mordred: once lbragstad's questions are answered i'm good with the aliases change
17:37:05 kmalloc: awesome. in exchange, I have written you a whole new patch to be excited about
17:37:13 kmalloc: are you ready to be excited by it?
17:37:14 oh no :P
17:37:43 kmalloc: (I realized there was one last piece lurking when I went to write the sdk patch)
17:37:51 hehe
17:38:12 Monty Taylor proposed openstack/keystoneauth master: Infer version from old versioned service type aliases https://review.openstack.org/564299
17:38:40 you're gonna like the comment I left in this one - at least I hope you are
17:38:45 god, volume2 version=3
17:38:50 right?
17:38:52 wow, we are baaaaaaad.
17:38:53 :P
17:39:11 this is all literally because of the original impl of python-novaclient that got cargo culted everywhere
17:39:50 since it hid the version discovery docs, there was no way for cinder to rev the version without volumev2 - and we're STILL cleaning up after it years later
17:40:04 EVEN THOUGH openstack has been able to support doing this cleanly since basically day one
17:40:05 yeah
17:40:12 it looks good, going to wait for zuul
17:40:16 it's all 100% the fault of a single library
17:40:26 also... https://review.openstack.org/#/c/564110/2 needs another pass
17:40:28 * mordred stabs everyone
17:40:34 you can't remove a public function
17:40:48 but it's a small fix to just use the class for that function
17:41:22 ah wait. that hasn't been released
17:41:29 bah, i need to drink coffee before reviewing
17:41:30 :P
17:42:39 +2 now
17:42:47 unless a release is cut between now and when it lands.
17:45:01 I'm +2 on the followups from lbragstad ... I feel bad for not co-authoring efried though
17:45:59 kmalloc: you saw those, yeah?
17:48:16 yeah - the whole volumev2, volumev3, block-storage test case kinda makes me sad
17:48:18 but...
17:48:32 i assume that's a different problem
17:48:46 such sadness
17:49:08 lbragstad: I can make a class - and will do that real quick - if it subclasses dict it doesn't break json.dumps
17:49:19 nice
18:00:32 mordred: yeah +2 on those
18:29:28 hmm - our ksm tests are pretty dated
18:35:02 we reference two unsupported token providers
19:00:41 Lance Bragstad proposed openstack/keystonemiddleware master: Introduce new header for system-scoped tokens https://review.openstack.org/564072
19:00:44 kmalloc: tested ^
19:58:14 lbragstad, does oslo-context have support for system scope? I assume it does, just want to confirm
19:58:26 ayoung: it's in the pipe
19:58:36 https://review.openstack.org/#/c/530509/
19:58:46 but it needs this too https://review.openstack.org/#/c/564072/
19:58:47 lbragstad, I think that needs to land before https://review.openstack.org/#/c/564072/ middleware will work.
19:58:59 yeah
19:59:11 I got fooled by that doing is_admin_project. Jamie helped straighten it out
19:59:11 middleware will set the header but it won't be recognized by anything
19:59:25 yeah - i tried reverse engineering the approach in that patch
19:59:26 and thus policy can't get enforced on it
19:59:29 not sure if i got close or not
19:59:39 have jamielennox look at it
19:59:54 I really don't trust anyone else with oslo-context
20:00:07 i grokked at it for a while
20:00:19 the ksm stuff is intense too
20:00:30 the header translation bits threw me for a loop
20:00:30 yep
20:00:43 I'll keep on those reviews. I've been beating on that beast for years now
20:00:55 keep beating please
20:00:59 those should be good to review
20:01:16 we'll have to get both of those merged before the patch to nova will make any sense
20:36:15 Lance Bragstad proposed openstack/oslo.policy master: Update documentation to include usage for new projects https://review.openstack.org/564340
21:07:33 hey any thoughts on a change in about the last 3 weeks that might have affected the saml2 items?
21:08:13 the puppet tests that deployed Shibboleth/saml2 are failing now http://logs.openstack.org/87/558887/10/check/puppet-openstack-beaker-centos-7/cc9be41/logs/keystone/keystone.txt.gz#_2018-04-25_18_29_34_339
21:08:58 i would guess it's https://review.openstack.org/#/c/350815/ but i'm not sure what we should be loading instead
21:15:45 mwhahaha: i think this shouldn't be set http://git.openstack.org/cgit/openstack/puppet-keystone/tree/manifests/federation/shibboleth.pp#n102
21:16:07 [auth]/methods = [...],saml2 should be enough
21:16:23 so drop the module_plugin bits
21:16:31 yeah
21:16:38 k i'll give that a shot
23:19:32 lbragstad: what is that header - a boolean?
08:40:13 Hello, I'm thinking about a keystone cluster in two data centers. Can you recommend me some best practices or do you have some architectural plan?
08:43:04 Does keystone support this currently? assert:supports-zero-downtime-upgrade
09:32:18 eEbx: we don't really have a good document on that yet :/ you might try the #openstack-operators room or the openstack-operators mailing list for some best practice advice
09:33:01 cmurphy: ok thanks a lot
09:33:45 jmccarthy: we don't currently assert that tag, but we do support rolling upgrades (we're just short of asserting the tag by some CI requirements) and zero downtime should be achievable that way
09:35:18 cmurphy: Ok - this is the best docs for this at the moment is it ? https://docs.openstack.org/keystone/pike/admin/identity-upgrading.html
09:37:09 jmccarthy: yes or the queens version https://docs.openstack.org/keystone/queens/admin/identity-upgrading.html
09:37:24 though i don't think it's changed
09:37:34 cmurphy: Oh yes, great - thanks ! :)
09:37:51 np
11:22:18 Monty Taylor proposed openstack/keystoneauth master: Turn normalize_status into a class https://review.openstack.org/564110
11:22:19 Monty Taylor proposed openstack/keystoneauth master: Infer version from old versioned service type aliases https://review.openstack.org/564299
11:22:20 Monty Taylor proposed openstack/keystoneauth master: Make VersionData class https://review.openstack.org/564469
11:23:35 cmurphy: ^^ that last patch should take care of your original review on the version data patches
11:23:57 mordred: sweet
11:24:41 cmurphy: if we don't watch out, we're going to have a workable service type story
11:25:15 whoa
12:22:45 o/
13:04:33 Monty Taylor proposed openstack/keystoneauth master: Infer version from old versioned service type aliases https://review.openstack.org/564299
13:04:34 Monty Taylor proposed openstack/keystoneauth master: Allow tuples and sets in interface list https://review.openstack.org/564495
13:57:28 jamielennox: the system scope one?
13:57:33 it's a dictionary
15:16:51 Merged openstack/keystone master: Remove policy service from architecture.rst https://review.openstack.org/564239
15:44:20 Lance Bragstad proposed openstack/oslo.policy master: Update documentation to include usage for new projects https://review.openstack.org/564340
15:56:57 if we propose moving the default roles session from Thursday to Monday at 11:35 how do people feel about that?
15:57:21 +1 from me
15:57:32 yes please
15:59:52 knikolla: have you heard of any updates about the mutable configs goal?
16:00:14 i was just reading the weekly release email and it reminded me of that
16:14:06 I'm trying to associate a user with a project using Keystone's REST APIs. Initially I set the default_project_id field but that seems more like a suggestion.
16:14:46 empty_cup: correct
16:14:58 empty_cup: you need to explicitly give that user authorization
16:15:16 I'm also assigning the role to the user on the project and both return successfully but when I list the projects that a user belongs to, it is empty.
16:16:09 empty_cup: we have a note about default_project_id here - https://developer.openstack.org/api-ref/identity/v3/index.html#users
16:16:42 empty_cup: which APIs are you calling? do you have a trace?
16:17:30 Ok, the confirmation helps that I need to create a role and assign it to a user on a project
16:21:05 lbragstad: I'm working on a trace. Basically I create a user, then a role, assign the user to the role on an existing project.
16:21:23 ok
16:21:28 that sounds about right
16:21:57 what api are you asking for a user's projects with?
16:22:33 Although when I list projects for the user there is nothing in the array. I'm using v3. I can see each in Horizon except for the role
16:23:13 I am creating all of the objects in a separate domain although I can view them while logging in as the default admin account on the default domain.
16:23:23 Except for the role, the role doesn't show up.
16:25:43 hmm
16:25:52 are you using /v3/role_assignments
16:25:58 or GET /v3/auth/projects ?
16:26:12 are you using openstackclient?
16:27:21 yep, v3 for everything, that's my reference
16:27:50 sorry, i'm using post requests for everything -- i have a python script or using curl
16:29:23 ok - if you have a token for the user you're trying to list projects for, you should be able to use GET /v3/auth/projects
16:29:29 and get a list of all projects you have authorization on
16:29:54 otherwise, as an administrator, you should be able to call the /v3/role_assignments API and list all role assignments present in the deployment
16:31:20 i'm working on it, thanks for the help
16:41:08 Monty Taylor proposed openstack/keystoneauth master: Infer version from old versioned service type aliases https://review.openstack.org/564299
16:42:25 Monty Taylor proposed openstack/keystoneauth master: Allow tuples and sets in interface list https://review.openstack.org/564495
16:44:47 lbragstad: I got the class extraction done in that stack ^^ like you wanted
16:44:55 lbragstad: https://review.openstack.org/#/c/564469/1
16:45:12 oh - cool
17:19:39 lbragstad[m]: I confirmed by using the roles?domain_id that I see the role created. And I have a trace of activity. Is there a pastebin of choice the channel uses?
17:19:51 I feel like I'm really close to understanding how keystone works
17:21:21 empty_cup: http://paste.openstack.org is a good one
17:21:25 empty_cup: and yay for understanding!
17:26:36 Pasted here: http://paste.openstack.org/show/719965/
17:41:24 i'm starting to think it may be influenced by the scope of the admin token
17:43:56 empty_cup: which API are you calling?
17:47:43 lbragstad[m]: i haven't checked with regards to mutable config. will do.
17:47:57 lbragstad: http://localhost/identity/v3
17:49:27 that's the root path of the v3 api
17:51:53 oh sorry, i misunderstood the question, which api call? the pastebin contains the resulting text for each API call. i can include the URL data as well
17:54:44 lines 15 and 16 in your paste seem to be missing the call you made?
17:59:44 lbragstad[m]: /roles?domain_id= using the same domain_id fed into the creation commands
18:00:30 oh - gotcha
18:00:57 so - that's only going to give you a list of roles filtered by the domain, were you looking for it to give you a list of people with authorization on that domain?
18:10:22 that's a good point, i'll find the other command that will list authorized users
18:16:22 empty_cup: https://developer.openstack.org/api-ref/identity/v3/index.html#id594 might help
18:55:34 ok i was applying the role to the domain and not on the project. that's fixed
18:56:34 now through the use of policy.json i can craft a policy that says this role has the ability to create users only within this project
18:56:57 technically - yes...
18:57:07 is there a better way?
18:57:50 but we do have some stuff in the works to make it so that you don't have to roll a custom policy for that kind of thing https://bugs.launchpad.net/keystone/+bug/1748027
18:57:52 Launchpad bug 1748027 in OpenStack Identity (keystone) "The v3 users API should account for different scopes" [High,Triaged] - Assigned to sonu (sonu-bhumca11)
19:06:12 neat
19:07:49 now that i have the trifecta of role, project, and user. when i list projects for user the array is still empty. should it be empty or populated with the project?
19:08:39 how are you listing the projects for a user?
19:08:52 GET /v3/auth/projects ?
19:09:15 /v3/users/{user_id}/projects
19:09:54 as an admin from default
19:10:59 empty_cup: and the user is in a different domain with a role assignment on a project in a different domain?
19:17:57 empty_cup: i was able to do this locally - http://paste.openstack.org/raw/719967/
19:18:26 i was operating as the 'admin' user from devstack which has the administrator role and is within the default domain
19:23:32 cool, i'm looking at it. i've been using the 'admin' user from devstack as well
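A minimal sketch of the calls that sort this out, with hypothetical IDs and tokens; the API paths are keystone's real v3 routes. The key is granting the role on the project rather than the domain, then verifying with the user's own token:

```python
import requests

KEYSTONE = 'http://localhost/identity/v3'
ADMIN_TOKEN = '...'  # placeholder admin token
USER_TOKEN = '...'   # placeholder token for the user being tested
user_id, project_id, role_id = 'u123', 'p456', 'r789'

# Project-level assignment: PUT with an empty body, 204 on success.
r = requests.put('%s/projects/%s/users/%s/roles/%s'
                 % (KEYSTONE, project_id, user_id, role_id),
                 headers={'X-Auth-Token': ADMIN_TOKEN})
assert r.status_code == 204

# As the user: list the projects you now have authorization on.
r = requests.get('%s/auth/projects' % KEYSTONE,
                 headers={'X-Auth-Token': USER_TOKEN})
print([p['name'] for p in r.json()['projects']])
```

A domain-level assignment uses PUT /v3/domains/{domain_id}/users/{user_id}/roles/{role_id} instead, which is why the project listing came back empty above.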
19:47:34 lbragstad: did morgan change his nick again or is he just not in channel?
19:48:00 i think kmalloc just dropped
19:48:05 i'm unaware of a nick change
19:48:16 but - would totally believe it if he did change nicks again :)
19:48:35 right? tough to keep up with him on that :)
19:49:00 it took me about 2 days to catch on to kmalloc
19:49:01 lbragstad: in any case, the plan of "make an sdk patch consuming the alias patches to make sure we didn't miss anything" TOTALLY bore fruit and caught a bug
19:49:20 nice!
20:10:17 Monty Taylor proposed openstack/keystoneauth master: Infer version from old versioned service type aliases https://review.openstack.org/564299
20:13:53 lbragstad: ^^ this time with winning included
20:29:07 Merged openstack/oslo.policy master: Trivial: Update pypi url to new url https://review.openstack.org/563368
21:00:30 Doug Hellmann proposed openstack/oslo.policy master: make the sphinxpolicygen extension handle multiple input/output files https://review.openstack.org/564627
21:00:50 lbragstad: is there the ability for a project "admin" to create users scoped to a specific project within a specific domain?
21:01:29 Monty Taylor proposed openstack/keystoneauth master: Infer version from old versioned service type aliases https://review.openstack.org/564299
21:02:14 empty_cup: an admin can give users roles on specific projects, which essentially acts as scope
21:02:34 empty_cup: are you trying to find a way to make it so that a user can only work within one project?
21:02:46 otherwise users are technically scoped to domains
21:03:03 which act as containers for users, groups, and projects
21:03:08 and sometimes roles
21:03:38 Doug Hellmann proposed openstack/oslo.policy master: make the sphinxpolicygen extension handle multiple input/output files https://review.openstack.org/564627
21:07:38 lbragstad: oh, i'm starting to see that's not how it is designed. the user is created in the domain and then based on roles is allowed into projects
21:11:54 correct
21:12:04 keystone tries to be explicit with authorization
21:12:28 so even if a user's domain is set, they don't automatically get authorization on projects within that domain
21:12:48 an administrator of some kind must grant them authorization explicitly
21:26:38 got it
22:55:58 lbragstad: ok, it's kind of weird (or just not something we'd done before) passing a dict through the environment headers
22:56:20 is it a known dict or something that will change frequently?
01:50:35 jamielennox: that's a good question
01:51:03 right now it will always be {"all": true}
01:51:30 just because we don't have the ability to scope to something other than the system as a whole
01:52:23 that said, it could change to something like {"compute": "7fce0720478d4595bc02970a1d467d51"} in the future
01:52:36 if 7fce0720478d4595bc02970a1d467d51 is the service id or something like that
01:52:38 * lbragstad shrugs
01:52:48 i'm not sure what the best way is to relay that information
01:55:47 that's an interesting target - the concept has obviously changed a bit since i last looked
01:56:10 there was talk about having system be able to be targeted to a region
01:56:19 right
01:56:32 but last i was involved you could do a region and then use roles to be able to restrict to compute
01:56:45 before we just had to deal with an id that could belong to a project or a domain, so this is obviously more complicated than that
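Purely illustrative of the awkwardness jamielennox is pointing at: the system value is a dict (today always {"all": true}), but WSGI headers are strings, so middleware has to flatten it somehow on the way through. JSON is one obvious encoding; the log shows the actual representation was still unsettled.

```python
import json

system = {'all': True}  # could later become e.g. {'compute': '<service id>'}

header_value = json.dumps(system)    # what middleware might put in a header
restored = json.loads(header_value)  # what oslo.context might parse back out
assert restored == system
```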
09:49:55 lei zhang proposed openstack/keystone master: Fix the outdated URL https://review.openstack.org/564714
13:45:53 cmurphy: this week's bug report https://gist.github.com/lbragstad/80862a9111ff821af07e43e217c52190
13:46:15 lbragstad: nice
14:48:44 oh look. it's a jamielennox
14:57:48 yay thank you to whoever started adding to the update etherpad
14:57:57 * cmurphy is scatterbrained today
15:16:07 cmurphy: no worries - i just jotted down a couple things
15:16:17 feel free to reword it however you wish
15:24:43 already sent it out
15:37:22 oh - fantastic
16:30:33 jamielennox: i responded to your comments here - https://review.openstack.org/#/c/530509/5/oslo_context/context.py
16:30:48 curious if you have any suggestions for what to do with `system` if it's a dictionary
16:54:04 lbragstad: delete it
16:54:11 lbragstad: delete everything
16:54:42 * lbragstad types in sudo rm -rf /
16:57:25 --no-preserve-root
17:05:04 lol
17:25:00 Lance Bragstad proposed openstack/keystone master: Introduce new TokenModel object https://review.openstack.org/559129
17:25:13 mordred: o/
17:25:18 See I'm here!
17:25:29 * kmalloc kicks irccloud hard.
17:25:37 kmalloc: gagehugo wxy i thought i was hitting transient issues with loading the backend drivers... but i couldn't reproduce it
17:25:41 ^
17:26:10 lbragstad: in what way? Oh, in the token model change?
17:26:11 that patch introduces a new base class that uses bootstrap instead of the massive ksfixtures loading TestCase does today
17:26:15 yeah
17:26:23 Cool. Yay bootstrap
17:26:41 it'd be nice to get rid of all the stuff we have that loads custom test data
17:26:46 in favor of what bootstrap does
17:27:05 then push the test class specific data needed by certain tests into the setup classes of those classes
17:33:14 ideally - we could start converting TestCase to TestCaseWithBootstrap
17:33:38 once everything is moved over, we could rename TestCaseWithBootstrap to TestCase and remove the load_fixtures method
17:33:56 Hah, turns out, irccloud crashed my phone :P
17:34:07 Uninstall and reinstall all fixed.
17:34:38 hmm
17:36:40 mordred: now let me remember what I was going to ask you.... Ugh.
17:56:55 Oh right, mordred, the question doesn't really impact you because you're a consumer of other clouds.
17:57:14 Was going to ask about MFA and some defaults for say a whole domain.
17:58:03 lbragstad: I'll implement some logic this weekend around MFA rules and such for domains and get the resource-options for projects/domains/groups done so we have a place for that to live.
17:59:00 cmurphy: do you see a benefit for resource options on app creds?
17:59:26 If so, since I'm adding the support code in other places, happy to do it there as well.
18:01:40 kmalloc: i'm missing context
18:02:14 Like the pci-dss user options, do you see a value to support such things in app creds.
18:02:33 Not anything specific, but options for the app-creds that are fluid like that
18:03:13 I'm writing code to support it in other subsystems (resource) and can easily hit app-creds too if that would be of benefit
18:07:53 kmalloc: maybe i've been out of it this week but i'm not understanding what this would be introducing
18:08:12 kmalloc: but i see value in making everything consistent so go for it
18:08:31 Okie, I'll propose it, and we can discuss in review.
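For context on the "pci-dss user options" kmalloc mentions: keystone's v3 users API already carries per-user resource options, such as MFA rules, and the proposal here is to grow the same mechanism on projects, domains, groups, and possibly app creds. A sketch of setting one such option, with a hypothetical endpoint and user ID; multi_factor_auth_rules itself is a real user option:

```python
import requests

KEYSTONE = 'http://localhost/identity/v3'
ADMIN_TOKEN = '...'  # placeholder
user_id = 'u123'

body = {'user': {'options': {
    # one acceptable rule: password AND totp together
    'multi_factor_auth_rules': [['password', 'totp']],
}}}
r = requests.patch('%s/users/%s' % (KEYSTONE, user_id),
                   json=body, headers={'X-Auth-Token': ADMIN_TOKEN})
print(r.json()['user']['options'])
```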
18:10:10 that will help
18:14:54 kmalloc, what do you think about these servers for a DIY cluster: https://www.ebay.com/itm/Dell-PowerEdge-R610-Server-2x-2-53GHz-E5540-8-Cores-32GB-SAS6i-2x146G/191545700823?hash=item2c990371d7:g:rmQAAOSwvfZZ57~d
18:17:26 supported CPU on RHEL 7 etc
18:21:25 But I bet they have no management controllers, which means no IPMI and thus can't work for Director
18:41:12 Lance Bragstad proposed openstack/keystone master: Decouple bootstrap from cli module https://review.openstack.org/558903
18:41:12 Lance Bragstad proposed openstack/keystone master: Introduce new TokenModel object https://review.openstack.org/559129
18:41:50 ^ kmalloc ok - those should pass now
19:01:43 ayoung: hmmm
19:01:48 lbragstad: thanks
19:01:58 yup
19:02:06 ayoung: Dell, should have idrac card compat
19:02:20 ayoung: at least.
19:02:28 But I'd go hp
19:07:26 lbragstad: re: one of your questions - I don't THINK we need specific docs on aliases ... the tl;dr is "you can now just use the official type regardless of what the deployer used"
19:07:38 but with a good dose of "it'll also do the right thing if you ignore this"
19:07:39 ayoung: but otherwise those aren't too bad. I personally would worry about the sas drives vs sata. Just so $$$ to replace.
19:07:47 what do you think?
19:08:02 mordred: ok
19:08:15 mordred: link to the aliases would be my recommendation, and just a simple page showing them (docs published)
19:08:30 But I'd be ok with that down the line. Not needed to land this now.
19:08:50 Heck, I'll even sign up to help make sure that is a thing.
19:08:58 kmalloc: ++
19:09:15 kmalloc, lbragstad: also, replied to the other question on https://review.openstack.org/#/c/462218/4/keystoneauth1/access/service_catalog.py inline
19:09:16 * kmalloc tosses on the top of the TODO pile
19:09:52 Ah, nice.
19:10:29 ok - so that's part of the service-types-authority api
19:10:35 Yeah
19:10:38 and we shouldn't have to worry about the sorting bit
19:10:48 Exactly, it should be ordered for us.
19:10:55 i saw it mentioned in the comment, but wasn't sure if that fell on our shoulders or not
19:11:04 Nah. Good thing too :)
19:11:17 yah - part of the api there (also, it should really never change because we should not add any new aliases - but you never know)
19:11:26 mordred: do we have a keystone/KSA/service-types cross gate?
19:11:27 kmalloc: oh - also - I made more patches for you
19:11:31 Cauuuuuseeeeee.....
19:11:38 kmalloc: we do not - that's a great idea
19:12:03 Yah. That is something we need. ASAP imo
19:12:12 Woo, more patches!
19:12:16 honestly - a cross-gate with service-types-authority and os-service-types and keystoneauth
19:12:53 * mordred can get that cranked out
19:12:56 Yep. I'd prob run keystone under it too, just to avoid mocking and make sure keystone isn't doing stupid.
19:13:23 If you don't get to it, it's on my next week to poke at while I work on doc publish.
19:13:23 kmalloc: cmon. keystone never does stupid
19:13:56 * kmalloc looks around. Looks at keystone...looks at domains....
19:14:02 Yeah... Nevvvvvaaaarrrrr
19:14:12 ;)
19:14:17 hehe
19:15:35 Hmm... I should buy another couple xeon-d low power machines for a full OpenStack cluster...
19:16:01 (Brie might actually shove me in a hole and leave me there :p ;)
19:16:39 I want an ironic managed lab >.>
19:18:47 mordred: totally unrelated, any advice on a good combo smoker/grill... I ... Need one for summertime things..
19:19:13 lbragstad: ^ you too, I'd ask dolphm, but you're both right here.
19:19:37 funny you ask...
19:19:43 i was just working on a comparison
19:19:49 * lbragstad digs
19:20:15 https://github.com/lbragstad/kitchen/blob/master/smoker-comparison.md
19:21:52 Steve Noyes proposed openstack/keystone master: [WIP] Enables MySQL Cluster support for Keystone https://review.openstack.org/431229
19:29:29 lbragstad: niiice
19:29:55 those were a couple models i was looking at
19:30:33 imo - if you're looking for efficiency and ease of use, go with a pellet grill
19:30:45 if you're looking to go old school, get an offset
19:31:03 or a kamado, but i was talking to dolph and his goes through a lot of charcoal
19:31:24 but you get higher cooking temperatures than you would on most pellet setups
19:31:47 so - mimicking wood-fired pizza would be easier
19:32:05 the down side is that it's hard to find a smoker that does it all
19:32:44 if you're into cooking from your bed, you can get a GMG which are wifi enabled and programmable
19:33:23 which seems interesting, but looking at a screen is the last thing i want to do when i'm cooking outside
19:36:38 * mordred is a fan of offset
19:36:46 agrees about finding a smoker that does it all
19:37:12 it's literally impossible
19:37:18 for my next trick I'm planning a custom offset for smoking and then a weber for charcoal grilling
19:37:57 I currently have a cheap offset smoker that's also a charcoal grill - it's fine for smoking, but is a bit lame for the charcoaling
19:38:23 (in that I have to use WAY more charcoal to get the heat since the shape of the enclosure isn't really helping any)
19:38:24 can you open up the heat box?
19:38:46 some offsets come with grates above the fire box for direct heat
19:39:14 so you can grill and smoke at the same time, or use it as a hot plate https://github.com/lbragstad/kitchen/blob/master/smoker-comparison.md#yoder-cheyenne
19:40:13 lbragstad: I'm using this right now: https://www.lowes.com/pd/Char-Griller-Duo-Black-Dual-Function-Combo-Grill/1245537?cm_mmc=SCE_PLA-_-SeasonalOutdoorLiving-_-Grills-_-1245537:Char-Griller&CAWELAID=&kpid=1245537&CAGPSPN=pla&store_code=2516&k_clickID=a2bb504e-60a1-4625-9f39-056f5bc1bc35&gclid=CjwKCAjwlIvXBRBjEiwATWAQItbnNvU9z0SWASwoL0XSqG95MqJuWmzg3qm-isXIjNGwoPoLGycmkxoCuY0QAvD_BwE
19:40:18 wow. terrible link, sorry
19:40:26 still got the job done
19:40:28 :)
19:40:40 it's mostly garbage. however - it turns out meat is VERY forgiving
19:40:56 and I've smoked many delicious things in it
19:41:05 nice
19:41:26 all full-log - none of this pellet or fan-assisted madness
19:41:54 old school ;)
19:41:55 fans are not needed - physics creates airflow for you just via the chimney effect
19:42:38 i've cooked on https://github.com/lbragstad/kitchen/blob/master/smoker-comparison.md#char-griller-acorn quite a bit (just a cheaper kamado)
19:42:57 ++
19:42:58 combined with https://bbqguru.com/storenav?categoryid=1&productid=22 and you have a pretty sweet smoking setup
19:43:28 dolph uses ^ with a kamado joe and it works amazing, we did beef ribs on it at 225 and it was rock solid
19:43:52 the fan just controls airflow and is hooked up to a thermostat
19:44:10 it can keep temperature within a perfect range for smoking
19:44:13 i was impressed
19:44:19 yah - that's too much tech for me
19:44:35 * mordred make fire
19:44:40 * mordred ungh
19:44:54 lbragstad: http://www.bellfab.com/ <-- I'll be getting my next smoker from that guy
19:46:25 oh wow
19:47:06 that's a lot of cooking surface
20:29:09 Oh yes, Weber for grilling
20:29:32 Damn that is a nice smoker!
20:30:08 mordred: I just want to be able to use the wine cask staves. :)
20:31:05 kmalloc: :)
20:31:26 kmalloc: so - what I've actually done with them is use them in conjunction with lump charcoal
20:31:52 in a two-zone setup using the barrel smoker as a charcoal grill
20:32:41 closing the lid for some indirect cooking/smoking - then shifting the meat over the coals/wood-fire for a finishing
20:33:24 lbragstad: yah - the thing I especially like about that guy too is that I can go custom - so that I can get the exit chimney out the side, etc
20:33:28 Oooh
20:33:38 Good to know! That sounds good.
20:34:01 kmalloc: yah - that way I can use like one per cook - get the smoke from it without having to, you know, use the whole box just to get a nice fire :)
20:34:22 Yes they get expensive otherwise
20:34:33 mordred: that's nice, i know yoder has a build-your-own process for offsets, so you can get the features you want
20:35:08 * lbragstad has 0 experience on an offset
20:37:13 mordred: how long does it take him to build one?
20:37:20 er... on average
20:38:17 lbragstad: not sure yet - we're just in the initial stages of discussing - I'll let you know when I get a timeframe from him
20:38:47 * mordred wants to be able to get an entire half-pig on the cook surface - so it's not gonna be a small one
20:39:09 the transit is gonna be ... fun
20:39:20 nice
20:39:46 i was gonna ask - how do you plan on getting it home? ;)
20:39:53 free shipping?
20:40:27 I plan on driving a pickup truck up to oklahoma and hoping I can find enough people to help me get it off the truck and in to my yard once I'm back
20:40:56 offer bbq coupons
20:41:26 "redeemable for limit 1 (one) plate of bbq"
20:44:00 ++
22:12:33 kmalloc, I think the lack of ipmi is going to be the killer for me after all. I want something that does full TripleO.
22:42:00 Lance Bragstad proposed openstack/keystone master: WIP: re-use bootstrap in tests https://review.openstack.org/564905
23:13:10 ayoung: it is exactly why I went super micro and considered HP for lab boxes.
23:50:44 Using the openstack client i've created a new domain, a user, and a role and have made a role assignment on the domain. In the policy file I've given that role access. How come when listing roles I receive "service catalog is empty"?
23:51:08 I was imagining I would see the single role in the domain
00:57:51 when operating devstack, should it run from the master branch?
00:58:02 is the master branch a "develop" branch?
01:05:50 i'm rerunning ./stack.sh while on the stable/queens branch. Originally it was situated on master.
01:06:48 What prompted this path was the user showing as unauthenticated when originally set with a password and options showing up as empty
10:53:43 Merged openstack/keystone master: Invalidate the shadow user cache when deleting a user https://review.openstack.org/561908
13:24:24 O/
13:55:09 o/
13:56:42 Zzzz
13:56:44 ;)
14:00:51 o/
14:11:21 Matt Riedemann proposed openstack/oslo.policy master: make the sphinxpolicygen extension handle multiple input/output files https://review.openstack.org/564627
15:22:25 kmalloc, lbragstad there's a failure in the openstacksdk patch that I made to have it consume the ksa alias/discovery stuff that looks real - although it looks VERY strange
15:22:49 Hmm
15:22:54 kmalloc, lbragstad: tl;dr - it looks like cinder endpoints in devstack are being discovered incorrectly - missing the version
15:23:02 Doh!
15:23:08 I'm digging in now to figure out if it's a bug in ksa or a bug in sdk's use of ksa
15:23:24 This is KSA master or a release?
15:23:44 master
15:23:55 this is the "test the new ksa patches with shade/sdk before landing them"
15:24:07 Ok. Good, hope that means no release with the bug. ;)
15:24:35 Also yay for that test.
15:25:20 http://logs.openstack.org/94/564494/1/check/openstacksdk-functional-devstack-tips/52ba4b2/testr_results.html.gz is the failed test run
15:25:58 o/
15:26:01 kmalloc: yah. I mean, also it's a patch to remove discovery related logic from sdk and replace it with just passing parameters to ksa ... which is awesome - but it's entirely possible there's a nuance in that patch that's bong
15:26:02 Walking the dog, those logs don't load well (too big) on mobile (any logs)
15:26:09 Will look as soon as I am home.
15:26:12 kmalloc: bah. get a bigger phone
15:26:25 It is actual data volume
15:26:40 This is a pixel XL 2. Chrome just crashes with our logs
15:26:54 (it is also a bad phone imo)
15:27:50 well, you could use a laptop as a phone and then you could look at logs on your phone while you walk the dog
15:28:18 Oh that is the report not the full log
15:28:33 That loaded ok
15:30:07 the first failure actually looks like it's doing the right things (ignore for a sec the fact that it's a test of v2 and it's talking to v3)
15:30:30 Ahh
15:30:39 Hehe
16:26:37 Lance Bragstad proposed openstack/keystone master: Use the provider_api module in limit controller https://review.openstack.org/562712
16:26:37 Lance Bragstad proposed openstack/keystone master: Add configuration option for enforcement models https://review.openstack.org/562713
16:26:38 Lance Bragstad proposed openstack/keystone master: Add policy for limit model protection https://review.openstack.org/562714
16:26:38 Lance Bragstad proposed openstack/keystone master: Implement enforcement model logic in Manager https://review.openstack.org/562715
16:26:39 Lance Bragstad proposed openstack/keystone master: Expose endpoint to return enforcement model https://review.openstack.org/562716
16:43:43 * lbragstad goes to take a run over lunch
18:57:06 dims: just got done parsing the google doc link you added to the jwt spec
18:57:19 it sounds like they are iterating on jwt to provide service tokens?
18:57:49 right, this one seems to be under discussion still
18:58:55 so a workload is equivalent to a service then?
18:59:21 i'm still trying to build a mapping of terms
19:03:14 it appears they're looking to do nested jwts to limit the power of service scoped tokens?
19:08:58 @lbragstad : i am not sure :) we may have to go to one of their meetings maybe in a week or so (this week is KubeCon EU)
19:09:08 oh - right
19:09:35 dims: is there a spec for just the jwt work they did?
19:10:11 this seems like it's reusing some of that to solve a different problem, so i'm just wondering if there is another document that has more context
19:14:55 @lbragstad : let me check
19:19:53 @lbragstad : https://github.com/kubernetes/community/pull/1460/commits/6e209490c441d8df84b6b5d8e352c0e2491a41bd maybe?
19:20:18 @lbragstad : was looking through notes from the container identity work group - https://docs.google.com/document/d/1uH60pNr1-jBn7N2pEcddk6-6NTnmV5qepwKUJe9tMRo/edit?ts=59a03344#heading=h.n00r55m5f4gw
19:25:36 Lance Bragstad proposed openstack/keystone master: Implement enforcement model logic in Manager https://review.openstack.org/562715
19:25:36 Lance Bragstad proposed openstack/keystone master: Expose endpoint to return enforcement model https://review.openstack.org/562716
19:25:42 dims: thanks
20:56:14 anyone around to talk limits?
21:35:06 lbragstad, I am, but I don't think I count. Have not really thought about limits in a while.
21:35:11 Just need a sounding board, fire away
21:37:08 * lbragstad thanks ayoung for being his rubber duck
21:37:11 ok
21:37:39 i'm reviewing the current specification for CERN's limit use cases
21:37:48 https://review.openstack.org/#/c/540803/9/specs/keystone/rocky/hierarchical-unified-limits.rst
21:38:23 it's expected to be a "strict" model, in that it appears to lean on the side of being explicit versus implicit
21:38:33 Good
21:38:48 in queens we implemented registered limits and project limits
21:39:09 where registered limits are something that need to be created before a limit can be associated to a project
21:39:13 "A child may have no more quota than its parent. The sum of all **direct children** must be <= the parent. All children would have to have default quota equal to its parent's."
21:39:16 they also act as a default
21:39:57 so an operator could register a limit of 10 cores for the compute service for a deployment
21:40:06 So... management of Quota happens inside Keystone, where we have full access to all data. Enforcement makes a call to Keystone?
21:40:32 yeah - pretty much, the limit and its validation is stored within keystone
21:40:48 go on...I have about 5 minutes
21:40:48 and exposed to services and consumed via a library to make it easier to adopt/understand
21:41:01 so - given a registered limit of 10 cores
21:41:12 each project without an explicit limit "override" or project limit
21:41:21 will assume a default limit of 10 cores
21:41:45 now - imagine you have project A, whose limit is 20 and usage is 1
21:41:56 and A has two children projects
21:41:58 B and C
21:42:12 neither have project overrides
21:42:25 so - they assume the default value for cores set by the registered limit
21:42:34 (e.g. 10 apiece)
21:42:41 apiece?
21:42:58 yeah - because of the registered limit
21:42:59 so the default is each project gets 10, including child projects
21:43:21 let's say the parent is A in this example and it has a project limit of 20 cores
21:43:25 (so overriding the default)
21:43:29 and that is OK, cuz there are two, and the parent project has 20
21:43:32 and B and C are siblings
21:43:51 ok - here's where it gets confusing
21:43:58 (for me at least)
21:44:22 should you be able to create project D, which is a sibling to B and C, without a project limit override?
21:44:34 I never found a solution to this problem that I liked
21:44:49 because SUM(B.limit, C.limit, D.limit) > A.limit
21:44:53 I would say yes
21:45:24 ok - so let's say we can do that
21:45:36 we now have B, C, and D under A
21:45:48 So...
21:45:54 all children are relying on the default value of 10 cores and A's limit is still 20
21:46:23 If A has a quota of 20, and A creates B and assigns to it 10, A then has 10 remaining...right?
21:46:27 it is tracked that way?
21:46:59 yeah - i think so
21:47:11 so - let's assume
21:47:18 A.usage = 1
21:47:22 B.usage = 2
21:47:35 C.usage = 10
21:47:37 But usage is not tracked in Keystone
21:47:41 right...
21:47:43 only in Nova
21:47:50 this is where it gets really fuzzy
21:47:58 D.usage = 0
21:48:22 let's say you want to use 10 cores in D
21:48:40 and the interface for oslo.limit from the service handing out cores is:
21:49:01 enforce(project_usage, project_id)
21:49:30 so, leaving oslo.limit to use the project ID to get the hierarchy of limits from keystone and evaluate the usage to say "yes" or "no"
21:50:02 don't we have a circular dependency because oslo.limit still needs more usage information from the other children in the tree to make the decision?
21:51:43 the current method signature doesn't allow oslo.limit to determine if (A.usage + B.usage + C.usage + D.usage) <= A.limit (which it finds by querying keystone for limit information)
21:57:41 to me, it seems like we have two problems... 1.) there is a certain level of ambiguity in the model and 2.) requiring services to pass in *all* resources and their owning projects to oslo.limit is going to be painful
22:04:51 lbragstad, yes, that is the problem with usage not being recorded in Keystone. That has always been the tough-to-solve problem
22:04:59 right
22:05:04 so - what about this
22:05:14 * lbragstad whips out the crazy idea notebook
22:05:21 I had thought about an idea of a resource pool
22:05:43 so you have a unified identifier for the quota. Additional level of indirection
22:06:00 parent project ID could be the resource pool id by default
22:06:06 what if all limit validation operations assumed the registered limit for all projects without an explicit override?
22:06:15 not sure it is a solution
22:06:22 ayoung: the resource pool?
22:06:23 I need to parse that
22:06:46 does the resource pool just push usage to the leaves of the tree?
22:06:48 yeah, resource pool requires the projects to keep track of what project is in what resource pool
22:07:11 I think that was why I discarded it last time it came up...let me turn to your statement
22:07:29 ok
22:07:41 So...I kind of think that all quotas should be explicit, and at the project level
22:08:02 like, if I have 100 quota, and 10 projects, each get 10, or each get 5 and I keep 50 at the parent or something
22:08:17 so - let's back up the example
22:08:22 A.limit is still 20
22:08:27 and then push back on the user to request/authorize push-down and automate reclaim
22:08:49 B and C are still children of A and are siblings
22:09:04 resetting to your problem's initial state?
22:09:09 yep
22:09:12 q(A)=20
22:09:16 no children
22:09:26 create project B child of A
22:09:32 by default, 0 quota
22:09:41 if we assume A.limit to be divisible by the number of children in the tree, then it's clear that project D shouldn't be created, right?
22:09:54 explicitly grab 5 quota from A, and now q(A)=15, q(B)=5
22:10:01 B quickly burns through quota.
22:10:10 well - by default, B and C get 10 each because of the registered limit
22:10:13 A has a policy of "allow 5 floating"
22:10:30 so, automated, Nova could say "request more quota for B"
22:11:04 and Keystone would grab 1 (or whatever) from "Floating" and now q(A)=14 q(B)=6
22:12:06 now once B deletes a VM, the question is how to rebalance
22:12:18 so - the interesting part is that the default limit established by the registered limit resource keeps the service from having to pass in all resources and owning projects when calculating usage
22:12:58 OK...so if the default is each get 10, then the question is, would we want to split A, or force the new quota to come from outside A
22:13:09 like...maybe A is just a placeholder project, and should never have a VM
22:13:17 maybe
22:13:21 it depends
22:13:42 but you'd have to go to A to update A.limit to 30 in order to create another child project under A
22:13:49 right
22:14:12 OR, more likely, new projects get 0 quota, or the create-project call fails
22:14:28 or some other reasonable failure mode
22:14:38 the outcome would be the same, in that case you'd have to supply explicit overrides
22:15:04 If the goal is to say "give explicit quota to A, and then let that float among all of A's children" it is one value system
22:15:16 if the goal is "each project should have a strict quota" it is a different one
22:15:22 strict quota is easier to enforce
22:15:37 floating...I still suspect needs to use Keystone as a clearing house
22:15:45 the more i think about it - the more i think they should be separate models
22:15:50 and...I suspect that clearing through Keystone is OK
22:15:58 yes, they are def separate models
22:16:11 because they affect how the service passes usage information to oslo.limit
22:16:27 if request/approve quota are easy and lightweight, then explicit quota makes sense
22:16:31 and ideally, that should be consistent regardless of the enforcement model configured in keystone
22:17:05 the question quickly becomes one of churn
22:17:29 Since Keystone can't call into Nova to free up Quota, nova has to trigger the free-action when the VM is deleted
22:18:04 well - it is possible for keystone to store a limit that is lower than a project's usage for a given resource
22:19:28 if H.limit is 15 and H.usage is 15, you can update H.limit to be 10 if you want
22:20:13 the usage enforcement check should prevent any more resources from being allocated to project H until usage is back under a limit of 10
22:20:27 that can be done by an admin, or members of the project
22:21:23 It was this very discussion that got the Cinder folks to withdraw "have keystone record quota" the first time around...I want to say that discussion happened in Portland
22:21:47 keystone is just storing the limit information though
22:21:57 we're still not tracking the actual usage information
22:23:37 if the usage exceeds the limit, that's fine and should be handled by the oslo.limit library during the usage calculation
22:24:03 at least until the usage is back under the limit threshold
22:24:13 You hit my trigger word
22:24:18 "just"
22:24:38 lol - it's *just* that easy ayoung
22:24:38 So, I think that we need to make all quotas explicit
22:25:39 that makes the code easier to write i think
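A sketch of the strict model as debated above, using the same A/B/C/D numbers; the function names are illustrative, not oslo.limit's interface. It shows both halves of the problem: the child-limit check is easy with limits alone, but the usage check needs sibling usage that the current enforce(project_usage, project_id) signature can't supply.

```python
REGISTERED_DEFAULT = 10            # the registered limit acts as the default

limits = {'A': 20}                 # explicit overrides; B, C, D inherit 10
children = {'A': ['B', 'C', 'D']}
usage = {'A': 1, 'B': 2, 'C': 10, 'D': 0}

def effective_limit(project):
    return limits.get(project, REGISTERED_DEFAULT)

def child_limits_ok(parent):
    # strict rule: sum of direct children's limits stays <= parent's limit
    kids = children.get(parent, [])
    return sum(effective_limit(k) for k in kids) <= effective_limit(parent)

def can_consume(parent, requested):
    # the painful half: deciding for D needs A's usage and D's siblings'
    # usage, not just D's own
    tree = [parent] + children.get(parent, [])
    return sum(usage[p] for p in tree) + requested <= effective_limit(parent)

print(child_limits_ok('A'))   # False: 10 + 10 + 10 > 20
print(can_consume('A', 10))   # False: (1 + 2 + 10 + 0) + 10 > 20
```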
23:28:20 ayoung: as in, no floating quotas (within children)?
23:30:11 ayoung: remember, all nodes are strictly leaning on keystone for limits -- it is on the services to do "reserve" and "consume" and "free" for active quota usage -- we don't want to hold what is in use for every service.
02:43:44 Merged openstack/keystone master: Use the provider_api module in limit controller https://review.openstack.org/562712
04:35:44 XiaojueGuan proposed openstack/keystone-specs master: Trivial: Update pypi url to new url https://review.openstack.org/565411
04:40:40 Lance Bragstad proposed openstack/keystone-specs master: Add scenarios to strict hierarchy enforcement model https://review.openstack.org/565412
04:59:46 XiaojueGuan proposed openstack/keystoneauth master: Trivial: Update pypi url to new url https://review.openstack.org/565418
07:09:42 OpenStack Proposal Bot proposed openstack/keystonemiddleware master: Imported Translations from Zanata https://review.openstack.org/565455
10:43:42 wondering if anyone could help me better understand the heat and federated user problems: https://bugs.launchpad.net/keystone/+bug/1589993
10:43:43 Launchpad bug 1589993 in OpenStack Identity (keystone) "cannot use trusts with federated users" [High,Triaged]
10:44:09 I am using a group mapping and I think I see the same problems as described in the bug (as I don't do the create role assignment at login thingy)
11:00:55 kmalloc: the issue I reported yesterday on the new ksa stack is an issue with the devstack endpoint registration
13:39:50 johnthetubaguy: ping - i reviewed wxy|'s patch for unified limits with the use cases from CERN
13:39:58 was curious if I could run something by you
13:40:56 i attempted to walk through the scenarios here https://review.openstack.org/#/c/565412/
13:41:34 lbragstad: sure, I should pick your brain about a federation bug with heat in return :p
13:41:57 johnthetubaguy: that sounds fair - we can start with federation stuff
13:42:15 OK, basically it's this bug again: https://bugs.launchpad.net/keystone/+bug/1589993
13:42:16 Launchpad bug 1589993 in OpenStack Identity (keystone) "cannot use trusts with federated users" [High,Triaged]
13:42:17 there is a lot of context behind https://bugs.launchpad.net/keystone/+bug/1589993
13:42:33 we seem to hit that when doing dynamic role assignment
13:42:42 which I think is expected?
13:42:53 well, known I guess
13:42:59 sure
13:43:08 i remember them bringing this up to us in atlanta
13:43:12 during the PTG
13:43:27 the problem is doing the dynamic role assignment on first login doesn't work for our use case
13:44:11 basically because we want the group to map correctly to an assertion that can be quite dynamic (level of assurance)
13:44:44 not sure if I am missing a workaround, or some other way of doing things
13:45:12 well - we do have something that came out of newton design sessions aimed at solving a similar problem
13:45:24 it was really for the "first login" case
13:46:06 where the current federated flow resulted in terrible user experience, because a user would need to hit the keystone service provider to get a shadow user created
13:46:41 then an admin would have to come through and manually create the assignments between that shadow user and various projects
13:47:09 which isn't a great experience for users, because it's like "hurry up and wait"
13:47:12 so we built - https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#auto-provisioning
13:47:34 right, but the auto provisioning fixes the roles on first login, I thought?
13:48:19 * lbragstad double checks the code
13:49:19 we would need it to remove role assignments and replace them on a subsequent login
13:49:58 (FWIW, I suspect we are about to hit application credential issues for the same reason, but it's a pure guess at this point)
13:51:10 so - it looks like the code will run each time federated authentication is used
13:51:40 maybe we didn't test this properly then
13:51:46 so if the mapping changes in between user auth requests, the mapping will be applied both times
13:51:54 would it remove role assignments?
13:52:15 it would not
13:52:21 not the mapped auth bit
13:52:29 yeah, that is the bit the group mapping does for "free"
13:52:30 but... it might if you drop the user
13:52:55 so the use case is where the IdP says whether two factor was used
13:53:06 if the two factor isn't used, then they get access to fewer projects
13:53:49 huh
13:53:52 I mean it's not two factor, it's a generic level of assurance assertion, but same difference really
13:53:53 ok - that makes sense
13:54:09 it's not something I had considered before working through this project
13:54:15 right - it's elevating permissions based on attributes of the assertion
13:54:53 it's using EGI AAI, it's an IdP proxy, so a single user can login with multiple identities, with varying levels of assurance, some folks need a minimum level
13:55:24 couldn't you use the regular group mapping bits?
13:55:34 we do, but that breaks heat
13:55:39 or seems to
13:55:42 ah
13:57:01 * lbragstad thinks
13:57:33 (not sure about how this maps to application credentials yet, as the openid connect access_token dance is really stupid for ansible/CLI usage patterns)
13:58:29 the problem is that the trust implementation relies on the role assignment existing in keystone
14:00:12 i think it would work with auto-provisioning, but then the problem becomes cleaning up stale role assignments
14:00:27 right?
14:00:41 yeah
14:00:52 so - something that would be very crude
14:01:08 thing is, not cleaning those up is a feature for some users, but could be a mapping option I guess, force_refresh=True?
14:01:32 would be to use auto-provisioning and then purge users using keystone-manage mapping_purge
14:02:03 * lbragstad looks back at the code
14:02:40 ah - nevermind, mapping_purge only works for ID mappings
14:02:44 maybe an always_purge_old_roles mapping? would that delete app creds as we modify the role assignments?
14:02:45 not shadow users
14:02:57 ah
14:03:27 so - with a refresh option, how would that get invoked?
14:03:28 I guess we want something to say that the mapping should delete all other role assignments, somehow
14:03:56 in a way that doesn't always clear out all current application credentials
14:04:34 (unless there were role assignments deleted, then I guess you have to invalidate app creds, which seems fine)
14:04:43 hmm, needs writing up I think
14:05:09 yeah - that's the stance we take on app creds currently
14:05:27 yankcrime: are you following along with my thinking here, or have I gone crazy :)
14:05:27 anytime a role assignment is removed for a user, we invalidate the application credential
14:05:42 lbragstad: yeah, I think I still like that, it's simple
14:06:27 I think the action is to write up the use case, and document the missing piece
14:06:36 like a partial spec write up or something?
14:06:44 (a very short one!)
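For reference, the auto-provisioning feature discussed above is driven by the mapping rules: when a local rule carries a "projects" list, keystone provisions those projects and role assignments at federated authentication time. A rough illustration as a Python dict (project and role names invented; see the federated_identity doc linked above for the authoritative schema):

    # Illustrative auto-provisioning mapping; names are made up.
    rules = [
        {
            "local": [
                {"user": {"name": "{0}"}},
                {
                    # Provisioned (with these role assignments) each time the
                    # user authenticates and this rule matches.
                    "projects": [
                        {"name": "staging", "roles": [{"name": "member"}]},
                    ],
                },
            ],
            "remote": [{"type": "REMOTE_USER"}],
        }
    ]

The gap being discussed is that the engine only ever adds assignments this way; nothing removes an assignment when a later assertion no longer matches the rule.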
14:07:18 at least documenting the use case would be good
14:07:41 i hadn't really heard anything back from the heat team
14:08:26 OK, cool
14:09:12 feels like the group mapping might want to get deprecated if it really doesn't work with these features
14:09:19 johnthetubaguy: not at all, i think that's a succinct summary of what we're trying to achieve
14:09:26 but anyways, that is for that write up
14:09:30 yankcrime: cool
14:09:45 now you may have walked into my trap, but we should work out who should write that up :)
14:10:07 ok - one last question
14:10:11 bah i'd hoped that by agreeing i'd avoid that trap ;)
14:10:22 lbragstad: sure
14:10:35 the use-case is pretty clear, i don't mind getting that drafted
14:10:41 (clear to us i mean)
14:10:46 you always know which users need this refresh behavior, right?
14:11:02 o/
14:11:13 I think it's all users in this case, so I think the answer is yes
14:11:16 we can infer which users need this refresh by other attributes
14:11:25 hmm
14:11:27 (claims, whatever)
14:11:43 ok - i was thinking about a possible implementation that might be pretty easy
14:12:00 but yeah right now for mvp we can safely assume all users in this case
14:12:11 but it would require using the keystone API to set an attribute on the user's reference that would get checked during federated authentication
14:12:32 if that can be set in the mapping? maybe
14:12:55 what if we are overthinking this?
14:13:10 what if...we really don't want dynamic role assignments or any of that
14:13:24 and instead...somehow use HMT to solve the problem.
14:13:26 like...
14:13:36 first login, a federated user gets put into a pool
14:13:50 and...we have a rule (hand wave) that says that user can create their own project
14:14:00 the pool is the parent project, and it has 0 quota
14:14:34 the child project create is triggered by the user themselves, and they get an automatic role assignment for it
14:14:41 ayoung: is this mercador?
14:15:18 yankcrime, I thought it was pseudoconical
14:15:29 * ayoung just made a map joke
14:16:05 * yankcrime googles pseudoconical
14:16:13 https://en.wikipedia.org/wiki/Map_projection
14:16:19 so while I do like that, the main use case is to get access to existing project resources based on IdP assertions, that are dynamic, i.e. revoke access if assertions change
14:16:33 ayoung: lol
14:16:51 johnthetubaguy, I caught a bit about the groups
14:17:20 keystone supports one way to do that - but things like trusts don't work with it
14:17:37 it seems to me that the role assignments should be what controls that...the IdP data should be providing the input to the role assignments, but should not be creating anything
14:17:45 if we use the other way (with auto provisioning) then the features work, but cleaning up the assignments is harder
14:17:47 i remember there was a proposed change to have the assignments from the mapping persist until the user logs in again through federation and the attributes are re-evaluated
14:18:14 knikolla: interesting, do you know where that ended up
14:18:15 ?
14:18:23 https://review.openstack.org/#/c/415545/
14:18:29 knikolla: yeah, that is basically what we need here
14:18:33 * yankcrime nods
14:18:56 Dec 28, 2016
14:19:02 That is as old as some of my patches
14:19:30 let me take a look at that
14:20:42 I kinda like the mapping doing a "clean out any other assignments" thing, and just have groups treated like any other role assignment
14:21:27 the delete re/add thing screws with app creds, which would be bad
14:22:01 lbragstad: back to quotas, do you have more context to your patch, was that from an email?
14:22:47 johnthetubaguy: it wasn't - but wxy took a shot at working the CERN use cases into https://review.openstack.org/#/c/540803/
14:23:17 and i tried to follow it up with this - https://review.openstack.org/#/c/565412/1
14:23:43 lbragstad: why is it not restricted to two levels?
14:24:10 johnthetubaguy: which specification?
14:24:49 so...it seems to me that group membership should be based solely on the data in the assertion. If the data that triggers the group is not there, the user cannot get a token with roles based on that group
14:25:17 johnthetubaguy: https://review.openstack.org/#/c/540803
14:25:38 groups are not role assignments. Groups are attributes from the IdP that are separate from the user herself
14:25:50 johnthetubaguy: there are some things in there that need to be updated
14:26:13 and that's one of them, because i don't think we actually call that out anywhere
14:26:23 johnthetubaguy: so - good catch :)
14:27:05 so...we should not trigger any persisted data on the group based on the IdP data. It should all be dynamic.
14:27:41 ayoung: technically, sure. But as a user we want to dynamically assign a collection of role assignments, based on attributes we get from the IdP that change over time, right now we are doing that via groups (... which breaks heat)
14:28:10 because the role assignment isn't persistent and breaks when heat tries to use a trust
14:28:32 so the rub is you want to create an application credential, so you don't have to go via the IdP for automation, that gains what the federated user had at the time
14:28:46 and you want the heat trust to persist, as long as it can
14:29:29 now if the IdP info changes on a later login (or a refresh of the role assignments is triggered via some automation process out of band), sure that access gets revoked, but that's not a usual everyday thing
14:30:00 johnthetubaguy, so what you are saying is that Heat trusts are based on attributes that came in a Federated assertion. Either we have them re-affirmed every time (ephemeral) or we make them persistent, and then we are stuck with them for all time.
14:30:01 anyways, I think we need to write up this scenario, if nothing else so it's very clear in our heads
14:30:11 Hmmm
14:30:33 This is a David Chadwick question.
14:30:37 yeah, it's in between those two... it's like the user is partially deleted...
14:31:11 it is a bit fishy, to be sure
14:31:26 it's a cache of IdP assertions
14:31:32 but anyways
14:33:56 johnthetubaguy: so - the examples i worked on last night for unified limits only account for two levels
14:34:49 lbragstad: they sound sensible as I read through them, I am adding a comment on the other spec in a second, will see what you think about it.
14:35:05 OK...so the question is "how long should the data from the assertion be considered valid"
14:35:35 on one hand, the assertion has a time limit on it, which is something like 8 hours
14:35:40 johnthetubaguy: granted, both examples I wrote in the follow-on break
14:35:42 and that is not long enough for most use cases
14:36:14 on the other hand, Keystone gets no notifications for user changes from the IdP, so if we make it any longer, those are going to be based on stale data
14:36:43 So having a trust based on a Federated account needs to be an elevated level of priv no matter what
14:36:59 same is true of any other delegation, to include app creds
14:37:01 same with app credentials
14:37:03 yeah
14:37:09 so, who gets to decide?
14:37:37 I think the answer is that the ability to create a trust should be an explicit role assignment
14:38:04 if you have that, then the group assignments you get via a federated token get persisted
14:38:24 another way to do it is at the IdP level, but that is fairly coarse-grained
14:38:33 it feels like this is all configuration of the mapping
14:38:38 and...most IdPs don't provide more than identifying info
14:38:51 johnthetubaguy, so, it could be
14:38:55 should the group membership or assignment persist or not
14:39:02 one part of the mapping could be "persist group assignments"
14:39:29 but, the question is whether you would want that for everyone from the IdP
14:40:00 ayoung proposed openstack/keystone master: Enable trusts for federated users https://review.openstack.org/415545
14:40:07 mordred: ah good to know and phew
14:40:09 that is just a manual rebase knikolla
14:41:00 ayoung: personally I am fine with global, but I could see arguments both ways
14:41:02 johnthetubaguy, I see three cases
14:41:06 kmalloc: I have a patch up to fix devstack and the sdk patch now depends on it - and correctly shows test failures in the "install ksa from release" but test successes with "install ksa from master/depends-on"
14:41:26 kmalloc: https://review.openstack.org/#/c/564494/
14:41:41 1. Everyone from the IdP is ephemeral. Persist nothing. 2. Everyone is persisted. 3. Certain users get promoted to persisted after the fact.
14:41:51 actually, a fourth
14:42:23 4. there is a property in the Assertion that allows the mapping to determine whether to persist the user's groups or not
14:42:44 note that "everyone is persisted" does not mean that "all groups should be persisted" either
14:42:50 mordred: looking now.
14:42:59 it merely means that a specific group mapping should be persisted
14:43:04 johnthetubaguy, that make sense?
14:44:05 mordred: so uh... Dumb question, I thought we worked hard for versionless entries in the catalog?
14:44:17 ayoung: I think so... although something tells me we are missing a bit, just not sure what right now
14:44:22 mordred: this is, afaict, undoing the "best practices"
14:45:31 johnthetubaguy, I think if we make it possible to identify on a group mapping that the group assignment should be persisted, we'll have it down. We will also need a way to clean out group assignments, although that technically can be done today with e.g. an ansible role
14:46:41 right, today we have persisted roles or transient groups, what we really need is a "clear out stale at login" option, but I need to write this all down, I am getting lost
14:50:42 johnthetubaguy, not at login
14:50:48 johnthetubaguy, that may never happen
14:51:34 it's only one way to get refreshed info, but it's a way.
14:51:53 you would need out of band automation...
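Since nothing in keystone does the "clear out stale" step today, the out-of-band automation mentioned above would have to look something like this (a sketch using python-keystoneclient; the endpoint and credentials are placeholders, and is_stale() stands in for the hard, deployment-specific part of the discussion):

    # Sketch of out-of-band cleanup of persisted federated role assignments.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='https://keystone.example.com/v3',  # hypothetical
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    ks = client.Client(session=session.Session(auth=auth))


    def is_stale(assignment):
        # Placeholder: e.g. compare against the attributes asserted at the
        # user's last federated login, as recorded by your own tooling.
        return False


    for a in ks.role_assignments.list():
        user = getattr(a, 'user', None)
        project = getattr(a, 'scope', {}).get('project')
        if user and project and is_stale(a):
            # Removing an assignment also invalidates dependent app creds,
            # per the behavior described above.
            ks.roles.revoke(a.role['id'], user=user['id'], project=project['id'])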
14:52:11 mordred: this is sounding like a bug in ksa
14:52:27 mordred: or in cinder handling versionless endpoints
14:52:36 ayoung: that is the fishy bit I was worried about before
14:52:48 limited time trusts
14:53:17 make them refreshable...you have to log in periodically with the right groups to keep it alive
14:53:23 so...limited time group assignments?
14:53:58 put an expiration date on some group assignments, with a way to make them permanent via the group API
14:55:15 ayoung: yeah, that could work, like app cred expiry I guess
14:55:15 kmalloc: versionless endpoints in the catalog don't work for cinder - so the bug is there
14:55:38 Ahhhh.
14:55:48 kmalloc: I believe that basically we somehow missed actually getting cinder updated in the same way nova got updated
14:55:57 So, how did this ever work :P ;)
14:56:08 nobody has used the block-storage endpoint for any reason yet
14:56:18 Haha
14:56:21 Ok then.
14:56:23 the other endpoints in the catalog in devstack - for volumev2 and volumev3 - are versioned
14:56:46 lbragstad: added a comment here: https://review.openstack.org/#/c/540803/9/specs/keystone/rocky/hierarchical-unified-limits.rst@35
14:56:55 johnthetubaguy: just saw that - reading it now
14:57:47 kmalloc: so - in short, the answer to "how did this ever work?" is "It never has" :)
14:57:55 johnthetubaguy: is VO a domain?
14:58:00 or a top level project?
14:58:02 We need to bug cinder folks, but, anyway, this sounds like a clean fix for now.
14:58:19 kmalloc: yah - ksa correctly fails with the misconfigured endpoint
14:58:35 i'm struggling with the mapping of terms
15:00:04 ok - so VO seems like a parent project or a domain
15:00:10 mordred: yay successfully failing >.>
15:00:38 and VO groups are child projects of the VO
15:02:35 lbragstad: yeah
15:02:50 johnthetubaguy: ok
15:02:54 lbragstad: VO = parent project, in my head
15:03:10 cool - so if we set the VO to have 20 cores
15:03:31 you *always* want the VO to be within that limit, right?
15:03:50 lbragstad: yes
15:04:04 ok
15:04:23 if we set 20 cores on the VO, and the default registered limit is 10
15:04:38 lbragstad: basically the PI pays the bill for all resources used by anyone in the VO
15:04:49 (via a research grant, but whatever)
15:04:54 sure
15:05:06 so the VO has to be within whatever the limit is
15:05:21 it can never over-extend its limit, right?
15:05:28 correct
15:05:38 here is where my questions come in
15:05:41 no exceptions, modulo races and whether you care about them
15:05:53 say we have a VO with 20 cores
15:05:54 so you say this "which defeats the purpose of not having the service understand the hierarchy."
15:06:07 we should get back to that bit after your questions
15:06:13 yeah
15:06:20 so the VO doesn't have any children
15:06:29 and the default registered limit for cores is 10
15:06:34 yep
15:06:46 so any project without a specific limit assumes the default
15:06:50 right?
15:06:52 yep
15:06:54 cool
15:07:09 so - as the PI, I create a child under VO
15:07:21 yep, gets a limit of 10
15:07:28 yep - because i didn't specify it
15:07:30 for all 15 sub projects I create
15:07:41 then i create another child
15:07:49 so - two children, each default to 10
15:07:52 yep
15:07:58 and total resources in the tree is 20
15:08:05 technically, I'm at capacity
15:08:26 so not in the model I was proposing
15:08:34 (in my head)
15:08:43 say both children use all of their quota
15:08:47 it might be the old "garbutt" model, if memory serves me
15:09:03 right, so we have projects A, B, C
15:09:09 say we also created D
15:09:16 B, C, D are all children of A
15:09:27 A = VO, B and C are children with default limits for cores
15:09:40 yeah
15:09:48 should you be able to create D?
15:09:50 like...add D, same as B and C
15:09:52 without specifying a limit?
15:09:55 I am saying yes
15:10:01 (just go with it...)
15:10:08 ok
15:10:11 so - let's create D
15:10:21 so we have A (limit 20), B, C, D all with limit of 10
15:10:30 yep - so the sum of the children is 30
15:10:34 yep
15:10:50 so let's say actual usage of all projects is 1 test VM
15:10:57 so current actual usage is now 4
15:11:05 sure
15:11:20 project A now tries to spin up 19 VMs, what happens
15:11:34 well that's bad, it's only allowed 16
15:11:39 right
15:11:43 because B, C, D are using some
15:11:58 * lbragstad assumes the interface for using oslo.limit is enforce(project_id, project_usage)
15:12:19 that is the bit I don't like
15:12:41 i'm not sure i like it either, but i'm curious to hear why you don't
15:12:59 so my assumption was based on the new Nova code, which is terrible I know
15:13:16 in that world we have a callback to the project:
15:13:26 count_usage(resource_type, project_id)
15:13:34 oh...
15:13:45 so when you call enforce(project_id, project_usage)... you get the idea
15:14:08 right - oslo.limit gets the tree A, B, C, and D from keystone
15:14:21 then uses the callback to get all the usage for that tree
15:14:28 right, so the limit on A really means sum A, B, C and D usage
15:14:47 and any check on a leaf means check the parent for any limit
15:14:58 hmm
15:15:04 or something a bit like that
15:15:15 ok - i was operating yesterday under the assumption there wouldn't be a callback
15:15:18 now.. this could be expensive
15:15:32 ^ that's why i was operating under that assumption :)
15:15:40 when there are lots of resources owned by a VO, but then it's expensive when you are a really big project too
15:15:43 i didn't think we'd actually get anyone to bite on that
15:15:52 so it's one extra thing that is nice about the two level limit
15:16:25 are you saying you wouldn't need the callback with a two-level limit?
15:16:29 s/limit/model/
15:16:38 I think you need the callback still, in some form
15:16:48 ok - i agree
15:16:58 it's just you know it's not going to be too expensive, as there are only two levels of hierarchy
15:18:14 you could still have a parent with hundreds of children
15:18:18 i suppose
15:18:35 you could, and it would suck
15:19:09 ok - so i think we hit an important point
15:19:26 now we don't have any real use cases that map to the one that is efficient to implement
15:19:42 we can invent some, for sure, but they feel odd
15:20:03 if you assume oslo.limit accepts a callback for getting usage of something per project, then you can technically create project D
15:20:39 so even in the other model, what if project A has already used all 20?
15:20:40 if you can't assume that about oslo.limit, then creating project D results in a possible violation of the limit, because the library doesn't have a way to get all the usage it needs to make a decision with confidence
15:21:10 even though the children only sum to 20, it doesn't help you
15:21:52 i think that needs to be a characteristic of the model
15:22:02 agreed
15:22:06 if A.limit == SUM(B.limit, C.limit)
15:22:17 then you've decided to push all resource usage to the leaves of the tree
15:22:29 which we can figure out in oslo.limit
15:22:45 so when you go to allocate 2 cores to A, we can say "no"
15:22:55 well, I think what you are looking for is where project A usage counts any resource limits in the children, before counting its own resources
15:23:17 oh...
15:23:24 so A, B, C, (20, 10, 10) means project A can't create anything
15:23:30 i guess i was assuming that A.usage = 0
15:24:07 this is where the three level gets nuts, as you get usages all over the three levels, etc
15:24:16 yeah.....
15:24:39 I think we can't assume A has usage 0, generally it has all the shared resources
15:24:53 or the stuff that was created before someone created sub projects
15:24:59 right...
15:25:01 same difference I guess
15:25:34 anyways, I think that is why I like us going down the specific two level Virtual organisation case, it makes all the decisions for us, it's quite specific and real
15:25:46 anyone else feel like a lot of these problems come down to "we know when to do the action, but we have no event on which to undo it."
15:26:25 XiaojueGuan proposed openstack/keystoneauth master: Trivial: Update pypi url to new url https://review.openstack.org/565418
15:27:32 johnthetubaguy: ok - so what do we need to accomplish this? the callback?
15:28:06 lbragstad: this code may help: https://github.com/openstack/nova/blob/c15a0139af08fd33d94a6ca48aa68c06deadbfca/nova/quota.py#L172
15:28:23 lbragstad: you could get the usage for each project as a callback?
15:28:57 well - somewhere in the service (e.g. nova in this case) code, I'm envisioning
15:29:16 limit.enforce(project_id, usage_callback)
15:29:41 oslo.limit gets the limit tree
15:29:57 then uses the callback to collect usage for each node in the tree
15:30:21 and compares that to the request being made to return a "yes" or "no" to nova
15:30:54 sorry - limit.enforce(resource_name, project_id, usage_callback)
15:30:58 looking for a good line in Nova to help
15:31:38 lbragstad: https://github.com/openstack/nova/blob/644ac5ec37903b0a08891cc403c8b3b63fc2a91c/nova/compute/api.py#L293
15:32:15 ok
15:32:35 part of that feels like it would belong in oslo.limit
15:34:17 lbragstad: my bad, here we are, this moves to an oslo.limit call, where oslo.limit has been set up with some registered callback, a bit like the CONF reload callback: https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/server_groups.py#L157
15:34:26 and it would look like limit.enforce('injected_files', 'e33f5fbbe4a4407292e7ccef49ebac0c', api._get_injected_file_usage_for_project)
15:34:33 note the check quota, create thing, recheck quota model
15:35:11 sorry, the delta check is better, check we can increase by one, etc, etc.
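Pulling the thread together, the enforcement call being sketched in this exchange would look roughly like the following (hypothetical: oslo.limit's real interface was still being designed at this point; the limit tree would come from keystone and the usage callback from the service):

    # Hypothetical shape of the enforce() call discussed above.

    def enforce(resource_name, project_id, delta, usage_callback, parent, limits):
        # limits: {project_id: effective_limit} for the parent and all
        # children, resolved from keystone's registered/project limits.
        # usage_callback(resource_name, project_id) -> int is supplied by
        # the service (e.g. nova counting cores); keystone stores no usage.
        usage = {pid: usage_callback(resource_name, pid) for pid in limits}
        usage[project_id] += delta
        if usage[project_id] > limits[project_id]:
            raise ValueError('over the per-project limit')
        if sum(usage.values()) > limits[parent]:
            raise ValueError('over the parent (VO) limit')


    # The walkthrough above: A=20, children B/C/D at the default of 10,
    # 1 core in use in each project, then A asks for 19 more.
    limits = {'A': 20, 'B': 10, 'C': 10, 'D': 10}
    counts = {'A': 1, 'B': 1, 'C': 1, 'D': 1}
    enforce('cores', 'A', 19, lambda r, pid: counts[pid], 'A', limits)
    # raises: 4 in use + 19 requested > A's limit of 20 (only 16 are free)

The expensive part is exactly what was flagged above: one usage callback invocation per project in the tree, which is why restricting the hierarchy to two levels keeps the cost predictable.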
15:35:23 ahh
15:35:53 so - objects.Quotas.check_deltas() would be replaced by a call to oslo.limit
15:36:12 yeah
15:36:30 i suppose the oslo.limit library is going to need to know the usage requested, too
15:36:34 more complicated example, to note how to make it more efficient: https://github.com/openstack/nova/blob/ee1d4e7bb5fbc358ed83c40842b4c08524b5fcfb/nova/compute/utils.py#L842
15:36:53 that's the delta bit in the above two examples
15:37:28 the other case is a bit odd... it's a more static limit, there are a few different cases
15:37:39 so limit.enforce('cores', 'e33f5fbbe4a4407292e7ccef49ebac0c', 2, api._get_injected_file_usage_for_project)
15:37:55 so it needs to be a dict
15:38:02 https://github.com/openstack/nova/blob/ee1d4e7bb5fbc358ed83c40842b4c08524b5fcfb/nova/compute/utils.py#L852
15:38:08 otherwise it really sucks
15:38:27 hmm
15:38:41 I mean oslo.limits wouldn't check the user quotas of course, they are deleted
15:38:41 doesn't the max count stuff come from keystone now though?
15:39:49 don't we only need to know the resource name, requested units, project id, and usage callback?
15:40:30 so you would want an oslo limits API to check the max limits I suspect, but that could be overkill
15:41:36 anyways, focus on the instance case, yes that is what you need
15:41:54 how is a max limit different from a project limit or registered limit?
15:42:14 the count of existing resources would always return zero I think
15:42:42 it's like, how many tags can you have on an instance, it's a per-project setting
15:43:03 but the count is not per project, it's per instance
15:43:16 I messed that up...
15:43:17 reset
15:43:28 tags on an instance is a good example
15:43:32 to add a new tag
15:43:37 check per project limit
15:43:56 but you count the instance you are setting it on, not loads of resources owned by that project
15:44:08 XiaojueGuan proposed openstack/keystone-specs master: Trivial: Update pypi url to new url https://review.openstack.org/565411
15:44:15 so usually we don't send a delta, we just check the actual number
15:44:24 but we can probably ignore all that for now
15:44:29 hmm
15:44:29 focus on the basic case first
15:44:30 ok
15:44:36 XiaojueGuan proposed openstack/keystone-specs master: Trivial: Update pypi url to new url https://review.openstack.org/565411
15:44:38 i think i agree
15:44:40 cores, instances, ram when doing create instance
15:44:54 * lbragstad has smoke rolling out his ears
15:45:00 did you get how we check quotas twice for every operation
15:45:24 yeah - i think we could expose that both ways with oslo.limit
15:45:43 and nova would just iterate the creation call and set each requested usage to 1?
15:45:53 lbragstad: based on my experience that is a good sign, it means you understand a good chunk of the problem now :)
15:45:54 to maintain the check for a race condition, right?
15:46:47 so the second call you would just set the deltas to zero I think, I should check how we do that
15:47:18 yeah, this is the second check: https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/server_groups.py#L180
15:47:23 that feels like it should be two different functions
15:47:25 delta of zero
15:47:35 I mean it could be
15:48:00 the first call you want it to tell you if you can create the thing
15:48:17 the second call happens after you've created said thing
15:48:24 yeah
15:48:51 and the purpose of the second call is to check that someone didn't put us over quota in that time frame, right?
15:49:03 yeah
15:49:05 and if they did, we need to clean up the instance
15:49:37 yeah
15:49:42 ok
15:49:49 it is a very poor implementation of this: https://en.wikipedia.org/wiki/Optimistic_concurrency_control
15:50:16 so the first call happens in the "begin"
15:50:30 and the second is done in the "validate"
15:50:51 basically
15:51:01 ok - i think that makes sense
15:51:10 although it's all a bit fast and loose, kill the API and you will be left over quota, possibly
15:51:38 it all assumes some kind of rate limiting is in place to limit damage around the edge cases
15:51:39 right - because you didn't have a chance to process the rollback
15:52:14 we don't hide the writes from other transactions, basically
15:52:28 well, don't hide the writes until commit happens
15:53:02 it's a bit more optimistic that optimistic concurrency control, it's more concurrency damage limitation
15:53:11 s/that/than/
15:53:25 * lbragstad nods
15:53:49 Ok, did we both burn out yet? I think I am close
15:54:05 same...
15:54:24 I think you understand where I was thinking with all this though...?
15:54:30 does that make more sense to you now?
15:54:34 i think so
15:54:38 * johnthetubaguy waves arms about
15:54:45 i think i can rework the model
15:54:54 because things change a lot with the callback bit
15:55:12 I don't think it's too terrible to implement
15:55:33 yeah - i just need to draw it all out
15:55:50 but that callback lets us "over-commit" limits but maintain usage
15:55:52 right?
15:55:58 I think it means we can implement any enforcement model in the future (if we don't care about performance)
15:56:02 and that's the important bit you want regarding the VO?
15:56:08 Merged openstack/keystone master: Fix the outdated URL https://review.openstack.org/564714
15:56:08 yeah
15:56:16 ok - so to recap
15:56:37 1.) make sure we include a statement about only supporting two-level hierarchies
15:56:54 2.) include the callback detail since that's important for usage calculation
15:57:10 is that it?
15:57:28 probably...
15:57:34 I guess what does the callback look like
15:57:59 list of resources to count for a given project_id?
15:58:16 or a list of project_ids and a list of resources to count for each project?
15:58:25 maybe it doesn't matter, so simpler is better
15:58:41 i would say it's a method that returns an int that represents the usage
15:58:53 it can accept a project id
15:58:59 s/can/must/
15:59:15 because it will depend on the structure of the tree
15:59:29 and just let oslo.limit iterate the tree and call usage
15:59:32 call for usage*
16:00:02 i received a 401, and looked at the keystone log and see 3 lines: mfa rules not processed for user; user has no access to project; request requires authentication
16:00:47 lbragstad: yeah, I think we will need it to request multiple resources in a single go
16:01:09 johnthetubaguy: I don't oppose two-level hierarchies. Actually I like it and I think it is simple enough for the original proposal.
16:01:18 lbragstad: it's a stupid reason, think about nova checking three types of resources, we don't want to fetch the instances three times
16:02:20 wxy|: cool, for the record I don't oppose multi-level, it's just that feels like a next step, and I want to see us make little steps in the right direction
16:02:40 ++
16:02:49 this is a really hard problem...
16:03:31 yeah, every time I think it's easy I find some new horrible hole!
16:04:10 exactly...
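The check / create / recheck pattern described above, in sketch form, reusing the hypothetical enforce() from the earlier sketch (the db_* helpers are placeholders for the real service code):

    # Sketch of the two-phase quota check: a crude form of optimistic
    # concurrency control, as discussed above.

    def create_server_group(project_id, usage_cb, parent, limits):
        # Phase 1 ("begin"): check we can increase usage by one.
        enforce('server_groups', project_id, 1, usage_cb, parent, limits)

        group = db_create_server_group(project_id)  # placeholder for the create

        # Phase 2 ("validate"): recheck with a delta of zero, since a
        # concurrent request may have raced us between check and create.
        try:
            enforce('server_groups', project_id, 0, usage_cb, parent, limits)
        except ValueError:
            db_delete_server_group(group)  # clean up the over-quota resource
            raise
        return group

As noted in the conversation, this only narrows the race window; kill the API process between the create and the recheck and you can still be left over quota, which is why some rate limiting is assumed to bound the damage around the edges.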
16:04:20 would love to get melwitt to take a peek at this stuff, given all the Nova quota bits she has been doing, she has way more detailed knowledge than me
16:05:07 yea
16:05:14 i can work on documenting everything
16:05:24 which should hopefully make it easier
16:05:30 instead of having to read a mile of scrollback
16:06:02 awesome, do ping me again for a review when it's ready, I will try to make time for that, almost all our customers will need this eventually, well most of them anyways!
16:06:38 once the federation is working (like CERN have already) they will hit this need
16:07:18 johnthetubaguy: ++
16:08:03 OK, so I should probably call it a day, my brain is mashed and it's just past 5pm
16:08:22 johnthetubaguy: sounds like a plan, thanks for the help!
16:08:22 now is not the time for writing ansible
16:08:43 no problem, happy I could help
16:09:21 johnthetubaguy: yeah. The only problem I'm concerned about is the implementation. In your spec, enabling the hierarchy is controlled by the API with "include_all_children". It means that in a keystone system, some projects may be hierarchical, some not. It's a little messy for a Keystone system IMO. How about the limit model driver way? So that all projects' behavior will be the same at the same time.
16:13:05 johnthetubaguy: something like this: https://review.openstack.org/#/c/557696/3 Of course it can be changed to two-level easily.
16:54:39 * ayoung moves Keystone meeting one hour earlier on his schedule
16:57:44 ayoung: set the meeting time in UTC/or reykjavik timezone (if your calendar doesn't have UTC)
16:58:04 ayoung: means you won't have to "move" it explicitly :)
16:58:36 the latter use of reykjavik was a workaround for google calendar (not sure if that has been fixed)
17:02:20 the ical on eavesdrop actually has the meeting in UTC
17:02:34 lbragstad: Error: Can't start another meeting, one is in progress. Use #endmeeting first.
17:02:39 ugh
17:02:43 #endmeeting