17:01:03 #startmeeting keystone-office-hours 17:01:04 Meeting started Tue Jun 19 17:01:03 2018 UTC and is due to finish in 60 minutes. The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot. 17:01:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. 17:01:07 The meeting name has been set to 'keystone_office_hours' 17:01:46 lookit i spy a gyee 17:02:12 stepping away to grab lunch real quick and i'll be back to talk about migration options 17:03:18 kmalloc, IRC is my source of enlightenment :-) 17:03:55 I'll review the IRC log tomorrow. Then the patch will be ready tomorrow I think. 17:20:33 knikolla: cmurphy: there's a thread on the Edge ML on Keystone architectures which touches on federation 17:21:02 knikolla: cmurphy: is either of you subscribed to that ML and could chime in? 17:21:19 lbragstad: kmalloc: the question applies to you too in case you're interested ^^ 17:30:35 * cmurphy resubscribes 17:34:59 thanks wxy 18:12:04 http://lists.openstack.org/pipermail/openstack-dev/2018-June/131630.html 18:15:20 lbragstad++ 18:54:55 Jeremy Freudberg proposed openstack/keystone master: Expose duplicate role names bug in trusts https://review.openstack.org/576610 18:54:55 Jeremy Freudberg proposed openstack/keystone master: Fix duplicate role names in trusts bug https://review.openstack.org/576611 18:55:24 ^ lbragstad, kmalloc: there it is 19:03:46 jeremyfreudberg: awesome 19:03:59 kmalloc: lemme know when you're free to continue the migration stuff 19:04:06 (caught up in TC scrollback) 19:38:33 lbragstad: i am 19:38:46 lbragstad: landlords just left (and I have 2 more errands to run today but they can wait) 19:38:58 need to get the AC from the storage unit and a 2nd one for my office. 
19:39:28 Merged openstack/keystoneauth master: Add minimum version for requirements https://review.openstack.org/575685 19:39:39 lbragstad: we need to fix @wip to be smarter 19:39:51 lbragstad: i want it to handle explicit "what kind of error to expect" 19:39:55 not just "an error" 19:40:25 jeremyfreudberg: OH it's trusts with implied roles that get expanded 19:40:29 jeremyfreudberg: that's... wow. 19:40:32 jeremyfreudberg: nice catch 19:41:13 lbragstad: because i dislike that @wip doesn't lock us into a specific error. what if we change something and the error case changes... that's bad =/ 19:43:15 kmalloc: happy to help. since my newest patch prevents keystone from shooting itself in the foot, do you think we even need https://review.openstack.org/#/c/576548 (which covers up the foot shooting) 19:43:44 nah, we don't need it. it isn't a bad case to use suspenders and a belt though :P 19:53:05 jeremyfreudberg: was that the cause of your issue in sahara? 19:53:37 lbragstad: yes, this bug was the only cause (the case sensitivity stuff did NOT affect us) 19:54:20 so sahara was affected by duplicate role names in the token response? 19:54:24 strange 19:57:30 Ben Nemec proposed openstack/oslo.policy master: Avoid redundant policy syntax checks https://review.openstack.org/511426 19:57:38 lbragstad: yes, because the token which had duplicate role names was used to create a second trust (in heat) 19:58:18 oh - so sahara was pulling that list, which included duplicates, and then put them in a request to build a new trust, which broke 19:58:27 /me sigh 19:58:45 correct 20:03:19 lbragstad, kmalloc, mordred: https://review.openstack.org/576634 Release keystoneauth 3.9.0 20:03:58 * efried hopes that was done correctly, been a while 20:04:12 efried: looks good to me 20:04:42 jeremyfreudberg: i assume you already have a patch that corrects that behavior in sahara? 
20:05:21 lbragstad: no, because it's all tied up in how sahara and heat interact 20:05:45 sahara passes a token to heat which just so happens to have this problem, and heat tries to create a trust with it 20:07:45 Morgan Fainberg proposed openstack/keystone master: Add Flask-RESTful as a requirement https://review.openstack.org/574414 20:08:12 Morgan Fainberg proposed openstack/keystone master: Implement scaffolding for Flask-RESTful use https://review.openstack.org/574415 20:08:28 Morgan Fainberg proposed openstack/keystone master: Keystone adheres to public_endpoint opt only https://review.openstack.org/574502 20:08:34 Morgan Fainberg proposed openstack/keystone master: Convert json_home and version discovery to Flask https://review.openstack.org/574736 20:09:34 the tl;dr version is that it's usually not sahara's fault, and that if sahara with the integration for all components works, then OpenStack works 20:09:37 * tosky runs :) 20:11:11 lbragstad: ^ did I get the minimum bit added ok on 574414? 20:11:14 this is the start of a bug fix if anyone is interested in reviewing - https://review.openstack.org/#/c/562714/ 20:11:45 kmalloc: looks ok to me 20:17:29 lbragstad: migrations? 20:17:45 sure 20:19:37 so - would we be able to take the traditional path for the migration? 20:19:50 create new table with correct schema, migrate data, drop old table? 20:20:18 probably 20:20:58 it is likely to be a headache, because you have to do a lot of heavy lifting to determine which registered limit to FK to 20:21:03 because name is non-unique 20:21:27 so it's (name, service[, region]) for each migration to figure out what to FK to 20:22:52 name as in service name? 
20:23:00 no, you mean resource_name 20:23:10 resource_name 20:23:11 yeah 20:23:55 Morgan Fainberg proposed openstack/keystone master: Add support for before and after request functions https://review.openstack.org/576637 20:27:02 hmm 20:29:19 lbragstad: question regarding DocumentedRuleDefaults 20:29:28 so, what was the intended purpose of the description field? 20:29:59 I ask because Barbican uses the same rule for multiple parts of the API with similar but not the same purpose 20:30:06 it's to describe the API in human terms as opposed to just being POST /v3/users 20:30:23 you could say description='Create a new user' 20:30:30 right. I'm wondering if moving the description /into/ the operations dict might be useful 20:31:33 example: https://docs.openstack.org/barbican/latest/api/reference/secret_metadata.html#put-v1-secrets-uuid-metadata-key and https://docs.openstack.org/barbican/latest/api/reference/secret_metadata.html#put-v1-secrets-uuid-metadata 20:31:40 utilize the same rule 20:32:37 but serve different functions. We could 1) abstract the description to something that fits both 2) cover one but not the other 3) change DocumentedRuleDefault to handle description at the operation level 20:33:05 I'm guessing they aren't the only ones using the same rule for similar but different apis 20:34:42 i'm not quite understanding i don't think 20:35:01 is the problem that they use the same rule to protect the policy? 20:38:20 lbragstad: lemme submit a PS and show you 20:38:30 gimmie 5, wrapping up this one controller 20:45:21 okay lbragstad: so for their secretmeta controller they have two endpoints: /v1/secrets/{secret-id}/metadata and /v1/secrets/{secret-id}/metadata/{key-id} 20:45:31 https://review.openstack.org/#/c/575218/8/barbican/common/policies/secretmeta.py 20:45:37 four rules, with six actions 20:45:56 secret_meta:put and secret_meta:get rules are used in both endpoints 20:46:24 similar actions but not the same. 
However the way that they have laid out their policy makes a single description field hard to use imo 20:46:24 oh 20:46:28 unless I'm being too verbose 20:46:39 yeah. and if they did it, you know others did too 20:46:47 I'm taking notes on recurring patterns 20:47:01 so - in that case, is there anything stopping barbican from creating a new policy for either of those endpoints? 20:47:27 lbragstad: not that I am aware of 20:47:38 you can keep the policy value/check_str the same 20:47:38 other than they don't want to make any 'policy changes' in Rocky 20:48:40 breaking them apart would let you be more descriptive about each policy since it's making each more specific 20:48:50 right. I'll make a note of that 20:49:02 lbragstad: I've been using https://wiki.openstack.org/wiki/Barbican/Policy to help me track this 20:49:09 and expanding it quite a bit 20:49:15 do we have anything akin to this in keystone? 20:49:21 then you'd just change the barbican code to enforce one of the endpoints on the new policy 20:49:32 * hrybacki nods 20:49:32 we haven't used a wiki in ages 20:49:44 ack 20:49:47 if an operator overrides the default 20:49:53 it can be aliased 20:50:29 https://docs.openstack.org/oslo.policy/queens/reference/api/oslo_policy.policy.html#oslo_policy.policy.DeprecatedRule 20:50:43 I wrote up an example in there - but you might not need something that involved 20:51:11 oh that's nice lbragstad 20:51:19 they do have a deprecated api 20:51:35 i wrote an example of breaking a policy into more granular policies https://docs.openstack.org/oslo.policy/queens/reference/api/oslo_policy.policy.html#oslo_policy.policy.DocumentedRuleDefault 20:51:45 it's the example right above ^ 20:51:56 * hrybacki nods 20:52:11 Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639 20:53:42 Gage Hugo proposed openstack/keystone master: WIP Add docs for case-insensitivity in keystone https://review.openstack.org/576640 
20:56:25 kmalloc: pulling down the client patches quick for limits and checking some existing constraints 20:58:29 okie 20:58:37 lbragstad: ... i hate our enforcement model 20:58:48 it's such a PITA to work with 20:58:55 i think i almost have it working without decorators. 20:58:56 ? 20:59:08 oh... @controller.protected? 20:59:16 yeah 20:59:17 https://review.openstack.org/#/c/576639/ 20:59:18 see 20:59:28 it's starting to come together in a much cleaner way 21:00:37 all methods on flask dispatched requests will have an "after request" check to ensure enforcement was called 21:01:01 and in the method it is the responsibility of the contributor to do .enforce_call() 21:01:13 which will do a lot of the magic @protected does now. 21:01:41 but eliminates all the stupid callback bits. 21:04:05 http://paste.openstack.org/raw/723860/ 21:04:33 so - for the migration 21:04:49 1.) create a new table called limits 21:04:53 ok 21:05:01 that has the correct schema, FK relationship 21:06:31 yes 21:06:41 and the new model object that is smarter/loads the right things. 21:06:44 2.) for every limit, check the limit.resource_name, limit.service_id, limit.region_id (if applicable) 21:07:06 use those values to query the registered limit table 21:07:17 finding the id of the registered limit those values apply to 21:07:48 and update the limit.registered_limit_id to the registered limit id of that query 21:08:09 yes. 21:09:06 hmmm 21:09:15 could this be done without another table? 21:09:42 what if step 1 is just adding a new column for limit.registered_limit_id 21:09:45 ? 21:11:30 well, we are already doing a pivot for the PK change 21:11:44 is that being done on the limit table though? 21:11:49 yah 21:11:51 on both 21:11:53 or the registered limit table? 
21:11:56 oh 21:12:10 https://etherpad.openstack.org/p/keystone-unified-limit-migration-notepad 21:12:31 oh holy crap we're adding more triggers 21:12:36 * kmalloc is going to cry 21:12:47 * lbragstad pumps brakes 21:12:53 we need to stop adding triggers in mysql migrations 21:12:58 they are terrifying and horrible 21:13:13 https://review.openstack.org/#/c/576025/5/keystone/common/sql/expand_repo/versions/046_expand_update_pk_for_unified_limit.py 21:13:33 hold up, if we're able to isolate the migrations a bit, and do one of them in place, we might not have to deal with the triggers bit (since we're not creating a new table) 21:13:34 and ultimately just cause tears 21:14:14 the PK change is going to need to be new tables 21:14:29 i pretty much want to -2 that for the trigger reason only 21:14:36 triggers should never have been used 21:14:48 ok - let's try and write down the changes we need 21:14:50 https://etherpad.openstack.org/p/keystone-unified-limit-migration-notepad 21:14:53 they are a terrible choice and are very very hard to debug. 21:14:58 and then see if we can break up the migrations to be a bit easier 21:18:01 lbragstad, kmalloc: i'm undecided on a review as to what score to give and what is considered nitpicking or not. would be helpful to get your opinion 21:18:14 knikolla: what's up? 21:18:34 https://review.openstack.org/#/c/538371/6/keystone/tests/unit/limit/test_backends.py 21:19:05 the test correctly tests deletion of the limits, but it doesn't test that it didn't delete limits of other projects 21:19:12 the WHERE part of the sql query. 21:21:59 oh - that seems like it would be a sane test case 21:22:04 if not in that patch, at least in a follow on 21:24:21 lbragstad: alright. didn't want to look like I was nitpicking. 
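[editor's note] The data-migration step discussed above (for every limit, match (resource_name, service_id, region_id) against the registered limit table and set limit.registered_limit_id) can be sketched as a toy sqlite example. Table and column names are simplified stand-ins for keystone's real schema, not the actual migration code.

```python
# Toy sketch of the limit -> registered_limit FK backfill: add the new
# nullable column, then populate it by matching on the
# (resource_name, service_id, region_id) tuple, since registered limit
# names alone are non-unique. Schema is a simplified stand-in.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE registered_limit (
        id TEXT PRIMARY KEY, service_id TEXT, region_id TEXT,
        resource_name TEXT, default_limit INTEGER);
    CREATE TABLE "limit" (
        id TEXT PRIMARY KEY, project_id TEXT, service_id TEXT,
        region_id TEXT, resource_name TEXT, resource_limit INTEGER);
    INSERT INTO registered_limit VALUES
        ('rl-1', 'nova', 'RegionOne', 'cores', 20);
    INSERT INTO "limit" VALUES
        ('l-1', 'proj-a', 'nova', 'RegionOne', 'cores', 10);
""")

# expand: add the new nullable FK column
conn.execute('ALTER TABLE "limit" ADD COLUMN registered_limit_id TEXT')

# data migration: backfill the FK from the tuple; region_id may be NULL,
# so compare with IS instead of =
conn.execute("""
    UPDATE "limit" SET registered_limit_id = (
        SELECT rl.id FROM registered_limit rl
        WHERE rl.resource_name = "limit".resource_name
          AND rl.service_id = "limit".service_id
          AND rl.region_id IS "limit".region_id)
""")

row = conn.execute('SELECT registered_limit_id FROM "limit"').fetchone()
print(row[0])  # -> rl-1
```

A later contract step could then drop the now-duplicated service_id/region_id/resource_name columns from the limit table.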
21:25:13 i think it's worthwhile to consider the author - wxy is always responsive to feedback and isn't going to be discouraged by constructive criticism 21:25:52 ++ 21:25:56 nit picking: This is a bad way to do X, please reimplement stylistically differently -- or "your verbiage is not accurate". 21:26:11 "This needs an expanded test case like FOO" - not nit picking 21:26:17 and very reasonable 21:26:39 esp. for someone like wxy who gets it's not personal and is super responsive 21:26:59 knikolla: are you asking because of the ml culture change thread? 21:27:43 lbragstad: yes, in part. 21:29:49 advocating for further testing... is not nitpicking. you could easily also say "this can be a followup" and/or take on that followup yourself 21:30:15 ++ 21:30:34 just felt I should get some feedback first before I give feedback. 21:31:35 thanks! 21:32:57 knikolla: also if you see my style of review where i say [NIT] 21:33:07 it helps clarify what you think is a trivial thing 21:33:16 often i might have a review with 10+ nits but still +2 21:33:28 all nits can be fixed down the line. 21:33:43 kmalloc: agree. most of my indecisiveness was in the score between +1/+2 21:34:10 often +1 "everything looks good, but i have a serious question that needs answering, aka "what happens when X"" 21:34:15 imo 21:34:19 (from a core) 21:34:38 i avoid no-score because it seems like those get lost 21:34:52 so -1 = hey, I have a concern this might not do what you want... or there is just no testing 21:35:04 i've been trying to do more no scores than i have in the past, but yeah... 
they are harder to see 21:35:16 unless i'm parsing emails from gerrit 21:35:19 ++ 21:35:19 +1 = looks good, but expand [in the review, not even a patchset] on what i need answered 21:35:30 +2 = anything nits only or close enough that it won't cause issues 21:35:45 that helps 21:35:47 -2 = "not just no, awwww heck no, but here is why we can't do it" 21:36:33 i used to no-score anything that i just had questions about, vs +1, 21:51:37 kmalloc: rewrote the proposed migration without triggers 21:52:06 looking 21:52:17 yep. 21:52:28 that would be my preference. 21:52:34 but that sums up the alternative 21:53:11 so - that is going to require the database model to be cluttered for a release 21:53:26 yes, it does. 21:53:55 does it also mean we have to run keystone at a specific version for a certain amount of time before moving to the next release? 21:54:13 nova had an issue with that with one of their migrations in the past i think, but there wasn't much they could do about it 21:54:27 no. 21:55:03 if you run migrate_data in Rocky, which we use to setup the FK, in stein we just stop referencing the old column. 21:55:15 and we can drop it from the model and have a contract migration 21:55:21 that drops the actual column 21:55:30 what if i'm on Queens 21:55:36 i upgrade to rocky 21:55:44 and then immediately upgrade to Stein 21:55:48 (ff upgrades) 21:55:50 do we support upgrading to rocky and then to stein without running data_migrate? 21:56:09 because it seems like i keep getting a different answer on this 21:56:30 no - i think running data migrate is required 21:56:45 then data migrate moves the data (fk) and stein doesn't care about old code. 21:56:50 and old columns 21:56:55 when the nodes are down 21:57:02 stein contract removes that old column 21:57:07 yep 21:57:17 * lbragstad ventures into edge case land 21:57:26 let's say i do all this without downtime 21:57:39 we are going to have to do a nullable allowed for the extra columns 21:57:44 in rocky. 
21:57:57 because stein may or may not know about the columns 21:57:58 and a limit gets updated/created in the rocky migration 21:58:13 so the limit.registered_limit_id of that entry isn't populated 21:58:18 right so 21:58:21 and then i upgrade to stein 21:58:29 in rocky, registered_limit_id is nullable 21:58:41 in rocky service_id, resource_name, region_id are all nullable 21:58:50 (code enforcement on if they are populated) 21:59:10 in stein we drop service_id, resource_name, region_id, and make registered_limit_id not nullable 22:00:23 what happens if a limit sneaks in after the migration in rocky has been run to populate limit.registered_limit_ids? 22:00:46 on an older node? 22:00:58 like a Queens node accepts the POST /v3/limits requests 22:01:02 isn't migrate done when all nodes are rocky? 22:01:08 nope 22:01:12 *facepalm* 22:01:13 https://docs.openstack.org/keystone/latest/admin/identity-upgrading.html 22:01:14 we should fix that 22:01:17 that is bad. 22:01:21 how do we fix that though? 22:01:29 you need a place to transition things? 22:01:33 migrate happens once all nodes are on new code 22:01:37 data-migration 22:01:47 expand, upgrade code, migrate data, contract 22:01:56 code can run in n-1 scenario 22:02:00 for the schema 22:02:02 but can you run new code if the data hasn't been migrated yet? 
22:02:10 app code is smart 22:02:36 mmm i know where this is going 22:02:51 you're going to make new code check all the places 22:02:56 lbragstad: https://github.com/openstack/keystone/blob/03a616d1bf5715ac74756f2cb3aec1f09352de81/keystone/identity/backends/sql_model.py#L104-L112 22:03:08 as an example 22:03:23 password_hash is the new column 22:03:27 password is the old 22:03:37 check for the new, if not there, do the thing with the old 22:03:59 writes write to both locations [in password case, because of security, we had a config option for that] 22:04:16 this is not security related, so you can just blindly write to both locations with the relevant data 22:04:26 migrate is run when the code is 100% on 22:05:39 yeah 22:05:48 another version of that would be to have the new code check the old location 22:05:56 and make things consistent 22:06:14 except you still need to make sure in Stein all data is in the new place 22:06:18 which would allow you to have old and new nodes running during migration 22:06:31 in rocky, code does look in both places 22:06:32 yep 22:06:39 but migrate still has to occur. 22:07:12 mmmm 22:07:14 damn 22:07:16 nevermind 22:07:43 fyi - data problems are hard 22:08:54 yep 22:10:23 Merged openstack/keystone master: Add policy for limit model protection https://review.openstack.org/562714 22:18:30 that's going to be a total pain 22:19:20 we're going to either need to change our upgrade process to make operators get all nodes on the new release before running migrate... 
or we break our rules and perform the migration in contract 22:23:09 i've done that a couple times 22:23:17 because it was the only way to avoid breaking rules 22:23:19 it sucked 22:23:57 if i think about it, it would be easier to do that than change our upgrade process which likely breaks upgrade automation for operators :( 22:24:42 we should still change our upgrade process 22:25:52 even if it isn't this cycle 22:25:59 telegraph "we're changing to X" 22:29:32 Morgan Fainberg proposed openstack/keystone master: Migrate all password hashes to the new location if needed https://review.openstack.org/576660 22:29:42 lbragstad: ^ should be an easy review 22:29:57 ack 22:30:11 lbragstad: that does the lifting of moving password to password_hash if password_hash is none [cleanup] 22:30:23 and then in S, we can contract and drop the password.password column 22:30:29 :) 22:30:37 local test succeeded, we'll see what zuul says 23:05:39 attempted to summarize the migrations https://etherpad.openstack.org/p/keystone-unified-limit-migration-notepad 01:51:41 lbragstad: sqlite doesn't support changing the primary key, that's the reason I re-create the table 01:54:37 lbragstad: a similar case is here: https://github.com/openstack/keystone/blob/master/keystone/common/sql/migrate_repo/versions/095_add_integer_pkey_to_revocation_event_table.py#L20-L22 02:01:21 kmalloc: lbragstad : If I understand correctly, triggers are used to sync old data that is newly created while upgrading to the new schema. If we drop the triggers, how do we deal with this case? 02:04:32 Use app-level logic to keep the data in sync for a release. Triggers are very hard to debug, are not well tested and could end up causing issues for the small number of cases that use them. 02:05:25 So, keystone just writes to both places for rocky, and in stein we drop support for the old way. But defer to lbragstad if we are doing that or triggers are acceptable. 02:05:42 And contract happens in stein then. 
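[editor's note] The app-level compatibility pattern kmalloc points at (the password/password_hash example: reads prefer the new column and fall back to the old, writes populate both until contract drops the old column) can be sketched in miniature. The class and attribute names below are illustrative, not keystone's actual sql_model code.

```python
# Minimal sketch of the dual-location pattern used during a transition
# release: read from the new column with a fallback to the old one, and
# write to both so old and new nodes stay consistent until the contract
# migration drops the old column. Names are illustrative, not keystone's.
class PasswordRow:
    def __init__(self, password=None, password_hash=None):
        self.password = password            # old column (dropped at contract)
        self.password_hash = password_hash  # new column

    @property
    def hash(self):
        # read path: prefer the new location, fall back to the old
        if self.password_hash is not None:
            return self.password_hash
        return self.password

    def set_hash(self, value):
        # write path: populate both locations during the transition
        self.password_hash = value
        self.password = value

# a row written by an old node only has the old column populated
old_row = PasswordRow(password='$bcrypt$abc')
assert old_row.hash == '$bcrypt$abc'

# a new node writes both locations
new_row = PasswordRow()
new_row.set_hash('$bcrypt$xyz')
assert new_row.password == new_row.password_hash == '$bcrypt$xyz'
```

The data migration then only has to copy rows where the new column is still empty, which is exactly what the "Migrate all password hashes to the new location if needed" patch above describes.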
02:05:49 kmalloc: emm, that's a way, let the code deal with the mixed-version data 02:07:19 That is how I usually handle these cases, easier to test/get right. And not a shot in the dark for say pgsql (very under tested). And not fighting with sqlite to 'test' the code. 02:09:51 But again, I am not blocking triggers, I just prefer to not use them. Checking with lbragstad on the way forward is best. I'll roll with what he recommends. 02:10:59 kmalloc: Ok, got it. 02:12:03 kmalloc: and for schema change, adding a PK is not allowed in sqlite. So I tried to re-create the table in my new PS. 02:12:20 Hm. That is annoying. 02:13:43 yeah, I tested the in-place way Lance wrote here https://etherpad.openstack.org/p/keystone-unified-limit-migration-notepad in my env, and it all works well. I then uploaded PS2 that way, but CI tells me sqlite doesn't like it. :( 02:14:47 I can help write some sqlite specific code. 02:14:50 If needed. 02:15:28 We have done that in the past a few times, special case for upgrade test. We will get some better tests in gate. Iirc. 02:18:00 kmalloc: so you prefer to use the in-place way with specific sqlite related code? 02:21:02 That is my preference, but I can't really impose my view here as the way forward if the general consensus is "use triggers" 02:21:35 Merged openstack/oslo.policy master: Add examples and clarification around scope_types https://review.openstack.org/568901 02:22:48 kmalloc: OK, let's wait for others' opinions as well. Thanks for your suggestion. 02:23:07 ++ 02:23:10 ++ 02:51:16 sunguangning proposed openstack/oslo.policy master: Remove some description from oslo policy https://review.openstack.org/576683 03:58:36 out of curiosity, are there any upstream tools for testing custom policy? 
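[editor's note] The sqlite limitation wxy hits above (no in-place primary key change) is conventionally worked around exactly the way he describes: create a new table with the desired schema, copy the rows over, drop the old table, and rename. A toy sketch with a simplified stand-in schema:

```python
# sqlite cannot alter a table's primary key in place, so a PK change is
# done by re-creating the table. Simplified stand-in schema: the uuid
# PK is demoted to a unique column and an auto-incrementing integer PK
# is introduced, mirroring the unified-limit PK pivot being discussed.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    -- old table: TEXT (uuid) primary key
    CREATE TABLE registered_limit (id TEXT PRIMARY KEY, resource_name TEXT);
    INSERT INTO registered_limit VALUES ('uuid-1', 'cores');

    -- new table: auto-incrementing integer primary key
    CREATE TABLE registered_limit_new (
        internal_id INTEGER PRIMARY KEY AUTOINCREMENT,
        id TEXT UNIQUE NOT NULL,
        resource_name TEXT);

    -- copy, drop, rename
    INSERT INTO registered_limit_new (id, resource_name)
        SELECT id, resource_name FROM registered_limit;
    DROP TABLE registered_limit;
    ALTER TABLE registered_limit_new RENAME TO registered_limit;
""")

row = conn.execute(
    "SELECT internal_id, id FROM registered_limit").fetchone()
print(row)  # -> (1, 'uuid-1')
```

The revocation-event migration wxy links (095_add_integer_pkey_to_revocation_event_table.py) is the precedent for this pattern in keystone's tree.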
04:11:53 Chason Chan proposed openstack/python-keystoneclient master: Update IdentityProviderManager docstring https://review.openstack.org/576708 07:00:04 Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639 07:02:24 Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639 07:24:04 Morgan Fainberg proposed openstack/keystone master: Add Flask-RESTful as a requirement https://review.openstack.org/574414 07:24:32 Morgan Fainberg proposed openstack/keystone master: Implement scaffolding for Flask-RESTful use https://review.openstack.org/574415 07:24:43 Morgan Fainberg proposed openstack/keystone master: Keystone adheres to public_endpoint opt only https://review.openstack.org/574502 07:24:58 Morgan Fainberg proposed openstack/keystone master: Convert json_home and version discovery to Flask https://review.openstack.org/574736 07:25:05 Morgan Fainberg proposed openstack/keystone master: Add support for before and after request functions https://review.openstack.org/576637 07:25:13 Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639 07:27:45 adriant: https://docs.openstack.org/patrole/latest/ 07:51:31 wangxiyuan proposed openstack/keystone master: Strict two level hierarchical limit https://review.openstack.org/557696 09:54:22 hi, https://bugs.launchpad.net/keystone/+bug/1777671 in this bug what exactly do we need to do? 
09:54:23 Launchpad bug 1777671 in OpenStack Identity (keystone) "Incorrect use of translation _()" [Medium,Triaged] - Assigned to Deepak Mourya (mourya007) 10:26:04 deepak_mourya: here's an example of what needs to be fixed: http://git.openstack.org/cgit/openstack/keystone/tree/keystone/auth/core.py#n170 10:26:26 the string is being marked for translation with _() and then being passed to both the LOG and the exception 10:26:52 but we don't actually want to have the string for the LOG translated, only for the exception 10:27:37 so it should change to something like msg = 'Domain name cannot contain reserved characters.' ; LOG.warning(msg) ; raise exception.Unauthorized(message=_(msg)) 10:28:04 cmurphy: ok got it now 10:28:27 Thanks for the reply 10:28:46 no problem 12:39:49 o/ 13:32:10 this is a good documentation patch if anyone is interested https://review.openstack.org/#/c/569741/ 13:43:28 cmurphy: :) 13:44:47 kmalloc: sup 13:45:55 good morning! 13:46:07 good afternoon! 13:46:07 or evening... or whatever it is wherever you are 13:46:18 * kmalloc is pre-coffee. 13:47:26 lbragstad: i think i've done a reasonable job breaking down @protected and what we're extracting so a proper .enforce_call can be made. https://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/rbac_enforcer/enforcer.py?h=refs/changes/39/576639/4#n157 13:47:40 lbragstad: it's not done, but it's on its way. 13:48:15 sounds good 13:48:16 ayoung, adriant: ^ cc, just because i know you tried to take a stab at diving into @protected as well 13:48:25 i can start looking at that today or tomorrow 13:48:28 this is very flask-specific. 13:48:35 yeah just a "hey does this make sense" pass 13:48:55 is fine, because if it looks better than @protected, i've done something right. 
13:49:23 yeah - that's my main goal 13:49:57 if we can remove @protected in favor of something that puts the authorization logic closer to business code 13:50:57 or makes authorization logic more clean/clear i think that'll be a big win 13:52:12 which will also be super handy for the default roles + scope types work 13:52:19 yeah. 13:52:28 the docstrings need further expansion too. 13:52:45 and we can add another wrapper syntactic sugar-style to it on top of enforce_call 13:53:36 but i am feeling much better about the enforcer having spent a ton of time diving into @protected and trying to understand the dense craziness. 13:54:14 yeah - it's intense 14:07:04 going back to the database migration discussions we were having yesterday 14:07:16 i _think_ we'll need three migrations 14:07:43 1. for auto-incrementing primary keys in registered limits 14:07:54 2. for auto-incrementing primary keys in limits 14:08:09 3. for reducing duplicate data between limit and registered limit tables 14:10:20 i think we're at a point with the notes in https://etherpad.openstack.org/p/keystone-unified-limit-migration-notepad that we can probably move them to bugs instead 14:23:54 https://bugs.launchpad.net/keystone/+bug/1777892 14:23:55 Launchpad bug 1777892 in OpenStack Identity (keystone) "Reduce duplicate data between unified limit tables" [Medium,Triaged] 14:30:36 https://bugs.launchpad.net/keystone/+bug/1777893 14:30:37 Launchpad bug 1777893 in OpenStack Identity (keystone) "Limit and registered limit tables should auto-increment primary keys" [Medium,Triaged] 14:40:23 cmurphy: would i be able to get your eyes on https://review.openstack.org/#/c/571309/ whenever you have a minute? 14:40:45 lbragstad: looking 14:41:04 it should be all squared away per your last set of comments 14:43:39 o/ 14:48:18 lbragstad: lgtm! 
14:48:30 thanks cmurphy 15:12:30 lbragstad: i also advised wxy to confirm with you the direction we're going, trigger or not 15:12:51 lbragstad: i will stand behind whichever is the end choice, but i've made my opinion clear 15:13:09 sure - it's a big part of the reason why i wanted to write down a couple of the approaches 15:13:16 i'd like more feedback on it 15:13:30 and it's probably easier for people to parse if they have something they can look at 15:13:36 yep. 15:13:50 but yeah... it's a hard problem 15:22:30 o/ 15:40:34 Lance Bragstad proposed openstack/keystone master: Simplify the issue token code path https://review.openstack.org/545450 15:40:45 kmalloc: ^ 15:41:34 nice 15:42:37 Morgan Fainberg proposed openstack/keystone master: Add support for before and after request functions https://review.openstack.org/576637 15:42:38 Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639 15:43:20 lbragstad: ok and that should now be passing tests. 15:43:27 sweet 15:43:36 the enforcer is not done, but it's at least got parity with today 15:44:22 probably another hour of coding and then an hour of test writing [might spin the tests up in a followup for the new enforcer] just to keep reviewability (too much code at once is hard) 15:44:55 lbragstad: i knew flask was going to be a rabbit hole... but FFS :P 15:45:30 once the enforcer is ready i'll be able to start moving apis to keystone.api 15:58:37 that last patch i pushed is in merge conflict, but i should have a cleaned up version here in a minute... 
15:59:44 np, i need to run for some errands, be back around noon 15:59:47 (pacific) 15:59:52 ack 16:16:04 Lance Bragstad proposed openstack/keystone master: Introduce new TokenModel object https://review.openstack.org/559129 16:16:05 Lance Bragstad proposed openstack/keystone master: Simplify the issue token code path https://review.openstack.org/545450 16:16:35 had to wipe the +2 off of ^ :( 16:21:55 * lbragstad goes for a run 16:22:01 bbiab 16:43:17 Merged openstack/keystone master: Api-ref: Refresh the Update APIs for limits https://review.openstack.org/569741 18:19:41 Pavlo Shchelokovskyy proposed openstack/keystone master: Filter by entity_type in get_domain_mapping_list https://review.openstack.org/572446 18:51:15 i'm noticing something super weird with caching 18:53:39 i have a token model handler that serializes token objects to dictionaries before caching them 18:54:03 and then it deserializes the data back to token model objects on cache hits 18:54:30 i can confirm that a token is getting serialized, which means it's getting put in cache 18:54:42 but when it is deserialized 18:55:12 it only executes like halfway through the deserialization 18:56:02 knikolla: hi 18:56:25 knikolla: I read through the spec you linked in yesterday quickly for the Devstack plugin and test work 18:56:41 knikolla: is it tracked anywhere what's done and what's in flight/todo? 19:04:51 lbragstad: back 19:07:51 lbragstad: is this the context cache? 19:07:56 lbragstad: or the main cache? 
19:08:17 lbragstad: it might need a msgpack deserializer 19:09:17 lbragstad: can you post what you have and i'll take a look 19:11:34 yeah 19:11:45 i'll post a wip of what i have 19:28:12 ok - these are the changes i've made http://paste.openstack.org/show/723952/ 19:28:46 this is the failure with logging - http://paste.openstack.org/show/723953/ 19:33:56 Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639 19:34:44 lbragstad: ^ fyi, code complete, needs tests. 19:34:48 looking at your issue now 19:34:53 sweet 19:36:34 fyi - this is the test case that it's failing on https://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/test_v3_protection.py#n1681 19:36:37 interesting: Traceback (most recent call last): 19:36:37 File "keystone/token/provider.py", line 170, in _is_valid_token 19:36:37 token_data = token.get('token', token.get('access')) 19:36:37 AttributeError: 'TokenModel' object has no attribute 'get' 19:36:54 right - did you see the handler code? 19:36:58 it's making an assumption you're dealing with a dict. 19:37:12 the authentication code is using the token model 19:37:24 the validation code is using the token reference (the old way) 19:37:43 ah 19:38:02 brb, dog needs to not explode inside 19:38:04 sorry 19:38:06 so - technically the token validation code assuming that's a dictionary is correct (for now) 19:38:11 no worries 19:41:27 the log doesn't ever show deserializing 19:41:35 weird, right? 19:41:46 dog wants to play "chase me" instead of "go out" 19:41:47 even though it says it's been serialized and whatnot 19:41:52 so, nope. not chasing a dog around. 19:42:04 that sounds like a fun game 19:56:46 near dogsplosion 19:56:47 ok back 19:56:48 sooo 20:06:58 lbragstad: uhm 20:07:08 weird, right? 20:07:08 lbragstad: so, humor me... 
20:07:22 i think you're never hitting a deserialization event 20:07:29 i would agree 20:07:49 it's never actually getting to that method 20:07:55 in _TokenModelHandler 20:08:06 you're failing before you hit deserialize 20:08:27 in self.get('/auth/tokens', token=admin_token, 20:08:27 headers={'X-Subject-Token': user_token}) 20:08:39 you've only requested each token a single time until that point 20:09:09 the context cache wont deserialize unless you get into "get" token. 20:09:26 right - that makes sense 20:09:32 you're not getting far enough for the context cache to work, so, caching is not even involved yet 20:10:21 Traceback (most recent call last): 20:10:22 File "keystone/token/provider.py", line 170, in _is_valid_token 20:10:22 token_data = token.get('token', token.get('access')) 20:10:22 AttributeError: 'TokenModel' object has no attribute 'get' 20:10:32 that is before you get to the deserialize point [somehow] 20:10:40 yeah... 20:10:42 hmm 20:11:08 Unexpected error or malformed token determining token expiry: 20:11:13 soooo 20:11:29 are we validating a freshly issued token? 20:11:46 we haven't validated a token at all 20:11:48 just issued 20:11:58 right - if what you're saying is true 20:12:09 we haven't issued the user's token back to them yet... 20:12:15 even if we had 20:12:21 context cache is memoization 20:12:26 meaning it is specific to the validate call 20:12:40 if you don't call "validate" we aren't caching 20:12:56 we cache tokens on issue 20:13:04 then our on-issue cache may be wonky 20:13:12 it was a thing amakarov implemented a while back 20:13:53 https://git.openstack.org/cgit/openstack/keystone/tree/keystone/token/provider.py#n173 20:14:14 yeah... that doesn't look quite right to me 20:14:17 self._validate_token.set(token_data, TOKENS_REGION, token_id) 20:14:22 sec. 
20:14:42 yeah that isn't caching anything useful 20:14:58 it's setting the cache key to the TOKENS_REGION 20:15:04 which is a bogus cache-key 20:15:41 basically that code just wastes memory 20:15:53 both in memcache and in local context 20:15:57 because it stuff things in that can't result in hits? 20:16:13 because TOKENS_REGION object isn't a valid cache key 20:16:17 nothing would ever look that up 20:16:26 damn 20:16:28 there is a reason we typically don't use .set() 20:16:45 what's the method signature for set? 20:16:47 you have to generate the cache-key with args that look like what _validate would be called with 20:16:48 set()? 20:17:03 which *should* be the token id 20:17:10 .set(self, key, value) 20:17:27 where the key is a mangled set of "method, args, etc" run through a sha1 20:17:58 oh - it doesn't look like we're doing that... 20:18:23 let me confirm, it *may* do some cache-key work 20:18:34 but it for-sure doesn't work with TOKENS_REGION 20:18:39 as the value 20:19:03 if you're right about the method signature 20:19:05 shouldn't it be 20:19:26 self._validate_token.set(TOKEN_REGION, token.id, token) 20:20:17 https://review.openstack.org/#/c/309146/ 20:20:30 https://www.irccloud.com/pastebin/BuxcIEpz/ 20:20:55 so maybe it's right... 20:21:00 but... hold on 20:21:14 Merged openstack/python-keystoneclient master: Add support for registered limits https://review.openstack.org/537668 20:21:37 hm. 
20:22:22 no, it likely should be self._validate_token.set(token, self, token.id) 20:22:34 sorry my dogpile-fu is a little rusty 20:22:46 ok 20:23:12 yeah 20:23:15 if MEMOIZE.should_cache(ret): 20:23:15 self.get_project.set(ret, self, project_id) 20:23:15 self.get_project_by_name.set(ret, self, ret['name'], 20:23:15 ret['domain_id']) 20:23:15 return ret 20:23:20 that is an example 20:23:31 swapping TOKEN_REGION for self should fix that 20:23:45 and get you deserializing and actually getting pre-seeded caches 20:23:47 ok 20:23:58 right now every single token issued simply caches in the same key 20:24:00 :P 20:24:02 over and over and over 20:24:07 let me give that a shot quick 20:24:11 and that sounds like a bug 20:24:18 yeah it is a bug 20:24:27 and proof that this code was never actually tested 20:24:31 which would pretty much negate the entire benefit of that feature 20:24:35 yep. 20:24:42 testing the cache is *hard* 20:24:58 there is a reason very few of us tend to write cache code. 20:25:14 which reminds me, i need to unwind the broken config thing soon 20:25:21 will do that in a few. 20:26:08 mmmm 20:27:05 ok let me look at the blame...
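The `.set()` signature they settle on mirrors how dogpile.cache's memoized functions derive cache keys from the call arguments. A toy model (pure Python, a simplified assumption rather than dogpile itself) shows why seeding with `TOKENS_REGION` as an argument produces a key no real call ever looks up:

```python
import hashlib


class Memoized:
    """Toy sketch of dogpile.cache's cache_on_arguments (simplified):
    the cache key is derived from the call arguments, so .set() must be
    given the same arguments a real call would pass."""
    def __init__(self, fn):
        self.fn = fn
        self.cache = {}

    def _key(self, *args):
        return hashlib.sha1(repr(args).encode()).hexdigest()

    def __call__(self, *args):
        key = self._key(*args)
        if key not in self.cache:
            self.cache[key] = self.fn(*args)
        return self.cache[key]

    def set(self, value, *args):
        # explicit seeding: store value under the key those args generate
        self.cache[self._key(*args)] = value


@Memoized
def validate_token(token_id):
    raise RuntimeError('cache miss')  # a real impl would hit the backend


TOKENS_REGION = object()

# buggy seeding: the key is derived from TOKENS_REGION, which no
# validate_token(token_id) call will ever generate -- wasted memory
validate_token.set({'token': {}}, TOKENS_REGION)

# correct seeding: the args mirror the eventual validate_token('abc') call
validate_token.set({'token': {}}, 'abc')
assert validate_token('abc') == {'token': {}}  # cache hit, no RuntimeError
```

In keystone's case the memoized target is a method, so `self` is one of those arguments, hence `.set(token_data, self, token_id)` rather than `.set(token_data, TOKENS_REGION, token_id)`.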
i think we never had a test implemented for caching code 20:27:21 i think that needs to be a rule, cache code MUST always have expanded testing 20:27:31 we're still not hitting the deserialization 20:28:04 http://paste.openstack.org/show/723962/ 20:28:21 changes http://paste.openstack.org/show/723963/ 20:30:59 test changes - http://paste.openstack.org/show/723966/ 20:31:15 new logs - http://paste.openstack.org/show/723965/ 20:33:42 huh - so it is blowing up in the GET /v3/auth/tokens call on the admin token 20:33:46 Merged openstack/python-keystoneclient master: Add support for project-specific limits https://review.openstack.org/574391 20:33:53 right 20:34:02 and it's still not deserializing 20:34:10 yeah 20:34:57 I don't think it's even getting to .validate 20:35:08 i don't see a "missed" anywhere in your log 20:35:31 nope - because it's hitting the cache 20:35:35 but not deserializing 20:35:39 0.o 20:35:44 uhm. 20:35:50 is it hitting the cache? 20:35:56 it has to be 20:36:09 do me a favor, lets do some exception debugging. 20:36:12 http://paste.openstack.org/show/723967/ 20:36:34 https://www.irccloud.com/pastebin/9vgqFE2F/ 20:36:38 add in an explicit get 20:36:47 self._validate_token.get(self, token.id) 20:36:51 and pprint that 20:36:59 erm... 20:37:00 where do you want that? 20:37:05 right after the set 20:37:09 let's compare the results 20:38:38 compared to token/token_data and the return of .get() 20:38:50 interesting 20:38:52 http://paste.openstack.org/show/723968/ 20:39:16 http://paste.openstack.org/show/723969/ 20:39:19 ^ changes 20:40:08 well that clearly shows bugs in the deserializing code 20:40:12 it's failing because i did something wrong in deserialize 20:40:22 that is a start. 
20:40:28 so - that proves something 20:40:38 which is that it's getting set in cache 20:40:52 the next thing to try is: call ._validate directly and compare .get() and ._validate responses 20:40:55 with self._validate_token.set(token, self, token.id) 20:40:57 once you have deserialize working 20:43:09 you should write a test for the handler 20:43:10 fixed deserialization 20:43:18 that just does serialize/deserialize of a rendered token 20:43:22 to ensure changes don't break it 20:43:28 iirc i did that with the revoke handler 20:43:31 http://paste.openstack.org/show/723970/ 20:44:13 and that is just doing .get() then ._validate(token.id) ? 20:44:40 yeah - it's just calling .get() right after it manually sets the token on the _validate_token() method 20:45:04 i notice two deserializations now 20:45:22 yeah - because the test is authenticating for two tokens 20:45:28 the admin_token and the user_token 20:45:34 ah right. 20:45:51 but the main issue still exists (where TokenModel is somehow getting in the mix in the validate token path) 20:45:51 ok now right below the .get call self._validate(token_id) 20:46:03 and see if it hits the cache 20:46:22 we can also enable cache-debugging (and show the generated keys) 20:47:05 http://paste.openstack.org/show/723971/ 20:47:42 deserialized twice, one for each token 20:47:55 so self._validate_token(token.id) is working 20:48:24 yep 20:48:29 thats good news(tm) 20:49:05 that clearly means we're not populating bad cache now 20:49:17 ok, but we're still failing. 20:49:38 because "somehow" validate is getting a TokenModel when it should be getting a dictionary 20:50:00 yup 20:50:02 which is still blowing my mind... 20:50:21 and it's def. not cache related [or well, not "context-cache/validate cache"] 20:50:28 let me see the whole diff again? 20:50:44 http://paste.openstack.org/show/723972/ 20:58:09 note that diff is on top of https://review.openstack.org/#/c/545450/10 20:58:21 hi. 
Just updated to Ocata from Newton, auth stopped working and seeing this error in the logs TypeError: __call__() got an unexpected keyword argument 'default_config_dirs' 20:58:29 anyone seen that before? 20:58:54 can't find any reference to default_config_dirs in any config files 20:59:18 lbragstad: so, ._validate is in-fact returning a tokenmodel now 20:59:28 lbragstad: and you're erroring in .is_valid_token 20:59:35 yep 20:59:50 markguz: do you have a whole trace? 21:00:16 lbragstad: fix is_valid_token, the pprint for deserialization may just be getting lost in a flush. 21:01:00 lbragstad: http://paste.openstack.org/show/723973/ 21:01:39 lbragstad: my typical view on caching is also: disable caching and see if it works first 21:01:54 once that works, enable caching again 21:03:02 markguz: that sounds like some code mismatch of some sort. 21:03:58 markguz: how was the upgrade performed? [out of curiosity] 21:04:18 kmalloc: https://www.rdoproject.org/install/upgrading-rdo-3/ 21:05:30 hmm. 21:05:51 because default_config_dirs was an option added somewhere along the line. 21:06:38 it's like the option is being passed to an older [unaware] version of keystone 21:07:11 kmalloc: only one keystone running 21:07:29 right. 21:08:01 did keystone properly shut down before the upgrade? 21:08:07 yup 21:08:14 i could see something being weird if some code was still running in mod_wsgi. 21:08:56 lbragstad: i've never seen that error before. 21:09:05 me either 21:09:57 default_config_dirs was added to oslo.config in ocata so you need to make sure oslo.config is up to date 21:10:15 cmurphy: i need to add that option? 21:10:26 ooh 21:10:29 just upgrade oslo.config? 21:10:32 that could do it.
thanks cmurphy 21:10:37 markguz: no, you need to make sure the oslo.config package is on ocata 21:10:46 ahhhh 21:10:48 yeah no kidding, good call cmurphy 21:10:48 version 3.20.0 at least it looks like 21:10:50 yeah that would do it 21:10:51 lbragstad: yeah might be that oslo.config package was out of date. 21:11:13 markguz: cmurphy swoops in and saves the day. it's her super power :) 21:11:22 (well one of them) 21:11:33 ^.^ 21:11:42 yeah that was not updated. think RDO needs to put that in the update page 21:11:52 https://git.openstack.org/cgit/openstack/keystone/tree/requirements.txt?h=stable/ocata#n25 21:11:56 hehe, or make their keystone package depend on the minimum 21:12:09 kmalloc: yes 21:12:11 sounds to me like a not-quite-correct rpm that doesn't know the minimum oslo.config needed 21:12:32 the minimum we define upstream is 3.14 21:12:38 at least in stable ocata 21:12:54 oooh wonderful. 21:13:00 that might be a g-r bug then 21:13:21 kmalloc: fwiw - that issue goes away when i disable keystone.conf [token] cache_on_issue and keystone.conf [cache] enabled 21:13:33 lbragstad: ok that is interesting. 21:13:39 means it *is* cache related 21:13:50 good to know 21:13:55 somehow with cach... 21:13:58 oh wait a sec. 21:14:09 hooooooollllld the door... hodor! 21:14:15 i mean... 21:14:32 lbragstad: you didn't update the validate pipeline to use the tokenmodel did you? 21:14:34 just the issue one? 21:16:03 lbragstad: you're somehow getting a dict back when you don't pre-seed the cache? 21:16:13 or when you don't cache at all 21:16:20 via validate 21:17:09 * lbragstad back in 5 21:17:12 ok 21:52:19 sorry 21:52:30 * lbragstad was bombarded 21:52:57 heh 21:53:08 correct - only the issue token patch was updated to use the token model object 21:53:16 the validate path still builds a dictionary 21:53:24 and that is why you're failing.
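Back on markguz's `default_config_dirs` TypeError: the failure mode is simply newer caller code passing a keyword that an older library's callable doesn't accept, which is why upgrading oslo.config (which grew that option in Ocata) fixes it. A generic illustration, with a hypothetical function standing in for the old `__call__`:

```python
def old_config_call(config_files=None):
    """Stands in for an older oslo.config __call__ that predates the
    default_config_dirs keyword (the function name is illustrative)."""
    return config_files


caught = None
try:
    # newer keystone code passes the new keyword unconditionally
    old_config_call(config_files=[], default_config_dirs=['/etc/keystone'])
except TypeError as exc:
    caught = str(exc)

# same shape of error as in markguz's log
assert caught is not None and 'default_config_dirs' in caught
```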
21:53:25 using all the old way of doing things we're used to 21:53:34 because issue pre-seeds the cache with the model 21:53:56 i wonder if the deserialize pprint is just lost in a flush due to the app bailing 21:53:59 check out the last couple lines of the deserialize method though 21:54:21 i'm converting the token model back to a dictionary 21:54:40 don't do that. 21:54:47 (because i'm doing the token model work in two patches, one for token issuance and one for token validation) 21:54:52 i can squash them 21:55:02 but i'm not sure if we're covering up a cache problem? 21:55:04 deserialize should rehydrate to the same state 21:55:06 always 21:55:20 you should ensure calls to validate convert -> dict 21:55:21 if needed 21:55:27 ._validate 21:55:56 basically you need a "if isTokenModel: token.to_dict()" 21:56:00 for testing 21:56:23 if you turn off cache_on_issue 21:56:33 the problem also goes away, yah? 21:56:58 basically until both issue and validate emit TokenModel you shouldn't lean on cache_on_issue 21:57:05 it is a recipe for errors. 21:57:10 so - smash https://review.openstack.org/#/c/555931/1 into https://review.openstack.org/#/c/545450/ 21:57:41 * kmalloc waits for loading... 21:57:49 i'd like to make sure cache_on_issue always works 21:59:30 right, so you have to make sure issue and validate both do tokenmodel 21:59:32 in a single patch 21:59:39 ... also, i can't load review.openstack.org 22:02:08 changing fundamental format *OR* you need to make a dict-interface for the tokenmodel for compat until everything is converted 22:02:10 both are ok 22:02:22 i probably would do the dict-compat interface 22:02:53 [basically, behind the scenes build the token_dict and set up a __getitem__ to reference it] 22:03:18 and then delete that interface once everything is converted 22:03:39 means less code change in one swoop 22:04:59 true 22:05:46 but if we do that, we aren't reinflating to the same thing? 22:08:27 oh...
22:08:30 i see what you mean 22:12:26 basically keep a dict state of the token in all cases on like "tokenmodel.__dictstate" and make .__getitem__ on TokenModel just reference TokenModel.__dictstate.__getitem__ 22:14:58 ok 22:15:24 that'd be one option - or we use the big hammer and make issue token and validation token work with TokenModel 22:15:25 it's ugly but can make it so anything that does Token[] can work until it's converted to know TokenModel.thing 22:15:30 totally 22:15:33 it's up to you 22:15:49 both will do the job 22:15:52 sure 22:16:13 * lbragstad assesses risk 23:24:56 cmurphy: thanks, will take a look at it! 23:36:14 cmurphy: any clue if Patrole works with older versions of openstack services? 23:39:36 although I guess in my case the requirements_authority part is all I need and that's just parsing policy files vs requirements 23:48:26 Merged openstack/keystone master: Clarify scope responses in authentication api ref https://review.openstack.org/571309 00:52:28 if I'm using fernet tokens, that means that the keystone database is largely read-only right? I'm thinking of making a somewhat highly available/distributed keystone system with read-only satellite nodes and only one master that can tolerate its failure. 03:08:27 DHE, yes fernet makes the db mostly read only, but the more common way seems to be to just go for a multi-master approach at that stage because forcing write actions against the one node that can write might end up weird 07:12:53 adriant: I don't actually know that much about patrole, felipemonteiro is the best person to ask 07:50:26 cmurphy: thanks, will play with it first and then potentially chat to him.
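The dict-compat shim kmalloc sketches above could look roughly like this (hypothetical names, not keystone's actual code):

```python
class TokenModel:
    """Sketch of the transitional dict-compat interface: keep a rendered
    dict alongside the model and let legacy code index into it."""
    def __init__(self, rendered_token):
        self._dict_state = rendered_token  # e.g. {'token': {...}}
        # real attributes (user_id, roles, ...) would also live here

    def __getitem__(self, key):
        # legacy code doing token['token'] keeps working until converted
        return self._dict_state[key]

    def get(self, key, default=None):
        # provider.py calls token.get('token', ...), so mirror dict.get
        return self._dict_state.get(key, default)


token = TokenModel({'token': {'expires_at': '2018-06-20T00:00:00Z'}})
assert token['token']['expires_at'] == '2018-06-20T00:00:00Z'
assert token.get('access') is None
```

Once every consumer speaks TokenModel attributes natively, the `__getitem__`/`get` shim gets deleted, which is the "less code change in one swoop" trade-off discussed above.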
I was very, very close to writing something simple (had sort of started writing it) that took a YAML file and used an admin account to make users/sessions and test raw API calls with tokens with only certain roles against our cloud: http://paste.openstack.org/show/724006/ 07:52:55 Patrole saves me a bunch of effort potentially, but at the very least gives me a way to confirm policy files make sense before even touching APIs. 07:53:12 hopefully, if I read the docs right. 07:53:32 yes I think so 08:37:50 ping knikolla - did you get any further with ksproj? 08:40:01 lbragstad: add some comments here: https://etherpad.openstack.org/p/keystone-unified-limit-migration-notepad (L89), need your suggestion. Thanks. 10:42:51 yankcrime: o/ 10:44:01 I did. I also did some playing around with adjutant. 10:44:49 ah cool, i've been on holiday for a week so still in catch-up mode, but playing around with adjutant is also on my to-do 10:45:48 I would really like a vacation. Around 60% of last week was meetings for me. 10:46:41 Wanna talk about your requirements? 10:51:11 yeah can do, although my thoughts are probably slightly half-baked 10:54:13 basic requirement is user invitation and onboarding (aup awareness / acceptance etc.), i think a lot of what's there in ksproj is along the right lines 10:55:15 are your users going to be federated? 10:55:28 yes, that's the goal 10:55:55 originally our thinking was that users would hit a page on which they'd request access, and then an admin would approve those 10:56:55 makes sense, it's pretty much the same goal that I have. With the addition that users also be able to invite/remove people from their projects.
10:58:07 ksproj currently can do invites for federated users and prompt for terms of agreement on invite acceptance 10:58:30 (on master) 10:58:55 on dev I was working on doing some refactoring so that I could add features to remove/list users from projects 10:59:51 based on my testing around adjutant I found that it also does invites, however it doesn't work with federated users 11:00:20 adriant: ^^ if you're around 11:01:32 ah that'd be a showstopper for us then as it stands right now 11:09:05 yankcrime: I wanna spend slighly more time with it, to see if it's hard or easy to patch that in. 11:09:36 I'd be happier to maintain ksproj if you're helping but also don't want to duplicate efforts by the community 11:10:05 yeah that 11:10:41 that's fair enough, i'll do some more hacking around with ksproj as i've a specific deployment in mind for it right now 11:11:39 i'll see about filling any gaps along the way and then let you know 11:11:54 yankcrime: cool, you know where to find me if you have questions. Though keep in mind I'm on EST. 11:12:02 no worries, and will do - thanks again knikolla 11:12:08 * knikolla goes to shower and have breakfast 11:48:40 adriant: regarding my read-only keystone db, I'm mostly looking at it from the standpoint of disaster recovery but with keystone (and only keystone) high availability. if region 1 burns to the ground I'll deal with it manually to get nova, glance etc up and running 11:49:24 but I'm running (or will be running) swift in multiple regions which only depends on keystone and I want to ensure it continues functioning even if region 1 is offline for a fiber cut or something. 
11:49:32 so really this is me taking the lazy way out 13:07:06 Happy Summer (or winter) Solstice y'all 13:39:27 zhangzhaoshan proposed openstack/oslo.limit master: Update url in HACKING.rst https://review.openstack.org/577162 13:48:04 lbragstad: question -- so we can indicate that rules are deprecated now within policy -- but this isn't meant to indicate that a specific path/method is deprecated/slotted for removal, correct? 13:50:11 right 13:52:11 hrybacki: the DeprecatedRule is only meant to indicate a specific policy name or check string has been deprecated 13:54:17 ack, thanks for confirming lbragstad 13:55:16 lbragstad: I'm going to start making something similar to https://wiki.openstack.org/wiki/Barbican/Policy for Keystone today -- to assist in the API audit -- is there any additional info points you think I should capture? 13:56:22 not that i can think of 13:57:26 ++ 14:03:20 i bumped the minimum version of python-keystoneclient in osc for https://review.openstack.org/#/q/status:open+project:openstack/python-openstackclient+branch:master+topic:bp/unified-limits 14:03:26 ^ that should be working now 14:17:34 Lance Bragstad proposed openstack/keystone master: Simplify the issue token code path https://review.openstack.org/545450 14:50:15 Merged openstack/oslo.limit master: Update url in HACKING.rst https://review.openstack.org/577162 15:26:46 o/ 15:56:46 Lance Bragstad proposed openstack/keystone master: Introduce new TokenModel object https://review.openstack.org/559129 15:56:47 Lance Bragstad proposed openstack/keystone master: Simplify the issue token code path https://review.openstack.org/545450 16:47:17 Gage Hugo proposed openstack/keystone master: Remove unclear wording in parameters https://review.openstack.org/577235 17:39:44 Kristi Nikolla proposed openstack/keystone master: Simple usage docs for implied roles https://review.openstack.org/575911 18:03:07 Jeremy Freudberg proposed openstack/keystone master: Expose duplicate role names bug in trusts 
https://review.openstack.org/576610 18:03:07 Jeremy Freudberg proposed openstack/keystone master: Fix duplicate role names in trusts bug https://review.openstack.org/576611 18:06:03 ^ lbragstad, back to you... it would be great if the sahara gate became unblocked today 18:34:53 knikolla: hi 18:35:10 ildikov: o/ hi 18:35:37 in a meeting currently. i should be back in about 30 mins. 18:35:38 knikolla: I created an etherpad to work out a plan to continue the testing work and have people sign up: https://etherpad.openstack.org/p/ECG_Keystone_Testing 18:35:54 knikolla: cool, plz ping me when you're available 18:43:28 When I run "openstack user list --domain " I get all my LDAP users; however, when I run "openstack group contains user --group-domain --user-domain " against an LDAP-enabled domain it comes back empty with 0 users. Is there anyone who can point me in the right direction for troubleshooting LDAP groups in keystone? 18:44:04 jeremyfreudberg: ack - reviewed 18:47:48 lbragstad: if you have a few moments to look at the new enforcer, i'd like to get a "yeah that looks better" or "oh god, just as bad or worse" before i write the tests / dive into using it 19:01:16 Jeremy Freudberg proposed openstack/keystone master: Fix duplicate role names in trusts bug https://review.openstack.org/576611 19:05:14 kmalloc: yeah - i can pull that up quick 19:05:22 i need a break from the TokenModel refactor anyway 19:06:05 hehe, i figured you might want a change. 19:08:19 is it https://review.openstack.org/#/c/576639/6 ? 19:20:45 kmalloc: hmm 19:21:03 so enforce() is getting replaced with enforce_call()? 19:21:32 .enforce_call is called in the method instead of decorating with @protected 19:21:48 but it's taking the place of enforce(), right? 19:22:31 no.
taking the place of @protected/@filterprotected 19:22:53 enforce still exists and is ultimately called down through the oslo_policy enforcer 19:23:03 enforce is the lower layer 19:23:46 today [pre-flask], we do @protected, which does "check authenticated", then calls common.authorization.check_protection, which then calls check_policy, builds policy_dict, then calls controllers(policy).enforce 19:24:00 which calls driver.enforcer, which calls policy_API 19:24:03 i think. 19:24:13 ok - yeah 19:24:14 that sounds right 19:24:14 or... the last part is it calls oslo_policy.enforcer.enforce 19:24:20 it's absurd 19:24:30 and that doesn't even take into account callbacks. 19:27:12 so, instead of a very crazy call stack 19:27:30 ... we now call, in our method: .enforce_call() 19:27:47 and supply the same kind of information you'd supply for @protected/@filterprotected 19:28:16 and / or target info (which eliminates the callback) 19:28:28 since it is called mid-method 19:28:31 rather than as a decorator 19:28:34 knikolla: BTW, are you on the Edge Computing mailing list too? 19:29:10 and we have a couple wrappers to throw errors *if* enforce_call isn't called and/or the method isn't explicitly exempted from enforcement 19:30:49 ok 19:30:58 from a high-level view - this seems sane 19:31:15 everything seems pretty well encapsulated 19:31:44 also i tried to add docstrings up and down so that it's "easy" to use 19:31:54 that'll be the other big part 19:32:16 right now, good luck knowing what @protected does 19:32:19 and what each thing is 19:32:25 it took me 3 days to unwind it 19:32:49 because it supports doing insane levels of things... and we literally use none of it 19:32:50 so - from the perspective of someone looking to write a new keystone API 19:32:59 and protect it 19:33:18 my main entry point is enforce_call() 19:33:20 i even added the same "functionality" to do the "get_member_from_driver" if needed.
19:33:36 yep, and the blueprint/API base automatically wraps the "API MUST BE PROTECTED" stuff for you 19:33:45 ildikov: o/ hi again. yes, i signed up for the edge computing mailing list the other day. 19:33:49 and there is a decorator to say "this is a whitelisted/non-protected api" 19:33:58 in the case of https://bugs.launchpad.net/keystone/+bug/1750660 19:33:59 Launchpad bug 1750660 in OpenStack Identity (keystone) "The v3 project API should account for different scopes" [High,Triaged] 19:34:26 knikolla: great 19:34:29 say i want to rework the authorization of that method to properly handle system-scope 19:34:35 knikolla: two things 19:34:55 lbragstad: changing the behavior based upon scope is easy. 19:35:04 kmalloc: i call enforce_call() first 19:35:07 relatively. 19:35:27 then i still have the context available to make the distinction between project-scope, domain-scope, and system-scope? 19:35:32 that being said... that change seems like a bad idea. 19:35:35 knikolla: there's a new thread on Keystone Edge architectures: http://lists.openstack.org/pipermail/edge-computing/2018-June/000294.html 19:36:00 calling /v3/projects and getting different responses, but then again, i guess that adheres to the "vary" header 19:36:00 knikolla: talking about what options we have, like federation vs DB replication, etc. 19:36:21 kmalloc: i think we'd have to in order to fix admin-ness? 19:36:26 lbragstad: yeah. 19:36:33 lbragstad: i didn't say we had another option 19:36:39 from an api standpoint... gross 19:36:44 knikolla: I think it could be beneficial to continue the Forum discussion either on the thread or in a follow-up meeting 19:36:58 if i'm a system admin and i call GET /v3/projects, i can get all projects 19:37:02 knikolla: can you chime in on the thread from a Keystone capabilities perspective? 19:37:07 lbragstad: do you always get all projects? 19:37:10 in that case 19:37:19 or [obv.
filterable] 19:37:22 if i'm a project admin and i call it, i only get projects under the project i admin 19:37:23 but assuming GET /projects 19:37:33 no ?filterparam 19:37:49 knikolla: the other thing is that I created an etherpad to organize our testing plans and look for volunteers: https://etherpad.openstack.org/p/ECG_Keystone_Testing 19:38:07 system scope = all projects, domain-admin = projects under my domain, project-admin = subprojects under mine? 19:38:14 knikolla: if you have any further info/pointers to the content already there, plz add them to the etherpad 19:38:29 or if i'm a domain admin and i have projects B and C under domain A, if i use a token scoped to domain A, then i should get a list of B and C back (and not E, F, G, which are under a different domain) 19:38:36 kmalloc: yeah - i think so 19:39:02 lbragstad: then it's easy, that's business logic 19:39:09 cool 19:39:12 that just introspects scope to know what to filter / ask for 19:39:14 that's going to be good 19:39:17 has zero to do with .enforce_call 19:39:24 awesome 19:39:39 enforce_call is "can I access the API and/or resource if populated in the policy_dict" 19:39:54 because we have %(target) 19:40:05 in the DSL, which could be project_foo.id 19:40:10 knikolla: those were all my topics for today :) 19:40:25 and you can check to see if user_id is allowed to on project_foo via %(target) 19:40:43 the role_assignment API is another one that is going to be like that 19:41:10 e.g. we shouldn't be listing system role assignments when a project admin asks for "all" role assignments 19:41:13 yeah, think of enforce as being "are you able to do X, given " 19:41:20 sure 19:41:21 if we change what the response looks like 19:41:30 enforce has already said "yep, you can do X" 19:41:32 ^ that will still be possible right?
19:41:40 treat it as a two-step thing 19:41:57 same concept as project stuff 19:42:10 "oh this isn't a system scope, filter system roles" 19:42:14 it's all business logic 19:42:18 yeah 19:42:19 cool 19:42:44 calling enforce just makes sure you're accessing the API with a token of the right authorization and scope 19:42:44 ildikov: awesome! i'll get right to it. 19:42:47 checking user_auth.scope.id[a project id] == target.id 19:42:54 is what enforce is meant to do 19:43:01 the second step makes sure the response matches that information 19:43:18 knikolla: cool, thanks much! 19:43:23 or if user_auth.scope_type == API.required_scope 19:43:55 enforce doesn't care what scope you have unless it is supposed to limit an API to a scope.id or a scope_type 19:44:04 sure 19:44:08 and enforce doesn't care about what data is being returned 19:44:14 right 19:44:34 enforce is... like the honey badger. it just doesn't 19:44:36 that makes sense - so long as it's easy to make that business logic in the methods calling .enforce_call() i'm happy 19:44:58 yeah, that is the goal 19:45:28 omg. i have AC, and i feel so much better 19:45:42 my office was ~10-15 degrees warmer than the rest of the house 19:45:54 not ok when that made the office ~90-95 19:46:00 Kristi Nikolla proposed openstack/keystone master: Simple usage docs for implied roles https://review.openstack.org/575911 19:46:21 lbragstad: oh, this is an easy one: https://review.openstack.org/#/c/576660/ 19:46:34 lbragstad: if you want to get us close to dropping the old password column ;) 19:46:40 one more cycle after that lands.
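The two-step model from the scrollback above, where `enforce_call()` answers "may you call this API at all" and ordinary business logic then filters the response by scope, can be sketched like this (a hypothetical, heavily simplified stand-in, not keystone's actual RBACEnforcer API):

```python
class Forbidden(Exception):
    pass


# toy policy table (illustrative, not real keystone policy)
POLICY = {'identity:list_projects': {'admin', 'reader'}}


def enforce_call(action, credentials):
    """Step 1: is the caller allowed to invoke this API at all?"""
    if not POLICY[action] & set(credentials.get('roles', [])):
        raise Forbidden(action)


def list_projects(credentials, all_projects):
    enforce_call('identity:list_projects', credentials)
    # Step 2: business logic introspects scope to filter the response;
    # enforce_call deliberately knows nothing about the returned data.
    if credentials.get('system'):
        return all_projects  # system scope sees everything
    return [p for p in all_projects
            if p['domain_id'] == credentials.get('domain_id')]


projects = [{'id': 'B', 'domain_id': 'A'}, {'id': 'E', 'domain_id': 'D'}]
sys_admin = {'roles': ['admin'], 'system': True}
dom_admin = {'roles': ['admin'], 'domain_id': 'A'}
assert len(list_projects(sys_admin, projects)) == 2
assert [p['id'] for p in list_projects(dom_admin, projects)] == ['B']
```

Keeping the two steps separate is the point of the refactor: the enforcer stays a pure "access yes/no" gate, and scope-aware filtering lives in the resource method where it can vary per API.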
19:46:58 Kristi Nikolla proposed openstack/keystone master: Simple usage docs for implied roles https://review.openstack.org/575911 19:47:29 lbragstad, cmurphy, gagehugo: fixed the grammar as suggested ^^ 20:07:05 Gage Hugo proposed openstack/keystone master: WIP Add docs for case-insensitivity in keystone https://review.openstack.org/576640 20:19:48 lbragstad kmalloc: so by default MySQL is case-insensitive right? https://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/test_backend_sql.py#n310 is a bit confusing 20:20:30 correct - i tested that with a devstack install and it failed when i replicated that test by hand 20:20:56 Yes. 20:21:12 Only for varchar; it is case-preserving, though. 20:21:14 is sqlite case-sensitive? 20:22:39 Gage Hugo proposed openstack/keystone master: WIP Add docs for case-insensitivity in keystone https://review.openstack.org/576640 20:25:10 it looks like it's insensitive 20:31:24 kmalloc: did you happen to see wxy's comments here - https://etherpad.openstack.org/p/keystone-unified-limit-migration-notepad ? 20:33:46 lbragstad: did not 20:35:32 Gage Hugo proposed openstack/keystone master: WIP Add docs for case-insensitivity in keystone https://review.openstack.org/576640 20:53:45 Gage Hugo proposed openstack/keystone master: Add LDAP user-backed functional testing gate https://review.openstack.org/558940 20:54:31 Gage Hugo proposed openstack/keystone master: Add functional testing gate https://review.openstack.org/531014 21:37:22 Gage Hugo proposed openstack/keystone master: WIP Add docs for case-insensitivity in keystone https://review.openstack.org/576640 00:42:53 knikolla: No, Adjutant doesn't do federated invites, but I'm curious how you're doing that in ksproj. Keystone can't create a user in an external source, so I was going to add support for a pluggable user store which would talk to an external source for users/roles/groups, and handle the keystone parts in keystone.
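On the SQLite question above: plain SQLite is actually mixed, `=` on TEXT uses the BINARY collation (case-sensitive) while `LIKE` is case-insensitive for ASCII, so what a test observes depends on which comparison it uses. A quick stdlib check (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (name VARCHAR)')
conn.execute("INSERT INTO users VALUES ('Admin')")

# '=' compares TEXT with the default BINARY collation: case-sensitive
eq = conn.execute(
    "SELECT count(*) FROM users WHERE name = 'admin'").fetchone()[0]
# LIKE is case-insensitive for ASCII characters by default
like = conn.execute(
    "SELECT count(*) FROM users WHERE name LIKE 'admin'").fetchone()[0]
assert (eq, like) == (0, 1)
```

MySQL behaves differently: with its default `*_ci` collations, `=` on VARCHAR is case-insensitive while the stored value keeps its original case, which matches kmalloc's "case-preserving" comment.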
00:45:27 adriant: I don't do anything to the external source. We are part of a federation of academic institutions, and allow login from any of their identity providers. 00:46:05 But when a user logs in, they won't have any projects. So invites just assign the logged in federated user to the invited project. 00:47:06 hmmm, in theory then if a shadow user already exists in keystone, Adjutant can potentially 'invite' it and assign a role. 00:47:53 yep. when someone click on the invite link with the token, they log in, ksproj figures out which user they are based on the token they received from keystone and assigns them. 00:48:29 so the identity of the token is what defines who is the invited user, rather than their username/email. 00:50:45 oh ok, so until they login, they don't exist in keystone, but once they do, then ksproj can act 00:51:18 so we'd potentially need a similar variant of the invite process in Adjutant that handles a case like that. 00:51:56 * adriant needs to play with federation 00:53:37 adriant: yep, ksproj -> keystone -> external idp -> keystone (create shadow user) -> ksproj with unscoped token and no roles -> ksproj assign role based on the invite link 00:55:04 knikolla: yeah, I think we can probably do something very similar in Adjutant, and potentially have it dynamic based on what the domain is configured for. 00:55:33 e.g if sql backend, then invite/create user as per normal, if external source, require login and shadow user creation before invite 00:56:30 the Task/action workflow can figure that out, set the token to require certain fields which the gui will interpret in Horizon, and handle that part differently. 00:57:11 or just have two variants of the invite process which you can turn on. 00:57:40 adriant: i'd be for the latter. We're moving away entirely from SQL users, so we'd want to disable that. 00:58:00 and enable the federated invite only. 00:59:01 Cool. 
Yeah, there would be a bunch of work to make it play nice, but I don't think it'd be that painful. And a lot of the code would be the same for both invite processes 00:59:38 and yeah, the way Adjutant's APIs and such are configured you'd effectively have the same API as two different pluggable variants you can turn on. 01:02:17 knikolla: there will hopefully be a lot of work later this year as I refactor a lot of the Adjutant internals, but if you're interested in helping me with features like that, I won't turn you away ;) 01:02:18 adriant: cool. I'd be happy to help if that avoids me having to maintain another tool. 01:02:36 but i'd have to finish this up in the coming month or so. 01:05:39 knikolla: no rush, Adjutant should be nicer to work with when I get done with the refactors I have planned anyway, and trying for feature parity with what you end up with in ksproj as the first step you do would be a sensible plan 02:46:01 wangxiyuan proposed openstack/keystone master: [WIP]Add auto increase primary key for unified limit https://review.openstack.org/576025 05:41:33 Merged openstack/keystone master: Expose duplicate role names bug in trusts https://review.openstack.org/576610 06:24:47 Vishakha Agarwal proposed openstack/keystone master: Added check to avoid keyerror "user['name']" https://review.openstack.org/576433 07:35:26 wangxiyuan proposed openstack/keystone master: Add auto increase primary key for unified limit https://review.openstack.org/576025 08:25:49 lvxianguo proposed openstack/python-keystoneclient master: fix misspelling of 'default' https://review.openstack.org/577368 10:22:35 yanpuqing proposed openstack/python-keystoneclient master: Delete keystoneclient.client.HTTPClient and request https://review.openstack.org/577387 10:26:47 yanpuqing proposed openstack/python-keystoneclient master: Delete keystoneclient.client.HTTPClient and request https://review.openstack.org/577387 11:20:02 hi all 11:20:32 can someone guide me to a good link on how to 
configure keystone-to-keystone federation in openstack ? 11:25:07 ygl: did you see https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#keystone-as-an-identity-provider-idp already? 11:26:25 cmurphy: thanks a lot. I will check it 11:28:03 ygl: i have a blog post too that might be helpful http://www.gazlene.net/demystifying-keystone-federation.html 11:30:07 cmurphy: thanks 11:48:41 Merged openstack/keystone master: Fix duplicate role names in trusts bug https://review.openstack.org/576611 12:00:17 cmurphy: can a LDAP be considered as an IdP ? 12:01:24 cmurphy: if that is the case then , can we say a keystone with an LDAP backend as a federated keystone ? 12:02:16 ygl: only if you're using Active Directory ADFS which provides a SAML endpoint 12:02:34 ygl: LDAP can be used as a regular identity backend for keystone but we wouldn't really call it federated 12:02:50 it's more like a drop in replacement for the sql database 12:08:48 cmurphy: so in the regular AD as identity backend for keystone case. is keystone involved to some extent in managing the authentication ? 12:12:05 ygl: yes, keystone has to accept credentials and pass them on to AD to do the authentication 12:12:55 cmurphy: ahh ! in that sense it is not a true federation. 
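[editor's aside: for the "LDAP as a regular identity backend" case cmurphy describes, the switch is a driver setting in keystone.conf; all values below are placeholders for an assumed directory layout:]

```ini
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
user = cn=keystone,ou=service,dc=example,dc=com
password = REPLACE_ME
suffix = dc=example,dc=com
user_tree_dn = ou=people,dc=example,dc=com
user_objectclass = inetOrgPerson
user_id_attribute = cn
user_name_attribute = sn
```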
got it now :) thanks a lot 12:13:04 no problem :) 13:18:37 o/ 14:27:50 o/ 14:33:50 lbragstad[m]: okay first draft is up (https://docs.google.com/spreadsheets/d/1Bu9KIRDn63XGKhJb4vpc2LiXoTSOHcjRbqWHY--5iMQ/edit#gid=0) but I need to double check things and make some 'cosmetic changes' that I implemented half way through the audit 14:34:08 sounds good 14:34:12 if there is anything obvious missing please let me know 14:34:39 but we can use ^^ to map out (tentatively) all of our APIs to role/scope(s) and I'll start making changes to our policy accordingly 14:35:17 cool - i should be able to take a look a little later today 14:36:53 great -- I'll try to get all of my updates in this morning 15:03:10 o/ 15:50:08 kmalloc: what should we do with bind? 15:55:32 elbragstad: ? 15:55:37 Reading up. 15:59:28 elbragstad: not seeing reference to bind? 16:08:39 the fernet token provider doesn't support bind 16:08:44 and it's the only token provider 16:10:05 i'm checking to see what impact that has on x509 16:10:48 bind is, iirc, mostly for krb5 stuff. 16:11:04 it sounds like we need to just drop all the bind functionality 16:11:09 since we have nothing but fernet 16:11:25 https://review.openstack.org/#/c/428388/ 16:11:26 non-api impacting. 16:11:42 yep 16:11:48 let's just drop token-bind code 16:12:17 i can roll a patch shortly 16:12:18 if you want 16:12:47 https://docs.openstack.org/keystone/latest/advanced-topics/configure_tokenless_x509.html 16:12:49 strange... 16:13:09 so - removing bind won't impact ^ 16:13:24 configuring service accounts to authenticate via x509? 16:13:38 should have zero effect 16:13:50 bind was "did the token auth with form X and maintained it" 16:14:02 so we supported x509 via token bind and another patch for authentication via x509 that was unrelated 16:14:17 right, bind added enforcement to the token on top of auth 16:14:23 ah 16:14:53 so we can drop bind code?
16:15:02 writing a patch to do so right now 16:15:03 :) 16:15:10 unless you want to do it 16:15:16 we should totally be able to drop the bind code 16:15:21 i'm in the middle of ripping everything apart 16:15:29 i should be able to work it into my patch 16:15:32 and pull it out later 16:15:32 [the fact that we use fernet only now means we can't even test the bind code] 16:15:42 yeah... 16:15:57 and with most everyone moving to fernet, clearly no one is using it 16:16:27 that would include removing https://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/test_token_bind.py then 16:17:23 how do we want to track that removal? bug or removed blueprint? 16:17:36 technically it was never deprecated directly 16:17:45 it was deprecated indirectly via UUID token deprecation/removal 16:17:55 removed-as-of-rocky 16:18:07 ok 16:18:08 the code is unused [it's a unit test, totally synthetic] 16:18:25 the uuid deprecation explicitly called bind out. 16:18:50 it fakes the token and checks against the faked-token 16:19:48 ok - i'm going to lump the removal into the abomination of a patch i have going, then propose it 16:20:00 against master as its own isolated patch 16:20:48 ok 16:35:01 elbragstad: annnnd now the slow part, writing these tests: 16:35:05 https://www.irccloud.com/pastebin/DClGdon8/ 16:35:32 that's usually the most refreshing part 16:36:06 it's a lot of tests and a lot of mechanical work to set them up 16:36:12 because enforce is such a blackbox 18:34:54 Gage Hugo proposed openstack/keystone master: Add functional testing gate https://review.openstack.org/531014 18:41:45 Does anyone know why my ocata openstack install will import ldap users and groups but the groups don't have the member information? From the little I can find online my keystone ldap group config looks correct and it is showing up in keystone.
18:58:39 Lance Bragstad proposed openstack/keystone master: Introduce new TokenModel object https://review.openstack.org/559129 18:58:39 Lance Bragstad proposed openstack/keystone master: WIP: Simplify the issue token code path https://review.openstack.org/545450 18:58:40 Lance Bragstad proposed openstack/keystone master: Cleanup keystone.token.providers.common https://review.openstack.org/577507 18:59:46 * elbragstad hits the "Save Game" button 19:05:23 kmalloc: it'll need more polish.. but it works 19:08:52 Gage Hugo proposed openstack/keystone master: Add functional testing gate https://review.openstack.org/531014 19:32:23 Lance Bragstad proposed openstack/keystone master: Remove token bind capabilities https://review.openstack.org/577524 19:32:37 kmalloc: ^ 19:33:05 +6 -423, nice 19:33:06 Lance Bragstad proposed openstack/keystone master: Remove token bind capabilities https://review.openstack.org/577524 19:33:29 yep 19:36:06 elbragstad: thnx 19:36:32 elbragstad: i should have tests for @protected replacement 19:36:35 today 19:36:40 sweet 19:36:41 then i can finally start moving apis. 19:36:48 that's exciting 19:36:58 this one has been painful. 19:37:17 i swear... "Rocky - the release of painful refactors" 19:37:43 we can talk about the massive code shuffle: ->keystone.subsystem 19:37:46 for the stuff that isn't .api 19:49:51 elbragstad: want to see something awesome 19:50:22 * elbragstad pushes his glasses up 19:50:46 http://flask.pocoo.org/docs/1.0/testing/#testing-json-apis 19:51:12 now i just need to go figure out why we're hard locked on sub 1.0 flask in openstack 19:51:23 because THAT right there is badass.
19:52:12 huh - nice 19:52:23 i bet that'd help with the plugin stuff we have 19:53:57 yep 20:06:52 elbragstad: https://review.openstack.org/#/c/577534/ 20:07:09 turns out flask has been < 1.0 for ~3 years in our g-r 20:07:19 so, before u-c/l-c work 20:07:37 huh - nice 20:07:50 elbragstad: hopefully we can get that landed and i can lean on the new testing bits, just so nice to be able to use a context manager for a client 20:07:56 rather than the wonky stuff we currently do 20:08:10 with self.app.client() as c: do X 20:08:26 gee whiz, that would make our test cases a LOT simpler 20:08:37 no more ".admin_request" things. 20:09:06 [well maybe, depending on if .client goes through the whole middleware stack] 21:11:55 Gage Hugo proposed openstack/keystone master: [WIP] Add functional testing gate https://review.openstack.org/531014 22:04:25 Lance Bragstad proposed openstack/keystone master: WIP: Remove KeystoneToken object https://review.openstack.org/577567 22:04:34 kmalloc: ok - that one is really messed up ^ 22:04:44 i think i have a bunch more unwinding to do :( 22:05:56 heh 22:05:58 =/ 22:06:05 somehow oslo.policy is failing because we changed from using KeystoneToken to TokenModel 22:06:12 which is an internal-only object 22:06:33 but it gets set on the request context in keystonemiddleware, which we override 22:07:41 elbragstad: @protected is weird. 22:08:01 show me a traceback, i bet i can show you why it's failing 22:08:02 :P 22:08:15 [if you have one that isn't just a 401] 22:08:43 though my guess is that you're not extracting a sane bit of info about the subject-token. 22:08:49 and booom splody 22:09:01 http://paste.openstack.org/show/724160/ 22:09:38 line 195 of that trace 22:09:47 the credentials dictionary contains a 'token' key 22:09:56 yep 22:10:03 which is an instance of KeystoneToken, which inherits from dict 22:10:10 but...
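[editor's aside: the Flask 1.0 testing niceties linked above boil down to the `json=` kwarg on the test client plus `Response.get_json()`, and `test_client()` working as a context manager; a minimal standalone example (not keystone code, `/echo` is made up):]

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/echo", methods=["POST"])
def echo():
    # request.get_json() parses the JSON body the test client sent via json=.
    return jsonify(received=request.get_json())

# The context-manager client being wished for above:
with app.test_client() as client:
    resp = client.post("/echo", json={"name": "keystone"})
    data = resp.get_json()
```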
22:10:40 don't look *there* 22:10:56 the policy_dict has a NoneType that is unexpected 22:11:38 mm 22:12:12 i didn't think i added anything that would put that in the credentials dict via middleware? https://review.openstack.org/#/c/577567/1/keystone/middleware/auth.py 22:13:57 no, but the credentials dict comes from auth context 22:14:06 automagically 22:14:43 yeah... 22:14:44 hmmm 22:15:57 i'll dig into this a bit more... i need to go through everything anyway and reorganize bits of it, it's all a mess still 22:16:28 i'm not sure if it will be easier to do after @protected is gone, but somehow i think it will be 22:16:36 possibly 22:16:55 just because you have more knowledge of what the dict is going to end up looking like 22:17:03 yeah... 22:17:15 it's all pretty opaque 22:17:40 though, just for your benefit you may want to add in a debug in the call to policy controller.enforce 22:17:57 and print the creds/policy_dict/etc 22:18:02 yeah 22:18:06 and see what changes between pre-patch/post patch 22:18:36 i can only comment on the amount of time it's taken me to write a comparable bit of code that isn't full of suck 22:19:03 at least a week or two, and this time we have usable docstrings. 22:19:05 but... 22:19:11 it's still super opaque 22:19:20 just less "fog" and more "black box" 22:20:07 http://paste.openstack.org/show/724161/ 22:20:11 ^ before and after 22:20:26 * kmalloc waits for paste to load 22:20:39 uhoh 22:20:47 whelp. 22:20:59 project_id is None... 22:21:05 wtf 22:21:28 so, something must not be grabbing that from TokenModel property? 22:21:31 properly? 22:21:35 that is my guess. 22:23:02 it helps that i've been digging around in that code for the last week 22:23:03 :P 22:23:19 so, you need to look at what is setting things in the policy_dict. 22:23:29 ack - good call 22:23:31 i *think* domains are magical callback related things.
22:23:31 thanks kmalloc 22:23:45 so, it may not even be hitting @protected in the normal way 22:24:13 ++ 22:24:20 that helps 22:24:21 * kmalloc is going to be sad when your fix lands before enforcer does and the rebase hell 22:24:22 :P 22:24:30 we'll see 22:24:34 hehe. 22:24:34 there is a lot of cleanup left 22:24:42 i wanted to get to the oslo.limit stuff this week 22:24:45 yeah. we're on colliding paths. 22:24:55 just because flask touches everything. 22:24:59 yeah 22:25:05 so does KeystoneToken apparently 22:25:14 and refactoring the entire token provider api 22:25:15 yeah, sorry i wrote a ton of KeystoneToken code 22:25:16 :P 22:25:19 my bad. 22:25:29 meh - that's not the bad parts 22:25:46 oh 22:25:52 HAH i bet i know what is going on 22:25:57 lol 22:26:08 the policy_dict has an explicit flatten 22:26:17 you're passing a non-dict item in 22:26:18 yeah - it's in utils 22:26:26 *OF_COURSE* oslo_policy is exploding 22:26:28 i was *just* looking at that 22:26:43 it's not using the token key, it just can't figure out wtf to do with it 22:27:03 you need to make that flatten code do a token render into the policy dict 22:27:05 which is why the creds dict looks odd 22:27:06 and you should be fine 22:27:19 look in common.authorization 22:27:39 yep 22:27:40 nice 22:27:41 it might be set somewhere in there. 22:27:55 line 76 22:28:26 i'll go chase that in a day or two 22:28:59 i have a serious appreciation for the insanity of the RBAC enforcement model we built 22:29:05 if i could totally re-write it, i would 22:29:18 but... i don't think i get that luxury 22:29:34 mostly because of how "our published policy" works.
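[editor's aside: the "explicit flatten" failure mode being diagnosed above can be sketched in a few lines — this flattener is modeled loosely on keystone's dict-flattening helper, not copied from it, and `TokenModel` here is a stand-in class:]

```python
def flatten_dict(d, parent_key=""):
    """Flatten nested dicts into dotted keys; non-dict values pass through."""
    out = {}
    for k, v in d.items():
        key = "%s.%s" % (parent_key, k) if parent_key else k
        if isinstance(v, dict):
            out.update(flatten_dict(v, key))
        else:
            out[key] = v  # a rich object survives as one opaque value
    return out

class TokenModel:  # stand-in for the internal-only token object
    project_id = "proj-123"

creds = {"token": TokenModel(), "user_id": "u-42"}
flat = flatten_dict(creds)
# The TokenModel instance is left as an opaque blob: there is no
# 'token.project_id' key, so a policy check against it sees None.
# The fix discussed above: render the token into plain keys first.
rendered = dict(flat, **{"token.project_id": creds["token"].project_id})
```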
22:32:09 thanks again for the help kmalloc 22:32:32 i'm going afk for a bit 00:25:09 Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639 00:25:10 Morgan Fainberg proposed openstack/keystone master: Bump flask requirements https://review.openstack.org/577586 00:31:04 Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639 00:31:05 Morgan Fainberg proposed openstack/keystone master: Don't replace the whole app just the wsgi_app backing https://review.openstack.org/577587 08:17:46 yuqian proposed openstack/keystone master: Update overriden to overridden https://review.openstack.org/577612 16:41:14 Morgan Fainberg proposed openstack/keystone master: Don't replace the whole app just the wsgi_app backing https://review.openstack.org/577587 16:41:15 Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639 16:41:34 elbragstad: ^ *save game* 16:41:40 getting closer! 
16:42:23 Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639 19:03:42 Gage Hugo proposed openstack/keystone master: [WIP] Add functional testing gate https://review.openstack.org/531014 21:32:06 Morgan Fainberg proposed openstack/keystone master: Make it easy to identify a 404 from Flask https://review.openstack.org/577627 21:34:17 Morgan Fainberg proposed openstack/keystone master: Make it easy to identify a 404 from Flask https://review.openstack.org/577627 21:34:18 Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639 21:35:42 Morgan Fainberg proposed openstack/keystone master: Make it easy to identify a 404 from Flask https://review.openstack.org/577627 21:35:42 Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639 21:38:40 Gage Hugo proposed openstack/keystone master: [WIP] Add functional testing gate https://review.openstack.org/531014 22:03:19 Gage Hugo proposed openstack/keystone master: [WIP] Add functional testing gate https://review.openstack.org/531014 22:31:06 Gage Hugo proposed openstack/keystone master: [WIP] Add functional testing gate https://review.openstack.org/531014 23:14:38 Gage Hugo proposed openstack/keystone master: [WIP] Add functional testing gate https://review.openstack.org/531014 00:14:06 Gage Hugo proposed openstack/keystone master: [WIP] Add functional testing gate https://review.openstack.org/531014 14:14:53 Morgan Fainberg proposed openstack/keystone master: Fix policy_dict subject token population https://review.openstack.org/577655 14:20:43 Morgan Fainberg proposed openstack/keystone master: Make it easy to identify a 404 from Flask https://review.openstack.org/577627 14:22:06 Morgan Fainberg proposed openstack/keystone master: Make it easy to identify a 404 from Flask https://review.openstack.org/577627 15:01:38 
Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639 15:29:25 Morgan Fainberg proposed openstack/keystone master: Implement base for new RBAC Enforcer https://review.openstack.org/576639