16:00:09 #startmeeting keystone
16:00:10 Meeting started Tue Dec 18 16:00:09 2018 UTC and is due to finish in 60 minutes. The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:12 #link https://etherpad.openstack.org/p/keystone-weekly-meeting
16:00:14 agenda ^
16:00:15 The meeting name has been set to 'keystone'
16:00:16 o/
16:00:21 o/
16:00:22 o/
16:00:23 o/
16:00:27 o/
16:00:45 wow - better attendance than i was expecting :)
16:01:27 o/
16:02:08 o/
16:02:11 ok - cool
16:02:21 we have quite a bit on the agenda today - so we'll go ahead and get started
16:02:38 #topic Upcoming Meetings/Holidays
16:02:50 the next two tuesdays fall on holidays
16:03:16 so i'm not expecting to hold meetings unless folks *really* want to have one while celebrating
16:03:55 otherwise - we'll just pick things back up on January 8th
16:04:10 i'll send a note after the meeting with a reminder to the openstack-discuss mailing list
16:04:26 #topic Oslo Releases
16:04:39 kind of related to the holiday schedule
16:04:52 bnemec sent a note yesterday about oslo releases
16:04:52 #link http://lists.openstack.org/pipermail/openstack-discuss/2018-December/001047.html
16:05:15 I'm just about to propose the releases for this week.
16:05:22 this is just a reminder that if anyone needs anything from an oslo library for the next few weeks, we'll have to do it soon
16:05:44 o/
16:05:44 there isn't anything on my radar
16:05:54 Holding off on privsep because it makes a significant change and I don't want to deal with it over the holidays, but I don't think that will affect keystone.
16:06:04 ack
16:06:09 * knikolla having a headache, but i'll lurk around.
16:06:30 yeah - we don't use privsep i don't think
16:06:53 bnemec: does oslo have something like a feature freeze period? I wonder if we can have an oslo.limit 1.0 release in Stein.
16:07:28 wxy|: We do have feature freeze, and it's a bit earlier than the OpenStack-wide feature freeze.
16:07:31 Let me find the details.
16:07:46 related to ^ - i pinged jaypipes and johnthetubaguy a few days ago about syncing up on that work
16:08:29 bnemec: Thanks, I'll pay attention to the deadline.
16:08:32 lbragstad: ++
16:08:36 prior to berlin, there was a bunch of good discussion on the interface between nova and oslo.limit, but i don't think it has moved since then
16:08:43 For Rocky, Oslo's feature freeze actually coincided with Keystone's: https://releases.openstack.org/rocky/schedule.html
16:09:19 Which reminds me I probably need to get that on the Stein schedule.
16:09:25 Full details are here: http://specs.openstack.org/openstack/oslo-specs/specs/policy/feature-freeze.html
16:09:27 if we ask again, we might not get a response this close to the holidays, but it might be worth putting together an action item for the beginning of January to follow up with the nova team on that stuff
16:09:59 lbragstad: it's good to have.
16:10:07 wxy| want to take that one with me?
16:10:26 lbragstad: sure.
16:10:34 #action lbragstad and wxy| to follow up with nova after the holidays about movement on oslo.limit + nova integration
16:10:38 cool
16:10:44 anything else oslo library related?
16:11:00 no, thanks
16:11:07 thanks wxy|
16:11:11 #topic Previous Action Items
16:11:37 i think the only previous action item we had was to get a spec up for protecting the admin role from being deleted
16:11:40 which cmurphy has done
16:11:45 #link https://review.openstack.org/#/c/624692/
16:11:55 up for review if you're interested in taking a look ^
16:12:11 #topic Reviews
16:12:18 does anyone have reviews that need eyes?
16:12:31 or anything in review they want to call attention to specifically?
16:12:47 https://review.openstack.org/624972
16:13:07 that's the last bit of all the docs work, right?
16:13:25 all of the admin guide consolidation/reorg yes
16:13:31 i'm still working on the federation guide
16:13:34 also interested in people's thoughts on https://review.openstack.org/623928 and the related bug report
16:13:44 awesome - thanks for picking up the remaining consolidation bits cmurphy
16:14:36 i'll take a look at 623928 today
16:15:14 any other reviews people want to bring up?
16:16:06 ok - moving on
16:16:14 #topic System scope upgrade cases
16:16:36 cmurphy and i have been going through the system scope changes for the projects API
16:16:42 and it got me thinking about another case
16:16:43 #link https://review.openstack.org/#/c/625732/
16:17:05 i wanted to bring this to the rest of the group to walk through the upgrade, just so we're all on the same page
16:17:38 ^ that review is specific to groups (not projects), but it's applicable
16:17:50 if you look at #link https://review.openstack.org/#/c/625732/1/keystone/common/policies/group.py
16:18:14 you can see that I'm deprecating the previous policies and implementing the system reader role as the default
16:18:41 but... that only happens if a deployment sets ``keystone.conf [oslo_policy] enforce_scope=True`` and it's False by default
16:20:17 for example, the policy for get_group would be '(rule:admin_required or role:reader)'
16:20:40 since deprecated policies are handled gracefully by oslo.policy in order to help with upgrades
16:21:17 so - if enforce_scope=False (the default), the get_group policy would be accessible by something with the `reader` role on a project
16:22:21 what exactly happens when a policy is deprecated? if the operator hasn't changed any defaults and policy is in code, does the new check string take effect or the old check string?
16:22:36 good question
16:22:39 they are OR'd
16:23:07 for example, the current policy for get_group is rule:admin_required
16:23:31 and if the new policy ends up being `role:reader`, it will be OR'd with the deprecated policy
16:24:13 this allows operators a window of time to assign users roles for the new default, or make adjustments so that they can either 1. consume the new default or 2. copy/paste the old policy and maintain it as an override
16:25:39 so both policies will be allowed - so it's essentially more permissive while it's being deprecated?
16:26:04 with that specific example, it is
16:26:20 but... the new policy could be something like `role:reader AND system_scope:all`
16:27:05 which wouldn't allow someone with the reader role on a project to access the get_group API
16:28:07 i'm not a huge fan of encoding scope checks into check strings...
16:28:38 and it's redundant with scope_types... but after thinking about this for a week or so.. i'm not sure there is another way to roll out new policies in a backwards compatible way?
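
The deprecation pattern described above roughly corresponds to a policy definition like the sketch below. This is an illustrative reconstruction, not the exact code in the review under discussion; the check strings, description, and deprecation metadata are placeholders.

    # Illustrative sketch of the deprecation pattern discussed in the meeting.
    from oslo_policy import policy

    # The old default that existing deployments may still rely on.
    deprecated_get_group = policy.DeprecatedRule(
        name='identity:get_group',
        check_str='rule:admin_required',
    )

    group_policies = [
        policy.DocumentedRuleDefault(
            name='identity:get_group',
            # New default: the reader role with a system-scoped token. The
            # system_scope:all portion duplicates scope_types below, which is
            # the redundancy lbragstad mentions, but it keeps the policy safe
            # while enforce_scope defaults to False.
            check_str='role:reader and system_scope:all',
            scope_types=['system'],
            description='Show group details.',
            operations=[{'path': '/v3/groups/{group_id}', 'method': 'GET'}],
            # While deprecated, oslo.policy evaluates the new check string
            # OR'd with the old one unless the operator overrides the policy.
            deprecated_rule=deprecated_get_group,
            deprecated_reason='get_group now supports system scope and '
                              'default roles (placeholder wording).',
            deprecated_since='S',
        ),
    ]

Until the deprecated rule is removed, deployments that have not overridden the policy effectively get '(role:reader and system_scope:all) or rule:admin_required', which is the graceful upgrade window discussed next.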
16:28:56 at least while we have enforce_scope=False by default
16:29:10 if enforce_scope=True, then `role:reader` alone would be a bit safer
16:29:27 i'm not sure either
16:30:12 so far, the best answer i have (which may not be the best) is...
16:31:05 1. deprecate the old policies 2. the new policies have the scope check in the check string :( 3. when we go to remove the old deprecated policies in Train we can clean up the policies to remove the scope checks from the check string
16:31:58 Do they need to be OR'd? In general I would expect the new rule to just take effect if the operator hasn't overridden the old one.
16:32:10 step 3 would also include a change for keystone to set ``keystone.conf [oslo_policy] enforce_scope=True``
16:32:36 bnemec good question
16:33:50 the reason why we OR'd them is because if the new rule is less permissive, then we want to make sure operators have time to adjust assignments accordingly so that users can continue to access that API
16:34:25 otherwise, it would be possible for operators to break users on upgrade if the new, more restrictive rule is used exclusively
16:34:38 How will they know they need to change it though? Is there a warning if it only passes the old, less restrictive rule?
16:34:58 * lbragstad grabs a link
16:36:00 http://git.openstack.org/cgit/openstack/oslo.policy/tree/oslo_policy/policy.py#n678
16:36:13 we run that code when we load rules in oslo.policy
16:37:33 Yeah, I guess that tells them it will change, but it doesn't necessarily tell them whether that's a problem.
16:38:02 yeah - that gets tough since it depends on how they have roles set up
16:38:06 I guess they would test by explicitly setting the new policy so the deprecated one isn't OR'd in and see if anything breaks.
16:38:19 yes - exactly...
16:38:38 which is how i've had to write some of the new keystone protection tests
16:39:35 Yeah, it would be nice if we could be smarter with the warnings, but that would make the logic even more complicated and I already have a hard enough time following it. :-)
16:39:51 =/
16:40:10 there certainly isn't a shortage of edge cases here
16:40:27 if people want to discuss this though, we can take it to office hours, too
16:41:32 my other question was about the organization of #link https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:implement-default-roles
16:41:33 Sounds good. I don't want to hold up the meeting any more than I already have.
16:41:49 lbragstad: I will soon update my patches for system scope too.
16:42:18 vishakha awesome - that's another reason why i wanted to talk about this as a team, since we have multiple people doing the work
16:43:14 vishakha: ah, good to know, I'll review yours as well.
16:43:15 lbragstad: Yes, will ping you for any doubts related to scopes. Thanks for the updates.
16:43:36 i know we have bugs open for the majority of this work
16:43:49 wxy|: thanks :)
16:43:53 #link https://bugs.launchpad.net/keystone/+bugs?field.tag=policy
16:44:36 but - as the people who have to review this stuff... is there anything I (we) can do organizationally to manage the chaos/review queue
16:47:03 or make it easier for people to review in general
16:47:20 not sure there's much that can be done about sheer volume
16:47:41 yeah - that's the answer i was afraid of
16:48:04 Maybe talk to dhellmann. He does a lot of high volume review submission.
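
For context on the upgrade and testing discussion above, here is a small standalone sketch (not keystone's actual protection tests) of the behavior being described: the deprecated check string is OR'd into the registered default, and explicitly setting the new check string, as an operator would in a policy file, drops the fallback so only the new rule applies. The rule name and check strings are illustrative.

    # Standalone sketch of the deprecated-rule behavior discussed above.
    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    CONF([], project='example')  # parse an empty config so defaults are usable

    enforcer = policy.Enforcer(CONF)
    enforcer.register_default(policy.RuleDefault(
        name='identity:get_group',
        check_str='role:reader and system_scope:all',
        deprecated_rule=policy.DeprecatedRule(
            name='identity:get_group', check_str='role:admin'),
        deprecated_reason='illustrative only',
        deprecated_since='S'))

    project_admin = {'roles': ['admin'], 'project_id': 'p1'}
    system_reader = {'roles': ['reader'], 'system_scope': 'all'}

    # With the defaults in place, oslo.policy warns at rule-load time that the
    # default is changing and evaluates "new check_str or old check_str",
    # so both sets of credentials pass.
    assert enforcer.enforce('identity:get_group', {}, project_admin)
    assert enforcer.enforce('identity:get_group', {}, system_reader)

    # Roughly simulate an operator override of the rule: the deprecated
    # fallback is no longer OR'd in, so only the new, more restrictive
    # check applies. This is the "set the new policy and see if anything
    # breaks" test mentioned in the meeting.
    enforcer.set_rules(policy.Rules.from_dict(
        {'identity:get_group': 'role:reader and system_scope:all'}),
        overwrite=True, use_conf=False)
    assert not enforcer.enforce('identity:get_group', {}, project_admin)
    assert enforcer.enforce('identity:get_group', {}, system_reader)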
16:48:12 i wasn't sure if people wanted to team up on specific resources, or have a priority queue of some kind that applied focus to certain areas
16:48:26 bnemec oh - good call
16:48:52 he does but it's usually distributed across projects
16:49:25 so not so much review load on one team
16:49:47 i just sympathize with people looking at this and not knowing where to start - so if there is anything i can do to make that easier, i'm all ears
16:50:07 Yeah, but maybe he has some tricks for distributing it. I know they had a team split up the work for the python3-first stuff.
16:52:48 something we can talk about after the meeting, too
16:53:22 few minutes left and there are two more topics, so we can move on for now
16:53:35 #topic Tokens with tag attributes
16:53:41 ileixe o/
16:53:46 o/
16:54:28 It's about the RFE which returns the token with a 'tag' attribute.
16:54:45 the tag comes from the project
16:55:23 we are using the tag for oslo.policy
16:55:47 for example, allowing get_network only for a matching tag
16:56:18 so - do you have custom policy check strings that are written to check the token directly?
16:56:34 in credential - yes
16:56:38 e.g., %(target.token.project.tag)
16:56:56 yes, similar
16:57:52 I heard of system_scope for the first time here.. maybe it could be used for our purpose, I'm not sure
16:58:19 do you have a more detailed example of why you need to override get_network?
16:58:36 We have two general scopes
16:58:40 'dev' and 'prod'
16:58:48 every project is included in one of them
16:58:59 and we also have two networks, dev_net and prod_net
16:59:08 they are provider networks
16:59:17 so 'dev' and 'prod' are not projects or domains?
16:59:22 yes
16:59:25 it's just
16:59:32 a scheme for our in-house
16:59:40 codebase
16:59:50 we want to make some general scope to restrict resources
16:59:56 and for now I found 'tag'
17:00:23 sure - are you available to discuss this after the meeting in -keystone?
17:00:34 yes sure
17:00:43 ok - cool, meet you over there
17:00:52 thanks for the time everyone
17:00:55 #endmeeting
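
As a purely hypothetical illustration of the tag-based idea raised at the end of the meeting: if a project tag were exposed to policy enforcement (which is what the RFE asks for), a generic oslo.policy check could compare it against a tag on the target resource. The attribute names 'project_tag' and 'network:tag' below are invented for the sketch and are not an existing keystone or neutron interface.

    # Hypothetical sketch of a tag-matching policy check; the credential and
    # target attribute names are made up for illustration.
    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    CONF([], project='example')

    enforcer = policy.Enforcer(CONF)
    enforcer.register_default(policy.RuleDefault(
        name='get_network',
        # Generic check: the tag on the target network must match the
        # caller's project tag carried in the credentials.
        check_str='project_tag:%(network:tag)s'))

    creds = {'roles': ['member'], 'project_tag': 'dev'}  # token from a 'dev' project
    dev_net = {'network:tag': 'dev'}
    prod_net = {'network:tag': 'prod'}

    assert enforcer.enforce('get_network', dev_net, creds)
    assert not enforcer.enforce('get_network', prod_net, creds)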