18:00:39 #startmeeting keystone
18:00:40 Meeting started Tue Mar 29 18:00:39 2016 UTC and is due to finish in 60 minutes. The chair is stevemar. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:42 \m/ (-v-) \m/
18:00:44 The meeting name has been set to 'keystone'
18:00:52 #link https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting
18:00:54 agenda ^
18:01:12 #topic Keystone RC status
18:01:32 we actually have 2 release blockers, reported yesterday :(
18:01:33 #link https://bugs.launchpad.net/keystone/+bugs?field.tag=mitaka-rc-potential
18:01:42 morgan, do you have the PG stuff under control?
18:01:49 migration fails for long running deployments, and one postgres bug
18:01:51 postgresql
18:01:55 ayoung: huh?
18:02:13 ayoung: the migration 88 one needs unit tests
18:02:29 the migration 91 one i haven't had time to explore/figure out what is going on
18:02:29 and the migration 091 one needs eyes
18:02:35 I'll take it
18:02:47 so, if anyone else wants to take a look, please do !
18:02:47 stevemar: do we not have anything like this automated?
18:02:57 fwiw i can't reproduce the 091 issue for the life of me
18:03:15 crinkle: you are using postgres for your db?
18:03:17 91 might be PGSQL only
18:03:18 stevemar: yep
18:03:22 hm.
18:03:37 i am sure ayoung will have better luck
18:03:40 this might be another "long running environment issue" =/
18:03:48 stevemar, link to where it is failing?
18:03:55 might be something in the log
18:03:55 o/
18:04:00 and it might be a DB version
18:04:11 shaleh: not really, this is because the deployment was using havana and kept upgrading... his db didn't have some constraints apparently
18:04:17 morgan, We would see that error if the migration had already run
18:04:30 i wonder if the issue is with this: use_labels=True
18:04:38 in the select
18:04:41 (missed the start - this is the role name issue, I assume?)
18:04:54 the way the migration works is it makes a join, and the cols have names like user_password and local_user_password, but at the end, it drops the password column
18:04:56 stevemar: sounds like something we should have floating though. Install the release, put the vms to sleep, wake them up near the end for a round of testing.
18:04:56 henrynash: no the password one
18:05:02 oohhhh
18:05:06 hi, here now
18:05:08 henrynash: the role_name one is mostly under control [just needs unit tests]
18:05:16 henrynash: there are 2 bugs: https://bugs.launchpad.net/keystone/+bug/1562934 and https://bugs.launchpad.net/keystone/+bug/1562965
18:05:16 Launchpad bug 1562934 in OpenStack Identity (keystone) newton "liberty -> mitaka db migrate fails when constraint names are inconsistent" [Critical,In progress] - Assigned to Morgan Fainberg (mdrnstm)
18:05:16 \o
18:05:16 stevemar, I would have preferred we had done the drop in a separate migration, in the future
18:05:17 Launchpad bug 1562965 in OpenStack Identity (keystone) "liberty -> mitaka db migrate fails on postgresql 091 migration" [Undecided,New] - Assigned to Adam Young (ayoung)
18:05:19 morgan: ok
18:05:41 one is 086, the other is 091
18:05:48 stevemar: 088
18:05:51 oops, 088*
18:06:05 one has a fix, the other not so much
18:06:41 stevemar, let me get a link to the failure
18:06:44 crinkle: the originator is prometheanfire, he's been hanging out in the -keystone channel and has been very helpful in testing out patches
18:07:20 stevemar, I think that is spurious
18:07:30 stevemar, are we seeing the failure in a gate job?
18:08:03 ayoung: neither of these show in the gate
18:08:08 morgan, its not an error
18:08:14 morgan, he was running these by hand
18:08:19 ayoung: nope, not in the gates
18:08:23 the migration is not designed to run multiple times
18:08:33 I discussed with him last night
18:08:33 ayoung: oh 91 was a multiple run issue?
18:08:36 yep
18:08:37 i know 88 wasn't.
18:08:43 morgan, that may be
18:08:45 ok lets close 91 as "invalid"
18:08:49 will do
18:08:49 if that is the case.
18:08:55 I'll paste in the chat fragment
18:09:08 yeah, migrations are not always idempotent
18:09:18 easiest way to close bugs -- mark them as invalid
18:09:42 I thought we had a job running with other dbs e.g. postgres
18:10:01 samueldmq: we do, but the originator has had the same db since havana
18:10:06 should be good to have with the dbs we are supposed to support
18:10:08 samueldmq: we do. the issue with 88 isn't related to pgsql, it just happens that the first person to upgrade was on pgsql
18:10:34 he has all sorts of constraints that we may not look for now, since we squashed migrations
18:10:42 the lesson learned here is "do not assume the constraint/index names are known, always programmatically find them"
18:10:49 yep
18:10:53 morgan: ++
18:11:03 morgan: so that is a real change…we should add this to developing.rst
18:11:07 is he trying to upgrade havana -> liberty?
18:11:17 henrynash: I was thinking the same thing
18:11:26 samueldmq: no, every release he does the db migration
18:11:32 henrynash: i think we should prob. add to developing.rst - it's not really a "change" it's something we kindof ignored in the past
18:11:37 he's been running the same deployment since havana :)
18:11:46 morgan: agreed
18:11:58 Do we have an upgrade path from Havana?
18:12:01 and if he hits it with a havana deploy... this bug is likely to hit a significant number of deploys
18:12:08 Anyway, he had 8 users in his database...I think he's going to be OK
18:12:10 stevemar: oh, that's good he's been running since there
18:12:14 ayoung: not directly
18:12:27 ayoung: it's not direct - it's "if you deployed havana and updated through current"
18:12:38 havana -> mitaka directly is not supported
18:12:38 ayoung: he upgrades his db every 6 months... but his initial db is from havana
18:12:43 ah
18:13:01 so anyone who hasn't done a wipe/redeploy from scratch since havana would be hit by this bug
18:13:06 ok...anyway, I think he got this error trying to debug the other, and doing so by hand
18:13:09 regardless of the db (except HP cause mongo)
18:13:12 stevemar: so some constraints have the old names ?
18:13:24 samueldmq: likely
18:13:36 stevemar: I didn't quite catch the issue ... maybe we made a mistake in our migrations ?
18:13:38 samueldmq: maybe the constraints were renamed in the squash, who knows
18:13:49 stevemar: like we intended to change the constraint but didn't ..
18:13:57 we good?
18:13:59 stevemar: the constraints were named differently depending on if they were automatically named or explicitly named
18:14:05 stevemar: yes, squash is something very very hard to do
18:14:07 it's probably sqlalchemy changed how it assigns names.
18:14:13 stevemar: do we really need to do it ?
18:14:18 stevemar: and that automatic name is different based on sql-alchemy, mysql versions, pgsql versions, etc
18:14:30 bknudson: maybe, good point
18:14:33 …and the squash removes the old code where the constraint was created by hand
18:14:33 samueldmq: ^
18:14:51 samueldmq: the squash is to make upgrading / maintaining migrations with oslo.db updates easier.
18:15:15 basically we had a lot of cruft that did things the "old" way and likely this bug would have still occurred as we fixed migrations to be more modern
18:15:30 since the code would have had to be significantly refactored anyway
18:15:47 the reason nova started squashing was because their unit tests were taking forever.
18:15:57 bknudson: that too.
18:16:05 maybe we should put explicit names on everything
18:16:13 not even trust sqlalchemy naming
18:16:13 samueldmq: we do now.
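[Editor's note: the "never assume constraint names are known" lesson discussed above can be sketched with a small self-contained example. This is illustrative only — keystone's real migrations use SQLAlchemy against MySQL/PostgreSQL, and the table, column, and function names below are hypothetical, not keystone's schema.]

```python
import sqlite3

# Auto-generated constraint/index names differ across backends and versions,
# so a migration should discover the name at runtime instead of hard-coding
# the literal it happened to see on one deployment.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE local_user (id INTEGER PRIMARY KEY, user_id TEXT UNIQUE)"
)

def find_unique_index(conn, table, column):
    """Return the backend-assigned name of the unique index on `column`."""
    for row in conn.execute("PRAGMA index_list(%s)" % table):
        name, unique = row[1], row[2]
        cols = [r[2] for r in conn.execute("PRAGMA index_info(%s)" % name)]
        if unique and cols == [column]:
            return name
    return None

# The name is backend-generated (e.g. 'sqlite_autoindex_...' here); a
# migration would drop *this* index, whatever it is called, not a guess.
index_name = find_unique_index(conn, "local_user", "user_id")
```

With SQLAlchemy the same idea is usually expressed through its `Inspector` (`get_unique_constraints`, `get_foreign_keys`), which returns the names the backend actually assigned.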
18:16:31 yep
18:16:36 don't expect a name to be known still
18:16:41 programmatically find it
18:17:17 it prevents typos in names from impacting your migrations/wedging things/etc
18:17:18 morgan: makes sense
18:17:35 so, please review: https://review.openstack.org/#/c/298402/ and if someone wants to write unit tests, go ahead :)
18:17:57 i'm slammed in meetings all afternoon, but if no one picks it up, i may
18:18:04 if no one writes unit tests, i'll plan to do it tomorrow.
18:18:06 or tonight
18:18:11 i think morgan is busy too
18:18:18 i just can't commit to doing it until then
18:18:25 yeah, understandable
18:18:32 I can do it
18:18:38 okie dokie, i think we beat this horse enough
18:18:52 #topic cleaning up identity.core
18:18:55 rderose: ^
18:18:57 you're up!
18:19:02 cool
18:19:04 In working on shadow users, one thing that I noticed was that the backends for identity were referencing code in the core.
18:19:12 For example, sql and ldap backends were calling the filter_user() method in core.py
18:19:20 To me, this is problematic for a couple reasons.
18:19:33 Separation of concerns. The core needs to know about the backend interface in order to dynamically load the plugins, but the backends shouldn't know anything about the core or other higher level modules.
18:19:49 The core (manager) is concerned with business logic, while the backends (drivers) are concerned with saving/retrieving data from the backend.
18:20:11 other programming languages don't try to put a bunch of classes in one file.
18:20:12 The second issue is circular dependency. Backend code could reference code in the core, as well as other higher level modules inside identity, and those methods could call methods in the backend; thus creating a circular dependency.
18:20:39 redrose: is it just that the manager is being used as a useful place to have shared code
18:20:39 My recommendation is to move the interface code out of core.py and into a separate module under backend, e.g.
18:20:40 that is a Newton thing
18:20:46 henrynash: ++
18:20:47 right?
18:20:48 yeah
18:20:49 bknudson: if by "other" you mean Java. Sure :-) but most allow it.
18:21:02 - backend/interface.py or backend/base.py
18:21:17 so...core is supposed to be an interface.
18:21:18 That way both the core and backends only depend on the abstraction. The design would be more maintainable, less tightly coupled.
18:21:36 the circular deps are an artefact that we put the interface into the core file
18:21:49 redrobot: so manager should be the only one that should ever call -> driver (core)
18:21:54 henrynash: it creates a dependency from the backend to the core
18:21:56 not redrobot, rderose
18:22:11 a driver should be able to call another driver, and cycles should not be an issue. We could split the driver interface out of core
18:22:13 redrose: sure, understand the issue
18:22:19 seems reasonable. What is the negative?
18:22:23 ayoung: no. a driver should never call another driver
18:22:25 ayoung ++
18:22:25 ayoung: ever
18:22:32 i always found it a bit weird that the manager and the interface were in the same spot
18:22:40 ayoung: a driver should be allowed to call a manager method
18:22:50 morgain: agreed
18:22:51 how can a driver call another driver?
18:23:00 morgan, so we have places where, when we delete a user, we clean up role assignments.
18:23:05 That stuff should be in manager
18:23:16 not sure why we still have vestiges
18:23:17 ayoung: manager -> manager is typically where we have had it
18:23:18 morgan: but a driver shouldn't call a manager method (circular dependency)
18:23:23 why would a driver need to call another driver?
18:23:25 redrtose: if you agree that a driver can call a manager method….isn't that creating the dependency you are concerned about?
18:23:32 we've done a lot of cleanup on that front
18:23:34 morgan, ++ What if we want one backend to be SQL and the other KVS?
18:23:46 henrynash: didn't agree with that part
18:23:53 redrose: ok, thought not!
18:23:57 actually... where are we having a driver -> manager call?
18:24:11 amakarov: both sql and kvs will depend on the abstraction; not the core
18:24:14 because iirc i did a ton of work to make sure all cross-manager calls were in the manager layer
18:24:24 morgan, I've tried to find an example of it myself without success
18:24:26 rderose, I'll give you a code review on that one...lets leave it at that
18:24:28 drivers should not be able to call managers
18:24:33 they don't perform business logic
18:24:34 sql and ldap both call filter_user() in the core
18:24:39 samueldmq: yeah sorry, you are right
18:24:44 rderose: that should be moved then
18:24:47 the backends shouldn't have been calling filter_user anyways
18:24:49 filter_user shouldn't be on the manager
18:24:53 it should be the manager that handles filtering user
18:24:54 if the driver needs to call it
18:24:59 I don't want identity/backends/common
18:25:22 ayoung: why not, it's only common to the backend
18:25:27 if it is business logic, it should be in the manager layer
18:25:30 the drivers don't have a way to call the manager
18:25:31 bknudson: ++
18:25:31 rderose, there should be nothing common to the drivers
18:25:32 yeah, not big on the common file
18:25:39 all the common code should be in manager
18:25:49 if there is common code between LDAP and SQL we are doing it wrong
18:25:58 you'd have to pass the manager into the driver in order to be able to call methods on it
18:26:01 ayoung: or they are functions that exist elsewhere
18:26:11 bknudson: ++
18:26:12 ayoung: because they are truly more generic.
18:26:14 ayoung: drivers shouldn't call manager methods. we don't need a dependency between driver and manager
18:26:26 rderose: right, so make the manager do the filter
18:26:27 OK, this one is on my radar. rderose we can work through this. Want to move on?
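[Editor's note: one of the options floated in this discussion is putting filter_user on the driver base class so drivers never reach back into the manager/core module. A toy sketch of that shape — these class names and the in-memory driver are illustrative, not keystone's actual code:]

```python
import abc

class IdentityDriverBase(abc.ABC):
    """Abstract driver interface; also hosts driver-shared helpers."""

    @staticmethod
    def filter_user(user_ref):
        """Strip sensitive fields before a user ref leaves the driver."""
        if user_ref:
            user_ref = dict(user_ref)      # don't mutate the caller's dict
            user_ref.pop("password", None)
        return user_ref

    @abc.abstractmethod
    def get_user(self, user_id):
        """Return a (filtered) user dict for user_id."""

class InMemoryDriver(IdentityDriverBase):
    """Toy driver standing in for the SQL/LDAP backends."""

    def __init__(self, users):
        self._users = users

    def get_user(self, user_id):
        # Drivers call self.filter_user; no import of the manager needed.
        return self.filter_user(self._users[user_id])

driver = InMemoryDriver({"u1": {"id": "u1", "name": "alice", "password": "s3cret"}})
print(driver.get_user("u1"))  # {'id': 'u1', 'name': 'alice'}
```

The alternative discussed above — the manager doing the filtering after the driver returns — keeps the same one-way dependency; either way the driver never calls up into the manager.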
18:26:33 driver only needs to depend on the abstraction
18:26:34 rderose: i think that is the simple solution
18:26:55 or, .filter_user is a base method on the driver parent class
18:27:05 and the drivers are allowed to call self.filter_user
18:27:28 this is too big for a meeting.
18:27:31 morgan, that would be my suggestion. filter_user should be a base method
18:27:35 rderose: you are 100% right, drivers don't call managers
18:27:38 managers call drivers
18:27:40 morgan: okay, but still would like to remove the backends interface out of core
18:27:43 rderose: you feel like you've god enough to move forward?
18:27:51 got*
18:27:53 stevemar: yeah
18:28:01 rderose: cool :)
18:28:06 rderose: shuffling location of driver interface is 100% fine (just leave a stub with deprecation warning for a cycle)
18:28:13 have a patch out there, please send me your feedback
18:28:16 thanks
18:28:17 anyone interested, please comment on https://review.openstack.org/#/c/296140
18:28:19 moving the backends interface out of core makes sense just to clean up the dependencies.
18:28:32 bknudson: sounds good
18:28:45 next topic
18:28:47 rderose: suggest: .base or .driver_base
18:28:52 #topic summit topics
18:28:53 rderose, don't go crazy on this
18:29:00 samueldmq, can you publish your POC for Policy from last summer?
18:29:00 ayoung: okay :)
18:29:11 i see i forgot to add the link
18:29:12 #link https://etherpad.openstack.org/p/keystone-newton-summit-brainstorm
18:29:17 I didn't see it in a review
18:29:17 ayoung: well, I am not sure I still have things
18:29:23 samueldmq, yes you do...
18:29:30 ayoung: most patches are abandoned, but I can try to retrieve
18:29:39 i have an alternative suggestion for dynamic policy
18:29:42 ayoung: what do you want specifically ? the code ?
18:29:43 samueldmq, I didn't see the abandoned ones
18:29:46 samueldmq, yes
18:29:46 lets use Apache Fortress
18:29:52 the POC for the middleware
18:30:04 breton: worth discussing at the summit for sure
18:30:15 breton, you don't realize the amount of inertia in Openstack
18:30:18 i know that we already had a chat about it, but there was a chat with Fortress author and a company that backs Fortress and they might be willing to implement things required for keystone
18:30:38 like group-based authorization (for federated use case)
18:30:39 so we kill keystone things other than authn ? and use apache fortress?
18:30:45 breton, I'm not going to hold my breath
18:30:53 nope
18:31:06 Fortress is a completely different topic.
18:31:09 i won't be at the summit, but there will be a person to whom i will delegate the task of explaining how things work, with a POC
18:31:15 the fortress author was in atlanta pushing this, but things have moved significantly since then
18:31:21 Right now, we need to figure out how to sync policy files
18:31:43 Fortress will just lead to nothing being done. Nothing against the technology
18:31:48 the problems are on the OS side
18:32:19 It's a complete mismatch..lets not go that way
18:32:36 ayoung, why not?
18:32:38 believe me, if we could build a system around LDAP, I would have done so
18:32:47 i think we're better off making authorization pluggable, so people could use fortress if they want
18:32:52 breton, we need a spec :)
18:32:52 but we don't have to enforce it
18:32:53 stevemar, ok, stop
18:33:01 we have a very real topic to discuss
18:33:01 there is no problem of syncing policy files in fortress ;)
18:33:04 and it is not Fortress
18:33:08 or any other rewrite
18:33:25 don't leave us waiting in suspense?
18:33:36 stevemar, look at the etherpad from the agenda
18:33:43 we have two approaches we can take right now
18:33:49 1 use Puppet etc to manage policy
18:33:54 2 distribute via Keystone
18:34:02 I kinda like 2,
18:34:09 and we did already approve the spec
18:34:17 there is also #3
18:34:19 * breton ducks
18:34:19 I kinda like 1
18:34:42 ayoung: i am in the CMS manages policy files camp
18:34:45 jamielennox: ++
18:35:06 so...I can go with either...
18:35:19 both have pros and cons which I tried to lay out
18:35:21 and i know i got overridden on this previously, but the old dynamic policy spec assumed there would be a lot of continuous updates which doesn't seem to be the case anymore
18:35:25 so, if we go puppet, here is the deal
18:35:30 jamielennox: ++
18:35:34 ayoung: if 1, what do we do ? make our policy API usable and advise people to use it ,
18:35:36 ?
18:35:43 so if you're updating something infrequently i'd rather not have hundreds of api nodes polling into keystone
18:35:45 if we make a change to a policy file, we have a little bit of a sync workflow to clear up
18:35:50 samueldmq, that is one choice, yes
18:35:58 samueldmq: +++++
18:36:03 :)
18:36:08 the reason to stick with the policy API even if we update via Puppet is for queryability
18:36:09 people have the tooling for this, and updating this file and bouncing the appropriate service is really easy
18:36:27 bonus points if you improve oslo policy to reload the file if it notices the change
18:36:29 jamielennox, that was my feeling, which is why I backed off dynamic
18:36:34 jamielennox: yep.
18:36:35 ayoung: true. It would be nice to know what a user/role/etc could do currently
18:36:36 jamielennox: it already does
18:36:38 so, what would the workflow look like
18:36:50 stevemar: awesome
18:36:52 right now, do we expect people to have an external policy repo?
18:36:55 jamielennox: oslo.policy doing a mtime check and reload isn't bad.
18:36:58 somewhere that Puppet syncs out of?
18:37:08 * ayoung using puppet as shorthand. means all the deployment tools
18:37:17 ayoung: i think most people template in the CMS
18:37:18 morgan: was thinking inotify/fnotify/whatever the current thing is
18:37:22 jamielennox: same thing
18:37:29 jamielennox: implementation detail
18:37:39 ayoung: where it comes from is irrelevant. Being able to ask what the current policy allows would be nice.
18:37:41 morgan, right now, in RDO I have a blank slate. They only use the RPM based policy defaults.
18:37:52 shaleh, it is not irrelevant. It is part of the workflow
18:37:57 shaleh, because we have a repo
18:38:13 and pulling policy out of the Keystone policy API is a viable approach, but does it work with Puppet?
18:38:15 ayoung: ok i think what shaleh is saying is it doesn't matter what is picked "central repo, templated, etc"
18:38:24 ayoung: the workflow could be anything at this point
18:38:26 why not just write out a tool that runs on the policy files and is able to list user capabilities
18:38:28 etc
18:38:30 ?
18:38:36 I think the issue is that while we can write some tools that validate policy, if we are not in control of distribution we can never (easily) provide tools that set policy to some customer requirement
18:38:37 samueldmq, did that already
18:38:38 samueldmq: that is a possibility
18:38:44 samueldmq: but it is somewhat limiting
18:38:56 we didn't even needed to force people to use keystone API with their CMS
18:39:01 need*
18:39:03 so, we have a bit of a mapping problem, too
18:39:10 the Keystone API maps policy to endpoints
18:39:19 henrynash: by we there you mean keystone and i think thats a distribution problem not a keystone problem
18:39:20 but a single server might implement multiple endpoints
18:39:25 henrynash: as long as Keystone is provided a policy file, the service side is happy. Where it comes from does not matter to keystone
18:39:36 and we could have a breakage where the policy file is different for endpoints on the same server
18:39:43 I'd like to find a way to lock this down
18:40:00 and then, either puppet pulls from Keystone, or Puppet pushes to the keystone policy api
18:40:12 I need to give openstack-puppet some guidance here
18:40:31 ayoung: what does the push to the keystone-api do? [what is the reason for the push] - just outline it
18:40:34 ayoung: as jamielennox and shaleh said above, maybe we should just not care about this
18:40:36 ayoung: what exactly is your concern here?
18:40:38 ayoung: i think that's out of our control
18:40:39 it's easier for puppet to work with files than REST but it can reasonably do either
18:40:40 shaleh, jamielennox: when we have 100s of fine-grained roles, some common between services, some not, some tied to endpoints, some not……I think deployers will need a set of tools very different than we can provide today
18:40:41 morgan, for queryability:
18:40:44 crinkle: ++
18:40:45 provide the api, and let the cms decide where to put each thing
18:40:51 what policy file is endpoint xyz using
18:41:01 Horizon needs that
18:41:06 so lets take a step back.
18:41:10 ayoung: why does Horizon need it?
18:41:10 what if... what if...
18:41:17 consul?
18:41:19 :-)
18:41:24 we use the prefix bit like we have support for
18:41:25 * shaleh smacks samueldmq
18:41:39 shaleh, it modifies the UI based on the capabilities of the user. It uses policy to do that
18:41:45 and we can recommend folks deploy a single policy file ?
18:41:57 morgan, too big a jump
18:42:00 tried that already
18:42:04 and it is a good idea
18:42:06 ayoung: because we fail to provide an api that lets people ask what is ALLOWED not what is in a file
18:42:08 but we already do that today?
18:42:13 morgan: yea, not sure how you slurp it together from all the services
18:42:14 with ['compute']
18:42:28 morgan, there are conflicts on the internal rules
18:42:33 "is_admin" for example
18:42:34 jamielennox: ++ that's the issue we hit when trying the unified policy thing
18:42:50 morgan, it is a distinct possibility, but remember, big tent
18:43:02 so doing this for nova and glance is do-able
18:43:06 on a somewhat related note there's a proposal from nova to have the default rules in their code base, and then generate their sample policy.json from the code.
18:43:10 i think this goes back to the services defining their policy in code (default) and a single override.
18:43:14 bknudson, I think that is OK
18:43:14 similar to how config options are handled
18:43:18 bknudson: yeah that is the nova thing.
18:43:24 in some cases we can't even know what a user is able to do by looking at the policy
18:43:32 like nova has some scope checks in the code
18:43:41 bknudson, I actually like that their approach gives a sane "here is where you find the project on the VM object"
18:43:44 bknudson: i can see the advantage to that
18:43:52 samueldmq: scope checks are weird regardless.
18:43:59 the scope checks are what we want to be able to tweak
18:44:11 er...role checks
18:44:15 ayoung: yeah
18:44:19 the role checks are what we want to be able to tweak
18:44:31 so if we only want to look at the roles
18:44:39 it should be very easy to write a tool for that
18:44:40 shouldn't it ,
18:44:42 ?
18:44:48 so...we are agreed that the general approach is to use Puppet etc, and that we will lay down support for making that clear?
18:44:58 ayoung: i think it's the best option
18:45:11 ayoung: it means we can leverage tools already existing
18:45:14 people like policy in CMS
18:45:20 I think the deploy tool is the best way to distribute policy files
18:45:21 morgan, ayoung: as you know, I really don't like that we are defining scope checks in code, unless they can be totally overridden by the policy file…..too restrictive
18:45:23 ayoung: i'm in support of helping the cms tools
18:45:24 what changed since last week?
18:45:25 ayoung: and what shaleh said
18:45:26 etc
18:45:31 morgan, OK. So, what happens if someone uploads a policy file to Keystone
18:45:41 should that be definitive, or is that an error?
18:45:43 ayoung: i'd deprecate the API
18:45:49 should the CMS pull from Keystone?
18:45:49 ayoung: same thing that happens today? not much
18:45:51 morgan, we can't
18:45:58 morgan, we need the queryability still
18:45:58 ayoung: we can.
18:46:11 why not a tool ? :(
18:46:17 morgan, no, remember, we don't control all the tools
18:46:18 keystone could easily serve /etc/keystone/policy.json
18:46:21 ayoung: what needs to query policy
18:46:25 ayoung: lets define that
18:46:27 clearly
18:46:29 vs API, maybe a tool is better for a CMS than an API is
18:46:31 get /v3/policy
18:46:32 done
18:46:36 shaleh: hold up
18:46:43 morgan, anything that needs to know a-priori what a user can do. Any user interface
18:46:45 what things need to inspect/query policy directly
18:47:00 morgan, as I said, Horizon does. Right now it holds it in place.
18:47:07 ok lets start from horizon
18:47:07 Now, we could have Puppet sync to there as well
18:47:13 you could ask keystone what the policy is, but another instance of keystone might give a different answer anyways.
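[Editor's note: for the role-checks-only case raised above, the "what can user X do" tool could be as small as this sketch. It understands only the simple `role:X`/`rule:Y`/`""` forms of a policy.json — the real oslo.policy grammar (and/or/not, scope checks, `http:` checks) is much richer, and the policy content below is invented for illustration:]

```python
def allowed_actions(policy, roles):
    """List the policy targets a user holding `roles` would satisfy."""
    roles = set(roles)

    def check(expr):
        if expr == "":                      # empty rule: always allowed
            return True
        if expr.startswith("role:"):
            return expr[5:] in roles
        if expr.startswith("rule:"):        # indirect through a named rule
            return check(policy.get(expr[5:], "!"))
        return False                        # "!" and anything unrecognized

    return sorted(a for a, expr in policy.items() if check(expr))

policy = {
    "admin_required": "role:admin",
    "identity:create_user": "rule:admin_required",
    "identity:get_user": "role:admin",
    "identity:list_regions": "",
}
print(allowed_actions(policy, ["member"]))  # ['identity:list_regions']
```

This is essentially the inference Horizon is described as doing by parsing the JSON; exposing it through oslo.policy (or an API) is the cleaner version of the same computation.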
18:47:13 ayoung: we need to try and resolve Horizon's need
18:47:21 morgan: how about auditing if a given security policy has been implemented
18:47:25 i think if oslo.policy can provide the queryability
18:47:26 it's just not a very friendly way , as it makes Horizon special
18:47:26 the cms can push the policy to horizon too
18:47:29 ayoung: Horizon parsing our JSON is nuts
18:47:45 shaleh, no it is not. Please.
18:47:46 it would be easy to say CMS pushes to the interface location
18:48:06 ayoung: i was trying to make sure i understood it was horizon/ui not osc and cli tools
18:48:12 ayoung: which is why i asked what needed to query
18:48:29 shaleh: horizon should be using oslo_policy, oslo_policy can provide this
18:48:29 morgan, I'm also thinking 3rd party integration, and compliance
18:48:44 jamielennox: agreed
18:48:45 jamielennox, and it does
18:48:46 if it's "server" applications, we *can* expose something via oslo_policy from disk
18:48:51 ok, so not nuts
18:49:03 or whatever backend
18:49:12 i just am making sure we have a clear scope
18:49:14 thats all
18:49:23 jamielennox: I still say Horizon parsing it is a hack. What they really want to know is "what can user X do".
18:49:30 ok...so is the best practice: external file repo, puppet updates, puppet uploads to Keystone?
18:49:38 they have to infer it from our policy which means we always break Horizon
18:49:49 shaleh: you have a bigger issue too
18:49:52 there should be a better way.
18:49:52 shaleh: everyone has to infer that from policy
18:49:55 shaleh: some endpoints are different than others
18:50:03 shaleh: and could be considered "compute" =/
18:50:05 morgan: understood
18:50:06 Understand, if Keystone is not the system of record, we have very little control over policy in the future. We cannot autogenerate, merge, manage, etc
18:50:27 the implied_roles will not be expanded in the policy files, for example
18:50:41 we can't make common rules for multiple servers to consume
18:50:49 I mean, we can, just with offline tools
18:50:58 and maybe that is the desired approach
18:51:04 ayoung: i know the answer, but is keystone the right place for that? (the policy query)?
18:51:06 we don't have any say in what puppet does?
18:51:15 bknudson, I think we have a little
18:51:24 bknudson, puppet-keystone listens to us
18:51:27 ayoung: I concur….are we really saying that keystone should not be the system of record for policy? That should be the conceptual question
18:51:29 if we work with them
18:52:14 henrynash, what is your opinion?
18:52:21 puppet might not be too happy to push something that's not a part of gate testing
18:52:28 ayoung: I think keystone should be the system of record for policy
18:52:32 henrynash, me too
18:52:34 henrynash: i think making keystone the system of record is going to be as much (if not more) work
18:53:04 it didn't sound like nova wanted to give up control of their policy files
18:53:05 and will continue to lag in adoption even if we do everything right. look at introducing new things in keystone and how hard it is to garner the adoption
18:53:14 bknudson: agreed
18:53:14 and other projects do not wa... what bknudson said
18:53:43 bknudson, I've talked it over with them. There were two distinct issues
18:53:48 I think the best solution is to double down on the CMS tools - it's where people (and projects) seem to like policy files.
18:53:55 one is that the binding of project to resource is a Nova decision
18:53:55 morgan: that may be true….supporting old driver versions is a SHED load of work, but we decided to do it since it was the right thing to do (I hope)….we should answer this as to the correct conceptual answer…and if it's lots of work, then we need to plan it over many releases
18:54:11 Ideally, that would be done by code, as policy.json does not buy anything
18:54:20 henrynash: the issue is i think adoption would just stall permanently
18:54:23 the other, though, is what is the intention of the API.
18:54:28 Is it meant to be used by end users or not
18:55:02 so, while they should give good defaults, that is what the deployers should be able to customize
18:55:08 henrynash: i'm with morgan and bknudson on this one, adoption is so slow right now, we're better off helping the CMS tools in the short run
18:55:19 morgan: maybe hard, yes, but if the conceptual answer is correct, (and we see the road blocks coming down the road), then it will only be a matter of time
18:55:20 deployers do customize their policies
18:55:34 stevemar, I have people writing custom policy at customer sites. They need "read only" roles
18:55:43 ayoung: ++
18:55:46 ayoung: we have that spec
18:55:52 which i haven't updated
18:55:55 henrynash: i disagree. i don't think it will ever be a good adoption rate internal to openstack.
18:55:59 ayoung: which is... what jamielennox just said ^
18:56:08 jamielennox, and how are we going to distribute it. Especially if Nova hardcodes the role? They can't
18:56:15 we haven't seen a lot of pushback on the cross-project policy spec. so maybe things can move in a better direction
18:56:15 henrynash: unless we address the clear distribution story first. which CMS tools are already there.
18:56:35 nova's not hardcoding the rules, AIUI they want to generate the policy file in a similar way they generate a config file
18:56:45 rather than maintain it manually
18:56:46 morgan, so I'm fine with CMS as the distro mechanism, but that is not a complete solution
18:57:01 jamielennox: there's some hardcoding going on in projects
18:57:17 which i think is fine because it means it doesn't matter so much if your CMS policy gets out of sync a little because at least there is a real default
18:57:21 i think the best option is improve it, make it so distribution is clear, we have the tools to show how policy works, and supply the interface for horizon [which will still be semi-icky, but eh it'll be better]
18:57:26 stevemar: right - but that's just wrong
18:57:35 once we're there, moving to a more direct api driven system or not would be easier.
18:57:39 jamielennox, that is my understanding, too. So in my view, the nova policy file would be the starting point for custom policy in a deployment.
18:57:43 morgan: if we just provide another CMS - I agree, few will use it, we have to show some of the advantages of keystone being the CMS….
18:57:55 henrynash: as it stands i don't think there is one
18:58:08 ok..so I'll pursue Keystone<->puppet sync
18:58:26 ++
18:58:39 ayoung: you and crinkle can swap reviews
18:58:39 ayoung: why is it bi-directional?
18:58:42 I suspect that the right approach is to validate policy offline, long before you touch puppet or Keystone anyway
18:58:43 ayoung: is there a spec for that? I'm not quite sure what that means?
18:58:56 henrynash, there is an etherpad right now
18:58:59 ayoung: i also want to push on oslo_policy being able to do the lifting for "can you do X" more easily
18:59:00 ayoung: ++ i'm in favor of way better tooling
18:59:02 do you think puppet would accept handling policy files themselves so they can have a read-only role?
18:59:12 #link https://etherpad.openstack.org/p/tripleo-policy-updates
18:59:31 1 minute left
18:59:38 so my concern with puppet managing this sort of thing is that policy files get out of date in the CMS
18:59:46 jamielennox, right
18:59:52 there is always that risk
19:00:05 anyway...I have the general answer I was looking for.
19:00:07 so maybe the fastest solve for that is a .d dir or some way to layer these files that offsets that
19:00:10 our sample policy.json can get out of sync with the code, too.
19:00:20 If you have comments or proposals, update the etherpad
19:00:27 #endmeeting