18:04:56 #startmeeting keystone
18:04:57 Meeting started Tue Jul 28 18:04:56 2015 UTC and is due to finish in 60 minutes. The chair is morganfainberg. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:04:58 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:05:00 The meeting name has been set to 'keystone'
18:05:03 Hi and stuff
18:05:09 hi
18:05:22 yep, I'm here
18:05:27 #topic Reviews needed for List Role Assignments Performance patch
18:05:30 henrynash: lead on
18:05:33 ok, so this is something that samueldmq worked on during Kilo, but we bumped it
18:05:41 https://review.openstack.org/#/c/137202/
18:05:49 henrynash: it is almost 1 year old now :)
18:06:15 although it is great for performance… we really want to do this as well so we can base all code that needs to work out group and inheritance roles on this
18:06:34 so just a plea for some review time so we can get it into L2 hopefully
18:06:47 won't land for L2 unless it's gating today
18:06:50 FYI
18:07:05 ok, so if not L2, then L3!
18:07:07 so assume it's an L3
18:07:33 there are a whole string of patches that follow this that start using this backend filtering
18:07:42 oyez
18:07:50 * morganfainberg waves ayoung to the front of the room.
18:08:00 no need to take up more time here, unless anyone has specific questions
18:08:16 henrynash, do we have hard numbers on the perf issues?
18:08:31 I can probably help validate performance
18:08:40 henrynash: if it's a "perf" reason to do the work, I'd like to see numbers validating the gain/benefit
18:08:44 Been doing a lot of that lately
18:08:45 dolphm: ++ thanks.
18:08:49 dolphm: ++ it'd be great to have some benchmarking on that :)
18:08:57 i have not looked at this review since last week, but i'll get through it after this meeting
18:09:15 ayoung: no… but with the current code, any list of role assignments means we retrieve ALL role assignments in the whole system, and then filter them in the controller
18:09:20 dolphm: let me know if you need a hand on that front
18:09:21 dstanek: thanks
18:09:36 we probably should make sure to include LDAP in there. I had a data set for a large user list... we should probably build up a "to scale" data set
18:09:44 target https://blueprints.launchpad.net/keystone/+spec/list-role-assignments-performance to l3?
18:09:55 ldap assignment is dying a terrible death (cc morganfainberg)
18:10:07 ayoung: ^
18:10:10 samueldmq: this is identity not assignment
18:10:11 yeah, LDAP only for Identity
18:10:19 ayoung: the current code works for both ldap and sql (even though assignment ldap is a lower priority)
18:10:35 bknudson: target for l3 when it lands. let's not try and predict it landing.
18:10:39 users and groups in LDAP, role assignments in SQL is my primary use case
18:10:40 this is assignment, list_role_assignments
18:11:14 bknudson: or if you feel strongly, go ahead and target it - i'll work w/ relmgmt to untarget if needed down the line
18:11:35 morganfainberg, bknudson ++
18:11:40 I don't even touch the identity code
18:11:50 dstanek, I still owe you an LDAP functional test, too
18:11:52 I don't feel strongly either way... wasn't sure how we're tracking.
18:12:06 bknudson: i'm at the point where we should target when it lands, if we can
18:12:28 rather than saying "this will land at X" and then untargeting later
18:12:36 ++ makes it easier to see what's going on at a glance
18:12:41 morganfainberg: and that could be done automatically by our tools
18:13:29 ok so, from my perspective - if we have clear numbers [since this claimed to be perf] i'm fine with this landing in l3
18:13:48 apparently it's not just perf since a bunch of other stuff is depending on it
18:14:00 dolphm, you've been testing speed, but not memory usage, right?
18:14:04 bknudson: yes, it moves the expansion logic to the manager level
18:14:07 and I assume the stuff that depends on it isn't needing it just for perf
18:14:12 Correct
18:14:16 bknudson: ++
18:14:53 Our memory usage is generally not alarming
18:14:54 this might be more of a memory perf issue, if all those lists are duplicated. Might be worth considering for future perf issues
18:15:01 we might hit memory management thresholds
18:15:09 memory usage is mostly fine in keystone atm
18:15:17 good to know
18:15:25 list users had some issues that way, IIRC
18:15:32 it's worth evaluating - but not in this context
18:15:32 before limiting results
18:15:38 as a separate specific evaluation
18:15:44 if we have lots of group/inherited assignments, that would be too bad
18:15:46 since we haven't really done this for keystone in the past
18:16:00 ++
18:16:01 let's not make that a requirement here [since we've mostly been fine]
18:16:32 morganfainberg, "for future ..."
18:16:39 ayoung: yes.
18:16:56 morganfainberg: Ok, that's probably enough on that one
18:17:04 i see no issue with this landing in l3, please review it
18:17:15 I starred it.
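(Editor's note: for readers following along, the performance concern raised above — fetching ALL role assignments and then filtering in the controller — can be sketched roughly as below. The class and function names are illustrative stand-ins, not keystone's actual code.)

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Assignment:
    user_id: str
    project_id: str
    role_id: str


# Stand-in for the assignment table in the backend.
DB = [
    Assignment("u1", "p1", "admin"),
    Assignment("u1", "p2", "member"),
    Assignment("u2", "p1", "member"),
]


def list_all() -> List[Assignment]:
    """Simulates the driver returning every row in the system."""
    return list(DB)


def controller_filter(user_id: Optional[str] = None) -> List[Assignment]:
    """Old approach: fetch everything, then filter in the controller.

    Cost is O(total assignments in the deployment), no matter how
    narrow the requested filter is.
    """
    rows = list_all()
    return [a for a in rows if user_id is None or a.user_id == user_id]


def driver_filter(user_id: Optional[str] = None) -> List[Assignment]:
    """Proposed approach: push the filter down to the backend (in SQL
    this becomes a WHERE clause), so only matching rows come back."""
    return [a for a in DB if user_id is None or a.user_id == user_id]
```

Both functions return the same result; the difference is how much data the backend hands back before filtering, which is why the patch series moves the filtering (and the group/inheritance expansion logic) down out of the controller.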
18:17:17 unless someone has a concern
18:17:44 #topic Request from nova meetup: Document what other projects need to know about keystone
18:17:46 bknudson: o/
18:17:58 the nova midcycle was in rochester last week
18:18:14 so I went to it since it's just on the other side of the facility
18:18:30 they had a few questions, so I suggested I'd write up a doc so they'd be able to refer to it in future
18:18:34 mostly about what's v3.
18:18:53 rather than try to do everything in a review I started an etherpad: https://etherpad.openstack.org/p/keystone-info
18:19:07 which I'll copy into a review for our docs next week.
18:19:16 I think that is split into two pieces; auth_token (which is all Nova should care about) and stuff like what Heat and more workflow related projects need
18:19:19 so if you have anything to add just do it.
18:19:39 V3 auth token needs to be well supported.
18:20:07 for auth_token i almost just need to start again on the docs
18:20:09 nice writeup bknudson
18:20:22 there's so much that's been deprecated or otherwise moved that it gets confusing
18:20:58 nice.
18:21:07 dolphm, this kind of gets at why I wrote up that V2.0 bug
18:21:16 but i agree, there are very few services that should know anything more than what auth_token provides
18:21:21 link in a sec
18:22:16 jamielennox: ++ yep, pretty much auth_token and auth [keystoneauth] is the real limit of what services need to know unless they are heat
18:22:21 or similar orchestration
18:22:34 auth if they need to do independent $things
18:22:38 and most don't
18:22:44 but is there ever a case where a V2 token needs to be properly converted to a V3 token, to include domain values?
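(Editor's note: the v2-to-v3 conversion problem raised here can be pictured with the toy sketch below. The token bodies are simplified stand-ins, not real keystone payloads; the point is that a v2 token carries no domain information, so any conversion has to assume the deployment's default domain id — which is exactly what breaks if a deployer has pointed the default at a different domain.)

```python
# A v2 token knows only user and tenant -- no domains anywhere.
v2_token = {
    "user": {"id": "u1", "name": "alice"},
    "tenant": {"id": "p1", "name": "demo"},
}


def to_v3(v2, default_domain_id="default"):
    """Hypothetical conversion sketch (not keystone code).

    Every v2 user/project must be assumed to live in the default
    domain. If the configured default domain id is not what the
    consumer expects, the resulting v3 scope is wrong -- the issue
    discussed in bug 1477373.
    """
    return {
        "user": dict(v2["user"], domain={"id": default_domain_id}),
        "project": dict(v2["tenant"], domain={"id": default_domain_id}),
    }
```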
18:22:59 or they need to know, let's say, a given project's hierarchy to do quota enforcement
18:23:01 :)
18:23:03 and that would be what i would add to that etherpad, yes we explain domains but be explicit that nova should never care about it
18:23:24 we have lots of requests asking how should i handle domains, and the answer is to ignore them
18:23:29 ok so let's get that info added to the etherpad
18:23:34 https://bugs.launchpad.net/keystone/+bug/1477373
18:23:35 Launchpad bug 1477373 in Keystone "No way to convert V2 tokens to V3 if domain id changes" [Undecided,New]
18:23:38 Launchpad bug 1477373 in keystone "No way to convert V2 tokens to V3 if domain id changes" [Undecided,New] https://launchpad.net/bugs/1477373
18:23:46 ayoung: wait what?
18:23:47 bots are taking over
18:23:52 ayoung: that doesn't make sense at all
18:24:03 ayoung: if domain id changes?!
18:24:03 morganfainberg, we discussed at the midcycle.
18:24:13 morganfainberg, from "default" to some UUID
18:24:21 if the default domain changes...
18:24:28 if they are changing the default domain...
18:24:37 things are going to get weird
18:24:38 ok... let's say we have a setup where there is a lot of V2 tooling
18:24:50 but they are switching over to LDAP for end users
18:24:59 so the LDAP goes into a domain specific backend
18:25:01 and we allow that https://github.com/openstack/keystone/blob/master/etc/keystone.conf.sample#L791
18:25:33 get rid of your v2 tooling
18:25:37 are we saying that the default domain always *MUST* have a domain ID of _default_?
18:26:07 ayoung: i'd say the default domain needs to have an id that isn't changed post deployment
18:26:09 ayoung: we can't, as we provide a config option for it
18:26:19 bknudson, people have been building to V2 for a while. Getting rid of it will take a while
18:26:24 unless you're really willing to do all sorts of insane things
18:26:33 morganfainberg, "isn't changed post deployment" doesn't quite cut it
18:26:41 we don't communicate the domain id in any manageable way
18:26:58 ayoung: honestly i want to push people towards v3 more and more
18:27:07 the two options we discussed at the mid cycle were "query the config options via a web api" and "put hints in the v2 tokens"
18:27:18 but... maybe the "hints" part is no longer an issue...
18:27:20 * morganfainberg doesn't remember this convo
18:27:23 Because v2 clients are not domain aware, and the default domain is a bit special in v3
18:27:30 Not*
18:27:36 I was thinking PKI, but with Fernet it is not really a problem... we can always tell them to get v3 tokens...
18:28:09 some v2 tooling is still necessary from my experience. i've tried to use OSC but it does not have all commands available in the older CLIs
18:28:29 browne: ok so v2 keystone should have no impact on the other CLIs
18:28:30 browne, like what?
18:28:39 browne, I saw that with neutron, but neutron CLI should work with V3 auth
18:28:41 so you're going to switch the domain_id for a couple of operations and then switch it back?
18:28:57 bknudson, neutron misses routers... maybe even subnets
18:28:58 like creating provider networks in OSC. you can do it in neutron CLI but not OSC
18:29:09 browne: this sounds like a bug in OSC
18:29:37 OSC has not been updated to support all commands from the project CLIs
18:29:47 ok... so are we finally at the point where we can say "deprecate V2.0" and we will have an answer for all use cases?
18:29:47 neutron CLI supports v3
18:29:48 morganfainberg: agree, but it becomes an inhibitor to going to keystone v3. i was trying to utilize the multi-domain backend feature
18:29:50 make bugs, people will close them
18:29:50 so let's fix the bug in OSC
18:30:09 browne: this is not a reason we should be holding onto v2 keystone --
18:30:16 if so, and we can honestly kill v2.0 tokens, I'll be very happy
18:30:18 * morganfainberg is really getting tired of the auth being tied to the crud interface
18:30:29 if the CLIs aren't deprecated then they need to support v3 auth
18:30:35 morganfainberg: does it feel dirty?
18:30:57 lifeless: i am going to avoid a reaaaaaaly long rant on that here ;)
18:31:42 what doesn't work with users in an ldap domain?
18:31:42 OK, so we can tell all of the other services "forget that the v2.0 token API exists." right?
18:31:52 ayoung: that is what we should do.
18:31:52 are the other CLIs not deprecated? i haven't been paying attention
18:32:05 bknudson, ah, so if you do an install, the default domain gets all the service users
18:32:14 dstanek: i think we're the only one officially saying "no really stop using it and go to osc"
18:32:23 morganfainberg: bummer
18:32:24 dstanek: I haven't seen any other project deprecate their CLI
18:32:27 and then you create a new domain, put ldap in there, and make that the default domain for all the V2 tooling out there
18:32:47 essentially you've got service users in your default domain
18:32:48 it's more than just the osc thing. It's the keystone.rc files
18:33:04 so let's work on making v2 auth die.
18:33:09 browne: multi-domain backend will have problems with a few apis. if you are using list_users (all users), there is no such operation in the multi-domain backend. It is always list_users for a domain
18:33:13 maybe we should move on .. otherwise we won't cover the other topics in the meeting :(
18:33:29 haneef: that isn't really a reason to hang onto v2
18:33:32 at least the ones listed in the meeting page :)
18:33:42 haneef: in fact, i'd say that has no real bearing on v2 vs v3
18:33:45 the only people that need service users in the default domain now are the projects that mix their keystone_authtoken parameters, and if you switch it over to v3 auth then the service doesn't work
18:33:57 there aren't many of those left (none off the top of my head)
18:33:58 bknudson, but if you have a third party app that only knows about V2 auth, and your LDAP users will want to use that app, you make the default domain other than the one set up by the installer with the domain id of _default_
18:33:58 samueldmq's waiting to be grilled :-)
18:34:20 jamielennox: and i just sent another email to the ML asking people to avoid doing that again
18:34:21 dstanek: eheh
18:34:25 morganfainberg: i saw
18:34:25 i saw some more cases where it was being implemented
18:34:34 that was the context behind that bug. I'll try to make it clearer
18:34:40 ok
18:34:43 ayoung: I think you're going to have problems anyway... what if I have 2 LDAP domains?
18:34:57 bknudson, I'm just trying to solve the common case
18:35:00 ok i think we're kind of off topic here and into the weeds
18:35:07 obviously won't work for everyone
18:35:21 update https://etherpad.openstack.org/p/keystone-info with more info
18:35:28 and it'll eventually go in keystone developer docs
18:35:33 bknudson: ++
18:35:33 then you can update it there
18:35:39 please update the etherpad
18:35:42 ok let's move on
18:35:47 #topic keystoneauth release? K2K in Horizon is waiting for the K2KAuthPlugin, should we add it in python-keystoneclient so we can speed up things?
18:36:14 jamielennox: this is a question for you -- how soon can we get keystoneauth ready to go?
18:36:18 are there library deps? thought that needed saml2
18:36:23 is this a 1.0 release?
18:36:24 we need to dump oslo.config and there were a couple minor things left
18:36:26 or 0.1 or something?
18:36:38 ayoung: nope
18:36:41 bknudson: this depends on keystoneauth 1.0 *or* needs to go into keystoneclient
18:36:43 how do the docs look?
18:36:45 i haven't been active on it for a week or two, but i posted a review the other day
18:36:45 umm
18:36:46 cool
18:36:56 https://review.openstack.org/#/c/205753/
18:36:56 so the biggest blocker is oslo.config
18:37:01 probably #2 is docs
18:37:17 that review just removes everything to do with plugin loading and so removes the stevedore and oslo.config deps
18:37:25 ah
18:37:26 cool
18:37:36 neutron is being a real stick in the mud with keystoneauth reviews recently
18:37:37 this would mean that to do all the automated plugin loading, for now you would still need to use keystoneclient
18:38:06 jamielennox: we need to find out how to do automated plugin loading without keystoneclient and without oslo.config
18:38:07 but it would mean we could unblock the k2k stuff and the federation stuff that doesn't have a great story around automatic loading anyway
18:38:20 jamielennox: let's not walk backwards to requiring ksc for things
18:38:28 and figure out what we want to do about loading as a separate step
18:38:33 make it a separate project
18:38:50 keystoneauth-config
18:38:51 i don't want to force people back to keystoneclient, which is what that would do
18:38:52 morganfainberg: right - i'm not saying we keep it in ksc permanently, just get ksa moving along
18:39:10 jamielennox: we can't release a 1.0 if everyone keeps using keystoneclient and has no path forward
18:39:21 we need a path forward for 1.x
18:39:33 bknudson: the failure isn't actually neutron's fault, there is some issue with git cloning the project i haven't figured out yet
18:39:52 jamielennox: is that the job failure i keep seeing?
18:39:53 jamielennox: thanks for looking into it
18:39:56 if we still need to use ksc, what's your opinion regarding the k2k plugin?
18:40:11 rodrigods: we aren't going to require keystoneclient
18:40:12 morganfainberg: particularly if the auth loading stuff is going into another project then it would be fine to release 1.0 without it
18:40:23 jamielennox: then i want that other project ready as well
18:40:43 jamielennox: i am not willing to do a "we might or might not do this but we don't know and we aren't fixing anything but depending on this other thing"
18:40:53 sorry, i'd rather scrub keystoneauth completely
18:40:57 morganfainberg: how about rodrigods just depend on existing ksa with some red flag in mind?
18:41:08 rodrigods: the k2k plugin will not change ....
18:41:11 i asked about http://logs.openstack.org/88/201088/5/check/gate-tempest-dsvm-neutron-src-keystoneauth/4439cd5/logs/devstacklog.txt.gz#_2015-07-28_13_55_04_277 in infra earlier today, but haven't gotten a chance to follow up on their hint
18:41:19 so i don't expect your patches would change one day
18:41:28 rodrigods: and we will just wait for the ksa release
18:41:39 morganfainberg: stevemar the same could be done with moving osc to ksa.
18:41:58 morganfainberg: stevemar because right now federated plugins are quite messy and it's just better to start using ksa.
18:42:02 or a ksa-saml2 repo even
18:42:11 if all we're doing is moving logic out but everyone has to keep doing it the old way and we have no real knowledge of where loading is going
18:42:18 we're doing it wrong.
18:42:25 anything else the rest of us need to do?
18:42:42 so let's solve where loading is
18:42:50 and break the dep on oslo.config *and* not force keystoneclient
18:42:57 I kind of like the separate project for config / loading since then it would shield users from having to know which keystoneauth version they need.
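(Editor's note: the "automated plugin loading" being debated here — resolving a configured auth plugin name to a plugin class and its options — can be pictured with the toy registry below. The plugin classes and `load_plugin` helper are hypothetical, not keystoneauth's real API; in the real libraries this resolution goes through setuptools entry points via stevedore, with options declared through oslo.config, and the proposal is to move that layer out into a separate project.)

```python
class Password:
    """Toy stand-in for a password-based auth plugin."""
    def __init__(self, auth_url, username, password):
        self.auth_url = auth_url
        self.username = username
        self.password = password


class Token:
    """Toy stand-in for a token-based auth plugin."""
    def __init__(self, auth_url, token):
        self.auth_url = auth_url
        self.token = token


# In the real loading layer this mapping comes from entry points,
# so third-party plugins (e.g. a K2K or SAML2 plugin) can register
# themselves without the core library knowing about them.
_REGISTRY = {"password": Password, "token": Token}


def load_plugin(name, **options):
    """Resolve a configured plugin name to a plugin instance."""
    try:
        cls = _REGISTRY[name]
    except KeyError:
        raise ValueError("unknown auth plugin: %s" % name)
    return cls(**options)
```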
18:43:06 bknudson: i'd support it
18:43:08 ok, i'll look at getting another project spun up for the loading as i've had multiple people suggest it
18:43:13 jamielennox: ++
18:43:15 thanks
18:43:19 OSC is very keen on a separate project
18:43:23 well dtroyer anyway
18:43:27 #topic Reseller
18:43:32 htruta: o/
18:43:36 morganfainberg: o/
18:43:39 please review
18:43:41 ^
18:43:42 that should be quick
18:43:46 yes, please
18:43:50 ++
18:43:59 should we set a target to L3?
18:44:01 https://review.openstack.org/#/c/157427/
18:44:03 we don't have it yet
18:44:03 start from here
18:44:04 again
18:44:09 don't target until landing
18:44:22 don't try and predict landing, target when it has landed
18:44:52 morganfainberg: k
18:44:54 anything else on reseller?
18:45:03 that's all
18:45:10 samueldmq, your up
18:45:13 #topic Dynamic Policies
18:45:17 or you're up
18:45:19 hi
18:45:20 either way
18:45:27 where is the spec?
18:45:32 again?
18:45:42 We need to cut the policy overlay piece and go straight to centralizing complete policy files in Keystone. Most deployers are not changing their policy files, but if they do, they want to test locally with the local policy files. Deployers might as well submit the entire tested policy file to keystone.
18:45:43 #link https://review.openstack.org/#/c/197980/
18:45:45 and
18:45:51 #link https://review.openstack.org/#/c/134655/
18:46:11 those 2 are the remaining ones, one for ksmiddleware and another for keystone controlling the cache mechanism
18:46:29 I think 80 should be non-controversial; using HTTP properly
18:46:47 we've got reviews, I updated the spec, everything looked good but dstanek has some concerns on it
18:46:49 it is the 55 one, the client side, that needs the SFE
18:47:29 dstanek: ?
18:47:40 henrynash, so "endpoint_ids" are just the first hack. eventually, we will resolve the urls to endpoint_ids, but we can do it in steps
18:47:49 endpoint_ids are already implemented by the server.
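(Editor's note: "using HTTP properly" for the cache mechanism most likely means conditional requests — the middleware keeps the policy file it last fetched and revalidates it with an ETag, so unchanged policy costs a 304 instead of a full download. The sketch below is illustrative only; the class names and in-process "server" are stand-ins, not keystone or keystonemiddleware APIs.)

```python
import hashlib
import json


class PolicyServer:
    """Stand-in for keystone serving a centralized policy file."""

    def __init__(self, policy):
        self.policy = policy

    def get(self, if_none_match=None):
        body = json.dumps(self.policy, sort_keys=True)
        etag = hashlib.sha256(body.encode()).hexdigest()
        if if_none_match == etag:
            return 304, etag, None  # unchanged: no body sent
        return 200, etag, body


class MiddlewareCache:
    """Stand-in for the middleware side: refetch only when changed."""

    def __init__(self, server):
        self.server = server
        self.etag = None
        self.policy = None

    def refresh(self):
        status, etag, body = self.server.get(if_none_match=self.etag)
        if status == 200:
            self.etag = etag
            self.policy = json.loads(body)
        return status
```

This is also what would let a UI like Horizon know that the policy file it cached still matches the one the endpoint is enforcing.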
18:48:20 basically dstanek's concerns are that, in our current endpoint model, we allow an effective service endpoint to have multiple policies (since multiple interfaces mean multiple ids)
18:48:22 samueldmq and i were discussing today how we could possibly use a list of endpoint ids and pick one for the service
18:48:32 dstanek: please go ahead
18:48:54 dstanek, "harder than you think" — we are aware of how hard it can be, which is why we are going to do endpoint_id first
18:49:32 ayoung: if you allow the service to specify a list of endpoints, which policy do they get if each has a policy defined?
18:49:42 dstanek, are any of your concerns "stop ship" type concerns?
18:50:21 dstanek, I would limit it to 1 endpoint id to start in the authtoken config section
18:50:23 ayoung: i think the model is fundamentally wrong :-( i guess it's possible that we can document an exact setup that would work
18:50:53 a service could, in theory, have multiple config files, one per endpoint on the same server. In practice, they will share a config file
18:51:03 BTW, did anyone check with nova and others to see if the proposal meets their needs?
18:51:30 david8hu: this work of policy distribution is independent of what they have/want
18:51:44 david8hu: we allow policy definition/association, just putting the distribution together now
18:52:06 samueldmq, but they are ultimately the consumers.
18:52:12 david8hu, we've had a distinct lack of response from the other projects on Dynamic policy, with the exception of Horizon, which really needs it
18:52:15 i would rather know if it handles deployer needs/concerns
18:52:19 david8hu: our consumers here are the deployers
18:52:40 dstanek, horizon is currently caching the policy files. It would be most useful to UIs
18:52:55 to know that the policy file they had matched the one the endpoint is using
18:53:01 dstanek: so I've an email draft, but ayoung had some concerns about sending this out at this point
18:53:03 #link https://etherpad.openstack.org/p/centralized-policy-delivery-operators
18:53:27 samueldmq, all other services still need changes to pick this up, don't they?
18:53:40 david8hu, no, that is the point
18:53:42 david8hu: no, that will be a change in ksmiddleware
18:53:48 ayoung: ++
18:53:51 they do not need to change, they inherit the config option from keystone
18:53:51 the right thing to do would be to allow the endpoint id (only 1 and not a list) to be specified for each instance of middleware
18:54:01 not even minor changes?
18:54:03 dstanek, ++
18:54:20 dstanek: ayoung so we don't do url -> discovery?
18:54:23 ayoung: endpoint_ids are my concern as well….. reading the spec doesn't explain to me how endpoint_ids work in this context, what the limitations/complexities might be for a deployer etc. So I have a hard time supporting this as spec'd
18:54:29 and let the deployer choose the id?
18:54:31 david8hu, they need to change to use the oslo.policy library anyway. Beyond that, it is all middleware
18:55:29 samueldmq: i've always said that you have to have a single ID for the middleware to use. i'd rather it just be a policy id, but any id would fix the ambiguity we keep creating
18:55:30 henrynash, I am assuming you are thinking that a single server is backing multiple endpoints with multiple policy files, so how do we get the right one? Short answer, we don't support that to start
18:55:43 we support "you get one for this server. Pick one"
18:56:06 dstanek: the only concern with the policy id is if it's a uuid, meaning you need to upload the data to know what the id is... bad bad bad for deployers
18:56:14 dstanek, if we go with the policy ID, we can't update the policy for an endpoint without restarting the endpoint
18:56:16 ayoung: we just don't explain this in the spec….. other than saying "the deployer can set the endpoint_ids"
18:56:35 ayoung: if the middleware just knows about an id, a service can deploy with multiple pipelines including that middleware with different ids, right?
18:56:36 morganfainberg: another point is that we aren't holding the endpoint/policy association in keystone server that way
18:56:46 morganfainberg: and changing the policy id would require a restart
18:56:52 morganfainberg: endpoint id has the same issue
18:57:06 henrynash, because the goal was to not have to do this... morganfainberg was pushing for completely automated discovery... we want to support that. but please stop asking us to boil the ocean... we are shooting for "actually implementable" here
18:57:10 samueldmq: that's a good point
18:58:02 dstanek, right, the goal here is to migrate the policy management from static files/ansible/config files to centralized in the keystone server
18:58:25 ayoung: I'm just asking for a clear spec…. if the answer is one endpoint, fine… just say so…. what's tough to support is a spec that isn't clear about what is and isn't supported
18:58:29 if that goal is not acceptable to the team... please speak up now. This has been my concern, that we are arguing about details, but have not agreed on the big picture
18:58:43 morganfainberg: should we continue in #keystone? I think we're out of time .. infra folks should start their meeting
18:58:52 we have 1-2 min
18:58:56 ok
18:59:11 ayoung: i could see having a service_id or endpoint_id in the config (but it would be nice to support having multiple instances of middleware) - no lists because that can't work
18:59:28 dstanek, I agree
18:59:33 we'll make that modification
18:59:43 ayoung: I support the principle of a Keystone Policy CMS (that's what this is)…. so it IS the detail for me
18:59:48 ayoung: i'm just wondering if any deployer cares
18:59:50 #endmeeting