18:02:43 #startmeeting keystone
18:02:44 Meeting started Tue May 21 18:02:43 2013 UTC. The chair is dolphm. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:45 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:46 #link https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting
18:02:47 The meeting name has been set to 'keystone'
18:02:48 \0
18:03:08 #topic Havana milestone 1
18:03:20 Just a reminder that our first milestone is on May 30
18:03:24 pluggable token provider!
18:03:27 So the only BP not implemented is the logging one
18:03:28 #link https://wiki.openstack.org/wiki/Havana_Release_Schedule
18:03:40 #link https://blueprints.launchpad.net/keystone/+spec/unified-logging-in-keystone
18:03:44 And it is up for review
18:03:55 that's targeted at m1 ^
18:03:55 #link https://launchpad.net/keystone/+milestone/havana-1
18:04:03 and it's our last m1 blueprint
18:04:12 but it makes me sad, as it once again buries eventlet code deep in Keystone
18:04:13 so we're certainly in good shape for conservative milestone goals
18:04:37 ayoung: from openstack common?
18:05:03 dolphm, yeah keystone/openstack/common/rpc/amqp.py uses weakref to store the context in a WeakLocal object. log.py also does this
18:05:10 if there are any other blueprints that should be moved from m2 to m1 because they're VERY close to being ready, give me a heads up
18:05:43 #topic No downtime database migrations
18:05:55 i've had this on the agenda for a while, but we haven't had time for it :)
18:06:04 there was a summit session on the topic:
18:06:05 #link http://openstacksummitapril2013.sched.org/event/8d559940a2a905c8221ccaec19e91f28#.UZu3byuDQ19
18:06:14 and a subsequent discussion on the mailing list:
18:06:15 hey, missed the first part but I just saw the unified logging stuff
18:06:17 #link http://lists.openstack.org/pipermail/openstack-dev/2013-May/008340.html
18:06:39 i just wanted to make everyone aware (go participate in the mailing list discussion!)
^
18:06:42 dolphm, the problem is that Keystone then needs to run with two different versions of the schema
18:06:55 ayoung: exactly
18:07:03 this isn't something we've ever tackled before in keystone
18:07:35 dolphm, we've probably already broken it for Griz-Havana
18:07:53 I don't think we've got any migrations in H yet.
18:08:02 if other projects go down this road, we'll certainly need to follow; if anyone wants to champion the effort, please follow the progress in other projects and comment on migrations reviews accordingly :)
18:08:20 ayoung: probably so; i see havana as a "let's try this out" cycle for keystone, at least
18:08:31 dolphm, sounds good. I would say, though, that for Keystone, it would probably be better if we had a fallback
18:08:37 it will take a testing infrastructure that we don't have today to make sure it works
18:08:41 say, we can validate tokens, but no database updates allowed
18:08:43 i assume someone is already working on that
18:08:51 and then we can upgrade Keystone while in that mode
18:09:34 If we cached the token revocation lists, we could put the server into a mode to just return those and the certificates
18:10:09 Then we can update Keystone, and, while new tokens can't be issued, existing ones will still work.
18:10:12 ayoung: all dependent on appropriate external testing though, and a stakeholder to champion it
18:10:44 #topic open discussion
18:10:53 short agenda today!
18:11:22 I've finally got the changes in to hopefully fix the eventlet/dnspython problem: https://review.openstack.org/#/c/29712/
18:11:34 dolphm, Jamie is going to try to work with the Kent folks to get the lazy provisioning ready for review
18:11:54 and this other set of changes is to get unit tests working again: https://review.openstack.org/#/c/29670/
18:12:05 ayoung: cool, there's still some identity-api related discussions there first though
18:12:47 there are two discrete api features that are required first; one needs to apply across the entire api and will need to be implemented accordingly (it can't just affect /users)
18:12:47 did we decide which route we're going to take for LDAP domains?
18:13:07 what's the status on the read-only LDAP bp?
18:13:10 anyone working on that?
18:13:15 ayoung: ^ ? it sounded like you were leaning towards merging my patch? can you provide an update?
18:13:18 gyee: link?
18:13:32 didn't brad create one?
18:13:35 I wasn't sure
18:13:45 dolphm, so, while I am leaning toward yours, I told spzala that he got one more chance to get his working.
18:13:47 gyee: i don't see one
18:13:59 the more I think about it, the more I think multiple domains in one ldap is a mistake
18:14:04 dolphm: about the default domain patch, I am working on it ... as ayoung just mentioned
18:14:11 but I would hate to break it for the one person that decided to go that route
18:14:17 ayoung, LDAP is made for domains
18:14:25 gyee, right
18:14:44 gyee, and, I think that if we kept it that way, it makes the other code changes easier as well
18:14:56 namely, split identity into its composite parts
18:15:10 dolphm: the approach I am taking now is that we can't really ignore domain_id, so set domain_id to default in the ignore case
18:15:11 spzala: ayoung: consider merging mine, and then applying some of spzala's work on top of it?
i think there's value in both
18:15:23 spzala: ayoung: we have about a week left for 2013.1.2 i think
18:15:42 dolphm, so if we merge yours, we will break domains in LDAP, right?
18:15:52 I'm ok-ish with this, if you are.
18:15:54 spzala: what do you mean "set domain_id"? provide it to ldap? or provide it to the api?
18:16:06 ayoung: multi-domain support goes away, yes
18:16:08 dolphm: provide it to ldap
18:16:35 dolphm, the backend will have to fill in the domain_id with the default if it is an ignored attribute
18:16:45 so that domain_id is always there ... default or other
18:16:46 spzala: if domain_id is ignored, then provide the default domain to ldap? that doesn't sound like it solves anything
18:16:49 ayoung: agree
18:17:08 spzala: that's also the exact opposite of the behavior i expect from the attribute_ignore lists
18:17:30 dolphm: we can't ignore the domain_id
18:17:41 dolphm: in a way it makes sense that
18:17:44 dolphm, I think we can choose to merge your patch at any point. So, if spzala doesn't get it working, yours gets approved by default
18:17:46 spzala: define "ignore"?
18:17:47 we always have a default domain
18:17:58 spzala: my patch validates the domain_id, and then avoids providing it to the ldap server
18:17:59 I think we want it to work so that if domain_id is ignored then any objects returned by Keystone have domain_id set to the default domain.
18:18:11 bknudson: +1
18:18:22 dolphm, but even if he does get it working, that doesn't mean we have to take it... it just makes it a deliberate decision as opposed to a forced one
18:18:41 and if domain_id is ignored, the ldap server should be treated as domain-agnostic
18:20:17 dolphm: agree.. ignore is 'ignore', but since we at least have a 'default' domain I was thinking to pass it to the server.
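[Editor's note: the semantics bknudson and dolphm converge on above — if domain_id is ignored, don't send it to the LDAP server, but stamp objects coming back out with the default domain — can be sketched as follows. This is illustrative pseudocode for the discussion, not Keystone's actual LDAP driver.]

```python
# Illustrative sketch of the agreed domain_id "ignore" behavior: strip
# domain_id before writing to LDAP (the server has no such attribute),
# and fill in the default domain on every object returned to the API.
# Names here are for illustration only.
DEFAULT_DOMAIN_ID = 'default'
ATTRIBUTE_IGNORE = ['domain_id']


def to_ldap(user):
    """Drop ignored attributes before handing the record to the LDAP server."""
    return {k: v for k, v in user.items() if k not in ATTRIBUTE_IGNORE}


def from_ldap(entry):
    """Treat the server as domain-agnostic: stamp the default domain on
    the way back out so domain_id is always present in API responses."""
    if 'domain_id' in ATTRIBUTE_IGNORE:
        entry = dict(entry, domain_id=DEFAULT_DOMAIN_ID)
    return entry
```

Under this scheme the LDAP server never sees a domain_id, which is what makes a read-only deployment workable, while API consumers still get a valid domain_id in every response.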
18:20:47 dolphm, so, let's give him a day or two to get it working, and agree that a reviewed patch has to be accepted in time to meet the 2013.1.2 deadline
18:20:57 otherwise, we go with your patch
18:21:18 spzala: pass the default domain ID to the LDAP server? what's the LDAP server going to do with it, since there's no attribute for domain_id?
18:21:19 ayoung: thanks :) .. dolphm: hopefully i will update the patch today .. if it works out
18:21:51 bknudson: so by passing, to me, I mean don't "strip off" the domain_id
18:22:16 I see that, that's what we do with ignore
18:22:34 spzala: the LDAP entries have a domain_id attribute?
18:23:29 bknudson, yes they do
18:23:31 bknudson: yes, if I am not wrong
18:24:13 #link https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap/core.py#L389
18:24:37 There too: https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap/core.py#L845
18:24:57 ayoung: that's the case when domain_id isn't ignored.
18:25:09 but what about when domain_id is in the ATTRIBUTE_IGNORE list?
18:25:23 dolphm, are you OK with this approach, or would you prefer that we just go with your patch?
18:25:36 (wrong channel)
18:25:36 ayoung: spzala: for what it's worth, jcannava tested both patches as of about a week ago in a deployment against read-only AD; both worked fine
18:26:15 ayoung: i don't think you can just ignore either patch, both have value at this point for stable
18:26:16 dolphm, um... but I am guessing he had an attribute that he could use for the domain_id
18:26:49 what we want in havana can be another conversation
18:27:09 dolphm, what is it from your patch that would be pulled into spzala's?
18:27:43 I am OK with saying "remove multi-domain", as there is a chance it will affect 0 users
18:28:11 ayoung: validation that a valid domain was specified by the api; including a valid domain_id in all relevant responses back to the api
18:29:02 ayoung: the way i see it, my patch fixes the read-only issue by removing multi-domain support; spzala's patch on top of that could restore multi-domain, and maybe they could be backported together
18:30:21 dolphm, OK... that is a strategy. spzala, can you rework your patch such that it applies on top of dolphm's, and then we only have to decide "dolph's only" or "both"
18:30:40 dolphm, let's #action that
18:30:56 ayoung: sure, that's a good idea.
18:30:57 #action spzala to rework patch such that it applies on top of dolphm's
18:31:13 dolphm, we can accept your patch at any point, I think
18:31:18 any reason to hold off?
18:31:23 not for master
18:31:26 i don't think?
18:31:34 actually, it might make sense to get it in soon
18:31:44 then, if anyone is tracking master, we find out if we've broken them....
18:31:58 tracking master and doing multi-domain ldap
18:32:09 which is definitely no one, but yes :)
18:32:50 bknudson, I +2ed. Feel free to approve if you are comfortable with it
18:33:04 dolphm, OK, so... I think that leads into split identity
18:33:47 i think it does
18:33:47 dolphm, I posted to the mailing list, and am getting positive feedback. Are you OK with the general thrust of it?
18:34:19 ayoung: yeah; at the summit i was just worried the implementation was going to be very difficult, but it sounds like we're making lots of small rational steps in that direction
18:34:45 LDAP and potentially PAM can "plug in" to fill a domain. My guess is that domains and projects would mainly be handled by SQL
18:35:16 so multi-domain support across 2 or more LDAP servers should be do-able, and in a rational manner
18:35:43 is the consensus that projects should be in not-ldap?
18:35:51 dchadwick suggested that we further split projects into projects and assignments
18:36:01 (that's always made sense to me, but i'm not an ldap stakeholder)
18:36:03 dolphm, I think that some people might keep them there for the short term
18:36:18 but I would say that most people would prefer not to have them in LDAP
18:36:40 so for a given domain, we could say that Identity is in LDAP, but projects and assignments are in sql, or
18:36:48 all three are in LDAP
18:36:51 as they are today
18:37:00 Identity is users and groups?
18:37:05 bknudson, yes
18:37:15 projects is Projects and roles
18:37:19 that sure maps to ldap
18:37:26 assignments is the mapping between them
18:38:19 assignments is which users are in which groups?
18:38:34 bknudson, no
18:38:34 because ldap groups are lists of users (by DN)
18:38:56 bknudson, more like which users and/or groups are assigned to which roles
18:39:08 ayoung: Does "split" imply APIs, services and configuration needed for said split?
18:39:12 ok
18:39:23 brich, the web API stays the same
18:39:30 brich, only the backend splits
18:40:05 there might be some ugliness with the unique identifiers, but I think the current scheme can handle it, so there should be no change from a client perspective
18:40:33 ayoung: but under the official API, we could have remotable services? My impression is that that is where Chadwick's thought would go
18:40:49 brich, that would be a step beyond this.
18:41:00 but this work is necessary to implement his, I think
18:41:26 ayoung: agree. just wondered how far you were thinking to go in Havana
18:42:38 dolphm, ok to #action that I update the BP to reflect this? Are we agreed this is the right general direction?
18:42:56 ayoung: Looking at dolphm's patch https://review.openstack.org/#/c/28197/10/keystone/identity/backends/ldap/core.py , it removes the DomainApi from LDAP... so spzala's is just going to re-add it back in?
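[Editor's note: the split ayoung walks through above — identity (users and groups), projects (projects and roles), and assignments (the mapping between them), with the backend chosen per domain — can be sketched roughly as follows. This is an illustrative model of the proposal, not the split-identity blueprint's implementation; all names are hypothetical.]

```python
# Hypothetical sketch of per-domain backend routing for the proposed
# identity/projects/assignments split: one domain keeps users and groups
# in LDAP, while projects and assignments live in SQL; another domain
# keeps everything in SQL. The web API is unchanged; only the backend
# selection splits.
DOMAIN_BACKENDS = {
    'corp': {'identity': 'ldap', 'projects': 'sql', 'assignments': 'sql'},
    'default': {'identity': 'sql', 'projects': 'sql', 'assignments': 'sql'},
}


def backend_for(domain_id, subsystem):
    """Pick the driver for one of the three split subsystems, falling
    back to the default domain's configuration for unknown domains."""
    config = DOMAIN_BACKENDS.get(domain_id, DOMAIN_BACKENDS['default'])
    return config[subsystem]
```

Because the routing is keyed by domain, multi-domain support across two or more LDAP servers (one per domain) would follow the same pattern, as suggested in the discussion above.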
18:43:39 bknudson, if it is approved, yes
18:43:43 bknudson: I guess so, i think we need that for multi-domain support
18:44:02 spzala, how important is it to your group?
18:44:03 ayoung: which bp?
18:44:17 ayoung: multi domain support?
18:44:20 dolphm, first off, split identity, but also the domain sub-bp
18:44:37 ayoung: this all sounds like multiple blueprints, btw
18:44:41 #link https://blueprints.launchpad.net/keystone/+spec/split-identity
18:44:56 ayoung: read-only domain is more important as far as I know
18:45:06 #link https://blueprints.launchpad.net/keystone/+spec/extract-projects-from-id
18:45:11 ayoung: there's seperate blueprints to split credentials and domains
18:45:29 (i seriously can't spell separate correctly from muscle memory)
18:45:41 :)
18:45:54 #link https://blueprints.launchpad.net/keystone/+spec/auth-domain-architecture
18:45:59 dolphm, neither can I
18:46:23 I don't speak english so I generally get a pass on spelling
18:47:18 OK, one last thing... the whole index/token revocation mess
18:47:47 termie's point is that serving the revocation list should not require iterating over a list....
18:47:54 spzala: marked this as obsolete; let me know if that's inaccurate https://blueprints.launchpad.net/keystone/+spec/ldap-fake-domain
18:48:26 so he's blocked two patches which fix bugs. One is a fix to the token backend, and the other indexes the sql backend
18:48:36 dolphm: it goes with the current patch on static domain
18:48:41 I think he has a point... so...
18:49:08 how are we going to generate a list without iterating over a list?
18:49:35 what if we held the revocation list in memcached, lazily constructed, and returned it as a single document? When a revocation event comes in, we rebuild it.
18:49:36 maybe the problem is too many tokens
18:50:05 ayoung: that isn't a bad idea.
18:50:31 the servers are constantly requesting the revocation list, so lazy constructing doesn't make any difference
18:50:44 bknudson, nah, that would only happen once
18:50:49 at start up time
18:50:51 bknudson: the revocation events are infrequent enough that it shouldn't be too bad… unless revoke_tokens_for group / user is called with enough tokens to exceed a memcache page (the proposed fix doesn't handle that in either case)
18:51:10 morganfainberg, exactly
18:51:15 revocation events are infrequent, but the other servers are constantly asking for the list.
18:51:17 ayoung, do we do a delta revocation list or one big list?
18:51:25 gyee, one big
18:51:26 gyee: currently one big list
18:51:29 doh!
18:51:38 gyee: but they do eventually time out
18:51:50 when the token expires, we don't need it in the list anymore
18:52:05 wait, timeout and revocation are two different things, right?
18:52:10 gyee, right
18:52:18 it does a memcache append() vs. building it when a revocation occurs. but the current mechanism for evicting expired tokens from the list (vs. active tokens that are revoked) is to wait for the page to expire… which it might not
18:52:21 so in theory, the revocation list should be smaller
18:52:23 revocation is when you change roles and your old tokens go away
18:52:38 Is Keystone doing cryptographically signed tokens these days?
18:52:41 existing tokens are now invalid
18:52:59 gyee, a revoked token that has expired no longer needs to be reported as revoked
18:53:08 soren, yes
18:53:09 ayoung, right
18:53:10 soren: yes, by default in havana
18:53:15 Cool.
18:53:16 soren: it's the PKI tokens. There's also UUID tokens.
18:53:22 soren: err, grizzly
18:53:31 Oh. Sorry. /me realises this is a meeting and not just #openstack-dev or whatnot
18:53:34 Carry on :)
18:53:38 soren, signed, but not encrypted
18:54:28 morganfainberg, OK, I'll write that up. Are you up for implementing it?
18:54:32 sure
18:54:42 it won't be hard to do.
18:54:51 is this only for the memcache backend for tokens?
18:54:56 #action ayoung writes up revocation list out of memcache
18:55:06 bknudson: for now, since it is most directly affected.
18:55:15 #action morganfainberg implements memcached revocation list
18:55:16 SQL is more efficient at telling revoked vs. expired
18:55:18 are we splitting up the token backend?
18:55:24 bknudson, looks like it
18:55:29 the sql today is not efficient
18:55:35 ayoung, bknudson, not sure I understand termie's comment on the review
18:55:37 i can work on SQL as well.
18:55:41 bknudson, it will be a cache
18:55:42 so he wants a clock impl?
18:55:56 gyee, a clock is not going to happen, as elegant as it is
18:56:04 we would need a clock per user/project
18:56:06 gyee: the clock impl is good, the problem is just that it changes the business rules in a way that would be bad.
18:56:19 since revoking all tokens for a user on an event is bad
18:56:27 (short of a password change)
18:56:55 so, elegant, but it needs more than the simple concept.
18:56:56 yeah, seems like a small bang for a big buck
18:57:45 ayoung: i'll take a gander at the SQL implementation as well and see if I can come up with something while I'm mucking around in the memcache driver here.
18:57:51 you think that's bad, wait till henry's inherited role impl :)
18:58:33 we already have a couple of things for the sql backend
18:58:49 bknudson: ah, ok, so i shouldn't worry about the revocation stuff in it?
18:58:50 the most obvious one is to add an index to expires and revoked
18:59:03 (sorry, realized the meeting was going on and hopped over to this window a little late)
18:59:04 #link https://blueprints.launchpad.net/keystone/+spec/cache-token-revocations
18:59:13 time to switch to -dev
18:59:16 #endmeeting