18:02:43 <dolphm> #startmeeting keystone
18:02:44 <openstack> Meeting started Tue May 21 18:02:43 2013 UTC. The chair is dolphm. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:45 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:46 <ayoung> #link https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting
18:02:47 <openstack> The meeting name has been set to 'keystone'
18:02:48 <gyee> \0
18:03:08 <dolphm> #topic Havana milestone 1
18:03:20 <dolphm> Just a reminder that our first milestone is on May 30
18:03:24 <gyee> pluggable token provider!
18:03:27 <ayoung> So the only BP not implemented is the logging one
18:03:28 <dolphm> #link https://wiki.openstack.org/wiki/Havana_Release_Schedule
18:03:40 <ayoung> #link https://blueprints.launchpad.net/keystone/+spec/unified-logging-in-keystone
18:03:44 <ayoung> And it is up for review
18:03:55 <dolphm> that's targeted at m1 ^
18:03:55 <dolphm> #link https://launchpad.net/keystone/+milestone/havana-1
18:04:03 <dolphm> and it's our last m1 blueprint
18:04:12 <ayoung> but it makes me sad, as it once again buries eventlet code deep in Keystone
18:04:13 <dolphm> so we're certainly in good shape for conservative milestone goals
18:04:37 <dolphm> ayoung: from openstack common?
18:05:03 <ayoung> dolphm, yeah keystone/openstack/common/rpc/amqp.py uses weakref to store the context in a WeakLocal object.
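[Editor's note: a minimal sketch of the pattern ayoung is describing. oslo's rpc/amqp.py stores the request context in a "WeakLocal", a thread/greenthread-local store that holds only weak references. This stand-in subclasses threading.local for a self-contained illustration; the real code subclasses eventlet's corolocal.local, which is exactly the eventlet coupling being objected to. All names below are illustrative.]

```python
import gc
import threading
import weakref

class WeakLocal(threading.local):
    """Thread-local store that keeps only weak references to its values."""

    def __getattribute__(self, attr):
        ref = super().__getattribute__(attr)
        if ref is not None:
            # dereference the weakref; yields None if the value was collected
            ref = ref()
        return ref

    def __setattr__(self, attr, value):
        # store a weak reference so this store never keeps a context alive
        super().__setattr__(attr, weakref.ref(value))

class Context:
    """Stand-in for a request context object."""

store = WeakLocal()
```

Setting `store.context` keeps only a weak reference; once the request's own strong reference goes away, reading `store.context` yields None rather than leaking the context.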
log.py also does this
18:05:10 <dolphm> if there are any other blueprints that should be moved from m2 to m1 because they're VERY close to being ready, give me a heads up
18:05:43 <dolphm> #topic No downtime database migrations
18:05:55 <dolphm> i've had this on the agenda for a while, but we haven't had time for it :)
18:06:04 <dolphm> there was a summit session on the topic:
18:06:05 <dolphm> #link http://openstacksummitapril2013.sched.org/event/8d559940a2a905c8221ccaec19e91f28#.UZu3byuDQ19
18:06:14 <dolphm> and a subsequent discussion on the mailing list:
18:06:15 <lbragstad> hey, missed the first part but I just saw the unified logging stuff
18:06:17 <dolphm> #link http://lists.openstack.org/pipermail/openstack-dev/2013-May/008340.html
18:06:39 <dolphm> i just wanted to make everyone aware (go participate in the mailing list discussion!) ^
18:06:42 <ayoung> dolphm, the problem is that Keystone then needs to run with two different versions of the schema
18:06:55 <dolphm> ayoung: exactly
18:07:03 <dolphm> this isn't something we've ever tackled before in keystone
18:07:35 <ayoung> dolphm, we've probably already broken it for Griz-Havana
18:07:53 <bknudson> I don't think we've got any migrations in H yet.
18:08:02 <dolphm> if other projects go down this road, we'll certainly need to follow; if anyone wants to champion the effort, please follow the progress in other projects and comment on migrations reviews accordingly :)
18:08:20 <dolphm> ayoung: probably so; i see havana as a "let's try this out" cycle for keystone, at least
18:08:31 <ayoung> dolphm, sounds good.
I would say, though, that from Keystone's side, it would probably be better if we had a fallback
18:08:37 <dolphm> it will take a testing infrastructure that we don't have today to make sure it works
18:08:41 <ayoung> say, we can validate tokens, but no database updates allowed
18:08:43 <dolphm> i assume someone is already working on that
18:08:51 <ayoung> and then we can upgrade Keystone while in that mode
18:09:34 <ayoung> If we cached the token revocation lists, we could put the server into a mode to just return those and the certificates
18:10:09 <ayoung> Then we can update Keystone, and, while new tokens can't be issued, existing ones will still work.
18:10:12 <dolphm> ayoung: all dependent on appropriate external testing though, and a stakeholder to champion it
18:10:44 <dolphm> #topic open discussion
18:10:53 <dolphm> short agenda today!
18:11:22 <bknudson> I've finally got the changes in to hopefully fix the eventlet/dnspython problem: https://review.openstack.org/#/c/29712/
18:11:34 <ayoung> dolphm, Jamie is going to try to work with the Kent folks to get the lazy provisioning ready for review
18:11:54 <bknudson> and this other set of changes is to get unit tests working again: https://review.openstack.org/#/c/29670/
18:12:05 <dolphm> ayoung: cool, there's still some identity-api related discussions there first though
18:12:47 <dolphm> there are two discrete api features that are required first; one needs to apply across the entire api and will need to be implemented accordingly (it can't just affect /users)
18:12:47 <bknudson> did we decide which route we're going to take for LDAP domains?
18:13:07 <gyee> what's the status on the read-only LDAP bp?
18:13:10 <gyee> anyone working on that?
18:13:15 <dolphm> ayoung: ^ ? it sounded like you were leaning towards merging my patch? can you provide an update?
18:13:18 <dolphm> gyee: link?
18:13:32 <gyee> didn't brad create one?
18:13:35 <gyee> I wasn't sure
18:13:45 <ayoung> dolphm, so, while I am leaning toward yours, I told spzala that he gets one more chance to get his working.
18:13:47 <dolphm> gyee: i don't see one
18:13:59 <ayoung> the more I think about it, the more I think multiple domains in one ldap is a mistake
18:14:04 <spzala> dolphm: about the default domain patch, I am working on it ... as ayoung just mentioned
18:14:11 <ayoung> but I would hate to break it for the one person that decided to go that route
18:14:17 <gyee> ayoung, LDAP is made for domains
18:14:25 <ayoung> gyee, right
18:14:44 <ayoung> gyee, and, I think that if we kept it that way, it makes the other code changes easier as well
18:14:56 <ayoung> namely, splitting identity into its composite parts
18:15:10 <spzala> dolphm: the approach I am taking now is that, since we can't really ignore domain_id, we set domain_id to default in the ignore case
18:15:11 <dolphm> spzala: ayoung: consider merging mine, and then applying some of spzala's work on top of it? i think there's value in both
18:15:23 <dolphm> spzala: ayoung: we have about a week left for 2013.1.2 i think
18:15:42 <ayoung> dolphm, so if we merge yours, we will break domains in LDAP, right?
18:15:52 <ayoung> I'm ok-ish with this, if you are.
18:15:54 <dolphm> spzala: what do you mean "set domain_id"? provide it to ldap? or provide it to the api?
18:16:06 <dolphm> ayoung: multi-domain support goes away, yes
18:16:08 <spzala> dolphm: provide it to ldap
18:16:35 <ayoung> dolphm, the backend will have to fill in the domain_id with the default if it is an ignored attribute
18:16:45 <spzala> so that domain_id is always there .... default or other
18:16:46 <dolphm> spzala: if domain_id is ignored, then provide the default domain to ldap?
that doesn't sound like it solves anything
18:16:49 <spzala> ayoung: agree
18:17:08 <dolphm> spzala: that's also the exact opposite of the behavior that i expect from the ignore_attribute lists
18:17:30 <spzala> dolphm: we can't ignore the domain_id
18:17:41 <spzala> dolphm: in a way it makes sense that
18:17:44 <ayoung> dolphm, I think we can choose to merge your patch at any point. So, if spzala doesn't get his working, yours gets approved by default
18:17:46 <dolphm> spzala: define "ignore"?
18:17:47 <spzala> we always have a default domain
18:17:58 <dolphm> spzala: my patch validates the domain_id, and then avoids providing it to the ldap server
18:17:59 <bknudson> I think we want it to work so that if domain_id is ignored then any objects returned by Keystone have domain_id set to the default domain.
18:18:11 <dolphm> bknudson: +1
18:18:22 <ayoung> dolphm, but even if he does get it working, that doesn't mean we have to take it... it just makes it a deliberate decision as opposed to a forced one
18:18:41 <dolphm> and if domain_id is ignored, the ldap server should be treated as domain-agnostic
18:20:17 <spzala> dolphm: agree.. ignore is 'ignore', but since we at least have a 'default' domain I was thinking to pass it to the server.
18:20:47 <ayoung> dolphm, so, let's give him a day or two to get it working, and agree that a reviewed patch has to be accepted in time to meet the 2013.1.2 deadline
18:20:57 <ayoung> otherwise, we go with your patch
18:21:18 <bknudson> spzala: pass the default domain ID to the LDAP server? what's the LDAP server going to do with it, since there's no attribute for domain_id?
18:21:19 <spzala> ayoung: thanks :) .. dolphm: hopefully i will update the patch today .. if it works out
18:21:51 <spzala> bknudson: so by passing, to me, I mean don't "strip off" the domain_id
18:22:16 <spzala> I see that that's what we do with ignore
18:22:34 <bknudson> spzala: the LDAP entries have a domain_id attribute?
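[Editor's note: a minimal sketch of the behavior bknudson describes for a read-only LDAP backend: when domain_id is in the attribute-ignore list, strip it before anything is written to LDAP, and backfill the default domain on every object returned to the API, so the LDAP server is treated as domain-agnostic. Function names and the helper layout are illustrative, not Keystone's actual driver interface.]

```python
DEFAULT_DOMAIN_ID = "default"
ATTRIBUTE_IGNORE = {"domain_id"}  # attributes the LDAP backend must not touch

def to_ldap(user):
    """Drop ignored attributes before handing the record to the LDAP layer."""
    return {k: v for k, v in user.items() if k not in ATTRIBUTE_IGNORE}

def from_ldap(entry):
    """Backfill ignored attributes so API responses always carry a domain."""
    user = dict(entry)
    if "domain_id" in ATTRIBUTE_IGNORE and "domain_id" not in user:
        user["domain_id"] = DEFAULT_DOMAIN_ID
    return user
```

So `to_ldap({"name": "alice", "domain_id": "default"})` sends only `{"name": "alice"}` toward LDAP, while `from_ldap({"name": "alice"})` returns the object with `domain_id` set to the default domain on the way back out.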
18:23:29 <ayoung> bknudson, yes they do
18:23:31 <spzala> bknudson: yes, if I am not wrong
18:24:13 <ayoung> #link https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap/core.py#L389
18:24:37 <ayoung> There too: https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap/core.py#L845
18:24:57 <bknudson> ayoung: that's the case when domain_id isn't ignored.
18:25:09 <bknudson> but what about when domain_id is in the ATTRIBUTE_IGNORE list?
18:25:23 <ayoung> dolphm, are you OK with this approach, or would you prefer that we just go with your patch?
18:25:36 <dolphm> (wrong channel)
18:25:36 <dolphm> ayoung: spzala: for what it's worth, jcannava tested both patches as of about a week ago in a deployment against read-only AD; both worked fine
18:26:15 <dolphm> ayoung: i don't think you can just ignore either patch, both have value at this point for stable
18:26:16 <ayoung> dolphm, um... but I am guessing he had an attribute that he could use for the domain_id
18:26:49 <dolphm> what we want in havana can be another conversation
18:27:09 <ayoung> dolphm, what is it from your patch that would be pulled into spzala's?
18:27:43 <ayoung> I am OK with saying "remove multi-domain" as there is a chance it will affect 0 users
18:28:11 <dolphm> ayoung: validation that a valid domain was specified by the api; including a valid domain_id in all relevant responses back to the api
18:29:02 <dolphm> ayoung: the way i see it, my patch fixes the read-only issue by removing multi-domain support; spzala's patch on top of that could restore multi-domain, and maybe they could be backported together
18:30:21 <ayoung> dolphm, OK... that is a strategy. spzala, can you rework your patch such that it applies on top of dolphm's, and then we only have to decide "dolph's only" or "both"?
18:30:40 <ayoung> dolphm, let's #action that
18:30:56 <spzala> ayoung: sure, that's a good idea.
18:30:57 <dolphm> #action spzala to rework patch such that it applies on top of dolphm's
18:31:13 <ayoung> dolphm, we can accept your patch at any point, I think
18:31:18 <ayoung> any reason to hold off?
18:31:23 <dolphm> not for master
18:31:26 <dolphm> i don't think?
18:31:34 <ayoung> actually, it might make sense to get it in soon
18:31:44 <ayoung> then, if anyone is tracking master, we find out if we've broken them....
18:31:58 <ayoung> tracking master and doing multi-domain ldap
18:32:09 <dolphm> which is definitely no one, but yes :)
18:32:50 <ayoung> bknudson, I +2ed. Feel free to approve if you are comfortable with it
18:33:04 <ayoung> dolphm, OK, so.. I think that leads into split identity
18:33:47 <dolphm> i think it does
18:33:47 <ayoung> dolphm, I posted to the mailing list, and am getting positive feedback. Are you OK with the general thrust of it?
18:34:19 <dolphm> ayoung: yeah; at the summit i was just worried the implementation was going to be very difficult, but it sounds like we're making lots of small rational steps in that direction
18:34:45 <ayoung> LDAP and potentially PAM can "plug in" to fill a domain. My guess is that domains and projects would mainly be handled by SQL
18:35:16 <ayoung> so multi-domain support across 2 or more LDAP servers should be doable, and in a rational manner
18:35:43 <dolphm> is the consensus that projects should be in not-ldap?
18:35:51 <ayoung> dchadwick suggested that we further split projects into projects and assignments
18:36:01 <dolphm> (that's always made sense to me, but i'm not an ldap stakeholder)
18:36:03 <ayoung> dolphm, I think that some people might keep them there for the short term
18:36:18 <ayoung> but I would say that most people would prefer not to have them in LDAP
18:36:40 <ayoung> so for a given domain, we could say that identity is in LDAP, but projects and assignments are in sql, or
18:36:48 <ayoung> all three are in LDAP
18:36:51 <ayoung> as they are today
18:37:00 <bknudson> Identity is users and groups?
18:37:05 <ayoung> bknudson, yes
18:37:15 <ayoung> projects is projects and roles
18:37:19 <bknudson> that sure maps to ldap
18:37:26 <ayoung> assignments is the mapping between them
18:38:19 <bknudson> assignments is which users are in which groups?
18:38:34 <ayoung> bknudson, no
18:38:34 <bknudson> because ldap groups are lists of users (by DN)
18:38:56 <ayoung> bknudson, more like which users and/or groups are assigned to which roles
18:39:08 <brich> ayoung: Does "split" imply APIs, services and configuration needed for said split?
18:39:12 <bknudson> ok
18:39:23 <ayoung> brich, the web API stays the same
18:39:30 <ayoung> brich, only the backend splits
18:40:05 <ayoung> there might be some ugliness with the unique identifiers, but I think the current scheme can handle it, so there should be no change from a client perspective
18:40:33 <brich> ayoung: but under the official API, we could have remotable services? My impression is that is where Chadwick's thought would go
18:40:49 <ayoung> brich, that would be a step beyond this.
18:41:00 <ayoung> but this work is necessary to implement his, I think
18:41:26 <brich> ayoung: agree. just wondered how far you were thinking to go in Havana
18:42:38 <ayoung> dolphm, ok to #action that I update the BP to reflect this? Are we agreed this is the right general direction?
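[Editor's note: a minimal sketch of the backend split being discussed: identity (users and groups) served by a per-domain driver, with LDAP for one domain and SQL as the default, while the web API stays the same. Class and method names are hypothetical, not Keystone's real driver interface.]

```python
class SQLIdentity:
    """Stand-in SQL-backed identity driver."""
    def get_user(self, user_id):
        return {"id": user_id, "backend": "sql"}

class LDAPIdentity:
    """Stand-in LDAP-backed identity driver."""
    def get_user(self, user_id):
        return {"id": user_id, "backend": "ldap"}

class DomainRoutedIdentity:
    """Route identity calls to a per-domain driver, defaulting to SQL."""

    def __init__(self, default_driver, per_domain):
        self._default = default_driver
        self._per_domain = per_domain  # mapping: domain_id -> driver

    def get_user(self, domain_id, user_id):
        driver = self._per_domain.get(domain_id, self._default)
        return driver.get_user(user_id)

identity = DomainRoutedIdentity(SQLIdentity(), {"corp": LDAPIdentity()})
```

Here `identity.get_user("corp", "u1")` is served by the LDAP driver while any other domain falls through to SQL, which is how multi-domain support across two or more LDAP servers could compose.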
18:42:56 <bknudson> ayoung: Looking at dolphm's patch https://review.openstack.org/#/c/28197/10/keystone/identity/backends/ldap/core.py , it removes the DomainApi from LDAP... so spzala's is just going to re-add it?
18:43:39 <ayoung> bknudson, if it is approved, yes
18:43:43 <spzala> bknudson: I guess so, i think we need that for multi-domain support
18:44:02 <ayoung> spzala, how important is it to your group?
18:44:03 <dolphm> ayoung: which bp?
18:44:17 <spzala> ayoung: multi-domain support?
18:44:20 <ayoung> dolphm, first off, split identity, but also the domain sub-bp
18:44:37 <dolphm> ayoung: this all sounds like multiple blueprints, btw
18:44:41 <ayoung> #link https://blueprints.launchpad.net/keystone/+spec/split-identity
18:44:56 <spzala> ayoung: read-only domain is more important as far as I know
18:45:06 <ayoung> #link https://blueprints.launchpad.net/keystone/+spec/extract-projects-from-id
18:45:11 <dolphm> ayoung: there's seperate blueprints to split credentials and domains
18:45:29 <dolphm> (i seriously can't spell separate correctly from muscle memory)
18:45:41 <gyee> :)
18:45:54 <ayoung> #link https://blueprints.launchpad.net/keystone/+spec/auth-domain-architecture
18:45:59 <ayoung> dolphm, neither can I
18:46:23 <gyee> I don't speak english so I generally get a pass on spelling
18:47:18 <ayoung> OK, one last thing... the whole index/token revocation mess
18:47:47 <ayoung> termie's point is that serving the revocation list should not require iterating over a list....
18:47:54 <dolphm> spzala: marked this as obsolete; let me know if that's inaccurate https://blueprints.launchpad.net/keystone/+spec/ldap-fake-domain
18:48:26 <ayoung> so he's blocked two patches which fix bugs. One is a fix to the token backend, and the other indexes the sql backend
18:48:36 <spzala> dolphm: it goes with the current patch on static domain
18:48:41 <ayoung> I think he has a point... so...
18:49:08 <bknudson> how are we going to generate a list without iterating over a list?
18:49:35 <ayoung> what if we held the revocation list in memcached, lazily constructed, and returned it as a single document? When a revocation event comes in, we rebuild it.
18:49:36 <bknudson> maybe the problem is too many tokens
18:50:05 <morganfainberg> ayoung: that isn't a bad idea.
18:50:31 <bknudson> the servers are constantly requesting the revocation list, so lazy constructing doesn't make any difference
18:50:44 <ayoung> bknudson, nah, that would only happen once
18:50:49 <ayoung> at start-up time
18:50:51 <morganfainberg> bknudson: the revocation events are infrequent enough that it shouldn't be too bad… unless revoke_tokens_for group/user is called with enough tokens to exceed a memcache page (the proposed fix doesn't handle that in either case)
18:51:10 <ayoung> morganfainberg, exactly
18:51:15 <bknudson> revocation events are infrequent, but the other servers are constantly asking for it.
18:51:17 <gyee> ayoung, do we do a delta revocation list or one big list?
18:51:25 <ayoung> gyee, one big
18:51:26 <morganfainberg> gyee: currently one big list
18:51:29 <gyee> doh!
18:51:38 <bknudson> gyee: but they do eventually time out
18:51:50 <bknudson> when the token expires, we don't need it in the list anymore
18:52:05 <gyee> wait, timeout and revocation are two different things, right?
18:52:10 <ayoung> gyee, right
18:52:18 <morganfainberg> it does a memcache append() vs. rebuilding when a revocation occurs. but the current mechanism for evicting expired tokens from the list (vs. active tokens that are revoked) is to wait for the page to expire… which it might not
18:52:21 <gyee> so in theory, the revocation list should be smaller
18:52:23 <bknudson> revocation is when you change roles and your old tokens go away
18:52:38 <soren> Is Keystone doing cryptographically signed tokens these days?
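[Editor's note: a minimal sketch of ayoung's proposal above: hold the revocation list as a single, lazily constructed document in a cache, and rebuild it only when a revocation event arrives, so serving the list never iterates per request. A plain dict stands in for the memcached client, and all names are illustrative rather than Keystone's actual token driver.]

```python
import json

REVOCATION_KEY = "revocation-list"

class InMemoryTokenBackend:
    """Stand-in source of truth for revoked token ids."""
    def __init__(self):
        self.revoked = []

class CachedRevocationList:
    def __init__(self, backend):
        self._backend = backend
        self._cache = {}  # stand-in for a memcached client

    def get_document(self):
        doc = self._cache.get(REVOCATION_KEY)
        if doc is None:
            # lazy construction: iterate the backend once, then serve the
            # cached document until the next revocation event
            doc = json.dumps({"revoked": self._backend.revoked})
            self._cache[REVOCATION_KEY] = doc
        return doc

    def revoke(self, token_id):
        self._backend.revoked.append(token_id)
        # invalidate so the next request rebuilds the single document
        self._cache.pop(REVOCATION_KEY, None)
```

Validating services polling `get_document()` always get the same cached string; only a `revoke()` call forces a rebuild, which matches the "rebuild on revocation event" idea. As noted in the discussion, this does not yet address a revocation batch exceeding a memcache page.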
18:52:41 <bknudson> existing tokens are now invalid
18:52:59 <ayoung> gyee, a revoked token that has expired no longer needs to be reported as revoked
18:53:08 <ayoung> soren, yes
18:53:09 <gyee> ayoung, right
18:53:10 <dolphm> soren: yes, by default in havana
18:53:15 <soren> Cool.
18:53:16 <bknudson> soren: it's the PKI tokens. There's also UUID tokens.
18:53:22 <dolphm> soren: err, grizzly
18:53:31 <soren> Oh. Sorry. /me realises this is a meeting and not just #openstack-dev or whatnot
18:53:34 <soren> Carry on :)
18:53:38 <ayoung> soren, signed, but not encrypted
18:54:28 <ayoung> morganfainberg, OK, I'll write that up. Are you up for implementing it?
18:54:32 <morganfainberg> sure
18:54:42 <morganfainberg> it won't be hard to do.
18:54:51 <bknudson> is this only for the memcache backend for tokens?
18:54:56 <ayoung> #action ayoung writes up revocation list out of memcache
18:55:06 <morganfainberg> bknudson: for now, since it is most directly affected.
18:55:15 <ayoung> #action morganfainberg implements memcached revocation list
18:55:16 <morganfainberg> SQL is more efficient at telling revoked vs. expired
18:55:18 <bknudson> are we splitting up the token backend?
18:55:24 <ayoung> bknudson, looks like it
18:55:29 <bknudson> the sql today is not efficient
18:55:35 <gyee> ayoung, bknudson, not sure if I understand termie's comment on the review
18:55:37 <morganfainberg> i can work on SQL as well.
18:55:41 <ayoung> bknudson, it will be a cache
18:55:42 <gyee> so he wants a clock impl?
18:55:56 <ayoung> gyee, a clock is not going to happen, as elegant as it is
18:56:04 <ayoung> we would need a clock per user/project
18:56:06 <morganfainberg> gyee: the clock impl is good, the problem is just that it changes the business rules in a way that would be bad.
18:56:19 <morganfainberg> since revoking all tokens for a user on an event is bad
18:56:27 <morganfainberg> (short of a password change)
18:56:55 <morganfainberg> so, elegant, but it needs more than the simple concept.
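[Editor's note: bknudson observes above that the SQL token backend today is not efficient; the kind of fix on the table is indexing the columns the revocation-list query filters on, so listing revoked-but-unexpired tokens no longer scans the whole table. The sketch below uses an in-memory sqlite database as a stand-in; the table layout and column names are illustrative, not Keystone's actual schema or a real migration.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE token (id TEXT PRIMARY KEY, expires INTEGER, valid INTEGER)"
)
# Indexes on the two columns the revocation-list query filters on
conn.execute("CREATE INDEX ix_token_expires ON token (expires)")
conn.execute("CREATE INDEX ix_token_valid ON token (valid)")
conn.executemany(
    "INSERT INTO token VALUES (?, ?, ?)",
    [("a", 100, 0), ("b", 50, 0), ("c", 200, 1)],  # valid=0 means revoked
)

def revocation_list(now):
    # revoked (valid = 0) tokens that have not yet expired; the planner can
    # now answer the predicates from an index instead of a full table scan
    return conn.execute(
        "SELECT id FROM token WHERE valid = 0 AND expires > ? ORDER BY id",
        (now,),
    ).fetchall()
```

This also matches ayoung's point from earlier in the discussion: a revoked token that has already expired drops out of the list automatically, since the `expires > now` predicate excludes it.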
18:56:56 <gyee> yeah, seems like a small bang for big buck
18:57:45 <morganfainberg> ayoung: i'll take a gander at the SQL implementation as well and see if I can come up with something while I'm mucking around in the memcache driver here.
18:57:51 <gyee> you think that's bad, wait till henry's inherited role impl :)
18:58:33 <bknudson> we already have a couple of things for the sql backend
18:58:49 <morganfainberg> bknudson: ah, ok, so i shouldn't worry about the revocation stuff in it?
18:58:50 <bknudson> the most obvious one is to add indexes to expires and revoked
18:59:03 <morganfainberg> (sorry, realized the meeting was going on and hopped over to this window a little late)
18:59:04 <ayoung> #link https://blueprints.launchpad.net/keystone/+spec/cache-token-revocations
18:59:13 <dolphm> time to switch to -dev
18:59:16 <dolphm> #endmeeting