17:59:35 #startmeeting keystone
17:59:36 Meeting started Tue Apr 29 17:59:35 2014 UTC and is due to finish in 60 minutes. The chair is dolphm. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:59:37 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:59:40 The meeting name has been set to 'keystone'
17:59:47 \o/
18:00:10 i expect this meeting to be mostly open discussion, following an improv session by our special guest stevemar
18:00:23 #topic Design summit schedule
18:00:25 #link http://junodesignsummit.sched.org/
18:00:27 Compressed tokens....review it
18:00:28 dolphm, do we get interpretive dance too?
18:00:36 morganfainberg: oh i hope so!
18:00:38 https://review.openstack.org/#/c/71181/
18:00:48 You know you want it
18:01:12 so most projects are starting to post their design summit schedules to http://junodesignsummit.sched.org/
18:01:54 it'd be awesome if everyone took this time to look for obvious conflicts between sessions that keystoners should be attending
18:02:03 dolphm, ++
18:02:38 don't forget to look through the new "cross-project workshops" track as well
18:02:57 there are a lot of cross-project items that are going to be cool
18:03:04 #link http://junodesignsummit.sched.org/overview/type/cross-project+workshops#.U1_pS1feRww
18:03:10 is there a barbican track or is it rolled into keystone?
18:03:26 Not rolled in
18:03:51 this one is an important one for anyone interested in SQLAlchemy, Dogpile, Alembic, etc: http://junodesignsummit.sched.org/event/0a0e3cb824a090773c42b67d6ed85d29
18:04:00 They'll just all be sitting together in the dev lounge
18:04:07 gyee: looks like they'll have their own track, but it's not published yet.
keep an eye out for it
18:04:16 gyee: their sessions aren't even reviewed yet
18:04:29 ayoung, dolphm, seems like there are some integration points we need to discuss
18:04:36 keystone and barbican I mean
18:04:51 gyee, ++
18:05:24 gyee, this is something we'll likely just need to catch up on in the dev-lounge/pod area/whatever it is
18:05:40 gyee, for in-person
18:05:45 dolphm - the "Federation" 2:40p Weds design session is before the 9:50a summit session "Federated Identity & Federated Service Provider support". I think the summit session would drive attendance to the design session.
18:05:49 yes absolutely
18:05:56 summit session being thurs
18:06:09 gyee: I'm curious about what integration points you have in mind
18:06:27 nkinder, like credential management
18:06:45 authorization
18:06:54 gyee: using it as the password store?
18:07:01 o/ late
18:07:09 nkinder, no, not password store
18:07:19 I mean access keys, certs, etc
18:07:21 nkinder, credentials, certs, etc
18:07:22 gyee, ++
18:07:26 joesavak: looking...
18:07:28 joesavak, i agree, i was hoping we could switch things around
18:08:02 joesavak: where's the conference schedule?
18:08:02 dolphm - links: design: http://junodesignsummit.sched.org/event/ab0966f5ec41f78e929effd499e0286f#.U1_p4_ldUog summit session: http://openstacksummitmay2014atlanta.sched.org/event/e04ac68f0c98be95f9fcbf2812e44d5d#.U1_qf_ldUog
18:08:17 link: http://openstacksummitmay2014atlanta.sched.org/
18:08:23 joesavak: thanks
18:10:20 gyee, you saw this: http://junodesignsummit.sched.org/event/9a6b59be11cdeaacfea70fef34328931
18:10:32 would swapping the Federation session with the python-keystoneclient session resolve the conference conflict?
18:10:41 We'll need to figure out an identity scheme for the various queue sinks and sources
18:10:48 ayoung, w00t!
18:10:52 dolphm, i think it would
18:11:18 what is the conflict?
18:11:21 agreed
18:11:22 ayoung, let's get certmonger in there, we need a realistic provisioning
18:11:31 provisioning process
18:11:39 gyee, ++
18:11:50 gyee, ++
18:12:03 so python-keystoneclient Wednesday @ 2:40p and Federation Thursday @ 11:50a (with the Federated Identity conference session Thursday @ 9:50a)
18:12:16 dolphm +1
18:12:23 gyee, but certmonger only handles "Here are my certs"; we need to figure out the mapping from X509 cert name constraints to the endpoints and hypervisors etc...
18:12:34 we are having the same issue with TripleO with cert provisioning
18:12:53 thursday will be 100% identity then
18:13:06 we've been told no self-signed certs, but we need a realistic solution, so I am pushing certmonger
18:13:10 dolphm, that's a good thing imo
18:13:21 dolphm, keeps general focus
18:13:23 dolphm: do you know if it's possible to reword the session description now it's in?
18:13:34 morganfainberg: ++ and friday is sort of post-auth stuff
18:13:41 authorization & service catalog
18:14:05 err thursday
18:14:10 gyee, do you guys have an API for submitting and retrieving a cert to a CA separate from the Dogtag one?
18:14:30 no i was right, the 16th is friday :P
18:14:33 If not...we can drive that in Barbican. I think they have a BP for it already
18:14:51 ayoung, no, we don't have an agreed approach right now, that's the problem
18:14:57 agreed upon
18:15:53 ayoung, I am fine with Barbican, as long as it is consistent
18:16:05 gyee, I'll be prepped to demo certmonger stuff, and we can discuss the wider CA story there. Barbican as a CA front works for me.
18:16:27 ayoung, ++
18:16:28 ayoung, ++
18:16:29 And certmonger<->Barbican makes sense
18:17:06 ayoung: agreed
18:17:11 (assuming no one sees any further scheduling conflicts -- if any arise, ping me ASAP so we can try to resolve them)
18:17:26 thanks dolphm!
18:17:30 #topic Open discussion following improv and interpretive dance by stevemar
18:17:31 stevemar: o/
18:17:35 gyee, also, I think we'll need to drive the cryptography.py work for CMS, which means accepting an X509 and verifying a signature. That will help both Keystone client verification as well as the messaging verification
18:17:51 go stevemar go
18:18:06 i've prepared a table flip
18:18:10 (╯°□°)╯ ┻━┻
18:18:15 that's all i got
18:18:15 rofl
18:18:16 heh
18:18:24 Nice.
18:18:26 ヽ(^o^)丿
18:18:45 lol
18:18:48 lol
18:18:50 ┬─┬ ノ( ^_^ノ)
18:18:52 serves me right for questioning the agenda
18:18:54 (~_~)
18:19:23 stevemar: damn straight
18:19:36 #link https://review.openstack.org/#/c/71181/
18:19:50 Review compressed tokens. Please!
18:20:02 on it
18:20:40 Compressed tokens will lead to being able to extract the signer info from the token prior to verification, and thus fetching the certs, allowing for multiple providers.
18:22:12 Oh...also, anyone have any real resistance to an addition to the mapping API that says "just pass through the groups as is, don't require an explicit enumeration of them"?
18:22:45 *allowing* explicit enumeration is a good thing, but requiring it leads to duplication of effort for places that don't want or need it.
18:22:50 ayoung, do we have the bug/bp to get testing of the examples handy?
18:23:12 morganfainberg, you mean PKI signed tokens?
18:23:27 compressed rather?
18:23:32 ayoung: https://review.openstack.org/#/c/90472/ has a -2 on it but there's no suggestion what to do differently.
18:23:50 ayoung, was that one where we wanted to test the example scripts?
18:24:23 bknudson, if the "server" switches from PKI to uuid tokens, the existing tokens that were revoked are still in memcache
18:24:25 ayoung, i know we had ~2-3 reviews we wanted to test the example scripts.
18:24:30 ayoung: why is this only Partial-Bug: 1255321 rather than Closes-Bug?
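The compressed-token idea under review (71181) can be sketched in a few lines: zlib-compress the CMS blob and base64url-encode it behind a detectable prefix, so clients and middleware can tell the formats apart. This is a minimal illustration, not the merged keystone code; the `PKIZ_` prefix and helper names are assumptions.

```python
import base64
import zlib

# Illustrative format marker -- an assumption, not necessarily keystone's.
PKIZ_PREFIX = 'PKIZ_'


def compress_token(cms_token):
    """zlib-compress a CMS token body and base64url-encode the result."""
    compressed = zlib.compress(cms_token.encode('utf-8'), 9)
    return PKIZ_PREFIX + base64.urlsafe_b64encode(compressed).decode('ascii')


def uncompress_token(token):
    """Reverse of compress_token; rejects tokens without the marker."""
    if not token.startswith(PKIZ_PREFIX):
        raise ValueError('not a compressed token')
    body = base64.urlsafe_b64decode(token[len(PKIZ_PREFIX):].encode('ascii'))
    return zlib.decompress(body).decode('utf-8')


# Round trip on a dummy, repetitive payload (real CMS bodies compress well too)
sample = '{"token": {"expires": "...", "roles": ["admin"]}}' * 20
packed = compress_token(sample)
assert uncompress_token(packed) == sample
assert len(packed) < len(sample)
```

The prefix is also what makes the "multiple providers" point possible: the format (and hence the signer lookup strategy) is identifiable before any verification happens.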
18:24:37 and they went from revoked to unrevoked
18:25:06 yeah, I need to resplit the example scripts...on my list for this week
18:25:59 ayoung, ah just want to tag related reviews for tracking on that bug/bp if possible :) that way we can say "no no we're planning on implementing tests" and see what we need to incorporate. that way we don't miss something
18:26:07 but we still don't have anything that tests the example scripts?
18:26:10 dolphm, hmm...good question
18:26:31 ayoung: seems like auth_token always worked that way -- cached tokens aren't checked if they're revoked
18:26:39 that bug is a server bug, and it needs this patch in order to issue signed tokens
18:26:50 stevemar, the plan is to work on that specifically in Juno, tag it to J1 or such so we have a timeline but don't require it before then.
18:26:56 stevemar, that way it's on the radar, we have a timeline, but we don't block work
18:27:21 and yes, i'll commit to getting that work done if we get down to the wire on it
18:27:52 but hopefully i can convince everyone to collaborate on it vs. just waiting till it gets done at the last minute
18:28:18 dolphm, so the client patch is partial, and the compressed PKI provider would be the other half. I have not yet written it, as it won't pass check until the client is available. It is a simple one-line difference from the current PKI provider
18:28:58 bknudson, you are correct, but we have a patch going through right now that is attempting to rectify that: let me see if I can find the link
18:29:09 bknudson, ayoung, didn't we have the same discussion last week about the other patch which does the same thing?
18:29:24 gyee, heh, yeah
18:29:28 from the other side
18:29:45 gyee: right, that patch merged and now uuid-only setup fails...
because there's no revocation list
18:30:27 ayoung: could you post the keystone server changes for compressed tokens so I can try it?
18:30:28 bknudson, I thought that patch is token format independent
18:30:34 bknudson, wilco
18:30:50 gyee: I thought it was too
18:30:54 ayoung, i like the idea of supporting a bunch of group names in mapping, but it probably needs to be thought out a bit more, perfect for a side session in the dev lounge
18:31:04 stevemar, ++
18:31:22 gyee: but if you use uuid tokens you probably didn't run pki_setup, so auth_token can't get the revocation list
18:31:36 stevemar, I think we can do it without an API change, actually, just a change in the enforcement of the rules processing
18:31:41 bknudson, ahhhh! I remember now
18:31:51 gyee: and with the change we check UUID tokens against the revocation list
18:31:59 there's also a bug about requiring PKI setup even though token format is set to UUID
18:32:12 ayoung, agreed, it might just open a discussion about how to handle domains, and we should get others' opinions on that too
18:32:20 s/might/will
18:32:23 bknudson, I'm OK with that. Let's make it as painful as possible for people to stay on UUID tokens! But seriously...I wonder what the right logic is? Maybe check_revocations needs to be configurable.
18:32:24 gyee: so my proposed fix is to allow auth_token to fail to fetch the revocation list
18:32:31 stevemar, you mean hierarchical domains? :D
18:32:58 ayoung: it should be Closes-Bug in the client then, we'd track against keystone itself separately
18:33:07 gyee, the old fashioned normal one
18:33:12 dolphm, that makes sense...I'll change
18:33:28 gyee, not those crazy new fangled hierarchical ones
18:33:51 ayoung, the bug should be filed against server and client if it needs to be tracked in both places.
18:34:09 morganfainberg, it is
18:34:14 bknudson: i haven't been following - but that doesn't sound right - why would you need them to fail to fetch the list?
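The behavior being debated above (check cached validations against the revocation list, but tolerate the list being unavailable in uuid-only deployments without pki_setup) can be sketched as a toy cache. This is a hypothetical illustration of the proposed logic, not auth_token's actual implementation; the class and parameter names are invented.

```python
import time


class TokenCache(object):
    """Toy sketch: cached validations are re-checked against a revocation
    list fetched by a caller-supplied function, and a fetch failure is
    tolerated (the proposed fix from the discussion) rather than fatal."""

    def __init__(self, revocation_fetcher):
        self._cache = {}  # token_id -> (validation data, expiry timestamp)
        self._fetch_revoked = revocation_fetcher  # returns a set; may raise

    def put(self, token_id, data, ttl=300):
        self._cache[token_id] = (data, time.time() + ttl)

    def get(self, token_id):
        entry = self._cache.get(token_id)
        if entry is None or entry[1] < time.time():
            return None
        try:
            revoked = self._fetch_revoked()
        except Exception:
            # e.g. uuid-only setup with no PKI certs: no list, carry on
            revoked = set()
        if token_id in revoked:
            del self._cache[token_id]
            return None
        return entry[0]


# A fetcher that works, and one that fails like a cert-less deployment
cache = TokenCache(lambda: {'revoked-id'})
cache.put('good-id', {'user': 'demo'})
cache.put('revoked-id', {'user': 'demo'})


def no_list():
    raise IOError('revocation list unavailable')


tolerant = TokenCache(no_list)
tolerant.put('uuid-id', {'user': 'demo'})
```

With this shape, the format-switch bug mentioned earlier (revoked tokens going "unrevoked") comes down to whether the fetcher ever succeeds for the deployment in question.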
18:34:15 just listed as "affects"
18:34:20 ayoung, ah easy then
18:34:21 ayoung, yep see it
18:34:22 bknudson: you can still revoke a UUID token
18:34:31 stevemar, I am hungry for a hierarchical in-and-out burger, maybe I'll get one after the meeting
18:34:49 jamielennox: fetching the revocation list fails because there are no PKI certs.
18:35:20 bknudson: why? all it should need is the CA cert from the server
18:35:47 jamielennox: the server doesn't have a CA cert... if you're using UUID tokens why do you need a CA cert?
18:35:58 deployments didn't need it before this change.
18:36:31 did we not allow UUID token revocations before this then? i was under the impression that we did
18:36:48 jamielennox, for UUID, validation is over the network
18:36:57 jamielennox: we did allow it. But we never checked the revocation list for UUID tokens.
18:36:58 gyee: but we would always allow caching
18:37:16 jamielennox, right, caching is a security tradeoff
18:37:25 gyee: the revocation list was just a way to remove the UUID token from memcache before it expired
18:37:29 security-performance tradeoff
18:38:06 jamielennox, if we really want to do it right, have a lightweight process listening on the events and update the cache accordingly
18:38:23 gyee: i think caching should be a given, but how long you cache for is the tradeoff!
18:38:36 gyee: yea, you mentioned that and i've no idea how you architect it
18:38:51 gyee: really we should have all the auth_tokens listening for notifications from keystone
18:38:54 dolphm, absolutely! but that's a decision the customer needs to make
18:39:05 but you can't really do something like that in middleware
18:39:42 jamielennox: why not?
18:39:47 jamielennox, what is caching for anyway, if what you wanted is real-time validation
18:40:22 i'd be worried if we only listened for revocation events and didn't have a list or something to catch up the process (or validate that it has what it needs)
18:40:28 dolphm: well it
18:40:43 dstanek: ++ the list is critical, which is why we started there
18:41:01 dolphm: well it's python, you can do almost anything - but would you need to spawn a subprocess or something? because you need a server that can respond to notifications and middleware can't do that
18:41:29 gyee: i was just under the impression that we used to allow that - i guess we must never have
18:42:13 jamielennox: oh i didn't read "lightweight process" as "subprocess" - i just read that as message listener
18:42:35 jamielennox, security and performance is one of those things that will need constant calibration
18:42:43 it's not a one size fits all
18:43:05 what we can offer is flexibility, really
18:43:10 dolphm: right - but middleware is only triggered on an incoming request as a wrapper. if we wanted to actually have a listener then that would require a new thread of control that could sit and listen somehow
18:43:37 it could gobble up all the messages when it woke up
18:43:58 bknudson, there's no wake up
18:44:07 and if it got too far behind then it'd get the full list
18:44:07 gyee: there is - per request
18:44:21 it's a separate process which is always listening on the revocation events
18:44:31 amqp?
18:44:44 jamielennox: you should be able to create a listening on __init__, no?
18:44:44 bknudson: do the queue implementations support that - what if it's getting a request/day
18:44:47 listener*
18:45:04 amqp doesn't support pushing?
18:45:13 if you register a callback from a notification won't that callback be executed as the notification arrives and not need a request?
18:45:14 dolphm: sure - but it still needs to be a thread/subprocess
18:45:20 or is it strictly a pull model
18:45:22 ?
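dolphm's suggestion of creating a listener in `__init__` can be sketched with stdlib pieces: the middleware spawns a daemon thread that consumes revocation events and updates a shared set, so request handling never has to "wake up" and poll. Here a `queue.Queue` stands in for the AMQP notification stream, and all names are illustrative assumptions.

```python
import queue
import threading


class RevocationListener(object):
    """Sketch: a background thread consumes revocation events into a
    shared set that the per-request code path can check cheaply."""

    def __init__(self, events):
        self._events = events  # stand-in for an AMQP/notification subscription
        self._revoked = set()
        self._lock = threading.Lock()
        # Daemon thread so it never blocks process shutdown
        worker = threading.Thread(target=self._consume, daemon=True)
        worker.start()

    def _consume(self):
        while True:
            token_id = self._events.get()  # blocks; no per-request wakeup
            with self._lock:
                self._revoked.add(token_id)
            self._events.task_done()

    def is_revoked(self, token_id):
        with self._lock:
            return token_id in self._revoked


# Simulated event flow; join() waits until the listener has caught up
events = queue.Queue()
listener = RevocationListener(events)
events.put('tok-1')
events.join()
```

This also makes the "listener per worker process" concern concrete: each WSGI worker constructing the middleware gets its own thread and its own `_revoked` set, which is exactly why a shared/configurable backend was floated in the discussion.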
18:46:09 dstanek: all this stuff is controlled by an event loop somewhere, we *could* try to get access to that from middleware but that's crossing boundaries
18:47:05 btw -- the way nova works with the default setup is that there's a separate in-memory cache for each "thread"
18:47:32 gyee: both/either
18:47:33 so one thread could have the token cached and it works and another doesn't have the token cached and it fails
18:48:08 dolphm, both/either sounds good :-)
18:48:15 bknudson: is nova handling this differently?
18:48:45 bknudson, they have the *option* to use a distributed cache, no?
18:48:49 jamielennox: i think he's just referring to auth_token's default in-memory token cache
18:48:58 there's nothing specific about nova in the example
18:49:07 gyee: I'm guessing if they use memcache they'd have consistent results.
18:49:41 memcache for token caching seems like overkill
18:49:57 bknudson, heh
18:50:05 so, does anyone know how the other projects support integrating the HTTP listener event loop with the AMQP event loop?
18:50:12 i haven't done that
18:50:34 jamielennox, not sure.
18:50:40 does it 'just work' because of socket.select()?
18:50:54 jamielennox, it might "just work"
18:51:42 because in that case we might be able to spawn a listener in __init__
18:51:51 what project would be listening for amqp events?
18:51:56 bknudson: auth_token
18:52:05 although then we would have a listener per worker process, which is not ideal
18:52:27 bknudson: ceilometer is the main one i think
18:52:27 jamielennox, make it configurable, per process or shared
18:52:59 jamielennox, the way the fan-out queues work, that would be most correct if auth_token itself was meant to listen for that for a thread-local cache
18:53:21 jamielennox, s/for a/with a
18:53:27 bknudson, making revocation checking a configurable option is, I think, the only solution.
18:53:42 And, no, we are not spawning a separate thread for it.
18:53:53 morganfainberg: yea because it will be a lot of extra work for a memcache setup to have every listener try to remove a token from the cache
18:53:59 ayoung: ok, I'll take a look at that as a solution
18:55:02 bknudson, the issue is that the server doesn't know if the token format is PKI or UUID, and it doesn't know if it should be looking for revocation events. With PKI, revocation (events or list) is a must-have; with UUID, it is a nice-to-have....but with caching, it really should be required
18:55:04 jamielennox, depending on the number of workers
18:55:06 ayoung: I might have an option to check the revocation list for UUID tokens?
18:55:20 bknudson, there is no way to distinguish
18:55:35 it might have been issued as PKI, but then the MD5/SHA256 submitted as the token itself
18:55:37 5 minutes
18:55:58 it will be treated as a Keystone lookup....but by the time it gets put in memcached, that detail is forgotten
18:56:00 morganfainberg: 2?
18:56:08 ayoung: if auth_token gets an MD5 hash of a PKI token it typically goes to the server
18:56:28 dolphm, maybe the clock here is off then
18:56:30 bknudson: why do you need the revocation list for UUID tokens?
18:56:36 dolphm, here = my computer
18:56:46 bknudson: you need to validate them online to populate headers anyway
18:56:50 dolphm, caching.
18:57:10 dolphm, if auth_token is caching, you don't ask keystone for token validity
18:57:22 morganfainberg: because you already have, and trust that cached data for some duration
18:57:32 morganfainberg: what does that have to do with the revocation list?
18:57:38 :-)
18:57:47 (time)
18:57:51 dolphm, you could use the revocation list to know if you should trust that token (uuid or not)
18:57:51 the revocation list is cached on a different schedule
18:58:00 #endmeeting
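ayoung's "no way to distinguish" point is about hashing: long PKI/CMS tokens get reduced to a short hash before being used as a cache key, after which they look just like UUID tokens. A toy sketch of that shape (the length threshold and function name are assumptions for illustration, not auth_token's actual logic):

```python
import hashlib


def cache_key(token):
    """Long (PKI/CMS-sized) tokens are hashed down to a short hex digest
    before caching; short UUID-style tokens pass through unchanged. Once
    cached, the two are indistinguishable -- the format is 'forgotten'."""
    if len(token) > 64:  # heuristic threshold, purely illustrative
        return hashlib.md5(token.encode('utf-8')).hexdigest()
    return token


uuid_token = '468da447bd1c4821bbc5def0498fd441'  # already short: unchanged
pki_token = 'MII' + 'A' * 4000                   # dummy CMS-sized blob
```

Since `cache_key(pki_token)` is a 32-character hex string just like a UUID, nothing downstream of the cache can decide whether revocation-list checking "should" apply, which is why making the check a configurable option was the conclusion above.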