18:01:55 #startmeeting keystone
18:01:56 Meeting started Tue Aug 20 18:01:55 2013 UTC and is due to finish in 60 minutes. The chair is dolphm. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:57 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:59 The meeting name has been set to 'keystone'
18:02:01 * topol opening night jitters?
18:02:16 i should write this stuff in advance and just copy/paste it
18:02:17 #topic FeatureProposalFreeze & Havana milestone 3
18:02:39 m3 is cut sept 4 (2 weeks-ish)!
18:02:49 rolling up fast!
18:02:57 and more imminently, feature proposal freeze is starting to hit for several projects this week
18:02:59 dolphm, what's the 8/28 date then?
18:03:01 ours is next week
18:03:11 #link https://wiki.openstack.org/wiki/FeatureProposalFreeze
18:03:17 gyee: read the description ^
18:03:39 ah
18:03:44 when's FeatureFreeze?
18:04:01 9/4
18:04:02 September 4th
18:04:08 #link https://wiki.openstack.org/wiki/Havana_Release_Schedule
18:04:14 overall release schedule ^
18:04:47 feature freeze takes effect when milestone-proposed / release-proposed is created
18:05:08 then we open back up for icehouse
18:05:33 so, on a related note...
18:05:40 #topic pagination vs feature proposal freeze
18:05:44 #link https://blueprints.launchpad.net/keystone/+spec/pagination-backend-support
18:05:48 ah!
18:06:01 the one bp that has no implementation in review yet!
18:06:21 henrynash: i'm not caught up on the mailing list thread, but as far as i've read, there's certainly some discouragement against pursuing pagination at all?
18:06:44 dolphm, right, though we are going to tackle filtering first
18:06:57 so if you follow the url spec in the bp, I wrote up an etherpad to summarize
18:07:12 #link https://etherpad.openstack.org/pagination
18:07:28 there doesn't seem to be a question about better filtering
18:07:48 bknudson: agreed… everyone wants that
18:07:53 we should go after filtering first, at least that area is more certain
18:08:05 henrynash: are you pursuing filtering in havana?
18:08:20 so I have a working filtering system that allows drivers to filter if they can; the controller will if they can't
18:08:24 henrynash: should the pagination bp be un-targeted from havana?
18:08:38 I also have pagination work as well
18:08:41 (is there a filtering-in-drivers bp?)
18:08:48 we should separate them out into two BPs
18:08:53 gyee: ++
18:09:03 dolphm: no.. but maybe we should split it
18:09:06 definitely
18:09:22 i also think with filtering, if leveraged correctly, the pagination question might go away.
18:09:27 ok, that's fine - I can do that
18:09:32 leveraged/implemented/etc
18:09:34 henrynash: sounds like pagination is dependent on filtering?
18:09:40 so here's the rub, however
18:09:55 if we define really rich filtering which only some of our backends can do....
18:09:56 morganfainberg: especially if you require filtering
18:10:02 dolphm: exactly
18:10:11 kvm does not do filtering.
18:10:16 oops, kvs
18:10:19 then we can't do things like limit the number of responses or paginate at the driver level
18:10:21 bknudson: no one does kvs
18:10:27 bknudson: kvs is a special case imo
18:11:14 if we don't have parity in what filters LDAP and SQL support, then we will still have long response times for the "poorer" backend
18:11:22 henrynash, not sure if I understand
18:11:27 they are two separate issues
18:11:28 LDAP has pretty good filtering.
18:11:34 even does soundex.
18:11:40 bknudson: sounds good
18:11:41 most of them do anyways
18:11:45 henrynash: what's the disparity in your implementation?
18:11:52 henrynash: perhaps it makes sense to allow the driver to advertise what it can do? encourage filtering as the 1st choice (obviously tackle filtering first as a concept)
18:12:12 so the issue is conceptual, not implementation
18:12:44 assuming that we need to either paginate or put a hard limit on how many responses we want back from SQL or LDAP....
18:12:44 morganfainberg, sure, drivers should tell Keystone what they can do
18:13:14 then you can't do that if you don't implement ALL your filters in the driver
18:13:24 …i.e. by the SQL or LDAP backend
18:13:42 henrynash, you don't have to implement ALL
18:14:04 henrynash: but what *can't* be implemented? why not require that the implementation satisfy appropriate tests in test_backend?
18:14:05 but if you don't, you can't limit the output (say max 100 entries)
18:14:06 henrynash: you could also pass through filtering arguments if they are uniform, if the driver supports it
18:14:06 you just need to document what you can do
18:14:17 henrynash: i don't see a use case (yet) for special casing a use case for filtering in drivers
18:14:39 err, special casing *a specific filter* in a driver
18:14:50 dolphm: when I say drivers I mean use the filtering of the underlying database (SQL or LDAP)
18:14:56 henrynash: right
18:15:01 just pass whatever query params into the drivers and we are done :)
18:15:15 sure, I've done that
18:15:17 gyee: ++, just make sure we make the params uniform.
18:15:39 I think this is too complex an issue to discuss on IRC...
18:15:55 henrynash: you said you had no disparity in filtering capability between LDAP and SQL - so test equally in test_backend and ensure all driver implementations maintain feature parity
18:16:34 dolphm: …ok, so that means we will ONLY support filtering at the capability level that is the lowest common denominator between SQL and LDAP
18:16:42 dolphm: that might be ok
18:17:05 dolphm: we can't add "extra filtering at the controller level"
18:17:23 henrynash, we need all the query params
18:17:31 henrynash: so, split the bp into filtering & pagination, and put something up in time for feature proposal freeze to continue the discussion?
18:17:32 dolphm: for those drivers that are doing their own filtering
18:17:32 in the drivers
18:18:44 dolphm: I guess, I see them as more inter-related, but ok to split
18:18:55 I've missed everything that's come before this, but no to actually doing filtering and pagination in the controller, and yes to standardizing the names of parameters in controllers
18:19:04 henrynash: i think we keep it at the lowest common denominator to start (filtering); if data shows that is insufficient we can approach how to pass other filtering through? It'll be more work to keep compat, but i think it's the easiest to approach.
18:19:08 dolphm: that's the point I am trying to make… IF we want to paginate/limit, then it affects what filtering we can do where
18:19:38 do the backends do the paging, too?
18:19:42 #action henrynash to split new filtering blueprint out of existing 'pagination-backend-support' blueprint
18:19:45 bknudson: not currently
18:19:47 bknudson: in mine, yes
18:20:22 so not sure what the issue is... the backend gets the query parameters (filter & paging) and handles them.
18:20:39 bknudson: that's what I'm doing
18:21:22 (i only put this on the agenda to get a status update vs feature proposal freeze, which i think we've got ;)
18:21:25 moving on..
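
A minimal sketch of the driver-filters-what-it-can approach henrynash describes above, with the controller picking up whatever the driver cannot satisfy. All names here (SQLIdentityDriver, SUPPORTED_FILTERS, list_users) are hypothetical illustrations, not taken from the actual blueprint implementation:

    class SQLIdentityDriver(object):
        # Hypothetical: the filters this driver can push down to its database.
        SUPPORTED_FILTERS = ('name', 'domain_id', 'enabled')

        def __init__(self, users):
            self._users = users  # stand-in for the real table

        def list_users(self, filters):
            # Apply what we can "natively" and report back what's left over.
            satisfied = {k: v for k, v in filters.items()
                         if k in self.SUPPORTED_FILTERS}
            remaining = {k: v for k, v in filters.items()
                         if k not in self.SUPPORTED_FILTERS}
            users = [u for u in self._users
                     if all(u.get(k) == v for k, v in satisfied.items())]
            return users, remaining


    def list_users_controller(driver, **filters):
        """Controller-level fallback: filter whatever the driver could not."""
        users, remaining = driver.list_users(filters)
        for attr, value in remaining.items():
            users = [u for u in users if u.get(attr) == value]
        return users

This also illustrates the "rub" henrynash raises: if some filters only get applied at the controller, a hard limit or pagination done at the driver level would operate on the wrong result set, which is part of why the filtering and pagination blueprints were split.
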
18:21:26 #topic keystoneclient library - public vs private interfaces
18:21:28 bknudson: !
18:21:40 this came up in an irc discussion last week...
18:21:54 we've got several reviews affecting keystoneclient
18:22:05 sounds like this change https://review.openstack.org/#/c/40173/ broke what was perceived to be a public API
18:22:09 and to be safe, we really should not be changing anything public
18:22:20 bknudson: but we haven't defined what's public and what's private?
18:22:28 dolphm: precisely
18:22:28 There's a pep8 standard that says how to define public/private
18:22:34 but we're not really following it.
18:22:41 #link http://www.python.org/dev/peps/pep-0008/#public-and-internal-interfaces
18:22:51 so not sure what to do about it... assume everything is public?
18:22:58 do the work to actually define what's public/private?
18:23:06 (bring the code up to pep8)
18:23:27 we really need to delineate between public and private. Our folks get burned by this all the time and yell at Brant or me
18:23:43 topol: in the client library?
18:23:48 I'll try to put something together for the mailing list... I had intended to do that.
18:23:51 so regarding the above review it's fairly easy to bring back the compatibility they want
18:23:51 dolphm, in general
18:24:03 wherever we are being lax
18:24:08 topol: in python interfaces?
18:24:14 however, for example this one: https://review.openstack.org/#/c/42254/ removes a function that has really only private usage but is defined in a public way
18:24:27 jamielennox: right, those are the weird ones.
18:24:32 topol: i want to understand specifically where we're causing people pain
18:24:37 dolphm, even in our driver code folks perceived interfaces to be public and got burned
18:24:50 topol: we never hear anyone speak up
18:24:53 jamielennox: the safe thing to do, and I think what's indicated by the openstack guidelines, is we don't allow it.
18:24:56 brant, u remember I hope
18:25:23 dolphm, fyi we (HP) are extending keystoneclient classes
18:25:26 topol: I remember... it wasn't the client, it was the identity API.
18:25:29 bknudson and I handled the uproar behind the scenes
18:25:34 gyee: which ones
18:25:51 bknudson, agreed. but the principle applies
18:25:53 both v2 and v3 client classes
18:26:06 dolphm: same with metacloud (limitedly)
18:26:15 gyee: did you think you were extending a public or private interface?
18:26:16 we are extending client classes
18:26:26 but nothing burned yet.
18:26:58 and, if you thought it was public, why?
18:27:10 bknudson, the client classes are public
18:27:11 a "workaround": release the next version of keystoneclient as "0.4.0" (dissuading expectation of backwards compatibility)
18:27:23 so i think there are a couple of ways forward. First we need a standard way to start deprecating functions in keystoneclient. Then we need to be super vigilant about enforcing _function_names for things that are expected to be private
18:27:24 pep8 defines what's public, and by that definition nothing is public since it's not documented at the keystoneclient level.
18:27:53 bknudson: i think we need to be explicit, even if technically we aren't advertising anything as public
18:28:02 bknudson, exactly, the key is documentation
18:28:06 jamielennox: we're talking about classes specifically here, not just methods
18:28:20 you can name classes with _
18:28:21 dolphm, both i would say
18:28:22 and packages.
18:28:23 dolphm: i think his concern still applies for the initial transition.
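
For reference, the PEP 8 convention under discussion boils down to underscore prefixes plus documentation. A toy illustration of how it might look in a keystoneclient-style module (module and names are made up, not actual keystoneclient code):

    # keystoneclient/example.py (hypothetical module)

    __all__ = ['Client']  # the advertised public surface of this module


    class Client(object):
        """Public: no leading underscore, exported in __all__ and
        documented, so compatibility must be preserved across releases."""

        def authenticate(self):
            return self._build_request()

        def _build_request(self):
            # Internal: the leading underscore signals "no compatibility
            # guarantee"; callers outside the package shouldn't rely on it.
            return {}


    class _Connection(object):
        """Internal class: the same convention applies to classes, and an
        underscore-prefixed module or package marks a private namespace."""
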
18:28:59 one option is we fork keystoneclient2 from keystoneclient ... seems drastic
18:29:13 bknudson: or we do a staged deprecation
18:29:18 transitional + warnings
18:29:21 bknudson, well we still have the option to do keystoneclient 1
18:29:22 though.. that has other pitfalls
18:29:32 bknudson: jamielennox: and modules?
18:29:49 dolphm: and modules
18:30:00 i think there are modules like utils that are not supposed to be public in keystoneclient
18:30:02 yes, modules
18:30:24 correct, the _ prefix is good for any namespace in theory
18:30:28 to mark it as private
18:30:42 backwards compatibility can be restored in jamielennox's patch by introducing an "empty" keystoneclient.client implementation that just does "from keystoneclient.httpclient import *"
18:30:52 or use the manager-driver approach, everything in manager is by definition public
18:31:07 dolphm: i would actually encourage something that issues a warning coupled with that, if that's the approach
18:31:13 dolphm, right, that first one is easy. you don't need import *; i think you just need to import HTTPClient
18:31:38 morganfainberg: ++
18:31:38 i was waiting for the discoverability thing to go forward as it's using client.py, but i can add that first
18:32:07 i also think proper use of __all__ is important for things that are public
18:32:25 pep8 also says to use __all__
18:32:25 morganfainberg: which we don't use anywhere
18:32:50 therefore everything in keystoneclient is private, therefore we can just change it.
18:32:50 i saw a rough implementation that populated __all__ with a decorator @public
18:33:07 it might be worth using that kind of _very_ explicit mechanism
18:33:11 might be too much overhead though
18:33:14 morganfainberg, yea, it was interesting but i think it's just better to do it manually
18:33:23 regardless of implementation
18:34:06 so what do people think about enforcing pep8 public/private on keystoneclient changes?
18:34:18 bknudson: ++ all for it.
18:34:27 bknudson: i wish i could agree, but i'd argue that *any* examples we have documented in openstack-manuals etc should be considered public APIs
18:34:44 dolphm: yes, I was going to look for examples in manuals...
18:34:46 bknudson: ++ moving forward
18:35:07 there is the one doc in keystoneclient that just mentions the v2 client and tenants
18:35:13 sure, following pep8 is a good start
18:35:26 bknudson, absolutely in future, i'm just not sure about what's currently there though
18:35:52 #action all core reviewers familiarize yourself with http://www.python.org/dev/peps/pep-0008/#public-and-internal-interfaces and begin to enforce on reviews moving forward
18:35:53 taking for example the one in the review, get_auth_ref_from_keyring had no business ever being a public api and won't be in any examples
18:35:53 jamielennox: ++
18:35:57 I'll try to make the time to see what I can do about going through and marking things deprecated, or setting __all__, etc.
18:36:18 docstrings on things that are meant to be public
18:36:23 bknudson / jamielennox: if there is a question, we make it private and provide a wrapper that logs deprecated
18:36:39 anyone want to take a pass at implementing __all__ everywhere, and renaming private methods appropriately?
18:36:48 but yes, i think we can say that what is there is public and we start deprecating for a release cycle
18:37:00 that way we break everyone all at once, and can release 0.4.0 accordingly :P
18:37:02 dolphm: put me down. I'll try to make the time for it.
18:37:08 I assume there isn't a huge hurry for this.
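
A sketch of the compatibility shim dolphm and morganfainberg describe above: the old keystoneclient.client module re-exports HTTPClient from its new home and emits a DeprecationWarning for the remainder of the cycle. This is an illustration of the staged-deprecation idea, not the actual patch:

    # keystoneclient/client.py -- hypothetical shim, not the merged change.
    import warnings

    warnings.warn(
        'keystoneclient.client is deprecated; import HTTPClient from '
        'keystoneclient.httpclient instead.',
        DeprecationWarning,
        stacklevel=2)

    # Re-export so existing `from keystoneclient.client import HTTPClient`
    # imports keep working during the deprecation cycle.
    from keystoneclient.httpclient import HTTPClient  # noqa

As jamielennox notes, a targeted import of HTTPClient is enough here; `import *` would re-expose the whole module surface and defeat the point of narrowing the public API.
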
18:37:16 #action bknudson to implement __all__ and rename private methods accordingly
18:37:30 bknudson: if you need any help, let me know. i'll toss some cycles to help out
18:37:44 bknudson: same, I'll pitch in and help if I can
18:37:51 bknudson, same here
18:37:54 bknudson, yea i'm keen on this one too
18:37:55 lbragstad: bknudson: morganfainberg: thanks!
18:38:00 jamielennox: ditto ^
18:38:07 #topic open discussion
18:38:27 aababilov: you wanted to bring up apiclient right?
18:38:34 dolphm, would it be possible to submit extensions after m3?
18:38:35 there's also a high priority code review on the agenda (m3 targeted bp impl): https://review.openstack.org/#/c/38308/
18:38:58 I have a couple reviews for the notification stuff if people want to pick through them
18:39:00 fabio: identity-api docs or implementation?
18:39:07 fabio: and for havana or icehouse?
18:39:13 dolphm, both
18:39:13 stevemar: morganfainberg: thanks for reviewing
18:39:34 np lbragstad
18:39:35 lbragstad: of course.
18:39:58 dolphm, after Havana m3 and before Icehouse m1
18:40:17 fabio: absolutely not
18:40:37 fabio: you're welcome to submit identity-api changes anytime, though
18:40:48 do we have some way to check in changes not to master? a topic branch or something?
18:40:49 fabio: their scope of impact will be limited to the next api revision, though
18:41:07 bknudson: sure, push to the corresponding branch in gerrit
18:41:13 dolphm: thanks for the clarification
18:41:24 dolphm: is there any activity on https://bugs.launchpad.net/keystone/+bug/1205506? Do you mind if I start working on it?
18:41:25 do we want them sitting in gerrit for months?
18:41:26 Launchpad bug 1205506 in keystone "get_group_project_roles() asks same ldap query for all groups associated with user" [Medium,Triaged]
18:42:00 bknudson: it'll likely mean heavy rebasing by the time they go in.
18:42:03 in either case.
18:42:12 all: please review OS-EP-FILTER, #link https://review.openstack.org/#/c/33118/
18:42:24 fabio: i'll look at that today.
18:42:27 fabio: will do, it's looking pretty good
18:42:32 rverchikov: i'd ask ayoung (who's not online right now) since it's currently assigned to him
18:42:33 I'm in for 2 minutes... anyone have anything burning for me?
18:42:51 ayoung: you were gone?
18:42:58 rverchikov: in the meantime, feel free to comment on the bug / request bug assignment / suggest a solution
18:43:08 dolphm: thanks
18:43:09 rverchikov, patch submitted for that, I think
18:43:12 sorry for the delay
18:43:14 rverchikov: slash, ayoung is back!
18:43:16 I'm here
18:43:39 ayoung: could you give a link?..
18:43:43 bknudson: changes to topic branches can merge
18:43:45 quick question regarding the caching implementation. I am certain i'll have the caching core and token (revocation list, token, validate) done for M3. any thoughts on other systems i should see about hitting if possible? or just wait for I?
18:44:15 rverchikov, https://review.openstack.org/#/c/40283/
18:44:15 morganfainberg: i noticed that get_domain was hit repeatedly and could benefit
18:44:24 morganfainberg: so, maybe a bit of identity?
18:44:29 rverchikov, if that is not sufficient, then, yes, take it and run
18:44:39 morganfainberg: assignment is probably hit pretty hard as well
18:44:42 dolphm: i'll aim for some id/assignment where the revocations aren't ghastly
18:44:45 erm
18:44:50 s/revocation/invalidation
18:45:10 some are a rabbit hole to handle invalidates for atm
18:45:12 does the revocation list get cached?
18:45:25 bknudson: on the remote ends, i think it does.
18:45:29 that would help when there's lots of tokens.
18:45:30 morganfainberg: domain crud should be about as complex as the token operations, so cache invalidation should be similar
18:45:37 it gets fetched on a schedule
18:45:41 the clients are requesting every second.
18:45:45 bknudson: but in my cache proposal, yes, it is cached
18:45:55 the default is set in the auth_token middleware in client code
18:45:58 and explicitly refreshed on token revocation instead of just invalidated.
18:46:17 morganfainberg: that's going to be great.
18:47:09 bknudson: yeah, over time i want to have most anything hit regularly cached.
18:47:16 morganfainberg, +1
18:47:26 morganfainberg, caches that we can control are key
18:47:36 otherwise, people will cache outside of Keystone, and that will be bad
18:47:41 this is double plus good
18:47:43 ayoung: exactly
18:48:04 dogpile.cache was just accepted into requirements, so i'll be doing cleanup and posting changesets for review shortly
18:50:08 morganfainberg, i haven't looked at dogpile, but is it something that should be looked at for what is currently done by memcache?
18:50:18 jamielennox: dogpile can use a memcache backend, redis, in-memory, file-based, etc.
18:50:18 http provides ways to cache and check if something has changed since some time...
18:50:19 morganfainberg: awesome, i didn't see that
18:50:41 dolphm: pending gate.
18:51:05 jamielennox: dogpile.cache can back onto memcache
18:51:44 morganfainberg: what's the benefit over a kvs driver?
18:51:45 gyee, but at least for auth_token the token is going to need to be shared across processes
18:52:10 dolphm: in theory, you can hook a proxy in front of it and change behavior. other than that, maybe nothing
18:52:15 dolphm: i'll need to think about it
18:52:17 gyee, keyring can do it but i'm not sure it's advisable, and i think it's doing memcache there anyway
18:52:30 dolphm: well, and the decorators can hook into it
18:52:32 I don't think keyrings are meant for sharing across processes
18:52:33 jamielennox: why does auth_token need to share data across processes?
18:52:49 dolphm, tokens across httpd workers
18:52:49 dolphm: doesn't auth_token have a concept of a memcache store as is?
18:52:56 jamielennox: i asked why
18:53:26 I guess to prevent multiple httpd worker threads all hitting keystone for an individual token
18:53:29 morganfainberg: yes
18:53:34 but i'm pretty sure that's in memcache anyway
18:53:47 we'll need to look at those options, but it's doable.
18:53:53 jamielennox: so, it's a nice-to-have performance gain, not a 'need'
18:54:01 dolphm, that's how it works when auth_token is used in conjunction with Swift cache
18:54:09 gyee: ahh.
18:54:12 token data is shared across all proxies
18:54:20 gyee: understood
18:54:37 i'm pretty sure it works that way now anyway, it was just a question of whether it was worth extending
18:55:02 the keystoneclient keyring was meant for the CLI
18:55:24 gyee, right, unfortunately it was implemented in HTTPClient rather than in the shell
18:55:31 ick.
18:55:46 yeah, I smell refactoring :)
18:56:00 just leave the old one there and deprecated.
18:56:47 the review i posted earlier at least abstracts it to a general cache object; the idea was to give it a memcache backend when it got to auth_token
18:57:01 bknudson, sure, it should be in the OpenStack CLI anyway
18:57:41 jamielennox, the auth_token cache is abstracted
18:57:54 it was implemented exactly the same as Nova
18:58:25 gyee, i don't know enough about how caching there works
18:58:31 you can configure it to use an existing cache (i.e. swift_cache) or the default one
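
A generic dogpile.cache usage sketch showing the pattern morganfainberg describes: a configurable backend region, a cache decorator on a hot call like get_domain, and an explicit refresh on write rather than a plain invalidation (mirroring the refresh-on-revocation idea above). The TTL, the toy data store, and update_domain are placeholders; this is not the pending keystone changesets:

    from dogpile.cache import make_region

    # The in-memory backend keeps this example self-contained; swap in
    # 'dogpile.cache.memcached' (or redis, file, etc.) for a shared cache.
    region = make_region().configure(
        'dogpile.cache.memory',
        expiration_time=600,  # placeholder TTL, in seconds
    )

    # Toy stand-in for the real identity backend.
    _DB = {'default': {'id': 'default', 'name': 'Default'}}


    @region.cache_on_arguments()
    def get_domain(domain_id):
        # Cache miss falls through to the (stand-in) backend lookup.
        return _DB[domain_id]


    def update_domain(domain_id, values):
        _DB[domain_id].update(values)
        # Refresh the cached entry immediately rather than just dropping it,
        # so the next reader never pays the miss.
        get_domain.refresh(domain_id)

dogpile.cache attaches invalidate()/refresh() helpers to the decorated function, which is the "decorators can hook into it" point made above.
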
18:59:30 dolphm: clarification: the icehouse branch actually opens when we reach keystone havana-rc1
18:59:42 so we think the keyring work should be done in python-openstackclient?
18:59:56 i.e. whenever you get to the bottom of your RC bugs list and produce a realistic release candidate
19:00:00 bknudson, no, it should be in the OpenStack CLI
19:00:11 ttx: ah, that makes sense. thank you!
19:00:14 right, that's python-openstackclient
19:00:21 especially if we want the auth code to be embedded by all the clients
19:00:29 #link http://wiki.openstack.org/ReleaseCycle
19:00:34 gyee: python-openstackclient == the OpenStack CLI
19:00:40 o/
19:00:41 #endmeeting