18:01:55 <dolphm> #startmeeting keystone
18:01:56 <openstack> Meeting started Tue Aug 20 18:01:55 2013 UTC and is due to finish in 60 minutes.  The chair is dolphm. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:57 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:59 <openstack> The meeting name has been set to 'keystone'
18:02:01 * topol opening night jitters?
18:02:16 <dolphm> i should write this stuff in advance and just copy/paste it
18:02:17 <dolphm> #topic FeatureProposalFreeze & Havana milestone 3
18:02:39 <dolphm> m3 is cut sept 4 (2 weeks-ish)!
18:02:49 <morganfainberg> rolling up fast!
18:02:57 <dolphm> and more imminently, feature proposal freeze is starting to hit for several projects this week
18:02:59 <gyee> dolphm, what's 8/28 date then?
18:03:01 <dolphm> ours is next week
18:03:11 <dolphm> #link https://wiki.openstack.org/wiki/FeatureProposalFreeze
18:03:17 <dolphm> gyee: read the description ^
18:03:39 <gyee> ah
18:03:44 <bknudson> when's FeatureFreeze?
18:04:01 <bknudson> 9/4
18:04:02 <dolphm> September 4th
18:04:08 <dolphm> #link https://wiki.openstack.org/wiki/Havana_Release_Schedule
18:04:14 <dolphm> overall release schedule ^
18:04:47 <dolphm> feature freeze takes effect with milestone-proposed / release-proposed is created
18:05:08 <dolphm> then we open back up for icehouse
18:05:33 <dolphm> so, on a related note...
18:05:40 <dolphm> #topic pagination vs feature proposal freeze
18:05:44 <dolphm> #link https://blueprints.launchpad.net/keystone/+spec/pagination-backend-support
18:05:48 <henrynash> ah!
18:06:01 <dolphm> the one bp that has no implementation in review yet!
18:06:21 <dolphm> henrynash: i'm not caught up on the mailing list thread, but as far as i've read, there's certainly some discouragement against pursuing pagination at all?
18:06:44 <gyee> dolphm, right, thought we are going to tackle filtering first
18:06:57 <henrynash> so if you follow the url spec in the bp, I wrote up an etherpad to summarize
18:07:12 <henrynash> #link https://etherpad.openstack.org/pagination
18:07:28 <bknudson> there doesn't seem to be a question about better filtering
18:07:48 <henrynash> bknudson: agreed…everyone wants that
18:07:53 <gyee> we should go after filtering first, at least that area is more certain
18:08:05 <dolphm> henrynash: are you pursuing filtering in havana?
18:08:20 <henrynash> so I have a working filtering system that allows drivers to filter if they can; the controller will if they can't
18:08:24 <dolphm> henrynash: should the pagination bp be un-targeted from havana?
18:08:38 <henrynash> I also have pagination work as well
18:08:41 <dolphm> (is there a filtering-in-drivers bp?)
18:08:48 <gyee> we should separate them out into two BPs
18:08:53 <morganfainberg> gyee: ++
18:09:03 <henrynash> dolphm: no..but maybe we should split it
18:09:06 <dolphm> definitely
18:09:22 <morganfainberg> i also think with filtering, if leveraged correctly, the pagination question might go away.
18:09:27 <henrynash> ok, that's fine - I can do that
18:09:32 <morganfainberg> leveraged/implemented/etc
18:09:34 <dolphm> henrynash: sounds like pagination is dependent on filtering?
18:09:40 <henrynash> so here's the rub, however
18:09:55 <henrynash> if we define really rich filtering which only some of our backends can do....
18:09:56 <dolphm> morganfainberg: especially if you require filtering
18:10:02 <morganfainberg> dolphm: exactly
18:10:11 <bknudson> kvm does not do filtering.
18:10:16 <bknudson> oops, kvs
18:10:19 <henrynash> then we can't do things like limit the number of responses or paginate at the driver level
18:10:21 <dolphm> bknudson: no one does kvs
18:10:27 <morganfainberg> bknudson: kvs is a special case imo
18:11:14 <henrynash> if we don't have parity in what filters LDAP and SQL support, then we will still have long response times for the "poorer" backend
18:11:22 <gyee> henrynash, not sure if I understand
18:11:27 <gyee> they are two separate issues
18:11:28 <bknudson> LDAP has pretty good filtering.
18:11:34 <bknudson> even does soundex.
18:11:40 <henrynash> bknudson: sounds good
18:11:41 <bknudson> most of them do anyways
18:11:45 <dolphm> henrynash: what's the disparity in your implementation?
18:11:52 <morganfainberg> henrynash: perhaps it makes sense to allow the driver to advertise what it can do? encourage filtering as the 1st choice (obviously tackle filtering first as a concept)
18:12:12 <henrynash> so the issue is conceptual, not implementation
18:12:44 <henrynash> assuming that we need to either paginate or put a hard limit on how many responses we want back from SQL or LDAP....
18:12:44 <gyee> morganfainberg, sure, drivers should tell Keystone what they can do
18:13:14 <henrynash> then you can't do that if you don't implement ALL your filters in the driver
18:13:24 <henrynash> …i.e. by the SQL or LDAP backend
18:13:42 <gyee> henrynash, you don't have to implement ALL
18:14:04 <dolphm> henrynash: but what *can't* be implemented? why not require the implementation satisfy appropriate tests in test_backend?
18:14:05 <henrynash> but if you don't, you can't limit the output (say max 100 entries)
18:14:06 <morganfainberg> henrynash: you could also pass through filtering arguments if they are uniform if the driver supports it
18:14:06 <gyee> you just need to document what you can do
18:14:17 <dolphm> henrynash: i don't see a use case (yet) for special casing a use case for filtering in drivers
18:14:39 <dolphm> err special casing *a specific filter* in a driver
18:14:50 <henrynash> dolphm: when I say drivers I mean use the filtering of the underlying database (SQL or LDAP)
18:14:56 <dolphm> henrynash: right
18:15:01 <gyee> just pass whatever query params in the drivers and we are done :)
18:15:15 <henrynash> sure, I've done that
18:15:17 <morganfainberg> gyee: ++, just make sure we make the params uniform.
18:15:39 <henrynash> I think this is too complex an issue to discuss on IRC...
18:15:55 <dolphm> henrynash: you said you had no disparity in filtering capability between LDAP and SQL - so test equally in test_backend and ensure all driver implementations maintain feature parity
18:16:34 <henrynash> dolphm:…ok, so that means we will ONLY support filtering at the capability level that is the lowest common denominator between SQL and LDAP
18:16:42 <henrynash> dolphm: that might be ok
18:17:05 <henrynash> dolphm: we can't add "extra filtering at the controller level"
18:17:23 <gyee> henrynash, we need all the query params
18:17:31 <dolphm> henrynash: so, split the bp into filtering & pagination, and put something up in time for feature proposal freeze to continue the discussion?
18:17:32 <henrynash> dolphm: for those drivers that are doing their own filtering
18:17:32 <gyee> in the drivers
18:18:44 <henrynash> dolphm: I guess, I see them more inter-related, but ok to split
18:18:55 <jamielennox> I've missed everything that's come before this, but: no to actually doing filtering and pagination in the controller, yes to standardizing the names of parameters in controllers
18:19:04 <morganfainberg> henrynash: i think we keep it at the lowest common denominator to start (filtering), if data shows that is insufficient we can approach how to pass other filtering through?  It'll be more work to keep compat, but i think its the easiest to approach.
18:19:08 <henrynash> dolphm: that's the point I am trying to make…IF we want to paginate/limit, then it affects what filtering we can do where
18:19:38 <bknudson> do the backends do the paging, too?
18:19:42 <dolphm> #action henrynash to split new filtering blueprint out of existing 'pagination-backend-support' blueprint
18:19:45 <dolphm> bknudson: not currently
18:19:47 <henrynash> bknudson: in mine, yes
18:20:22 <bknudson> so not sure what the issue is... the backend gets the query parameters (filter & paging) and handles them.
18:20:39 <henrynash> bknudson: that's what I'm doing
18:21:22 <dolphm> (i only put this on the agenda to get a status update vs feature proposal freeze, which i think we've got ;)
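[editor's note: a rough, self-contained sketch of the filtering split debated above — the driver applies the filters its backend understands natively, and the controller post-filters the rest in Python. Class and method names here are illustrative, not keystone's actual API.]

```python
# Hypothetical sketch, not keystone code: driver advertises supported
# filters (as morganfainberg/gyee suggest), controller handles the rest.

class SQLDriver:
    # Filters this backend can translate into a native query.
    supported_filters = {'name', 'enabled'}

    def list_users(self, filters):
        native = {k: v for k, v in filters.items()
                  if k in self.supported_filters}
        # A real driver would build a SQL (or LDAP) query from `native`;
        # here we filter an in-memory list to keep the sketch runnable.
        users = [{'name': 'alice', 'enabled': True, 'email': 'a@example.com'},
                 {'name': 'bob', 'enabled': False, 'email': 'b@example.com'}]
        return [u for u in users
                if all(u.get(k) == v for k, v in native.items())]


def list_users_controller(driver, filters):
    """Controller: let the driver filter what it can, post-filter the rest."""
    users = driver.list_users(filters)
    leftover = {k: v for k, v in filters.items()
                if k not in driver.supported_filters}
    # henrynash's caveat lands here: once the controller must post-filter,
    # the driver can no longer safely apply a result limit or paginate.
    return [u for u in users
            if all(u.get(k) == v for k, v in leftover.items())]
```

For example, `list_users_controller(SQLDriver(), {'enabled': True, 'email': 'a@example.com'})` returns only alice: `enabled` is handled by the driver, `email` by the controller.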
18:21:25 <dolphm> moving on..
18:21:26 <dolphm> #topic keystoneclient library - public vs private interfaces
18:21:28 <dolphm> bknudson: !
18:21:40 <bknudson> this has come up in an irc discussion last week...
18:21:54 <bknudson> we've got several reviews affecting keystoneclient
18:22:05 <dolphm> sounds like this change https://review.openstack.org/#/c/40173/ broke what was perceived to be a public API
18:22:09 <bknudson> and to be safe, we really should not be changing anything public
18:22:20 <dolphm> bknudson: but we haven't defined what's public and what's private?
18:22:28 <morganfainberg> dolphm: precisely
18:22:28 <bknudson> There's a pep8 standard that says how to define public /private
18:22:34 <bknudson> but we're not really following it.
18:22:41 <dolphm> #link http://www.python.org/dev/peps/pep-0008/#public-and-internal-interfaces
18:22:51 <bknudson> so not sure what to do about it... assume everything is public?
18:22:58 <bknudson> do the work to actually define what's public/private?
18:23:06 <bknudson> (bring the code up to pep8)
18:23:27 <topol> we really need to delineate between public and private. Our folks get burned by this all the time and yell at Brant or me
18:23:43 <dolphm> topol: in the client library?
18:23:48 <bknudson> I'll try to put something together for the mailing list... I had intended to do that.
18:23:51 <jamielennox> so regarding the above review it's fairly easy to bring back the compatibility they want
18:23:51 <topol> dolphm, in general
18:24:03 <topol> wherever we are being lax
18:24:08 <dolphm> topol: in python interfaces?
18:24:14 <jamielennox> however for example this one: https://review.openstack.org/#/c/42254/ it removes a function that has really only private usage but is defined in a public way
18:24:27 <bknudson> jamielennox: right, those are the weirdy ones.
18:24:32 <dolphm> topol: i want to understand specifically where we're causing people pain
18:24:37 <topol> dolphm, even in our driver code folks perceived interfaces to be public and got burned
18:24:50 <dolphm> topol: we never hear anyone speak up
18:24:53 <bknudson> jamielennox: safe thing to do, and I think what's indicated to do by openstack guidelines, is we don't allow it.
18:24:56 <topol> brant, u remember I hope
18:25:23 <gyee> dolphm, fyi we (HP) are extending keystoneclient classes
18:25:26 <bknudson> topol: I remember... it wasn't the client it was the identity API.
18:25:29 <topol> bknudson and I handled the uproar behind the scenes
18:25:34 <dolphm> gyee: which ones
18:25:51 <topol> bknudson, agreed. but the principle applies
18:25:53 <gyee> both v2 and v3 client classes
18:26:06 <morganfainberg> dolphm: same with metacloud (limitedly)
18:26:15 <bknudson> gyee: did you think you were extending a public or private interface?
18:26:16 <morganfainberg> we are extending client classes
18:26:26 <morganfainberg> but nothing burned yet.
18:26:58 <bknudson> and, if you thought it was public, why?
18:27:10 <gyee> bknudson, the client classes are public
18:27:11 <dolphm> a "workaround": release the next version of keystoneclient as "0.4.0" (dissuading expectation of backwards compatibility)
18:27:23 <jamielennox> so i think there are a couple of ways forward. First we need a standard way to start deprecating functions in keystoneclient. Then we need to be super vigilant about enforcing _function_names for things that are expected private
18:27:24 <bknudson> pep8 defines what's public, and by that definition nothing is public since it's not documented at the keystoneclient level.
18:27:53 <morganfainberg> bknudson: i think we need to be explicit, even if technically we aren't advertising anything as public
18:28:02 <gyee> bknudson, exactly, key is documentation
18:28:06 <dolphm> jamielennox: we're talking about classes specifically here, not just methods
18:28:20 <bknudson> you can name classes with _
18:28:21 <jamielennox> dolphm, both i would say
18:28:22 <bknudson> and packages.
18:28:23 <morganfainberg> dolphm: i think his concern still applies for the initial transition.
18:28:59 <bknudson> one option is we fork keystoneclient2 from keystoneclient ... seems drastic
18:29:13 <morganfainberg> bknudson: or we do a staged deprecation
18:29:18 <morganfainberg> transitional + warnings
18:29:21 <jamielennox> bknudson, well we still have the option to do keystoneclient 1
18:29:22 <morganfainberg> though.. that has other pitfalls
18:29:32 <dolphm> bknudson: jamielennox: and modules?
18:29:49 <bknudson> dolphm: and modules
18:30:00 <jamielennox> i think there are modules like utils that are not supposed to be public on keystoneclient
18:30:02 <gyee> yes modules
18:30:24 <morganfainberg> correct, _ prefix is good for any namespace in theory
18:30:28 <morganfainberg> to mark it as private
18:30:42 <dolphm> backwards compatibility can be restored in jamielennox's patch by introducing an "empty" keystoneclient.client implementation that just does "from keystoneclient.httpclient import *"
18:30:52 <gyee> or use the manager-driver approach, everything in manager is by definition public
18:31:07 <morganfainberg> dolphm: i would actually encourage something that issues a warning coupled with that if thats the approach
18:31:13 <jamielennox> dolphm, right that first one is easy. you don't need import * i think you just need import HTTPClient
18:31:38 <dolphm> morganfainberg: ++
18:31:38 <jamielennox> i was waiting for the discoverability thing to go forward as it's using client.py but i can add that first
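[editor's note: a rough sketch of the compatibility shim dolphm and morganfainberg describe above — re-export the moved name and emit a warning. In the real patch this would live in keystoneclient/client.py; here the moved module is faked with a SimpleNamespace so the pattern is shown self-contained.]

```python
# Illustrative only; not the actual keystoneclient change.
import types
import warnings

httpclient = types.SimpleNamespace()   # stands in for keystoneclient.httpclient


class HTTPClient:
    """Placeholder for the class that moved to the httpclient module."""


httpclient.HTTPClient = HTTPClient


def import_legacy_client():
    """What importing the old keystoneclient.client module would do:
    warn, then re-export the moved name unchanged."""
    warnings.warn(
        "keystoneclient.client is deprecated; use keystoneclient.httpclient",
        DeprecationWarning, stacklevel=2)
    legacy = types.SimpleNamespace()
    legacy.HTTPClient = httpclient.HTTPClient   # same object, not a copy
    return legacy
```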
18:32:07 <morganfainberg> i also think proper use of __all__ is important for things that are public
18:32:25 <bknudson> pep8 also says to use __all__
18:32:25 <dolphm> morganfainberg: which we don't use anywhere
18:32:50 <bknudson> therefore everything in keystoneclient is private therefore we can just change it.
18:32:50 <morganfainberg> i saw a rough implementation that populated __all__ with a decorator @public
18:33:07 <morganfainberg> it might be worth using that kind of _very_ explicit mechanism
18:33:11 <morganfainberg> might be too much overhead though
18:33:14 <jamielennox> morganfainberg, yea, it was interesting but i think it's just better to do manually
18:33:23 <morganfainberg> regardless of implementation
18:34:06 <bknudson> so what do people think about enforcing pep8 public/private on keystoneclient changes?
18:34:18 <morganfainberg> bknudson: ++ all for it.
18:34:27 <dolphm> bknudson: i wish i could agree, but i'd argue that *any* examples we have documented in openstack-manuals etc should be considered public API's
18:34:44 <bknudson> dolphm: yes, I was going to look for examples in manuals...
18:34:46 <dolphm> bknudson: ++ moving forward
18:35:07 <bknudson> there is the one doc in keystoneclient that just mentions v2 client and tenants
18:35:13 <gyee> sure, following pep8 is a good start
18:35:26 <jamielennox> bknudson, absolutely in future, i'm not sure about what's currently there though
18:35:52 <dolphm> #action all core reviewers familiarize yourself with http://www.python.org/dev/peps/pep-0008/#public-and-internal-interfaces and begin to enforce on reviews moving forward
18:35:53 <jamielennox> taking for example the one in the review get_auth_ref_from_keyring had no business ever being public api and won't be in any examples
18:35:53 <dolphm> jamielennox: ++
18:35:57 <bknudson> I'll try to make the time to see what I can do about going through and marking things deprecated, or setting __all__, etc.
18:36:18 <bknudson> docstrings on things that are meant to be public
18:36:23 <morganfainberg> bknudson / jamielennox: if there is a question, we make it private and provide a wrapper that logs deprecated
18:36:39 <dolphm> anyone want to take a pass at implementing __all__ everywhere, and renaming private methods appropriately?
18:36:48 <jamielennox> but yes, i think we can say that what is there is public and we start deprecating for a release cycle
18:37:00 <dolphm> that way we break everyone all at once, and can release 0.4.0 accordingly :P
18:37:02 <bknudson> dolphm: put me down. I'll try to make the time for it.
18:37:08 <bknudson> I assume there isn't a huge hurry for this.
18:37:16 <dolphm> #action bknudson to implement __all__ and rename private methods accordingly
18:37:30 <morganfainberg> bknudson: if you need any help, let me know.  i'll toss some cycles to help out
18:37:44 <lbragstad> bknudson: same, I'll pitch in a help if I can
18:37:51 <stevemar> bknudson, same here
18:37:54 <jamielennox> bknudson, yea i'm keen on this one too
18:37:55 <dolphm> lbragstad: bknudson: morganfainberg: thanks!
18:38:00 <dolphm> jamielennox: ditto ^
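[editor's note: a sketch of the "make it private, provide a wrapper that logs deprecated" transition morganfainberg proposes above. The function name mirrors the review under discussion, but the code is illustrative, not the actual keystoneclient change.]

```python
# Illustrative deprecation wrapper: old public name warns and delegates
# to its now-private successor for one release cycle.
import functools
import warnings


def deprecated(replacement):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                "%s is deprecated and is now internal (%s)"
                % (func.__name__, replacement.__name__),
                DeprecationWarning, stacklevel=2)
            return replacement(*args, **kwargs)
        return wrapper
    return decorator


def _get_auth_ref_from_keyring(keyring_key):
    """The now-private implementation (stubbed here)."""
    return {'keyring_key': keyring_key}


@deprecated(_get_auth_ref_from_keyring)
def get_auth_ref_from_keyring(keyring_key):
    """Old public name, kept only for the deprecation cycle."""
```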
18:38:07 <dolphm> #topic open discussion
18:38:27 <morganfainberg> aababilov: you wanted to bring up apiclient right?
18:38:34 <fabio> dolphm, would it be possible to submit extensions after m3?
18:38:35 <dolphm> there's also a high priority code review on the agenda (m3 targeted bp impl): https://review.openstack.org/#/c/38308/
18:38:58 <lbragstad> I have a couple reviews for the notification stuff if people want to pick through it
18:39:00 <dolphm> fabio: identity-api docs or implementation?
18:39:07 <dolphm> fabio: and for havana or icehouse?
18:39:13 <fabio> dolphm, both
18:39:13 <lbragstad> stevemar: morganfainberg thanks for reviewing
18:39:34 <stevemar> np lbragstad
18:39:35 <morganfainberg> lbragstad: of course.
18:39:58 <fabio> dolphm, after Havana m3 and before Icehouse m1
18:40:17 <dolphm> fabio: absolutely not
18:40:37 <dolphm> fabio: you're welcome to submit identity-api changes anytime, though
18:40:48 <bknudson> do we have some way to check in changes not to master? a topic branch or something?
18:40:49 <dolphm> fabio: their scope of impact will be limited to the next api revision, though
18:41:07 <dolphm> bknudson: sure, push to the corresponding branch in gerrit
18:41:13 <fabio> dolphm: thanks for the clarification
18:41:24 <rverchikov> dolphm: is there any activity on https://bugs.launchpad.net/keystone/+bug/1205506? Do you mind if I start working on it?
18:41:25 <bknudson> do we want them sitting in gerrit for months?
18:41:26 <uvirtbot> Launchpad bug 1205506 in keystone "get_group_project_roles() asks same ldap query for all groups associated with user" [Medium,Triaged]
18:42:00 <morganfainberg> bknudson: it'll mean likely heavy rebasing by the time they go in.
18:42:03 <morganfainberg> in either case.
18:42:12 <fabio> all: please review OS-EP-FILTER, #link https://review.openstack.org/#/c/33118/
18:42:24 <morganfainberg> fabio: i'll look at that today.
18:42:27 <stevemar> fabio: will do, it's looking pretty good
18:42:32 <dolphm> rverchikov: i'd ask ayoung (who's not online right now) since it's currently assigned to him
18:42:33 <ayoung> I'm in for 2 minutes...anyone have anything burning for me?
18:42:51 <stevemar> ayoung: you were gone?
18:42:58 <dolphm> rverchikov: in the mean time, feel free to comment on the bug / request bug assignment / suggest a solution
18:43:08 <rverchikov> dolphm: thanks
18:43:09 <ayoung> rverchikov, patch submitted for that, I think
18:43:12 <aababilov> sorry for delay
18:43:14 <dolphm> rverchikov: slash, ayoung is back!
18:43:16 <aababilov> I'm here
18:43:39 <rverchikov> ayoung: could you give a link?..
18:43:43 <dolphm> bknudson: changes to topic branches can merge
18:43:45 <morganfainberg> quick question regarding caching implementation.  I am certain i'll have the caching core and token (revocation list, token, validate) done for M3.  any thoughts on other system i should see about hitting if possible? or just wait for I?
18:44:15 <ayoung> rverchikov, https://review.openstack.org/#/c/40283/
18:44:15 <dolphm> morganfainberg: i noticed that get_domain was hit repeatedly and could benefit
18:44:24 <dolphm> morganfainberg: so, maybe a bit of identity?
18:44:29 <ayoung> rverchikov, if that is not sufficient, then, yes, take it and run
18:44:39 <dolphm> morganfainberg: assignment is probably hit pretty hard as well
18:44:42 <morganfainberg> dolphm: i'll aim for some id/assignment where the revocations aren't ghastly
18:44:45 <morganfainberg> erm
18:44:50 <morganfainberg> s/revocation/invalidation
18:45:10 <morganfainberg> some are a rabbit hole to handle invalidates for atm
18:45:12 <bknudson> does the revocation list get cached?
18:45:25 <morganfainberg> bknudson: on the remote ends, i think it does.
18:45:29 <bknudson> that would help when there's lots of tokens.
18:45:30 <dolphm> morganfainberg: domain crud should be about as complex as the token operations, so cache invalidation should be similar
18:45:37 <ayoung> it gets fetched on a schedule
18:45:41 <bknudson> the clients are requesting every second.
18:45:45 <morganfainberg> bknudson: but in my cache proposal, yes it is cached
18:45:55 <ayoung> default is set in auth_token middleware in client code
18:45:58 <morganfainberg> and explicitly refreshed on token revocation instead of just invalidated.
18:46:17 <bknudson> morganfainberg: that's going to be great.
18:47:09 <morganfainberg> bknudson: yeah, over time i want to have most anything hit regularly cached.
18:47:16 <ayoung> morganfainberg, +1
18:47:26 <ayoung> morganfainberg, caches that we can control are key
18:47:36 <ayoung> otherwise, people will cache outside of Keystone, and that will be bad
18:47:41 <ayoung> this is double plus good
18:47:43 <morganfainberg> ayoung: exactly
18:48:04 <morganfainberg> dogpile.cache was just accepted into requirements, so i'll be doing cleanup and posting changesets for review shortly
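[editor's note: dogpile.cache exposes this pattern via `region.cache_on_arguments()`; below is a stdlib-only sketch of the same memoize-plus-explicit-invalidation idea for something like `get_domain`. Keystone's actual implementation differs; this only illustrates the mechanism discussed for tokens and the revocation list.]

```python
# Stdlib-only sketch of decorator-driven caching with explicit invalidation.
import functools

_store = {}


def cached(func):
    @functools.wraps(func)
    def wrapper(*args):
        key = (func.__name__,) + args
        if key not in _store:
            _store[key] = func(*args)
        return _store[key]
    # Explicit invalidation hook, e.g. after a revocation event.
    wrapper.invalidate = lambda *args: _store.pop((func.__name__,) + args, None)
    return wrapper


backend_hits = []


@cached
def get_domain(domain_id):
    backend_hits.append(domain_id)   # stands in for a driver/database call
    return {'id': domain_id, 'enabled': True}
```

Repeated `get_domain('d1')` calls hit the backend once until `get_domain.invalidate('d1')` forces a refresh.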
18:50:08 <jamielennox> morganfainberg, i haven't looked at dogpile, but is it something that should be looked at for what is currently done by memcache?
18:50:18 <morganfainberg> jamielennox: dogpile can use memcache backend, redis, in-memory, file-based, etc.
18:50:18 <bknudson> http provides ways to cache and check if something is changed since some time...
18:50:19 <dolphm> morganfainberg: awesome, i didn't see that
18:50:41 <morganfainberg> dolphm: pending gate.
18:51:05 <dolphm> jamielennox: dogpile.cache can back onto memcache
18:51:44 <dolphm> morganfainberg: what's the benefit over a kvs driver?
18:51:45 <jamielennox> gyee, but at least for auth_token the token is going to need to be shared across process
18:52:10 <morganfainberg> dolphm: in theory, you can hook a proxy in front of it and change behavior.  other than that, maybe nothing
18:52:15 <morganfainberg> dolphm: i'll need to think about it
18:52:17 <jamielennox> gyee, keyring can do it but i'm not sure it's advisable, and i think it's doing memcache there anyway
18:52:30 <morganfainberg> dolphm: well and the decorators can hook into it
18:52:32 <gyee> I don't think keyring is meant for sharing across processes
18:52:33 <dolphm> jamielennox: why does auth_token need to share data across processes?
18:52:49 <jamielennox> dolphm, tokens across httpd workers
18:52:49 <morganfainberg> dolphm: doesn't auth_token have a concept of a memcache store as is?
18:52:56 <dolphm> jamielennox: i asked why
18:53:26 <jamielennox> I guess to prevent multiple httpd worker threads all hitting keystone for an individual token
18:53:29 <dolphm> morganfainberg: yes
18:53:34 <jamielennox> but i'm pretty sure that's in memcache anyway
18:53:47 <morganfainberg> we'll need to look at those options, but it's doable.
18:53:53 <dolphm> jamielennox: so, it's a nice-to-have performance gain, not a 'need'
18:54:01 <gyee> dolphm, that's how it works when auth_token is used in conjunction with Swift cache
18:54:09 <morganfainberg> gyee: ahh.
18:54:12 <gyee> token data is shared across all proxies
18:54:20 <dolphm> gyee: understood
18:54:37 <jamielennox> i'm pretty sure it works that way now anyway, it was just if it was worth extending
18:55:02 <gyee> keystoneclient keyring was meant for CLI
18:55:24 <jamielennox> gyee, right, unfortunately it was implemented in the HTTPClient rather than in shell
18:55:31 <morganfainberg> ick.
18:55:46 <gyee> yeah, I smell refactoring :)
18:56:00 <bknudson> just leave the old one there and deprecated.
18:56:47 <jamielennox> the review i posted earlier at least abstracts it to a general cache object, the idea was to give a memcache backend when it got to auth_token
18:57:01 <gyee> bknudson, sure, it should be in OpenStack CLI anyway
18:57:41 <gyee> jamielennox, auth_token cache is abstracted
18:57:54 <gyee> it was implemented exactly the same as Nova
18:58:25 <jamielennox> gyee, i don't know enough about how caching there works
18:58:31 <gyee> you can configure it to use an existing cache (i.e. swift_cache) or the default one
18:59:30 <ttx> dolphm: clarification: icehouse branch actually opens when we reach keystone havana-rc1
18:59:42 <bknudson> so we think the keyring work should be done in python-openstackclient?
18:59:56 <ttx> i.e. whenever you get to the bottom of your RC bugs list and produce a realistic release candidate
19:00:00 <gyee> bknudson, no, should be in OpenStack CLI
19:00:11 <dolphm> ttx: ah, that makes sense. thank you!
19:00:14 <bknudson> right, that's python-openstackclient
19:00:21 <gyee> especially if we want the auth code to be embedded by all the clients
19:00:29 <ttx> #link http://wiki.openstack.org/ReleaseCycle
19:00:34 <stevemar> gyee: python-openstackclient == OpenStack CLI
19:00:40 <dolphm> o/
19:00:41 <dolphm> #endmeeting