18:00:40 <morgan_503> #startmeeting keystone
18:00:41 <henrynash> hi-hoooooo
18:00:41 <openstack> Meeting started Tue Aug 11 18:00:40 2015 UTC and is due to finish in 60 minutes.  The chair is morgan_503. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:41 <david8hu> samueldmq, vs
18:00:42 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:45 <openstack> The meeting name has been set to 'keystone'
18:00:49 <morgan_503> #topic Agenda
18:00:54 <morgan_503> #link https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting
18:00:55 <lbragstad> gyee: I was about -20 then
18:01:03 <samueldmq> gyee, ++ hehe
18:01:10 <morgan_503> #topic Centralized Policies Distribution
18:01:22 <samueldmq> david8hu, just kidding :) it has been fun to run with it
18:01:28 <samueldmq> and there we go
18:01:28 <samueldmq> o/
18:01:32 <hogepodge> o/
18:01:38 <morgan_503> we have lots to cover
18:01:40 <jamielennox> o/
18:01:50 <morgan_503> so... going to timebox some.
18:01:51 <samueldmq> so, operators feedback is what we missed on this point
18:01:59 <morgan_503> you have ~15m
18:02:01 <morgan_503> max
18:02:11 <samueldmq> people know what we want for this cycle (distribute centralized policies)
18:02:18 <samueldmq> we had the specs, etc, and we got operators feedback
18:02:20 <gyee> samueldmq, we'll bring it in up in the ops midcycle next week
18:02:21 <samueldmq> morgan_503, got it, thanks
18:02:30 <gyee> under "burning issues"
18:02:34 <david8hu> samueldmq, will talk to the operators
18:02:40 <ericksonsantos> \o
18:02:57 <samueldmq> so, operators feedback
18:03:03 <samueldmq> #link https://www.mail-archive.com/openstack-operators@lists.openstack.org/msg02805.html
18:03:32 <samueldmq> about customizing and distributing we got things like:
18:03:35 <samueldmq> "Not the easiest task to either get right, or make sure that the files are distributed around in an HA setting.  But absolutely necessary." - Kris G. Lindgren, Senior Linux Systems Engineer at Go Daddy
18:03:57 <samueldmq> and others, so I think the lack of stakeholders shouldn't be an issue anymore
18:04:42 <samueldmq> and I'd like to request an official decision on the SFE request
18:04:46 <samueldmq> (since we still don't need a FFE)
18:04:55 <samueldmq> #link https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg57416.html
18:05:22 <samueldmq> any objection? comment? :)
18:05:24 <samueldmq> morgan_503, should we vote? or anything else?
18:05:44 <morgan_503> sure. can start a vote unless someone has something else to add
18:05:53 <breton> how ready is it?
18:06:16 <samueldmq> gyee, yes, bringing it up at the ops midcycle is great; that's independent from having it approved internally, though, given the good feedback and good progress in the code/specs
18:06:18 <morgan_503> regardless of state it will be classified as experimental and optional in liberty at best
18:06:20 <samueldmq> :)
18:06:37 <lbragstad> do we have a sponsor?
18:06:37 <samueldmq> breton, all the code/specs is under https://review.openstack.org/#/q/status:open+branch:master+topic:bp/dynamic-policies-delivery,n,z
18:06:38 <gyee> samueldmq, I am all for it
18:06:47 <morgan_503> pending adoption and/or solving outstanding critical issues in mitaka it could be considered for stable
18:06:55 <henrynash> morgan_503: let's clarify that… since when I asked that before, ayoung said it would not be experimental
18:07:04 <samueldmq> breton, specs have 2x +2, code is 90%+ submitted and reviewable
18:07:07 <morgan_503> it will be experimental or i -2
18:07:19 <morgan_503> it is the same as all new code in keystone
18:07:22 <lhcheng> samueldmq: I've been pinging our core operations team to give feedback, but we do have a use case for centralizing the policy file. It is more of a nice-to-have than critical.
18:07:27 <morgan_503> a cycle for sussing out before contract is set in stone
18:07:39 <ayoung> can we approve the spec for backlog?
18:07:39 * morgan_503 plays PTL card on this one
18:07:43 <samueldmq> I agree it is experimental
18:07:44 <henrynash> morgan_503: ++
18:07:50 <gyee> you build it, we'll kick the tire
18:07:58 <henrynash> ayoung: I already +2'd it for that
18:08:05 <morgan_503> so i'm happy to vote on SPFE, which would in turn allow the specs to be approved for liberty
18:08:13 <samueldmq> lhcheng, nice, one more good feedback in the list, thanks
18:08:20 <morgan_503> no need to backlog shuffle for the ones targeted at this cycle
18:08:30 <samueldmq> morgan_503, yes I agree with you and henrynash on the experimental flag
18:08:41 <morgan_503> so any concerns before vote?
18:08:48 <morgan_503> ayoung: any spec can be approved for backlog
18:08:54 <morgan_503> ayoung: no SPFE etc needed ever.
18:09:03 <ayoung> morgan_503, I know...It was more a call to action than a request for approval
18:09:10 <morgan_503> ok so.
18:09:20 <breton> #vote yes
18:09:25 <morgan_503> opening vote in... 1min unless someone says otherwise
18:09:37 <dstanek> *do it*
18:09:41 <samueldmq> ayoung, we are voting on making it possible to merge them to L
18:09:47 <samueldmq> ayoung, o/
18:10:16 <morgan_503> #startvote Approve SPFE for policy distribution and scaffolding for dynamic policy? yes,no,i_dislike_concrete_options_but_still_vote_yes
18:10:16 <openstack> Begin voting on: Approve SPFE for policy distribution and scaffolding for dynamic policy? Valid vote options are yes, no, i_dislike_concrete_options_but_still_vote_yes.
18:10:17 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
18:10:27 <ayoung> #vote yes
18:10:29 <bknudson> #vote yes
18:10:31 <gyee> #vote yes
18:10:31 <morgan_503> #vote yes
18:10:34 <samueldmq> #vote yes
18:10:36 <bknudson> if it's experimental then do whatever you want
18:10:40 <breton> #vote yes
18:10:42 <dstanek> #vote yes
18:10:46 <morgan_503> #vote i_dislike_concrete_options_but_still_vote_yes
18:10:46 <david8hu> #vote yes
18:10:54 <henrynash> #vote yes
18:10:57 <lbragstad> #i_dislike_concrete_options_but_still_vote_yes
18:11:03 <lbragstad> #vote i_dislike_concrete_options_but_still_vote_yes
18:11:05 <morgan_503> 1m left on vote
18:11:13 <lhcheng> #vote yes
18:11:14 <bknudson> I'm not signing up to review the changes. If others don't review it then it's not going to get in.
18:11:16 * lbragstad had to try out the new option
18:11:20 <dstanek> i think there are still a few details i don't like, but we can work them out
18:11:23 <jamielennox> don't care
18:11:25 <morgan_503> bknudson: *nod*
18:11:41 <lbragstad> that's why I asked if there was going to be sponsor
18:11:42 <dstanek> bknudson: i've been reviewing a bit and working with samueldmq
18:11:54 <samueldmq> dstanek, thanks
18:11:56 <morgan_503> #endvote
18:11:57 <gyee> lbragstad, on my todo list
18:11:57 <openstack> Voted on "Approve SPFE for policy distribution and scaffolding for dynamic policy?" Results are
18:11:58 <openstack> yes (9): gyee, dstanek, ayoung, lhcheng, bknudson, david8hu, samueldmq, henrynash, breton
18:11:59 <openstack> i_dislike_concrete_options_but_still_vote_yes (2): morgan_503, lbragstad
18:12:05 <morgan_503> ok so i think that is a yes.
18:12:22 <dstanek> ayoung: you win!
18:12:25 <samueldmq> morgan_503, looks right :)
18:12:27 <ayoung> Whaddaya know...miracles do happen
18:12:52 <gyee> dstanek, he's like Charlie Sheen, he always wins
18:12:52 <henrynash> I also think that we must be careful in terms of priority; in Kilo, reviews of Fernet tokens (which was an SFE) took precedence over other things that were submitted before the deadline… some of which didn't get in… that can't happen again
18:12:58 <morgan_503> ayoung, samueldmq: please respond to the ML thread referencing this vote as approval; let's merge the specs - if there are no critical -1s on them, we can merge them today, no 2x+2 needed on the specs (refer to this vote when approving)
18:13:18 <morgan_503> be sure to clearly indicate that this will be experimental like all other code.
18:13:18 <bknudson> we still need to focus on reviews for fernet tokens to fix the bugs
18:13:20 <ayoung> morgan_503, ++  will do my best.
18:13:23 <samueldmq> morgan_503, great
18:13:25 <dstanek> gyee: i only have nsfw responses for that
18:13:28 <lbragstad> bknudson: ++
18:13:29 <samueldmq> #link https://review.openstack.org/#/c/134655/
18:13:29 <samueldmq> #link https://review.openstack.org/#/c/197980/
18:13:34 <ayoung> samueldmq, there's one with a jenkins issue... you are getting that, right?
18:13:38 <samueldmq> the specs are those two ^
18:13:43 <morgan_503> ok moving on
18:13:44 <samueldmq> ayoung, sure
18:13:48 <ayoung> ++
18:13:48 <morgan_503> #topic Raising an exception if no domain specified on user/group/project create
18:13:52 <morgan_503> henrynash: 5m
18:13:54 <henrynash> ok
18:13:58 <samueldmq> yes so please let's get specs merged and review code
18:14:00 <samueldmq> thanks all
18:14:01 <samueldmq> :)
18:14:26 <henrynash> so as the bug indicates, this is where we use the default domain when you don't supply a domain in a create entity call
18:14:55 <morgan_503> henrynash: #link the bug?
18:15:00 <henrynash> this is not in the spec, nor do we really want this….it was in there due to tempest failing (a long time ago)
18:15:09 <jamielennox> bug #1482330
18:15:09 <openstack> bug 1482330 in Keystone "Creating a user/group/project without a domain should raise an exception" [Medium,In progress] https://launchpad.net/bugs/1482330 - Assigned to Henry Nash (henry-nash)
18:15:09 <lbragstad> #link https://bugs.launchpad.net/keystone/+bug/1482330
18:15:09 <stevemar> henrynash: this sounds like it should be a simple validation error
18:15:13 <henrynash> thx
18:15:23 <henrynash> stevemar: and so it should
18:15:39 <breton> won't this break things?
18:15:47 <morgan_503> I agree with stevemar this seems like a validation error
18:15:47 <gyee> henrynash, did we talk about this sometime back, like defaulting it to the caller's domain?
18:15:50 <morgan_503> however....
18:15:52 <henrynash> the question is - is it safe to cut this off after it being out there for a while?
18:16:05 <morgan_503> changing the behavior is API incompatible
18:16:08 <bknudson> you'd have to deprecate the old behavior
18:16:12 <morgan_503> bknudson: ++
18:16:13 <breton> morgan_503: ++
18:16:16 <htruta_> henrynash, we should also consider that in reseller, it won't be necessary
18:16:21 <bknudson> have a config option for it
18:16:26 <henrynash> gyee: we do default to the token's domain… but if they, for instance, used a project token then we take this odd path
18:16:28 <htruta_> we'd infer it from the parent
18:16:29 <ayoung> V3 ... domain should be specified.  V2, default only.  We all agree that is the proper semantics, right?
18:16:34 <morgan_503> bknudson: i'd like to avoid that kind of option
18:16:52 <henrynash> morgan_503: we don't specify this behaviour in the spec
18:17:01 <morgan_503> bknudson: but that is a fine work around for *today*
18:17:05 <bknudson> another option is something on the request (a header or something) like microversioning
18:17:08 <stevemar> we shouldn't be defaulting to the default domain
18:17:25 <gyee> ayoung, yes
18:17:28 <morgan_503> i'm ok with an option to turn off the behavior today
18:17:31 <morgan_503> and we shouldn't default
18:17:38 <morgan_503> ayoung's assessment is correct
18:17:56 <morgan_503> so lets just say make it an option, deprecate old behavior - lets not get into microversions for now
18:18:01 <henrynash> morgan_503: so turn it off (by default), but allow a config switch to turn it back on
18:18:19 <ayoung> when we validate a token,  we should validate V3 only, so as to make the domain information explicit, too
18:18:30 <morgan_503> bknudson: ^ i think this can't be done that way because this isn't security driven
18:18:35 <ayoung> thinking in terms of enforcing policy....
18:18:44 <gyee> I am curious if tempest or something will break if we turn it off
18:18:51 <henrynash> ayoung: understand, I’ll check this
18:18:55 <morgan_503> we have to default it to current behavior and in 2 cycles we can look at turning it off by default
18:18:56 <henrynash> gyee: no, it all passes now
18:19:03 <gyee> henrynash, oh good
18:19:37 <henrynash> morgan_503: even though we don't specify it in the spec?
18:19:39 <morgan_503> i think it needs to maintain current behavior..but bknudson any input before moving on?
18:19:40 <ayoung> microresponse code 400.301
18:20:05 <ayoung> "Bad request, but we used to accept it."
18:20:09 <morgan_503> henrynash: sadly even if the spec doesn't say it... behavior is the contract [unless massive security hole]
18:20:13 <gyee> hah
18:20:16 <bknudson> morgan_503: I agree the default for this release should be to be the same as old behavior
18:20:21 <morgan_503> ayoung: 400.201
18:20:30 <gyee> ++
18:20:36 <ayoung> "Bad request but we accept it anyway?"
18:20:39 <bknudson> and log a message when it's used
18:20:40 <henrynash> morgan_503: ok, got it… and that's why I raised it here. I got what I needed
18:20:45 <morgan_503> cool
18:20:49 <morgan_503> moving on.. next topic
18:20:52 <bknudson> then change default to the new behavior next release
18:21:11 <bknudson> maybe change devstack to the new behavior
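The transitional behavior agreed in this topic (keep the fallback for now, log a deprecation warning when it is used, flip the default in a later release) might be sketched roughly as below; `strict`, `ValidationError`, and `DEFAULT_DOMAIN_ID` are illustrative stand-ins, not Keystone's actual config options or classes.

```python
import logging

LOG = logging.getLogger(__name__)

# Illustrative stand-ins; Keystone's real config option and exception
# classes differ.
DEFAULT_DOMAIN_ID = 'default'


class ValidationError(Exception):
    """Stand-in for a 400 Bad Request validation error."""


def resolve_domain_id(ref, strict=False):
    """Resolve the domain_id for a user/group/project create call.

    Old behavior: silently fall back to the default domain.
    Transitional behavior (strict=False): keep the fallback but log a
    deprecation warning, as bknudson suggested.
    Future behavior (strict=True): raise a validation error instead.
    """
    domain_id = ref.get('domain_id')
    if domain_id:
        return domain_id
    if strict:
        raise ValidationError('domain_id is required on create')
    LOG.warning('No domain_id supplied; defaulting to %s. This fallback '
                'is deprecated.', DEFAULT_DOMAIN_ID)
    return DEFAULT_DOMAIN_ID
```

Flipping `strict` to default to True in a later release gives deployers a full deprecation cycle, which matches the "behavior is the contract" constraint morgan_503 raises above.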
18:21:12 <morgan_503> #topic  HTTP caching support
18:21:15 <morgan_503> dstanek: 5m
18:21:26 <dstanek> this is 2 parts... and ties back a little to dynamic policy
18:21:30 <ayoung> Did we ever find out what caching support we get from the requests library?
18:21:33 <dstanek> i put together an experimental client change
18:21:42 <dstanek> #link https://review.openstack.org/#/c/211396/
18:21:52 <dstanek> this will allow you to enable HTTP caching when creating a session
18:21:59 <dstanek> my next step is to get the new deps in g-r if there are no objections to my change
18:22:07 <dstanek> jamielennox also wanted to see if we could make those optional deps
18:22:11 <dstanek> thoughts?
18:22:19 <morgan_503> dstanek: make sure to replicate this in keystoneauth if no complaints
18:22:36 * morgan_503 has no real complaints with this
18:22:43 <jamielennox> so my only thing was i always considered this something that like OSC would do, but i like the idea
18:23:05 <jamielennox> mainly because CacheControl lets you cache to a file so to cache things like discovery between cli invocations
18:23:06 <morgan_503> jamielennox: i think this is putting the features into session so osc can do it
18:23:19 <morgan_503> unless we want it a layer above session
18:23:26 <morgan_503> (if even possible)
18:23:27 <dstanek> jamielennox: in theory we could add it there, but we still need support in ksc so that ksm can take advantage of it
18:23:28 <jamielennox> so there's a few options there i'd like to see if we can somehow expose
18:24:03 <morgan_503> sounds to me like dstanek and jamielennox have some conversations pending on this, but generally no opposition
18:24:06 <jamielennox> dstanek: yep, i'm good with the idea in theory i just want to see if we can make it a bit more generic thatn on/off
18:24:07 <gyee> dstanek, is cache shared globally?
18:24:22 <gyee> or is it per sesssion
18:24:22 <dstanek> gyee: just with the user running the code
18:24:24 <ayoung> we should be able to specify a secure directory to store the cache, no?
18:24:33 <dstanek> so that is the client side
18:24:46 <dstanek> yes....i have a script that i used
18:24:55 <dstanek> #link http://paste.openstack.org/raw/412596/
18:25:02 <dstanek> to test my experimental patch
18:25:08 <dstanek> i also want to make a spec for M to add in real caching support
18:25:15 <dstanek> a real small sliver of the work needs to be done to support dynamic policy, but there is lots more to do
18:25:21 <dstanek> i wanted to get some feedback on the approach
18:25:28 <dstanek> #link https://review.openstack.org/#/c/211693/
18:25:31 <gyee> ayoung read my mind, just don't want to get into a shared-cookie situation
18:25:38 <morgan_503> generally i want to issue cache-control everywhere and see clients be smart
18:25:47 <morgan_503> and cache where they are allowed to
18:25:49 <ayoung> sess = session.Session(auth=auth, http_cache=caches.FileCache('.os_cache'))   ... could we make that a secure temporary by default?
18:25:53 <morgan_503> this is moving down that path
18:25:56 <morgan_503> so i like it
18:26:01 <dstanek> ayoung: gyee: look at the sample in paste - you control that
18:26:08 <bknudson> from cachecontrol.cache import DictCache -- We're limiting to specific cachecontrol implementations?
18:26:11 <gyee> nice!
18:26:16 <ayoung> dstanek, if no param is passed, what do you get?
18:26:20 <jamielennox> ayoung: i don't want to cache to file by default from a library
18:26:27 <dstanek> bknudson: no, those are just shortcuts - you can pass in any object
18:26:33 <dstanek> ayoung: no caching
18:26:34 <bknudson> we could just tell them to use cachecontrol directly
18:26:51 <ayoung> jamielennox, I mean if called like this sess = session.Session(auth=auth, http_cache=caches.FileCache())
18:27:02 <dstanek> like jamielennox alluded to, there are some adapter options that i'm not passing through that may be useful
18:27:41 <dstanek> bknudson: they'd have to implement a crazy subclass like i do to get around us using a TCP adapter
18:27:41 <jamielennox> ayoung: i'm sure we can come up with something, it's not as easy as i thought because we have the TCPAdapter with like socket timeout on it
18:27:51 <bknudson> why do we have the TCP adapter?
18:28:07 <dstanek> bknudson: it fixes a TCP keep alive bug
18:28:16 <ayoung> we're probably going to need to be able to share the cache between all threads in an apache instance, and all workers in an eventlet server, too
18:28:25 <bknudson> I don't have a problem telling them they have to implement a crazy subclass.
18:28:28 <dstanek> ayoung: that already works
18:28:58 <dstanek> "them" is us in many cases
18:29:05 <ayoung> dstanek, this is something we've needed for a long time.  Nice.
18:29:15 <jamielennox> dstanek: how do you tie them together? i would have thought it was per session object
18:29:29 <dstanek> jamielennox: the cache?
18:29:32 <jamielennox> right
18:29:36 <jamielennox> memcache backend or something?
18:29:45 <dstanek> the filecache is a shared directory that uses file locking
18:29:55 <jamielennox> ok, still filecache
18:30:04 <ayoung> lets not memcache if we can avoid it.  This should not go off system
18:30:08 <dstanek> there are only 3 upstream backends a memory cache, a file cache and a redis cache
18:30:22 <morgan_503> yay no "memcache"
18:30:40 <ayoung> will it play nicely with eventlet?
18:30:44 <dstanek> and i'd love to see this eventually being automatic for users so things just get cached
18:30:51 <dstanek> ayoung: it should yes
18:30:56 <morgan_503> ayoung: most things do... by accident, but they still do
18:31:04 <morgan_503> gevent is smart-ish
18:31:08 <ayoung> memcache did not
18:31:15 <bknudson> hopefully I can cache my tokens in there
18:31:22 <morgan_503> ayoung: that is because of explicit thread.local and a horrrrrrrible library
18:31:29 <gyee> bknudson, yes, its very safe
18:31:37 <morgan_503> you have to work hard to make it that bad
18:31:40 <jamielennox> dstanek: i definitely want people to opt in to a cache, particularly one that is stored on a filesystem and will be reused between runs
18:31:42 <dstanek> bknudson: if we are crazy enough to set the cache headers then sure!
18:31:54 <jamielennox> it makes sense for the CLI and applications but not as a library
18:32:22 <dstanek> i'll poke around osc some to see how much damage i can do there
18:32:26 <morgan_503> ayoung: and it used all FDs for the process, so an effective DOS but the cache was still stable/secure
18:32:54 <stevemar> dstanek: pretty neat stuff dude
18:32:55 <bknudson> so everything using the session gets this?
18:32:55 <morgan_503> #action dstanek to work with jamielennox and continue exploring this path. more work to be done before it's really ready to go
18:32:57 <bknudson> not just keystone?
18:32:59 <dstanek> i'm over time...closing thoughts?
18:33:08 <morgan_503> bknudson: correct anything using session can leverage this
18:33:10 <dstanek> bknudson: yes
18:33:13 <ayoung> so... what happens if we put a cache header on, say, a user record, then we delete the record. Will the client library be smart enough to invalidate the cache? Or will we need an "ignore cache" switch for those cases?
18:33:16 <bknudson> seems like this should go to keystoneauth first
18:33:21 <stevemar> dstanek: let me know if you need help on the osc side :)
18:33:40 <morgan_503> ayoung: cache-control says you are allowed to keep that cache until expiry - regardless of server response
18:33:44 <morgan_503> so.
18:33:58 <morgan_503> don't put permissive cache control where you don't want it
18:34:17 <dstanek> ayoung: a client would not ask for that until the local copy times out
18:34:21 <dstanek> no way to flush it
18:34:44 <ayoung> dstanek, so the workflow I'm doing is that I need to check if something is deleted...
18:34:44 <morgan_503> ok we're at time for this topic
18:34:46 <dstanek> this is eventual consistency at its best
18:34:51 <ayoung> and there is no way except to busy wait.
18:35:02 <dstanek> ayoung: why would you need to do that?
18:35:06 <jamielennox> ayoung: this just means we need to be smarter about what we send in our cache-control headers
18:35:08 <morgan_503> ayoung: don't put a cache-control header on those resources that allow it.
18:35:10 <ayoung> So:  delete router-interface...then check if it exists...sleep. check again
18:35:11 <morgan_503> then
18:35:20 <bknudson> I think we should document how to do it yourself and also support that.
18:35:23 <morgan_503> you also can ignore cache-control as a client
18:35:23 <jamielennox> i think most things won't be cacheable
18:35:34 <ayoung> usually, caching is what I want, but in those cases, its a user override...like hitting ctlr r in the browser
18:35:34 <morgan_503> you don't *have* to use it.
18:35:48 <morgan_503> clients can ignore caching at any point
18:35:55 <dstanek> ayoung: delete your local cache
18:36:00 <ayoung> morgan_503, that is my question  "how?"
18:36:24 <morgan_503> either delete the local cache or don't use a caching session
18:36:33 <morgan_503> or disable caching in the session for those requests
18:36:35 <dstanek> ayoung: don't configure it on your sesson or delete local cache
18:36:37 <ayoung> do I need to delete the whole cache just to force a requery on a single resource?
18:36:56 <dstanek> ayoung: you'll have a hard time finding a single resource
18:37:13 <dstanek> better to create an uncached session
18:37:21 <morgan_503> this still needs work
18:37:24 <bknudson> did you look at other libraries for caching with requests?
18:37:26 <morgan_503> lets take this convo and impl details offline
18:37:36 <morgan_503> next topic.
18:37:59 <dstanek> bknudson: a few..that was a recommendation from sigmavirus24
18:38:02 <morgan_503> #action dstanek to work with ayoung to ensure caching use-cases and cache avoidance for specific requests can occur
18:38:14 <morgan_503> again, lets take the rest of this offline
18:38:23 <morgan_503> we have a few more big ticket topics to hit
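The client-side idea discussed in this topic (CacheControl-style caching on the session, honoring a Cache-Control max-age, plus a way to force a requery) can be sketched with a toy, library-free cache; `CachingClient`, `fetch`, and `use_cache` are all illustrative names, not the proposed keystoneclient API.

```python
import time

# Toy client-side HTTP cache keyed on URL, honoring a Cache-Control-style
# max-age the way the CacheControl library does for requests sessions.
# fetch() stands in for a real HTTP GET; every name here is illustrative.


class CachingClient:
    def __init__(self, fetch):
        self._fetch = fetch   # callable: url -> (body, max_age_seconds)
        self._cache = {}      # url -> (body, expires_at)

    def get(self, url, use_cache=True):
        entry = self._cache.get(url)
        if use_cache and entry and entry[1] > time.time():
            # Fresh cached copy: no request is made until expiry, which
            # is why a deleted server-side resource can still appear.
            return entry[0]
        body, max_age = self._fetch(url)
        if max_age > 0:
            self._cache[url] = (body, time.time() + max_age)
        return body
```

Passing `use_cache=False` is the per-request analogue of the browser ctrl-r override ayoung asks about: skip the cached copy and requery the server, without throwing away the rest of the cache.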
18:38:37 <morgan_503> #topic X.509 Tokenless Authz needs a verdict
18:38:43 <morgan_503> gyee: 12m max
18:39:02 <gyee> so the patch's been independently tested by folks from Yahoo and Mirantis
18:39:04 <ayoung> VOTE!
18:39:18 <gyee> are there anything we can do at this point?
18:39:24 <bknudson> what verdict is needed? isn't it just reviews?
18:39:24 <morgan_503> i'm fairly happy with the changes in general
18:39:27 <ayoung> gyee, its experiemental and opt in, right?
18:39:33 <gyee> ayoung, yes
18:39:36 <breton> I have questions
18:39:39 <morgan_503> looks like it needs reviews... and it needs to be clearly marked experimental
18:39:40 <ayoung> lets do it
18:39:43 <breton> related and unrelated to client side
18:40:05 <gyee> breton, client side change will come, after this patch is landed on the server side
18:40:12 <breton> 1. Is getting a catalog for the user?
18:40:25 <gyee> #link https://review.openstack.org/#/c/156870/
18:40:25 <breton> ah.
18:40:34 <breton> *is getting a catalog for the user planned?
18:40:37 <morgan_503> breton: if you're acting on keystone you don't need a catalog
18:40:42 <bknudson> /v3/auth/catalog ?
18:40:44 <jamielennox> gyee: ever figure out service tokens?
18:40:45 <morgan_503> breton: if you get a token, you have a catalog
18:40:48 <jamielennox> X-Service-token
18:40:50 <morgan_503> also /v3/auth/catalog ?
18:41:10 <gyee> jamielennox, we'll need to support mapping on the client side
18:41:17 <jamielennox> gyee: mapping?
18:41:20 <jamielennox> client side?
18:41:32 <gyee> map a cert to auth context
18:42:08 <morgan_503> #action Cores: Please review code for tokenless auth (service users)
18:42:09 <stevemar> jamielennox: the x509 tokenless auth work leverages the mapping engine from the federation code
18:42:38 <jamielennox> gyee, stevemar: what does that have to do with client side/
18:42:40 <breton> I saw that Session already supports `cert` option
18:42:53 <stevemar> gyee: what do you mean support mapping on the client side? the x509 specific mapping? or the ability to create a mapping from the client side?
18:43:01 <stevemar> jamielennox: i dunno, that's what i'm wondering ;)
18:43:08 <jamielennox> breton: yes it does but you'll need to handle the catalog so you're better off doing it as a auth plugin
18:43:15 <breton> but they don't work with auth=None (keystoneclient/session.py, line 577, in _auth_required)
18:43:20 <gyee> jamielennox, we'll need to be able to fetch a mapping from server to map the cert into auth context
18:43:33 <morgan_503> gyee: those words do not make sense
18:43:33 <gyee> instead of token to auth context
18:43:37 <breton> jamielennox: so, a kind of no-op plugin then?
18:43:56 <jamielennox> breton: you'll need to pass through the url for keystone, but mostly it'll be no-op
18:44:00 <stevemar> gyee: i'm still confused
18:44:03 <stevemar> :)
18:44:03 <bknudson> service users are used for more than just auth_token.
18:44:14 <breton> gyee: I don't quite understand too
18:44:16 <morgan_503> gyee: if the service user needs a token, get a token via the cert
18:44:17 <jamielennox> breton: there is also like get_connection_parameters from which you can set the cert from the plugin which is better
18:44:31 <stevemar> bknudson: for bootstrapping!
18:44:33 <gyee> morgan_503, sure, we can do that
18:44:34 <breton> gyee: I almost made it work with minimal changes
18:44:39 <morgan_503> if the service user only talks to keystone don't need a token
18:44:40 <morgan_503> simple
18:44:55 <bknudson> service users are used by neutron to send notifications to nova.
18:44:57 <morgan_503> this is opening doors for more options and moving away from tokens elsewhere as needed.
18:44:58 <breton> to get a token via cert we need an auth plugin on ks side, no?
18:44:58 <gyee> morgan_503, cert for token is supported without code changes
18:45:11 <morgan_503> so for now, don't try and grab mappings and such
18:45:15 <morgan_503> that is way way way way too complex
18:45:17 <jamielennox> morgan_503: the X-Service-Token problem is for swift and others that expect you to pass the service that is calling this as well
18:45:19 <morgan_503> just get a token
18:45:30 <jamielennox> morgan_503: in which case there is no token to pass on
18:45:33 <gyee> morgan_503, sounds good
18:45:35 <ayoung> moving away from tokens everywhere: http://adam.younglogic.com/2015/08/tokenless-keystone/
18:45:37 <morgan_503> jamielennox: so get a token for now :)
18:45:46 <morgan_503> this is iterative
18:45:54 <jamielennox> morgan_503: making x509 not useful for at least a couple of services
18:46:06 <morgan_503> it is still useful
18:46:08 <jamielennox> and a growing list
18:46:11 <jamielennox> but sure
18:46:14 <bknudson> we're saying it's experimental now so if it doesn't work then we can stop supporting it
18:46:28 <morgan_503> it is also experimental so changes can occur in mitaka
18:46:29 <gyee> bknudson, its both experimental and optional
18:46:45 <morgan_503> in short: review code
18:46:46 <ayoung> X-Service-Token can be handled via a separate middleware and X.509 as well.
18:47:02 <ayoung> doesn't invalidate this patch
18:47:07 <morgan_503> don't try and make the client handle the mapping
18:47:25 <dstanek> morgan_503: ++
18:47:32 <morgan_503> provide a way for x509 to grab the token for x-service-token when needed *for now*
18:47:42 <morgan_503> gyee: ^ work for you?
18:47:53 <gyee> morgan_503, yes
18:47:55 <morgan_503> (as in when sending x-service-token)
18:47:57 <morgan_503> ok
18:47:58 <morgan_503> cool
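The "mostly no-op" client plugin breton and jamielennox sketch above might look roughly like the following; the class and method names are illustrative, not the real keystoneclient auth-plugin interface.

```python
# Sketch of a "mostly no-op" auth plugin for tokenless X.509: it supplies
# only the Keystone endpoint and the TLS client certificate, and never
# fetches a token. Class and method names are illustrative, not the real
# keystoneclient plugin interface.


class TokenlessX509Auth:
    def __init__(self, endpoint, cert_file, key_file):
        self.endpoint = endpoint
        self.cert = (cert_file, key_file)

    def get_endpoint(self, **kwargs):
        # No catalog lookup: tokenless auth only talks to Keystone, so
        # the plugin just passes the configured URL through.
        return self.endpoint

    def get_token(self):
        # Tokenless: the TLS client certificate is the credential, so
        # there is no token to attach to requests.
        return None

    def get_connection_params(self):
        # The transport layer picks the client cert up from here.
        return {'cert': self.cert}
```

Per morgan_503's guidance above, anything that must send X-Service-Token would, for now, use the cert to obtain a real token instead of going tokenless.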
18:48:03 <morgan_503> #topic Bug 1482701 - Federation: user's name in rules not respected
18:48:03 <uvirtbot> Launchpad bug 1482701 in keystone "Federation: user's name in rules not respected" [Medium,In progress] https://launchpad.net/bugs/1482701
18:48:04 <gyee> like I said, it works now, just matter of documenting it
18:48:05 <openstack> bug 1482701 in Keystone "Federation: user's name in rules not respected" [Medium,In progress] https://launchpad.net/bugs/1482701 - Assigned to Marek Denis (marek-denis)
18:48:15 <morgan_503> if we have time there is one more topic after this
18:48:20 <lbragstad> so marekd opened up this bug (https://bugs.launchpad.net/keystone/+bug/1482701) which is something that is reproducible with UUID tokens and Fernet. The issue is that when the federated user name and id are different, the federated name isn't respected.
18:48:33 <lbragstad> he has a review up for fixing the UUID case - https://review.openstack.org/#/c/211093/
18:48:48 <lbragstad> but for Fernet and federated workflows, we have two options
18:49:13 <ayoung> define "respected" here?
18:49:24 <lbragstad> workaround the user id and username issue by making them the same, or persist the user name in the token format with the user id
18:49:42 <lbragstad> does anyone have an issue seeing the user name go into the token?
18:49:53 <lbragstad> marekd: feel free to elaborate on it a bit more
18:50:13 <bknudson> the bug doesn't say what the response was supposed to be.
18:50:33 <morgan_503> lbragstad: i really want less "name" data in tokens.
18:50:36 <morgan_503> lbragstad: to be frank
18:50:37 <ayoung> is this because we don't have the data to feed through the mapping rules a second time?
18:50:47 <morgan_503> lbragstad: i would rather move towards ids only if at all possible
18:50:54 <lbragstad> I believe what marekd means by "respected" is that the mapping doesn't honor the user name
18:51:10 <ayoung> I think the mapping sets it, and then it is dropped, if I read it correctly
18:51:13 <jamielennox> morgan_503: horizon uses user_name for display
18:51:15 <morgan_503> but federation is always a magic case
18:51:25 <lbragstad> or doesn't match a mapping *if* it is different from the user id
18:51:27 <morgan_503> jamielennox: and i disagree with that being "in the token" ;) but thats aside
18:51:41 <jamielennox> which is weird because i've definitely correctly mapped through a name such that it appears correctly in horizon
18:51:43 <lbragstad> I don't really care to bloat the token
18:51:51 <lbragstad> s/care/want/
18:52:06 <morgan_503> lbragstad: username should be in the token already iirc
18:52:18 <morgan_503> just maybe not in fernet's ... thing?
18:52:21 <morgan_503> uhmm payload
18:52:28 <jamielennox> so what's the issue with this just being a bug and fixing it?
18:52:31 <morgan_503> this is another "federation is special" case
18:52:38 <morgan_503> looks like a bug to me
18:52:43 <ayoung> when you repopulate the token, you don't have the original assertion
18:52:45 <lbragstad> https://github.com/openstack/keystone/blob/master/keystone/token/providers/fernet/token_formatters.py#L532-L533
18:52:48 <lbragstad> username isn't in there
18:53:00 <morgan_503> so for the fernet payload, put it in for federated tokens
18:53:05 <jamielennox> oh - right this is a fernet thing
18:53:05 <morgan_503> we know federated tokens are special anyway
18:53:18 <morgan_503> but don't put it in for all fernet tokens
18:53:18 <lbragstad> morgan_503: https://github.com/openstack/keystone/blob/master/keystone/token/providers/common.py#L569
18:53:25 <morgan_503> just the federated case
18:53:44 <morgan_503> ugh
18:53:55 <morgan_503> it's fine to use a friendly name in the federated fernet
18:54:06 <morgan_503> just don't bloat the token in all cases.
18:54:15 <jamielennox> it may be for fernet and federation we need to store like a visited_users table or something, we already have groups we need in addition
18:54:17 <morgan_503> just like groups are used specially for federated
18:54:24 <jamielennox> not this cycle
18:54:29 <lbragstad> ok, so this would only concern federated payload (unscoped, domain-scoped, and project-scoped)
18:54:41 <morgan_503> since fernet token payloads can be changed as needed
18:54:43 <morgan_503> no contract
18:54:48 <morgan_503> we can change this next cycle as needed
18:54:56 <morgan_503> or even if we have a better idea this cycle
18:55:10 <morgan_503> yay opaque payloads we control :)
18:55:16 <morgan_503> it's a bug, fix it :)
18:55:17 <lbragstad> ok, so we are fine with including the user name in the federated payloads as long as it doesn't affect other payloads
18:55:25 * morgan_503 says yes.
18:55:30 <lbragstad> ok, i'll leave a comment
18:55:32 <lbragstad> on the bug report
18:55:38 <lbragstad> that's all I needed
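The agreement above — persist the user name in the Fernet payload only for federated tokens, so regular payloads stay small — can be sketched roughly as below. The field layout and names here are illustrative assumptions, not keystone's actual payload format (which lives in keystone/token/providers/fernet/token_formatters.py):

```python
# Minimal sketch of the idea agreed above: carry the user's display name
# only in federated token payloads, mirroring how groups are already
# handled specially for federation. Field names are hypothetical.

def build_payload(user_id, methods, expires_at, audit_ids,
                  federated_info=None, user_name=None):
    """Assemble a compact token payload.

    federated_info/user_name are packed only for federated tokens, so
    the name survives round-trips without the original assertion while
    non-federated tokens stay un-bloated.
    """
    payload = [user_id, methods, expires_at, audit_ids]
    if federated_info is not None:
        payload.append({'groups': federated_info.get('groups', []),
                        'idp_id': federated_info['idp_id'],
                        'protocol_id': federated_info['protocol_id'],
                        'user_name': user_name})
    return payload


# Non-federated token: four fields, no name data.
regular = build_payload('u123', ['password'], 1439319600.0, ['audit1'])

# Federated token: carries the user's display name alongside group data.
federated = build_payload(
    'u456', ['token'], 1439319600.0, ['audit2'],
    federated_info={'groups': ['g1'], 'idp_id': 'acme',
                    'protocol_id': 'saml2'},
    user_name='alice@example.com')
```

Because the Fernet payload is opaque and carries no contract (as morgan_503 notes), this layout could be changed in a later cycle without breaking clients.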
18:56:20 <ayoung> lbragstad, short answer is that if the user data is important to record, we should record it.
18:56:28 <morgan_503> #topic lhcheng's bug for end_point filtering
18:56:35 <morgan_503> lhcheng: o/
18:56:40 <lhcheng> o/
18:56:40 <ayoung> shadow table style, like the ID mapping.  probably should create an ephemeral user record
18:56:42 <lhcheng> https://bugs.launchpad.net/keystone/+bug/1482772
18:56:42 <openstack> Launchpad bug 1482772 in python-openstackclient "Region filtering for endpoints does not work" [Undecided,New] - Assigned to Lin Hua Cheng (lin-hua-cheng)
18:56:42 <morgan_503> you have ~3m
18:56:49 <lhcheng> region is an available filter for List Endpoint in ksc and osc
18:56:55 <lhcheng> however our documentation doesn't seem to have it
18:57:09 <lhcheng> dolphm suggests just adding support for the region filter, since it could be useful anyway
18:57:12 <morgan_503> this looks like a no-spec required (minor API doc change needed) and a bug
18:57:16 <lhcheng> are folks okay with no-spec?
18:57:16 <gyee> we have region in endpoint group filter :)
18:57:22 <morgan_503> i'm fine with it being no-spec
18:57:29 <ayoung> add to documentation.  Behavior is good, no spec.
18:57:29 <gyee> same here
18:57:36 <bknudson> you'll need to change the api spec
18:57:39 <morgan_503> anyone have concerns before we end the meeting?
18:57:58 <lhcheng> bknudson: yup, I'll start with updating the api spec
18:57:59 <morgan_503> bknudson: yeah the api doc needs an update but this looks like an oversight not really a "need a formal spec/bp" thing
18:58:22 <morgan_503> lhcheng: make sure the api-doc change is partial-bug tagged
18:58:26 <bknudson> the spec itself wouldn't be interesting anyways
18:58:31 <morgan_503> exactly
18:58:32 <samueldmq> yes, I agree with just api change spec, not formal spec with proposed solution, problem description etc
18:58:42 <morgan_503> ok annnnnd we're done
18:58:43 <lhcheng> morgan_503: got it
18:58:44 <morgan_503> out of time
18:58:47 <morgan_503> #endmeeting