17:00:17 <lbragstad> #startmeeting keystone-office-hours
17:00:19 <openstack> Meeting started Tue Aug  7 17:00:17 2018 UTC and is due to finish in 60 minutes.  The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:22 <openstack> The meeting name has been set to 'keystone_office_hours'
17:00:44 <lbragstad> cmurphy: i suppose this is an inconvenient time for you
17:01:05 <cmurphy> not at all
17:01:26 * lbragstad is all ears
17:01:30 <cmurphy> so this is the bug that captures it i think https://bugs.launchpad.net/keystone/+bug/1470205
17:01:30 <openstack> Launchpad bug 1470205 in OpenStack Identity (keystone) "Keystone IdP SAML metadata insufficient for websso flow" [Wishlist,Triaged]
17:01:44 <knikolla> ++
17:02:09 <knikolla> though we're not in the business of being a webapp
17:02:24 <cmurphy> when keystone is set up as a SAML IdP it isn't capable of doing the WebSSO profile, all it does is this very weird IdP-initiated ECP flow which is not a real profile
17:02:26 <knikolla> and the websso flow kinda requires that.
17:02:47 <cmurphy> yeah
17:02:57 <lbragstad> ahh
17:03:05 <knikolla> we'd need a form to login, and session cookies.
17:03:13 <lbragstad> so we have gaps in our federated IdP implementation specifically
17:03:25 <kmalloc> so, fwiw, with flask, i'll be moving us towards having an actual non-json page
17:03:31 <kmalloc> if possible
17:03:44 <kmalloc> which leads towards possibly being able to render websso things more cleanly
17:03:47 <knikolla> kmalloc: that would be really helpful
17:03:55 <kmalloc> flask opens doors in ways that were a REAL pain with webob
17:04:13 <kmalloc> i already have some logic for "accept header" variations and rendering different content
17:04:22 <kmalloc> so, we can head down that path
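A minimal sketch of the accept-header negotiation kmalloc describes, assuming plain Flask (hypothetical route and payloads, not keystone's actual code):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/login')
def login():
    # Pick a representation based on the client's Accept header.
    best = request.accept_mimetypes.best_match(
        ['application/json', 'text/html'])
    if best == 'text/html':
        # Browser-style (WebSSO) clients get an HTML login form.
        return '<form method="POST">...</form>'
    # API clients keep getting JSON.
    return jsonify({'methods': ['password', 'token']})
```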
17:04:30 <lbragstad> which i think we needed for json/home
17:04:36 <kmalloc> yes
17:04:47 <kmalloc> in flask that was a lot easier to do than in webob
17:05:04 <kmalloc> and once we're 100% on flask-restful, i'll be making a push to go to restplus
17:05:12 <kmalloc> so we get swagger documentation
17:05:33 <kmalloc> if we already have that, we can line up some views for web-rendering that can be webapp/sso flow compatible
17:05:51 <kmalloc> so, think in S release we might be able to head down that path.
17:05:58 <knikolla> sounds reasonable
17:06:09 <kmalloc> depending on how the rest of flask conversion goes
17:06:15 <lbragstad> cmurphy: you have colleagues asking for this specifically?
17:06:24 <lbragstad> or customers?
17:06:41 <kmalloc> lbragstad: change of plans, use https://review.openstack.org/#/c/589546/ [if it passes check] as the base for /policy API
17:06:48 <lbragstad> kmalloc: ack
17:06:54 <kmalloc> lbragstad: i changed the way full_url and base_url work
17:07:07 <kmalloc> so you can pass in a path to them and get /v3 added automatically
17:07:08 <kmalloc> etc
17:07:12 <cmurphy> lbragstad: colleagues/project managers
17:07:37 <cmurphy> ignoring the SAML part, the other part I have a hard time wrapping my head around is how a not-openstack application would know how to use keystone fernet tokens - i guess it would have to use keystonemiddleware or have some other kind of middleware that knows to redirect users to keystone for auth and requires the x-auth-token header for all requests
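A rough sketch of the keystonemiddleware approach cmurphy describes, assuming a plain WSGI app (endpoints and credentials are illustrative, and option names vary by release):

```python
from keystonemiddleware import auth_token

def app(environ, start_response):
    # After token validation, the middleware injects identity headers
    # into the WSGI environ for the application to consume.
    user = environ.get('HTTP_X_USER_ID', 'unknown')
    roles = environ.get('HTTP_X_ROLES', '')
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('user=%s roles=%s' % (user, roles)).encode()]

conf = {
    'www_authenticate_uri': 'http://keystone.example.com:5000/v3',
    'auth_url': 'http://keystone.example.com:5000/v3',
    'auth_type': 'password',
    'username': 'service-user',
    'password': 'secret',
    'project_name': 'service',
    'user_domain_name': 'Default',
    'project_domain_name': 'Default',
}
# Requests without a valid X-Auth-Token get a 401 before reaching the app.
app = auth_token.AuthProtocol(app, conf)
```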
17:08:02 <lbragstad> yeah...
17:08:15 <cmurphy> and then what does it do with information about projects? what would not-openstack do with projects and role assignments
17:08:24 <lbragstad> right
17:08:40 <knikolla> if we implemented openid connect, we would already be compatible with most apps
17:08:53 <lbragstad> we also have a derivative of RBAC that isn't exactly an industry standard
17:09:09 <knikolla> the problem is the RBAC, more or less.
17:09:12 <knikolla> yeah
17:09:18 <lbragstad> i agree
17:09:45 <lbragstad> i think this same kinda problem came up when we talked about oauth2 support
17:11:30 <lbragstad> i guess it depends on what's using keystone and what they are trying to protect (without sounding overly vague)
17:11:41 <lbragstad> how are the resources grouped?
17:12:02 <knikolla> yeah
17:12:26 <lbragstad> depending on the answers, could you generalize keystone enough to work for each?
17:14:02 <knikolla> oauth2.0 has the concept of scopes (think roles), but nothing really project-based
17:15:29 <knikolla> i'm also not familiar with other implementations of scoped rbac like in openstack.
17:15:46 <lbragstad> i'm not aware of any other implementations of scoped-rbac
17:16:06 <lbragstad> i want to say scoped-rbac is openstack-specific
17:16:17 <knikolla> in that case, does the application care about the project at all?
17:16:29 <lbragstad> which application?
17:16:40 <knikolla> external application that want to integrate with keystone
17:16:53 <lbragstad> i think the answer to that depends on what the application is responsible for
17:16:55 <knikolla> if all they care about is users and roles (and have no concept of scoped rbac, just plain rbac)
17:17:22 <knikolla> that should be doable in a generalizable way
17:17:30 <cmurphy> knikolla: hmm so maybe there'd be a dummy project and the application only pays attention to the role name?
17:17:51 <lbragstad> if the application in question is something like nova that provides APIs for managing all sorts of things across different authorization levels, then i think keystone needs to care about scoped-rbac
17:18:16 <orange_julius> Hey guys. Had a quick question about Keystone + LDAP. I have 3 controllers configured and Keystone seems to be sending an authentication request from Horizon off to LDAP 3 times, presumably one per controller. This is causing an issue because our lockout policy in LDAP is 3 bad passwords... so if a user enters 1 bad password we get an instant lockout. Is this expected behaviour? Why would keystone be sending the authentication request 3 times for the same request?
17:18:40 <knikolla> cmurphy: yeah, something like that
17:19:26 <lbragstad> that's essentially what we started with, i think
17:19:59 <lbragstad> deploy keystone with a single "level" of authorization and only use that layer
17:20:02 <cmurphy> orange_julius: your haproxy should be configured to forward requests to just one keystone; three keystones each making the same request at once would indeed all go to ldap
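An illustrative haproxy stanza along those lines (IPs and names hypothetical); each login request should land on exactly one keystone backend:

```
listen keystone_public
    bind 10.0.0.10:5000
    balance source
    server controller1 10.0.0.11:5000 check
    server controller2 10.0.0.12:5000 check
    server controller3 10.0.0.13:5000 check
```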
17:20:08 <knikolla> depends on how they want to do the integration. if they care about users, use a dummy project. if they care about projects (as in the webapp provides services to extend an existing openstack deployment, therefore project isolation matters), we can send the project_id as the username in the assertion.
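For context, keystone-to-keystone assertions carry attributes such as openstack_user and openstack_project, and the SP maps whichever attribute is chosen with rules roughly like these (group id hypothetical; which attribute to map is exactly the decision being discussed):

```json
[
    {
        "local": [
            {"user": {"name": "{0}"}},
            {"group": {"id": "FEDERATED_GROUP_ID"}}
        ],
        "remote": [
            {"type": "openstack_user"}
        ]
    }
]
```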
17:20:23 <_ix> Maybe this is actually a quick question... if I'm using passwords, is there a reason that I'd specify `v3password` instead of `password`?
17:20:40 <cmurphy> lbragstad: heh yeah i guess this sort of takes us back to pre-multitenancy
17:20:47 <_ix> I don't know why, but some of the services I'm working on have a v3password specified in the conf files.
17:21:50 <cmurphy> _ix: in theory they should both be the same, `password` i think either looks at the url or does a discovery step to figure out what version to use
17:22:10 <knikolla> cmurphy: so the question is, in the webapp, do you want to act on behalf of the user (username in assertion = user_id, webapp will have no concept of projects). do you want to act on behalf of a project (username in assertion is project_id, webapp will have no concept of individual users)
17:22:16 <_ix> Ah. So if I'm explicit with the endpoint, I'll be in good shape. Thanks!
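The difference in keystoneauth1 terms, sketched (URLs and credentials illustrative): v3.Password pins the v3 API, while the generic plugin discovers the version from the endpoint:

```python
from keystoneauth1 import session
from keystoneauth1.identity import generic, v3

common = dict(username='demo', password='secret',
              user_domain_name='Default',
              project_name='demo', project_domain_name='Default')

# 'v3password': the API version is explicit in the URL.
auth = v3.Password(auth_url='http://controller:5000/v3', **common)

# 'password': runs version discovery against the unversioned endpoint.
auth = generic.Password(auth_url='http://controller:5000', **common)

sess = session.Session(auth=auth)
```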
17:23:16 <cmurphy> knikolla: that's a good way to look at it
17:23:18 <lbragstad> knikolla: in the first example, i'm inclined to say we already support that pretty easily
17:24:08 * lbragstad feels like we're coming up with various deployment models
17:25:20 <knikolla> ++
17:25:38 <lbragstad> protecting apps that have no concept of scope or individual projects is a pretty low bar since most of that can be done between keystone and ksm
17:26:08 <lbragstad> if an app starts caring about tenancy, then it becomes a question of where that logic lives, imo
17:26:56 <knikolla> lbragstad: in the case that the app cares about tenancy, that would probably be tied to an attribute/claim of the assertion.
17:27:12 <knikolla> we could make that configurable
17:28:04 <knikolla> Keycloak for example allows you to map user attributes -> claims for each specific client which is using it for auth.
17:28:44 <lbragstad> but the app being protected needs to understand those claims, right?
17:29:11 <knikolla> assuming it's already using a standard openid connect / saml idp. it probably already does
17:31:59 <knikolla> we just need to document the mapping between keystone terminology -> openid connect / saml, and deployment strategies.
17:32:16 <knikolla> like the two cases i mentioned above.
17:32:29 <lbragstad> and the resources managed by the webapp?
17:33:06 <knikolla> it's up to the webapp, based on the roles keystone says u have
17:33:11 <knikolla> not any different than what it is now
17:33:37 <lbragstad> today - that's done by policy
17:33:42 <lbragstad> or our specific policy engine
17:34:06 <lbragstad> because that's where we say "instances are project resources, endpoints are system resources"
17:34:53 <knikolla> you can flatten that into the webapp checking that instances require the project_member role and endpoints require the system_member role.
17:35:32 <lbragstad> yeah - using a policy file is an implementation detail
17:35:37 <knikolla> yup
17:36:02 <lbragstad> i guess what i'm seeing is that the resource can make a difference in that mapping depending on the app
17:37:29 <knikolla> lbragstad: you could also say that scope is an implementation detail
17:37:37 <knikolla> user has access to project which owns resource
17:38:03 <knikolla> user acts on behalf of project which owns resource.
17:38:43 <knikolla> like i don't think most openstack projects care about users
17:38:54 <knikolla> they store resources with a project_id in their database
17:40:11 <knikolla> therefore given (project_id, role) a webapp has enough information to enforce policy
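A toy illustration of that point (names hypothetical): with just (project_id, roles) from the token, a webapp can enforce ownership plus a flattened role check without keystone's policy engine:

```python
def can_access(resource, token_project_id, token_roles):
    # Resources are stored with the owning project_id, as in most
    # OpenStack services.
    if resource['project_id'] != token_project_id:
        return False
    # Flattened RBAC: the role name alone decides the permission level.
    return 'member' in token_roles
```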
17:41:00 <lbragstad> sure
17:41:01 <knikolla> i also remember the early discussions about making system a part of the project tree
17:43:59 <lbragstad> true
17:44:29 <lbragstad> we decided against that because we thought there was a difference between the resources in openstack's case
17:45:08 <orange_julius> Could keystone be picking up the authentication requests from RabbitMQ? It doesn't seem like HAProxy is forwarding the requests to all 3 keystone servers
17:47:04 <lbragstad> orange_julius: keystone doesn't support authentication via the message bus
17:47:34 <orange_julius> hmmm ok. Didn't think so but am running out of ideas.
17:48:16 <lbragstad> does the problem go away if you only route requests to a single keystone node?
17:51:43 <orange_julius> Not sure yet. We have to schedule an outage for that sort of test.
17:57:40 <lbragstad> i suppose you could try and recreate it by making the same request to a single keystone node, too
17:57:50 <lbragstad> as opposed to taking nodes out of the pool
17:57:57 * lbragstad grabs lunch quick
17:59:18 <kmalloc> lbragstad: our tests are still spewing the "scope check" fail message
17:59:25 <kmalloc> lbragstad: do we have a fix for that?
17:59:49 <kmalloc> this is a LOT of extra logging that seems like it is not actionable atm.
18:00:31 <kmalloc> oslo_context.
18:01:40 <kmalloc> because this makes me want to push a change to squash all logging from oslo_context
18:01:49 <kmalloc> by default
18:16:24 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Allow wrap_member and wrap_collection to specify target  https://review.openstack.org/589288
18:58:45 <lbragstad> kmalloc: i had a wip series of patches for that
18:59:05 <lbragstad> https://review.openstack.org/#/c/551337/
18:59:11 <lbragstad> and https://review.openstack.org/#/c/551411/1
19:28:35 <lbragstad> kmalloc: i don't need to depend on https://review.openstack.org/#/c/589288/2 to convert the policy API? just https://review.openstack.org/#/c/589546/1 ?
19:35:48 <orange_julius> So it doesn't look like HAProxy is forwarding the keystone authentication requests to all keystone controllers. I tested with a similar configuration and saw an HTTP request hit just one HTTP server sitting behind haproxy with similar configs
19:40:31 <lbragstad> but the user is still getting locked out?
19:49:58 <openstackgerrit> Lance Bragstad proposed openstack/oslo.policy master: Move _capture_stdout to a common place  https://review.openstack.org/534440
19:50:47 <orange_julius> Yes the user is still getting locked out
19:52:20 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Update release notes for the rocky release  https://review.openstack.org/589578
19:54:03 <kmalloc> Yeah
19:54:32 <kmalloc> lbragstad: just convert the api
19:54:51 <lbragstad> ok - working on that now
19:55:02 <kmalloc> lbragstad: and you can base on the patch preceding the one you linked
19:55:36 <kmalloc> The one that changes base_url and full_url
19:56:28 <lbragstad> orange_julius: are your users in sql or ldap?
19:56:45 <lbragstad> orange_julius: you're using the [security_compliance] section of keystone.conf, yeah?
19:57:24 <orange_julius> Users are in active directory. Not using security_compliance. The only lockout policy is defined in active directory
19:57:37 <lbragstad> ahh...
19:58:44 <orange_julius> I see two possibilities. Let me know if I am wrong. Either a single keystone instance is sending multiple requests to AD, or all 3 keystone instances are seeing the login attempt and then asking AD for authorization
19:58:46 <lbragstad> are you able to pinpoint which user is getting locked out? is it the user that's authenticating or the bind user being used to query ldap?
19:59:10 <orange_julius> It's the user that is authenticating, and it's actually any user. We were able to replicate with my account (that sucked)
19:59:26 <lbragstad> do you have a different value for that?
19:59:35 <lbragstad> er - does your account have a different lockout setting?
20:00:04 <orange_julius> No, its a "global" policy. 3 bad attempts
20:00:07 <lbragstad> and just to confirm, this is when users are doing POST /v3/auth/tokens ?
20:00:50 <orange_julius> Yes. Well, it's through horizon, and a POST /v3/auth/tokens is seen in the logs when they attempt. I believe we were seeing that come through the keystone.log on all three controllers
20:01:13 <lbragstad> ok - that makes sense
20:01:24 <lbragstad> horizon is building a request to keystone to get a token for the user trying to log in
20:01:45 <lbragstad> and they are using usernames and passwords?
20:02:03 <orange_julius> correct
20:02:35 <lbragstad> orange_julius: which release are you using?
20:02:41 <lbragstad> master, queens, pike?
20:03:21 <orange_julius> newton actually =(
20:03:29 <lbragstad> oh ok
20:03:47 <lbragstad> luckily, i don't think the ldap bits of the identity driver have changed drastically since then
20:04:44 <orange_julius> We will be turning on verbose logging on keystone + ldap which will hopefully help identify the problem. For now we don't have a whole lot unfortunately. Its a "production" system so we can't bounce things without an approval process
20:04:59 <orange_julius> That won't happen until Friday though
20:05:24 <lbragstad> right now does the user just get an 'Invalid user / password' error?
20:06:02 <orange_julius> Correct. They log into their windows workstation, go to horizon, fatfinger the horizon password, and then their AD account is locked out
20:06:13 <orange_julius> and receive bad password on horizon
20:07:30 <kmalloc> horizon might be making multiple attempts
20:07:32 <kmalloc> fwiw
20:08:48 <lbragstad> at a high level - we start getting into that logic here https://github.com/openstack/keystone/blob/582cab391a10714a4ec8bab1a6cce9b49867f8d4/keystone/identity/core.py#L907
20:09:19 <lbragstad> which calls into the ldap specific implementation https://github.com/openstack/keystone/blob/582cab391a10714a4ec8bab1a6cce9b49867f8d4/keystone/identity/backends/ldap/core.py#L60-L78
20:09:56 <lbragstad> and there are some common utilities for getting connections to ldap
20:09:58 <lbragstad> https://github.com/openstack/keystone/blob/582cab391a10714a4ec8bab1a6cce9b49867f8d4/keystone/identity/backends/ldap/common.py#L1214-L1238
20:10:12 <lbragstad> but afaict, i just see one connection getting initiated?
20:11:27 <orange_julius> Sweet that is going to be very helpful. Horizon access logs only show up on the controller with the VIP which is only showing one POST to /dashboard/auth/login.
20:13:18 <lbragstad> orange_julius: do you know if you're setting this in your deployment?
20:13:19 <lbragstad> https://github.com/openstack/keystone/blob/master/keystone/conf/ldap.py#L409-L417
20:13:41 <orange_julius> No we are not. At least not explicitly
20:13:54 <lbragstad> i wonder if that would have an impact
20:14:03 <lbragstad> it attempts to reconnect 3 times before failing
20:14:17 <lbragstad> if using connection pooling (which appears to be enabled by default)
20:14:40 <lbragstad> so - if that's the case, that could be where your 3 attempts are getting used up
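The settings in play, for reference (keystone.conf; defaults as read from the code linked above, so verify against your release):

```ini
[ldap]
use_pool = true        # connection pooling, on by default
pool_retry_max = 3     # reconnection attempts before giving up
```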
20:14:52 <orange_julius> Would the fact that the POST to auth/tokens shows up in every controller's keystone.log indicate that keystone is probably making the request 3 times? one per controller node?
20:14:59 <orange_julius> Wouldn't it only attempt to reconnect if it failed?
20:15:47 <lbragstad> right - that's the part i'm curious about
20:16:07 <lbragstad> if the password is wrong, are we attempting to reconnect?
20:16:22 <lbragstad> and burning password attempts accidentally?
20:16:29 <orange_julius> Ahh I see
20:17:00 <orange_julius> So we could set that to... 50, and have our max lockouts also set at 50, and receive the same result
20:17:02 <lbragstad> orange_julius: do you have the ability to stand up a test node and point it to your adfs?
20:17:09 <lbragstad> orange_julius: or just disable it..
20:17:17 <lbragstad> yes
20:18:04 <orange_julius> I'll start standing up a test infra to validate. Won't be able to test disabling until Friday at the earliest
20:18:14 <lbragstad> sure - that makes sense
20:18:26 <lbragstad> just to be clear, i have no idea if this is actually the problem :)
20:18:31 <lbragstad> but it looks suspicious
20:18:45 <lbragstad> and might just be an unfortunate coincidence
20:19:07 <lbragstad> that our default for that particular setting happens to match your lockout setting in AD
20:19:57 <lbragstad> i guess if you have read access to your AD deployment, a single keystone node would be able to recreate this
20:20:19 <lbragstad> by keeping those default set, and trying to authenticate using POST /v3/auth/tokens with the wrong password
20:20:51 <orange_julius> No worries. Its worth investigating for sure. Is the retry logic around those links you sent earlier? I see it is supposed to raise an invalid credentials error, but not sure how that is handled
20:20:53 <lbragstad> https://developer.openstack.org/api-ref/identity/v3/index.html#password-authentication-with-unscoped-authorization
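Concretely, something like this against a single node (endpoint and names illustrative); if the theory holds, one deliberately wrong password would burn three bind attempts:

```
curl -i -X POST http://controller:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"],
       "password": {"user": {"name": "testuser",
        "domain": {"name": "Default"},
        "password": "deliberately-wrong"}}}}}'
```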
20:21:01 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: add python 3.6 unit test job  https://review.openstack.org/589599
20:21:18 <openstackgerrit> Doug Hellmann proposed openstack/oslo.policy master: add python 3.6 unit test job  https://review.openstack.org/589603
20:21:57 <lbragstad> we use another library for ldappool functionality, but i think if retries fail, that exception bubbles up to the user
20:22:51 <lbragstad> https://github.com/openstack/keystone/blob/582cab391a10714a4ec8bab1a6cce9b49867f8d4/keystone/identity/backends/ldap/common.py#L25
20:23:35 <lbragstad> https://github.com/openstack/keystone/blob/582cab391a10714a4ec8bab1a6cce9b49867f8d4/keystone/identity/backends/ldap/common.py#L708-L716
20:24:05 <lbragstad> and we pass that configuration option into the constructor for that object https://github.com/openstack/keystone/blob/582cab391a10714a4ec8bab1a6cce9b49867f8d4/keystone/identity/backends/ldap/common.py#L711
20:27:52 <orange_julius> Ok thanks. I have some things to investigate =D
20:27:59 <lbragstad> hopefully it helps
20:29:08 <lbragstad> i'm curious to know if that's actually the problem
20:30:01 <orange_julius> I will report back once I have the test case.
20:30:30 <lbragstad> ++
20:31:24 <orange_julius> except ldap.LDAPError as error: I wonder if an ldap authentication error is of type LDAPError
20:31:27 <orange_julius> HmmmmMMmMM
20:36:00 <orange_julius> https://github.com/python-ldap/python-ldap/blob/e8cc39db0990bfb56e95c6ae1bd2f8be10e88683/Doc/reference/ldap.rst#exceptions
20:36:02 <orange_julius> It is!
20:37:18 <lbragstad> https://github.com/openstack/ldappool/blob/master/ldappool/__init__.py#L248-L265
20:37:47 <orange_julius> 248-https://git.openstack.org/cgit/openstack/ldappool/tree/ldappool/__init__.py#n248-265
20:37:50 <orange_julius> damnit bragstad
20:38:01 <orange_julius> =P
20:38:47 <orange_julius> Yea so that is totally what is happening right? LDAPError is the base class, so that exception will catch all errors including auths. ldappool will retry 3 times in .3 seconds and lock a user out instantly
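The distinction ldappool would need to make, sketched with python-ldap (a hypothetical fix, not the actual ldappool code):

```python
import ldap

def bind_once(conn, who, cred):
    """Only treat connection-level failures as retryable."""
    try:
        conn.simple_bind_s(who, cred)
        return True
    except ldap.INVALID_CREDENTIALS:
        # Bad password: retrying just burns the directory's lockout
        # counter, so surface the failure immediately.
        raise
    except ldap.SERVER_DOWN:
        # Transient connection failure: safe for the caller to retry.
        return False
```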
20:39:14 <lbragstad> it's looking to be more and more the case
20:39:37 <lbragstad> totally an unfortunate combination of configuration options between two systems lol
20:39:46 <orange_julius> haha
20:39:51 <orange_julius> I'm just surprised this hasn't come up before
20:39:59 <lbragstad> yeah... me too
20:40:17 <lbragstad> but i guess it depends on what people are setting their lockouts to on AD
20:40:47 <lbragstad> if they set it to 5, then the likelihood of hitting this drops significantly (i would assume)
20:42:17 <orange_julius> Yea seems like it. I'm going to set up a VM and play with the connection pooling settings to see if I can replicate. If I can... what do you want me to submit for this?
20:42:47 <lbragstad> i mean - we could document this as a bug and just write down the coincidence and mark it as invalid
20:43:12 <lbragstad> that would make it discoverable in case other people are randomly hitting this
20:43:29 <lbragstad> otherwise, i'm not sure we can adjust the defaults in keystone, just because what would you choose?
20:44:06 <lbragstad> it becomes a guessing game as to what you'd expect a reasonable retry to be and assuming that's what people are doing
20:44:18 <lbragstad> but the net would be
20:44:29 <orange_julius> Well why would you ever retry on a bad password?
20:44:40 <orange_julius> I guess that would be something for ldappool
20:44:45 <lbragstad> right
20:44:57 <lbragstad> you could teach ldappool to try and distinguish the difference
20:45:32 <lbragstad> so - a bug to ldappool is probably more appropriate
20:45:41 <orange_julius> I think getting it documented would be a great start. You are right, you can't increase the number of retries or you might break more people. 3 is fairly low already
20:46:01 <lbragstad> we can just document the workaround in the ldappool bug
20:46:18 <orange_julius> Sounds good to me
20:47:06 <lbragstad> "if you're relying on ldap to enforce user lockouts, just be sure to set retry_max to 0 until ldappool knows how to distinguish different authentication failure types"
20:47:33 <orange_julius> (y)
20:47:38 <lbragstad> otherwise you risk locking users out prematurely
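The workaround in keystone.conf terms (option name from keystone's [ldap] section; confirm the exact semantics of 0 on your release):

```ini
[ldap]
pool_retry_max = 0
```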
20:48:36 <orange_julius> You also could introduce strange behaviors which might have been masked by the retries, but it's better than an instant lockout.. especially when people have to call to get their account unlocked =p
20:49:38 <lbragstad> right - i can see that being totally frustrating
20:49:58 <lbragstad> want me to open a bug or have you already started?
20:50:40 <orange_julius> I can open it if you are busy, but I haven't started yet.
20:51:20 <lbragstad> kmalloc: we're putting ldappool bugs here - yeah? https://bugs.launchpad.net/ldappool
20:52:09 <kmalloc> kmalloc: i think so
20:52:17 <kmalloc> kmalloc: i mean, that is where i'd put them ;)
20:52:19 <lbragstad> me too, we just haven't had one opened yet ;)
20:52:35 <orange_julius> oh man nice! A first!
20:53:26 * lbragstad cringes watching ldappool's paint suffer its first scratch
20:54:21 <lbragstad> i'll leave the honors to orange_julius heh
20:58:35 <orange_julius> Maybe I'm just dumb, but I don't see a button to report bugs. It's usually on the far right, yeah?
20:58:50 <lbragstad> https://bugs.launchpad.net/ldappool/+filebug
20:59:05 <orange_julius> w00t thanks
20:59:19 <lbragstad> you should see a Report Bug hiding in the upper right corner of the page
21:09:07 <orange_julius> Thanks for helping troubleshoot this lbragstad. Much appreciated
21:10:14 <orange_julius> Submitted: https://bugs.launchpad.net/ldappool/+bug/1785898
21:10:14 <openstack> Launchpad bug 1785898 in ldappool "Connection Pooling Retries Failed Passwords" [Undecided,New]
21:10:36 <orange_julius> I had to reallllly restrain myself from putting "first" somewhere in there =P
21:15:07 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert regions API to flask native dispatching  https://review.openstack.org/589640
21:15:08 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert services api to flask native dispatching  https://review.openstack.org/589641
21:15:08 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert endpoints api to flask native dispatching  https://review.openstack.org/589642
21:15:24 <kmalloc> lbragstad: ^ annnnnnnnd there is another subsystem done.
21:15:35 <kmalloc> lbragstad: i.. i think we're down to the "really hard"(tm) ones
21:24:01 <lbragstad> sweet
21:24:32 <lbragstad> orange_julius: anytime - i'll wait to confirm the bug once we know it's recreatable with your system
21:29:36 <orange_julius> Sounds good. I'll probably get around to reproducing tomorrow in a test environment and then hopefully we can get our change request approved for Friday. Once I confirm it there I'll update the bug
21:32:48 <lbragstad> ++
23:16:29 <openstackgerrit> Merged openstack/keystone master: Convert OS-AUTH1 paths to flask dispatching  https://review.openstack.org/588064
23:53:20 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert regions API to flask native dispatching  https://review.openstack.org/589640
23:54:18 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert services api to flask native dispatching  https://review.openstack.org/589641
00:37:58 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert endpoints api to flask native dispatching  https://review.openstack.org/589642
00:38:39 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert regions API to flask native dispatching  https://review.openstack.org/589640
00:38:50 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert services api to flask native dispatching  https://review.openstack.org/589641
00:38:55 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert endpoints api to flask native dispatching  https://review.openstack.org/589642
02:36:04 <openstackgerrit> Boxiang Zhu proposed openstack/python-keystoneclient master: refactor the getid method in keystoneclient/base.py  https://review.openstack.org/589689
09:35:39 <openstackgerrit> OpenStack Release Bot proposed openstack/python-keystoneclient master: Update reno for stable/rocky  https://review.openstack.org/589791
10:07:06 <openstackgerrit> Vishakha Agarwal proposed openstack/keystone master: Implement Trust Flush via keystone-manage.  https://review.openstack.org/589378
10:25:58 <openstackgerrit> Vishakha Agarwal proposed openstack/keystone master: Implement Trust Flush via keystone-manage.  https://review.openstack.org/589378
13:40:03 <lbragstad> knikolla: have you been following http://lists.openstack.org/pipermail/edge-computing/2018-August/000379.html at all?
13:40:06 <lbragstad> just curious
13:41:08 <cmurphy> that call is happening now fyi
13:42:28 <cmurphy> or no it's the other edge group i guess
13:44:58 <lbragstad> there hasn't been a time set for the keystone-specific one has there?
13:45:28 <cmurphy> i don't know
13:46:50 <lbragstad> i'll assume not since no one has replied to the thread yet
13:48:01 <cmurphy> ildikov: ^
13:48:04 <ildikov> lbragstad: cmurphy: today's call is an OPNFV one
13:48:14 <lbragstad> ack
13:48:18 <cmurphy> yeah i got them confused
13:48:29 <ildikov> lbragstad: cmurphy: I just wanted to ping you and knikolla to ask whether anyone would be available tomorrow :)
13:49:00 <ildikov> we would like to start evaluating options that we collected regarding how to use Keystone in edge scenarios
13:49:21 <ildikov> and in addition we can plan for the PTG and follow up testing and demo activities
13:49:25 <cmurphy> i think i have another meeting
13:49:34 <lbragstad> i can sit in tomorrow, but i'm going to be traveling all day friday
13:49:49 <ildikov> and as no one besides the keystone team is an expert on Keystone at this moment, we ideally need someone from this team
13:50:05 <ildikov> lbragstad: the most votes are for 1300 UTC tomorrow
13:50:31 <ildikov> lbragstad: would be a Zoom call as that's what people seem to prefer these days, but we usually try to take notes on the edge IRC channel as well
13:50:31 <cmurphy> oh i should be able to do 1300
13:50:48 <ildikov> cmurphy: basically the same time as the OPNFV call today
13:51:01 <ildikov> it's probably a first of a series
13:51:12 <lbragstad> i can make that work for tomorrow i think
13:51:52 <lbragstad> ildikov: i've added a slot for edge related discussions to the federation section of our PTG etherpad https://etherpad.openstack.org/p/keystone-stein-ptg
13:52:15 <lbragstad> (line 52 - 53)
13:53:02 <ildikov> lbragstad: awesome, I wanted to ask you about that too :)
13:53:25 <lbragstad> according to the PTG schedule, we have monday set aside for x-project related things
13:53:33 <lbragstad> thursday and friday are supposed to be project specific
13:53:42 <ildikov> lbragstad: I will try to get the StarlingX people and the Edge Computing Group people in sync about this
13:53:57 <lbragstad> but if there is a time that works better for others, let me know or just jot it down in the etherpad
13:54:10 <lbragstad> (i don't plan on formatting it into an actual schedule until the end of this month)
13:54:28 <ildikov> lbragstad: those groups meet, one on Tuesday and the other on Wednesday, so if we cannot make Monday work we still have one room each following day to figure it out
13:55:03 <ildikov> lbragstad: sounds great, I will let both groups know about this so that we can plan
13:55:22 <lbragstad> without getting ahead of myself, i was going to try and keep tuesday open to deal with fallout of monday cross-project work
13:55:36 <lbragstad> but maybe i can tag along in the edge discussions with folks on wednesday
13:55:48 <lbragstad> and if keystone comes up, it comes up
13:56:45 <ildikov> we can try to position this for Monday and then figure it out when to follow up if we need to
13:56:53 <ildikov> would that sound good?
13:59:25 <lbragstad> sure
14:01:00 <ildikov> cool, I will write up a mail about it to the ML's and will keep you posted about how things go
14:01:35 <ildikov> did you have any time estimation for the topic, or do we have some flexibility on that?
14:05:20 <lbragstad> i expect we'll have some flexibility - i imagine the major x-project discussions to be unified limits and policy stuff
14:06:44 <kmalloc> ildikov: If it wasn't a zoom call (I still can't get dialed into those at all), I'd be able to join
14:07:17 <kmalloc> Basically, zoom doesn't work at all for me. :(
14:07:38 <kmalloc> I've tried everything short of running a windows VM at this point.
14:08:00 <kmalloc> I guess I need to do that if I were to want to join.
14:08:25 <cmurphy> it works surprisingly well for me
14:09:07 <kmalloc> It plain doesn't run/work on my 18.04 laptop, can't install on my RHEL one, and on fedora it crashes.
14:09:08 <lbragstad> i think i just had to download a plugin when i clicked on the meeting link
14:09:18 <kmalloc> The plug-in is terrible.
14:09:28 <kmalloc> And that is my problem.
14:10:50 <kmalloc> They really need to ditch the plug-in and go with a browser-pluginless solution.
14:12:15 <cmurphy> it made me download a full on desktop client
14:12:49 <cmurphy> which annoyed me but it works so ¯\_(ツ)_/¯
14:13:00 <lbragstad> huh - maybe that's what i did (version 2.0.115900.1201)
14:13:19 <kmalloc> Apparently they have a browser only option as of August 2018
14:13:26 <cmurphy> heh
14:13:28 <kmalloc> I'll try again tomorrow.
14:13:42 <kmalloc> What time was it?
14:13:57 <cmurphy> 1300 utc i think
14:14:16 <kmalloc> Oh, nope, I won't make it
14:14:30 <kmalloc> 6am calls are not in the plan for me
14:15:18 <kmalloc> Hazards of West coast living when apparently everyone else in the community is not in a compatible time zone.
14:17:03 <kmalloc> Oh well, I can weigh in on ML and/or after the fact as needed :)
14:17:55 <kmalloc> ildikov: sorry, I can't make 6am video calls, but trust lbragstad, cmurphy, and knikolla to be good representation for keystone :)
14:21:09 <ildikov> kmalloc: we have people from China, which kills using Hangouts and I only have Zoom account besides that
14:22:13 <ildikov> kmalloc: I'm open to any suggestion for another time when the slot is more convenient for you, as Zoom works pretty well for those who can access it and I haven't heard other complaints about client issues, etc
14:22:40 <ildikov> kmalloc: so I'm happy to adapt, but I haven't tested too many open tools to be able to come up with an alternative that I trust to be stable enough
14:25:18 <lbragstad> i used mumble for a while
14:25:23 <lbragstad> but that's audio only
14:39:26 <ildikov> cool, I'll keep that in mind just in case
14:45:48 <orange_julius> So I can't seem to replicate the keystone+ldap issue with a single keystone instance configured. I am starting to wonder if the issue is with the HA or with how the dashboard is passing the authentication to keystone. I am going to try to install the dashboard and see if I can get it to replicate in the test environment
14:53:25 <orange_julius> The ldappool connector is created using the designated bind user in the domain configuration is it not? Not the user information that would be passed via the Horizon dashboard
14:54:22 <lbragstad> when i was reading the code, it looked like it was the user that was attempting to authenticate
14:54:27 <lbragstad> but i could be wrong
15:02:46 <kmalloc> ildikov: zoom is less of an issue ATM, I'll run a VM if needed, but timing is a bigger issue, I really can't do before about 1530 utc
15:03:34 <ildikov> kmalloc: fair enough; as we have people from APAC, that wasn't really an option
15:03:59 <ildikov> kmalloc: I will try to look into alternating slots and also making sure that the info makes its way out to the ML's
15:04:27 <ildikov> kmalloc: we just don't have a big group of people so I'm trying my best to keep them on one call to kick things off at least
15:23:56 <kmalloc> ildikov: yep, and you have knikolla, lbragstad, and cmurphy, so changing for me isn't important
15:25:03 <ildikov> kmalloc: I don't think you're less important than others, would be better if you could make it too!
15:25:25 <ildikov> kmalloc: I blame time zones :)
15:54:08 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Convert policy API to flask  https://review.openstack.org/589950
15:54:27 <lbragstad> kmalloc: ^ that's going to fail 18 tests, but wanted to make sure i was doing things right
15:54:41 <lbragstad> that just moves the /policy api and doesn't touch the endpoint policy API, yet
17:21:03 <orange_julius> Is there anything I should be aware of with Keystone + RabbitMQ that might cause weirdness? How does keystone use rabbitmq?
17:49:46 <lbragstad> orange_julius: keystone only uses rabbit for outgoing notifications
17:50:10 <orange_julius> Hmm ok thanks
17:50:20 <lbragstad> for example, when certain events happen in keystone, it can place a notification on the message bus for other things to consume
17:50:29 <lbragstad> (e.g. you want to know when project resources are updated)
17:51:57 <lbragstad> otherwise - keystone doesn't consume anything off the message bus..
17:52:33 <lbragstad> in fact, message queues are entirely optional for keystone from a deployment perspective
17:55:59 <orange_julius> Ok just making sure. Thanks
17:56:50 <lbragstad> yep - are you able to recreate the bug using your test env?
17:57:46 <orange_julius> Not yet. I'm trying to find out what is different. A simple test of vanilla keystone with LDAP doesn't appear to show the same symptoms.. Although actually... I haven't tried it from the cmdline on the prod env yet..
17:58:42 <lbragstad> so - you've backed your test keystone to the same AD but you're not locked out when you try and login to horizon using a bad password?
17:59:25 <orange_julius> I don't have horizon set up yet. I've been testing with the openstack cmdline tools. 'source rc_file; openstack project list'
17:59:37 <orange_julius> but yes. it is not locking me out
17:59:48 <lbragstad> interesting
18:00:14 <orange_julius> Also interesting tidbit. On the test environment I am receiving a "bad password" while doing the cmdline tool test. In prod I receive a "The request you have made requires authentication" message.
18:01:02 <orange_julius> That may be what it is returning when keystone finds a locked user in AD though
18:06:48 * lbragstad spins up a new instances to test with openldap
18:08:13 <orange_julius> After looking more at ldappool and ldap/common.py it really seems like this should be the issue....
18:12:53 <lbragstad> i'm curious if i can recreate this using openldap
18:13:03 <lbragstad> http://www.openldap.org/doc/admin24/overlays.html#Password%20Policies
18:14:40 <orange_julius> Should work in theory
18:37:02 <orange_julius> Cannot replicate with x3 keystones + haproxy either
18:38:06 <orange_julius> Have not tried Galera replication yet. Would that affect it in some way? I wouldn't think so..
18:51:31 <kmalloc> lbragstad: commented, minor nits mostly
18:51:55 <kmalloc> lbragstad: one thing that warrants a -1, and it's about needing to refactor the deprecated decorator
18:58:05 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert endpoints api to flask native dispatching  https://review.openstack.org/589642
19:01:31 <lbragstad> orange_julius: no - i wouldn't either
19:03:25 <lbragstad> kmalloc: ack - thanks for the review
19:14:27 <kmalloc> lbragstad: i found a minor data leak bug in our role API
19:14:46 <kmalloc> lbragstad: i think.
19:14:53 * kmalloc reads this one more time
19:15:42 <kmalloc> it is possible to determine if a role exists [by id] unauthenticated or even if you're authenticated but not allowed to get roles.
19:16:45 <kmalloc> lbragstad: in support of the "domain roles" we call "get_role_wrapper" which does the provider lookup and then passes down to .get_role or .get_domain_role (on the RoleV3 controller)
19:16:55 <kmalloc> get_role and get_domain_role are @protected
19:17:00 <kmalloc> get_role_wrapper is not
19:17:20 <kmalloc> so you can get a RoleNotFound even if you have no token.
19:17:32 <kmalloc> this is a pretty minor data leak
19:17:40 <kmalloc> but it is a data leak none-the-less
19:18:04 <kmalloc> and i just realized this wasn't in a direct message... and is in a logged channel.
19:18:05 <kmalloc> ugh
19:18:13 * kmalloc goes and opens a security bug.
19:19:15 <orange_julius> So here is something potentially interesting.. running a TCPDUMP in production shows the authentication request (/v3/auth/tokens) coming through multiple times while a tcpdump in test shows just one request coming through
19:19:26 <orange_julius> I swear to god if this is a network issue...
19:20:51 <kmalloc> orange_julius: i would be shocked if a network issue caused it since it should require TCP handshake for the post.
19:21:33 <orange_julius> At this point I'm ready to entertain all possibilities. Not being able to replicate this is super frustrating =P
19:21:52 <kmalloc> orange_julius: i could totally buy openstack-django-auth re-sending the post because it doesn't get a response it knows is "your username/password is wrong" and instead thinks it is a "token is invalid, reauth"
19:22:36 <kmalloc> but i would be shocked if a network issue over TCP could cause multiple posts at the network layer itself [application stupidity aside]
19:22:53 <orange_julius> Is that the theory that we have been talking about before? openstack-django-auth would only be the horizon dashboard, no?
19:23:35 <kmalloc> yeah it would be.
19:24:07 <kmalloc> if you're using CLI and seeing the issue it could point to keystoneauth mis-raising an exception, which would affect more than just horizon: horizon, cli, etc
19:24:19 <orange_julius> Yup so the issue happens in the CLI and horizon
19:24:27 <kmalloc> if you use raw CURL to do a post and get the same issue [no python code]
19:24:46 <kmalloc> and the user still gets locked out, then i could see network/keystone/haproxy
19:24:55 <kmalloc> this might also be the retry logic in ksa.
19:25:03 <kmalloc> ksa = keystoneauth library
19:25:42 <kmalloc> or heck just plain requests, but in short don't use keystoneauth, make a direct post to the keystone endpoint to get a token and see if you lock out the user.
19:26:14 <orange_julius> Alrighty I'll try that.
19:28:19 <kmalloc> orange_julius: :) i hope it shows that we have a bug in keystoneauth only because it means you did nothing wrong and we can work to get a fix for you/everyone quickly
19:29:46 <orange_julius> direct curl to a keystone endpoint locks out the user
19:30:22 <orange_julius> I shouldn't be happy about that, but I am. I think that eliminates a bunch of things that were tumbling around my brain
19:31:43 <orange_julius> The question becomes.... why can't I replicate this
19:32:09 <kmalloc> lbragstad: ah nvm, no security issue, but ick, this code is ... awful
19:32:45 <kmalloc> orange_julius: the curl locked out the user and the TCPdump showed multiple posts?
19:33:13 <kmalloc> orange_julius: just to be sure the user wasn't about to be locked out on a single failed attempt
19:33:45 <orange_julius> Checking again now. One sec.
19:33:52 <kmalloc> sure thing :)
19:36:01 <lbragstad> are you sure you're using the same ldap config as what's in production?
19:38:00 <orange_julius> Well I didn't get the tcpdump, but I did just have the account unlocked so I know it didn't have any latent password attempts
19:38:17 <orange_julius> and yes I am pretty sure but I will double check. I copied everything over from production
20:00:00 <lbragstad> kmalloc: TBH i'm liking the self.request_body_json/self.auth_context properties
20:00:07 <lbragstad> that's going to be really nice i think
20:08:19 <kmalloc> lbragstad:  :)
20:09:05 <kmalloc> lbragstad: advantages to having everything in a clear location
20:09:13 <kmalloc> lbragstad: and having direct access to the request context
20:10:12 <kmalloc> lbragstad: i've almost finished [running tests] for /v3/roles, then will work on OS-INHERIT and /role_inference
20:10:47 <kmalloc> i hope i'll have the bulk of assignment (not the cross-api bits) done shortly
20:16:24 <lbragstad> cool - i'm still wading through policy
20:16:39 <lbragstad> working on OS-ENDPOINT-POLICY  now though
20:17:21 <kmalloc> OS-ENDPOINT-POLICY shouldn't be too bad.
20:17:33 <kmalloc> your change will conflict with my endpoints change, but that is not a big deal
20:17:37 <kmalloc> we can race to rebase :)
20:17:52 <kmalloc> whoever's lands last will need to update to remove routers.
20:19:00 <lbragstad> so - the OS-ENDPOINT-POLICY api deals with a bunch of different resources
20:19:09 <lbragstad> like policies, endpoints, and services
20:20:33 <lbragstad> if those resource classes are defined in other modules within keystone/api, can i just import them and re-use them?
20:21:43 <kmalloc> if you look at https://review.openstack.org/#/c/589288/ you'll see how to wrap members from other apis
20:21:52 <kmalloc> i've already fixed that :)
20:24:42 <kmalloc> and you'll want to move the notification callbacks to the manager
20:26:06 <kmalloc> lbragstad: fwiw, most of what you're seeing is just weirdness in the url
20:26:09 <kmalloc> and line wrapping
20:26:28 <kmalloc> /policies/{policy_id}' + self.PATH_PREFIX +
20:26:28 <kmalloc> '/services/{service_id}'),
20:26:47 <kmalloc> that is really /policies/{policy_id}/OS-ENDPOINT_POLICY/services/{service_id}'),
20:26:55 <lbragstad> right
20:27:18 <kmalloc> which in flask becomes: /policies/<string:policy_id>/OS-ENDPOINT_POLICY/services/<string:service_id>'
20:27:30 <kmalloc> and the controller doesn't actually emit anything
20:27:35 <kmalloc> it's almost all NO_CONTENT
20:28:02 <kmalloc> list_endpoints_for_policy is the only thing you have to return a value for
20:28:17 <lbragstad> oh - i suppose
20:28:21 <kmalloc> yep
20:28:24 <lbragstad> the write operations are put
20:28:28 <lbragstad> and delete
20:28:32 <kmalloc> exactly
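A sketch of that shape in flask-restful terms (resource and route names illustrative): the converter-style URL, with PUT/DELETE handlers that return 204 and no body:

```python
import flask
import flask_restful

app = flask.Flask(__name__)
api = flask_restful.Api(app)

class PolicyServiceAssociation(flask_restful.Resource):
    def put(self, policy_id, service_id):
        # Create the association; nothing to return.
        return None, 204

    def delete(self, policy_id, service_id):
        # Drop the association; nothing to return.
        return None, 204

api.add_resource(
    PolicyServiceAssociation,
    '/v3/policies/<string:policy_id>/OS-ENDPOINT-POLICY'
    '/services/<string:service_id>')
```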
20:28:52 <lbragstad> and check policy associations don't return bodies despite being a GET?
20:29:00 <kmalloc> not according to the code.
20:29:06 <kmalloc> they don't have return
20:29:10 <kmalloc> so it's an explicit None
20:29:13 <lbragstad> hmm
20:29:14 <kmalloc> s/explicit/implicit
20:29:15 <lbragstad> interesting
20:29:35 <kmalloc> the biggest change is moving the callbacks to the manager
20:29:41 <lbragstad> get_policy_for_endpoint seems to
20:30:10 <kmalloc> get policy for endpoint is /endpoints
20:30:52 <kmalloc> https://git.openstack.org/cgit/openstack/keystone/tree/keystone/api/endpoints.py?h=refs/changes/42/589642/4#n124
20:30:56 <lbragstad> isn't it /policy ?
20:31:00 <kmalloc> addressed in my migration of /endpoints
20:31:19 <kmalloc> nope, policy for endpoints goes in /endpoints, list_endpoints_for_policy is in /policy
20:31:56 <kmalloc> https://github.com/openstack/keystone/blob/master/keystone/endpoint_policy/routers.py#L38
20:32:28 <kmalloc> this is why moving to keystone.api where all code for a given path prefix lives in one place is so much better
20:32:47 <lbragstad> ah - so i just need to include list_endpoints_for_policy since that's the one under /policies
20:32:49 <kmalloc> i didn't even know endpoint-policy hooked into /endpoints until i failed some tests
20:32:53 <kmalloc> yep.
20:33:00 <lbragstad> ok
20:35:07 <kmalloc> this whole cross-api extension from random places in the codebase has made the move to flask much harder
20:36:56 <lbragstad> i'm going to need new resource classes for endpoint policies and service policies
21:37:25 <openstackgerrit> Merged openstack/keystone master: Allow for more robust config checking with keystone-manage  https://review.openstack.org/589308
21:49:37 <lbragstad> kmalloc: have you seen exceptions like this? http://paste.openstack.org/show/727678/
21:56:49 <openstackgerrit> Merged openstack/keystone master: Convert limits and registered limits to flask dispatching  https://review.openstack.org/588080
22:04:36 <lbragstad> i think it's because i'm defining multiple resource mappings...
22:50:36 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Convert policy API to flask  https://review.openstack.org/589950
04:37:33 <openstackgerrit> Vishakha Agarwal proposed openstack/keystone master: Implement Trust Flush via keystone-manage.  https://review.openstack.org/589378
06:07:02 <openstackgerrit> OpenStack Proposal Bot proposed openstack/keystone master: Imported Translations from Zanata  https://review.openstack.org/590111
06:56:05 <mbuil> lbragstad, cmurphy: what tool do you use to generate a PKI key pair?
07:00:52 <cmurphy> mbuil: for the saml metadata? I just use the openssl CLI
07:07:54 <openstackgerrit> zhengliuyang proposed openstack/keystone master: More accurate explanation in api-ref:application credentials  https://review.openstack.org/589135
07:12:19 <mbuil> cmurphy: yes. I am not familiar with key generation. I used "openssl genpkey -algorithm RSA -out private_key.pem -pkeyopt rsa_keygen_bits:2048" to generate the private key and "openssl rsa -pubout -in private_key.pem -out public_key.pem" to generate the public key. Is certfile the public key and keyfile the private key?
07:12:49 <cmurphy> mbuil: yep
07:13:01 <mbuil> cmurphy: thanks
07:13:28 <cmurphy> also i like to use this oneliner https://stackoverflow.com/a/10176685
07:14:05 <cmurphy> with -nodes
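That one-liner with -nodes folded in generates the key and a self-signed cert in one go:

```
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 -nodes
```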
07:16:41 <mbuil> cmurphy: this generates a certificate which I should add as certfile?
07:17:02 <cmurphy> mbuil: it generates the certfile and keyfile in one go
07:18:11 <mbuil> oh that's good! If I remember well, a certificate is a public key signed by a private key. I was wondering if the commands I wrote you were enough, because I did not generate any certificate, just a public and a private key
07:19:48 <openstackgerrit> OpenStack Proposal Bot proposed openstack/oslo.policy master: Imported Translations from Zanata  https://review.openstack.org/590145
07:24:35 <mbuil> cmurphy: keystone-manage does not exist. What pip package should I install? I have keystoneauth1==3.4.0 and python-keystoneclient==3.15.0
07:25:24 <mbuil> oh wait
07:25:40 <cmurphy> mbuil: keystone-manage comes from the server package, you'll need to install keystone itself
07:25:55 <mbuil> cmurphy: I forgot things are installed in venvs, sorry
07:25:59 <mbuil> I see it now :)
07:57:26 <mbuil> cmurphy: I need a bit of help. In my deployment, I had Keystone working under nginx. I have been following this guide to change to Apache ==> https://docs.openstack.org/keystone/latest/install/keystone-install-obs.html#configure-the-apache-http-server but I guess it assumes that there is a clean environment. I stopped nginx and started apache successfully but when using the openstack cli I get problems
07:58:14 <mbuil> this is what I get ==> Failed to discover available identity versions when contacting http://192.168.122.3:5000/v3. Attempting to parse version from URL. Service Unavailable (HTTP 503). I guess I am missing some config in the apache part
07:59:44 <cmurphy> mbuil: is there anything in the apache logs or the keystone logs that would indicate why it's returning a 503?
08:07:13 <mbuil> cmurphy: in apache2 logs ==> client denied by server configuration:.... let me investigate this. thanks
08:48:09 <mbuil> cmurphy: I am stuck again and still in the nginx -> apache step :(. When trying 'openstack network list' things are ok until it does a GET call to .../v2.0/networks, where it gets an HttpException. These are the logs in the client: https://hastebin.com/efalosiseg.sql
08:49:15 <mbuil> cmurphy: and this is the error I see in neutron logs ==> https://hastebin.com/suvivaxobi.rb "DiscoveryFailure: Could not determine a suitable URL for the plugin"
08:49:46 <mbuil> I am using the same openrc as before... do I need to change anything there?
08:52:44 <cmurphy> mbuil: that looks like an issue with neutron to me, you can see on the lines before the failure that it was successful in talking to keystone
08:53:03 <cmurphy> so the openrc should be fine, seems like a server error with neutron
08:53:36 <cmurphy> oh sorry
08:53:47 <cmurphy> you are looking at the neutron logs
08:54:01 <cmurphy> so i'd check the keystone_authtoken section in neutron.conf
08:54:50 <cmurphy> and make sure it is pointing to the right keystone endpoint and also make sure it has a user_domain_name and project_domain_name set to 'Default'
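Roughly the section to check (values illustrative; option names vary a bit by release):

```ini
[keystone_authtoken]
auth_url = http://192.168.122.3:5000/v3
auth_type = password
username = neutron
password = NEUTRON_PASS
project_name = service
user_domain_name = Default
project_domain_name = Default
```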
08:56:58 <mbuil> cmurphy: I also get problems when listing images and glance logs give a bit more info ==> https://hastebin.com/jowigoxica.js
08:57:32 <mbuil> let me check that
08:57:49 <cmurphy> mbuil: is your keystone listening on port 35357? we changed most of our docs to stop using that port and only use port 5000
09:04:57 <mbuil> cmurphy: good point. I have just realized that nginx was listening to 5000, 80 and 35357 and apache is listening to 5000 and 80
09:12:04 <mbuil> cmurphy: when using nginx I had a .../conf.d/keystone-wsgi-public.conf and a .../conf.d/keystone-wsgi-admin.conf. Now I only have .../conf.d/wsgi-keystone.conf and the content points to keystone-wsgi-public. Is that enough or should there be a config for keystone-wsgi-admin too?
09:17:29 <cmurphy> mbuil: that's enough, the two different endpoints are legacy from the keystone v2 API which used different access control for each endpoint, for keystone v3 all of the access control is done in code with policy and it can all go through the one endpoint
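For reference, a minimal wsgi-keystone.conf in the spirit of the install guide (paths and log locations vary by distro); note the Require all granted, which is what the earlier "client denied by server configuration" error points at:

```apache
Listen 5000

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    <Directory /usr/bin>
        Require all granted
    </Directory>
    ErrorLog /var/log/apache2/keystone.log
</VirtualHost>
```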
09:50:47 <mbuil> cmurphy: I am at the step "Configure Apache to use a federation capable authentication method". Any preference between Shibboleth and Mellon? Remember I am planning to do Keystone to Keystone (perhaps that limits the option to one)
09:52:37 <cmurphy> mbuil: the last time I tried, mellon didn't work properly in the keystone to keystone scenario, there is some bug either in mellon or in keystone that made it unable to parse the saml response properly and i never got to the bottom of it
09:52:58 <cmurphy> shibboleth is a safe bet even though it's annoying to configure
09:53:22 <mbuil> cmurphy all right! thanks
10:20:55 <mbuil> cmurphy: regarding shibboleth config, I am following: https://docs.openstack.org/keystone/latest/advanced-topics/federation/shibboleth.html. I am about to add the "<Location " config but I am not sure what I should write there
10:21:24 <mbuil> cmurphy: first question, should I install and configure Shibboleth in both deployments or only in the one acting as SP?
10:34:24 <cmurphy> mbuil: only on the SP
10:36:10 <cmurphy> mbuil: the <Location /Shibboleth.sso> you can copy verbatim; for the <Location /v3/OS-FEDERATION/...> you can also copy it verbatim, but the name of the identity provider and protocol, which are 'myidp' and 'saml2' in the example, is important and will come up later in the documentation
10:36:43 <cmurphy> you should probably keep the name of the protocol as 'saml2' but you might want to change the name of the identity provider
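The blocks in question look roughly like this, following the guide's pattern ('myidp' is the identity provider name to adjust):

```apache
<Location /Shibboleth.sso>
    SetHandler shib
</Location>

<Location /v3/OS-FEDERATION/identity_providers/myidp/protocols/saml2/auth>
    ShibRequestSetting requireSession 1
    AuthType shibboleth
    ShibExportAssertion Off
    Require valid-user
</Location>
```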
10:37:24 <mbuil> cmurphy: ah! ok, I was wondering whether I should replace myidp with what I wrote in the IdP as "idp_entity_id". Let me read further then
10:50:25 <mbuil> cmurphy: BTW, there was a shib.conf that appeared in the conf.d/ directory right after installing Shibboleth. Should I leave it there?
10:50:54 <cmurphy> mbuil: yes
10:51:39 <mbuil> cmurphy: ok. I need to stop here and focus on something different. I'll try to progress tomorrow. Thanks for the help! :)
10:52:01 <cmurphy> no problem :)
12:53:24 <lbragstad> ildikov: is there a specific zoom link floating around? or should we use the same one as before?
12:53:45 <ildikov> lbragstad: https://zoom.us/j/671236148
12:54:01 <ildikov> lbragstad: all the relevant info is here: https://wiki.openstack.org/wiki/Edge_Computing_Group#Keystone
13:12:59 <knikolla> joining now, sorry i'm late.
13:28:30 <lbragstad> zzzeek: about your galera work, do you know if there was ever a spec pushed for that? i remember you brought it to a meeting once and the next steps were to document the approach a bit
13:28:56 <zzzeek> lbragstad: the spec I was working on is at https://review.openstack.org/#/c/566448/
13:29:12 <lbragstad> oh - great
13:29:13 <lbragstad> thanks!
13:29:22 <zzzeek> lbragstad: current POC is at https://github.com/zzzeek/stretch_cluster
13:41:15 <ildikov> knikolla: no worries, Tnx for joining
14:24:52 <kmalloc> o/
14:26:14 <lbragstad> mornin'
14:41:33 <gagehugo> o/
15:02:09 <lbragstad> FYI - https://review.openstack.org/589950
15:02:15 <lbragstad> i've proposed rc1
15:02:48 <lbragstad> depending on the state of porting various APIs we can assess if we want an RC2 next week
15:07:59 <lbragstad> also - i'll be traveling tomorrow and unavailable
15:08:24 <lbragstad> if anything urgent comes up i should be available saturday-ish?
15:08:46 <lbragstad> but at that point i'll be on opposite timezones
15:49:39 <openstackgerrit> OpenStack Release Bot proposed openstack/keystone master: Update reno for stable/rocky  https://review.openstack.org/590405
16:12:07 <kmalloc> weird, i am getting a failure on policy for "get_domain_role" and i don't see why
16:14:14 <kmalloc> oh... i see what is happening.
16:14:22 <kmalloc> missing target data because of magic stuff.
16:14:24 <kmalloc> got it
16:15:18 <lbragstad> i added cycle-highlights for keystone https://review.openstack.org/#/c/590411/
16:15:33 <lbragstad> would love feedback there if anyone has any
16:24:08 <gagehugo> lbragstad: done
16:24:41 <gagehugo> my main concern is the whole case-insensitive issue we had with "Member" vs "member" when we merged the default roles
16:24:50 <gagehugo> that may affect people
16:27:00 <lbragstad> gagehugo: done
16:28:41 <lbragstad> gagehugo: where did we put that case-sensitivity statement in docs?
16:29:06 <gagehugo> https://review.openstack.org/#/c/576640/
16:29:33 <lbragstad> https://docs.openstack.org/keystone/latest/admin/identity-case-insensitive.html ah
16:29:39 <gagehugo> yeah
18:32:20 <kmalloc> lbragstad: interesting, the role api and role_implication api are the first places i had to convert an enforcement callback. it went super easily
18:32:43 <kmalloc> lbragstad: fwiw, enforce_call made it almost painless.
18:35:32 <openstackgerrit> Gage Hugo proposed openstack/keystone master: WIP - Add details and clarify examples on casing  https://review.openstack.org/590477
18:43:04 <ayoung> orange_julius, ever solve your problem?  Sounds like HA proxy is sending one request to each Keystone server for some reason.
19:01:14 <kmalloc> lbragstad: hm.
19:01:24 <kmalloc> lbragstad: we broke HTTP spec again in implied roles *sigh*
19:01:34 <kmalloc> lbragstad: we issue a NO_CONTENT for HEAD.
19:01:40 <kmalloc> *eyeroll*
19:01:57 <kmalloc> checking our docs.
19:21:17 <kmalloc> yep, we messed that one up =/
19:23:01 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert Roles API to flask native dispatching  https://review.openstack.org/590494
19:23:01 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert Roles API to flask native dispatching  https://review.openstack.org/590495
19:24:23 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert Roles API to flask native dispatching  https://review.openstack.org/590494
19:59:50 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert role_inferences API to flask native dispatching  https://review.openstack.org/590502
20:19:42 <openstackgerrit> Gage Hugo proposed openstack/keystone master: Set initiator id as user_id for auth events  https://review.openstack.org/588086
20:28:12 <orange_julius> ayoung: no I have not yet. Trying to replicate the issue. I can lock out a user by sending a POST with curl to a backend keystone node though.. so I don't think it's haproxy
20:28:17 <orange_julius> since the curl never hits haproxy
20:39:19 <openstackgerrit> Merged openstack/keystone master: Imported Translations from Zanata  https://review.openstack.org/590111
20:50:12 <kmalloc> lbragstad: almost have role_assignments converted. the assignment subsystem is getting there.
20:53:04 <ayoung> orange_julius, so we do one bind as the user to authenticate, and the rest of the work is done as an admin user.  If there are multiple simple-bind calls from Keystone to LDAP, it should show up in the Keystone log.
20:53:33 <ayoung> Need to turn on tracing
20:57:44 <orange_julius> I can't turn that on in prod. Trying to replicate in test. Hopefully I'll have something soon
20:59:30 <ayoung> orange_julius, I wonder if it is something fun like:  pooling mechanism treats a failure as a reason to retry.
21:00:00 <kmalloc> ayoung: hm. that would be odd, but i could see ldappool having such an issue
21:00:07 <orange_julius> Yup thats what the theory is. I have a bug report opened with ldappool
21:00:15 <orange_julius> Just need verification
21:00:15 <kmalloc> ayoung: wonder if we're setting a RETRY value on the connection as well
21:00:53 <orange_julius> ldappool catches ldap.LDAPError which is the superclass for all ldap errors. I'm pretty sure it's happening there... line 251-268 of ldappool
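The retry loop being described boils down to something like this (a simplified Python sketch of the pattern, not the verbatim ldappool source; _connect_with_retry is an illustrative name):

    import time

    import ldap


    def _connect_with_retry(connect, retry_max, retry_delay):
        """Sketch of the catch-all retry loop under suspicion."""
        exc = None
        for _ in range(retry_max):
            try:
                # connect() performs the connect and simple bind.
                return connect()
            except ldap.LDAPError as error:
                # ldap.LDAPError is the base class of every python-ldap
                # exception, so INVALID_CREDENTIALS from a wrong password
                # lands here too -- and gets retried, which is exactly
                # what trips server-side account lockout policies.
                exc = error
                time.sleep(retry_delay)
        raise exc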
21:01:07 <ayoung> simple-bind is dumb
21:01:23 <ayoung> it really is an anti-pattern.  Share your password with every app....
21:01:49 <kmalloc> hmm.. this looks suspect
21:01:50 <kmalloc> https://github.com/openstack/ldappool/blob/master/ldappool/__init__.py#L99
21:05:56 <kmalloc> ayoung, orange_julius: https://github.com/python-ldap/python-ldap/blob/f3ff4a320a21a3ac42bc7587eaed09c4f9b2f9f5/Lib/ldap/ldapobject.py#L1193-L1203
21:06:08 <kmalloc> looks to me like what is happening is the failure is being re-tried
21:06:30 <ayoung> except ldap.SERVER_DOWN:
21:06:35 <kmalloc> looks like ldappool needs a bunch more logic; we need to do the connect independent of simple_bind, and ensure we don't retry on a legit simple bind error
21:06:45 <ayoung> you should not get that on an auth failure, tho
21:07:10 <kmalloc> unless the simple_bind failure to AD maybe drops the connections?
21:07:13 <kmalloc> in this case
21:07:20 <kmalloc> we can't discount that as a possibility
21:07:27 <orange_julius> This is what we had been taking a look at: https://git.openstack.org/cgit/openstack/ldappool/tree/ldappool/__init__.py#n255 Is this only to establish the pool then?
21:07:31 <ayoung> ldap.TIMEOUT maybe?
21:07:51 <kmalloc> ayoung: maybe as well
21:08:04 <kmalloc> i am guessing it is an interaction with the reconnect object and ldappool's use
21:08:21 <ayoung> oh, that last one looks like a suspect
21:08:42 <kmalloc> ah yeah that might do it
21:08:57 <ayoung> that is so not my code
21:09:07 <ayoung> who wrote that...
21:09:11 <kmalloc> that is mostly inherited from mozilla
21:09:14 <orange_julius> Are you guys looking at ldappoll stuff?
21:09:16 <kmalloc> when we took over ldappool
21:09:22 <kmalloc> orange_julius: yeah.
21:09:39 <kmalloc> ayoung: so, we own ldappool but it was originally a mozilla project.
21:09:59 <kmalloc> ayoung: my guess is that is historical.
21:10:05 <ayoung> I still wanna know who wrote dat
21:10:10 <kmalloc> ayoung: git blame?
21:11:18 <ayoung> 5f674821 (Steve Martinelli   2016-05-12 12:16:34 -0700 255)             except ldap.LDAPError as error:
21:11:18 <ayoung> 5f674821 (Steve Martinelli   2016-05-12 12:16:34 -0700 256)                 exc = error
21:11:26 <ayoung> Dun dun DUN!
21:11:58 <ayoung> not really
21:12:01 <ayoung> -            except ldap.LDAPError as exc:
21:12:01 <ayoung> +            except ldap.LDAPError as error:
21:12:44 <kmalloc> ayoung: how about https://github.com/openstack/ldappool/blame/6350323b7f6ecbb54f70fd2e847190c99b826d94/ldappool/__init__.py#L217
21:13:54 <ayoung> predates this repo....
21:13:57 <kmalloc> yep
21:14:00 <ayoung> initial-import....
21:14:05 <kmalloc> so long long ago
21:14:12 <kmalloc> in a galaxy far away
21:15:19 <ayoung> so, I wonder if an auth failure is not supposed to raise an exception
21:16:42 <ayoung> exception ldap.INSUFFICIENT_ACCESS
21:16:57 <ayoung> https://www.python-ldap.org/en/latest/reference/ldap.html
21:17:22 <ayoung> I wonder if we were not using that code before to authenticate
21:17:55 <kmalloc> we might have been
21:19:00 <ayoung> conn = self.user.get_connection(user_ref['dn'],
21:19:00 <ayoung> password, end_user_auth=True)
21:19:38 <ayoung> I was young.  I needed the money.
21:20:09 <ayoung> I'd like to state for the record that I did not do the pool thing.
21:21:05 <ayoung> commit 22b114f64724a551df5d32075b6a2d93c394b0d3
21:21:05 <ayoung> Author: Dolph Mathews <dolph.mathews@gmail.com>
21:21:05 <ayoung> Date:   Fri Feb 26 01:22:22 2016 +0000
21:21:05 <ayoung> Enable LDAP connection pooling by default
21:21:51 <ayoung> https://review.openstack.org/#/c/285008/
21:21:54 <ayoung> I +2ed it
21:22:10 <kmalloc> LOL
21:22:15 <ayoung> course, that was not the initial pool commit
21:22:56 <ayoung> commit ea689ff78f47ca762a4c46a726917b290c52cfef
21:22:56 <ayoung> Author: Arun Kant <arun.kant@hp.com>
21:22:56 <ayoung> Date:   Fri May 23 15:25:38 2014 -0700
21:23:17 <ayoung> https://review.openstack.org/#/c/95300/
21:23:24 <ayoung> I +2ed that one, too
21:23:54 <ayoung> orange_julius, you can blame me
21:24:29 <orange_julius> fire and brimstone upon you!!!
21:24:47 <ayoung> It would cool things off right now.  Heat wave in New England
21:24:55 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert role_assignments API to flask native dispatching  https://review.openstack.org/590518
21:25:15 <ayoung> I don't think that is fixable
21:25:23 <ayoung> you'd have to turn off pooling
21:25:43 <orange_julius> So is this issue indeed with https://bugs.launchpad.net/ldappool/+bug/1785898 ?
21:25:43 <openstack> Launchpad bug 1785898 in ldappool "Connection Pooling Retries Failed Passwords" [Undecided,New]
21:26:19 <ayoung> it sure is
21:26:28 <ayoung> Switch to Kerberos
21:26:51 <ayoung> Do we still support kerberos with LDAP? Seems to me Dolph wanted to kill that
21:27:13 <orange_julius> =(   What effect does turning off connection pooling have?
21:27:46 <ayoung> slower connections to the LDAP server
21:27:51 <ayoung> but you would not be turning it off
21:27:56 <ayoung> just stopping retries
21:28:07 <ayoung> which means poor failover semantics, I think
21:28:41 <ayoung> the hack would be to change ldappool to not retry on auth failure
21:28:54 <ayoung> you have the power to change Python code in production?
21:29:49 <orange_julius> Yes but we won't be doing that =P. I think disabling retries would be fine. Does that involve just setting the retry number to 0 in the ldap options?
21:56:41 <orange_julius> might've replicated the issue. I can't unlock my own account though so I gotta wait =(
22:14:04 <kmalloc> Turning off the pool is much slower, but this legitimately is an icky bug
22:14:14 <kmalloc> I'd go with slower over lockout :P
23:11:38 <openstackgerrit> Merged openstack/oslo.policy master: Imported Translations from Zanata  https://review.openstack.org/590145
23:31:01 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert role_assignments API to flask native dispatching  https://review.openstack.org/590518
23:32:03 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert role_assignments API to flask native dispatching  https://review.openstack.org/590518
23:32:51 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert role_assignments API to flask native dispatching  https://review.openstack.org/590518
00:23:29 <kmalloc> wxy-xiyuan: adding that callback in as a followup
00:30:01 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert system (role) api to flask native dispatching  https://review.openstack.org/590588
00:35:04 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Add callback action back in  https://review.openstack.org/590590
00:41:05 <kmalloc> wxy-xiyuan: ^
02:11:51 <openstackgerrit> Merged openstack/python-keystoneclient master: refactor the getid method in keystoneclient/base.py  https://review.openstack.org/589689
02:28:26 <openstackgerrit> zhengliuyang proposed openstack/keystone master: Redundant parameters in api-ref:domain-config  https://review.openstack.org/590604
02:40:27 <openstackgerrit> Li Yingjun proposed openstack/keystone master: Explicitly assigning the exception to a variable for reraise  https://review.openstack.org/590606
02:52:30 <openstackgerrit> Merged openstack/keystone master: Set initiator id as user_id for auth events  https://review.openstack.org/588086
05:51:31 <openstackgerrit> Li Yingjun proposed openstack/keystone master: Explicitly assigning the exception to a variable for reraise  https://review.openstack.org/590606
06:47:07 <openstackgerrit> Merged openstack/keystone master: Update reno for stable/rocky  https://review.openstack.org/590405
08:15:00 <openstackgerrit> Li Yingjun proposed openstack/keystone master: Explicitly assigning the exception to a variable for reraise  https://review.openstack.org/590606
11:30:52 <openstackgerrit> Merged openstack/keystone master: Expose a bug that issue token with project-scope gets error  https://review.openstack.org/587368
12:17:35 <knikolla> o/
13:00:57 <openstackgerrit> Merged openstack/keystone master: Unified code style nullable description parameter  https://review.openstack.org/587721
13:16:26 <orange_julius> Replicated ldappool lockout issue in a test environment. Trying to pool disable fix now
13:36:38 <orange_julius> Turning off connection pooling fixes the issue. Note: setting pool_retry_max = 0 does NOT work. This breaks the LDAP. setting use_pool = False is the fix
13:40:02 <cmurphy> orange_julius: wow great work
13:40:10 <cmurphy> orange_julius: what do you mean by "This breaks the LDAP"?
13:41:03 <orange_julius> It throws a 500 error immediately. I didn't investigate too far into that and just moved on to setting use_pool = False
13:41:07 <orange_julius> sec I can get the logs
13:43:24 <cmurphy> orange_julius: what about pool_retry_max = 1? i wonder if there's an off by one problem
13:43:37 <orange_julius> Ok ldappool throws a MaxConnectionReachedError immediately and keystone freaks out when pool_retry_max = 0.
13:43:41 <orange_julius> I'll try 1
13:46:52 <orange_julius> Setting pool_retry_max = 1 seems to work!
13:47:16 <cmurphy> with use_pool set back to true? :)
13:47:35 <orange_julius> Yes =P
13:47:39 <cmurphy> neat
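For anyone following along, the two workarounds correspond to these [ldap] options in keystone.conf (real option names; which values are safe is per the findings in this conversation):

    [ldap]
    # Workaround 1: bypass the connection pool entirely (slower, no lockouts).
    use_pool = false

    # Workaround 2: keep pooling but allow only a single bind attempt.
    # Note pool_retry_max = 0 breaks connections outright, as noted above.
    #use_pool = true
    #pool_retry_max = 1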
13:59:56 <orange_julius> So in my mind... ldappool should first try to catch an ldap.invalidcredentials, then abort and then do a catchall ldap.ldaperror catch. It seems reasonable that this is where the issue is happening: https://git.openstack.org/cgit/openstack/ldappool/tree/ldappool/__init__.py#n255. This would probably require a change in keystone as well though
14:01:52 <cmurphy> i'm surprised that ldap.ldaperror seems to be a catch-all for both server side and client side errors
14:04:19 <orange_julius> You are surprised that it's used in ldappool that way or you are surprised that python-ldap would make it the catch-all for both server side and client side?
14:04:59 <cmurphy> surprised that python-ldap acts that way
14:05:47 <cmurphy> i imagine the author of ldappool would be surprised too or they wouldn't have written it that way
14:08:37 <orange_julius> Yea most likely. All the exception types are documented here: https://www.python-ldap.org/en/latest/reference/ldap.html#ldap.LDAPError It looks like we would just use ldap.invalid_credentials before ldaperror and break out of the retry loop early. It looks like logic is already in there for handling a bad password so we probably wouldn't have to change too much. https://git.openstack.org/cgit/openstack/ldappool/tree/ldappool/__init__.py#n268
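Spelled out, the proposed change amounts to catching the wrong-password error ahead of the generic handler (again a hypothetical sketch of the loop above, not the actual patch under review):

    import time

    import ldap


    def _connect_with_retry(connect, retry_max, retry_delay):
        """Sketch of the fixed loop: bad passwords abort immediately."""
        exc = None
        for _ in range(retry_max):
            try:
                return connect()
            except ldap.INVALID_CREDENTIALS:
                # A wrong password is definitive; retrying can only lock
                # the account out, so re-raise without retrying.
                raise
            except ldap.LDAPError as error:
                # Anything else (SERVER_DOWN, TIMEOUT, ...) may be
                # transient, so keep the existing retry behavior.
                exc = error
                time.sleep(retry_delay)
        raise exc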
14:31:09 <cmurphy> ildikov: is there a good reference link to the opnfv edge group and meeting schedule? i've found https://wiki.opnfv.org/display/meetings/Edge+Cloud but the etherpad link seems to be to a different minutes etherpad so i'm not sure that's the right one
14:32:48 <ildikov> cmurphy: that's a working group and not the Edge Cloud Project
14:33:40 <cmurphy> ildikov: what is the group that uses https://etherpad.opnfv.org/p/edge_cloud_meeting_minutes for their meeting minutes?
14:33:47 <ildikov> cmurphy: this is the project wiki: https://wiki.opnfv.org/display/PROJ/Edge+cloud
14:34:04 <ildikov> cmurphy: they use that meeting etherpad
14:34:12 <cmurphy> ildikov: okay thank you
14:34:39 <ildikov> cmurphy: np
14:35:26 <cmurphy> ildikov: is the zoom link open or is it invite-only?
14:36:27 <ildikov> cmurphy: you mean to join their meetings?
14:36:34 <cmurphy> ildikov: yes
14:36:41 <ildikov> cmurphy: it's open
14:37:06 <ildikov> the next one will be the week after next
14:37:10 <cmurphy> ildikov: and published somewhere? it's not on https://wiki.opnfv.org/display/PROJ/Edge+cloud
14:37:25 <ildikov> the Wednesday meetings are bi-weekly AFAIK
14:39:49 <cmurphy> ildikov: i'm writing a team update email for the openstack-dev mailing list and wondering whether i should encourage people to attend this meeting and if so how they can discover how to attend the meeting
14:39:50 <ildikov> cmurphy: I have a calendar invite: https://zoom.us/j/5014627785
14:41:12 <ildikov> cmurphy: if people are interested in the demo and federation testing activity, then I think it's worth mentioning
14:41:32 <ildikov> cmurphy: I couldn't find the info on the wiki, I'll ping Fu Qiao about it
14:41:59 <cmurphy> ildikov: okay thank you :)
14:42:13 <ildikov> cmurphy: you can also point people to this wiki on our end: https://wiki.openstack.org/wiki/Keystone_edge_architectures
14:42:31 <ildikov> cmurphy: and this one for activities: https://wiki.openstack.org/wiki/Edge_Computing_Group#Keystone
14:43:05 <ildikov> cmurphy: me thanks! :)
14:43:18 <kmalloc> O/
14:43:47 <kmalloc> orange_julius: I'll poke ldappool if someone else doesn't re: that bug.
14:44:23 <cmurphy> ildikov: will do, just want to make sure i'm covering both groups :)
14:44:33 <ildikov> cmurphy: +1 :)
14:48:00 <cmurphy> kmalloc: i feel like orange_julius is closing in on being able to submit the patch themselves ;)
14:50:39 <gagehugo> o/
15:26:18 <kmalloc> cmurphy: ++
15:53:29 <openstackgerrit> Gage Hugo proposed openstack/keystone master: Remove unused util function  https://review.openstack.org/587232
17:51:03 <openstackgerrit> Merged openstack/keystone master: Remove redundant get_project call  https://review.openstack.org/589027
17:51:05 <openstackgerrit> Merged openstack/keystone master: More accurate explanation in api-ref:application credentials  https://review.openstack.org/589135
17:51:33 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Move json_home "extension" rel functions  https://review.openstack.org/591025
19:35:24 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: import zuul job settings from project-config  https://review.openstack.org/588697
19:35:25 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: add python 3.6 unit test job  https://review.openstack.org/589599
19:35:26 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: remove unused variable to fix pep8 issue  https://review.openstack.org/591068
20:42:32 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-FEDERATION to flask native dispatching  https://review.openstack.org/591082
20:43:07 <kmalloc> knikolla, cmurphy: ^ that change, the OS-FEDERATION one is ... let's just say the scariest of changes to flask, and that is because it is relatively untested code.
20:43:25 <kmalloc> i need some pretty heavy eyes on those bits to make sure i didn't regress something.
20:43:35 <kmalloc> that is pre-unit tests btw.
20:44:36 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-FEDERATION to flask native dispatching  https://review.openstack.org/591082
20:48:22 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-FEDERATION to flask native dispatching  https://review.openstack.org/591082
22:17:18 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-FEDERATION to flask native dispatching  https://review.openstack.org/591082
01:09:17 <kmalloc> knikolla, cmurphy: about to post a patchset that will pass tests. again, this is ugly and i def. need solid eyes on it.
01:14:45 <knikolla> kmalloc: will give it a first pass tomorrow.
01:51:49 <kmalloc> knikolla: thnx
01:52:54 <kmalloc> ugh, we have a broken json_home document
02:02:33 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-FEDERATION to flask native dispatching  https://review.openstack.org/591082
13:02:25 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Refactor ProviderAPIs object to better design pattern  https://review.openstack.org/571955
13:02:25 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix RBACEnforcer get_member_from_driver mechanism  https://review.openstack.org/591146
13:02:26 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert groups API to flask native dispatching  https://review.openstack.org/591147
13:05:20 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix RBACEnforcer get_member_from_driver mechanism  https://review.openstack.org/591146
13:05:21 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert groups API to flask native dispatching  https://review.openstack.org/591147
14:45:18 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: import zuul job settings from project-config  https://review.openstack.org/588697
14:45:19 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: add python 3.6 unit test job  https://review.openstack.org/589599
14:45:20 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: fix documentation build  https://review.openstack.org/591154
15:39:16 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix RBACEnforcer get_member_from_driver mechanism  https://review.openstack.org/591146
15:39:31 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert groups API to flask native dispatching  https://review.openstack.org/591147
16:46:29 <orange_julius74> Hey guys. I left my regular account logged in at work =(  I've created a patch for the ldappool issue we have been discussing. It's in my github currently. What is the best way to get it into the project?
16:47:13 <orange_julius74> Is the ldappool github accepting pull requests or should it go elsewhere?
17:25:32 <kmalloc> orange_julius74: no worries, you'll need to submit it to gerrit (review.openstack.org)
17:26:15 <kmalloc> orange_julius74: https://docs.openstack.org/zaqar/latest/contributor/first_patch.html best reference guide i have
17:26:30 <orange_julius74> Alrighty. I'll start working on that. Thanks!
17:26:52 <kmalloc> well one of them, obviously zaqar replaced with ldappool
17:26:53 <kmalloc> ;)
17:27:05 <orange_julius74> Adding a test for this might be a little funky. I'm still sorting that out now
17:27:33 <kmalloc> let me know, i spend a lot of time writing wonky tests, i can probably give advice if you need it
17:27:36 <kmalloc> :)
17:28:19 * kmalloc sighs. My patch stack is getting a bit deep again: https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bug/1776504
17:29:08 <kmalloc> cmurphy: after flask, i think i'll be working on stand-alone keystone stuff [or at least piling on to that]
17:29:20 <kmalloc> cmurphy: since flask should help to make it a bit easier.
17:32:04 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: import zuul job settings from project-config  https://review.openstack.org/588697
17:32:05 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: add python 3.6 unit test job  https://review.openstack.org/589599
17:32:06 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: fix gate  https://review.openstack.org/591162
17:35:12 <orange_julius74> Alright so I tried to keep the test inline with what was already there. I'm going to clean up the commits and fix a few indentation issues, but I was hoping to get your thoughts. https://github.com/ragingpastry/ldappool/blob/fix-pool-retry/ldappool/tests/test_ldapconnection.py
17:35:33 <kmalloc> cool.
17:35:45 <kmalloc> let me finish my test run and i'll take a look
17:36:04 * kmalloc is running unit tests before adding yet-another-patch to the huge pile of "omg flask" conversion patches
17:37:20 <orange_julius74> Some things: _bind_fails on line 50 wasn't actually used so I kinda took over that. Looking back I probably should make a new function there. I also removed ldap.INVALID_CREDENTIALS from the check here: https://github.com/ragingpastry/ldappool/blob/fix-pool-retry/ldappool/__init__.py#L272 . It made sense to me since that is being thrown up above and lets us test for invalid_credentials more specifically. Let me know if this sucks =P
17:37:48 <kmalloc> i doubt it will suck ;)
17:38:07 <kmalloc> hey, you're fixing a real bug for us, that in my mind already says "this doesn't suck"
17:38:20 <kmalloc> (even if you need some adjustments to the code to land it, still not "sucking")
17:41:56 <orange_julius74> hehe alrighty. Well let me know whenever you get a chance! I'll be around
17:42:22 <kmalloc> do we need to actually raise up INVALID_CREDENTIALS or would a break of the loop be more correct? /me is still reading
17:44:01 <orange_julius74> We could certainly just break out of the loop. It might be more clean to have all of the raise logic in one place (272)
17:44:20 <kmalloc> that was my initial thought
17:44:28 <kmalloc> i don't have a problem with it as is
17:44:49 <kmalloc> but it might be a lot cleaner to just break and raise out at 272 as needed
17:44:56 <orange_julius74> Yup for sure. I'll change that
17:45:05 <kmalloc> if you have issues getting that patch into gerrit, i can help
17:45:20 <kmalloc> and submit on your behalf, but would pref if you can do it
17:45:38 <kmalloc> i'll also want to see the test(s) running in our gate. :)
17:45:46 <kmalloc> overall, i like the direction
17:45:52 <kmalloc> and it's a nice and simple patch
17:53:31 <orange_julius74> Cool I'll make that change and get working on getting the patch into gerrit. Does the test I added make sense?
18:11:56 <kmalloc> mostly
18:12:08 <kmalloc> i don't see anything wonky with it at first glance
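A test along those lines might look roughly like this (hypothetical, written against the _connect_with_retry sketch from earlier rather than the real ldappool internals):

    import unittest
    from unittest import mock

    import ldap


    class TestBadPasswordNotRetried(unittest.TestCase):
        def test_invalid_credentials_raises_without_retry(self):
            # A bind that always fails with a wrong-password error.
            bind = mock.Mock(side_effect=ldap.INVALID_CREDENTIALS)
            with self.assertRaises(ldap.INVALID_CREDENTIALS):
                _connect_with_retry(bind, retry_max=3, retry_delay=0)
            # The point of the fix: exactly one attempt, no retries.
            self.assertEqual(1, bind.call_count)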
18:31:30 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix a translation of log  https://review.openstack.org/591164
18:31:30 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-INHERIT API to flask native dispatching  https://review.openstack.org/591165
20:17:56 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-INHERIT API to flask native dispatching  https://review.openstack.org/591165
21:52:11 <openstackgerrit> Nick Wilburn proposed openstack/ldappool master: fix ldappool bad password retry logic  https://review.openstack.org/591174
21:53:34 * orange_julius74 crosses fingers
22:09:08 <kmalloc> woot
22:09:29 <kmalloc> that's a good first patch if i ever saw one :)
22:15:00 <kmalloc> orange_julius74: commented on your patch in gerrit. I'll upgrade to a +2 when i have a little more time to eyeball it. but it looks good.
22:22:13 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: add lib-forward-testing-python3 test job  https://review.openstack.org/591185
22:22:27 <openstackgerrit> Doug Hellmann proposed openstack/oslo.policy master: add lib-forward-testing-python3 test job  https://review.openstack.org/591189
22:49:34 <orange_julius74> Awesome!
23:09:41 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Add safety to the inferred target extraction during enforcement  https://review.openstack.org/591203
23:10:28 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert role_assignments API to flask native dispatching  https://review.openstack.org/590518
23:10:35 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert system (role) api to flask native dispatching  https://review.openstack.org/590588
23:10:39 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Move json_home "extension" rel functions  https://review.openstack.org/591025
23:11:03 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-FEDERATION to flask native dispatching  https://review.openstack.org/591082
23:11:08 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Refactor ProviderAPIs object to better design pattern  https://review.openstack.org/571955
23:11:15 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix RBACEnforcer get_member_from_driver mechanism  https://review.openstack.org/591146
23:11:20 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert groups API to flask native dispatching  https://review.openstack.org/591147
23:11:24 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix a translation of log  https://review.openstack.org/591164
23:11:28 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-INHERIT API to flask native dispatching  https://review.openstack.org/591165
19:43:27 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert groups API to flask native dispatching  https://review.openstack.org/591147
19:43:40 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix a translation of log  https://review.openstack.org/591164
03:01:50 <lbragstad> knikolla: ping
03:02:46 <knikolla> lbragstad: o/
03:07:08 <lbragstad> knikolla: sorry - i'm having connectivity issues i think :)
03:09:26 <lbragstad> did my question come through?
03:09:46 <knikolla> lbragstad: nope, i don't see it
03:10:13 <lbragstad> i was curious if you had a baseline for how long it takes to do federated authentication in your public cloud deployment?
03:11:02 <knikolla> like a benchmark?
03:11:11 <lbragstad> yeah
03:11:30 <lbragstad> using WebSSO and/or CLI
03:12:34 <knikolla> never measured it, though i can do some profiling tomorrow
03:13:10 <lbragstad> no worries - i was just curious if you had a general idea
03:13:31 <knikolla> it's not particularly slower than plain sql
03:13:38 <lbragstad> would it be weird if it was taking longer than 8 seconds using sso?
03:13:57 <lbragstad> ok - so yeah, 8 seconds would seem slow?
03:15:18 <knikolla> i think so
03:15:22 <lbragstad> ok
03:15:37 <lbragstad> that's about as good an answer as i need, i think
03:16:05 <lbragstad> apparently we have an internal team doing some prototypes of federation and they reported some slow authentication times
03:16:31 * lbragstad doesn't have any details about how things are configured
03:17:14 <knikolla> hmmm
03:18:01 <lbragstad> i plan to ask for specifics when wxy-xiyuan and i meet with them, though
03:18:40 <knikolla> sure
03:19:01 <knikolla> websso is a lot of redirects. running a profiler would point to which part is slow.
03:19:14 <lbragstad> right
03:19:36 <lbragstad> i don't really have any idea of how much of that time is spent in shib/mellon versus on the wire
03:20:15 <lbragstad> i assume once things hit keystone-proper, it's pretty similar to a local user authentication flow
03:20:42 <knikolla> probably faster since it doesn't have to hash the password
03:21:05 <lbragstad> ++ true
03:22:19 <lbragstad> but that's really the only difference, keystone either deals with the password hash or pulls attributes exposed from shib and runs them through a mapping
03:28:11 <knikolla> lbragstad: i did get about 5 seconds
03:28:18 <knikolla> with websso through horizon
03:28:28 <knikolla> with horizon taking half of that time.
03:28:51 <lbragstad> interesting
03:29:16 <lbragstad> i assume part of that is spent in shib
03:30:15 <lbragstad> since it has to verify the assertion and expose the mapped attributes as ENVs
03:31:25 <lbragstad> this is totally dependent on the deployment/network, but it would be interesting to set up a test environment with osprofiler enabled and just see how much of that time is actually in keystone
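For completeness, turning on osprofiler in keystone for such a test is mostly a config switch (real [profiler] options from osprofiler's oslo integration; the hmac key is a placeholder):

    [profiler]
    enabled = true
    trace_sqlalchemy = true
    hmac_keys = SECRET_KEY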
03:31:38 <knikolla> this was with openid connect and mod_auth_openidc
03:31:46 <lbragstad> aha
03:31:56 <lbragstad> would you expect results using mod_shib to be a lot different?
03:32:42 <knikolla> lbragstad: don't know.
03:32:53 * lbragstad nods
03:33:33 <knikolla> would make for an interesting comparison though.
03:33:38 <lbragstad> this is good to know, if we do get general performance "patterns" then it would be good to add this kinda stuff to docs
03:33:44 <lbragstad> yeah - absolutely
03:34:11 <lbragstad> i would stay away from putting numbers in the results, but just abstract them to percentages if we get useful patterns
03:34:53 <knikolla> definitely.
03:37:08 <knikolla> logging off now, talk to you tomorrow! goodnight.
03:37:21 <lbragstad> thanks for the info knikolla - talk to you tomorrow!
06:41:05 <openstackgerrit> Merged openstack/oslo.policy master: add python 3.6 unit test job  https://review.openstack.org/589603
06:48:48 <openstackgerrit> Merged openstack/oslo.policy master: Move _capture_stdout to a common place  https://review.openstack.org/534440
07:52:33 <lbragstad> kmalloc: the migration to flask has got me thinking about whether we can remove APIs in favor of making RBAC more granular
07:52:45 <lbragstad> e.g. do we really need the domain-roles API?
07:54:06 * cmurphy waves at lbragstad
07:56:03 * lbragstad_ is having connectivity issues
07:56:22 <lbragstad_> cmurphy: did you happen to see my reply to your newsletter last week?
07:56:58 <cmurphy> lbragstad_: i did, i don't totally understand though
07:57:21 <cmurphy> don't we rev the major version anyway just for the cycle?
07:57:41 <lbragstad_> oh - i should have clarified the API version
07:57:45 <lbragstad_> (e.g. 3.10)
07:57:50 <cmurphy> oh gotcha
07:58:10 <cmurphy> but that's not really part of the REST API?
07:58:16 <lbragstad_> right
07:58:26 <lbragstad_> but something we guarantee now?
07:58:57 <lbragstad_> bah - i guess someone can delete them after install
07:59:13 <cmurphy> yeah - also if they're upgrading they're not guaranteed to run bootstrap again
07:59:32 <lbragstad_> which would be misleading - if we say "yep, 3.11 has these roles by default"
07:59:43 <lbragstad_> then someone deletes them because $reason
07:59:55 <lbragstad_> and someone queries the API and sees 3.11 in the response
08:00:31 <lbragstad_> yeah - then my email doesn't really make sense
08:00:52 <mbuil> cmurphy: I am a bit stuck with Shibboleth, when you have some time, it would be great if you could give me a hand :)
08:01:31 <cmurphy> mbuil: sure what's up
08:03:25 <mbuil> cmurphy: I think I had configured everything as stated in the documentation. However, when starting the shibd service or the apache2 service, I get: "Could not load a transcoding service". Logs from apache show nothing else and logs from shibd show nothing
08:05:08 <mbuil> cmurphy: do you have an example of how conf.d/shib.conf should look like?
08:05:38 <cmurphy> mbuil: hmm that error is totally unfamiliar to me but it might be from the language settings http://shibboleth.net/pipermail/users/2014-December/018662.html
08:05:50 <cmurphy> mbuil: i don't, i don't think i've ever had to change it from the default
08:31:48 <mbuil> cmurphy: it is apparently something related to systemd. When starting the processes manually, they start correctly. weird... anyway, for a prototype it should be enough ;)
08:32:02 <cmurphy> :)
08:34:41 <mbuil> cmurphy: regarding the Service Provider’s metadata file. Documentation says that: "If keystone is your Identity Provider you do not need to upload this file.". However, how would the SP and IdP trust each other then?
08:39:35 <cmurphy> mbuil: in the K2K case, the IdP doesn't need to trust the SP since it's never accepting requests from the SP
08:42:41 <mbuil> cmurphy: the SP will query the IdP to verify a user, right? Shouldn't the IdP do a security check on the SP to verify that it is not a rogue one? Maybe it is a stupid question :P
08:45:41 <cmurphy> mbuil: no, the SP never talks to the IdP in K2K (it's part of why K2K is so weird). The SP gets a signed SAML assertion from the IdP, and so the SP needs to trust the IdP, but all the data about the user needs to be contained in that SAML assertion so the SP should never have to ask the IdP for anything else
08:46:56 <mbuil> cmurphy: when does the SP get the signed SAML assertion from the IdP?
08:48:19 <cmurphy> mbuil: it goes through the user's client, they have to authenticate with the IdP and pass the response to the SP
08:49:49 <mbuil> cmurphy: ok, thanks. Hopefully all will be clearer when I see it working :)
08:50:00 <openstackgerrit> Merged openstack/keystone master: Migrate OS-EP-FILTER to flask native dispatching  https://review.openstack.org/589274
08:50:35 <cmurphy> mbuil: yeah i found just tcpdumping everywhere after i had it all set up made it start making sense
08:51:05 <openstackgerrit> Merged openstack/keystone master: Convert OS-SIMPLE-CERT to flask dispatching  https://review.openstack.org/589282
08:51:07 <openstackgerrit> Merged openstack/keystone master: Pass path into full_url and base_url  https://review.openstack.org/589546
09:39:12 <mbuil> cmurphy: I am at the mapping step: https://docs.openstack.org/keystone/latest/advanced-topics/federation/configure_federation.html#mapping. In the remote part, I should write a couple of keystone users from the IdP keystone, right? (in the example, instead of 'demo' and 'alt_demo'). Should I leave the type as 'openstack_user'?
09:42:26 <cmurphy> mbuil: you could make it even simpler by omitting the any_one_of part and just having [{"type": "openstack_user"}]
09:43:29 <cmurphy> for a demo it won't really matter but the mapping rules are supposed to give you pretty fine-grained control over who can log in and what they can do https://docs.openstack.org/keystone/latest/advanced-topics/federation/mapping_combinations.html
09:48:04 <mbuil> cmurphy: if I omit the 'any_one_of', all users that appear when typing "openstack user list" in keystone-IdP will be mapped to the "federated_users" group?
09:48:56 <cmurphy> mbuil: right
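Spelled out, the simplified mapping cmurphy describes would look something like this (shown as a Python literal of the JSON rules; the group id is a placeholder for the federated_users group):

    rules = [
        {
            "local": [
                {"user": {"name": "{0}"}},
                {"group": {"id": "FEDERATED_USERS_GROUP_ID"}},
            ],
            "remote": [
                # Matches any assertion carrying an openstack_user
                # attribute, i.e. every user from the keystone IdP.
                {"type": "openstack_user"},
            ],
        }
    ]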
13:04:02 <kmalloc> lbragstad: we prob could drop domain roles.
14:51:48 <orange_julius> cmurphy: re ldappool bug comment. Do you mean catching ldap.invalid_credential above ldap.ldaperror and just breaking the while loop there? I agree that would be cleaner.
14:52:38 <cmurphy> orange_julius: yes that's what i meant, if that works i'd prefer to do that
14:54:42 <orange_julius> Ok perfect. I don't see why it wouldn't work. I'll test that change when I get home tonight just to be sure then submit it up
14:55:11 <cmurphy> cool
15:09:33 <gagehugo> o/
15:48:51 <knikolla> o/
15:49:46 <kmalloc> cmurphy: ++
16:07:55 <openstackgerrit> Chason Chan proposed openstack/keystone master: Fix the incorrect file path of keystone apache2 configuration  https://review.openstack.org/586930
16:45:37 <openstackgerrit> Merged openstack/keystone master: Remove unused util function  https://review.openstack.org/587232
16:47:20 <gagehugo> kmalloc flask question, do we enforce "identity:get_region" at all here? https://review.openstack.org/#/c/589640/3/keystone/api/regions.py
16:49:35 <gagehugo> oh, nvm I think I see the issue
17:43:34 <kmalloc> gagehugo: yeah good catch, thnx
19:29:27 <knikolla> kmalloc: the os-federation flask patch is failing on the tempest federation tests
19:42:54 <kmalloc> knikolla: i figured it probably would.
19:43:00 <kmalloc> knikolla: what part is failing?
19:43:41 <knikolla> kmalloc: http://logs.openstack.org/82/591082/6/check/keystone-dsvm-functional-v3-only/0ab5d9b/logs/screen-keystone.txt.gz#_Aug_11_23_37_18_420588
19:43:44 <kmalloc> knikolla: and you're testing the latest patchset?
19:43:48 <knikolla> the /auth part
19:44:12 <kmalloc> hmm
19:44:18 <kmalloc> i didn't change the /auth paths
19:44:42 <knikolla> identity_providers/testshib/protocols/mapped/auth
19:44:45 <kmalloc> or is this the os-federation/auth ?
19:45:16 <kmalloc> ah
19:45:18 <kmalloc> that part
19:45:19 <kmalloc> hm.
19:45:51 <kmalloc> knikolla: we need to make that a voting job btw
19:46:00 <kmalloc> i ignore non-voting 100% of the time
19:46:14 <kmalloc> if it isn't voting, i assume it's superfluous
19:46:24 <kmalloc> probably incorrect on my part, but still.
19:46:38 <kmalloc> oh, i think this is the json-body check
19:47:25 <kmalloc> oh, huh.
19:47:32 <kmalloc> this is weird
19:49:14 <rodrigods> i think it is not voting because it uses testshib
19:49:25 <rodrigods> and we don't want to block our patches if an external service is down
19:49:31 <kmalloc> rodrigods: right. we need to fix that asap imo
19:49:36 <rodrigods> +1
19:49:41 <kmalloc> because we don't gate on non-voting tests
19:49:54 <kmalloc> aka, it really doesn't mean much, could break between check->gate and we wouldn't know
19:50:13 <rodrigods> yep, rn it's basically the task of the reviewer to check whether the failure is important or not
19:50:25 <rodrigods> i can try to make this a project for the next outreachy round
19:50:35 <kmalloc> ++
19:50:39 <kmalloc> that would be fantastic
19:50:50 <kmalloc> it shouldn't be too bad to do
19:50:56 <rodrigods> yep
19:51:07 <rodrigods> lots of ramp up for the intern, but the result is very useful
19:51:39 <cmurphy> do you have an idp implementation in mind?
19:51:56 <cmurphy> shibboleth and keycloak are not trivial to set up afaik
19:52:05 <knikolla> k2k tests have been up for review for a while. once we merge those i can shut down the testshib part.
19:52:17 <kmalloc> knikolla: ++ that was my plan
19:52:19 <rodrigods> but k2k is a different path, right
19:52:35 <knikolla> no, we're using saml ecp for testshib too
19:52:42 <rodrigods> cmurphy, i don't... but maybe it fits within the 3 month project
19:52:49 <kmalloc> hm. so lets see...
19:53:06 <kmalloc> it isn't finding mapped in identity
19:53:09 <rodrigods> but k2k ecp is not "100% ecp"
19:53:13 <rodrigods> it has some differences
19:53:20 <kmalloc> weird.
19:53:48 <knikolla> rodrigods: you mean in the assertion or in the flow?
19:53:53 <kmalloc> found the bug
19:53:58 <rodrigods> both?
19:54:27 <knikolla> since shibboleth can parse it, it must conform to the standard.
19:54:39 <knikolla> the flow, yes, is different. but is mostly client driven.
19:55:33 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-FEDERATION to flask native dispatching  https://review.openstack.org/591082
19:55:35 <rodrigods> yes, but i think we can break one or the other in different ways that aren't necessarily captured by a test
19:55:45 <kmalloc> knikolla: ^ that should fix that error
19:55:49 <rodrigods> so IMO we need both tests
19:56:18 <kmalloc> rodrigods: we need at least to cover federation voting -- long term both, but ANY amount voting (functional) would be good now
19:56:48 <rodrigods> yes, that's where i was going with my reasoning
19:57:12 <knikolla> kmalloc: ah, good catch.
19:57:49 <kmalloc> ugh i need to wait until tomorrow to install a new graphics card in this workstation so i can do looking-glass based VMs for development. i don't want to install all the dependencies to run tox/etc on the base OS [even as a desktop]
19:58:23 <kmalloc> wonder if i can get the tox-in-docker thing to work
19:58:32 <kmalloc> would be awesome.
20:00:50 <knikolla> pci-passthrough for desktop vms, interesting.
20:02:32 <kmalloc> knikolla: yes, it's awesome, i just ordered a SR-IOV compatible gpu
20:03:49 <knikolla> kmalloc: i went an easier route. aliased tox to a script that rsyncs to a vm and runs tox through ssh.
20:04:25 <kmalloc> knikolla: it's more about my IDE and not wanting to have to install all my dependencies
20:05:57 <knikolla> true
20:06:01 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert regions API to flask native dispatching  https://review.openstack.org/589640
20:06:11 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert services api to flask native dispatching  https://review.openstack.org/589641
20:06:20 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert endpoints api to flask native dispatching  https://review.openstack.org/589642
20:06:26 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert Roles API to flask native dispatching  https://review.openstack.org/590494
20:06:36 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert role_inferences API to flask native dispatching  https://review.openstack.org/590502
20:06:41 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Add safety to the inferred target extraction during enforcement  https://review.openstack.org/591203
20:06:47 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert role_assignments API to flask native dispatching  https://review.openstack.org/590518
20:06:52 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert system (role) api to flask native dispatching  https://review.openstack.org/590588
20:06:58 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Move json_home "extension" rel functions  https://review.openstack.org/591025
20:07:05 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-FEDERATION to flask native dispatching  https://review.openstack.org/591082
20:07:11 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Refactor ProviderAPIs object to better design pattern  https://review.openstack.org/571955
20:07:19 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix RBACEnforcer get_member_from_driver mechanism  https://review.openstack.org/591146
20:07:28 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert groups API to flask native dispatching  https://review.openstack.org/591147
20:08:02 <kmalloc> knikolla: ^ rebased a chunk of the change until i fix OS-INHERIT to be safer
20:45:43 <kmalloc> knikolla: there is a keystone sec bug, i would like your eyes on
20:46:33 <knikolla> kmalloc: sure
22:03:38 <openstackgerrit> Merged openstack/oslo.limit master: Fix CI  https://review.openstack.org/586768
22:25:36 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: fix gate  https://review.openstack.org/591162
22:25:37 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: import zuul job settings from project-config  https://review.openstack.org/588697
22:25:38 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: add python 3.6 unit test job  https://review.openstack.org/589599
22:25:39 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: add lib-forward-testing-python3 test job  https://review.openstack.org/591185
00:26:45 <openstackgerrit> Nick Wilburn proposed openstack/ldappool master: fix ldappool bad password retry logic  https://review.openstack.org/591174
00:47:01 <lbragstad> kmalloc: fyi - i'm planning on picking up the /policies conversion to flask after this week (i'm unable to pull patches from gerrit)
01:05:34 <kmalloc> lbragstad: np
01:21:28 <openstackgerrit> Merged openstack/oslo.limit master: ADD i18n file  https://review.openstack.org/586759
02:01:55 <openstackgerrit> Merged openstack/keystone master: Allow wrap_member and wrap_collection to specify target  https://review.openstack.org/589288
02:12:30 <openstackgerrit> Merged openstack/keystone master: Convert regions API to flask native dispatching  https://review.openstack.org/589640
02:43:25 <openstackgerrit> Bi wei proposed openstack/keystone master: Fix a bug that issue token with project-scope gets error  https://review.openstack.org/587399
07:19:02 <cmurphy> lbragstad: you can't pull patches from gerrit? you might try switching your remotes to https https://docs.openstack.org/infra/manual/developers.html#accessing-gerrit-over-https
07:19:33 <lbragstad> oh - nice
07:20:22 <lbragstad> i'll try that - right now my port is getting blocked
07:21:21 <lbragstad> but i assume 443 to work
07:26:07 <lbragstad> i can't access it from the office - but i might be able to from the hotel
07:42:09 <mbuil> cmurphy: I am finally at "Testing it all out" part ==> https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#testing-it-all-out. When it creates the k2ksession, it passes a string 'mysp', what should I write there? I have an entityID which identifies the IdP but I don't remember having an id to identify the SP
07:43:48 <cmurphy> mbuil: that should be the name of the service provider entry you created on the identity provider in this step https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#create-a-service-provider-sp
07:48:32 <mbuil> cmurphy: Oh, I missed that step. That should be executed in the IdP part?
07:48:43 <cmurphy> mbuil: yes
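That step is a single command on the IdP, along these lines (URLs are placeholders for the SP endpoints discussed next):

    $ openstack service provider create mysp \
        --service-provider-url 'http://mysp.example.com/Shibboleth.sso/SAML2/ECP' \
        --auth-url 'http://mysp.example.com:5000/v3/OS-FEDERATION/identity_providers/myidp/protocols/saml2/auth'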
07:54:39 <mbuil> Regarding the sp_url, when creating the SP, I added in /etc/shibboleth/shibboleth2.xml the following ==> <ApplicationDefaults entityID="http://mysp.example.com/shibboleth">. However, the example sp_url is 'http://mysp.example.com/Shibboleth.sso/SAML2/ECP'. Is that ok?
07:55:26 <mbuil> cmurphy: I actually haven't written anywhere in the SP config 'http://mysp.example.com/Shibboleth.sso/SAML2/ECP'
07:59:35 <cmurphy> mbuil: the entityID is just an identifier string, it doesn't route to anything
08:00:05 <cmurphy> the /Shibboleth.sso/SAML2/ECP is provided by the shibboleth mod, you can query /Shibboleth.sso/Metadata or something like that to get all the endpoints it provides
08:00:20 <cmurphy> so yes you're all good
08:04:00 <mbuil> cmurphy: good. Let's try. Fingers crossed
08:04:59 <mbuil> cmurphy: one extra thing. From Keystone IdP I should be able to wget http://mysp.example.com:5000/v3/OS-FEDERATION/identity_providers/myidp/protocols/saml2/auth, right?
08:23:01 <cmurphy> mbuil: sort of, you'd have to POST the ECP assertion with the request, it would be easier to just let keystoneauth handle it (and openstackclient supports it now too)
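The keystoneauth flow being suggested looks roughly like this (a sketch with placeholder hosts and credentials; 'mysp' must match the service provider entry created on the IdP):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneauth1.identity.v3 import k2k

    # 1. Authenticate normally against the keystone IdP.
    idp_auth = v3.Password(
        auth_url='http://keystone.idp.example.com:5000/v3',
        username='demo',
        password='secret',
        user_domain_id='default',
        project_name='demo',
        project_domain_id='default',
    )
    idp_session = session.Session(auth=idp_auth)

    # 2. Let keystoneauth fetch the ECP assertion from the IdP and POST
    #    it to the SP's /Shibboleth.sso/SAML2/ECP endpoint for us.
    sp_auth = k2k.Keystone2Keystone(idp_session.auth, 'mysp',
                                    domain_id='default')
    sp_session = session.Session(auth=sp_auth)
    print(sp_session.get_token())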
08:30:42 <openstackgerrit> Colleen Murphy proposed openstack/keystone master: Use osc in k2k example  https://review.openstack.org/591587
08:30:45 <cmurphy> mbuil: ^
08:32:02 <mbuil> cmurphy: something must be wrong. This line returns None :( https://github.com/openstack/keystoneauth/blob/master/keystoneauth1/identity/v3/k2k.py#L166
08:32:12 <mbuil> cmurphy: let me check that patch
09:48:23 <mbuil> cmurphy: I need a bit of help debugging. It seems everything goes ok between "User Agent" and IdP. However, when "User Agent" accesses SP, I see in the logs of SP_Apache2:
09:48:24 <mbuil> 172.29.236.11 - - [14/Aug/2018:09:33:05 +0000] "POST /Shibboleth.sso/SAML2/ECP HTTP/1.1" 500 915 "-" "osc-lib/1.9.0 keystoneauth1/3.4.0 python-requests/2.18.4 CPython/2.7.13"
09:48:57 <mbuil> cmurphy: /var/log/apache2/keystone.log says ==> 2018-08-14 09:33:43.538256 Issuer must have TextContent.
09:49:23 <mbuil> cmurphy: any idea what it is referring to with TextContent?
09:58:15 <cmurphy> mbuil: it's saying that the assertion is invalid, it's referring to a field called Issuer and saying it's empty
09:58:57 <cmurphy> I'm not sure why it would be empty, that should be the [saml]/idp_entity_id set in keystone.conf on the IdP
09:59:24 <cmurphy> you might try regenerating the metadata on the IdP and restarting keystone/apache
09:59:34 <mbuil> cmurphy: aaah ok, thanks. I guess the problem seems to be in the IdP then
10:00:23 <mbuil> cmurphy: the shibboleth log in SP supports your guess:
10:00:30 <mbuil> 2018-08-14 09:33:43 INFO Shibboleth-TRANSACTION [3]: New session (ID: ) with (applicationId: default) for principal from (IdP: none) at (ClientAddress: 172.29.236.11) with (NameIdentifier: none) using (Protocol: urn:oasis:names:tc:SAML:2.0:protocol) from (AssertionID: )
11:17:30 <mbuil> cmurphy: I am again stuck but I feel I am close to the finish line! Now I get in /var/log/keystone/keystone.log: "Could not map any federated user properties to identity values. Check debug logs or the mapping used for additional details.". I added some logs to the code and I realized that this line never returns anything: https://github.com/openstack/keystone/blob/master/keystone/federation/utils.py#L776
11:18:30 <mbuil> cmurphy: it compares the rules, which in my case are: https://hastebin.com/itetericiy.py with the assertion which in my case is: https://hastebin.com/gulimuwiye.py
11:19:18 <mbuil> cmurphy: so, it searches for a key "openstack_user" in the assertion but there is nothing like that. Do you think the problem is that the assertion is wrong?
11:23:57 <cmurphy> mbuil: did you modify attribute-map.xml like in this step? https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#keystone-to-keystone
11:31:23 <mbuil> cmurphy: not exactly, I did this mapping ==> https://docs.openstack.org/keystone/latest/advanced-topics/federation/configure_federation.html#mapping
11:32:19 <mbuil> and this one https://docs.openstack.org/keystone/latest/advanced-topics/federation/shibboleth.html
11:32:55 <cmurphy> mbuil: you need to edit /etc/shibboleth/attribute-map.xml too, it's a quirk of the shibboleth SP that by default it won't pass through attributes it doesn't understand
11:35:09 <mbuil> cmurphy: I edited it and added https://hastebin.com/olutuhevaz.xml
11:36:52 <cmurphy> mbuil: ah you need to keep id="openstack_user" etc, the id parameter in that xml node doesn't refer to the actual ID of the user, it's an internal identifier that needs to be unique
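The attribute-map.xml entries in question are these (per the keystone k2k docs; the important bit is that the id= values stay as the openstack_* names the mapping rules reference):

    <Attribute name="openstack_user" id="openstack_user"/>
    <Attribute name="openstack_roles" id="openstack_roles"/>
    <Attribute name="openstack_project" id="openstack_project"/>
    <Attribute name="openstack_user_domain" id="openstack_user_domain"/>
    <Attribute name="openstack_project_domain" id="openstack_project_domain"/>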
11:37:32 <mbuil> cmurphy: aaaah ok, thanks!
11:39:52 <mbuil> cmurphy: look at this ==> https://hastebin.com/mutetusabe.rb, success??
11:40:09 <cmurphy> mbuil: looks like it!!!
11:40:54 <mbuil> ole!
12:35:04 <openstackgerrit> Merged openstack/keystone master: Fix a bug that issue token with project-scope gets error  https://review.openstack.org/587399
14:55:32 <knikolla> o/
14:57:42 <lbragstad> o/
15:01:29 <knikolla> schedule for berlin is live
15:49:49 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: fix gate  https://review.openstack.org/591162
15:49:50 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: import zuul job settings from project-config  https://review.openstack.org/588697
15:49:51 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: add python 3.6 unit test job  https://review.openstack.org/589599
15:49:52 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: add lib-forward-testing-python3 test job  https://review.openstack.org/591185
15:50:39 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: fix doc gate  https://review.openstack.org/591162
15:50:40 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: import zuul job settings from project-config  https://review.openstack.org/588697
15:50:41 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: add python 3.6 unit test job  https://review.openstack.org/589599
15:50:42 <openstackgerrit> Doug Hellmann proposed openstack/oslo.limit master: add lib-forward-testing-python3 test job  https://review.openstack.org/591185
15:59:19 <lbragstad> kmalloc: i fixed up a bunch of the failures in the policy conversion patch, i should be able to clean up the last couple bits and get a new version up earlier than i thought
16:09:59 <kmalloc> Cool
17:44:47 <kmalloc> knikolla: i think i have a fix for OS-FEDERATION now
17:44:50 <kmalloc> knikolla: almost there.
17:45:07 <kmalloc> knikolla: running local tests to make sure it is working at least in unit.
17:45:37 <knikolla> kmalloc: awesome!
17:46:51 <kmalloc> knikolla: also, running unit tests on a threadripper is nice. 32 threads of unit tests, <100s for full run
17:47:29 <knikolla> kmalloc: cool!
17:47:41 <knikolla> i usually run mine on a 16 vcpu vm
17:48:24 <kmalloc> notmorgan@tardis:~/Documents/openstack_dev/keystone$ docker-tox -epep8,py35 ->
17:48:42 <kmalloc> https://www.irccloud.com/pastebin/sKuIKFqp/
17:49:24 <knikolla> kmalloc: 94 seconds, yup. impressive!
17:49:45 <knikolla> i think best i've got is about 3 minutes.
17:49:52 <kmalloc> docker-tox is an alias to `docker run --rm -v `pwd`:/opt/src keystone-dev:16.04 tox`
17:50:56 <kmalloc> Docker File for keystone-dev (16.04) https://www.irccloud.com/pastebin/BtGV79Dq/Dockerfile
17:51:07 <kmalloc> knikolla: ^ if you want to use my dockerfile.
17:51:38 <kmalloc> that'll build a keystone dev docker image, assuming your PWD is one level outside of the keystone directory
17:51:51 <kmalloc> and it'll do the bindep work for you.
17:51:58 <kmalloc> so you get all the bindeps needed.
17:52:25 <kmalloc> it could even work for non-keystone things with an env for OS_PROJECT
17:52:38 <knikolla> kmalloc: i only have a lowly dual core laptop
17:52:45 <knikolla> until then i have this https://gist.github.com/knikolla/a921573ded94538796ee5ce1383eb1fb
17:53:01 <kmalloc> right, but this means i don't need to install all the deps
17:53:12 <kmalloc> and i could mod the dockerfile to use centos, fedora, etc
17:53:42 <kmalloc> i'd need to modify the explicit apt-get bit, i think i could do it 100% with bindep
17:54:31 <kmalloc> my system has basic IDE support, but i run all the code [inc dependencies] in docker, even for the IDE inspections.
17:54:32 <knikolla> true
17:54:41 <knikolla> interesting
17:54:46 <kmalloc> i also don't have py27 installed
17:54:50 <kmalloc> locally, just in docker
17:55:59 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-FEDERATION to flask native dispatching  https://review.openstack.org/591082
17:56:00 <knikolla> hmmm... i never thought of installing the dependencies in docker and running IDE inspection from that
17:56:10 <kmalloc> knikolla: it works explicitly with pycharm
17:56:25 <kmalloc> i think vscode will support "Remote interpreter" soon too
17:56:32 <kmalloc> and as soon as it does, i'll move to vscode
17:56:46 <kmalloc> I can never get atom.io to do what i want it to
17:56:51 <kmalloc> =/ i want to like it
17:56:54 <kmalloc> i really do.
17:57:17 <knikolla> i tried vs code, but there were some annoyances
17:57:23 <knikolla> pycharm works best for me
17:57:27 <knikolla> atom is sluggish
17:57:39 <kmalloc> but pycharm supporting remote environments is huge
17:57:48 <kmalloc> i spent yesterday setting it all up
18:01:07 <kmalloc> currently in my living room, coding on my 65" 4k TV ;)
17:58:28 <knikolla> sweet
17:59:03 <kmalloc> and my workstation is a TR4 (threadripper, 1950x) with 128GB of ram, running on mirrored NVMe (full encryption) drives, 2x WD RED [slow] rust, and boot/EFI on a usb-stick.
17:59:25 <kmalloc> encryption keys in the dTPM chip, so i can walk off with the USB stick and the machine is about as secure as you can get.
17:59:35 <knikolla> whoa, that's a beast.
17:59:57 <kmalloc> it has a 1080ti in it, and will have an AMD 9100wx tomorrow
18:00:21 <kmalloc> so VMs will get VT-D, sr-iov slice of a gpu.
18:00:43 <kmalloc> the machine also has a 10G (SFP+) intel nic in it, with a DAC to my switch
18:01:07 <kmalloc> and my new keyboard is a WASD cherry-mx clear 10keyless :)
18:01:20 <knikolla> overkill
18:01:24 <kmalloc> though i really want a speed silver keyboard for my gaming PC
18:01:54 <kmalloc> (which is, being built now), Watercooled, 8086K i7, 32GB of ram, GTX 1080ti
18:02:07 <kmalloc> on mirrored NVME drives.
18:02:17 <knikolla> your screen alone is as big as my entire apartment, lol.
18:02:33 <kmalloc> i have 3 (soon to be 4) monitors at my desk.
18:02:47 <kmalloc> going to be replacing my broken 4k monitor with dual ultra-wide monitors
18:02:52 <orange_julius> Yea well I am getting a desk this week so that I don't have to sit on the floor anymore. So there!
18:03:08 <kmalloc> orange_julius: ++ don't sit on the floor! it hurts after too long
18:03:14 <kmalloc> :)
18:03:32 <kmalloc> knikolla: and i think i got my RH issued X1C6 to work... it at least suspends to s0i3 now.
18:03:45 <kmalloc> so battery lasts longer than ... 3 hrs sleeping
18:03:57 <kmalloc> (5-8 days suspend now)
18:03:58 <knikolla> kmalloc: didn't they issue you a p52?
18:04:03 <kmalloc> nope.
18:04:05 <kmalloc> it was denied
18:04:17 <kmalloc> like out of hand, even though my manager approved it
18:04:26 <kmalloc> so i settled on an X1C6
18:04:37 <kmalloc> and it's "ok" but not great.
18:05:06 <knikolla> that's what i asked them for, but they gave me a used x270.
18:05:16 <kmalloc> next step is install my NUC for my openstack control plane, stand up a FreeIPA server, and get all my virtualization under management
18:05:28 <kmalloc> ick on the x270
18:05:52 <kmalloc> i was issued a useless t460 or whatever the last gen was
18:05:58 <kmalloc> with like 8GB of ram
18:06:03 <kmalloc> and 256GB hdd
18:06:13 <knikolla> hdd?? in 2018?
18:06:22 <kmalloc> it was m.2 SSD
18:06:26 <kmalloc> but not even nvme
18:06:35 <kmalloc> or maybe it was 2.5" ssd
18:07:00 <kmalloc> whatever, it was not usable. it's why i used my X1C4 for the last 2 years (even with it being a lemon of a laptop)
18:07:54 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix a translation of log  https://review.openstack.org/591164
18:08:12 <knikolla> kmalloc: how's the screen on the X1C6?
18:08:31 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Refactor ProviderAPIs object to better design pattern  https://review.openstack.org/571955
18:08:37 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-INHERIT API to flask native dispatching  https://review.openstack.org/591165
18:08:57 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix RBACEnforcer get_member_from_driver mechanism  https://review.openstack.org/591146
18:09:03 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert groups API to flask native dispatching  https://review.openstack.org/591147
18:09:12 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix a translation of log  https://review.openstack.org/591164
18:09:23 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-INHERIT API to flask native dispatching  https://review.openstack.org/591165
18:09:46 <kmalloc> knikolla: the one i have is trash, since it's 1080p
18:11:28 <knikolla> kmalloc: right. though 1440p isn't that big of a step up.
18:11:58 <kmalloc> but i'm used to 1440p and 4k monitors
18:12:08 <knikolla> do you run them with 2x scaling?
18:12:13 <kmalloc> so 1080p is pretty hard to drop down to.
18:12:21 <kmalloc> nope, 1x scaling
18:12:37 <knikolla> you must have impressive eyesight
18:12:50 <knikolla> i'm running my 12.5" 1080p screen with 1.5 font scaling.
18:12:56 <kmalloc> though i admit when i am on the TV, i'm almost 10' away, so i go to ~1.25x or so
18:13:22 <kmalloc> hm.
18:13:34 <kmalloc> sigh.
18:13:50 <kmalloc> this is ugly, going to have a ton more try/except in the OS-INHERIT build_enforcement_target.
18:14:13 <kmalloc> maybe i should just explicitly raise out Forbidden on the raise instead of 404*
18:14:36 <kmalloc> knikolla: this is dangerous
18:14:41 <kmalloc> https://www.irccloud.com/pastebin/JE4eFJMG/
18:14:52 <kmalloc> should i log the 404s and raise up 403s?
18:15:04 <kmalloc> since this is part of the enforcement line.
18:18:13 <knikolla> kmalloc: hmmm... i think yes
18:18:34 <kmalloc> the other option is to just leave the target/enforcement empty
18:18:49 <kmalloc> which means it's on the enforcement_str to handle if it is allowed or not.
18:18:58 <kmalloc> which is probably most correct.
18:19:35 <kmalloc> since an empty target['user'] dict means we can't enforce on it, so enforcement should behave as expected.
18:20:14 <knikolla> that wouldn't raise the correct 404 though, right?
18:20:30 <kmalloc> in both cases it should net a 403
18:20:50 <kmalloc> it just means the enforcement rule is responsible vs keystone saying "WHOA, NO USER! FORBIDDEN"
18:21:14 <knikolla> oh, i see what you're saying. yes.
18:21:35 <kmalloc> ok, going to LOG.INFO this and set the target empty
18:21:41 <kmalloc> and see what happens with testing
18:22:04 <knikolla> ++
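For context, a rough Python sketch of the approach just agreed on; the names here (user_api, get_user, the caught exception) are stand-ins, not keystone's actual enforcement code:

import logging

LOG = logging.getLogger(__name__)

def build_enforcement_target(user_api, user_id):
    # Hypothetical shape of the idea above: never 404 mid-enforcement.
    target = {}
    try:
        target['user'] = user_api.get_user(user_id)
    except Exception:  # stand-in for keystone's NotFound exception
        # An empty target means a check string referencing target['user']
        # simply fails to match, which nets a 403 from enforcement.
        LOG.info('Could not resolve user %s while building the '
                 'enforcement target', user_id)
    return target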
18:35:29 <kmalloc> ooh it's a jaypipes
18:35:31 <kmalloc> hi jaypipes
18:35:47 <jaypipes> kmalloc, cmurphy: heya. if I want to get a list of users within a project via the keystone v3 client, how would I do that? did the behaviour between the v2 and v3 client call for keystone_client.users.list(project_id) change?
18:36:13 <kmalloc> jaypipes: hm, i don't think it changed.
18:36:28 <kmalloc> but... let me check.
18:37:31 <kmalloc> jaypipes: wait, you're looking for what users have access/roles on a project?
18:37:38 <kmalloc> jaypipes: just to confirm not something else.
18:39:42 <jaypipes> kmalloc: users that have any role in the supplied project, yes.
18:39:53 <jaypipes> kmalloc: I have a report that this behaviour changed from v2 to v3.
18:39:58 <jaypipes> kmalloc: and it was surprising to me.
18:40:33 <jaypipes> kmalloc: the report is stating that v3 doesn't filter any more. it just returns all users in the entire keystone database, regardless of what gets passed to list(project_id)
18:40:39 <kmalloc> right.
18:40:56 <kmalloc> i'm looking at the code, and it looks like project/default_project is a filter on the user's default project
18:41:03 <kmalloc> instead of what you want.
18:41:14 <kmalloc> i think you want to hit role_assignments, let me see how that works really quickly
18:41:22 <jaypipes> kmalloc: I'm more interested in whether this behaviour *changed* from v2 to v3?
18:41:44 <kmalloc> i think the behavior did change. but the report was probably giving incorrect information in v2
18:41:57 <kmalloc> the change is we don't filter on default_project anymore, in either case
18:42:07 <kmalloc> which is what the argument was doing in both cases.
18:43:02 <kmalloc> what you're looking for now, is role_assignments() https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/v3/role_assignments.py#L64 filtering on project
18:43:16 <kmalloc> and that, while hitting v3 API, would return v2 assignments as well
18:43:34 <kmalloc> (since v2 assignments == v3 assignments with domain(default))
18:44:29 <jaypipes> ack
18:44:40 <kmalloc> ah, i am wrong, v2 did filter on tenant_id like role_assignments does
18:44:53 <kmalloc> v3 filtered on default_project id (terrible ux)
18:45:12 <kmalloc> so, behavior in ksc changed a lot between 2 and 3
18:45:22 <kmalloc> but i think that is because the underlying APIs didn't work even remotely the same
18:45:47 <kmalloc> and we have role_assignments which expands out implied_roles (if asked), inherited roles, etc.
18:46:35 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-INHERIT API to flask native dispatching  https://review.openstack.org/591165
18:46:51 <kmalloc> knikolla: ^ lets see how that ends up. I think that is the most correct option.
18:47:00 <kmalloc> knikolla: note the HUGE comment in the function.
18:47:07 <kmalloc> i need to replicate that in the groups version too
18:47:30 <jaypipes> kmalloc: ack, thx for the information. appreciated.
18:47:57 <jaypipes> kmalloc: to sum up, we should be using the role assignments functionality for listing users, instead of... well, listing users. :)
18:48:48 <kmalloc> jaypipes: if you want role assignment information. NOTE that API is very punitive. it has to do a ton of work behind the scenes to map users, roles, projects, domains, and expansions
18:48:58 <kmalloc> jaypipes: so i wouldn't run it in a tight loop or anything.
18:49:07 <kmalloc> tl;dr hits the DB a lot
18:49:18 <kmalloc> and may be very slow if you have a ton of users.
18:49:52 <jaypipes> kmalloc: k, good to know.
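A minimal sketch of the v3 pattern kmalloc describes, using python-keystoneclient's role_assignments manager; the auth URL, credentials, and project id are illustrative:

from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                   username='admin', password='secret', project_name='admin',
                   user_domain_id='default', project_domain_id='default')
keystone = client.Client(session=session.Session(auth=auth))

# Server-side filter: only assignments scoped to this project. 'effective'
# expands group and inherited assignments into concrete user assignments.
assignments = keystone.role_assignments.list(project='PROJECT_ID',
                                             effective=True)
user_ids = {a.user['id'] for a in assignments if hasattr(a, 'user')}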
18:50:09 <kmalloc> knikolla: is it wrong i want to buy a tv-sized screen for my office now :P
18:50:15 <jaypipes> kmalloc: if I just want to get a list of users in a project, why is the UX so different now?
18:50:31 <jaypipes> kmalloc: not trying to bitch... just this kind of hit us HARD :)
18:50:45 <kmalloc> jaypipes: because users are owned by potentially many domains
18:51:14 <jaypipes> kmalloc: you can imagine automation scripts that were processing a dozen user accounts in a loop now processing 6K+ users in each loop over a tenant. blew up the entire service when the v3 client patches were used.
18:51:21 <kmalloc> oh totally. i get it
18:51:45 <kmalloc> the semantics of the scope of where users are and the types of roles are more expansive in v3
18:51:53 <kmalloc> inherited roles notably, and implied roles.
18:52:20 <kmalloc> and that users live in more places (ldap, sql, etc) there is potential you have 5 sources of users
18:52:40 <jaypipes> kmalloc: for some reason, I thought domains were no longer a thing... is that not the case?
18:52:54 <kmalloc> domains are a specialized container that is just a project behind the scenes
18:53:10 <jaypipes> oh, ok. so just the implementation of domains was changed?
18:53:13 <kmalloc> that was for maintenance clarity, so we didn't need all sorts of extra magic to handle a domain role assignment
18:53:14 <jaypipes> concept is still around?
18:53:18 <kmalloc> yep
18:53:21 <jaypipes> gotcha.
18:53:39 <kmalloc> i would love to ditch domains... but let's just say API contract hell and v4.
18:53:48 <kmalloc> so, we doubled down, but made domains much easier to work with
18:54:03 <kmalloc> domains are just projects with a bit flipped and can be referenced via either API
18:54:16 <jaypipes> sure. there seem to have been some casualties of that war, though ;P
18:54:29 <kmalloc> yup =/
18:54:37 <jaypipes> :) no worries mate, shit happens.
18:54:54 <kmalloc> functionally we didn't change v3's semantics, we made it much easier to be consistent internally though
18:55:11 <kmalloc> but v3 has always been very different (unfortunately/fortunately) from v2
18:55:15 <jaypipes> kmalloc: I might touch back with you later to validate some code I'll put together to get a list of users within a tenant the v3 way (efficiently that is)
18:55:22 <kmalloc> sure thing.
18:55:27 <jaypipes> appreciated :)
18:55:37 <kmalloc> role_assignments, fwiw, is on a short list of "how can we make this better"
18:55:59 <kmalloc> that i want to tackle. but i'm amidst a giant refactor to drop webob on the cutting room floor :P
18:56:07 <jaypipes> hehe :)
18:56:15 <kmalloc> and i'm down to ~5 major apis.
18:56:41 <kmalloc> but it's been a beast. Flask makes this all MUCH better... but seeing it has been hard to keep in focus
18:58:33 <orange_julius> Do you mind me asking why flask makes this better?
19:00:11 <kmalloc> orange_julius: it's allowing us to clean up a lot of bits, mostly around enforcement in a much cleaner way; centralized access to request data, so we don't need to pass request objects around
19:00:40 <kmalloc> orange_julius: and flask-RESTful takes a bunch of load off us to implement rendering of the response.
19:00:44 <kmalloc> in a clean json-form.
19:01:26 <kmalloc> and we can handle things directly in the request instead of needing to process it as a middleware stack
19:01:32 <kmalloc> e.g. "is this a json request body"
19:02:22 <kmalloc> so we collapse the stack of things that have to process the request significantly. and we'll be able to more easily support things such as etags
19:02:54 <kmalloc> so the request jaypipes is going to make for a report could be highly cacheable on the client side; with webob, adding such features is a real beast.
19:03:47 <kmalloc> the final benefit is.. flask is something more folks understand than Routes and our custom wsgi bits
19:04:04 <kmalloc> as we move forward we'll have less and less custom wrappers for flask and more and more basic flask/flask-restful code
19:04:05 <kmalloc> :)
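To make that concrete, a toy flask-restful resource (generic names, not keystone's actual classes) showing the two points above: request data available anywhere via flask.request, and dict-to-JSON rendering handled by the framework:

import flask
import flask_restful

app = flask.Flask(__name__)
api = flask_restful.Api(app)

class WidgetResource(flask_restful.Resource):
    def get(self, widget_id):
        # flask.request is available anywhere in the request context,
        # so no request object needs to be passed down the call stack.
        verbose = flask.request.args.get('verbose', None) is not None
        # flask-restful renders the returned dict as JSON for us.
        return {'widget': {'id': widget_id, 'verbose': verbose}}

api.add_resource(WidgetResource, '/widgets/<string:widget_id>')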
19:04:37 <orange_julius> Ah nice. Thanks! I wasn't aware there were a lot of custom bits. I haven't poked my head into the Keystone code too much, and truthfully wouldn't know where to start. I am always curious though =D
19:05:10 <kmalloc> we had nearly 100% custom wsgi stack in our code
19:05:20 <kmalloc> all webob/pastedeploy
19:05:28 <kmalloc> and Python Routes
19:05:32 <kmalloc> it was hard to work with.
19:05:39 <kmalloc> and our policy enforcement suffered.
19:09:21 <knikolla> kmalloc: what do you do with all that screen real estate?
22:32:09 <kmalloc> knikolla: watch movies, and surf the web, what else? ;)
23:49:29 <kmalloc> knikolla: if you have some time, eyes on the OS-FEDERATION bits would be useful
23:49:48 <kmalloc> knikolla: i am not sure why we're getting Role Not Found.
00:00:09 <kmalloc> knikolla: nvm, found it
00:28:33 <kmalloc> knikolla: i need to check the header format from the webob request object
00:28:41 <kmalloc> knikolla: so i can properly extend the flask header(s)
01:01:55 <openstackgerrit> Merged openstack/keystone master: Convert services api to flask native dispatching  https://review.openstack.org/589641
01:08:48 <lbragstad> kmalloc: qq
01:08:58 <kmalloc> lbragstad: qa
01:09:16 <lbragstad> the json home document for policies is missing the resource relations for /policies
01:09:43 <lbragstad> i must be doing something wrong in https://review.openstack.org/#/c/589950/2/keystone/api/policy.py@39
01:09:45 <kmalloc> in your implementation or in general?
01:10:01 <lbragstad> the OS-ENDPOINT-POLICY relations look to be ok
01:10:19 <kmalloc> looking
01:10:48 <lbragstad> for example there is no https://docs.openstack.org/api/openstack-identity/3/rel/policies key in the actual response
01:11:00 <lbragstad> (but there is in the reference)
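One quick way to check which relations keystone is actually serving is to request the JSON-Home document directly; a sketch (endpoint URL illustrative):

import requests

# GET /v3 with this Accept header returns the JSON-Home document,
# keyed by relation URL under 'resources'.
resp = requests.get('http://keystone.example.com:5000/v3',
                    headers={'Accept': 'application/json-home'})
rels = resp.json()['resources']
print('https://docs.openstack.org/api/openstack-identity/3/rel/policies'
      in rels)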
01:11:14 <kmalloc> hm.
01:11:34 <lbragstad> so when i ported policies to flask, i must have missed something that builds that for the /v3/policies key
01:11:50 <kmalloc> that is the current patch?
01:12:10 <lbragstad> no - but the handful of changes that i have locally aren't related to that
01:12:18 <lbragstad> or to json home
01:12:31 <kmalloc> ah, ok well first: AssertionError: View function mapping is overwriting an existing endpoint function: policy.policyresource
01:12:31 <kmalloc> that error is because you can't use a resource twice
01:12:46 <lbragstad> note that https://review.openstack.org/#/c/589950/2/keystone/api/policy.py@31 appears to be working fine
01:12:51 <lbragstad> yeah - i already fixed that bit
01:13:03 <kmalloc> sec. let me do one thing
01:13:42 <kmalloc> hm. weird.
01:14:00 <kmalloc> because Adding standard routes to API policy for `MethodViewType` (API Prefix: ) [/policies, /policies/<string:policy_id>]
01:14:00 <kmalloc> should add json_home bits
01:14:52 <lbragstad> i had to add _api_url_prefix = '/policies' to fix the policy.policyresource error you just pasted
01:15:09 <lbragstad> in the PolicyAPI class
01:15:31 <kmalloc> you're going to have an issue then, since /policies will prefix and you'll have /policies/policies
01:15:49 <lbragstad> i removed the /policies/ from the other paths
01:16:25 <kmalloc> but we automatically construct for Resources
01:16:37 <kmalloc> which includes the collection_key
01:16:45 <kmalloc> you might want to add the changes from https://review.openstack.org/#/c/591082/9/keystone/tests/unit/test_versions.py
01:16:57 <kmalloc> they make debugging json_home issues a lot easier
01:17:37 <kmalloc> so, if you want to use a prefix, you'll need to do a resource_mapping for the main resource as well [totally fine]
01:17:40 <lbragstad> nice
01:17:43 <lbragstad> diffing that error is a pain
01:17:52 <lbragstad> let me try that quick
01:17:56 <kmalloc> yeah $%!@ diffing the errors.
01:18:18 <kmalloc> if you push the latest code up, i can cherry-pick it and look at it
01:18:46 <kmalloc> it's a little frustrating/fragile when porting, but pretty bomb proof once you get it down.
01:19:24 <kmalloc> anyway i need to go eat.
01:19:40 <lbragstad> ack
01:20:11 <kmalloc> will check back in in a bit. remember, you can have multiple api classes (look at federation)
01:20:13 <kmalloc> if it helps
01:21:05 <kmalloc> lbragstad: https://git.openstack.org/cgit/openstack/keystone/tree/keystone/api/os_federation.py?h=refs/changes/82/591082/9#n559
01:21:33 <lbragstad> ok
01:21:35 <lbragstad> thanks
01:21:39 <kmalloc> also note if you do that the _additional_parameters parts
01:21:45 <kmalloc> needed to construct json_home sanely
01:21:59 <kmalloc> if you want to use that you'll need to rebase on top of OS-FEDERATION
01:22:19 <kmalloc> some other fixes were rolled in (such as the regex/sub of <> to {})
01:22:30 <kmalloc> for the whole url, not just the post-prefix parts
01:23:02 <kmalloc> so 591082 is safe to rebase on, might make it easier for you too since you can just delete EP-Policy routers
01:23:10 <kmalloc> since by that point endpoints is already ported
03:41:24 <openstackgerrit> Merged openstack/keystone master: Convert endpoints api to flask native dispatching  https://review.openstack.org/589642
06:20:28 <openstackgerrit> wangqiang-bj proposed openstack/keystone master: modify json file relative path in 'application-credentials'  https://review.openstack.org/591909
06:35:04 <openstackgerrit> wangqiang-bj proposed openstack/keystone master: fix wrong relative path in 'application-credentials.inc'  https://review.openstack.org/591919
06:44:49 <openstackgerrit> wangqiang-bj proposed openstack/keystone master: fix wrong relative path in 'inherit.inc'  https://review.openstack.org/591922
06:49:11 <openstackgerrit> wangqiang-bj proposed openstack/keystone master: fix wrong relative path in 'os-pki.inc'  https://review.openstack.org/591924
06:55:47 <openstackgerrit> wangqiang-bj proposed openstack/keystone master: fix wrong relative path in 'inherit.inc'  https://review.openstack.org/591926
06:55:50 <openstackgerrit> Andriy Shevchenko proposed openstack/ldappool master: fix tox python3 overrides  https://review.openstack.org/591927
07:48:13 <openstackgerrit> Shuo Liu proposed openstack/keystoneauth master: add release notes to readme.rst  https://review.openstack.org/591943
07:51:41 <openstackgerrit> wangxiyuan proposed openstack/keystone master: Fix db model inconsistency for FederatedUser  https://review.openstack.org/566242
07:51:42 <openstackgerrit> wangxiyuan proposed openstack/keystone master: Enable Foreign keys for sql backend unit test  https://review.openstack.org/558029
07:51:42 <openstackgerrit> wangxiyuan proposed openstack/keystone master: Enable foreign keys for unit test  https://review.openstack.org/558193
07:51:43 <openstackgerrit> wangxiyuan proposed openstack/keystone master: ADD a test for idp and federated user cascade deleting  https://review.openstack.org/591946
07:52:14 <openstackgerrit> Shuo Liu proposed openstack/keystonemiddleware master: add releasenotes to readme.rst  https://review.openstack.org/591947
10:41:46 <openstackgerrit> Merged openstack/oslo.limit master: fix doc gate  https://review.openstack.org/591162
10:41:47 <openstackgerrit> Merged openstack/oslo.limit master: import zuul job settings from project-config  https://review.openstack.org/588697
10:45:34 <openstackgerrit> Merged openstack/oslo.limit master: add python 3.6 unit test job  https://review.openstack.org/589599
12:34:34 <knikolla> o/
13:16:56 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Convert policy API to flask  https://review.openstack.org/589950
14:31:24 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Convert policy API to flask  https://review.openstack.org/589950
14:44:34 <gagehugo> o/
18:21:13 <openstackgerrit> Michael Beaver proposed openstack/oslo.policy master: WIP Docs: Remove references to JSON format  https://review.openstack.org/592170
18:25:50 <_ix> I think I've gotten the most mileage out of asking questions in here. No good deed goes unpunished ;)
18:26:04 <_ix> Is there a specific channel for tempest questions?
18:28:15 <ayoung> _ix, qa used to be it
18:28:34 <ayoung> try /join #openstack-qa
18:29:30 <_ix> Thanks, ayoung!
18:30:14 <ayoung> _ix, tell them kmalloc sent you
19:28:47 <kmalloc> lol
19:53:54 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert policy API to flask  https://review.openstack.org/589950
20:01:39 <openstackgerrit> Merged openstack/ldappool master: fix ldappool bad password retry logic  https://review.openstack.org/591174
20:01:56 <Dano8080> Hi.  Does anyone know if there's a way to have keystone include the elapsed time in the log message for a request?  Or some other way to capture the elapsed time for a request?
20:53:12 <kmalloc> Dano8080: there are two ways, but they are chatty/can slow things down.
20:53:28 <kmalloc> Dano8080: there is the debug middleware (just per request data)
20:54:01 <kmalloc> And there is the trace log level (logs a TON of data about each manager call/time in the DB, etc)
20:54:09 <kmalloc> My guess is you want the debug middleware
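If neither built-in option fits, a generic WSGI middleware is another way to get per-request elapsed time; this is a sketch, not keystone's actual debug middleware:

import logging
import time

LOG = logging.getLogger(__name__)

class TimingMiddleware(object):
    """Log wall-clock time spent in each WSGI request."""

    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        start = time.time()
        try:
            return self.application(environ, start_response)
        finally:
            LOG.info('%s %s took %.3fs',
                     environ.get('REQUEST_METHOD'),
                     environ.get('PATH_INFO'),
                     time.time() - start)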
20:55:44 <openstackgerrit> Michael Beaver proposed openstack/oslo.policy master: Docs: Remove references to JSON format  https://review.openstack.org/592170
21:32:35 <Dano8080> kmalloc: Thanks.  I will investigate those methods
22:14:53 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-FEDERATION to flask native dispatching  https://review.openstack.org/591082
22:15:04 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Refactor ProviderAPIs object to better design pattern  https://review.openstack.org/571955
22:15:08 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix RBACEnforcer get_member_from_driver mechanism  https://review.openstack.org/591146
22:15:16 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert groups API to flask native dispatching  https://review.openstack.org/591147
22:15:23 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix a translation of log  https://review.openstack.org/591164
22:15:56 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-INHERIT API to flask native dispatching  https://review.openstack.org/591165
22:16:28 <kmalloc> knikolla: ^ ok i *think* i've solved all the issues with OS-FEDERATION port
22:16:47 <kmalloc> going to see what zuul has to say, but it should be ready now.
22:45:22 <kmalloc> knikolla: yep, passes functional test now. phew
23:07:57 <kmalloc> knikolla: further along, there is another os-federation issue cropping up, but we're good for reviews up until OS-FEDERATION patch and maybe one after
01:04:27 <openstackgerrit> lvxianguo proposed openstack/python-keystoneclient master: fix misspelling of 'default'  https://review.openstack.org/577368
01:41:47 <lbragstad> kmalloc: thoughts on https://review.openstack.org/#/c/589950/5 ?
04:29:25 <kmalloc> It is all deprecated
04:29:33 <kmalloc> We missed some
04:29:42 <kmalloc> lbragstad: ^cc
04:30:34 <kmalloc> Keystone is a bad distribution point for policy files, and those APIs' UX is too awkward for them to be useful.
04:31:29 <kmalloc> That said ep.policy is checked by tempest. I tried making it disabled by default a while ago.
05:10:55 <lbragstad> hmm
05:10:57 <lbragstad> ok
05:11:10 <lbragstad> so we should formally deprecate the OS-ENDPOINT-POLICY API?
05:11:22 <lbragstad> because according to the code it was just before i moved it to flask
08:16:29 <openstackgerrit> Merged openstack/python-keystoneclient master: fix misspelling of 'default'  https://review.openstack.org/577368
09:20:06 <openstackgerrit> Merged openstack/keystoneauth master: Update reno for stable/rocky  https://review.openstack.org/586083
09:23:19 <openstackgerrit> Merged openstack/keystonemiddleware master: Update reno for stable/rocky  https://review.openstack.org/586086
09:29:58 <mbuil> cmurphy, lbragstad: in the K2K deployment, when the SP receives the "Assertion", it contains the auth-url ==> http://mysp.example.com:5000/v3/OS-FEDERATION/identity_providers/myidp/protocols/saml2/auth, right? So I guess the SP fetches that when verifying the "ECP SAML Response" from the User Agent, right?
09:31:24 <mbuil> cmurphy, lbragstad: wait, that is a bit strange because the auth-url points to the SP itself... I am trying to understand how IdP and SP exchange the metadata to create the trust
09:34:07 <cmurphy> mbuil: in the k2k case trust only goes in one direction, only the SP needs to trust the IdP, and with shibboleth that's done by setting the MetadataProvider in /etc/shibboleth/shibboleth2.xml to either a remote URL or a local file where it can find the IdP's metadata which contains its public key
09:39:57 <mbuil> cmurphy: aaaah I see. And Shibboleth at the SP side fetches the metadata when it gets the assertion from the User Agent?
09:41:21 <cmurphy> mbuil: I think it fetches it when the shibd daemon is started but not 100% sure
09:42:44 <mbuil> cmurphy: thanks
09:47:32 <mbuil> cmurphy: if I want to list the images that I have in my sp, this command should work right ==> openstack --os-service-provider mysp --os-remote-project-name federated_project --os-remote-project-domain-name federated_domain image list
09:53:57 <cmurphy> mbuil: i think so
09:54:45 <mbuil> cmurphy: I am getting an Unauthorized (HTTP 401), even though token issue works
09:55:01 <cmurphy> mbuil: hmm :/
09:58:01 <cmurphy> mbuil: is it coming from the IdP or the SP?
09:58:16 <cmurphy> mbuil: you can turn on insecure_debug in keystone.conf in both keystones and see if it gives you a reason
09:59:26 <mbuil> I am looking at /var/log/keystone.log in the SP but I see nothing wrong. I can see the federation stuff going on and I see exactly the same when I do "token issue" instead
09:59:41 <mbuil> cmurphy: ok, let me try that
10:01:15 <mbuil> cmurphy: I see a message in the IdP keystone.log saying: "This is not a recognized Fernet token"
10:02:00 <mbuil> so probably the token is created and fetched but the IdP does not recognize it (even though the request is targeting the sp)
10:08:17 <cmurphy> mbuil: yeah something is pointing it back to the wrong keystone, not sure why though :/
10:14:10 <mbuil> cmurphy: in the IdP I have the service provider 'mysp' registered correctly pointing to the correct Auth URL
10:15:08 <mbuil> cmurphy: there must be somewhere in the code a "switch" that tries the remote endpoint instead of the "local" one, right? Any idea where that code is?
10:25:48 <cmurphy> mbuil: it's either in the client or in keystonemiddleware
10:26:54 <cmurphy> mbuil: I think it's a client issue actually, since you have OS_AUTH_URL pointing to the IdP
10:27:19 <cmurphy> mbuil: I would try retrieving the token and then just using the token directly with OS_TOKEN and OS_URL pointing to the SP instead of the IdP
10:28:24 <mbuil> cmurphy: right. That's what is happening. Can I force that behaviour with just modifying stuff in my openrc?
10:29:59 <mbuil> you mean: 1 - fetch a token with "token issue". 2 - export OS_TOKEN=token_id 3 - export OS_URL=AUTH_SP ? Should I remove OS_AUTH_URL from my env?
10:31:04 <cmurphy> mbuil: yes to all, in fact clean all OS_* variables from your env after step 1 before step 2 just to be safe
10:31:42 <mbuil> cmurphy ok!
10:50:07 <mbuil> cmurphy: for OS_URL, I wrote exactly what I had in OS_AUTH_URL but changing the ip (http://10.10.100.29:5000/v3/). I guess this is wrong because when doing 'image list' it tries to fetch something from the URL ==> http://10.10.100.29:5000/v3/v2/images
10:54:45 <cmurphy> mbuil: oh it's trying to use it as the glance URL, I might be wrong about how that command works
10:55:34 <cmurphy> let me try to reproduce
11:09:58 <mbuil> cmurphy: reading the code I can see that what I get makes sense. When using tokens, auth_ref is None because of this ==> https://github.com/openstack/keystoneauth/blob/master/keystoneauth1/token_endpoint.py#L69-L76
11:10:20 <mbuil> cmurphy: as a consequence, the endpoint is taken from OS_URL ==> https://github.com/openstack/osc-lib/blob/stable/queens/osc_lib/clientmanager.py#L294-L297
11:12:43 <ravirjn> Hi Everyone, I am unable to allocate floating IP using admin user, it seems some permission issue with keystone... can anyone please help me.. here is log http://paste.openstack.org/show/728181/
11:28:46 <cmurphy> mbuil: still not sure exactly what the issue is but I think this is a better test and works for me: `curl -H "x-auth-token: $OS_TOKEN" <glance endpoint>/images`
11:32:24 <cmurphy> ravirjn: that doesn't look like anything to do with keystone to me, that looks like an issue between horizon and neutron
11:41:18 <cmurphy> mbuil: hmm well now going through the client with OS_URL=<glance endpoint> works for me, not sure what i changed
11:48:31 <cmurphy> mbuil: I was using a token obtained from the IdP not the SP, maybe that's what you were doing too
12:00:31 <mbuil> cmurphy: using OS_URL=<glance endpoint> works for me too. Therefore, when a remote service must be used, the OS_URL needs to be changed for each service
12:01:14 <mbuil> I wonder why it does not use the catalog when doing the token authentication...
12:01:55 <cmurphy> i guess it's just cutting out that round trip to keystone
12:06:24 <mbuil> cmurphy: the code says "# token plugin does not have an auth ref, because it's a "static" authentication using a pre-existing token.... not sure what static means here :/
12:09:29 <cmurphy> mbuil: i guess in a dynamic authentication you would go to keystone first and exchange your credentials for a token which in the process also gives you a catalog, maybe by "static" they mean you don't get the chance to refresh your catalog since you're not going to keystone first
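For reference, the same "static" flow via keystoneauth's token_endpoint plugin; the token and endpoint values are illustrative:

from keystoneauth1 import session
from keystoneauth1 import token_endpoint

# A pre-issued token plus a fixed endpoint: no catalog, no keystone round trip.
auth = token_endpoint.Token(endpoint='http://10.10.100.29:9292',
                            token='gAAAAAB...')
sess = session.Session(auth=auth)
# The plugin ignores the filter and always returns the fixed endpoint, so
# this GET goes to <endpoint>/v2/images with the X-Auth-Token header set.
resp = sess.get('/v2/images', endpoint_filter={'service_type': 'image'})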
12:11:55 <mbuil> cmurphy: ok. Time to switch to other things. Thanks a lot for the help! Tomorrow more :)
12:13:50 <cmurphy> mbuil: cool :)
14:22:21 <ayoung> kmalloc, I realize that Git blame is going to make the entire Keystone code base look like it was written by you.
14:26:12 <cmurphy> i already plan on blaming kmalloc for all keystone bugs
14:27:42 <knikolla> o/
14:27:49 <knikolla> lol
14:30:31 <kmalloc> cmurphy: <3
14:30:49 <kmalloc> ayoung: yep, I knew that going into the huge refactor.
14:31:49 <ayoung> kmalloc, that is why I wanted to move the files first, or in a stand alone git commit
14:32:05 <ayoung> Its too late now
14:32:32 <ayoung> but I would prefer it if we could maintain history, especially on some of the more chaotic files, like auth and such
14:33:03 <ayoung> I usually just want to look to see if I was at fault for a certain commit, like the LDAP pool locking people out.
14:33:11 <kmalloc> Except, history would still show me mostly doing all the work. Since the files change too much
14:33:27 <kmalloc> You're going to have to run back a few commits in either case :(
14:36:14 <openstackgerrit> Colleen Murphy proposed openstack/keystone master: Do not log token string  https://review.openstack.org/592505
14:39:16 <kmalloc> ayoung: thankfully, most all of the code here is controller code, most of the logic is all further down... With exception of auth and discovery.
14:40:09 <ayoung> kmalloc, yeah, for most of the files it should be OK.  It was the trust code that was most controller-embedded.
14:40:26 <ayoung> auth is also a bit of spaghetti
14:41:04 <kmalloc> The next move will be just file moving (moving bits from top level to keystone.subsystem)
14:41:17 <kmalloc> So it will be keystone.subsystem.trusts.
14:42:03 <kmalloc> Long term the goal will be keystone top level is common code, API is controller/view code,and keystone.subsystem will be manager /driver bits.
14:42:28 <kmalloc> But that is really just moving files, no additional refactoring.
14:47:58 <ayoung> kmalloc, you knikolla and I need to sit down and plan the Edge talk.
14:48:38 <knikolla> ayoung: are you coming to devconf?
14:49:06 <ayoung> knikolla, I was not planning on it
14:49:57 <ayoung> knikolla, I probably should, though
15:46:16 <kmalloc> ayoung: i can chat about the talk today
15:48:16 <ayoung> kmalloc, knikolla 2-3PM Eastern today OK to discuss?
15:48:44 <knikolla> ayoung: works for me
15:56:42 <kmalloc> Sure
16:44:19 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert role_assignments API to flask native dispatching  https://review.openstack.org/590518
16:44:27 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert system (role) api to flask native dispatching  https://review.openstack.org/590588
16:44:33 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Move json_home "extension" rel functions  https://review.openstack.org/591025
16:45:13 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-FEDERATION to flask native dispatching  https://review.openstack.org/591082
16:45:41 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Refactor ProviderAPIs object to better design pattern  https://review.openstack.org/571955
16:45:45 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Refactor ProviderAPIs object to better design pattern  https://review.openstack.org/571955
16:45:51 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix RBACEnforcer get_member_from_driver mechanism  https://review.openstack.org/591146
16:45:56 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert groups API to flask native dispatching  https://review.openstack.org/591147
16:46:00 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix a translation of log  https://review.openstack.org/591164
16:46:05 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-INHERIT API to flask native dispatching  https://review.openstack.org/591165
16:46:26 <kmalloc> ayoung: ^ the queryparam "is true" part was wrong, this is a fix of that and a rebase of the stack
16:46:50 <ayoung> kmalloc, I thought that was fairly standard for boolean values
16:48:35 <kmalloc> it is
16:48:39 <kmalloc> i had the code wrong
16:48:46 <kmalloc> i had url?param = false
16:49:03 <kmalloc> the only cases that are false are: url
16:49:05 <kmalloc> and url?param=0
16:49:10 <kmalloc> or should be*
16:49:25 <kmalloc> so existence of the param, without value == true
16:49:27 <kmalloc> (now)
16:49:55 <kmalloc> no change in behavior from original code. it was a mistake I made when porting it.
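The rule in code, as a tiny flask sketch (hypothetical helper; it must run inside a flask request context):

import flask

def param_is_true(name):
    # flask returns None when the param is absent and '' when it appears
    # bare (?param), so presence without a value counts as true.
    value = flask.request.args.get(name, None)
    if value is None:
        return False          # param not supplied at all
    return value != '0'       # ?param and ?param=1 are true, ?param=0 is false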
16:59:19 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Add safety to the inferred target extraction during enforcement  https://review.openstack.org/591203
17:00:35 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert role_assignments API to flask native dispatching  https://review.openstack.org/590518
17:00:44 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert system (role) api to flask native dispatching  https://review.openstack.org/590588
17:00:50 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Move json_home "extension" rel functions  https://review.openstack.org/591025
17:00:56 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-FEDERATION to flask native dispatching  https://review.openstack.org/591082
17:01:02 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Refactor ProviderAPIs object to better design pattern  https://review.openstack.org/571955
17:02:01 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Fix RBACEnforcer get_member_from_driver mechanism  https://review.openstack.org/591146
17:13:38 <gyee> kmalloc, Keystone doesn't use these functionality right? https://github.com/flask-restful/flask-restful/blob/master/flask_restful/utils/crypto.py
17:15:33 <kmalloc> gyee: not explicitly; implicitly there might be cases where it would.
17:16:18 <gyee> I am not seeing them being used anywhere.
17:16:42 <gyee> https://github.com/flask-restful/flask-restful/blob/master/setup.py#L9
17:17:02 <gyee> I am trying to resolve a package dependency on pycrypto
17:28:38 <hrybacki> cmurphy: o/ -- there isn't a config option to enable app creds right?
17:30:03 <cmurphy> hrybacki: not to enable the API but you have to have application_credential in [auth]/methods (which is there by default) in order to auth with them
17:30:16 <hrybacki> no issue creating them, but ran into issues authenticating with one. Watched (and followed along in Horizon) your Vancouver talk. Hit the same issue with the clouds.yaml so figured I'd raise it with you. "Error authenticating with application credential: Application credentials cannot request a scope. (HTTP 401)"
17:30:35 <hrybacki> lemme look there and make sure we didn't do anything weird in osp bits
17:31:54 <hrybacki> nope, definitely using the defaults. cmurphy have you seen that response before? Perhaps I mucked something else up ^^
17:32:44 <openstackgerrit> Colleen Murphy proposed openstack/keystone master: Do not log token string  https://review.openstack.org/592505
17:33:45 <cmurphy> hrybacki: you shouldn't have a scope object in the request
17:34:06 <cmurphy> it takes the scope from the application credential itself
17:34:10 <cmurphy> example https://developer.openstack.org/api-ref/identity/v3/index.html#id94
17:35:03 <hrybacki> cmurphy: weird -- I'm simply invoking a `openstack token issue`
17:35:20 <hrybacki> I'll dig into it deeper and get back. Thanks for the nudge :)
17:36:23 <cmurphy> hrybacki: are you setting a project in your openrc/clouds.yaml?
17:37:53 <hrybacki> cmurphy: nope -- https://paste.fedoraproject.org/paste/cXr4WXccF9V3zJXOgs6TJA
17:38:27 <cmurphy> hmm
17:40:10 <hrybacki> yeah, I'm scratching my head over here haha
17:40:14 * hrybacki fetches coffeeeeee
17:42:14 <knikolla> can you do an `openstack token issue --debug`?
17:42:34 <cmurphy> good idea
17:57:20 <kmalloc> gyee: i don't see where pycrypto is coming from
17:57:53 <kmalloc> gyee: my tox environment, fwiw, doesn't have pycrypto
17:58:42 <openstackgerrit> Merged openstack/keystone master: Convert Roles API to flask native dispatching  https://review.openstack.org/590494
17:58:43 <openstackgerrit> Merged openstack/keystone master: Convert role_inferences API to flask native dispatching  https://review.openstack.org/590502
18:01:25 <gyee> kmalloc, it's the python-Flask-Restful package in the openSUSE build server. For some reason it includes pycrypto as a dependency. I am fixing it now. Thanks for verifying.
18:02:11 <hrybacki> I can (just getting sucked back into meetings :()
18:03:17 <kmalloc> np
18:06:47 <hrybacki> I figured it out kmalloc cmurphy.. polluted environment variables from an early sourcing of an rc file -_-
18:07:04 <hrybacki> the project scope was enough of a tip off -- so thanks :)
18:07:11 <cmurphy> cool :)
18:07:34 <kmalloc> hrybacki: doh! hate it when that happens
18:24:11 <kmalloc> cmurphy:
18:24:23 <kmalloc> https://usercontent.irccloud-cdn.com/file/dJmnBEOP/IMG_20180816_112347.jpg
18:24:46 <cmurphy> squeeee
18:24:56 <cmurphy> comfy pup is comfy
18:25:59 <kmalloc> yesssss
18:48:02 <openstackgerrit> Merged openstack/keystone master: Add callback action back in  https://review.openstack.org/590590
18:58:10 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert role_assignments API to flask native dispatching  https://review.openstack.org/590518
18:59:45 <mnaser> so what would be the difference between applications credentials and trusts?
19:00:00 <mnaser> i don't really see that much of a difference somehow
19:03:16 <kmalloc> trusts use normal auth mechanisms "username/password" claiming delegation in the trust
19:03:31 <kmalloc> app creds supplant the standard username/password
19:03:53 <cmurphy> mnaser: a normal end user can't create a user to delegate a trust to
19:03:59 <kmalloc> cmurphy: ++ that too
19:04:14 <cmurphy> application credentials are entirely self service
19:06:30 <mnaser> cmurphy, kmalloc: ok cool, so the idea is a user can grant application credentials without needing to create a new user?
19:06:43 <kmalloc> mnaser: yep, and limit the roles/project access
19:07:03 <mnaser> my use case is a customer who needs to give access to different users to the same project, but because of the admin-ness bug and my not-wanting-to-modify-default-policy, they're unable to maintain their own users
19:07:35 <mnaser> so my idea was: create application credentials and use those to authenticate (and hopefully openstackclient supports that part, or we might have to do some work to get it to)
19:07:43 <kmalloc> cmurphy: would you be opposed if i wrapped the hashing of the app-cred into the SQL model like I did for passwords? rather than needing to explicitly call "hash" it's done on Model.secret = XXX
19:07:58 <kmalloc> mnaser: that fits exactly what app creds are meant to solve (or one case)
19:08:10 <mnaser> kmalloc: cool!  glad we're running queens then :D
19:08:14 <kmalloc> :)
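A sketch of that self-service flow with keystoneauth1/python-keystoneclient; URLs, names, and the role dict are illustrative, and note that no scope is sent when authenticating, since the app cred carries its own project and roles:

from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

# 1) As a normal project user, create an application credential.
user_sess = session.Session(auth=v3.Password(
    auth_url='http://keystone.example.com:5000/v3',
    username='alice', password='secret', project_name='demo',
    user_domain_id='default', project_domain_id='default'))
keystone = client.Client(session=user_sess)
app_cred = keystone.application_credentials.create(
    name='deploy-bot', roles=[{'name': 'Member'}])

# 2) Authenticate with it; the secret is only returned at create time.
app_auth = v3.ApplicationCredential(
    auth_url='http://keystone.example.com:5000/v3',
    application_credential_id=app_cred.id,
    application_credential_secret=app_cred.secret)
app_sess = session.Session(auth=app_auth)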
19:08:23 <mnaser> on a seperate note where i should check this but it's easier to ask
19:08:35 <mnaser> will keystone forward the original user id in requests or the 'application credentials' one to services
19:08:46 <cmurphy> kmalloc: i vaguely remember using the user model as an example so i'm surprised it's not already the same
19:08:47 <mnaser> i.e.: reboot a server will come from original_user_id or application_credential_user_id ?
19:08:56 <kmalloc> it generates a token for the user
19:09:13 <kmalloc> explicitly scoped with roles assigned
19:09:25 <kmalloc> so the token is for the issuing user_id
19:09:27 <mnaser> ah so all requests will be identified as that user, so there won't really be the ability to know who-did-what
19:09:29 <mnaser> damrn
19:09:30 <kmalloc> yes.
19:09:30 <mnaser> darn*
19:09:45 <kmalloc> well you know the token was generated with the app_cred
19:09:51 <kmalloc> (or should) and you know the token was used
19:10:10 <mnaser> yeah but for example things like instance action logs log user_id and project_id but not token
19:10:12 <kmalloc> so it should be doable to correlate, but that  is getting into audit trails
19:10:25 <kmalloc> yeah.
19:11:14 <kmalloc> i would offer the other option is to allow the user to use an external LDAP or similar for user management
19:11:21 <kmalloc> domain-specific-config.
19:11:21 <mnaser> kmalloc: that was exactly my idea
19:11:30 <mnaser> so good to have it validated
19:11:36 <kmalloc> :)
19:11:40 <mnaser> domain specific external ldap, my only concern is how roles are assigned
19:11:51 <mnaser> i.e. i dont want them to give a role 'admin' to one of their users :-)
19:11:56 <knikolla> in that case trusts?
19:12:20 <mnaser> trusts implies that the another user already exists
19:12:31 <knikolla> from the ldap store, yeah
19:12:32 <mnaser> but the user cant create another user, because they don't have the ability to do that (no admin rights)
19:12:35 <kmalloc> if you're using v3, grant them domain_admin and add a check in policy that ensures they have the role before they can assign it
19:12:52 <kmalloc> i think i can help you come up with the check_str for policy.json for that
19:13:13 <mnaser> i think the issue was having domain_admin could let you assign any roles to anyone so that was the concern at the time
19:13:28 <kmalloc> nah, we could craft a special check_string to help with that
19:13:51 <kmalloc> i've been deep in policy code lately, so give me a moment to see if we can do that easily
19:14:04 <mnaser> that'd be pretty sweet if that was the case, i can imagine this being something a lot of people need
19:14:15 <kmalloc> it might already be part of our v3_cloud policy
19:16:21 <kmalloc> mnaser: ah, we lean on domain-specific roles for this
19:16:40 <mnaser> kmalloc: im thinking out loud -- ldap for identity, sql for assignment, give `foo` group _member_ role to project, and let them add users to said group without giving them access to assign roles
19:16:41 <kmalloc> it'd be roles created with a domain=XXX
19:17:08 <kmalloc> mnaser: that would be pretty straightforward as well
19:17:24 <kmalloc> mnaser: it might actually be *easier* to manage.
19:17:34 <kmalloc> because it doesn't involve creating domain-specific roles.
19:17:34 <mnaser> yeah, they just have to add users to a certain group and voila
19:17:49 <mnaser> the only thing is they are kinda part of the existing default domain
19:17:54 <kmalloc> you could even simply make one group per role [if more than one role]
19:18:16 <mnaser> so im not sure how easy it would be to handle that
19:18:40 <kmalloc> well, as long as they are using v3 api, conversion is straightforward
19:19:03 <kmalloc> add stuff to new domain, tell them to use new domain stuff / migrate and they can manage users.
19:19:18 <mnaser> can a group in domain `foo` be given access to a project in domain `bar` ?
19:19:25 <kmalloc> should be doable
19:19:47 <kmalloc> i can't be 100% sure right this second though.
19:20:08 <mnaser> so question is can role assignments span across domains?
19:21:19 <kmalloc> they should be able to
19:21:24 <kmalloc> i'm looking at the code now
19:21:48 <kmalloc> but, the whole concept that a user is in domain X, and owned by Domain X, doesn't mean all projects the user works on should be required to be in Domain X
19:22:02 <kmalloc> i think the only case is for domain-specific roles
19:22:12 <kmalloc> where those roles can only be added to projects within that domain.
19:22:24 <kmalloc> but not limited to users owned by that domain
19:23:09 <kmalloc> mnaser: looking at the code, i see no reason a grant cannot connect USER/GROUP from DOMAIN X to project in DOMAIN Y
19:24:15 <mnaser> kmalloc: https://bugs.launchpad.net/keystone/+bug/1474284/comments/4 from 2015 seems to match
19:24:15 <openstack> Launchpad bug 1474284 in OpenStack Identity (keystone) "Adding users from different domain to a group" [Medium,Invalid]
19:25:12 <kmalloc> ++
19:27:11 <kmalloc> knikolla: i had to buy a smaller CPU cooler :( the one I had blocked PCIE-1, so i couldn't install the RADEON PRO GPU
19:28:44 <kmalloc> knikolla: apparently 140mm is just too large if I want to use all the PCIE slots =/
19:28:57 <kmalloc> (stupid "not workstation" motherboard)
19:29:25 <openstackgerrit> Colleen Murphy proposed openstack/keystone master: Do not log token string  https://review.openstack.org/592505
20:48:25 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert system (role) api to flask native dispatching  https://review.openstack.org/590588
20:49:02 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Move json_home "extension" rel functions  https://review.openstack.org/591025
20:49:31 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Convert OS-FEDERATION to flask native dispatching  https://review.openstack.org/591082
20:49:39 <openstackgerrit> Morgan Fainberg proposed openstack/keystone master: Refactor ProviderAPIs object to better design pattern  https://review.openstack.org/571955
00:32:35 <openstackgerrit> Merged openstack/keystone master: Do not log token string  https://review.openstack.org/592505
07:08:09 <mbuil> cmurphy: yesterday I had a talk with the edge guys and they had in mind a demo where a user logs into Horizon from the SP using federation. As far as I saw yesterday, the only way to interact with the SP is through a token and changing OS_URL, so I guess what they had in mind is currently not possible, or?
07:28:07 <cmurphy> mbuil: it is possible to use horizon but you have to log in to the IdP not the SP
07:29:24 <mbuil> cmurphy: and could you see or interact with the SP resources from the IdP somehow?
07:30:31 <cmurphy> mbuil: yes, it should actually be pretty seamless, you just select the SP from a menu and then it will pretty much act like you've logged into the SP
07:31:07 <mbuil> cmurphy: ah nice. I'll try that right away :)
07:31:19 <cmurphy> because it gets the token up front and has the catalog from that, it knows where all the endpoints are
09:45:57 <openstackgerrit> Merged openstack/ldappool master: fix tox python3 overrides  https://review.openstack.org/591927
14:29:57 <hrybacki> o/
15:51:48 <openstackgerrit> Merged openstack/keystonemiddleware master: add releasenotes to readme.rst  https://review.openstack.org/591947
16:00:30 <openstackgerrit> Gage Hugo proposed openstack/keystoneauth master: Add nosec to usage of SHA1  https://review.openstack.org/593094
16:01:00 <gagehugo> kmalloc ^ bandit was updated to 1.5, ksa now is failing the bandit check
16:01:10 <gagehugo> on a sha1 usage
16:53:13 <kmalloc> just exempt it
16:53:36 <kmalloc> there is zero reason our use of SHA1 is an issue; we're using it for obfuscating logs.
16:53:47 <kmalloc> yep
16:53:51 <kmalloc> i see you did that.
16:54:27 <kmalloc> gagehugo: alternative is to use sha256
16:54:49 <kmalloc> gagehugo: really it is fine to change the hash we use as long as we're consistent within a release
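The pattern in question, sketched generically (ksa's actual helper differs):

import hashlib

def obfuscate_for_log(token):
    # Hash the secret before logging: the log line stays correlatable
    # across requests, but the token itself is not recoverable from it.
    return hashlib.sha256(token.encode('utf-8')).hexdigest()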
16:57:44 <openstackgerrit> Merged openstack/keystone master: Remove get_catalog from manage layer  https://review.openstack.org/575704
16:58:16 <kmalloc> gagehugo: -1, but only because the TODO is pointless.
16:59:07 <kmalloc> gagehugo: we can avoid the suppression by moving to sha256, or we can nosec and say "this is for logging only"
17:27:04 <gagehugo> kmalloc ok, I'll update that
17:39:09 <openstackgerrit> Gage Hugo proposed openstack/keystoneauth master: Change log hashing to SHA256  https://review.openstack.org/593094
22:10:36 <openstackgerrit> Merged openstack/keystoneauth master: Change log hashing to SHA256  https://review.openstack.org/593094
07:00:06 <openstackgerrit> Merged openstack/keystoneauth master: add release notes to readme.rst  https://review.openstack.org/591943
03:07:34 <openstackgerrit> wangxiyuan proposed openstack/keystone master: The password should be changed when changing password  https://review.openstack.org/593476
09:29:02 <vishakha> cmurphy: Hi, https://review.openstack.org/#/c/588211/ is this change not valid then??
10:46:21 <deepak_mourya_> cmurphy: Hey, I think there is no more update on this bug from the assignee. So,  Can I start working on the same? https://bugs.launchpad.net/keystone/+bug/1669080
10:46:21 <openstack> Launchpad bug 1669080 in OpenStack Identity (keystone) ""openstack role create" should support "--description"" [Wishlist,In progress] - Assigned to Deepak Mourya (mourya007)
10:54:10 <cmurphy> vishakha: it will not work the way it is now, i don't have a good answer on how to make it work because it will require us to enhance our mapping rules parser to be able to recognize something more complicated than just a key-value pair
10:54:34 <cmurphy> deepak_mourya_: i would coordinate with wxy-xiyuan since he was working on https://review.openstack.org/484348
11:12:23 <wxy-xiyuan> deepak_mourya_: oh, feel free to pick it up. I think the patch only needs rebasing.
11:19:23 <wxy-xiyuan> vishakha: cmurphy: We hit this requirement as well. What's more, we also want the assertion properties to be configurable. I think it's good to have, but we should find a correct way and think more about it. It may be a good topic for the PTG.
11:20:05 <cmurphy> wxy-xiyuan: ++
11:21:02 <openstackgerrit> wangxiyuan proposed openstack/keystone master: The password should be changed when changing password  https://review.openstack.org/593476
11:25:27 <vishakha> ,cmurphy , wxy-xiyuan : Thanks for the reply. So should I hold this for now??
11:56:56 <wxy-xiyuan> vishakha: I'd say yes, unless you have some idea about it. It's good to leave comments in the patch.
14:03:56 <openstackgerrit> Merged openstack/keystone master: Use osc in k2k example  https://review.openstack.org/591587
15:06:50 <openstackgerrit> Kripa Shankar Sharma proposed openstack/python-keystoneclient master: Allow user to skip logging of user API response.  https://review.openstack.org/568373
15:15:06 <openstackgerrit> Kripa Shankar Sharma proposed openstack/python-keystoneclient master: Allow user to skip logging of user API response.  https://review.openstack.org/568373
15:30:45 <kmalloc> wxy-xiyuan: I noticed I accidentally -2'd the review re password change; let me fix that to a -1
15:30:57 <kmalloc> wxy-xiyuan: sorry about that.
15:52:49 <imacdonn> kmalloc: Need to decide if we're going to finish https://review.openstack.org/#/c/583699/ and https://review.openstack.org/#/c/583835/ . Personally, I don't really care any more, as we finished upgrading everything to Queens, but I'll follow through on mine if we think it's of value to others...
16:08:22 <kmalloc> imacdonn: we can, I was focused on some other stuff first since it wasn't a rush.
16:09:00 <imacdonn> kmalloc: OK, NP... just trying to tie up loose ends ;)
18:34:52 <openstackgerrit> Kripa Shankar Sharma proposed openstack/python-keystoneclient master: Allow user to skip logging of user API response.  https://review.openstack.org/568373
01:05:04 <wxy-xiyuan> kmalloc: yeah, we have an option "unique_last_password_count", but it can't handle the case pw1 -> pw1, since the minimum of 1 means there is no limit. So if we want to fix it with that option, we should make minimum=0 mean no limit. But that breaks the option's default behavior. I'm not sure it's a good way?
01:05:10 <wxy-xiyuan> https://bugs.launchpad.net/keystone/+bug/1787874
01:05:10 <openstack> Launchpad bug 1787874 in OpenStack Identity (keystone) "There is no way to forbid users changing password to itself" [Undecided,In progress] - Assigned to wangxiyuan (wangxiyuan)
01:12:26 <kmalloc> wxy-xiyuan: doing this in the controller is 100% wrong in either case. Also, changing the behavior is technically an API contract break
01:12:52 <kmalloc> wxy-xiyuan: if anything it should be done in the same place we do the password history check.
01:14:11 <kmalloc> wxy-xiyuan: right now the code doesn't actually check the password, it just fails if the password is the same, which is not the same thing as what you're trying to do.
01:16:00 <kmalloc> You'll need to check the password and the new password (hashing) against the current and fail in the following ways: 1) original password is not valid, current behavior, 2) new password matches hash and original password matches, 400, 3) original password matches, new password does not: update to new password
01:16:36 <kmalloc> So, in short, don't do it in the controller, do it in the manager/where we do history check.
01:16:50 <wxy-xiyuan> kmalloc: OK. I'll look into the password history part to see what we can do there.
01:19:09 <kmalloc> Cool
01:20:27 <kmalloc> Also, make sure to clear it with lbragstad and clearly communicate this is a change in api behavior (in the release note)
01:20:29 <kmalloc> I'll +2 it with those additional bits and with the logic below the controller.
01:28:48 <bzhao__> wxy-xiyuan: six
01:57:23 <lbragstad> kmalloc: wxy-xiyuan i'm still parsing email but i should be back to reviews tomorrow - i hope
02:28:27 <openstackgerrit> wangxiyuan proposed openstack/keystone master: [WIP] Change unique_last_password_count default to 0  https://review.openstack.org/593476
03:00:44 <vishakha> wxy-xiyuan: OK, thanks. I will put a comment on the patch and leave it as it is.
03:02:52 <vishakha> wxy-xiyuan: one more thing: should I remind you to discuss this at the PTG?
03:05:44 <wxy-xiyuan> vishakha: we have a list for PTG topic here: https://etherpad.openstack.org/p/keystone-stein-ptg I'll update it later.
03:07:03 <wxy-xiyuan> I'm not sure whether I'll attend the PTG (depends on my employer), Lance can pick it up I guess
03:09:17 <vishakha> wxy-xiyuan: Yes, please update the topic list.
06:39:20 <openstackgerrit> wangxiyuan proposed openstack/keystone master: Change unique_last_password_count default to 0  https://review.openstack.org/593476
07:26:38 <openstackgerrit> wangxiyuan proposed openstack/keystone master: Change unique_last_password_count default to 0  https://review.openstack.org/593476
09:21:48 <cmurphy> lbragstad: since you're up, want to ack https://review.openstack.org/593613 and https://review.openstack.org/593606 ?
09:23:16 <lbragstad> cmurphy: ack - thanks for keeping an eye on that
09:23:33 <cmurphy> np
09:23:37 <cmurphy> the releases one won't pass until the project-config one is in
09:23:53 <lbragstad> oh - good call
10:21:25 <openstackgerrit> Vishakha Agarwal proposed openstack/keystone master: Implement Trust Flush via keystone-manage.  https://review.openstack.org/589378
10:26:08 <ktibi> hi keystone. Simple question about domains and LDAP. Can I use a domain that uses an LDAP backend for users and tenants and keep SQL for service users like nova, glance...?
10:33:08 <lbragstad> ktibi: yeah - that's a supported deployment pattern
10:33:34 <ktibi> lbragstad, great ;) thx
10:33:46 <lbragstad> ktibi: https://docs.openstack.org/keystone/latest/admin/identity-domain-specific-config.html
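Concretely, the pattern in that doc looks roughly like the following: enable domain-specific drivers in keystone.conf and give the LDAP-backed domain its own config file, while the default domain holding the service users stays on SQL. The domain name, hostnames, and DNs below are placeholders:

    # /etc/keystone/keystone.conf
    [identity]
    domain_specific_drivers_enabled = true
    domain_config_dir = /etc/keystone/domains

    # /etc/keystone/domains/keystone.mydomain.conf
    [identity]
    driver = ldap

    [ldap]
    url = ldap://ldap.example.com
    suffix = dc=example,dc=com
    user = cn=admin,dc=example,dc=com
    password = secret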
10:43:52 <openstackgerrit> Lance Bragstad proposed openstack/keystone-specs master: Repropose capability lists to Stein  https://review.openstack.org/594123
11:06:33 <openstackgerrit> Lance Bragstad proposed openstack/keystone-specs master: Switch to stestr  https://review.openstack.org/581326
11:08:47 <openstackgerrit> Lance Bragstad proposed openstack/keystone-specs master: Repropose capability lists to Stein  https://review.openstack.org/594123
11:08:48 <openstackgerrit> Lance Bragstad proposed openstack/keystone-specs master: Move MFA receipt specification to Stein  https://review.openstack.org/594129
11:08:48 <lbragstad> adriant: ^
11:09:07 <lbragstad> jgrassler ^
11:09:21 <lbragstad> feel free to take those over if there are details that need to be updated
11:09:46 <lbragstad> i mainly just wanted to update the rocky specs repository to be accurate with what actually landed
11:23:51 <lbragstad> hrybacki: fyi - i created a new stein trello board
11:31:16 <lbragstad> hrybacki: kmalloc also - do you happen to know if mike has an irc nick and is available? https://review.openstack.org/#/c/541903/6
11:31:44 <lbragstad> i have a few questions regarding the actual jwt implementation and he seems pretty familiar with the underlying library bits
12:15:25 <hrybacki> lbragstad: mike?
12:15:33 <lbragstad> mike fedosin
12:16:56 <hrybacki> hmm, doesn't ring a bell. Memory lapse or?
12:17:14 <lbragstad> https://review.openstack.org/#/c/541903/
12:17:24 <lbragstad> he was apparently reviewing the jwt stuff
12:17:33 <lbragstad> and appears to be from redhat
12:19:12 <hrybacki> he does, lemme look internally
12:20:39 <wxy-xiyuan> lbragstad: he's mfedosin in the glance channel. He was one of the Glance core reviewers.
12:20:47 <lbragstad> oh - nice
12:20:53 <lbragstad> wxy-xiyuan: good call - thanks
12:21:02 <hrybacki> wxy-xiyuan: beat me to it :)
12:21:17 <wxy-xiyuan> hah
12:25:28 <lbragstad> hrybacki: also - let me know if you want to go through the trello board prior to denver
12:25:38 <lbragstad> both Rocky and Stein
12:50:38 <openstackgerrit> Colleen Murphy proposed openstack/keystoneauth master: Add Keystone2Keystone example  https://review.openstack.org/594156
12:50:42 <cmurphy> wxy-xiyuan: ^
13:16:32 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Convert policy API to flask  https://review.openstack.org/589950
13:48:07 <hrybacki> lbragstad: I would. I'll shoot you a calendar invite if that works?
13:48:18 <lbragstad> wfm
14:05:34 <openstackgerrit> Stephen Finucane proposed openstack/oslo.policy master: sphinxext: Start parsing 'DocumentedRuleDefault.description' as rST  https://review.openstack.org/594222
14:35:31 <gagehugo> o/
15:10:18 <hrybacki> o/
15:33:48 <kmalloc> lbragstad: going to be a little late to the meeting today
15:33:58 <lbragstad> thanks for the heads up
15:34:05 <kmalloc> lbragstad: dog walk and need to call health insurance company :(
16:42:01 * gagehugo goes to grab lunch
17:01:36 <openstack> lbragstad: Error: Can't start another meeting, one is in progress.  Use #endmeeting first.
17:01:43 <lbragstad> #endmeeting