18:00:31 <stevemar> #startmeeting keystone
18:00:32 <openstack> Meeting started Tue Jun 28 18:00:31 2016 UTC and is due to finish in 60 minutes.  The chair is stevemar. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:33 <rodrigods> o/
18:00:34 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:34 <gagehugo> howdy
18:00:36 <knikolla> hiiii o/
18:00:37 <openstack> The meeting name has been set to 'keystone'
18:00:43 <browne> o/
18:00:44 <MeganR> o/
18:00:44 <stevemar> o/
18:00:46 <samueldmq> hi all o/
18:00:49 <gagehugo> o/
18:00:52 <dstanek> o/
18:00:58 <raildo> \(~_~)/
18:01:12 <xek> o/
18:01:30 <stevemar> i wonder if dolphm is back from kung fu noodle
18:01:39 <ayoung> no shaleh?
18:01:41 <stevemar> agenda is here https://etherpad.openstack.org/p/keystone-weekly-meeting
18:01:54 <hogepodge> o/
18:02:04 <nonameentername> o/
18:02:44 <stevemar> alrighty, let's get this show on the road
18:02:50 <stevemar> #topic Make oslo.messaging and caching required for keystonemiddleware
18:03:05 <stevemar> #link https://bugs.launchpad.net/keystonemiddleware/+bug/1540115
18:03:05 <openstack> Launchpad bug 1540115 in keystonemiddleware "use extras to install optional dependencies" [Medium,In progress] - Assigned to Steve Martinelli (stevemar)
18:03:44 <notmorgan> o/
18:03:44 <stevemar> there was a bug to use "extras" - like how we do it in keystoneauth - for the audit and caching support in keystonemiddleware
18:03:54 <stevemar> but jamielennox brought up a good point
18:04:11 <stevemar> we highly recommend caching, so why not make it required
18:04:35 <bknudson_> why do we recommend caching? Have we done performance measurements?
18:04:37 <stevemar> and audit support only needs oslo.messaging, which most services will already depend on
18:04:54 <stevemar> bknudson_: of course not, but caching fixes everything (tm)
18:05:39 <stevemar> bknudson_: i believe we have caching enabled in the gate
18:05:43 <dstanek> and breaks everything else :-)
18:06:07 <stevemar> and even if you don't have something set up, it'll use an in-memory cache
18:06:26 <notmorgan> the memory cache is terrible
18:06:29 <notmorgan> and needs to die.
18:06:29 <jamielennox> so we can never require that a deployment turn on caching or messaging
18:06:37 <ayoung> die cache die
18:06:54 <jamielennox> however for this i think it's safe to say that we recommend it all so heavily that we should just make the dependencies required to make it work always installed
18:07:17 <ayoung> messaging used for audit only, right?
18:07:21 <jamielennox> rather than have to do the pip install keystonemiddleware['cache,audit_messaging_notifications']
18:07:24 <stevemar> ayoung: correct
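[editor's note: "extras" here is pip/setuptools' optional-dependency feature, which keystoneauth already uses. A minimal sketch of how such extras might be declared in keystonemiddleware's setup.cfg (pbr reads an [extras] section; the extra names and version pins below are illustrative, not the project's actual configuration):

    [extras]
    cache =
        python-memcached>=1.56
    audit_notifications =
        oslo.messaging>=5.2.0

An install then opts in explicitly, e.g. `pip install "keystonemiddleware[cache,audit_notifications]"`; making a dependency required instead means promoting it into requirements.txt.]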
18:07:36 <dstanek> stevemar: using an in-memory cache in production would be terrible if it doesn't clean up after itself; and i'm assuming it doesn't
18:07:37 <bknudson_> I thought the point of oslo cache was that you could have different implementations (not just memcached)
18:08:19 <notmorgan> dstanek: it does. the issue with the in-memory cache is that multiple processes don't share
18:08:37 <notmorgan> and the in-memory cache cleans up [runs a for loop on the dict] on every get.
18:08:40 <dstanek> notmorgan: least common algorithm?
18:09:03 <notmorgan> dstanek: tokens can be validated on one process and fail on another
18:09:07 <notmorgan> dstanek: for example
18:09:16 <henrynash> (eek, sorry to be late)
18:09:27 <dstanek> notmorgan: i'm more worried about absorbing all memory for the cache
18:09:29 <notmorgan> dstanek: so subsequent requests succeed/fail/succeed because the processes cached data
18:09:30 <stevemar> henrynash: no worries :)
18:09:49 <dstanek> unless it is bounded; only keeps last X entries
18:09:54 <notmorgan> basically don't enable cache with in-memory caching as a default ANYwhere it isn't already
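[editor's note: a minimal Python sketch of the kind of per-process dict cache being criticized here — not keystonemiddleware's actual code — showing both behaviors notmorgan describes: expiry enforced by looping over the dict on every get, and each worker process holding its own copy:

    import time

    class InProcessCache(object):
        def __init__(self, ttl=300):
            self._ttl = ttl
            self._data = {}  # key -> (value, expires_at)

        def set(self, key, value):
            self._data[key] = (value, time.time() + self._ttl)

        def get(self, key):
            # "cleans up [runs a for loop on the dict] on every get"
            now = time.time()
            for k in list(self._data):
                if self._data[k][1] < now:
                    del self._data[k]
            entry = self._data.get(key)
            return entry[0] if entry else None

    # Each API worker process has its own instance, so a token validated
    # and cached in one process is invisible to its siblings -- hence the
    # succeed/fail/succeed pattern across subsequent requests.]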
18:09:57 <jamielennox> unfortunately we tried to remove that and devstack suffered and we got slapped for it so we're stuck with that for a little bit
18:10:02 <notmorgan> and we need to deprecate the current one
18:10:09 <stevemar> jamielennox: yep
18:10:24 <notmorgan> if someone advocates adding in-memory cache anywhere else, i'm sorry, but no.
18:10:38 <notmorgan> and that being slapped for it was a massive overreaction
18:10:43 <stevemar> revert as jamie mentioned: https://github.com/openstack/keystonemiddleware/commit/70a9754ae68c3eb4d5645c76f3e61052486e9054
18:10:50 <ayoung> how about messaging?
18:10:53 <stevemar> bknudson_: so yeah, it does actually work :P
18:11:12 <stevemar> bknudson_: cause the neutron gates kept timing out iirc
18:11:15 <jamielennox> ayoung: so if you install audit middleware but don't install oslo.messaging it just logs everything
18:11:21 <stevemar> ayoung: what about it?
18:11:24 <notmorgan> stevemar: the fact is the gate doesn't need the in-memory cache.
18:11:47 <jamielennox> which is not particularly useful - and can actually be accomplished by configuring oslo.messaging that way if you're really interested
18:12:13 <jamielennox> but i see no reason to do that, so if you want to use audit middleware you have to install oslo.messaging separately
18:12:18 <notmorgan> so, if this throws thing into in-memory cache by default with audit middleware
18:12:22 <ayoung> stevemar, should we require it
18:12:24 <notmorgan> i'm against this.
18:12:26 <bknudson_> None of this is going to affect me since we've got deployment tools.
18:12:38 <ayoung> should we require oslo-messaging, caching go bye bye
18:12:41 <stevemar> ayoung: doesn't hurt it
18:12:41 <jamielennox> the question is - is it just going to be easier to make oslo.messaging a direct dependency
18:13:00 <jamielennox> ayoung: we will want caching, we just want to use oslo.cache instead of the current thing
18:13:18 <jamielennox> notmorgan: against because of extra deps?
18:13:32 <stevemar> jamielennox: so maybe wait until we move to oslo.cache, i don't have a timeline for that
18:13:38 <gyee> swift has their own cache implementation, just fyi
18:13:47 <bknudson_> moving to oslo.cache first makes more sense.
18:13:49 <jamielennox> oslo.messaging is a big dependency and audit is not that widely used
18:14:01 <jamielennox> i think the oslo.cache one is an easy yes
18:14:29 * stevemar pokes notmorgan
18:14:38 <dstanek> jamielennox: i think notmorgan is against using the in-memory cache
18:14:43 <notmorgan> basically
18:14:50 <stevemar> notmorgan: would you still be against it if we used oslo.cache
18:15:00 <notmorgan> i am against defaulting to the in-memory cache
18:15:03 <dstanek> -2 for in-memory cache from me - i'm fine with adding the deps tough
18:15:11 <gyee> jamielennox, messaging is not a dependency of audit middleware
18:15:14 <gyee> not a hard dependency
18:15:19 <notmorgan> if we add the dep we need to make sure the in-memory cache is NOT used anywhere.
18:15:24 <notmorgan> oslo.cache should be a hard-dep
18:15:25 <bknudson_> can we have an in-memory cache that only caches a few tokens?
18:15:27 <gyee> audit middleware can send audit to the local logs
18:15:34 <notmorgan> bknudson_: no. lets not even bother.
18:15:44 <ayoung> OK..yes to deps on oslo cache and oslo messaging.  No to in memory.
18:15:49 <ayoung> Next topic
18:15:56 <stevemar> ayoung: not quite
18:16:08 <stevemar> yes to depends on oslo.cache, use extras for oslo.messaging
18:16:08 <notmorgan> the extra deps don't really matter imo.
18:16:31 <notmorgan> most everything uses oslo.messaging anyway.
18:16:39 <ayoung> stevemar, I have to figure out how to map extras to RPM dependencies
18:16:46 <bknudson_> Does swift use oslo.messaging?
18:16:48 <jamielennox> notmorgan: oh, yea, dump that asap
18:16:49 <ayoung> I'd rather not use extras and just force the depends
18:16:55 <notmorgan> ayoung: "recommended"
18:17:00 <notmorgan> ayoung: or "suggested"
18:17:05 <jamielennox> gyee: it's not a dep, but it's pretty useless without it
18:17:07 <notmorgan> ayoung: or in RPM "required"
18:17:08 <gyee> nope, swift does not use oslo.messaging
18:17:18 <notmorgan> ayoung: extras is really only a pip thing, map it as a hard dep in the rpm
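[editor's note: pip extras have no direct RPM equivalent, which is ayoung's packaging concern. A hypothetical spec-file fragment following notmorgan's suggestion — either plain hard requirements, or weak dependencies where the rpm version supports them:

    # python-keystonemiddleware.spec (illustrative fragment)
    Requires:   python-oslo-cache
    # or, as a weak dependency on newer rpm:
    Recommends: python-oslo-messaging]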
18:17:26 <stevemar> gyee: swift does everything different, quit looking at them for a comparison :)
18:17:28 <gyee> jamielennox, no, you can still send audits to a log
18:17:36 <ayoung> notmorgan, ok.  THat works for this case
18:17:40 <bknudson_> I would guess swift wouldn't like us adding a dependency on oslo.messaging
18:17:40 <jamielennox> gyee: why - auth_token already logs requests
18:17:50 <stevemar> jamielennox: for the cadf event
18:17:56 <jamielennox> gyee: also oslo.messaging has a log driver you can use to do that
18:17:59 <gyee> jamielennox, CADF formatted logs, to a separate log file
18:18:04 <notmorgan> bknudson_: if swift uses ksm in a deployment, installing oslo.* is already happening
18:18:23 <gyee> jamielennox, oslo.messaging's log driver does not work for swift
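[editor's note: the driver jamielennox mentions is oslo.messaging's built-in 'log' notification driver, which writes notifications to the Python logging system instead of a message bus; a minimal config sketch (option names as of oslo.messaging at the time):

    [oslo_messaging_notifications]
    driver = log]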
18:18:32 <stevemar> alright, next topic, i wrote the outcome in the etherpad
18:18:38 <jamielennox> gyee: why?
18:18:48 <jamielennox> gyee: i mean - didn't i just fix that?
18:19:22 <stevemar> jamielennox: moving on, and yeah, you just did
18:19:26 <stevemar> #topic Should we add a "migration completion" command to keystone-manage?
18:19:27 <gyee> jamielennox, swift uses proxy logger, which uses systemd underneath
18:19:31 <henrynash> ok
18:19:34 <stevemar> henrynash: ^
18:19:40 <henrynash> There are some migrations in Newton that could (if operations continue during a rolling upgrade) leave the DB in an incorrect state during a multi-node migration
18:19:48 <henrynash> E.g.: https://bugs.launchpad.net/keystone/+bug/1596500
18:19:48 <openstack> Launchpad bug 1596500 in OpenStack Identity (keystone) "Passwords created_at attribute could remain unset during rolling upgrade" [Undecided,New] - Assigned to Henry Nash (henry-nash)
18:19:51 <bknudson_> making keystone-manage more user friendly is a good thing
18:19:57 <xek> I was thinking of annotating the migrations that have to be run as part of the "migrate completion" command with a decorator that would be used in sqlalchemy migrate scripts
18:20:06 <xek> if you do a normal keystone-manage db_sync, everything should work like today
18:20:15 <xek> but if you want to do a rolling upgrade, you would change this behavior by using new commands
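[editor's note: a minimal Python sketch of the decorator approach xek describes, using henrynash's created_at bug as the example; all names here are hypothetical, not existing keystone code:

    # Hypothetical marker consumed by a new keystone-manage command.
    def migrate_completion(upgrade):
        """Defer this migration until 'migrate completion' is run,
        i.e. until every node is running the new code."""
        upgrade.run_on_completion = True
        return upgrade

    # Migration A (normal db_sync): add the column as nullable so
    # nodes still running old code can keep inserting rows.
    def upgrade(migrate_engine):
        pass  # ALTER TABLE password ADD COLUMN created_at ... NULL

    # Migration B (a separate script, completion phase): backfill and
    # tighten the constraint once no old code is writing.
    @migrate_completion
    def upgrade(migrate_engine):  # noqa: F811 -- separate script in practice
        pass  # UPDATE ... SET created_at = ...; ALTER ... NOT NULL]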
18:20:42 <notmorgan> xek: so, what is the order of ops
18:20:45 <bknudson_> oh, this is rolling upgrades
18:20:49 <henrynash> xek: in fact there is even a bit more than that
18:20:52 <notmorgan> xek: upgrade schema, upgrade code?
18:20:59 <notmorgan> upgrade code, upgrade schema?
18:21:09 <notmorgan> what is the required order to handle rolling upgrades
18:21:18 <notmorgan> this still isn't clear to me, before we add new things
18:21:28 <notmorgan> please set the order that is expected
18:21:33 <lbragstad> i feel like we tried hashing that out at the midcycle for like 5 hours last time
18:21:35 <notmorgan> i am against rolling upgrades without a clear target
18:21:43 <henrynash> there is at least one migration that wants to have a non-nullable column, but leaves it nullable to ensure we don’t get other errors during a rolling upgrade
18:21:47 <notmorgan> and remain against it.
18:21:50 <jamielennox> yea, i think we would need to do a fair bit of work code-side to make keystone handle different running database schemas to get this to work
18:22:12 <gyee> we support rolling upgrade today?
18:22:17 <notmorgan> gyee: no
18:22:21 <lbragstad> i don't think we do
18:22:24 <henrynash> yes we do
18:22:30 <notmorgan> henrynash: no we don't
18:22:35 <notmorgan> absolutely do not
18:22:40 <dstanek> gyee: if you cross your fingers it may work
18:22:41 <notmorgan> code and schema must be in sync
18:22:44 <xek> notmorgan, well, you need to upgrade the code first to get the upgrade scripts for upgrading your schema
18:22:46 <bknudson_> if we have cores saying we don't then we don't, since stuff is going to be approved that breaks it.
18:22:53 <stevemar> gyee: we support upgrades, but not rolling ones :)
18:23:02 <notmorgan> if you're super lucky and do limited things it may work with old schemas
18:23:04 <gyee> right, not rolling
18:23:18 <notmorgan> xek: right i mean -- are you upgrading code everywhere, then schema?
18:23:35 <notmorgan> xek: or is it "we make the schemas work in n-1 (or n-2) configurations"?
18:23:42 <gyee> not for sql driver anyway
18:23:44 <henrynash> I am stunned…we agreed we WOULD be doing this at the summit…and I thought Steve you confirmed that on IRC last week?
18:23:53 <bknudson_> you could upgrade one system to the new code just to do the schema upgrade.
18:24:14 <notmorgan> bknudson_: that is why i want to know the planned order - so we can ensure we're developing for the right cases.
18:24:20 <dstanek> i would say flow is (additive schema change; update code everywhere; contracting schema change)
18:24:22 <stevemar> henrynash: that was the goal, but it looks like we shot ourselves in the foot with migration 105
18:24:23 <notmorgan> bknudson_: i'm not against rolling upgrades.
18:24:26 <xek> it's not well tested yet, and I agree that there could be some breaking changes, but I think it's beneficial to keep rolling upgrades in mind while reviewing, to bootstrap the process
18:24:26 <bknudson_> We're going to need rolling upgrade support otherwise our cloud is going to be useless.
18:24:31 <ayoung> so...rolling upgrade is a really really hard thing
18:24:31 <notmorgan> just against "adding things without the plan"
18:24:38 <rderose_> o/
18:24:40 <ayoung> I think before we hit it, we would need a test that does this:
18:24:47 <ayoung> look at the current version of the code and database
18:24:55 <jamielennox> dstanek: ++ you have to do code first otherwise you can end up with schemas newer than your code understands
18:24:56 <henrynash> having a migration-completion task would always be required to ensure rolling upgrades
18:24:57 <ayoung> then...upgrade one but leave the other at the old version
18:25:01 <ayoung> and make sure things still run
18:25:07 <ayoung> probably "upgrade the database"
18:25:16 <ayoung> and that...I think...means no data migrations.
18:25:25 <stevemar> ayoung: partially agree, i just want a test to make sure the model matches the migration
18:25:39 <gyee> I am with notmorgan, we need clear high-level goal on rolling upgrade first
18:25:42 <ayoung> jamielennox, so we always write code that *can* work with an older schema?
18:26:02 <ayoung> so..what would we upgrade from and to?
18:26:07 <ayoung> mitaka to newton?
18:26:12 <jamielennox> ayoung: i'm pretty sure that's how this normally works, when you do a migration you need to ensure in code that there is still a path forward if the old schema is found in the database
18:26:13 <ayoung> n1 to n2?
18:26:20 <xek> ayoung, I'm working on grenade CI, which would downgrade the code to the last stable version and check if the schema is compatible and if resources written by the newer version don't cause problems
18:26:48 <henrynash> ayoung: I thought we agreed we would upgrade the DB (maybe in conjunction with one node’s code), then upgrade the other nodes to newton
18:26:49 <ayoung> xek, so...what does Grenade consider as "last stable?"
18:26:50 <notmorgan> xek: ++ good - so the plan is "upgrade schema" and roll code ?
18:27:02 <dstanek> can we hash this out at the mid-cycle?
18:27:05 <notmorgan> ayoung: last release
18:27:07 <dolphm> dstanek: ++
18:27:08 <lbragstad> dstanek ++
18:27:09 <samueldmq_> dstanek: ++
18:27:12 <samueldmq_> +++++++=
18:27:14 <rderose_> dstanek ++
18:27:16 <ayoung> notmorgan, so major releases only?
18:27:20 <bknudson_> what do we do until the mid-cycle? no migrations?
18:27:20 <stevemar> dstanek: we can argue for another 5 hours, sure :)
18:27:22 <ayoung> mitaka to newton?
18:27:28 <lbragstad> we had a long discussion about this at the last cycle
18:27:36 <dolphm> bknudson_: no migrations that clearly break rolling upgrade strategies?
18:27:40 <dstanek> stevemar: i hope not. i'll have some thought prepared already
18:27:41 <dolphm> bknudson_: additive should still be fine
18:27:43 <rderose_> lbragstad: but I don't know what the plan was
18:27:50 <bknudson_> ok, additive works for me.
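[editor's note: a sketch of the phased flow dstanek outlines (additive schema change; update code everywhere; contracting schema change); these keystone-manage flags are hypothetical here, not options that existed at the time:

    # 1. additive ("expand") schema changes; safe while old code still runs
    keystone-manage db_sync --expand
    # 2. roll the new code out node by node
    # 3. contracting schema changes once every node runs the new code
    keystone-manage db_sync --contract]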
18:27:57 <notmorgan> ayoung: i would never advocate otherwise. doing between other things is crazy
18:28:05 <ayoung> so we need a test that runs master against Mitaka's migrations
18:28:07 <lbragstad> we pretty much came up with a list of "disruptive" schema changes
18:28:12 <dolphm> xek: are you going to be at the midcycle?
18:28:17 <stevemar> dolphm: bknudson_ we still have henrynash's bug he reported
18:28:28 <ayoung> That needs to be part of "check"
18:28:34 <henrynash> so, we are in good shape (with a simple complete-migration step) to support mitaka->newton rolling upgrades with DB first with one node, and then upgrade the other nodes
18:28:34 <xek> dolphm, I'll talk with my manager
18:28:57 <stevemar> henrynash: i believe so
18:29:06 <dolphm> stevemar: henrynash: the bug does not need to be resolved immediately
18:29:15 <henrynash> dolphm: ++
18:29:17 <dolphm> stevemar: henrynash: it's just a red flag, there's nothing broken
18:29:24 <stevemar> dolphm: nope, but it just leaves a hole in the plan
18:29:38 <stevemar> dolphm: anyway, i get what you mean
18:29:42 <henrynash> dolphm: and we are about to create another hole with https://review.openstack.org/#/c/328447/
18:29:58 <dolphm> it can wait for the midcycle, at least. we need a whiteboard for this again :(
18:30:09 <henrynash> similar thing, where we need a migration-complete task to tidy things up
18:30:10 <stevemar> henrynash: ready for your second topic?
18:30:11 <gyee> stop creating them holes!
18:30:13 <henrynash> yep
18:30:31 <stevemar> #topic Domain hierarchies as an alternative to relaxing project name constraints or project hierarchical naming
18:30:41 <henrynash> ok, last attempt!
18:30:57 <notmorgan> henrynash: domains must remain globally unique or we break api contract.
18:31:02 <henrynash> so took dolphm/notmorgan's suggestions about domains
18:31:12 <henrynash> notmorgan: they do stay unique
18:31:18 <notmorgan> henrynash: name wise. but i'll leave off hierarchy complaints
18:31:27 <notmorgan> henrynash: and let it happen if other folks support it
18:31:32 <henrynash> https://review.openstack.org/#/c/332940/
18:31:49 <notmorgan> as long as they stay at the top. i still will say i dislike it and it opens a LOT of security concerns
18:32:05 <henrynash> and a dependant spec of: https://review.openstack.org/#/c/334364/ (for simplicity)
18:32:14 <henrynash> notmorgan: what do you mean “on top”
18:32:20 <notmorgan> domains cannot be under projects
18:32:26 <notmorgan> domain->domain ok
18:32:27 <henrynash> notmorgan: ++
18:32:30 <notmorgan> domain->project->domain no
18:32:36 <ayoung> agreed
18:32:43 <henrynash> notmorgan: agreed. spec confirms that
18:32:55 <notmorgan> and i'm going to ask for a threat analysis on adding this + UX review.
18:33:01 <ayoung> so string naming on domains means we can compose domain names out of path segments?
18:33:08 <henrynash> ayoung: yes
18:33:18 <jamielennox> i don't think i like this - it may just be my model but this is never how i thought domains should be used
18:33:24 <dstanek> henrynash: i've wanted this since we were discussing HMT in paris
18:33:25 <ayoung> hell, I think we can get rid of projects then
18:33:30 <notmorgan> jamielennox: i agree with you.
18:33:39 <ayoung> jamielennox, we should never have had domains
18:33:48 <ayoung> but lets kill projects and just use domains
18:33:48 <notmorgan> ayoung: uhm.. except projects are not named globally unique
18:33:49 <jamielennox> i mean we're going heavily down the projects everywhere path and now we kick up domain usage
18:33:51 <ayoung> projects are legacy
18:33:58 <notmorgan> ayoung: and domains MUST be globally uniquely named
18:34:02 <notmorgan> regardless of hierarchy
18:34:12 <gyee> lets just rename them back to tenants
18:34:13 <ayoung> notmorgan, that is not what I just asked
18:34:14 <dstanek> jamielennox: problem is that we pushed down the wrong path for too long
18:34:15 <jamielennox> and from a UX perspective it will result in a LOT of cases where the USER_DOMAIN_ID and PROJECT_DOMAIN_ID are different and that will just confuse everyone
18:34:17 <ayoung> what I just asked is
18:34:18 <henrynash> jamielennox: well, they're all just projects
18:34:20 <raildo> gyee: lol
18:34:21 <ayoung> so string naming on domains means we can compose domain names out of path segments?
18:34:36 <ayoung> so if D1->D2  then the name is D1/D2
18:34:44 <ayoung> name is unique, but composed out of sections
18:34:50 <stevemar> jamielennox: ugh, that would not be nice if USER_DOMAIN_ID was different than PROJECT_DOMAIN_ID
18:34:51 <henrynash> jamielennox: we allow that today, you can have assignments across domains
18:34:53 <ayoung> so we keep the uniqueness, but add the ability to compose
18:35:10 <stevemar> henrynash: the number of times that is used approaches 0
18:35:10 <henrynash> ayoung: ++
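[editor's note: a toy Python illustration of ayoung's point — names stay globally unique if the full path is the name, while each segment only needs to be unique among its siblings:

    hierarchy = {"D1": {"D2": {}}, "D4": {"D2": {}}}

    def full_names(tree, prefix=""):
        for name, children in sorted(tree.items()):
            path = prefix + "/" + name
            yield path  # "/D1/D2" and "/D4/D2" both contain a "D2"
            for sub in full_names(children, path):
                yield sub

    assert list(full_names(hierarchy)) == ["/D1", "/D1/D2", "/D4", "/D4/D2"]]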
18:35:22 <jamielennox> henrynash: you can do it today yes, but most people i talk to are surprised by that and no-one really does it
18:35:33 <notmorgan> jamielennox: ++
18:35:36 <bknudson_> Don't you have users in ldap and projects in SQL?
18:35:40 <henrynash> jamielennox: I suspect that is true
18:35:49 <jamielennox> partially because it's a difficult workflow to even find projects in other domains to assign permissions on
18:35:54 <notmorgan> stevemar: i think multi-domain backends is going to get weird.
18:35:59 <ayoung> jamielennox, that is because we've made a god-awful mess out of it
18:36:00 <notmorgan> but *shrug*
18:36:13 <ayoung> jamielennox, I've been asked about it 2-3 times in the last couple of weeks
18:36:54 <jamielennox> also for every argument that domains can just be COMPANY-DIVISION-PROJECT to keep them unique - the exact same argument applies to project names
18:37:04 <notmorgan> i still think just simply letting other services (asking other services) to be domain aware is better than wedging in more HMT things
18:37:05 <bknudson_> tempest needs more multi-domain testing, and also testing with domain_specific_backends=True
18:37:15 <ayoung> jamielennox, I agree, but there is the whole "you promised not to do that"
18:37:25 <ayoung> we didn't promise, however, anything about hierarchical domains
18:37:32 <ayoung> so they are still fair play
18:37:33 <jamielennox> oh god, yea, and domain specific backends would have to play into this
18:37:48 <henrynash> so I guess my ask here is: provided I can work out the corner cases, are we ok conceptually and compatibility-wise with this (something which I don’t think is true for either of the other two proposals)
18:38:26 <ayoung> henrynash, you know you have my support.  I think we need an affirmation from dolphm and notmorgan.
18:38:29 <henrynash> but I don’t want to keep working through this if we are going to blow it out of the water on a conceptual basis
18:38:41 <notmorgan> i will not block this, i would still rather not to do hierarchical domains and just ask other projects to allow grouping of domains etc for quota
18:39:00 <notmorgan> this seems like we're adding a LOT of extra cruft for supporting quota groupings etc.
18:39:03 <ayoung> notmorgan, but you agree it does not break any of our current promises?
18:39:24 <ayoung> notmorgan, I see it as us finally doing what we should have done in the first place
18:39:27 <jamielennox> henrynash: i'm -1 just from a conceptual level, i don't think this is doing things the right way
18:39:40 <notmorgan> ayoung: if domains remain globally unique name wise -- and all current APIs return the same data
18:39:43 <ayoung> we should not have two abstractions (domains and projects) and projects should have been hierarchical from the get go
18:39:44 <notmorgan> we aren't breaking promises
18:39:49 <dolphm> my thoughts echo notmorgan's
18:39:55 <raildo> we got some use cases in the last summit, like getting ceilometer metrics in the hierarchy... this is something that just adding the domain concept in other services will not be enough for... ++ for henrynash's idea
18:39:59 <notmorgan> domains also should not be containers for VMs.
18:40:03 <lbragstad> jamielennox what are the things you don't think are right?
18:40:10 <notmorgan> that is specific to project(s)
18:40:14 <henrynash> jamielennox: conceptually a domain is a name-space…why shouldn’t a customer have more than one?
18:40:19 <ayoung> notmorgan, I can accept that for now
18:40:35 <stevemar> henrynash: they can have more than one, just another top level one
18:40:42 <notmorgan> henrynash: why does it need to be wedged under another domain?
18:40:46 <notmorgan> henrynash: that is my question
18:40:56 <stevemar> notmorgan: so they get one bill?
18:40:56 <notmorgan> henrynash: rather than just giving them another top-level domain
18:41:04 <notmorgan> stevemar: -2 for that reasoning
18:41:08 <notmorgan> stevemar: :P
18:41:12 <jamielennox> lbragstad, henrynash: we've had people now for a while using things like the v3policy sample thing to do things like domain administration - i think that it is too big a conceptual leap to tell people now that domains can be used everywhere
18:41:14 <stevemar> (i was just joking)
18:41:16 <henrynash> notmorgan: since otherwise it would have to be unique across all other top-level domains…that just won’t work in public cloud
18:41:34 <notmorgan> henrynash: why can't it be unique?
18:41:57 <henrynash> notmorgan: plus, we need to allow a customer’s admin to crud them without reference to the cloud admin
18:42:03 <jamielennox> i think it will confuse everyone, and there are things like domain specific backends and users owned by domains which make this too hard to sell
18:42:10 <henrynash> notmorgan: I don’t know how to do that at the top level
18:42:17 <notmorgan> henrynash: so assign a customer "admin" that has access across domains
18:42:49 <henrynash> notmorgan: but the cloud admin would have to create each domain…..that won’t scale in public cloud
18:42:49 <notmorgan> henrynash: and crm solutions can map multiple domains to a single account
18:43:05 <notmorgan> henrynash: we're adding tons of cruft in keystone that is a CRM issue.
18:43:27 <bknudson_> how does this work with domain-specific backends?
18:43:38 <henrynash> notmorgan: how would you avoid the cloud admin being involved?
18:43:50 <bknudson_> I haven't read the spec
18:43:52 <notmorgan> wouldn't you ask for another domain via the CRM solution?
18:44:01 <ayoung> I could see a pattern, though, where people used a common name for projects and used domain nesting for everything namespacey
18:44:04 <henrynash> bknudson_: don’t know, haven’t thought that bit through yet
18:44:16 <ayoung> like /d1/d2/d3/pro  and /d4/d5/d6/pro
18:45:00 * notmorgan shrugs.
18:45:05 <stevemar> i think this is an example of putting the cart before the horse, we are making a lot of assumptions about how things are deployed and how operators do things. i'd prefer to wait and get feedback, like, will they actually use this?
18:45:12 <notmorgan> i think dolphm and I are mostly on the same point for this.
18:46:30 <stevemar> next topic?
18:46:40 <stevemar> the conversation has slowed down
18:46:44 <henrynash> not sure where that leaves this…you saying punt?
18:46:46 <ayoung> is it me?
18:46:57 <notmorgan> i think from a business logic standpoint, this could be automated as is. public clouds have extra mechanisms and toolchains that private clouds don't
18:46:57 <ayoung> samueldmq_, you are up
18:47:01 <notmorgan> and need it.
18:47:20 <samueldmq_> ayoung: just waiting for topic to change
18:47:24 <stevemar> henrynash: i think you need more advocates :P
18:47:31 <notmorgan> but like i said, i won't block this - i am voicing a general dislike of it, but will support as long as our APIs and promises remain
18:47:37 <stevemar> the last 2 topics are short
18:47:48 <henrynash> notmorgan: understand
18:47:59 <henrynash> ok
18:48:01 <jamielennox> i'm still -1 but won't fight it if overruled
18:48:25 <gyee> my head still hurts
18:48:28 <jamielennox> i think the conceptual model is wrong
18:48:31 <ayoung> stevemar, change the topic.
18:48:36 <henrynash> gyee: stay off the Babycham
18:48:43 <gyee> hah
18:48:48 <jamielennox> and every time we hack around something to get it to work we end up supporting it forever
18:48:50 <stevemar> i think we need buy in from ops, i want to make sure it'll be useful for someone before adding a metric ton of code
18:49:01 <henrynash> stevemar: ok, fair point
18:49:16 <samueldmq_> stevemar: ++
18:49:19 <stevemar> handwavey usecases are OK for smaller features :P
18:49:27 <stevemar> #topic Migrate API Ref Docs
18:49:32 <stevemar> samueldmq_: ^
18:49:49 <ayoung> samueldmq, nice work there
18:49:52 <samueldmq> so...
18:49:59 <ayoung> you just looking for reviews?
18:50:00 <samueldmq> as everyone knows, we are migrating api-ref docs to our repo
18:50:13 <samueldmq> I've put this to gather some love on those patches
18:50:18 <samueldmq> ayoung: yes
18:50:22 <ayoung> ++
18:50:26 <ayoung> I'll start looking
18:50:26 <samueldmq> it's basically migrating the docs from api-ref repo
18:50:40 <ayoung> makes total sense.  shoulda happened years ago
18:50:43 <samueldmq> and then followup work will include updating them/making them better
18:51:11 <dolphm> samueldmq: it's been a couple weeks, but the patches i've looked at have been a little challenging to review because there's far more going on than just a copy paste from one repo to another
18:51:16 <samueldmq> it'd be nice to get it merged, so we keep working on improving them
18:51:27 <bknudson_> the gate-keystone-api-ref link should work, but it's not working on https://review.openstack.org/#/c/322131/ for some reason
18:51:40 <dolphm> and it wasn't obvious wherever i stepped in how to review the (published / built) result of the change overall
18:51:48 <stevemar> bknudson_: it's old
18:51:53 <samueldmq> dolphm: it's basically copy&paste and minor formatting (such as removing lots of blanklines)
18:51:54 <stevemar> bknudson_: 3 weeks now
18:52:10 <dolphm> stevemar: can we just recheck it then?
18:52:34 <stevemar> dolphm: just rebase
18:52:36 <ayoung> dolphm, can we accept the changes with some sort of caveat that they might have drifted from the implementation? that is probably true now anyway of the old docs
18:53:03 <samueldmq> bknudson_: I will check on that with infra if there's something wrong in our gate
18:53:12 <jamielennox> ayoung: would be nicer for the first commit to just be a direct copy and then we can look at cleanups individually
18:53:16 <stevemar> dolphm: rebased
18:53:32 <stevemar> samueldmq: it's just an old build, they remove the results after a few days
18:53:43 <bknudson_> Also, I posted comments on https://review.openstack.org/#/c/322131/ and they weren't addressed... not sure why I +1 rather than +2.
18:53:44 <stevemar> bknudson_: the generated results do work
18:53:49 * samueldmq nods
18:53:51 <ayoung> This commit polishes the results from the conversion and migrates the v2 docs to our repository under the 'api-ref/source' directory. Missing parameter definitions were added. Polishing the generated RST files includes: - Removing unnecessary blank lines; - Removing empty references. Polishing the generated RST files does not include: - Modifying their content; - Modifying file names; - Wrapping lines at the maximum of 79 chars. Updating the documentation will be done after this migration step.
18:53:58 <ayoung> I think that is acceptable
18:54:05 <ayoung> need that to pass the parser, right?
18:54:31 <samueldmq> bknudson_: I will look at it, sorry had missed your comments
18:54:31 <stevemar> would the team be up for a 2 day sprint to get the API ref matching what we have in keystone-specs?
18:54:56 <ayoung> stevemar, um...no
18:55:04 <stevemar> i think the nova team did something similar
18:55:07 <stevemar> ayoung:  :(
18:55:09 <ayoung> 2 days polishing all of the v2 docs?
18:55:12 <bknudson_> the nova team did a sprint and it works pretty well
18:55:13 <rderose> stevemar: I would :)
18:55:13 <ayoung> I can't justify that
18:55:26 <stevemar> ayoung: it would be the v3 docs and v3 "extensions" too
18:55:41 <samueldmq> ayoung: most are v3 docs
18:55:57 <stevemar> any other folks interested?
18:55:58 <samueldmq> stevemar: that'd be awesome to have
18:56:00 <bknudson_> guess what -- everyone still uses v2.
18:56:09 <ayoung> is there really 2 days worth of work there for the whole team?
18:56:11 <notmorgan> bknudson_: it's true
18:56:29 <samueldmq> I can split the migration in 2: i) move as it is (only fixing things that break) then ii) polish it (remove blank lines, etc)
18:56:33 <notmorgan> we should spend time on this - it is important.
18:56:36 <dolphm> i'm happy to do reviews
18:56:42 <ayoung> bknudson_, not based on RH product.  We are pushing v3 due to Federation needs
18:56:44 <ayoung> and domains
18:56:57 <dolphm> as long as it's not next week :P
18:56:58 * samueldmq nods
18:57:10 <ayoung> 3 minutes...and a related topic...
18:57:19 <stevemar> dolphm: okay i'll propose one for the week after next
18:57:30 <stevemar> #topic Troubleshooting Doc
18:57:30 <notmorgan> ayoung: we can't only be concerned with single vendors involved in openstack we have to support all users [and this impacts end users, who are more important than specific vendors]
18:57:30 <lbragstad> what about the week before the midcycle?
18:57:39 <stevemar> ayoung: ^
18:57:46 <stevemar> lbragstad: i'll send something out to mailing list
18:57:50 <dolphm> lbragstad: that's the week steve is referring to
18:57:51 <lbragstad> stevemar ++
18:57:52 <ayoung> notmorgan, just saying that "not everyone uses v2.0"
18:57:53 <stevemar> i'll propose a day
18:58:05 <stevemar> dolphm: indeed it is!
18:58:07 <ayoung> stevemar, I'll be on Vacation that week
18:58:15 <stevemar> ayoung: perfect timing! :)
18:58:18 <ayoung> heh
18:58:27 <gyee> lets get started on dolphm's keystone doctor already! forget troubleshooting docs
18:58:30 <gyee> time to automate
18:58:32 <ayoung> OK...last topic
18:58:34 <dolphm> gyee: lol
18:58:36 <ayoung> no need to change the topic
18:58:39 <stevemar> i did
18:58:40 <ayoung> Troubleshooting docs
18:58:40 <lbragstad> gyee ++
18:58:41 <samueldmq> ayoung: 2 minutes on your topic
18:58:53 <stevemar> ayoung: you're up bud
18:58:54 <ayoung> We have an etherpad....what should we do next
18:59:00 <stevemar> #link https://etherpad.openstack.org/p/keystone-troubleshooting
18:59:11 <stevemar> let's populate it and move it to docs
18:59:12 <ayoung> I think that Keystone troubleshooting is the #1 interrupt I receive
18:59:13 <notmorgan> ayoung: get an officially published doc
18:59:16 <lbragstad> can we make it a list of things we should check with keystone-doctor?
18:59:22 <dolphm> ayoung: ++
18:59:27 <stevemar> lbragstad: both can co-exist
18:59:30 <ayoung> do we want that in the Keystone tree?  And as a single doc?
18:59:32 <dolphm> ayoung: "why 401?"
18:59:33 <notmorgan> convert to rst and get it on docs :)
18:59:34 <lbragstad> stevemar cool
18:59:39 <ayoung> It seems like a very fluid
18:59:44 <notmorgan> ayoung: or we get another repo
18:59:48 <ayoung> and constantly changing doc
18:59:49 <notmorgan> which would make a lot of sense
18:59:57 <stevemar> keeping in docs is OK to me
18:59:57 <bknudson_> the troubleshooting doc uses the keystone cli.
19:00:01 <ayoung> could it ... go in the specs repo?
19:00:11 <gyee> lot of them can be automated
19:00:18 <ayoung> I know that sounds weird, but specs always seemed like "the pre-docs" repo
19:00:30 <dolphm> lbragstad: maybe file wishlist bugs tagged with "doctor" or something?
19:00:39 <stevemar> we're out of time!
19:00:41 <henrynash> (time)
19:00:42 <ayoung> I wanted them on the Wiki originally, as that is the right degree of responsiveness, but the Wiki is dead
19:00:43 <lbragstad> dolphm yeah - that would be a good idea too
19:00:43 <stevemar> #endmeeting