18:00:16 <dolphm> #startmeeting keystone
18:00:17 <openstack> Meeting started Tue Jun 17 18:00:16 2014 UTC and is due to finish in 60 minutes.  The chair is dolphm. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:20 <openstack> The meeting name has been set to 'keystone'
18:00:30 <dolphm> #topic Juno hackathon
18:00:39 <morganfainberg> yay hackathon!
18:00:47 <stevemar> ahoy
18:00:49 <dolphm> so just one quick update that should hopefully be inconsequential for the most part...
18:01:08 <dolphm> we've completely moved the event to the *other* geekdom building @ 110 E Houston St, San Antonio, TX 78205
18:01:19 <dstanek> dolphm: is that the old or new one?
18:01:21 <dolphm> which is a tiny bit closer to the Valencia
18:01:24 <bknudson> what happened?
18:01:27 <dolphm> dstanek: this is Geekdom's new space
18:01:31 <lbragstad> #link https://www.google.com/maps/place/110+E+Houston+St/@29.4262771,-98.4935065,17z/data=!3m1!4b1!4m2!3m1!1s0x865c5f52c3e335c1:0xaa931eb0dee8ded4
18:01:40 <boris-42> morganfainberg hackathon hmm interesting
18:01:43 <dolphm> bknudson: the event space was too big for us, i suppose?
18:01:46 <stevemar> ty lbragstad
18:02:14 <stevemar> dolphm, is the valencia still a good hotel spot?
18:02:17 <ayoung> I was just going to follow the crowd there from the Hotel anyway
18:02:21 <ayoung> stevemar, this is "closer"
18:02:27 <dolphm> stevemar: yes, see the map on:
18:02:28 <dolphm> #link http://dolphm.com/openstack-keystone-hackathon-for-juno/
18:02:32 <dstanek> dolphm: ah, nice. yes, closer.
18:02:42 * morganfainberg needs to book flight to SAT.
18:02:45 <stevemar> neato
18:03:02 * stevemar also needs to book a flight to SAT
18:03:14 <dstanek> is the barbican one at this location too?
18:03:26 <dolphm> dstanek: yeah, we'll be colocating with barbican on wednesday
18:03:34 <morganfainberg> cool
18:03:58 <morganfainberg> i'll be there for wed -> fri. (won't make it for barbican mon/tues)
18:04:17 <dstanek> cool, i'm planning on attending barbican too - i'll be in SAT for the whole week
18:04:24 <dolphm> dstanek: oh cool
18:04:51 <dolphm> #topic Spec Approval Requirements
18:05:06 <dolphm> morganfainberg: i can't remember if you put this on the agenda for this week, or if i left it on from last week?
18:05:29 <morganfainberg> i changed it
18:05:30 <morganfainberg> actually
18:05:33 <morganfainberg> right before the meeting >.>
18:05:40 <morganfainberg> but it should be spec approval deadlines for juno
18:05:41 <bknudson> There's a Ripley's in san antonio
18:05:59 <morganfainberg> do we want a deadline for spec approvals
18:06:02 <morganfainberg> where if it isn't in - we aren't accepting it. i don't mean the J2 code thing
18:06:21 <lbragstad> Nova is doing a deadline of the first milestone I believe... They also have a ton of specs to review
18:06:28 <morganfainberg> or is J2 the deadline
18:06:36 <dolphm> morganfainberg: like, you need to land a spec in j1 to land an implementation in j2 or j3?
18:06:39 <bknudson> seems like if it can get in in time then that's the deadline
18:06:45 <morganfainberg> dolphm, basically
18:06:59 <morganfainberg> dolphm, do we want a hard cut off of "look the spec isn't going through this cycle"
18:07:08 <dolphm> morganfainberg: obviously we can't meet that for j2, but i'd love to say that if you don't land a spec by the end of j2, you're missing juno completely
18:07:26 <ayoung> dolphm, I would say that the deadline for spec approval would vary depending on whether it is an API change or not
18:07:29 <morganfainberg> dolphm, ++ works for me
18:07:36 <ayoung> for API change, code still needs to be in J2
18:07:45 <ayoung> spec approval needs to lead that, but probably not by much
18:08:02 <ayoung> but why not do the same for J3?
18:08:11 <dolphm> ayoung: this would mean that API specs need to be proposed and accepted by milestone 1, to be implemented in milestone 2
18:08:22 <ayoung> dolphm, not what I'm saying
18:08:28 <morganfainberg> this was more of a "should we have a spec deadline" and if so what is it, question
18:08:37 <ayoung> dolphm, I'm saying :  spec needs to be approved before the code, but not by much
18:08:43 <ayoung> I say "no"
18:08:48 <ayoung> we have code deadlines only
18:09:04 <morganfainberg> this is to direct how we prioritise reviewing specs as well
18:09:06 <ayoung> and specs need to be progressing to keep the code moving
18:09:15 <morganfainberg> if it's only code deadlines we need to keep eyes on specs 100% of the time.
18:09:26 <ayoung> we can try to lead things more in K, L and M....
18:09:31 <morganfainberg> else, we can say specs (not modifications, new specs) are only from X to Y
18:09:53 <ayoung> how about a cutoff for submitting new specs?
18:09:56 <dolphm> ayoung: right, we're talking more about j3 and beyond
18:10:49 <stevemar> we should get some leeway this release, i doubt anyone was thinking the specs would take this long...
18:10:54 <dolphm> we're already into j2, and we have mature specs that can realistically land in j2. but moving forward, the hard cutoff for submitting new specs can simply be a milestone ahead
18:11:06 <morganfainberg> dolphm, i like it
18:11:08 <stevemar> thats fair
18:11:57 <dolphm> so, the earliest this type of policy could take effect is for j3: to land a change in j3 (and therefore juno), the spec should land before the J2 deadline (whatever the date is in late july)
18:12:12 <ayoung> stevemar, I knew specs would take this long
18:12:28 <ayoung> look at the team, and how detail oriented we've been on past things
18:12:35 <ayoung> did you really think specs would be any different?
18:12:46 <ayoung> dolphm, ++
18:12:47 <stevemar> ayoung, i was hoping..
18:13:00 <stevemar> i was never really 100% *for* specs to begin with
18:13:06 <stevemar> but we got what we got :)
18:13:20 <dolphm> stevemar: some specs will move fast. highly impactful ones will move slowly (unfortunately, most of the currently proposed ones are big :)
18:13:44 <stevemar> dolphm, i suppose
18:13:52 <ayoung> stevemar, I'm kind of with you on that.  But I think that the review process is valuable; it's just that we are really, really paranoid
18:14:12 <stevemar> did we decide on a way to approve them yet? last week was voting in a meeting, but some went through +2/+A process
18:14:19 <bknudson> those of us who have to deal with security vulnerabilities are paranoid for a reason
18:14:32 <morganfainberg> stevemar, vote is the general thumbs up/thumbs down.
18:14:34 <ayoung> stevemar, I pushed for the vote just to get people looking at them and saying "I'm comfortable with this"
18:14:44 <morganfainberg> stevemar, after that it's 2x+2/+A (as per last meeting)
18:14:49 <ayoung> bknudson, I know.  I'm backporting a patch to Grizzly as we speak
18:14:52 <stevemar> morgabra, cool!
18:15:02 <dolphm> morganfainberg: you need to change your nick now
18:15:07 <morganfainberg> dolphm, damn it
18:15:13 <bknudson> ayoung: me too... we need to get the community to support older releases.
18:15:22 <bknudson> it's probably the same one
18:15:35 <ayoung> bknudson, yep...lets compare notes after the meeting
18:15:36 <stevemar> hehe, whoops! morganfainberg not morgabra
18:15:57 <morganfainberg> bknudson, there was talk about increasing the length of support for icehouse iirc
18:15:58 <morganfainberg> bknudson, don't know if we will resurrect grizzly though
18:16:09 <morganfainberg> or icehouse and havana
18:16:13 <bknudson> no, we won't resurrect grizzly
18:16:34 <bknudson> maybe on github
18:17:07 <dolphm> #action specs must be approved ahead of the milestone in which they are to be targeted/merged. for example, a j3 feature must have an approved spec land no later than the deadline for j2.
18:17:20 <bknudson> does approved here mean merged?
18:17:27 <bknudson> or voted on in the meeting?
18:17:46 <dolphm> bknudson: merged is what i meant
18:17:57 <morganfainberg> bknudson, merged.
18:18:04 <morganfainberg> juno has some extra leeway because we started late though.
18:18:18 <dolphm> juno-2
18:18:27 <morganfainberg> dolphm, ++ juno2
18:18:42 <dolphm> we'll go case-by-case for juno-2 features, but i don't think there will be any surprises
18:18:44 <dolphm> #topic Criteria for next release of client
18:18:48 <dolphm> ayoung: o/
18:18:59 <ayoung> dolphm, thanks
18:19:07 <bknudson> let's try not to break everyone with the next release of the client
18:19:19 <dolphm> bknudson: s/next/any/
18:19:26 <ayoung> OK,  so there are a bunch of changes that need to land for features in Juno.  Want to make sure we are aligned in purpose
18:19:49 <ayoung> Split of middleware repo?
18:19:55 <ayoung> morganfainberg, you've been working on that
18:20:01 <ayoung> what do we think is the timeline?
18:20:16 <dolphm> there's a proposed spec for that which hasn't merged :)
18:20:27 <morganfainberg> ayoung, well, it's not hard to split it.
18:20:41 <ayoung> morganfainberg, what are the practical ramifications of splitting it?
18:20:57 <morganfainberg> ayoung, it's the testing and release process that is going to be hard
18:20:59 <morganfainberg> for the first round
18:21:02 <morganfainberg> lots of code shuffling
18:21:07 <morganfainberg> i would say this is not a next release thing
18:21:17 <morganfainberg> this is a "we will get it done in the Juno cycle" thing.
18:21:19 <ayoung> morganfainberg, do we want it in place for Juno?
18:21:36 <ayoung> and, if so, what do we try to sync it around?
18:21:38 <bknudson> 6663e13 Make get_oauth_params conditional for specific oauthlib versions
18:21:46 <bknudson> oh, that was a test issue
18:21:53 <ayoung> bknudson, isn't that in already?
18:21:59 <morganfainberg> client and middleware will release the same way. separate from the named releases
18:22:00 <ayoung> I'm more concerned about pending reviews
18:22:14 <morganfainberg> ayoung, i'd rather work on this in J3.
18:22:18 <ayoung> morganfainberg, ++
18:22:20 <bknudson> I was checking if there was a reason for a release now. I don't think there is.
18:22:25 <gyee> yeah, there are a bunch of client reviews that have impact on integration
18:22:26 <ayoung> OK  so...let me skip to audit for a second then
18:22:29 <morganfainberg> ayoung, make sure we don't block feature work.
18:22:39 <ayoung> audit is gordon chung's work
18:22:47 <ayoung> and we were talking about putting it in to the client
18:22:57 <morganfainberg> ayoung, and that (iirc) is going in oslo?
18:22:58 <bknudson> middleware?
18:23:00 <ayoung> does it make sense to wait until after the split to get that in, as it is middleware?
18:23:28 <bknudson> seems like it would be a mistake to put it in keystoneclient and then move it to middleware
18:23:45 <ayoung> morganfainberg, we had discussed putting the audit middleware in keystoneclient, with an eye to a longer term integration between policy and audit
18:23:57 <morganfainberg> ayoung, i can do the split earlier if we need to split first
18:24:07 <ayoung> bknudson, ++ which is why I was thinking of pushing for the split sooner
18:24:14 <ayoung> also to flush out the issues with the split
18:24:29 <morganfainberg> it's basically just putting a moratorium on new changes to the middleware/affecting the middleware until the split is complete
18:24:39 <morganfainberg> and we have a 1st release of the new pypi package
18:24:47 <morganfainberg> and global reqs update
18:24:58 <bknudson> I don't know if I can stop myself from fiddling with auth_token middleware.
18:24:58 <ayoung> morganfainberg, maybe we get it that far:
18:25:13 <gyee> is there a compelling reason to do the split earlier? seem like our priority should be on keystoneclient integration
18:25:29 <ayoung> bknudson, we have a lot of changes queued up for that.  It might be easier to postpone moving AT to the middleware repo until we have it completely refactored
18:25:45 <ayoung> gyee, ++
18:25:54 <dolphm> who's creating the new repo? morganfainberg?
18:25:55 <bknudson> although it's no problem for me to move a review from python-keystoneclient to a different repo
18:25:58 <morganfainberg> dolphm, yes.
18:26:13 <dolphm> morganfainberg: can you fork keystoneclient to do so?
18:26:19 <ayoung> bknudson, so let's hold off on moving auth_token, but we have other middleware we could move
18:26:25 <ayoung> just to seed the new repo
18:26:33 <bknudson> I think there's a script that oslo uses for moving stuff to a lib
18:26:36 <bknudson> not sure what it does
18:26:36 <morganfainberg> dolphm, it is extracting the middleware and the history
18:26:39 <gyee> s3, nobody cares about that one :)
18:26:42 <morganfainberg> dolphm, it's not even a fork
18:26:57 <morganfainberg> it's just not having to propose the same patches over and over again to multiple places.
18:27:12 <morganfainberg> gyee, s3 i'm not super worried about changes coming in atm
18:27:25 <dolphm> morganfainberg: it's the history i wanted to keep; we'll talk after
18:27:33 <ayoung> ok, so here is the ordering then, that I think we need to focus on
18:27:34 <morganfainberg> dolphm, yeah we will keep the history
18:27:39 <morganfainberg> dolphm, that is part of the plan
18:27:39 <ayoung> 1.  kerberos and session
18:27:48 <ayoung> 2.  auth_token using revocation events
18:27:54 <ayoung> 3. Split repo
18:27:57 <ayoung> 4. audit
18:27:58 <morganfainberg> dolphm, i'll be using the oslo-incubator method for maintaining it and extracting the files
18:28:02 <gyee> ayoung, makes sense
18:28:12 <gyee> may want to swap 3. and 4.
18:28:20 <gyee> I think audit can happen before split repo
18:28:28 <stevemar> what's there to do for auditing on the client side?
18:28:35 <ayoung> gyee, I thought we said it makes no sense to put audit into client, just to move it to middleware?
18:28:52 <ayoung> gyee, true.
18:29:00 <gyee> but they are independent
18:29:14 <dolphm> morganfainberg: ack
18:29:15 <ayoung> gyee, the difference will just be in pipeline definition
18:29:17 <bknudson> the audit middleware move is essentially the same as all the other middleware, so maybe not that big of a deal
18:29:39 <bknudson> I assume they're just going to import from the new repo so that it's backwards compat
18:29:39 <ayoung> filter = keystoneclient.Audit vs. keystonemiddleware.Audit
18:30:04 <dolphm> what is this the priority of, exactly? these all seem like parallelizable (is that a word?) efforts
18:30:05 <bknudson> but it does seem like a waste to have to support the audit middleware import
18:30:28 <ayoung> OK...so...go and review client code, please
18:30:35 <ayoung> dolphm, that's all I got on that
18:30:43 <ayoung> dolphm, not quite
18:30:44 <morganfainberg> the only concern is that preserving history is a one-shot deal; it's the initial import
18:30:48 <dolphm> #action review client code
18:30:59 <ayoung> session needs to go in before the revoke events can go in
18:31:10 <gyee> ayoung ++
18:31:11 <ayoung> but beyond that, yeah, parallelizable, just prioritization
18:31:17 <dolphm> ayoung: are those reviews dependent?
18:31:27 <morganfainberg> so i can't preserve history on a separate bit of code w/o resubmitting all the patches unless it's done at the split time.
18:31:40 <gyee> dolphm, client integration depends on session
18:31:43 <dolphm> morganfainberg: fair enough
18:31:50 <bknudson> I'd expect during the split there's no merging
18:32:01 <stevemar> ayoung, are there any patches / blueprints for audit?
18:32:04 <ayoung> dolphm, not all of them, but 'session in auth_token' and 'revocation in auth_token' are: session needs to be there for revoke to use
18:32:23 <gyee> yeah, that too
18:32:27 <morganfainberg> bknudson, that's the idea, it should be maybe 1wk to get the split done (based on infra needing to approve the new repo)
18:32:31 <morganfainberg> bknudson, worst case.
18:32:37 <ayoung> stevemar, no, as it is existing code, just giving it a place to live
18:32:45 <bknudson> I think they do it on fridays
18:32:59 <morganfainberg> bknudson, yep, that's why i said worst case, a week
18:32:59 <morganfainberg> :)
18:33:14 <gyee> does that include the weekend?
18:33:30 <morganfainberg> gyee, depends on when everything is done
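
For context on the pipeline change ayoung sketches above (18:29): the deployment-visible difference after the split should just be the import path of each filter in a service's api-paste.ini. A minimal sketch, assuming the new keystonemiddleware package keeps the same filter_factory entry points as keystoneclient.middleware (section and module names here are illustrative):

    [filter:authtoken]
    # before the split, the middleware ships inside the client package:
    # paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    # after the split, the same factory lives in the new package:
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory

This is also the backwards-compat path bknudson assumes above: if keystoneclient keeps thin modules that import from the new repo, existing paste configs keep working until operators update them.
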
18:33:45 <dolphm> #topic Dynamic fetch of Policy
18:33:46 <dolphm> ayoung: o/
18:33:57 <ayoung> OK...so, there are a few conflicting goals
18:34:07 <bknudson> I believe we read the policy file on every op now?
18:34:12 <dolphm> ayoung: can you move this off the wiki and into a spec?
18:34:32 <ayoung> dolphm, I will once I have a clear view. I started a new spec that I think I am going to trash, though
18:34:54 <ayoung> the goal is for an endpoint to get its policy from keystone
18:35:07 <dolphm> which is really an oslo.policy topic first
18:35:07 <ayoung> that leads to a couple questions
18:35:33 <ayoung> 1.  How does Keystone/ or the endpoint know which policy to serve
18:35:56 <ayoung> I posted a spec to make it an explicit link, but I think that goes against the current thinking
18:36:08 <ayoung> right now, a policy file is uploaded, and gets a UUID
18:36:15 <ayoung> there is no association between that UUID and anything
18:36:41 <ayoung> so, dolphm recognized that there is no need to know "what service", as all of the policy rules are prefixed by the service name
18:36:47 <ayoung> identity for keystone rules, and so forth
18:36:52 <lbragstad> so each endpoint has a policy, /endpoint/<endpoint_id>/policy/<policy_id>/
18:36:53 <henrynash> ayoung: quickie… do you mean endpoint or service? …so this is to support an endpoint in region A having different policies to an endpoint of the same service in Region B?
18:36:55 <ayoung> we could, in theory, just lump them all together
18:37:04 <ayoung> henrynash, I mean both
18:37:07 <ayoung> henrynash, that is one goal
18:37:15 <henrynash> ayoung: ok, got it
18:37:25 <ayoung> to be able to send a different policy to different endpoints for the same service
18:37:34 <ayoung> henrynash, so  the rule  is something like:
18:37:55 <ayoung> if there is an explicit something for an endpoint, serve that, otherwise, use the global
18:37:57 <ayoung> but...
18:38:09 <ayoung> endpoints don't know who they are
18:38:22 <ayoung> we don't set an endpoint ID in the config file
18:38:28 <ayoung> we give them a user
18:38:37 <ayoung> and all operations from that endpoint use that user id
18:38:48 <henrynash> ayoung: the big obvious advantage I see is that you can have different policies for, say, a production region/AZ vs a test region/AZ
18:38:55 <ayoung> now, with henrynash 's incipient patch, we have a clean way to segregate the service users
18:39:10 <ayoung> henrynash, so I was thinking we could link policy to the service user
18:39:31 <dolphm> we discussed this at the summit in terms of hierarchical multitenancy as well, with the notion that all policies between the project in context and the root/null project would apply equally (you'd have to match any rule that appeared in any of them... in other words, they're all AND'ed together)
18:39:47 <ayoung> henrynash, and fetch policy based on the project for that service user.
18:39:53 <dolphm> which means that project-specific policy can only narrow the parent's policy
18:40:07 <ayoung> dolphm, so...to avoid confusion
18:40:28 <ayoung> I am not shooting for "project specific  policy" where the project means the project containing the resource
18:40:36 <ayoung> only the "project" for the service users for now
18:40:58 <ayoung> and I am not planning on doing stackable policy right now
18:41:10 <ayoung> unless it is the only way to get this implemented
18:41:27 <dolphm> ayoung: well if you're thinking in that direction, then it wouldn't be the service user's project, it would be the requesting user's project
18:41:40 <dolphm> the service user's project is basically meaningless today anyway
18:42:39 <ayoung> dolphm, right.  But we also don't fetch policy at all.  So how would we know what the root policy is?
18:43:31 <ayoung> dolphm, are you suggesting that we jump right to stackable, and that we fetch policy based on the requesting users project?
18:43:35 <bknudson> do the services know what their endpoint is now?
18:43:52 <ayoung> bknudson, no
18:43:53 <dolphm> ayoung: if project_id was a first class attribute on policies, then you could also do something like GET /v3/policies?project_id=
18:43:58 <morganfainberg> bknudson, i don't think so.
18:44:10 <ayoung> dolphm, yes.  That was my thinking
18:44:22 <henrynash> ayoung: one issue we get a lot from customers is how we can use roles to determine where a user can deploy resources from a given project to a given region/AZ… (i.e. Adam is special so he can deploy anywhere, but Henry's a tester so he can only deploy to the test region)… anything that can help with that in a clean fashion would be great, and I'd help with it
18:44:36 <ayoung> dolphm, the only issue then becomes how do we merge policy for different services
18:44:48 <ayoung> take the status quo
18:44:56 <dolphm> ayoung: that sounds like a one time manual process
18:45:04 <ayoung> we have policy.json checked in to the repo for each project
18:45:04 <dolphm> "building a policy for your deployment"
18:45:25 <ayoung> so are we going to provide a global policy file?
18:45:46 <gyee> hierarchical multipolicy
18:46:12 <ayoung> it means that each of the openstack services no longer owns the default policy for itself, and that seems wrong
18:46:21 <ayoung> I would think that Keystone should provide:
18:46:28 <ayoung> 1.  Basic policy enforcement
18:46:33 <ayoung> 2.  Basic policy fetch
18:46:45 <ayoung> 3.  Default rules for is_admin and the like
18:46:51 <ayoung> and 4.  rules for identity
18:47:04 <ayoung> then the other services would extend the base policy
18:47:17 * jamielennox arrives - sorry
18:47:27 <ayoung> there is also the question of all of the *aaS  new projects
18:47:33 <ayoung> what about something that is just incubated?
18:48:01 <ayoung> I think we want to keep policy management separated by service
18:48:07 <bknudson> service users are created in keystone (by devstack), projects could create their policy the same way
18:48:30 <gyee> whahhh? policy as a separable service?
18:48:37 <ayoung> no
18:49:03 <dstanek> gyee: each service having its own policy
18:49:09 <ayoung> gyee, just that base policy editing for Nova is done by Nova team, and base policy editing for Glance is done by the glance team
18:49:12 <henrynash> dstanek: ++
18:49:23 <gyee> come to think of it, why not?
18:49:37 <ayoung> I think there is a benefit, though, to dolphm's idea of merging them into a single file
18:49:51 <ayoung> as it allows for reuse of rules
18:50:13 <bknudson> devstack could just `openstack policy-add nova/policy.json`
18:50:22 <gyee> if you look at oslo policy code, it has hooks for remote policy enforcement
18:50:39 <ayoung> bknudson, so...if we keep them as separate blobs, that will work
18:50:46 <gyee> just saying
18:51:02 <ayoung> it just means that when we fetch, we have to get the correct blob for  the service, and that is what I am aiming to lock down
18:51:14 <ayoung> what does the API look like for fetching policy?
18:51:32 <bknudson> nova has policy ID in its .conf, and fetches that one
18:51:40 <dolphm> gyee: oh that's new to me
18:51:45 <bknudson> or are we trying to avoid that?
18:51:47 <ayoung> is it by endpoint id or is it by service user project?
18:52:00 <gyee> dolphm, I remember it supports http brain or something like that
18:52:34 <gyee> but I haven't dug into that code for a while
18:52:37 <ayoung> bknudson, if you do that, you have to admin each endpoint separately.
18:52:59 <ayoung> i.e. I want policy to change for endpoint E: I first upload the new policy file, then go and edit the config file on E
18:53:03 <ayoung> that seems wrong
18:53:08 <ayoung> just put it in E
18:53:21 <ayoung> it only makes sense if we want keystone to be the policy store
18:53:22 <dolphm> (5 minutes left, and we have a few updates from henrynash to cover)
18:53:32 <ayoung> if we don't, let's deprecate policy, but I think we do want it
18:53:46 <lbragstad> take this topic to -keystone after the meeting?
18:53:55 <dolphm> ++ and -specs! :)
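
A rough sketch of the fetch half of this discussion, assuming the existing v3 policy API (GET /v3/policies/{policy_id}, which returns the stored blob); how an endpoint learns which policy_id is its own is exactly the open question above, and the URL, token, and function name below are placeholders:

    import json

    import requests

    KEYSTONE = 'http://keystone.example.com:35357/v3'  # placeholder
    TOKEN = 'SERVICE_USER_TOKEN'                       # placeholder

    def fetch_policy(policy_id):
        # GET /v3/policies/{policy_id} returns {"policy": {"blob": ..., "type": ...}}
        resp = requests.get('%s/policies/%s' % (KEYSTONE, policy_id),
                            headers={'X-Auth-Token': TOKEN})
        resp.raise_for_status()
        # the blob is the policy.json content the endpoint would enforce locally
        return json.loads(resp.json()['policy']['blob'])

Whether the lookup key ends up being the endpoint id, the service user's project, or a project_id query as dolphm suggests, only the URL above changes; the enforcement side stays oslo policy as it is today.
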
18:53:58 <dolphm> #topic Multi-backend Unique Identifiers
18:54:01 <dolphm> henrynash: o/
18:54:05 <henrynash> ok, very quickly
18:54:07 <ayoung> dolphm, I'll write the spec once I know the scheme
18:54:15 <dolphm> #link https://review.openstack.org/#/c/100497
18:54:19 <henrynash> updated spec published and new patch to match
18:54:25 <dolphm> #link https://review.openstack.org/#/c/74214/ implementation
18:54:28 <henrynash> only two questions
18:54:54 <henrynash> 1) The patch is big since it includes all the mechanical unit test changes of moving ID generation from controller to manager…
18:55:23 <ayoung> don't split
18:55:27 <ayoung> it will delay.
18:55:33 <ayoung> It's not such a terrible patch.
18:55:34 <henrynash> …it would be a bit of a pain to split it up so that part goes in separately… but will do so if people feel that's important
18:55:50 <ayoung> the number of changes is irrelevant, as they are all mostly doing the same thing
18:56:02 <dolphm> ayoung: splitting will also get patches in more quickly with better review coverage / more stability
18:56:14 <morganfainberg> splitting would make the review much easier
18:56:14 <lbragstad> 2224 lines is a lot of patch
18:56:19 <dolphm> lbragstad: ++
18:56:20 <ayoung> disagree
18:56:28 <morganfainberg> but i am willing to review it as is. i would prefer it split
18:56:28 <ayoung> splitting at this point will make it harder
18:56:32 <ayoung> so what
18:56:32 <lbragstad> bite sized reviews will be easier
18:56:39 <ayoung> we still need to review 2224 lines
18:56:44 <ayoung> just in two patches as opposed to one
18:56:54 <lbragstad> but in functional change sets it's more manageable
18:57:04 <dstanek> for the patch in question i had to start splitting it up to understand it - i would vote for smaller patches
18:57:17 <dstanek> ayoung: it's about context and how much you have to keep in your head at once
18:57:25 <dolphm> dstanek: ++
18:57:29 <bknudson> there get to be too many comments on large reviews and reviewers get tired of looking at the same changes over and over again.
18:57:29 <henrynash> and question 2) is… should we salt the hash of the ID with something that is unique to each "cloud", in case anyone is worried about the same LDAP users in the default domain having the same ID from two clouds
18:57:45 <morganfainberg> a study found that peer review quality goes waaaay down over 400 lines of change
18:57:58 <lifeless> goes down before then
18:58:03 <dolphm> morganfainberg: i'd like a bot to -1 when we cross that threshold :)
18:58:05 <lifeless> but that's the cliff
18:58:27 <morganfainberg> lifeless, ++
18:58:29 <ayoung> henrynash, salt can be a follow on patch
18:58:38 <ayoung> I think it is a stand alone feature.
18:58:45 <dolphm> (time)
18:58:49 <ayoung> Let's not hold this up for it.
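
henrynash's question 2 in code form, as a minimal sketch: assume the public ID is a SHA-256 over the domain and backend-local ID (per the spec under review); the per-cloud salt and where it comes from are hypothetical:

    import hashlib

    def public_id(domain_id, local_id, salt=''):
        # without a salt, two clouds fronting the same LDAP generate identical
        # public IDs for the same user; mixing in a per-cloud value (e.g. a
        # keystone.conf option) makes the IDs diverge between clouds
        data = '%s%s%s' % (salt, domain_id, local_id)
        return hashlib.sha256(data.encode('utf-8')).hexdigest()

As ayoung says, the salt can be a follow-on patch: it only changes the input to the hash, not the mapping mechanics.
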
18:59:26 <dolphm> #endmeeting