18:00:16 #startmeeting keystone
18:00:17 Meeting started Tue Jun 17 18:00:16 2014 UTC and is due to finish in 60 minutes. The chair is dolphm. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:20 The meeting name has been set to 'keystone'
18:00:30 #topic Juno hackathon
18:00:39 yay hackathon!
18:00:47 ahoy
18:00:49 so just one quick update that should hopefully be inconsequential for the most part...
18:01:08 we've completely moved the event to the *other* geekdom building @ 110 E Houston St, San Antonio, TX 78205
18:01:19 dolphm: is that the old or new one?
18:01:21 which is a tiny bit closer to the Valencia
18:01:24 what happened?
18:01:27 dstanek: this is Geekdom's new space
18:01:31 #link https://www.google.com/maps/place/110+E+Houston+St/@29.4262771,-98.4935065,17z/data=!3m1!4b1!4m2!3m1!1s0x865c5f52c3e335c1:0xaa931eb0dee8ded4
18:01:40 morganfainberg hackathon hmm interesting
18:01:43 bknudson: the event space was too big for us, i suppose?
18:01:46 ty lbragstad
18:02:14 dolphm, is the valencia still a good hotel spot?
18:02:17 I was just going to follow the crowd there from the Hotel anyway
18:02:21 stevemar, this is "closer"
18:02:27 stevemar: yes, see the map on:
18:02:28 #link http://dolphm.com/openstack-keystone-hackathon-for-juno/
18:02:32 dolphm: ah, nice. yes, closer.
18:02:42 * morganfainberg needs to book flight to SAT.
18:02:45 neato
18:03:02 * stevemar also needs to book a flight to SAT
18:03:14 is the barbican one at this location too?
18:03:26 dstanek: yeah, we'll be colocating with barbican on wednesday
18:03:34 cool
18:03:58 i'll be there for wed -> fri. (won't make it for barbican mon/tues)
18:04:17 cool, i'm planning on attending barbican too - i'll be in SAT for the whole week
18:04:24 dstanek: oh cool
18:04:51 #topic Spec Approval Requirements
18:05:06 morganfainberg: i can't remember if you put this on the agenda for this week, or if i left it on from last week?
18:05:29 i changed it
18:05:30 actually
18:05:33 right before the meeting >.>
18:05:40 but it should be spec approval deadlines for juno
18:05:41 There's a Ripley's in san antonio
18:05:59 do we want a deadline for spec approvals
18:06:02 where if it isn't in - we aren't accepting it. i don't mean the J2 code thing
18:06:21 Nova is doing a deadline of the first milestone I believe... They also have a ton of specs to review
18:06:28 or is J2 the deadline
18:06:36 morganfainberg: like, you need to land a spec in j1 to land an implementation in j2 or j3?
18:06:39 seems like if it can get in in time then that's the deadline
18:06:45 dolphm, basically
18:06:59 dolphm, do we want a hard cut off of "look the spec isn't going through this cycle"
18:07:08 morganfainberg: obviously we can't meet that for j2, but i'd love to say that if you don't land a spec by the end of j2, you're missing juno completely
18:07:26 dolphm, I would say that the deadline for spec approval would vary depending on whether it is an API change or not
18:07:29 dolphm, ++ works for me
18:07:36 for API change, code still needs to be in J2
18:07:45 spec approval needs to lead that, but probably not by much
18:08:02 but why not do the same for J3?
18:08:11 ayoung: this would mean that API specs need to be proposed and accepted by milestone 1, to be implemented in milestone 2
18:08:22 dolphm, not what I'm saying
18:08:28 this was more of a "should we have a spec deadline" and if so what is it, question
18:08:37 dolphm, I'm saying: spec needs to be approved before the code, but not by much
18:08:43 I say "no"
18:08:48 we have code deadlines only
18:09:04 this is to direct how we prioritise reviewing specs as well
18:09:06 and specs need to be progressing to keep the code moving
18:09:15 if it's only code deadlines we need to keep eyes on specs 100% of the time.
18:09:26 we can try to lead things more in K, L and M....
18:09:31 else, we can say specs (not modifications, new specs) are only from X to Y
18:09:53 how about a cutoff for submitting new specs?
18:09:56 ayoung: right, we're talking more about j3 and beyond
18:10:49 we should get some leeway this release, i doubt anyone was thinking the specs would take this long...
18:10:54 we're already into j2, and we have mature specs that can realistically land in j2. but moving forward, the hard cutoff for submitting new specs can simply be a milestone ahead
18:11:06 dolphm, i like it
18:11:08 that's fair
18:11:57 so, the earliest this type of policy could take effect is for j3: to land a change in j3 (and therefore juno), the spec should land before the J2 deadline (whatever the date is in late july)
18:12:12 stevemar, I knew specs would take this long
18:12:28 look at the team, and how detail oriented we've been on past things
18:12:35 did you really think specs would be any different?
18:12:46 dolphm, ++
18:12:47 ayoung, i was hoping..
18:13:00 i was never really 100% *for* specs to begin with
18:13:06 but we got what we got :)
18:13:20 stevemar: some specs will move fast. highly impactful ones will move slowly (unfortunately, most of the currently proposed ones are big :)
18:13:44 dolphm, i suppose
18:13:52 stevemar, I'm kind of with you on that. But I think the review process is valuable; we're just really, really paranoid
18:14:12 did we decide on a way to approve them yet? last week was voting in a meeting, but some went through +2/+A process
18:14:19 those of us who have to deal with security vulnerabilities are paranoid for a reason
18:14:32 stevemar, vote is the general thumbs up/thumbs down.
18:14:34 stevemar, I pushed for the vote just to get people looking at them and saying "I'm comfortable with this"
18:14:44 stevemar, after that it's 2x+2/+A (as per last meeting)
18:14:49 bknudson, I know. I'm backporting a patch to Grizzly as we speak
18:14:52 morgabra, cool!
18:15:02 morganfainberg: you need to change your nick now
18:15:07 dolphm, damn it
18:15:13 ayoung: me too... we need to get the community to support older releases.
18:15:22 it's probably the same one
18:15:35 bknudson, yep...let's compare notes after the meeting
18:15:36 hehe, whoops! morganfainberg not morgabra
18:15:57 bknudson, there was talk about increasing length of time on support for icehouse iirc
18:15:58 bknudson, don't know if we will resurrect grizzly
18:16:00 though
18:16:09 or icehouse and havana
18:16:13 no, we won't resurrect grizzly
18:16:34 maybe on github
18:17:07 #action specs must be approved ahead of the milestone in which they are to be targeted/merged. for example, a j3 feature must have an approved spec land no later than the deadline for j2.
18:17:20 does approved here mean merged?
18:17:27 or voted on in the meeting?
18:17:46 bknudson: merged is what i meant
18:17:57 bknudson, merged.
18:18:04 juno has some extra leeway because we started late though.
18:18:18 juno-2
18:18:27 dolphm, ++ juno2
18:18:42 we'll go case-by-case for juno-2 features, but i don't think there will be any surprises
18:18:44 #topic Criteria for next release of client
18:18:48 ayoung: o/
18:18:59 dolphm, thanks
18:19:07 let's try not to break everyone with the next release of the client
18:19:19 bknudson: s/next/any/
18:19:26 OK, so there are a bunch of changes that need to land for features in Juno. Want to make sure we are aligned in purpose
18:19:49 Split of middleware repo?
18:19:55 morganfainberg, you've been working on that
18:20:01 what do we think is the timeline?
18:20:16 there's a proposed spec for that which hasn't merged :)
18:20:27 ayoung, well, it's not hard to split it.
18:20:41 morganfainberg, what are the practical
18:20:48 ramifications of splitting it?
18:20:57 ayoung, it's the testing and release process that is going to be hard
18:20:59 for the first round
18:21:02 lots of code shuffling
18:21:07 i would say this is not a next release thing
18:21:17 this is a "we will get it done in the Juno cycle" thing.
18:21:19 morganfainberg, do we want it in place for Juno?
18:21:36 and, if so, what do we try to sync it around?
18:21:38 6663e13 Make get_oauth_params conditional for specific oauthlib versions
18:21:46 oh, that was a test issue
18:21:53 bknudson, isn't that in already?
18:21:59 client and middleware will release the same way. separate from the named releases
18:22:00 I'm more concerned about pending reviews
18:22:14 ayoung, i'd rather work on this in J3.
18:22:18 morganfainberg, ++
18:22:20 I was checking if there was a reason for a release now. I don't think there is.
18:22:25 yeah, there are a bunch of client reviews that have impact on integration
18:22:26 OK so...let me skip to audit for a second then
18:22:29 ayoung, make sure we don't block feature work.
18:22:39 audit is gordon chung's work
18:22:47 and we were talking about putting it into the client
18:22:57 ayoung, and that (iirc) is going in oslo?
18:22:58 middleware?
18:23:00 does it make sense to wait until after the split to get that in, as it is middleware?
18:23:28 seems like it would be a mistake to put it in keystoneclient and then move it to middleware
18:23:45 morganfainberg, we had discussed putting the audit middleware in keystoneclient, with an eye to a longer term integration between policy and audit
18:23:57 ayoung, i can do the split earlier if we need to split first
18:24:07 bknudson, ++ which is why I was thinking of pushing for the split sooner
18:24:14 also to flush out the issues with the split
18:24:29 it's basically just putting a moratorium on new changes to the middleware/affecting the middleware until the split is complete
18:24:39 and we have a 1st release of the new pypi package
18:24:47 and global reqs update
18:24:58 I don't know if I can stop myself from fiddling with auth_token middleware,
18:24:58 morganfainberg, maybe we get it that far:
18:25:13 is there a compelling reason to do the split earlier? seems like our priority should be on keystoneclient integration
18:25:29 bknudson, we have a lot of changes queued up for that. It might be easier to postpone moving AT to the middleware repo until we have it completely refactored
18:25:45 gyee, ++
18:25:54 who's creating the new repo? morganfainberg?
18:25:55 although it's no problem for me to move a review from python-keystoneclient to a different repo
18:25:58 dolphm, yes.
18:26:13 morganfainberg: can you fork keystoneclient to do so?
18:26:19 bknudson, so let's hold off on moving auth_token, but we have other middleware we could move
18:26:25 just to seed the new repo
18:26:33 I think there's a script that oslo uses for moving stuff to a lib
18:26:36 not sure what it does
18:26:36 dolphm, it is extracting the middleware and the history
18:26:39 s3, nobody cares about that one :)
18:26:42 dolphm, it's not even a fork
18:26:57 it's just not having to propose the same patches over and over again to multiple places.
18:27:12 gyee, s3 i'm not super worried about changes coming into atm
18:27:25 morganfainberg: it's the history i wanted to keep; we'll talk after
18:27:33 ok, so here is the ordering then, that I think we need to focus on
18:27:34 dolphm, yeah we will keep the history
18:27:39 dolphm, that is part of the plan
18:27:39 1. kerberos and session
18:27:48 2. auth_token using revocation events
18:27:54 3. Split repo
18:27:57 4. audit
18:27:58 dolphm, i'll be using the oslo-incubator method for maintaining it and extracting the files
18:28:02 ayoung, makes sense
18:28:12 may want to swap 3. and 4.
18:28:20 I think audit can happen before split repo
18:28:28 what's there to do for auditing on the client side?
18:28:35 gyee, I thought we said it makes no sense to put audit into client, just to move it to middleware?
18:28:52 gyee, true.
18:29:00 but they are independent
18:29:14 morganfainberg: ack
18:29:15 gyee, the difference will just be in pipeline definition
18:29:17 the audit middleware move is essentially the same as all the other middleware, so maybe not that big of a deal
18:29:39 I assume they're just going to import from the new repo so that it's backwards compat
18:29:39 filter=keystoneclient.Audit vs keystonemiddleware.Audit
18:30:04 what is this the priority of, exactly? these all seem like parallelizable (is that a word?) efforts
18:30:05 but it does seem like a waste to have to support the audit middleware import
18:30:28 OK...so...go and review client code, please
18:30:35 dolphm, that's all I got on that
18:30:43 dolphm, not quite
18:30:44 the only concern is that preserving history is a one-shot deal; it's the initial import
18:30:48 #action review client code
18:30:59 session needs to go in before the revoke events can go in
18:31:10 ayoung ++
18:31:11 but beyond that, yeah, parallelizable, just prioritization
18:31:17 ayoung: are those reviews dependent?
18:31:27 so i can't preserve history on a separate bit of code w/o resubmitting all the patches unless it's done at the split time.
18:31:40 dolphm, client integration depends on session
18:31:43 morganfainberg: fair enough
18:31:50 I'd expect during the split there's no merging
18:32:01 ayoung, are there any patches / blueprints for audit?
18:32:04 dolphm, not 'session in auth_token' and 'revocation in auth_token', as session needs to be there for revoke to use
18:32:23 yeah, that too
18:32:27 bknudson, that's the idea, it should be maybe 1wk to get the split done (based on infra needing to approve the new repo)
18:32:31 bknudson, worst case.
18:32:37 stevemar, no, as it is existing code, just giving it a place to live
18:32:45 I think they do it on fridays
18:32:59 bknudson, yep, that's why i said worst case, a week
18:32:59 :)
18:33:14 that including weekend?
18:33:30 gyee, depends on when everything is done
18:33:45 #topic Dynamic fetch of Policy
18:33:46 ayoung: o/
18:33:57 OK...so, there are a few conflicting goals
18:34:07 I believe we read the policy file on every op now?
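(Editor's aside on the keystonemiddleware split discussed in the previous topic: a minimal sketch of the backwards-compat approach bknudson assumes above, i.e. the old import path simply delegating to the new package. The file location and module layout here are assumptions for illustration, not the agreed design.)

    # Hypothetical compat shim left in python-keystoneclient after the split,
    # so existing paste pipelines referencing keystoneclient.middleware keep
    # working while the real code lives in keystonemiddleware.
    # Assumed file: keystoneclient/middleware/auth_token.py
    import warnings

    from keystonemiddleware import auth_token as _new_auth_token

    warnings.warn('keystoneclient.middleware.auth_token is deprecated; '
                  'use keystonemiddleware.auth_token instead',
                  DeprecationWarning)

    # Re-export the old entry points so downstream pipeline definitions do
    # not have to change immediately.
    AuthProtocol = _new_auth_token.AuthProtocol
    filter_factory = _new_auth_token.filter_factory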
18:34:12 ayoung: can you move this off the wiki and into a spec?
18:34:32 dolphm, I will once I have a clear view. I started a new spec that I think I am going to trash, though
18:34:54 the goal is for an endpoint to get its policy from keystone
18:35:07 which is really an oslo.policy topic first
18:35:07 that leads to a couple of questions
18:35:33 1. How does Keystone (or the endpoint) know which policy to serve?
18:35:56 I posted a spec to make it an explicit link, but I think that goes against the current thinking
18:36:08 right now, a policy file is uploaded, and gets a UUID
18:36:15 there is no association between that UUID and anything
18:36:41 so, dolphm recognized that there is no need to know "what service" as all of the policy rules are prefixed by the service name
18:36:47 identity for keystone rules, and so forth
18:36:52 so each endpoint has a policy, /endpoint//policy//
18:36:53 ayoung: quickie…do you mean endpoint or service….so this is to support an endpoint in region A having different policies to an endpoint of the same service in Region B?
18:36:55 we could, in theory, just lump them all together
18:37:04 henrynash, I mean both
18:37:07 henrynash, that is one goal
18:37:15 ayoung: ok, got it
18:37:25 to be able to send a different policy to different endpoints for the same service
18:37:34 henrynash, so the rule is something like:
18:37:55 if there is an explicit something for an endpoint, serve that, otherwise, use the global
18:37:57 but...
18:38:09 endpoints don't know who they are
18:38:22 we don't set an endpoint ID in the config file
18:38:28 we give them a user
18:38:37 and all operations from that endpoint use that user id
18:38:48 ayoung: the big obvious advantage I see is that you can have different policies for, say, a production region/AZ vs a test region/AZ
18:38:55 now, with henrynash's incipient patch, we have a clean way to segregate the service users
18:39:10 henrynash, so I was thinking we could link policy to the service user
18:39:31 we discussed this at the summit in terms of hierarchical multitenancy as well, with the notion that all policies between the project in context and the root/null project would apply equally (you'd have to match any rule that appeared in any of them... in other words, they're all AND'ed together)
18:39:47 henrynash, and fetch policy based on the project for that service user.
18:39:53 which means that project-specific policy can only narrow the parent's policy
18:40:07 dolphm, so...to avoid confusion
18:40:28 I am not shooting for "project specific policy" where the project means the project containing the resource
18:40:36 only the "project" for the service users for now
18:40:58 and I am not planning on doing stackable policy right now
18:41:10 unless it is the only way to get this implemented
18:41:27 ayoung: well if you're thinking in that direction, then it wouldn't be the service user's project, it would be the requesting user's project
18:41:40 the service user's project is basically meaningless today anyway
18:42:39 dolphm, right. But we also don't fetch policy at all. So how would we know what the root policy is?
18:43:31 dolphm, are you suggesting that we jump right to stackable, and that we fetch policy based on the requesting user's project?
18:43:35 do the services know what their endpoint is now?
18:43:52 bknudson, no
18:43:53 ayoung: if project_id was a first class attribute on policies, then you could also do something like GET /v3/policies?project_id=&
18:43:58 bknudson, i don't think so.
18:44:10 dolphm, yes. That was my thinking
18:44:22 ayoung: one issue we get a lot from customers is how can we use roles to determine where a user can deploy resources from a given project to a given region/AZ…..(i.e. Adam is special so he can deploy anywhere, but Henry’s a tester so he can only deploy to the test region)…anything that can help that in a clean fashion would be great and I’d help with
18:44:36 dolphm, the only issue then becomes how do we merge policy for different services
18:44:48 take the status quo
18:44:56 ayoung: that sounds like a one time manual process
18:45:04 we have policy.json checked into the repo for each project
18:45:04 "building a policy for your deployment"
18:45:25 so are we going to provide a global policy file?
18:45:46 hierarchical multipolicy
18:46:12 it means that each of the openstack services no longer owns the default policy for themselves, and that seems wrong
18:46:21 I would think that Keystone should provide:
18:46:28 1. Basic policy enforcement
18:46:33 2. Basic policy fetch
18:46:45 3. Default rules for is_admin and the like
18:46:51 and 4. rules for identity
18:47:04 then the other services would extend the base policy
18:47:17 * jamielennox arrives - sorry
18:47:27 there is also the question of all of the *aaS new projects
18:47:33 what about something that is just incubated?
18:48:01 I think we want to keep policy management separated by service
18:48:07 service users are created in keystone (by devstack), projects could create their policy the same way
18:48:30 whahhh? policy as a separable service?
18:48:37 no
18:49:03 gyee: each service having its own policy
18:49:09 gyee, just that base policy editing for Nova is done by the Nova team, and base policy editing for Glance is done by the Glance team
18:49:12 dstanek: ++
18:49:23 come to think of it, why not?
18:49:37 I think there is a benefit, though, to dolphm's idea of merging them into a single file
18:49:51 as it allows for reuse of rules
18:50:13 devstack could just `openstack policy-add nova/policy.json`
18:50:22 if you look at oslo policy code, it has hooks for remote policy enforcement
18:50:39 bknudson, so...if we keep them as separate blobs, that will work
18:50:46 just saying
18:51:02 it just means that when we fetch, we have to get the correct blob for the service, and that is what I am aiming to lock down
18:51:14 what does the API look like for fetching policy?
18:51:32 nova has policy ID in its .conf, and fetches that one
18:51:40 gyee: oh that's new to me
18:51:45 or are we trying to avoid that?
18:51:47 is it by endpoint id or is it by service user project?
18:52:00 dolphm, I remember it supports http brain or something like that
18:52:34 but I haven't dug into that code for a while
18:52:37 bknudson, if you do that, you have to admin each endpoint separately.
18:52:59 i.e. I want policy to change for endpoint E: I first upload the new policy file, then go and edit the config file on E
18:53:03 that seems wrong
18:53:08 just put it in E
18:53:21 it only makes sense if we want keystone to be the policy store
18:53:22 (5 minutes left, and we have a few updates from henrynash to cover)
18:53:32 if we don't, let's deprecate policy, but I think we do want it
18:53:46 take this topic to -keystone after the meeting?
18:53:55 ++ and -specs! :)
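(Editor's aside: a rough sketch of the "endpoint fetches its policy from keystone" flow discussed above, using the existing v3 policies resource that stores an uploaded blob under a UUID, and relying on rules being prefixed with the service name, e.g. "identity:" for keystone. The keystone URL, token, and policy ID below are placeholders standing in for whatever would end up in the service's config; error handling and caching are omitted.)

    # Sketch only: pull the policy blob from keystone and narrow it to the
    # caller's own service-prefixed rules before handing it to the local
    # policy engine (oslo's Enforcer) in place of the on-disk policy.json.
    import json

    import requests

    KEYSTONE = 'http://keystone.example.com:35357'   # assumed admin endpoint
    POLICY_ID = 'replace-with-policy-uuid'           # e.g. from the service .conf
    TOKEN = 'replace-with-service-token'

    resp = requests.get('%s/v3/policies/%s' % (KEYSTONE, POLICY_ID),
                        headers={'X-Auth-Token': TOKEN})
    blob = resp.json()['policy']['blob']   # the stored, serialized rule set
    rules = json.loads(blob)

    # Because rules carry a service prefix, even a merged "global" blob can
    # be filtered down to the rules this service cares about.
    identity_rules = {name: rule for name, rule in rules.items()
                      if name.startswith('identity:')}
    print(sorted(identity_rules))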
18:53:58 #topic Multi-backend Unique Identifiers
18:54:01 henrynash: o/
18:54:05 ok, very quickly
18:54:07 dolphm, I'll write the spec once I know the scheme
18:54:15 #link https://review.openstack.org/#/c/100497
18:54:19 updated spec published and new patch to match
18:54:25 #link https://review.openstack.org/#/c/74214/ implementation
18:54:28 only two questions
18:54:54 1) The patch is big since it includes all the mechanical unit test changes of moving ID generation from controller to manager…
18:55:23 don't split
18:55:27 it will delay.
18:55:33 It's not such a terrible patch.
18:55:34 …it would be a bit of a pain to split it up so that part goes in separately…but will do so if people feel that’s important
18:55:50 the number of changes is irrelevant, as they are all mostly doing the same thing
18:56:02 ayoung: splitting will also get patches in more quickly with better review coverage / more stability
18:56:14 splitting would make the review much easier
18:56:14 2224 lines is a lot of patch
18:56:19 lbragstad: ++
18:56:20 disagree
18:56:28 but i am willing to review it as is. i would prefer it split
18:56:28 splitting at this point will make it harder
18:56:32 so what
18:56:32 bite-sized reviews will be easier
18:56:39 we still need to review 2224 lines
18:56:44 just in two patches as opposed to one
18:56:54 but in functional change sets it's more manageable
18:57:04 for the patch in question i had to start splitting it up to understand it - i would vote for smaller patches
18:57:17 ayoung: it's about context and how much you have to keep in your head at once
18:57:25 dstanek: ++
18:57:29 there get to be too many comments on large reviews and reviewers get tired of looking at the same changes over and over again.
18:57:29 and question 2) is….should we salt the hash of the ID with something that is unique to each “cloud” in case anyone is worried about the same LDAP users in the default domain having the same ID from two clouds
18:57:45 a study was done and said peer review quality goes waaaay down over 400 lines of change
18:57:58 goes down before then
18:58:03 morganfainberg: i'd like a bot to -1 when we cross that threshold :)
18:58:05 but that's the cliff
18:58:27 lifeless, ++
18:58:29 henrynash, salt can be a follow-on patch
18:58:38 I think it is a stand-alone feature.
18:58:45 (time)
18:58:49 Let's not hold this up for it.
18:59:26 #endmeeting
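(Editor's aside on henrynash's second question: a minimal sketch of the kind of public-ID generation being discussed, where the backend-local ID is hashed together with the domain, with an optional per-cloud salt so identical LDAP users in the default domain of two clouds do not collide. The exact inputs and hash choice here are illustrative assumptions, not the merged design.)

    # Sketch: derive a stable public ID from the domain and the driver-local
    # ID; an optional deployment-unique salt differentiates otherwise
    # identical default-domain LDAP users across clouds.
    import hashlib

    def generate_public_id(domain_id, local_id, cloud_salt=''):
        # cloud_salt is the hypothetical per-cloud value; leave it empty to
        # get the unsalted behaviour discussed as the default.
        material = '%s%s%s' % (cloud_salt, domain_id, local_id)
        return hashlib.sha256(material.encode('utf-8')).hexdigest()

    # Same LDAP user in the default domain of two clouds: the IDs only
    # differ when each cloud supplies its own salt.
    print(generate_public_id('default', 'cn=ayoung,ou=Users'))
    print(generate_public_id('default', 'cn=ayoung,ou=Users', cloud_salt='cloud-b'))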