18:00:25 #startmeeting keystone 18:00:26 Meeting started Tue Jun 6 18:00:25 2017 UTC and is due to finish in 60 minutes. The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot. 18:00:28 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. 18:00:30 The meeting name has been set to 'keystone' 18:00:31 #link https://etherpad.openstack.org/p/keystone-weekly-meeting 18:00:35 o/ 18:00:39 o/ 18:00:42 ping ayoung, breton, cmurphy, dstanek, edmondsw, gagehugo, henrynash, hrybacki, knikolla, lamt, lbragstad, lwanderley, notmorgan, rderose, rodrigods, samueldmq, spilla, aselius 18:00:47 o/ 18:00:48 sjain_: there we go, meeting starting 18:00:48 o/ 18:00:51 o/ 18:00:51 o/ 18:00:55 o/ 18:00:55 o/ 18:00:56 o/ 18:01:07 yes, i'm here 18:01:15 o/ 18:01:18 henrynash: hey o/ 18:01:31 shocking, I know... 18:01:36 henrynash: :) 18:01:59 o/ 18:01:59 so bored with the UK election, had to do something else :-) 18:02:45 #topic announcements 18:02:56 ok - couple of housekeeping items 18:03:02 #info pike 2 is this week 18:03:14 we’ve cut off Donald’s thumbs so he can’t tweet? 18:03:15 * morgan lurks 18:03:25 #info specification freeze is this week 18:03:29 (that would be an announcement) 18:04:08 #info proposal for rolling upgrade tests 18:04:23 #link https://review.openstack.org/#/c/471419/ 18:04:43 I'm working on a job to add rolling upgrade tests to our gate 18:04:57 I'd love to get some feedback on that review to make sure it looks right 18:05:12 I've double checked it with the openstack-ansible community as well as the folks in -infra 18:05:20 lbragstad: interesting, what's the idea behind it? 18:05:29 run tempest tests throughout upgrade?
18:05:31 so - that patch adds a new keystone gate job 18:06:02 what it does is deploy keystone onto two separate containers, creating a deployment 18:06:18 which starts out as the last stable release 18:06:27 (stable/ocata) in our case today 18:06:50 then it clones the change in review, places it in /opt/keystone on the jenkins slave, and uses openstack-ansible to do a rolling upgrade 18:07:27 it will also execute a few openstack-ansible functional tests before and after 18:07:41 it will also constantly authenticate and validate tokens during the upgrade 18:08:05 at the end of the upgrade, it will output the performance results during the upgrade and whether there were any dropped requests 18:08:08 lbragstad: awesome, so it does test things during the upgrade process 18:08:13 yes 18:08:32 one node is ocata and the other is the patch (pike) 18:08:34 i believe openstack-ansible uses tempest 18:08:50 samueldmq: well - both nodes start out as ocata 18:09:05 and then just one goes pike, correct? 18:09:07 then the openstack-ansible-os_keystone role will perform a rolling upgrade 18:09:26 samueldmq: they both get upgraded according to our rolling upgrade documentation 18:09:47 (e.g. keystone-manage db_sync --expand, --migrate, --contract, etc...)
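The expand/migrate/contract sequence mentioned above can be sketched as a small Python outline. This is a simplified model of the ordering only, not the real gate job: the node names and the `rolling_upgrade` helper are hypothetical, and in practice the openstack-ansible-os_keystone role drives these steps.

```python
# Sketch of the three-phase rolling upgrade discussed above
# (keystone-manage db_sync --expand / --migrate / --contract).
# Node names are hypothetical; the real job is driven by ansible.

def rolling_upgrade(nodes):
    """Return the ordered steps for upgrading a multi-node deployment."""
    steps = []
    # Phase 1: additive schema changes only -- nodes still running the
    # old release (ocata) keep serving against the expanded schema.
    steps.append("keystone-manage db_sync --expand")
    # Phase 2: move data into the new columns/tables.
    steps.append("keystone-manage db_sync --migrate")
    # Upgrade code one node at a time; at least one node is always
    # serving, which is why auth/validate can run continuously.
    for node in nodes:
        steps.append("upgrade code on %s (ocata -> pike)" % node)
    # Phase 3: only after every node runs new code, drop the old schema.
    steps.append("keystone-manage db_sync --contract")
    return steps

for step in rolling_upgrade(["infra1_keystone", "infra2_keystone"]):
    print(step)
```

The key property the gate job checks is that --contract never runs before every node has been upgraded, so no in-flight request hits a schema its code can't handle.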
18:10:55 samueldmq: it's very similar to the process I documented here 18:10:57 #link https://www.lbragstad.com/blog/using-openstack-ansible-to-performance-test-rolling-upgrades 18:11:10 lbragstad: gotcha, I wonder if it would be interesting to run tests between --expand and --contract 18:11:19 so that we make sure the upgrade is a rolling one (no downtime) 18:12:09 samueldmq: well - that's kind of what this does 18:12:12 #link https://github.com/openstack/openstack-ansible-os_keystone/blob/master/tests/test-upgrade-post.yml#L20 18:12:37 #link https://github.com/openstack/openstack-ansible-os_keystone/blob/master/tests/test-benchmark-keystone-upgrade.yml 18:13:16 lbragstad: nice, I will take a better look at it later too 18:13:19 which authenticates and validates tokens repeatedly during the upgrade 18:13:25 #link https://github.com/openstack/openstack-ansible-os_keystone/blob/master/tests/templates/locustfile.py 18:13:32 lbragstad: awesome 18:13:46 so for now we only keep testing auth during the upgrade 18:13:47 we could get a little more prescriptive in our testing 18:13:49 not keystone api 18:14:05 right - auth and validate are the only APIs that we're testing with rolling upgrades 18:14:17 but we do run some tempest tests after the upgrade 18:14:35 which I assume checks data in the deployment to make sure the upgrade didn't ruin it or something like that 18:14:55 makes sense, maybe it could be improved later 18:15:00 ++ 18:15:00 but it is a great start as it is 18:15:09 thanks for clarifying 18:15:17 yeah - for now it should be enough to get us the rolling upgrade tag 18:15:47 so any reviews on those two patches would be greatly appreciated 18:15:52 #link https://review.openstack.org/#/c/471419/ 18:16:01 #link https://review.openstack.org/#/c/471427/ 18:16:05 moving on 18:16:13 #topic picking up policy-docs work 18:16:26 so - all the work to move policy into code has been done 18:16:42 and the follow-on for that was to use oslo.policy to
document each policy in code 18:16:56 lbragstad: anything we can do to help aside from reviews to that end? It'd be nice to see those land sooner rather than later 18:17:00 we only have a few patches left for that implementation 18:17:18 hrybacki: for the testing patches or the policy documentation patches? 18:17:28 the latter 18:17:48 so these are the outstanding patches that need to merge 18:17:51 #link https://review.openstack.org/#/q/topic:bp/policy-docs+project:openstack/keystone+status:open 18:18:04 and i'm not sure if antwash is going to have the time to pick it back up 18:18:31 lbragstad: I'll take a look. Might be something I can help with 18:18:43 i wanted to put this on the agenda to see if we could break that work up among the group and carry the last few bits across the finish line 18:19:12 I volunteer too, I will update them 18:19:12 right now - most of the patches are in a linear series 18:19:25 hrybacki: and others are also more than welcome to help too 18:19:34 * hrybacki nods 18:19:42 which probably isn't necessary since there isn't a reason to not work them in parallel 18:20:04 lbragstad: ++ let's just get them all rebased on master 18:20:21 samueldmq: cool - so that will be step one 18:21:42 samueldmq: hrybacki want to follow up after the meeting and divvy up the work?
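The policy-docs work discussed above records, for each policy rule, its check string, a description, and the API operations it protects. A minimal self-contained sketch of that shape is below. Note this is a plain-Python stand-in mirroring the fields of oslo.policy's DocumentedRuleDefault, not oslo.policy itself, and the example rule shown is hypothetical, in the style of the in-review patches.

```python
# Simplified stand-in for what a documented policy rule captures.
# The real patches use oslo.policy's DocumentedRuleDefault; this
# plain class keeps the sketch dependency-free.
from dataclasses import dataclass, field

@dataclass
class DocumentedPolicy:
    name: str          # policy target, e.g. "identity:get_user"
    check_str: str     # the enforcement rule
    description: str   # human-readable doc used to generate sample files
    operations: list = field(default_factory=list)  # APIs it protects

    def render(self):
        """Render a one-line doc entry, as a sample-file generator might."""
        ops = ", ".join("%s %s" % (o["method"], o["path"])
                        for o in self.operations)
        return "%s (%s): %s [%s]" % (
            self.name, self.check_str, self.description, ops)

# Hypothetical example rule:
get_user = DocumentedPolicy(
    name="identity:get_user",
    check_str="rule:admin_required",
    description="Show user details.",
    operations=[{"path": "/v3/users/{user_id}", "method": "GET"}],
)
print(get_user.render())
```

The payoff of registering rules like this in code is that operators get a generated, documented sample policy file instead of an unexplained policy.json.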
18:21:55 wfm 18:21:56 sounds good to me 18:22:09 #action lbragstad samueldmq and hrybacki to rebase and address comments on remaining policy-docs patches 18:22:42 cool - if there aren't any more questions on that specifically, we can move on 18:23:00 #topic How to do experimental code in the future 18:23:04 ayoung: o/ 18:23:15 Yello 18:23:22 ayoung: all yours 18:23:30 So, we killed extensions 18:23:42 and we don't have experimental, or alpha, or beta, or anything like that 18:23:45 and...I miss it 18:23:50 I miss productivity 18:23:58 I miss trying new things and actually getting them to work 18:24:10 and all that coolness that makes people actually like coding 18:24:21 and...well, what do we do? 18:24:34 Let me give you an example of what should be possible 18:24:55 A down-down-down stream company should be able to come up with an idea for Keystone. Say, oh, RBAC in Middleware 18:25:15 and they want to implement it for their system. Couple iterations...got something workable 18:25:19 and then submit it upstream 18:25:24 ayoung: what is experimental? may be removed or changed in a backwards incompatible way without prior notice? 18:25:32 "get the feedback from the operators" is the mantra 18:25:40 samueldmq, I don't know. That is what I want to figure out 18:25:58 how can we, as a parent project, encourage new development 18:26:21 how can we make it possible to build something, get it into production, and then say "ok, this should go into core" 18:26:29 extensions? 18:26:32 I don't think the top down approach we have right now is working? 18:26:39 "first write a spec" 18:26:44 etc 18:26:59 samueldmq, extensions, alpha, beta, separate namespaces...I don't know 18:27:16 I do know there is something like it in Kubernetes, and it is where OpenShift does stuff 18:27:29 so ideas that are vetted in OpenShift can then be shipped upstream 18:28:10 What do we want the workflow to look like?
18:28:32 yeah - i remember the initial fernet stuff taking a long time 18:28:43 are there any other components that have a good model for this? 18:28:50 lbragstad, exactly. And PKI also could not morph fast enough 18:28:52 when i originally did that work with dolph, i proposed all the changes to a fork of keystone 18:29:02 I think the idea is interesting, but there might be different cases in there, e.g. whether you just add APIs vs touch the existing ones 18:29:03 or even just a good reputation for working with 'outsiders' ? 18:29:37 Hell, trusts were originally supposed to be an extension, but it touched so much code.... 18:29:39 hrybacki: from an openstack keystone perspective, i feel like the "everything is pluggable!" card gets played there 18:29:57 ayoung: yeah - in hindsight that was a hard thing to do as an extension to be honest 18:30:08 don't we already have an experimental API? http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/json_home.py#n62 why not just put new ideas behind that? 18:30:21 lbragstad, yeah. and with the Stable Driver interface, we found it too hard to maintain, and yanked that 18:30:47 lbragstad: can you expand on that? 18:31:08 cmurphy, I think that we need to explore just what that means. It is probably a step in the chain 18:31:12 hrybacki: the implementation we have for trusts is complicated and it requires a lot of knowledge of various keystone sub-systems 18:31:14 feature branches are something that swift and others have used effectively for large chunks of feature work 18:31:42 ^ that's pretty much how we did non-persistent tokens 18:31:47 https://wiki.openstack.org/wiki/Swift/feature_branches 18:31:49 interesting idea 18:31:51 but not as a formal feature branch 18:32:00 what I would like to see is a way to say to downstream: here is the interface. Here is the process.
18:32:03 #link https://wiki.openstack.org/wiki/Swift/feature_branches 18:32:05 thanks notmyname 18:32:23 And I think it is important to understand the full process before we build any new mechanisms. 18:32:36 lbragstad: happy to share our experiences if you decide to do it. feel free to ping me about it 18:32:45 Something like: we want to know that an idea is vetted in a production system before considering it in Keystone 18:32:57 notmyname: i guess my first question is - are you happy with it? 18:33:00 and maybe quibble a bit about what Production means 18:33:22 notmyname: my second is - is there anything you'd change about the process? 18:33:23 ayoung: could you write up a spec for this? XD /s 18:33:25 but, we are the antithesis of DevOps here. It is big bang waterfall. Everything I hate about old style software development 18:33:26 yes! feature branches have been both very successful and necessary for many multi-cycle efforts in swift 18:33:36 I'm very pleased with feature branches in swift 18:33:41 hrybacki, I do have an unpublished rant blog post 18:34:08 lbragstad: as to the process, we've iterated on it over time, and the current process (ie what's in the wiki) is what's worked best for us so far 18:34:11 ayoung: I bet you have an arsenal of them :) 18:34:14 notmyname, how do you use feature branches to do that 18:34:25 hrybacki, almost as many as abandoned patches in Gerrit 18:34:28 ayoung: to do what? 18:34:33 hmm and that can be a good fit for both things that are completely separate or that touch the existing code a lot (feature branches) 18:34:45 notmyname, "feature branches have been both very successful and necessary for many multi-cycle efforts in swift" 18:35:15 just keep rebasing them? 18:35:15 ayoung: the "how" is in the wiki link ;-) 18:35:44 " 18:35:46 There are a couple of things we've found that are helpful.
18:35:47 frequently merge from master (only a core can do this) 18:35:49 worth mentioning, i believe keystone may have been the first project to use a feature branch in our gerrit, but it's been a few years (was for a protocol major version change) 18:36:05 whoop! 18:36:10 * hrybacki is curious how CI would need to change (if at all) 18:36:27 hrybacki: ++ 18:36:39 hrybacki, I was toying with the idea of submitting code directly to RDO prior to pushing to upstream. 18:37:23 ayoung: we've done feature branches for at least 4 year-long efforts in swift. the "how" is mostly one of prioritization and community communication. the mechanics aren't actually all that complicated 18:37:26 I am wondering if "upstream first" is still the right strategy for a risk-averse, security sensitive project like Keystone 18:37:54 devstack-gate already has some logic to recognize branches starting with "feature/" as being master-like (again, pretty sure that got implemented for the keystone v2 api feature branch as well) 18:37:56 ayoung: I think it's dangerous to move away from that model 18:37:57 i think that depends on the feature 18:37:58 yes, you can set up different CI jobs for a feature branch. sometimes that's very helpful (ie when you're adding new stuff or in the middle of breaking everything) 18:38:18 notmyname, also, we have issues with stuff needing to be in sync between server, client, CLI and middleware 18:38:47 hrybacki, I'm not sure it is. I think that there is an argument that something should only be in Keystone *after* it's been tried in production 18:39:12 * hrybacki has mixed feelings about that thought 18:39:22 fungi: nice 18:39:28 kind of like Vendors submitting drivers to the Linux Kernel after they've supported them for their own customer base for a few iterations 18:39:30 maybe "upstream first" isn't the way to describe that? 18:39:33 ayoung: IMO the main reason for feature branches is so that you can land broken stuff or partially implemented stuff.
AIUI, gerrit (zuul?) can coordinate feature branches across projects, so the same branch name is tested together 18:39:48 tricky question: is it possible for a deployer to try different features that are in different feature branches at the same time? 18:39:59 good question 18:40:01 samueldmq, it would be a code merge issue 18:40:02 without needing to merge/rebase them by themselves? 18:40:11 you could make a third feature branch for mixing them 18:40:23 or the deployer could manage that themselves 18:40:31 so we would be making all the combinations? we don't control what people need 18:40:55 or leaving that up to the deployer seems inappropriate 18:41:12 i feel like if we need to start providing ways to use features from different branches, something just needs to be merged to master 18:41:13 samueldmq, so...that leads me to wonder if Keystone is too monolithic in approach. 18:41:20 quickstart in devmode might be able to pull reviews across branches in, checking with the oooq folks now 18:41:23 samueldmq: in my experience, that will depend on how often you merge master back in to the feature branch [protip: do it often] 18:41:29 Keystone seems to be 3 distinct applications bundled up into one 18:41:38 identity, assignment, service catalog 18:41:40 notmyname: ++ 18:41:56 ideally, a new feature would be deployable as a separate microservice 18:41:57 IOW, it's up to the keystone team to figure that out, not a deployer 18:42:20 notmyname: interesting, and we want feature branches for things that are experimental 18:42:34 so we wouldn't expect a deployer to experiment with so many things in a production env 18:42:43 notmyname: have you ever had to prune feature branches that never make it to master? 18:42:56 looks like, oooq can pull into multiple reviews: https://docs.openstack.org/developer/tripleo-quickstart/devmode.html#zuul-mode 18:43:00 take the RBAC in middleware example.
Aside from the implied roles stuff (which are already in) it could easily be in a separate service. Fernet tokens almost feel like they could have been done that way, too 18:43:08 lbragstad: yes. the hummingbird/golang stuff. we've redirected a few times on that and haven't landed it on master 18:43:21 like a "token signing and validation server" 18:43:39 how do we do feature branches across multiple projects? 18:43:42 one very important thing with feature branches is that they MUST have a lifetime (probably measured no longer than 18 months) 18:43:47 just use the same branch name consistently? 18:43:52 samueldmq: yes 18:44:00 * samueldmq nods 18:44:03 I think that is targeting too late in the process 18:44:10 samueldmq: at least, zuul is set up to test those together if they have the same branch name 18:44:21 yeah, devstack-gate and zuul-cloner look for branch name matches first when cloning any repos 18:44:22 notmyname: that makes perfect sense to me 18:44:23 you are still thinking in a centralized swift team, and a branch managed in gerrit 18:44:30 notmyname: so - feature X is going to start as a feature branch and we are going to target to merge it to master within a year 18:44:34 I would hate to do all that work just to have someone -2 it at the end 18:44:38 fungi: great, that's what makes sense to me, thanks for confirming 18:44:39 and I've lived that life 18:45:11 would rather be able to say "here is the extra server" and try to get it into Keystone, but if it is rejected, it can be a standalone open source project 18:45:16 ayoung: sorry. not trying to assume too much about how it might work.
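The branch-matching behaviour fungi describes above can be sketched as a tiny selection function: when a change is on a feature branch, the tooling checks out the same branch name in every related repo that has it, and falls back to master otherwise. This is a simplified model of the described behaviour, not zuul-cloner's actual code, and the repo contents below are hypothetical.

```python
# Sketch of cross-repo feature-branch selection: prefer the change's
# branch name in each repo, fall back to master when it doesn't exist.

def pick_branch(repo_branches, change_branch, fallback="master"):
    """Return the branch to check out for one repo."""
    if change_branch in repo_branches:
        return change_branch
    return fallback

# Hypothetical repos: only keystone carries the feature branch.
repos = {
    "openstack/keystone": {"master", "feature/keystone-v3"},
    "openstack/python-keystoneclient": {"master"},
}
for repo, branches in sorted(repos.items()):
    print(repo, "->", pick_branch(branches, "feature/keystone-v3"))
```

This is why "just use the same branch name consistently" works: repos that opt in to the feature get tested together, and everything else is cloned from master.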
just trying to share what's worked for us 18:45:19 ayoung: that'd have to be -2ed even before going to a feature branch 18:45:21 notmyname, ++ 18:45:34 samueldmq, it never happens that way 18:45:36 -2s are for conceptual things not for implementation details/hardening 18:45:44 which is what feature branches are for 18:45:53 samueldmq, people ignore something until it is getting close, they get scared, and it gets pocket vetoed 18:46:00 the tool (ie feature branches) is not important. what's important is the communication. so whatever works for communication is what you should do 18:46:04 and that is probably due to the new feature being risky 18:46:06 that's not cool, ayoung 18:46:21 notmyname: ++ 18:46:34 which is why we want it in production, and proven, prior to thinking about merging it in to Keystone 18:46:51 ayoung: ah, yes. so, again from my experience, that's where a *ton* of communication amongst core and leadership from the PTL is essential. communicate early and often 18:46:58 kind of like what the policy effort via ... what is the Apache project called again? Fortress? 18:47:15 ayoung: yeah 18:47:38 they should be able to do that, and if we can make it work into the rest of the Keystone stuff, great 18:47:46 if not, they should be able to continue it on 18:47:56 I think the issue is the service catalog 18:48:10 yes and the roles are in keystone, and assignments and all 18:48:11 we need a way to represent new things in there without conflict 18:48:17 that's basically replacing a part of what keystone is today 18:48:25 so - alternative, what if it was easier to iterate major api versions 18:48:26 * lbragstad ducks 18:48:50 lbragstad, what if we were able to "separately" modify api versions for subsystems? 18:49:02 lbragstad: like get it in, and just +1 major version and remove it if it doesn't work?
18:49:17 like identity is at v3.5, federation at 3.14, assignment at 3.14 and so on 18:49:25 lbragstad: we would end up in keystone v30 in 2 years maybe 18:49:43 yeah - i wasn't thinking about individually across subsystems 18:49:45 point of order, you won't run out of version numbers ;-) 18:49:51 notmyname: ++ 18:49:56 just straight up semver 18:50:09 notmyname, yeah, but we'll agree to skip '95 18:50:16 :) 18:50:29 so...start chewing it over. Think about what we want the overall process to look like 18:50:35 that'd work but I don't think that's the right approach, nor what anyone else is doing in the big tent 18:50:47 * lbragstad needs to read the swift process doc 18:50:58 we can use tools that have been successful here (long-lived feature branches) as well as elsewhere 18:51:03 do it, and if it doesn't work, go major+1 and don't be backward compatible, that's terribly bad I think 18:51:06 samueldmq: no - but it's come up before in discussions 18:51:16 people are still stuck on the v2 api 18:51:28 ayoung: right 18:52:44 alright we have about 8 minutes for open discussion 18:52:48 #topic open discussion 18:52:55 the floor is open 18:53:13 we need a new name for api keys 18:53:31 #link https://review.openstack.org/#/c/450415/ 18:53:59 I thought "auth keys" were thrown around as a name before 18:54:01 cmurphy: new name for the thing ? 18:54:01 i need to review the latest iteration of that spec 18:54:06 or a new name (owner) for it? 18:54:15 can i point out the bikeshed on the name is stupid 18:54:16 samueldmq: name for the thing 18:54:23 this is bikeshed 18:54:31 color it purple, move on 18:54:32 morgan: okay but everyone is unhappy with it 18:54:55 that's fine, propose something and move forward 18:55:01 I don't think it's bikeshed yet, have we had discussions on it before?
18:55:05 discussing the name into the weeds is silly 18:55:06 if they don't behave like "API keys" as they are known in other parts of the industry, I'd be fine with something as simple as application specific passwords 18:55:14 Let's drop the word Keys 18:55:14 lbragstad: ++ go with that 18:55:36 application subjects? Application Principals? 18:55:39 Principals? 18:55:47 application-specific-passwords is really what this mirrors 18:55:51 that is a fine name for it 18:56:00 application-specific-accounts 18:56:05 okay i'm going to run with application specific passwords 18:56:07 it's a password, not an account 18:56:10 if the client still has to understand openstack's scoping bits, then API keys is probably not the right name 18:56:20 ayoung: please don't push harder on the bikeshed here. 18:56:29 so we are only going to support password for this? 18:56:31 lbragstad: that is still application specific passwords. 18:57:04 how about we use the OAUTH term and call them consumers and say "please use oauth" 18:57:18 morgan, you know this is not just bikeshedding 18:57:38 THIS IS. 18:57:40 application specific credentials 18:57:53 morgan, I kicked off this discussion 18:58:12 I asked for links to the rationale for calling them API keys...
18:58:18 and note that those are still not in the spec 18:58:29 it was a term we fell into because it is used somewhere 18:58:36 but this is service users by another name 18:58:50 ok so two options 18:58:58 1) use oauth (not this spec, real oauth) 18:59:05 2) call it what it is, and it's not oauth 18:59:23 morgan, one reason to go oauth is it works with other projects 18:59:29 option 1 is really not doable w/o scrapping all the stuff massively 18:59:32 ayoung: this is in the weeds 18:59:33 docker, specifically, needs Oauth for image repos, for example 18:59:36 <1 minute y'all 18:59:36 this is NOT oauth 18:59:43 ok - so bite-sized pieces, let's refactor the doc to include application specific passwords then go from there 18:59:46 we're out of time 18:59:48 lbragstad: + 18:59:54 lbragstad: ++ 18:59:54 morgan, this is not passwords 19:00:19 if we come up with a better name in the process of reviewing the important bits of the mechanism, we can rename 19:00:25 thanks for coming folks! 19:00:29 oh, I see this topic is going strong :) 19:00:29 #endmeeting