18:01:52 #startmeeting keystone
18:01:52 stevemar: dress appropriately please
18:01:52 Meeting started Tue Aug 4 18:01:52 2015 UTC and is due to finish in 60 minutes. The chair is dolphm. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:54 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:56 but i'm going to start with a quick reminder
18:01:56 The meeting name has been set to 'keystone'
18:02:00 #topic Release schedule
18:02:01 henrynash: hehe
18:02:13 We cut liberty-2 last week (yay! good job to all), so we're now staring down liberty-3 and thus FEATURE FREEZE is next up on the calendar.
18:02:20 #link https://launchpad.net/keystone/+milestone/liberty-2
18:02:24 #info Feature freeze is September 1st
18:02:35 That means we have until the end of the month to land all the featureful bits for Liberty (all blueprints and wishlist bugs).
18:02:45 #link https://launchpad.net/keystone/+milestone/liberty-3
18:02:49 our next milestone target ^
18:02:59 If you have any questions, direct them to morganfainberg ;) because we're moving on!
18:03:07 #topic Add type as a filter for /v3/credentials
18:03:10 #link https://bugs.launchpad.net/keystone/+bug/1460492
18:03:10 Launchpad bug 1460492 in Keystone "List credentials by type" [Wishlist,In progress] - Assigned to Marianne Linhares Monteiro (mariannelinharesm)
18:03:12 #link https://review.openstack.org/#/c/208620/
18:03:26 The question on the agenda is "Does this need a spec since it impacts API?" -- and the answer is, of course, absolutely yes. It's a user-facing change and the API impact must be documented in keystone-specs/api :)
18:03:28 raildo, samueldmq, tellesnobrega: o/ floor is yours
18:03:34 thanks :)
18:03:40 hi guys, Marianne started working on Keystone now, and we were looking for a low-hanging-fruit bug for her and we chose this one
18:03:43 dolphm, that list doesn't look right, where's the x.509 one?
18:03:55 gyee: add it to the list
18:04:00 so to fix this bug we just add 'type' to the filter hints for credentials, in the patch that dolphm linked here
18:04:07 stevemar, gracias amigo!
18:04:21 since this is an API change, we might need to open a blueprint/spec, but it's fairly small, so maybe not. So we want to know if the keystone core team is happy with this approach or whether we need to create a bp/spec for it?
18:04:44 raildo: as dolphm says, we should open a spec for it
18:04:45 raildo: see dolphm's answer above :-)
18:04:54 raildo: it has an API impact and so it must be documented in keystone-specs
18:05:02 there won't be much discussion on it
18:05:09 raildo: the spec portion of it will naturally be super short
18:05:12 I don't think it needs to be super lengthy, but ++ to needing a spec
18:05:26 +++ to needing a spec
18:05:29 dolphm: great :)
18:05:34 raildo: it's the API documentation which is much more important for the long term
18:05:40 ++
18:05:43 dolphm: ++
18:05:45 * stevemar wonders if we're too pedantic about this...
18:06:00 dolphm: yes, ++ on the API doc update, the spec part makes me grumble a bit
18:06:05 stevemar: about documenting the api or specs?
18:06:12 dstanek: ^
18:06:29 this is really a tiny fix, ++ to documenting it first in the API spec
18:06:35 it's just a new hint to filter on, but whatever.
18:06:37 and we already have the bug to track the change, anyway
18:06:42 i'd actually be happy with just the api update since the commit message will link to the bug
18:06:46 dolphm: so will this just happen in Mitaka, since we can't propose it for Liberty?
18:06:58 raildo: it'll land in liberty
18:07:18 i'd aim for liberty
18:07:23 raildo: we can target it to https://launchpad.net/keystone/+milestone/liberty-3
18:07:33 great :)
18:07:44 any other questions on this?
18:07:58 dolphm: nope
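[Editor's note: a minimal sketch of the filter discussed above, for illustration only. The endpoint, token, and the "ec2" credential type are placeholders assumed by the editor, not details confirmed in the patch under review.]

    import requests

    # Hypothetical illustration of the filter proposed in bug 1460492:
    # list only credentials of a given type.
    KEYSTONE_URL = "http://keystone.example.com:5000"
    TOKEN = "<a valid scoped token>"

    resp = requests.get(
        KEYSTONE_URL + "/v3/credentials",
        params={"type": "ec2"},            # the new filter hint under discussion
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()
    for credential in resp.json()["credentials"]:
        print(credential["id"], credential["type"])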
18:08:01 #topic Use [extras] for package requirements depending on deployment options
18:08:03 thanks
18:08:06 #link https://review.openstack.org/#/c/207620/
18:08:08 bknudson: o/ floor is yours
18:08:13 * stevemar sneaks out
18:09:05 hrm, bknudson has not made a peep so far this meeting
18:09:11 bknudson: around?
18:09:36 that's unfortunate... i'm curious about this one
18:09:38 someone try and wake bknudson up with a really bad patch to review
18:09:49 alright, let's move on -- we can loop back if he arrives
18:09:52 dstanek: me too
18:09:54 let me try and grab him
18:10:19 (i'll give topol a minute)
18:10:38 topol, check the men's room to see if he needs a square
18:10:48 * dstanek is whistling the Jeopardy theme
18:10:55 he could be doing PSIRTs?
18:10:56 I finally got back from team lunch
18:11:02 bknudson: \o/
18:11:11 ok, so the [extras] support seems to work
18:11:33 so I proposed the changes for ldap
18:11:38 which were in test-requirements
18:11:45 and also I think memcache and something else
18:11:59 so, going forward, we also need to consider other optional requirements
18:12:04 bknudson: will the requirements get automatically updated in setup.cfg?
18:12:19 dstanek: I checked the source for the update tool and it looks like it.
18:12:37 bknudson, do we gate the extras the same way we do on the requirements?
18:12:56 gyee: yes, I think that's how the tool works
18:13:03 should we do this in a feature branch to test the infra workflow first?
18:13:19 oh, so another example is fernet tokens, since there are specific requirements for those
18:13:26 or can we look at another project doing this?
18:13:43 I haven't looked to see if anyone else has actually implemented it yet
18:13:59 will devstack need to be changed at all to make sure its requirements are installed?
18:14:04 bknudson: with more than one desirable 'extra', I assume you can do something like `pip install keystone[ldap,fernet]`?
18:14:15 dolphm: yes, that's the format
18:14:49 but that wouldn't impact config, just deps?
18:15:08 there's no config change
18:16:14 cinder is also working on using this: https://review.openstack.org/#/c/203237/
18:17:12 cinder is a fantastic use case for this
18:17:45 bknudson: re: devstack ^
18:18:16 I proposed the change to devstack
18:18:31 devstack was pip installing ldappool itself
18:18:31 bknudson: do you need anything specific from us today, or was this topic just to raise awareness?
18:18:44 nice
18:18:49 this is to raise awareness amongst more than just the reviewers who look at these changes
18:19:40 if anyone wants to do further reading on extras:
18:19:42 #link https://pythonhosted.org/setuptools/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies
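[Editor's note: a minimal sketch of the [extras] mechanism discussed above, assuming pbr-style setup.cfg syntax. The ldap and memcache groups reflect the packages bknudson mentioned; the version pins are illustrative assumptions, not the contents of the actual patch.]

    # setup.cfg -- optional dependency groups declared as extras
    # (pins below are hypothetical; the real values would be synced
    # from global requirements by the update tool)
    [extras]
    ldap =
        python-ldap>=2.4
        ldappool>=1.0
    memcache =
        python-memcached>=1.48

    # deployers then opt in per feature, in the format confirmed above:
    $ pip install keystone[ldap,memcache]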
18:20:19 bknudson: ready to move on?
18:20:27 yep, move on
18:20:29 #topic State of feature/keystoneauth_integration branch
18:20:37 bknudson: o/ floor is still yours
18:20:47 in case others aren't aware
18:20:55 the feature/keystoneauth_integration branch is broken
18:21:02 why
18:21:03 dolphm: remember there is a policy topic :)
18:21:04 and I was wondering if anybody is willing to look into it
18:21:27 samueldmq: yes, it's next on the agenda
18:21:39 tempest fails -- EndpointNotFound: publicURL endpoint for alarming service not found
18:21:44 samueldmq: although there are no details - please fill in the agenda!
18:21:54 I have no idea why this is happening on the feature branch and not master
18:22:06 samueldmq: i moved it to the end
18:22:22 bknudson: can you paste the link?
18:22:28 so if you're actually interested in the keystoneauth work I'd suggest getting involved in it
18:22:35 #link http://logs.openstack.org/83/208583/1/check/gate-tempest-dsvm-neutron-src-python-keystoneclient/aa1fcbb/
18:22:37 https://review.openstack.org/#/c/208583/ is an example of a failing review
18:22:51 dolphm: done
18:22:54 dstanek: thanks
18:23:31 and my attempt at merging master into the branch fails neutron tempest the same way
18:23:41 marekd: are you volunteering to investigate? :)
18:23:57 i'd rather not promise anything this week :(
18:24:28 bknudson, alarming service, is that monasca?
18:24:46 let me ping them to see
18:24:57 the errors are in ceilometer
18:25:08 dolphm: http://logs.openstack.org/83/208583/1/check/gate-python-keystoneclient-python27/7ff54a1/console.html this fails due to oauthlib
18:25:15 oh wow, i didn't realize we have project monasca and project meniscus
18:25:26 we had it in ksc and it was fixed
18:25:29 dolphm: WHAT?!
18:25:31 the python errors are known
18:25:31 meniscus?
18:25:41 never heard of meniscus
18:25:44 marekd: logging as a service
18:25:48 it's the neutron tempest failure that's not fixed in the merge
18:25:53 it's mostly abandoned https://github.com/ProjectMeniscus/meniscus
18:26:14 bknudson: but the link you pasted fails on python
18:26:34 sorry I'm late
18:26:49 marekd: yes, I ran an experiment to see if master fails the same way or if it was introduced when I did the merge.
18:26:57 bknudson: the reviews to look into are here: https://review.openstack.org/#/q/status:open+project:openstack/python-keystoneclient+branch:feature/keystoneauth_integration,n,z , right?
18:27:03 the merge is here: https://review.openstack.org/#/c/207267/
18:27:42 marekd: yes, you can look at the open merges for the branch... all the newer reviews are failing.
18:29:02 alarming should be monasca, ceilometer is metering
18:29:37 * marekd soon there will be Linux Kernel as a Service....
18:29:53 gyee: I don't think monasca is running? http://logs.openstack.org/83/208583/1/check/gate-tempest-dsvm-neutron-src-python-keystoneclient/aa1fcbb/logs/ -- or maybe there's no log for it?
18:29:53 marekd: docker?
18:30:49 dstanek: along with Kubernetes, covered with OpenStack-branded Manila
18:31:17 i think we're off topic here :)
18:31:40 sure, sorry
18:31:51 bknudson, yeah, lemme sort it out with the monasca folks
18:32:02 gyee: ok, thanks
18:32:16 ok, we can move on, we don't need to debug it here
18:32:27 just raising awareness in case somebody was concerned about it
18:32:31 #info feature/keystoneauth_integration is failing gate
18:32:51 i assume stevemar and jamielennox would be interested as well - but they're both afk
18:32:54 so, moving on!
18:32:59 #topic Centralized policy distribution
18:33:01 ayoung, samueldmq: o/ floor is yours
18:33:09 alright
18:33:25 samueldmq: there's no links or anything on the agenda - might want to provide those first
18:33:26 specs are under review, all concerns addressed, we need some weight/decision on acceptance
18:33:39 #link https://review.openstack.org/#/c/197980/
18:33:44 #link https://review.openstack.org/#/c/134655/
18:34:04 samueldmq, that's the stuff you demoed at the midcycle, right?
18:34:10 I can proxy for jamie
18:34:10 gyee: yes
18:34:19 let's do this!
18:34:26 we need reviews... we have middleware + oslo.policy code
18:34:36 and I am working on the keystone server bit already
18:34:36 samueldmq: so the thing I hear most of all from people is whether this is a burning issue that needs to be solved now (and hence needs an SFE)… perhaps you could address that issue
18:34:51 dstanek is taking care of adding CacheControl support in ksclient [ed.: a sketch of this appears after the end of the log]
18:35:00 dolphm, since we have your attention: does the dynamic policy approach seem right to you?
18:35:18 I'd rather hash it out now before we get too far into implementation
18:35:35 ayoung: by dynamic, do you mean centralized distribution?
18:35:39 ayoung: I think so, I addressed all his concerns in the specs
18:35:42 dolphm: yes
18:35:49 the unified stuff kind of died in committee, and that was based on a dolphm comment at the midcycle back in san antonio
18:36:16 dolphm, yeah, centralized, but also unified
18:36:20 ayoung: then yes, but i can't speak for deployers
18:36:51 fair enough... I just want to make sure that we've got a consensus.
18:37:07 ayoung: there's no spec we are voting on here for unification, is there? I hope not
18:37:18 what was the outcome of the mailing list discussion? i don't think i ever saw the thread
18:37:23 dolphm, one issue, which I know you commented on in code review, is the "deleted project" issue
18:37:23 henrynash: we just targeted the distribution for L
18:37:33 henrynash: which is what those 2 specs are about
18:37:33 samueldmq: ++
18:37:45 henrynash: i imagine unification would be an operator decision - nothing to do with the implementation
18:37:46 which is what ...
18:37:54 if we scope everything, what do we do to unlock a resource where we deleted the project in keystone but the remote service didn't get or process the notification?
18:38:03 henrynash: although it'd be great to publish a unified policy file, or a tool to create one
18:38:42 what does project deletion have to do with policies?
18:38:48 dolphm, ++ providing a unified one was my goal. In order to do that, there are lots of nits to knock down
18:38:54 dolphm, who would be the consumer?
18:39:18 gyee, policy needs to provide access to the API in order to delete a project's resources
18:39:27 ayoung: i think services should be cleaning up their own resources in response to deleted project notifications - not some cleanup process making HTTP API calls through an authz layer.
18:39:30 if I need to get a scoped token, but that project is deleted, I can't get a scoped token
18:39:34 gyee: yes I think we're off topic ...
18:39:38 ayoung: so, that example seems like a non-issue to me
18:39:44 dolphm, agreed, but there is a long way to go until they can do that
18:39:46 sorry, got distracted
18:39:50 back now
18:39:53 and there is always the possibility of a missed notification
18:39:59 how do we handle it in the interim?
18:40:01 ayoung, dolphm: as you know, I am fully against a unified policy file… that doesn't mean there aren't other ways (in the future) of providing common rules etc. across the policies of different projects
18:41:05 henrynash, "Fully against?"
18:41:33 we've had a spec-freeze exception request, spec-freeze time is over and we didn't reach a decision
18:41:44 dolphm, so, let's assume for now that there are missed notifications etc... the only other option I can see is "admin role on admin project in default domain can delete anything"
18:41:51 should I submit an FFE now? are cores ok with this feature landing in Liberty?
18:42:06 ayoung, samueldmq: regardless of the nits and/or specifics we are arguing about… the question here is whether we have a burning issue that requires an SFE… i.e. is the problem one that causes us to make an exception to our working practices. I support the centralised approach, I just haven't seen the evidence for the urgency
18:42:19 ayoung: so, that sounds like authorization at the service level -- not specific to any tenant
18:42:37 I'd like to see some more buy-in from an operator
18:42:48 ayoung, samueldmq: which customers are shouting about the problem?
18:42:53 lbragstad, gyee is going to the operators meetup. He'll proxy
18:42:57 I am attending the ops midcycle
18:42:58 henrynash: well, the code is under review already, there are just some missing bits
18:43:06 henrynash: we just need reviews, nothing more
18:43:09 i agree with henrynash. there is no reason to rush this
18:43:15 samueldmq: with respect, that doesn't answer the question
18:43:23 henrynash: we weren't able to reach a decision on the SFE during L2
18:43:26 :/
18:43:32 henrynash, the urgency is that we can't seem to make any progress. Either reviews are too big and then get split, or too small and then provide less value... can't win.
18:43:43 So, we've reduced scope and reduced scope
18:43:53 ayoung: so why would we do this in an X-3 cycle?
18:43:56 and we have it down to "fetch from centralized" as simple as possible
18:43:58 gyee: could you bring it up? i'd love to have some feedback
18:44:06 henrynash, I proposed this back pre-Summit
18:44:09 this stuff takes time
18:44:10 dstanek, yes, will do
18:44:46 The win for the operators, when this is all in, is that they can delegate to their own users the ability to share resources
18:45:22 If I am a consumer of a company cloud, and I need to share with... say jamielennox, who also works here... I should not need to go to the central admin to get him access to my project. All of this is driving toward that
18:45:24 ayoung, json is still going to be a challenge for deployers.
18:45:41 david8hu, may that be the worst we have to face
18:46:04 ayoung: I know… but we either have feature freezes for a reason, or we don't
18:46:43 henrynash, this change is in oslo messaging at this point, which was outside the "locked to release" in the past
18:46:58 david8hu: there's a conversation on openstack-dev about building a proper UI for policy editing
18:47:30 #link http://lists.openstack.org/pipermail/openstack-dev/2015-August/071184.html
18:47:44 henrynash, does "trial and error" count as a reason? We've had it at different points in the cycle in the past.
18:48:18 ayoung, the debug aspect of things is going to be more challenging than it is today, but it can be addressed through proper documentation.
18:48:49 maybe horizon can solve all the issues that dynamic policies is targeting
18:48:58 so... can we please make progress on this?
18:49:11 dolphm, +1, I would love to join that conversation. If you can send me links offline, that would be great.
18:49:24 david8hu: the link is above
18:49:29 ayoung: so introducing something as experimental seems OK… again, not sure in an X-3 cycle where we'll be cramming to get other already-approved items in
18:49:30 saw it
18:49:34 thx
18:50:03 at least approve the spec for the backlog?
18:50:19 ayoung: ++
18:50:43 ayoung, henrynash: and if we come with all the code implemented in the FFE request, we can evaluate it apart from the specs approval
18:50:49 ayoung: sure, that gets my vote… as I say in my comment on the patch, I support this, just not sure why it needs the urgency of getting into L
18:52:13 dolphm: you agree ^
18:52:15 ?
18:52:17 henrynash, because if it is in L then people can use it
18:52:24 and until people use it we don't get feedback
18:52:26 just download the patch and test drive it, if it's worth the money, approve it :)
18:52:33 and until we get feedback we are doing ivory tower development
18:52:44 samueldmq, is it considered ok just to reference another spec? "Please refer to the Keystone Middleware spec"
18:52:54 and code that is not in production is like excess inventory
18:53:06 amakarov: yes it is, so I don't need to repeat all the text there
18:53:09 ayoung: people that are interested will review and test patches
18:53:15 amakarov: it was a request from some cores already
18:53:27 amakarov: yes. i complained that there was too much of the same content in the specs
18:53:27 ayoung, +1, we need ways to get all the data points we need.
18:53:38 i couldn't see what was meant to be the actual change
18:53:47 samueldmq, dstanek: thanks, +1 from me then :)
18:53:54 dstanek: ++
18:54:28 if we build it, will they come?
18:54:34 so yes, I think those specs should be ready to be approved to the backlog, since all the concerns from core reviewers were addressed
18:55:00 dstanek, unless we build it, they can't come
18:55:17 dstanek, yes, I bet my beer money on it
18:55:22 i really do think we need to find an operator that we know will try it because they want it
18:55:30 dstanek: ++
18:55:30 dstanek: ++
18:55:34 gyee: will you have HP start using it in test labs?
18:55:45 dstanek, yes!
18:55:49 I just think it will help guide the implementation, that's all
18:56:06 dstanek, ayoung, we need a very good PR person to sell it :)
18:56:11 dstanek, this is a building block. We can't give them what they really need yet. They might or might not try this, but if we don't get it out there, they certainly won't
18:56:18 ayoung: you don't need to merge things to master to get feedback from early adopters
18:56:26 david8hu: no! that means we're doing it wrong
18:56:49 dolphm, I'm not merging code. I'm trying to get the spec in
18:57:03 that means agreement on the direction
18:57:08 dstanek, no one is using the v3 policy api today. I am hoping we can do better this time.
18:57:08 ayoung, we also have the big tent now, just saying :)
18:57:09 gyee: have you gone through all of the proposals?
18:57:37 david8hu: then i would question whether it was needed :-)
18:57:40 dstanek, you have a pop quiz coming?
18:57:51 ayoung: are there any early stakeholders for this besides yourself and samueldmq?
18:58:18 gyee: no, but if you're going to help drive as a sort of 'business owner' you should go through them and see if they are doing what you need
18:58:37 dolphm: maybe gyee is a stakeholder
18:58:51 gyee: ?
18:59:01 dstanek, dolphm: yes, I am involved
18:59:14 gyee: as a developer or from an operations perspective?
18:59:25 * gyee puts on his operator hat
18:59:25 * dstanek hears the chanting. 'do it, do it, do it'
18:59:36 dstanek: o/
19:00:05 so we get the specs approved and I will come back with the FFE request
19:00:05 gyee: i'm not sure if that's a yes or no
19:00:09 then we analyze it next meeting
19:00:14 dolphm: ^ sounds good?
19:00:15 dolphm, as an operator
19:00:20 eating the dog food
19:00:23 dolphm, so you are implying that no features are going to go in upstream until they are in active deployment somewhere?
19:01:12 ayoung: of course not
19:01:24 I think time's up
19:01:28 ayoung, samueldmq: but the issue i've seen on this topic for the past 6+ months is a lack of operator interest. if we're overlooking those voices, please point them out / have them speak up in the spec review.
19:01:28 cutting edge, how we roll
19:01:37 and yes, time. samueldmq: thanks
19:01:40 #endmeeting
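[Editor's note: the centralized policy distribution discussed above depends on clients re-fetching policy from keystone cheaply; 18:34:51 mentions CacheControl support in ksclient. Below is a minimal sketch of that caching idea using the real CacheControl wrapper for a requests session; the URL and the use of the v3 policy endpoint are the editor's assumptions, not the proposed implementation.]

    import requests
    from cachecontrol import CacheControl

    # Wrap a requests session so GETs honor Cache-Control headers from the
    # server; an unchanged policy document can then be served from a local
    # cache instead of hitting keystone on every check.
    session = CacheControl(requests.Session())

    POLICY_URL = "http://keystone.example.com:5000/v3/policies"  # placeholder
    headers = {"X-Auth-Token": "<a valid scoped token>"}

    first = session.get(POLICY_URL, headers=headers)   # goes to the server
    second = session.get(POLICY_URL, headers=headers)  # may be served from cache
    print(first.status_code, second.status_code)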