18:00:21 <stevemar> #startmeeting keystone
18:00:22 <openstack> Meeting started Tue Dec 6 18:00:21 2016 UTC and is due to finish in 60 minutes. The chair is stevemar. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:23 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:26 <openstack> The meeting name has been set to 'keystone'
18:00:34 <lamt> o/
18:01:13 <rderose> o/
18:01:14 <bknudson> hi
18:01:19 <breton> o/
18:01:25 <stevemar> #topic announcements
18:01:38 <stevemar> hello all!
18:01:41 <stevemar> Next week we are cutting Ocata-2 -- Week of Dec 12-16 -- This is also spec freeze week, specs must merge by then.
18:02:06 <stevemar> Any 'contentious' feature already approved, but not completed, will get bumped to Pike if it doesn't merge.
18:02:10 <stevemar> Contentious meaning: touching a critical path (token validation / issuance), affecting many components, potentially changing output of API calls.
18:02:36 <dstanek> hey
18:02:37 <stevemar> I don't think theres anything contentious that risks getting bumped (maybe the policy stuff?) but most of it is isolated
18:02:59 <stevemar> most approved features* are isolated
18:03:05 <ayoung> Oyez oyez
18:03:51 <stevemar> i think i started too soon? did i lose anyone?
18:04:08 <lbragstad> stevemar i think you're good
18:04:16 <stevemar> okie doke
18:04:17 <samueldmq> stevemar: so 'contentious' features (the impl) must be merged by o-2 ?
18:04:36 <lbragstad> samueldmq i think the spec needs to be merged
18:04:40 <lbragstad> by o-2
18:04:49 <stevemar> yes, i don't think there are any, the contentious ones were the token refactoring and the expiry window stuff jamie was working on
18:04:57 <ayoung> links?
18:04:59 <stevemar> those are all merged
18:05:10 <stevemar> agenda link: https://etherpad.openstack.org/p/keystone-weekly-meeting
18:05:21 <samueldmq> stevemar: okay, but we still talking about the specs, right ?
18:05:24 <ayoung> that is the link I was looking for
18:05:41 <samueldmq> lbragstad: kk just confirming, thanks
18:05:56 <stevemar> samueldmq: yes. all specs must be approved by EOW next week
18:06:04 * samueldmq nods
18:06:16 <stevemar> otherwise they get the boot
18:06:46 * stevemar waits for more questions
18:06:57 <ayoung> Trust Scope Extensions should not go forward. It is a good idea, but untenable as written. It can't be Trust based, needs to be at the token itself...
18:07:18 <ayoung> Extend user API to support federated attributes looks pretty good
18:07:27 <samueldmq> the ones I am worried the most are the trust ones
18:07:32 <samueldmq> ayoung: ^
18:07:47 <ayoung> samueldmq, Spec: Trust Scope Extensions [jgrassler]
18:07:48 <ayoung> https://review.openstack.org/#/c/396331/ you mean?
18:07:51 <samueldmq> #link https://review.openstack.org/#/c/396331/
18:07:57 <samueldmq> #link https://review.openstack.org/#/c/396634/
18:08:04 <stevemar> ayoung: there are two trust ones
18:08:16 <ayoung> samueldmq, yeah, I had a long talk with jgrassler this morning. My position is unchanged
18:08:28 <stevemar> ayoung: theres also "lightweight" trusts: https://review.openstack.org/#/c/396634/
18:08:52 <ayoung> stevemar, that is oauth
18:09:03 <samueldmq> ayoung: https://review.openstack.org/#/c/396331/ is solved with our OATH in your opinion?
18:09:04 <ayoung> we can do those today but should merge trusts and oauth
18:09:14 <ayoung> samueldmq, no the other one is
18:09:20 <ayoung> 396634/ is oauth
18:09:42 <samueldmq> ayoung: ok, and what about "Added trust-scope-extensions"
18:10:26 <stevemar> samueldmq: that one also has negative feedback
18:10:49 <stevemar> anyway, some specs are close, others are not. keep reviewing them
18:10:57 * samueldmq nods
18:11:18 <stevemar> moving along
18:11:20 <samueldmq> stevemar: we might want to have a deeper discussion in next meeting on those that haven't been merged by that time
18:11:27 <samueldmq> stevemar: ++
18:11:27 <stevemar> samueldmq: yep
18:11:32 <stevemar> #topic Introduce Annapoornima Koppad [annakoppad]
18:11:40 <stevemar> raildo, rodrigods ^
18:12:00 <raildo> stevebaker, thanks
18:12:02 <ayoung> she have an irc nick?
18:12:03 <rodrigods> so... welcome annakoppad_
18:12:03 <raildo> stevemar, *
18:12:16 <ayoung> ah..
18:12:17 <chrisplo> are you saing 396634 we're calling OAUTH instead of calling it trust something?
18:12:20 <ayoung> welcome
18:12:21 <annakoppad_> Thanks, raildo!
18:12:30 <ayoung> chrisplo, it is already implemented
18:12:32 <lbragstad> annakoppad_ o/
18:12:35 <ayoung> has been for years
18:12:42 <rodrigods> she is our new outreachy intern, her project starts today!
18:12:49 <samueldmq> ayoung: chrisplo let's continue that in -keystone, we've changed topic already. thanks
18:13:07 <rodrigods> the idea is to add a new job to our CI to handle LDAP scenarios
18:13:08 <stevemar> welcome annakoppad_
18:13:18 <samueldmq> that's great!
18:13:18 <raildo> rodrigods, ++ she will be working with us (at least) march 6 :)
18:13:19 <annakoppad_> Thanks! stevemar!
18:13:26 <samueldmq> raildo and rodrigods as mentors ?
18:13:30 <rodrigods> yep
18:13:31 <stevemar> samueldmq: yep!
18:13:32 <raildo> samueldmq, yes
18:13:36 <samueldmq> annakoppad_: welcome aboard!
18:13:41 <stevemar> annakoppad_: good choice of mentors :)
18:13:45 <annakoppad_> hopefully, more as well, as I do not intend to stop coding ever at all!
18:13:52 <samueldmq> nice nice, good to see it continuing, congrats on the initiative rodrigods and raildo
18:13:54 <stevemar> annakoppad_: :D
18:13:57 <annakoppad_> at raildo! rodrigods
18:14:03 <rodrigods> samueldmq, ++
18:14:08 <raildo> samueldmq, your welcome.
18:14:13 <annakoppad_> thanks stevemar and samueldmq
18:14:25 <rodrigods> so... her project is to help keystone, but involves lots of devstack, project-config, etc
18:14:39 <dstanek> annakoppad_: welcome
18:14:48 <annakoppad_> hello dstanek!
18:14:49 <stevemar> annakoppad_: looking forward to reviewing your patches, please don't be shy and ask questions in #openstack-keystone if you need a hand
18:14:59 <annakoppad_> yes stevemar!
18:15:02 <rodrigods> stevemar, ++
18:15:09 <stevemar> thanks rodrigods and raildo for doing this
18:15:22 <stevemar> morgan: you around?
18:15:25 <raildo> annakoppad_, so, any doubts, you can ask for help in the #openstack-keystone, and I know that you will find someone to help :)
18:15:34 <raildo> stevemar, no problem :)
18:15:53 <annakoppad_> great, raildo!
18:16:13 <stevemar> i don't think morgan is around, so we can skip the next topic, taskmanager in keystoneauth
18:16:28 <stevemar> plus, the work is well under way in https://review.openstack.org/#/c/362473/
18:16:28 <rodrigods> think it is not only my opinion that having the LDAP job is a great advance for us :)
18:16:37 <ayoung> annakoppad_, BTW, it turns out the devstack LDAP code is not currently working
18:16:49 <ayoung> just tried it last week...
18:16:58 <stevemar> ayoung: yeah, its bit rotted a bunch
18:17:07 <annakoppad_> I was trying out running the devstack..will try it a little bit
18:17:11 <raildo> ayoung, out first step will be fix it
18:17:17 <raildo> our*
18:17:34 <ayoung> raildo, we can discuss. Lets set up a time to talk it through
18:17:41 <stevemar> ayoung: i have no doubt that raildo and rodrigods have a plan ;)
18:17:47 <raildo> ayoung, ++
18:17:49 <annakoppad_> stevemar, and ayoung, never mind, I will help raildo and rodrigods fix whatever is broken
18:17:49 <stevemar> that works too
18:17:52 <ayoung> stevemar, I['ve been talking with them about it
18:18:00 <stevemar> ayoung: cool cool
18:18:18 <stevemar> time for our super fun topic
18:18:26 <stevemar> #topic Custom project IDs
18:18:28 <ayoung> RBAC?
18:18:33 <ayoung> stevemar, AH
18:18:36 <ayoung> ok so I -2 that
18:18:40 <ayoung> and here is why
18:18:41 <stevemar> agrebennikov, ayoung, but on your boxing gloves
18:18:59 <ayoung> stevemar, so the argument that agrebennikov gave me is he is trying to do multi site
18:19:17 <ayoung> and syncing stuff at the database layer (Galera) puts a lot of load on him
18:19:18 <rodrigods> isn't the intent behind shadow mapping to fix that?
18:19:24 <ayoung> rodrigods, not quite
18:19:26 <ayoung> let me go on
18:19:26 <agrebennikov> and I was very surprizrd to get -2 on the nexd day after we agreed on everything telling the truth
18:19:35 <samueldmq> #link https://review.openstack.org/#/c/403866/
18:19:36 <stevemar> theres a good mailing list thread going on: http://lists.openstack.org/pipermail/openstack-dev/2016-December/108466.html
18:19:37 <ayoung> agrebennikov, that was because we did not think it through
18:19:51 <ayoung> what you are proposing is going to put some very had to debug errors into the system
18:20:01 <ayoung> we have an assumption right now that a token has a single project ID in it
18:20:14 <ayoung> and from that project ID, we deduce the set of roles in a token
18:20:31 <ayoung> now, ignoring all the work that everyone else is doing around this, lets thinkg that through
18:20:35 <ayoung> say we have 2 sites
18:20:47 <ayoung> and we add a project to one, and give the user some role assignemtnes there
18:20:59 <ayoung> then, we used this approach to create the project on the second site
18:21:02 <cbits> ayoung- I would like to propose a differnt use case. Where in I would have many independent keystones but many central reporting, finance and billing apps that only understand one project_Id I also believe that doing this in the DB would be akin to a hack and the API can and IMHO should support it. Also if we do DB replication and that gets out of sync the user experance is quite horable as well. ;)
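
The change being debated here (https://review.openstack.org/#/c/403866/) would let an orchestration layer supply the project ID at creation time so the same ID can be reused across sites. A minimal sketch of what such a request might look like, in Python, follows; the "id" field in the body is the proposed and contested addition and is illustrative only, since keystone's v3 API does not accept it today.

import json

import requests

# Sketch only: the endpoint, token, and IDs are placeholders. The caller-
# supplied "id" is what the spec proposes; stock keystone generates it.
KEYSTONE = "http://keystone.example.com:5000/v3"
TOKEN = "gAAAAAB..."  # an admin-scoped token for this deployment

body = {
    "project": {
        "name": "central-billing",
        "domain_id": "default",
        # The contested part: the orchestration layer supplies the same ID
        # at every site so external reporting/billing apps see one project_id.
        "id": "7b2d3f3db8e34e79a1c0f5c8a1b2c3d4",
    }
}
resp = requests.post(
    KEYSTONE + "/projects",
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    data=json.dumps(body),
)
print(resp.status_code)
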
18:21:02 <knikolla> o/ (i'm late)
18:21:09 <ayoung> and...we say that tokens issued at one site are valid at the other
18:21:15 <ayoung> which we could do via fernet
18:21:21 <ayoung> but...we screwed up on the second site
18:21:31 <ayoung> and gave the user a different set of roles
18:21:42 <ayoung> now, the token validation response will depend on which keystone you go to
18:21:59 <ayoung> but the tokens will be valid at all service endpoints
18:22:00 <lbragstad> ayoung that's was a big part of breton's concern
18:22:13 <ayoung> lbragstad, oh, we totally have breton to thank for seeing this issue
18:22:31 <stevemar> lbragstad: ayoung won't you end up with those error if you choose to create projects with id?
18:22:35 <dstanek> cbits: so you're saying that you trust hand rolled replication over DB replication?
18:22:37 <ayoung> what agrebennikov is asking for (and is reasonable) is a way to sync multiple Keystones. But it is a Hard problem
18:22:40 <ayoung> NP-Hard
18:23:02 <lbragstad> stevemar well - you can.. it depends on how you create the role assignments in each site
18:23:09 <agrebennikov> so with this just remove the "one token for all" usecase
18:23:15 <breton> i still don't see why federation is not an option
18:23:16 <agrebennikov> keys are different
18:23:25 <rodrigods> breton, ++
18:23:30 <cbits> dstanek: I trust the orchestration layer I will put in place to make this work over DB replication
18:23:32 <agrebennikov> tokens are only valid in one region
18:23:42 <cbits> db replacation to 30 sites is a problem
18:23:54 <cbits> I may only need to place a project in 4 of the 30
18:23:57 <ayoung> agrebennikov, remove the project ID from the equasion and you have the same solution: make sure everything matches between two keystone
18:23:59 <ayoung> s
18:24:05 <breton> tokens are meant to be valid only inside a single keystone installation
18:24:17 <ayoung> you must restrict yourself to using only names, no ids, when requesting tokens
18:24:24 <cbits> In my use case we are talking about > 30 regaions
18:24:41 <cbits> Names always change. thats why we love id's
18:25:03 <ayoung> cbits, then you need IDS, and complete replication forthe entire data store
18:25:05 <dstanek> cbits: imo partial replication makes it worse. hard to track down bugs and an increased chance of running into collisions
18:26:04 <agrebennikov> breton, I keep struggling to understand how federation helps over here....
18:26:12 <cbits> If my orchestration creates the ID's and we have basic validation at the API point whow wil these independent keystones colide?
18:26:15 <agrebennikov> but lets move it aside, I'll ask you in person
18:26:46 <ayoung> cbits, and then when someone makes a change at one site outside of your orchestration?
18:26:50 <lbragstad> agrebennikov if we're going to talk about federation why no do it here, in the open?
18:26:54 <lbragstad> not*
18:26:56 <stevemar> i say we make this a config option disabled by default, and let agrebennikov try it out, if it blows up we deprecate and remove the option -- document the limitations and state it's experimental
18:27:11 <cbits> the orchestration will have the admin creds.
18:27:18 <ayoung> cbits, and only the admin?
18:27:24 <ayoung> I mean, and only the orchestration?
18:27:27 <ayoung> no other admins?
18:27:30 <cbits> yes
18:27:31 <dstanek> stevemar: the problem for me is that it will blow up over time not immediately
18:27:53 <ayoung> cbits, then make your databse read only on the regions and do it at the replication level
18:27:54 <cbits> because in a complex system not only do we need to create the project but we need to also bind it to internal budjets, cmdb, etc
18:28:01 <ayoung> stevemar, its a nightmare
18:28:04 <dstanek> cbits: how do you keep track of used ids and names?
18:28:13 <cbits> AD Grups are used.
18:28:20 <cbits> no UID's in openstack.
18:28:22 <cbits> just groups
18:28:36 <stevemar> cbits: and that is replicated (not at the API layer)
18:28:53 <stevemar> theres a good argument between trying to replicate and sync at the various layers
18:29:25 <cbits> no, If say project X needs access to site 1, 2, and 3 we will simply add the project and role (with ad group there) Ad is centeralized in the company
18:29:33 <stevemar> mfisch had a nice reply here: http://lists.openstack.org/pipermail/openstack-dev/2016-December/108499.html
18:29:49 <dstanek> agrebennikov: federation won't completely help until shadow mapping exists
18:30:05 <cbits> DB replication works well with 3 or 4 sites. it gets really nasty when you have lots of sites.
18:30:06 <ayoung> stevemar, this is one of the sites that was using Assignment in LDAP
18:30:13 <breton> dstanek: yes it will
18:30:25 <cbits> We only use AD for auth and autz
18:30:29 <agrebennikov> even that... federation is for something else. I don't see any application of federation in my usecase
18:30:32 <cbits> user / password and group membership
18:30:36 <dstanek> breton: there's still no way to create projects right?
18:30:52 <agrebennikov> neither projects nor groups
18:30:57 <agrebennikov> if I'm not mistaken
18:31:00 <ayoung> dstanek, I think what he's saying is they want to have the same kind of behavior with LDAP as you are proposing via Federation:
18:31:05 <dstanek> agrebennikov: federation is how separate keystone installations collaborate
18:31:07 <cbits> I would love to have hte project_ids be unique per site. But I live in a wold where the cloud is intergrated with many systems. That does not chage over night.
18:31:21 <ayoung> first time you hit a Keystone, create the resources there. But they want to make sure the resources match multi-site
18:31:41 <breton> dstanek: projects will be created the same way agrebennikov is going to do it now. He suggests to create 10 projects, each in every location
18:31:46 <dstanek> ayoung: isn't that the goal of shadow mapping?
18:31:51 <breton> dstanek: just with static id
18:31:54 <cbits> Just the project_id needs to be the same so reporting and other systems can see it as one thing.
18:32:12 <ayoung> dstanek, is that sufficient, though? It is also the "create the assignment on the fly" part
18:32:21 <cbits> not because its cool. but because other systems that will use that data and intergrations are not ready for project_id[]
18:32:40 <breton> dstanek: and then manually create assignments in each location
18:32:51 <ayoung> cbits, what you are saying, I think, is you need a way to correlate projects between multiple deployments?
18:32:54 <breton> dstanek: which can already be done with federation, but cleaner and more predictable
18:33:22 <samueldmq> ayoung: ++
18:33:24 <stevemar> agrebennikov: cbits have you guys tried this already? do you have results? can you use a custom interface?
18:33:37 <ayoung> they also tried K2K and found it too slow
18:33:52 <samueldmq> ayoung: optimize it ?
18:33:53 <ayoung> Multi-site is going to be a real issue
18:34:02 <cbits> yes for many products, OpenBook, ceilometer, custom CMDB, etc.
18:34:03 <ayoung> samueldmq, perhaps.
18:34:06 <breton> how slow is too slow?
18:34:18 <ayoung> breton, "16 times" was the agrebennikov quote
18:34:29 <cbits> I have been working with agrebennikov: to find a good way to do this. That is why I like his patch
18:34:30 <dstanek> ayoung: the assignments should also be part of that or automatically creating a project is useless
18:34:32 <ayoung> but..again, not a complete solution anyway
18:34:46 <ayoung> cbits, it seems to me that what you
18:35:09 <dstanek> 16x for obtaining the initial token?
18:35:12 <ayoung> need is a way to syncronize the remote sites and perform "eventually consistent" semantics
18:35:19 <ayoung> dstanek, ++
18:35:28 <samueldmq> wow
18:35:46 <agrebennikov> please, lets remove my emotional paragraph from the topic please :/
18:35:47 <ayoung> IE. I usually work with region A but today I am working with region B and I need everything over there to mirror Region A for me
18:36:53 <samueldmq> dstanek: I agree with you. user_id must be the same too. assignments must be the same too.
18:36:55 <stevemar> agrebennikov: if you're referring to the etherpad, you can edit that.
18:37:01 <ayoung> cbits, are new projects and assignments only every going to be written via your orchestration tool and in a centralized location, and then you need to sync them down to the remote Keystones?
18:37:12 <agrebennikov> I mean 16x
18:37:20 <cbits> ayoung: yes
18:37:39 <stevemar> agrebennikov: cbits what's preventing you from creating a custom interface for create_project?
18:37:48 <ayoung> cbits, ok...so it sounds like what you need is along these lines:
18:38:00 <ayoung> 1. disallow local writes on a certain set of tables:
18:38:07 <ayoung> projects, assignements, to start
18:38:27 <stevemar> i can't see this landing upstream given the concerns, it's way too contentious. but i don't want to stop you from attempting it yourself
18:38:41 <stevemar> we keep going around in circles
18:38:55 <lbragstad> agrebennikov did you experience a 16x performance degradation in your testing?
18:39:03 <ayoung> 2. push data from a central orchestration hub (say a keystone server, but that is limited to just accessfrom the orchestration engine) down to the remote keystones for that data
18:39:13 <ayoung> AD is external and used for authN
18:39:26 <dstanek> stevemar: agreed... it would be nice to come back with some experience on this being deployed and discuss again in the future
18:39:30 <samueldmq> stevemar: is the ML discussion still flowing ?
18:39:37 <lbragstad> samueldmq yeah
18:39:43 <ayoung> With the exception of Token data (which now should be ephemeral) I think you can get away with a completely read-only Keystone everywhere but in the auth Hub
18:39:54 <ayoung> will not allow for self service of anything, though
18:40:02 <dstanek> agrebennikov: how often did you need to do the operation that took 16x?
18:40:07 <samueldmq> lbragstad: stevemar so perhaps that's the right way to continue the discussion
18:40:16 <cbits> We can patch the code, thats easy. however it seems that an optional pram that is validated seems resonable.
18:40:17 <ayoung> dstanek, he was just saying K2K took a lot longer than getting a token
18:40:23 <ayoung> and I don't think it solves his needs anyway
18:40:26 <agrebennikov> I did not of course. But keeping in mind you always have to make inter-services calls between different geo locations, and since I tried it before with one of the external auth system ldap/federation manner I can say for sure that it's way slower
18:40:54 <lbragstad> agrebennikov so 16x is arbirtrary
18:41:05 <stevemar> cbits: unfortunately not :(
18:41:24 <stevemar> cbits: once we open up the possibility, others will start using it
18:41:38 <cbits> yes, I get where you are coming from.
18:41:45 <agrebennikov> yes sir. also another usecase which I described yesterday and nobody pointed an attention - replicated images should be always replicated across zoned into the same projectIDs
18:41:49 <cbits> I was hoping to solve my use case and not carry a patch.
18:41:51 <agrebennikov> and this is not a problem of glance
18:41:54 <dstanek> ayoung: yeah, i was just trying to get a handle on how slow it actually was
18:42:35 <stevemar> cbits: i think this is one of the few instances where i'd advocate that you folks try it out first and pass along any results you come up with
18:42:45 <ayoung> agrebennikov, I really think you need to do this at the database level. Sorry. There is just too much of a data integrity problem otherwise. It is more than just projects.
18:43:03 <samueldmq> ayoung: +
18:43:17 <dstanek> agrebennikov: i don't know anything about glance, but from the thread i thought that the image ids were different
18:43:44 <lbragstad> i meant to follow up with sigmavirus on that
18:44:07 <agrebennikov> nope. replicator needs to make sure that whenever the image appears in regionA - it moves it to the regionB. Obviously it may only rely on the project
18:44:48 <ayoung> OK...I don't think we are moving ahead with this one. Sorry folks.
18:44:52 <agrebennikov> dstanek, and obviously not the project name
18:45:04 <agrebennikov> ayoung, got for it.... :(
18:45:07 <stevemar> yeah, we're just going around in circles at this point
18:45:07 <dstanek> agrebennikov: but you're not comparing apples to apples. if they don't guarantee the ID and we are saying we can't then you already have that ability
18:45:09 <stevemar> :\
18:45:29 <samueldmq> agreed, and still have 3 topics :(
18:45:32 <cbits> Thanks for your help.
18:45:38 <samueldmq> think it's better to continue on the ML
18:45:40 <samueldmq> and review
18:45:41 <stevemar> sorry agrebennikov and cbits
18:45:54 <agrebennikov> for sure we will. Thanks everyone
18:46:09 <samueldmq> agrebennikov: cbits thanks
18:46:19 <ayoung> Last up ... RBAC
18:46:32 <stevemar> ayoung: lol, thats not even on the agenda :P
18:46:40 <ayoung> stevemar, yes it is
18:46:45 <samueldmq> ehehe
18:46:51 <ayoung> oh..last weeks
18:46:54 <stevemar> ayoung: ^_^
18:47:05 <samueldmq> it's on last week's agenda
18:47:13 <ayoung> quick vote on whether we think it can go in? RBAC in MIddleware [ayoung]
18:47:13 <ayoung> https://review.openstack.org/#/c/391624/
18:47:21 <ayoung> since we are talking specs
18:47:38 <lbragstad> i need to look at the latest revision
18:47:38 <stevemar> i haven't looked into it enough, abstaining
18:47:44 <ayoung> If the group opinion is to support it, I'll push. If not, I'll back off.
18:47:53 <ayoung> biggest difference is naming
18:48:21 <ayoung> instead of URL_PAttern, calling the main entity: access_rule
18:48:51 <samueldmq> making role checks in middleware is technically possible and I like it
18:48:55 <ayoung> also added in a default scheme
18:49:00 <samueldmq> not sure about the steps to get there. like splitting the policy etc
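
A rough illustration of the access_rule idea being discussed, sketched only from this conversation rather than from the spec (https://review.openstack.org/#/c/391624/); the field names, the matching logic, and the placement of the default scheme are assumptions.

import re

# Hypothetical access_rule list for RBAC-in-middleware: each rule maps a URL
# pattern plus HTTP verb to the role required to call it. The structure is a
# guess based on the meeting discussion, not the spec's final format.
access_rules = [
    # A specific rule, keyed the way API routes are written.
    {"pattern": "/v2/images/{image_id}/reactivate", "verb": "POST", "role": "admin"},
    # The "default scheme" mentioned below: any pattern, any verb -> Member.
    {"pattern": "ANY", "verb": "ANY", "role": "Member"},
]

def _matches(pattern, path):
    """Treat {name} segments as wildcards, e.g. {image_id}."""
    if pattern == "ANY":
        return True
    regex = re.sub(r"\{[^/]+\}", "[^/]+", pattern)
    return re.fullmatch(regex, path) is not None

def required_role(path, verb):
    """Return the role demanded by the first matching rule; the ANY/ANY default is last."""
    for rule in access_rules:
        if _matches(rule["pattern"], path) and rule["verb"] in ("ANY", verb):
            return rule["role"]
    return None

# required_role("/v2/images/abc123/reactivate", "POST") -> "admin"
# required_role("/v3/users", "GET") -> "Member"
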
18:49:08 <ayoung> { pattern: "ANY", verb: "ANY" role: "<role>"}
18:49:09 <samueldmq> ayoung: I also owe you a review
18:49:29 <ayoung> times running out before spec freeze. I think this moves us ahead a lot.
18:49:51 <ayoung> It is based on, quite simply, years of discussions, trial, and feedback
18:49:59 <stevemar> ayoung: so you'll have a bunch of predefined responses for URL patterns ?
18:50:20 <ayoung> stevemar, I don't quite understand the question
18:50:25 <ayoung> responses?
18:50:31 <lbragstad> 10 minute mark
18:50:47 <stevemar> ayoung: i'll talk to you about it after the meeting
18:50:49 <ayoung> we can figure out what the URL patterns them selves are from the api-regs
18:50:50 <ayoung> refs
18:50:53 <ayoung> ok...
18:51:03 <ayoung> I'll stick around. surrendering the conch
18:51:09 <samueldmq> ayoung: does it define a brand new syntax for the new RBAC-only policy for the middleware ?
18:51:14 <samueldmq> ok, later on -keystone
18:51:37 <stevemar> ayoung: it looks decent though, and isolated
18:51:47 <ayoung> yes...like the routes in keuystone: 'pattern': '/v2/images/{image_id}/reactivate',
18:51:53 <stevemar> ayoung: i'm just looking for stuff that doesn't touch critical paths at this point
18:52:13 <ayoung> we can default just about everything to "Member" with no loss of security, too
18:52:42 <stevemar> spilla: around?
18:52:50 <spilla> yup
18:52:59 <stevemar> ayoung: switching to spilla's topic
18:53:07 <stevemar> gotta give the new folks the floor!
18:53:14 <stevemar> #topic PCI-DSS Query Password Expired Users
18:53:33 <stevemar> spilla: sounds like you're asking advice on an implementation detail
18:53:41 <spilla> correct
18:53:53 <rderose> I like /v3/users?password_expires_at=gt:{timestamp}, which is what was in the spec and is consistent with the guidelines:
18:53:53 <rderose> http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html#filtering
18:54:38 <ayoung> That will work
18:54:44 <ayoung> any driving reason not to do it?
18:54:48 <stevemar> rderose: well the gt and lt stuff is more for quantity i think?
18:54:59 <samueldmq> I am not sure filtering at an exact timestamp is useful for anyone
18:55:11 <samueldmq> /v3/users?password_expires_at={timestamp}
18:55:16 <stevemar> /users?pasword_expires_after={some time stamp} seems a bit clearer
18:55:21 <ayoung> ah...
18:55:25 <spilla> the reason I went with /v3/users?password_expires_before={timestamp} was to make it easier to under stand if you start chaining them, or even in general
18:55:36 <spilla> but that doesn't make it more right or wrong
18:56:01 <ayoung> at the API level, let us implement the set of them
18:56:01 <spilla> wanted some input to head the best direction
18:56:21 <ayoung> if there is a rationale for each, we can give the choice
18:56:23 <samueldmq> and we can combine them, right ? with ?password_expires_before={timestamp1}&password_expires_after={timestamp0}
18:56:29 <ayoung> ++
18:56:36 <stevemar> that makes sense
18:56:49 <ayoung> expires AT seems low value, but if there is a need for it, include it as well
18:57:05 <spilla> ayoung ++ both is always an option
18:57:08 <samueldmq> again, making a perfect match ?password_expires={timestamp} does not make sense to me
18:57:17 <stevemar> this reminds me of http://developer.openstack.org/api-ref/compute/?expanded=list-servers-detailed-detail#list-servers-detailed
18:57:19 <dstanek> the gt: is what the api-wg is recommenting
18:57:19 <samueldmq> not sure if anyone agree with me on that
18:57:20 <rderose> I just think lt gt solves it and it's consistent with other filtering
18:57:31 <stevemar> dstanek: but that's more for quantities
18:57:41 <dstanek> stevemar: no according to their examples
18:57:52 <ayoung> spilla, that enough to go on?
18:57:53 <rderose> stevemar: no
18:57:57 <dstanek> "GET /app/items?finished_at=ge:15:30&finished_at=lt:16:00"
18:58:07 <rderose> dstanek: ++
18:58:26 <stevemar> oh did i miss that?
18:58:36 <dstanek> http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html#time-based-filtering-queries
18:58:46 <lbragstad> dstanek i like that
18:58:52 <stevemar> oh there it is: GET /app/items?finished_at=ge:15:30&finished_at=lt:16:00
18:59:02 <dstanek> i assume dates would be handled the same way, but that's not explicit
18:59:06 <stevemar> spilla: there's your answer ^
18:59:11 <spilla> so gt lt seems to be it
18:59:24 <stevemar> yessir
18:59:27 <stevemar> wrapping up!
18:59:30 <stevemar> #endmeeting
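
As a footnote to the last topic: assuming the operator-prefix syntax from the api-wg guideline carries over unchanged to password_expires_at, the queries discussed above would look roughly like this. The keystone endpoint, token, and timestamps are placeholders.

import requests

# Sketch of the filtering style settled on above (operator prefixes such as
# lt: / gt: on password_expires_at), following the api-wg guideline linked
# in the meeting.
KEYSTONE = "http://keystone.example.com:5000/v3"
HEADERS = {"X-Auth-Token": "gAAAAAB..."}  # placeholder admin token

# Users whose passwords expire before a given time:
r = requests.get(
    KEYSTONE + "/users",
    params={"password_expires_at": "lt:2016-12-31T00:00:00Z"},
    headers=HEADERS,
)
print(r.status_code)

# Combining filters into a window, as discussed (repeat the parameter):
r = requests.get(
    KEYSTONE + "/users",
    params=[("password_expires_at", "gt:2016-12-01T00:00:00Z"),
            ("password_expires_at", "lt:2016-12-31T00:00:00Z")],
    headers=HEADERS,
)
print(r.status_code)
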