14:00:17 <nikhil_k> #startmeeting Glance
14:00:18 <sigmavirus24> Thanks nikhil_k :D
14:00:18 <openstack> Meeting started Thu Jun 4 14:00:17 2015 UTC and is due to finish in 60 minutes. The chair is nikhil_k. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:21 <sigmavirus24> o/
14:00:22 <openstack> The meeting name has been set to 'glance'
14:00:24 <nikhil_k> sigmavirus24: :)
14:00:25 <ativelkov> o/
14:00:28 <ajayaa> o/
14:00:31 <mfedosin> o/
14:00:31 <flaper87> o/
14:00:35 <TravT> o/
14:01:40 <nikhil_k> Let's give folks a couple more mins
14:02:02 <bpoulos> o/
14:02:05 <lakshmiS> o/
14:02:36 <nikhil_k> Welcome everyone!
14:02:47 <kragniz> o/
14:03:02 <nikhil_k> Looks like we've got a decent turnout. Let's get started.
14:03:06 <rosmaita> o/
14:03:14 <nikhil_k> #topic Mid-cycle meetup
14:03:51 <nikhil_k> An informal conversation on IRC led us to pursue co-location with other teams.
14:04:31 <nikhil_k> We are currently looking to co-host the event with the Horizon and SearchLight teams; however, we are open to options if more are willing to join
14:04:32 <flaper87> I'd really beg to make it earlier in July and somewhere on the east coast
14:04:50 <nikhil_k> The Nova one is a bit later
14:04:51 <flaper87> not that I hate the west one but going there just for 2 days is.... painful
14:04:53 <flaper87> :D
14:04:56 * flaper87 stops crying
14:05:14 <nikhil_k> Thanks for the feedback flaper87 .
14:05:29 <nikhil_k> We would welcome some more feedback on the collab link:
14:05:33 <nikhil_k> #action everyone: Please review and suggest a location and date for the mid-cycle meetup https://docs.google.com/spreadsheets/d/1w0eI6SPCA2IrOyHiEYC2uDO3fbYGzahZRUQSva0UD3Y/edit#gid=0
14:05:41 * ativelkov is smiling, thinking about painful travels for 2 days
14:05:46 <TravT> scroll down to the bottom.
14:05:47 * mfedosin agrees with Flavio
14:06:02 <flaper87> ativelkov: lol
14:06:26 <nikhil_k> Thanks to those who already gave feedback there. If anyone has more apprehensions, feel free to send me an email.
14:07:00 <nikhil_k> Any more comments?
14:07:09 <flaper87> looking forward to it
14:07:12 <flaper87> :)
14:07:48 <nikhil_k> #topic APIWG process update
14:08:07 <nikhil_k> Trying to get this out there for some cross-project process awareness
14:08:12 <nikhil_k> #link https://review.openstack.org/#/c/186836/
14:08:31 <rosmaita> mfedosin: sorry, i edited your row by mistake
14:08:56 <mfedosin> rosmaita, np
14:09:20 <nikhil_k> elmiko reached out the other day and would be a decent point of contact. I think we also have our very own sigmavirus24 for any initial comments and concerns.. :-)
14:09:43 <nikhil_k> #topic Domain model and related refactor
14:09:49 <nikhil_k> flaper87: all yours
14:09:56 <flaper87> o/
14:10:02 <nikhil_k> one thing before..
14:10:03 <flaper87> so, I did some research, as promised
14:10:04 <nikhil_k> #link https://etherpad.openstack.org/p/liberty-glance-domain-model
14:10:11 <flaper87> nikhil_k: ++
14:10:22 <flaper87> Most of my findings were around status updates
14:10:28 <flaper87> I know ativelkov had some other places
14:10:36 <flaper87> and in the etherpad I didn't put task status updates
14:10:38 <flaper87> anyway
14:11:00 <flaper87> that said, I think, in the light of the upcoming artifacts work, that we should fix these cases with atomic updates in the domain model
14:11:14 <flaper87> and let the artifact code remove the domain model entirely
14:11:22 <flaper87> Assuming that's something we want
14:11:40 <flaper87> This will allow us to fix these races and still work on a major refactor if we want
14:11:46 <mfedosin> Next week I'm going to start working on the new architecture for v3
14:12:03 <flaper87> The fix shouldn't be invasive, it should be enough to fix the status setter
14:12:05 <mfedosin> without the dm
14:12:05 <nikhil_k> #link https://review.openstack.org/#/c/132898/
14:12:06 <ativelkov> yup, I agree with that, given that it does not delay us: I'd prefer to land the remaining part of artifacts as it is now (it uses the domain concept) and get rid of it with further commits
14:12:20 <flaper87> same for tasks
14:12:31 <flaper87> and with those small fixes, we should be able to cover those races
14:12:37 <flaper87> I wrote some drawbacks in the etherpad
14:12:43 <flaper87> if you think I missed something, please do tell
14:12:49 <nikhil_k> What is the proposal, though, for the pattern without the DM?
14:12:53 <flaper87> ativelkov and I had a small chat yesterday about some strategies
14:13:07 <nikhil_k> I agree that it makes a ton of sense for tasks to remove it
14:13:09 <flaper87> nikhil_k: I don't have a proposal for that, I'll defer it to ativelkov
14:13:22 <flaper87> I'm more concerned about the races now than the DM
14:13:27 <nikhil_k> Please do share, ativelkov
14:13:34 <flaper87> That's a side convo that came out of these findings
14:14:03 <ativelkov> I want to make sure that the same transaction is used for the whole duration of each request
14:14:41 <nikhil_k> I don't mind if we start collaborating on removing such logic on the etherpad. We had concerns about a v2 refactor given our priority is stability. So, +1 on fixing races and not refactoring there yet.
14:14:42 <ativelkov> with optional "for update" locks and a decorator to repeat the transaction in case of a deadlock exception in the DB
14:15:08 <nikhil_k> ativelkov: you mean a DB session?
14:15:14 <ativelkov> Yup
14:15:41 <nikhil_k> Basically, what the scope of the transaction would be is good to know. Does that make sense for long-running tasks?
14:15:52 <nikhil_k> Those may be Glance Tasks or uploads/downloads too
14:16:10 <nikhil_k> not so much on downloads; uploads may be affected with open sessions
14:16:14 <nikhil_k> (yet)
14:16:36 <ativelkov> Then the DB updates should be transactional anyway
14:17:17 <ativelkov> Now we have a situation where a DB record is read in one transaction, serialized to a "domain object", somehow transformed, and then the object is put back into the DB
14:17:39 <ativelkov> This is racy
14:18:10 <nikhil_k> Yeah
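(A minimal sketch of the pattern ativelkov describes above: one DB transaction per request, optional "for update" row locks, and a decorator that retries the transaction on a DB deadlock. It assumes SQLAlchemy; get_session, Image, and InvalidStateTransition are hypothetical placeholders, not Glance's actual code.)

```python
import functools
import time

from sqlalchemy.exc import OperationalError


def retry_on_deadlock(max_retries=3, delay=0.5):
    """Re-run a transactional function if the DB reports a deadlock."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except OperationalError as exc:
                    # Only retry genuine deadlocks, and give up on the last attempt.
                    if 'deadlock' not in str(exc).lower() or attempt == max_retries - 1:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator


@retry_on_deadlock()
def activate_image(image_id):
    session = get_session()          # hypothetical session factory
    with session.begin():            # one transaction for the whole request
        image = (session.query(Image)
                 .filter_by(id=image_id)
                 .with_for_update()  # optional "for update" row lock
                 .one())
        if image.status != 'queued':
            raise InvalidStateTransition(image.status)  # hypothetical exception
        image.status = 'active'
    # the transaction commits when the `with` block exits
```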
14:18:17 <mfedosin> we have a solution on that matter btw
14:18:28 <nikhil_k> Does this conflict with the oslo.versionedobjects proposal?
14:18:49 <nikhil_k> We need the records serialized in that case
14:19:03 <ativelkov> Not sure yet
14:19:10 <nikhil_k> (I am just putting some food for thought out there as this is a big change)
14:19:11 <flaper87> I don't think it conflicts but I'd need to double-check
14:19:32 <flaper87> wasn't the previous artifact version out of the DM?
14:19:41 <flaper87> then you guys made it work with the DM
14:19:42 <nikhil_k> It is
14:19:53 <flaper87> ah ok
14:20:06 <nikhil_k> Oh wait, no. It's all in coherence with the DM..
14:20:09 <mfedosin> no, it originally was with the dm
14:20:10 <ivasilevskaya1> flaper87, the DM was always there
14:20:12 <nikhil_k> #link https://review.openstack.org/#/c/132898/
14:20:17 <flaper87> thanks
14:21:11 <nikhil_k> Should we move on and let ativelkov, mfedosin &/or flaper87 propose a plan / etherpad?
14:21:37 <flaper87> nikhil_k: for a non-DM glance? yes
14:21:43 <nikhil_k> Thanks
14:21:44 <flaper87> I don't expect us to figure that out now
14:21:49 <nikhil_k> #topic Discuss adopting Core Reviewer Hierarchy akin to Neutron's
14:21:53 <flaper87> but I'd like to have an agreement on fixing the races now
14:21:55 <flaper87> nikhil_k: ^
14:21:58 <flaper87> (on the DM)
14:22:11 <nikhil_k> Thanks. etherpad discussion good with you?
14:22:16 <nikhil_k> flaper87: ^
14:22:34 <flaper87> nikhil_k: kk
14:22:40 <nikhil_k> Let me revert the topic then
14:22:58 <sigmavirus24> lol
14:23:03 <flaper87> nikhil_k: you know there's an undo command?
14:23:06 <flaper87> :)
14:23:11 <sigmavirus24> flaper87: there is?
14:23:15 <flaper87> there is
14:23:19 <sigmavirus24> huh
14:23:34 <flaper87> anyway, feel free to move on
14:23:42 <nikhil_k> flaper87: sure :) more of a silent question
14:23:59 <nikhil_k> #topic racy status updates in v2
14:24:12 <nikhil_k> flaper87: the earlier topic was on the DM
14:24:35 <nikhil_k> #link https://github.com/openstack/glance/blob/master/glance/domain/__init__.py#L250
14:24:37 <flaper87> The thing I'd like us to agree on now is fixing those races in the actual code
14:24:42 <nikhil_k> #link https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L254-L306
14:24:56 <nikhil_k> more on #link https://etherpad.openstack.org/p/liberty-glance-domain-model
14:24:57 <flaper87> instead of waiting 'til the discussion of whether to use the DM or not ends
14:25:01 <sigmavirus24> flaper87: I doubt anyone disagrees that we need to fix that up
14:25:07 <nikhil_k> == sigmavirus24
14:25:13 <flaper87> sigmavirus24: logged or it isn't true
14:25:15 <flaper87> :P
14:25:18 <flaper87> ok, that's it
14:25:19 <nikhil_k> ha
14:25:22 <sigmavirus24> lol
14:25:23 <flaper87> that's all I wanted :)
14:25:26 * flaper87 stfu now
14:25:35 <sigmavirus24> flaper87: you know that #openstack-glance is logged now, right?
14:25:40 <sigmavirus24> =P
14:25:51 <sigmavirus24> Also you can make a spec and have it logged via votes =P
14:25:51 <jokke_> ++ for working around it for now and taking the time to plan the future properly
14:25:54 <flaper87> sigmavirus24: yup but this makes it feel "official"
14:26:10 <flaper87> sigmavirus24: stfu
14:26:11 <nikhil_k> test and set (yes please for status updates)
14:26:12 <flaper87> :P
14:26:15 <sigmavirus24> ++ for having jokke_ around
14:26:36 <flaper87> sigmavirus24: bug>spec
14:26:47 * flaper87 hides from everyone's kciks
14:26:50 <flaper87> or kicks
14:26:55 <flaper87> ok, enough jokes
14:26:58 <flaper87> nikhil_k: let's move on
14:27:03 <nikhil_k> Thanks
14:27:04 <flaper87> thanks for the feedback
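(A rough illustration of the "test and set" nikhil_k asks for: fold the status check and the write into a single atomic UPDATE instead of a read-modify-write through the domain object, and treat rowcount == 0 as a lost race. Again assuming SQLAlchemy; images, get_session, and Conflict are hypothetical placeholders rather than Glance's actual code.)

```python
from sqlalchemy import update


def set_status_atomically(image_id, expected, new):
    """Compare-and-swap the image status; fail if someone else changed it first."""
    session = get_session()  # hypothetical session factory
    with session.begin():
        result = session.execute(
            update(images)                        # hypothetical Table object
            .where(images.c.id == image_id)
            .where(images.c.status == expected)   # the "test"
            .values(status=new)                   # the "set"
        )
    if result.rowcount == 0:
        # another request won the race; surface it as a 409-style conflict
        raise Conflict('image %s is no longer %r' % (image_id, expected))
```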
14:27:06 <nikhil_k> #topic Discuss adopting Core Reviewer Hierarchy akin to Neutron's
14:27:19 <nikhil_k> #link http://git.openstack.org/cgit/openstack/neutron/tree/doc/source/policies/core-reviewers.rst#n33
14:27:23 <nikhil_k> sigmavirus24: all yours
14:27:32 <sigmavirus24> So
14:27:47 <sigmavirus24> There's been a discussion of this on the mailing list and there was a summit discussion of it
14:27:59 <sigmavirus24> At this point there are probably 3 sub-teams in glance: metadefs, images, artifacts
14:28:21 <sigmavirus24> I think in order of attention they get it's: images, artifacts, metadefs
14:28:33 <sigmavirus24> And it's very unfair to both artifacts and metadefs
14:29:00 <nikhil_k> Maybe this overlaps with the fast track stuff (next topic)?
14:29:01 <sigmavirus24> It makes sense to me if we adopt the hierarchy/sub-team model since glance has grown to have these specialty teams
14:29:12 * sigmavirus24 missed the fast track stuff
14:29:40 <sigmavirus24> This is a bit broader though, especially since there are a bunch of metadefs reviews lying around that no one has reviewed
14:30:08 <sigmavirus24> Also, we agreed at the summit to keep Metadefs around, so we need to give it proper attention
14:30:12 <nikhil_k> sigmavirus24: Actually, I got one email about image members too
14:30:21 <mclaren> I'll put my hand up for some metadef reviews
14:30:40 * nikhil_k wonders if anyone is using image sharing in the real world besides Rackspace?
14:31:07 <sigmavirus24> Yeah, I can imagine there are several specialty parts of the codebase. Sub-teams just make sense and would allow us to maybe scale
14:31:21 <sigmavirus24> We could even adopt something like the Security Doc team's rule that a change has to have one +2 from a doc team core and one +2 from a sec core
14:31:23 <mclaren> nikhil_k: yes
14:31:36 <nikhil_k> ok
14:31:47 <sigmavirus24> Again, most of this is up on the Mailing List but that's all discussion around Rally at the moment
14:32:20 <sigmavirus24> Neutron and the Security docs projects seem to be doing well with the sub-team model and I think we'd benefit from it, and I'd love to hear other opinions telling me I'm wrong =P
14:33:40 <jokke_> I'm just not convinced that we're a big enough project to justify this, but it might be just me
14:33:45 <kragniz> sigmavirus24: I think it's worth trying
14:33:47 <ativelkov> Well, neutron is really big and diverse
14:33:48 <kragniz> particularly with metadefs
14:34:13 <ativelkov> De facto we already have "sub-teams"
14:34:24 <sigmavirus24> ^ ++
14:34:31 <jokke_> I think neutron pretty much has separate repos for the sub-teams, which makes the real-life separation easier
14:34:54 <sigmavirus24> jokke_: to a degree, but they also have that flow in neutron itself if I understand correctly
14:34:55 <kragniz> jokke_: they have sub-teams in core neutron as well
14:34:56 <ivasilevskaya1> hm.. sorry, I don't get it - who is supposed to review sub-teams?
14:34:57 <ativelkov> However I'm not sure if we need any formal rules on this
14:34:57 <mfedosin> what's the benefit of having it de jure then?
14:35:13 <nikhil_k> I am with sigmavirus24 on this one. I think this makes a ton of sense, at least for metadefs to begin with. For artifacts, we have the fast track proposal coming out and the entire team should get to know it, given we will go with an all-artifacts v3 API. However, I agree that we are too small and don't have enough reviewers to begin with.
14:35:15 <sigmavirus24> well at the moment, none of the metadef sub-team members are core
14:35:23 <mclaren> It's worth remembering that a metadef change can still impact (break) image functionality; they're not completely orthogonal.
14:35:31 <sigmavirus24> mclaren: agreed
14:35:39 <sigmavirus24> Which is why I pointed out the security doc team workflow
14:35:59 <sigmavirus24> at least one +2 from the metadefs sub-team and at least one from another glance core before workflowing it
14:36:06 <jokke_> mclaren: ++
14:36:13 <nikhil_k> I doubt an email mechanism works for everyone, esp. for reviews
14:36:34 <sigmavirus24> We can try whatever, but we need to try something =P
14:36:40 <nikhil_k> +1
14:36:47 <jokke_> sigmavirus24: I think the fact that we have no core from the metadefs group is the issue we need to solve, instead of trying to make things more complicated
14:36:48 <sigmavirus24> And we all need to commit to it
14:36:56 <mclaren> I think currently there's just a small # of metadef reviews, which we should be able to get through
14:37:11 <mclaren> [until metadefs need to be ported to v3 :-)]
14:37:14 <sigmavirus24> mclaren: are you just looking at /glance or also python-glanceclient?
14:37:20 <kragniz> mclaren: D:
14:37:26 <flaper87> also, note that in the future images+artifacts will be one thing
14:37:40 <sigmavirus24> flaper87: right, in the future ;)
14:37:41 <TravT> i believe wayne is out of the office this week, but i would like to mention that he said multiple times that he was stuck getting reviews this last release.
14:37:45 <TravT> on metadefs
14:37:52 <flaper87> sigmavirus24: right, but let's not forget about that
14:38:04 <mclaren> TravT: yeah, he spoke to me at the summit, I said I'd try to help
14:38:13 <TravT> and wayne and lakshmiS know the metadefs code better than anybody else, i believe.
14:38:23 <mclaren> that's true for sure
14:38:56 <nikhil_k> In that case, we need some committee that will drive & draft/document the proposal. I would prefer more than two companies and at least one repr from each of the sub-teams to be proposed if we are even considering this.
14:39:13 <jokke_> TravT & all: if stuck with reviews ... yell out for help ... it might not happen on the same day, but keeping the same reviews week after week on the meeting agenda seems to give them some traction ;)
14:39:36 <mclaren> TravT: we don't expect too many more metadef patches once the current ones are merged?
14:39:53 <nikhil_k> Yeah, hollering for help should be great. However, I think the diverse focus is making this difficult.
14:40:17 <nikhil_k> mclaren: I think we may need more as SL becomes more mature
14:40:30 <sigmavirus24> jokke_: I know that wayne hollered at me for help a bunch of times last cycle and I did my best
14:40:37 <TravT> mclaren, i have been talking to wayne about some concerns I have about the current API
14:40:49 <TravT> it requires too many calls and can be optimized
14:40:52 <sigmavirus24> But yeah, we just need some way to make sure wayne, lakshmi, and everyone else's valuable contributions don't get lost
14:41:26 <sigmavirus24> I don't know if that way of prioritizing them is an etherpad or trello or something else that we'll all stop using after a couple of weeks
14:41:35 <TravT> i was talking to wayne about how we can change a few things so that we can go from 13 calls from horizon down to 3 to get them.
14:41:58 <sigmavirus24> ^ mclaren, i.e., it's not just these few reviews and then we're done
14:42:00 <TravT> which actually was part of our original API, but was voted out at the juno mid-cycle.
14:42:12 <TravT> because of time
14:42:22 <lakshmiS> also when crunch time comes, metadef changes take a lower priority even with all the good intentions from the reviewers
14:42:38 <jokke_> I don't think having the metadef stuff in flight brought up in the meeting (and opening up a bit why something needs to happen) would hurt
14:43:22 <jokke_> Personally I'm a true believer that the more we understand what's going on, the easier it is to review and keep things moving
14:43:34 <nikhil_k> One thing though.. we need people reviewing stuff, and to maintain core-reviewership based off reviews. If that's to be changed then we need to consider input from the CPL/TC meeting.
14:43:39 <mclaren> can metadefs become standard artifacts in v3? Around the 'u' timeframe :-)
14:43:55 <TravT> there is also a localization spec wayne (primarily) and I have been talking about.
14:44:09 <nikhil_k> ha
14:44:28 <nikhil_k> U -- would we still call artifacts artifacts then, or have a fancy new name?
14:44:30 <TravT> mclaren, i have concerns about that as well but it should be discussed
14:44:32 <ativelkov> mclaren: I don't think so
14:44:46 <ativelkov> Metadefs are dynamic schemas of things
14:45:04 <mfedosin> mclaren, they will be part of v3 along with artifacts and tasks
14:45:04 <ativelkov> artifacts are the things themselves.
14:45:10 <TravT> my thoughts are a little too much for this meeting, so i can send out an ML post on it if desired.
14:45:20 <nikhil_k> umm?
14:45:42 <nikhil_k> Ok, we are short on time.
14:45:45 <mclaren> ativelkov: they're not immutable, you mean?
14:46:10 <ativelkov> mclaren: yes, that's one of the differences
14:46:14 <TravT> i think this is a good discussion to have... i'll start an ML thread on it later today.
14:46:29 <nikhil_k> Let's put some thought in and share it on the ML. We seem to be having some such discussions there atm..
14:46:56 <nikhil_k> I think metadefs are conceptually different from artifacts/images.
14:47:03 <mclaren> ativelkov: hmm, ok. (I wonder if we could fake something with versioning...)
14:47:20 <nikhil_k> sigmavirus24: you mind creating a thread with your comprehensive thoughts on this topic?
14:47:29 <sigmavirus24> I don't mind
14:47:34 <nikhil_k> Thanks
14:47:41 <mfedosin> but you can always make a metadef plugin for artifacts if you need it :)
14:47:49 <ativelkov> nikhil_k: +1
14:48:07 <nikhil_k> #topic Merging remaining artifacts commits with FastTrack
14:48:15 <nikhil_k> ativelkov: or mfedosin ?
14:48:18 <ativelkov> me
14:48:30 <nikhil_k> please :)
14:49:11 <ativelkov> Well, just wanted to let you know that we have two changesets ready. They are all the same as we had in April, just changed from v0.1 to v3 and put back into the main glance-api process
14:49:29 <ativelkov> #link https://review.openstack.org/#/c/132898/
14:49:34 <ativelkov> and
14:49:40 <ativelkov> #link https://review.openstack.org/#/c/136629/
14:49:43 <flaper87> ativelkov: I'm happy to be the glance-core that sanity-checks those patches
14:49:53 <flaper87> I think I raised my hand at the summit
14:50:04 * flaper87 keeps doing that
14:50:07 <ativelkov> I'd prefer to merge them using our newly-approved "fast-track" procedure )
14:50:08 <mfedosin> and also there is a spec https://review.openstack.org/#/c/177397/
14:50:11 <ativelkov> thanks flaper87! )
14:50:15 <nikhil_k> cool!
14:50:16 <flaper87> ativelkov: I'll take a look and go for fast-track
14:50:31 <mfedosin> flaper87, thank you!
14:50:34 <ativelkov> Thanks!
14:50:39 <flaper87> should we tag fast-track reviews in the commit message?
14:50:49 <nikhil_k> ativelkov: please do send an email to the ML when that is about to merge
14:50:53 <flaper87> A FastTrack tag doesn't hurt
14:50:53 <mfedosin> flaper87, I think yes
14:50:55 <ativelkov> So, I want to merge them first, and then initiate a discussion with the API-WG about improvements to the API
14:51:02 <nikhil_k> flaper87: I think that should help ^
14:51:11 <nikhil_k> ativelkov: ok
14:51:15 <jokke_> I like the idea of having operational artifacts sooner rather than later, now that we have been going around the topic for 2 releases ;P
14:51:15 <nikhil_k> tag works then
14:51:27 <sigmavirus24> ativelkov: iirc there's a meeting this morning for the WG
14:51:29 <jokke_> merged, I mean
14:51:38 <sigmavirus24> (er, in a couple hours)
14:51:41 <sigmavirus24> (morning for me =P)
14:51:41 <nikhil_k> :)
14:51:49 <flaper87> jokke_: stop bringing the 2 releases thing up
14:52:01 <mfedosin> I will update those CMs right after this meeting
14:52:03 <ativelkov> I'd like to summarize my questions in the email first and then discuss them
14:52:35 <ativelkov> I have some questions on my mind, including that one about /v3/artifacts/something vs /v2/something - and some more about type versioning
14:52:36 <nikhil_k> sounds great
14:52:55 <ativelkov> /v3/something*
14:53:15 <mclaren> at least you didn't mis-type v4 :-)
14:53:18 <flaper87> mfedosin: ativelkov: feel free to ping me on irc
14:53:47 <mfedosin> let's leave /v3/artifacts/something for now
14:54:00 <nikhil_k> +1
14:54:08 <nikhil_k> I like that for the future too
14:54:28 <sigmavirus24> mfedosin: +1
14:54:51 <mfedosin> if we have tasks and metadefs along with artifacts then it's the best solution, I think
14:54:52 <nikhil_k> ativelkov: mfedosin: please tag cores for reviews if stuck. I think people won't mind giving a quick look..
14:55:08 * sigmavirus24 has those two reviews open to look at
14:55:20 <ativelkov> Thanks!
14:55:27 <ativelkov> That's all from my side
14:55:33 <nikhil_k> thanks
14:55:34 <nikhil_k> moving on..
14:55:37 <mfedosin> thanks :)
14:55:40 <nikhil_k> #topic Making use_user_token param False by default
14:55:43 <nikhil_k> mfedosin: yours
14:55:49 <mfedosin> yeah
14:56:10 <mfedosin> I wrote a message to openstack-dev where I had a couple of questions
14:56:43 <mfedosin> We have situations during image creation where our token may expire, which leads to an unsuccessful operation.
14:57:14 <mfedosin> and one of them is in the connection between glance-api and glance-registry
14:57:31 <mfedosin> In this case we have a solution (https://review.openstack.org/#/c/29967/) - the use_user_token parameter in glance-api.conf, but it is True by default.
14:57:52 <mfedosin> If it's changed to False then glance-api will use its own credentials to authenticate to glance-registry, which prevents many possible issues with user token expiration.
14:58:06 <mfedosin> So, I'm interested in whether there is any performance degradation if we change use_user_token to False, and what the reasons are against making it the default value.
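(A sketch of the change mfedosin describes, as it might look in glance-api.conf; the admin_* names are the registry-client credential options from that era of Glance and should be checked against the deployed release, and the values below are placeholders rather than recommendations.)

```ini
[DEFAULT]
# Stop forwarding the (possibly expiring) user token to glance-registry and
# authenticate with the service's own credentials instead.
use_user_token = False
admin_user = glance
admin_password = SERVICE_PASSWORD
admin_tenant_name = service
auth_url = http://keystone.example.com:35357/v2.0
auth_strategy = keystone
```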
14:58:17 <mclaren> is http://en.wikipedia.org/wiki/Principle_of_least_privilege relevant here?
14:58:43 <nikhil_k> yep, I think admin calls are a bit messy (in openstack lingo)
14:59:05 <sigmavirus24> didn't ayoung suggest trusts as a (messy) way of doing this?
14:59:17 <mclaren> mfedosin: you're thinking about trusts for the backend - would it make sense ... bah, there goes sigmavirus24
14:59:18 <jokke_> also I don't think this affects the gate, so perhaps make a documentation change rather than changing the default behavior
14:59:22 <ativelkov> sigmavirus24: yeah, that is one of the options
14:59:29 <flaper87> fwiw, we've been walking around this issue waiting for the right solution to happen
14:59:46 <flaper87> sigmavirus24: ayoung did mention that
14:59:52 <sigmavirus24> flaper87 is correct
14:59:58 <sigmavirus24> This was an open issue for months
15:00:06 <sigmavirus24> The old solution was to bump the keystone token expiration time limit
15:00:10 <nikhil_k> no registry, trusted auth are some other deployment (non-software) options
15:00:19 <jokke_> I personally like the trusts approach for a long-term fix
15:00:22 <sigmavirus24> & time
15:00:26 <nikhil_k> yep
15:00:36 <sigmavirus24> don't want to encroach on searchlight =P
15:00:40 <nikhil_k> If there's nothing urgent, let's continue on the ML..
15:00:58 <mfedosin> okay, thank you for the suggestions :)
15:01:00 <ativelkov> Yeah, any input is welcome on that thread
15:01:03 <nikhil_k> Thanks all!
15:01:08 <nikhil_k> #endmeeting