18:00:21 #startmeeting keystone
18:00:22 Meeting started Tue Apr 12 18:00:21 2016 UTC and is due to finish in 60 minutes. The chair is stevemar. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:22 o/
18:00:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:25 hello!
18:00:26 The meeting name has been set to 'keystone'
18:00:36 #link agenda: https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting#Agenda_for_next_meeting
18:00:36 o/
18:00:36 hi
18:00:41 o/ hello
18:00:42 * ayoung-mobile in lunch mode still
18:00:43 o/
18:00:43 hello
18:00:45 hi
18:00:46 \o
18:00:51 \o
18:00:56 \o/
18:01:03 they are testing the fire alarm right now in my building, fun times
18:01:06 oh
18:01:12 o/
18:01:17 \o
18:01:28 amakarov: hope you're around, you're on the agenda!
18:01:39 Hi all
18:01:45 oh I am here
18:01:52 #topic mitaka is out
18:01:52 and I was waiting for a courtesy ping :B
18:01:57 bye mitaka
18:01:59 hello newton
18:02:02 amakarov: i have things to talk to you about that caching btw...
18:02:04 i am off my game...
18:02:04 oooooh
18:02:08 ping ajayaa, amakarov, ayoung, breton, browne, crinkle, claudiub, davechen, david8hu, dolphm, dstanek, edmondsw, gyee, henrynash, hogepodge, htruta, jamielennox, joesavak, jorge_munoz, knikolla, lbragstad, lhcheng, marekd, MaxPC, morganfainberg, nkinder, raildo, rodrigods, rderose, roxanaghe, samleon, samueldmq, shaleh, stevemar, tjcocozz, tsymanczyk, topol, vivekd, wanghong, xek
18:02:11 yay Mitaka
18:02:16 o/
18:02:19 we support mitaka for at least 12 months
18:02:20 Why don't we cache new tokens in kmw cache right after they were issued?
18:02:25 o/
18:02:26 Sam you are supposed tobsay "here I am!". It goes with the name
18:02:29 queue horrible bug discovery in 3,2,....
18:02:32 MITAKA is out!
18:02:32 o/
18:02:40 release notes! http://docs.openstack.org/releasenotes/keystone/mitaka.html
18:02:54 here I am!
<<< ayoung-mobile
18:02:55 o/
18:03:04 just wanted to say thanks to everyone that contributed
18:03:05 o/
18:03:14 amakarov: hold up but yes.
18:03:15 and i hope we have an awesome newton
18:03:27 thats alL!
18:03:33 * morgan hides the microphone before ayoung-mobile gets it ;)
18:03:35 we will ahve !! for sure
18:04:00 alright, lets give the microphone over
18:04:10 stevemar, great job as PTL on mitaka! congrats :)
18:04:15 #topic Pre-cache new tokens for 5 minutes in KMW cache
18:04:26 raildo: ++ so glad it wasn't me :)
18:04:39 morgan, haha
18:04:44 I assume this this auth_token middleware?
18:04:48 kmw == ksm?
18:04:52 yeah
18:04:58 keystonemiddleware
18:05:01 nice, I was confused too
18:05:16 Well, I've ran into one stupid thing we do for every token: validate it right after it was issued
18:05:17 can i make a recommendation - do not abbreviate "ks, ksm, ksa,..."
18:05:17 damn acronyms
18:05:33 Doesn't a shared memcache work now
18:05:35 is it necessary?
18:05:44 so there is a reason for this
18:05:46 how does auth_token know about the new token?
18:05:52 bknudson: ++
18:05:55 send it over a message bus?
18:06:06 ayoung-mobile, yes, and we can cache new tokens there
18:06:09 bknudson: he's saying puysh the validated totken to memcache
18:06:13 directly from keystone
18:06:15 on issuance
18:06:23 without validating them first
18:06:24 Amakarov so this is on issue?
18:06:32 i don't think you could message bus it, keystone would have to go straight to memcache
18:06:33 jsut a config thing not ?
18:06:35 no*
18:06:46 so hold up before we discuss how we get tokens out there
18:06:53 Nah right into memcache from keystone
18:06:56 often endpoints are grouped logically
18:07:05 and have some shared memcache but not all shared
18:07:12 I'd suggest do that on client side
18:07:15 keystone would need to know what memcaches to push the token to.
18:07:32 morgan, that's not essential
18:07:33 amakarov: wait is this a client thing or a middleware thing?
18:07:51 morgan, that's the client thing I think
18:08:07 because lets draw the line in the sand, client is NOT trusted to put things in memcache that keystonemiddleware or keystone consumes
18:08:09 the client should cache new tokens for middleware to get it
18:08:17 so how is this different from our current flow in terms of performance?
18:08:19 morgan: amakarov: client think -> keystone auth thing ?
18:08:20 morgan: +1000
18:08:29 and client will never be trusted.
18:08:32 to do so
18:08:53 in fact, most deployments should lock down memcache more than they do... that is a different convo though
18:08:57 oh wait, that's just for new issued tokens
18:09:00 samueldmq, by client I mean keystoneclient lib and everything it uses
18:09:00 is it that important?
18:09:11 if the client can put tokens in the auth_token cache then there's no need for keystone at all
18:09:22 I think the effort doesn't pay the improvement (1 request) ?
18:09:26 bknudson, hmm, right
18:09:29 bknudson: and no security in openstack
18:09:37 Amakarov this is client as called from within middleware right?
18:09:37 let's move it to server side then
18:09:39 morgan: y, that too.
18:09:46 ok, so server side
18:09:59 you have keystone needing to know all the memcaches to push to
18:10:01 ayoung-mobile, I want to avoid this first-time validation
18:10:10 i support shared memcache for endpoints
18:10:11 so as of a review i have open keystone can/will be consuming keystonemiddleware
18:10:14 all the memcache clusters
18:10:17 so I assume we can trust keystone server then
18:10:22 so where this code lives is not a big deal
18:10:22 bknudson: correct
18:10:23 that token is valid
18:10:26 are we talking about keystone pushing to another service's memcache cluster?
18:10:33 dstanek: possibly
18:10:33 Amakarov allinone deploys?
18:10:42 seems like bad architecture
18:10:51 allinone deployes are not really something we enginerr for
18:10:59 we get it for free most of the time :)
18:11:06 dstanek: I don't like that too
18:11:06 ayoung-mobile, not exactly
18:11:21 dstanek: too much change for saving 1 validation request
18:11:29 amakarov: i made this argument before
18:11:43 and we explored it ( ayoung-mobile and i )
18:11:52 if keystone had a memcache where it could look up tokens quickly then the call from auth_token would be faster
18:11:54 sharing a memcache cluster for endpoints is good
18:12:16 however, keystone pushing to probably a separate cluster in big deploys
18:12:18 is bad
18:12:29 keystone as it is could pre-cache for itself making the validate faster
18:12:37 morgan: ++
18:12:39 and i'd be happy to add that to the caching layer for tokens
18:12:42 anyway, if kmw uses another cache it will stil validate the token as usual
18:12:50 morgan: i think that's a good idea
18:12:53 but i wouldn't want keystone to push to the cache in the keystonemiddleware way
18:13:00 morgan: ++
18:13:07 keep keystonemiddleware in charge of keystonemiddleware semantics
18:13:23 if the cluster happens to be shared and the semantics are the same
18:13:25 sure
18:13:28 from an architecture POV a cache is within a service and not a part of the API.
18:13:33 keystonemiddleware can share the cache of other service's keystoenmiddlewares though, if that's convenient
18:13:36 but i'd focus on keystone pre-caching for itself.
18:13:38 dstanek: ++
18:13:51 and i REALLY like recommending the various services sharing keystonemiddleware caches
18:13:59 morgan, do you suggest to cache tokens in keystone server very own jaust to validate it faster?
18:14:00 regardless if keystone shares that or not.
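(Editor's note: the shared-memcache arrangement described above — services in a logical group pointing keystonemiddleware at the same cluster — is plain `auth_token` configuration. A hedged sketch follows; the server addresses are placeholders, and the option names are keystonemiddleware's documented `[keystone_authtoken]` options.)

```ini
[keystone_authtoken]
# Point every service in the logical endpoint group at the same
# memcache cluster so their validation caches are shared.
# (hostnames below are placeholders)
memcached_servers = cache1.example.com:11211,cache2.example.com:11211
# Validation results are cached for this many seconds (default 300,
# the ~5 minute window discussed in this meeting).
token_cache_time = 300
```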
18:14:14 amakarov: it would be easy to on issuance to a pre-cache in keystone
18:14:14 s/jaust/just/
18:14:22 and it would make validations faster
18:14:28 for the window of cache
18:14:31 morgan, sounds like a compromise
18:14:33 so ~300s by default
18:14:55 once keystone uses keystonemiddleware
18:15:05 and keystonemiddleware is on oslo.cache these pre-caches may look the same
18:15:07 morgan, I presume we still invalid the cache of the token changes?
18:15:10 morgan, under our loads the majority of tokens are short living
18:15:11 so it accelerates everything
18:15:13 i.e. role changes in fernet
18:15:32 gyee: we would need to be smart about where we cache the data.
18:15:43 gyee, there may be a trade-off
18:15:51 gyee: but we cache for 300s by default
18:16:03 gyee: which we've always said is within our acceptible clock skew
18:16:03 5 min cache is ok to wait for
18:16:05 morgan: so at least under current design we don't inherit the caching part of keystonemiddleware in keystone
18:16:13 jamielennox: and we shouldn't
18:16:19 amakarov: are you satisfied with this discussion, is there still a need for the blueprint? https://blueprints.launchpad.net/keystone/+spec/pre-cache-tokens
18:16:21 right
18:16:21 behavior is a bit different though, middleware versus keystone
18:16:28 middleware cache validation results only
18:16:29 jamielennox: i'd like to see keystonemiddleware move to oslo.cache and we can see if they align
18:16:36 jamielennox: but i don't expect it to in the near term
18:16:50 stevemar, I'm ok with current agreement
18:17:04 gyee: again, known window of validation we accept as tokens being valid for
18:17:12 morgan: gah, i got it so close, but the edge cases are hard
18:17:20 stevemar: might make sense to document the agreement in a bp
18:17:22 gyee: known risk and within what we consider clock-skew
18:17:28 morgan, sure, options and tradeoffs as always
18:17:34 stevemar: people might want to know what we were thinking 6 months from now
18:17:47 just saying we need to doc the difference
18:17:52 shaleh: that's what i'm doing :)
18:17:53 this is a relatively low bar (yay!) to hit for newton
18:18:17 i'm happy to sign up for helping on this front as long as someone else is willing to take a first stab at docs
18:18:25 shaleh, modified the bp
18:18:28 and/or someone is willing to at least co-author so we have more cache knowledge :)
18:18:53 or take a stab and i'll review
18:18:54 ;)
18:19:04 amakarov: i may have overridden your changes, launchpad :(
18:19:12 lets continue the impl details offline btw
18:19:12 morgan, :)
18:19:29 I'd like to know more about token validation performance since I'm sure our cloud folks will ask about it.
18:19:36 bknudson: sounds good! :)
18:19:47 bknudson: actually i think we have a lot of ways to improve that this cycle
18:19:53 ++
18:19:59 how about we split token validation to another service?
18:20:04 ;)
18:20:08 say what?!
18:20:13 * stevemar throws a rotten tomato at bknudson
18:20:28 * morgan borrows a wet emu from mordred to throw at bknudson
18:20:34 Lets go tokenless everywhere
18:20:40 ayoung: good plan!
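(Editor's note: a minimal sketch of the direction agreed in this topic — keystone pre-populating its *own* validation cache at token-issuance time so the first validate inside the TTL window is a cache hit. The class and method names below are hypothetical illustrations, not keystone's actual caching layer, which is built on dogpile/oslo.cache and a real memcache backend rather than a dict.)

```python
import time


class TokenPreCache:
    """Illustrative only: pre-populate a validation cache at issuance.

    Names here are hypothetical, not keystone's real code. The idea is
    the one from the meeting: at issuance keystone already knows the
    token is valid, so it writes the validation result into its own
    cache and the first validate never takes the slow path.
    """

    def __init__(self, ttl=300):
        # ~300s matches the default cache window discussed above
        self.ttl = ttl
        self._store = {}

    def precache_on_issue(self, token_id, token_data):
        # Called at issuance time: store the already-known-valid result
        # together with its expiry deadline.
        self._store[token_id] = (token_data, time.monotonic() + self.ttl)

    def validate(self, token_id):
        entry = self._store.get(token_id)
        if entry is not None:
            data, expires = entry
            if time.monotonic() < expires:
                return data  # cache hit: no backend round trip
            del self._store[token_id]  # expired; fall through to slow path
        return self._validate_from_backend(token_id)

    def _validate_from_backend(self, token_id):
        # Stand-in for the real (slow) validation path.
        raise LookupError("token not found: %s" % token_id)
```

In keystone proper this would live behind the existing token caching layer and share its invalidation semantics (e.g. on revocation), which is exactly the "be smart about where we cache the data" caveat raised above.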
18:20:42 next topic :)
18:20:46 #topic OSProfiler integration
18:21:03 amakarov: morgan ^
18:21:13 ok, what's holding us from accepting that thing? )
18:21:30 https://review.openstack.org/#/c/294535
18:21:30 amakarov: was working through the last bits (such as https://review.openstack.org/#/q/I4857cfe1e62d54c3c89a0206ffc895c4cf681ce5,n,z ) with DinaBelova
18:21:44 amakarov: i don't want config options for it (not sure if this is still an issue_
18:21:59 making sure we weren't doing something weird with either side
18:22:05 morgan, yes, and I was fixing that job failures ))
18:22:17 stevemar: all options are [iirc] in osprofiler now
18:22:24 we have an external default set
18:22:28 to default it to off
18:22:31 ++
18:22:32 and a couple other things
18:22:46 i'll confirm, but the code is almost there and looks pretty good
18:23:14 we should aim to land it early this cycle and i plan on opening a convo with DinaBelova at the summit about better ways to desgin for profiling going forward across openstack
18:23:17 morgan, it has just passed jenkins tests
18:23:28 but we shouldn't wait since that'll be a few cycles out
18:23:34 this is a good starting place
18:23:46 so in short, review it.
18:23:50 morgan: ++
18:23:53 please don't bikeshed it
18:24:15 and if there are minor concerns lets stack the changes on top where possible
18:24:34 i am happy where this has moved to and think it can land soon as long as it's reviewed.
18:25:06 morgan: we should take some hints from how things like newrelic are implemented
18:25:13 dstanek: that is the plan.
18:25:39 dstanek: and setting clear hook points in the projects so anything (not just osprofiler) can hool in
18:25:39 morgan: i have some ideas, but no time :-(
18:25:41 hook*
18:25:45 question though, why doesn't that live in oslo.db?
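(Editor's note: the "default it to off" agreement above corresponds, roughly, to osprofiler's `[profiler]` section in keystone.conf. A hedged sketch; option names are from osprofiler's oslo.config opts of that era, and the HMAC key is a placeholder.)

```ini
[profiler]
# Profiling is disabled unless explicitly turned on -- the default
# morgan insisted on in review.
enabled = false
# Secret key(s) a caller must present (via the trace headers) to
# actually trigger a trace. Placeholder value.
hmac_keys = SECRET_KEY
# Tracing SQL statements adds extra overhead, so it is also off.
trace_sqlalchemy = false
```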
18:26:05 jamielennox: it isn't db only
18:26:25 no, but i know i put a review on the oslo.cache one the other day that it should be in oslo.cache
18:26:29 jamielennox: there should be support in oslo.db, but also in other things
18:26:42 as is you are going to be _wrap_session-ing in every project, why not just control it from oslo.db?
18:26:50 jamielennox: that is part of the larger conversation / x-project spec i want to work with DinaBelova on
18:26:53 so start with it here in Keystone and move it down to oslo.db?
18:26:59 maybe we can move it to oslo.db once we prove it out
18:27:04 shaleh, ++
18:27:14 its in all the pipelines too
18:27:17 let's have something up and running first
18:27:20 isn't this already in other projects?
18:27:24 dstanek: it is
18:27:27 we're one of the last
18:27:28 that's ok - we can do it that way i'm just wondering why we'd do this particular case this way
18:27:29 iirc
18:27:44 jamielennox: version 1, i think is the answer
18:27:47 it that's the case why prove it again? just put it where it belongs
18:27:50 k
18:28:15 dstanek: i think the architecture isn't there.
18:28:26 and having profiling hooks in keystone is a good thing (tm)
18:28:34 even if they are suboptimal for the moment
18:28:51 once it is in oslo.db we move the stuff out of keystone, it's a pattern we've done before
18:29:00 true
18:29:19 and should be pretty lightweight to do so as it improves.
18:29:44 but if we;re adding changes to oslo, and othe rthings i think we should look at the design to be not just osprofiler specific
18:29:47 let anyone hook in
18:29:49 alright, lets review it then
18:29:57 but in short,lets review it.
18:30:01 any new issues we can leave as comments
18:30:12 critical concerns please highlight right away
18:30:18 minor concerns can be addon patches
18:30:26 bikeshedding... leave at the door ;)
18:30:45 I'd rather not think of it as generic hooks, but rather profiling that can be 0 cost disabled
18:30:47 * morgan steps off the soapbox
18:30:58 next topic
18:31:02 ayoung: you mean 0 cost enabled right?
18:31:11 ayoung: join us at the summit and we will work on that :)
18:31:14 shaleh, nah, meaning you don't pay for it if it is disabled
18:31:16 ayoung: please join us for that convo.
18:31:24 ayoung: that is the goal btw.
18:31:27 morgan, speaking of summit....
18:31:32 is that next?
18:31:48 ayoung: next is federation functional tests
18:31:50 shaleh: unfortunately profiling always has a cost. that's why i was advocating for defaulting to of
18:31:52 off
18:31:56 ayoung: what did you have in mind?
18:32:09 dstanek: and why i -2d it up and down until it was default off
18:32:34 stevemar, on Profiling? Nothing, just avoiding a general "hooks" approach.
18:32:37 dstanek: fair anough
18:32:39 Lets do Federation
18:32:52 #topic Keystone federation integration/functional tests
18:33:12 knikolla around?
18:33:18 rodrigods, yeah
18:33:21 ayoung: we talking k2k here?
18:33:23 So...dumb Idea....
18:33:30 shaleh: yup
18:33:35 so, current CI gates do not do functional/integration testing for federation
18:33:37 what if we made Keystone act as its own Federated IdP?
18:33:39 devstack should be able to spin up 2 keystone instances so you can do k2k on 1 system.
18:33:54 Like...you get an url under OS-FEDERATION
18:33:59 and it is protected with basic auth
18:33:59 ayoung, interesting
18:34:01 ayoung: I feel like testing something closer to a real setup would be a more reassuring test
18:34:12 bkero: yeah, plausible.
18:34:12 bknudson: why not use the same keystone as IdP and SP?
18:34:16 gsilvis, this is testing the Federation code itself.
18:34:17 bknudson: plausible
18:34:23 stupid auto nick
18:34:32 THis is not K2K
18:34:33 ayoung: ++
18:34:36 ayoung, a real gate with 2 devstack would allow to test features that are enabled by k2k.
18:34:43 http://lists.openstack.org/pipermail/openstack-dev/2016-March/091055.html says k2k
18:34:46 knikolla: agreed
18:34:56 ayoung: we had intended it to be a test of K2K
18:35:06 this is just for getting rid of Password in the token request, and doing basic auth like the web Spec
18:35:07 i'd also like to see "regular" federation besides k2k
18:35:20 rodrigods: sure
18:35:24 gsilvis, I care far more about regular federation
18:35:30 rodrigods: i am actually working on this
18:35:31 but, if this is K2K, go on.
18:35:36 breton, wow
18:35:38 what is "regular"?
18:35:44 rodrigods: there are patches by dstanek that i'm reviving
18:35:48 so... let's focus our efforts
18:35:52 gyee, SAML, OpenIDC, Federation with "2K"
18:35:55 breton, our idea is to use tempest instead
18:35:57 without 2K
18:35:57 ayoung: yup, I can understand that
18:35:58 ayoung, K2K includes the regular federation bits.
18:36:00 and run the tests using dvms
18:36:03 is there different code paths between the SP in k2k vs "regular" federation?
18:36:14 dstanek, just for the idp
18:36:23 actually...
18:36:30 no?
18:36:45 it should be the same
18:36:53 rodrigods: you still need to set up federation bits like mod_shib. Does tempest do that?
18:37:03 breton, we setup it up in devstack
18:37:05 dstanek, no difference as far as I know
18:37:09 since it is FOSS software
18:37:12 breton: not yet. that what my patches do, setup tempest
18:37:15 rodrigods: ^
18:37:21 dstanek: https://review.openstack.org/#/c/151310/9 these ones?
18:37:32 have we asked devstack about this? it would be a huge change to make two devstacks parallelably installable
18:37:46 jamielennox: infra supports multi-node
18:37:47 when i was testing this I used some ansible to manipulate the N devstacks once they were up into being federated
18:37:49 So....theoretically, could an application running in a VM inside of Nova accpet the SAML assertion from Keystone?
18:37:52 breton: yep
18:37:52 we could hook into the muiltinode thing
18:37:57 if we wanted
18:37:59 bknudson's suggestion lets it all live in devstack
18:38:03 dstanek: cool. Hope you don't mind me tackling them.
18:38:10 ayoung, yes
18:38:11 Damnit we built an IdP.
18:38:12 breton: nope, not at all
18:38:24 ayoung: yup.
18:38:24 i think for functional testing it would be easy to have a 2 keystone only env, but i like the idea of being able to test beyond just the keystone effects and proper resource federation
18:38:25 I knew this was a mistake
18:38:29 jamielennox: ++
18:38:32 shaleh: it makes more sense to do 2 devstack vms if you want to expand this beyond keystone (e.g. image federation)
18:38:42 i would rather, but get it all setup in devstack as it would be much easier and i think hits all the code paths
18:38:44 i still don't understand why we want 2 keystones
18:38:46 bknudson: tree
18:38:48 true
18:38:49 why not use the same keystone?
18:38:49 bah
18:39:15 breton: better insurance we are not lying to ourselves, right?
18:39:15 the only reason not to do that would be to test against a specific IdP.
18:39:16 bknudson: exactly---we want to build resource federation tests on top of this
18:39:18 breton, the second Keystone consumes the SAML generated by the first. We need to test that code path
18:39:21 Functional should be a pretty real test
18:39:27 breton, like bknudson just said
18:39:34 you could loop it, but it's not a realistic functional test
18:39:36 the idp can be keytone... or not
18:39:58 Keystones don't consume SAML. Apache does.
18:40:04 * topol o/ better late than never
18:40:09 breton: .... for now ...
18:40:19 dstanek: oh gawd
18:40:28 ;-)
18:40:28 lol
18:40:32 i think we need to sync the efforts
18:40:36 dstanek, are there plans to implement own shibboleth or something?
18:40:42 as gsilvis said, this would allow as to test resource federation using k2k between two different devstack.
https://etherpad.openstack.org/p/Keystone-Federation-Testing
18:40:53 us*
18:41:06 amakarov: i am working on a POC to have keystone deal with the SAML bits
18:41:12 can we call "k2k" irregular federation?
18:41:18 :-)
18:41:20 dstanek: but why?
18:41:21 gyee: not so much :)
18:41:25 amakarov: to support dynamic configuration among other things
18:41:59 it would make federation easier for ops
18:42:05 i would agree - the point here was to not build a full SAML parser into keystone
18:42:15 dstanek, yeah, PKI tokens :-)
18:42:16 dstanek, cool. why have you chosen saml then? Just a preference or you compared it with oidc for ex. ?
18:42:25 dstanek: hm, the dynamic configuration is an interesting point
18:42:46 jamielennox, ++
18:42:49 we should actually test both
18:42:55 amakarov: i believe that it fits the Rackspace usecase
18:42:57 dstanek: though I definitely agree with jamielennox here too
18:43:01 But K2K can't produce OpenIDC
18:43:05 only SAML now
18:43:12 correct
18:43:13 jamielennox, we need an army to implement all of its bells and whistles!
18:43:21 so was the plan to have a job in keystone for the multinode k2k test run?
18:43:26 also mod_shib is crazy in what it will do without an apache reboot
18:43:28 gyee, PKI tokens were written with this in mind, but let's let that go.
18:43:29 bknudson: yes
18:43:31 gsilvis: jamielennox: fair point, but i'm actually not building the SAML parsing myself - it already exists
18:43:40 gsilvis: no complaints from me.
18:43:50 jamielennox, craazy in a good or bad wy?
18:43:52 ayoung, oh I agree, PKI token is a "signed document" like SAML2
18:44:00 gyee, "was"
18:44:07 lets put itin the past tense.
18:44:12 gyee: has anyone ever suggested saml2 tokens
18:44:13 dstanek: why not just use the plugins?
18:44:14 ayoung: i'm undecided - but crazy
18:44:16 ayoung: it can do interesting things if configured right w/o a restart
18:44:24 ayoung, gyee necromancy, hm? ;)
18:44:25 stevemar: like mod_shib?
18:44:29 dstanek: y
18:44:39 stevemar: you have to restart apache for most changes
18:44:40 amakarov, I am being fair here
18:44:50 every time you add an idp you have to kick apache
18:44:55 stevemar: and you have to manage config files instead or using an API
18:45:00 dstanek: restart... graceful...
18:45:01 I think that Apache can do a kill -1 type thing, and reread its config, no need to dump sessions
18:45:08 dstanek: potato potato
18:45:36 so are we still talking k2k here? how many times are we adding this?
18:45:38 how often does one add idps?
18:45:43 ayoung: graceful - process current requests, new requests go to new workers with new things
18:45:45 dstanek: you're not wrong
18:45:48 my vision is to build it as something that can sit outside of keystone's core code - so even if nobody else liked it we could still doit
18:45:51 jamielennox: well I'd like to...
18:45:52 breton depends on who you are
18:46:26 Have a related conversation on hold with #puppet-openstack about setting up Federation
18:46:30 dstanek: and it's crypto so you want it to be fast - so an apache plugin?
18:46:46 jamielennox: or something not pure cpython
18:46:47 morgan, restarting apache... as for me is like buing a new car once it out of fuel
18:46:56 lbragstad: i don't know. The guy who usually adds idp.
18:47:00 s/buing/buying/
18:47:06 amakarov: graceful reload, it's what it's there for
18:47:19 breton well - public clouds might add idps a lot more than a private deployment
18:47:22 jamielennox: if your using separate processes in apache then crypto is find in Python since it's in C anyway. it's when you use threading that it's a problem.
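(Editor's note: for context on "every time you add an idp you have to kick apache" — only the Apache/mod_shib half is static. The keystone half of registering an IdP is dynamic via the OS-FEDERATION API, driven by mapping rules in keystone's documented JSON format, along the lines of the sketch below; the group ID is a placeholder.)

```json
[
  {
    "local": [
      {"user": {"name": "{0}"}},
      {"group": {"id": "FEDERATED_GROUP_ID"}}
    ],
    "remote": [
      {"type": "REMOTE_USER"}
    ]
  }
]
```

dstanek's POC above is about making the remaining static half (the SAML handling itself) equally API-driven.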
18:47:39 lbragstad: I could definitely imagine adding idps all the time in our usecase
18:47:47 morgan, not long ago wa had an issue with mod_wsgi desagree with graceful reloads
18:47:47 lbragstad: resource federation also opens up the possibility of a much more dynamic setup
18:48:14 amakarov: use uwsgi and mod_proxy_uwsgi ;) [actually a better setup for that reason]
18:48:19 i think i side tracked us too much.
18:48:23 amakarov: separate convo though
18:48:25 tornado drill... I should work from home.
18:48:34 amakarov: stay on federatio
18:48:43 ack
18:48:50 amakarov: :)
18:49:07 so... who is interested in having a CI for federation
18:49:13 join the etherpad
18:49:20 rodrigods: i think everyone is interested ;)
18:49:32 morgan, awesome
18:49:40 rodrigods: link?
18:49:43 rodrigods, would K2K be sufficient for SAML?
18:49:47 * morgan waits for open discussion has something.
18:49:48 testing, that is?
18:49:49 https://etherpad.openstack.org/p/Keystone-Federation-Testing
18:49:51 ayoung: it should be.
18:50:00 ayoung, it would
18:50:00 ty
18:50:08 rodrigods, I'd start with testing on a single keystone and split it afterwards when you have tests to run in the new environment
18:50:24 morgan: how many minutes you need?
18:50:27 can someone link the etherpad in the meeting summary? (I don't know how that sort of thing works)
18:50:32 stevemar: 5 ish
18:50:40 #link https://etherpad.openstack.org/p/Keystone-Federation-Testing
18:50:40 stevemar: will be quick.
18:50:42 amakarov, put that idea there
18:50:59 stevemar: thanks
18:51:00 so we can discuss and vote, also i'm not complete aware of all the needed steps
18:51:09 next topic for now, edit the etherpad for federation functional tests
18:51:10 where to send changes to have this kind of deployment and so on
18:51:18 stevemar, ++
18:51:23 vote for who?
18:51:28 Pedro
18:51:31 #topic morgan wanted 5 minutes
18:51:33 Vote for PEDRO!
18:51:36 lol nice one ayoung
18:51:38 lol
18:51:42 Keystone Midcyle
18:51:53 PLease say Boston
18:51:57 i'm volunteering to help chase down venue in the bay area
18:52:00 sorry ayoung
18:52:06 Dagnabit
18:52:19 we already did Boston :-)
18:52:24 bay area sounds great
18:52:32 this is to change up the venue to a location we haven't been and switch coasts
18:52:41 shaleh: is once really enough, when it's boston?
18:52:41 also summer in the bay is kindof awesome
18:52:42 gyee: he means Vancouver Bay right?
18:52:55 shaleh: lol
18:52:56 I vote for the Hotel Formerly named the Ahawannee
18:53:03 those lobster rolls were top notch btw...
18:53:07 shaleh, love that one too
18:53:11 ayoung: not exactly Bay Area but close
18:53:17 anyway, i'll start working on some details for it this week so we can roll into the summit with planning
18:53:25 shaleh, If I'm flying to California....
18:53:38 ayoung: 2 hour drive when there is no traffic
18:53:41 i figure it's not a hard sell to get people to fly to California / SF [everyone has offices in that area]
18:53:47 shaleh, thre 3.5
18:53:50 ttry
18:53:51 try
18:53:57 ayoung: maybe how you drive :-)
18:54:05 morgan, if you change coasts it can be Vladivostok :)
18:54:06 lbragstad: ++
18:54:07 shaleh, I did it most weekends for a decade
18:54:09 ayoung: i would have voted for yosemite
18:54:16 ayoung: but.. i think nothing would get done
18:54:23 amakarov: i offer sydney each time, nothign...
18:54:24 morgan, lots would get done
18:54:29 ayoung: just not code.
18:54:35 Priorities
18:54:37 jamielennox: sydney!
18:54:41 jamielennox: hehe
18:54:42 ayoung ++
18:54:45 samueldmq: wanted it down his way. But you know, Olympics.....
18:54:50 Toronto
18:54:54 Push for Summit in Sydney afte Barthelona
18:54:55 shaleh: :-(
18:54:57 anteaya ++
18:54:59 btw, why not Sidney?
18:54:59 shaleh: maybe next year
18:55:06 what dates were we looking at for the midcycle?
18:55:08 as a fallback would be Tronto if the bay area can't happen this time around
18:55:13 Boston
18:55:14 topol: haven't looked at the scheudle
18:55:24 late June right?
18:55:29 topol: will know more later this week, but it'll prob be june
18:55:34 FYI 5 mins left
18:55:35 Late July I think
18:55:36 topol: releases.openstack.org/newton/schedule.html late june or early july
18:55:36 amakarov: because enough people couldn't justify the flight to come
18:55:37 shaleh: typically.
18:55:40 morgan so its gonna be city by the bay? Awesome
18:55:48 topol: that is what i want :)
18:55:49 ayoung: if only the midcycle were in a month with good weather in boston
18:56:05 gsilvis, is there such a month?
18:56:16 4 minutes
18:56:24 i'll send ML emails and such later this week
18:56:25 ayoung, we can lie and say they all are.
18:56:27 * morgan is done.
18:56:28 ayoung: ... well there's a week, sometimes, does that count?
18:56:29 legal seafood in boston or fishermans warf in SF. Its a win win
18:56:31 gsilvis: thats why i'm gunning for rochestor and toronto :P
18:56:41 topol, not.even.close.
18:56:52 topol: where is this illegal seafood you eat?
18:56:58 But there is much good food in SF
18:57:01 topol: sourdough is better in SF though.
18:57:03 jamielennox: his fish tank
18:57:07 ayoung: ++ tons of fantastic food
18:57:10 stevemar: that... ugh..
18:57:22 morgan, are you looking for in SF proper?
18:57:23 ayoung. well maybe if you took me to the good seafood restaurant in boston I could agree
18:57:23 not so much illegal as unhygenic
18:57:28 topol ate nemo?
18:57:32 Cuz South Bay is not worth the trip.
18:57:40 ayoung: nah, let's do something horrible like milptas
18:57:42 gyee: and dory
18:57:45 Vacaville
18:57:48 lets do in Europe
18:57:50 * gyee cries
18:57:54 UK?
18:57:55 ayoung: eeewww in June :-)
18:58:03 Fremont!
18:58:15 ayoung: Fremont is just Milpitas north
18:58:16 brazil!
18:58:22 breton, spb? :)
18:58:24 we are getting to the point where we have a decent contingent of non-US people who would attend
18:58:27 we can all visit samueldmq
18:58:32 Dublin. Lunch at Gyee's
18:58:47 i imagine we'll hash this all out at the summit
18:58:48 ayoung: sounds good. His Dad rocks the kitchen.
18:58:57 ayoung, sounds good
18:59:02 jamielennox: i think midcycles are going to be a different thing post austin.
18:59:06 shaleh, and we would be ahead of the City traffic headed out 120 to Yose
18:59:08 whoever is willing to organize it has my vote
18:59:08 ayoung, good idea!
18:59:10 jamielennox: and it'll make it easier [i hope]
18:59:13 post barcelona
18:59:15 morgan: waiting to see how that one plays out
18:59:18 ayoung: ++++++++....
18:59:19 anteaya: ++
18:59:19 I mean Dublin, CA
18:59:31 oh... I was totally fore ireland
18:59:32 anyway, i'll start organizing this for this cycle.
18:59:33 lets give infra time to assemble
18:59:35 ayoung, LOL
18:59:36 cheers.
18:59:37 #endmeeting