18:00:08 <lbragstad> #startmeeting keystone
18:00:09 <openstack> Meeting started Tue Mar 21 18:00:08 2017 UTC and is due to finish in 60 minutes.  The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:12 <openstack> The meeting name has been set to 'keystone'
18:00:20 <cmurphy> o/
18:00:23 * thingee runs
18:00:23 <rodrigods> o/
18:00:24 <gagehugo> o/
18:00:24 <lbragstad> ping agrebennikov, amakarov, annakoppad, antwash, ayoung, bknudson, breton, browne, chrisplo, cmurphy, davechen, dolphm, dstanek, edmondsw, edtubill, gagehugo, henrynash, hrybacki, jamielennox, jaugustine, jgrassler, knikolla, lamt, lbragstad, kbaikov, ktychkova, morgan, nishaYadav, nkinder, notmorgan, portdirect, raildo, ravelar, rderose, rodrigods, roxanaghe, samueldmq, SamYaple, shaleh, spilla, srwilkers,
18:00:25 <lbragstad> StefanPaetowJisc, stevemar, topol, shardy, ricolin
18:00:31 <hrybacki> o/
18:00:32 <lbragstad> agenda #link https://etherpad.openstack.org/p/keystone-weekly-meeting
18:00:52 <rderose> o/
18:01:15 <ravelar> o/
18:01:20 <dstanek> ehlo
18:01:58 <lbragstad> maybe next week we will start doing roll call
18:02:24 <breton> yes, that would be useful
18:02:37 <breton> i think the list will be half as long after that
18:02:38 <lbragstad> and give people an opportunity to opt into the ping list again
18:02:40 <antwash> o/
18:03:01 * lbragstad makes a note
18:03:23 <lbragstad> alright - we have a lot to do today so let's go ahead and get started
18:03:26 <dstanek> i bet you can just look at the speakers in the last 10 meetings
18:03:38 <lbragstad> #topic Pike goals: deploy in wsgi
18:03:54 <lbragstad> so far there are two community goals that have been accepted for Pike
18:04:06 <lbragstad> the first one is deploy in wsgi - which we already support
18:04:09 <lbragstad> so that's good
18:04:09 <lbragstad> #link https://review.openstack.org/#/c/440840/
18:04:33 <lbragstad> #topic Pike goals: python3.5 support
18:04:39 <lbragstad> the second is supporting python 3.5 which we already do as well
18:04:40 <lbragstad> #link https://governance.openstack.org/tc/goals/pike/python35.html
18:04:57 <notmorgan> except python-memcached*
18:04:58 <rodrigods> great :)
18:05:15 <notmorgan> sort of... sometimes... who knows
18:05:31 <lbragstad> notmorgan that's the only case where we don't support py3 deployments, right?
18:06:06 <notmorgan> yeah, well also mod_wsgi is weird with py3
18:06:13 <notmorgan> but you can make it work...ish
18:06:18 <lbragstad> (and i think that would be mitigated after/if we move to pymemcache)
18:06:20 <dstanek> python-memcache has a py3 version....but i think we just don't like it
18:06:24 <ayoung> python3-memcached.noarch
18:06:35 <notmorgan> dstanek: it has been spotty in actually working
18:06:42 <ayoung> Summary     : Pure python3 memcached client
18:06:42 <ayoung> URL         : https://github.com/eguven/python3-memcached
18:06:54 <dstanek> notmorgan: i use it all the time outside of keystone without any issues
18:06:55 <notmorgan> behaves badly at times then gets fixed...then doesn't...
18:07:07 <notmorgan> basically move to pymemcache
18:07:10 <ayoung> looks to be actively worked on
18:07:10 <knikolla> o/
18:07:18 <notmorgan> ayoung: not really
18:07:32 <notmorgan> I've talked with the maintainer, he has next to no time on it
18:07:41 <ayoung> https://github.com/eguven/python3-memcached/commit/5539869760b2c13ddf0820df66b97cbb72f99043
18:07:45 <notmorgan> I tried to take it over 3 different times
18:07:50 <ayoung> https://github.com/linsomniac/python-memcached
18:08:04 <notmorgan> yeah linsomniac has no time
18:08:15 <ayoung> Which was last touched in December
18:08:16 <ayoung> joy
18:08:22 <notmorgan> not a bad dude, just overwhelmed
18:08:25 <notmorgan> with other work
18:08:37 <lbragstad> that happens
18:08:42 <notmorgan> pymemcached is maintained by Pinterest
18:08:46 <lbragstad> but should it stop us from asserting that pike goal?
18:08:58 <lbragstad> or do we need to move to pymemcached?
18:09:07 <notmorgan> we should move to pymemcached
18:09:11 <lbragstad> or use the goal as an excuse to move to pymemcached this release?
18:09:23 <notmorgan> use the goal as the excuse
18:09:26 <notmorgan> :)
18:09:31 <lbragstad> fair enough
18:09:33 <ayoung> Need to get it into the distributions.  Does Debian support it?
18:09:50 <dstanek> if we want it then we should create a spec. it doesn't change the fact that we already assert py3 support
18:09:50 <notmorgan> iirc, everyone but fedora
18:10:12 <notmorgan> we support py3, except in some cases with other deps
18:10:33 <ayoung> Should not be hard to get the package accepted then
18:11:01 * notmorgan will brb, coffees
18:11:20 <ayoung> https://admin.fedoraproject.org/pkgdb/package/rpms/python-pymemcache/  Fedora has it, too
18:11:51 <lbragstad> dstanek well - it would prevent keystone from running in specific python3 environments, so i'm wondering if that is enough to keep us from asserting the goal?
18:12:25 <dstanek> lbragstad: python-memcached is a packaging issue. there is no work for us there.
18:12:27 <notmorgan> ayoung: cool.
18:12:30 <ayoung> Yeah we have python3-pymemcache.noarch
18:12:32 <lbragstad> or if the goal is only specific to keystone source?
18:12:39 <ayoung> it's just the client, not memcached itself
18:12:45 <notmorgan> dstanek: we need a driver for oslo.cache and a fix in ksm
18:12:49 <notmorgan> dstanek: not just packaging
18:12:49 <dstanek> i can't speak to any mod_wsgi issues, but uwsgi is fine with py3
18:13:02 <ayoung> https://koji.fedoraproject.org/koji/packageinfo?packageID=19104
18:13:08 <notmorgan> uwsgi works fine, mod_wsgi is odd, but works, you need to be very specific about how you configure it
18:13:15 <dstanek> notmorgan: a driver to use python3-memcached?
18:13:15 <lbragstad> oh - i was under the assumption that parts of python-memcached weren't py3 compatible
18:13:20 <notmorgan> it assumes py2 (iirc)
18:13:28 <ayoung> So mod_wsgi is the C code part
18:13:29 <notmorgan> lbragstad: python-memcache behaves weirdly in py3
18:13:33 <notmorgan> ayoung: yes
18:13:44 <notmorgan> but you can specify a different interpreter
18:13:50 <notmorgan> and it has always seemed to work
18:13:54 <notmorgan> uwsgi doesn't care.
18:13:58 <notmorgan> and works perfectly
18:14:14 <notmorgan> but by default py2 is what mod_wsgi uses
18:14:54 <lbragstad> ok - so what i'm hearing is that we should be good to move forward with asserting that goal is completed for keystone
18:15:09 <notmorgan> yes.
18:15:19 <lbragstad> ok - but we should move at some point
18:15:24 <notmorgan> but we *should* ensure we move to pymemcache this cycle if at all possible
18:15:26 <lbragstad> s/move/move to pymemcached/
18:15:32 <lbragstad> ok
18:15:42 <lbragstad> is anyone interested in helping dig into that work?
18:15:47 <notmorgan> and make sure we have mod_wsgi for keystone configured to run in py3 mode in gate at least for some tests
18:16:05 <ayoung> python3-mod_wsgi
18:16:13 <ayoung> its a different package  in Fedora
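For reference, a minimal sketch of the "very specific" mod_wsgi configuration notmorgan mentions. The interpreter version is baked in when mod_wsgi is compiled, so selecting py3 mostly means loading the py3 build of the module (the separate package ayoung points at) and aiming it at a py3 environment; all paths below are illustrative, not a recommended deployment:

    # Load the Python 3 build of mod_wsgi; the interpreter is chosen at
    # build time, so this line is what actually selects py3.
    LoadModule wsgi_module modules/mod_wsgi_python3.so

    # Run keystone in its own daemon group; python-home points at a
    # Python 3 installation or virtualenv (illustrative path).
    WSGIDaemonProcess keystone-public user=keystone processes=5 threads=1 \
        python-home=/opt/keystone-py3
    WSGIScriptAlias /identity /usr/local/bin/keystone-wsgi-public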
18:16:23 <notmorgan> the work is 1) write a dogpile/oslo.cache driver, 2) convert ksm to use pymemcached instead of python-memcached *or* to use oslo.cache
18:16:43 <notmorgan> 3) default keystone to use the new oslo.cache driver instead of the python-memcache one
18:16:47 <notmorgan> all should be very easy to do
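To make step 1 concrete, here is a rough sketch of a dogpile.cache backend wrapping pymemcache; the backend and class names are invented, and a real driver would also need value serialization, multi-key operations, and multi-server support:

    from dogpile.cache import register_backend
    from dogpile.cache.api import CacheBackend, NO_VALUE
    from pymemcache.client.base import Client


    class PyMemcacheBackend(CacheBackend):
        """Minimal dogpile.cache backend on top of pymemcache (sketch)."""

        def __init__(self, arguments):
            # single 'host:port' server for brevity
            host, _, port = arguments.get('url', '127.0.0.1:11211').partition(':')
            self.client = Client((host, int(port or 11211)))

        def get(self, key):
            value = self.client.get(key)
            # dogpile expects the NO_VALUE sentinel on a miss, not None
            return NO_VALUE if value is None else value

        def set(self, key, value):
            self.client.set(key, value)

        def delete(self, key):
            self.client.delete(key)


    # registering it makes the backend selectable by name in a cache region
    register_backend('hypothetical.pymemcache', __name__, 'PyMemcacheBackend')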
18:17:13 <lbragstad> i can only imagine that a spec is required for #2
18:17:14 <notmorgan> (you might need a minor layer of conversion, since pymemcache does not support the same interfaces as python-memcached)
18:17:21 <notmorgan> lbragstad: nah.
18:17:29 <notmorgan> just do it as a straight up conversion.
18:17:42 <notmorgan> this imo, is a bug if we do a conversion to pymemcache
18:17:46 <notmorgan> a spec if we move to oslo.cache
18:18:15 <lbragstad> a spec if we move ksm to use oslo.cache?
18:18:20 <notmorgan> yeah
18:18:24 <notmorgan> because that is a much bigger change
18:18:33 <notmorgan> a lot of "set up the region" etc
18:18:37 <notmorgan> same things we do in keystone
18:18:53 <notmorgan> where direct use of pymemcache is "create translation and instantiate correct object"
18:19:04 <notmorgan> no functional changes, no option changes, etc
18:19:11 <notmorgan> oslo.cache is a *much* bigger change
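To make the "create translation and instantiate correct object" option concrete, here is a minimal shim (all names invented) that exposes the slice of the python-memcached interface ksm calls, on top of pymemcache, which is where the two libraries' interfaces disagree; serialization and multi-server support are deliberately elided:

    from pymemcache.client.base import Client


    class MemcacheClientShim(object):
        """python-memcached-style get/set/delete backed by pymemcache."""

        def __init__(self, server):
            # python-memcached takes 'host:port' strings; pymemcache
            # wants a (host, port) tuple
            host, _, port = server.partition(':')
            self._client = Client((host, int(port or 11211)))

        def get(self, key):
            return self._client.get(key)

        def set(self, key, value, time=0):
            # python-memcached names the TTL argument 'time';
            # pymemcache names it 'expire'
            return self._client.set(key, value, expire=time)

        def delete(self, key):
            return self._client.delete(key)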
18:19:24 <lbragstad> notmorgan is there a noticeable advantage to one approach over the other?
18:19:50 <notmorgan> oslo.cache might impact swift more
18:19:53 <lbragstad> i could see having everything use oslo.cache consistently being an advantage for simplicity
18:20:01 <notmorgan> pymemcache, if you use a simple translation object, would have no impact
18:20:08 <notmorgan> oslo.cache is more consistent with *all* of openstack
18:20:18 <dolphm> lbragstad: ++
18:20:32 <notmorgan> those are the two sides of the concerns
18:20:34 <notmorgan> i prefer oslo.cache
18:20:44 <notmorgan> i think pymemcache might be *way* simpler to just drop in
18:20:49 <lbragstad> ok - i can work on drafting a spec that details the work and at least propose it for review
18:20:53 <notmorgan> depending on the amount of time folks have to dedicate to it
18:21:28 <dstanek> we use python-memcache directly not because we wanted to limit dependencies right?
18:23:30 <lbragstad> dstanek or was it because keystonemiddleware's caching implementation pre-dated oslo.cache?
18:23:56 <dstanek> lbragstad: someone told me way back when that it was a dep thing. not sure who though
18:23:57 <dolphm> lbragstad: that's definitely true, but i don't know if it's the *because*
18:24:25 <lbragstad> dolphm me either
18:24:31 <lbragstad> notmorgan do you know?
18:25:41 <lbragstad> we can circle back on this, too
18:25:57 <dstanek> wha... https://review.openstack.org/#/c/268662/
18:26:28 <ayoung> dstanek, yes?
18:26:31 <lbragstad> dstanek huh
18:26:47 <lbragstad> dstanek looks like jamie was at least planning on moving to oslo.cache
18:26:59 <ayoung> perpetually
18:27:07 <lbragstad> that's a good enough answer for me
18:27:20 <lbragstad> i can attempt to document this in a keystonemiddleware spec
18:27:49 <lbragstad> #action lbragstad to propose spec to keystonemiddleware detailing the steps required to move to oslo.cache
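For context on the "set up the region" work that makes the oslo.cache route bigger, this is roughly the pattern keystone itself uses and that the spec would adapt for keystonemiddleware; the decorated function below is a hypothetical stand-in:

    from oslo_cache import core as cache
    from oslo_config import cfg

    CONF = cfg.CONF

    # register oslo.cache's [cache] options and build a dogpile region
    # configured from them
    cache.configure(CONF)
    region = cache.create_region()
    cache.configure_cache_region(CONF, region)

    MEMOIZE = cache.get_memoization_decorator(CONF, region, group='cache')


    @MEMOIZE
    def validate_token(token_id):
        # stand-in for the expensive round trip to keystone; the result
        # gets cached in the configured region
        return {'token_id': token_id, 'valid': True}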
18:28:32 <lbragstad> #topic Boston Forum Brainstorming
18:28:37 <dstanek> lbragstad: ++
18:28:43 <lbragstad> who all is planning on going to the forum?
18:28:56 <dstanek> lbragstad: there is a linked bug that you touched last too
18:29:01 <lbragstad> apparently there are planning sessions and we have a deadline of April 2 to have things submitted
18:29:13 <dstanek> i'm planning on going. just have to work out parking.
18:29:34 <lbragstad> #link https://etherpad.openstack.org/p/BOS-Keystone-brainstorming
18:29:40 <gagehugo> I might be going
18:29:59 * cmurphy likely going
18:30:05 <lbragstad> i just need a rough idea of what our attendance might look like
18:30:09 <knikolla> i'm going
18:30:12 <lbragstad> and possible topics
18:30:13 <ayoung> I'll be there
18:30:43 <knikolla> it's a 15 min walk from my office
18:30:48 <lbragstad> knikolla nice
18:30:59 <lbragstad> i'm planning on organizing this just like we did for the PTG
18:31:10 <dstanek> lbragstad: are you planning on doing keystone sessions?
18:31:12 <lbragstad> just start dumping information in the etherpad and i'll go through and organize it
18:31:21 <lbragstad> dstanek that's what i'm trying to figure out
18:31:38 <lbragstad> given a list of topics, i'll try and organize them into buckets and propose them
18:31:55 <dstanek> lbragstad: i probably won't be in many of those. i'm planning on going to presentations and hallway talking about openstack
18:32:18 <lbragstad> and since the deadline is within a couple weeks - it would be nice if everyone threw their ideas down sooner rather than later
18:32:27 <ayoung> I'm presenting on the RBAC proposal, along with knikolla
18:33:00 <lbragstad> i was under the assumption that the forum was going to be tailored for operator feedback, so i wasn't expecting to have to organize many keystone specific dev sessions
18:33:04 <ayoung> Hope to have a demo ready for then
18:33:44 <dstanek> lbragstad: as in having a keystone "room" for operators to visit?
18:34:03 <lbragstad> dstanek yeah - i was for sure going to try and get that
18:34:17 <ayoung> Will also plan on having a Climbing gym night.
18:34:33 <lbragstad> it seems like a lot of other projects are planning on having dev-like discussions (like the PTG)
18:35:04 <dstanek> i'm actually hoping to learn more about the rest of the ecosystem
18:36:11 <lbragstad> ok - maybe i'll swing by the release room and ask specifics in there
18:36:36 <lbragstad> because I'm kind of unsure what to plan for based on who is going to be there in comparison to past summits
18:36:59 <lbragstad> regardless, if there are things you want to talk about at the forum, please feel free to add them to the etherpad #link https://etherpad.openstack.org/p/BOS-Keystone-brainstorming
18:37:52 <lbragstad> anyone have any questions on the forum that they want me to relay?
18:38:02 <notmorgan> dstanek: in ksm, we use python-memcached because 1) swift passes us an object in some cases, and 2) history (not dep related)
18:38:10 <notmorgan> sorry for the delay
18:38:23 <lbragstad> notmorgan no worries, thanks for the update
18:38:33 * notmorgan is chatting with landlord about dishwasher and potential "giving up the ghost" issues.
18:38:41 <henrynash> (henry joined, sorry to be most unfashionably late)
18:38:52 <dstanek> notmorgan: thanks. i wish i could remember who told me it was a dep issue
18:38:53 <ayoung> henrynash, !  You coming to Boston?
18:39:01 <henrynash> no, unfortunately not
18:39:15 <lbragstad> if there aren't any more questions specific to the forum we can move on
18:39:17 <lbragstad> henrynash o/
18:39:28 <lbragstad> #topic VMT Update: keystonemiddleware diagram and docs
18:39:30 <lbragstad> knikolla gagehugo
18:39:36 <notmorgan> dstanek: in ksa it would be a dep issue
18:39:50 <notmorgan> but we don't cache there
18:40:00 <gagehugo> wip draft review is here: https://review.openstack.org/#/c/447139/
18:40:16 <gagehugo> which has the updated arch diagram
18:40:36 <lbragstad> knikolla gagehugo you two just need reviews on #link https://review.openstack.org/#/c/447139/ ?
18:40:43 <lbragstad> as of right now?
18:41:10 <gagehugo> yeah, but those docs still need to be filled in more
18:41:19 <ayoung> https://review.openstack.org/#/c/447139/2/doc/source/artifacts/keystonemiddleware/pike/figures/keystonemiddleware_architecture-diagram.png,unified
18:41:26 <lbragstad> yeah - i parsed it a bit
18:41:31 <lbragstad> gagehugo i need to review it again
18:42:03 <ayoung> um... knikolla isn't the fetch from Memcache prior to the call to keystone?
18:42:04 <lbragstad> gagehugo knikolla anything else VMT related?
18:42:32 <knikolla> ayoung: nice catch.
18:42:37 <gagehugo> ah yeah
18:42:59 <ayoung> I thought we stored the token validations in memcache, so why else would we be doing memcache stuff if not to see if we have a valid token? Is there another reason?
18:43:29 <knikolla> i'll double check to make sure, but it would make sense that memcache is checked first.
18:43:45 <gagehugo> ayoung I probably just put the numbers in the wrong order
18:43:50 <ayoung> Do we also pass on the memcache key for the service to use later on?
18:44:39 <knikolla> ayoung: i do not think so. will check that too, see if they share config sections for that.
18:44:47 <ayoung> so, it should be steps 3, then 4, then 2, with something to indicate that step 2 depends on there being no response in step 4
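In pseudocode, the corrected order ayoung describes (memcache consulted before keystone, with the keystone call only happening on a miss); every name here is invented for illustration:

    def validate(self, token_id):
        data = self._cache.get(token_id)          # check memcache first
        if data is not None:
            return data                           # hit: no call to keystone
        # miss: only now do we round-trip to keystone to validate
        data = self._identity_server.verify_token(token_id)
        self._cache.set(token_id, data)           # cache for next time
        return data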
18:45:06 <lbragstad> sounds like we can keep this going in the review
18:45:27 <knikolla> yeah
18:45:33 <gagehugo> ok
18:45:42 <lbragstad> gagehugo knikolla ayoung thanks!
18:45:49 <lbragstad> #topic pike specs
18:45:55 <lbragstad> #info 3.5 weeks until Spec Proposal Freeze
18:46:01 <lbragstad> #info 11 weeks until Spec Freeze
18:46:07 <ayoung> RBAC in middleware should be there
18:46:17 <ayoung> its already approved, but in future state
18:46:18 <lbragstad> lets spend the last 15 minutes on spec
18:46:22 <lbragstad> specs*
18:46:30 <lbragstad> ayoung i think it's in ongoing
18:46:37 <lbragstad> #topic pike specs: Project Tags
18:46:43 <lbragstad> gagehugo o/
18:46:50 <gagehugo> o/
18:46:52 <lbragstad> #link https://review.openstack.org/#/c/431785/
18:46:56 <lbragstad> this one is looking good
18:47:01 <ayoung> lbragstad, right, and knikolla is picking up active development of the server side piece.  It was working in Nov, but then has lain fallow
18:47:05 <lbragstad> i only had a couple last minute/minor questions
18:47:26 <gagehugo> I am fine with limiting tags by # per request
18:47:36 <ayoung> I think I am actually OK with this. I've seen how Kubernetes uses labels, and I think this will be done in somewhat the same way
18:47:40 <lbragstad> gagehugo do we have an idea of what that number should be?
18:47:58 <ayoung> the real question is priority:
18:48:01 <gagehugo> nova uses 50 for instances
18:48:19 <lbragstad> gagehugo they limit the total number of tags an instance can have to 50?
18:48:33 <gagehugo> no that was per request
18:48:37 <gagehugo> from that schema
18:48:38 <ayoung> IE: who can set the tag.  And I think that was the issue we had last time.  I could see a scalability issue with the number of them.
18:48:38 <lbragstad> i.e. bulk tag operations are limited to 50 as a part of API validation?
18:48:58 <lbragstad> ayoung yeah - that was something edleafe was describing to me the other day
18:49:04 <gagehugo> https://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/tag-instances.html#rest-api-impact
18:49:13 <lbragstad> it's better to have a bunch of smaller tags than to have one *massive* tag
18:49:19 <ayoung> cuz I think the way that you explained it to me before it was a huge gaping security hole
18:49:33 <lbragstad> which begs another question, do we want to do more strict validation on the length/size of individual tags?
18:49:48 <ayoung> yes. yes yes
18:49:54 <ayoung> strictissississimo
18:50:07 <ayoung> make sure they are URL safe
18:50:14 <gagehugo> sure
18:50:21 <lbragstad> i think that's a requirement of the API WG
18:50:32 <lbragstad> er, a guideline that they suggest for tag implementation
18:50:39 <lbragstad> implementations*
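For comparison, the nova spec linked above validates tags with JSON Schema along these lines; the 60-character and 50-item limits are nova's numbers (a reference point, not a decided keystone limit), and the pattern reflects the guideline that tags stay URL-safe by avoiding characters like '/' and ',':

    tag_schema = {
        'type': 'string',
        'minLength': 1,
        'maxLength': 60,
        # keep tags URL-safe: no '/' or ','
        'pattern': '^[^,/]*$',
    }

    tags_update_schema = {
        'type': 'object',
        'properties': {
            'tags': {
                'type': 'array',
                'items': tag_schema,
                'maxItems': 50,  # cap bulk updates per request
            },
        },
        'required': ['tags'],
    }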
18:50:50 <ayoung> gagehugo, BTW... you do realize that this is going to be a security nightmare, right?
18:51:10 <ayoung> people are going to want to be able to tag their own projects, but that will get in the way with the "official" tags
18:51:19 <ayoung> which will have security/billing info implications
18:51:24 <lbragstad> ayoung right now only a project admin is going to be able to modify tags
18:51:39 <ayoung> lbragstad, define "project admin"
18:51:50 <lbragstad> "god-mode" admin
18:52:04 <lbragstad> whatever admin we use in the rest of our policy file
18:52:09 <ayoung> lbragstad, admin and is_admin_project=True?
18:52:28 <ayoung> lbragstad, gagehugo what if....
18:52:34 <ayoung> 2 distinct entities
18:52:49 <ayoung> one which is the labels that a user can put on their own resources, the other which are admin only
18:52:52 <lbragstad> ayoung it should require the same rules as https://github.com/openstack/keystone/blob/master/etc/policy.json#L40-L41
18:53:11 <lbragstad> at least until we have a better way to define granularity in policy that allows us to get around the security concerns
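If tag operations reuse the linked update_project rule, the default policy entries would presumably look like this (rule names hypothetical until the spec settles):

    "identity:update_project_tags": "rule:admin_required",
    "identity:delete_project_tags": "rule:admin_required",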
18:53:14 <ayoung> lbragstad, three little words:
18:53:17 <ayoung> hierarchical
18:53:18 <ayoung> multi
18:53:22 <ayoung> tenancy
18:53:26 <lbragstad> sure
18:53:32 <ayoung> all the same issues come up here
18:53:45 <lbragstad> I expect that to make this more complicated
18:53:46 <ayoung> a global set of tags is going to be a  nuisance
18:54:06 <ayoung> lbragstad, I wonder if tags should be scoped to domains.
18:54:08 <ayoung> lets ask henrynash
18:54:13 <gagehugo> hmm
18:54:15 <ayoung> henrynash, should tags be scoped to domains?
18:54:23 <lbragstad> we're going to run into similar permission issues with the limits proposal
18:54:26 <henrynash> hmm
18:54:48 <henrynash> on tags my gut feel is probably yes
18:55:15 <dstanek> i don't know how you'd do that
18:55:23 <ayoung> dstanek, me either
18:55:37 <dstanek> if i want to see all the projects that i have access to that are tagged with 'dev' - which dev?
18:55:47 <ayoung> gagehugo, lets plan on a pretty heavy brainstorming session on this at the summit
18:55:58 <gagehugo> ayoung: ok
18:56:08 <dstanek> to me tags are really an arbitrary string added by the user that has permission to modify the resource
18:56:21 <lbragstad> dstanek ++
18:56:38 <gagehugo> dstanek: ++ that is the intended goal
18:56:42 <ayoung> dstanek, if the tag is used for some billing purposes, then you want to limit who can add that tag to any resource
18:56:45 <ayoung> or remove it
18:56:45 <lbragstad> that's how i was thinking of it
18:57:21 <ayoung> so, if the end goal is to be able to tag all "high-cost" projects you can't do that without removing the ability to set tags from the project admin
18:57:38 <lbragstad> i would think it would be up to the deployer, if they are using billing tags, to make sure they control who has that access
18:58:01 <dstanek> lbragstad: how would they do that?
18:58:03 <ayoung> so, if you want tags as a way to be able to self organize, that is a very different use case than the "tag all projects across domains for billing purposes"
18:58:23 <lbragstad> dstanek currently our project api requires an admin
18:58:32 <dstanek> ayoung: ++
18:58:39 <lbragstad> dstanek i'd keep the tags api for projects consistent with that
18:58:57 <dstanek> lbragstad: but you could have domain admins be able to edit and then there's a security hole for the billing usecase
18:59:47 <dstanek> hmmm...i have to think about this a little more. i just got done reading the spec and was pretty happy
18:59:47 <lbragstad> dstanek can domain admins have the admin role?
18:59:53 <lbragstad> #link https://github.com/openstack/keystone/blob/master/etc/policy.json#L40-L41
19:00:05 <breton> not my topic today again, eh?
19:00:21 <dstanek> lbragstad: that is our default policy and not the policy everyone necessarily uses
19:00:23 <ayoung> Times up
19:00:33 <lbragstad> breton wanna talk about it in -keystone?
19:00:34 <lbragstad> dstanek right
19:00:36 <lbragstad> #endmeeting