16:00:24 <lbragstad> #startmeeting keystone
16:00:24 <openstack> Meeting started Tue May  8 16:00:24 2018 UTC and is due to finish in 60 minutes.  The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:28 <cmurphy> o/
16:00:28 <openstack> The meeting name has been set to 'keystone'
16:00:32 <knikolla> o/
16:00:33 <hrybacki> o/
16:00:36 <lbragstad> o/
16:00:41 <gagehugo> o/
16:00:52 <wxy|> o/
16:00:54 <lbragstad> #link https://etherpad.openstack.org/p/keystone-weekly-meeting
16:00:58 <lbragstad> agenda ^
16:01:32 <kmalloc> O/
16:02:02 <lbragstad> we have a good mix of folks here - so we can go ahead and get started
16:02:10 <lbragstad> #topic New direction with default roles
16:02:24 <lbragstad> thanks to jroll we got a bunch of good information about the openstack-specs process last week
16:02:31 <lbragstad> #link http://lists.openstack.org/pipermail/openstack-dev/2018-May/130207.html
16:02:38 <lbragstad> summary ^
16:02:39 <lbragstad> #link https://review.openstack.org/#/c/566377/
16:02:42 <lbragstad> new specification ^
16:03:21 <lbragstad> the TL;DR is that openstack-specs isn't really reviewed anymore, and sounds like it has been unofficially superseded by community goals
16:03:43 * hrybacki nods
16:04:01 <lbragstad> dhellmann: and mnaser had some really good input on the process too, and actually recommended that we propose the default roles thing as a community goal
16:04:09 <lbragstad> (which i was pretty hesitant about)
16:04:20 <lbragstad> just because of the volume of work
16:04:37 <lbragstad> in the meantime, they suggested building out a solid template using keystone and a couple other services before we formally propose it as a community goal
16:04:48 <lbragstad> which is what hrybacki has done with the new specification
16:05:04 <hrybacki> We are hoping for Keystone, Barbican, and Octavia at a minimum
16:05:53 <hrybacki> Regardless, I'd like to get a jump on the Keystone work sooner rather than later
16:06:01 <lbragstad> i thought the encouragement for proposing this as a community goal was a good sign
16:06:06 <hrybacki> +1
16:06:38 <lbragstad> so - i guess what we need to decide here is whether or not to accept the specification proposal after the spec proposal freeze
16:07:34 <kmalloc> I am ok with this one post freeze (tentatively, based upon cmnty feedback on community goals)
16:07:45 <kmalloc> This is pretty well defined scope wise.
16:08:06 <lbragstad> we also proposed it to openstack-specs a long time ago, so technically it's not a "new" proposal
16:08:06 <knikolla> ++
16:08:12 <lbragstad> but.. going through the formality
16:08:29 <lbragstad> cross the i's, dot the t's
16:09:01 <dhellmann> keep in mind that before it can be accepted as a goal, we would need to see at least one project adopting it and general support for it across the other projects
16:09:31 <cmurphy> the formalities are supposed to help us with planning work, moving the spec to a different repo isn't really changing anything for us
16:09:43 <lbragstad> ++ cmurphy
16:09:52 <lbragstad> we were planning on doing this work regardless i guess
16:09:59 <kmalloc> Exactly.
16:10:14 <lbragstad> cool - so no objections then, and it sounds like everyone is on the same page
16:10:22 <lbragstad> i'll link to this discussion in the ml list
16:10:32 <lbragstad> and we can go through the review process for the specification
16:11:02 <hrybacki> +1 thanks lbragstad
16:11:16 <lbragstad> dhellmann: brings up a good point, because we'll have the opportunity to really make the template solid before proposing the community goal
16:11:22 <lbragstad> so we could work in the system_scope stuff too
16:11:52 <lbragstad> and hopefully provide a really clear picture of how the two concepts work together
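To make the interaction between default roles and scope concrete, here is a minimal, self-contained sketch. This is not oslo.policy; the rule names, role hierarchy, and scope values are illustrative stand-ins for the reader/member/admin defaults and scope_types being discussed:

```python
# Sketch only: reader < member < admin implication, plus scope_types checks.
ROLE_IMPLIES = {
    "admin": {"admin", "member", "reader"},
    "member": {"member", "reader"},
    "reader": {"reader"},
}

# Hypothetical rule definitions for illustration; not keystone's real policy.
RULES = {
    "identity:get_user": {"role": "reader", "scope_types": {"system"}},
    "identity:update_user": {"role": "admin", "scope_types": {"system"}},
}

def enforce(rule, token_roles, token_scope):
    """Return True if the token's roles and scope satisfy the rule."""
    spec = RULES[rule]
    if token_scope not in spec["scope_types"]:
        return False  # wrong scope type, e.g. project token on a system API
    # A role satisfies the rule if it implies the required role.
    return any(spec["role"] in ROLE_IMPLIES.get(r, set()) for r in token_roles)
```

The point of the sketch is that the two concepts compose independently: scope answers "on what" and the role answers "how much".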
16:12:33 <lbragstad> #topic getting patrole in CI
16:12:56 <lbragstad> hrybacki: was this your topic?
16:13:02 <hrybacki> aye
16:13:22 <hrybacki> Okay, so this goes along with the default roles work. tl;dr we need a way to test it and CI is slow, slow, slow to bring in new features
16:13:33 <lbragstad> #link https://review.openstack.org/#/c/559137/
16:13:53 <hrybacki> and, now knowing that openstack-specs isn't really monitored, the original review in keystone-specs
16:13:55 <hrybacki> #link https://review.openstack.org/#/c/464678/
16:14:08 <gagehugo> would this also be a good fit for a "community goal"?
16:14:20 <hrybacki> gagehugo: yes. I think that these two go hand-in-hand
16:14:24 <gagehugo> have some role testing in multiple projects
16:14:37 <hrybacki> and testing it out in Keystone/Barbican would be a great place to start and work out the kinks
16:14:44 <gagehugo> felipemonteriro__ ^
16:16:12 <hrybacki> I'd like to see the original review picked back up, targeting a non-voting job against keystone changes. No way of knowing how long that will take tbh so getting the ball rolling now is preferable imo
16:16:46 <gagehugo> adding a playbook to keystone is pretty simple now too
16:16:49 <hrybacki> I don't see how this would pigeonhole us (I need to do more research on Patrole) but felipemonteiro__ is quite responsive
16:16:59 <hrybacki> gagehugo: +1
16:17:50 <hrybacki> I know that ayoung was opposed to the original keystone-spec when we were targeting RBAC in middleware work. I'm hoping he'd be willing to remove his objection now however
16:18:21 <hrybacki> lbragstad: as we are past spec proposal freeze I can see how this might not be ideal for Rocky but is it possible to work on a PoC in the meantime?
16:18:30 <ayoung> which one?
16:18:36 <hrybacki> ayoung: https://review.openstack.org/#/c/464678/
16:18:36 <felipemonteiro__> hrybacki: hi. i don't recall if i mentioned this in the spec, but i noticed keystone (particularly lbragstad) and others were committing only to openstack-specs, so i followed suit... i can backtrack as well and update the one in keystone-specs.
16:19:16 <hrybacki> felipemonteiro__: ack, we thought that was the current standard but it's not. Community Goals are where it's at but require adoption and support across several projects first
16:19:21 <gagehugo> yeah I don't think openstack-specs is actively reviewed
16:19:21 <ayoung> My issue with patrole is that I am afraid they are going to lock us in to the current, broken Bug 968696 stuff
16:19:23 <openstack> bug 968696 in OpenStack Identity (keystone) ""admin"-ness not properly scoped" [High,In progress] https://launchpad.net/bugs/968696 - Assigned to Adam Young (ayoung)
16:19:38 <ayoung> I would be fine with it going in afterwards, or if there was a plan to mitigate
16:19:48 <hrybacki> ayoung: how would it do that?
16:19:53 <hrybacki> non-voting*
16:19:57 <lbragstad> we could have a requirement to only add test cases for patrole once an API works in the new default roles and has scope_types set
16:19:58 <ayoung> I've not gotten a sense from the Patrole team that they take the problem seriously enough
16:20:37 <ayoung> TBH, if they care about policy, Patrole should be a distant second in effort
16:20:53 <ayoung> getting the changes in for scope should be full court press
16:21:32 <ayoung> felipemonteiro__, you are the Patrole Point of contact?
16:21:38 <hrybacki> ayoung: folks are working on that. I'm wanting to work with the AT&T folks to lay out the legwork for Patrole. I'd like to have a testing strategy ready to go before submitting a Community Goal for the default roles work
16:23:05 <hrybacki> I don't presume that goal will be accepted until T release
16:23:11 <kmalloc> Patrole must come after new role work
16:23:28 <felipemonteiro__> ayoung: yes -- while others are active downstream, i work upstream too
16:23:31 <hrybacki> I keep hearing that but I don't understand why
16:23:36 <ayoung> felipemonteiro__, why do you care?
16:23:43 <kmalloc> We can't lock in on the broken role setup we have now.
16:23:54 <ayoung> felipemonteiro__, why did you build patrole?  What was the impetus?
16:24:07 <hrybacki> kmalloc: I don't get why it would do that
16:24:10 <kmalloc> Gating on it locks us/wedges us deeper into where we are today, which is already seeing a lot of work to dig out of the hole
16:24:36 <lbragstad> all the testing would require setup classes and infrastructure to assert scopes and things like that
16:24:37 <felipemonteiro__> i didn't build it, i helped contribute to it. i care because at&t cares about validating any rbac changes since our rbac customization is rather complex compared to most folks'.
16:24:38 <hrybacki> non-voting jobs won't lock us in but they will give us a baseline to look back upon
16:25:09 <ayoung> felipemonteiro__, that is the problem
16:25:10 <gagehugo> kmalloc I would hope we wouldn't make it voting until the broken roles are fixed
16:25:15 <ayoung> we need to change RBAC, radically
16:25:24 <ayoung> because how it exists today is broken
16:25:25 <hrybacki> I'm not asking us to put in an entire suite of tests -- I just want to ensure that the infrastructure is there for when we are ready to do just that (after role work is in)
16:25:33 <kmalloc> hrybacki: try changing something tempest is locked in on. It at least extends the timeframe by 2-3 cycles, and may mean we are even longer pushing out for generalized acceptance/gating of the new way
16:25:34 <ayoung> and if we test based on how it is today, we will not be able to fix it
16:25:39 <ayoung> and I am firm on this
16:25:56 <ayoung> patrole, without a fix for 968696 is dangerous
16:26:03 <ayoung> and I have been burnt by this pattern before
16:26:13 <ayoung> we had tests in tempest that assumed broken behavior
16:26:16 <kmalloc> In short, it must come (anything that locks/gates/tests with voting) after new role work.
16:26:26 <lbragstad> is there something preventing us from doing this incrementally?
16:26:34 <ayoung> and we could not fix it in Keystone because then the tests would fail in tempest, and not pass gate etc
16:26:40 <hrybacki> but how do we confirm role work is not broken without testing it?
16:26:54 <hrybacki> also again non-voting**
16:27:02 <ayoung> hrybacki, we know it IS broken today
16:27:06 <lbragstad> if we add the default roles to an API in keystone, we can add a test to patrole afterwards to assert the correct behavior along with unit tests, can we not?
16:27:08 <kmalloc> lbragstad: current roles are so narrow, if we test + vote now, we aren't going to be able to move the needle
16:27:08 <ayoung> once we fix it, yes, patrole
16:27:10 <ayoung> not until
16:27:29 <kmalloc> So, we should setup patrole to test with the new defaults out the gate.
16:27:36 <ayoung> felipemonteiro__, help out on the 968696 work and you will get my full supprt
16:27:40 <kmalloc> Just not on the current broken RBAC setup.
16:27:49 <felipemonteiro__> ayoung: yeah you're right, we've had to seek out workarounds to said bug internally, including scoping adminness via sub-admin roles and the like, but as a result of which we had to have a means of validation.
16:27:52 <ayoung> do we have scope changes for all services?
16:28:00 <hrybacki> kmalloc: that is precisely what I'm shooting to do
16:28:20 <ayoung> felipemonteiro__, here is what I would accept
16:28:24 <lbragstad> i think we've all said the same thing, just different ways
16:28:31 <ayoung> a version of Patrole that fails due to bug 968696
16:28:33 <openstack> bug 968696 in OpenStack Identity (keystone) ""admin"-ness not properly scoped" [High,In progress] https://launchpad.net/bugs/968696 - Assigned to Adam Young (ayoung)
16:28:41 <ayoung> and that passes when we show changes go in that fix it
16:28:49 <kmalloc> hrybacki: good, what ayoung and I are saying is we absolutely don't want to test current state of the world.
16:28:59 <kmalloc> We're on the same page.
16:29:36 <hrybacki> okay cool -- so tl;dr ensure the spec clearly indicates the non-voting* gate will be in place and testing against <how we expect success to look after things are resolved>
16:29:51 <ayoung> hrybacki, ++
16:29:52 <gagehugo> hrybacki ++
16:29:57 <kmalloc> As long as we keep that in mind we are good. Standing patrole up for the new defaults is how we confirm we have fixed 968696
16:30:02 <kmalloc> And don't regress.
16:30:13 <hrybacki> this way we have infra in place, that won't interfere with fixes, and doesn't stick us into existing behaviors
16:30:19 <kmalloc> ++
16:30:22 <kmalloc> Exactly.
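For reference, a non-voting check job like the one agreed on above is a small zuul change. A hedged sketch (the job name is hypothetical, not an existing job):

```yaml
# Hypothetical .zuul.yaml fragment: run a Patrole RBAC job on keystone
# changes in check, but don't let it block anything while roles are in flux.
- project:
    check:
      jobs:
        - keystone-dsvm-patrole-rbac:  # illustrative job name
            voting: false
```

Flipping `voting: false` later is the only change needed once the new defaults land and the tests are trusted.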
16:30:24 <hrybacki> excellent, thanks all :)
16:30:30 <hrybacki> lbragstad: you right
16:30:31 <lbragstad> it sounds like the patrole spec can be collapsed into the testing section of the default roles one
16:30:31 <ayoung> lbragstad, do we have changes in place for, say cinder, neutron, glance, as well as nova?
16:30:49 <lbragstad> ayoung: i've been working on patches for nova, but i haven't gotten everything done yet
16:30:59 <hrybacki> lbragstad: I'm not sure I want to tightly couple them but don't have solid reasons to justify that intuition yet
16:31:02 <lbragstad> waiting on the ksm patch and the oslo.context patch
16:31:13 <ayoung> keep them separate, but link them at the related specs section
16:31:26 <lbragstad> hrybacki: ayoung ok - wfm
16:31:42 <hrybacki> are we okay to un-abandon https://review.openstack.org/#/c/464678/ in this case?
16:31:59 <felipemonteiro__> ayoung: i can certainly try to assist in changing things w.r.t. system scoping but it's not my full time job, so any change that isn't forthcoming isn't because of a lack of agreement or consensus
16:32:00 <hrybacki> I can work with felipemonteiro__ to bring it up-to-date given today's discussion
16:32:05 <ayoung> Its just a -1
16:32:30 <ayoung> hrybacki, TYVM
16:33:55 <hrybacki> thanks for popping in felipemonteiro__ -- lbragstad that is all I have on that topic
16:34:02 <lbragstad> cool
16:34:08 <lbragstad> #topic multisite support
16:34:14 <lbragstad> ayoung:  i think this one was you?
16:34:17 <zzzeek> hey this is me
16:34:25 <lbragstad> hey zzzeek
16:34:39 <zzzeek> i got alerted to a blueprint that ayoung is working on which is about something i've already been working on for months
16:34:55 <ayoung> #link https://review.openstack.org/#/c/566448/
16:34:57 <zzzeek> which is being able to have multiple overclouds coordinate against a single keystone database
16:35:19 <zzzeek> we've already worked out a deployment plan for this as well as solved a few thorny issues in making it work with very minimal tripleo changes
16:35:22 <ayoung> the pattern we are seeing is that multi-site is not necessarily planned up front
16:35:54 <ayoung> instead, a large organization might have several openstack deployments, and they realize that they need to be run as multiple regions of a single deployment
16:36:02 <zzzeek> right...so i call it a "stretch" cluster, where you take the two overclouds and then you add a galera database that is "stretched" over the two or more overclouds
16:36:10 <lbragstad> #link https://github.com/zzzeek/stretch_cluster/tree/standard_tripleo_version
16:36:21 <zzzeek> so that's a working POC
16:36:42 <kmalloc> Sounds like something icky to setup, but a good approach in general.
16:36:43 <ayoung> For people not up-to-speed with Tripleo, overcloud is the term for the openstack cluster we care about
16:36:44 <zzzeek> which does what it has to in order to make the two overclouds talk, so the plan is, make it nicer and make it not break with upgrades
16:36:57 <ayoung> there is an undercloud using ironic but those are site specific
16:37:17 <zzzeek> right so here we are assuming: tripleo, pacemaker / HA, docker containers, galera
16:37:41 <kmalloc> The only sticky point is divergence of keystone data, afaict.
16:37:51 <ayoung> zzzeek, and for the overcloud, we are supporting Keystone, nova, neutron, glance, swift, cinder, Sahara?
16:38:06 <ayoung> kmalloc, it is a major stickypoint, yes
16:38:12 <kmalloc> Between the two sites (assuming these are existing clouds)
16:38:21 <zzzeek> ayoung: keystone is the only service that actually communicates over the two overclouds.  the rest of the services remain local to their overcloud
16:38:28 <kmalloc> If it is strictly a new deployment, less icky.
16:38:31 <zzzeek> so for the merging of data, I've been assuming that the overclouds are built up independently but plan to be merged immediately
16:39:03 <ayoung> upgrades should be able to use the Keystone 0 downtime upgrade mechanism,
16:39:06 <zzzeek> so the keystone DBs are built up individually using distinct region names
16:39:14 <kmalloc> That is less problematic to address.
16:39:22 <ayoung> role IDs should be synced
16:39:23 <zzzeek> then the DBs are merged where all region-specific records are added
16:39:33 <kmalloc> Long running clouds would be hard (tm)
16:39:43 <zzzeek> at the moment I just use SQL scripts inside of an ansible playbook to do this
16:39:52 <ayoung> long running would require db conversion scripts, I would think
16:40:02 <ayoung> not to be lightly undertaken, but not impossible
16:40:05 <kmalloc> ayoung: yes. And it gets very sticky.
16:40:12 <ayoung> just...don't try to do it while running the cloud
16:40:27 <kmalloc> zzzeek: we could wrap that into a keystone-manage (vs ansible)
16:40:32 <kmalloc> And I'd be ok with that.
16:40:34 <ayoung> i.e.  stop keystone, run script, start keystone
16:40:36 <ayoung> ++
16:40:59 <zzzeek> kmalloc: yes it would be a keystone command
16:41:06 <ayoung> the biggest problem I would expect would be the Default domain
16:41:23 <kmalloc> ayoung: the ID is "default"
16:41:26 <ayoung> everyone has one, and they have the same ID and Name, but then there are all the projects under
16:41:30 <kmalloc> So, not a big issue.
16:41:47 <ayoung> all of the projects under it will conflict, tho
16:41:53 <zzzeek> kmalloc: but additional work here that is outside of keystone involves deploying the galera cluster across datacenters.    since ayoung's spec is against tripleo, that would be part of it right?
16:41:56 <kmalloc> This is not long running clouds, so manageable.
16:42:14 <ayoung> so if you have 2 clouds, and each has a Production project in the default domain, they will have the same names, different IDs
16:42:24 <ayoung> right,
16:42:26 <kmalloc> zzzeek: yes, with a spec/something for keystone to track against
16:42:48 <ayoung> for long running clouds, you would say "get everything off the default domain"
16:42:57 <ayoung> before you could even consider something like this
16:43:02 <kmalloc> But our side is pretty minimal, if you want to move it out of ansible.. non-existent if ansible is a-ok to continue with.
16:43:19 <zzzeek> kmalloc: the ansible thing i have right now is not production grade since it is hardcoding SQL in it
16:43:31 <ayoung> would that be something we could do in a migration?
16:43:36 <ayoung> is it a run once type thing?
16:43:43 <kmalloc> ayoung: unlikely.
16:43:53 <zzzeek> ayoung: not really, it doesn't change the structure of the database, it's about data coming into the DB
16:43:55 <kmalloc> But we can explore it.
16:44:09 <zzzeek> ayoung: it's like, here's a keystone DB, heres a bunch of new services etc. from a new region
16:44:17 <zzzeek> that can happen any number of times
16:44:39 <lbragstad> and it must be idempotent, right?
16:44:52 <kmalloc> Yeah, I would expect it to be idempotent
16:45:15 <zzzeek> lbragstad: well if it's a keystone-manage command, it's always nice if it is but i don't know that it's 100% necessary
16:45:42 <kmalloc> I think idempotent is a requirement. Risk of breaking things if it isn't is high
16:45:54 <zzzeek> kmalloc: sure.  so keystone might need to have some knowledge of this
16:46:00 <kmalloc> And up-arrow+enter could ruin a cloud.
16:46:02 <zzzeek> e.g. the keystone db
16:46:20 <zzzeek> kmalloc: yes if you're guarding against arrow-enter then yes, you need some kind of record that shows something already happened
16:46:27 <kmalloc> Yep.
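The guard being described here could be as simple as a bookkeeping table in the merged DB. A sketch, using sqlite3 as a stand-in for galera, with invented table and column names (the real keystone schema differs):

```python
import sqlite3  # stand-in for the shared galera DB in this sketch

def merge_region(conn, region_id, endpoints):
    """Idempotently copy one region's catalog rows into the merged keystone DB.

    A bookkeeping table records which regions were already merged, so
    re-running the command (the up-arrow+enter case) is a no-op.
    """
    conn.execute("CREATE TABLE IF NOT EXISTS merged_region (id TEXT PRIMARY KEY)")
    conn.execute("CREATE TABLE IF NOT EXISTS endpoint (region_id TEXT, url TEXT)")
    already = conn.execute(
        "SELECT 1 FROM merged_region WHERE id = ?", (region_id,)).fetchone()
    if already:
        return False  # already merged; safe to run again
    conn.executemany(
        "INSERT INTO endpoint (region_id, url) VALUES (?, ?)",
        [(region_id, url) for url in endpoints])
    conn.execute("INSERT INTO merged_region (id) VALUES (?)", (region_id,))
    conn.commit()
    return True
```

A `keystone-manage` wrapper around this shape would satisfy the "mostly safe to run anytime(tm)" expectation mentioned below.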
16:46:45 <zzzeek> so i didn't manage to get "domains" to do this, keystone seemed pretty intent on "domain" being "default"
16:46:51 <zzzeek> it seemed like "regions" were the correct concept
16:46:53 <kmalloc> And keystone-manage commands are mostly safe to run anytime(tm)
16:46:53 <zzzeek> is that not correct ?
16:47:23 <ayoung> so...from keystone-manage, we would want utilities to rename domains and regions.  Anything else?
16:47:38 <kmalloc> This sounds like regions to me. Domains are containers.. you'll need to replicate them, but that is fine.
16:47:51 <zzzeek> kmalloc: here's the code right now: https://github.com/zzzeek/stretch_cluster/blob/standard_tripleo_version/roles/setup-keystone-db/tasks/main.yml#L29
16:48:00 <zzzeek> kmalloc: which got me a basic nova hello world going
16:48:16 <kmalloc> Cool.
16:48:33 <zzzeek> kmalloc: there's a copy of each keystone DB on the shared cluster, keystone1 and keystone2, and it just copies region and endpoint in and that's it
16:48:35 <kmalloc> Yeah that should ultimately be doable via keystone-manage.
16:48:42 <ayoung> Some of that is Tripleo specific, I think.  If you install Keystone, you don't get any service catalog by default
16:48:44 <zzzeek> kmalloc: seemed a little too simple
16:48:57 <kmalloc> So, you're going to run into some issues with roles, projects, etc
16:49:18 <zzzeek> kmalloc: so the assumption i made was that, you've just deployed the two overclouds assuming this case so they have the same passwords, the same projects and roles, etc.
16:49:19 <ayoung> ah...right...role IDs should be sync-able, too
16:49:27 <kmalloc> Basically that doesn't solve authn/authz across the clouds consistently
16:49:34 <ayoung> and that will be an expensive database update
16:49:44 <kmalloc> Project IDs are uuid4 generated
16:49:47 <zzzeek> kmalloc: for our initial version we were going to keep it simple that the two overclouds can fold into each other
16:49:54 <kmalloc> So, if any are created, IDs won't be the same.
16:49:56 <ayoung> projects should be left alone
16:50:17 <ayoung> the problem is if two clusters have the same project name in the same domain.  New install, not an issue
16:50:20 <kmalloc> Same with users, etc. Passwords are bcrypt, so salted... same password but different hashed strings.
16:50:35 <zzzeek> kmalloc: yeah i actually deployed the overclouds w/ identical passwords to solve that :)
16:50:36 <ayoung> can we sync the salt?
16:50:52 <kmalloc> We would need to sync the hash.
16:51:07 <kmalloc> Which covers the salt.
16:51:33 <kmalloc> We can't force a salt within keystone
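The reason syncing the hash "covers the salt" is that the salt travels inside the stored hash string. Keystone actually uses bcrypt; this stdlib PBKDF2 stand-in (hypothetical `hash_password`/`verify` helpers, invented storage format) just illustrates the principle:

```python
import binascii
import hashlib
import os

def hash_password(password, salt=None):
    # bcrypt embeds its salt in the output string; this sketch mimics that
    # with an explicit "salthex$digesthex" format.
    salt = salt if salt is not None else os.urandom(8)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 1000)
    return "%s$%s" % (binascii.hexlify(salt).decode(),
                      binascii.hexlify(dk).decode())

def verify(password, stored):
    # Everything needed to re-check the password is in the stored string,
    # which is why copying the hash column between keystones is sufficient.
    salt_hex, _ = stored.split("$")
    return hash_password(password, binascii.unhexlify(salt_hex)) == stored
```

Two hashes of the same password differ (fresh random salt each time), so comparing hash columns across clouds can't detect "same password"; copying one cloud's hash is the workable path.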
16:52:14 <zzzeek> i would say, pick a winner, the operator has to know the passwords are going to change if they aren't the same already
16:52:18 <kmalloc> zzzeek: so, you will see drift of data between the keystones if we're only galera-ing the regions/catalog.
16:52:44 <zzzeek> kmalloc: right.  initial use case is that you have a brand new keystone DB in both datacenters
16:52:44 <kmalloc> Right, you should setup a "master cloud" that populates the new cloud
16:53:07 <kmalloc> And overwrites things as needed.
16:53:28 <ayoung> so, I think we need a spec for the keystone manage changes we want to support this, right?
16:53:38 <kmalloc> Yeah, I think we do
16:53:58 <kmalloc> This isn't a bad concept, or even anything outlandish
16:54:02 <zzzeek> OK so from my perspective, I'm looking at the full view, which is, you have two tripleo overclouds, you have deployed such that keystones are talking to a separate galera DB that is local to their environment, the "stretch" operation then pulls those galeras into a single cross-datacenter cluster and the keystone data is merged
16:54:06 <kmalloc> Just need to have it clearly written.
16:54:12 <lbragstad> we'll need a proposal exception as well
16:54:15 <ayoung> I did start a tripleo spec, but I think I want a separate one for the Keystone options
16:54:47 <zzzeek> also since nobody asked, the big trick here is that I modified the pacemaker resource agent to allow multiple pacemakers to coordinate a single galera cluster
16:54:51 <kmalloc> zzzeek: that sounds reasonable. Then pivot the keystone to read/write from.galera
16:54:54 <ayoung> I'll submit a placeholder
16:55:10 <kmalloc> Once the new db is in place.
16:55:30 <lbragstad> just a heads up, we're at 5 minutes remaining
16:55:37 <kmalloc> ayoung: this may need to be a "utility" separate from keystone-manage
16:55:44 <zzzeek> what also is nice is that the keystone services never change the DB they talk to, it's a galera cluster that just suddenly has new nodes as part of it
16:56:27 <kmalloc> ayoung: let's get a spec and go from there. Make sure to add clear use-case descriptions
16:56:30 <ayoung> kmalloc, if so, we'll get that in the spec
16:56:50 <kmalloc> So we don't need to refer to the meeting log to remember it all :)
16:58:33 <lbragstad> i look forward to reading the spec, it'll be more clear to me on paper i think
16:58:43 <kmalloc> #action ayoung to write up spec for stretch overcloud with zzzeek
16:59:16 <lbragstad> any other comments on this topic?
17:00:00 <lbragstad> thanks zzzeek
17:00:07 <lbragstad> #topic open discussion
17:00:12 <lbragstad> last minute -
17:00:26 <lbragstad> wxy|: has a new version of the unified limit stuff up
17:00:29 <lbragstad> and it looks really good
17:00:36 <lbragstad> everyone here should go read
17:00:38 <lbragstad> :)
17:00:40 <lbragstad> #link https://review.openstack.org/#/c/540803/10
17:00:54 <gagehugo> lemme leave a tab open for it
17:01:03 * hrybacki same
17:01:06 <lbragstad> thanks for the time everyone!
17:01:10 <lbragstad> #endmeeting