18:01:52 <dolphm> #startmeeting keystone
18:01:52 <henrynash> stevemar: dress appropriately please
18:01:52 <openstack> Meeting started Tue Aug  4 18:01:52 2015 UTC and is due to finish in 60 minutes.  The chair is dolphm. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:54 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:56 <dolphm> but i'm going to start with a quick reminder
18:01:56 <openstack> The meeting name has been set to 'keystone'
18:02:00 <dolphm> #topic Release schedule
18:02:01 <stevemar> henrynash: hehe
18:02:13 <dolphm> We cut liberty-2 last week (yay! good job to all), so we're now staring down liberty-3 and thus FEATURE FREEZE is next up on the calendar.
18:02:20 <dolphm> #link https://launchpad.net/keystone/+milestone/liberty-2
18:02:24 <dolphm> #info Feature freeze is September 1st
18:02:35 <dolphm> That means we have until the end of the month to land all the featureful bits for Liberty (all blueprints and wishlist bugs).
18:02:45 <dolphm> #link https://launchpad.net/keystone/+milestone/liberty-3
18:02:49 <dolphm> our next milestone target ^
18:02:59 <dolphm> If you have any questions, direct them to morganfainberg ;) because we’re moving on!
18:03:07 <dolphm> #topic Add type as a filter for /v3/credentials
18:03:10 <dolphm> #link https://bugs.launchpad.net/keystone/+bug/1460492
18:03:10 <openstack> Launchpad bug 1460492 in Keystone "List credentials by type" [Wishlist,In progress] - Assigned to Marianne Linhares Monteiro (mariannelinharesm)
18:03:12 <dolphm> #link https://review.openstack.org/#/c/208620/
18:03:26 <dolphm> The question on the agenda is "Does this need a spec since it impacts API?" -- and the answer is, of course, absolutely yes. It's a user-facing change and the API impact must be documented in keystone-specs/api :)
18:03:28 <dolphm> raildo, samueldmq, tellesnobrega: o/ floor is yours
18:03:34 <raildo> thanks :)
18:03:40 <raildo> hi guys, marianne started working with Keystone recently, and we were looking for a low-hanging-fruit bug for her, so we chose this one
18:03:43 <gyee> dolphm, that list doesn't look right, where's the x.509 one?
18:03:55 <stevemar> gyee: add it to the list
18:04:00 <raildo> so to fix this bug we just add 'type' to the credential filter hints, in the patch that dolph linked above
18:04:07 <gyee> stevemar, gracias amigo!
18:04:21 <raildo> since this is an API change, we might need to open a blueprint/spec, but it's fairly small, so maybe not. So we want to know: is the keystone core happy with this approach, or do we need to create a bp/spec for it?
18:04:38 <raildo> keystone core team*
18:04:44 <stevemar> raildo: as dolphm said, we should open a spec for it
18:04:45 <dstanek> raildo: see dolphm's answer above :-)
18:04:54 <dolphm> raildo: it has an API impact change and so it must be documented in keystone-specs
18:05:02 <stevemar> there won't be much discussion on it
18:05:09 <dolphm> raildo: the spec portion of it will naturally be super short
18:05:12 <lbragstad> I don't think it needs to be super lengthy, but ++ to needing a spec
18:05:26 <topol> +++ to needing a spec
18:05:29 <raildo> dolphm: great :)
18:05:34 <dolphm> raildo: it's the API documentation which is much more important for the long term
18:05:40 <lbragstad> ++
18:05:43 <raildo> dolphm: ++
18:05:45 * stevemar wonders if we're too pedantic about this...
18:06:00 <stevemar> dolphm: yes ++ on API doc update, the spec part makes me grumble a bit
18:06:05 <dstanek> stevemar: about documenting the api or specs?
18:06:12 <stevemar> dstanek: ^
18:06:29 <rodrigods> this is really a tiny fix, ++ to document first in the API spec
18:06:35 <stevemar> it's just a new hint to filter on, but whatever.
18:06:37 <rodrigods> and we already have the bug to track the change, anyway
18:06:42 <dstanek> i'd actually be happy with just the api update since the commit message will link to the bug
18:06:46 <raildo> dolphm: so will this only happen in Mitaka, since we can't propose it for Liberty?
18:06:58 <stevemar> raildo: it'll land in liberty
18:07:18 <dolphm> i'd aim for liberty
18:07:23 <lbragstad> raildo: we can target it to https://launchpad.net/keystone/+milestone/liberty-3
18:07:33 <raildo> great :)
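(For readers following along: a minimal sketch of what the proposed filter would look like from a client's point of view, assuming it lands as a plain query-string hint on the existing GET /v3/credentials collection. The endpoint, token, and credential type below are placeholders, not details from the patch under review.)

    # Minimal sketch of the proposed 'type' filter on /v3/credentials,
    # assuming it is exposed as an ordinary query-string hint.
    import requests

    KEYSTONE = "http://localhost:5000/v3"   # hypothetical endpoint
    TOKEN = "gAAAAA-placeholder-token"       # hypothetical auth token

    resp = requests.get(
        KEYSTONE + "/credentials",
        params={"type": "ec2"},              # the new filter hint being discussed
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()
    for cred in resp.json()["credentials"]:
        print(cred["id"], cred["type"])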
18:07:44 <dolphm> any other questions on this?
18:07:58 <stevemar> dolphm: nope
18:08:01 <dolphm> #topic Use [extras] for package requirements depending on deployment options
18:08:03 <tellesnobrega> thanks
18:08:06 <dolphm> #link https://review.openstack.org/#/c/207620/
18:08:08 <dolphm> bknudson: o/ floor is yours
18:08:13 * stevemar sneaks out
18:09:05 <dolphm> hrm, bknudson has not made a peep so far this meeting
18:09:11 <dolphm> bknudson: around?
18:09:36 <dstanek> that's unfortunate...i'm curious about this one
18:09:38 <topol> someone try and wake bknudson up with a really bad patch to review
18:09:49 <dolphm> alright, let's move on -- we can loop back if he arrives
18:09:52 <dolphm> dstanek: me too
18:09:54 <topol> let me try and grab him
18:10:19 <dolphm> (i'll give topol a minute)
18:10:38 <gyee> topol, check the men's room to see if he needs a square
18:10:48 * dstanek is whistling the Jeopardy theme
18:10:55 <lbragstad> he could be doing PSIRTs?
18:10:56 <bknudson> I finally got back from team lunch
18:11:02 <dolphm> bknudson: \o/
18:11:11 <bknudson> ok, so the [extras] support seems to work
18:11:33 <bknudson> so I proposed the changes for ldap
18:11:38 <bknudson> which were in test-requirements
18:11:45 <bknudson> and also I think memcache and something else
18:11:59 <bknudson> so, going forward, we also need to consider other optional requirements
18:12:04 <dstanek> bknudson: will the requirements get automatically updated in setup.cfg?
18:12:19 <bknudson> dstanek: I checked the source for the update tool and it looks like it.
18:12:37 <gyee> bknudson, do we gate the extras the same way we do on the requirements
18:12:56 <bknudson> gyee: yes, I think that's how the tool works
18:13:03 <dolphm> should we do this in a feature branch to test the infra workflow first?
18:13:19 <bknudson> oh, so another example is fernet tokens, since there are specific requirements for those
18:13:26 <dolphm> or can we look at another project doing this?
18:13:43 <bknudson> I haven't looked to see if anyone else has actually implemented it yet
18:13:59 <dstanek> will devstack need to be changed at all to make sure its requirements are installed?
18:14:04 <dolphm> bknudson: with more than one desirable 'extra', I assume you can do something like `pip install keystone[ldap,fernet]` ?
18:14:15 <bknudson> dolphm: yes, that's the format
18:14:49 <dolphm> but that wouldn't impact config, just deps?
18:15:08 <bknudson> there's no config change
18:16:14 <bknudson> cinder is also working on using this: https://review.openstack.org/#/c/203237/
18:17:12 <dolphm> cinder is a fantastic use case for this
18:17:45 <dstanek> bknudson: re: devstack ^
18:18:16 <bknudson> I proposed the change to devstack
18:18:31 <bknudson> devstack was pip installing ldappool itself
18:18:31 <dolphm> bknudson: do you need anything specific from us today, or was this topic just to raise awareness?
18:18:44 <dstanek> nice
18:18:49 <bknudson> this is to raise awareness amongst more than just the reviewers who look at these changes
18:19:40 <dolphm> if anyone wants to do further reading on extras:
18:19:42 <dolphm> #link https://pythonhosted.org/setuptools/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies
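(A rough sketch of the setuptools mechanism being discussed, written as a plain setup.py for illustration; keystone itself declares its metadata through pbr and setup.cfg, and the extra names and package lists below are illustrative assumptions rather than keystone's actual requirements.)

    # Illustrative setuptools 'extras' declaration; the extra names and the
    # packages behind them are assumptions for the sake of the example.
    from setuptools import setup

    setup(
        name="keystone",
        extras_require={
            "ldap": ["python-ldap", "ldappool"],   # optional LDAP identity backend
            "memcache": ["python-memcached"],      # optional memcache backend
            "fernet": ["cryptography"],            # fernet token provider deps
        },
    )

    # Deployers then install only what they need, e.g.:
    #   pip install keystone[ldap,fernet]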
18:20:19 <dolphm> bknudson: ready to move on?
18:20:27 <bknudson> yep, move on
18:20:29 <dolphm> #topic State of feature/keystoneauth_integration branch
18:20:37 <dolphm> bknudson: o/ floor is still yours
18:20:47 <bknudson> in case others aren't aware
18:20:55 <bknudson> the feature/keystoneauth_integration branch is broken
18:21:02 <marekd> why
18:21:03 <samueldmq> dolphm: remember there is a policy topic :)
18:21:04 <bknudson> and I was wondering if anybody is willing to look into it
18:21:27 <dolphm> samueldmq: yes, it's next on the agenda
18:21:39 <bknudson> tempest fails -- EndpointNotFound: publicURL endpoint for alarming service not found
18:21:44 <dolphm> samueldmq: although there are no details - please fill in the agenda!
18:21:54 <bknudson> I have no idea why this is happening on the feature branch and not master
18:22:06 <dstanek> samueldmq: i moved it to the end
18:22:22 <marekd> bknudson: can you paste the link ?
18:22:28 <bknudson> so if you're actually interested in the keystoneauth work I'd suggest getting involved in it
18:22:35 <dolphm> #link http://logs.openstack.org/83/208583/1/check/gate-tempest-dsvm-neutron-src-python-keystoneclient/aa1fcbb/
18:22:37 <bknudson> https://review.openstack.org/#/c/208583/ is an example of a failing review
18:22:51 <samueldmq> dolphm: done
18:22:54 <samueldmq> dstanek: thanks
18:23:31 <bknudson> and my attempt at merging master to the branch fails neutron tempest the same way
18:23:41 <dolphm> marekd: are you volunteering to investigate? :)
18:23:57 <marekd> i'd rather not promise anything this week :(
18:24:28 <gyee> bknudson, alarming service, that monasca?
18:24:46 <gyee> let me ping them to see
18:24:57 <bknudson> the errors are in ceilometer
18:25:08 <marekd> dolphm: http://logs.openstack.org/83/208583/1/check/gate-python-keystoneclient-python27/7ff54a1/console.html this fails due to oauthlib
18:25:15 <dolphm> oh wow, i didn't realize we have project monasca and project meniscus
18:25:26 <marekd> we had it in ksc and it was fixed
18:25:29 <marekd> dolphm: WHAT?!
18:25:31 <bknudson> the python errors are known
18:25:31 <marekd> meniscus?
18:25:41 <gyee> never heard of meniscus
18:25:44 <dolphm> marekd: logging as a service
18:25:48 <bknudson> it's the neutron tempest failure that's not fixed in the merge
18:25:53 <dolphm> it's mostly abandoned https://github.com/ProjectMeniscus/meniscus
18:26:14 <marekd> bknudson: but the link you pasted fails on python
18:26:34 <ayoung> sorry I'm late
18:26:49 <bknudson> marekd: yes, I ran an experiment to see if master fails the same way or if it was introduced when I did the merge.
18:26:57 <marekd> bknudson: the reviews to look into are here: https://review.openstack.org/#/q/status:open+project:openstack/python-keystoneclient+branch:feature/keystoneauth_integration,n,z , right?
18:27:03 <bknudson> the merge is here: https://review.openstack.org/#/c/207267/
18:27:42 <bknudson> marekd: yes, you can look at the open merges for the branch... all the newer reviews are failing.
18:29:02 <gyee> alarming should be monasca, ceilometer is metering
18:29:37 * marekd soon there will be Linux Kernel as a Service....
18:29:53 <bknudson> gyee: I don't think monasca is running? http://logs.openstack.org/83/208583/1/check/gate-tempest-dsvm-neutron-src-python-keystoneclient/aa1fcbb/logs/ -- or maybe there's no log for it?
18:29:53 <dstanek> marekd: docker?
18:30:49 <marekd> dstanek: along with Kubernetes, covered with OpenStack branded Manila
18:31:17 <dolphm> i think we're off topic here :)
18:31:40 <marekd> sure, sorry
18:31:51 <gyee> bknudson, yeah, lemme sort it out with the monasca folks
18:32:02 <bknudson> gyee: ok, thanks
18:32:16 <bknudson> ok, we can move on we don't need to debug it here
18:32:27 <bknudson> just raising awareness in case somebody was concerned about it
18:32:31 <dolphm> #info feature/keystoneauth_integration is failing gate
18:32:51 <dolphm> i assume stevemar and jamielennox would be interested as well - but they're both afk
18:32:54 <dolphm> so, moving on!
18:32:59 <dolphm> #topic Centralized policy distribution
18:33:01 <dolphm> ayoung, samueldmq: o/ floor is yours
18:33:09 <samueldmq> alright
18:33:25 <dolphm> samueldmq: there's no links or anything on the agenda - might want to provide those first
18:33:26 <samueldmq> specs are under review, all concerns addressed, we need some weight/decision on acceptance
18:33:39 <samueldmq> #link https://review.openstack.org/#/c/197980/
18:33:44 <samueldmq> #link https://review.openstack.org/#/c/134655/
18:34:04 <gyee> samueldmq, that's the stuff you demoed at the midcycle right?
18:34:10 <ayoung> I can proxy for jamie
18:34:10 <samueldmq> gyee: yes
18:34:19 <gyee> lets do this!
18:34:26 <samueldmq> we need reviews .. we have middleware  + oslo.policy code
18:34:36 <samueldmq> and I am working on the keystone server bit already
18:34:36 <henrynash> samueldmq: so the thing I hear most of all from people is whether this is a burning issue that needs to be solved now (and hence needs an SFE)…perhaps you could address that issue
18:34:51 <samueldmq> dstanek is taking care of adding support to CacheControl at ksclient
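(A very rough sketch of the consuming side, under the assumption that a service pulls its policy from keystone and hands it to oslo.policy. The URL, auth handling, and refresh strategy here are placeholders; the specs linked above put the actual fetch and caching logic in keystonemiddleware with CacheControl, not in each service.)

    # Hypothetical policy refresh: fetch a policy blob from keystone and load
    # it into an oslo.policy Enforcer. Placeholder URL/token; real caching
    # would be handled by keystonemiddleware per the specs under review.
    import requests
    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    enforcer = policy.Enforcer(CONF)

    def refresh_rules(policy_id, token):
        resp = requests.get(
            "http://keystone:35357/v3/policies/%s" % policy_id,  # placeholder
            headers={"X-Auth-Token": token},
        )
        resp.raise_for_status()
        rules = policy.Rules.load_json(resp.json()["policy"]["blob"])
        enforcer.set_rules(rules, overwrite=True)

    # refresh_rules(my_policy_id, my_token)   # on a timer or cache miss
    # enforcer.enforce("identity:list_credentials", target={}, creds=context)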
18:35:00 <ayoung> dolphm, since we have your attention;  does the dynamic policy approach seem right to you?
18:35:18 <ayoung> I'd rather hash it out now before we get too far into implementation;
18:35:35 <dolphm> ayoung: by dynamic, do you mean centralized distribution?
18:35:39 <samueldmq> ayoung: I think so, I addressed all his concerns in the specs
18:35:42 <samueldmq> dolphm: yes
18:35:49 <ayoung> the unified stuff kind of died in committee, and that was based on a dolphm comment at the midcycle back in san antonio
18:36:16 <ayoung> dolphm, yeah, centralized, but also unified
18:36:20 <dolphm> ayoung: then yes, but i can't speak for deployers
18:36:51 <ayoung> fair enough..I just want to make sure that we've got a consensus.
18:37:07 <henrynash> ayoung: there's no spec we are voting on here for unification, is there?  I hope not
18:37:18 <dolphm> what was the outcome of the mailing list discussion? i don't think i ever saw the thread
18:37:23 <ayoung> dolphm, one issue, which I know you commented on in code review, is the "deleted project" issue
18:37:23 <samueldmq> henrynash: we just targeted the distribution for L
18:37:33 <samueldmq> henrynash: which is what those 2 specs are about
18:37:33 <henrynash> samueldmq: ++
18:37:45 <dolphm> henrynash: i imagine unification would be an operator decision - nothing to do with the implementation
18:37:46 <samueldmq> which is what ...
18:37:54 <ayoung> if we scope everything, what do we do to unlock a resource where we deleted the project in keystone but the remote service didn't get or process the notification
18:38:03 <dolphm> henrynash: although it'd be great to publish a unified policy file, or a tool to create one
18:38:42 <gyee> what does project deletion have anything to do with policies?
18:38:48 <ayoung> dolphm, ++  providing a unified one was my goal.  In order to do that, there are lots of nits to knock down
18:38:54 <david8hu> dolphm,  who would be the consumer?
18:39:18 <ayoung> gyee, policy needs to provide access to the API in order to delete a project's resources
18:39:27 <dolphm> ayoung: i think services should be cleaning up their own resources in response to deleted project notifications - not some cleanup process making HTTP API calls through an authz layer.
18:39:30 <ayoung> if I need to get a scoped token, but that project is deleted, I can't get a scoped token
18:39:34 <samueldmq> gyee: yes I think we're off topic ...
18:39:38 <dolphm> ayoung: so, that example seems like a non-issue to me
18:39:44 <ayoung> dolphm, agreed, but there is a long way to go until they can do that
18:39:46 <dstanek> sorry got distracted
18:39:50 <dstanek> back now
18:39:53 <ayoung> and there is always the possibility of a missed notification
18:39:59 <ayoung> how do we handle in the interim?
18:40:01 <henrynash> ayoung, dolphm: as you know, I am fully against a unified policy file… that doesn’t mean there aren’t other ways (in the future) of providing common rules etc. across the policies of different projects
18:41:05 <ayoung> henrynash, "Fully against?"
18:41:33 <samueldmq> we've had a spec-freeze exception request, spec-freeze time is over and we didn't reach a decision
18:41:44 <ayoung> dolphm, so, let's assume for now that there are missed notifications etc... the only other option I can see is "admin role on the admin project in the default domain can delete anything"
18:41:51 <samueldmq> should I submit a FFE now ? are cores ok with this feature landing in Liberty?
18:42:06 <henrynash> ayoung, samueldmq: regardless of the nits and/or specifics we are arguing about… the question here is whether we have a burning issue that requires an SFE… i.e. is the problem one that causes us to make an exception to our working practices.  I support the centralised approach, I just haven’t seen the evidence for the urgency
18:42:19 <dolphm> ayoung: so, that sounds like authorization at the service level -- not specific to any tenant
18:42:37 <lbragstad> I'd like to see some more buy-in from an operator
18:42:48 <henrynash> ayoung, samueldmq: which customers are shouting about the problem?
18:42:53 <ayoung> lbragstad, gyee is going to the operators meetup.  He'll proxy
18:42:57 <gyee> I am attending the ops midcycle
18:42:58 <samueldmq> henrynash: well, the code is under review already, there are just some missing bits
18:43:06 <samueldmq> henrynash: we just need reviews, nothing more
18:43:09 <dstanek> i agree with henrynash. there is no reason to rush this
18:43:15 <henrynash> samueldmq: with respect, that doesn’t answer the question
18:43:23 <samueldmq> henrynash: we weren't able to reach a decision on the SFE during L2
18:43:26 <samueldmq> :/
18:43:32 <ayoung> henrynash, the urgency is that we can't seem to make any progress.  Either reviews are too big and then get split, or too small and then provide less value... can't win.
18:43:43 <ayoung> So, we've reduced scope and reduced scope
18:43:53 <henrynash> ayoung: so why would we do this in an X-3 cycle?
18:43:56 <ayoung> and we have it down to "fetch from centralized", as simple as possible
18:43:58 <dstanek> gyee: could you bring it up? i'd love to have some feedback
18:44:06 <ayoung> henrynash, I proposed this back before the Summit
18:44:09 <ayoung> this stuff takes time
18:44:10 <gyee> dstanek, yes, will do
18:44:46 <ayoung> The win for the operators, when this is all in, is that they can delegate to their own users the ability to share resources
18:45:22 <ayoung> If I am a consumer of a company cloud, and I need to share with... say jamielennox, who also works here... I should not need to go to the central admin to get him access to my project.  All of this is driving toward that
18:45:24 <david8hu> ayoung, json is still going to be a challenge for deployers.
18:45:41 <ayoung> david8hu, may that be the worst we have to face
18:46:04 <henrynash> ayoung: I know… but we either have feature freezes for a reason, or we don't
18:46:43 <ayoung> henrynash, this change is in oslo messaging at this point, which was outside the "locked to release" in the past
18:46:58 <dolphm> david8hu: there's a conversation on openstack-dev about building a proper UI for policy editing
18:47:30 <dolphm> #link http://lists.openstack.org/pipermail/openstack-dev/2015-August/071184.html
18:47:44 <ayoung> henrynash, does "trial and error" count as a reason?  We've had it at different points in the cycle in the past.
18:48:18 <david8hu> ayoung, the debug aspect of things is going to be more challenging than it is today, but it can be addressed through proper documentation.
18:48:49 <bknudson> maybe horizon can solve all the issues that dynamic policies is targeting
18:48:58 <ayoung> so...can we please make progress on this?
18:49:11 <david8hu> dolphm, +1, I would love to join that conversation.  If you can send me links offline, it will be great.
18:49:24 <dolphm> david8hu: the link is above
18:49:29 <henrynash> ayoung: so introducing something as experimental seems OK… again, not sure about an X-3 cycle where we'll be cramming in other already-approved items
18:49:30 <david8hu> saw it
18:49:34 <david8hu> thx
18:50:03 <ayoung> at least approve the spec for backlog?
18:50:19 <samueldmq> ayoung: ++
18:50:43 <samueldmq> ayoung: henrynash  and if we come with all the code implemented in the FFE request, we can analyze it apart from the spec approval
18:50:49 <henrynash> ayoung: sure, that gets my vote… as I say in my comment on the patch, I support this, just not sure why it needs the urgency of getting into L
18:52:13 <samueldmq> dolphm: you agree ^
18:52:15 <samueldmq> ?
18:52:17 <ayoung> henrynash, because if it is in L then people can use it
18:52:24 <ayoung> and until people use it we don't get feedback
18:52:26 <gyee> just download the patch and test drive it, if it worth the money, approve it :)
18:52:33 <ayoung> and until we get feedback we are doing ivory tower development
18:52:44 <amakarov> samueldmq, is it considered ok just to reference another spec? "Please refer to the Keystone Middleware spec"
18:52:54 <ayoung> and code that is not in production is like excess inventory
18:53:06 <samueldmq> amakarov: yes it is, so I don't need to repeat all the text there
18:53:09 <dolphm> ayoung: people that are interested will review and test patches
18:53:15 <samueldmq> amakarov: it was a request from some cores already
18:53:27 <dstanek> amakarov: yes. i complained that there was too much of the same content in the specs
18:53:27 <david8hu> ayoung, +1, need ways to get all data points we need.
18:53:38 <dstanek> i couldn't see what was meant to be the actual change
18:53:47 <amakarov> samueldmq, dstanek thanks, +1 from me then :)
18:53:54 <dolphm> dstanek: ++
18:54:28 <dstanek> if we built it will they come?
18:54:34 <samueldmq> so yes, I think those specs should be ready to be approved to the backlog, since all the concerns from core reviewers were addressed
18:55:00 <ayoung> dstanek, unless we build it, they can't come
18:55:17 <gyee> dstanek, yes, I bet my beer money on it
18:55:22 <dstanek> i really do think we need to find an operator that we know will try it because they want it
18:55:30 <lbragstad> dstanek: ++
18:55:30 <dolphm> dstanek: ++
18:55:34 <dstanek> gyee: will you have HP start using it in test labs?
18:55:45 <gyee> dstanek, yes!
18:55:49 <lbragstad> I just think it will help guide an implementation, that's all
18:56:06 <david8hu> dstanek, ayoung, need a very good PR person to sell it :)
18:56:11 <ayoung> dstanek, this is a building block.  We can't give them what they really need yet.  They might or might not try this, but if we don't get it out there, they certainly won't
18:56:18 <dolphm> ayoung: you don't need to merge things to master to get feedback from early adopters
18:56:26 <dstanek> david8hu: no! that means we're doing it wrong
18:56:49 <ayoung> dolphm, I'm not merging code.  I'm trying to get the spec in
18:57:03 <ayoung> that means agreement on the direction
18:57:08 <david8hu> dstanek, no one is using the v3 policy api today.  I am hoping we can do better this time.
18:57:08 <gyee> ayoung, we also have the big tent now, just saying :)
18:57:09 <dstanek> gyee: have you gone through all of the proposals?
18:57:37 <dstanek> david8hu: then i would question if it was needed :-)
18:57:40 <gyee> dstanek, you have a pop quiz coming?
18:57:51 <dolphm> ayoung: are there any early stakeholders for this besides yourself and samueldmq?
18:58:18 <dstanek> gyee: no, but if you're going to help drive as a sort of 'business owner' you should go through them and see if they are doing what you need
18:58:37 <dstanek> dolphm: maybe gyee is a stakeholder
18:58:51 <dolphm> gyee: ?
18:59:01 <gyee> dstanek,dolphm, yes, I am involved
18:59:14 <dolphm> gyee: as a developer or from an operations perspective?
18:59:25 * gyee puts on his operator hat
18:59:25 * dstanek hears the chanting. 'do it, do it, do it'
18:59:36 <samueldmq> dstanek: o/
19:00:05 <samueldmq> so we get the specs approved and I will come back with the FFE request
19:00:05 <dolphm> gyee: i'm not sure if that's a yes or no
19:00:09 <samueldmq> then we analyze it next meeting
19:00:14 <samueldmq> dolphm: ^ sounds good ?
19:00:15 <gyee> dolphm, as an operator
19:00:20 <gyee> eating the dog food
19:00:23 <ayoung> dolphm, so you are implying that no features are going to go in upstream until they are in active deployment somewhere?
19:01:12 <dolphm> ayoung: of course not
19:01:24 <samueldmq> I think time's over
19:01:28 <dolphm> ayoung: samueldmq: but the issue i've seen on this topic for the past 6+ months is a lack of operator interest. if we're overlooking those voices, please point them out / have them speak up in the spec review.
19:01:28 <gyee> cutting edge, how we roll
19:01:37 <dolphm> and yes, time. samueldmq: thanks
19:01:40 <dolphm> #endmeeting