20:01:02 <ttx> #startmeeting tc
20:01:03 <openstack> Meeting started Tue Jun 14 20:01:02 2016 UTC and is due to finish in 60 minutes.  The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:01:07 <openstack> The meeting name has been set to 'tc'
20:01:08 <notmorgan> o/ -ish
20:01:15 <oshidoshi> o/
20:01:20 <mtreinish> o/
20:01:22 <ttx> Small quorum but we'll make it happen
20:01:28 <ttx> Our agenda for today is:
20:01:32 <ttx> #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
20:01:37 * edleafe_ lurks while in his parked car
20:01:41 <ttx> #topic Tidy of item 5 of the vulnerability:managed tag
20:01:48 <ttx> #link https://review.openstack.org/300698
20:01:56 <flaper87> edleafe_: please, open the windows or turn the AC on
20:02:05 <ttx> This one sounds like a good incremental improvement, now that the security folks are happy with it
20:02:13 * flaper87 is happy with it
20:02:14 <ttx> Still short of a few votes though
20:02:18 <fungi> vmt members are satisfied with sdake's proposal
20:02:31 <sdake> hi fungi :)
20:02:32 <notmorgan> fungi: ++
20:02:33 <dims> +1 from me
20:02:41 <dims> thanks fungi
20:02:43 <notmorgan> as both TC and VMT, i think it's good.
20:02:50 <edleafe_> flaper87: It's 97F. The AC is running
20:02:58 <ttx> need two more votes
20:03:07 <flaper87> edleafe_: jeez
* Rockyg suggests edleafe roll down a window to avoid heat stroke
20:03:40 <annegentle> Rockyg: edleafe_ nope just keep the car running with AC!
20:03:44 <notmorgan> ttx: oops looks like i forgot to rollcall vote on it
20:03:46 <notmorgan> fixed that
20:03:52 <ttx> looks like we have enough now
20:04:01 <ttx> ok, approved
20:04:08 <ttx> #topic Fast-tracking projects.yaml release tags / deliverable changes (dhellmann)
20:04:09 * amrith wanders in ...
20:04:12 <ttx> dhellmann: want to introduce this one ?
20:04:16 <dhellmann> sure
20:04:22 <dhellmann> We’ve added some validation rules to the release request process that verify that the thing being released matches the thing as defined for governance.
20:04:28 <dhellmann> We’ve started noticing in a few cases that updates to the governance repository to “fix” release-related settings like release models and deliverable definitions have been delaying releases.
20:04:34 <dhellmann> I would like to propose a general exception to the review policy for openstack/governance: if a change does not affect governance and is only related to tags managed by other teams, we do not wait the week for “objections”.
20:04:40 <dhellmann> For example, changing the release model of an existing deliverable and adding (or removing) the tags for stable or security teams would only need to be reviewed by the relevant team, and then they can be approved by the chair.
20:04:45 <dhellmann> Splitting or combining deliverables is a similar sort of change that’s a bit more complex because making new deliverables may require the approval of multiple groups to set the appropriate tags.
20:04:50 <dhellmann> However, as long as no new repositories are added I think this sort of change should also be fast-tracked.
20:04:53 * flaper87 notices dhellmann prepared hist intro ahead of the meeting
20:04:55 <flaper87> dhellmann: ++
20:04:57 <dhellmann> Changes that add repositories, and therefore change the voter roles for the TC, should continue to go through the 1 week review period.
20:05:00 * edleafe_ knows that annegentle knows south Texas heat
20:05:17 * dhellmann ends pastebomb
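For context, the validation dhellmann describes can be sketched as a simple consistency check between a release request and the deliverable's governance entry. All structure and key names below are hypothetical illustrations, not the actual openstack/releases tooling:

```python
# Hypothetical sketch of the validation dhellmann describes: a release
# request is checked against the deliverable as defined for governance.
# Neither data structure matches the real repositories exactly.

def validate_release_request(request, governance_deliverable):
    """Return a list of problems; empty means the request is consistent."""
    problems = []
    # The repository being released must belong to the deliverable.
    if request["repo"] not in governance_deliverable["repos"]:
        problems.append("repo %s is not part of this deliverable"
                        % request["repo"])
    # The deliverable must have a release model tag set in governance.
    model = governance_deliverable.get("release-model", "none")
    if model == "none":
        problems.append("deliverable has no release model set")
    return problems
```

When a check like this fails, the "fix" is a governance change, and waiting the full one-week review window on that change is exactly the release delay being discussed.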
20:05:56 <russellb> sounds reasonable to me
20:05:57 <flaper87> I'm in favor of this
20:05:59 <sdague> works for me
20:06:05 <johnthetubaguy> +1 from me
20:06:06 <notmorgan> sdague: ++ same
20:06:24 <ttx> I guess we could revert if a TC member ends up objecting two days after the fact
20:06:43 <dhellmann> ttx: yes, that's a good point. all of this is ultimately mutable.
20:06:47 <mtreinish> no issue from me
20:06:50 <annegentle> dhellmann: ooo my only tough sell point is the "splitting or combining deliverables" part... what's that like? Do you have an example?
20:07:01 <annegentle> dhellmann: gnocci/aodh/ceilometer type stuff?
20:07:02 <dhellmann> annegentle : the recent stuff with neutron
20:07:05 <ttx> dhellmann: so, release: and security: tags changes, + deliverable reorganization ?
20:07:27 <dhellmann> ttx: and the stable tag
20:07:45 <ttx> if I have the corresponding PTL signoff I can fast-track them
20:07:48 <dhellmann> and any other teams to which we've delegated management of tags in the future (like if annegentle's work turns into doc tags)
20:07:49 * edleafe_ has to go - daughter is ready.
20:08:00 <dhellmann> ttx: yes
20:08:07 <ttx> ok, let me capture that
20:08:17 <dhellmann> should I write this up more formally as a resolution, or can we agree to a procedural thing without that?
20:08:47 <dhellmann> we've made agreements on procedural decisions in-meeting in the past, iirc, like our current fast-track policy on typos
20:08:49 <ttx> #agreed for projects.yaml changes affecting stable:, security: or release: tags (or deliverable reorganization) the TC chair can fast-track approval if the corresponding PTL approved it
20:08:55 <flaper87> I'm good with meeting agreement
20:08:57 <dims> ttx : dhellmann : am generally in favor..don't really see a need for resolution, we could update some instructions/readme somewhere
20:08:59 <ttx> dhellmann: no, it's a procedural agreement
20:09:08 <dhellmann> ok, cool, just making sure
20:09:18 <ttx> I guess we could document it as house rules
20:09:18 <annegentle> I'm ok with fast-track
20:09:29 * flaper87 pictures dhellmann doing the "I don't need to write a resolution" dance
20:09:34 <dhellmann> ttx: +1 to writing things down
20:09:36 <annegentle> I might regret it later :) but seems like trust is best
20:09:40 <ttx> #action ttx to document house rules
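The house rule agreed above amounts to a predicate over what a projects.yaml change touches. This sketch uses illustrative tag names from the discussion; it is not actual governance tooling:

```python
# Sketch of the fast-track rule: a governance change may skip the
# one-week review if it only touches tags delegated to other teams
# (with that team's PTL signoff) and adds no new repositories.
# Tag names mirror the meeting discussion; the function is illustrative.

DELEGATED_TAGS = {"release", "stable", "security"}

def can_fast_track(changed_keys, adds_repos, ptl_approved):
    if adds_repos:
        # New repos change the TC voter rolls: full review period applies.
        return False
    if not ptl_approved:
        return False
    return set(changed_keys) <= DELEGATED_TAGS
```

Deliverable splits/merges that add no repositories would pass the same test, matching dhellmann's point at 20:04:50.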
20:09:58 <ttx> (that's the speech I end up giving at every start of cycle)
20:10:11 <flaper87> lol
20:10:29 <dhellmann> an alternative is to move some of those tags into the repo where we're using them and validating them, but we've been focusing on putting them all in the governance repo for consistency so...
20:10:30 <ttx> ok, anything else on that topic ?
20:10:31 <dims> haha
20:10:45 <dhellmann> ttx: that's it from me, if we've settled the question
20:11:18 <ttx> #topic Add project Smaug to OpenStack big-tent
20:11:24 <ttx> #link https://review.openstack.org/326724
20:11:34 <ttx> Smaug aims to provide data protection as a service (in a large sense of the word "data")
20:11:47 <ttx> I could not spot any good reason to deny them / delay their entry, so I'm tentatively +1 on this
20:12:04 <ttx> saggi: o/
20:12:12 <ttx> Questions ?
20:12:17 <annegentle> one concept I read and read but couldn't understand is:
20:12:19 <saggi> I'm here
20:12:28 <annegentle> what are the plugable pieces? the protections?
20:12:51 <saggi> There are 3 main pluggable pieces.
20:13:04 <saggi> 1. is what we call protectables
20:13:17 <annegentle> are "Protection Plugins" open source?
20:13:18 <saggi> It's the resources you want to protect and their relationships
20:13:28 <saggi> annegentle: yes
20:13:35 <annegentle> sorry saggi, I misspoke, I wondered about the plugins. What are some of the names?
20:13:44 <annegentle> of plugins?
20:14:01 <saggi> For protection we have cinder\nova\neutron.
20:14:02 <dhellmann> annegentle : are you asking about which plugins are implemented/planned or which areas of smaug can have plugins?
20:14:14 <russellb> https://wiki.openstack.org/wiki/Smaug#Available_Protectables
20:14:34 <flaper87> russellb: ++
20:14:44 <russellb> clears up what "protectables" are for me
20:14:57 <saggi> Things you want to protect. The "what".
20:15:10 <annegentle> dhellmann: what are names of plugins, that's what I'm trying to understand. I understand what you want to protect
20:15:16 <sdague> right, it's basically calling out to existing services - https://github.com/openstack/smaug/blob/ce2b117a184ade8f376839178fdc34c2d70896b7/smaug/services/protection/protection_plugins/volume/cinder_protection_plugin.py#L41
20:15:37 <saggi> The user can add external protectables if they are required for the application. They define new types of resources you can protect and how they relate
20:15:40 <dhellmann> I understood this to be a thing to put in front of different backup tools that would implement backup for different types of objects in an appropriate way.
20:15:54 <saggi> They define that a volume is needed by a VM
20:16:13 <annegentle> saggi: as an app developer, do I define a plugin that protects multiple protectables?
20:16:17 <saggi> it's pluggable in the sense that the user can add entities external to OpenStack and they will be included in the tree.
20:16:24 <saggi> Protection Plugins are the "how"
20:16:35 <saggi> you can define multiple protection plugins to a single protectable.
20:16:45 <Rockyg> so, what would the manila plugin name look like?
20:16:49 <saggi> it's the admin's responsibility to choose what protection plugin to map to what resource
20:17:08 <annegentle> saggi: is this an admin tool or an app developer tool? Are you protecting the service or resources run on the service?
20:17:24 <dhellmann> saggi : so I could have a protection plugin for storing things to swift and a protection plugin for writing things to magnetic tape, and then the admin would map the appropriate one to keystone data, cinder data, etc.?
20:17:49 <Rockyg> dhellmann, good question
20:18:04 <saggi> This will be a bank plugin.
20:18:37 <saggi> So the protection plugin takes a protectable and puts it in the bank.
20:18:45 <saggi> We don't enforce all the data being in the bank
20:18:57 <saggi> but it must put information on where to find the data to the bank
20:19:07 <saggi> so when we restore we give that information to the plugin
20:19:10 <dhellmann> ok, I understand protectables and banks, but I don't understand protections then
20:19:18 <ttx> Personally I feel like it's a weird mix: very cloudy stuff (an advanced service driven by an API) which would likely be used on very non-cloud-native apps. But then we are no longer judging usefulness.
20:19:26 <annegentle> saggi: I'm worried this is a really different way to use the term plugin.
20:19:45 <annegentle> ttx: I had that concern as well, that if you use this, you create pet apps even.
20:19:58 <annegentle> ttx: rather than using cloudy architectures
20:20:00 <sdague> annegentle: pet apps are fine
20:20:05 <saggi> annegentle: pet apps?
20:20:10 <russellb> looks like there's just a swift bank plugin today?
20:20:17 <saggi> yes
20:20:26 <annegentle> saggi: using the "pets vs cattle" analogy, does this mean you create apps that aren't cloudy?
20:20:28 <dims> saggi : what changes if any are needed in other projects for smaug to work (or work better).
20:20:28 <sdague> the value of openstack is that it spans the whole gambit
20:20:36 <annegentle> saggi: and therefore pet apps
20:20:53 <russellb> protectable: a glance image.  protection plugin: glance (it knows how to back up an image).  bank: a place to store stuff, like swift
20:20:56 <russellb> is that a fair summary?
20:21:09 <oshidoshi> annegentle: pet apps are a possibility, many companies still use them, that's not necessarily a bad thing
20:21:20 <russellb> trying to make a tl;dr :)
20:21:20 <ttx> annegentle: yeah, pet apps are fine. It's just the combination of using a cloud API to drive data protection on a pet app that sounds alien. But then I guess with a UI on top...
20:21:22 <jbryce> sdague: +1
20:21:37 <saggi> russellb: you are correct
20:21:42 <russellb> saggi: ok thanks
20:21:43 <flaper87> russellb: I believe it is. I asked earlier today in smaug's channel whether it'd make sense for smaug to use glance_Store to be able to talk to more stores
20:21:46 <flaper87> saggi: ^
20:21:47 <oshidoshi> russellb: +
20:21:50 <dhellmann> russellb : thanks, that's clear
20:21:57 <flaper87> The answer was yes, and it can be implemented as a bank
20:22:17 <dims> russellb : thanks for that. makes it clearer
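russellb's summary (protectable = the "what", protection plugin = the "how", bank = the "where") maps naturally onto three small interfaces. This sketch is inferred from the discussion, not taken from Smaug's actual code; all class and method names are hypothetical:

```python
# Sketch of Smaug's three pluggable pieces as described in the meeting.
# Names and signatures are illustrative, not Smaug's real API.
import abc

class Protectable(abc.ABC):
    """A resource type worth protecting, plus its relationships."""
    @abc.abstractmethod
    def list_dependencies(self, resource_id):
        """e.g. a VM protectable reports the volumes it needs."""

class ProtectionPlugin(abc.ABC):
    """Knows how to back up one kind of protectable."""
    @abc.abstractmethod
    def protect(self, resource_id, bank):
        """Store the data, or a pointer to where it lives, in the bank."""

class Bank(abc.ABC):
    """A place to store backup data/restore metadata (e.g. swift)."""
    @abc.abstractmethod
    def put(self, key, value):
        """Store a value under key."""
    @abc.abstractmethod
    def get(self, key):
        """Fetch a previously stored value."""
```

This also matches saggi's note that not all data must live in the bank, as long as the bank holds enough information to find it at restore time.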
20:22:30 <dhellmann> sdague : s/gambit/gamut/
20:22:41 <sdague> dhellmann: sure, that too :)
20:22:49 <russellb> ha
20:22:57 <dims> haha
20:22:58 <saggi> As far as pets vs cattle. Backing up and restoring glance images and network topology is important regardless.
20:23:00 <dhellmann> although some may say openstack is a gambit of sorts
20:23:35 <saggi> Also, since we don't limit the entities you want to protect you can also back up your heat templates.
20:23:40 <sdague> right, I think we decided a while ago that we were not going to focus on "perceived usefulness" and instead let projects play out and find their community or not
20:23:47 <ttx> Still short of a few votes. Any other questions you'd like to ask saggi ?
20:23:50 <dims> sdague ++
20:23:54 <annegentle> sdague: yeah, still was curious
20:23:58 <johnthetubaguy> sdague +1
20:24:05 <dhellmann> sdague : ++
20:24:08 <sdague> this doesn't really overlap anything other projects are doing, definitely seems like some users want it
20:24:11 * dims votes
20:24:13 <dhellmann> saggi : thanks for clarifying the technical stuff
20:24:15 <amrith> do I understand correctly that this backs up not just the data but the metadata in openstack as well?
20:24:17 <annegentle> saggi: again, is this service intended for admins or app devs?
20:24:23 <russellb> i think the dependency graph approach to backup of different resources is interesting
20:24:29 <saggi> admins
20:24:32 <annegentle> saggi: or maybe I missed it, I'm on a terrible IRC client :)
20:24:33 <amrith> i.e. would it be used to backup an openstack deployment?
20:24:33 <russellb> happy to see it explored
20:24:37 <saggi> but also tenant admins
20:24:46 <amrith> or to backup, say a database running in the openstack cloud?
20:25:08 * dims sees amrith worry/think about trove's backup capabilities
20:25:09 <saggi> amrith: The deployment, it might include backing up the DB in an optimized way.
20:25:11 <amrith> am wondering how this is different from, for example, snapshots of a cinder volume directly.
20:25:17 <Rockyg> saggi, ++
20:25:17 <saggi> This is where cooperation with freezer comes into play
20:25:21 <thingee> ttx: o/
20:25:34 <russellb> amrith: because it captures more than cinder volumes, and the relationships between them
20:25:36 <ttx> We now have a majority
20:25:46 <dhellmann> amrith : it includes the metadata, and relationships between objects, so you can restore all of that rather than just bits on a volume
20:26:03 <russellb> hopefully not duplicating the actual work of volume backup necessarily
20:26:10 <amrith> dhellmann, I'm trying to wrap my head around a practical use case
20:26:13 <russellb> anyway, that's a detail :)
20:26:15 <saggi> Classically you back up storage and make scripts. We want to eliminate the need for scripts.
20:26:19 <amrith> and finding it hard to relate smaug to a database
20:26:21 <ttx> keeping in mind it's pretty young at this stage
20:26:24 <amrith> managed by an admin or trove.
20:26:33 <amrith> so I'm wondering whether maybe databases are not the right target.
20:26:43 <amrith> for example, in a database there's config information and data
20:26:55 <amrith> and I'm wondering how one would express that to smaug
20:27:13 <amrith> and how smaug would, for example, know to quiesce a database and take a transaction consistent backup
20:27:27 <amrith> in a way that would be better than whatever the database vendor provides
20:27:27 <dhellmann> I think that's getting deeper into the tech than we need to.
20:27:31 <jroll> amrith: sounds like it's more, back up your flavors and such
20:27:32 <flaper87> tbh, I think it's fine to explore all those questions after the meeting
20:27:45 <dims> ++ flaper87
20:27:45 <saggi> amrith: We do it because the DB protectable will tell us it depends on the VM
20:27:49 <dhellmann> amrith : as a cloud user, I do not have access to my cloud provider's database backups
20:27:54 <saggi> this is where the relationships come into play
20:27:56 <amrith> will take it offline
20:28:05 <oshidoshi> amrith: you are welcome to contact us on our irc, we'll be happy to discuss
20:28:06 <dims> saggi : thanks!
20:28:07 <ttx> amrith: thx!
20:28:16 <amrith> saggi, flaper87 dims dhellmann ... wondering how trove can use smaug. thanks!
20:28:20 <ttx> OK, approving now unless someone screams
20:28:55 <annegentle> ttx: got my vote in
20:29:03 <ttx> approved
20:29:05 <saggi> We have docs about how we traverse the resource tree and build the task graph to make dependent tasks between resources happen.
20:29:07 <ttx> saggi: welcome
20:29:09 <Rockyg> Congratz!
20:29:11 <saggi> hooray!
20:29:16 <saggi> Thanks everyone
20:29:19 <annegentle> thanks saggi !
20:29:28 <oshidoshi> thanks guys
20:29:34 <oshidoshi> and gals
20:29:38 <amrith> thx saggi will catch you later
20:29:38 <ttx> #topic Updates projects.yaml to indicate type:service only if a REST API is provided
20:29:44 <ttx> #link https://review.openstack.org/317094
20:29:55 <ttx> So... I raised this one since I'm not sure we want to overload the governance projects.yaml in this way
20:30:08 <ttx> On this one Anne suggests we limit type:service to things which provide a REST service endpoint, so that it could be reused to build API doc links
20:30:18 <ttx> That results in removing it from Horizon, which I could agree with
20:30:22 <ttx> But I have three objections
20:30:30 <annegentle> ttx: I originally wanted a handy way to scan for "does this provide an API?"
20:30:33 <ttx> 1/ it implies that type:service is redefined, since Horizon fills the "provides a user-facing long-running service, usually with a REST API" requirement
20:30:44 <annegentle> but fine with abandoning and going the "discover REST API docs" route
20:30:54 <ttx> 2/ it uses a deliverable type tag to specify something which is more of a technical property of a specific component (we have "deliverables" which provide multiple REST API endpoints)
20:31:10 <ttx> 3/ it's a bit of a slippery slope to add extra type of data to projects.yaml, especially data which is not directly consumed by humans. It's not a service catalog imho
20:31:21 <annegentle> ttx: agreed on all 3
20:31:29 <ttx> annegentle: ok cool :)
20:31:35 <annegentle> :)
20:31:47 <fungi> not to be pedantic, but whether a long-running service provides an api (rather than just ui) and what protocol is used for that api seem like separate facets too
20:32:01 <annegentle> if anyone has ideas for how to discover what API docs are where for each service please tell me :)
20:32:13 <mtreinish> ttx: so add a provides:rest_api tag?
20:32:14 <ttx> fungi: hence the "usually"
20:32:14 * dims nods to fungi's comment
20:32:30 <annegentle> mtreinish: I think I'll do a docs: -api: http://blah url
20:32:30 <ttx> mtreinish: I don't think that would be useful
20:32:39 <annegentle> mtreinish: per deliverable
20:32:39 <fungi> mtreinish's suggestion is what i was thinking of as well
20:33:00 <ttx> my suggestion here would be to either extend the YAML grammar to make room for a list of APIs provided by the project team / deliverable / repository
20:33:02 <thingee> +1
20:33:03 <dhellmann> yeah, if the point of this is to link to API docs, let's just do the simple thing and add the links.
20:33:03 <annegentle> would that idea work - per deliverable, does it have an API, and where are the docs?
20:33:09 <ttx> or move that to another repository / YAML file
20:33:26 <anteaya> dhellmann: ++
20:33:43 <ttx> annegentle: the trick being some of those deliverables may propose multiple APIs (not sure)
20:33:51 <annegentle> dhellmann: one goal is to figure out which services we need to provide nav to on an API docs site
20:33:51 <fungi> i agree that this isn't something which necessarily needs describing and tracking by the tc (hence a governance tag)
20:33:57 <mtreinish> annegentle: I think having a link per deliverable would work fine
20:34:00 <dims> +1 to another repository / yaml file (maintained by doc team?)
20:34:00 <annegentle> ttx: eesh. well ok
20:34:15 <annegentle> dims: ugh. hm
20:34:26 <mtreinish> ttx: if there are multiple apis they can have a single doc still
20:34:26 <sdague> dims: if we go down that route, I think realistically we already have a different solution
20:34:36 <annegentle> dims: I mean docs already have to figure this out from non-truth-sources. I see projects.yaml as a source of truth.
20:34:39 <dhellmann> annegentle : sure. so maybe a deliverable tag here, and then when it comes to making sure those docs are linked elsewhere we can figure out the details of what "has an api" means for that deliverable?
20:34:50 <sdague> because we have a repo for the service types list, which went on hold when we actually hit this hard question
20:34:55 <annegentle> dhellmann: I like per-deliverable
20:34:57 <sdague> that api doc links weren't obvious
20:34:57 * dims thinking
20:34:59 <annegentle> sdague: yeah that too
20:35:23 <dims> we need a tag and a url to where the api docs are?
20:35:28 <ttx> ok, so extend grammar to add collection of API doc links
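One way ttx's suggestion could look: each deliverable grows an optional list of API doc links, which a docs site generator then collects. No such grammar existed at the time; the key names below are purely illustrative, shown as parsed data since the real file is YAML:

```python
# Hypothetical extension of the projects.yaml grammar: each deliverable
# may carry an "api-docs" list of URLs (the key name is invented here).
# The input dict stands in for the parsed YAML file.

def collect_api_docs(projects):
    """Yield (team, deliverable, url) for every declared API doc link."""
    for team, team_data in projects.items():
        for name, deliverable in team_data.get("deliverables", {}).items():
            for url in deliverable.get("api-docs", []):
                yield team, name, url
```

A per-deliverable list handles ttx's concern that one deliverable may expose multiple APIs, while deliverables with no REST API (e.g. horizon) simply omit the key.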
20:35:31 <sdague> annegentle: if this is mostly just for api site nav, I think we probably should revive the service types authority and do that there
20:35:43 <ttx> dims: doesn't have to be a tag, especially if it needs to have a value
20:35:47 <mugsie> sdague: ++
20:35:49 <dhellmann> do we need to keep a tag, or do we just need to review what we have in these repos one time so we can manage the links elsewhere?
20:36:00 <johnthetubaguy> sdague: oh, is that the stuff related to the service catalog?
20:36:02 <ttx> sdague: yeah, that was what I was hinting at
20:36:17 <annegentle> sdague: it's for nav, for outreach to teams... discoverability (how does anyone figure this out?)
20:36:31 <sdague> johnthetubaguy: yeh, it's been on a hiatus because api-ref was related and more important
20:36:45 <dhellmann> so maybe the tag is "has contributed their info to the service catalog working group"?
20:36:48 <dims> dhellmann : the "team" concept in releases/ repo ...
20:36:59 <ttx> dhellmann: in all cases I'd say it would not end up being a tag. Could be some new key on the YAML grammar
20:37:07 <johnthetubaguy> sdague: yeah, I see what you mean, what we need is a list of OpenStack APIs and how to find them
20:37:08 <ttx> since we need it to have a value
20:37:15 <annegentle> I'm definitely abandoning the service:type patch. Would like ideas for docs including APIs
20:37:27 <dhellmann> ttx: it sounded like the value is going to be kept in that other repo, though?
20:37:49 <ttx> annegentle: maybe we can come up with a solution by discussing that offline between you, sdague and me ?
20:38:05 <annegentle> dhellmann: hopefully not, ideally it'll be expanded and agreed upon after https://review.openstack.org/#/c/316396/
20:38:08 <sdague> yeh, lets have an offline chat just to figure out if there is another approach here
20:38:09 <annegentle> ttx: yeah that works.
20:38:10 * thingee cuts out for next flight back home
20:38:17 <annegentle> ttx: it's not urgent this week for sure :)
20:38:21 <amrith> safe travels thingee
20:39:00 <ttx> annegentle: maybe over emails so that we can think about our responses more :)
20:39:08 <annegentle> ttx: I like email. a lot. too much.
20:39:09 <anteaya> annegentle: so the problem is in creating the motivation for projects to come to you with links?
20:39:20 <ttx> annegentle: could be a -dev thread
20:39:37 <annegentle> anteaya: problem space is varied :) discoverability, readability, navigation, and source-of-truth
20:39:41 <annegentle> ttx: yeah, ok
20:39:46 <anteaya> annegentle: okay thanks
20:39:49 <ttx> ok, we have several topics to discuss in open discussion, so I'll move on
20:40:00 <dims> annegentle : not a small task for sure. thanks!
20:40:03 <ttx> annegentle: starting with describing all the goals sounds like a good start
20:40:05 <annegentle> heh
20:40:16 <ttx> #topic Open discussion
20:40:16 <annegentle> I'm a conflater. sigh.
20:40:18 <dhellmann> ttx: ++
20:40:24 <ttx> morgan wanted to discuss data files defined in setup.cfg and sdague the API rate limiting
20:40:34 <ttx> whoever speaks first wins
20:40:38 <notmorgan> o/
20:40:42 <notmorgan> omg
20:40:45 <flaper87> lol
20:40:48 <ttx> notmorgan wins by arguably cheating
20:40:52 <notmorgan> actually if sdague wants to go, i'll defer
20:40:52 <flaper87> notmorgan: now you have to speak
20:40:53 * dhellmann pictures confetti raining down on notmorgan
20:40:54 <notmorgan> lol
20:40:59 <sdague> ok so my thing - http://lists.openstack.org/pipermail/openstack-operators/2016-June/010692.html
20:41:00 <flaper87> LOL
20:41:06 * notmorgan lets sdague talk
20:41:11 <sdague> we used to have this toy rate limiter in nova
20:41:16 <ttx> let's do sdague first to have one discussion at a time
20:41:18 <notmorgan> mine can wait until next week + have a resolution to governance
20:41:23 <notmorgan> if there is no time after
20:41:25 <sdague> which is now gone
20:41:33 <notmorgan> sdague: right.
20:41:39 <sdague> because it was a toy, and really bad. It was disabled in havana by default
20:42:01 <sdague> however, rate limiting is kind of a fundamentally important unit to API services with unknown users
20:42:33 <sdague> and it seems like we should try to figure out how to get our community on to the same, or a set of pages to collaborate there
20:42:37 <notmorgan> sdague: and i've seen a number of bugs, some originally security then made public, some just public, around rate limiting.
20:42:41 <dhellmann> so are you envisioning building our own version of something like https://pypi.python.org/pypi/turnstile ?
20:42:49 <fungi> case in point, the vmt fields numerous reports of "denial of service" vulnerabilities which boil down to no way to limit request volume
20:42:50 <ttx> sdague: the historical answer by the VMT was that this needs to be solved outside of openstack, so we avoided to consider rate-based attacks as vulnerabilities
20:42:50 <notmorgan> this has been a real thorn for a number of reasons.
20:42:53 <dhellmann> (which is old and apparently unmaintained)
20:42:54 <mtreinish> sdague: just thinking out loud, but can't you just do it in apache or whatever you use as a webserver?
20:42:54 <notmorgan> fungi: ++
20:42:58 <dims> sdague : mod_ratelimit ?
20:43:03 <notmorgan> mtreinish: we can.
20:43:12 <mtreinish> and you're on your own if you decide to use eventlet as the server
20:43:24 <dims> mtreinish : yep
20:43:25 <notmorgan> mtreinish: but regardless a clear message should be sent on how this is expected to work
20:43:25 <sdague> mtreinish: typically rate limiting for API services isn't just connects
20:43:38 <dhellmann> yeah, doesn't it have to be user- or project-aware?
20:43:40 <notmorgan> and this should be consistent across openstack - for pure connections, mod_ratelimit is good
20:43:45 <sdague> at least it wasn't in the toy implementation, or turnstile (which was Vek)
20:43:55 <flaper87> I've heard different opinions on this but I believe the general consensus among those opinions is that we shouldn't do it ourselves but instead let balancers/http servers do it.
20:44:00 <notmorgan> but for other things we should have a clear direction: this is what is looks like in openstack
20:44:04 <flaper87> This might be an issue for some projects, of course
20:44:04 <amrith> sdague, as I said in email, one thought I had a couple of months ago was to have the delimiter project cover this. Looking at the history, Jay didn't like that idea.
20:44:08 <fungi> yeah, we've (vmt) traditionally punted rate limiter shortcomings to needing documentation/pointers to separate solutions
20:44:08 <sdague> because not every METHOD /PATH are the same cost
20:44:26 <notmorgan> because this is something that should be absolutely consistent in style across openstack (even if some things are more heavily limited than others)
20:44:54 <notmorgan> sdague: so the real issue here is going to be IPC and housekeeping, imo
20:45:04 <notmorgan> anyone can implement (toy or otherwise a rate limiter)
20:45:14 <sdague> dhellmann: it would only need to be project aware in having regex paths + METHODS ... which is basically what turnstile does actually
20:45:24 <notmorgan> though i think the richness of nginx/apache/etc to specify per-path matching is going to be the right way
20:45:24 <dhellmann> ok
20:45:30 <notmorgan> per-path-per-method
20:45:31 <sdague> it was more or less the memcache cluster version of the toy limiter in nova
20:45:37 <notmorgan> less so about "user/project" awareness.
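The kind of limiter being described (rules keyed by METHOD plus a path regex, as in the old nova toy and turnstile) can be sketched as one token bucket per rule. This is an illustration of the idea, not any of those implementations; as notmorgan notes, a real deployment would need shared state (e.g. memcache) across API workers:

```python
# Sketch of a METHOD + path-regex rate limiter in the style discussed
# above. One token bucket per rule; state here is in-process only,
# whereas turnstile shared it via memcache across workers.
import re
import time

class RateLimiter:
    def __init__(self, rules, clock=time.monotonic):
        # rules: iterable of (method, path_regex, max_per_second)
        self._clock = clock
        self._rules = []
        for method, pattern, rate in rules:
            self._rules.append({
                "method": method,
                "re": re.compile(pattern),
                "rate": float(rate),
                "tokens": float(rate),  # bucket starts full
                "last": clock(),
            })

    def allow(self, method, path):
        now = self._clock()
        for rule in self._rules:
            if rule["method"] != method or not rule["re"].match(path):
                continue
            # Refill for elapsed time, capped at one second's worth.
            elapsed = now - rule["last"]
            rule["last"] = now
            rule["tokens"] = min(rule["rate"],
                                 rule["tokens"] + elapsed * rule["rate"])
            if rule["tokens"] < 1.0:
                return False  # an API server would answer 429 here
            rule["tokens"] -= 1.0
        return True  # requests matching no rule are not limited
```

Keying rules on method + path rather than just connections is sdague's point that not every METHOD /PATH costs the same, and it stays independent of user/project awareness as notmorgan suggests.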
20:45:49 <sdague> so... I guess here is the set of questions:
20:45:53 <notmorgan> since apache, nginx, haproxy all know how to track the conections
20:45:54 <dhellmann> sdague : so the question is how to get people to work on it?
20:46:04 <sdague> 1) is it important that we have a consistent thing here?
20:46:16 <sdague> 2) is it acceptable to be documentation
20:46:25 <sdague> 3) is anyone interested in working in this space?
20:46:33 <notmorgan> 1) I say absolutely so.
20:46:38 <dhellmann> 1) yes 2) maybe 3) no myself
20:46:43 <notmorgan> 2) probably.
20:46:47 <flaper87> 1) yes 2) sure 3) not me
20:46:49 <notmorgan> 3) if i have time, I'll contribute
20:46:58 <johnthetubaguy> I think it would be good to be consistent about how the API users sees they are rate limited, and that might be documentation(?)
20:46:58 <notmorgan> but i can't write / doc / code it all on my own.
20:47:22 <dhellmann> I could also see it being helpful to build something that's aware of our API and user definition, though it seems like it wouldn't be too hard to make it generic.
20:47:39 <sdague> dhellmann: yeh, honestly, I think this could absolutely be pretty generic
20:47:39 <dhellmann> because if we build it, then we can include it in our interop testing
20:47:40 <dims> 1) yes 2) yes 3) no
20:47:41 <mtreinish> 1. yes, 2. I think so, 3. someone is always willing to work on something
20:47:53 <sdague> mtreinish: I would argue with your answer on 3
20:48:02 <dhellmann> mtreinish : I know a lot of counter examples to #3
20:48:06 <sdague> dhellmann: ++
20:48:18 <annegentle> sdague: 1) I don't think it's as important because large cloud providers already solved it in ways that make sense to their ops teams and with the resources they have.
20:48:30 <dhellmann> sdague : is this the first example of the TC saying "we want X to exist, someone please make that"?
20:48:37 <sdague> annegentle: so every new cloud has to solve it for themselves?
20:48:39 <amrith> annegentle makes an interesting point.
20:48:44 <annegentle> 2) documentation can be generic as http://developer.openstack.org/api-guide/compute/limits.html is already
20:49:03 <mtreinish> heh, fair enough. I didn't mean to imply we ask and they show up. I meant more people pick weird things to work on and find interesting
20:49:08 <jroll> someone has done this before, but it's in java: http://www.openrepose.org/
20:49:12 <annegentle> 3) there are many unsolved problems in OpenStack I don't think we would prioritize this one over others?
20:49:28 <dims> annegentle : Amen
20:49:32 <notmorgan> sdague: we have documentation being proposed in one of the projects that covers a lot of this
20:49:33 <dhellmann> mtreinish : that I agree with.
20:49:33 <edleafe> annegentle: especially when solutions already exist
20:49:36 <notmorgan> i think keystone maybe?
20:49:40 <notmorgan> we can probably adapt it
20:49:43 <notmorgan> more globally
20:49:43 <johnthetubaguy> so I don't think we should create a rate limiter, but I think we should help our users with rate limiting
20:49:48 <annegentle> sdague: they do now, and docs would be the biggest help since there are myriad considerations
20:49:53 <ttx> I'm with johnthetubaguy
20:50:01 <sdague> johnthetubaguy: yeh, that is probably the best path forward
20:50:04 <annegentle> johnthetubaguy: yeah, that's my sentiment as well
20:50:21 <ttx> Produce doc on how to solve rate limiting consistently
20:50:24 <flaper87> And we should probably start by asking the OPs team how they do it
20:50:30 <johnthetubaguy> that might be more effort than building one, but I think our users will be happier afterwards!
20:50:35 <annegentle> flaper87: yep that thread is started
20:50:37 <sdague> so... the seeds we have is keystone is writing up approaches based on apache, which should be applicable across the board, we start there?
20:50:42 <sdague> flaper87: see the link I posted
20:50:44 <ttx> It feels like something that could be driven from the ops side
20:50:55 * flaper87 scrolls back
20:50:58 <dims> sdague : sounds like a good plan
20:50:59 <johnthetubaguy> sdague: I like the apache base idea
20:51:01 <sdague> http://lists.openstack.org/pipermail/openstack-operators/2016-June/010692.html
20:51:07 <flaper87> my bad, missed it
20:51:09 <flaper87> thanks
20:51:22 <ttx> sdague: yes, that's the right start
20:51:26 <fungi> i think some members of the security project team may also be useful resources for collaboration on protective rate limiting for openstack rest apis
20:51:34 <sdague> notmorgan: could you get into that thread and speak up about docs being built on the keystone side?
20:51:34 <dhellmann> sdague : ++
20:51:35 <dhellmann> fungi : ++
20:51:47 <fungi> i know there's been some exploration in the space from their end
20:51:50 <ttx> fungi: yes they mentioned several options over time in OSSNs
20:51:54 <dims> fungi : +1
20:52:11 <sdague> ok, so I think we'll be agreed that the ops thread should probably be the right place to keep this conversation going
20:52:20 <ttx> https://wiki.openstack.org/wiki/OSSN/OSSN-0008
20:52:26 <sdague> and that as the TC we feel it's important there is some standard story here for folks
20:52:28 <flaper87> sdague: yes
20:52:43 <flaper87> ++
20:52:53 <sdague> and we'll see if we can convince the keystone folks to solve it in docs for the rest of us? :)
20:53:00 <notmorgan> sdague: i'm trying to find it
20:53:03 <notmorgan> it's actually a review
20:53:07 <flaper87> yup, and we should collect the output of that thread and document it somewhere
20:53:10 <dims> :)
20:53:11 <notmorgan> but it's hiding somewhere
20:53:12 <flaper87> and notmorgan will do everything here
20:53:18 <sdague> \o/
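[Editor's note: the middleware-level rate limiting discussed above can be sketched as a minimal token bucket. All names here are hypothetical and illustrative; this is not any project's actual implementation.]

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter sketch (hypothetical, for
    illustration of the approach discussed in the thread)."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)          # tokens refilled per second
        self.capacity = float(capacity)  # maximum burst size
        self.tokens = float(capacity)    # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

In a deployment this logic would typically live in front of the API (e.g. in Apache via mod_qos-style directives, per the apache-based idea above) rather than in each service.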
20:53:28 <ttx> notmorgan: you have 5 minutes for your topic if you think it's sufficient
20:53:43 <flaper87> ttx: he's documenting the rate limit stuff... don't interrupt
20:53:48 <notmorgan> sure
20:53:53 <notmorgan> ttx: it's easy topic
20:53:58 <ttx> flaper87: I counted 2 minutes for that
20:54:00 <flaper87> last famous words
20:54:05 <flaper87> ttx: oh, ok
20:54:07 <flaper87> :D
20:54:28 <notmorgan> #link https://review.openstack.org/#/c/326152/
20:54:54 <notmorgan> so we've seen changes go in to install config files with setup
20:54:58 <notmorgan> also
20:55:00 <notmorgan> #link http://lists.openstack.org/pipermail/openstack-dev/2016-June/097123.html
20:55:30 <notmorgan> basically, we, as the TC (imo) need to step in here and say explicitly "these are not data files we support" or "we should support, and do this with your data files for config"
20:55:32 <notmorgan> simply
20:55:43 <notmorgan> i am in the camp personally of "don't install config with setup"
20:55:50 <notmorgan> we can have a tool to install it for you
20:55:54 <notmorgan> for venv, etc
20:56:01 <notmorgan> but it shouldn't be "setup", pbr, etc
20:56:10 * flaper87 is in that camp too
20:56:13 <notmorgan> specifically lifeless and mordred's responses to the thread
20:56:16 <ttx> these are not the data files you're looking for
20:56:30 <notmorgan> i'll propose a governance guideline to make sure we, as openstack, say "this is in fact what data is"
20:56:34 <dhellmann> yeah, the fact that it doesn't work consistently and correctly means we shouldn't do it, at least for now
20:56:35 <notmorgan> when you use data-files.
20:56:41 <notmorgan> we can change that later
20:56:46 <mtreinish> notmorgan: what about something like cli tooling that depends on data files?
20:56:47 <notmorgan> but i want this to be explicitly consistent
20:56:59 <notmorgan> mtreinish: "data files" are not "config files"
20:57:02 <notmorgan> we can have data files.
20:57:02 <dhellmann> we have the config generator for creating config files
20:57:04 * jroll is also in the notmorgan camp fwiw
20:57:22 <notmorgan> but config files themselves should not be installed with pip/setup in /etc, /usr/etc, /usr/share... etc
20:57:29 <notmorgan> that isn't our call and it ends up inconsistent.
20:57:32 <sdague> notmorgan: so, curiously, we've got a big giant upgrade problem with privsep because of this stance
20:57:34 <notmorgan> depending on wheel, etc.
20:57:35 <mtreinish> dhellmann: well except the oslo config generator depends on a config file :)
20:57:40 <sdague> which blocked os-brick 1.4
20:57:57 <notmorgan> sdague: i think we should have a clear tool for this. heck, i'll help write one
20:57:58 <sdague> http://lists.openstack.org/pipermail/openstack-dev/2016-June/097293.html
20:58:06 <dhellmann> mtreinish : that config file can actually be delivered as a data file in the app, because it's data and not meant to be edited by the user
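[Editor's note: dhellmann's distinction — non-editable data shipped inside the Python package, rather than config installed into /etc by setup tooling — can be sketched like this. Package and file names are hypothetical.]

```python
# Read a data file shipped inside a Python package itself (e.g. a config
# generator's input file), instead of expecting it under /etc.
import importlib.resources  # files() API available since Python 3.9


def read_packaged_data(package: str, filename: str) -> str:
    # files() returns a Traversable rooted at the package directory.
    return importlib.resources.files(package).joinpath(filename).read_text()
```

Because the file travels with the package, its location is consistent across wheel, sdist, virtualenv, and system installs, unlike setuptools `data_files`.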
20:59:07 <ttx> should that be a cross-project spec ?
20:59:10 <sdague> so while I understand the concerns about python managing etc files, because it does so terribly
20:59:18 <ttx> Feels more appropriate than a TC resolution
20:59:23 <notmorgan> sdague: uhm. that is a case where frankly upgrade can't be "pip install"
20:59:24 <fungi> i strongly agree that non-configuration should not be expected in /etc
20:59:32 <flaper87> ttx: ++
20:59:42 <fungi> with my long-time sysadmin hat on
20:59:47 <ttx> bottom-up rather than top-down
20:59:49 <notmorgan> fungi: what about configuration being installed from pip?
20:59:50 <sdague> notmorgan: so ... we've architected to break CD for everyone, which is what you are saying?
21:00:05 <dhellmann> fungi : I think the point here is rather that we shouldn't try to ship config files to be installed with python packages
21:00:09 <notmorgan> sdague: basically we need to work another way to do that
21:00:09 <fungi> notmorgan: pip doesn't know how, so no problem there?
21:00:14 <notmorgan> fungi: ++
21:00:15 <flaper87> time check
21:00:22 <fungi> install sample configurations as data
21:00:27 <ttx> feels like we'll need to continue this one on the thread
21:00:32 <notmorgan> sdague: please respond to the thread i linked, and we'll need to continue
21:00:38 <notmorgan> i didn't expect this to be resolved this week
21:00:43 <notmorgan> i just wanted to flag it for attention :)
21:00:44 <fungi> distros already commonly do this when there is no sane out-of-the-box default config
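[Editor's note: for context, the pbr `data_files` mechanism being debated looks like this in a project's setup.cfg. Project name and paths are hypothetical; where these files actually land depends on the install method (wheel vs. sdist, virtualenv vs. system install), which is the inconsistency objected to above.]

```ini
# setup.cfg (pbr) -- hypothetical project, shown only to illustrate
# the data_files mechanism under discussion
[files]
data_files =
    etc/myservice = etc/myservice/*
```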
21:00:45 <ttx> alrighty
21:00:55 <dims> notmorgan : ack. need to think more about it
21:01:04 <notmorgan> thanks!
21:01:05 <ttx> let's continue this one on the thread
21:01:09 <amrith> thanks ttx
21:01:14 <ttx> Thanks everyone
21:01:16 <ttx> #endmeeting