20:00:09 <sbalukoff> #startmeeting Octavia
20:00:10 <openstack> Meeting started Wed Sep  3 20:00:09 2014 UTC and is due to finish in 60 minutes.  The chair is sbalukoff. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:13 <openstack> The meeting name has been set to 'octavia'
20:00:35 <dougwig> sbalukoff: a stalk?
20:00:41 <sballe> o/
20:00:43 <sbalukoff> Ok, folks!
20:00:46 <blogan> hi
20:00:50 <blogan> \o/
20:00:51 <sbalukoff> Here's the agenda for today:
20:00:54 <sbalukoff> #link https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Meeting_2014-09-03
20:01:09 <sbalukoff> Let's get going
20:01:12 <sbalukoff> #topic Briefly discuss Octavia/Non-arbitrary Decisions wiki page
20:01:25 <sbalukoff> I just wanted to briefly bring people's attention to this.
20:01:36 <sbalukoff> #link https://wiki.openstack.org/wiki/Octavia/Non-arbitrary_Decisions
20:01:56 <sbalukoff> We'll be using that for documenting decisions that took longer than a couple minutes to resolve (like the damn name thing).
20:02:06 <sbalukoff> Please let me know if you've got any questions about that.
20:02:24 <sbalukoff> If y'all have concerns about that, we can discuss at the next meeting.
20:02:36 <blogan> lgtm
20:02:44 <sbalukoff> (I am not expecting y'all to have concerns, as we discussed this in the last couple meetings.)
20:02:57 <sballe> sounds good to me. i am in a meeting and it is running late
20:03:00 <sbalukoff> Ok!
20:03:03 <sbalukoff> #topic Briefly discuss v0.5 component design under review, Brandon's initial database migrations, push for consensus on that.
20:03:07 <jorgem> hello!
20:03:15 <TrevorV> o/
20:03:24 <sbalukoff> So, basically, this is about just making sure we're processing reviews in a timely manner.
20:03:35 <rm_work> o/
20:03:38 <sbalukoff> Looks like the v0.5 component design was merged earlier today.
20:03:56 <dougwig> sbalukoff: the mentioned review is blocked on mine, which i will be trying to merge today.
20:04:06 <blogan> migrations need one more patchset because of the name thing, and need to add attributes to the health monitor
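(Editor's note: for readers skimming the log, a minimal sketch of what "add attributes to the health monitor" might look like as an Alembic migration. The column names are borrowed from Neutron LBaaS health monitors and are illustrative only -- they are not taken from Brandon's actual patchset.)

    # Hypothetical migration sketch; revision identifiers are placeholders.
    from alembic import op
    import sqlalchemy as sa

    revision = '000000000000'      # placeholder
    down_revision = None           # placeholder

    def upgrade():
        # Attributes the review discussion says the health monitor still needs.
        op.add_column('health_monitor',
                      sa.Column('http_method', sa.String(16), nullable=True))
        op.add_column('health_monitor',
                      sa.Column('url_path', sa.String(255), nullable=True))
        op.add_column('health_monitor',
                      sa.Column('expected_codes', sa.String(64), nullable=True))

    def downgrade():
        op.drop_column('health_monitor', 'expected_codes')
        op.drop_column('health_monitor', 'url_path')
        op.drop_column('health_monitor', 'http_method')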
20:04:34 <sbalukoff> Other than the naming thing that we're discussing, which is clearly a blocker for a couple of these, any major concerns to discuss at this time?
20:04:59 <TrevorV> I have a topic if we have time.  Not pressing, just wondering what people's thoughts are
20:05:17 <sbalukoff> TrevorV: Er... does it have to do with the current topic?
20:05:41 <sbalukoff> (current topic is essentially discussing outstanding concerns with outstanding gerrit review stuff.)
20:05:44 <sbalukoff> #link https://review.openstack.org/#/q/stackforge/octavia+status:open,n,z
20:05:50 <TrevorV> No it does not sballe
20:05:51 <rm_work> guessing not, because he said he has a topic :P
20:06:01 <TrevorV> sbalukoff, ** mah bad
20:06:04 <sbalukoff> Oh, haha!
20:06:31 <sbalukoff> Ok, we'll try to get through the remaining topics quickly if we can so we can discuss your topic, eh.
20:06:33 <sbalukoff> Ok!
20:06:36 <sbalukoff> So, moving on!
20:06:46 <sbalukoff> #topic Get consensus on name of "thingy" doing the load balancing (VM / appliance / device / strategy / toaster / whatever)
20:06:48 <dougwig> #link https://etherpad.openstack.org/p/octavia-backend-name
20:06:55 <dougwig> go vote.
20:07:04 <sbalukoff> Yes, please go vote
20:07:14 <sbalukoff> We need a decision on this today, because it's holding up other work.
20:07:45 <dougwig> in about 1 hour and 23 minutes, i'm going to update the patchset with the winner, and then push for reviews.  we've had literally no other comments since the weekend except on naming.
20:08:13 <dougwig> the review in question:
20:08:15 <dougwig> #link https://review.openstack.org/#/c/117701/
20:08:16 <blogan> dougwig: i think it'd be smart to get the top 3 and then everyone vote on that, but only one vote
20:08:17 <sbalukoff> dougwig: That seems fair.
20:08:33 <sbalukoff> blogan: There's only one with a positive score right now, I think.
20:08:33 <rm_work> blogan +1, though that delays things again
20:08:34 <TrevorV> +1 blogan
20:08:36 <rm_work> but I think it is the fairest
20:08:43 <dougwig> blogan: eh, runoffs are good when you only have one vote amongst many.  there is no limit here.
20:09:31 <blogan> i think if there are 3 things and someone doesn't like any, but they had to choose one, the vote will be much clearer
20:09:44 <sbalukoff> One thing to consider with this name:
20:09:49 <rm_work> blogan +1 again
20:09:53 <dougwig> ok, at 3:30 we'll have a runoff.  voting will end at 5pm (this is all mountain time.)
20:10:15 <sbalukoff> This describes a nova instance dedicated to running the octavia code + haproxy which actually does the load balancing in this solution.
20:10:29 <sbalukoff> In an active-active topology, there will be groupings of these which perform the load balancing.
20:10:39 <rm_work> please set your name if you are voting so we can see who everyone is :P
20:10:41 <dougwig> and at present, we literally have no names with >0 points.
20:10:43 <sbalukoff> So, it would be good if the name we choose has a logical grouping of some kind.
20:10:53 <sbalukoff> (eg. sheep / flock ) or something.
20:11:00 <sbalukoff> Not that I'm suggesting sheep at this point.
20:11:12 <blogan> may as well call it cats
20:11:19 <rm_work> cats / herd?
20:11:23 <dougwig> i'm definitely uploading sheep.  you don't want to know what i'm naming the driver/interface.
20:11:24 <blogan> indeed
20:11:24 <sbalukoff> Haha!
20:11:28 <rm_work> although RMS might get on us because of GNU/Herd
20:11:41 <rm_work> or is that Hurd
20:11:47 <sbalukoff> That would be a good reflection of trying to get this group to agree on something. ;)
20:11:55 <rm_work> heh yes
20:12:08 <blogan> this will be the hardest problem we come across
20:12:14 <ptoohill> Battlestar-loadtallica is a logical grouping and should totally win
20:12:15 <sbalukoff> Absolutely.
20:12:44 <dougwig> ok, go vote, i think we can move on?
20:12:48 <sballe> My meeting is running even later. I'll have to read the minutes later
20:12:54 <rm_work> velociraptor / pack <-- just sayin'
20:12:56 <rm_work> kk
20:13:19 <rm_work> ah though WD uses velociraptor for a line of drives :/
20:13:20 <sbalukoff> serial killer / bloodbath
20:14:03 <dougwig> one last note: there are a TON of -1's in there.  try to find some +1's, or suggest something new.  i don't want a vote among the least hated here.
20:14:11 <rm_work> truth
20:14:52 <TrevorV> I wish we had enough logical space to explain why people are placing -1 or +1.  It always helps me make a decision when I know what people's thoughts are
20:15:03 <TrevorV> (logical space?)
20:15:15 <rm_work> doug has been adding briefly why he's -1'ing i think
20:15:27 <rm_work> or
20:15:41 <rm_work> I guess he and Jorge did
20:15:53 <TrevorV> honestly, "overloaded term" is a bad argument for a -1; since it's already overloaded, it seems to make sense to use it... IMO
20:16:08 <sbalukoff> TrevorV -1
20:16:10 <sbalukoff> :)
20:16:13 <TrevorV> At most I'd say -0.5
20:16:29 <dougwig> heh, note that i put that on "instance".  :)
20:16:51 <sbalukoff> No, I disagree: Having an overloaded term used for something with a specific purpose just irritates me.
20:17:11 <sbalukoff> I know! Let's go with "server"
20:17:15 <sbalukoff> Yeah. No.
20:17:20 <dougwig> no, no.  "object"
20:17:21 <blogan> but this is supposed to be a generic term
20:17:37 <sbalukoff> dougwig: That's making me physically ill.
20:17:43 <crc32> yea I withdraw instance. Cause we'll need a term in front of it now.
20:17:46 <sbalukoff> blogan: "generic enough"
20:17:47 <TrevorV> blogan, +1, hence me saying overloaded-term doesn't make sense.
20:17:57 <sbalukoff> The octavia load balancing thingy actually has a specific purpose.
20:18:24 <blogan> well a container has a specific purpose, and it's to hold things, but it has a generic name
20:18:27 <blogan> or does
20:18:29 <blogan> it
20:18:31 <blogan> oye
20:18:37 <sbalukoff> It runs the software which actually delivers the load balancing service to the end user.
20:18:45 <sbalukoff> Right.
20:19:00 <TrevorV> So it's a host, sbalukoff?
20:19:20 <sbalukoff> TrevorV: You're lucky you're not within throwing distance.
20:19:38 <TrevorV> I'm actually serious here, since that seems appropriate to me with that description :D
20:19:41 <rm_work> peon / grunt? :P
20:19:49 <sbalukoff> rm_work: +1
20:19:50 <crc32> Sounds like a "<InsertTermHere>Manager"
20:19:59 <sbalukoff> These things shouldn't be that intelligent.
20:20:09 <vivek-ebay> I am confused. What are we suggesting name for? octavia-backend ?
20:20:12 <rm_work> totally adding both of those
20:20:12 <ptoohill> Oh dear, executive decision time on naming issues? Let's just default to toaster or something? Aren't there other pressing issues?
20:20:13 <TrevorV> Right, so what keeps us from using an overloaded-term?  :)
20:20:16 <sbalukoff> vivek-ebay: Yes.
20:20:25 <sbalukoff> The Octavia VM in previous diagrams and component designs.
20:20:29 <dougwig> TrevorV: imagine if you're new to the project, clone it, and have nothing as a roadmap except the directory listing.  there are a few generic terms that have enough meaning to help.  host isn't one of them.
20:20:31 <rm_work> ptoohill: well unfortunately this is blocking a major CR
20:20:43 <xgerman_> sorry for being late
20:20:44 <TrevorV> dougwig, that's fair.
20:20:54 <sbalukoff> ptoohill: Yes there are.
20:20:55 <sbalukoff> So!
20:21:02 <rm_work> I'm thinking about something that is a good metaphor
20:21:06 <rm_work> peon / grunt are that
20:21:15 <rm_work> they're the actual "workers"
20:21:17 <rm_work> oh
20:21:24 <sbalukoff> Add any suggestions you want before the end of this meeting...  directly after the meeting, everyone please register your +1 / -1 / abstain on this.
20:21:26 <rm_work> is "workers" super overloaded? >_>
20:21:34 <dougwig> worker/colony/ant
20:21:45 <rm_work> i like worker/colony
20:21:46 <blogan> so as dougwig said, this vote will be until 3:30pm Mountain Time, and then the top 3 will have another vote which will close at 5pm Mountain Time
20:21:46 <dougwig> hive
20:21:47 <sbalukoff> Anyone have anything else to add to this discussion right now?
20:22:01 <sbalukoff> dougwig: bee
20:22:06 <dougwig> i think we're past time to move on.  take your ideas to the etherpad, please
20:22:10 <vivek-ebay> beehive
20:22:11 <sbalukoff> Yep.
20:22:32 <ptoohill> Goooses
20:22:39 <sbalukoff> Ok!
20:22:41 <ptoohill> Oh we're past that, sorry
20:22:42 <sbalukoff> #topic Discuss where haproxy config should be rendered (controller, driver, or Octavia VM / appliance)(Driver, of course.)
20:22:58 <sbalukoff> My thought echoes Dougwig's here:  In the driver.
20:23:04 <jorgem> i added approach
20:23:14 <sbalukoff> (And then configs get pushed out to the octavia load balancer thingy)
20:23:17 <dougwig> (sorry for editorializing the agenda, i couldn't resist.)
20:23:28 <sbalukoff> Anyone have different ideas here?
20:23:29 <blogan> isn't the driver basically in the controller?
20:23:37 <sbalukoff> blogan: Yes, ish.
20:23:39 <sbalukoff> ;)
20:23:41 <dougwig> blogan: yes, imported by controller.
20:23:45 <sbalukoff> It gets loaded by the controller.
20:23:47 <blogan> the controller will just instantiate whatever driver it needs to use
20:23:49 <xgerman_> no, we like it to be in the VM
20:24:08 <sbalukoff> xgerman_: Please explain your reasoning.
20:24:32 <jorgem> +1 blogan
20:24:38 <xgerman_> I thought we discussed that last time
20:24:38 <dougwig> i actually don't care beyond it not being in the controller core.  it's really a driver issue, but as long as the interface isn't hard-coded to haproxy, it matters not whether that occurs in the driver or VM.
20:24:40 <jorgem> It makes it easier to test as well
20:24:48 <jorgem> because we can mock stuff out
20:24:51 <rm_work> sbalukoff: +1, in the driver IMO
20:25:01 <blogan> xgerman_: any strong reasons why it should be on the VM side?
20:25:16 <xgerman_> then I can switch haproxy, etc. without changing the controller
20:25:23 <dougwig> xgerman_: especially considering that if you roll a custom VM, you could make your driver a pass-through.
20:25:26 <johnsom_> dougwig +1
20:25:29 <sbalukoff> xgerman_: So a couple reasons I don't want this in the VM:   1.  It puts more intelligence into the VM than is necessary (again, centralize intelligence, distribute workload)
20:25:50 <sbalukoff> 2. It makes the back-end API (ie. what the driver/controller speaks to the VM) more complicated
20:26:03 <TrevorV> +1 driver
20:26:25 <sbalukoff> Also: 3. It makes it more difficult to add new minor features... because you'll have to go out and update 10,000+ VMs instead of updating a few controllers.
20:26:51 <xgerman_> well, you have to do that anyway in case of security updates
20:27:01 <sbalukoff> xgerman_: So, if you want to replace haproxy with nginx, that seems to call for a new driver in any case.
20:28:01 <sbalukoff> xgerman_: Yep, so let's not exacerbate the problem by having to do it for minor feature additions, too.  Also, not all security updates will necessarily require that.  If it's a problem that can be fixed with an update to a config (eg. "disable this kind of SSL") then that doesn't require updating all VMs.
20:28:22 <dougwig> i think this becomes a difference of, do we have an nginx and an haproxy driver, or do we have a nova VM driver, with different VMs?  one of these schemes will fit inside the other.
20:28:58 <sbalukoff> xgerman_: If you want to have an Octavia VM be able to run either haproxy or nginx, that's doable too-- and accomplishing this is still easier by writing a driver which can do both, and making a minor change to the back-end API.
20:28:59 <xgerman_> well, my vision was that the controller talks to the VMs in some octavia protocol and the VM renders it as needed
20:29:10 <sbalukoff> (Speaking from experience, as our BLBv1 product can do either haproxy or nginx)
20:29:14 <dougwig> (an aside, i will note that from minute zero in this conversation, we have used the term VM to describe what we're voting on naming.)
20:29:29 <xgerman_> so now we need to bookkeep which vm is compatible with which driver, versions need to fit, etc.
20:29:35 <sbalukoff> xgerman_: I think that's good in theory, but in practice will be more of a pain once you have a large deployment.
20:29:56 <rm_work> xgerman_: yeah that makes sense and I was thinking that too -- although then we have to define another whole interface level, which i think we can avoid by just doing it in the driver layer
20:30:02 <sbalukoff> xgerman_: All of that is solved if you have an API command to gather that info. Think of it like API versioning.
20:30:28 <rm_work> though yeah, i am concerned about the bookkeeping
20:30:32 <sbalukoff> Octavia VMs will also have a version, eh.
20:30:53 <xgerman_> yep, so we might have to replace VMs + the driver
20:31:17 <xgerman_> to solve the special case of "minor changes"
20:31:20 <sbalukoff> xgerman_: For major changes, yes.
20:31:27 <sbalukoff> But that's no different from the model you've proposed.
20:31:40 <sbalukoff> No, actually, you don't have to replace the VMs for minor changes.
20:31:52 <xgerman_> yeah, but for major changes I do
20:32:05 <sbalukoff> xgerman_: Again, which is no different than the solution you've proposed.
20:32:14 <sbalukoff> The difference here is that you have to replace the VMs for minor changes as well.
20:32:20 <sbalukoff> I don't.
20:32:26 <xgerman_> yep, so I don't create a special case
20:32:34 <sbalukoff> ...
20:32:46 <rm_work> (dougwig yeah we kind of anchored the discussion in a way by saying "vote on what to call VMs" :P)
20:33:11 <sbalukoff> xgerman_: The solution that allows for the least amount of overall pain is the best one, IMO.
20:33:22 <TrevorV> sbalukoff, +1
20:33:24 <sbalukoff> Rather than making everything equally painful
20:34:37 <sbalukoff> Anyway, do we want to vote on this here, or do y'all think that mailing list discussion is warranted?
20:34:41 <dougwig> i think that if xgerman_ isn't convinced, this needs to be punted to ML or voice.
20:34:42 <xgerman_> well, so far the advantage of your proposal is for minor upgrades; the downside is more bookkeeping
20:34:45 <dougwig> jinx
20:35:05 <sbalukoff> xgerman_: There's more than just that.
20:35:07 <blogan> if the config rendering gets done on the VM side, will the controller even need an haproxy/nginx driver? would that just be pushed to the VM?
20:35:08 <TrevorV> +1 to mailing list
20:35:15 <sbalukoff> The backend API is also far less complicated.
20:35:29 <TrevorV> blogan, I think the argument is having that all pushed to VM
20:35:31 <xgerman_> ok, mailing list
20:35:35 <sbalukoff> And again, your solution goes against the whole "centralize intelligence / decentralize workload" design philosophy.
20:35:36 <dougwig> blogan: yes, because not all appliance backends will have the luxury of implementing it all inside the VM.
20:35:58 <blogan> dougwig: ++ for your use of appliance
20:36:02 <TrevorV> dougwig, (I see what you did there)
20:36:03 <sbalukoff> haha
20:36:12 <tmc3inphilly> Unit of Compute (UoC)
20:36:30 <sbalukoff> Ok, we'll punt to mailing list.
20:36:37 <xgerman_> ok
20:36:41 <sbalukoff> xgerman_: Do you want to start that thread, or shall I?
20:37:12 <xgerman_> you can start it
20:37:16 <sbalukoff> Ok, will do.
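(Editor's note: for concreteness before the ML thread, a minimal sketch of the "render in the driver" option argued for above -- the controller-side driver owns the haproxy template, and the VM/appliance only exposes a dumb "accept config and reload" endpoint. The class, template name, and URL here are illustrative assumptions, not from any actual review.)

    import jinja2
    import requests

    class HaproxyDriver(object):
        """Renders the haproxy config on the controller side and pushes the
        finished file to the load-balancing VM/appliance over a tiny REST API."""

        def __init__(self, template_dir, vm_api_port=9443):
            self.env = jinja2.Environment(
                loader=jinja2.FileSystemLoader(template_dir))
            self.vm_api_port = vm_api_port

        def deploy(self, vm_address, loadbalancer):
            # All the "intelligence" (template logic, defaults, feature flags)
            # lives here, so a minor feature only means updating controllers...
            config = self.env.get_template('haproxy.cfg.j2').render(
                loadbalancer=loadbalancer)
            # ...while the VM just accepts a rendered config and reloads haproxy.
            url = 'https://%s:%d/config' % (vm_address, self.vm_api_port)
            requests.put(url, data=config, verify=False)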
20:37:36 <sbalukoff> Ok, next topic
20:37:40 <sbalukoff> #topic Discuss DB model around loadbalancer VIPs in relation to different front-end topologies and how best to represent these abstractly
20:38:47 <sbalukoff> So, in looking at how to make the network stuff work, I think blogan and I realized that we've not yet come up with a good way to represent the types of connections the VMs / appliances will need to the rest of the network.
20:39:03 <rm_work> that's... quite a wordy topic :P
20:39:21 <sbalukoff> We could default to Neutron terms here, which I think is actually not good because it assumes a lot about layer-2 topology.
20:39:23 <dougwig> sbalukoff: can you give an example or two?
20:39:34 <TrevorV> +1 dougwig
20:40:12 <xgerman_> +1
20:40:15 <sbalukoff> dougwig: So, right now if we're working with just Neutron as a networking layer, if we want to represent the front-end connectivity to a VIP address (ie. part of the loadbalancer object), we need to record both the vip_address and vip_port_id
20:40:18 <sbalukoff> Or something like that..
20:40:39 <sbalukoff> What the user is actually probably interested in is vip_address and subnet_id
20:40:43 <blogan> if octavia is responsible for creating the vip_port, then we need a subnet_id
20:40:49 <sbalukoff> But that's assuming layer-2 connectivity on the front-end.
20:41:04 <blogan> and if floating ips are being used, thats different
20:41:05 <sbalukoff> Things get more complicated with layer-3 (routed) connectivity
20:41:17 <dougwig> and overlapping subnets.
20:41:32 <sbalukoff> Because a layer-3 address isn't going to be associated with a port, it's going to be associated with a route.
20:42:01 <sbalukoff> dougwig: Well, ignoring the overlapping subnets problem for a bit, we still have trouble reliably representing things even if there isn't overlap.
20:42:02 <dougwig> we'll have to have some generic id fields, which the network_driver is going to have to map to neutron specifics.  the question is how many id fields, and what to name them?
20:42:35 <dougwig> or a text blob where we can put whatever json the network_driver needs?
20:42:45 <blogan> dougwig: i believe that is the extent of the problem, making sure we have the necessary fields and good names (naming things!@#@##)
20:42:53 <sbalukoff> dougwig: I'd love it if we had a way to refer to these things in "Octavia language" or something, which the driver then translates to do whatever is necessary for that type of connectivity on the network side.
20:42:55 <TrevorV> dougwig, don't forget what is required versus optional
20:43:52 <sbalukoff> So that we're using more industry standard terms and concepts, and aren't doing what I think to be "hackish" ways of handling this with Neutron specifically (eg. associating an address with a port and then not putting that port on any subnet, because that's how you handle a layer-3 route? really?)
20:43:53 <blogan> xgerman_: from HP's perspective, what information would you need to store for front end connectivity?
20:44:13 <sbalukoff> I don't want to bake that kind of hack-ish-ness into Octavia's design, if we can avoid it.
20:44:32 <blogan> sbalukoff: you can have a port without a subnet in neutron?
20:44:34 <xgerman_> we are still discussing on our end how things will shape up
20:44:38 <dougwig> maybe we inch our way there?  0.25 is vip and members on same subnet, 0.5 is vips all on one subnet, members on others, etc...  there's no reason these db models have to be perfect in the first rev; we're going to learn a lot in the first prototype.
20:45:05 <sbalukoff> blogan: I think so, but I could possibly be wrong. You might have to create the subnet, and then not attach it to a router. I forget-- it's been a few months since I looked into how to do this.
20:45:29 <blogan> dougwig: +1, I think that may end up being what we need to do because there is a lot of unknowns right now
20:45:49 <sbalukoff> dougwig: I guess I'm asking: Is anyone out there an expert who has an opinion on the right way to do this that can talk to us?
20:45:56 <blogan> sbalukoff: i'm not totally 100% sure, but I didn't think it was possible to have a port not on a subnet
20:46:00 <sbalukoff> Unfortunately, my neutron networking expert is on vacation this week. :P
20:46:32 <sbalukoff> blogan: Then I'm probably wrong, but I don't think I'm wrong about making floating IPs work a non-hackish kind of thing. ;)
20:46:34 <dougwig> sbalukoff: i think i'd still advocate getting some code on the books with a simpler topology first.
20:47:16 <blogan> could just store these things as generic for now, network_resource_id
20:47:23 <sbalukoff> dougwig: I'd really rather have this model figured out before we paint ourselves into a corner with an inadequate design baked into other components. :P
20:47:27 <TrevorV> dougwig, if I understood you correctly, you mean to just leave naming/fields tied to Neutron for now, and modify as needed later?
20:48:01 <sbalukoff> In any case, we don't need a final decision on that now, or even this or next week, IMO.
20:48:05 <dougwig> no, i mean as we find fields we need to add to support neutron, we think of generic names/concepts and add them to the migrations at that time, instead of waiting to have perfect models.
20:48:13 <blogan> sbalukoff: I don't think a major refactor before 0.5 will be a huge deal (famous last words I know)
20:48:15 <sbalukoff> dougwig: You're right that we can get some work done without having this figured out.
20:48:43 <TrevorV> dougwig, ah, gotcha, thanks
20:48:54 <sbalukoff> Mostly, I wanted to make the rest of y'all aware of the problem, so if you can pull in resources, or if you have a good idea on how to solve this in the long run that you'd like to share, I'd love to hear it.
20:49:02 <sbalukoff> (Over the next coming weeks, eh.)
20:49:35 <sbalukoff> Anyway, we've only got 10 minutes left, so I wanted to move on to the next topic.
20:49:47 <blogan> sbalukoff: in the meantime is using generic names acceptable?
20:50:00 <sbalukoff> blogan: I don't think we've got another choice. :)
20:50:00 <xgerman_> blogan +1
20:50:07 <blogan> done
20:50:14 <sbalukoff> blogan: Nobody is suggesting anything else at this time, eh.
20:50:18 <sbalukoff> Ok!
20:50:20 <dougwig> i'd put their neutron counterpart names in the models (as comments), where we differ.
20:50:33 <blogan> dougwig: good idea
20:50:41 <sbalukoff> dougwig: +1
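(Editor's note: a minimal sketch of dougwig's suggestion -- generic column names in the VIP model, with their Neutron counterparts noted as comments. Every name here is a placeholder; the real fields are still being worked out in the migration review.)

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Vip(Base):
        __tablename__ = 'vip'

        load_balancer_id = sa.Column(sa.String(36), primary_key=True)
        ip_address = sa.Column(sa.String(64))           # Neutron: the VIP's fixed (or floating) IP
        network_resource_id = sa.Column(sa.String(36))  # Neutron: subnet_id in the layer-2 case
        port_id = sa.Column(sa.String(36))              # Neutron: port_id of the VIP port, if any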
20:50:46 <sbalukoff> #topic Discuss blueprints here, look for volunteers: https://blueprints.launchpad.net/octavia/
20:50:49 <sbalukoff> Ok, folks!
20:50:56 <sbalukoff> We have blueprints! Please start claiming them!
20:51:07 <sbalukoff> (And fleshing them out, doing work on them, etc.)
20:51:24 <xgerman_> +1
20:51:27 <blogan> everyone agree on the process?
20:51:31 <sbalukoff> Also, please make sure to start updating the stand-up etherpad I created (modeled on the one Jorge did for Neutron LBaaS)
20:51:41 <blogan> giving more details in the blueprint work items/whiteboard?
20:51:51 <sbalukoff> #link https://etherpad.openstack.org/p/octavia-weekly-standup
20:52:04 <blogan> some things like interface designs are probably easiest to just show in the code, honestly
20:52:16 <xgerman_> ok, good to know
20:53:07 <johnsom_> Can we capture the networking concerns on the ML so we can pass it around to people for comment?
20:53:39 <blogan> johnsom_: i think that is a good idea
20:53:41 <sbalukoff> johnsom_: Sure.
20:54:13 <sbalukoff> #action sbalukoff to start ML thread on front-end topology representation concerns
20:54:30 <sbalukoff> Ok, Trevor! What's your topic?
20:54:46 <dougwig> sbalukoff: switch the topic to open discussion...
20:54:56 <sbalukoff> #topic Open Discussion
20:55:13 <blogan> TrevorV: you had something
20:55:51 <TrevorV> So in some of my work with the db-repository blueprint, a question came up.  Where do we do validation of request/ownership?
20:55:57 <TrevorV> For example:
20:56:18 <TrevorV> If a customer makes a request to retrieve a load balancer by an ID that doesn't belong to their tenant, where does the exception get thrown from?
20:56:26 <TrevorV> (Implementation detail, I know, but it helps)
20:56:38 <sbalukoff> TrevorV: So I see that as being a function of the API.
20:56:53 <TrevorV> So the API layer would retrieve the object and then check its tenant_id?
20:56:56 <TrevorV> Sort of situation?
20:56:56 <blogan> i think the more generic question is how much validation do we want the database layer to be responsible for versus an actual validation layer?
20:57:12 <TrevorV> +1 blogan, much more concise
20:57:18 <xgerman_> can you circumvent the validation layer?
20:57:30 <sbalukoff> blogan: Syntax, sanity checks at the DB layer, authorization at the validation layer?
20:57:35 <TrevorV> xgerman_, you shouldn't be able to unless you're accessing the operator api
20:57:43 <sbalukoff> Er...
20:57:53 <sbalukoff> Well, I suppose we could do all of that at the validation layer.
20:57:56 <rm_work> are we using RBAC using keystone middleware?
20:58:00 <blogan> xgerman_: i don't think the validation should be able to be circumvented at all, but that could be argued
20:58:06 <dougwig> i like permissions in controller, validation in ORM.  but that's me.
20:58:25 <rm_work> if so, then we do it like barbican does it -- assign rbac roles and on the function we say to enforce a specific role requirement, and the middleware handles it
20:58:30 <a2hill> rm_work +1 RBAC can handle a lot of that for us
20:58:34 <xgerman_> blogan_, I just want to make sure we don't open a security hole by designing it wrong
20:58:51 <xgerman_> a2hill, rmwork +1
20:58:57 <xgerman_> I love roles
20:59:08 <sbalukoff> a2hill, rm_work: +1
20:59:17 <sbalukoff> Why not go with a precedent, eh?
20:59:21 <blogan> that's fine, but for things that are not handled by rbac
20:59:39 <xgerman_> example?
20:59:39 <dougwig> the counter-argument is to keep this stuff simple for now, since if we're a driver of lbaas, that crap will all be done for us.
20:59:40 <blogan> such as maximum values
20:59:49 <dougwig> (i.e. a trusted entity)
20:59:54 <dougwig> ((except by sbalukoff))
21:00:01 <sbalukoff> Haha
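(Editor's note: a rough, self-contained sketch of the Barbican-style pattern rm_work and a2hill describe -- roles arrive via the keystone auth_token middleware, and each handler declares the rule it enforces, so a cross-tenant GET can fail before the repository/DB layer does any work. The decorator, rule names, and context shape are all illustrative assumptions, not a settled Octavia design.)

    import functools

    class PolicyNotAuthorized(Exception):
        pass

    # In a real deployment these rules would come from a policy.json-style file.
    RULES = {
        'loadbalancer:get': lambda ctx, target: (
            'admin' in ctx['roles'] or ctx['tenant_id'] == target['tenant_id']),
    }

    def enforce(rule):
        def wrapper(func):
            @functools.wraps(func)
            def inner(self, context, target, *args, **kwargs):
                if not RULES[rule](context, target):
                    # One possible answer to TrevorV's question: the
                    # ownership exception gets raised here, not in the DB layer.
                    raise PolicyNotAuthorized(rule)
                return func(self, context, target, *args, **kwargs)
            return inner
        return wrapper

    class LoadBalancerHandler(object):
        @enforce('loadbalancer:get')
        def get(self, context, target):
            return target   # in real code, fetched from the repository layer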
21:00:09 <sbalukoff> Ok, well, we're out of time for this meeting.
21:00:24 <sbalukoff> I didn't get a chance to do the vote on whether to keep things here or move back to webex.
21:00:32 <sbalukoff> I'll add that as an agenda item for next time.
21:00:35 <xgerman_> let's vote next time :-)
21:00:42 <sbalukoff> Thanks y'all!
21:00:45 <sbalukoff> #endmeeting