20:00:31 <sbalukoff> #startmeeting Octavia
20:00:31 <openstack> Meeting started Wed Sep 17 20:00:31 2014 UTC and is due to finish in 60 minutes.  The chair is sbalukoff. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:32 <blogan> 100 is fine for that as well
20:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:34 <openstack> The meeting name has been set to 'octavia'
20:00:34 <TrevorV_> o/
20:00:37 <dougwig> blogan: 100 does that with widescreen monitors.
20:00:41 <sbalukoff> Ok, folks!
20:00:42 <dlundquist> o/
20:00:47 <sbalukoff> Potentially short agenda for today.
20:00:48 <blogan> well what if i want 3 editors side by side?
20:00:55 <dougwig> 2 monitors?
20:00:59 <johnsom_> Hello
20:01:00 <blogan> then i could have 6!
20:01:00 <dlundquist> blogan: 4k display?
20:01:11 <sbalukoff> As usual, agenda is here:
20:01:13 <blogan> now i can have a lot of editors!
20:01:14 <sbalukoff> #link https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Agenda
20:01:14 <xgerman> o/
20:01:17 <ajmiller_> o/
20:01:23 <blogan> 10 char line limit!
20:01:27 <blogan> oh hi
20:01:28 <dougwig> alright, let's settle down for our chair here.  :)
20:01:29 <blogan> meeting started
20:01:35 <TrevorV_> #action TrevorV write up 2 weeks worth of meeting notes.
20:01:43 <TrevorV_> o_0
20:01:46 <sbalukoff> #topic Review progress on gerrit reviews and blueprints
20:02:23 <sbalukoff> I feel like we're getting good progress on gerrit reviews, but that only a handful of us are doing said reviews presently.
20:02:47 <sbalukoff> Also, I apologize: I was too distracted with other priorities to actually update any of the blueprints in launchpad this last week.
20:02:55 <sbalukoff> I will be doing so this week.
20:03:20 <davidlenwell> o/
20:03:47 <davidlenwell> I will start in on helping with the reviews also
20:03:52 <sbalukoff> Question I have for you, especially those looking to get involved: Is there something we can do to help you get started in particular?
20:04:00 <sbalukoff> Thanks, david
20:04:01 <TrevorV_> On this topic, I'd like to draw attention to this review:  https://review.openstack.org/#/c/116718/
20:04:21 <sbalukoff> (And by "we" I mean "those of us who have been working on LBaaS and Neutron LBaaS for months.")
20:04:22 <TrevorV_> It was dependent on the migrations, and since that's merged it would seem prudent to review this one next.
20:04:27 <davidlenwell> sbalukoff feel free to tag me in gerrit on reviews
20:04:31 <xgerman> we should probably start an etherpad with links to what we like eyeballs on
20:04:36 <sbalukoff> davidlenwell: Will do!
20:04:43 <xgerman> that helped me a lot when we did LBaaS v2
20:04:52 <sbalukoff> #action sbalukoff to assign all review work to davidlenwell
20:04:55 <sbalukoff> #undo
20:04:56 <openstack> Removing item from minutes: <ircmeeting.items.Action object at 0x1f4e310>
20:04:57 <johnsom_> +1 on etherpad for reviews
20:05:15 <blogan> can we have an etherpad listing out all the etherpads as well?
20:05:18 <blogan> j/k
20:05:28 <xgerman> that would be the wiki
20:05:32 <xgerman> :-)
20:05:33 <blogan> oh snap
20:05:38 <TrevorV_> actually blogan that might be helpful, since they don't explicitly show an organizational structure
20:05:48 <dougwig> i can setup an etherpad again.
20:05:52 <sbalukoff> xgerman: Sounds good.  Question for you, as well:  Is this link helpful for knowing what is in the review queue?
20:05:53 <sbalukoff> #link https://review.openstack.org/#/q/stackforge/octavia+status:open,n,z
20:05:57 <blogan> well
20:06:04 <dougwig> #action dougwig octavia review etherpad
20:06:06 <blogan> yeah sbalukoff
20:06:10 <blogan> that's what I was going to link
20:06:11 <dougwig> i use that link, but it doesn't prioritize.
20:06:12 <davidlenwell> sbalukoff:  maybe make that the irc chanels topic
20:06:14 <blogan> all the reviews are right there
20:06:38 <sbalukoff> davidlenwell: Good idea!
20:06:39 <xgerman> well, I still need to figure out what is WIP
20:06:49 <blogan> that has the WIP status
20:07:23 <johnsom_> I will put that link on our wiki page for easy reference
20:07:27 <blogan> if there's a big X under W, that means it's a WIP
20:07:36 <dougwig> johnsom_: it's already there
20:07:38 <dougwig> at the top
20:07:41 <dougwig> :)
20:07:49 <xgerman> make it bold and blinking
20:07:53 * Vorrtex__ power is fluctuating at random... might not be on here consistently
20:07:54 <johnsom_> So it is, cool, missed that
20:07:57 <sbalukoff> #action sbalukoff to try to update channel topic (even though we don't have ops here)
20:08:41 <dougwig> i can do that.  what topic?
20:08:47 <blogan> also I thought a good reason to put WIPs in gerrit was so people could look at the direction the code is going and comment on it
20:08:49 <sbalukoff> dougwig:  https://review.openstack.org/#/q/stackforge/octavia+status:open,n,z
20:08:54 <blogan> not wait for it to get out of WIP and ready for review
20:09:08 <dougwig> will set after meeting
20:09:15 <dougwig> #action dougwig fix lbaas channel topic
20:09:30 <sbalukoff> blogan: Yeah, I've sort of been doing the latter. I'll be more diligent about reviewing WIP code.
20:09:41 <blogan> well thats what I thought, I could be wrong
20:09:43 <xgerman> blogan, you are right but I want to make sure I at least review what's urgent first :-)
20:10:18 <dougwig> there are multiple kinds of reviews.  the ones where you look for errors, omissions, or not being openstack-y, are kinda useless while WIP.  the ones where you want to give design feedback, those are when you go into a WIP.  IMO.
20:10:18 <blogan> xgerman: totally understand, I just wanted to make sure my understanding was correct, not saying any particular WIP should be reviewed right now
20:10:31 <blogan> dougwig +1
20:10:41 <sbalukoff> In any case, I can certainly put together an etherpad for people to update if they don't think the automatic listing is helpful (since it's not prioritized). We can see how that goes and decide whether it's worth maintaining long-term.
20:10:43 <xgerman> +1
20:10:44 <blogan> those are the most useful reviews in any phase of the review process
20:11:30 <Vorrtex__> Forgive me, did we get a page set up with links to reviews? etherpad or wiki? either?
20:12:17 <sbalukoff> Vorrtex__: There's this automated listing here: https://review.openstack.org/#/q/stackforge/octavia+status:open,n,z
20:12:24 <sbalukoff> But it is not prioritized by urgency
20:12:37 <Vorrtex__> Oh, ha, I've seen this page in passing, but never paid any attention to it.
20:12:39 <Vorrtex__> Thanks
20:12:42 <sbalukoff> Anyway, I'll go ahead and set up that etherpad.
20:12:57 <sbalukoff> #action sbalukoff to create etherpad listing reviews that need attention in order of urgency.
20:13:08 <blogan> we can set priorities in the launchpad blueprint page, but it's not easy to get to the reviews from there
20:13:18 <sbalukoff> blogan: I agree
20:13:25 <xgerman> +1 etherpad
20:13:54 <sbalukoff> I don't know about y'all but I do find this stand-up etherpad useful as well:  https://etherpad.openstack.org/p/octavia-weekly-standup
20:14:02 <sbalukoff> But I notice not everyone updated it this week.
20:14:07 <blogan> ah crap I forgot to update that
20:14:09 <blogan> sorry
20:14:12 <sbalukoff> Would it be useful for me to send a reminder to the mailing list?
20:14:35 <blogan> yeah
20:14:36 <sbalukoff> (I had been doing that prior to each meeting, but figured people might find it tiresome.)
20:14:38 <blogan> you should just automate it
20:14:55 <xgerman> blogan, you can also use your own Outlook
20:14:59 <sbalukoff> blogan: Yeah, easily done. The question is: Do people mind?
20:15:19 <sbalukoff> dougwig: Should the bots have operator / voice status here?
20:15:19 <blogan> xgerman: yes i could do that, but that requires me to, you know, do something
20:15:32 <xgerman> fair...
20:15:33 <sbalukoff> blogan: Want me to send you a calendar invite?
20:15:38 <dougwig> sbalukoff: they already have those rights, they just don't sit with them active.
20:15:40 <blogan> lol im being dumb
20:15:43 <sbalukoff> (I have one to remind me to update the agenda. XD )
20:15:48 <blogan> no i can set my own, its fine
20:16:03 <blogan> im joking about me being helpless, i just wasn't thinking
20:16:21 <sbalukoff> Ok, so again, question: Is it useful for me to send something to the mailing list reminding people of the agenda, the meeting time and location, and the stand-up etherpad?
20:16:29 <ctracey_> hola folks
20:16:30 <sbalukoff> Because I absolutely could do that. :)
20:16:38 <sbalukoff> Howdy Craig!
20:16:40 <ctracey_> sbalukoff: yes
20:16:54 <blogan> i think it is useful, but once more and more people get involved it won't scale very well
20:17:00 <xgerman> yes, agenda to mailing list is good
20:17:05 <blogan> i mean the etherpad in general
20:17:05 <sbalukoff> Good enough! The rest of you will just have to put up with my spam. (as usual)
20:17:06 <blogan> not the ML
20:17:16 <ctracey_> well the agenda should be posted
20:17:18 <Vorrtex__> I did a general update for the weekly standup page, sorry about that sbalukoff.  I'll pay attention to that before future meetings
20:17:32 <blogan> yeah agenda should be posted, and the etherpad can be in that same post
20:17:44 <sbalukoff> #action sbalukoff to send weekly reminders to ML about agenda, meeting time + location, and stand-up etherpad
20:17:57 <sbalukoff> blogan: Yes, I intend to do this in a single e-mail.
20:17:59 <ctracey_> yes - a standing post in the meeting invite is fine.
20:18:36 <sbalukoff> Ok!
20:19:51 <sbalukoff> So, one thing I would like to see for next week is for everyone interested in contributing to have a look through the blueprints and either ask questions (here, in the ML, or in next week's IRC meeting) about what is unclear, what is too ambiguous, or anything else that's a subtle blocker for getting started helping. :)
20:20:16 <sbalukoff> I'm totally going to make an action item out of that.
20:20:22 <xgerman> I think once we have more specs it might be easier for people to jump in
20:20:32 * dougwig thinks that sbalukoff just discovered the #action tag.
20:20:49 <xgerman> last week was vote-tag
20:20:58 <xgerman> now it's action - who knows what's next
20:20:59 <sbalukoff> #action everyone to look through blueprints, help flesh out and/or come to IRC, ML or meeting with questions.
20:21:11 <sbalukoff> Next week it'll all be about the #undo tag.
20:21:39 <sbalukoff> Ok, on this note, does anyone have anything else they'd like to ask about this topic before we switch to open discussion?
20:22:42 <johnsom_> crickets...
20:22:44 <sbalukoff> I'll take that as a 'no'
20:22:50 <sbalukoff> #topic Open Discussion
20:23:06 * Vorrtex__ likes these short and to-the-point meetings
20:23:25 <dougwig> question: are we going to move these meetings to the openstack meeting channels?
20:23:31 <sbalukoff> Anyone have anything they'd like to bring up before the group?  (Otherwise, we might as well end early and let people get back to, you know, doing actual work.)
20:23:32 <blogan> good question
20:23:44 <blogan> we should move to the meeting channels
20:23:48 <blogan> i have no idea what that process is though
20:23:53 <sbalukoff> dougwig: I've yet to see whether there's an opening during this time slot.
20:23:59 <dougwig> i think you just reserve a slot in the wiki and go for it.
20:24:17 <sbalukoff> I'm happy to do that, so long as we can get this same slot (or something very near it).
20:24:53 <sbalukoff> (Mid week, and not forcing me to get up at 5:00am makes for a slightly less cranky sbalukoff)
20:25:11 <blogan> looks like openstack-meeting-3 would be available at this time
20:25:15 <sbalukoff> #action sbalukoff to look into / move Octavia meeting to a standard openstack meeting channel.
20:25:18 <blogan> just from a quick search
20:25:19 <dougwig> i'd suggest that we do so, unless there are time conflicts.  but seeing as all of my other openstack meetings are at horrendous hours, the mid-day blocks should be free.  :)
20:25:32 <Vorrtex__> dougwig: aint that the truth
20:25:40 <xgerman> or we start openstack-meeting4
20:25:42 <xgerman> ...
20:25:45 <sbalukoff> dougwig: That was my sardonic hope, as well. ;)
20:25:59 <sbalukoff> xgerman: Isn't that what #openstack-lbaas is? ;)
20:26:14 <sbalukoff> Ok, folks, anything else, or are we done for today?
20:26:26 <xgerman> then we need to tolerate other projects doing meetings in our channel
20:26:33 <dougwig> i'm done.  trevor, you got some -1 love.
20:26:38 <Vorrtex__> dougwig: thanks
20:26:40 <blogan> sbalukoff, xgerman: name of the controller driver interface to push the configs along
20:26:50 <sbalukoff> blogan: Ooh! Good one.
20:26:55 <Vorrtex__> dougwig: if you reviewed the repository review then I know it's broken
20:27:05 <blogan> is it a WIP?
20:27:20 <Vorrtex__> blogan: last I checked yeah
20:27:23 <sbalukoff> #topic "Discussion" about what to name class that is the controller<->driver interface.
20:27:36 <blogan> naming! yay!
20:27:45 <xgerman> driver means the driver which controls LBs on an Amphora
20:27:49 <dougwig> what's the name for a roman vomitorium?
20:27:51 <sbalukoff> I was really tempted to make the topic "Weekly holy war"
20:28:03 <dougwig> or a roman bottle opening?
20:28:08 <dougwig> jk.
20:28:13 <sbalukoff> dougwig: Haha!
20:28:29 <blogan> so the current name suggested is AmphoraeDriver, but that to me sounds like it's responsible for spinning up and down Amphorae
20:28:49 <blogan> whereas SoftwareLoadBalancerDriver is more specific, but still generic enough (though the name is a bit dumb I know)
20:29:17 <dougwig> ControllerDriver.  AmphoraConfigDriver.  AmphoraMetaDriver.
20:29:22 <dougwig> (just throwing stuff out)
20:29:37 <sbalukoff> ControllerDriverInterface
20:29:46 <sbalukoff> That's the most literal term I can think of.
20:29:47 <Vorrtex__> while we're throwing stuff out, how about nuking the term amphora?
20:29:49 <Vorrtex__> lulz jp
20:29:51 <blogan> is there also going to be a driver interface that is responsible for amphora lifecycle management?
20:30:16 <sbalukoff> Vorrtex__: Don't make me send the phone spiders after you.
20:30:33 <sbalukoff> blogan: There needs to be something like that, yes.
20:30:43 <blogan> and that will live in the controller as well then?
20:31:00 <sbalukoff> Well, we've talked (briefly) about having an abstract interface to Nova
20:31:02 <blogan> so doesn't AmphoraeDriver seem more appropriate for that
20:31:09 <sbalukoff> It seems to me it would be there, and probably not in the thing German is working on.
20:31:26 <sbalukoff> blogan: Maybe AmphoraeManager
20:31:27 <xgerman> yeah, ironically blogan suggested that name
20:31:40 <blogan> xgerman: shhh
20:31:44 <blogan> no one needs to know that
20:31:49 <xgerman> lol
20:32:31 <dougwig> Manager gets overloaded a lot, between python context managers and openstack in general.
20:32:33 <blogan> AmphoraeManager is still the same problem
20:32:42 <blogan> noo dougwig, no more overloaded complaints
20:32:49 <xgerman> I guess we should pick a few and vote
20:32:49 <xgerman> exercise the #vote tag
20:33:05 <sbalukoff> I think some of the trepidation here is that there's probably some confusion about the responsibilities of each component.
20:33:20 <blogan> ill be fine with AmphoraDriver, i don't want to get into a long drawn out discussion and vote
20:33:29 <sbalukoff> blogan: +1
20:33:32 <sbalukoff> Ok, so!
20:33:39 <Vorrtex__> but driver is overloaded.
20:33:56 * sbalukoff sends the phone spiders after Vorrtex__
20:34:08 * Vorrtex__ laughs as his friends return to his side
20:34:15 <sbalukoff> Ok, so!
20:34:19 <dlundquist> I think it would be easier if someone put forward a high level architecture with their best names and then we reviewed it, otherwise we can't decide if the name fits better somewhere else.
20:34:20 <xgerman> then lets name it chauffeur
20:34:21 <sbalukoff> Any other suggestions for a name here?
20:34:29 <sbalukoff> I'm about to compile a list and call a vote.
20:34:30 <Vorrtex__> xgerman: how about alfred, or jarvis.
20:34:34 <blogan> AmphoraLoadBalancerDriver
20:34:35 <sbalukoff> So we don't have to spend too much time on this.
20:34:48 <sballe_> blogan, +1
20:34:53 <xgerman> +1
20:35:34 <xgerman> AmphoraLoadBalancersDrivers to illustrate the M to N
20:35:57 <Vorrtex__> xgerman: I thought we went 1:M LB:amphora?
20:35:59 <blogan> wouldn't it be Amphorae?
20:36:23 <xgerman> correct, and Vorrtex__ sadly that didn't get ratified
20:36:25 <blogan> anyway, sorry for bringing up yet another naming issue
20:36:41 <Vorrtex__> So I should undo that change in the models then?
20:36:45 <blogan> i thought we agreed to go with 1:M LB:amphora at first
20:36:50 <Vorrtex__> Yeah, same
20:37:06 <xgerman> I thought dougwig threw a wrench
20:37:25 <xgerman> and we left it M to N?
20:37:33 <blogan> well he thinks it should be left up to the drivers
20:37:36 <Vorrtex__> I didn't see that xgerman, but I could have missed it
20:37:59 <sbalukoff> #vote What should we call the class that is the controller-driver interface? AmphoraLoadBalancerDriver AmphoraDriver ControllerDriver AmphoraConfigDriver AmphoraMetaDriver ControllerDriverInterface AmphoraeManager
20:38:12 <sbalukoff> Let's see if the voting system barfs over that.
20:38:19 <blogan> is it active now?
20:38:28 <blogan> #vote AmphoraLoadBalancerDriver
20:38:31 <sbalukoff> Dammit no..
20:38:34 <johnsom_> It doesn't look like it fired off the vote
20:38:35 <Vorrtex__> I thought it was "#start-vote" or something like that
20:38:36 <sbalukoff> Sorry... just a sec.
20:38:37 <blogan> start-vote
20:38:46 <sbalukoff> #startvote What should we call the class that is the controller-driver interface? AmphoraLoadBalancerDriver AmphoraDriver ControllerDriver AmphoraConfigDriver AmphoraMetaDriver ControllerDriverInterface AmphoraeManager
20:38:47 <openstack> Begin voting on: What should we call the class that is the controller-driver interface? Valid vote options are AmphoraLoadBalancerDriver, AmphoraDriver, ControllerDriver, AmphoraConfigDriver, AmphoraMetaDriver, ControllerDriverInterface, AmphoraeManager.
20:38:48 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
20:38:51 <blogan> #vote AmphoraLoadBalancerDriver
20:38:53 <sbalukoff> Ok, NOW vote
20:39:00 <xgerman> #vote AmphoraLoadBalancerDriver
20:39:02 <sballe_> #vote AmphoraLoadBalancerDriver
20:39:02 <sbalukoff> #vote AmphoraLoadBalancerDriver
20:39:10 <sbalukoff> I sense a trend.
20:39:11 <blogan> lol didn't need a vote
20:39:22 <Vorrtex__> #vote AmphoraeManager
20:39:22 <sbalukoff> blogan: But now it'll be official.
20:39:23 <johnsom_> #vote AmphoraLoadBalancerDriver
20:39:26 <blogan> i think we just like the vote script
20:39:38 * Vorrtex__ really likes using overloaded terms that still make sense
20:39:53 <jwarendt> #vote AmphoraLoadBalancerDriver
20:40:01 <ajmiller_> #vote AmphoraLoadBalancerDriver
20:40:09 <sbalukoff> 60 seconds until voting is ended...
20:41:09 <sbalukoff> #endvote
20:41:11 <openstack> Voted on "What should we call the class that is the controller-driver interface?" Results are
20:41:12 <openstack> AmphoraeManager (1): Vorrtex__
20:41:13 <openstack> AmphoraLoadBalancerDriver (7): xgerman, jwarendt, sbalukoff, ajmiller_, johnsom_, blogan, sballe_
20:41:18 <sbalukoff> Ok!
20:41:19 <sbalukoff> handy.
20:41:23 <xgerman> runaway victory
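[Editor's note: a minimal sketch of the interface just voted on. Only the class name AmphoraLoadBalancerDriver comes from the meeting; every method name and signature below is a hypothetical assumption for illustration, not the interface actually merged into Octavia.]

```python
# Hypothetical sketch only: the class name comes from the vote above, but
# the methods are assumptions made for illustration.
import abc


class AmphoraLoadBalancerDriver(abc.ABC):
    """Abstract controller<->driver interface for pushing load balancer
    configuration out to amphorae."""

    @abc.abstractmethod
    def update(self, listener, vip):
        """Push an updated listener/VIP configuration to the amphorae."""

    @abc.abstractmethod
    def delete(self, listener, vip):
        """Remove a listener's configuration from the amphorae."""


class NoopAmphoraLoadBalancerDriver(AmphoraLoadBalancerDriver):
    """Records calls instead of touching real amphorae; handy in tests."""

    def __init__(self):
        self.calls = []

    def update(self, listener, vip):
        self.calls.append(('update', listener, vip))

    def delete(self, listener, vip):
        self.calls.append(('delete', listener, vip))
```

A concrete haproxy-based driver would subclass the abstract base the same way the no-op driver does, which is what lets the controller stay ignorant of how configuration actually reaches an amphora.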
20:41:27 <sbalukoff> Anyone have anything else to bring before the group?
20:41:34 <dougwig> Lost network.  :)
20:41:44 <Vorrtex__> dougwig: blogan spoke for you a few times
20:41:46 <Vorrtex__> just sayin
20:41:47 <blogan> sbalukoff: keeping the M:N table structure for LB:Amphora
20:41:58 <Vorrtex__> yeah, good call blogan
20:42:02 <xgerman> +1
20:42:25 <sbalukoff> #topic voting on M:N table structure for LB:Amphora
20:42:28 <blogan> if it is indeed left to the driver to decide, then we should just keep the M:N, but we can't put in unique constraints either to enforce 1:M
20:42:36 <sbalukoff> Anyone want to summarize the arguments for either side?
20:43:12 <sbalukoff> blogan: And therefore the code will have to deal with M:N, even if the driver uses 1:N
20:43:20 <blogan> yes
20:43:22 <johnsom_> sbalukoff you are good with definitions, should we review what an "LB" is and how it is different from an "Amphora" just so we are all on the same page?
20:43:24 <xgerman> sbalukoff and I agreed on 1:N
20:43:53 <sbalukoff> johnsom_: Ok, so "LB" is load balancer as it came to be understood in the Neutron LBaaS project...
20:43:57 <sbalukoff> Speaking of which...
20:43:57 <Vorrtex__> xgerman: blogan and I also agree on that
20:44:01 <blogan> xgerman: we agreed to do that at first, but we can still have the table structure set up as M:N so we don't paint ourselves into a corner and disallow it in the future or for other drivers
20:44:29 <sbalukoff> #action sbalukoff to start dictionary / glossary of terms for Octavia project.
20:44:36 <xgerman> +1
20:44:37 <sbalukoff> I keep forgetting to do that. :P
20:44:42 <johnsom_> +1
20:44:44 <blogan> that way its up to the driver to decide whether its M:N or 1:M
20:44:52 <masteinhauser> sbalukoff: You defined like 20 on-the-fly when we were on the phone...
20:45:20 <sbalukoff> That's because I talk too quickly and interminably. :)
20:46:46 <blogan> anyone have a strong opinion on whether we allow M:N LB:Amphora table structure (but still aim at 1:M LB:Amphora for the driver we actually implement)?
20:46:48 <sbalukoff> To further clarify what "LB" means: It's essentially the same thing as a "VIP" in other load balancing terminology (ie. everywhere outside of Neutron LBaaS), with the exception that a load balancer *might* have more than one IP address associated with it in the future.
20:47:46 <sbalukoff> blogan: So this question, to me, is more about whether we ever want to allow more than one load balancer per amphora. (ie. whether there are practical, technical, or business needs to allow for this.)
20:48:10 <sbalukoff> If we intend to allow 3rd party vendors to have more freedom in how they implement their solutions, we need M:N, IMO.
20:48:15 <blogan> sbalukoff: true, and if someone really needs it then we shouldn't not allow it
20:48:17 <masteinhauser> M:N seems to make sense in the case when you may be using LVS or other Direct Routing style load balancers.
20:48:24 <sbalukoff> But... I dunno. Maybe we don't want Octavia to allow that. :/
20:48:45 <blogan> xgerman: thoughts?
20:49:16 <xgerman> well, my thought was for a software LB we can always adjust the size of the vm -- so if you need two LBs just spin up two tiny vms
20:49:49 <xgerman> so one lb per amphora is sufficient and you tune with nova
20:50:29 <blogan> is there another case for M:N other than trying to save space/resources by putting many LBs/Listeners on an amphora?
20:50:42 <sbalukoff> I could imagine a 3rd party solution where a vendor makes a "big" load balancer appliance and allows "virtual load balancers" to be created on it in some fashion. This model doesn't actually break with 1:N, per se...
20:50:50 <xgerman> also since we have migrations how difficult is it to go from 1:N to M:N?
20:50:58 <sbalukoff> It's also a question of "do we really need to allow for colocation?"
20:51:07 <sbalukoff> Apolocation is necessary to fulfill HA requirements, in any case.
20:51:18 <sbalukoff> But I'm having a hard time coming up with a solid case for LB colocation.
20:51:31 <Vorrtex__> sbalukoff: colocation I thought was being handled inside neutron or did I miss a conversation there as well o_0
20:51:52 <blogan> xgerman: it really shouldn't be too difficult, code will have to be changed as well probably
20:51:59 <sbalukoff> xgerman: It's not just migrations, per se... it's also a bunch of places in the code where people, by that time, might have assumed 1:M and aren't prepared to deal with M:N
20:52:36 <xgerman> yeah, so the reason would need to be really compelling by then :-)
20:52:46 <sbalukoff> Vorrtex__: There was some talk of it being handled in Nova, IIRC... but we'll still need a logical representation in Octavia in any case, I think.
20:53:08 <xgerman> yeah, so the lifecycle driver can tell nova what to do
20:53:10 <sbalukoff> xgerman: Yes. And again, I'm having trouble coming up with a compelling justification.
20:53:13 <Vorrtex__> I see, thanks sbalukoff, I thought we had talked about it at some point there
20:53:14 <blogan> dougwig: do you have any thoughts on this?
20:53:19 <sbalukoff> Can anyone here think of a good reason why we would need colocation?
20:53:44 <xgerman> colocation of two LBs on the same vm -- NOT colocation of two vms containing LBs on the same host
20:54:04 <sbalukoff> xgerman: Yes, exactly. Thanks for the clarification.
20:54:15 <blogan> sbalukoff: define colocation and apolocation in your glossary too
20:54:26 <xgerman> +1
20:54:40 <blogan> i'm colocated with everyone right now, on earth
20:54:48 <sbalukoff> I can think of one potentially compelling reason not to allow colocation: It makes our system less flexible.
20:55:21 <sbalukoff> (Because users would then, effectively, be able to dictate where certain cloud resources get placed.)
20:55:31 <xgerman> it's always more difficult to take the right thing away than to add things
20:55:50 <sbalukoff> xgerman: Another compelling reason.
20:55:58 <sbalukoff> dougwig: Are you still here?
20:56:48 <sbalukoff> dougwig: I would like to get your perspective on this because I think you're probably the person most in favor of M:N here.
20:57:03 <sbalukoff> So, if dougwig has lost connectivity, I will forego the vote until next week.
20:57:11 <xgerman> but we need to know
20:57:41 <xgerman> well, I can assume the 1:N case in the interface
20:58:02 <sbalukoff> xgerman: Let's assume 1:N for now, then, unless dougwig can give us a compelling reason to do M:N that outweighs the two reasons we've come up with for not to allow colocation.
20:58:16 <sbalukoff> Ok!
20:58:22 <sbalukoff> We have about 2 minutes left.
20:58:22 <xgerman> Deal!
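[Editor's note: a quick sketch of the two schema shapes just debated. All table and column names are illustrative assumptions, not Octavia's actual migrations; it only contrasts a 1:N foreign key against an M:N association table.]

```python
# Illustration only: assumed table/column names, not Octavia's real schema.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE load_balancer (id TEXT PRIMARY KEY);

-- Option assumed for now (1:N): a foreign key directly on the amphora
-- row ties each amphora to at most one load balancer.
CREATE TABLE amphora (
    id               TEXT PRIMARY KEY,
    load_balancer_id TEXT REFERENCES load_balancer (id)
);

-- Alternative (M:N): an association table would let any amphora serve
-- any number of load balancers, and vice versa -- the layout that keeps
-- colocation open for third-party drivers.
CREATE TABLE load_balancer_amphora (
    load_balancer_id TEXT REFERENCES load_balancer (id),
    amphora_id       TEXT REFERENCES amphora (id),
    PRIMARY KEY (load_balancer_id, amphora_id)
);
""")
conn.execute("INSERT INTO load_balancer VALUES ('lb-1')")
conn.execute("INSERT INTO amphora VALUES ('amp-1', 'lb-1')")
row = conn.execute(
    "SELECT id, load_balancer_id FROM amphora").fetchone()
```

The migration xgerman asks about above would essentially move rows from the `amphora.load_balancer_id` column into the association table, which is why the group treats switching later as possible but disruptive to code that assumes 1:N.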
20:58:28 <sbalukoff> Anything else?
20:58:30 <xgerman> blogan?
20:58:44 <blogan> i have nothing else
20:58:52 <blogan> but yeah thats fine by me
20:59:03 <sbalukoff> Thanks for coming y'all!
20:59:16 <sballe> bye
20:59:18 <xgerman> bye
20:59:24 <sbalukoff> #endmeeting