15:00:52 <n0ano> #startmeeting gantt
15:00:54 <openstack> Meeting started Tue Jul 15 15:00:52 2014 UTC and is due to finish in 60 minutes.  The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:56 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:58 <openstack> The meeting name has been set to 'gantt'
15:01:05 <n0ano> anyone here to talk about the scheduler?
15:01:18 <bauzas> \o
15:02:26 <mspreitz> o/
15:03:40 <n0ano> not many admitting to being here today
15:03:46 <bauzas> :)
15:03:49 <n0ano> #topic forklift status
15:03:56 <bauzas> jaypipes ? Yathi ?
15:04:06 <coolsvap> hello
15:04:25 <bauzas> n0ano: johnthetubaguy is currently having a meeting conflict, possible to delay this topic ?
15:04:41 <bauzas> n0ano: are there other topics to discuss ?
15:04:45 <Yathi> Hi
15:04:55 <n0ano> pretty much the only topic unless there's anything else anyone wants to raise?
15:05:18 <jaypipes> o/ (sorry, was finishing reply to PaulMurray)
15:05:38 <johnthetubaguy> I am lingering, but will not be very responsive
15:05:57 <n0ano> maybe we can start on this anyway...
15:06:13 <bauzas> sure
15:06:22 <n0ano> jaypipes, seems like you have the most concerns about the split, care to express them?
15:06:28 <bauzas> n0ano: do you want me to summarize the outcome?
15:06:36 <n0ano> bauzas, sure, go ahead
15:06:46 <bauzas> so
15:07:19 <bauzas> long story short, there are some patches in review
15:07:39 <bauzas> one spec is still not validated https://review.openstack.org/89893
15:07:58 <johnthetubaguy> we are past the spec freeze date now, so will have to do an exception for that
15:08:03 <bauzas> so that means we will miss the Juno target unless we get an exception (little chance tho)
15:08:16 <johnthetubaguy> I am going to try and make a list of those today
15:08:27 <bauzas> some patches can be reviewed tho :
15:08:29 <bauzas> https://review.openstack.org/82778
15:08:33 <bauzas> https://review.openstack.org/104556
15:08:54 <bauzas> one small and quickwin patch is here https://review.openstack.org/103503
15:09:21 <bauzas> the 103503 is really small and not disruptive, so I expect it to be merged soon
15:09:58 <bauzas> once the 103503 gets merged, the only API method for Scheduler will be select_destinations()
15:10:34 <jaypipes> n0ano: I don't see the urgency or use cases, frankly.
15:10:41 <bauzas> the 2 above patches (82778 and 104556) aim to add a second interface for publishing stats to the scheduler
15:10:51 <bauzas> and here comes the debate :)
15:10:58 <jaypipes> was hoping Yathi could show me a use case that demonstrates how users of OpenStack would benefit from a split scheduler.
15:11:36 <bauzas> so, just to end up my summary
15:11:44 <Yathi> jaypipes: sure
15:11:57 <jaypipes> bauzas: yes, please do.
15:12:01 * PaulMurray sorry I'm late - in another meeting at same time
15:12:19 <Yathi> To be crisp, I can use the scenario we demoed at the HKG and Atlanta summits - compute-volume affinity
15:12:22 <bauzas> the goal over Juno is to define what will be the next scheduler API
15:12:58 <Yathi> this is clearly a cross-service concern.. with nova and cinder data being used to come up with a unified placement decision
15:13:01 <mspreitz> If by split scheduler you mean use case for only the forklift, I do not think there is one.  The forklift is prep for more interesting work.  Like what Yathi said.
15:13:01 <ddutta> bauzas: gr8 idea ... have you looked at the server group extension .... would it help to review this as a potential starting point ... we pushed it in Icehouse
15:13:32 <bauzas> mspreitz: +1
15:13:34 <n0ano> mspreitz, +1
15:13:36 <jaypipes> Yathi: that isn't a use case.
15:13:47 <bauzas> the goal of the split is to do the necessary prework for Gantt
15:13:56 <jaypipes> that isn't a use case either.
15:14:03 <bauzas> ddutta: could you please give me a pointer ?
15:14:04 <n0ano> that has always been the case, the split alone creates a baseline for further work
15:14:04 <mspreitz> jaypipes: right..
15:14:08 <Yathi> such cross-service concerns are best addressed using a split scheduler
15:14:18 <Yathi> the split scheduler will have a unified view
15:14:22 <mspreitz> the use cases were outlined earlier in discussions of broader placement aka scheduling
15:14:22 <bauzas> I like johnthetubaguy's idea to create a community over Gantt
15:14:24 <jaypipes> n0ano: that further work is not a use case either.
15:14:54 <n0ano> Yathi's example is a use case, that's a further work item
15:14:57 <bauzas> jaypipes: if by use case, you mean a customer story ?
15:15:04 <jaypipes> bauzas: bingo.
15:15:06 <Yathi> jaypipes: So I would say - "cross-service scheduling" "unified scheduling" are all use cases
15:15:12 <ddutta> jaypipes: here is a use case ..... say you want to place a group of VMs with a policy very close to storage blobs within a network distance of x
15:15:15 <jaypipes> Yathi: no, those are features.
15:15:23 <bauzas> jaypipes: ok, lemme play then :)
15:15:35 <Yathi> ddutta: +1
15:15:45 <jaypipes> ddutta: ty. that approaches a use case.
15:15:50 <PaulMurray> jaypipes, we have the volume example as a use case - but I haven't used it to motivate a split
15:16:00 <bauzas> jaypipes: as a user, I want to place my volumes close to my compute resources
15:16:16 <jaypipes> bauzas: what does "close" mean?
15:16:22 <Yathi> yes, my customer stories - that is the customer story we demoed in HKG and in Atlanta as part of the NFV talk we had..
15:16:23 <ddutta> another use case:  place a VM closest to a group of VMs but avoid a bunch of storage nodes (coz it's red hot)
15:16:25 <bauzas> jaypipes: as a user, I want to take advantage of datacenter proximity
15:16:36 <PaulMurray> jaypipes, in our case volumes are not accessible from a different AZ
15:16:50 <bauzas> jaypipes: as a user, I want to place my VMs on a bisectional network
15:17:09 <jaypipes> bauzas: how does a user know that there is a bisectional network?
15:17:16 <PaulMurray> jaypipes, as a user I want my instance to be placed in the same az as my volume
15:17:33 <jaypipes> PaulMurray: that is eminently doable without a split scheduler.
15:17:35 <mspreitz> whoa guys, on the AZ stuff, I think that may be handled differently
15:17:46 <mspreitz> Isn't the AZ already chosen by the client?
15:17:50 <jaypipes> yup.
15:17:51 <PaulMurray> jaypipes, or rather, as a user I don't want to bother with az, but I want my instances to be able to use my volumes
15:18:04 <ddutta> We can just document all this in a shared doc
15:18:10 <mspreitz> s/AZ/rack/ and now we can talk
15:18:14 <PaulMurray> jaypipes, I did say I didn't use it to motivate a split
15:18:16 <bauzas> jaypipes: correct, lemme express the usecase differently
15:18:25 <Yathi> jaypipes: As a user, I would like to ensure that my VM is scheduled in a network that has x bandwidth
15:18:29 <jaypipes> mspreitz: cloud users don't (and shouldn't) know which  rack their stuff is on.
15:18:39 <bauzas> jaypipes: as an operator, I want to provide an enterprise-grade network for golden customers
15:18:42 <ddutta> https://docs.google.com/document/d/1ZOz5S9rZFYiwdCLkQM9Th9c7VKln152skO7NcKZDnUM/edit ???
15:18:46 <jaypipes> Yathi: that is already doable with metric.
15:18:49 <jaypipes> metrics.
15:19:01 <mspreitz> jaypipes: right, exactly, cloud users should not identify racks.  But they should be able to say "different racks", to get better reliability
15:19:07 <jaypipes> bauzas: that is a neutron thing and not a scheduler thing.
15:19:19 <jaypipes> bauzas: i.e. neutron's proposed flavor framework.
15:19:28 <Yathi> jaypipes: you don't want to complicate the logic and bring in all the concerns within Nova.. hence the motivation for a split
15:19:30 <jaypipes> mspreitz: no they shouldn't, IMO.
15:19:31 <ddutta> jaypipes: I think there are 2 aspects of a scheduler .... 1) figure out where to place 2) schedule
15:19:41 <bauzas> ddutta: +1
15:19:45 <jaypipes> ddutta: no.
15:19:49 <ddutta> I think the placement should be taken out
15:20:04 <Yathi> jaypipes: first step is to decide where to place, and then complete the scheduling
15:20:04 <jaypipes> ddutta: that may be true for a kernel process scheduler. not for a compute VM scheduler. it's all about placement. nothing else.
15:20:06 <ddutta> if you need a bunch of info to place why put it in nova for now
15:20:19 * bauzas tries to follow both the chatty conversation and read jaypipes's reply on the -dev list. Hard.
15:20:24 <ddutta> the fact it's in nova is historic
15:20:26 <jaypipes> hehe
15:20:48 <jaypipes> ddutta: NO. the fact that *scheduling* (i.e. run instance) is in the nova scheduler is historical. NOT placement.
15:20:49 <ddutta> and you could always put everything in nova and talk to other services ... we have demo-ed that too :)
15:21:00 <Yathi> jaypipes: Also look at Solver Scheduler: https://blueprints.launchpad.net/nova/+spec/solver-scheduler
15:21:06 <ddutta> but what would that serve
15:21:11 <jaypipes> Yathi: no use cases there either. sorry :(
15:21:22 <ddutta> I gave you 2 use cases :)
15:21:24 <PaulMurray> jaypipes, of course, run_instance is not done through scheduler now
15:21:25 <bauzas> jaypipes: did you see my email about the people concerned ? I do accept these as non-use-cases indeed
15:21:35 <jaypipes> we need to stop thinking about the nova scheduler as a kernel thread/process scheduler.
15:21:47 <Yathi> Jaypipes: I gave you customer use cases earlier.. and so did ddutta.. now I am moving to the motivation for a split
15:21:55 <jaypipes> PaulMurray: right, which is why I said that was historical.
15:22:02 <bauzas> jaypipes: I'm seeing the scheduler as the first piece of a SLA engine
15:22:16 <ddutta> bauzas: +1
15:22:18 <jaypipes> we need to stop thinking about the nova scheduler as a kernel thread/process scheduler.
15:22:25 <Yathi> jaypipes:  So regarding solver scheduler.. imagine the difficulty we are having convincing the community to have such smart scheduling aspects within Nova
15:22:25 <ddutta> agree
15:22:47 <bauzas> jaypipes: ie the scheduler is responsible for handling a contract between the user (or operator thru a flavor/hint/whatever) and the system
15:22:53 <jaypipes> Yathi: yes, there is difficulty there because it doesn't meet many use cases.
15:22:58 <mspreitz> jaypipes: what do you mean by "we need to stop thinking about the nova scheduler as a kernel thread/process scheduler" ?
15:23:05 <jaypipes> it's a mostly-academic exercise.
15:23:15 <bauzas> jaypipes: I guess you know the pets vs. cattle story ?
15:23:20 <Yathi> A split scheduler, meant for doing placement as a service, will hopefully be the best place for newer placement decision logic
15:23:27 <jaypipes> bauzas: of course.
15:23:37 <bauzas> jaypipes: so if I want to have pets?
15:23:42 <ddutta> jaypipes: I think there are quite a few use cases if you want to use OpenStack for interesting applications like NFV ....
15:24:00 <jaypipes> mspreitz: everyone seems to think of the Nova scheduler in terms of something that is divvying out time slices to a variety of callers (fair scheduler, smart solver, etc). that isn't what Nova scheduler is about.
15:24:14 <jaypipes> bauzas: screw pets?
15:24:17 <jaypipes> :)
15:24:23 <bauzas> :D
15:24:29 * bauzas noting it...
15:24:31 <n0ano> jaypipes, I haven't heard about time slices, more about best placement taking into account different metrics
15:24:47 <jaypipes> and NFV folks, as I mentioned on the ML, have done a lousy job of specifying/describing actual use cases
15:24:49 <mspreitz> jaypipes: I did not notice such a confusion
15:25:24 <mspreitz> Even for cattle, we want to say stuff about the herd and how it should be distributed.
15:25:29 <jaypipes> beyond stuff that really means "whhhaaaah, my telco IT department doesn't want to rearchitect our giant monolithic NFV component to use a cloud environment! Can't the cloud infra do everything for me? Whaaah."
15:25:46 <jaypipes> sorry for being frank,
15:26:02 <bauzas> jaypipes: you're welcome to express your feelings tomorrow in #openstack-meeting-alt ;)
15:26:33 <ddutta> jaypipes: It's not just one telco use case. When we wanted to speed Hadoop up, we needed the affinity use cases ....
15:26:33 <jaypipes> mspreitz: the defining characteristic of cattle VMs is that they don't matter. any other VM can take their place, and failure of the VM is expected and accounted for.
15:26:35 <mspreitz> jaypipes: frank is good, but I'm not here because of crying by a telco IT department
15:26:55 <mspreitz> jaypipes: exactly, pets vs cattle is about main of content, we are talking now about placement
15:27:11 <mspreitz> s/main/maintenance/
15:27:13 <ddutta> jaypipes: for a bunch of the use cases, we felt placement across services is important, and currently everything runs in nova and we need to make calls to network and storage
15:27:21 <jaypipes> ddutta: I'd rather see the nova scheduler be able to take inputs from things like neutron that would be usable in placement decisions, than break nova scheduler out of nova.
15:27:44 <bauzas> by using the pets metaphor, I was expressing a use case :)
15:27:52 <jaypipes> mspreitz: I know you're not. sorry for making it seem that way, apologies.
15:28:09 <bauzas> "Openstack, please leave me have pets"
15:28:19 <ddutta> jaypipes: clearly that's another way to do it. In fact Yathi/I have demo-ed that route too ... we used network distances and storage affinity in a demo 2 summits ago
15:28:20 <n0ano> jaypipes, I think we all want the scheduler to take inputs from multiple services, we just think that will be easier if the scheduler is a separate project
15:28:37 <bauzas> n0ano: violent agreement from me
15:28:38 <ddutta> jaypipes: we are fine with that too but it looks messy beyond a point
15:28:59 <jaypipes> n0ano: and I believe the interfaces for taking those inputs need to be cleaned up and exposed properly before a split is reasonable
15:29:08 <Yathi> jaypipes: this was from our session in HongKong: https://docs.google.com/drawings/d/1BgK1q7gl5nkKWy3zLkP1t_SNmjl6nh66S0jHdP0-zbY/edit?pli=1
15:29:12 <ddutta> jaypipes: but yes we "could" do it that way .... question is "should" we?
15:29:12 <mspreitz> I really think pets vs. cattle is beside the point.  Even for cattle you want to say stuff like "keep the cattle spread out", "put each cow near a watering hole", ...
15:29:52 <jaypipes> mspreitz: sure, no real disagreement from me on that.
15:29:56 <n0ano> jaypipes, the 3 patches bauzas talked about are the attempt to clean up the interfaces
15:30:11 <bauzas> n0ano: thanks
15:30:14 <jaypipes> n0ano: yeah, but it's the resource tracking piece that is missing.
15:30:33 <jaypipes> and having the scheduler actually be responsible for the claims of resources.
15:31:13 <jaypipes> we can paper over that problem by putting interfaces into the scheduler to update resources, but that's not solving the main problem of the scheduler not owning and distributing the claims themselves.
15:31:16 <bauzas> jaypipes: if we consider that update_resource_stats is a viable interface, the thing of putting the ResourceTracker in Gantt or not won't change that interface
15:31:23 <n0ano> jaypipes, we're looking at the RT part, if we decide that is a critical piece that needs to be done first we can add it, that will definitely miss Juno
15:31:27 <jaypipes> bauzas: see above, I don't think it's a viable interface.
15:31:28 <ddutta> Could we list down use cases here ... for Jaypipes .... https://docs.google.com/document/d/1ZOz5S9rZFYiwdCLkQM9Th9c7VKln152skO7NcKZDnUM/edit?userstoinvite=ybudupi@gmail.com#heading=h.d4hunb4yodwi
15:31:50 <bauzas> ddutta: I love etherpads :)
15:31:51 <ddutta> this could be a very productive exercise .... we list down use cases and see how we could implement them within the current scheduler
15:32:00 <jaypipes> ddutta: ++
15:32:14 <ddutta> ok https://etherpad.openstack.org/p/SchedulerUseCases
15:32:15 <ddutta> then here
15:32:23 <bauzas> ddutta: thanks
15:32:27 <jaypipes> ddutta: and annotate use cases that could be satisfied with small changes to interfaces inside Nova scheduler (without breaking it out)
15:32:39 <jaypipes> ddutta: and annotate use cases that absolutely cannot be solved without Gantt.
15:32:49 <bauzas> jaypipes: can't see above where you say it's not viable :)
15:32:50 <ddutta> jaypipes: sure
15:33:02 <jaypipes> and please remember that "implement solver scheduler" is not a use case :)
15:33:15 <PaulMurray> jaypipes, shall we do the same for scheduler owning claims
15:33:32 <Yathi> jaypipes: but you do realize that nothing is impossible.. you can probably have everything within Nova
15:33:33 <PaulMurray> jaypipes, annotate things that can't be done with/without scheduler owning claims
15:33:39 <jaypipes> bauzas: not viable because the scheduler doesn't own the claim (the compute worker still does, AFAICT from your patches)?
15:33:54 <jaypipes> Yathi: eh, exactly ;)
15:34:12 <n0ano> Yathi, +1 (but clarity is obvious isolation is still good)
15:34:18 <Yathi> but what makes the best sense for a separation of concerns with Gantt is what we need to figure out
15:34:19 <n0ano> s/is/and
15:34:40 <jaypipes> PaulMurray: sure, although in that case, the claim stuff is about removing locks and allowing the scheduler to have race-free operations.
15:34:49 <bauzas> I'm just feeling we're reinventing the wheel from the Icehouse and Juno summits... :)
15:34:52 <jaypipes> PaulMurray: so, less of a use case, more of an implementation detail
15:35:02 <bauzas> ie. "do we need an external scheduler?"
15:35:23 <jaypipes> bauzas: sorry :(
15:35:27 <bauzas> I was thinking this question was answered
15:35:43 <bauzas> nah, lemme be more precise
15:36:34 <bauzas> jaypipes: I mean, splitting the current scheduler is not perfect, but it was agreed to do the refactoring work first and provide something better after
15:36:39 <ddutta> I think we could build any scheduler use case with the current Nova scheduler + a bunch of interactions with Neutron/Cinder/Swift. I think the question we are asking is what would we lose if it's a separate piece ....
15:37:00 <jaypipes> bauzas: so we are mostly disagreeing on the level of refactoring required before a split.
15:37:02 <bauzas> jaypipes: the thing is, I won't be able to join the mid-cycle meetup where this kind of question needs to be debated
15:37:03 <jaypipes> bauzas: yes?
15:37:40 <bauzas> jaypipes: yey, that's why we ran a session over here in ATL
15:37:53 <bauzas> jaypipes: in order to tackle the level of changes
15:38:08 <jaypipes> bauzas: understood. there's only so many places one can be at any given time, though ;)
15:38:18 <bauzas> as johnthetubaguy said, I just don't want to throw the baby out with the bathwater :)
15:38:48 <bauzas> jaypipes: yeah, I know we're all humans, so it's impossible to be at all sessions
15:38:59 <n0ano> I still think, from a high level, cleaning up the library, DB and RT are the 3 refactorings that need to be done and then a split is reasonable.
15:39:10 <jaypipes> bauzas: what is the baby, though? is the baby "a separate scheduler service", or is the baby "our users will be able to do X"? if the former, I don't really see the problem.
15:39:22 <jaypipes> n0ano: sure, ++
15:40:17 <bauzas> n0ano: the debate here is that jaypipes feels that refactoring the RT will impact the library
15:40:34 <bauzas> jaypipes: good question
15:40:34 <n0ano> well, the lib spec is approved with patches waiting, we need to get the DB spec approved and we need to do the RT spec/patches
15:41:16 <ddutta> also could we spend some cycles next meeting to discuss the use cases ... we can all work on it https://etherpad.openstack.org/p/SchedulerUseCases ..... this exercise has not been done from what I remember
15:41:19 <ddutta> let's do it
15:41:20 <n0ano> if the RT impacts the lib that's just part of the work, I don't see a problem.
15:41:28 <bauzas> jaypipes: the baby is maybe the violent agreement we had from the community to split out the scheduler
15:42:01 <n0ano> ddutta, +1, everyone look at the etherpad this week, I don't want to try to do that on the fly right now
15:42:02 <mspreitz> I want users to be able to express cross-service placement constraints and have a solution found even if it can only be found by considering both kinds of resources in one unified placement problem.
15:42:11 <bauzas> ddutta: could you please reply to the -dev thread and give us this link in there ?
15:42:21 <ddutta> bauzas: thanks
15:42:29 <bauzas> ddutta: people who weren't able to attend this meeting can still have good use cases
15:42:30 <ddutta> will do
15:42:33 <n0ano> #action update the use case etherpad at https://etherpad.openstack.org/p/SchedulerUseCases
15:42:34 <Yathi> jaypipes, as another long term use case:  I need to figure out the best placement both for my VM and for the Volume that it will be attached to..
15:42:34 <bauzas> ddutta: cool thanks
15:42:45 <bauzas> n0ano: who's update ? :D
15:42:53 <Yathi> this combined scheduling.. is something that would be possible only with a split scheduler that unifies everything
15:43:00 <Yathi> and as a separate service
15:43:33 <n0ano> #action all update the use case etherpad at https://etherpad.openstack.org/p/SchedulerUseCases
15:43:43 <bauzas> again, as said, I feel the split is something already agreed and validated (aka. "the baby")
15:43:49 * n0ano needs to learn how to use meetbot
15:44:10 <mspreitz> maybe not "the baby", but an agreed step along the way
15:44:29 <jaypipes> Yathi: so, let me reword your use case above...
15:45:00 <jaypipes> Yathi: "As a cloud user, I want my VM placed somewhere where volume XYZ is 'near'"
15:45:26 <mspreitz> jaypipes: no, it's more like "give me a VM and a volume near each other"
15:45:31 <bauzas> jaypipes: wanna create a Scrum backlog, eh ? ;)
15:45:41 <jaypipes> Yathi: if you word your use case like above, you realize that it is simply describing another input to the scheduler's decision about a *VM* placement, and not where to place a volume.
15:46:16 <jaypipes> mspreitz: you can't do both at once. Just pick a location for a volume (or use the location of an existing volume), and then schedule the VM to be near it.
15:46:29 <jaypipes> mspreitz: there's no point overcomplicating the code, IMO.
15:46:32 <Yathi> jaypipes: thanks.. there are two of them.. one is the direct volume affinity, which we demoed at the HKG summit already..
15:46:35 <mspreitz> jaypipes: you can't do both today, but we hope you can later
15:46:40 * bauzas wishes good luck to johnthetubaguy for scrolling back the discussion...
15:46:52 <jaypipes> mspreitz: I'm saying there's no point doing both.
15:46:53 <Yathi> but the second one is this:  "As a cloud user.. I want my VM and Volume placed close to each other"
15:47:17 <jaypipes> mspreitz: just pick a place for the volume, and make the location of the volume an input to the VM placement decision.
15:47:32 <jgriffith> Yathi: I'm curious why you'd want the complexity of both?
15:47:34 <jaypipes> Yathi: that is not valid, IMO. :)
15:47:38 <bauzas> so, because there is one week in between this meeting and the next one, could we consider what n0ano said and see if my proposals are worth it ?
15:47:54 <ddutta> :)
15:47:58 <bauzas> I'm just trying to get a conclusion from that lovely discussion :)
15:48:05 <mspreitz> jaypipes: as long as you have plenty of spare capacity in all your individual containers, serial placement will work.  But individual containers can run low on spare capacity
15:48:32 <ddutta> folks, need to drop off a little early but will be filling in https://etherpad.openstack.org/p/SchedulerUseCases .... will send it to the dev list by replying to a bunch of emails on scheduler :)
15:48:59 <n0ano> bauzas, I think your current specs are fine, the only issue is I guess we need to create one for the RT refactoring
15:49:41 <bauzas> ddutta: cool thanks
15:49:54 <jaypipes> mspreitz: not entirely sure I follow you, but put your thoughts in that etherpad and we can go from there?
15:50:03 <mspreitz> jaypipes: sure
15:50:04 * jaypipes needs to leave for airport shortly :(
15:50:12 <bauzas> n0ano: as said, jaypipes's thoughts were about the fact we pass a JSON blob :)
15:50:25 <bauzas> n0ano: so he was giving a -1
15:50:37 <bauzas> n0ano: and the rest is history
15:50:39 <jaypipes> bauzas: I actually support 95% of your patches and patch content.
15:50:48 <jaypipes> bauzas: will do thorough reviews later today on plane.
15:50:51 <bauzas> jaypipes: quoted for posterity
15:51:03 <jaypipes> bauzas: it's just the end timing on a split that I disagree on.
15:51:08 <jaypipes> or rather, the urgency of a split.
15:51:09 <n0ano> bauzas, that's one of those `devil in the details', the basic idea is good, need to agree on the implementation
15:51:12 <jaypipes> anyways...
15:51:21 <Yathi> great discussion.. I hope the etherpad will get more "use cases" and help the case for Gantt
15:51:27 <jaypipes> ++
15:51:31 <bauzas> Yathi: +1
15:51:37 <jaypipes> thx guys, and sorry to offend anyone with my brash style!
15:51:47 * jaypipes goes to herd his cattle.
15:51:54 <bauzas> jaypipes: no worries, your feedback is greatly appreciated:)
15:51:55 <n0ano> that's ok, we save payback for when we meet in person :-)
15:52:05 <jaypipes> hehe :) I'll be in Beaverton.
15:52:06 <n0ano> bauzas, +2
15:52:13 <Yathi> jaypipes: thanks for letting us all know what a use case is :)
15:52:25 <bauzas> s/know/remember :p
15:52:27 <n0ano> jaypipes, so will I (be afraid, be very afraid :-)
15:52:37 <Yathi> bauzas: +1 :)
15:52:40 <jaypipes> hehe
15:52:41 <bauzas> jaypipes: I won't be in Beaverton :(
15:52:48 <bauzas> anyway
15:52:49 <jaypipes> bauzas: I know,, that sucks :(
15:53:07 <bauzas> yey, I've heard so much about the Oregon beers...
15:53:09 <jaypipes> PaulMurray, mspreitz, Yathi: will you be in Beaverton?
15:53:17 <mspreitz> I will not be in Beaverton, unfortunately
15:53:25 <n0ano> bauzas, the Colorado beers are better
15:53:28 <mspreitz> But I think others can carry the flag
15:53:32 <jaypipes> cool.
15:53:34 <Yathi> jaypipes: I am looking into my calendar.. will know soon
15:53:38 <jaypipes> k
15:53:39 <Yathi> will try though
15:53:54 <jaypipes> OK guys, I've got to pack for the flight... ciao!
15:54:01 <n0ano> winding down, let's close a little early and get some work done
15:54:03 <bauzas> n0ano: 5 mins for opens ?
15:54:10 <n0ano> #topic opens
15:54:15 <n0ano> Any?
15:54:19 <bauzas> ...
15:55:12 <bauzas> (wind in the trees)
15:55:21 <n0ano> guess not, tnx everyone
15:55:26 <bauzas> thanks n0ano :)
15:55:27 <mspreitz> thanks
15:55:31 <n0ano> #endmeeting