15:00:27 <n0ano> #startmeeting gantt
15:00:28 <openstack> Meeting started Tue Jan 13 15:00:27 2015 UTC and is due to finish in 60 minutes.  The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:29 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:31 <openstack> The meeting name has been set to 'gantt'
15:00:35 <n0ano> anyone here to talk about the scheduler
15:00:46 <alex_xu> o/
15:00:54 <edleafe> \o
15:02:15 <n0ano> I was hoping bauzas was around, let's wait just a minute
15:02:26 <alex_xu> yea
15:02:52 <edleafe> just posted a quick note on #openstack-nova
15:03:10 <n0ano> edleafe, beat me to it :-)
15:04:04 <n0ano> in that case, let's do things in a slightly different order
15:04:17 <n0ano> #topic Topics for mid-cycle meetup
15:04:47 <n0ano> have people been thinking on this?  Right now my main topic is the split at the start of L
15:05:23 <n0ano> (assuming we get everything implemented in Kilo)
15:05:58 <edleafe> I've had some talks with jaypipes, and we were hoping to clarify the interface of the scheduler
15:05:59 <alex_xu> n0ano: what needs to be done for the split? will we have any doc describing that, or will that be fleshed out at the mid-cycle?
15:06:42 <edleafe> IOW, what exactly a separated scheduler would look like to anything that wanted to use it
15:06:58 <n0ano> alex_xu, I would hope any doc would mainly be internal descriptions, the goal is a drop-in replacement with minimal impact, maybe some config changes at most
15:07:52 <n0ano> edleafe, would be good to talk about that as I would hope the L cycle will be `what we have now', the M cycle is where we can look at making things more palatable for other projects
15:07:52 <jaypipes> hey guys, sorry for being late
15:08:02 <alex_xu> n0ano: ok
15:08:20 <n0ano> jaypipes, NP, just talking about what we might talk about at the mid-cycle, can you see the history?
15:08:24 <edleafe> n0ano: understood; I just think that that's backwards
15:08:52 <edleafe> if we define the interface that will be useful to other projects, we will know exactly what we need to clean up
15:09:17 <bauzas> \o
15:09:17 <bauzas> oops
15:09:28 <bauzas> for some reason, I had no sound
15:09:44 <n0ano> edleafe, given the goal of minimal impact to Nova on the first split I don't think we want to make many (if any) changes in L interfaces
15:10:55 <bauzas> so, you're discussing the split ?
15:11:03 <n0ano> sounds like we have 2 good discussion topics for the split:
15:11:12 <n0ano> 1) specifics of the L split
15:11:13 <edleafe> n0ano: whether you change them now or not is one issue. We should really know where we want to end up, though
15:11:24 <jaypipes> n0ano: at the start of L, we need the interfaces to the scheduler that update the scheduler's view of the world to be public and versioned.
15:11:49 <n0ano> 2) what are the longer term interfaces
15:12:10 <n0ano> jaypipes, I would think that the result of the Kilo cycle is to do just that
15:12:24 <bauzas> n0ano: agreed
15:12:38 <bauzas> n0ano: provided that jaypipes's BP gets merged
15:12:49 <bauzas> n0ano: and mine about request_spec
15:13:00 <jaypipes> n0ano: well, we are currently at an impasse with regards to any agreement on when and how the scheduler's view of resource consumption changes should be done.
15:13:09 <n0ano> bauzas, both specs were approved so of course the code will be merged :-)
15:13:25 <bauzas> n0ano: I share your enthusiasm
15:13:26 <edleafe> jaypipes: which makes it a perfect topic for the mid-cycle! :)
15:13:33 <jaypipes> indeed.
15:13:34 <n0ano> edleafe, +1
15:13:50 <edleafe> we're not trying to settle it right now
15:13:56 <bauzas> don't we have an approved BP about isolating how the scheduler is getting resources ?
15:14:39 <bauzas> anyway, let's discuss it during the midcycle, agreed
15:14:51 <jaypipes> bauzas: not approved, AFAIK
15:15:26 <bauzas> jaypipes: which one do you refer to ? edleafe's, mine or yours ?
15:15:53 <bauzas> anyway, we're nitpicking, let's discuss this during midcycle
15:15:58 <jaypipes> k
15:16:22 <n0ano> in that case, switching gears:
15:16:29 <n0ano> #topic 1) Remove direct nova DB/API access by Scheduler Filters - https://review.openstack.org/138444/
15:16:29 <bauzas> we should get a better view of what's planned and what's merged
15:16:47 <bauzas> I really dislike this commit title but that's life :)
15:16:52 <n0ano> bauzas, I believe you have some issues
15:16:57 <bauzas> the description is self sufficient
15:17:16 <bauzas> yeah, sorry about being late today
15:17:16 <bauzas> so
15:17:22 <bauzas> has anyone read the spec ?
15:17:32 <edleafe> bauzas: Hey, I changed it in the spec itself... :)
15:17:56 <jaypipes> I am re-reading it again now.
15:18:13 <n0ano> bauzas, I +1'd the last-1 version, does that count
15:18:21 <bauzas> yeah, I suggest taking 5 mins to read the proposal and my comment
15:18:23 <edleafe> jaypipes: it follows the direction we discussed
15:19:34 <bauzas> so, tl;dr the problem is that the compute manager is claiming, not the scheduler
15:19:59 <bauzas> so under the proposal, the resources would be updated when booting the instance
15:20:18 <edleafe> boot/resize/destroy
15:20:22 <bauzas> which is quite different from the existing behaviour, where the update comes via a DB update
15:20:41 <bauzas> edleafe: indeed, but the race condition is still there
15:21:20 <n0ano> bauzas, yeah, don't we currently have the same race, the compute updates the DB and the scheduler reads the DB, still a race there
15:21:50 <bauzas> n0ano: nope, because scheduler reads the DB *after* the DB is updated
15:22:02 <bauzas> n0ano: here the proposal is to read the DB *before* the DB is updated
15:22:25 <edleafe> bauzas: it could read, make a decision, and the DB could change in between
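(A minimal illustrative sketch of the race being described above, using invented names rather than the real Nova objects: the scheduler's read of host state and the later claim on the compute manager are not atomic, so two requests can be placed against the same free capacity.)

    # Sketch only: select_host, request_spec and host_states are
    # illustrative stand-ins, not the actual Nova code.
    def select_host(request_spec, host_states):
        # 1. The scheduler works from a snapshot of each host's resources
        #    (today, read from the DB as updated by the compute nodes).
        candidates = [h for h in host_states
                      if h.free_ram_mb >= request_spec.memory_mb]
        chosen = max(candidates, key=lambda h: h.free_ram_mb)
        # 2. Nothing is claimed here; the claim happens later on the
        #    compute manager.  Between this read and that claim, another
        #    request (or another scheduler) can pick the same host from
        #    the same stale snapshot -- that is the race in question.
        return chosen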
15:22:29 <alex_xu> bauzas: the comment you made is pretty close to the ironic bug we discussed on the ML?  because there are two places that validate resources?
15:22:40 <bauzas> alex_xu: that's related indeed
15:22:42 <jaypipes> I'm +1 on edleafe's latest version.
15:22:59 <bauzas> jaypipes: good to know but that doesn't change my view :)
15:23:13 <jaypipes> bauzas: I responded to your concern.
15:23:57 <alex_xu> emm.... that's not introduced by this spec, it already exists in the current code
15:24:57 <bauzas> jaypipes: your proposal looks good in theory
15:25:08 <bauzas> jaypipes: because it means that we keep track of the pending requests
15:25:34 <bauzas> jaypipes: but in practice, HostStateManager is not persisted
15:25:51 <jaypipes> bauzas: it doesn't need to be.
15:28:10 <n0ano> jaypipes, maybe you can expand on why it doesn't need to be persisted
15:28:14 <alex_xu> jaypipes: but we can run multiple schedulers?
15:29:06 <bauzas> jaypipes: the trick works for aggregates because we regenerate the HostState from the DB
15:29:50 <jaypipes> n0ano: because the source of truth for the state of an instance is the compute node itself. if the scheduler goes down, it should re-establish its view of pending instances by querying the compute API.
15:30:01 <bauzas> jaypipes: but here, as it wouldn't be persisted, we would not have the information about pending requests for a new request
15:30:32 <jaypipes> bauzas: why not? we can make a call to the compute API to get pending instances grouped by host.
15:30:42 <bauzas> jaypipes: there is a huge difference between ComputeNode.get_all() and Instance.get_all() you know
15:30:56 <bauzas> jaypipes: oh, so you would need to persist information about pending instances
15:31:16 <jaypipes> bauzas: there already is persisted information: it's in the instances table in the Nova DB.
15:31:33 <bauzas> jaypipes: as I said, that wouldn't be great for performance
15:31:40 <jaypipes> bauzas: just query the compute API to get pending instances for each host on scheduler startup.
15:31:56 <bauzas> jaypipes: how do you get that list of instance ids ?
15:32:09 <jaypipes> unfortunately, I need to attend another meeting right now :(
15:32:22 <jaypipes> bauzas: you get that list of instance objects via a call to compute API.
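(A rough sketch of jaypipes' suggestion, assuming made-up method names rather than the real compute API: instances that have been scheduled but are not yet running are already persisted in the instances table, so a restarted scheduler can rebuild its in-memory claims by querying for them instead of persisting HostStateManager itself.)

    # Illustrative only: get_pending_instances() and claim() are invented
    # names standing in for whatever the real calls would be.
    def rebuild_host_states(compute_api, host_state_manager):
        # Instances still being placed/built are the "pending" claims the
        # in-memory HostStateManager lost when the scheduler went down.
        for instance in compute_api.get_pending_instances():
            host_state = host_state_manager.get_host_state(instance.host)
            # Re-apply the claim so new requests see the reduced capacity.
            host_state.claim(instance.flavor)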
15:32:29 <bauzas> ok, I think your proposal looks good but the devil is in the details
15:32:58 <bauzas> it looks good because the logic is quite the same as for the aggregates spec, which I'm fine with
15:33:04 <bauzas> but again, let's discuss that later, outside the meeting
15:33:13 <n0ano> do we need to continue this on IRC later when jaypipes is available again?
15:33:14 <bauzas> jaypipes: edleafe: you ok ?
15:33:20 <bauzas> n0ano: huge +1
15:33:46 <edleafe> yes
15:33:46 <n0ano> jaypipes, do you know when you'll be available again?
15:33:47 <jaypipes> n0ano: probably best to do a google hangout, maybe thursday?
15:34:20 <n0ano> jaypipes, that late? I was hoping for this afternoon, but if that's the soonest, we can do that
15:34:53 <bauzas> jaypipes: the spec freeze period ends 2 weeks before K2
15:34:59 <bauzas> ergh
15:35:00 <jaypipes> we need to put together a more detailed doc IMO on the end scheduler interfaces that we want to expose
15:35:14 <bauzas> *spec freeze exception period
15:35:16 <edleafe> Today sucks for me; tomorrow or Thurs is OK
15:35:20 <bauzas> jaypipes: fine by me (for the interfaces doc)
15:35:24 <jaypipes> n0ano: I have one meeting after another, from now until 6pm :(
15:35:44 <n0ano> jaypipes, I feel your pain, let's see if we can get a common time tomorrow
15:36:04 <jaypipes> n0ano: tomorrow, I am afk until noonish EST, afterwards, I'm good to go
15:36:06 <edleafe> n0ano: I'm flexible tomorrow
15:36:13 <bauzas> think about EU people \o
15:36:29 <alex_xu> can Asia people join? o/
15:36:47 <edleafe> timezones are hard
15:36:51 <alex_xu> but it's fine, my timezone is too hard for you guys
15:37:08 <bauzas> edleafe: you're PST ?
15:37:09 <alex_xu> I can check the comment on the gerrit
15:37:16 <edleafe> bauzas: CST
15:37:20 <edleafe> Texas
15:37:30 <n0ano> bummer, it's late afternoon EU right now so an afternoon EST is bad, what about thurs morning EST
15:37:33 <bauzas> edleafe: ack, -8 from CET
15:37:57 <bauzas> eh, -7 even
15:37:57 <edleafe> bauzas: yep
15:38:16 <bauzas> PST, MST, CST, EST right ?
15:38:20 <bauzas> from west to east ?
15:38:21 <edleafe> UTC-6
15:38:30 <edleafe> bauzas: correct
15:38:34 <n0ano> bauzas, yep (I'm MST)
15:38:50 <bauzas> edleafe: ok so CET-7 (I'm UTC+1 during winter period)
15:38:53 <jaypipes> n0ano: I have a meeting Thursday 11-12pm EST, otherwise open
15:39:13 <bauzas> lemme check when 2 weeks before K2 is
15:39:30 <bauzas> because that's the cutoff for merging spec exceptions
15:39:38 <n0ano> how about this slot on Thurs, 1500 UTC (8AM MST)
15:40:05 <edleafe> n0ano: works for me
15:40:36 <bauzas> n0ano: I have a kid to go pick up around 1545 but it will only take 10 mins
15:40:50 <bauzas> so I can attend at 1500 UTC with a 10-min gap
15:41:10 <n0ano> #action n0ano to setup a Google hang out on Thurs, 1/15, 1500 UTC
15:41:58 <bauzas> https://wiki.openstack.org/wiki/Kilo_Release_Schedule
15:42:06 <n0ano> so let's all read edleafe's spec and be ready to discuss then, we don't have much time before the merge cutoff
15:42:12 <bauzas> 2 weeks before K2 is 22nd Jan
15:42:39 <bauzas> to be clear : 22nd Jan is the deadline for merging a spec exception, provided it has been approved before then
15:42:54 <n0ano> bauzas, I'm not too concerned, I think the bulk of the spec is fine, we just have to iron out this one issue
15:43:16 <bauzas> some code will probably be asked for to validate the spec
15:43:35 <bauzas> n0ano: so the sooner the better but I'm fine with Thurs
15:43:46 <edleafe> I can start putting together some code based on this
15:44:13 <edleafe> but it will really depend on a lot of the other changes
15:44:15 <bauzas> edleafe: sounds a good idea
15:44:19 <n0ano> edleafe, that'd be good, I don't think we're going to make major changes to the spec
15:44:43 <bauzas> n0ano: well, that depends on what we agree on the hangout
15:44:57 <n0ano> bauzas, I'm optimistic :-)
15:44:57 <edleafe> it's just that it will be affected by all the changes that bauzas and jaypipes have queued up
15:46:22 <jaypipes> bauzas: what's the status of the detach-service patch series? ready for final reviews?
15:47:14 <bauzas> jaypipes: you can take a look, yup
15:47:33 <bauzas> jaypipes: I have some fixes to do for the objectification
15:48:01 <jaypipes> k, will do this evening (sorry, in meetings all day.. )
15:48:27 <bauzas> jaypipes: no worries
15:48:47 <bauzas> jaypipes: at least the first 3 patches in the series are good to be merged
15:49:05 <bauzas> jaypipes: which would be a good next step for the bp
15:50:24 <n0ano> bauzas, do I read that bp right, there are 14 patches to implement it?
15:51:04 <bauzas> n0ano: yeah, lots of fixes about providing backwards compatibility
15:51:25 <n0ano> we all need to do `a lot` of reviewing
15:51:42 <bauzas> n0ano: basically, the new field has been provided, the goal now is to get rid of all the old methods
15:51:58 <bauzas> and use the objects as a Facade for accessing the new way
15:52:24 <bauzas> n0ano: or switch back to the old way if the field is unset
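(A minimal sketch of the migration pattern bauzas describes, with invented field names: the object acts as a facade that prefers the new field when it is set and falls back to the old path for records that have not been migrated yet.)

    # Sketch only: 'host' and 'service' are placeholders for whichever
    # new and old fields the real patch series uses.
    class ComputeNodeFacade(object):
        def __init__(self, host=None, service=None):
            self.host = host        # new field (may be unset on old records)
            self.service = service  # old relationship, kept for compatibility

        @property
        def effective_host(self):
            if self.host is not None:   # new way, once the record is migrated
                return self.host
            return self.service.host    # old way, until migration completes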
15:52:56 <n0ano> well, we're approaching the top of the hour so:
15:53:00 <n0ano> #topic opens
15:53:10 <n0ano> anything new for today?
15:54:35 <n0ano> hearing a lot of crickets so I'll thank everyone, hope to see you at the hangout on Thurs and we'll talk again next week
15:54:48 <edleafe> thanks, n0ano!
15:54:51 <bauzas> see ya
15:54:53 <n0ano> #endmeeting