15:00:27 #startmeeting gantt
15:00:28 Meeting started Tue Jan 13 15:00:27 2015 UTC and is due to finish in 60 minutes. The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:29 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:31 The meeting name has been set to 'gantt'
15:00:35 anyone here to talk about the scheduler
15:00:46 o/
15:00:54 \o
15:02:15 I was hoping bauzas was around, let's wait just a minute
15:02:26 yea
15:02:52 just posted a quick note on #openstack-nova
15:03:10 edleafe, beat me to it :-)
15:04:04 in that case, let's do things in a slightly different order
15:04:17 #topic Topics for mid-cycle meetup
15:04:47 have people been thinking on this? Right now my main topic is the split at the start of L
15:05:23 (assuming we get everything implemented in Kilo)
15:05:58 I've had some talks with jaypipes, and we were hoping to clarify the interface of the scheduler
15:05:59 n0ano: what do we need to do for the split? will we have any doc describing that, or will that be flushed out at the mid-cycle?
15:06:42 IOW, what exactly a separated scheduler would look like to anything that wanted to use it
15:06:58 alex_xu, I would hope any doc would mainly be internal descriptions, the goal is a drop-in replacement with minimal impact, maybe some config changes is all
15:07:52 edleafe, would be good to talk about that as I would hope the L cycle will be `what we have now', the M cycle is where we can look at making things more palatable for other projects
15:07:52 hey guys, sorry for being late
15:08:02 n0ano: ok
15:08:20 jaypipes, NP, just talking about what we might talk about at the mid-cycle, can you see the history?
15:08:24 n0ano: understood; I just think that that's backwards
15:08:52 if we define the interface that will be useful to other projects, we will know exactly what we need to clean up
15:09:17 \o
15:09:17 oops
15:09:28 for some reason, I had no sound
15:09:44 edleafe, given the goal of minimal impact to Nova on the first split I don't think we want to make many (if any) changes in L interfaces
15:10:55 so, you're discussing the split ?
15:11:03 sounds like we have 2 good discussion topics for the split:
15:11:12 1) specifics of the L split
15:11:13 n0ano: whether you change them now or not is one issue. We should really know where we want to end up, though
15:11:24 n0ano: at the start of L, we need the interfaces to the scheduler that update the scheduler's view of the world to be public and versioned.
15:11:49 2) what are the longer-term interfaces
15:12:10 jaypipes, I would think that the result of the Kilo cycle is to do just that
15:12:24 n0ano: agreed
15:12:38 n0ano: provided that jaypipes's BP would be merged
15:12:49 n0ano: and mine about request_spec
15:13:00 n0ano: well, we are currently at an impasse with regards to any agreement on when and how the scheduler's view of resource consumption changes should be done.
15:13:09 bauzas, both specs were approved so of course the code will be merged :-)
15:13:25 n0ano: I share your enthusiasm
15:13:26 jaypipes: which makes it a perfect topic for the mid-cycle! :)
15:13:33 indeed.
15:13:34 edleafe, +1
15:13:50 we're not trying to settle it right now
15:13:56 don't we have an approved BP about isolating how the scheduler is getting resources ?
15:14:39 anyway, let's discuss it during the mid-cycle, agreed
15:14:51 bauzas: not approved, AFAIK
15:15:26 jaypipes: which one do you refer to ? edleafe's, mine or yours ?
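
To make the "public and versioned" interface point above concrete, here is a minimal sketch of what a client-facing API for a separated scheduler might look like. The class, method names, and version string are illustrative assumptions for this discussion, not the agreed Nova/Gantt interface.

    # Hypothetical sketch of a public, versioned scheduler interface.
    # Names and signatures are assumptions, not actual Nova code.

    class SchedulerAPI(object):
        """Client-side proxy a split-out scheduler would expose."""

        API_VERSION = '1.0'  # bumped on any incompatible change

        def select_destinations(self, context, request_spec, filter_properties):
            # Ask the scheduler to pick hosts for the requested instances.
            # In a separated scheduler this would be an RPC/REST call
            # rather than an in-process method.
            raise NotImplementedError

        def update_resource_stats(self, context, host_name, stats):
            # The "update the scheduler's view of the world" call: compute
            # nodes push their resource usage to the scheduler instead of
            # the scheduler reading Nova's DB directly.
            raise NotImplementedError
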
15:15:53 anyway, we're nitpicking, let's discuss this during the mid-cycle
15:15:58 k
15:16:22 in that case, switching gears:
15:16:29 #topic 1) Remove direct nova DB/API access by Scheduler Filters - https://review.openstack.org/138444/
15:16:29 we should get a better view of what's planned and what's merged
15:16:47 I really dislike this commit title but that's life :)
15:16:52 bauzas, I believe you have some issues
15:16:57 the description is self-sufficient
15:17:16 yeah, sorry about being late today
15:17:16 so
15:17:22 has anyone read the spec ?
15:17:32 bauzas: Hey, I changed it in the spec itself... :)
15:17:56 I am re-reading it again now.
15:18:13 bauzas, I +1'd the last-1 version, does that count
15:18:21 yeah I suggest we take 5 mins to read the proposal and my comment
15:18:23 jaypipes: it follows the direction we discussed
15:19:34 so, tl;dr the problem is that the compute manager is claiming, not the scheduler
15:19:59 so by the proposal, the resources would be updated when booting the instance
15:20:18 boot/resize/destroy
15:20:22 which is far different from the existing behaviour, where the update comes from a DB update
15:20:41 edleafe: indeed, but the race condition is still there
15:21:20 bauzas, yeah, don't we currently have the same race, the compute updates the DB and the scheduler reads the DB, still a race there
15:21:50 n0ano: nope, because the scheduler reads the DB *after* the DB is updated
15:22:02 n0ano: here the proposal is to read the DB *before* the DB is updated
15:22:25 bauzas: it could read, make a decision, and the DB could change in between
15:22:29 bauzas: is the comment you made pretty close to the ironic bug we discussed on the ML? because there are two places that validate resources?
15:22:40 alex_xu: that's related indeed
15:22:42 I'm +1 on edleafe's latest version.
15:22:59 jaypipes: good to know but that doesn't change my view :)
15:23:13 bauzas: I responded to your concern.
15:23:57 emm.... not introduced by this spec, it already exists in the current code
15:24:57 jaypipes: your proposal looks good in theory
15:25:08 jaypipes: because it means that we keep track of the pending requests
15:25:34 jaypipes: but in practice, HostStateManager is not persisted
15:25:51 bauzas: it doesn't need to be.
15:28:10 jaypipes, maybe you can expand on why it doesn't need to be persisted
15:28:14 jaypipes: but we can run multiple schedulers?
15:29:06 jaypipes: the trick works for aggregates because we regenerate the HostState from the DB
15:29:50 n0ano: because the source of truth for the state of an instance is the compute node itself. if the scheduler goes down, it should re-establish its view of pending instances by querying the compute API.
15:30:01 jaypipes: but here, as it wouldn't be persisted, we would not have the information about pending requests for a new request
15:30:32 bauzas: why not? we can make a call to the compute API to get pending instances grouped by host.
15:30:42 jaypipes: there is a huge difference between ComputeNode.get_all() and Instance.get_all() you know
15:30:56 jaypipes: oh, so you would need to persist information about pending instances
15:31:16 bauzas: there already is persisted information: it's in the instances table in the Nova DB.
15:31:33 jaypipes: as I said, that wouldn't be performance happy
15:31:40 bauzas: just query the compute API to get pending instances for each host on scheduler startup.
15:31:56 jaypipes: how do you get that list of instance ids ?
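
To illustrate the reasoning in the exchange above, here is a rough sketch of a scheduler-side host state that holds pending claims in memory (so concurrent requests see reduced capacity before the compute node updates the database) and rebuilds that view from the compute layer after a restart. Every name here, including the assumed compute_api helper, is hypothetical and only shows the shape of the idea, not Nova's actual code.

    # Rough, hypothetical sketch of in-memory pending claims with
    # re-sync on scheduler startup; not actual Nova code.

    from collections import defaultdict


    class HostState(object):
        def __init__(self, free_ram_mb):
            self.free_ram_mb = free_ram_mb   # last known value from the DB
            self.pending_ram_mb = 0          # claims not yet visible in the DB

        def consume(self, ram_mb):
            # Called when the scheduler picks this host; the claim is held
            # in memory so later requests in the same scheduler see the
            # reduced capacity before the compute node writes to the DB.
            self.pending_ram_mb += ram_mb

        def available_ram_mb(self):
            return self.free_ram_mb - self.pending_ram_mb


    def rebuild_pending_view(compute_api, context, host_states):
        """Re-establish pending claims after a scheduler restart.

        The source of truth is the compute layer: ask it for instances
        that are still building, group them by host, and re-apply their
        requested RAM as pending claims.
        """
        pending = defaultdict(int)
        for inst in compute_api.get_building_instances(context):  # assumed helper
            pending[inst['host']] += inst['memory_mb']
        for host, ram_mb in pending.items():
            if host in host_states:
                host_states[host].pending_ram_mb = ram_mb

Note that this sketch leaves open the two objections raised in the log: multiple schedulers would each hold their own in-memory view, and the startup query over instances is heavier than a ComputeNode.get_all().
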
15:32:09 unfortunately, I need to attend another meeting right now :(
15:32:22 bauzas: you get that list of instance objects via a call to the compute API.
15:32:29 ok, I think your proposal looks good but the devil is in the details
15:32:58 it looks good because the logic is quite the same as for the aggregates spec, which I'm fine with
15:33:04 but again, let's discuss that later off the meeting
15:33:13 do we need to continue this on IRC later when jaypipes is available again?
15:33:14 jaypipes: edleafe: you ok ?
15:33:20 n0ano: huge +1
15:33:46 yes
15:33:46 jaypipes, do you know when you'll be available again?
15:33:47 n0ano: probably best to do a google hangout, maybe thursday?
15:34:20 jaypipes, that late, I was hoping this afternoon, but if that's the soonest we can do that
15:34:53 jaypipes: the spec freeze period ends 2 weeks before K2
15:34:59 ergh
15:35:00 we need to put together a more detailed doc IMO on the end scheduler interfaces that we want to expose
15:35:14 *spec freeze exception period
15:35:16 Today sucks for me; tomorrow or Thurs is OK
15:35:20 jaypipes: fine by me (for the interfaces doc)
15:35:24 n0ano: I have one meeting after another, from now until 6pm :(
15:35:44 jaypipes, I feel your pain, let's see if we can get a common time tomorrow
15:36:04 n0ano: tomorrow, I am afk until noonish EST, afterwards, I'm good to go
15:36:06 n0ano: I'm flexible tomorrow
15:36:13 think about EU people \o
15:36:29 can asia people join o/?
15:36:47 timezones are hard
15:36:51 but it's fine if my time is too hard for you guys
15:37:08 edleafe: you're PST ?
15:37:09 I can check the comments on gerrit
15:37:16 bauzas: CST
15:37:20 Texas
15:37:30 bummer, it's late afternoon EU right now so an afternoon EST is bad, what about thurs morning EST
15:37:33 edleafe: ack, -8 from CET
15:37:57 eh, -7 even
15:37:57 bauzas: yep
15:38:16 PST, MST, CST, EST right ?
15:38:20 from west to east ?
15:38:21 UTC-6
15:38:30 bauzas: correct
15:38:34 bauzas, yep (I'm MST)
15:38:50 edleafe: ok so CET-7 (I'm UTC+1 during winter period)
15:38:53 n0ano: I have something Thursday 11-12pm EST, otherwise open
15:39:13 lemme check when 2 weeks before K2 is
15:39:30 because that's the fencing gate for merging spec exceptions
15:39:38 how about this slot on Thurs, 1500 UTC (8AM MST)
15:40:05 n0ano: works for me
15:40:36 n0ano: I have a kid to go take around 1545 but it will just take 10 mins
15:40:50 so I can attend 1500 UTC with a 10-min off window
15:41:10 #action n0ano to set up a Google hangout on Thurs, 1/15, 1500 UTC
15:41:58 https://wiki.openstack.org/wiki/Kilo_Release_Schedule
15:42:06 so let's all read edleafe's spec and be ready to discuss then, we don't have much time before the merge cutoff
15:42:12 2 weeks before K2 is 22nd Jan
15:42:39 to be clear : 22nd Jan is the deadline for merging a spec exception, provided it has been approved before
15:42:54 bauzas, I'm not too concerned, I think the bulk of the spec is fine, we just have to iron out this one issue
15:43:16 some code could probably be asked for to validate the spec
15:43:35 n0ano: so the sooner the better, but I'm fine with Thurs
15:43:46 I can start putting together some code based on this
15:44:13 but it will really depend on a lot of the other changes
15:44:15 edleafe: sounds like a good idea
15:44:19 edleafe, that'd be good, I don't think we're going to make major changes to the spec
15:44:43 n0ano: well, that depends on what we agree on in the hangout
15:44:57 bauzas, I'm optimistic :-)
15:44:57 it's just that it will be affected by all the changes that bauzas and jaypipes have queued up
15:46:22 bauzas: what's the status of the detach-service patch series? ready for final reviews?
15:47:14 jaypipes: you can take a look, yup
15:47:33 jaypipes: I have some fixes to do for the objectification
15:48:01 k, will do this evening (sorry, in meetings all day.. )
15:48:27 jaypipes: no worries
15:48:47 jaypipes: at least the first 3 patches in the series are good to be merged
15:49:05 jaypipes: which would be a good next step for the bp
15:50:24 bauzas, do I read that bp right, there are 14 patches to implement it?
15:51:04 n0ano: yeah, lots of fixes about providing backwards compatibility
15:51:25 we all need to do `a lot` of reviewing
15:51:42 n0ano: basically, the new field has been provided, the goal now is to get rid of all the old methods
15:51:58 and use the objects as a Facade for accessing the new way
15:52:24 n0ano: or switch back to the old way if the field is unset
15:52:56 well, we're approaching the top of the hour so:
15:53:00 #topic opens
15:53:10 anything new for today?
15:54:35 hearing a lot of crickets so I'll thank everyone, hope to see you at the hangout on Thurs and we'll talk again next week
15:54:48 thanks, n0ano!
15:54:51 see ya
15:54:53 #endmeeting
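
As a postscript on the "objects as a Facade" / "switch back to the old way if the field is unset" point near the end of the log, here is a minimal sketch of that backwards-compatibility pattern: the object prefers the new field and falls back to the legacy access path when the field was never populated. The class, field, and attribute names are assumptions for illustration, not the real detach-service patch code.

    # Hypothetical illustration of the Facade/fallback pattern mentioned
    # at the end of the meeting; not the real ComputeNode/Service code.

    class ComputeNodeFacade(object):
        def __init__(self, new_host_field=None, legacy_service=None):
            self._new_host_field = new_host_field    # set by new-style code
            self._legacy_service = legacy_service    # old relationship, kept for compat

        @property
        def host(self):
            # Prefer the new field; records written before the change
            # won't have it set, so fall back to the old access path.
            if self._new_host_field is not None:
                return self._new_host_field
            return self._legacy_service['host']  # old way

The point of the pattern, as described in the log, is that callers only ever go through the object, so the 14-patch series can migrate the old methods one by one without breaking records that predate the new field.
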