15:01:05 #startmeeting scheduler
15:01:06 Meeting started Tue Aug 13 15:01:05 2013 UTC and is due to finish in 60 minutes. The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:09 The meeting name has been set to 'scheduler'
15:01:29 anyone here for the scheduler meeting?
15:01:47 Yep (but I have to drop at half past)
15:02:13 o/
15:03:12 let's wait a minute or two and then go...
15:03:16 hi all
15:04:11 #topic Perspective for nova scheduler
15:04:25 I hope everyone has had a chance to look at boris-42's paper
15:04:30 #link https://docs.google.com/document/d/1_DRv7it_mwalEZzLy5WO92TJcummpmWL4NWsWf0UWiQ/edit#heading=h.6ixj0ctv4rwu
15:05:29 hi guys
15:05:29 my read is that, basically, he is saying that updating state info through the DB is a scaling problem, and that doing RPC calls to the scheduler would solve it
15:06:25 my intuition is to agree with that, but I believe there is a significant group that feels the DB is a more scalable solution
15:06:35 how do we resolve this dichotomy?
15:07:35 I think there were also some open questions on how a newly started scheduler gets a full set of state
15:07:56 wasn't a great deal of this hashed out on the ML?
15:08:26 And if all of this is now held just in memory by the scheduler(s), how do we get visibility into that state?
15:08:51 jog0, discussed on the ML - yes; resolved - I don't think so
15:09:07 Could be that I missed that mlist discussion (had way too much in my inbox when I came back - must stop taking holidays)
15:09:31 PhilDay, good points, but are those implementation details or architectural problems?
15:10:09 PhilDay: here is the thread http://lists.openstack.org/pipermail/openstack-dev/2013-July/012221.html
15:10:24 If we're moving to a model of not persisting the scheduling-related state in the DB, then I'd say they are architectural
15:10:39 @jog0 - thanks
15:12:45 I guess what I'm thinking is that in addition to "time to schedule a new VM" I'd like to see "time for a new scheduler to retrieve its state" as an explicit metric
15:12:47 PhilDay, would providing an API to the scheduler to access this info be sufficient, or would somehow periodically syncing to the DB work?
15:13:13 there were some good ideas at the end of that thread
15:13:20 I guess either
15:13:41 Ok - sounds like I have some more reading to do before I can contribute intelligently ;-)
15:15:05 also boris-42's paper didn't clearly show the actual issue (i.e. not sure how to reproduce their results)
15:15:17 jog0 hi
15:15:26 jog0 what's up?
15:15:51 I think we all agree the current scheduler has limitations; the questions are at what point exactly, and are there any good short-term fixes we can do for now, until Icehouse dev is open
15:16:07 boris-42: see backlog
15:16:14 jog0 yeah you are doing great work
15:16:21 jog0 around removing fanout
15:17:03 n0ano hi
15:17:13 boris-42, welcome
15:17:44 n0ano we updated our document today
15:17:54 n0ano so we are not doing a fanout call to the scheduler
15:18:08 As I understood last week's discussion, any short-term fix would keep the DB in place, but the path for updates would change from "comp->conductor->DB" to "comp->Sched->DB"
15:18:13 must read new doc
15:18:31 n0ano there is not so much change
15:18:43 PhilDay not only this change
15:18:48 Do we really think there is time to move to a non-DB model still in Havana?
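A minimal sketch of the "comp->Sched->DB" short-term path described just above: compute nodes push state updates to the scheduler over RPC, and the scheduler writes through to the DB. All names here (SchedulerStateAPI, compute_node_update) are invented for illustration, not taken from any actual Nova patch:

    # Hypothetical sketch of the short-term fix: the scheduler receives
    # host-state updates over RPC and writes through to the existing DB.
    class SchedulerStateAPI(object):

        def __init__(self, db_api):
            self.db = db_api        # the short-term fix keeps the DB in place
            self.host_states = {}   # hot in-memory copy used by the filters

        def update_host_state(self, context, host, stats):
            # keep the copy the scheduler actually filters against in memory...
            self.host_states[host] = dict(stats)
            # ...and persist it so a newly started scheduler can reload state
            self.db.compute_node_update(context, host, stats)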
15:19:00 PhilDay no
15:19:07 PhilDay this should be done in the I cycle
15:19:34 PhilDay: things like more optimized DB queries or caching are options right now
15:19:43 Ok, so if it's all to be done in I - shouldn't this be a topic to be bottomed out in HK?
15:19:57 PhilDay yes
15:20:04 PhilDay, absolutely, but doing some prep work beforehand is good
15:20:38 PhilDay yeah, we would like to prepare 1) all code 2) benchmark results on real deployments before the summit
15:20:59 OK - sorry, I got the wrong end of the stick from the start of this then. I thought you were trying to drive to a conclusion on the architecture in here
15:21:12 no no
15:21:13 =)
15:21:18 PhilDay: that's what I thought too
15:21:18 boris-42, have you considered PhilDay's concern that you need a way to look at the compute node states? Moving away from the DB makes that hard
15:21:47 so I liked some of Clint's ideas for scheduling
15:21:50 n0ano we will put all data about hosts into the DB
15:22:03 the scheduler DB
15:22:08 not only the compute_node table
15:22:19 but also data from compute_node_stats and probably from cinder
15:22:31 to be able to use different data from different projects in our scheduler
15:22:44 Just to be clear, I want any query on data to be behind an API - so I'm not wedded to it being in the DB, I just want to be sure I don't lose any visibility
15:23:04 PhilDay visibility into what?
15:23:22 The data the scheduler is using (i.e. host states, etc.)
15:23:32 PhilDay and?
15:23:38 boris-42: any cross-project DB stuff makes things much harder
15:23:44 jog0 no
15:23:49 jog0 it doesn't
15:23:52 politically
15:24:01 jog0 our goal is to have one scheduler
15:24:07 that keeps all data about hosts
15:24:12 personally, I like an API and a back channel (for debugging when the API server fails)
15:24:17 it becomes another contractual API to maintain
15:24:39 jog0 it will be much easier
15:24:42 A single scheduler that can also know about Network locality (from Quantum) and Volume locality (from Cinder)?
15:24:48 yeah
15:24:52 PhilDay yes
15:24:58 PhilDay and is actually scalable
15:25:18 PhilDay it is very useful in a lot of cases
15:25:36 PhilDay for example you are running cinder and nova on each host
15:26:11 PhilDay and would like to schedule your instance with a 200GB block device and ensure that on that host you have enough free disk in cinder =)
15:26:36 boris-42: I like the idea too, but doing it requires careful consideration to make sure it doesn't couple the assorted projects too much.
15:26:44 also, is this in the new document?
15:26:53 jog0 sorry, not ready yet
15:27:02 One other thing that's at the back of my mind (but I haven't done much thinking about it) is what it would take to plug in a third-party scheduler (like, say, MOAB) - having only an RPC interface might make that simpler I guess
15:27:16 jog0 but our goal is to finish all these things, the doc included, before the summit
15:27:47 PhilDay, is a 3rd-party scheduler really that necessary?
15:27:51 PhilDay: that is a good question, most of this discussion is around having only one scheduler
15:28:01 jog0 PhilDay it is a really huge change (not in lines of code, but in approach). So I agree that we should discuss all these things really carefully
15:28:42 n0ano: some people may want to use other information to schedule on, a simpler scheduler, etc.
15:29:07 PhilDay n0ano where is the complexity in our approach?
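To make the cross-project idea above concrete, here is a hypothetical filter check assuming the scheduler's host state carried both Nova and Cinder capacity data - the field names are invented for the example, not from any real filter:

    # Sketch: one filter that can check compute RAM and Cinder disk on the
    # same host, which only works if the scheduler holds both data sets.
    def host_passes(host_state, request):
        # enough RAM on the host for the instance...
        if host_state['free_ram_mb'] < request['ram_mb']:
            return False
        # ...and enough free Cinder space on that same host for the
        # requested block device, e.g. the 200GB case mentioned above
        return host_state['cinder_free_gb'] >= request['volume_gb']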
15:29:18 jog0, I would hope that the extensibility we've already built into the scheduler is sufficient for 99% of the users
15:29:42 Not necessary, and I wouldn't do that in favour of having all of these features in OpenStack - but it is something that comes up from time to time in conversation with customers wanting to build their own clouds.
15:29:48 PhilDay one simple scheduler that has a small number of methods (run_instance, migrate, cinder scheduler methods)
15:29:56 and one more method
15:30:01 that updates host_state
15:30:12 and could be called from different services
15:30:33 Got to dive for another call - sorry
15:30:40 PhilDay good luck
15:30:41 PhilDay: bye
15:30:54 boris-42: in short, this is a big change, huge in fact
15:30:58 all ARs to PhilDay :-)
15:31:04 sorry for joining late, but just like we decided to do a separate network service, I don't see why we can't have a pluggable scheduler service
15:31:26 it already is, you can select from multiple schedulers right now
15:31:28 jog0 I agree that it is a big change in approach, but small in LOCs =)
15:31:34 jog0 and could be done step by step
15:31:43 LOCs don't matter in this
15:31:44 jog0 but the first step should be done only in the I cycle
15:32:01 jog0 I find your current work great
15:32:04 jog0 for the H cycle
15:32:06 there was a BP to do this a while back but it got stalled
15:32:15 n0ano: however, the state management is not pluggable yet
15:32:27 boris-42: I do like this proposal, I am just saying it is tricky
15:32:33 debo_os, hence the discussion here
15:32:49 I would recommend drafting up an early idea and putting it to the ML along with an outline of what you think
15:32:54 n0ano: apologies for joining late, hence might sound repetitive :)
15:32:58 along with any history of why it was tried before
15:33:02 jog0 https://blueprints.launchpad.net/nova/+spec/no-db-scheduler
15:33:03 debo_os, NP
15:33:20 jog0 Ok, will be done soon
15:33:44 in addition to the discussion ... one of my colleagues had written up a doc for the last summit and socialized it .. https://docs.google.com/document/d/1cR3Fw9QPDVnqp4pMSusMwqNuB_6t-t_neFqgXA98-Ls/edit#
15:34:13 boris-42, looks like your BP is mainly just a link to your doc
15:34:20 there was some good feedback and folks told him to get back a little later
15:34:22 the main question is the mechanics of adding a new contractual API for all projects that wires to the scheduler
15:34:30 n0ano yes
15:34:33 it's a little like boris-42's doc
15:34:50 n0ano because the doc describes a lot
15:34:54 boris-42: should we try to merge the 2 proposals?
15:35:07 debo_os as I said in email, yes of course
15:35:14 debo_os they are really close
15:36:46 debo_os: don't use the word orchestrator in your proposal, it's an overloaded word
15:36:52 =))
15:36:53 sorry for joining the conversation late, but I like the idea of having a kind of Scheduler as a Service
15:37:28 debo_os: also the doc needs an abstract/summary
15:37:34 jog0: agreed. I need to clean it up since my colleague wrote most of it and has now left the OS world ...
15:37:37 it's TL;DR for me, skimming the slides
15:37:37 ignoring current proposals, I still don't see how to resolve the question - which is more scalable, DB vs. RPC?
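For reference, the "one simple scheduler" boris-42 describes above (a few placement methods plus one host_state update method callable from any service) would have roughly this shape. The method names follow the log; the signatures and bodies are guesses, not an actual interface:

    # Rough sketch of the proposed single scheduler service API.
    class SchedulerService(object):

        # placement methods (nova + cinder style)
        def run_instance(self, context, request_spec):
            raise NotImplementedError   # pick a host for a new instance

        def migrate(self, context, instance_uuid, request_spec):
            raise NotImplementedError   # pick a target host for a migration

        def create_volume(self, context, request_spec):
            raise NotImplementedError   # cinder-style scheduling method

        # the one extra method, callable from different services,
        # that pushes host state into the scheduler
        def update_host_state(self, context, service_name, host, stats):
            raise NotImplementedError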
15:37:50 gr8 feedback
15:37:52 n0ano RPC
15:38:10 for example
15:38:15 we have 10k nodes
15:38:29 we need to produce only 150 req/sec
15:38:32 to all schedulers
15:38:46 so 150/SCHEDULER_AMOUNT
15:38:47 per sec
15:38:55 n0ano: IMHO neither
15:39:07 even if you have only 3 schedulers for 10k nodes
15:39:09 boris-42
15:39:22 you will have to process only 50 req/sec
15:39:26 and it is nothing
15:39:35 but then again we don't need to hash this out right now
15:39:44 boris-42: let's work to merge the 2 proposals. For starters we can add this doc for reference too
15:40:14 debo_os Ok, it will be easier to merge it through email than IRC chat =)
15:40:15 we don't have to answer now but I would like to know `how' to come to an answer, right now we're kind of in a `he said, she said' situation
15:40:30 boris-42: agreed! let's work to merge the 2 proposals over email
15:40:32 n0ano we will make real benchmarks
15:40:38 n0ano on real deployments
15:40:48 n0ano will that be enough for you?
15:41:02 debo_os As I said "nods"
15:41:07 boris-42, I think we need that, measurable and reproducible would be great
15:41:39 n0ano: ++ and any proposed idea has to show it's better than the existing one and why it's better than other options
15:41:40 n0ano yes, we are going to create a new project so everybody will be able to reproduce these things
15:42:02 Well, I'm hearing some actions out of all of this:
15:42:16 #action boris-42 & debo_os to merge proposals
15:42:42 #action come up with a benchmark to measure DB vs. RPC scalability
15:42:42 yeah
15:42:50 n0ano
15:42:51 no
15:42:57 we are building a whole system
15:43:01 to test real OpenStack
15:43:10 not just this case
15:43:44 guys, if I may jump in for a sec. Is there a reason to rule out an in-memory solution?
15:43:53 hmmm, note `reproducible'; if not, then we're just providing anecdotal input
15:44:08 doron: not at all
15:44:11 i.e. I agree the DB is problematic, but RPC will have its price.
15:44:18 that's why we need to define the APIs first, instead of the implementation
15:44:29 hence boris-42 and I need to merge the proposals ....
15:44:43 doron I didn't understand the question
15:45:05 doron: ideally, if you run the scheduler as a service you could swap out the implementation and have an in-memory solution for the state management
15:45:22 boris-42: did you consider storing the needed data in memory instead of a DB?
15:45:30 doron we will use in-memory key-value storage
15:45:38 doron to avoid scheduler fanout
15:45:50 boris-42: gr8. this is what I had in mind
15:46:12 doron so each request to update host state will be processed by only one scheduler
15:46:35 doron this allows us to solve the problem of too much RPC for one scheduler =)
15:46:48 just by adding more schedulers to the system
15:46:55 makes sense. I'll go over your merged doc
15:46:56 boris-42: I guess if we define the crisp update APIs etc ... the implementation could be separated and we will have all the scaling features you want ....
15:47:16 debo_os?
15:47:27 debo_os we already implemented this part of our scheduler
15:47:46 debo_os the switch from Nova.DB to scheduler.DB
15:48:06 I'll take a look. sorry for the noise.
15:48:17 doron ok, I think we will publish the code soon
15:48:25 so I will add you as a reviewer
15:48:33 thanks!
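To make the 10k-node arithmetic from earlier in the discussion explicit: the log gives only the 150 req/sec total and the 50 req/sec per-scheduler figures, so the ~66 second per-node reporting interval below is an assumption that reproduces those numbers:

    # Back-of-envelope check of boris-42's scaling numbers.
    nodes = 10000
    period_sec = 66.0                  # assumed per-node reporting interval
    total_rps = nodes / period_sec     # ~150 updates/sec across all schedulers

    schedulers = 3
    per_scheduler_rps = total_rps / schedulers   # ~50 updates/sec each

    print("%.0f req/sec total, %.0f per scheduler"
          % (total_rps, per_scheduler_rps))      # prints: 152 total, 51 each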
15:48:50 boris-42: gr8
15:49:13 boris-42, now I'm confused, are you proposing that the scheduler implements its own private DB?
15:49:26 n0ano yes
15:49:32 I thought it was just maintaining state in its memory
15:49:49 n0ano that produces fanout
15:50:00 n0ano and we spoke with Mike from BlueHost
15:50:32 n0ano and he said that it would be better to use fast key-value storage, such as memcached
15:51:09 then it's not really a DB, it's just a backup for the internal memory storage
15:51:23 n0ano as I said, we didn't have enough time to update our docs before; they were updated today
15:51:33 n0ano we don't need a "real" DB
15:51:40 n0ano for temp data
15:51:48 boris-42: +1 on no need for a real DB.
15:51:55 if this isn't ready for review/discussion, why are we here?
15:52:15 jog0 I am just answering questions
15:52:42 jog0, I thought we were farther along and I wanted the answer to how we decide DB vs. RPC
15:53:02 =)
15:53:42 sorry for the misunderstanding, guys =)
15:53:51 so, it's getting late, looks like boris-42 & debo_os need to update the doc; when that is done we can revisit this
15:54:04 n0ano it is already updated
15:54:17 boris-42, what about the merge with debo_os?
15:54:29 n0ano I meant about using a DB in the scheduler
15:54:34 n0ano sorry =)
15:54:48 n0ano ok, by next session we will have the new combined doc
15:54:56 boris-42, NP
15:55:02 And I agree with jog0 that at this moment we should be discussing Havana
15:55:03 work
15:55:06 not I
15:55:29 let's see where we are next week, especially with an eye to what we need/want to do for Havana
15:55:56 #topic opens
15:56:07 any opens in the remaining few minutes?
15:57:12 hearing silence, I'll thank everyone
15:57:20 #endmeeting