14:00:09 #startmeeting nova_scheduler
14:00:09 Meeting started Mon Jun 20 14:00:09 2016 UTC and is due to finish in 60 minutes. The chair is edleafe. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:13 The meeting name has been set to 'nova_scheduler'
14:00:15 Anyone here today?
14:00:17 o/
14:00:19 o/
14:00:22 <_gryf> o/
14:00:28 o/
14:00:34 o/
14:01:09 o/
14:01:27 Let's wait another minute for the latecomers...
14:01:38 In the meantime, happy solstice!
14:01:49 * rlrossit feels the sunburn
14:02:43 * johnthetubaguy lurks in a multi-tasking way
14:03:28 * bauzas mentions he's out of coffee
14:03:29 Well, I guess we should get started
14:03:43 #topic Specs and Reviews
14:03:54 o/
14:03:59 There is only one on the agenda: rlrossit, take it away!
14:04:05 alright!
14:04:10 #link https://review.openstack.org/#/c/330145/
14:04:24 so I have a crazy idea and am looking for some feedback
14:04:41 I want to try and PoC a new scheduler that uses a different HostStateManager
14:05:00 o/
14:05:00 the new manager will use Redis (or some other shared memory) to maintain the state of all of the hosts
14:05:11 o/
14:05:15 that way we don't need to be going back to the DB or caching; it's always up-to-date on all of the nodes
14:05:32 I just whipped up that spec last week, so it's still really rough around the edges
14:05:56 but initially, I'm just looking for an answer to the questions: Am I crazy for trying this? Has anyone done this before?
14:05:57 rlrossit: until the scheduler actually owns that data, it will never be fully accurate.
14:05:58 it's a very old story :)
14:06:10 jaypipes: good point
14:06:27 rlrossit: done it with redis, not sure. there was talk of trying cassandra for that
14:06:29 currently my plan is to have the compute nodes update their state to the shared memory
14:06:35 rlrossit: Google have done this before. http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41684.pdf
14:07:03 I saw that when looking at johnthetubaguy's backlog scheduler spec
14:07:08 doffm: rlrossit: jaypipes: https://etherpad.openstack.org/p/liberty-nova-scalable-scheduler
14:07:24 rlrossit: and then, once the scheduler *does* own that data, putting it into Redis essentially makes it throwaway data and you will lose any transactional contracts that an RDBMS provides.
14:07:28 rlrossit: my expectation for a proposal like that would be to have numbers showing that it's worthwhile, and a clear explanation of the failure modes and how that compares to now
14:07:45 alaski: that's the plan of the quick-and-dirty PoC
14:08:10 I just didn't want to be met with pitchforks and torches when I come back with numbers later :)
14:08:20 * edleafe is back after a wifi drop
14:08:43 well, my original thinking is that we can still have the computes owning the resources but have the schedulers share a global state
14:08:51 rlrossit: I'm not sure you can avoid that completely :)
14:08:52 that's not mutually exclusive
14:09:12 bauzas: that's my current plan
14:09:16 So is the idea that this will enable multiple schedulers better, since they will share a common view of the state of the compute nodes?
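
To make the shared host-state idea above a bit more concrete, here is a minimal illustrative sketch (not from the meeting or the spec) of one way it could look: compute nodes periodically publish their state into Redis, and every scheduler reads the same keys instead of rebuilding a private cache. The key layout and the report_host_state/get_all_host_states helpers are assumptions invented for this example; only the redis-py calls themselves (hset, hgetall, scan_iter) are real library API.

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Hypothetical key layout: one Redis hash per compute host.
    HOST_KEY = "scheduler:host:{}"


    def report_host_state(hostname, free_ram_mb, free_disk_gb, vcpus_free):
        """Run periodically on each compute node to publish its current state."""
        r.hset(HOST_KEY.format(hostname), mapping={
            "free_ram_mb": free_ram_mb,
            "free_disk_gb": free_disk_gb,
            "vcpus_free": vcpus_free,
        })


    def get_all_host_states():
        """Run in any scheduler; every scheduler sees the same shared view."""
        for key in r.scan_iter(match=HOST_KEY.format("*")):
            yield key.split(":")[-1], r.hgetall(key)

Under these assumptions, any number of scheduler processes could call get_all_host_states() concurrently with no per-scheduler cache to invalidate, which is the property being debated in the discussion that follows.
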
14:09:21 or, rather, my current plan for the current scheduler
14:09:36 rlrossit: see the etherpad I mentioned above, it was kind of an idea I had in the past
14:09:39 yeah it'll allow horizontal scaling without getting contention on the hosts
14:09:42 rlrossit: honestly the interesting thing to me would be what apis do you need between computes and the scheduler so that what you're working on and what jaypipes is working on could both coexist
14:10:05 the real problem to me is to find some way to address that kind of shared state without pulling in some huge dep
14:10:31 bauzas: yeah...
14:10:44 Yeah, the current scheduler design doesn't do shared state very well
14:10:49 rlrossit: There will still be contention. Shared state doesn't mean IMMEDIATE update on all the schedulers. It's EVENTUALLY consistent, not actually consistent.
14:10:50 that's the part where I think the main disagreement will come
14:10:54 tbh, I'm still totally on page with the resource-providers specs
14:11:15 those specs are for managing our heterogeneous ways of counting resources
14:11:28 doffm: I think that is understood. The problem now with horizontal scaling is that increasing the number of schedulers increases the raciness
14:11:33 doffm: I want to see how many schedulers it takes to get contention with redis, I bet it'll be a lot
14:11:40 I hope so.
14:12:00 the concept of "owning" that resource is just one side blueprint that was discussed to be left up for discussion post-Newton, given the progress we make on Newton
14:12:05 yeah, the goal is to get to more than 2 schedulers :)
14:12:13 rlrossit: so this will reduce the lag between a host being updated and all of the schedulers knowing about it?
14:12:43 edleafe: yep
14:12:55 edleafe: and it's more of an instant cache between the multiple schedulers
14:12:57 "instant"
14:13:02 :)
14:13:11 * rlrossit steps lightly around words
14:13:13 well, it's a global state cache
14:13:16 rather
14:13:19 yeah
14:13:34 host managers are local state caches
14:13:49 right, but they aren't shared
14:14:04 which is pretty expensive
14:14:05 which is why you start getting contention with multiple caching schedulers
14:14:15 bauzas: if they were, they'd be global :)
14:14:21 not really
14:14:40 this way if someone schedules to a host, all schedulers see that and have their host state updated, so if a host gets filled up, they won't schedule to it anymore
14:15:00 instead of failing and having to update their cache, and then retrying
14:15:22 the main driver to me is to keep it as light as possible and keep the scheduler(s) optimistic
14:15:23 rlrossit: what would be your time frame for getting PoC numbers?
14:15:39 that's where I think we should still have the computes owning the resources
14:15:46 edleafe: the goal is before the midcycle so doffm can present my numbers if they are good
14:15:51 Or bad.
14:15:56 i.e. the scheduler could fail fast, or give a wrong answer
14:16:01 rlrossit: cool
14:16:12 so, if there's no one that wants to kill me yet, I'll get started on the PoC
14:16:27 the only concern I have with the idea is to use Redis as is
14:16:30 So I guess the question is: what sort of numbers would be persuasive enough to get everyone to look at this more seriously?
14:16:50 that's where I think we should be more subtle
14:17:12 bauzas: that's an implementation detail, no?
14:17:30 bauzas: yeah, if this works out, I have long-term thoughts on how the deps will work
14:17:34 I think the idea is to demonstrate if the general approach helps.
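
Continuing the hypothetical layout from the earlier sketch, here is a hedged illustration of how a scheduler could record a claim in the shared store so that other schedulers immediately see a host filling up, instead of failing and retrying as described above. claim_host(), its field names, and the rollback logic are assumptions made for this example; the relevant property is that Redis HINCRBY is atomic per field, though, as noted in the discussion, the overall view remains eventually consistent rather than transactional.

    import redis

    r = redis.Redis(decode_responses=True)


    def claim_host(hostname, ram_mb, disk_gb, vcpus):
        """Deduct a claim from the shared host state (each HINCRBY is atomic).

        Returns False, and undoes the deduction, if the host no longer fits,
        e.g. because another scheduler consumed the capacity first.
        """
        key = "scheduler:host:{}".format(hostname)
        new_ram = r.hincrby(key, "free_ram_mb", -ram_mb)
        new_disk = r.hincrby(key, "free_disk_gb", -disk_gb)
        new_vcpus = r.hincrby(key, "vcpus_free", -vcpus)
        if new_ram < 0 or new_disk < 0 or new_vcpus < 0:
            # Roll the claim back so the shared view stays consistent.
            r.hincrby(key, "free_ram_mb", ram_mb)
            r.hincrby(key, "free_disk_gb", disk_gb)
            r.hincrby(key, "vcpus_free", vcpus)
            return False
        return True
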
14:17:42 but it's not worth looking into that if this doesn't even do anything for us
14:17:48 edleafe: I dunno
14:18:20 * edleafe notes that my first commits to nova were removing redis from NASA's original design
14:18:31 edleafe: i.e. my point is that we shouldn't rely on some magic given by a backend for providing us update consensus and cache invalidation
14:19:03 "magic" is kind of harsh at this point
14:19:11 It's a cache
14:19:23 Shared by multiple schedulers
14:19:35 Is there any other cache that would not be as magic?
14:19:50 rlrossit: I also had a design for the "shared-state scheduler"
14:20:03 bauzas: Why would we write our own? 'Magic' is good. 'Magic' means a tested implementation that we worry less about.
14:20:09 rlrossit: but I think we take a different approach
14:20:29 Yingxin: do you want to give a quick summary of the differences?
14:21:03 I think part of the plan for testing is also to look at Yingxin's changes.
14:21:25 doffm: indeed it is
14:21:31 edleafe: there is no global cache in my design; they just get quickly synchronized from incremental updates.
14:22:25 I'll publish the new test results of my prototype to the ML :)
14:22:43 OK, great
14:22:51 doffm: "magic" means "strong dependency for us" that would make us sensitive to updates :)
14:23:17 Yingxin: would you review rlrossit's spec and give your feedback, as you've also worked on this issue?
14:23:21 I understand, there is a tradeoff. :)
14:24:10 edleafe: sure
14:24:16 OK, let's continue this on the spec. And I hope that rlrossit and doffm keep us updated between now and the midcycle
14:24:35 Any other specs or reviews that we need to discuss here?
14:24:37 I will do my best to take copious notes
14:25:41 OK, let's move on
14:25:52 #topic Midcycle
14:26:10 * edleafe can never decide if there's a hyphen in midcycle or not
14:26:28 So a quick show of hands: who's going, and who isn't?
14:26:59 * bauzas waves hand
14:27:02 * _gryf will be there. probably.
14:27:10 o/
14:27:24 * edleafe will be there
14:27:34 Will also be there.
14:27:43 edleafe: I will attend it for sure, but I will attend remotely
14:28:00 I will be there
14:28:09 as will jaypipes
14:28:27 diga: do you know if there will be a remote system in place?
14:29:08 yes, if possible we can set up webex
14:29:24 What I'd like to do is make sure we've identified the issues that need discussion ahead of time
14:29:25 last time I attended magnum via webex
14:30:12 well, nova midcycles are pretty hard to attend remotely, tbh
14:30:22 first, the audience is larger in the room
14:30:22 So rather than start yet another etherpad, how about we keep editing the meeting agenda for now, and we can be sure to discuss these in the meetings before the midcycle
14:30:38 second, the agenda is pretty free up to the last minute
14:30:51 and third, the flow of the conversation is pretty high
14:30:55 bauzas would know (last year's Rochester meetup)
14:31:25 so, in general, we could maybe try to set up some kind of connectivity, but that's mostly just audio, without asking folks to participate
14:31:47 because that would slow down the convos
14:32:03 diga: were you able to participate in the Magnum discussions? Or simply listen?
14:32:18 either way, it's something I guess mriedem hasn't planned yet
14:32:26 edleafe: participated in the magnum discussion
14:33:15 diga: well, I echo bauzas's concern, but I guess it's worth a try
14:33:37 edleafe: Thank you :)
14:34:17 So I guess what I'd like to see in the next week is for people on this subteam to start adding their ideas for the midcycle to the agenda page
14:34:25 #link https://wiki.openstack.org/wiki/Meetings/NovaScheduler
14:35:07 Since we only have limited time at the midcycle, we should discuss these topics and identify the most important for F2F discussion
14:35:35 Sound good to everyone?
14:37:20 * edleafe notes that silence == agreement
14:37:37 agreed
14:37:50 I am fine with this, edleafe
14:38:03 #topic Opens
14:38:25 So, before I send you all back to being productive, anyone have any other topics to discuss?
14:38:38 edleafe, anything related to the scheduler that I could help with?
14:38:54 edleafe: any date you are planning for the mid-cycle?
14:39:15 diga: https://wiki.openstack.org/wiki/Sprints/NovaNewtonSprint#Hotels
14:39:19 oops
14:39:22 diga: https://wiki.openstack.org/wiki/Sprints/NovaNewtonSprint
14:39:32 July 19-21
14:39:38 edleafe: thanks
14:40:01 sudipto: have you seen the Resource Providers work?
14:40:16 edleafe, yeah.
14:40:32 edleafe, read the spec, saw a few reviews. Nothing in particular yet though.
14:40:48 sudipto: then reviewing the series that now starts with https://review.openstack.org/#/c/328276
14:40:53 would be a good start
14:40:57 edleafe, alright.
14:41:08 edleafe, will do.
14:41:12 edleafe, thanks!
14:41:38 Anything else to discuss?
14:42:25 OK, thanks everyone! Now go back to work!!
14:42:27 #endmeeting