17:00:18 #startmeeting nova_cells
17:00:19 Meeting started Wed Jan 14 17:00:18 2015 UTC and is due to finish in 60 minutes. The chair is alaski. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:22 The meeting name has been set to 'nova_cells'
17:00:23 hi
17:00:31 bonjour
17:00:33 \o
17:00:37 o/
17:00:39 o/
17:00:42 o/
17:00:48 o/
17:01:09 well, we have a simple agenda today
17:01:21 #topic scheduling
17:01:28 link ?
17:01:33 about the agenda
17:01:38 https://wiki.openstack.org/wiki/Meetings/NovaCellsv2#Agenda
17:01:45 * bauzas is too lazy...
17:01:49 o/
17:02:01 https://review.openstack.org/#/c/141486/ is the current review for cells scheduling
17:02:26 thanks for updating it
17:02:27 last week the consensus was that we would spend this week diving into scheduling
17:03:10 so what are the current points of consternation or interest?
17:04:04 how is failure being handled in this scheme?
17:04:12 i mean, are we in a position to discuss details?
17:04:26 alaski: I'm basically +1 on your spec
17:04:39 vineetmenon: some level of detail, yes
17:04:52 alaski: I like this proposal as well
17:04:58 vineetmenon: but I would like to get the broad details discussed before getting too deep
17:04:59 alaski: some details are a bit debatable, that said
17:05:20 agreed, I'm pretty much on board with that spec
17:05:31 alaski: I'm really concerned about how you plan to persist those request details (so as not to name the devil by its name)
17:05:39 alaski: I like the general idea, question about the cells v1 support though
17:05:58 alaski: I have some questions that I'm commenting on in the spec but I think we can discuss here as well
17:06:02 alaski: can we not just have the top level scheduler send an RPC to the child scheduler, rather than do a partial decision?
17:06:28 johnthetubaguy: that's really a detail for later, right?
17:06:31 meaning,
17:06:33 johnthetubaguy: I really like the idea of having the cells API manage the 2 calls rather than a top sched proxying to children
17:06:40 johnthetubaguy: yes. but let me elaborate for a sec
17:06:57 the concerning bit is handling the api contract, and what happens after the fact to get to a cell,host is something else
17:07:08 johnthetubaguy: basically what dansmith said
17:07:14 \o/
17:07:18 heh
17:07:20 dansmith: right
17:07:29 yeah, that's fair
17:07:35 that was indeed my 2nd question
17:07:49 I recall the first one: how do you plan to persist the request
17:07:53 I like the proxy option...
17:08:18 bauzas: essentially by persisting what is sent in the API request
17:08:27 bauzas: in a very basic manner for now
17:08:36 we need to generate them a uuid
17:08:44 but otherwise, if we store what they sent us, plus the uuid, that's enough I think
17:08:57 yep, that was my thinking as well
17:09:06 alaski: so let me say the devil's name, don't you think that http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/request-spec-object.html would be a dependency FWIW?
17:09:16 alaski: about the request_spec store, I guess the other way I remember dansmith mentioned was instead having the instance "cache" hold the data from the beginning? But that's a detail really, that's just how it's stored, I agree we need to store it
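As a concrete illustration of the approach alaski and dansmith converge on here — persist the raw API request plus a generated uuid until the instance lands in a cell — a minimal sketch follows. The model, table, and helper names are hypothetical, invented for illustration, not Nova's actual schema:

```python
import json
import uuid

from sqlalchemy import Column, Integer, String, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class BuildRequest(Base):
    """Short-lived record of a boot request awaiting cell placement."""
    __tablename__ = 'build_requests'

    id = Column(Integer, primary_key=True)
    instance_uuid = Column(String(36), unique=True, nullable=False)
    # The request body exactly as the user sent it; because the row is
    # deleted once the instance is created in a cell, the stored format
    # can iterate over time without heavy compatibility/versioning work.
    request_body = Column(Text, nullable=False)


def persist_build_request(session, api_request_body):
    """Generate a uuid for the instance and store the raw request."""
    instance_uuid = str(uuid.uuid4())
    session.add(BuildRequest(instance_uuid=instance_uuid,
                             request_body=json.dumps(api_request_body)))
    session.commit()
    return instance_uuid
```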
17:09:22 provided that we persist that object of course
17:10:22 bauzas: I'm not too concerned about whether we persist the object, or the input that allows the creation of the object
17:11:04 alaski: well the history showed us that persistency is worth good
17:11:05 bauzas: I don't want to be blocked on the request spec object, but if it's in place I think it would be worth using
17:11:15 I am trying to think about any other APIs that return before creating an object in a specific cell, but I guess instance is the only one…
17:11:21 bauzas: persistency? :P
17:11:32 instance is the big one for sure
17:11:36 dansmith: aren't we all French, someone said last week in the news ? :)
17:11:47 they said persistency? awesome
17:12:05 * bauzas reading a dictionary now
17:12:18 johnthetubaguy: there might be more, but we'll tackle them as they come up
17:12:32 eh I was close, only 2 letters to change
17:13:06 alaski: agreed on the plan, we can maybe iterate on what you need first, and integrate with ReqSpec while it comes
17:13:17 johnthetubaguy: where it's stored is a detail, but what I like about using a separate table for the persistence right now is that since the data is short-lived we can iterate that store over time without worrying too much about compatibility/versioning
17:13:23 alaski: agreed, I was just wondering if going the tasks route will help give us a more general solution than storing "build_request", but I am probably thinking too deep
17:13:35 alaski: yeah, agreed, too deep
17:13:50 alaski: and FYI https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/request-spec-object,n,z is implemented on the object side
17:13:51 I agree with having to store "the request" for the reasons you state
17:14:20 bauzas: cool, I'll look into it
17:14:27 sounds like we're in violent agreement then ?
17:14:30 johnthetubaguy: cool
17:14:44 bauzas: with the broak strokes at least
17:14:50 s/broak/broad/
17:14:55 alaski: do your homework, create a table and then we'll see how to move to the object later on
17:15:19 alaski: of course, you still need to iterate on your spec by adding details on the *persistance*
17:15:33 yeah, needs fleshing out
17:15:41 yeah. so realistically I think this is multiple specs
17:16:00 alaski: +1
17:16:09 we need to split things out
17:16:27 there is also some scheduling homework
17:16:30 I can break this out now that there's general agreement, but I'm not sure what we can get moving during the freeze
17:16:37 but I'll push on what I can
17:16:49 alaski: are you planning to implement something for Kilo ?
17:17:04 bauzas: yes
17:17:17 alaski: I guess the approved specs then, right ?
17:17:20 there are already approved specs which need implementations
17:17:40 and I would like to continue into scheduling if we get to that point
17:17:47 alaski: hence my point, it's not really a problem to rewrite those scheduler specs, as that's not planned for Kilo
17:17:47 how that's done is an open question right now
17:18:08 alaski: yeah, but we can work on drafts and merge in L
17:18:27 alaski: I can see some point where the scheduler should serve CellsV2 as destinations
17:18:30 the approved specs are for cell-instance-mapping and the scheduler, right
17:18:43 what are you planning to implement in those?
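The approved cell-instance-mapping spec mentioned here, together with the "2nd DB" wiring bauzas raises, implies two lookup tables in the new top-level database: one describing how to reach each cell, one recording which cell an instance landed in. A rough sketch, with names that are illustrative guesses at the spec's direction rather than the final schema:

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

APIBase = declarative_base()


class CellMapping(APIBase):
    """One row per cell: how to reach its message queue and database."""
    __tablename__ = 'cell_mappings'

    id = Column(Integer, primary_key=True)
    uuid = Column(String(36), unique=True, nullable=False)
    name = Column(String(255))
    transport_url = Column(String(255))        # cell's MQ endpoint
    database_connection = Column(String(255))  # cell's DB URL


class InstanceMapping(APIBase):
    """Which cell an instance ended up in, keyed by instance uuid."""
    __tablename__ = 'instance_mappings'

    id = Column(Integer, primary_key=True)
    instance_uuid = Column(String(36), unique=True, nullable=False)
    cell_id = Column(Integer, nullable=False)  # FK to cell_mappings.id
```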
17:18:44 I mean that the scheduler should provide cells as a result
17:18:49 vineetmenon: not scheduling
17:19:12 well, I think the extra DB thing is already a huge thing :)
17:19:16 bauzas: sure, let's focus on what's approved and iterate on this in the mean time
17:19:29 alaski: because you need to wire up all the bits for creating a 2nd DB
17:19:45 are there more specifics on scheduling that anyone would like to dive into now?
17:19:47 alaski: did you work on a POC for this btw. ?
17:20:06 bauzas: I have not
17:20:12 alaski: again, I think that for scheduling, we need to split this spec into specific items
17:20:26 and see what needs to change in the scheduler
17:20:41 the current proposal is vague about the capability of providing cells
17:20:55 do you plan to merge the Cells V1 scheduler code into the mainstream one ?
17:20:59 alaski: ^
17:21:29 bauzas: not exactly
17:21:36 alaski: cool
17:21:37 bauzas: how will the scheduler handle the case of the 2-layer approach?
17:21:53 belmoreira: I was thinking of extending the Scheduler API
17:22:03 by adding a new method
17:22:15 but will the top scheduler be aware of everything anyway?
17:22:20 bauzas: there are some filters and weighers for cells that the scheduler should be able to handle. We'll need to determine the gaps that need to be filled so that can happen
17:22:26 that would be the safest approach, as many bps are planning to work on select_dest()
17:22:42 alaski: agreed, we need to jump into the details
17:23:06 alaski: and I have to admit I have to dig into the Cells V1 scheduler code, my knowledge of it being vague
17:23:18 bauzas: but one of the goals of cells is to split things...
17:23:42 belmoreira: I don't get your point ?
17:23:51 bauzas: it's quite similar to the scheduler in its manner of operation, but it schedules primarily by flavor availability
17:24:03 alaski: ack
17:24:11 I thought the plan was to make the scheduling flexible enough to use single-level scheduling by default, but allow 2-level if there was some reason
17:24:20 like, a filter that could punt to a second scheduler
17:24:40 ***scalability*** ?
17:24:40 because with a caching scheduler, I don't really understand why two levels are needed
17:24:57 dansmith: in that case the first scheduler will only have limited, light info, right?
17:25:19 belmoreira: no, I don't think so, but it may make lightweight decisions
17:25:44 dansmith: for example we have filters enabled in only some cells
17:25:56 dansmith: I think we should be able to pick from all hosts from all cells in one go, and just tell the caller which cell it was (at least until some level of scale)
17:26:07 johnthetubaguy: that's what I'd like, yeah
17:26:13 dansmith: so, the first scheduler is powerful enough to select a cell, host tuple?
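A toy sketch of the single-level idea johnthetubaguy describes just above — one scheduler considering every host in every cell in one pass, with the cell carried as a property of the host, so the result is effectively a (cell, host) tuple. Every name here is hypothetical, purely to make the shape of the idea concrete:

```python
from dataclasses import dataclass


@dataclass
class HostCandidate:
    host: str
    cell: str            # the cell is just a property of the host
    free_ram_mb: int


def select_destination(candidates, filters, ram_needed):
    """Filter hosts from every cell in one pass, then weigh them."""
    viable = [h for h in candidates
              if h.free_ram_mb >= ram_needed
              and all(f(h) for f in filters)]
    if not viable:
        raise RuntimeError('No valid host found in any cell')
    best = max(viable, key=lambda h: h.free_ram_mb)
    # The caller learns which cell was chosen along with the host.
    return best.cell, best.host
```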
17:26:18 johnthetubaguy: +1
17:26:22 vineetmenon: yeah
17:26:23 we would need some extra stuff to filter on cell properties, but that just works
17:26:24 johnthetubaguy: +1
17:26:52 dansmith: that's possible, provided we define a clear abstraction
17:27:05 dansmith: at the moment, the mainstream scheduler is host-centric
17:27:19 * vineetmenon is thinking about scalability issues with thousands of nodes
17:27:22 the problem is when the caching scheduler's DB call takes so long that it's doing nothing else, but we can hopefully ignore that case
17:27:41 well, I think a cell-aware scheduler is just one that has info about cells and extra filters to make cell-based decisions early in the stack and then host-based decisions once you select a cell
17:27:42 I like dansmith's idea of the flexibility to go to two-level later on
17:27:52 right, and so once that's properly abstracted,
17:27:57 bauzas: agreed. no matter the physical setup of the scheduler, I think it logically needs to operate on cells and hosts
17:28:00 vineetmenon: there were some plans to use kafka as the data store to get incremental updates, a la the no-db-scheduler plan from a few releases back, but I am ignoring that as a detail
17:28:10 we should have the ability for a filter to return "contact scheduler at 1.2.3.4 to continue"
17:28:18 and just let it punt to the next one
17:28:28 alaski: I think it's a list of hosts, and cell is a property of the host, but it's the same difference really
17:28:40 that way if you *want* to have lots of schedulers, with different configs, you can write filters such that you'll proceed from top to mid to bottom scheduler before a decision is made
17:28:52 +1
17:29:00 johnthetubaguy: I disagree, because cells have capabilities of their own with no regard to the hosts
17:29:02 dansmith: but you mean that a filter would issue a call to a 2nd scheduler ?
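dansmith's punt idea could look roughly like the following: a filter that, instead of filtering hosts itself, tells the scheduler to stop and continue at another scheduler endpoint. This is a hypothetical sketch — PuntToScheduler, cell_delegation_filter, and the request shape are all invented for illustration, not a proposed Nova API:

```python
class PuntToScheduler(Exception):
    """Raised by a filter to say: stop, continue at this scheduler."""
    def __init__(self, delegate_url):
        super().__init__(delegate_url)
        self.delegate_url = delegate_url


def cell_delegation_filter(host, request):
    """Example filter: GPU requests get punted to a dedicated scheduler."""
    if request.get('gpus', 0) > 0:
        raise PuntToScheduler('http://1.2.3.4:8778')  # next-level scheduler
    return True


def schedule(hosts, request, filters):
    try:
        viable = [h for h in hosts if all(f(h, request) for f in filters)]
    except PuntToScheduler as punt:
        # Whether we proxy to the delegate ourselves or return its
        # address to the caller to re-query is the open question raised
        # below; here we just surface the delegation.
        return ('punt', punt.delegate_url)
    return ('hosts', viable)
```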
17:29:13 alaski: that's the same as aggregates today though
17:29:23 bauzas: or return some result that says "stop what you're doing, pass what you have to $scheduler2 and let it proceed"
17:29:25 alaski: +1
17:29:38 johnthetubaguy: and that's why we're working on changing how we access that aggregate info
17:29:43 bauzas: whether that is a result to the client and the client re-queries, or we proxy, is an open question, but I don't think it matters much
17:29:45 alaski: we can just make those properties of every cell, and it flattens things, and filters/weighers just pick which they want to act on
17:30:12 but it's a detail, and it doesn't change much
17:30:18 johnthetubaguy: right, aggregates seem like the way you handle such things to me
17:30:23 dansmith: interesting, I just need to think about that
17:30:26 johnthetubaguy: and keep a host-centric scheduler
17:30:30 bauzas: yeah, this sounds like a subset of aggregates
17:30:30 dansmith: that sounds like sharding
17:30:42 dansmith: we can just map cells to aggregates for "v1" support I guess
17:30:50 johnthetubaguy: yeah
17:30:57 johnthetubaguy: indeed
17:31:25 anyway, we should put our thoughts in a spec :)
17:31:31 johnthetubaguy: humm
17:31:35 belmoreira: in answer to your concern, the ability to filter differently for different cells should remain if you need it, it would just be handled in the scheduler rather than in nova
17:32:17 dansmith: I'm very much in agreement with your statement of things, but not quite sure what exactly host-centric means yet
17:32:28 belmoreira: +1 with alaski, that's just a matter of configuring nova-scheduler, not nova-api
17:32:36 alaski: meaning that the scheduler's goal is to select a host, not to select a cell
17:32:57 dansmith: gotcha, then yeah I'm on the same page
17:33:14 dansmith: +1
17:33:47 alaski: btw. http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/isolate-scheduler-db-aggregates.html will be the new way for the scheduler to work with aggregates
17:34:07 it sounds feasible to map that design onto cells
17:34:07 so is everyone good with taking this and splitting out some detailed specs to continue the discussion on?
17:34:13 +1
17:34:17 bauzas: thanks, will take a look
17:34:21 yup
17:34:52 great, moving on then
17:34:53 yes
17:34:57 #topic open discussion
17:35:21 I had an action from last week
17:35:21 move to adjourn? :)
17:35:25 damn.
17:35:28 eh
17:35:31 dansmith: hah
17:35:48 bauzas: any update?
17:35:59 I'll be very brief, I promise :)
17:36:04 so, 73 tests are failing now
17:36:51 so I'm about to add a new job like https://github.com/openstack-infra/project-config/blob/f98caac794f6d1327cb7565c446c026cc951a873/jenkins/jobs/devstack-gate.yaml#L816-L829 by adding those 73 tests
17:37:12 anyone figured out the reason for the new year +30 bonanza?
17:37:13 everybody agrees ?
17:37:40 +1
17:37:59 and I won't touch the job above
17:38:05 alaski: you ok ?
17:38:09 bauzas: I like that. If we can get that to voting in a reasonable time it will prevent backslides
17:38:18 alaski: ok, will do
17:38:23 that's it
17:38:39 vineetmenon: I have not. it's a TODO for me that I haven't done yet
17:39:01 bauzas: thanks
17:39:15 I would like to point out https://review.openstack.org/#/c/145922/
17:39:32 and that's it from me
17:39:36 anything else?
17:39:53 alaski: we would like to contribute to the implementation
17:40:25 belmoreira: +1
17:40:41 belmoreira: great. ping me outside of the meeting and we can discuss that
17:40:49 alaski: ok
17:41:33 dansmith: anything from you? :)
17:41:40 move to adjourn :)
17:41:46 seconded
17:42:01 sld
17:42:02 sold
17:42:09 motion passes
17:42:11 thanks everyone!
17:42:26 #endmeeting