17:00:18 <alaski> #startmeeting nova_cells
17:00:19 <openstack> Meeting started Wed Jan 14 17:00:18 2015 UTC and is due to finish in 60 minutes.  The chair is alaski. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:22 <openstack> The meeting name has been set to 'nova_cells'
17:00:23 <vineetmenon> hi
17:00:31 <bauzas> bonjour
17:00:33 <bauzas> \o
17:00:37 <alaski> o/
17:00:39 <melwitt> o/
17:00:42 <dansmith> o/
17:00:48 <belmoreira> o/
17:01:09 <alaski> well, we have a simple agenda today
17:01:21 <alaski> #topic scheduling
17:01:28 <bauzas> link ?
17:01:33 <bauzas> about the agenda
17:01:38 <alaski> https://wiki.openstack.org/wiki/Meetings/NovaCellsv2#Agenda
17:01:45 * bauzas is too lazy...
17:01:49 <edleafe> o/
17:02:01 <alaski> https://review.openstack.org/#/c/141486/ is the current review for cells scheduling
17:02:26 <bauzas> thanks for having updated it
17:02:27 <alaski> last week the consensus was that we would spend this week diving into scheduling
17:03:10 <alaski> so what are the current points of consternation or interest?
17:04:04 <vineetmenon> how is failure being handled in this scheme?
17:04:12 <vineetmenon> i mean, are we in a position to discuss details?
17:04:26 <bauzas> alaski: I'm basically +1 your spec
17:04:39 <alaski> vineetmenon: some level of detail, yes
17:04:52 <belmoreira> alaski: I like this proposal as well
17:04:58 <alaski> vineetmenon: but I would like to get the broad details discussed before getting too deep
17:04:59 <bauzas> alaski: some details are a bit debatable that said
17:05:20 <dansmith> agreed, I'm pretty much on board with that spec
17:05:31 <bauzas> alaski: I'm really concerned about how you plan to persist those request details (for not naming the devil by its name)
17:05:39 <johnthetubaguy> alaski: I like the general idea, question about the cells v1 support though
17:05:58 <belmoreira> alaski: I have some questions that I'm commenting in the spec but I think we can discuss here as well
17:06:02 <johnthetubaguy> alaski: can we not just have the top level scheduler send an RPC to the child scheduler, rather than do a partial decision?
17:06:28 <dansmith> johnthetubaguy: that's really a detail for later, right?
17:06:31 <dansmith> meaning,
17:06:33 <bauzas> johnthetubaguy: I really like the idea of having the cells API manage the 2 calls rather than a top sched proxying children
17:06:40 <alaski> johnthetubaguy: yes.  but let me elaborate for a sec
17:06:57 <dansmith> the concerning bit is handling the api contract, and what happens after the fact to get to a cell,host is something else
17:07:08 <alaski> johnthetubaguy: basically what dansmith said
17:07:14 <dansmith> \o/
17:07:18 <alaski> heh
17:07:20 <bauzas> dansmith: right
17:07:29 <johnthetubaguy> yeah, that's fair
17:07:35 <bauzas> that was indeed my 2nd question
17:07:49 <bauzas> I recall the first one : how do you plan to persist the request
17:07:53 <belmoreira> I like the proxy option...
17:08:18 <alaski> bauzas: essentially by persisting what is sent in the API request
17:08:27 <alaski> bauzas: in a very basic manner for now
17:08:36 <dansmith> we need to generate them a uuid
17:08:44 <dansmith> but otherwise, if we store what they sent us, plus the uuid, that's enough I think
17:08:57 <alaski> yep, that was my thinking as well
17:09:06 <bauzas> alaski: so let me say the devil's name, don't you think that http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/request-spec-object.html would be a dependency FWIW
17:09:07 <bauzas> ?
17:09:16 <johnthetubaguy> alaski: about the request_spec store, I guess the other way I remember dansmith mentioned was instead having the instance "cache" hold the data from the beginning? But that's a detail really, that's just how it's stored, I agree we need to store it
17:09:22 <bauzas> provided that we persist that object of course
17:10:22 <alaski> bauzas: I'm not too concerned about whether we persist the object, or the input that allows the creation of the object
17:11:04 <bauzas> alaski: well the history showed us that persistency is worth good
17:11:05 <alaski> bauzas: I don't want to be blocked on the request spec object, but if it's in place I think it would be worth using
17:11:15 <johnthetubaguy> I am trying to think about any other APIs that return before creating an object in a specific cell, but I guess instance is the only one…
17:11:21 <dansmith> bauzas: persistency? :P
17:11:32 <dansmith> instance is the big one for sure
17:11:36 <bauzas> dansmith: aren't we all French, someone said last week in the news ? :)
17:11:47 <dansmith> they said persistency? awesome
17:12:05 * bauzas reading a dictionary now
17:12:18 <alaski> johnthetubaguy: there might be more, but we'll tackle them as they come up
17:12:32 <bauzas> eh I was close, only 2 letters to change
17:13:06 <bauzas> alaski: agreed on the plan, we can maybe iterate on what you need first, and integrate with ReqSpec while it comes
17:13:17 <alaski> johnthetubaguy: where it's stored is a detail, but what I like about using a separate table for the persistence right now is since the data is short lived we can iterate that store over time without worrying too much about compatibility/versioning
17:13:23 <johnthetubaguy> alaski: agreed, I was just wondering if going the tasks route will help give us a more general solution than storing "build_request", but I am probably thinking too deep
17:13:35 <johnthetubaguy> alaski: yeah, agreed, too deep
17:13:50 <bauzas> alaski: and FYI https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/request-spec-object,n,z is implemented on the object side
17:13:51 <johnthetubaguy> I agree with having to store "the request" for the reasons you state
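The storage scheme agreed on above — persist exactly what was sent in the API request, plus a generated instance uuid, in a short-lived table that is deleted once the instance lands in a cell — could be sketched roughly like this. This is a minimal Python illustration with hypothetical names (`BuildRequestStore`, dict-backed rather than a real DB table), not Nova's actual implementation:

```python
import json
import uuid


class BuildRequestStore:
    """Sketch of a short-lived build-request table: the API persists the
    raw user request plus a generated instance uuid, so the request can be
    replayed into a cell later. Dict-backed here; the real thing would be
    a database table."""

    def __init__(self):
        self._rows = {}

    def create(self, request_body):
        # Generate the instance uuid up front so the API can return it to
        # the user before any cell has been selected.
        instance_uuid = str(uuid.uuid4())
        self._rows[instance_uuid] = json.dumps(request_body)
        return instance_uuid

    def pop(self, instance_uuid):
        # Once the instance lands in a cell the row is deleted; because the
        # data is short lived, the schema can evolve without versioning pain.
        return json.loads(self._rows.pop(instance_uuid))


store = BuildRequestStore()
req_id = store.create({"flavorRef": "m1.small", "imageRef": "cirros"})
spec = store.pop(req_id)
```

Serializing the request body as-is (rather than into a versioned object) reflects the point alaski makes about iterating the store freely while the data is transient.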
17:14:20 <alaski> bauzas: cool, I'll look into it
17:14:27 <bauzas> sounds like we're in violent agreement then ?
17:14:30 <alaski> johnthetubaguy: cool
17:14:44 <alaski> bauzas: with the broak strokes at least
17:14:50 <alaski> s/broak/broad/
17:14:55 <bauzas> alaski: do your homework, create a table and then we'll see how to turn up to the object later on
17:15:19 <bauzas> alaski: of course, you still need to iterate your spec by adding details on the *persistance*
17:15:33 <dansmith> yeah, needs fleshing out
17:15:41 <alaski> yeah.  so realistically I think this is multiple specs
17:16:00 <bauzas> alaski: +1
17:16:09 <bauzas> we need to split out things
17:16:27 <bauzas> there is also some scheduling homework
17:16:30 <alaski> I can break this out now that there's general agreement, but I'm not sure what we can get moving during the freeze
17:16:37 <alaski> but I'll push on what I can
17:16:49 <bauzas> alaski: are you planning to implement something for Kilo ?
17:17:04 <alaski> bauzas: yes
17:17:17 <bauzas> alaski: I guess then the approved specs, right ?
17:17:20 <alaski> there's already approved specs which need implementations
17:17:40 <alaski> and I would like to continue into scheduling if we get to that point
17:17:47 <bauzas> alaski: hence my point, that's not really a problem to rewrite those scheduler specs, as it's not planned for Kilo
17:17:47 <alaski> how that's done is an open question right now
17:18:08 <bauzas> alaski: yeah, but we can work on drafts and merge on L
17:18:27 <bauzas> alaski: I can see some point where the scheduler should serve CellsV2 as destinations
17:18:30 <mvineetmenon> the approved specs are for cell-instance-mapping and scheduler, right
17:18:43 <mvineetmenon> what are you planning to implement in those?
17:18:44 <bauzas> I mean that the scheduler should provide cells as a result
17:18:49 <alaski> mvineetmenon: not scheduling
17:19:12 <bauzas> well, I think the extra DB thing is already a huge thing :)
17:19:16 <alaski> bauzas: sure, let's focus on what's approved and iterate on this in the mean time
17:19:29 <bauzas> alaski: because you need to wire up all the bits for creating a 2nd DB
17:19:45 <alaski> are there more specifics on scheduling that anyone would like to dive into now?
17:19:47 <bauzas> alaski: did you work on a POC for this btw. ?
17:20:06 <alaski> bauzas: I have not
17:20:12 <bauzas> alaski: again, I think that for scheduling, we need to split this spec into specific items
17:20:26 <bauzas> and see what's needed to change in the scheduler
17:20:41 <bauzas> the current proposal is vague about the capability of providing cells
17:20:55 <bauzas> do you plan to merge the Cells V1 scheduler code into the mainstream one ?
17:20:59 <bauzas> alaski: ^
17:21:29 <alaski> bauzas: not exactly
17:21:36 <bauzas> alaski: cool
17:21:37 <belmoreira> bauzas: how will the scheduler handle the case of the 2-layer approach?
17:21:53 <bauzas> belmoreira: I was thinking of extending the Scheduler API
17:22:03 <bauzas> by adding a new method
17:22:15 <belmoreira> but will the top scheduler be aware of everything anyway?
17:22:20 <alaski> bauzas: there are some filters and weighers for cells that the scheduler should be able to handle.  We'll need to determine the gaps that need to be filled so that can happen
17:22:26 <bauzas> that would be the safest approach, as many bps are planning to work on select_dest()
17:22:42 <bauzas> alaski: agreed, we need to jump into the details
17:23:06 <bauzas> alaski: and I have to admit I have to dig into the Cells V1 scheduler code, my knowledge being vague on it
17:23:18 <belmoreira> bauzas: but one of the goals of cells is to split things...
17:23:42 <bauzas> belmoreira: I don't get your point ?
17:23:51 <alaski> bauzas: it's quite similar to the scheduler in its manner of operation, but it schedules primarily by flavor availability
17:24:03 <bauzas> alaski: ack
17:24:11 <dansmith> I thought the plan was to make the scheduling flexible enough to use single-level scheduling by default, but allow 2-level if there was some reason
17:24:20 <dansmith> like, a filter that could punt to a second scheduler
17:24:40 <vineetmenon> ***scalability*** ?
17:24:40 <dansmith> because with a caching scheduler, I don't really understand why two levels are needed
17:24:57 <belmoreira> dansmith: in that case the first scheduler will only have limited light info, right?
17:25:19 <dansmith> belmoreira: no, I don't think so, but it may make lightweight decisions
17:25:44 <belmoreira> dansmith: for example we have filters enabled in only some cells
17:25:56 <johnthetubaguy> dansmith: I think we should be able to pick from all hosts from all cells in one go, and just tell the caller which cell it was (at least until some level of scale)
17:26:07 <dansmith> johnthetubaguy: that's what I'd like, yeah
17:26:13 <vineetmenon> dansmith: so, the first scheduler is powerful enough to select a cell, host tuple?
17:26:18 <edleafe> johnthetubaguy: +1
17:26:22 <dansmith> vineetmenon: yeah
17:26:23 <johnthetubaguy> we would need some extra stuff to filter on cells properties, but that just works
17:26:24 <melwitt> johnthetubaguy: +1
17:26:52 <bauzas> dansmith: that's possible, provided we define a clear abstraction
17:27:05 <bauzas> dansmith: at the moment, the mainstream scheduler is host-centric
17:27:19 * vineetmenon is thinking about scalability issues with thousands of nodes
17:27:22 <johnthetubaguy> the problem is when the caching scheduler's DB call takes so long, it's doing nothing else, but we can hopefully ignore that case
17:27:41 <dansmith> well, I think a cell-aware scheduler is just one that has info about cells and extra filters to make cell-based decisions early in the stack and then host-based decisions once you select a cell
17:27:42 <melwitt> I like dansmith's idea of the flexibility to go to two-level later on
17:27:52 <dansmith> right, and so once that's properly abstracted,
17:27:57 <alaski> bauzas: agreed.  no matter the physical setup of the scheduler I think it logically needs to operate on cells and hosts
17:28:00 <johnthetubaguy> vineetmenon: there were some plans to use kafka as the data store to get incremental updates, a-la no-db-scheduler plan from a few releases back, but I am ignoring that as a detail
17:28:10 <dansmith> we should have the ability for a filter to return "contact scheduler at 1.2.3.4 to continue"
17:28:18 <dansmith> and just let it punt to the next one
17:28:28 <johnthetubaguy> alaski: I think it's a list of hosts, and cell is a property of the host, but it's the same difference really
17:28:40 <dansmith> that way if you *want* to have lots of schedulers, with different configs, you can write filters such that you'll proceed from top to mid to bottom scheduler before a decision is made
17:28:52 <melwitt> +1
17:29:00 <alaski> johnthetubaguy: I disagree, because cells have capabilities on their own with no regards to the hosts
17:29:02 <bauzas> dansmith: but you mean that a filter would issue a call to a 2nd scheduler ?
17:29:13 <johnthetubaguy> alaski: that's the same as aggregates today though
17:29:23 <dansmith> bauzas: or return some result that says "stop what you're doing, pass what you have to $scheduler2 and let it proceed"
17:29:25 <belmoreira> alaski: +1
17:29:38 <bauzas> johnthetubaguy: and that's why we're working on changing how we access that aggregate info
17:29:43 <dansmith> bauzas: whether that is a result to the client and the client re-queries, or we proxy is an open question, but I don't think it matters much
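dansmith's idea above — a filter that can stop local filtering and say "contact scheduler at 1.2.3.4 to continue" — could look something like the following sketch. All names (`PuntToScheduler`, `run_filters`, `remote_cell_filter`, the `cell_has_scheduler` key, the endpoint URL borrowed from dansmith's example) are hypothetical, and whether the redirect is proxied or returned to the client is left open, as in the discussion:

```python
class PuntToScheduler(Exception):
    """Sentinel a filter can raise to hand the rest of the decision to
    another scheduler endpoint instead of finishing locally."""

    def __init__(self, endpoint, remaining_hosts):
        super().__init__(endpoint)
        self.endpoint = endpoint
        self.remaining_hosts = remaining_hosts


def run_filters(hosts, filters):
    # Run each filter in turn; a filter either narrows the host list or
    # raises PuntToScheduler to redirect (proxy vs. client re-query is an
    # open question, handled by whoever catches the exception).
    for f in filters:
        hosts = f(hosts)
    return hosts


def remote_cell_filter(hosts):
    # Hypothetical filter: if every surviving host lives in a cell that
    # runs its own scheduler, punt the decision to that scheduler.
    remote = [h for h in hosts if h.get("cell_has_scheduler")]
    if remote and len(remote) == len(hosts):
        raise PuntToScheduler("http://1.2.3.4:8778", hosts)
    return hosts


hosts = [{"host": "c2-h1", "cell_has_scheduler": True},
         {"host": "c2-h2", "cell_has_scheduler": True}]
try:
    run_filters(hosts, [remote_cell_filter])
except PuntToScheduler as punt:
    next_endpoint = punt.endpoint
```

Chaining top → mid → bottom schedulers, as dansmith describes, would just be this punt happening more than once.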
17:29:45 <johnthetubaguy> alaski: we can just make those properties of every cell, and it flattens things, and filters/weight just pick which they want to act on
17:30:12 <johnthetubaguy> but it's a detail, and it doesn't change much
17:30:18 <dansmith> johnthetubaguy: right, aggregates seem like the way you handle such things to me
17:30:23 <bauzas> dansmith: interesting, I just need to think about that
17:30:26 <dansmith> johnthetubaguy: and keep a host-centric scheduler
17:30:30 <edleafe> bauzas: yeah, this sounds like a subset of aggregates
17:30:30 <bauzas> dansmith: that sounds like sharding
17:30:42 <johnthetubaguy> dansmith: we can just map cells to aggregates for "v1" support I guess
17:30:50 <dansmith> johnthetubaguy: yeah
17:30:57 <bauzas> johnthetubaguy: indeed
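The flattening johnthetubaguy describes — cell capabilities become properties of every host in that cell, aggregate-style, so a host-centric scheduler can filter on them like anything else — could be sketched as below. The function name, the `cell:` key prefix, and the `storage` capability are all illustrative assumptions:

```python
def flatten_cell_properties(cells, hosts):
    """Merge each cell's properties onto its hosts (prefixed to avoid
    clashing with host keys), so filters and weighers can act on cell
    capabilities exactly like aggregate metadata."""
    merged = []
    for h in hosts:
        row = dict(h)
        row.update({"cell:" + k: v for k, v in cells[h["cell"]].items()})
        merged.append(row)
    return merged


# Hypothetical data: one SSD-backed cell, one spinning-disk cell.
cells = {"cell1": {"storage": "ssd"}, "cell2": {"storage": "spinning"}}
hosts = [{"host": "a", "cell": "cell1"}, {"host": "b", "cell": "cell2"}]

# A host-centric filter on a cell-level capability.
candidates = [h for h in flatten_cell_properties(cells, hosts)
              if h["cell:storage"] == "ssd"]
```

The scheduler's goal stays "select a host"; the cell just rides along as a property of the winner, which also suggests the cells-to-aggregates mapping mentioned for v1 support.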
17:31:25 <bauzas> anyway, we should put our thoughts in a spec :)
17:31:31 <belmoreira> johnthetubaguy: humm
17:31:35 <alaski> belmoreira: in answer to your concern, the ability to filter differently for different cells should remain if you need it, it would just be handled in the scheduler rather than in nova
17:32:17 <alaski> dansmith: I'm very much in agreement with your statement of things, but not quite sure what exactly host-centric means yet
17:32:28 <bauzas> belmoreira: +1 with alaski, that's just a matter of configuring nova-scheduler, not nova-api
17:32:36 <dansmith> alaski: meaning that the scheduler's goal is to select a host, not to select a cell
17:32:57 <alaski> dansmith: gotcha, then yeah I'm on the same page
17:33:14 <edleafe> dansmith: +1
17:33:47 <bauzas> alaski: btw. http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/isolate-scheduler-db-aggregates.html will be the new way for the scheduler to work with aggregates
17:34:07 <bauzas> it sounds feasible to map that design onto cells
17:34:07 <alaski> so everyone is good with taking this and splitting out some detailed specs to continue the discussion on?
17:34:13 <bauzas> +1
17:34:17 <alaski> bauzas: thanks, will take a look
17:34:21 <dansmith> yup
17:34:52 <alaski> great, moving on then
17:34:53 <edleafe> yes
17:34:57 <alaski> #topic open discussion
17:35:21 <bauzas> I was having an action from last week
17:35:21 <dansmith> move to adjourn? :)
17:35:25 <dansmith> damn.
17:35:28 <bauzas> eh
17:35:31 <alaski> dansmith: hah
17:35:48 <alaski> bauzas: any update?
17:35:59 <bauzas> I'll be very brief, I promise :)
17:36:04 <bauzas> so, 73 tests failing now
17:36:51 <bauzas> so I'm about to add a new job like https://github.com/openstack-infra/project-config/blob/f98caac794f6d1327cb7565c446c026cc951a873/jenkins/jobs/devstack-gate.yaml#L816-L829 by adding those 73 tests
17:37:12 <vineetmenon> anyone figured out the reason for new year +30 bonanza?
17:37:13 <bauzas> everybody agrees ?
17:37:40 <vineetmenon> +1
17:37:59 <bauzas> and I won't touch the job above
17:38:05 <bauzas> alaski: you ok ?
17:38:09 <alaski> bauzas: I like that.  If we can get that to voting in a reasonable time it will prevent backslides
17:38:18 <bauzas> alaski: ok, will do
17:38:23 <bauzas> that's it
17:38:39 <alaski> vineetmenon: I have not.  it's a TODO for me that I haven't done yet
17:39:01 <alaski> bauzas: thanks
17:39:15 <alaski> I would like to point out https://review.openstack.org/#/c/145922/
17:39:32 <alaski> and that's it from me
17:39:36 <alaski> anything else?
17:39:53 <belmoreira> alaski: we would like to contribute in the implementation
17:40:25 <vineetmenon> belmoreira: +1
17:40:41 <alaski> belmoreira: great.  ping me outside of the meeting and we can discuss that
17:40:49 <belmoreira> alaski: ok
17:41:33 <alaski> dansmith: anything from you? :)
17:41:40 <dansmith> move to adjourn :)
17:41:46 <alaski> seconded
17:42:01 <bauzas> sld
17:42:02 <bauzas> sold
17:42:09 <alaski> motion passes
17:42:11 <alaski> thanks everyone!
17:42:26 <alaski> #endmeeting