15:00:29 #startmeeting scheduler
15:00:30 Meeting started Tue May 28 15:00:29 2013 UTC. The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:31 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:33 The meeting name has been set to 'scheduler'
15:00:36 Show of hands, who's here?
15:00:45 * glikson here
15:00:53 hi, I'm here for the meeting
15:00:57 garyk: I'm here
15:00:59 hi all
15:02:27 OK, let's begin then. There were 3 BPs I wanted to talk about today and, since glikson is here now, why don't we start with that one.
15:02:41 #topic coexistence of different schedulers
15:02:57 glikson, care to expand upon this?
15:04:44 glikson, YT?
15:05:33 yes, sorry, had an interrupt
15:05:53 we are still working on some design details... will need to defer.
15:05:53 NP, the floor is yours
15:06:23 OK, we can do that (but I'm going to keep you on the agenda), will you be ready next week?
15:06:35 I hope so
15:06:48 OK, then moving on.
15:07:00 #topic whole host allocation capability
15:07:28 I read the BP and the idea does sound interesting, but there are a lot of open questions about how to implement it.
15:08:03 unfortunately, Phil Day doesn't appear to be here today, so I'm not sure we can talk about this a lot
15:08:16 I have been working on a similar use case
15:08:43 belmoreira, cool, have you talked to Phil about any overlaps you might have?
15:08:56 hello
15:09:41 unfortunately not. I tried several emails without an answer.
15:10:04 but in fact I only became aware of this BP a short time ago.
15:10:33 OK, well I think I'll try and ping Phil and see when he can come to one of these meetings, so we can talk intelligently about things
15:10:57 belmoreira, I'll keep you in the loop and, if we don't get any response, we'll make you take the lead on the idea :-)
15:11:15 Well, I have some code in review about this...
15:11:35 I believe Phil is following my blueprint
15:11:52 belmoreira, can you give the URL of the code/blueprint you are talking about?
15:11:54 https://review.openstack.org/#/c/28635/
15:12:20 belmoreira, thanks :)
15:12:20 belmoreira, do you have a BP to go with the code?
15:12:23 https://blueprints.launchpad.net/nova/+spec/multi-tenancy-isolation-only-aggregates
15:12:50 it started as a simple update to the aggregate_multitenancy_filter
15:13:12 but then Russell suggested creating a new filter
15:13:50 sounds like, if you have code already, you don't have too many open design issues, right?
15:14:57 after reading Phil's BP I believe my use case is simpler
15:15:19 belmoreira: I think the approach that you are suggesting is the right way to address this
15:15:30 I have an aggregate and I want to make sure that only tenants that I define can start instances in that aggregate
15:16:13 Hi folks, sorry - got delayed joining today
15:16:53 PhilDay, excellent, we've just been talking about you, we're going over the whole host capability BP
15:17:21 belmoreira is online and I understand he has a BP and code similar to what you are proposing
15:18:06 we're curious whether we can combine the BPs into one
15:18:29 Maybe - is there a link to the BP? I'd probably need some time to look at it
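[Editor's note: the patch belmoreira links above (review 28635) was not walked through in the meeting. As background, a minimal sketch of a tenant-isolation host filter, written against the Nova filter API of the time (BaseHostFilter.host_passes), might look like the following. The class name and the filter_tenant_id metadata key are placeholders chosen for illustration, not necessarily what the patch under review uses.]

    # Hypothetical sketch of a tenant-isolation scheduler filter; names
    # and the metadata key are illustrative, not the code under review.
    from nova import db
    from nova.scheduler import filters


    class TenantIsolationFilter(filters.BaseHostFilter):
        """Pass a host only if its aggregate admits the requesting tenant.

        Hosts in an aggregate tagged with a 'filter_tenant_id' key are
        reserved for the listed tenants; untagged hosts pass for everyone.
        """

        def host_passes(self, host_state, filter_properties):
            spec = filter_properties.get('request_spec', {})
            props = spec.get('instance_properties', {})
            tenant_id = props.get('project_id')

            context = filter_properties['context'].elevated()
            # Returns {'filter_tenant_id': set_of_tenant_ids} for hosts in
            # a tagged aggregate, or {} for hosts in no tagged aggregate.
            metadata = db.aggregate_metadata_get_by_host(
                context, host_state.host, key='filter_tenant_id')

            if not metadata:
                return True  # host is not tenant-restricted
            return tenant_id in metadata['filter_tenant_id']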
15:18:46 https://blueprints.launchpad.net/nova/+spec/multi-tenancy-isolation-only-aggregates
15:19:21 What I propose is the creation of a new scheduler filter to allocate tenants to aggregates
15:19:34 I'm not asking you to make an on-the-spot decision, maybe we should defer talking about this until next week, when everyone has had a chance to study things.
15:19:57 * n0ano brain is outrunning fingers
15:21:12 OK. I do remember looking quickly at this a few weeks back. I think it's similar, and may even be part of what I was looking for, but as far as I can see it doesn't expose the ability for a user to explicitly allocate and deallocate hosts
15:21:53 Which is important to us, as we want to be able to charge for hosts that are dedicated to a user, and make it a self-service model
15:22:11 yes... seems my use case is simpler than yours
15:22:21 yeah, I looked at your BP and liked the basic idea, but the devil is in the details
15:22:42 as ever ;-) I'll make some time for a deeper look this week and make sure that we're aligned where it matters
15:22:57 PhilDay: what you are suggesting is essentially capacity reservation, plus multi-tenancy. maybe the two should be added separately.
15:23:22 in my case... a private cloud, there are groups of people that need to have dedicated resources. And these resources are well identified.
15:24:02 I want this to be completely transparent to the users.
15:24:31 the multi-tenancy part can probably be implemented the same way -- with aggregates
15:25:06 belmoreira - so it may be that what I need is an API layer to work on top of the filter. Using aggregates as the basis for exclusion is the same model I was thinking of - maybe it's just a question of how those get created (by admin in your case - by users subject to quotas etc. in mine)
15:25:10 while introducing resource reservation should be a new concept, IMO -- higher level than 'host'. E.g., "I want capacity of 10 small instances"
15:25:50 My use case is specifically whole hosts for isolation - not general capacity reservation
15:26:20 Also, having whole hosts provides the bedrock for having separate schedulers, etc. further down the road
15:26:21 PhilDay, indeed, that's what I like about your BP
15:27:13 So I think belmoreira's work and mine are probably complementary - but I'll make the time this week to check
15:27:26 Anyway, let's defer to next week (note I think we're talking about merging proposals, not necessarily replacing one with the other).
15:27:53 moving on
15:28:01 #topic host directory service
15:28:23 well, the notion of resource reservation is new. I would recommend adding it in such a way that it is not limited to a very specific usage.
15:28:33 Unfortunately, David Scannell doesn't appear to be here today, so I don't know if we can discuss this much
15:28:52 I would keep them separate rather than merge them - sounds like what belmoreira has does everything he needs for his use case; what I want to add should be seen as additional / incremental to that
15:30:03 PhilDay, belmoreira: I like the idea of allocating a whole host to a particular user. Good stuff.
15:30:19 glikson - I understand your point, but am wary of making a general mechanism to cover two quite different things (otherwise we end up like Nova and bare metal provisioning :-)
15:31:04 PhilDay, then you think there's no overlap between the two of you?
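[Editor's note: both proposals lean on host aggregates as the isolation mechanism. For reference, the admin-side mechanics already exist in the nova CLI; the aggregate name, host name, and metadata key below are illustrative, and assume a filter like the sketch earlier in these notes is enabled in scheduler_default_filters.]

    nova aggregate-create tenant-a-dedicated
    nova aggregate-add-host tenant-a-dedicated compute-07
    nova aggregate-set-metadata tenant-a-dedicated filter_tenant_id=<tenant-a-project-id>

[The difference between the two proposals is then mainly who runs these steps: an admin directly (belmoreira's private-cloud case), or the user indirectly via a quota-controlled API (PhilDay's self-service case).]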
15:31:32 glikson: a generic reservation mechanism is indeed a very interesting topic, but it might need much more work than this particular use case requires
15:32:31 what I have is only a filter... it shouldn't interfere with anyone... the only problem could be duplication of effort
15:33:35 PhilDay: what kind of API did you have in mind?
15:33:44 I'm not convinced there is overlap between effectively making aggregates a user feature (which is what whole host allocation does - but through an abstraction) and a general capacity reservation mechanism (which is what glikson is proposing, I think)
15:36:03 glikson: Haven't got very far in that thinking - I did think that maybe hosts might be represented as a particular type of flavor (as that was how BM was heading) - but beyond that it would be pretty much a stand-alone extension
15:36:09 my impression is that once we understand the API better, the similarity might become more clear
15:37:22 glikson: agreed.
15:37:22 it is still a *user*-facing API, and you probably don't want to fully expose your physical hardware
15:37:25 Could be. So for now imagine an API that has allocate and deallocate primitives, where allocate takes some kind of spec that specifies the type/capacity of server you want, and which AZ you want it from
15:37:46 maybe something like: nova reserve host --flavor host-type1
15:38:18 and it returns some hashed value of the host identity
15:39:13 are you talking about splitting scheduling up into two parts, reservation and then the actual schedule? If so, I have concerns about deadlock issues and whatnot.
15:39:30 PhilDay: that sounds very much like what I had in mind. just replace "server" with "resource pool" (and map it to an aggregate at the backend)
15:39:48 Yep, something like that. I don't want to make it a hard scheduling issue though, so the matching of flavor to host would probably need some bounds (so you can abstract the host flavors a little)
15:40:27 Ah, OK. Maybe I get it now - so you want to use the instance flavors to specify the size of host needed?
15:41:00 is this something like having a dynamic aggregate? (the possibility from the API to add/remove types of host in the aggregate)
15:41:00 PhilDay: yes
15:41:12 the user would probably want to know how many instances of which kind would fit in the reserved space... so this would need to be specified somehow, in terms comparable with instance flavors.
15:41:27 I guess that would need to assume a 0 degree of overprovisioning, or else the allocation could get quite complex
15:42:09 glikson: agree. Users may also want only 1 VM that uses all the capabilities of the host
15:42:14 jgallard: yep, sounds like one of the options
15:42:30 @jgallard - in effect yes - but in a way that users can only affect aggregates that belong to them (at the moment aggregates aren't scoped to a user)
15:43:34 glikson, PhilDay, thanks, I see the point: a kind of dedicated dynamic aggregate allocated per tenant
15:43:53 PhilDay: so, it seems that belmoreira's patch solves the problem of mapping aggregates to users, and what is left is capacity reservation, backed by aggregates.
15:44:00 Once they have allocated hosts, and we support different schedulers, they may of course be able to fit in more instances than the number used to size it
15:44:27 glikson - that's what I'm thinking.
15:44:56 sounds good.
15:45:00 PhilDay: yep, surfacing different scheduling policies to users is also an interesting use-case. I guess it would be more for tenant admins rather than regular users though.
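[Editor's note: from the user's side, the reserve/boot/release flow PhilDay sketches above might look something like the following. None of these commands, hints, or names existed at the time; everything here is invented purely to illustrate the shape of the proposed API.]

    $ nova reserve host --flavor host-type1 --availability-zone az-east
    7f3a9c0d    (an opaque handle, per the "hashed host identity" idea above)
    $ nova boot --flavor m1.small --image cirros-0.3 --hint reserved_host=7f3a9c0d vm1
    $ nova unreserve host 7f3a9c0d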
15:45:24 glikson, +1
15:45:58 My feeling when I first came up with this was that I didn't want to just make aggregates directly visible to users - as they are too closely linked with hosts, etc. - and, as glikson pointed out, a degree of abstraction is needed.
15:46:00 glikson, yes, the scheduling policy will probably depend on the class of service
15:47:37 Adding different schedulers is a harder problem, I think - which is why I was leaving that out for now. But if we build on aggregates, then I think that gives us the right basis for adding that in later. Agreed that there will be different roles required
15:48:14 PhilDay: mapping different scheduler implementations/policies to aggregates is exactly the BP we are working on
15:48:25 excellent
15:49:24 for reference: https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
15:49:30 there are some challenges there; hopefully we will have a more specific proposal by next meeting
15:50:05 OK (I love the way we have the bulk of the discussion after I've moved on to a different topic :-)
15:50:11 I think I'll still add this to next week's agenda, to give everyone time to study BPs and think of issues.
15:50:23 #topic opens
15:50:37 In the last few minutes, does anyone have any opens to bring up?
15:51:50 hearing silence, I think it's time to thank everyone, and we'll talk again next week.
15:52:07 #endmeeting
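[Editor's note: the multiple-scheduler-drivers BP referenced above had no settled design at this point ("hopefully will have a more specific proposal by next meeting"). One conceivable surface, consistent with the aggregate-centric direction of the discussion, would be tagging an aggregate with the policy to apply; the scheduler_policy key below is invented here purely for illustration.]

    nova aggregate-set-metadata tenant-a-dedicated scheduler_policy=pack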