14:00:35 #startmeeting nova_scheduler
14:00:36 Meeting started Mon Jul 10 14:00:35 2017 UTC and is due to finish in 60 minutes. The chair is edleafe. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:37 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:40 The meeting name has been set to 'nova_scheduler'
14:00:44 Good UGT morning!
14:00:48 Who's here?
14:01:08 o/
14:01:58 o/
14:02:10 jaypipes, alex_xu, bauzas - around?
14:02:16 o/
14:02:17 yuppers.
14:03:14 Guess we'll start
14:03:15 #topic Specs and Reviews
14:03:27 There is a new spec
14:03:33 or rather an amendment to one
14:03:35 #link Amend spec for Custom Resource Classes in Flavors: https://review.openstack.org/#/c/481748/
14:04:01 This was going to be done by jroll
14:04:08 Looks like it's now on me
14:04:33 i can probably be your off hours buddy on that?
14:04:36 edleafe: didn't you already have code for that?
14:04:43 I thought I remember reviewing that already?
14:04:46 jaypipes: for my half, yes
14:05:02 jroll was going to handle what needed to happen for migration
14:05:16 so that when Pike starts up, the correct resources are allocated
14:05:16 ah
14:05:31 FYI I've tried it with a devstack change, and still cannot make the tests pass: https://review.openstack.org/#/c/476968/. It may be my mistake, of course, or it may be this missing migration
14:05:47 dtantsur: tried what?
14:06:00 edleafe: sorry :) using resource classes for scheduling ironic instances
14:06:29 OK, I haven't looked at that patch.
14:06:36 I'll take a look at it later
14:07:09 I will as well.
14:07:25 both the spec and the patch
14:08:18 jaypipes: do we have the code merged to use the custom RC?
14:08:31 I know it was mine, but I thought there was another piece needed
14:08:37 edleafe: oh yes, since Ocata.
14:08:51 edleafe: oh, sorry, you're talking about the flavor thing
14:09:10 edleafe: not sure on the flavor thing... need to check
14:09:12 jaypipes: yeah, the patch I wrote grabbed the custom RC from extra_specs
14:09:22 and added it to the 'resources' dict.
14:09:31 right
14:10:49 Well, I'll be digging into what's needed for the migration. And I'd be happy to have cdent's help (and anyone else's)
14:11:04 oh snap, forgot meeting \o
14:11:07 you know where to find me and I’ll look for you
14:11:21 * edleafe waves to bauzas
14:11:21 stalker alert!
14:11:32 * bauzas bows to edleafe
14:11:32 :)
14:11:44 OK, next up...
14:11:47 #link Claims in the Scheduler: https://review.openstack.org/#/c/476632/
14:12:06 The first part is +W'd, so this is the only active one
14:12:56 jaypipes: anything to note?
14:13:09 edleafe: I'll respond to mriedem's comments on there.
14:13:16 ok
14:13:17 edleafe: did you have further comments on it?
14:13:57 technically, we have not yet merged the bottom patch but okay
14:14:36 jaypipes: I haven't looked at it since Friday morning, so when I do I'll respond on the patch
14:14:55 k
14:14:59 Oh, I almost forgot to note:
14:15:01 #link Devstack to use resource classes by default https://review.openstack.org/#/c/476968/
14:15:22 * edleafe wants to keep the record up-to-date
14:15:45 Moving on...
14:15:46 #link Nested Resource Providers: series starting with https://review.openstack.org/#/c/470575/
14:15:59 This is still pretty much on hold, right?
14:17:08 * edleafe pokes jaypipes
14:17:16 edleafe: yeah
14:17:27 edleafe: it will pick up steam once claims are in.
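A rough sketch of the flavor handling edleafe describes at 14:09:12 ("grabbed the custom RC from extra_specs and added it to the 'resources' dict"); the function name and the exact extra-spec parsing here are illustrative, not the code in the patch:

```python
# Illustrative only: build a resources dict from flavor extra_specs.
# Assumes extra specs of the form "resources:CUSTOM_BAREMETAL_GOLD": "1",
# per the Custom Resource Classes in Flavors spec.

def resources_from_flavor(flavor_extra_specs):
    resources = {}
    for key, value in flavor_extra_specs.items():
        if key.startswith('resources:'):
            rc_name = key.split(':', 1)[1]   # e.g. CUSTOM_BAREMETAL_GOLD
            resources[rc_name] = int(value)  # requested amount
    return resources

# Example:
# resources_from_flavor({'resources:CUSTOM_BAREMETAL_GOLD': '1'})
# -> {'CUSTOM_BAREMETAL_GOLD': 1}
```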
14:17:30 ok, just making sure
14:17:35 * mriedem joins late
14:17:44 edleafe: and I add some more functional testing around the scheduler -> conductor -> compute interactions.
14:17:44 Finally...
14:17:47 #link Placement api-ref docs https://review.openstack.org/#/q/topic:cd/placement-api-ref+status:open
14:18:18 jaypipes: let us know how we can help (besides reviews, of course)
14:19:24 Anything else for specs/reviews?
14:19:39 the traits support in the allocation candidates are submitted
14:19:58 #link the first patch https://review.openstack.org/478464
14:20:10 #link the last one https://review.openstack.org/#/c/479776/
14:21:08 mriedem: responded to your comments on ^
14:21:20 mriedem: sorry, on https://review.openstack.org/#/c/476632/
14:21:33 OK, thanks alex_xu - added to my review list
14:21:45 edleafe: I also remember there is one patch from you for 'GET /resources' with traits
14:21:53 edleafe: thanks
14:22:25 jaypipes: ok, i guess i'm missing something then because when originally planning this all out,
14:22:39 i thought we were going for some minimum nova-compute service version check before doing allocations in the scheduler
14:22:48 such that we would no longer do the claim in the compute
14:23:51 once we do the allocation in the scheduler, the claim in the compute is at best redundant but not a problem,
14:24:05 at worst the claim fails because of something like the overhead calculation
14:24:19 or pci or whatever we don
14:24:24 *don't handle yet in the scheduler
14:25:46 mriedem: we can do the *removal of the claim on the compute node* once we know all computes are upgraded. but that's a different patch to what's up there now, which just does the claim in the scheduler.
14:27:48 jaypipes: so if the scheduler starts doing claims, will that cause a problem with older computes?
14:27:54 edleafe: no.
14:28:06 Or will the compute claim just be a duplicate
14:28:19 it's a duplicate
14:28:21 edleafe: not even duplicate. it just won't be done.
14:28:35 what do you mean it won't be done?
14:28:39 edleafe: b/c the report client only writes allocations that are not already existing.
14:28:39 jaypipes: even on an old compute?
14:28:53 edleafe: yes. on ocata computes, we already do this.
14:29:03 writing the allocations is part of the claim process that happens on the compute *today* yes?
14:29:08 jaypipes: ok, I'll have to re-read that code
14:29:15 mriedem: yes, and the periodic audit job.
14:29:23 but before we have the RT call the report client to write allocations, we're doing pci and overhead calculations
14:29:41 mriedem: correct.
14:29:41 so we are still going to go through the same old claim process
14:29:47 which may fail, and trigger a retry
14:30:06 mriedem: correct. if that happens, the allocations are deleted from the placement API.
14:30:12 where?
14:30:18 in the periodic audit job.
14:30:25 update_available_resource()
14:30:29 will pick that up.
14:31:12 when does the alternates stuff for retries come in?
14:31:24 on top of https://review.openstack.org/#/c/476632/ ?
14:31:44 even if something writes allocations for the same instance multiple times, it is a replace action
14:31:59 PUT /allocations/consumer_uuid is replace
14:32:02 mriedem: yes, the alternatives stuff needs to come after this.
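The "replace" semantics cdent points out at 14:31:59: PUT /allocations/{consumer_uuid} replaces the consumer's entire set of allocations, so re-writing the same allocations is harmless. A rough sketch of such a request follows; the payload shape is an approximation of the Pike-era placement format (check the placement api-ref for the exact schema and microversion), and the helper itself is hypothetical:

```python
# Hypothetical helper: PUT the full set of allocations for a consumer.
# The payload shape is approximately the Pike-era placement format.
import requests

def put_allocations(placement_url, token, consumer_uuid, rp_uuid, resources):
    payload = {
        "allocations": [
            {
                "resource_provider": {"uuid": rp_uuid},
                "resources": resources,  # e.g. {"VCPU": 2, "MEMORY_MB": 2048}
            }
        ],
    }
    # PUT replaces any existing allocations for this consumer in one shot.
    return requests.put(
        "%s/allocations/%s" % (placement_url, consumer_uuid),
        json=payload,
        headers={"X-Auth-Token": token},
    )
```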
14:32:58 cdent: right, but we look up existing allocations first and do nothing if nothing changed: https://github.com/openstack/nova/blob/master/nova/scheduler/client/report.py#L863
14:33:20 jaypipes: yeah, I know, I was just saying that it’s safe even if that wasn’t happening
14:33:26 gotcha
14:33:36 ok so if we leave the allocation cleanup to the periodic task,
14:33:59 there is a chance you could "fill up" allocations for a compute node after a couple of failed attempts within a minute or something,
14:34:09 mriedem: yep.
14:34:14 which if you've got a lot of compute nodes and a busy cloud, should be ok...
14:34:42 mriedem: and I wrote in that comment that I could try and "undo" successful allocations in the scheduler _claim_resources() method, but that meh, eventually it'll get cleaned up by the periodic audit task on the compute
14:35:31 i have a bad feeling about relying on that
14:35:44 especially when someone does nova boot with min-count 100
14:36:04 e.g. you get to 99 and novalidhost, and we don't cleanup the allocations for the first 98
14:37:02 mriedem: I'm happy to take a go at that cleanup if you'd like.
14:37:06 the retry part of conductor could accelerate that
14:37:08 mriedem: just say the word.
14:37:10 will needing to undo allocations in the scheduler slow it down for other incoming requests? we're still single worker right?
14:37:34 dansmith: in this case we wouldn't get to conductor,
14:37:35 mriedem: single worker but we yield when making a call to placement
14:37:36 it's novalidhost
14:37:42 mriedem: there's no reason at all why the scheduler needs to be single process.
14:38:05 mriedem: you mean for a failed boot that never gets retried?
14:38:53 dansmith: yes
14:38:57 scheduler raises NoValidHost
14:39:21 okay I'm confused about why we'd still have stale allocations in that case
14:39:29 but we can discuss outside of the meeting
14:39:31 dansmith: he's talking about this code:
14:39:38 https://review.openstack.org/#/c/476632/19/nova/scheduler/manager.py@128
14:39:43 ya
14:39:46 danke mriedem
14:40:08 oh I see, just in the n-instances case, I gotcha
14:40:30 mriedem: like I said, I'm happy to give a go at cleaning up already-successful allocations in that block.
14:40:38 mriedem: just say the word.
14:40:41 cleanup there would be easy I think, yeah
14:40:52 in general i think we should cleanup when we can
14:40:59 yeah, I'll just keep track of the instance UUIDs that succeeded.
14:41:03 yep
14:41:04 including when we retry from the compute to the conductor with the alternates
14:41:26 mriedem: well, and we'll eventually want to be retrying *within* the scheduler.
14:41:29 mriedem: yeah that's the case I was thinking of and have always described it as "cleanup the old, claim the next alternate"
14:41:36 but whatevs, I hear ya, I'll fix that section up.
14:41:44 jaypipes: no, we can't retry in the scheduler once we've failed on the compute node
14:41:58 dansmith: retry on the allocation_request...
14:42:00 i think jay is talking about pre-compute
14:42:05 yeah
14:42:07 that, yes
14:42:07 right.
14:42:19 figured he meant: [07:41:04] including when we retry from the compute to the conductor with the alternates
14:42:30 yeah, sorry, no I mean the allocation candidates thing.
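The cleanup jaypipes agrees to at 14:40:59 ("keep track of the instance UUIDs that succeeded") would look roughly like the sketch below. The report client object and its method names here are placeholders, not the real SchedulerReportClient API:

```python
# Rough sketch of the cleanup being discussed: remember which instance
# UUIDs were successfully claimed, and undo those claims if a later
# instance in the same multi-create request cannot be claimed.
# `reportclient` and its methods are placeholders, not the real API.
def claim_all_or_cleanup(reportclient, instance_uuids, alloc_reqs_by_instance):
    claimed = []  # instance UUIDs whose allocations were written
    for instance_uuid in instance_uuids:
        alloc_req = alloc_reqs_by_instance.get(instance_uuid)
        if alloc_req and reportclient.claim_resources(instance_uuid, alloc_req):
            claimed.append(instance_uuid)
            continue
        # Could not claim for this instance: undo the siblings' allocations
        # now instead of waiting for the compute's periodic audit task.
        for done_uuid in claimed:
            reportclient.delete_allocations(done_uuid)
        return None  # caller raises NoValidHost
    return claimed
```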
14:42:37 retrying within the scheduler is the whole reason we decided to do it in the scheduler and not conductor
14:42:41 ack
14:42:44 so yeah we should do that :)
14:42:45 well, that's not really a retry when the scheduler can't claim
14:42:46 yeah
14:42:52 just validating the host
14:43:06 anyway, mriedem, besides the cleaning up successful allocations in that failure block, is there anything big you want changed on the patch? if not, I'll go and work on this.
14:43:24 jaypipes: i think you already replied on my other things
14:43:29 the other little nits I'll get, yep
14:43:39 Let's continue this in -nova
14:43:43 btw, we create the allocations after the filters right?
14:43:44 #topic Bugs
14:43:57 #undo
14:43:58 Removing item from minutes: #topic Bugs
14:44:20 mriedem: yes.
14:44:24 mriedem: and the weighers.
14:44:26 sorry was a bit afk
14:44:36 but I have a point about the above
14:44:47 Let's keep it quick
14:45:27 given the time we still have for Pike, do folks agree with me about possibly not having the conductor passing alternatives for Pike ?
14:45:45 no I don't agree
14:45:54 bauzas: no, I think it's absolutely doable for Pike to have the alternatives done.
14:46:14 me too
14:46:17 would it be a problem not having that for Pike ?
14:46:20 bauzas: I think we can have claims merged and ready by Wednesday and patches up for alternatives by EOW
14:46:38 while I agree with all of us about why it's important, I'm just trying to be pragmatic
14:46:41 bauzas: yes, without that we're toast for the proper cellsv2 arrangement
14:47:01 yeah, we pretty much have to do it
14:47:01 bauzas: we can be pragmatic when we're out of time, but we're not there, IMHO
14:47:12 okay
14:47:14 we need to get alternatives done, flavors for resource classes complete, and claims done.
14:47:19 ack
14:47:21 those are absolutes for Pike.
14:47:37 nested stuff is nice to have, and we've made a bit of progress on it already.
14:47:39 and shared-RP, and custom-RP? :)
14:47:47 yeah, that's my point
14:47:48 shared is done
14:48:02 well, agreed
14:48:02 allocation candidates takes care of shared, at least for disk
14:48:16 mriedem: well, not completely done
14:48:29 mriedem: we don't handle complex RPs
14:48:35 mriedem: well, almost... still need a way to trigger the compute node to not want to claim the disk when shared provider is used...
14:48:37 okay, tbc, I don't disagree with the direction, I'm just trying to see what is left for Pike
14:48:39 mriedem: like a compute with both local and shared
14:48:41 * alex_xu puts the trait's priority low, focus on review the priority stuff
14:48:50 mriedem: but that is a short patch that all the plumbing is ready for.
14:49:04 edleafe: we don't *currently* handle that.
14:49:15 edleafe: so that's not something I'm worried about yet
14:49:24 jaypipes: exactly - which was going to be the subject I wanted to discuss in Opens
14:49:29 kk
14:49:33 but we are quickly running out of time
14:49:46 there is always #openstack-nova, ed :)
14:50:08 * edleafe blinks
14:50:12 Really??
14:50:15 :)
14:50:31 anyway
14:50:38 I don't want to confuse people
14:50:47 Let's try to move on again...
14:50:48 #topic Bugs
14:50:48 #link https://bugs.launchpad.net/nova/+bugs?field.tag=placement
14:50:58 Only one new bug:
14:50:58 #link The AllocationCandidates.get_by_filters returned wrong combination of AllocationRequests https://bugs.launchpad.net/nova/+bug/1702420
14:51:00 Launchpad bug 1702420 in OpenStack Compute (nova) "The AllocationCandidates.get_by_filters returned wrong combination of AllocationRequests" [High,In progress] - Assigned to Alex Xu (xuhj)
14:51:01 alex_xu reported this one, and is working on it.
14:51:08 alex_xu: any problems with that?
14:51:21 edleafe: no, just waiting for review
14:51:31 great
14:51:39 Anything else on bugs?
14:52:16 #topic Open Discussion
14:52:28 I had one concern: the change to return a list of HostState objects from the scheduler driver to the manager. IMO, we really need the host to be associated with its Allocation object so that a proper claim can be made. The current design just returns hosts, and then picks the first allocation that matches the host's RP id.
14:52:38 In the case of a host that has both local and shared storage, there will be two allocation candidates for that host. The current design will choose one of those more or less at random.
14:52:45 Jay has said that when we begin to support such complex RPs, we will make the change then. Since we are changing the interface between manager and driver now, wouldn't it be best to do it so that when we add complex RPs, we don't have to change it again?
14:53:07 if you haven't requested a trait of shared or not-shared, then at-random is fine right?
14:53:30 dansmith: in that case, yes
14:53:51 but in the case of local vs. public net for PCI, probably not
14:54:09 to be clear, the code just selects the first allocation request containing the host's RP ID. so yeah, there's no order to it.
14:54:41 Hi. I am wondering how the driver will be handling ResourceProviders? Will there be a dedicated class (ResourceProviderDriver) for each provider type?
14:54:41 in the case of network, if your flavors say "give me a pci net device but I don't care which kind" then you're asking for at random, no?
14:54:43 jaypipes: you can keep the randomness for now
14:54:44 agree it would be a dumb thing to do, but..
14:55:00 jaypipes: I was concerned about having to change the interface yet again in Queens
14:55:23 dansmith: again, in that particular case, you would be correct
14:55:32 but that's not my point
14:55:40 edleafe: this is an internal interface. I'm not concerned at all about that.
14:55:48 me neither
14:55:55 and this is no worse than what we have today right?
14:56:09 edleafe: I mean, we need to change the RPC interface for alternatives support, and that's major surgery. This stuff was just a botox injection compared to that.
14:56:12 i'm more concerned about the <3 weeks to FF
14:56:18 mriedem: ++
14:56:23 botox, heh
14:56:26 ok, fine.
14:56:37 mriedem: me too, hence my previous point
14:56:40 edleafe: you agree the RPC change is much more yes?
14:56:45 It just wasn't what we had originally discussed, and it raised a flag for me
14:56:52 jaypipes: of course
14:56:54 understood.
14:57:17 understood, edleafe and I appreciate your concerns on it. As you saw, I went through a bunch of iterations on thinking about those internal changes
14:57:50 edleafe: but returning the HostState objects instead of the host,node tuples allowed us to isolate pretty effectively the claims code in the manager without affecting the drivers at all.
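The selection jaypipes describes at 14:54:09 ("selects the first allocation request containing the host's RP ID") is roughly the following; the attribute names (`uuid` on the HostState, the allocation_request structure) are approximations, not the exact code in the patch:

```python
# Approximate illustration of "pick the first allocation request that
# contains the host's resource provider"; names are illustrative only.
def alloc_req_for_host(host_state, alloc_reqs):
    for alloc_req in alloc_reqs:
        rp_uuids = {
            alloc['resource_provider']['uuid']
            for alloc in alloc_req['allocations']
        }
        if host_state.uuid in rp_uuids:
            # A host backed by both local and shared storage shows up in two
            # allocation requests; this simply returns whichever comes first,
            # which is the "more or less at random" choice being discussed.
            return alloc_req
    return None
```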
14:58:17 As long as we all realize that this will have to change yet again in Queens, sure
14:58:35 change is and always will be inevitable
14:58:42 edleafe: certainly it may. but again, I'm less concerned about internal interfaces than the RPC ones.
14:58:43 how trite
14:58:48 * cdent is trite
14:58:55 always has been, always will be
14:59:31 jaypipes: I was more concerned about saying we will do X, and finding Y
14:59:32 1 min left
14:59:51 As long as we get to X eventually
15:00:05 That's it - thanks everyone!
15:00:07 #endmeeting