14:00:10 #startmeeting nova_scheduler
14:00:10 Meeting started Mon Mar 6 14:00:10 2017 UTC and is due to finish in 60 minutes. The chair is edleafe. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:13 The meeting name has been set to 'nova_scheduler'
14:00:20 Anyone around today?
14:00:43 o/
14:01:30 \o for 25 mins
14:01:34 my, we're a popular bunch!
14:01:40 \o
14:02:30 ommmmm
14:02:50 Well, let's start
14:02:57 #topic Specs and reviews
14:03:18 Nothing was added to the agenda, so let's go over the current focus
14:03:26 Traits spec:
14:03:27 #link https://review.openstack.org/#/c/345138/
14:03:57 I was hoping that jaypipes would explain some of his concerns about the API
14:04:11 I mean, I get it, but it's not totally clear
14:04:52 I need a second loop over that spec
14:04:53 you mean the actions on /traits?
14:05:01 yeah
14:05:22 cdent: BTW I agree with the idempotent PUT
14:05:34 that's because it's right ;)
14:05:42 o/
14:05:50 of course!
14:06:31 I think the issue with /traits is: would you ever want to replace or remove everything?
14:07:31 cdent: so should you be able to delete one?
14:07:33 two?
14:07:34 ten?
14:07:50 just one, at its own URL?
14:08:06 I'm not a fan of writes on collection URLs
14:08:23 You mean like PUT /traits?
14:08:29 but I'm not sure if that's jay's issue. I pretty much had the same question (if you see my last comment, I was wondering where it went)
14:08:39 yes
14:09:46 Well, PUT is pretty clear in its intended behavior. Maybe a POST for the "add to the current set of traits" case?
14:11:31 * cdent shrugs
14:11:42 * alex_xu waves late
14:11:50 I don't know what's wrong with singular PUTs to /traints/{trait}
14:11:58 s/nts/ts/
14:12:04 cdent: nothing
14:12:17 edleafe: _only_ singular PUTs
14:12:32 it's a PUT with a JSON body that is destructive
14:12:38 oh
14:12:50 that could be tedious :)
14:13:24 we have computers to do tedious things for us (and loops!) and thus make code less complicated
14:13:29 especially with the PUT /rp/{id}/traits/{trait} format
14:13:43 where you have to associate traits with RPs
14:14:03 jaypipes and dansmith just worry that "PUT/DELETE /traits" is unsafe. The user may delete all the traits accidentally
14:14:04 * cdent shrugs
14:15:22 So we would also have to remove the CLI stuff like: openstack resource-provider trait add $RP_UUID --traits $TRAIT $TRAIT...
14:16:13 edleafe: yea, I forgot to clean that up
14:16:30 alex_xu: that's what I'm not clear on. If the traits are not associated with anything, why is that a problem?
14:16:49 alex_xu: if any trait is in use, return 409
14:17:02 edleafe: yes, I have the same thinking as you; it isn't very unsafe
14:17:03 nothing gets deleted
14:17:44 maybe add that discussion in the alternatives section of the spec? would that help?
14:17:47 but I'm OK to add those APIs when we find we really need bulk create
14:17:51 later
14:18:00 409 for in-use seems a nice compromise to me
14:18:26 makes it hard to be accidentally destructive, or something
14:18:34 * jroll is having a hard time seeing a use case for "delete all custom traits"
14:18:43 jroll: exactly
14:18:45 at least PUT/DELETE /traits is called by the deployer; it isn't something we need in the scheduler report client
14:19:10 oh, delete all, yeah, not sure why that's needed
14:19:17 jroll: what about wanting to create all custom traits at once?
14:19:30 edleafe: I see no deletes there :)
14:19:38 With a PUT there would be
14:19:59 (if any existed beforehand)
14:20:08 maybe there's something I haven't read yet
14:20:36 edleafe: so your use case is "instead of adding to existing traits, a deployer might update custom traits by deleting all and re-adding"?
14:20:49 replacing the entire set
14:21:05 Well, that's the semantics of a PUT
14:21:19 what about adding the simple one-by-one bits now, and adding the bulk later, if we end up needing them?
14:21:32 johnthetubaguy: that's what cdent was proposing
14:21:43 johnthetubaguy: +1 to that
14:21:49 I guess I am +1 cdent then
14:22:00 so +1 to cdent :)
14:22:14 yeah I'm +1 for that
14:22:15 FWIW, I don't have a use case for this in mind. I just didn't understand the scary destructive potential
14:22:23 * jroll now sees the PUT case, just isn't sure it's needed yet
14:22:29 could always push the other ideas into a follow-up spec, if people want to have that conversation; just makes it non-blocking (you probably said that already, sorry)
14:23:20 It is really an API only called once after a new cloud deployment.
14:24:01 So to summarize: we'll update the spec to include these concerns, and if at a later date we find we need to add those API calls back, we'll handle that in a follow-up spec
14:24:06 Is that about right?
14:24:15 sounds right
14:24:48 not sure I followed the concerns, so please -W the spec with the consensus
14:25:20 * bauzas needs to disappear per $family duties
14:25:22 keeping it simple sounds good, so +1
14:25:24 #agreed We will update the traits spec to remove the mass creation/deletion of traits, and if at a later date we find we need to add those API calls back, we'll handle that in a follow-up spec
14:25:37 +1
14:25:43 edleafe: thanks for summarizing, I'm good with that
14:26:23 OK, let's move on
14:26:24 Traits POC series, starting with:
14:26:25 #link https://review.openstack.org/#/c/416007/
14:27:00 alex_xu: any concerns or problems with that series?
14:27:08 edleafe: thanks, I have three patches up for review
14:27:34 I think the only interesting thing is in https://review.openstack.org/#/c/441829/
14:28:11 It adds a TraitCache; beyond what ResourceClassCache does, it adds one more cache based on namespace
14:28:46 alex_xu: So is the concern about using underscore as a namespace separator?
14:29:26 edleafe: no, the underscore already got agreement; we already changed to use underscores everywhere
14:29:56 alex_xu: ok, good
14:30:06 the concern, I guess, is more about whether people like that namespace-based cache and whether the data structure is good for people
14:30:22 Are there any disagreements to discuss?
14:30:43 heh, you type faster :)
14:31:07 really? I type faster :)
14:31:46 Matthew has a suggestion about using a tree of dicts. But he is also looking for more comments.
14:32:11 Just introducing the interesting thing here, and looking for more feedback on the reviews
14:32:25 OK, then all of us should review https://review.openstack.org/#/c/441829/ for opinions on the caching-by-namespace design
14:32:37 edleafe: yea, thanks
14:33:14 There is also the Nested Resource Providers series, starting with:
14:33:17 #link https://review.openstack.org/#/c/415920/
14:33:38 Sort of stagnant right now
14:33:39 no recent activity on that
14:33:41 jinx
14:33:55 I've just realized that nobody bought me a coke in Atlanta :(
14:33:57 Typing speeds: alex_xu > edleafe > cdent
14:34:18 \o/
14:34:49 Let's move on
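
[Editor's note: a minimal sketch of the per-trait semantics the group converged on above — an idempotent PUT for a single trait at its own URL, and a DELETE that returns 409 while the trait is still associatedated with a resource provider. The function names, in-memory stores, and exact status codes below are illustrative assumptions, not the actual placement handler code.]

    # Hypothetical in-memory model of the agreed semantics; the real
    # placement service keeps traits and associations in the database.
    TRAITS = set()        # known custom trait names
    RP_TRAITS = {}        # resource provider UUID -> set of trait names

    def put_trait(name):
        """PUT /traits/{name}: create one trait, idempotently."""
        existed = name in TRAITS
        TRAITS.add(name)             # a repeated PUT changes nothing
        return 204 if existed else 201

    def delete_trait(name):
        """DELETE /traits/{name}: refuse while the trait is in use."""
        if name not in TRAITS:
            return 404
        if any(name in traits for traits in RP_TRAITS.values()):
            return 409               # in use: nothing gets deleted
        TRAITS.discard(name)
        return 204

    def put_rp_trait(rp_uuid, name):
        """PUT /rp/{id}/traits/{trait}: associate one trait with an RP."""
        if name not in TRAITS:
            return 400               # unknown trait
        RP_TRAITS.setdefault(rp_uuid, set()).add(name)
        return 204

[Per the #agreed above, bulk create/replace via a JSON body on /traits is dropped; the "create all custom traits at once" case is just a loop over put_trait.]

[Editor's note: the TraitCache patch itself is not quoted in this log, so the following is only a guess at the shape of a namespace-keyed cache (the "tree of dicts" Matthew suggested), assuming the underscore separator agreed above. See https://review.openstack.org/#/c/441829/ for the real design.]

    # Hypothetical sketch: a two-level dict keyed first by namespace,
    # then by full trait name, so all traits in one namespace can be
    # fetched without scanning the whole cache.
    class TraitCache(object):
        def __init__(self):
            self._tree = {}    # namespace -> {trait name -> trait id}

        @staticmethod
        def _namespace(name):
            # Guess: the namespace is everything before the final
            # underscore-separated segment, e.g. HW_CPU_X86 for
            # HW_CPU_X86_AVX2.
            return name.rsplit('_', 1)[0]

        def set(self, name, trait_id):
            self._tree.setdefault(self._namespace(name), {})[name] = trait_id

        def get(self, name):
            return self._tree.get(self._namespace(name), {}).get(name)

        def namespace(self, ns):
            # Cheap per-namespace lookup is the point of the extra level.
            return dict(self._tree.get(ns, {}))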
14:34:53 #topic Bugs
14:35:00 Any new bugs to discuss?
14:35:23 I have a notification review to discuss
14:35:29 can I?
14:35:55 sure
14:35:58 cdent: https://review.openstack.org/#/c/423872/ about your comment on line 26:
14:35:58 link?
14:36:02 "These use cases sound a bit made up. I mean they sound reasonable and all, but do we have actual use cases expressed by real humans for these notifications? If so, it would be great if we could capture those"
14:36:21 cdent: I posted your comment above
14:36:51 Yeah, that's just a general concern about us adding notifications for everything everywhere. Is there actually someone or something that's going to do something with these?
14:36:55 cdent: I don't understand "use cases expressed by real humans for these notifications"
14:37:06 cdent: can you explain what you want here
14:37:38 sure, so you say "maintainer wants to get the notifications when there are resources..."
14:37:48 My question is _why_ does the maintainer want that?
14:38:10 okay
14:38:20 Or to rephrase: what could they possibly do with that information?
14:38:27 * cdent nods
14:38:38 I know our public cloud would love to get a notification when a compute node comes online or goes offline
14:38:56 jroll: but that's not what this is, this is a proxy for that
14:39:06 the latter you could alert on
14:39:24 notifications are for getting the actual status of resources at some time interval
14:39:33 cdent: true, do we have notifications in nova for compute_nodes changes?
14:39:43 cdent: will update the info on that
14:40:08 diga: the question continues from there: what do you do with that status?
14:40:31 What I'm trying to avoid here is just creating notifications because we can _and_
14:40:56 cdent: got your question
14:40:59 okay
14:41:00 avoiding using the activity of placement to substitute for where we need notifications elsewhere that are more directly tied to use cases
14:41:30 ok
14:42:34 I'm thinking this might get real useful in the future; a capacity tracking system could just watch the placement service instead of watching nova, ironic, cinder, neutron, etc.
14:42:46 cdent: I will think through this and come back with actual notification needs for the use cases
14:42:58 thanks diga
14:43:02 (not sure about today though)
14:43:14 cdent: welcome!
14:43:50 With that, let's move on to...
14:43:52 #topic Open discussion
14:44:17 Anything to discuss?
14:44:22 edleafe: what is the timeframe for the pike-1 release
14:44:35 sorry, pike-1 cycle
14:44:35 quick question here - we talked about the ironic driver munging instance.flavor at some point to add the resource class; do we need a spec for that?
14:44:45 diga: GET /v1/nodes//states
14:44:49 nope, bad paste
14:44:52 https://releases.openstack.org/pike/schedule.html
14:44:54 https://releases.openstack.org/pike/schedule.html
14:44:55 that's the one :)
14:44:56 dammit again
14:45:01 :P
14:45:02 cdent: too slow
14:45:14 * cdent inserts jroll before cdent in typing speed list
14:45:15 :) okay
14:45:16 so much fail
14:45:39 jroll: that's a good question
14:45:45 jroll: I'd say probably yes
14:45:47 jroll: I would assume yes
14:45:54 * jroll nods
14:45:55 oh no, cdent is getting faster
14:46:05 * cdent limbers up
14:46:06 should I just maybe make one spec to cover that and the flavor changes?
14:46:16 that seems a reasonable starting point
14:46:21 well, there are a few flavor changes, let me rephrase
14:46:25 jroll: I would prefer that, but I'm not Matt
14:46:42 one for the overall "extra specs can override things" and another for the "plan to get ironic using this"
14:46:54 is what I'm thinking
14:47:04 "override things"??
14:47:44 edleafe: e.g. flavor.vcpus=0 + flavor.extra_specs:vcpus:2 => uses 0 for scheduling but 2 for display
14:47:53 er, that's backwards
14:47:59 but that thing :)
14:48:10 edleafe: I filed one bug today on nova_cell0 DBNonExistanceError
14:48:14 pabelanger, hey there! continuing debugging the package-installs problem... this is the build log http://paste.ubuntu.com/24125072/ for the elements I'm working on here https://review.openstack.org/#/c/411500/
14:48:22 pabelanger, you mentioned something about the version
14:48:24 vkmc: wrong channel? :)
14:48:30 yes, sorry
14:48:46 edleafe: to be clear, line 61 here https://etherpad.openstack.org/p/nova-ptg-pike-placement
14:49:10 ok
14:49:31 jroll: ah, I see. Didn't realize that display was an issue
14:49:41 if someone has any idea about this - https://bugs.launchpad.net/nova/+bug/1670262
14:49:41 Launchpad bug 1670262 in OpenStack Compute (nova) "DBNonExistenceDatabase: Unknown database "nova_cell0"" [Undecided,Incomplete]
14:50:07 I installed devstack, then while launching an instance I get this; the fix may be in devstack
14:50:21 anyway I should be able to start hacking on this next week sometime, so I hope to have some specs to review in 2 weeks at the latest
14:50:39 jroll: sounds good
14:51:40 diga: you should probably ask that in the #openstack-nova channel. I know that others have dealt with this
14:51:50 edleafe: Sure
14:52:21 Anything else to discuss? Or should we get back to our lives?
14:52:29 you have a life?
14:52:35 such as it is
14:52:38 * cdent wows
14:53:26 OK, thanks everyone!
14:53:27 #endmeeting
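
[Editor's note: jroll's example above, with the direction corrected per his "er, that's backwards": the flavor's standard fields stay as-is for display, while an extra-spec override changes what the scheduler consumes (e.g. zeroing out VCPU so an ironic flavor is matched purely on its custom resource class). The 'resources:...' extra-spec key and the helper below are assumptions for illustration, not a settled syntax.]

    # Hypothetical sketch of "extra specs can override things" for
    # scheduling while leaving the displayed flavor values untouched.
    def scheduling_amount(flavor, resource):
        """Amount of `resource` the scheduler should request."""
        override = flavor['extra_specs'].get('resources:%s' % resource)
        if override is not None:
            return int(override)   # override wins, for scheduling only
        return flavor[resource]    # otherwise the display value is used

    flavor = {'vcpus': 2, 'memory_mb': 4096,
              'extra_specs': {'resources:vcpus': '0'}}
    assert scheduling_amount(flavor, 'vcpus') == 0         # scheduler sees 0
    assert flavor['vcpus'] == 2                            # users still see 2
    assert scheduling_amount(flavor, 'memory_mb') == 4096  # no override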