*** ttsiouts has joined #openstack-placement | 03:08 | |
*** ttsiouts has quit IRC | 03:41 | |
*** bhagyashris has joined #openstack-placement | 04:55 | |
*** ttsiouts has joined #openstack-placement | 05:39 | |
*** ttsiouts has quit IRC | 06:12 | |
*** belmoreira has joined #openstack-placement | 06:32 | |
*** belmoreira has quit IRC | 06:56 | |
*** mcgigglier has joined #openstack-placement | 07:50 | |
*** belmoreira has joined #openstack-placement | 07:53 | |
*** belmoreira has quit IRC | 07:57 | |
*** ttsiouts has joined #openstack-placement | 08:01 | |
*** bauzas has left #openstack-placement | 08:05 | |
*** bauzas has joined #openstack-placement | 08:06 | |
*** belmoreira has joined #openstack-placement | 08:13 | |
*** tssurya has joined #openstack-placement | 08:38 | |
*** ttsiouts has quit IRC | 09:04 | |
openstackgerrit | Qiu Fossen proposed openstack/placement master: Cap sphinx for py2 to match global requirements https://review.opendev.org/659762 | 09:06 |
*** belmoreira has quit IRC | 09:08 | |
*** belmoreira has joined #openstack-placement | 09:10 | |
*** e0ne has joined #openstack-placement | 09:17 | |
*** bhagyashris has quit IRC | 09:36 | |
openstackgerrit | Merged openstack/placement master: Cap sphinx for py2 to match global requirements https://review.opendev.org/659762 | 09:46 |
*** belmoreira has quit IRC | 09:47 | |
*** belmoreira has joined #openstack-placement | 09:51 | |
*** belmoreira has quit IRC | 10:00 | |
*** belmoreira has joined #openstack-placement | 10:21 | |
*** cdent has joined #openstack-placement | 10:30 | |
*** belmoreira has quit IRC | 10:49 | |
*** ttsiouts has joined #openstack-placement | 11:01 | |
*** ttsiouts has quit IRC | 11:34 | |
*** cdent has quit IRC | 12:48 | |
*** dklyle has joined #openstack-placement | 12:50 | |
*** cdent has joined #openstack-placement | 13:01 | |
*** alex_xu has joined #openstack-placement | 13:05 | |
*** belmoreira has joined #openstack-placement | 13:20 | |
*** mriedem has joined #openstack-placement | 13:21 | |
*** belmoreira has quit IRC | 14:54 | |
*** belmoreira has joined #openstack-placement | 14:58 | |
*** mcgigglier has quit IRC | 15:05 | |
*** tssurya has quit IRC | 15:36 | |
cdent | latest pupdate: https://anticdent.org/placement-update-19-19.html | 15:38 |
stephenfin | Is it possible to list all allocations against a given resource provider via osc-placement? I was expecting to see an 'openstack resource provider allocation list' command but there doesn't seem to be one | 15:49
stephenfin | Also, calling 'openstack resource provider allocation show' with an invalid UUID doesn't result in any error output, which is annoying (I used a compute node UUID instead of an instance UUID, by mistake) | 15:51 |
efried | stephenfin: Were you expecting that to be a 404? | 15:54 |
stephenfin | Yeah | 15:55 |
efried | stephenfin: At what microversion? | 15:55 |
stephenfin | I mean, there's no allocation corresponding to that UUID, right? | 15:55 |
efried | Yeah, but `ls` of an empty directory isn't an error | 15:55 |
efried | `ls` of a *nonexistent* directory is an error | 15:56 |
stephenfin | Um, whatever Devstack has given me. I'm not providing the --os-placement-api-version param | 15:56 |
efried | but the tracking of consumers is... | 15:56 |
stephenfin | Why would the allocation exist? | 15:56 |
efried | okay, iirc osc-placement is going to default to 1.0, where we didn't track consumers at all yet | 15:56 |
efried | not the allocation. The consumer. | 15:56 |
stephenfin | Oh, yeah | 15:57 |
efried | Before we tracked consumers separately (would have to look up when that happened) the only cue to their existence was allocations existing. | 15:57 |
efried | But just because there's no allocation doesn't mean the consumer doesn't exist | 15:57 |
cdent | there is no difference between an identified allocation and a consumer. they are exactly the same thing | 15:57
stephenfin | $ openstack --os-placement-api-version 1.17 resource provider allocation show foobar | 15:58 |
stephenfin | 15:58 | |
stephenfin | --- | 15:58 |
stephenfin | that seems wrong | 15:58 |
cdent | when a group of allocations goes away, the consumer goes away | 15:58 |
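The empty-output behaviour stephenfin hit matches efried's `ls` analogy: asking placement for the allocations of an unknown consumer returns an empty set, not an error. A minimal sketch of those semantics (the store and function names here are illustrative, not real placement code):

```python
# Sketch of the semantics discussed above: looking up allocations for a
# consumer UUID that has none returns an empty "allocations" mapping,
# like `ls` of an empty directory, rather than raising a not-found error.

def show_allocations(store, consumer_uuid):
    """Mimic GET /allocations/{consumer_uuid}: empty dict, not an error."""
    return {"allocations": store.get(consumer_uuid, {})}

# Invented sample data: one known consumer with a single allocation.
store = {"instance-1": {"rp-1": {"resources": {"DISK_GB": 12}}}}

print(show_allocations(store, "instance-1"))
print(show_allocations(store, "not-a-consumer"))  # empty, no error
```

This is why passing a wrong UUID (e.g. a compute node UUID instead of an instance UUID) silently prints nothing instead of failing.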
cdent | stephenfin: I agree with you that it is confusing | 15:58 |
cdent | in fact the entire command is confusing | 15:58
stephenfin | Is there any way to fix it? | 15:58
cdent | because 'resource provider allocation' is really not at all the same as 'consumer allocation show' | 15:59 |
stephenfin | What I want is to find out all the things consuming resources from a provider | 15:59 |
stephenfin | i.e. all the instances using disk on a given node | 15:59 |
stephenfin | *e.g. | 15:59 |
cdent | that command, as far as I can tell, does not currently exist in osc-placement, but you're right that that name would have suggested it | 15:59 |
cdent | one moment, I'm looking at the code | 15:59 |
stephenfin | Is there a thing on the placement server side that would let us do that? | 16:00 |
stephenfin | This looks like what I want https://developer.openstack.org/api-ref/placement/?expanded=list-resource-provider-allocations-detail#resource-provider-allocations | 16:00 |
stephenfin | Any reason I couldn't hook that up to the 'openstack resource provider allocation list' command? | 16:01 |
cdent | I think it already is, I'm just looking to decode the code | 16:01 |
cdent | here we go: https://docs.openstack.org/osc-placement/latest/cli/index.html#resource-provider-show | 16:01 |
cdent | stephenfin: are you writing automation, or doing human exploring? | 16:02 |
stephenfin | Ah, I need the '--allocations' parameter | 16:02 |
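To answer the original question ("all the things consuming resources from a provider"), the payload behind that call, shaped like the API-ref's list-resource-provider-allocations response, can be summed per resource class. A sketch with invented sample consumer IDs and values:

```python
# Sketch: total per-resource-class usage across all consumers of one
# resource provider, given a payload shaped like placement's
# GET /resource_providers/{uuid}/allocations response.
# The consumer IDs and amounts below are invented sample data.
from collections import Counter

payload = {
    "resource_provider_generation": 7,
    "allocations": {
        "inst-a": {"resources": {"DISK_GB": 12, "VCPU": 1}},
        "inst-b": {"resources": {"DISK_GB": 20}},
    },
}

def usage_by_class(payload):
    """Sum allocated amounts per resource class across all consumers."""
    totals = Counter()
    for consumer, alloc in payload["allocations"].items():
        for resource_class, amount in alloc["resources"].items():
            totals[resource_class] += amount
    return dict(totals)

print(usage_by_class(payload))
```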
cdent | yeah, not at all obvious. Which is how I feel about all of osc-placement, but I tend to think in terms of the API, not the grammar of osc | 16:03 |
stephenfin | cdent: Exploring. A customer has switched to OSP13 and is complaining that placement is rejecting their requests due to insufficient storage | 16:03 |
stephenfin | and I'm thinking that placement is counting stuff actually stored elsewhere (a Ceph cluster?) against the local compute node | 16:03 |
cdent | ah, yeah. that seems to happen quite a bit. support folk in vmware have resorted to making a db query tool that reports a summary of all resource providers, their inventory, their usage | 16:03 |
cdent | (which is not too overwhelming because there's usually only a few resource providers in a cluster setup) | 16:04 |
stephenfin | yeah, I think we need to integrate it into our sosreport tool if it's not there already | 16:04
stephenfin | Can you remind me: the oft-quoted "shared storage" problem would result in us thinking we have more storage than we actually do, right? | 16:05 |
cdent | I assume you've already determined that placement is returning 0 results to the scheduler and it's not something filtering it out | 16:05
cdent | yes, more than we actually do | 16:05 |
cdent | well | 16:06 |
stephenfin | Aye. I'm seeing "Over capacity for DISK_GB on resource provider a5c17d03-56fb-4078-a88e-1c1eaf2f561c. Needed: 12, Used: 832, Capacity: 837.0" | 16:06 |
stephenfin | (I'd have closed with "this is working as expected" but the fact that this apparently worked before placement (Newton) prevents that :( Fun ) | 16:07 |
cdent | the problem in the ceph situation is that everyone is consuming that disk but with no central accounting (which is why shared providers need to happen), so I would have thought that the problem is more likely workloads trying to place and then discovering that when placement thought there was space, there wasn't (because some other compute node ate it) | 16:08 |
cdent | Is there a chance the allocation ratios are wrong for some mounted disk, such that the disk itself has plenty of space (because thin) but placement thinks it is consumed? | 16:09
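Placement derives usable capacity from the inventory record as (total - reserved) * allocation_ratio, which is why a mis-set disk_allocation_ratio can make a thin-provisioned disk look full. A sketch of the over-capacity check using the numbers from the log line above (total/reserved/ratio values are assumed for illustration):

```python
def capacity(total, reserved, allocation_ratio):
    # Effective capacity of an inventory record in placement.
    return (total - reserved) * allocation_ratio

def over_capacity(used, needed, total, reserved, allocation_ratio):
    # An allocation is rejected when used + needed exceeds capacity.
    return used + needed > capacity(total, reserved, allocation_ratio)

# From the log: "Needed: 12, Used: 832, Capacity: 837.0".
# Assuming total=837, reserved=0, allocation_ratio=1.0:
print(over_capacity(832, 12, 837, 0, 1.0))  # True: 844 > 837, rejected

# With disk_allocation_ratio=1.5 the same request would fit:
print(over_capacity(832, 12, 837, 0, 1.5))  # False: 844 <= 1255.5
```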
stephenfin | Yeah, that was my understanding of the situation too. Thanks for confirming | 16:09 |
cdent | the situation you have seems to be placement thinks there's no space, but the customer thinks there is. is that right? | 16:10 |
stephenfin | Correct | 16:10 |
stephenfin | I thought of allocation_ratio and asked for logs, but alas they're controller logs so not much use to me. I'll have to go back and ask for compute node logs | 16:10 |
stephenfin | *disk_allocation_ratio | 16:11 |
stephenfin | The output of 'openstack resource provider inventory list' would be beneficial too | 16:11 |
cdent | aye | 16:11 |
*** belmoreira has quit IRC | 16:11 | |
cdent | this back and forth stuff is so not fun | 16:11 |
stephenfin | Not at all | 16:12 |
cdent | I always want to be like: either grant me access to the machine and root right now, or go away | 16:12 |
stephenfin | But still, I'm generally happy to have osc-placement. It does make things a good deal easier than previously | 16:12 |
cdent | true | 16:12 |
stephenfin | When you can get access to the machine/can interact in real-time with someone at the machine | 16:13 |
cdent | anyway it is time for me to go home. let me know how it turns out. I'm curious to identify patterns on these sorts of things. | 16:13 |
cdent | bbl | 16:13 |
*** cdent has quit IRC | 16:13 | |
stephenfin | Will do o/ | 16:13 |
*** mriedem is now known as mriedem_away | 16:49 | |
*** e0ne has quit IRC | 17:17 | |
*** ttsiouts has joined #openstack-placement | 17:26 | |
*** mriedem_away is now known as mriedem | 18:23 | |
*** efried has quit IRC | 19:43 | |
*** efried has joined #openstack-placement | 19:43 | |
*** ttsiouts has quit IRC | 19:59 | |
*** ttsiouts has joined #openstack-placement | 20:15 | |
*** ttsiouts has quit IRC | 20:20 | |
*** ttsiouts has joined #openstack-placement | 20:51 | |
*** ttsiouts has quit IRC | 20:56 | |
*** ttsiouts has joined #openstack-placement | 21:28 | |
*** ttsiouts has quit IRC | 22:01 | |
*** mriedem has quit IRC | 22:20 | |
*** artom has quit IRC | 23:53 | |
*** ttsiouts has joined #openstack-placement | 23:59 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!