*** efried_2dmtg has quit IRC | 00:35 | |
*** bhagyashris has joined #openstack-placement | 02:20 | |
*** bhagyashris has quit IRC | 02:57 | |
*** bhagyashris has joined #openstack-placement | 04:06 | |
openstackgerrit | Tetsuro Nakamura proposed openstack/placement master: Use local config for placement-status CLI https://review.openstack.org/632600 | 05:57 |
openstackgerrit | Tetsuro Nakamura proposed openstack/placement master: Add upgrade status check for missing root ids https://review.openstack.org/632599 | 05:57 |
*** takashin has left #openstack-placement | 06:45 | |
*** e0ne has joined #openstack-placement | 07:38 | |
*** helenaAM has joined #openstack-placement | 08:12 | |
*** s10 has joined #openstack-placement | 08:50 | |
*** rubasov has quit IRC | 09:06 | |
*** rubasov has joined #openstack-placement | 09:06 | |
*** bhagyashris has quit IRC | 09:57 | |
*** s10 has quit IRC | 10:11 | |
*** gibi has quit IRC | 10:54 | |
*** gibi has joined #openstack-placement | 10:54 | |
*** e0ne has quit IRC | 11:59 | |
*** ioni has joined #openstack-placement | 12:15 | |
ioni | hello guys | 12:15 |
ioni | i was redirected from #openstack-nova to this channel | 12:15 |
ioni | i have these warnings on a couple of compute nodes: https://paste.xinu.at/ecVs8/ | 12:15 |
ioni | I was wondering what's the right way to fix deallocated resources in openstack queens | 12:16 |
*** cdent has joined #openstack-placement | 12:19 | |
*** tssurya has joined #openstack-placement | 12:30 | |
*** e0ne has joined #openstack-placement | 12:41 | |
*** avolkov has joined #openstack-placement | 13:02 | |
*** mriedem has joined #openstack-placement | 13:12 | |
*** cdent has quit IRC | 13:14 | |
jaypipes | ioni: good morning | 13:19 |
ioni | jaypipes, hello | 13:20 |
jaypipes | ioni: are you seeing this in your nova-compute logs on restart of the nova-compute service? | 13:20 |
ioni | jaypipes, yes | 13:20 |
jaypipes | ioni: and the nova-compute you are seeing it on was the source host of an instance that was *live* migrated to another host? | 13:21 |
*** cdent has joined #openstack-placement | 13:21 | |
ioni | jaypipes, or migrations that failed at the end because the port could not be allocated to the host, or other various problems that I fixed after that | 13:21 |
ioni | jaypipes, i don't usually use live migration | 13:22 |
jaypipes | ioni: ah, ok, that's the issue then. | 13:22 |
jaypipes | ioni: can you please provide me the UUIDs of the compute nodes for source and destination host as well as the instance's UUID? I will then give you some SQL statements to run. | 13:22 |
ioni | jaypipes, uuid of compute node is ID from openstack hypervisor list or host id? | 13:23 |
jaypipes | ioni: if the hypervisor list ID field is a UUID-looking thing, then yes. :) | 13:25 |
ioni | | a921d193-74d3-4b81-b473-6705c955e15c | cloudbox11 | | 13:25 |
ioni | | f6ac25cb-5ce9-4b91-a559-aa0a8d976255 | cloudbox13 | | 13:25 |
jaypipes | ioni: yeah, that's it. | 13:25 |
jaypipes | k. and 11 is the destination, 13 is the source? | 13:25 |
ioni | so first is the host with warning | 13:25 |
ioni | 13 is destination | 13:25 |
jaypipes | ah, yes | 13:26 |
jaypipes | and f5d12428-555f-4b5d-a555-60d36b85d73a is the instance UUID. | 13:26 |
ioni | indeed | 13:26 |
jaypipes | ok, hold please while I generate some SQL for you :) | 13:26 |
ioni | ok, i'll figure out what to do for each host in particular | 13:26 |
jaypipes | ioni: question for you... | 13:27 |
jaypipes | ioni: when you said "migrations that failed" and "other various problems that I fixed after that", can you elaborate a bit on what those other problems were? a failed migration should really be cleaning up any placement records that might have been changed during migration.. | 13:28 |
ioni | jaypipes, so, one way for a migration to fail is to have an instance that has a port with a subnet with a service-type like compute:foo | 13:28 |
ioni | because i didn't find a way to set up neutron to not schedule from a specific subnet | 13:29 |
ioni | so when i migrate that instance, it will fail to finish the migration, because the port cannot be moved | 13:29 |
ioni | i simply reset the state from error and start the instance | 13:29 |
ioni | and works | 13:29 |
ioni | now is on the new node | 13:30 |
ioni | i don't remember another case from failing | 13:30 |
ioni | there was a case where nova simply returned that it doesn't have permission to the image(i have images that are marked as deactivated) | 13:31 |
jaypipes | ioni: how many instances are we talking about here? | 13:31 |
ioni | i have like 20 nodes and a couple have warnings | 13:32 |
ioni | not all of them | 13:32 |
ioni | but i can manage them manually | 13:32 |
jaypipes | ioni: can you tell me what the results of the following SQL statement are? (execute this against the nova_api database please): SELECT a.* FROM allocations AS a JOIN resource_providers AS rp ON a.resource_provider_id = rp.id WHERE rp.uuid = 'a921d193-74d3-4b81-b473-6705c955e15c'\G | 13:34 |
jaypipes | ioni: sorry, hold up :) | 13:35 |
ioni | ok, i'll hold | 13:35 |
ioni | i was about to paste the result | 13:35 |
jaypipes | SELECT a.* FROM allocations AS a JOIN consumers AS c ON a.consumer_id = c.id JOIN resource_providers AS rp ON a.resource_provider_id = rp.id WHERE rp.uuid = 'a921d193-74d3-4b81-b473-6705c955e15c' AND c.uuid = 'f5d12428-555f-4b5d-a555-60d36b85d73a'\G | 13:36 |
jaypipes | ioni: ^ | 13:36 |
ioni | Empty set, 84 warnings (0.00 sec) | 13:36 |
ioni | let me check to see if it clearly was cloudbox11 | 13:37 |
jaypipes | ioni: can you pastebin the results of the first query pls? | 13:37 |
* jaypipes wonders if we remove the "-" from UUIDs.... | 13:37 | |
ioni | jaypipes, https://paste.xinu.at/hjBZex/ | 13:37 |
ioni | yes, it's cloudbox11 with destination cloudbox13 | 13:38 |
jaypipes | oh, duh, yeah, this is queens... | 13:38 |
ioni | yep | 13:38 |
jaypipes | we don't have the consumers table populated yet. | 13:38 |
jaypipes | OK, hold a minute. more SQL coming your way... | 13:38 |
ioni | just updated from pike to queens | 13:39 |
ioni | soon to rocky | 13:39 |
ioni | but wanted to make sure that everything is working fine | 13:39 |
jaypipes | ioni: SELECT a.* FROM allocations AS a | 13:41 |
jaypipes | JOIN resource_providers AS rp ON a.resource_provider_id = rp.id | 13:41 |
jaypipes | WHERE rp.uuid = 'a921d193-74d3-4b81-b473-6705c955e15c' | 13:41 |
jaypipes | AND a.consumer_id = 'f5d12428-555f-4b5d-a555-60d36b85d73a'; | 13:41 |
ioni | jaypipes, https://paste.xinu.at/YbNiJ/ | 13:42 |
jaypipes | ioni: and then please execute this (just want to verify the allocations don't exist on the destination before we update anything...) | 13:43 |
jaypipes | SELECT a.* FROM allocations AS a | 13:43 |
jaypipes | JOIN resource_providers AS rp ON a.resource_provider_id = rp.id | 13:43 |
jaypipes | WHERE rp.uuid = 'f6ac25cb-5ce9-4b91-a559-aa0a8d976255' | 13:43 |
jaypipes | AND a.consumer_id = 'f5d12428-555f-4b5d-a555-60d36b85d73a'; | 13:43 |
ioni | yeah, i figured it out that you wanted that | 13:43 |
ioni | it's allocated to cloudbox13 | 13:43 |
ioni | https://paste.xinu.at/5SQ/ | 13:43 |
ioni | so now i have to delete 4567-4569 | 13:44 |
ioni | there was a resize involved, that's why it has more ram and disk | 13:44 |
ioni | on resize, most of the time, nova wants to migrate | 13:44 |
jaypipes | ioni: yes. I just needed to make sure it was safe to do that. so, you can execute this now: DELETE FROM allocations WHERE resource_provider_id = 8 AND consumer_id = 'f5d12428-555f-4b5d-a555-60d36b85d73a'; | 13:45 |
jaypipes | ioni: for the other instance that was affected by the failed migration, perform the same steps. just make sure you don't DELETE before checking that the allocations table contains records for both the source and destination resource provider :) | 13:46 |
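The check-then-delete procedure jaypipes describes above can be sketched as follows. This is a minimal illustration against a toy sqlite3 schema: the table and column names follow the conversation, but the contents, UUID placeholders, and helper function are hypothetical, not the real nova_api data or any OpenStack tooling. Always verify the destination host's allocations exist before deleting the stale source-host rows.

```python
import sqlite3

# Toy schema mirroring the nova_api tables discussed above (names taken
# from the conversation; the real tables carry more columns).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE resource_providers (id INTEGER PRIMARY KEY, uuid TEXT);
CREATE TABLE allocations (
    id INTEGER PRIMARY KEY,
    resource_provider_id INTEGER,
    consumer_id TEXT,
    used INTEGER
);
INSERT INTO resource_providers VALUES (8, 'SOURCE-UUID'), (9, 'DEST-UUID');
-- stale allocations left on the source host after the failed migration
INSERT INTO allocations VALUES (4567, 8, 'INSTANCE-UUID', 4),
                               (4568, 8, 'INSTANCE-UUID', 8192),
                               (4569, 8, 'INSTANCE-UUID', 40);
-- healthy allocations on the destination host
INSERT INTO allocations VALUES (4570, 9, 'INSTANCE-UUID', 4);
""")

def cleanup_stale_allocations(conn, src_uuid, dst_uuid, instance_uuid):
    """Delete the instance's allocations on the source provider, but only
    after confirming allocations exist on the destination provider."""
    check = """SELECT COUNT(*) FROM allocations AS a
               JOIN resource_providers AS rp ON a.resource_provider_id = rp.id
               WHERE rp.uuid = ? AND a.consumer_id = ?"""
    (on_dest,) = conn.execute(check, (dst_uuid, instance_uuid)).fetchone()
    if not on_dest:
        raise RuntimeError("no allocations on destination; refusing to delete")
    # Equivalent of: DELETE FROM allocations
    #   WHERE resource_provider_id = <source id> AND consumer_id = <instance>
    cur = conn.execute(
        """DELETE FROM allocations WHERE consumer_id = ?
           AND resource_provider_id =
               (SELECT id FROM resource_providers WHERE uuid = ?)""",
        (instance_uuid, src_uuid))
    return cur.rowcount

deleted = cleanup_stale_allocations(conn, 'SOURCE-UUID', 'DEST-UUID',
                                    'INSTANCE-UUID')
```

The same pattern applies per affected instance: run the SELECT against both provider UUIDs first, and only issue the DELETE once the destination rows are confirmed.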
ioni | jaypipes, is it not safe to use id? | 13:47 |
ioni | delete from allocations where id='4567' ? | 13:47 |
jaypipes | ioni: oh, yes, you can do that too. | 13:47 |
jaypipes | DELETE FROM allocations WHERE id BETWEEN 4567 AND 4569; | 13:48 |
ioni | jaypipes, because i don't know what resource_provider_id is :D | 13:48 |
ioni | it doesn't seem to be unique | 13:48 |
jaypipes | ioni: that's the source host provider internal ID. | 13:48 |
jaypipes | ioni: my original DELETE expression is saying "delete the allocation for this particular instance on the source host" | 13:49 |
ioni | jaypipes, cool. thanks for the hints | 13:49 |
ioni | i can now resolve my issue | 13:49 |
jaypipes | ioni: glad to be of assistance. let us know if you need any further help. | 13:49 |
ioni | i'll lurk around if you don't mind | 13:50 |
jaypipes | not a problem :) always happy to have folks lurk! | 13:50 |
*** jaypipes is now known as leakypipes | 13:50 | |
*** e0ne has quit IRC | 14:50 | |
*** cdent has quit IRC | 15:00 | |
*** cdent has joined #openstack-placement | 15:37 | |
*** e0ne has joined #openstack-placement | 15:55 | |
*** efried has joined #openstack-placement | 16:02 | |
*** efried is now known as efried_mtg | 16:05 | |
*** e0ne has quit IRC | 16:39 | |
*** tssurya has quit IRC | 16:43 | |
*** helenaAM has quit IRC | 17:04 | |
*** mriedem is now known as mriedem_afk | 17:10 | |
melwitt | is the placement api-ref being published in a new place? https://developer.openstack.org/api-ref/placement/ doesn't have the aggregates API, for example | 18:03 |
melwitt | oh, nevermind, it does | 18:04 |
cdent | melwitt: the ordering is perhaps a bit unintuitive | 18:17 |
melwitt | I was looking for something that said Aggregates but it's Resource provider aggregates. it's just me | 18:19 |
melwitt | I was too hasty | 18:20 |
*** mriedem_afk is now known as mriedem | 18:41 | |
*** dklyle has joined #openstack-placement | 19:17 | |
*** cdent has quit IRC | 19:24 | |
*** e0ne has joined #openstack-placement | 19:26 | |
*** dklyle has quit IRC | 19:51 | |
*** avolkov has quit IRC | 20:07 | |
*** dklyle has joined #openstack-placement | 20:13 | |
*** e0ne has quit IRC | 20:18 | |
*** efried_mtg has quit IRC | 20:31 | |
*** dklyle has quit IRC | 20:55 | |
*** dklyle has joined #openstack-placement | 20:59 | |
*** dklyle has quit IRC | 21:13 | |
*** dklyle has joined #openstack-placement | 23:13 | |
*** dklyle has quit IRC | 23:20 | |
*** efried has joined #openstack-placement | 23:23 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!