*** takashin has joined #openstack-placement | 00:59 | |
*** mriedem_away has quit IRC | 01:01 | |
*** tetsuro has joined #openstack-placement | 01:10 | |
*** tetsuro has quit IRC | 01:48 | |
*** tetsuro has joined #openstack-placement | 01:52 | |
*** tetsuro has quit IRC | 02:47 | |
*** tetsuro has joined #openstack-placement | 03:29 | |
*** tetsuro has quit IRC | 03:34 | |
zzzeek | efried: "Batches" mean you run several SELECT queries, or whatever it is you're running, with a portion of the IN list in each one | 04:50 |
*** tetsuro has joined #openstack-placement | 05:07 | |
*** e0ne has joined #openstack-placement | 05:15 | |
*** e0ne has quit IRC | 05:20 | |
*** dklyle_ has joined #openstack-placement | 05:24 | |
*** david-lyle has quit IRC | 05:27 | |
*** purplerbot has quit IRC | 05:27 | |
*** purplerbot has joined #openstack-placement | 05:28 | |
*** alex_xu has quit IRC | 05:30 | |
*** irclogbot_1 has quit IRC | 05:30 | |
*** irclogbot_0 has joined #openstack-placement | 05:31 | |
*** alex_xu has joined #openstack-placement | 05:33 | |
*** yikun has joined #openstack-placement | 06:08 | |
*** tetsuro has quit IRC | 06:10 | |
*** belmoreira has joined #openstack-placement | 06:40 | |
*** tetsuro has joined #openstack-placement | 07:33 | |
*** tssurya has joined #openstack-placement | 07:38 | |
*** helenafm has joined #openstack-placement | 07:38 | |
*** belmoreira has quit IRC | 07:38 | |
*** tetsuro has quit IRC | 07:38 | |
*** ttsiouts has joined #openstack-placement | 07:42 | |
*** belmoreira has joined #openstack-placement | 07:44 | |
*** takashin has left #openstack-placement | 08:00 | |
*** ttsiouts has quit IRC | 08:06 | |
*** ttsiouts has joined #openstack-placement | 08:07 | |
*** ttsiouts has quit IRC | 08:11 | |
*** ttsiouts has joined #openstack-placement | 08:16 | |
*** e0ne has joined #openstack-placement | 08:34 | |
*** tetsuro has joined #openstack-placement | 08:55 | |
*** tetsuro has quit IRC | 09:11 | |
*** cdent has joined #openstack-placement | 09:15 | |
*** belmoreira has quit IRC | 09:25 | |
stephenfin | What version was the first one that placement was required in? Newton or Ocata? | 09:37 |
openstackgerrit | Chris Dent proposed openstack/placement master: Nested provider performance testing https://review.opendev.org/665695 | 09:46 |
cdent | stephenfin: i'm pretty sure ocata was where it was required, newton optional | 09:48 |
cdent | i'd need to dig to confirm that though | 09:49 |
stephenfin | cdent: That's what the Queens docs were alluding to but I wasn't sure myself https://docs.openstack.org/nova/queens/user/placement.html | 09:49 |
stephenfin | "The placement-api service must be deployed at some point after you have upgraded to the 14.0.0 Newton release but before you can upgrade to the 15.0.0 Ocata release." | 09:50 |
cdent | stephenfin: sorry got distracted. yeah, that corresponds with my memory | 10:14 |
cdent | gibi, efried : a simple case of over logging: https://storyboard.openstack.org/#!/story/2005918 ? | 10:14 |
gibi | cdent: does this log simply say how many candidates are found per RP tree per request? | 10:24 |
gibi | I guess the per RP tree part makes it noisy | 10:25 |
gibi | do we have a log that states how many candidates are found per request in total? | 10:25 |
gibi | I guess that would be enough | 10:26 |
cdent | gibi: I think it is simply saying "for root provider X, there are N allocation requests" | 10:26 |
cdent | there are more informative logs nearby about the entire result set | 10:27 |
gibi | cdent: then I think we can drop this log | 10:28 |
cdent | this one doesn't really provide enough info to distinguish one AR under the same tree from another | 10:28 |
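For context, a rough paraphrase of the two logging styles being discussed, with hypothetical variable names rather than the actual code touched by the review below: the per-root-provider debug line fires once for every provider tree in the result set, while a single per-request summary (as gibi suggests) conveys the useful total.

    import logging

    LOG = logging.getLogger(__name__)

    def log_candidates(alloc_requests_by_root):
        # Noisy form: one debug line per root provider in the result set.
        for root_uuid, reqs in alloc_requests_by_root.items():
            LOG.debug("%d allocation requests under root provider %s",
                      len(reqs), root_uuid)
        # Quieter form: a single per-request summary.
        total = sum(len(reqs) for reqs in alloc_requests_by_root.values())
        LOG.debug("%d allocation requests in total", total)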
*** ttsiouts has quit IRC | 10:47 | |
*** ttsiouts has joined #openstack-placement | 10:48 | |
*** ttsiouts has quit IRC | 10:52 | |
*** ttsiouts has joined #openstack-placement | 11:32 | |
*** ttsiouts has quit IRC | 13:00 | |
*** ttsiouts has joined #openstack-placement | 13:01 | |
openstackgerrit | Chris Dent proposed openstack/placement master: Remove overly-verbose allocation request log https://review.opendev.org/666281 | 13:03 |
*** ttsiouts has quit IRC | 13:05 | |
bauzas | do folks remember if we had some problems around Queens with zombie allocations? | 13:11 |
*** mriedem has joined #openstack-placement | 13:11 | |
bauzas | I remember we stopped using _heal_allocations IIRC | 13:11 |
bauzas | but when? I don't remember | 13:11 |
cdent | bauzas: there have been multiple buglets throughout various parts of nova which have led to different styles of orphaned allocations. many of them have been fixed by mriedem, but he keeps finding more, i'm not sure of the timing on when they've all happened | 13:12 |
cdent | gibi has also done some related work | 13:12 |
bauzas | :/ | 13:12 |
gibi | I did not really work with zombie allocations | 13:13 |
cdent | I think I'm thinking of healing | 13:13 |
gibi | yeah | 13:13 |
bauzas | s/zombie/orphaned if you prefer | 13:13 |
gibi | I'm still working on healing missing port allocations | 13:13 |
bauzas | I have an internal bug about it | 13:14 |
cdent | i've got to leave but will back soonish | 13:15 |
*** ttsiouts has joined #openstack-placement | 13:18 | |
*** cdent has quit IRC | 13:21 | |
*** tssurya has quit IRC | 14:06 | |
*** cdent has joined #openstack-placement | 14:11 | |
bauzas | folks, back with my issue but with a bit more details, do we know if we delete consumer records when instances are deleted? | 14:26 |
bauzas | I'd dare say no | 14:26 |
edleafe | bauzas: https://github.com/openstack/placement/blob/master/placement/objects/consumer.py#L68 | 14:30 |
bauzas | thanks | 14:30 |
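What the linked code is getting at, consumers being cleaned up once they no longer have allocations (mriedem restates this below at 14:57), can be sketched roughly as follows; the table and column names (consumers.uuid, allocations.consumer_id) are assumptions for illustration, not necessarily placement's exact schema or code.

    import sqlalchemy as sa

    def delete_consumers_without_allocations(conn, consumers, allocations, uuids):
        # Delete only those consumers (from the given candidate uuids) for
        # which no allocation row still references them.
        no_allocs = ~sa.exists().where(
            allocations.c.consumer_id == consumers.c.uuid)
        stmt = consumers.delete().where(
            sa.and_(consumers.c.uuid.in_(uuids), no_allocs))
        return conn.execute(stmt).rowcount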
gibi | bauzas: do you see empty consumer records? or do those consumers consume some resources? | 14:31 |
bauzas | gibi: actually, the problem is more than that: my customer sees some orphaned allocations after upgrading from Ocata to Queens | 14:32 |
bauzas | so I guess it is normal that the consumers table isn't purged | 14:32 |
bauzas | the root cause being the allocations themselves | 14:32 |
cdent | bauzas: can you copy your internal bug upstream (to nova, not placement, probably)? It's hard to get a good sense of what the problem is without something to read | 14:34 |
bauzas | I'll first ask for more details | 14:34 |
bauzas | to my customer | 14:34 |
bauzas | FWIW, the bug is https://bugzilla.redhat.com/show_bug.cgi?id=1721068 but I'm triaging it down to something just about allocation records not being deleted | 14:35 |
openstack | bugzilla.redhat.com bug 1721068 in openstack-nova "allocations database is not properly cleaned" [Unspecified,New] - Assigned to nova-maint | 14:35 |
bauzas | the rest isn't a real problem | 14:35 |
cdent | bauzas: the fact that that mentions migrations and upgrades suggests some of the issues that mriedem has fixed are involved here | 14:37 |
bauzas | I don't have the whole knowledge of all mriedem's fixes | 14:38 |
cdent | nor do I | 14:39 |
mriedem | nor I | 14:39 |
bauzas | \o/ | 14:40 |
bauzas | ... and that's a Queens customer | 14:41 |
bauzas | but heh, will do what I can | 14:41 |
*** dklyle_ has quit IRC | 14:42 | |
cdent | bauzas: if you've got unconfirmed migrations across an upgrade, that is likely a factor | 14:42 |
*** dklyle has joined #openstack-placement | 14:42 | |
cdent | in which case, since you're back in queens, a manual cleanup of the database may be the way to go | 14:43 |
cdent | but I'm totally guessing. placement will do what nova asks. If nova forgets to ask... | 14:43 |
bauzas | I wonder if that could be somehow related to API microversion 1.8 | 14:43 |
mriedem | if someone wants to summarize that bug for me then maybe i know the issue, but i don't feel like digging into reading that right now | 14:43 |
bauzas | mriedem: don't worry, I'll try to dig into it first and make a better statement if I need your help | 14:44 |
mriedem | the best way i've found to debug anything related to nova + placement is write a functional test to recreate the scenario before trying to wrap my head around what the fix needs to be | 14:44 |
mriedem | i.e. https://review.opendev.org/#/c/663737/ | 14:44 |
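As a very rough skeleton of that functional-test approach (not the linked change itself): the base class, driver, and helper names below are recalled from nova's functional test helpers and may not match the tree exactly; treat them as assumptions to verify.

    from nova.tests.functional import integrated_helpers

    class TestLeakedMigrationAllocations(
            integrated_helpers.ProviderUsageBaseTestCase):
        # Assumed base class / driver / helper names; verify against nova.
        compute_driver = 'fake.MediumFakeDriver'

        def test_failed_migration_leaves_allocations(self):
            # Boot a server and check its allocations, then drive the failing
            # scenario and assert what should (and should not) be left behind.
            server = self._boot_and_check_allocations(self.flavor1, 'host1')
            # ... trigger the failing migration/resize here ...
            allocs = self._get_allocations_by_server_uuid(server['id'])
            self.assertEqual(1, len(allocs))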
bauzas | I think I may have a possible reason | 14:45 |
bauzas | if you upgrade from Ocata to Queens by some kind of FFU | 14:45 |
bauzas | then your allocation records don't have project_id/user_id | 14:46 |
bauzas | since https://docs.openstack.org/placement/latest/placement-api-microversion-history.html#require-placement-project-id-user-id-in-put-allocations is only for Pike | 14:46 |
bauzas | in this case, you could hit the problem that was fixed by https://review.opendev.org/#/c/574488/ but only for Rocky | 14:47 |
bauzas | cdent: mriedem: am I correct in assuming that we don't heal allocations in Queens if they don't have user_id/project_id? | 14:48 |
bauzas | since https://review.opendev.org/#/c/574488/ isn't backported to Queens and Pike ? | 14:48 |
mriedem | well you're correct that https://review.opendev.org/#/c/574488/ isn't backported to queens | 14:50 |
mriedem | and the commit message implies that people might backport it, "Note that we should be using Placement API version 1.28 with consumer_generation when updating the allocations, but since people might backport this change the usage of consumer generations is left for a follow up patch." | 14:50 |
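For reference, a hedged sketch of what that healing amounts to at the placement API level: re-PUT the consumer's allocations with the owning project/user, using microversion 1.28 so consumer_generation guards against races, as the quoted commit message notes. The endpoint and token are placeholders, not a real deployment.

    import requests

    PLACEMENT = 'http://placement.example.com/placement'  # placeholder endpoint
    HEADERS = {'OpenStack-API-Version': 'placement 1.28',
               'X-Auth-Token': 'ADMIN_TOKEN'}             # placeholder token

    def heal_consumer(consumer_uuid, project_id, user_id):
        url = '%s/allocations/%s' % (PLACEMENT, consumer_uuid)
        current = requests.get(url, headers=HEADERS).json()
        body = {
            # Re-PUT only the resources for each provider, now with the
            # owning project/user and the consumer generation from the GET.
            'allocations': {rp: {'resources': alloc['resources']}
                            for rp, alloc in current['allocations'].items()},
            'project_id': project_id,
            'user_id': user_id,
            'consumer_generation': current.get('consumer_generation'),
        }
        requests.put(url, json=body, headers=HEADERS).raise_for_status()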
mriedem | there was an online data migration for the consumer stuff when it was added though - why wasn't that run as part of the FFU? | 14:51 |
bauzas | good call, I can ask | 14:51 |
bauzas | the customer said it did | 14:51 |
bauzas | but I'll ask for showing the records | 14:51 |
bauzas | anyway, I got a clue, thanks | 14:52 |
mriedem | https://review.opendev.org/#/c/567678/12/nova/cmd/manage.py | 14:52 |
mriedem | this is all rocky code... | 14:52 |
mriedem | so i'm not sure why it would be a problem in queens | 14:52 |
*** belmoreira has joined #openstack-placement | 14:53 | |
bauzas | we require project_id and user_id to be part of the allocation since Pike | 14:53 |
mriedem | in that bz, the request spec cleanup during archive was something that's been addressed for deleted instances, but probably isn't in queens | 14:53 |
mriedem | https://review.opendev.org/#/q/I483701a55576c245d091ff086b32081b392f746e i guess it is | 14:54 |
bauzas | mriedem: yup, I already found it, and I have a comment for it | 14:55 |
bauzas | and nope, that was merged in Pike, so Queens got it | 14:55 |
mriedem | as the bz says, migrations could fail and nova could fail to cleanup the allocations held by the migration consumer, ala https://review.opendev.org/#/c/661349/ | 14:56 |
mriedem | which is a backport that recently merged in queens but isn't released yet | 14:56 |
mriedem | the consumer records should be automatically deleted in placement when their last remaining allocation is deleted, | 14:57 |
mriedem | so the thing to probably figure out / confirm is that the leftover consumers are tied to allocations for migration records, rather than instances | 14:57 |
mriedem | b/c we don't do such a great job of cleaning up migration-held allocations on failure | 14:58 |
mriedem | nor do we have tooling/periodics that scan for those and clean them up | 14:58 |
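A small sketch of that confirmation step, given consumer UUIDs pulled from placement and instance/migration UUIDs pulled from the nova database (how those sets are gathered is left out; this just assumes migration records act as their own placement consumers, per the migration-held allocations mentioned above):

    def classify_consumers(consumer_uuids, instance_uuids, migration_uuids):
        # Split leftover placement consumers into instance-held,
        # migration-held, and truly orphaned buckets.
        instance_held, migration_held, orphaned = [], [], []
        for uuid in consumer_uuids:
            if uuid in instance_uuids:
                instance_held.append(uuid)
            elif uuid in migration_uuids:
                migration_held.append(uuid)
            else:
                orphaned.append(uuid)
        return instance_held, migration_held, orphaned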
bauzas | mriedem: the consumer records are only deleted when allocations are gone as of Rocky, not Queens: Ic2b82146d28be64b363b0b8e2e8d180b515bc0a0 | 14:58 |
bauzas | oops | 14:58 |
bauzas | mriedem: https://review.opendev.org/#/c/581086/ | 14:58 |
bauzas | so we could backport this one | 14:58 |
mriedem | i guess, though that's not going to help the customer that already has a bunch of stale consumer records | 15:00 |
bauzas | yeah they'll need to purge those anyway | 15:00 |
mriedem | and hopefully not f up and delete consumers that have allocations | 15:00 |
bauzas | or if we backport the change, the table will heal at least for allocations that are removed after* | 15:01 |
*** Sundar has joined #openstack-placement | 15:24 | |
*** helenafm has quit IRC | 15:52 | |
*** belmoreira has quit IRC | 15:57 | |
*** ttsiouts has quit IRC | 16:02 | |
*** ttsiouts has joined #openstack-placement | 16:03 | |
*** ttsiouts has quit IRC | 16:07 | |
openstackgerrit | jacky06 proposed openstack/os-traits master: Sync Sphinx requirement https://review.opendev.org/666386 | 16:32 |
cdent | efried, mriedem: Would be great to get this new nested-perfload to some form of "useful" before we get too much further with the nested magic work. it's pretty close now, but some feedback needed. I left some prompts on the review. https://review.opendev.org/#/c/665695/ | 16:52 |
openstackgerrit | Chris Dent proposed openstack/placement master: DNM: See what happens with 10000 resource providers https://review.opendev.org/657423 | 16:59 |
cdent | mriedem: you might like this one (or at least feel vaguely qualified to care): https://review.opendev.org/663945 | 17:01 |
efried | cdent: ack | 17:03 |
*** e0ne has quit IRC | 17:20 | |
*** cdent has quit IRC | 17:22 | |
openstackgerrit | Colleen Murphy proposed openstack/placement master: Update SUSE install documentation https://review.opendev.org/666408 | 17:28 |
*** melwitt is now known as jgwentworth | 17:55 | |
efried | aspiers: As a suse-er, do you have the ability to validate ^ ? | 18:03 |
aspiers | yes | 18:04 |
efried | thanks | 18:08 |
openstackgerrit | Colleen Murphy proposed openstack/placement master: Update SUSE install documentation https://review.opendev.org/666408 | 18:09 |
*** e0ne has joined #openstack-placement | 18:35 | |
*** e0ne has quit IRC | 18:36 | |
*** e0ne has joined #openstack-placement | 19:05 | |
*** jgwentworth is now known as melwitt | 19:52 | |
*** artom has quit IRC | 20:02 | |
*** e0ne has quit IRC | 20:11 | |
*** belmoreira has joined #openstack-placement | 20:20 | |
*** e0ne has joined #openstack-placement | 20:55 | |
*** e0ne has quit IRC | 20:59 | |
*** e0ne has joined #openstack-placement | 21:00 | |
*** e0ne has quit IRC | 21:39 | |
*** mriedem has quit IRC | 21:59 | |
*** belmoreira has quit IRC | 22:13 | |
*** belmoreira has joined #openstack-placement | 22:16 | |
openstackgerrit | Eric Fried proposed openstack/placement master: Add a test for granular member_of flowing down https://review.opendev.org/666460 | 22:37 |
*** Sundar has quit IRC | 22:44 | |
*** belmoreira has quit IRC | 22:45 | |
openstackgerrit | Eric Fried proposed openstack/placement master: Spec for nested magic 1 https://review.opendev.org/662191 | 23:08 |
openstackgerrit | Eric Fried proposed openstack/placement master: Microversion 1.35: root_required https://review.opendev.org/665492 | 23:27 |
openstackgerrit | Eric Fried proposed openstack/placement master: Miscellaneous doc/comment/log cleanups https://review.opendev.org/665691 | 23:27 |