14:00:20 <rafaelweingartner> #startmeeting cloudkitty
14:00:20 <opendevmeet> Meeting started Mon May 12 14:00:20 2025 UTC and is due to finish in 60 minutes.  The chair is rafaelweingartner. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:20 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:20 <opendevmeet> The meeting name has been set to 'cloudkitty'
14:00:23 <rafaelweingartner> Hello guys!
14:00:26 <rafaelweingartner> Roll call
14:00:27 <rafaelweingartner> \o
14:01:32 <priteau> o/
14:03:10 <MattCrees[m]> o/
14:06:37 <rafaelweingartner> nice to see you guys here :)
14:06:42 <rafaelweingartner> #topic  Target reviews
14:07:23 <rafaelweingartner> #link https://review.opendev.org/c/openstack/cloudkitty/+/947908, this one has been in the making for a long time; it was ready, but now some tests have started breaking
14:07:29 <rafaelweingartner> we do not clearly understand why yet
14:08:22 <rafaelweingartner> sorry, wrong comment on this one :)
14:08:30 <rafaelweingartner> it was meant for another patch.
14:08:54 <rafaelweingartner> Patch #947908 is one where priteau requested some changes, and then the tests started failing
14:12:04 <priteau> Good news is that the Prometheus plugin for DevStack exists elsewhere
14:12:31 <rafaelweingartner> exactly
14:12:32 <priteau> And indeed, bad news is that the jobs are failing. Might be a more general issue in CloudKitty following the reopening of the master branch?
14:13:02 <rafaelweingartner> I am not sure
14:13:12 <rafaelweingartner> maybe those issues regarding library updates
14:13:20 <rafaelweingartner> I just faced some issues in Gnocchi
14:13:44 <rafaelweingartner> Last week when opening a patch, due to the gabbi upgrade and the oslo.middleware upgrade as well
14:13:53 <rafaelweingartner> maybe it is something similar
14:14:21 <MattCrees[m]> Details: b'Internal Server Error'... Not a very helpful error; it looks like it will need more in-depth debugging.
14:14:22 <MattCrees[m]> I would guess some dependency has changed recently, given all the projects have been making releases for Flamingo
14:14:25 <priteau> The logs show 500 internal server error when tempest tries to contact CK
14:14:42 <priteau> e.g.
14:14:44 <priteau> 10.0.17.190 - - [29/Apr/2025:11:40:45 +0000] "POST /rating/v1/collector/mappings/ HTTP/1.1" 500 231 "-" "python-urllib3/1.26.20"
14:14:46 <priteau> 10.0.17.190 - - [29/Apr/2025:11:40:45 +0000] "GET /rating/v1/collector/mappings/ HTTP/1.1" 500 231 "-" "python-urllib3/1.26.20"
14:15:10 <priteau> https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_66e/openstack/66e14448cada460fad8913caa7cc78f7/controller/logs/screen-ck-api.txt
14:15:16 <priteau> failed to open python file /opt/stack/data/venv/bin/cloudkitty-api
14:15:50 <rafaelweingartner> gabbi changed the way it integrates/works with WSGI
14:16:28 <priteau> Not sure it is gabbi in this case, maybe some change in devstack
14:16:30 <rafaelweingartner> it was causing an issue where gnocchi-api was not working. I did not look into the root cause and just pinned the gabbi version
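(For reference, a dependency pin like the one rafaelweingartner describes is typically a one-line workaround; the exact boundary version below is an assumption for illustration, not the actual pin used in Gnocchi:)

    # Illustrative workaround only: cap gabbi below the release suspected
    # of changing the WSGI integration. The '<4.0.0' bound is an assumption.
    pip install 'gabbi<4.0.0'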
14:17:14 <priteau> Someone should try running devstack manually
14:17:36 <rafaelweingartner> yes
14:20:16 <priteau> I will test it locally
14:21:55 <rafaelweingartner> thanks for taking this one priteau!
14:23:37 <rafaelweingartner> Moving on, we have #link https://review.opendev.org/c/openstack/cloudkitty/+/876643, which is the patch that has been in the making for a while
14:24:07 <rafaelweingartner> it has some test failures which seem to be related to the previous one that we discussed
14:25:11 <priteau> Let's focus on fixing the general issue first, and come back to this one soon
14:25:51 <mattcrees_> I have started testing this patch myself, but I still need to spend more time on it. Haven't found any issues with how it functions yet though
14:27:25 <rafaelweingartner> no worries
14:27:45 <rafaelweingartner> if you have questions/doubts or any other need for help, just let me know
14:29:12 <mattcrees_> will do :)
14:30:19 <rafaelweingartner> besides those, I noticed this one that I do not follow: #link https://review.opendev.org/c/openstack/cloudkitty/+/948480.
14:30:25 <rafaelweingartner> Do you guys know what this one is about?
14:31:51 <priteau> It's a minor change, I think we can approve it if gmaan is OK with it
14:32:45 <mattcrees_> The same kind of change has been made in many projects so it looks fine to me
14:33:24 <rafaelweingartner> ok, thanks
14:34:13 <mattcrees_> Actually, it looks like there's disagreements in the placement patch. We should probably wait to see how that pans out: https://review.opendev.org/c/openstack/placement/+/941792/comment/2cf097e3_61949354/
14:34:35 <rafaelweingartner> I did not understand the context
14:34:44 <rafaelweingartner> that is why I did not speak up before
14:36:19 <priteau> It's related to metadata about OpenStack governance
14:36:48 <priteau> I think we can ignore it until gmaan provides a new opinion
14:37:38 <rafaelweingartner> ok
14:39:22 <rafaelweingartner> Moving on, we have #link https://review.opendev.org/c/openstack/cloudkitty/+/946330
14:39:32 <rafaelweingartner> which seems to be some basic info being added to the CloudKitty docs
14:40:16 <priteau> Jobs failed but the logs are gone now
14:40:34 <priteau> I have rechecked
14:40:48 <rafaelweingartner> me too :)
14:41:09 <rafaelweingartner> let's see the output and then we can vote/evaluate the patch
14:43:22 <rafaelweingartner> and last, but not least, we have #link https://review.opendev.org/c/openstack/cloudkitty/+/946336
14:43:55 <rafaelweingartner> I am not sure if this is needed; it has been a while since I last installed from scratch, but I would expect these tables to be created by the system automatically already
14:44:06 <rafaelweingartner> I would need to test this one in depth to understand what happened
14:44:36 <priteau> I had not realised we had a bug there
14:45:12 <rafaelweingartner> me neither
14:45:21 <priteau> It doesn't look right though, it is creating the cloudkitty_storage_states table, which should already exist
14:45:25 <rafaelweingartner> and we use this feature quite a lot
14:46:29 <priteau> I think it's an operator issue from zigo
14:46:43 <priteau> The reprocessing was added in cloudkitty/storage_state/alembic/versions/9feccd32_create_reprocessing_scheduler.py
14:46:57 <priteau> We have two separate DB migrations
14:47:00 <priteau> Maybe he ran just one
14:47:05 <zigo> o/
14:47:44 <zigo> It's a mistake, I should abandon this patch, I believe.
14:48:18 <rafaelweingartner> =)
14:48:22 <zigo> These are created when doing init-storage.
14:48:31 <zigo> My init-storage was simply crashing ...
* zigo just abandoned the patch.
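(To summarize the two-step setup being discussed, both commands are existing CloudKitty console scripts; the attribution of which tables each step creates follows zigo's comment above:)

    # Step 1: run the main database migrations
    cloudkitty-dbsync upgrade

    # Step 2: initialize the storage backend; per the discussion, this is
    # the step that creates tables such as cloudkitty_storage_states
    cloudkitty-storage-init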
14:48:52 <priteau> Maybe we are missing something in our grenade tests though
14:49:12 <rafaelweingartner> what do you mean?
14:49:17 <zigo> Well, I don't really understand why this has been separated from the normal migrations, though.
14:49:57 <priteau> zigo: Neither do we :) Historical decisions from Objectif Libre
14:50:06 <rafaelweingartner> exactly
14:50:17 <rafaelweingartner> we just followed the process already in-place :)
14:50:22 <zigo> Maybe it was just done when creating the v2 storage?
14:50:42 <zigo> It'd probably be a good idea to "repair" the current state, but nothing urgent, IMO.
14:51:06 <priteau> rafaelweingartner: I mean that in grenade code, the upgrade_cloudkitty_database function only calls $CLOUDKITTY_BIN_DIR/cloudkitty-dbsync upgrade
14:51:21 <priteau> Maybe it needs to call the cloudkitty-storage-init binary too
14:51:32 <rafaelweingartner> I see
14:55:30 <zigo> Indeed.
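(A minimal sketch of the grenade change priteau suggests, assuming the function keeps its current shape; the function name and $CLOUDKITTY_BIN_DIR come from the discussion above, while the added call is hypothetical and not merged code:)

    # grenade plugin sketch (illustrative only)
    function upgrade_cloudkitty_database {
        # existing step: run the main DB migrations
        $CLOUDKITTY_BIN_DIR/cloudkitty-dbsync upgrade
        # proposed addition: also run the storage initialization so the
        # storage-state tables are created/upgraded too
        $CLOUDKITTY_BIN_DIR/cloudkitty-storage-init
    }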
14:56:59 <rafaelweingartner> And that is basically all from my side
14:57:05 <rafaelweingartner> do you guys have something else to add?
14:58:28 <priteau> Nothing from me
14:58:55 <mattcrees_> Nothing from me either
14:59:11 <rafaelweingartner> Ok
14:59:14 <rafaelweingartner> thank you all for participating. Have a nice week!
14:59:59 <mattcrees_> Thanks rafaelweingartner :)
15:00:02 <priteau> Thanks
15:00:04 <rafaelweingartner> #endmeeting