15:06:43 <noonedeadpunk> #startmeeting openstack_ansible_meeting
15:06:43 <opendevmeet> Meeting started Tue Nov  4 15:06:43 2025 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:06:43 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:06:43 <opendevmeet> The meeting name has been set to 'openstack_ansible_meeting'
15:06:56 <noonedeadpunk> #topic rollcall
15:06:59 <noonedeadpunk> o/
15:09:18 <noonedeadpunk> #topic office hours
15:10:01 <noonedeadpunk> We had a quite productive PTG session last week, I am still lagging behind a little bit in terms of sending out the summary
15:10:17 <noonedeadpunk> I am expecting to send it out today by EOD
15:11:17 <noonedeadpunk> We had also plenty of progress regarding Debian 13 support
15:11:24 <noonedeadpunk> all votes are there, CI is passing
15:12:05 <noonedeadpunk> If everything goes according to plan - we have a good chance to land the patches today or tomorrow
15:14:51 <noonedeadpunk> PKI progress - not much so far
15:15:09 <noonedeadpunk> I am still struggling to review the patch to the pki role adding the vault driver
15:15:24 <noonedeadpunk> it somehow always ends up somewhere at the end of the list
15:17:42 <noonedeadpunk> And we're coming to the end of the development cycle for 2025.2. Ideally, we need to branch next week, according to the ML sent by releases team yesterday
15:18:07 <noonedeadpunk> so we're running on time wrt vault driver :*
15:18:16 <noonedeadpunk> *out of time
15:19:05 <noonedeadpunk> we also need to make new minor releases this week for all supported branches
15:19:39 <noonedeadpunk> #link https://security.openstack.org/ossa/OSSA-2025-002.html
15:31:53 <noonedeadpunk> so once patches are merged to keystone, I will propose a bump and a release
15:32:19 <noonedeadpunk> Would be nice if we could backport and merge the things eligible for backporting
15:33:06 <noonedeadpunk> and we probably should start with https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/965941
15:34:10 <noonedeadpunk> as it's likely blocking this one: https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/965142
15:35:04 <noonedeadpunk> as well as https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/965342
15:36:41 <noonedeadpunk> that's pretty much it from my side.
15:46:50 <jrosser> o/ hello
15:50:27 <noonedeadpunk> o/
15:50:50 <noonedeadpunk> Should we briefly discuss gluster thing?
15:51:21 <NeilHanlon> ah crap, timezone change got me :(
15:51:24 <opendevreview> Merged openstack/openstack-ansible-rabbitmq_server master: Bump RabbitMQ version to 4.1 series  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/965197
15:51:25 <NeilHanlon> gluster sure
15:51:26 <noonedeadpunk> yup...
15:52:09 <noonedeadpunk> so the thing is that we're not handling a host being re-installed inside of the gluster role
15:52:42 <noonedeadpunk> but also during container re-creation we leave stale data on the host
15:53:05 <noonedeadpunk> thus I'm thinking about 2 possible actions here
15:53:39 <noonedeadpunk> first - try to handle clean-up, or a better way to re-add containers/hosts to the cluster after an OS re-install for instance
15:54:08 <noonedeadpunk> and second - actually move /openstack/gluster/<hostname> to /openstack/<hostname>/gluster
15:54:29 <noonedeadpunk> so that when you drop the container, the gluster data is also cleaned up
15:54:36 <mgariepy> it was working ok on 20.04 > 22.04 upgrade tho.
15:55:08 <mgariepy> oh, i did a dist-upgrade on the hosts.
15:55:20 <noonedeadpunk> I think it works for us, but it's somewhat unobvious behavior
15:55:47 <noonedeadpunk> at least we already got a couple of confused users in IRC within a couple of weeks
15:56:20 <noonedeadpunk> meaning - we have smth to improve :)
15:56:37 <mgariepy> yep.
15:59:26 <noonedeadpunk> I have a sandbox to play with, and will try to come up with smth
16:00:00 <noonedeadpunk> I wish gluster output could be in some parsable format, but I failed to find anything
16:00:38 <jrosser> i think that i'd intended the gluster state to persist on the host if the container was deleted
16:01:24 <jrosser> there is a uuid and other state in the data on each host which is shared across the cluster
16:02:02 <jrosser> if you destroy a node and reprovision it, it's quite easy to end up with gluster showing N+1 nodes but one missing
16:02:29 <jrosser> this is quite easy to play with on a `infra` AIO because i think it makes 3 repo containers
16:02:35 <noonedeadpunk> yeah, so I'm thinking if, based on the uuids, we can drop the stale version of it
16:02:55 <noonedeadpunk> as when the container is re-spawned it gets a new UUID
16:03:12 <jrosser> yes, then you have to somehow determine the one that should not be there and get rid of it
16:03:16 <noonedeadpunk> so get all current uuids, get peers, intersect, drop missing
16:03:27 <jrosser> indeed that could work
16:03:38 <jrosser> but needs to be super careful if host down or something
16:03:40 <noonedeadpunk> one problem I don't know how to figure out is the mapping of an old uuid to a hostname
16:03:58 <noonedeadpunk> as you need to drop peer by hostname specifically, you somehow can't do that by uuid
16:04:57 <noonedeadpunk> I was thinking to do action only if uuid diff exists. if host is down, it should not be a problem I guess
16:05:06 <noonedeadpunk> as it won't produce weird things...
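[Editor's sketch of the "get all current uuids, get peers, intersect, drop missing" logic discussed above. All names here are illustrative, not from the actual gluster role; it assumes the cluster state gives a uuid-to-hostname mapping, which also addresses the "drop peer by hostname" problem.]

```python
def stale_peers(current, known):
    """Return peers the cluster knows about but no live member holds.

    current: dict hostname -> uuid of the re-inventoried (live) members
    known:   dict uuid -> hostname as reported by the cluster state
    """
    live_uuids = set(current.values())
    # only peers whose uuid is no longer held by any live member are stale;
    # if the dicts agree, this is empty and no action is taken
    return {uuid: host for uuid, host in known.items() if uuid not in live_uuids}

# repo3 was re-created and got a new uuid "ddd"; the cluster still
# remembers the old uuid "ccc", so that peer is flagged for removal
current = {"repo1": "aaa", "repo2": "bbb", "repo3": "ddd"}
known = {"aaa": "repo1", "bbb": "repo2", "ccc": "repo3"}
# stale_peers(current, known) -> {"ccc": "repo3"}
```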
16:08:13 <jrosser> gluster get-state
16:08:15 <jrosser> ewwwwwww
16:09:06 <jrosser> `gluster peer status`
16:09:40 <jrosser> but i don't have an example in the state with a deleted/recreated node just now
16:09:48 <noonedeadpunk> yeah, it's not really parsable
16:10:14 <noonedeadpunk> it looks smth like this: https://paste.openstack.org/show/b6exkK3d7xbTlfwApghn/
16:10:24 <noonedeadpunk> `State: Peer Rejected (Connected)`
16:11:19 <noonedeadpunk> but I was thinking of smth like this: https://paste.openstack.org/show/bD4Xhlsxp6Vfhwnwf8WG/
16:12:03 <jrosser> yes we can slurp these
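[Editor's note: the gluster CLI has a global `--xml` switch that most subcommands honour, which may be easier to slurp than `get-state` text. A minimal parsing sketch, assuming `gluster peer status --xml` is available; the exact element names can vary between GlusterFS versions.]

```python
import subprocess  # for invoking the gluster CLI (usage shown below)
import xml.etree.ElementTree as ET


def peer_map(xml_text):
    """Parse `gluster peer status --xml` output into a uuid -> hostname map."""
    root = ET.fromstring(xml_text)
    # each <peer> element carries <uuid>, <hostname>, and state fields
    return {p.findtext("uuid"): p.findtext("hostname") for p in root.iter("peer")}


# Usage on a node with glusterd running (hypothetical, untested here):
#   out = subprocess.run(["gluster", "peer", "status", "--xml"],
#                        capture_output=True, text=True, check=True).stdout
#   peer_map(out)
```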
16:13:11 <noonedeadpunk> #endmeeting