15:06:43 #startmeeting openstack_ansible_meeting
15:06:43 Meeting started Tue Nov 4 15:06:43 2025 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:06:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:06:43 The meeting name has been set to 'openstack_ansible_meeting'
15:06:56 #topic rollcall
15:06:59 o/
15:09:18 #topic office hours
15:10:01 We had quite a productive PTG session last week; I am still lagging behind a little bit in terms of sending out the summary
15:10:17 I am expecting to send it out today by EOD
15:11:17 We also had plenty of progress regarding Debian 13 support
15:11:24 all votes are there, CI is passing
15:12:05 If everything goes according to plan, we have every chance to land the patches today or tomorrow
15:14:51 PKI progress - not much so far
15:15:09 I am still struggling to review the patch to the pki role adding the vault driver
15:15:24 it somehow always ends up somewhere at the end of the list
15:17:42 And we're coming to the end of the development cycle for 2025.2. Ideally, we need to branch next week, according to the ML sent by the releases team yesterday
15:18:07 so we're running on time wrt the vault driver :*
15:18:16 *out of time
15:19:05 we also need to make new minor releases this week for all supported branches
15:19:39 #link https://security.openstack.org/ossa/OSSA-2025-002.html
15:31:53 so once the patches are merged to keystone, I will propose a bump and a release
15:32:19 Would be nice if we could backport and merge the things that are eligible for backport
15:33:06 and we probably should start with https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/965941
15:34:10 as it's likely blocking this one: https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/965142
15:35:04 as well as https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/965342
15:36:41 that's pretty much it from my side.
15:46:50 o/ hello
15:50:27 o/
15:50:50 Should we briefly discuss the gluster thing?
15:51:21 ah crap, timezone change got me :(
15:51:24 Merged openstack/openstack-ansible-rabbitmq_server master: Bump RabbitMQ version to 4.1 series https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/965197
15:51:25 gluster sure
15:51:26 yup...
15:52:09 so kind of the thing is that we're not handling a host being re-installed inside of the gluster role
15:52:42 but also during container re-creation we leave stale data on the host
15:53:05 thus I'm thinking about 2 possible actions here
15:53:39 first - try to handle clean-up, or a better way to re-add containers/hosts to the cluster after an OS re-install for instance
15:54:08 and second - actually move /openstack/gluster/ to /openstack//gluster
15:54:29 so that when you drop the container, gluster data is also cleaned up
15:54:36 it was working ok on the 20.04 > 22.04 upgrade tho.
15:55:08 tho i did a dist-upgrade on the hosts.
15:55:20 I think it works for us, but it's somewhat unobvious behavior
15:55:47 at least we've already had a couple of confused users in IRC within a couple of weeks
15:56:20 meaning - we have smth to improve :)
15:56:37 yep.
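[editor's note] The first option discussed above (handling a re-installed host) roughly amounts to detaching the stale peer entry and probing the node again so it rejoins with its new UUID. Below is a minimal sketch of that manual step as Ansible tasks; the `gluster_all` group and the `repo1` hostname are placeholders, not the role's real inventory names.

```yaml
# Hypothetical recovery sketch: drop the peer entry left in "Peer Rejected"
# state after a host re-install, then probe the node so it rejoins the pool.
- hosts: gluster_all[0]          # placeholder group; run from one healthy member
  gather_facts: false
  vars:
    stale_peer: repo1            # placeholder hostname of the re-installed node
  tasks:
    - name: Detach the stale peer entry
      ansible.builtin.command: gluster peer detach {{ stale_peer }} force
      changed_when: true

    - name: Probe the re-installed node so it rejoins with its new UUID
      ansible.builtin.command: gluster peer probe {{ stale_peer }}
      changed_when: true
```

An automated version inside the role would additionally have to decide which peer is stale, which is what the UUID-intersection idea discussed below is about.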
15:59:26 I have a sandbox to play with, and will try to come up with smth
16:00:00 I wish gluster output could be in some parsable format, but I failed to find anything
16:00:38 i think that i'd intended the gluster state to persist on the host if the container was deleted
16:01:24 there is a uuid and other state in the data on each host which is shared across the cluster
16:02:02 if you destroy a node and reprovision it, it's quite easy to end up with gluster being N+1 nodes but one missing
16:02:29 this is quite easy to play with on an `infra` AIO because i think it makes 3 repo containers
16:02:35 yeah, so I'm thinking whether, based on the uuids, we can drop the stale version of it
16:02:55 as when a container is re-spawned it gets a new UUID
16:03:12 yes, then you have to somehow determine the one that should not be there and get rid of it
16:03:16 so get all current uuids, get peers, intersect, drop missing
16:03:27 indeed that could work
16:03:38 but needs to be super careful if a host is down or something
16:03:40 one problem I don't understand how to figure out, is mapping the old uuid to a hostname
16:03:58 as you need to drop a peer by hostname specifically, you somehow can't do that by uuid
16:04:57 I was thinking to do the action only if a uuid diff exists. if a host is down, it should not be a problem I guess
16:05:06 as it won't produce weird things...
16:08:13 gluster get-state
16:08:15 ewwwwwww
16:09:06 `gluster peer status`
16:09:40 but i don't have an example of the state with a deleted/recreated node just now
16:09:48 yeah, it's not really parsable
16:10:14 it looks smth like this: https://paste.openstack.org/show/b6exkK3d7xbTlfwApghn/
16:10:24 `State: Peer Rejected (Connected)`
16:11:19 but I was thinking of smth like this: https://paste.openstack.org/show/bD4Xhlsxp6Vfhwnwf8WG/
16:12:03 yes we can slurp these
16:13:11 #endmeeting
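[editor's note] A rough sketch of the "get all current uuids, get peers, intersect, drop missing" idea from the discussion, reading UUIDs and peer hostnames straight from the glusterd state directory (the "slurp these" suggestion at the end). The group name is a placeholder, the /var/lib/glusterd layout is assumed rather than taken from the role, and a real implementation would need the host-down guard raised above.

```yaml
# Hypothetical cleanup sketch, not the role's actual logic.
# Idea: every live node's own UUID comes from /var/lib/glusterd/glusterd.info,
# every known peer (UUID + hostname) comes from /var/lib/glusterd/peers/*,
# and anything known-but-not-live gets detached by hostname, since
# `gluster peer detach` takes a hostname rather than a UUID.
- hosts: gluster_all             # placeholder group name
  gather_facts: false
  tasks:
    - name: Read this node's own gluster UUID
      ansible.builtin.slurp:
        src: /var/lib/glusterd/glusterd.info
      register: _glusterd_info

    - name: List the peer state files on one node
      ansible.builtin.find:
        paths: /var/lib/glusterd/peers
      run_once: true
      register: _peer_files

    - name: Read the peer state files
      ansible.builtin.slurp:
        src: "{{ item.path }}"
      run_once: true
      loop: "{{ _peer_files.files }}"
      register: _peer_contents

    - name: Detach peers whose UUID is not owned by any live node
      ansible.builtin.command: gluster peer detach {{ _peer_host }}
      run_once: true
      changed_when: true
      loop: "{{ _peer_contents.results }}"
      loop_control:
        label: "{{ _peer_host }}"
      vars:
        _peer_uuid: "{{ item.content | b64decode | regex_search('uuid=.*') | replace('uuid=', '') | trim }}"
        _peer_host: "{{ item.content | b64decode | regex_search('hostname1=.*') | replace('hostname1=', '') | trim }}"
        # UUIDs of every host that is actually in the play right now; an
        # unreachable host never contributes its UUID, so a real role would
        # have to guard against detaching a healthy-but-down peer here.
        _live_uuids: >-
          {{ ansible_play_hosts
             | map('extract', hostvars, ['_glusterd_info', 'content'])
             | map('b64decode')
             | map('regex_search', 'UUID=.*')
             | map('replace', 'UUID=', '')
             | map('trim')
             | list }}
      when: _peer_uuid not in _live_uuids
```

`gluster get-state` and `gluster peer status` shown in the pastes carry the same information, but reading the state files avoids parsing the unstructured CLI output.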