15:00:40 <noonedeadpunk> #startmeeting openstack_ansible_meeting
15:00:40 <opendevmeet> Meeting started Tue Feb 21 15:00:40 2023 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:40 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:40 <opendevmeet> The meeting name has been set to 'openstack_ansible_meeting'
15:00:44 <noonedeadpunk> #topic rollcall
15:00:49 <noonedeadpunk> \o/
15:00:58 <damiandabrowski> hi!
15:06:34 <noonedeadpunk> #topic bug triage
15:06:42 <noonedeadpunk> We have a couple of new bugs here
15:07:39 <noonedeadpunk> #link https://bugs.launchpad.net/openstack-ansible/+bug/2007296
15:08:57 <noonedeadpunk> Basically the idea/proposal here was to create a folder under inventory/group_vars for each group we have, and move playbooks/defaults/repo_packages there
15:09:19 <noonedeadpunk> but some naming convention for the files should be present, so that the bump script could find and update them
15:09:50 <noonedeadpunk> This will also affect the haproxy thing I believe, as a directory is worth using there instead of a group_vars file
15:09:57 <noonedeadpunk> any thoughts on that?
15:10:50 <damiandabrowski> IMO it's ok, we should leverage group_vars more often. That's also what i did for separated haproxy service config
15:12:43 <noonedeadpunk> I'd say it would be a bit tougher to find the version that's being used, as the file location will depend on the group
15:13:00 <noonedeadpunk> But not sure it matters much to be frank
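A rough sketch of what the proposed layout could look like (the file name `source_git_install_options.yml` and the variable names shown are hypothetical here; the actual naming convention the bump script would match was left open in the discussion):

```
inventory/
  group_vars/
    glance_all/
      source_git_install_options.yml   # glance_git_repo, glance_git_install_branch, ...
    nova_all/
      source_git_install_options.yml   # nova_git_repo, nova_git_install_branch, ...
    haproxy_all/
      ...                              # per-service haproxy config could live here too
```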
15:15:11 <noonedeadpunk> Ok, next one
15:15:34 <noonedeadpunk> #link https://bugs.launchpad.net/openstack-ansible/+bug/2007849
15:15:59 <noonedeadpunk> I don't have anything to say here... I wasn't really digging deep into code of our linear implementation
15:16:20 <noonedeadpunk> But it looks like it's not even required after all?
15:18:03 <damiandabrowski> i also didn't dig deeper into this, but https://review.opendev.org/c/openstack/openstack-ansible/+/874482 looks good without it
15:18:18 <noonedeadpunk> It's hard to say also if there's any benefit in execution speed... At the moment it looks like load on nodepool workers is still high, so we have long executions overall
15:18:50 <damiandabrowski> there was a timeout for ceph scenario but it happens very often nowadays so i believe it's not relevant
15:18:58 <noonedeadpunk> nah, it's not.
15:19:29 <NeilHanlon> o/ sorry am late :)
15:19:38 <noonedeadpunk> I was trying to roughly compare time spent on LXC jobs of this patch and others
15:19:51 <noonedeadpunk> no worries Neil!
15:20:29 <damiandabrowski> hi Neil!
15:21:17 <damiandabrowski> yeah, i'm not sure how to compare performance looking at zuul because i believe it may strongly depend on the servers' provider
15:21:27 <noonedeadpunk> I think it's worth trying to calculate execution time on some more predictable AIO deployment
15:21:28 <damiandabrowski> maybe i should do some tests locally and compare results
15:21:39 <noonedeadpunk> and see if there's any benefit from custom strategy
15:21:48 <noonedeadpunk> yeah, would be great
15:21:53 <damiandabrowski> ok, i'll do that
15:22:00 <noonedeadpunk> #topic office hours
15:22:38 <noonedeadpunk> So haproxy role was updated after last review. I still haven't reviewed it as last 2 days were quite tough internally
15:23:41 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-haproxy_server master: Accept both HTTP and HTTPS also for external VIP during upgrade  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/864785
15:23:49 <damiandabrowski> no worries, there is also neutron and glance PKI/TLS support waiting for reviews
15:23:50 <damiandabrowski> https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/873654
15:23:52 <damiandabrowski> https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/821011
15:24:14 <damiandabrowski> currently I'm working on TLS support for nova but it's a bit complicated due to already existing TLS support for consoles
15:25:34 <noonedeadpunk> That's the topic for review
15:25:36 <noonedeadpunk> #link https://review.opendev.org/q/topic:separated-haproxy-service-config+status:open
15:26:05 <noonedeadpunk> #link https://review.opendev.org/q/topic:tls-backend+status:open
15:26:25 <noonedeadpunk> damiandabrowski: it's not only consoles but also libvirt
15:26:55 <noonedeadpunk> as we do encrypt live migrations and libvirt makes cert auth
15:28:09 <damiandabrowski> yeah..theoretically speaking we can share the same certs for API, libvirt and console if all of them reside on the same host, right?
15:29:18 <noonedeadpunk> well. I think consoles do reside on APIs, but they can use different interface iirc.
15:30:45 <damiandabrowski> i believe in most cases they do reside on the same host, that's why I'm thinking of sharing the same cert
15:31:15 <NeilHanlon> I've made some progress on cloud-init v22.2+ for RHEL 9 and friends.. hoping in the next week or so
15:31:27 <NeilHanlon> cc jrosser
15:31:49 <noonedeadpunk> And I think we still haven't backported curl hassle to stable branches
15:33:08 <noonedeadpunk> Also zuul result is quite confusing here: https://review.opendev.org/c/openstack/openstack-ansible/+/873289
15:33:34 <noonedeadpunk> But we still need reviews on the dependent patch - maybe it will make zuul happier...
15:34:21 <noonedeadpunk> Eventually - we need plenty of reviews. Since Andrew is not around, damiandabrowski can you take a round of reviews on current patches?
15:34:42 <damiandabrowski> yeah, ofc
15:35:13 <noonedeadpunk> Another thing I was going to discuss. I started looking at quorum queues for rabbit as a replacement of our HA queues that are going to be removed from rabbit 4
15:35:58 <noonedeadpunk> And the thing is that the exchange must be removed in order to create quorum queues, since as of today the exchange is not durable, while it must be for quorum
15:36:42 <noonedeadpunk> And removing the exchange is quite a hassle, as you then need to stop all services using this exchange at the same time and have a user with broad permissions
15:37:56 <noonedeadpunk> So what I was thinking - maybe we can create a new "clean" vhost, for example without the leading `/` (it's sooooo confusing to be frank to have that `/`), and make the vhost name conditional depending on whether quorum queues are used or not
15:38:24 <noonedeadpunk> This way it should be possible to switch back and forth as well without stopping service for a really long time
15:39:53 <noonedeadpunk> But yes, the service will be desynced until the role is finished anyway, as members will be configured with different vhosts
15:40:22 <noonedeadpunk> The thing is that the easiest way I found to drop the exchange is along with the vhost...
15:40:45 <noonedeadpunk> As I failed to drop exchange using rabbitmqadmin with administrator user...
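A minimal sketch of the vhost-swap approach discussed above, assuming RabbitMQ 3.11+ (where `rabbitmqctl add_vhost` accepts `--default-queue-type`) and a hypothetical `nova` service/user; deleting the old vhost removes its exchanges along with it, which is the "easy path" mentioned:

```shell
# Create a new "clean" vhost (no leading /) whose queues default to quorum type
rabbitmqctl add_vhost nova --default-queue-type quorum
rabbitmqctl set_permissions -p nova nova '.*' '.*' '.*'

# After all services have been repointed at the new vhost,
# dropping the old one takes its non-durable exchanges with it
rabbitmqctl delete_vhost /nova
```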
15:41:31 <damiandabrowski> i'm not a rabbitmq expert but looks good at first glance. I believe you know what to do :D
15:41:49 <noonedeadpunk> I hope I do lol
15:42:03 <noonedeadpunk> Will know soon :D
15:42:05 <mnaser> you're not a rabbitmq expert if you think you're a rabbitmq expert
15:42:15 <noonedeadpunk> ^ soooo true
15:42:15 <mnaser> so you're on the right track damiandabrowski :)
15:42:45 <damiandabrowski> haha :D
15:45:03 <noonedeadpunk> So that's kind of it from my side
15:46:06 <damiandabrowski> btw. don't you think we have quite a lot of intermittent gating failures/timeouts these days?
15:46:18 <damiandabrowski> for ex. I had to trigger recheck 5 times for https://review.opendev.org/c/openstack/openstack-ansible/+/871189
15:47:16 <noonedeadpunk> damiandabrowski: regarding time outs - it's known issue that affects literally every project as of today
15:48:46 <noonedeadpunk> My thinking is that it's related to high load on the providers we're using for CI, or that our CI is a noisy neighbour to itself
15:49:25 <noonedeadpunk> and afaik some quite big provider stopped donating infra for our CI, so load on others has increased
15:50:34 <damiandabrowski> ahhh okok, makes sense
16:00:46 <noonedeadpunk> #endmeeting