15:00:57 <noonedeadpunk> #startmeeting openstack_ansible_meeting
15:00:58 <opendevmeet> Meeting started Tue Jun 14 15:00:57 2022 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:58 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:58 <opendevmeet> The meeting name has been set to 'openstack_ansible_meeting'
15:01:07 <noonedeadpunk> #topic roll call
15:01:21 <noonedeadpunk> I'm in the meeting for the next several mins, just in case
15:02:11 <damiandabrowski[m]> hey! I'm semi-available this week as we're dealing with a major incident
15:05:39 <noonedeadpunk> #topic office hours
15:05:50 <noonedeadpunk> So we need to release next week
15:05:59 <mgariepy> hey /o
15:06:05 <noonedeadpunk> Before branching I think we might have at least 1 topic to cover
15:06:19 <noonedeadpunk> which is https://review.opendev.org/q/topic:osa%252Fservice_tokens
15:07:21 <noonedeadpunk> I think it should be passing now... And hopefully it does the right thing...
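For context, service tokens end up in each service's [service_user] section; a minimal sketch of what the end result could look like as a user_variables.yml override (nova_nova_conf_overrides is OSA's usual per-service override mechanism, the URL and credential names are placeholders, and the actual patches wire this up inside the roles themselves):

    # user_variables.yml -- illustrative placeholders, not the exact settings the patches apply
    nova_nova_conf_overrides:
      service_user:
        send_service_user_token: true
        auth_type: password
        auth_url: "https://keystone.example.local:5000/v3"   # placeholder URL
        username: nova
        password: "{{ nova_service_password }}"              # assumed secret name
        project_name: service
        user_domain_name: Default
        project_domain_name: Default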
15:08:48 <noonedeadpunk> Another thing I pushed is https://review.opendev.org/q/topic:osa%252Fmanage_lb which is basically clean-up
15:08:55 <admin1> partly here \o
15:09:10 <noonedeadpunk> But with that I wonder if, instead of just disabling haproxy backends, we should set them to drain?
15:09:50 <noonedeadpunk> Would be great to set drain and then disable, but I don't know how to do that...
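One possible way to do drain-then-disable is through the HAProxy runtime socket; a rough ad-hoc sketch (the socket path, backend and server names here are assumptions, not what the role actually uses):

    - hosts: haproxy_hosts            # hypothetical group name
      gather_facts: false
      tasks:
        - name: Set the server to drain (no new connections, health checks keep running)
          ansible.builtin.shell: >
            echo "set server galera/infra1_galera_container state drain"
            | socat stdio /var/run/haproxy.stat
        - name: Give existing sessions some time to finish
          ansible.builtin.pause:
            seconds: 30
        - name: Put the server into maintenance (fully disabled)
          ansible.builtin.shell: >
            echo "set server galera/infra1_galera_container state maint"
            | socat stdio /var/run/haproxy.stat

The community.general.haproxy module talks to the same stats socket and can enable/disable (and drain) backend servers too, which might be nicer than shelling out to socat.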
15:10:23 <mgariepy> draining connections from galera takes a long time.
15:10:57 <noonedeadpunk> Well, it's equal to the connection timeout, I believe
15:11:26 <noonedeadpunk> But as we do it before running the role... it might not be that bad. And previously we were just disabling...
15:11:45 <noonedeadpunk> So I guess that draining at least something is better than just dropping everything at once?
15:11:58 <noonedeadpunk> And also I'm thinking about the rest of the services, not only galera
15:12:07 <mgariepy> not sure it makes a big difference.
15:12:27 <noonedeadpunk> Yeah, maybe you're right
15:12:34 <mgariepy> for galera the only node actually used is the first one anyway
15:13:04 <mgariepy> when it drops, they switch to the second node, and when it comes back they fall back.
15:13:26 <opendevreview> Merged openstack/openstack-ansible master: [doc] Fix compatability matrix layout  https://review.opendev.org/c/openstack/openstack-ansible/+/845731
15:13:29 <opendevreview> Merged openstack/openstack-ansible master: add comments to make editing the table clearer  https://review.opendev.org/c/openstack/openstack-ansible/+/845758
15:14:49 <noonedeadpunk> well, yeah...
15:15:24 <noonedeadpunk> so should I update it to disable, to be consistent?
15:15:48 <jrosser_> having that external role only for the galera playbook is not so good, I agree with removing that
15:16:57 <mgariepy> currently services tolerate db disconnects a lot better than they used to.
15:17:26 <noonedeadpunk> I wish they did the same for rabbit...
15:17:29 <noonedeadpunk> anyway
15:17:36 <mgariepy> well. or memcached
15:18:41 <noonedeadpunk> the problem is that now the galera backend will be disabled regardless of whether changes are required, which is kind of not neat
15:18:56 <noonedeadpunk> so I tried to be as tolerant as possible...
15:19:37 <noonedeadpunk> but for the other services the external role was keeping the backend up when it could be broken, so...
15:24:30 <spatel> sorry, a little late to the meeting, just trying to catch up with the haproxy discussion. why do we need to drain connections?
15:25:27 <spatel> remove galera from haproxy?
15:26:56 <noonedeadpunk> nah, it was more that when you run the playbook, you drain connections in advance just in case
15:28:24 <spatel> oh! we are using F5 so I never looked at haproxy. but curious why we need to drain them? what are we going to gain?
15:28:35 <mgariepy> drain will continue to send health-checks.
15:28:55 <mgariepy> disable won't, I guess
15:29:32 <noonedeadpunk> yup, that's mostly the difference
15:30:20 <spatel> also curious why we are not using conntrackd with keepalived/haproxy, it will keep all connections in sync between the keepalived nodes
15:30:49 <spatel> we are running conntrackd in production with keepalived and it works great
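For reference, the usual way to wire conntrackd into keepalived is through VRRP notify scripts; a rough sketch (the sync group and instance names, the group name and the script path are assumptions; conntrackd ships a similar example failover script in its docs):

    - hosts: haproxy_hosts            # hypothetical group name
      gather_facts: false
      tasks:
        - name: Hook conntrackd's failover script into keepalived state changes
          ansible.builtin.blockinfile:
            path: /etc/keepalived/keepalived.conf
            block: |
              vrrp_sync_group VG1 {
                  group { VI_1 }
                  notify_master "/etc/conntrackd/primary-backup.sh primary"
                  notify_backup "/etc/conntrackd/primary-backup.sh backup"
                  notify_fault  "/etc/conntrackd/primary-backup.sh fault"
              }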
15:31:06 <mgariepy> but, well, since it's for the whole galera backend, drain might recover faster?
15:36:04 <noonedeadpunk> Another topic - during the summit there was interest from different parties in reviving ceph-ansible.
15:36:36 <noonedeadpunk> The main question is about governance of it afterwards
15:37:11 <noonedeadpunk> hopefully, we can un-deprecate it given contributions to it.
15:37:41 <mgariepy> who will be working on it?
15:39:12 <noonedeadpunk> besides us (and I hope the BBC :)), interest was raised by 23Tech and vexxhost. It obviously won't be RedHat
15:39:30 <noonedeadpunk> right now we're at the stage of gathering other parties interested in contributing
15:40:01 <noonedeadpunk> and we agreed to start taking more active steps in July
15:40:16 <mgariepy> ok nice then :)
15:41:08 <noonedeadpunk> but I really hope that the repo can at least stay under their governance, as otherwise things can get messy...
15:41:54 <noonedeadpunk> I believe JP reached out to them with some suggestions.
15:42:07 <noonedeadpunk> But we'll see where it all ends up
15:43:39 <mgariepy> yeah obviously but if it works it would be awesome :D
15:44:00 <mgariepy> as it kinda works really well in my limited experience with it.
15:44:21 <admin1> i used it in a lot of clusters .. it works very well
15:44:31 <mgariepy> also not a big fan of docker stuff..
15:44:32 <admin1> too bad that it's deprecated and there is no similar tool
15:45:09 <noonedeadpunk> yeah, it appears there are more people who aren't docker fans
15:45:27 <noonedeadpunk> I have heard that cephadm can deploy on bare metal, but never researched that
15:45:56 <noonedeadpunk> (and I was pretty sure it can't)
15:46:03 <admin1> ceph-ansible does not use docker or any containers ..
15:46:10 <admin1> it's on bare metal
15:46:19 <admin1> though you can also have it use containers
15:46:25 <noonedeadpunk> ceph-ansible actually has support for docker
15:46:35 <noonedeadpunk> yeah
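For the record, the switch in ceph-ansible is a single variable; a minimal sketch of a non-containerized setup in group_vars/all.yml (containerized_deployment is the real toggle, the repository values below are just an assumed example):

    # group_vars/all.yml (ceph-ansible)
    containerized_deployment: false   # install Ceph packages directly on the hosts
    ceph_origin: repository           # assumed: pull packages from a repository
    ceph_repository: community        # assumed: upstream community repo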
15:52:12 <mgariepy> did you have time to start asking monty nicely about 22.04 for lts mariadb?
15:56:04 <noonedeadpunk> uh, no, sorry. Today is my first working day in 2 weeks. And it started with total internal crap...
15:56:15 * noonedeadpunk goes now :P
15:58:47 <opendevreview> Jesse Pretorius proposed openstack/ansible-config_template master: [WIP] Add more complete examples with sample output  https://review.opendev.org/c/openstack/ansible-config_template/+/845768
15:59:18 <admin1> i am encountering a strange issue .. maybe nothing to do with OSA .. but since I only use OSA, asking here anyway .. I have ceph as object storage and osa points to ceph .. i can create buckets and upload/download objects on it .. the issue is when I set a bucket to public: if I click the link, it gives NoSuchBucket
15:59:18 <admin1> (https://cloud.domain.com:8080/swift/v1/AUTH_8022c6328c8949bda2ff928295240a6a/test) -- test is the bucket name .. configs here: https://gist.githubusercontent.com/a1git/9f58a83cc142b85c2675d8b5678af009/raw/d1676fd8ec077fe34cbf379cd02a6aec7eeac046/gistfile1.txt
16:00:28 <noonedeadpunk> admin1: there was definitely a bug in Luminous or something like that on the rbd side
16:00:31 <noonedeadpunk> *rgw
16:00:51 <noonedeadpunk> So basically for OSA Victoria
16:01:28 <admin1> i am on pacific
16:01:29 <noonedeadpunk> The only solution I was able to find was just to upgrade Ceph....
16:01:31 <noonedeadpunk> Huh
16:01:45 <admin1> osa and ceph both were installed independently using osa and ceph-ansible
16:01:47 <noonedeadpunk> On Pacific it worked nicely....
16:03:50 <admin1> if anyone has the same setup, a quick copy/paste of the endpoint list and maybe ceph.conf or rgw.conf might help (ips and secrets removed of course)
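If it helps to compare, the swift-side part of making a container public is just a read ACL on the container; a quick sketch (container name and the assumption that credentials are already in the environment are placeholders):

    - hosts: localhost
      gather_facts: false
      tasks:
        - name: Set a public-read ACL on the container (assumes an openrc has been sourced)
          ansible.builtin.command: swift post -r '.r:*,.rlistings' test
        - name: Show the container headers to confirm the Read ACL was applied
          ansible.builtin.command: swift stat test

With rgw behind the swift API, the anonymous URL is then the tenant-scoped path pasted above; whether anonymous listing works also depends on the .rlistings part of the ACL.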
16:11:58 <noonedeadpunk> at a glance the config looks good
16:12:36 <noonedeadpunk> I don't think it would allow listing directories anyway, though... But not 100% sure
16:13:24 <noonedeadpunk> yeah, it does....
16:28:02 <noonedeadpunk> I have created https://jira.mariadb.org/browse/MDEV-28842 for CentOS. For ubuntu it will only come with the next release.
16:28:12 <noonedeadpunk> #endmeeting