hamidlotfi_ | Hi there, | 08:30 |
hamidlotfi_ | Why not use any SQL proxy behind Galera instead of Haproxy? Is this a technical reason? | 08:30 |
hamidlotfi_ | Because Haproxy is blind to the Galera cluster and overloads your ecosystem. Is that true? | 08:30 |
hamidlotfi_ | @noonedeadpunk @jrosser | 08:48 |
jrosser | hamidlotfi_: I think noonedeadpunk looked at proxysql but there are always priorities of features to consider, I’m not sure it worked straightforwardly | 08:52 |
jrosser | but it would be an improvement to have a proper sql aware component there instead of haproxy | 08:52 |
jrosser | contributions are of course welcome, as usual ;) | 08:53 |
hamidlotfi_ | Thanks for your reply, I'll definitely think about it. | 08:54 |
opendevreview | Merged openstack/openstack-ansible-os_nova master: Delegate compute wait tasks to service_setup_host https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/880139 | 12:14 |
NeilHanlon | hamidlotfi_: as jrosser said, proxysql can be problematic in and of itself, especially if it's not set up and tuned properly. Although, it would be really beneficial for scaling to have a proxysql layer in, so... | 13:48 |
hamidlotfi_ | so what? | 13:49 |
hamidlotfi_ | Your text is cut off. | 13:50 |
NeilHanlon | apologies, that was just me trailing off. no more text or thought after that | 14:08 |
noonedeadpunk | Just in case, maxscale cannot be used on any production system without a support contract, according to their license | 14:56 |
noonedeadpunk | And there isn't much choice except proxysql, which is slightly tricky | 14:56 |
noonedeadpunk | But I still wanna return to it | 14:58 |
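For context on what an SQL-aware layer would add over haproxy's TCP-level checks: ProxySQL speaks the MySQL protocol and, in 2.x, can track Galera/wsrep state on its own. Below is a minimal sketch of registering a Galera cluster through ProxySQL's admin interface; this is not how openstack-ansible does or would do it, and the hostnames, hostgroup id, and monitor credentials are made up for illustration.

```shell
# Rough sketch only -- ProxySQL is driven through its admin interface
# (default admin/admin on port 6032). Hostnames, hostgroup id and the
# monitor credentials below are hypothetical.
mysql -u admin -padmin -h 127.0.0.1 -P 6032 <<'SQL'
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (10, 'galera1.example', 3306),
       (10, 'galera2.example', 3306),
       (10, 'galera3.example', 3306);
UPDATE global_variables SET variable_value = 'monitor'
 WHERE variable_name = 'mysql-monitor_username';
UPDATE global_variables SET variable_value = 'monitorpass'
 WHERE variable_name = 'mysql-monitor_password';
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL VARIABLES TO RUNTIME; SAVE MYSQL VARIABLES TO DISK;
SQL
```

By contrast, haproxy in front of Galera typically relies on an external health-check script to decide which backend is usable, since it cannot inspect wsrep state itself.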
NeilHanlon | aren't you supposed to be away? :P | 15:01 |
NeilHanlon | #startmeeting openstack_ansible_meeting | 15:01 |
opendevmeet | Meeting started Tue May 9 15:01:36 2023 UTC and is due to finish in 60 minutes. The chair is NeilHanlon. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:01 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:01 |
opendevmeet | The meeting name has been set to 'openstack_ansible_meeting' | 15:01 |
NeilHanlon | #topic rollcall | 15:01 |
noonedeadpunk | o/ semi-around | 15:01 |
NeilHanlon | o/ | 15:02 |
NeilHanlon | #topic office hours | 15:05 |
noonedeadpunk | So I think we merged internal tls almost fully | 15:05 |
noonedeadpunk | Except magnum and zun that are still broken | 15:05 |
noonedeadpunk | I think we should merge magnum and do branching of roles | 15:06 |
NeilHanlon | agreed | 15:06 |
NeilHanlon | looking at the two patches for magnum, I think I prefer just removing the labels, but it's probably still important to make sure we document how to add them | 15:07 |
noonedeadpunk | Looking at the ptg etherpad, looks like we've covered all points except docs: https://etherpad.opendev.org/p/osa-bobcat-ptg | 15:07 |
noonedeadpunk | Oh, ok, then we can merge it fast, as I might be wrong in preferring to leave them | 15:08 |
noonedeadpunk | Do you have a link handy? | 15:08 |
NeilHanlon | https://review.opendev.org/c/openstack/openstack-ansible/+/881566 | 15:09 |
NeilHanlon | I guess I'm on the fence about it. i could be convinced to keep them | 15:10 |
noonedeadpunk | Ok, voted | 15:10 |
NeilHanlon | same | 15:10 |
noonedeadpunk | Well, it was more that the defaults were not working back in the day, so we were keeping the correct template there so as not to forget which options were required | 15:10 |
NeilHanlon | ah, i see | 15:11 |
noonedeadpunk | I'm not 100% sure what the state is as of today though, so maybe indeed we should drop them as we don't really test them | 15:11 |
noonedeadpunk | Or better - test | 15:11 |
noonedeadpunk | But yeah, anyway | 15:11 |
NeilHanlon | i am making a note for myself as I will need to look at this in just a few months anyways | 15:12 |
NeilHanlon | i have a chapter on kubernetes and will talk about Magnum lol | 15:12 |
noonedeadpunk | The Tempest plugin was weirdly broken, with the Python type for a string messed up | 15:12 |
noonedeadpunk | Tip: you can set +W when you're setting the second V+2 | 15:14 |
NeilHanlon | oh, right. i kinda forgot I had that power lol | 15:14 |
noonedeadpunk | We'll need to backport the patch as well | 15:14 |
noonedeadpunk | At least to Zed I think | 15:15 |
NeilHanlon | yeah | 15:15 |
noonedeadpunk | Btw, has upgrade job fix landed? | 15:15 |
NeilHanlon | this one? https://review.opendev.org/c/openstack/openstack-ansible/+/879890 | 15:16 |
NeilHanlon | looks like it needs another recheck | 15:16 |
noonedeadpunk | Yeah:( | 15:16 |
NeilHanlon | what's timing out? | 15:16 |
noonedeadpunk | Dunno, logs simply don't load on my phone | 15:17 |
noonedeadpunk | It could be the node is simply slow though | 15:18 |
NeilHanlon | gotcha. i'll send a recheck request again | 15:18 |
noonedeadpunk | Ok, great:) I think these are by far the only things I was concerned about | 15:22 |
NeilHanlon | awesome :) | 15:22 |
* noonedeadpunk returns to a session with mgmt | 15:22 |
NeilHanlon | 👍 | 15:22 |
NeilHanlon | on the OVS front, i've got good progress on the nfv sig content for ovs 3.1. The way we are doing it, I think we may not even need to update the roles, as the centos-release-nfv package will simply point to the right place when running on Rocky | 15:23 |
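A sketch of the packaging flow described above for Rocky; the package names are assumptions about what the CentOS NFV SIG publishes and should be checked against the actual repo.

```shell
# Assumed package names -- verify against the CentOS NFV SIG repo for your release.
dnf install -y centos-release-nfv-openvswitch   # release package that enables the NFV SIG repo
dnf install -y openvswitch3.1                   # OVS 3.1 build published by the SIG
systemctl enable --now openvswitch
```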
jrosser | o/ hi | 15:25 |
damiandabrowski | hi guys, i'm not really around but i just wanted to mention that the tls job is fixed and this relation tree is ready for reviews | 15:25 |
damiandabrowski | https://review.opendev.org/c/openstack/openstack-ansible/+/882454/3 | 15:25 |
jrosser | we should backport the first one also I think | 15:27 |
damiandabrowski | yup, you are right i guess | 15:27 |
damiandabrowski | i'm not logged in to gerrit on my phone so i can't set backport-candidate there now😀 | 15:28 |
NeilHanlon | i can do it :P | 15:28 |
damiandabrowski | thanks! | 15:29 |
NeilHanlon | add backport-candidate on "Add 'tls' scenario", yes? | 15:29 |
damiandabrowski | no no | 15:29 |
damiandabrowski | https://review.opendev.org/c/openstack/openstack-ansible/+/882454/3 | 15:29 |
damiandabrowski | this one | 15:29 |
NeilHanlon | yep yep, okay. i misinterpreted jrosser's "first one" | 15:30 |
* jrosser also on phone client | 15:30 |
NeilHanlon | i'll put that patch set on my list to try and look at this week damiandabrowski | 15:31 |
damiandabrowski | great! | 15:31 |
NeilHanlon | sounds like we're pretty much done for this week. Thank you all for joining! | 15:49 |
NeilHanlon | #endmeeting | 15:50 |
opendevmeet | Meeting ended Tue May 9 15:50:22 2023 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:50 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-05-09-15.01.html | 15:50 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-05-09-15.01.txt | 15:50 |
opendevmeet | Log: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-05-09-15.01.log.html | 15:50 |
admin1 | one of my controllers died due to a disk issue .. so the OS was reinstalled on new disks .. this is the error I get regarding repo-build .. https://gist.githubusercontent.com/a1git/01833addf6007f1bd26c27d6aa75dd14/raw/f0ac9b092ccacd8b8fa02166da03b6b85b91e25d/gistfile1.txt | 16:06 |
admin1 | tag is 26.1.0 | 16:25 |
jrosser | admin1: the playbooks deal with initial setup of the glusterfs cluster and can also recover from a delete/recreate container | 16:30 |
jrosser | but they cannot recover from a reinstalled node - this is expected | 16:30 |
jrosser | you will have to do some manual maintenance work on the glusterfs setup to remove the old node, then re-run the repo server playbook to add the replacement. there are state files you've lost from the failed node, so it's not particularly easy to automate recovery from this | 16:32 |
jrosser | admin1: i can't say anything about whether this is appropriate for your situation, but there are some steps described here https://docs.rackspace.com/support/how-to/recover-from-a-failed-server-in-a-glusterfs-array/ | 16:38 |
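A very rough outline of the manual maintenance being described, loosely following the document linked above; the hostname is a placeholder, and brick/UUID handling for the failed node should follow that document rather than this sketch.

```shell
# Run from a surviving repo container; <failed-repo-host> is a placeholder.
gluster peer status                            # the dead node should show as disconnected
gluster volume info                            # check whether a brick is still listed on it
gluster peer detach <failed-repo-host> force   # drop the stale peer once its bricks are handled
# then re-run the repo playbook so the rebuilt node is configured and probed back in
openstack-ansible repo-install.yml
```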
admin1 | is it possible/ok to lxc-containers-destroy pkg-repo and retry? | 16:50 |
jrosser | probably, but maybe not | 16:56 |
jrosser | there is state about the glusterfs cluster stored on the hosts, not in the containers | 16:56 |
jrosser | and that is there precisely to allow the containers to be deleted/recreated | 16:56 |
jrosser | that state will however contain data about the failed/replaced node, and i think specifically that in this case your replacement node will have a different UUID from what it had before | 16:57 |
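The identity jrosser refers to lives in glusterd's working directory (standard glusterfs paths, shown here for orientation); the recovery doc linked above essentially gives the replacement node the old UUID so the cluster accepts it as the same peer.

```shell
# Standard glusterfs state locations, inspected on a surviving node:
cat /var/lib/glusterd/glusterd.info   # this node's own UUID
ls /var/lib/glusterd/peers/           # one file per known peer, named by the peer's UUID
```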
admin1 | is there a way to delete the gluster and let it re-create? | 19:29 |