noonedeadpunk | johnsom: aha, ok, good to know! And we were talking not about public networks, but about management networks (amphora API and keepalived, basically). As I was not sure if it's possible to make keepalived on Amphoras use IPv6 for communicating with each other | 07:17 |
noonedeadpunk | as the issue stated was to make all OpenStack internals work over IPv6 rather than externals | 07:34 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Enable multiple console proxies when required in deployments https://review.opendev.org/c/openstack/openstack-ansible/+/890522 | 08:00 |
opendevreview | Merged openstack/openstack-ansible master: Set correct language for docs https://review.opendev.org/c/openstack/openstack-ansible/+/893407 | 08:21 |
opendevreview | Merged openstack/openstack-ansible-os_neutron master: Stop haproxy on ovn-controller nodes https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/886266 | 09:40 |
opendevreview | Merged openstack/openstack-ansible stable/zed: Bump SHAs for Zed https://review.opendev.org/c/openstack/openstack-ansible/+/893419 | 12:21 |
noonedeadpunk | reviews of https://review.opendev.org/c/openstack/openstack-ansible/+/893413 are very much appreciated :) | 12:31 |
opendevreview | Marc Gariépy proposed openstack/openstack-ansible-os_neutron stable/2023.1: Stop haproxy on ovn-controller nodes https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/893450 | 12:34 |
opendevreview | Marc Gariépy proposed openstack/openstack-ansible-os_neutron stable/zed: Stop haproxy on ovn-controller nodes https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/893451 | 12:34 |
kleini | https://paste.opendev.org/show/bbxTyq09A8NebT4lCBHr/ <- I am trying to add a compute node to my cluster on the 26.1.2 release. Do you have any hint what could be wrong since my upgrade to Zed two months ago? It feels like it broke without any change from my side. | 13:14 |
kleini | oh, it was about wrong permissions for keys... | 13:14 |
mgariepy | might also be stale Ansible facts. | 13:15 |
kleini | trying to delete them | 13:16 |
mgariepy | i usually do `ansible all -m setup --forks 100` before running a playbook | 13:16 |
kleini | it was an issue with the facts. thanks for your help. issue is solved | 13:25 |
mgariepy | great :D | 13:29 |
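(For reference: stale cached facts are a recurring cause of failures like the one above. A minimal sketch of clearing and regathering them, assuming the default openstack-ansible facts cache under `/etc/openstack_deploy/ansible_facts/` - verify `fact_caching_connection` in your ansible.cfg before deleting anything:)

```shell
# Remove the cached facts for all hosts (path assumed from a stock
# openstack-ansible deployment; check ansible.cfg first)
rm -f /etc/openstack_deploy/ansible_facts/*

# Regather fresh facts for every host, as suggested above
ansible all -m setup --forks 100
```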
kleini | and the cluster got two new compute nodes. I wish you a pleasant weekend! | 13:49 |
noonedeadpunk | sweet :) | 13:51 |
noonedeadpunk | have a nice one as well! | 13:51 |
noonedeadpunk | So... This seems to be a correct fix for adjutant on stable branches: https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/892505 | 14:59 |
noonedeadpunk | the rest of the upgrade jobs will be fixed after merging (and bumping) Zed | 14:59 |
Bico_Fino | Trying to run rabbitmq-install on a container and getting error: FAILED! => {"msg": "'dict object' has no attribute 'ansible_hostname'"} at task TASK [rabbitmq_server : Fix /etc/hosts] | 15:10 |
noonedeadpunk | what version is that? | 15:10 |
Bico_Fino | 18.1.20 | 15:10 |
noonedeadpunk | ugh, that's damn old.... | 15:10 |
Bico_Fino | :( | 15:11 |
noonedeadpunk | I believe this is related to stale ansible facts | 15:11 |
mgariepy | it's quite often the case i think. | 15:11 |
opendevreview | Merged openstack/openstack-ansible stable/2023.1: Bump SHAs for 2023.1 https://review.opendev.org/c/openstack/openstack-ansible/+/893413 | 15:11 |
noonedeadpunk | It's way more rare these days | 15:11 |
Bico_Fino | recreate the facts? | 15:12 |
mgariepy | it depends on when you ran the playbook the last time. | 15:12 |
noonedeadpunk | But I was asking, since we should not have any vars looking like `ansible_hostname` on antelope, and probably zed | 15:12 |
mgariepy | ansible all -m setup --forks 100 ? | 15:12 |
Bico_Fino | I will try | 15:13 |
noonedeadpunk | or smth like https://paste.openstack.org/show/b0WShCYTzTgU4p6iuDbS/ | 15:14 |
noonedeadpunk | Ah, it's even rocky... | 15:14 |
Bico_Fino | -m setup did the trick | 15:26 |
mgariepy | good to hear. | 15:29 |
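(A quick way to tell whether the failing variable really is missing from the cached facts before wiping anything; a sketch using the standard OSA inventory group name `rabbitmq_all`:)

```shell
# With fact caching enabled, debug reads the cached value directly;
# "VARIABLE IS NOT DEFINED!" here means the cache is stale or empty
ansible rabbitmq_all -m debug -a "var=ansible_hostname"
```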
spatel | noonedeadpunk hey! | 16:01 |
spatel | Did you ever use the OpenStack internal DNS function? | 16:01 |
spatel | Just to resolve hostnames between two VMs? | 16:02 |
noonedeadpunk | um... you mean designate? | 16:05 |
daniel_ | Hey guys, I'm trying to install an AIO LXC stable/2023.1 openstack-ansible in a Proxmox VM (Ryzen 9/16 cores, 32GB) to test it. Is it normal to take 22 seconds to get a token from keystone in this environment? | 16:18 |
noonedeadpunk | I'd say no | 16:19 |
noonedeadpunk | Is memcached around, alive and reachable? | 16:20 |
noonedeadpunk | there were quite some troubles with proxmox, since you need to explicitly allow some traffic to pass nicely | 16:20 |
noonedeadpunk | so you should check if keystone can connect to memcached | 16:20 |
noonedeadpunk | as if it can't - that would result in the delay you're describing | 16:21 |
daniel_ | Thanks noonedeadpunk. Yes, memcached is alive. I'll check the networking side and the connection to memcached. Already pushed up the haproxy timeouts and at least now I don't have 504s anymore. But so far networking seems "OK", at least they have connectivity. But the installation so far is freaking slow. Trying to install manually one service at a time. | 16:25 |
noonedeadpunk | what we have in CI is smth like 4 CPU cores, 8GB of RAM and 100GB of disk. Though I'd suggest no less than 12GB dedicated to the env | 16:29 |
daniel_ | You are on the spot noonedeadpunk. Keystone can't access memcached. I'll check connectivity. Thanks! | 16:37 |
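(A minimal connectivity check one can run from inside the keystone container; the IP below is a placeholder - take the real address from the `[cache] memcache_servers` option in keystone.conf:)

```shell
# Find the configured memcached endpoints
grep memcache_servers /etc/keystone/keystone.conf

# Test TCP reachability and ask memcached for its stats
# (replace the IP with the address found above)
echo stats | nc -w 2 172.29.236.100 11211 | head
```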
karni | When the multiqueue feature has not been enabled, does it make sense to bind some iperf3 instances to different vCPUs using the `-A` switch (CPU affinity)? I mean iperf3_1 on vCPU1 and iperf3_2 on vCPU2 and so on... while one core is expected to be taken by OpenStack for networking in case of no multiqueue? | 17:09 |
karni | I want to compare my test between a scenario where multiqueue is enabled and one where it's not. | 17:09 |
opendevreview | Merged openstack/openstack-ansible master: Fix ansible_ssh_extra_args extra newline https://review.opendev.org/c/openstack/openstack-ansible/+/893191 | 17:11 |
NeilHanlon | karni: yes, using taskset or so to ensure the iperf threads run on a single CPU would give you more consistent results | 17:30 |
karni | If it's possible to bind threads to vCPUs, why is it needed to enable "multiqueue"? | 17:43 |
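(For context: `-A` only pins the benchmark threads inside the guest; with a single-queue virtio-net device, packet processing still funnels through one queue regardless of where iperf3 runs, which is what multiqueue addresses. A sketch of enabling and checking it - the image name, interface name, and queue count are placeholders:)

```shell
# Enable virtio multiqueue via the Glance image property; Nova reads
# this when spawning the instance ('my-image' is a placeholder)
openstack image set --property hw_vif_multiqueue_enabled=true my-image

# Inside the guest: inspect and raise the number of combined queues
# (eth0 and the count of 4 are examples)
ethtool -l eth0
ethtool -L eth0 combined 4

# Pin an iperf3 client to a given vCPU, as discussed above
iperf3 -c <server-ip> -A 1
```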
opendevreview | Merged openstack/openstack-ansible master: Enable multiple console proxies when required in deployments https://review.opendev.org/c/openstack/openstack-ansible/+/890522 | 17:45 |
spatel | noonedeadpunk I figured out the internal DNS issue | 19:22 |
spatel | It was a config problem | 19:23 |
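(For anyone hitting the same thing: Neutron's internal DNS - resolving instance names on the same network without Designate - depends on a couple of options. A sketch assuming a plain ML2 setup; in OSA these would normally go through overrides in user_variables.yml rather than direct file edits:)

```shell
# Set a DNS domain other than the default 'openstacklocal', which is
# treated as "DNS integration disabled" ('example.internal.' assumed)
crudini --set /etc/neutron/neutron.conf DEFAULT dns_domain example.internal.

# Enable the dns extension driver alongside the existing ones
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
    extension_drivers port_security,dns

systemctl restart neutron-server
```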
spatel | noonedeadpunk can I share a specific public network with a specific project? | 19:23 |
spatel | I have multiple public networks A, B and C | 19:27 |
spatel | I want only customer_A to be able to see network_A | 19:27 |
spatel | I found it can be set using RBAC | 20:42 |
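(What that looks like in practice; a sketch with placeholder project and network names, using the standard Neutron RBAC CLI:)

```shell
# Grant only customer_A's project access to network_A as an external
# network (names are placeholders); note that a network created with
# --external gets a wildcard RBAC entry by default, which would need
# to be deleted for the network to stop being visible to everyone
openstack network rbac create \
    --target-project customer_A \
    --action access_as_external \
    --type network \
    network_A
```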