Wednesday, 2025-12-17

10:46 <opendevreview> Merged openstack/openstack-ansible stable/2025.2: aio: fix octavia/trove when used with ovs  https://review.opendev.org/c/openstack/openstack-ansible/+/970800
10:56 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Remove group conditionals from venv_packages  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/970368
10:57 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Remove group conditionals from venv_packages  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/970368
12:42 * f0o facepalms
12:42 <f0o> I need more coffee; here I was wondering "something is off" while being in #openstack-ansible on both Libera and Freenode
12:42 <f0o> I blame yesterday's julbord (Christmas buffet)
12:43 <f0o> Anyway - can I shoehorn FQDN endpoints into a port-based endpoint setup by simply setting haproxy_base_service_overrides.haproxy_map_entries[base_domain].entries
12:44 <noonedeadpunk> hehe
12:44 <f0o> to all "$FQDN $service_backend" pairs and then slap haproxy_ssl_letsencrypt_domains with all of those FQDNs as well?
12:44 <noonedeadpunk> have you checked the doc already?
12:44 <f0o> that should handle all of HAProxy without removing the default ports, right?
12:44 <f0o> The doc seems to cover only an "all or nothing" approach, including using FQDNs internally
12:45 <noonedeadpunk> yeah, ok
12:45 <f0o> I want some sort of hybrid where I don't mind port-based for the internal/admin URIs but want only FQDNs for the public URIs. I figured I can just leave the per-service haproxy settings at their defaults and use haproxy_base_service_overrides to define all of those, then just use *_service_publicuri to populate the endpoints
12:45 <noonedeadpunk> So I think the big part of that is that the endpoints in the keystone catalog will be updated to the new URL
12:46 <noonedeadpunk> rather than new ones being created
12:46 <noonedeadpunk> yes, I think you should be able to do that as well
12:46 <f0o> wouldn't that be handled by setting, for example, keystone_service_publicuri = "keystone.region.domain.tld"?
12:47 <f0o> then my haproxy_base_service_overrides.haproxy_map_entries[base_domain].entries would contain "keystone.region.domain.tld keystone_service-back"
12:47 <noonedeadpunk> yes, right, that should work
12:47 <f0o> neato
12:48 <f0o> now just to muster the strength to actually deploy this, maybe after another coffee just to be sure.. considering I can't even IRC correctly anymore
12:48 <noonedeadpunk> I wish I could come up with some neater way of doing that, but it seems tricky
12:48 <noonedeadpunk> as some vars are defined only at the role level, while haproxy is configured in playbooks, outside of roles
12:49 <noonedeadpunk> that said, you also need to ensure that the TLS side is correct
12:49 <f0o> Honestly, I think this way is very nice. I can just block the ports on the firewall if I don't want them; otherwise those ports would always serve as a backup route for healthchecks or similar, too. And having the explicit list/mapping makes it less accidental for me as well - a conscious decision which makes me double-check DNS beforehand.
12:49 <f0o> Yeah, haproxy_ssl_letsencrypt_domains needs to have the full list
12:51 <noonedeadpunk> but also, changing the list won't trigger Let's Encrypt to re-issue certs, iirc
12:51 <noonedeadpunk> I think I had to drop /etc/letsencrypt or smth to trigger new certs...
12:51 <noonedeadpunk> (another thing which would be good to improve)
12:51 <f0o> good to know, I'll look out for that
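(For reference, a minimal user_variables.yml sketch of the hybrid approach discussed above. The domain name is a placeholder, the map-entry layout and backend name follow the conversation, and the exact structure should be verified against the haproxy role documentation before use.)

    # user_variables.yml - hypothetical sketch of the hybrid port-based/FQDN setup.
    # Domain names are placeholders; verify the map-entry structure against the
    # haproxy role docs for your release.
    haproxy_base_service_overrides:
      haproxy_map_entries:
        - name: base_domain
          entries:
            # "<public FQDN> <haproxy backend>" pairs; other services follow the
            # same pattern with their own *_service-back backends.
            - "keystone.region.domain.tld keystone_service-back"

    # Let's Encrypt must cover every public FQDN (note the caveat above: changing
    # this list may not re-issue the certificate on its own).
    haproxy_ssl_letsencrypt_domains:
      - keystone.region.domain.tld

    # Publish only the public endpoint as an FQDN; internal/admin URIs stay port-based.
    keystone_service_publicuri: "https://keystone.region.domain.tld"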
13:04 <f0o> noonedeadpunk: placement seems to have a non-standard setting for the public URI. All services use *_service_publicuri (apart from nova_novncproxy, which is nova_novncproxy_base_uri), but placement uses placement_service_publicurl (url vs uri). All other services _do_ define a *_service_publicurl, but it mostly defaults to the same value as *_service_publicuri, with some exceptions that add a version path behind it.
13:11 <noonedeadpunk> yeah, makes sense to align it and add a *_publicuri variable for placement
13:20 <noonedeadpunk> and contributions are always welcome :)
13:22 <f0o> yeah, will do the PR after everything works haha
13:22 <f0o> not starting too many parallel tasks again
14:13 <noonedeadpunk> ++ fair enough
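(A correspondingly hypothetical override for placement, using the variable name pointed out above; as discussed, placement currently exposes placement_service_publicurl rather than a *_publicuri variable, so the name should be double-checked against the os_placement role defaults.)

    # user_variables.yml - placement public endpoint, following the naming quirk
    # noted above (publicurl, not publicuri); the FQDN is a placeholder.
    placement_service_publicurl: "https://placement.region.domain.tld"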
16:01 <opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible master: Do not define openstack_pki_san in integrated repo  https://review.opendev.org/c/openstack/openstack-ansible/+/971206
16:04 <opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible master: Do not define openstack_pki_san in integrated repo  https://review.opendev.org/c/openstack/openstack-ansible/+/971206
16:10 <opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible master: Add support for hashi_vault PKI backend  https://review.opendev.org/c/openstack/openstack-ansible/+/948888
16:11 <f0o> noonedeadpunk: I think I found a bug with glance's mount points. We recently (over half a year ago) moved from NFS-backed Glance to Swift-backed Glance. The inventory still has the NFS mount, and running -t glance-config will actually try to recreate the mount point. This NFS mountpoint is not configured anywhere anymore. Also, since the NFS does not exist anymore, the playbook errors out
16:13 <opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible master: Enable openbao jobs  https://review.opendev.org/c/openstack/openstack-ansible/+/948889
16:15 <noonedeadpunk> f0o: I think to disable it, the `glance_remote_client` should be defined with `state: absent`....
16:15 <noonedeadpunk> then it can be removed later
16:15 <noonedeadpunk> pretty much to trigger https://opendev.org/openstack/ansible-role-systemd_mount/src/branch/master/tasks/systemd_mounts.yml#L132-L140
16:16 <noonedeadpunk> probably also with `enabled: false` just in case....
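(A minimal sketch of that suggestion; the NFS server, export path, and options below are placeholders for whatever the old mount definition actually looked like, and only the state/enabled flags matter for the removal.)

    # user_variables.yml - sketch of the removal suggested above. The NFS server,
    # export path and options are placeholders for the old mount definition.
    glance_remote_client:
      - what: "nfs-server.example.org:/glance"   # hypothetical old NFS export
        where: "/var/lib/glance/images"
        type: "nfs"
        options: "_netdev,auto"
        state: absent      # ask the systemd_mount role to remove the unit...
        enabled: false     # ...and make sure it is not enabled, just in case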
16:16 <f0o> TASK [systemd_mount : Create mount target(s)] is the failing task though
16:17 <f0o> https://opendev.org/openstack/ansible-role-systemd_mount/src/branch/master/tasks/systemd_mounts.yml#L42 << this must not be matching
16:20 <noonedeadpunk> um
16:21 <f0o> https://paste.opendev.org/show/bl7eTSEuf0S73pze47HK/
16:22 <f0o> I grepped for the NFS mount and the only match is in openstack_inventory.json
16:22 <noonedeadpunk> was it defined as part of container_vars?
16:22 <noonedeadpunk> as this thing indeed does not get removed from the inventory....
16:23 <f0o> it was in user_variables.yml
16:23 <f0o> glance_default_store and the lot
16:23 <noonedeadpunk> it should not get into the inventory then...
16:23 <f0o> let me git-blame real quick
16:24 <noonedeadpunk> pretty much only conf.d and openstack_user_config would end up in the inventory
16:24 <f0o> it could've been in openstack_user_config
16:25 <noonedeadpunk> so you might want to keep it defined, but actually set the absent state, to get it removed as a mount
16:25 <noonedeadpunk> and drop it from inventory.json
16:26 <opendevreview> Damian Dąbrowski proposed openstack/ansible-role-pki master: Add hashi_vault backend  https://review.opendev.org/c/openstack/ansible-role-pki/+/948881
16:26 <f0o> ok, so 1) redefine it with state: absent; 2) manually edit the inventory to remove it?
16:27 <opendevreview> Damian Dąbrowski proposed openstack/ansible-role-pki master: Add hashi_vault backend  https://review.opendev.org/c/openstack/ansible-role-pki/+/948881
16:29 <f0o> noonedeadpunk: http://paste.opendev.org/show/bxEkKCl3x9xiunG86Yqf/
16:31 <noonedeadpunk> heh
16:31 <f0o> with state: stopped it still tries to create them
16:31 <f0o> so I guess I'm in a catch-22
16:32 <noonedeadpunk> right
16:32 <noonedeadpunk> this is a bug then, indeed
16:32 <f0o> so I guess I just remove it from the inventory? :D
16:34 <opendevreview> Dmitriy Rabotyagov proposed openstack/ansible-role-systemd_mount master: Do not try to reload mount on absent state  https://review.opendev.org/c/openstack/ansible-role-systemd_mount/+/971356
16:34 <noonedeadpunk> f0o: this should be solving it ^
16:35 <noonedeadpunk> but yeah
16:35 <noonedeadpunk> I think I messed it up during some recent refactoring
16:37 <f0o> heh
19:32 <Baronvaile_> Hi all. I'm running the openstack.osa.setup_openstack playbook against a set of Rocky 9.5 boxes. When I get to 'Install python packages into venv' for the placement container, I'm getting a failure. One of the ERRORs reports a version conflict for dependencies.
19:36 <Baronvaile_> I wondered if this was known with 31.1.2, and if this is too complicated for here, whether I should drop a bug into the Launchpad tracker?
19:36 <noonedeadpunk> hey
19:37 <noonedeadpunk> Baronvaile_: can you paste the error to https://paste.openstack.org/ ?
19:37 <noonedeadpunk> and no - I don't think it's anything really known, as long as the repo containers are working as intended
19:38 <Baronvaile_> Yeah. Paste #bSBf6M62ulMtRxQB0ZPW
19:39 <noonedeadpunk> Baronvaile_: so, you have `[Errno 113] No route to host` it seems? :)
19:39 <noonedeadpunk> what happens if you curl http://10.178.100.3:8181/os-releases/31.1.2/rocky-9-x86_64/wheels/ ?
19:39 <noonedeadpunk> or trace it, or whatever
19:40 <noonedeadpunk> this is expected to be an internal VIP address
19:40 <noonedeadpunk> but depending on the setup, it's either spawned by keepalived or, if it's a single-host sandbox, expected to be added to an interface by the user
19:40 <noonedeadpunk> (as then keepalived is not installed by default)
19:41 <noonedeadpunk> though you can force keepalived installation by setting haproxy_use_keepalived: true
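(For completeness, the corresponding user_variables.yml entry; whether it is appropriate depends on the environment, since a single-node sandbox can also just keep the VIP assigned to an interface by hand.)

    # user_variables.yml - force keepalived installation so the internal/external
    # VIPs are managed by haproxy/keepalived (see the discussion above).
    haproxy_use_keepalived: true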
19:41 <Baronvaile_> curl returns an index of the directory
19:41 <noonedeadpunk> huh
19:42 <noonedeadpunk> what if you run it from `infra1-placement-container-66c6b6d3`?
19:42 <noonedeadpunk> you can do `lxc-attach -n infra1-placement-container-66c6b6d3`
19:43 <noonedeadpunk> (you might also need to install curl there with dnf)
19:44 <noonedeadpunk> based on the paste, I'm like 99% sure there's some connectivity issue
19:45 <Baronvaile_> in the container I get no route to host. My web browser returns the list of files. curl from my setup host also returns a list of files
19:47 <noonedeadpunk> so....
19:47 <noonedeadpunk> the idea is that from the deploy host this address should be reachable via br-mgmt
19:47 <noonedeadpunk> which in the container is represented as the eth1 interface
19:48 <noonedeadpunk> so on the host it's br-mgmt, in the container eth1
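(A sketch of the openstack_user_config.yml fragment that produces that br-mgmt/eth1 mapping; the addresses follow the ones mentioned in this conversation, and the exact keys should be compared against the example configuration shipped with openstack-ansible.)

    # openstack_user_config.yml - sketch of the br-mgmt <-> eth1 mapping discussed
    # above; addresses follow this conversation, other values are illustrative only.
    cidr_networks:
      management: 10.178.100.0/24

    global_overrides:
      internal_lb_vip_address: 10.178.100.3
      management_bridge: "br-mgmt"
      provider_networks:
        - network:
            container_bridge: "br-mgmt"     # bridge on the host
            container_type: "veth"
            container_interface: "eth1"     # interface inside the LXC containers
            ip_from_q: "management"
            type: "raw"
            group_binds:
              - all_containers
              - hosts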
19:48 <Baronvaile_> Yeah, I'm seeing that I cannot ping 10.178.100.3 from the container. The container has my external IP of 10.178.100.29
19:49 <noonedeadpunk> (in a "conventional" setup - you can do all kinds of crazy things ofc)
19:50 <Baronvaile_> I suppose I'm running into an issue with eth1 because Rocky has my port listed as eno0 or eno1
19:50 <noonedeadpunk> you mean on the host or in the container?
19:50 <Baronvaile_> On the host
19:50 <Baronvaile_> The container is eth1
19:50 <noonedeadpunk> ok, right... but what is part of the br-mgmt bridge?
19:51 <noonedeadpunk> and do you actually have it? :)
19:52 <noonedeadpunk> as while containers are connected to it, you also need to ensure cross-host connectivity
19:53 <Baronvaile_> On infra1 the br-mgmt is a bridge-slave to eno8 (hope that is what you are asking for)
19:53 <noonedeadpunk> but I hope I pointed you in some direction :)
19:53 <noonedeadpunk> yeah
19:54 <noonedeadpunk> so given you have an IP assigned to br-mgmt - can you actually ping it? And is eno8 a valid interface there?
19:55 <noonedeadpunk> (and is there some kind of firewalling or MAC filtering between hosts, if this is a virtual environment?)
19:55 <noonedeadpunk> as LXC will talk with its own MAC
19:56 <Baronvaile_> eno8 is a valid interface; it had the same 10.178.100.3 IP on it before enabling the bridge. That IP currently does ping.
19:57 <noonedeadpunk> so, um, that should then be pretty much the same L2 network...
19:58 <Baronvaile_> Ok. That gives me something to look into. I'll have to study my network setup more. Thx.
19:58 <noonedeadpunk> you can try adding some random unused IP to br-mgmt on the host (if it's not there already) and then check whether LXC can reach the host, a nearby host, containers on the same host, and containers on neighbouring hosts
19:59 <Baronvaile_> Yeah, let me find an open IP real quick and add it.
20:01 <noonedeadpunk> shoot, I bet I have posted a script to healthcheck connectivity, but I can't find it now
20:02 <noonedeadpunk> https://gist.github.com/noonedeadpunk/b9eee2331a3c732e4def0b97530940ba
20:02 * noonedeadpunk needs to push that to the project
20:04 <Baronvaile_> Found an open IP, added it to the bridge. I can ping it from the deployment host, my local host and from the container.
20:09 <Baronvaile_> I guess I could flip the internal LB VIP address to 10.178.100.8, the new IP I added, and give it a go.
20:13 <Baronvaile_> Sorry, I guess the container cannot ping the new IP.
20:15 <Baronvaile_> Anyway. Thanks Dmitriy, I'll go look over my network setup more.
20:18 <jrosser> Baronvaile_: have you tried building an all-in-one deployment first? this should give you a template or understanding to copy when making a multi-node setup
20:19 <Baronvaile_> I have not tried an all-in-one yet. I have stood up an RDO Packstack (not the same, I know)
20:20 <Baronvaile_> I will step back and try the AIO
20:23 <Baronvaile_> Thank you again, and I'm out.
21:46 <opendevreview> Merged openstack/openstack-ansible master: Use ttl instead of not_after in pki_authorities  https://review.opendev.org/c/openstack/openstack-ansible/+/948887

Generated by irclog2html.py 4.0.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!