| opendevreview | Merged openstack/openstack-ansible stable/2025.2: aio: fix octavia/trove when used with ovs https://review.opendev.org/c/openstack/openstack-ansible/+/970800 | 10:46 |
|---|---|---|
| opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Remove group conditionals from venv_packages https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/970368 | 10:56 |
| opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Remove group conditionals from venv_packages https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/970368 | 10:57 |
| * f0o facepalms | 12:42 |
| f0o | I need more coffee; here I was wondering "something is off" while being in #openstack-ansible on both Libera and Freenode | 12:42 |
| f0o | I blame the julbord (Swedish Christmas buffet) yesterday | 12:42 |
| f0o | Anyway - Can I shoehorn FQDN endpoints into a Port-Based endpoint setup by simply setting haproxy_base_service_overrides.haproxy_map_entries[base_domain].entries | 12:43 |
| noonedeadpunk | hehe | 12:44 |
| f0o | to all "$FQDN $service_backend" pairs and then slap haproxy_ssl_letsencrypt_domains with all of those FQDNs as well? | 12:44 |
| noonedeadpunk | have you checked the doc already? | 12:44 |
| f0o | that should handle all of HAProxy without removing the default ports, right? | 12:44 |
| f0o | The doc seems to cover only an "all or nothing" approach, including using FQDNs internally | 12:44 |
| noonedeadpunk | yeah, ok | 12:45 |
| f0o | I want some sort of hybrid where I don't mind port-based for the internal/admin URIs but want FQDNs only for the public URIs. I figured I can just leave the per-service haproxy settings at their defaults and use haproxy_base_service_overrides to define all of those, then just use *_service_publicuri to populate the endpoints | 12:45 |
| noonedeadpunk | So I think the big part of that is that endpoints in the keystone catalog will be updated to the new URL | 12:45 |
| noonedeadpunk | rather than a new one being created | 12:46 |
| noonedeadpunk | yes, I think you should be able to do that as well | 12:46 |
| f0o | wouldn't that be handled by setting, for example, keystone_service_publicuri = "keystone.region.domain.tld"? | 12:46 |
| f0o | then my haproxy_base_service_overrides.haproxy_map_entries[base_domain].entries would contain "keystone.region.domain.tld keystone_service-back" | 12:47 |
| noonedeadpunk | yes, right, that should work | 12:47 |
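A sketch of the hybrid setup being discussed, assuming it lives in `user_variables.yml`. The `base_domain` map name, the `keystone_service-back` backend, and `keystone_service_publicuri` come from the conversation itself; the glance entry, the exact list-of-dicts shape of the map override, and all domains are illustrative assumptions:

```yaml
# user_variables.yml -- hedged sketch, not a verified configuration.
# Keep the default port-based frontends; add FQDN routing on the base
# frontend by extending its haproxy map entries.
haproxy_base_service_overrides:
  haproxy_map_entries:
    - name: base_domain
      entries:
        - "keystone.region.domain.tld keystone_service-back"
        - "glance.region.domain.tld glance_api-back"   # assumed backend name

# The Let's Encrypt certificate must cover every public FQDN.
haproxy_ssl_letsencrypt_domains:
  - "{{ external_lb_vip_address }}"
  - "keystone.region.domain.tld"
  - "glance.region.domain.tld"

# Only the public endpoint moves to the FQDN; internal/admin stay port-based.
keystone_service_publicuri: "https://keystone.region.domain.tld"
```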
| f0o | neato | 12:47 |
| f0o | now just to muster the strength to actually deploy this, maybe after another coffee just to be sure.. considering I can't even IRC correctly anymore | 12:48 |
| noonedeadpunk | I wish I could come up with some neater way of doing that, but it seems tricky | 12:48 |
| noonedeadpunk | as some vars are defined only at role level, while haproxy is configured in playbooks, outside of roles | 12:48 |
| noonedeadpunk | that said, you also need to ensure that TLS is correct | 12:49 |
| f0o | Honestly, I think this way is very nice. I can just block the ports on the firewall if I don't want them; otherwise those ports would always serve as a backup route for healthchecks or similar too. And having the explicit list/mapping makes it less accidental for me as well. A conscious decision, which makes me double-check DNS beforehand. | 12:49 |
| f0o | Yeah haproxy_ssl_letsencrypt_domains needs to have the full list | 12:49 |
| noonedeadpunk | but also, changing the list won't trigger Let's Encrypt to re-issue certs, IIRC | 12:51 |
| noonedeadpunk | I think I had to drop /etc/letsencrypt or something to trigger new certs... | 12:51 |
| noonedeadpunk | (another thing which would be good to improve) | 12:51 |
| f0o | good to know, I'll look out for that | 12:51 |
| f0o | noonedeadpunk: placement seems to have a non-standard setting for the public URI. All services use *_service_publicuri (apart from nova_novncproxy, which uses nova_novncproxy_base_uri), but placement uses placement_service_publicurl (url vs uri). All other services _do_ define a *_service_publicurl, but it mostly defaults to the same value as *_service_publicuri, with some exceptions that add a version path behind it. | 13:04 |
| noonedeadpunk | yeah, makes sense to align it and add a uri variant for placement | 13:11 |
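Until the role gains the aligned variable, a hedged workaround sketch in `user_variables.yml`, using the `placement_service_publicurl` name f0o found in the role; the FQDN is a placeholder:

```yaml
# user_variables.yml -- sketch only. os_placement currently exposes the
# url-style variable rather than *_service_publicuri, so the public FQDN
# has to be set here until a uri-style variable is added.
placement_service_publicurl: "https://placement.region.domain.tld"
```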
| noonedeadpunk | and contributions are always welcome :) | 13:20 |
| f0o | yeah will do the PR after everything works haha | 13:22 |
| f0o | not starting too many parallel tasks again | 13:22 |
| noonedeadpunk | ++ fair enough | 14:13 |
| opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Do not define openstack_pki_san in integrated repo https://review.opendev.org/c/openstack/openstack-ansible/+/971206 | 16:01 |
| opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Do not define openstack_pki_san in integrated repo https://review.opendev.org/c/openstack/openstack-ansible/+/971206 | 16:04 |
| opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Add support for hashi_vault PKI backend https://review.opendev.org/c/openstack/openstack-ansible/+/948888 | 16:10 |
| f0o | noonedeadpunk: I think I found a bug with glance's mount points. A while back (over half a year ago) we moved from NFS-backed Glance to Swift-backed Glance. The inventory still has the NFS mount, and running -t glance-config will actually try to recreate the mount point. This NFS mountpoint is not configured anywhere anymore, and since the NFS does not exist anymore, the playbook errors out. | 16:11 |
| opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Enable openbao jobs https://review.opendev.org/c/openstack/openstack-ansible/+/948889 | 16:13 |
| noonedeadpunk | f0o: I think to disable it, the `glance_remote_client` should be defined with `state: absent`.... | 16:15 |
| noonedeadpunk | then it can be removed later | 16:15 |
| noonedeadpunk | pretty much to trigger https://opendev.org/openstack/ansible-role-systemd_mount/src/branch/master/tasks/systemd_mounts.yml#L132-L140 | 16:15 |
| noonedeadpunk | probably also with `enabled: false` just in case.... | 16:16 |
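A sketch of the suggested re-definition, assuming `user_variables.yml`; the `what`/`where`/`type` values are placeholders standing in for whatever the original NFS definition contained:

```yaml
# user_variables.yml -- hedged sketch. Re-declare the stale NFS mount with
# state: absent (plus enabled: false, just in case) so the systemd_mount
# role tears the unit down instead of trying to recreate it.
glance_remote_client:
  - what: "nfs.example.tld:/glance"    # placeholder: the old NFS export
    where: "/var/lib/glance/images"    # placeholder: the old mount point
    type: "nfs"
    state: absent
    enabled: false
```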
| f0o | TASK [systemd_mount : Create mount target(s)] is the failing task though | 16:16 |
| f0o | https://opendev.org/openstack/ansible-role-systemd_mount/src/branch/master/tasks/systemd_mounts.yml#L42 << this must not be matching | 16:17 |
| noonedeadpunk | um | 16:20 |
| f0o | https://paste.opendev.org/show/bl7eTSEuf0S73pze47HK/ | 16:21 |
| f0o | I grepped for the NFS mount and the only match is in openstack_inventory.json | 16:22 |
| noonedeadpunk | was it defined as part of container_vars? | 16:22 |
| noonedeadpunk | as this thing indeed does not get removed from inventory.... | 16:22 |
| f0o | it was in user_variables.yml | 16:23 |
| f0o | glance_default_store and the lot | 16:23 |
| noonedeadpunk | it should not get into the inventory then... | 16:23 |
| f0o | let me git-blame real quick | 16:23 |
| noonedeadpunk | pretty much only conf.d and openstack_user_config would end up in inventory | 16:24 |
| f0o | it could've been in openstack_user_config | 16:24 |
| noonedeadpunk | so you might want to keep it defined, but with an absent state, to get it removed as a mount | 16:25 |
| noonedeadpunk | and drop it from inventory.json | 16:25 |
| opendevreview | Damian Dąbrowski proposed openstack/ansible-role-pki master: Add hashi_vault backend https://review.opendev.org/c/openstack/ansible-role-pki/+/948881 | 16:26 |
| f0o | ok so 1) redefine it as state: absent; 2) manually edit the inventory to remove it? | 16:26 |
| opendevreview | Damian Dąbrowski proposed openstack/ansible-role-pki master: Add hashi_vault backend https://review.opendev.org/c/openstack/ansible-role-pki/+/948881 | 16:27 |
| f0o | noonedeadpunk: http://paste.opendev.org/show/bxEkKCl3x9xiunG86Yqf/ | 16:29 |
| noonedeadpunk | heh | 16:31 |
| f0o | with state: stopped it still tries to create them | 16:31 |
| f0o | so I guess I'm in a catch-22 | 16:31 |
| noonedeadpunk | right | 16:32 |
| noonedeadpunk | this is a bug then indeed | 16:32 |
| f0o | so I guess I just remove it from the inventory? :D | 16:32 |
| opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-systemd_mount master: Do not try to reload mount on absent state https://review.opendev.org/c/openstack/ansible-role-systemd_mount/+/971356 | 16:34 |
| noonedeadpunk | f0o: this should be solving it ^ | 16:34 |
| noonedeadpunk | but yeah | 16:35 |
| noonedeadpunk | I think I messed it up during some recent refactoring | 16:35 |
| f0o | heh | 16:37 |
| Baronvaile_ | Hi all. I'm running the openstack.osa.setup_openstack playbook against a set of Rocky 9.5 boxes. When I get to 'Install python packages into venv' for the placement container, I'm getting a failure. One of the ERRORs reports a version conflict for dependencies. | 19:32 |
| Baronvaile_ | I wondered if this was known with 31.1.2, and if this is too complicated for here, should I drop a bug into the Launchpad tracker? | 19:36 |
| noonedeadpunk | hey | 19:36 |
| noonedeadpunk | Baronvaile_: can you paste the error to https://paste.openstack.org/ ? | 19:37 |
| noonedeadpunk | and no - I don't think it's anything really known, as long as repo_containers are working as intended | 19:37 |
| Baronvaile_ | Yeah. Paste #bSBf6M62ulMtRxQB0ZPW | 19:38 |
| noonedeadpunk | Baronvaile_: so, you have `[Errno 113] No route to host` it seems?:) | 19:39 |
| noonedeadpunk | what happens if you curl http://10.178.100.3:8181/os-releases/31.1.2/rocky-9-x86_64/wheels/ ? | 19:39 |
| noonedeadpunk | or trace, or whatever | 19:39 |
| noonedeadpunk | this is expected to be an internal VIP address | 19:40 |
| noonedeadpunk | but depending on the setup, it's either spawned by keepalived, or, if it's a single host sandbox, expected to be added to an interface by the user | 19:40 |
| noonedeadpunk | (as then keepalived is not installed by default) | 19:40 |
| noonedeadpunk | though, you can force keepalived installation by setting haproxy_use_keepalived: true | 19:41 |
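A hedged sketch of forcing keepalived on in `user_variables.yml`; the variable names follow the usual OSA haproxy/keepalived ones, but the CIDRs and interface names here are placeholders for this particular deployment:

```yaml
# user_variables.yml -- sketch only. Force keepalived so the internal VIP
# is actually brought up, instead of expecting the user to add it to an
# interface by hand on a single-host setup.
haproxy_use_keepalived: true
haproxy_keepalived_internal_vip_cidr: "10.178.100.3/32"
haproxy_keepalived_internal_interface: br-mgmt
haproxy_keepalived_external_vip_cidr: "203.0.113.10/32"   # placeholder
haproxy_keepalived_external_interface: br-mgmt            # placeholder
```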
| Baronvaile_ | curl returns an index of the directory | 19:41 |
| noonedeadpunk | huh | 19:41 |
| noonedeadpunk | what if you run it from the `infra1-placement-container-66c6b6d3` ? | 19:42 |
| noonedeadpunk | you can do `lxc-attach -n infra1-placement-container-66c6b6d3` | 19:42 |
| noonedeadpunk | (you might also need to install curl there with dnf) | 19:43 |
| noonedeadpunk | based on the paste I'm like 99% sure there's some connectivity issue | 19:44 |
| Baronvaile_ | in the container I get no route to host. My web browser returns the list of files. Curl from my setup host also returns a list of files | 19:45 |
| noonedeadpunk | so.... | 19:47 |
| noonedeadpunk | the idea is that from the deploy host this address should be reachable via br-mgmt | 19:47 |
| noonedeadpunk | which in the container is represented as eth1 interface | 19:47 |
| noonedeadpunk | so on the host - br-mgmt, in the container eth1 | 19:48 |
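For context, that mapping is declared in the `provider_networks` section of `openstack_user_config.yml`; a typical shape looks like this (values illustrative, field names per the standard OSA example configs):

```yaml
# openstack_user_config.yml -- illustrative sketch of the conventional
# mapping: br-mgmt on each host shows up as eth1 inside the LXC containers.
global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_management_address: true
```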
| Baronvaile_ | Yeah, I'm seeing that I cannot ping the 10.178.100.3 from the container. The container has my external IP of 10.178.100.29 | 19:48 |
| noonedeadpunk | (in a "conventional" setup - you can do all kind of crazy things ofc) | 19:49 |
| Baronvaile_ | I suppose I'm running into an issue with eth1, because Rocky has my port listed as eno0 or eno1 | 19:50 |
| noonedeadpunk | you mean on the host or in the container? | 19:50 |
| Baronvaile_ | On the host | 19:50 |
| Baronvaile_ | The container is eth1 | 19:50 |
| noonedeadpunk | ok, right... but what is part of the br-mgmt bridge? | 19:50 |
| noonedeadpunk | and do you actually have it? :) | 19:51 |
| noonedeadpunk | as while containers are connected to it, you also need to ensure cross-host connectivity | 19:52 |
| Baronvaile_ | On infra1 the br-mgmt is a bridge-slave to eno8 (hope that is what you are asking for) | 19:53 |
| noonedeadpunk | but I hope I pointed you in some direction :) | 19:53 |
| noonedeadpunk | yeah | 19:53 |
| noonedeadpunk | so given you have an IP assigned to br-mgmt - can you actually ping it? And is eno8 a valid interface there? | 19:54 |
| noonedeadpunk | (and is there some kind of firewalling or MAC filtering between hosts, if this is a virtual environment?) | 19:55 |
| noonedeadpunk | as LXC will talk with its own MAC | 19:55 |
| Baronvaile_ | eno8 is a valid interface; it had the same 10.178.100.3 IP on it before enabling the bridge. That IP currently does ping. | 19:56 |
| noonedeadpunk | so, um, that should then be pretty much the same L2 network... | 19:57 |
| Baronvaile_ | Ok. That gives me something to look into. I'll have to study my network setup more. Thx. | 19:58 |
| noonedeadpunk | you can try adding some random unused IP to br-mgmt on the host (if it's not there already) and then check if LXC can reach the host, a nearby host, containers on the same host, and containers on neighbouring hosts | 19:58 |
| Baronvaile_ | Yeah, let me find an open IP quick and add it. | 19:59 |
| noonedeadpunk | shoot, I bet I have posted a script to healthcheck connectivity, but I can't find it now | 20:01 |
| noonedeadpunk | https://gist.github.com/noonedeadpunk/b9eee2331a3c732e4def0b97530940ba | 20:02 |
| * noonedeadpunk needs to push that to the project | 20:02 | |
| Baronvaile_ | Found an open IP, added it to the bridge. I can ping it from the deployment host, my local host and from the container. | 20:04 |
| Baronvaile_ | I guess I could flip the internal lb vip address to 10.178.100.8, the new IP I added and give it a go. | 20:09 |
| Baronvaile_ | Sorry, I guess the container cannot ping the new IP. | 20:13 |
| Baronvaile_ | Anyway, thanks Dmitriy, I'll go look over my network setup more. | 20:15 |
| jrosser | Baronvaile_: have you tried building an all-in-one deployment first? This should give you a template or understanding to copy when making a multi-node setup | 20:18 |
| Baronvaile_ | I have not tried an all-in-one yet. I have stood up an RDO Packstack (not the same, I know) | 20:19 |
| Baronvaile_ | I will step back and try the AIO | 20:20 |
| Baronvaile_ | Thank you again, and I'm out. | 20:23 |
| opendevreview | Merged openstack/openstack-ansible master: Use ttl instead of not_after in pki_authorities https://review.opendev.org/c/openstack/openstack-ansible/+/948887 | 21:46 |