opendevreview | Merged openstack/openstack-ansible stable/yoga: Add Glance tempest plugin repo to testing SHA pins list https://review.opendev.org/c/openstack/openstack-ansible/+/870778 | 00:43 |
opendevreview | chandan kumar proposed openstack/openstack-ansible-os_tempest master: Add support for whitebox-neutron-tempest-plugin https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/870812 | 04:52 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: [doc] Fix storage architecture links https://review.opendev.org/c/openstack/openstack-ansible/+/871050 | 08:34 |
noonedeadpunk | let's quickly land backport https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/870907 | 10:16 |
jrosser | done | 10:28 |
*** dviroel|afk is now known as dviroel | 11:13 | |
admin1 | i have a wildcard to *.domain.com that I use for my endpoint .. like cloud.domain.com .. i was wondering if it's possible on haproxy_extra_services to have the frontend listen to say s3.domain.com and have it pointed to a ceph backend ? .. so that for object storage, the url becomes s3.domain.com | 11:13 |
dok53 | Hi all, it's been a while :) Could anyone give me a pointer as to what could be causing this or where to look? https://paste.openstack.org/show/b7k0RYHwijjzU66SlGU1/ | 11:19 |
noonedeadpunk | admin1: well, the frontend listens not on a domain, but on an IP or interface. and for matching the domain name you need to add an ACL to the frontend IIRC | 11:21 |
noonedeadpunk | and you can define haproxy_acls for haproxy_extra_services https://opendev.org/openstack/openstack-ansible-haproxy_server/src/branch/master/templates/service.j2#L71-L78 | 11:22 |
noonedeadpunk | so I'd say that should be possible | 11:22 |
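A sketch of what that could look like in user variables, following the `haproxy_acls` hook linked above. The service name, backend group, ports and healthcheck path are illustrative assumptions for a radosgw backend, not tested config:

```yaml
# Hypothetical haproxy_extra_services entry: route s3.domain.com to ceph rgw.
haproxy_extra_services:
  - service:
      haproxy_service_name: ceph-rgw
      # assumes an inventory group holding the radosgw hosts
      haproxy_backend_nodes: "{{ groups['ceph-rgw'] | default([]) }}"
      haproxy_port: 443
      haproxy_backend_port: 8080
      haproxy_balance_type: http
      haproxy_acls:
        ACL_s3:
          rule: "hdr(host) -i s3.domain.com"
```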
noonedeadpunk | dok53: oh, admin1 was coming with the same trouble yesterday | 11:23 |
noonedeadpunk | were you able to sort it out in the end? | 11:23 |
dok53 | It passed that task once for me noonedeadpunk and failed on something else but it was late so I decided to go back at it today and the above is all I get to | 11:29 |
noonedeadpunk | damn, I can totally recall seeing that behaviour of keystone, when it refuses to issue a token... | 11:32 |
noonedeadpunk | oh! memcached? can it be that one of memcached is down? | 11:33 |
noonedeadpunk | or firewalled? | 11:33 |
dok53 | I'm not too sure as it's a new install so I'd imagine it should all be set up properly up to that point. I will look at that now though just in case | 11:37 |
noonedeadpunk | try to telnet to the memcached port from the keystone container | 11:38 |
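The telnet check above can be scripted. A minimal sketch (host and port values are placeholders; run it from inside the keystone container against each address in the keystone `memcache_servers` list):

```python
import socket

def memcached_alive(host: str, port: int = 11211, timeout: float = 2.0) -> bool:
    """Return True if a memcached instance answers a 'version' query."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"version\r\n")
            reply = sock.recv(64)
            return reply.startswith(b"VERSION")
    except OSError:
        # covers connection refused, timeouts, and firewall drops
        return False
```

Anything other than `True` here points at exactly the down/firewalled memcached scenario discussed above.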
dok53 | Will do, just spotted that "Failed unmounting /var/log/journal/2cce3301a44647dfa4e4644a99d5a4dc" in the memcached logs | 11:40 |
admin1 | dok53, i struggled with the same yesterday and did almost 10 different builds | 11:53 |
admin1 | do you have iptables running ? | 11:53 |
admin1 | in the controllers ? | 11:53 |
admin1 | look for rules ( or docker rules ) | 11:53 |
dok53 | The memcached ip address is the gateway for the subnet I'm using, I guess that's not great | 11:54 |
noonedeadpunk | hehe | 11:54 |
admin1 | and if you are using cephadm to deploy ceph and trying to make a controller also a mgr, this will also not work | 11:54 |
admin1 | curl :5000 on the keystone url . does it work from everywhere ? | 11:54 |
noonedeadpunk | yeah, I think you need to take care of used_ips in openstack_user_config and re-create containers | 11:55 |
noonedeadpunk | it's good though you've spotted that early enough | 11:55 |
dok53 | Yep I'll rebuild, I thought I had the first 20 IPs in that list but maybe I did something wrong. Thanks for the pointers as usual :) | 11:56 |
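For reference, the `used_ips` fix discussed above looks roughly like this in `openstack_user_config.yml`. Addresses are illustrative; the point is to reserve the gateway, VIPs and any statically assigned hosts out of the inventory's dynamic pool (ranges are "first,last" strings):

```yaml
# Illustrative openstack_user_config.yml fragment
used_ips:
  - "172.29.236.1"                  # subnet gateway
  - "172.29.236.9"                  # internal VIP
  - "172.29.236.10,172.29.236.30"   # hosts / other static assignments
```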
opendevreview | Merged openstack/openstack-ansible-os_neutron stable/zed: Disable dhcp-agent and metadata-agent for OVN https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/870907 | 12:46 |
admin1 | noonedeadpunk, thanks for the link and i also read the examples here https://docs.openstack.org/openstack-ansible-haproxy_server/latest/configure-haproxy.html .. but could not figure out how to do s3.domain.com -> backend 8080 | 12:47 |
noonedeadpunk | admin1: have you checked this? https://www.haproxy.com/blog/how-to-map-domain-names-to-backend-server-pools-with-haproxy/ | 13:03 |
mgariepy | you guys probably saw it but posting anyway: https://github.blog/2023-01-17-git-security-vulnerabilities-announced-2/ | 13:05 |
noonedeadpunk | yeah, thanks! | 13:12 |
dok53 | noonedeadpunk yep seems that was the issue. I had excluded all IPs on the interfaces but not the gateway or the virtuals. Thanks for pointing me to it | 13:42 |
dok53 | Thanks admin1 for your input too | 13:43 |
jrosser | admin1: there is already an haproxy ACL for LetsEncrypt so you can see how a match was made on the path | 13:56 |
jrosser | it should be pretty easy to extend that for matching a host as well | 13:57 |
jamesdenton | For Zed - any known issues rerunning the repo plays and getting stuck on "openstack.osa.glusterfs : Create gluster peers"? | 14:01 |
noonedeadpunk | Well, I'd say that setting `haproxy_acls: {'ACL_s3': {'rule': 'hdr(host) -i s3.domain.com'}}` should work | 14:01 |
noonedeadpunk | I guess | 14:01 |
noonedeadpunk | but yeah, referring let's encrypt one might be worth it | 14:02 |
jrosser | https://github.com/openstack/openstack-ansible/blob/37813cc247ff150bb99079ee42d4caeaa136f757/inventory/group_vars/haproxy/haproxy.yml#L238 | 14:02 |
jrosser | leads to | 14:02 |
jrosser | https://opendev.org/openstack/openstack-ansible-haproxy_server/src/branch/master/defaults/main.yml#L178-L181 | 14:02 |
noonedeadpunk | jamesdenton: well I saw that but never dug deeper | 14:03 |
jamesdenton | restarting glusterd seems to get it to move along, i will let you know how the playbook progresses | 14:04 |
mgariepy | finally you are upgrading? | 14:11 |
jamesdenton | i couldn't help myself | 14:11 |
jamesdenton | i just bumped the SHAs manually on a few | 14:12 |
jamesdenton | Interesting... it's this command that's hanging, but only on infra01 - gluster volume status gfs-repo lab-infra01:/gluster/bricks/1 detail. And without 'detail', it works | 14:25 |
jamesdenton | and it ends up hanging glusterd | 14:26 |
jrosser | jamesdenton: what OS? | 14:35 |
jamesdenton | 20.04 | 14:35 |
jrosser | andrewbonney: ^ we didn't see this? | 14:36 |
andrewbonney | No, worked fine for us | 14:36 |
jrosser | hmm | 14:36 |
jamesdenton | did it? ok good. | 14:36 |
jamesdenton | ahh, i think it's not cluster but some worse disk-related issue | 14:38 |
jamesdenton | *gluster | 14:38 |
jamesdenton | df -h is hanging here, too. And the 'detail' param queries disk | 14:38 |
jamesdenton | maybe i have a stale NFS mount or something | 14:39 |
jamesdenton | alright, carry on. nothing to see here | 14:40 |
jamesdenton | thanks jrosser andrewbonney | 14:42 |
jrosser | jamesdenton: it's trivial to replace gluster with NFS if you have that lying around anyway | 14:46 |
jrosser | there are vars in the repo server role that let you point systemd_mount role at whatever you've got | 14:47 |
jamesdenton | thanks, i might consider that | 14:47 |
jrosser | and there should be a bool to disable the whole gluster deploy as well | 14:47 |
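A sketch of the NFS-instead-of-gluster idea via the systemd_mount role. The variable name consuming this list in the repo_server role is not shown here and the mount details are placeholders; check the role's `defaults/main.yml` for the real hook jrosser mentions:

```yaml
# Illustrative only: mount an existing NFS export over the repo path.
# Keys follow the systemd_mount role's what/where/type/options shape.
systemd_mounts:
  - what: "nfs-server.example.com:/srv/osa-repo"   # placeholder export
    where: "/var/www/repo"
    type: "nfs4"
    options: "rw,hard"
    state: "started"
```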
amarao | Does anyone know which role/code writes venv_tag for hosts in facts? I see it for compute hosts and localhost, but do not see it for infra hosts... | 15:15 |
*** dviroel is now known as dviroel|lunch | 16:01 | |
jrosser | amarao: can you be a bit more specific? i see /etc/ansible/facts.d/openstack_ansible.fact on my infra host..... is that what you mean? | 16:21 |
*** dviroel|lunch is now known as dviroel | 16:57 | |
admin1 | jrosser, something like this ? https://paste.openstack.org/show/bwQ4wQm949EskcQ7yT9q/ | 17:29 |
admin1 | please ignore the first line : true | 17:30 |
jrosser | admin1: perhaps, though it's pretty confusing to re-use the var named letsencrypt for something completely different | 17:35 |
jrosser | perhaps put the ansible aside for a moment and adjust the haproxy config directly first to understand how it works | 17:40 |
jrosser | like this https://cheat.readthedocs.io/en/latest/haproxy.html#route-based-on-host-request-header | 17:42 |
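The linked pattern boils down to an ACL on the Host header plus `use_backend`. A minimal hand-written haproxy.cfg sketch of it (frontend/backend names, bind address and server address are illustrative):

```
frontend fe-external
    bind 203.0.113.10:443 ssl crt /etc/ssl/private/wildcard.pem
    acl host_s3 hdr(host) -i s3.domain.com
    use_backend be-radosgw if host_s3
    default_backend be-default

backend be-radosgw
    server rgw1 172.29.236.21:8080 check
```

Once this behaves as expected by hand, it maps back onto `haproxy_extra_services` plus `haproxy_acls`.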
noonedeadpunk | folks, what could be the reason that on new compute ovs not listening on 6640? Does it need some special config? | 17:47 |
noonedeadpunk | that's not osa-setup node fwiw | 17:47 |
noonedeadpunk | I don't have `is_connected: true` for manager for some reason... | 17:50 |
noonedeadpunk | ah. seems like the wrong command for set-manager - used `ovs-vsctl set-manager tcp:127.0.0.1:6640` instead of `ovs-vsctl set-manager ptcp:6640:127.0.0.1` | 17:52 |
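For reference, the two forms do opposite things: `tcp:` makes ovsdb-server dial out to a remote manager, while `ptcp:` makes it listen passively, which is what gives you a socket on 6640. A quick sanity check on the compute node (the `is_connected: true` flag only appears once a client attaches):

```
# make ovsdb-server listen on 127.0.0.1:6640 (passive tcp)
ovs-vsctl set-manager ptcp:6640:127.0.0.1

# verify the Manager line and connection state
ovs-vsctl show

# verify something is actually listening on 6640
ss -ltn 'sport = 6640'
```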
moha7 | To use two IPs from the subnet 172.17.222.0/24 for the HAProxy internal & external VIPs, how should each one be specified: `172.17.222.35/32` or `172.17.222.35/24`? | 18:08 |
moha7 | As I know, we use the mask when configuring interfaces | 18:09 |
moha7 | I mean in the file `user_variables.yml` | 18:09 |
moha7 | `haproxy_keepalived_internal_vip_cidr: "<vip>/<mask>"` | 18:10 |
spatel | all vip address should be /32 | 18:14 |
moha7 | +1 | 18:16 |
moha7 | 172.17.222.35/32 is the same with or without the /32 (AKA single IP), right? Then I think /32 is a bit confusing if it's not necessary there. | 18:17 |
noonedeadpunk | hm. why did I even need ovs-vsctl set-manager... | 18:28 |
noonedeadpunk | moha7: no, it's not absolutely the same, as that's being parsed and fed to ip | 18:29 |
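The distinction is easy to see with Python's ipaddress module: the address part is identical, but the prefix (which is what ends up in `ip addr add`) is not — /32 is a host route, while /24 would claim the whole subnet on the VIP address:

```python
import ipaddress

vip32 = ipaddress.ip_interface("172.17.222.35/32")
vip24 = ipaddress.ip_interface("172.17.222.35/24")

# Same address either way...
assert vip32.ip == vip24.ip

# ...but a different network, hence different kernel routing behaviour
print(vip32.network)  # 172.17.222.35/32
print(vip24.network)  # 172.17.222.0/24
```

Which is why the advice above is to always give VIPs as /32.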
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/zed: Bump OpenStack-Ansible Zed https://review.opendev.org/c/openstack/openstack-ansible/+/871152 | 20:04 |
noonedeadpunk | How did we end up having 1 release note o_O | 20:04 |
noonedeadpunk | or we merged directly to integrated... | 20:05 |
admin1 | moha7, better use 2 interfaces or ip ranges ? | 20:13 |
admin1 | yeah .. or add 2 ips in the same range | 20:14 |
moha7 | For security reason? yup | 20:14 |
admin1 | but in prod, you are usually going to have 2 diff ones | 20:14 |
admin1 | one that does 1:1 nat from public that is routed, and the other one is 172.29 that is unrouted | 20:14 |
admin1 | or if you don't access the public ip but just use internal, you can use netplan to also add 10.10.10.x on the interface, and then give that ip to the controllers | 20:15 |
admin1 | so in same interface, you can have multiple ranges | 20:15 |
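A netplan sketch of the "multiple ranges on one interface" idea above (file name, interface name and addresses are illustrative):

```yaml
# /etc/netplan/60-extra-range.yaml (illustrative)
network:
  version: 2
  ethernets:
    eno1:
      addresses:
        - 172.29.236.11/22   # existing mgmt address
        - 10.10.10.11/24     # second range on the same interface
```

Applied with `netplan apply`, both subnets are then reachable on eno1.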
moha7 | Cool; I didn't know about two ranges on the same interface | 18:18 |
*** dviroel is now known as dviroel|out | 21:05 | |
* jamesdenton needs to review Yoga deprecations before upgrading to Zed :| | 21:25 | |
jamesdenton | glance_remote_client got me | 21:25 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Remove "warn" parameter from command module https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/869662 | 21:25 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-specs master: Blueprint for separated haproxy service config https://review.opendev.org/c/openstack/openstack-ansible-specs/+/871187 | 21:38 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Prepare haproxy role for separated haproxy config https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/871188 | 21:42 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Prepare service roles for separated haproxy config https://review.opendev.org/c/openstack/openstack-ansible/+/871189 | 21:43 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-os_keystone master: Enable separated haproxy config https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/871190 | 21:47 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-os_nova master: Enable separated haproxy config https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/871191 | 21:48 |
*** arxcruz|ruck is now known as arxcruz | 21:50 | |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-galera_server master: Enable separated haproxy config https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/871192 | 21:50 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-os_horizon master: Enable separated haproxy config https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/871193 | 21:51 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: [DNM] Remove temporary tweaks related to separated haproxy service config https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/871194 | 21:53 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: [DNM] Remove temporary tweaks related to separated haproxy service config https://review.opendev.org/c/openstack/openstack-ansible/+/871195 | 21:55 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-galera_server master: Enable separated haproxy config https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/871192 | 21:58 |
damiandabrowski | hey folks, I'm leaving for a vacation, will be back on 30th Jan | 22:01 |
damiandabrowski | I wanted to push a blueprint and proposed changes for "separated haproxy service config". it can be found here: https://review.opendev.org/q/topic:separated-haproxy-service-config | 22:02 |
jrosser | damiandabrowski: why are the changes to the service roles needed, only adding a default scoped to the role that’s not otherwise used? | 22:55 |
damiandabrowski | yes, scope is the main reason because otherwise we'll probably need to define all haproxy services in group_vars/all which doesn't sound optimal | 23:03 |
damiandabrowski | another minor benefit is that instead of having haproxy_keystone_service, haproxy_placement_service etc. we just have one variable name: haproxy_services | 23:04 |
jrosser | I don’t understand what those do though | 23:04 |
jrosser | the role does not use the var | 23:04 |
jrosser | anyway, late now | 23:05 |
damiandabrowski | role itself doesn't but playbooks like galera-install.yml, os-keystone-install.yml do use it via https://review.opendev.org/c/openstack/openstack-ansible/+/871189/1/playbooks/common-tasks/haproxy-service-config.yml | 23:07 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!