*** ysandeep|out is now known as ysandeep | 05:41 | |
*** ysandeep is now known as ysandeep|brb | 06:16 | |
*** ysandeep|brb is now known as ysandeep | 06:57 | |
*** akahat|ruck is now known as akahat|rover|lunch | 07:35 | |
*** akahat|rover|lunch is now known as akahat|rover | 08:28 | |
kleini | I am on W and I see a lot of "AMQP server on 10.20.150.201:5671 is unreachable: Server unexpectedly closed connection. Trying again in 1 seconds.: OSError: Server unexpectedly closed connection". Is it somehow possible to make these connections to RabbitMQ more stable? | 08:34 |
opendevreview | Andrej Babolcai proposed openstack/openstack-ansible-os_swift master: Add support for running object-servers Per Disk https://review.opendev.org/c/openstack/openstack-ansible-os_swift/+/864685 | 08:38 |
*** ysandeep is now known as ysandeep|lunch | 08:52 | |
dokeeffe85 | Morning, another quick one please. I pointed my deploy host at a subnet 10.37.100.0/24 (used a few IPs for the controller & computes). I now need to point it at 3 different machines in a 10.37.115.0/24 subnet. I changed all the IPs in openstack_user_config and user_variables and finally regenerated the openstack_inventory.json. My compute1, compute2 and infra1 IPs have all updated, but all the container "ansible_host" entries have kept the | 09:16 |
dokeeffe85 | "10.37.110" addresses. They're also in other places too, as seen here https://paste.openstack.org/show/bdojHYvfdc3M1QNEr97B/ so the question is: is there somewhere else I need to change the IPs or regenerate any files, or will they be updated for the fields in the paste when ansible is run? | 09:16 |
jrosser | dokeeffe85: changing the subnet for the management network after deployment is one of the trickiest things you can do | 09:23 |
jrosser | and that's not a scenario we expect the ansible playbooks to be able to deal with *at all* | 09:23 |
jrosser | part of the complexity is that some of the IP addresses, like rabbitmq, get written into the config files for pretty much all of the services | 09:26 |
jrosser | so there is no way to do this without some kind of downtime | 09:26 |
dokeeffe85 | Oh ok. We thought that maybe once we got the deploy host set up we could just keep it as it was and point it at a new cluster, so we could just deploy a full openstack after making all the adjustments we needed on our test environment | 09:27 |
jrosser | isn't that a slightly different question? | 09:29 |
jrosser | "can i use a deploy host for more than one deployment" vs. "can i adjust all the mgmt IP for a deployment i already have" | 09:29 |
dokeeffe85 | My bad, sorry if I asked incorrectly. So yes, your question is phrased better: can I do that? | 09:30 |
jrosser | i think that if you have more than one deployment then really you should have more than one set of /etc/openstack_deploy/* | 09:33 |
jrosser | so there would be two ways to do that: have multiple deploy hosts, which you could do with physical machines, VMs, LXC/LXD or maybe even docker. this is what I do, one LXD deploy host per deployment | 09:35 |
jrosser | alternatively you can have multiple sets of /etc/openstack_deploy data on the same host, controlled by the OSA_CONFIG_DIR environment variable | 09:36 |
jrosser | see https://opendev.org/openstack/openstack-ansible/src/branch/master/scripts/openstack-ansible.rc#L15 | 09:36 |
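A minimal sketch of that in practice, assuming a hypothetical second config tree at /etc/openstack_deploy_test:

```sh
# The rc file linked above falls back to /etc/openstack_deploy when the
# variable is unset, so exporting it switches the whole toolchain over.
export OSA_CONFIG_DIR=/etc/openstack_deploy_test
cd /opt/openstack-ansible/playbooks
openstack-ansible setup-infrastructure.yml
```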
jrosser | another thing you can do is keep the openstack_user_config pretty minimal, so it really expresses only the deployment-specific things such as the IP addresses of the physical hosts | 09:37 |
jrosser | then customisation that is common goes in user_variables.yml | 09:38 |
jrosser | note that you can have multiple user_variables files, with the pattern user_variables*.yml | 09:38 |
noonedeadpunk | fwiw the pattern is user_*.yml | 09:52 |
noonedeadpunk | (that's actually the reason why/how user_secrets.yml gets loaded) | 09:53 |
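For illustration, a hedged sketch of a config tree using that pattern (the ceph file is a hypothetical example):

```sh
ls /etc/openstack_deploy
# openstack_user_config.yml    <- deployment-specific things: host IPs etc.
# user_variables.yml           <- common customisation
# user_variables_ceph.yml      <- extra file, matched by the user_*.yml pattern
# user_secrets.yml             <- also matched by the pattern, hence auto-loaded
```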
noonedeadpunk | well, I've figured out why sahara breaks... it's the jsonschema version, and basically sahara itself is just broken for Zed | 09:53 |
*** ysandeep|lunch is now known as ysandeep | 10:04 | |
dokeeffe85 | Ok, jrosser, thanks for that. I will look into that right away. | 10:11 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Use /healthcheck URI for backends https://review.opendev.org/c/openstack/openstack-ansible/+/864424 | 10:20 |
dokeeffe85 | Hey jrosser, I promise I'll leave you alone after this one (for today). I copied /opt/openstack-ansible/etc/openstack_deploy/ to /etc/openstack_deploy_OSA/ and did my config files from scratch. Changed export OSA_CONFIG_DIR="${OSA_CONFIG_DIR:-/etc/openstack_deploy}" to the new directory and generated the inventory file to look at it, but changed it to --config /etc/openstack_deploy_OSA/. Now the question I have is: it took the variables from | 10:40 |
dokeeffe85 | the files in that directory but wrote the inventory file to the old directory (/etc/openstack_deploy/), so do I need to add a flag to tell it where to write the inventory and where to use it from when I run the playbooks? Sorry for all the questions | 10:40 |
jrosser | if it's written to the wrong place then that sounds like a bug | 10:42 |
jrosser | but also the idea is that you set the environment variable OSA_CONFIG_DIR in your shell; you should not edit the OSA files to do that | 10:43 |
jrosser | OSA_CONFIG_DIR="${OSA_CONFIG_DIR:-/etc/openstack_deploy}" this means "take the value set in the OSA_CONFIG_DIR environment variable, falling back to /etc/openstack_deploy if it is not defined" | 10:44 |
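A quick shell demonstration of that fallback expansion:

```sh
unset OSA_CONFIG_DIR
echo "${OSA_CONFIG_DIR:-/etc/openstack_deploy}"   # -> /etc/openstack_deploy
export OSA_CONFIG_DIR=/etc/openstack_deploy_OSA
echo "${OSA_CONFIG_DIR:-/etc/openstack_deploy}"   # -> /etc/openstack_deploy_OSA
```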
jrosser | noonedeadpunk: you use OSA_CONFIG_DIR ? | 10:51 |
noonedeadpunk | not as of today | 10:51 |
noonedeadpunk | But used that a couple of years ago... | 10:52 |
noonedeadpunk | dokeeffe85: I think the question is: how did you generate the inventory? By running the dynamic_inventory script manually, or by running some playbook with openstack-ansible? | 10:53 |
noonedeadpunk | As I don't think the dynamic_inventory script consumes OSA_CONFIG_DIR, but it does have a flag that can be passed to it | 10:54 |
*** dviroel|afk is now known as dviroel | 11:20 | |
dokeeffe85 | Ah ok, noonedeadpunk, I used this command: inventory/dynamic_inventory.py --config /etc/openstack_deploy_OSA/ to generate it (just as a test to see what it would generate in the file) but realised it was generated in the other directory. jrosser, sorry, I thought you meant edit the openstack-ansible.rc file to change the directory | 11:24 |
dokeeffe85 | Ok, not sure you'd recommend it, but I renamed the /etc/openstack_deploy dir and used the new one as /etc/openstack_deploy, and the inventory is now correct | 11:53 |
jrosser | dokeeffe85: there is also the standard ansible command `ansible-inventory` if you want to generate / dump the inventory data | 12:05 |
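For example (a hedged invocation, assuming the usual checkout path for the dynamic inventory script):

```sh
# dump the fully resolved inventory as JSON using the stock ansible tooling
ansible-inventory -i /opt/openstack-ansible/inventory/dynamic_inventory.py --list
```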
*** ysandeep is now known as ysandeep|brb | 12:06 | |
*** ysandeep|brb is now known as ysandeep | 12:17 | |
*** ysandeep is now known as ysandeep|afk | 12:47 | |
dokeeffe85 | Thanks jrosser, will keep that in mind. For now what I did seems to suit, so let's hope the cinder override I put in works and I'll be very happy :) | 12:51 |
*** ysandeep|afk is now known as ysandeep | 13:26 | |
*** akahat|rover is now known as akahat|ruck|afk | 14:03 | |
noonedeadpunk | I'm looking at all these ansible_host everywhere in the code and just (>д<) | 14:45 |
noonedeadpunk | Kind of wonder if we should use management_address or openstack_service_bind_address instead. | 14:45 |
noonedeadpunk | for things like https://opendev.org/openstack/openstack-ansible-galera_server/src/branch/master/defaults/main.yml#L129 | 14:46 |
*** ysandeep is now known as ysandeep|retro | 14:58 | |
*** akahat|ruck|afk is now known as akahat|ruck | 15:00 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add zookeeper deployment https://review.opendev.org/c/openstack/openstack-ansible/+/864750 | 15:06 |
anskiy | I have a question about using Ceph with OpenStack: if I'm supposed to move the `cinder-volume` service to the control-plane nodes, would I need to manually configure a frontend in HAProxy and add the `cinder-volume` service in Keystone by hand, so it would point to the load-balanced internal address? The reason I'm asking is that I see there is a `os-vol-host-attr:host` attribute for the volume and I wonder what it would be set to in that case | 15:08 |
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-zookeeper master: Initial commit to zookeeper role https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/864752 | 15:08 |
noonedeadpunk | anskiy: I totally didn't get the part about haproxy. But os-vol-host-attr:host will be set to the name of the container or wherever cinder-volume runs. In case of active/active mode it doesn't matter much, since there's a `cluster` parameter that is set for each volume. So each cinder-volume service that is in the same "cluster" will be able to manage volumes regardless of this attribute | 15:11 |
noonedeadpunk | For active/passive you would need to set a backend_name when defining the backend; that way all cinder-volumes will be discovered with this name, so each of them can manage these volumes if needed | 15:12 |
noonedeadpunk | If it's not active/active and the cinder-volume service names are unique, then in case of an outage of the service you won't be able to perform any action on volumes that are assigned to it, except updating the host using cinder-manage (to re-assign them to another service) | 15:14 |
anskiy | noonedeadpunk: ah, I see, so there would be another attribute for active/active. This perfectly answers my question, thank you. | 15:14 |
noonedeadpunk | Should be that: https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/templates/cinder.conf.j2#L17-L19 | 15:15 |
anskiy | got it :) | 15:15 |
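A hedged cinder.conf sketch of the two patterns described above. Values are hypothetical; the upstream per-backend option for pinning a shared service identity is `backend_host`, assumed here to be what the backend_name remark refers to:

```ini
[DEFAULT]
# active/active: every cinder-volume sharing this cluster name can manage
# any volume in the cluster, regardless of os-vol-host-attr:host
cluster = mycluster

[rbd]
# active/passive: pin the service identity so a replacement instance of this
# backend is discovered under the same name and can take over its volumes
backend_host = rbd-volumes
```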
*** ysandeep|retro is now known as ysandeep | 16:02 | |
*** ysandeep is now known as ysandeep|out | 16:08 | |
*** dviroel is now known as dviroel|lunch | 16:32 | |
noonedeadpunk | folks, any idea how to properly echo stuff to a socket? Like `echo isro | nc 127.0.0.1 2181`, but with some proper ansible module? | 17:04 |
*** dviroel|lunch is now known as dviroel | 17:27 | |
jrosser | noonedeadpunk: other than writing a very trivial module and putting it in the plugins repo, or just using `command`? | 17:49 |
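A minimal sketch of that fallback as an Ansible task; it assumes nc is installed on the target, and that zookeeper's `isro` four-letter word answers `rw` or `ro`. The `shell` module is needed rather than `command` because of the pipe:

```yaml
- name: Ask zookeeper whether it is read-only via the isro four-letter word
  ansible.builtin.shell: echo isro | nc 127.0.0.1 2181
  register: zk_isro
  changed_when: false        # a read-only probe never changes state
  failed_when: zk_isro.stdout not in ['rw', 'ro']
```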
jrosser | also is there any way to see your zookeeper role, given the redirect in gitea? a mirror somewhere else? | 17:59 |
jrosser | oh i found https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/864752 | 18:20 |
jrosser | are we going to want to do TLS here too? | 18:20 |
*** dviroel is now known as dviroel|afk | 19:34 | |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-specs master: Add proposal for enabling TLS on all internal communications https://review.opendev.org/c/openstack/openstack-ansible-specs/+/822850 | 19:38 |
nixbuilder | I just re-installed using OSA 25.1.1 and my installation broke. I was using OSA 25.0.0.0rc1. The installation failed when it tried to start haproxy on the infra nodes; a certificate was missing in /etc/haproxy/ssl. | 19:48 |
nixbuilder | Now that I have rolled back to 25.0.0.0rc1 the installation is working fine. | 19:50 |
opendevreview | Damian Dąbrowski proposed openstack/ansible-role-uwsgi master: Install OpenSSL development headers https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/864783 | 19:54 |
noonedeadpunk | jrosser: yup, for sure we want tls there | 19:57 |
noonedeadpunk | I haven't done it yet though, wanted to do an MVP first and add complexity right after that | 19:58 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-specs master: Add proposal for enabling TLS on all internal communications https://review.opendev.org/c/openstack/openstack-ansible-specs/+/822850 | 20:13 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Change 'vip_bind' to 'vip_address' in templates/service-redirect.j2 https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/864784 | 20:16 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Accept both HTTP and HTTPS also for external VIP during upgrade https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/864785 | 20:16 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Fix warnings in haproxy config https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/864786 | 20:16 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-specs master: Add proposal for enabling TLS on all internal communications https://review.opendev.org/c/openstack/openstack-ansible-specs/+/822850 | 20:20 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add zookeeper deployment https://review.opendev.org/c/openstack/openstack-ansible/+/864750 | 20:32 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-os_glance master: Add support for TLS to Glance https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/821011 | 20:32 |
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-zookeeper master: Initial commit to zookeeper role https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/864752 | 20:33 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add zookeeper deployment https://review.opendev.org/c/openstack/openstack-ansible/+/864750 | 20:34 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add zookeeper deployment https://review.opendev.org/c/openstack/openstack-ansible/+/864750 | 20:58 |
jrosser | nixbuilder: you need to provide some debug info if you want some insights | 21:05 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Add support for enabling TLS to haproxy backends https://review.opendev.org/c/openstack/openstack-ansible/+/821090 | 21:09 |
damiandabrowski | hmm does anybody know how I can mark somebody else's change as "work in progress"? seems like i have this option available only for my own changes | 21:11 |
damiandabrowski | i'm talking about https://review.opendev.org/c/openstack/openstack-ansible/+/821090 | 21:11 |
jrosser | damiandabrowski: you can override haproxy_stick_table per service | 21:12 |
jrosser | see https://github.com/openstack/openstack-ansible-haproxy_server/blob/06e76706c7818843137add470c8c6cc2166eed62/releasenotes/notes/custom-stick-tables-1c790fe223bb0d5d.yaml | 21:12 |
damiandabrowski | yeah i see we're doing that for horizon, but decided to skip it for now as I'm trying to focus on internal TLS | 21:12 |
jrosser | so haproxy_stick_table in group_vars applies to all services, but if you want to remove it from galera because it makes no sense then set service.haproxy_stick_table=[] | 21:13 |
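An abridged, hedged sketch of what that per-service override looks like in a haproxy service definition (all other keys of the real galera entry omitted):

```yaml
- service:
    haproxy_service_name: galera
    # an empty list removes the stick table for this one service while the
    # group_vars-wide haproxy_stick_table default still applies elsewhere
    haproxy_stick_table: []
```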
jrosser | i was looking at this https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/864786/1 | 21:13 |
damiandabrowski | ahh that's actually right, stick tables for galera do not make any sense as only 1 backend is active | 21:14 |
jrosser | and we primarily use them for rate limiting on the external API as well, which also makes no sense for galera | 21:14 |
damiandabrowski | okok i'll try to fix it in a moment | 21:15 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: [WIP] Add support for enabling TLS to haproxy backends https://review.opendev.org/c/openstack/openstack-ansible/+/821090 | 21:17 |
jrosser | the WIP status thing is interesting | 21:23 |
jrosser | because we by convention use [WIP] in the commit message and anyone can push a new patch to take that away | 21:24 |
jrosser | or add it | 21:24 |
damiandabrowski | yeah and I tried to use `git review` with -w and got this: | 21:24 |
damiandabrowski | ! [remote rejected] HEAD -> refs/for/master%topic=tls-backend,wip (only users with Toggle-Wip-State permission can modify Work-in-Progress) | 21:25 |
damiandabrowski | so i guess it's something with privileges | 21:25 |
damiandabrowski | well, I can live with having [WIP] in commit message :D | 21:25 |
jrosser | damiandabrowski: example of manipulating rules for WIP flag https://review.opendev.org/c/openstack/project-config/+/863931 | 21:30 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Disable stick tables for galera https://review.opendev.org/c/openstack/openstack-ansible/+/864792 | 21:37 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: [WIP] Add support for enabling TLS to haproxy backends https://review.opendev.org/c/openstack/openstack-ansible/+/821090 | 21:37 |
jrosser | damiandabrowski: did you need another patch somewhere to ensure that the ssl headers are present in the repo container before we try to build uwsgi for the first time? | 21:58 |
jrosser | i.e. to ensure we always build uwsgi with TLS capability | 21:59 |
damiandabrowski | it's not covered now, i thought we decided it's not needed as we're about to bump the uWSGI version | 22:08 |
damiandabrowski | so theoretically speaking, during the next openstack upgrade all environments will have uWSGI upgraded and the OpenSSL dev headers installed | 22:08 |
damiandabrowski | do you think we still need to find a way to ensure that the "correct" uWSGI is installed? | 22:09 |
damiandabrowski | ahh and there's one more thing: the ssl dev headers need to be present in the container where uWSGI is running, not on the repo container | 22:15 |
damiandabrowski | AFAIK we don't store the uwsgi wheel on the repo container | 22:17 |
jrosser | isn't it required at the point that the wheel is built though? | 22:20 |
jrosser | because the repo container is where the compiling / linking against the ssl libraries takes place | 22:21 |
jrosser | the ssl-dev package will contain C function headers that are no good at runtime | 22:21 |
damiandabrowski | actually I'm looking for an answer as to why I don't have a uWSGI wheel on my repo container | 22:22 |
damiandabrowski | do you have any idea? | 22:22 |
jrosser | does it come direct from pypi instead? | 22:22 |
damiandabrowski | sounds like uWSGI was built on the glance container and the wheel was stored in /root/.cache | 22:25 |
damiandabrowski | https://paste.openstack.org/raw/bwQZqAj0aHU7Z5RP8LIF/ | 22:26 |
jrosser | that looks like the install | 22:28 |
jrosser | rather than compilation | 22:28 |
jrosser | the implementation is in C so it would need an entire toolchain which i'm not sure is in the glance container | 22:28 |
damiandabrowski | https://paste.openstack.org/show/bQ3ssU5gpOpu2xXkG1Rm/ | 22:30 |
damiandabrowski | hmm there is a message about 'uWSGI compiling server core' | 22:30 |
damiandabrowski | maybe it's the reason why gcc is set as a requirement? https://opendev.org/openstack/ansible-role-uwsgi/src/branch/master/vars/source_install.yml#L19 | 22:31 |
jrosser | ^ that is on the repo server? | 22:31 |
damiandabrowski | no, it's all on glance container | 22:32 |
jrosser | well that is interesting | 22:33 |
jrosser | i wonder if we intend that to be the case | 22:33 |
jrosser | it means we install gcc everywhere and do the build everywhere | 22:35 |
jrosser | so to ensure that uwsgi has TLS support we need to add libssl-dev or whatever it was to the list of packages along with gcc in the uwsgi role | 22:36 |
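A hedged sketch of that addition to the uwsgi role's package list; the variable name is assumed from the vars file linked earlier, and RHEL-family hosts would need openssl-devel instead:

```yaml
# vars/source_install.yml (sketch; variable name assumed)
uwsgi_build_distro_packages:
  - gcc
  - libssl-dev   # assumption: Debian/Ubuntu name for the OpenSSL dev headers
```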
jrosser | then it's a different question again if we want to be using the repo server here or not | 22:37 |
damiandabrowski | yeah, it sounds convenient to build uWSGI on the repo host and store the wheel there | 22:39 |
damiandabrowski | I only wonder if there was some reason behind the current behavior... | 22:39 |
damiandabrowski | i'll try to find something tomorrow as my brain is not working anymore... :D | 22:40 |
damiandabrowski | have a good night | 22:40 |