Wednesday, 2024-08-07

grauzikasHello everyone. I spent a lot of time trying to set up OpenStack using openstack-ansible and I can't make Ansible install ovn-northd… I have been following a lot of manuals and examples from Google, but nothing works… it creates the container for it, but nothing inside of it. Not sure, maybe something changed and the manuals weren't updated, or I'm missing something.06:53
grauzikasmy configs look like this https://pastebin.com/cDBSNRbm06:54
grauzikasmaybe I'm missing something, maybe it reads some different configs or they should be in different folders… I don't understand why and can't find anything in the manuals06:54
grauzikaswill now try to go through the bug reports, maybe someone shared their configs there and then I will try to follow them… I've already spent so much time that I could have installed everything manually (was doing that previously, but to make upgrades easier I wanted to give ansible a try)06:56
grauzikashttps://www.irccloud.com/pastebin/5CoN2Jsa07:09
noonedeadpunkplatta: at the very least we need to know which task has failed to be able to trace it down07:26
noonedeadpunkwe do have some variables in the code that can disable no_log, but they are different per use case07:27
noonedeadpunkgrauzikas: hey07:27
jrossergood morning o/07:27
noonedeadpunkso if you have the container created - I'm really very surprised that northd is not getting installed inside it07:29
noonedeadpunkdefining network-northd_hosts as you did should really be enough for it to get installed07:29
jrossernoonedeadpunk: I see overrides of env.d for nova and neutron there which I would be suspicious of?07:30
jrosser(i think?)07:30
noonedeadpunkah, I missed that07:31
noonedeadpunkgrauzikas: I assume you've found some old blog post written for a pre-Zed setup07:32
noonedeadpunkyou should not need any of these env.d overrides as of today07:32
noonedeadpunkalso I usually suggest using `network-infra_hosts` instead of `network_hosts` to avoid potential confusion07:34
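
A minimal sketch of how those groups might look in /etc/openstack_deploy/openstack_user_config.yml, written here as a shell snippet; the host name and IP are placeholders, and the exact group names should be checked against the current openstack-ansible documentation:

# Hypothetical sketch: declare the OVN-related groups in the deployment config.
# "infra1" and 172.29.236.11 are placeholders for an actual controller host.
cat >> /etc/openstack_deploy/openstack_user_config.yml <<'EOF'
network-infra_hosts:
  infra1:
    ip: 172.29.236.11
network-northd_hosts:
  infra1:
    ip: 172.29.236.11
EOF
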
jrossergrauzikas: i would *highly* recommend building an all-in-one following the quickstart guide to get you a reference07:34
jrosserthis is very useful even if you intend to have a multinode deployment in order to have a comparison of how all the parts fit together07:35
noonedeadpunk(or just as `git clone https://opendev.org/openstack/openstack-ansible; cd openstack-ansible; ./scripts/gate-check-commit.sh`)07:36
noonedeadpunkbut do that only on some clean system you are ready to reinstall afterwards07:36
jrosseror in a vm07:37
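
For reference, an all-in-one build on a throwaway VM goes roughly as follows; the script names are as they appear in the openstack-ansible repository, but the quickstart guide should be treated as authoritative:

# Rough AIO sketch, per the quickstart guide, on a disposable host/VM.
git clone https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
./scripts/bootstrap-ansible.sh   # install ansible plus the required roles/collections
./scripts/bootstrap-aio.sh       # prepare the host and generate the AIO config
cd playbooks
openstack-ansible setup-hosts.yml setup-infrastructure.yml setup-openstack.yml
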
noonedeadpunkyou can also inspect inventory using a script /opt/openstack-ansible/scripts/inventory-manage.py -g07:41
grauzikasok, thank you… I will try your suggestions now.08:35
grauzikasis this enough to be able to retry deploying everything again?08:35
grauzikasapt-get --purge remove haproxy keepalived openvswitch-common openvswitch-switch ovn-host ovn-common -y && rm -rf /etc/haproxy/ /etc/keepalived/ && rm -rf /openstack && for container in $(lxc-ls -1); do   lxc-stop -n $container;   lxc-destroy -n $container; done08:35
grauzikason all nodes08:36
grauzikasalso I'm missing logging on some of the services, for example for neutron I must go inside the lxc container, edit the neutron configuration, define the log path and file, and restart it to get logging, and I found that a lot of services have the same issue. should I somehow tell ansible to enable logging for services or is this a manual job after installation?09:28
jrossergrauzikas: the logging should all be going to the systemd journal10:25
jrosserdo you have anything that's not doing that?10:26
jrosseryou should not have to configure anything manually after installation10:26
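
A quick way to check a service's journal output, assuming an LXC-based deployment; the container and unit names below are illustrative:

# Illustrative only: attach to the neutron container and read its journal.
lxc-attach -n <neutron_server_container_name>
journalctl -u neutron-server --since "1 hour ago"
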
jrosserthere is information here about the different ways that you can customise the configuration where needed https://docs.openstack.org/openstack-ansible/latest/reference/configuration/using-overrides.html10:28
jrosserthe `config_template` mechanism we use is especially powerful10:28
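
As a hedged sketch of that overrides mechanism (the variable name follows the usual <service>_<file>_overrides pattern; verify it against the os_neutron role before relying on it):

# Sketch: ini-style overrides merged into neutron.conf by config_template.
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
neutron_neutron_conf_overrides:
  DEFAULT:
    debug: true
EOF
# then re-run the neutron playbook so the change is templated out
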
grauzikasok, trying to rerun the playbooks after removing all the containers and openstack_ansible; we'll see what happens :)10:36
jrosserthere is also a playbook to destroy containers if you need it10:37
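
The playbook referred to is presumably lxc-containers-destroy.yml; a typical invocation might look like the following (the --limit group is an example only):

# Hedged example: destroy and recreate the containers for one group only.
cd /opt/openstack-ansible/playbooks
openstack-ansible lxc-containers-destroy.yml --limit neutron_server
openstack-ansible lxc-containers-create.yml --limit neutron_server
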
grauzikasis this the correct binding for vxlan in the ovn scenario - neutron_ovn_controller, and for vlan (used as an exit to the router where we have IPs announced by bgp) neutron_ovn_gateway?11:50
grauzikasbtw thanks for the playbook that destroys containers, previously I was removing them by hand, removing keys and so on :)11:54
noonedeadpunkno, vxlan is not a thing in OVN12:01
noonedeadpunkor well, I _think_ it can be used as external networks, but never tried that12:02
gillesMoHello Stackers !12:53
opendevreviewMerged openstack/openstack-ansible-os_neutron stable/2023.2: Correct 'neutron-policy-override' tag  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/92573412:54
gillesMoI'm trying to upgrade a lab cluster which is OpenStack Wallaby on Ubuntu 18.04 to, first, Ubuntu 20.04, and I'm already failing.12:54
gillesMoI'm following the distribution upgrade doc, but I had some failures (RabbitMQ, fernet token and credential renewal...)12:55
gillesMoBut for now, the main problem during setup-openstack (for keystone and all the other projects) is that it does not build, or find the builds, on the right repo container.12:57
gillesMoI don't understand how it's supposed to work. It seems replication via lsyncd is only from the "primary" container to the others, but as I have upgraded a non-primary one, some files have been put in the new secondary repo container but not replicated12:58
jrossergillesMo: if you make the old repo containers be "down" for haproxy then you will only use the upgraded one when deploying the services13:02
jrosseryou can do that by putting the backends in maintenance mode in haproxy13:03
jrosseror more brute force by shutting down the web servers or whole repo server containers13:03
jrosserin later releases a shared filesystem is used to make that whole trouble with lsyncd go away13:03
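
One way to drain a repo backend is through the haproxy admin socket; the socket path and the repo_all-back backend name below are assumptions to verify against the running haproxy config:

# Hedged sketch: list repo backends, then put one server into maintenance mode.
echo "show stat" | socat stdio /var/run/haproxy.stat | cut -d, -f1,2 | grep repo
echo "disable server repo_all-back/<old_repo_container>" | socat stdio /var/run/haproxy.stat
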
mgariepygillesMo, it's a bit outdated but there are a few tips here: https://etherpad.opendev.org/p/osa-newton-xenial-upgrade13:06
gillesMoOh, yes. Thank you! The doc says putting the backends in MAINT mode is optional, I'll check13:09
opendevreviewMerged openstack/openstack-ansible-os_neutron stable/2024.1: Correct 'neutron-policy-override' tag  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/92573313:10
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-haproxy_server master: Remove deprecated http-use-htx option  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/92587714:05
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-haproxy_server master: Remove deprecated 'stats bind-process' directive  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/92588114:17
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-haproxy_server master: Remove deprecated 'stats bind-process' directive  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/92588114:20
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-haproxy_server master: Remove the deprecated 'nbproc' config option from the example settings  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/92588414:25
gillesMoIt seems that this commit solves my constraint problem, I've cherry-picked it:14:35
gillesMohttps://opendev.org/openstack/ansible-role-python_venv_build/commit/57a2f226ebca7a2ec21920d4fd2b7ccf4549068414:36
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: Ensure Octavia communicates with Neutron through internal URL  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/92577015:22
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: Ensure Octavia communicates with Neutron through internal URL  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/92577015:23
plattaI'm stuck in the setup-openstack.yml playbook, getting a censored error for the task "openstack.osa.db_setup: Create database for service". HAProxy shows Galera as UP, and the Keystone backend as DOWN. I can verify Galera is up. I do see that in my user_secrets.yml file there is no keystone_galera_password like there is for some of the other services.16:10
plattaNot sure if that's supposed to be there, but I generated the secrets file using the script provided. Any direction on what to check next?16:10
noonedeadpunkyou can run a playbook with `-e _oslodb_setup_nolog=False` to uncensor the output16:12
noonedeadpunkbut yes, these are supposed to be there16:12
noonedeadpunkor well, it's a keystone_container_mysql_password16:13
noonedeadpunkplatta: but run with -e _oslodb_setup_nolog=False first to see what the actual issue is16:14
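
Assuming the failure is in the keystone run, the full command would look roughly like this:

# Hedged example: re-run the keystone playbook with db_setup output uncensored.
cd /opt/openstack-ansible/playbooks
openstack-ansible os-keystone-install.yml -e _oslodb_setup_nolog=False
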
plattakeystone_container_mysql_password does exist in my secrets file. I'll re-run the playbook with that setting to see what's happening.16:17
noonedeadpunkso the script would populate passwords for everything that is present as an empty entry in user_secrets.yml16:18
noonedeadpunkand then you're expected to use https://opendev.org/openstack/openstack-ansible/src/branch/master/etc/openstack_deploy/user_secrets.yml as a base for generation16:18
noonedeadpunkit contains all required passwords for all services16:18
noonedeadpunkbut you can also populate/generate them manually as well16:18
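
The usual flow is roughly the following; the paths and the pw-token-gen.py script are as found in the openstack-ansible repository, but double-check against the deployment guide:

# Sketch: start from the full example secrets file, then fill in the empty keys.
cp /opt/openstack-ansible/etc/openstack_deploy/user_secrets.yml /etc/openstack_deploy/
cd /opt/openstack-ansible
./scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
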
plattafailed: [ark-keystone-container-76e19f64 -> ark-utility-container-e8c00157(172.29.239.82)] (item={'name': 'keystone', 'users': [{'username': 'keystone', 'password': 'xxx'}]}) => {"ansible_loop_var": "item", "changed": false, "item": {"name": "keystone", "users": [{"password": "xxx", "username": "keystone"}]}, "msg": "unable to connect to database,16:30
plattacheck login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (2006, \"MySQL server has gone away (BrokenPipeError(32, 'Broken pipe'))\")"}16:30
plattaI tried `ansible galera_container -m shell   -a "mysql -h localhost -e 'SELECT user FROM mysql.user;'"` and it doesn't show any service-specific users. I don't know if that's supposed to happen for all of them earlier on.16:36
plattaJust realized I didn't include the task on that: TASK [openstack.osa.db_setup : Create database for service]16:46
plattaLooks like what's failing is here https://opendev.org/openstack/openstack-ansible-plugins/src/commit/5b8a1d9be03146ffac8e91e92a044429e9286dbd/roles/db_setup/tasks/main.yml and I think it's expecting the keystone user to already exist. I'm doing a little searching in the opendev repositories, but I'm not sure at what point in the process those users16:56
plattaare supposed to get created.16:56
plattaLast piece before I have to step away for a while. If I attach to the utility container and run `mysql -h 172.29.236.101 -e 'show databases;'` I get the same error as shown by Ansible (that IP is the internal load balancer). I either get ERROR 2026 (HY000): TLS/SSL error: unexpected eof while reading or ERROR 2026 (HY000): TLS/SSL error: Broken17:06
plattapipe (32) if I run the command multiple times. Potentially a network configuration issue?17:06
opendevreviewMerged openstack/openstack-ansible master: Use unbound_clients role from plugins collection  https://review.opendev.org/c/openstack/openstack-ansible/+/92340917:58
opendevreviewMerged openstack/openstack-ansible master: Enchance reference_group logic for inventory  https://review.opendev.org/c/openstack/openstack-ansible/+/92359618:03
opendevreviewMerged openstack/openstack-ansible master: Permit Ubuntu Noble for deploy host and targets in requirements checks  https://review.opendev.org/c/openstack/openstack-ansible/+/92431121:05
