*** ysandeep|out is now known as ysandeep | 05:28 | |
*** ysandeep is now known as ysandeep|brb | 06:03 | |
*** ysandeep|brb is now known as ysandeep | 06:36 | |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible stable/yoga: zuul: fix definition of centos 9 stream job https://review.opendev.org/c/openstack/openstack-ansible/+/851067 | 07:28 |
mrf | Hi! We're testing openstack-ansible for yoga stable and we found that the hosts file got an incorrect IP for the keystone container. Isn't that an autogenerated file? How can we solve this? | 08:05 |
noonedeadpunk | mrf: /etc/hosts is generated with the openstack_hosts role. Or well, one specific block that is delimited with a block header | 08:09 |
jrosser | mrf: if you can give us an example of what has happened in the hosts file at paste.opendev.org and also what you think should have gone there from openstack_user_config / inventory that would be helpful | 08:10 |
noonedeadpunk | it's taken from inventory (/etc/openstack_deploy/openstack_inventory.json) so I'm not really sure how that is possible unless you were manually removing openstack_inventory.json and openstack_ips | 08:11 |
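For context, the role maintains one marker-delimited block in /etc/hosts with one entry per container taken from the inventory. The sketch below is illustrative only; the exact marker text, hostnames, and addresses are assumptions, not copied from a real deployment:

```text
# BEGIN ANSIBLE MANAGED BLOCK
172.29.239.186 infra1_utility_container-a02704c2 infra1-utility-container-a02704c2
172.29.236.150 infra1_keystone_container-288623ac infra1-keystone-container-288623ac
# END ANSIBLE MANAGED BLOCK
```

Anything outside that block is left alone, so local additions survive re-runs of the role.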
mrf | any ansible cli for cleanup all and redeploy? | 08:12 |
mrf | lxc-containers-destroy? | 08:12 |
mrf | failed: [infra1_keystone_container-288623ac -> infra1_utility_container-a02704c2(172.29.239.186)] failed because the container got the IP .163, not the .186 | 08:13 |
jrosser | which one | 08:14 |
jrosser | there are two things in scope there, keystone and utility containers | 08:14 |
jrosser | that looks like a play targeting the keystone container with a task that is delegated to the utility container | 08:14 |
jrosser | also i don't think there have been any changes to the inventory in Yoga (or even several releases before that) so it would be a bit of a surprise if something is wrong only in Yoga | 08:16 |
mrf | https://pastebin.com/7BvgaKgy | 08:17 |
mrf | it's our first time using openstack-ansible, maybe we made a mistake for sure | 08:17 |
jrosser | ok, so what do you think is wrong? | 08:17 |
mrf | don't know, as you said keystone is trying to do something at the utility container | 08:21 |
jrosser | are you familiar with delegation in ansible? | 08:22 |
mrf | running again the playbook with -vvvv | 08:22 |
jrosser | mrf: it's also worth knowing that you can run all of the playbooks individually | 08:25 |
jrosser | see that setup-openstack.yml is just calling a bunch of sub-playbooks in turn https://github.com/openstack/openstack-ansible/blob/master/playbooks/setup-openstack.yml | 08:26 |
jrosser | so if something is going wrong with the deployment of keystone you can run `openstack-ansible playbooks/os-keystone-install.yml` to only re-do the keystone part | 08:27 |
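The per-service re-run described above could look like this on the deploy host (paths as in a default install; the `--limit` host name is hypothetical):

```shell
cd /opt/openstack-ansible/playbooks
# re-run only the keystone part of setup-openstack.yml
openstack-ansible os-keystone-install.yml
# optionally restrict it to a single container
openstack-ansible os-keystone-install.yml --limit infra1_keystone_container-288623ac
```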
mrf | that save a lot of time | 08:28 |
jrosser | but looking at the log you have posted there is something wrong with connecting to the database, it doesn't look like you have a wrong IP | 08:28 |
jrosser | what is happening here is that all of the services keystone, neutron, nova.... need users created in the db, permissions set and other stuff | 08:29 |
jrosser | to do that a mysql client and the relevant python libraries are needed | 08:29 |
jrosser | we use the utility container as a place to have all of those things so they're not needed in every service container | 08:30 |
mrf | mmm i read the official guide at openstack.com and didn't see any requirements on the host for mysql-client | 08:30 |
mrf | ah ok | 08:30 |
jrosser | well we are using ansible mysql modules to interact with the database to do the setup | 08:31 |
jrosser | that has its own set of requirements to work | 08:31 |
jrosser | that's the reason you see a slightly unexpected IP here | 08:32 |
jrosser | every time that database setup tasks are done they are "delegated" from the service container to the utility container | 08:32 |
jrosser | and you can see that in the output `infra1_keystone_container-288623ac -> infra1_utility_container-a02704c2(172.29.239.186)` | 08:32 |
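A minimal sketch of that delegation pattern (illustrative only, not the actual OSA task; the variable and group names are assumptions):

```yaml
# Runs in a play targeting the keystone container, but the module
# executes on the utility container, where the mysql client and
# python libraries are installed.
- name: Create keystone database user
  community.mysql.mysql_user:
    login_host: "{{ internal_lb_vip_address }}"  # reaches galera via haproxy
    name: keystone
    password: "{{ keystone_container_mysql_password }}"
    priv: "keystone.*:ALL"
  delegate_to: "{{ groups['utility_all'][0] }}"
  no_log: true  # keep the db credentials out of the output
```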
mrf | i can see somewhere the log of the failure inside the utility container? | 08:33 |
jrosser | so the work of setting up the things in the database that keystone needs is actually done on the utility container | 08:33 |
jrosser | and the reason that you don't see much is `the output has been hidden due to the fact that 'no_log: true' was specified for this result` | 08:33 |
jrosser | otherwise you would be getting the database admin credentials all over the log output | 08:33 |
jrosser | perhaps the first thing to do here is look in the galera containers and check that the database is running ok, by looking in the journal | 08:36 |
mrf | checking ... | 08:37 |
mrf | looks like galera is running: WSREP: Synchronized with group, ready for connections | 08:39 |
mrf | maybe haproxy is failing | 08:40 |
mrf | what container do haproxy things? | 08:41 |
jrosser | haproxy is on the infra hosts by default, not in a container | 08:41 |
mrf | ok | 08:41 |
jrosser | you can either look in the journal or `hatop -s /var/run/haproxy.stat` | 08:43 |
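Putting those checks together (a sketch; the journal unit name and the stats socket path can differ per distro and haproxy version):

```shell
# inside a galera container: recent database log lines
journalctl -u mariadb --no-pager -n 50
# on an infra host: interactive view of haproxy frontends/backends
hatop -s /var/run/haproxy.stat
# or a one-shot dump from the haproxy stats socket
echo "show stat" | socat stdio /var/run/haproxy.stat
```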
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts master: Do not install COPR repo for CentOS LXC https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/843672 | 08:43 |
mrf | my mistake, haproxy is wrong | 08:43 |
mrf | the ip of haproxy is not tagged at interfaces | 08:44 |
jrosser | aaaahhh | 08:44 |
mrf | i think thats why it didnt work | 08:44 |
jrosser | to have debugged this from the "other end" you could also have tried the `mysql` cli in the utility container | 08:44 |
jrosser | that should connect to the db via haproxy | 08:45 |
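That check from the utility container might look like the following; the VIP is the one from this conversation, and it assumes client credentials are already seeded (e.g. in /root/.my.cnf):

```shell
# from inside the utility container: reach galera through the haproxy VIP
mysql -h 172.29.236.9 -e "SELECT 1;"
```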
mrf | can i put haproxy in other host dedicated to haproxy? | 08:46 |
noonedeadpunk | you absolutely can do that | 08:48 |
mrf | any command for cleanup all ? | 08:50 |
mrf | inventory hostfile etc? | 08:50 |
noonedeadpunk | well no, not really. | 08:52 |
noonedeadpunk | for cleaning up inventory you would need to drop all containers first | 08:52 |
noonedeadpunk | and yes, lxc-containers-destroy will do that | 08:53 |
mrf | love that playbook :P | 08:53 |
noonedeadpunk | after that you can drop openstack_inventory.json | 08:53 |
noonedeadpunk | but we don't have anything to revert changes that have been made on bare metal hosts | 08:54 |
mrf | then i need to remove manually haproxy | 08:54 |
mrf | we're using vms for our testing | 08:54 |
noonedeadpunk | (you can also limit lxc-containers-destroy to drop specific container or group of containers only) | 08:54 |
noonedeadpunk | you can use ad-hoc for that as example, like `ansible -m package -a "name=haproxy state=absent" haproxy_all` | 08:55 |
noonedeadpunk | you would need to cd /opt/openstack-ansible first though | 08:55 |
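The cleanup sequence described above, collected into one sketch (file paths as in a default deployment):

```shell
cd /opt/openstack-ansible/playbooks
# destroy the containers first (add --limit for a subset)
openstack-ansible lxc-containers-destroy.yml
# then drop the generated inventory state
rm -f /etc/openstack_deploy/openstack_inventory.json
# bare metal changes are not reverted automatically; remove haproxy ad-hoc
cd /opt/openstack-ansible
ansible -m package -a "name=haproxy state=absent" haproxy_all
```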
mrf | good ! thanks im learning a lot :) | 08:56 |
mrf | should i remove too openstack_hostnames_ips.yml ? | 09:00 |
mrf | or just the json ? | 09:00 |
jrosser | i think best to remove both | 09:02 |
noonedeadpunk | btw I looked recently at openstack_hostnames_ips and started wondering why we have this at all? | 09:20 |
mrf | :) | 09:23 |
noonedeadpunk | what's the point of it? | 09:23 |
noonedeadpunk | Like we don't read it when generating inventory, we don't store there secondary ips... | 09:24 |
noonedeadpunk | This is by far the only place we use it https://opendev.org/openstack/openstack-ansible/src/branch/master/osa_toolkit/filesystem.py#L190-L213 | 09:25 |
mrf | looks like duplicated file information because you already got the json version | 09:26 |
noonedeadpunk | I would imagine we could use it to ease our lives with parsing openstack_inventory.json, but without having anything except container_address it's kind of useless | 09:27 |
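To illustrate why the file is redundant: everything openstack_hostnames_ips.yml carries can be derived from openstack_inventory.json. A self-contained sketch with a made-up miniature inventory (the structure is assumed from the filesystem.py link above; hostnames and addresses are hypothetical):

```python
import json

# Hypothetical miniature of /etc/openstack_deploy/openstack_inventory.json,
# just enough to show where container_address lives.
inventory_json = json.dumps({
    "_meta": {
        "hostvars": {
            "infra1_utility_container-a02704c2": {
                "container_address": "172.29.239.186",
            },
            "infra1_keystone_container-288623ac": {
                "container_address": "172.29.236.150",
            },
        }
    }
})


def hostnames_to_ips(raw):
    """Mimic what gets written to openstack_hostnames_ips.yml:
    a flat hostname -> container_address mapping."""
    hostvars = json.loads(raw)["_meta"]["hostvars"]
    return {name: hv.get("container_address") for name, hv in hostvars.items()}


mapping = hostnames_to_ips(inventory_json)
print(mapping["infra1_utility_container-a02704c2"])  # 172.29.239.186
```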
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts master: Prevent lxc.service from being restarted on package update https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/851071 | 09:40 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Deprecate openstack_hostnames_ips https://review.opendev.org/c/openstack/openstack-ansible/+/851363 | 09:48 |
noonedeadpunk | I _think_ we used it when we were generating the /etc/hosts file with a bash script on the localhost instead of the inventory | 09:48 |
noonedeadpunk | but no, we didn't | 09:50 |
mrf | time to push a commit to deprecate the file creation :P | 09:56 |
noonedeadpunk | I already did :P | 09:58 |
mrf | one more question, will the internal_lb_vip_address: 172.29.236.9 be set up manually on the two haproxy hosts as a secondary ip? | 10:00 |
mrf | or will ansible do it? | 10:00 |
noonedeadpunk | mrf: it will be done by keepalived | 10:01 |
noonedeadpunk | but you might want to set it to FQDN in production | 10:02 |
noonedeadpunk | But then haproxy_keepalived_internal_vip_cidr should be defined to VIP/32 as keepalived does not work with fqdn | 10:02 |
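Those two settings together might look like this in user_variables.yml (the hostname and addresses are placeholders):

```yaml
# /etc/openstack_deploy/user_variables.yml
internal_lb_vip_address: internal.cloud.example.com  # FQDN for production
# keepalived does not work with an FQDN, so pin the actual VIP explicitly
haproxy_keepalived_internal_vip_cidr: "172.29.236.9/32"
```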
mrf | ok lets try again the deployment :P | 10:03 |
*** ysandeep is now known as ysandeep|lunch | 10:21 | |
*** ysandeep|lunch is now known as ysandeep | 10:58 | |
*** dviroel|afk is now known as dviroel | 11:31 | |
opendevreview | wangjiaqi proposed openstack/ansible-role-uwsgi master: Use TOX_CONSTRAINTS_FILE https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/851326 | 11:47 |
opendevreview | wangjiaqi proposed openstack/ansible-role-systemd_service master: Use TOX_CONSTRAINTS_FILE https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/851327 | 11:51 |
*** ysandeep is now known as ysandeep|afk | 12:53 | |
jamesdenton | noonedeadpunk somewhere around... queens... we stopped modifying the install branches for these add-ons: https://github.com/openstack/openstack-ansible-os_neutron/blob/master/defaults/main.yml#L54-L77. This has created some complications with regard to dependencies, and i've had to set overrides locally. Also not sure how 'backwards compatible' their master branch is with some of our older releases. I'm not sure if | 13:14 |
jamesdenton | these have just been missed or if it was intentional. Just FYI | 13:14 |
jamesdenton | not queens, sorry. It was Stein -> Train | 13:14 |
*** ysandeep|afk is now known as ysandeep | 13:15 | |
jrosser | jamesdenton: there are some overrides here https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/defaults/repo_packages/openstack_services.yml#L197 | 13:18 |
cloudnull | jamesdenton in da house! | 13:18 |
jrosser | jamesdenton: perhaps around that time the version was managed from the openstack-ansible repo rather than inside os_neutron? | 13:19 |
jamesdenton | interesting. i just experienced this w/ Xena -> Yoga upgrade in the last 2 weeks. but lemme look further. thank you | 13:19 |
jamesdenton | cloudnull hey buddy! glad to see you hanging around | 13:20 |
* cloudnull loitering | 13:20 | |
jamesdenton | that's how it starts | 13:20 |
jamesdenton | i think we have some shingles that need replacing, and some painting that's been neglected | 13:21 |
cloudnull | works in dev :D | 13:21 |
jamesdenton | what a joker | 13:23 |
cloudnull | I rekicked my dev cloud, using Debian and mostly is_metal, only running galera,haproxy,memcached,rabbit,repo,rsyslog in containers. Everything works! idle RAM consumption 16GiB, CPU 1-5%. | 13:23 |
jamesdenton | that looks a lot like mine, actually. everything ran in containers at one time but it's been a slow migration out | 13:24 |
cloudnull | a stark contrast to what I had been doing. | 13:24 |
jamesdenton | "the dark ages" | 13:24 |
cloudnull | hahaha | 13:24 |
jamesdenton | jrosser looks like this is/was a case of a missing networking_baremetal_git_* overrides in repo_packages/openstack_services.yml. i will work on a PR later | 13:26 |
jamesdenton | thanks for the pointer | 13:27 |
jamesdenton | headed to the office. prepare for IRC silence :( | 13:27 |
jrosser | jamesdenton: if this is regarding ironic i would also be interested in your opinion on https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/defaults/repo_packages/openstack_services.yml#L197 | 13:29 |
jrosser | that just looks wrong | 13:29 |
jrosser | argh | 13:29 |
jrosser | i mean this https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/defaults/repo_packages/openstack_services.yml#L197 | 13:29 |
jrosser | /o\ | 13:29 |
jrosser | 3rd time lucky https://github.com/openstack/openstack-ansible/blob/master/inventory/env.d/ironic.yml#L56-L57 | 13:30 |
jrosser | not sure we need neutron agents in ironic_compute_container? | 13:31 |
*** lowercas_ is now known as lowercase | 14:49 | |
*** ysandeep is now known as ysandeep|out | 14:50 | |
mrf | is_metal true disable the container setup?? and install like a service in the host? | 14:54 |
jrosser | you can do that if you want to | 14:55 |
mrf | the wave is going to fully container no? | 14:55 |
jrosser | wave? | 14:55 |
mrf | maybe in the future is_metal will not be supported | 14:55 |
jrosser | openstack-ansible supports either | 14:55 |
mrf | trend* | 14:55 |
jrosser | there was a particularly enthusiastic company that contributed a lot of the is_metal support | 14:56 |
jrosser | as this is proper open source rather than derived from some product it does what the contributors need | 14:56 |
mrf | yeah, my first openstack was vanilla queens, no ansible, manual install ... this project helped me a lot | 14:57 |
jrosser | an example might be other contributions we had to install openstack from .deb/.rpm | 14:57 |
jrosser | but that is kind of orthogonal to the "install from source" ethos so we really discourage using that and are working toward removing it | 14:58 |
jrosser | you can choose containers or not - i think there is broad equality but there might be some outlier cases that are not well tested without containers | 14:58 |
jrosser | horizon perhaps being the one that is first in my mind | 14:59 |
mrf | as a "provider" we need the most tested things | 14:59 |
mrf | installing queens from source master was too many headaches | 14:59 |
mrf | and with this we can test easy other project like designate etc.. | 15:00 |
mrf | just with a "click" | 15:00 |
jrosser | it's probably correct to say that most people install OSA with containers | 15:00 |
jrosser | but it is also completely OK to choose metal deployment if that suits better | 15:01 |
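For reference, a metal deployment is selected per host in openstack_user_config.yml; a hedged sketch (host name and IP are placeholders, assuming the `no_containers` per-host option):

```yaml
# /etc/openstack_deploy/openstack_user_config.yml
infra_hosts:
  infra1:
    ip: 172.29.236.11
    # deploy this host's services directly on the host
    # instead of inside LXC containers
    no_containers: true
```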
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible master: Increase ControlPersist timeout to 300 seconds https://review.opendev.org/c/openstack/openstack-ansible/+/851426 | 15:48 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Allow to add extra records to /etc/hosts https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/851428 | 15:57 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/yoga: zuul: fix definition of centos 9 stream job https://review.opendev.org/c/openstack/openstack-ansible/+/851067 | 17:11 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server stable/ussuri: Bump MariaDB version https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/851439 | 17:15 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/ussuri: Use cloudsmith repo for rabbit and erlang https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/850350 | 17:23 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server stable/ussuri: Bump MariaDB version https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/851439 | 17:23 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server stable/ussuri: Bump MariaDB version https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/851439 | 17:24 |
opendevreview | Merged openstack/openstack-ansible-plugins master: Fix gluster play_hosts https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/850540 | 18:39 |
opendevreview | Merged openstack/openstack-ansible-os_rally stable/ussuri: Control rally-openstack installed version https://review.opendev.org/c/openstack/openstack-ansible-os_rally/+/850477 | 18:55 |
opendevreview | Merged openstack/openstack-ansible stable/xena: Fix facts gathering for zun https://review.opendev.org/c/openstack/openstack-ansible/+/849630 | 19:37 |
jamesdenton | jrosser i would agree w/ your statement about the neutron agents in ironic_compute_container | 20:10 |
jamesdenton | just so happens my ironic computes are also infra nodes, so i haven't really noticed | 20:11 |
admin1 | quick question .. hi .. i am using cinder with a ceph backend .. during a volume snapshot, the disk space of the cinder-volume container is totally used up .. does it make a temporary copy? | 20:11 |
*** dviroel is now known as dviroel|afk | 20:24 | |
prometheanfire | anyone have experience (or just know that it works) using the mariadb/galera role as standalone (no other osa roles)? | 21:53 |
jamesdenton | no clue | 21:55 |
prometheanfire | trying to find a good galera cluster ansible role and there are a bunch but no one to rule them all | 21:59 |
opendevreview | James Denton proposed openstack/openstack-ansible-os_barbican master: Entrust nCipher Connect HSM Backend Example https://review.opendev.org/c/openstack/openstack-ansible-os_barbican/+/851475 | 23:24 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!