*** openstackgerrit has quit IRC | 00:36 | |
*** gyee has quit IRC | 00:37 | |
*** maharg101 has joined #openstack-ansible | 00:45 | |
*** nurdie has joined #openstack-ansible | 00:46 | |
*** maharg101 has quit IRC | 00:50 | |
*** nurdie has quit IRC | 00:50 | |
*** spatel has joined #openstack-ansible | 01:24 | |
*** rfolco has joined #openstack-ansible | 02:35 | |
*** rfolco has quit IRC | 02:40 | |
*** maharg101 has joined #openstack-ansible | 02:46 | |
*** nurdie has joined #openstack-ansible | 02:47 | |
*** maharg101 has quit IRC | 02:51 | |
*** nurdie has quit IRC | 02:52 | |
*** maharg101 has joined #openstack-ansible | 02:59 | |
*** rfolco has joined #openstack-ansible | 03:05 | |
*** rfolco has quit IRC | 03:08 | |
*** rfolco has joined #openstack-ansible | 03:08 | |
*** rfolco has joined #openstack-ansible | 03:10 | |
*** rfolco has quit IRC | 03:13 | |
*** rfolco has joined #openstack-ansible | 03:13 | |
*** rfolco has quit IRC | 03:17 | |
*** rfolco has joined #openstack-ansible | 03:26 | |
*** rfolco has quit IRC | 03:29 | |
*** rfolco has joined #openstack-ansible | 03:29 | |
*** rfolco has quit IRC | 03:31 | |
*** rfolco has joined #openstack-ansible | 03:31 | |
*** rfolco has quit IRC | 03:33 | |
*** rfolco has joined #openstack-ansible | 03:34 | |
*** rfolco has quit IRC | 03:38 | |
*** evrardjp has quit IRC | 04:33 | |
*** evrardjp has joined #openstack-ansible | 04:33 | |
*** spatel has quit IRC | 04:41 | |
*** nurdie has joined #openstack-ansible | 04:48 | |
*** nurdie has quit IRC | 04:53 | |
*** rfolco has joined #openstack-ansible | 05:59 | |
*** rfolco has quit IRC | 06:03 | |
*** SmearedBeard has joined #openstack-ansible | 06:21 | |
*** SmearedBeard has quit IRC | 06:39 | |
*** nurdie has joined #openstack-ansible | 06:49 | |
*** nurdie has quit IRC | 06:53 | |
*** SmearedBeard has joined #openstack-ansible | 08:41 | |
*** SecOpsNinja has joined #openstack-ansible | 08:48 | |
*** nurdie has joined #openstack-ansible | 08:50 | |
*** nurdie has quit IRC | 08:54 | |
*** SmearedBeard has quit IRC | 09:55 | |
*** macz_ has joined #openstack-ansible | 09:55 | |
*** macz_ has quit IRC | 10:00 | |
*** nurdie has joined #openstack-ansible | 10:51 | |
*** nurdie has quit IRC | 10:56 | |
*** tosky has joined #openstack-ansible | 12:33 | |
*** spatel has joined #openstack-ansible | 12:43 | |
*** spatel has quit IRC | 12:48 | |
*** nurdie has joined #openstack-ansible | 12:52 | |
*** vesper11 has quit IRC | 12:53 | |
*** nurdie has quit IRC | 12:56 | |
*** nurdie has joined #openstack-ansible | 13:00 | |
*** nurdie has quit IRC | 13:15 | |
*** nsmeds has joined #openstack-ansible | 13:19 | |
nsmeds | Hey everyone - quick question about the `repo` containers. I just rebuilt one of the servers and successfully (after a few issues) reran the setup-* playbooks to provision it. However, the `repo` container on this server is missing some packages. | 13:20 |
nsmeds | Trying to find a way to get the repo container to download all the necessary packages, e.g. the OSA playbooks fail looking for `os-releases/20.1.1/ubuntu-18.04-x86_64/python_kingbirdclient-0.2.1-py3-none-any.whl` and other similar packages. I can work around it by setting the new `repo` backend to `MAINT` in HAProxy, which confirms the issue is isolated to the new container. | 13:22 |
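A minimal sketch of toggling that backend through the HAProxy runtime API, assuming the admin socket is at /var/run/haproxy.stat and that the backend/server names below match your deployment (both are assumptions):

    # List backends/servers to confirm the exact names used by your deployment
    echo "show servers state" | socat stdio /var/run/haproxy.stat
    # Put the suspect repo server into maintenance mode
    echo "set server repo_all-back/<new_repo_container> state maint" | socat stdio /var/run/haproxy.stat
    # Re-enable it once the wheels are present again
    echo "set server repo_all-back/<new_repo_container> state ready" | socat stdio /var/run/haproxy.stat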
*** MickyMan77 has joined #openstack-ansible | 13:27 | |
SecOpsNinja | hi! I'm having a strange error with sshd on one of the keystone nodes: it returns SSH_MSG_USERAUTH_FAILURE, but the keystone user has the same authorized keys as node 1. Where do the failed sshd logs show up? Nothing appears in the journal or on stdout when I run it manually with sshd -De. | 13:29 |
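A rough way to surface the authentication failures, assuming a systemd host where sshd logs to the journal (the unit is ssh or sshd depending on the distro):

    # Follow auth-related messages from the running daemon
    journalctl -u ssh -u sshd -f | grep -i auth
    # Or run a throw-away daemon with full debug output on a spare port
    /usr/sbin/sshd -ddd -p 2222
    # ...then from the client:
    ssh -vvv -p 2222 keystone@<container_ip>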
*** MickyMan77 has quit IRC | 13:35 | |
jrosser | nsmeds: the contents of the first repo server in the ansible group are synchronised to the others with lsyncd/rsync, you may want to check whether that has happened | 13:54 |
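A quick check of that sync, assuming the usual OSA layout where lsyncd runs on the first repo container and the wheels live under /var/www/repo (both paths are assumptions):

    # On the first repo container in the repo_all group
    systemctl status lsyncd
    tail -f /var/log/lsyncd/lsyncd.log
    # Compare the wheel trees between the first and the rebuilt repo container
    ls /var/www/repo/os-releases/20.1.1/ubuntu-18.04-x86_64/ | wc -l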
*** MickyMan77 has joined #openstack-ansible | 14:04 | |
recyclehero | Can OSA do any DB manipulation, for example when the neutron DB is out of sync? | 14:06 |
recyclehero | does offline_migrations relate to this? | 14:08 |
jrosser | that is to do with database schema updates at upgrade time | 14:10 |
*** MickyMan77 has quit IRC | 14:12 | |
recyclehero | So if, for example, the neutron agents that are actually present differ from the ones recorded in the DB, is there no way to correct the database using gathered Ansible facts? | 14:15 |
recyclehero | jrosser: I am reading now that database migrations follow expand-migrate-contract. | 14:15 |
recyclehero | So I am asking whether there is a script or playbook to rebuild the databases? | 14:16 |
recyclehero | jrosser: also, how does OSA do database population when deploying? Does it put anything in the databases specifically for neutron? | 14:17 |
jrosser | other than creating the db for neutron and the user everything is done by neutron itself | 14:18 |
jrosser | every service has an identical file like this https://github.com/openstack/openstack-ansible-os_neutron/blob/master/tasks/db_setup.yml | 14:18 |
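For the expand/contract question above, a sketch of running the relevant neutron-db-manage subcommands by hand, assuming you are inside the neutron container with its venv on PATH:

    # Show the current DB revision versus the code's head revisions
    neutron-db-manage current
    neutron-db-manage history | head
    # Online-safe schema additions, then the contract phase once services are upgraded
    neutron-db-manage upgrade --expand
    neutron-db-manage upgrade --contract
    # Reports whether offline (contract) migrations are still pending
    neutron-db-manage has_offline_migrations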
SecOpsNinja | yep, I don't understand why, after recreating the 2nd and 3rd keystone containers (including destroying their data), only the 3rd one can be accessed from the 1st keystone. The 2nd container always gets ssh client errors with a type 51 response :( | 14:25 |
SecOpsNinja | if I do ssh -vvv -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .ssh/id_rsa <keystone_container> I can connect to the 3rd container but not the 2nd :( and from what I'm seeing the key in authorized_keys is correct... | 14:28 |
SecOpsNinja | lol I found the problem... I forgot to reserve some used_ips and the container is getting the same IP as a physical host :P | 14:36 |
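A hedged way to spot that kind of address conflict from any host on the management network (the bridge name br-mgmt is an assumption):

    # Duplicate Address Detection: another MAC answering for the IP means a conflict
    arping -D -c 3 -I br-mgmt <container_ip>
    # Or just see which MAC the neighbour table currently has for it
    ip neigh show | grep <container_ip>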
*** ianychoi_ has joined #openstack-ansible | 14:38 | |
*** ianychoi has quit IRC | 14:40 | |
SecOpsNinja | is there a way to force the recreation of a container with a different IP? | 14:44 |
*** klamath_atx has quit IRC | 14:48 | |
*** nurdie has joined #openstack-ansible | 14:52 | |
*** tosky has quit IRC | 14:53 | |
*** MickyMan77 has joined #openstack-ansible | 15:25 | |
recyclehero | SecOpsNinja: maybe if you destroy the container, add the IP to used_ips, and redeploy, your issue gets resolved. | 15:29 |
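A sketch of that flow on the deploy host, assuming an LXC-based deployment, the stock OSA playbook names, and placeholder container names; the used_ips entry goes into openstack_user_config.yml first:

    cd /opt/openstack-ansible
    # 1. Reserve the conflicting range in /etc/openstack_deploy/openstack_user_config.yml:
    #      used_ips:
    #        - "172.29.236.100,172.29.236.110"   # example range, adjust to your network
    # 2. Remove the stale inventory record so the container gets a fresh IP
    #    (see --help for the exact flag in your release)
    ./scripts/inventory-manage.py -r <container_name>
    # 3. Destroy and recreate the container, then rerun its service playbook
    cd playbooks
    openstack-ansible lxc-containers-destroy.yml --limit <container_name>
    openstack-ansible lxc-containers-create.yml --limit <container_name>
    openstack-ansible os-keystone-install.yml --limit <container_name>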
*** MickyMan77 has quit IRC | 15:33 | |
*** MickyMan77 has joined #openstack-ansible | 16:01 | |
*** MickyMan77 has quit IRC | 16:10 | |
*** maharg101 has quit IRC | 16:18 | |
*** rfolco has joined #openstack-ansible | 16:24 | |
*** rfolco has quit IRC | 16:29 | |
SecOpsNinja | recyclehero, yep I was thinking of that, but I used the inventory-manage.py script with clear-ips to refresh all the containers, because the docs always say to avoid editing openstack_inventory.json | 16:35 |
recyclehero | SecOpsNinja: It says that with clear-ips the load balancers are not modified; what are you going to do about that? | 16:38 |
*** MickyMan77 has joined #openstack-ansible | 16:39 | |
nsmeds | ok thanks @jrosser, I'll look into that. | 16:41 |
*** spatel has joined #openstack-ansible | 16:43 | |
SecOpsNinja | recyclehero, where did you see that warning? atm I think my /etc/haproxy/haproxy.cfg is updated with the new container IPs | 16:44 |
recyclehero | SecOpsNinja: in the script itself | 16:44 |
SecOpsNinja | recyclehero, I didn't check the script itself, but https://docs.openstack.org/openstack-ansible/latest/reference/inventory/manage-inventory.html | 16:46 |
SecOpsNinja | probably because I'm using the HAProxy deployed by OpenStack-Ansible, it ends up with the correct values | 16:47 |
*** spatel has quit IRC | 16:47 | |
*** MickyMan77 has quit IRC | 16:47 | |
SecOpsNinja | ofc if I had an external LB that would be a problem | 16:48 |
*** maharg101 has joined #openstack-ansible | 17:00 | |
*** tosky has joined #openstack-ansible | 17:06 | |
*** maharg101 has quit IRC | 17:08 | |
*** MickyMan77 has joined #openstack-ansible | 17:18 | |
*** MickyMan77 has quit IRC | 17:26 | |
*** rfolco has joined #openstack-ansible | 17:44 | |
*** tosky has quit IRC | 18:01 | |
*** MickyMan77 has joined #openstack-ansible | 18:01 | |
*** MickyMan77 has quit IRC | 18:10 | |
recyclehero | I cannot set up CORS for uploading an image to glance | 18:37 |
recyclehero | HAProxy doesn't respond to the OPTIONS CORS preflight request | 18:37 |
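A quick way to reproduce the preflight from the command line and see whether HAProxy (or glance behind it) answers it; the VIP, origin, and request headers are placeholders:

    curl -ik -X OPTIONS https://<external_vip>:9292/v2/images \
      -H 'Origin: https://horizon.example.com' \
      -H 'Access-Control-Request-Method: POST' \
      -H 'Access-Control-Request-Headers: content-type,x-auth-token'
    # A working setup returns 200/204 with Access-Control-Allow-* headers;
    # CORS itself is configured in glance-api.conf under [cors] (allowed_origin = ...).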
*** MickyMan77 has joined #openstack-ansible | 18:38 | |
*** MickyMan77 has quit IRC | 18:47 | |
*** maharg101 has joined #openstack-ansible | 19:04 | |
*** maharg101 has quit IRC | 19:09 | |
*** admin0 has quit IRC | 19:39 | |
ebbex | INSERT INTO `clusters` VALUES ('2020-10-10 19:48:16','2020-10-10 19:48:16',NULL,0,2,'ceph@RBD','cinder-volume',0,NULL,0,'disabled',NULL,0); | 19:52 |
ebbex | that row is sometimes missing on my deployments. It was created by having just one cinder-volume.service start with a cinder.conf containing [RBD]\nbackend_host: rbd:volumes | 19:54 |
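A simple way to check whether that row made it into the database, assuming the default cinder DB name and that you run it from the galera or utility container with working credentials:

    # Shows the cluster rows cinder-volume registered, if any
    mysql cinder -e 'SELECT * FROM clusters\G'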
*** MickyMan77 has joined #openstack-ansible | 19:55 | |
*** SecOpsNinja has left #openstack-ansible | 19:56 | |
*** MickyMan77 has quit IRC | 20:03 | |
*** MickyMan77 has joined #openstack-ansible | 20:39 | |
*** MickyMan77 has quit IRC | 20:47 | |
*** MickyMan77 has joined #openstack-ansible | 21:17 | |
*** MickyMan77 has quit IRC | 21:25 | |
*** rfolco has quit IRC | 21:38 | |
recyclehero | apparmor is preventing an NFS volume from being mounted on the compute node | 21:41 |
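A sketch for confirming which profile raises the denial on the compute node and loosening it temporarily (the libvirtd profile path is an assumption; check the DENIED line for the real one):

    # Look for the denial and the profile that raised it
    journalctl -k | grep -i 'apparmor.*DENIED'
    aa-status | grep -i libvirt
    # Temporarily switch the offending profile to complain mode to confirm the theory
    aa-complain /etc/apparmor.d/usr.sbin.libvirtd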
*** MickyMan77 has joined #openstack-ansible | 21:57 | |
*** MickyMan77 has quit IRC | 22:05 | |
*** maharg101 has joined #openstack-ansible | 22:24 | |
*** maharg101 has quit IRC | 22:28 | |
*** tosky has joined #openstack-ansible | 22:51 | |
*** admin0 has joined #openstack-ansible | 22:58 | |
*** tosky has quit IRC | 23:10 |