*** ysandeep|out is now known as ysandeep|rover | 04:52 | |
*** ysandeep|rover is now known as ysandeep|rover|brb | 05:51 | |
*** ysandeep|rover|brb is now known as ysandeep|rover | 05:56 | |
noonedeadpunk | jrosser: we're using it only in CI iirc? sshd role I mean?:) | 06:16 |
noonedeadpunk | Well, looking at https://opendev.org/openstack/openstack-ansible/src/branch/master/tests/bootstrap-aio.yml#L21 - I think we should be just fine with package: sshd state: installed instead.... | 06:18 |
noonedeadpunk | we're totally overcomplicating there.... | 06:19 |
jrosser | yeah, I spent an hour+ trying to debug the template and also figured it was waaaay too complex for what we actually need | 06:35 |
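A minimal sketch of what the simpler approach boils down to, using stock Ansible modules instead of the templated sshd role; the host alias aio1 and the openssh-server package name are assumptions (the chat's `package: sshd state: installed` shorthand maps to the same package module, and the actual package name varies by distro):

    # Install and start sshd with the stock package/service modules
    # (aio1 is a hypothetical inventory host)
    ansible aio1 -m package -a "name=openssh-server state=present"
    ansible aio1 -m service -a "name=sshd state=started enabled=yes"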
jrosser | i am away pretty much all day today and tomorrow now; the centos-9 and remaining glusterfs patches need pushing toward merging | 08:31 |
noonedeadpunk | gotcha | 08:39 |
*** ysandeep|rover is now known as ysandeep|rover|lunch | 09:31 | |
mouaa | Hi guys. Still questions about upgrading infrastructure nodes from Ubuntu 18.04 to 20.04 on ussuri. The documentation below seems intended for installations using the "source" method of installation and not "distro" like ours. Since the galera nodes are separated from the controller nodes in our deployment, is there no precise update order between these different nodes? Or is there documentation better suited to our case? | 09:54 |
mouaa | Cf: https://docs.openstack.org/openstack-ansible/victoria/admin/upgrades/distribution-upgrades.html | 09:54 |
noonedeadpunk | I think that for distro it's way easier to upgrade the OS | 10:07 |
noonedeadpunk | As for your case, I don't think the order matters | 10:08 |
*** ysandeep|rover|lunch is now known as ysandeep|rover | 10:09 | |
mouaa | Thanks for your answer noonedeadpunk. On a first attempt, I updated the network nodes before the computes, and ended up in trouble! So I prefer to ask before... | 10:11 |
noonedeadpunk | Well, the thing is that it's just theory - I don't know many deployments running the distro path. But it indeed should be easier, as basically you just need a UCA repo that contains the same openstack version packages but for the different OS | 10:13 |
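In practice on Ubuntu this looks roughly like the following (a sketch, not a tested procedure: on 18.04 Ussuri comes from the Ubuntu Cloud Archive, while on 20.04 Ussuri is in the main archive, so the UCA entry can be dropped after the release upgrade):

    # On bionic (18.04), pull Ussuri from UCA
    sudo add-apt-repository cloud-archive:ussuri
    sudo apt-get update && sudo apt-get dist-upgrade
    # After do-release-upgrade to focal (20.04), Ussuri ships in the
    # main archive and the cloud-archive entry is no longer needed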
*** dviroel|out is now known as dviroel | 11:26 | |
*** ysandeep|rover is now known as ysandeep|rover|afk | 11:33 | |
*** ysandeep|rover|afk is now known as ysandeep|rover | 12:26 | |
*** dviroel is now known as dviroel|lunch | 15:26 | |
*** ysandeep|rover is now known as ysandeep|out | 15:29 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-repo_server master: Have a symlink to u_c versioned file https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/842567 | 16:22 |
*** dviroel|lunch is now known as dviroel | 16:25 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Update repo verification file URI https://review.opendev.org/c/openstack/openstack-ansible/+/842571 | 16:32 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-repo_server master: Remove all code for lsync, rsync and ssh https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/837588 | 16:33 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-repo_server master: Clean up legacy lsycnd, rsync and ssh key config https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/837859 | 16:41 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-repo_server master: Use the same vars file for all versions of centos https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/841618 | 16:41 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-repo_server master: Use distro packages for nginx on centos. https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/841619 | 16:42 |
noonedeadpunk | It's a win that we still don't have shared queues implemented, wrt https://lists.openstack.org/pipermail/openstack-discuss/2022-May/028603.html :D | 17:20 |
mgariepy | :) | 17:21 |
mgariepy | once again, doing nothing was the best course of action! :D haha | 17:21 |
noonedeadpunk | so true))) | 17:22 |
noonedeadpunk | wdyt actually about in-person PTG in mid-October? | 17:23 |
mgariepy | where would it be ? | 17:24 |
noonedeadpunk | Columbus, Ohio | 17:24 |
noonedeadpunk | https://lists.openstack.org/pipermail/openstack-discuss/2022-May/028601.html | 17:25 |
noonedeadpunk | My guess is that it would be together with the Summit... But I won't put money on that | 17:25 |
mgariepy | it would be nice | 17:33 |
mgariepy | but i guess it will depend on a lot of factors on my end :) | 17:33 |
admin1 | one galera node went out of sync .. i used the lxc containers destroy and create playbooks to recreate that galera container | 18:11 |
admin1 | then i ran the galera playbook and all 3 were in the cluster .. but with no databases | 18:11 |
admin1 | luckily i had done a mysqldump --all-databases before .. so i did mysql < sql file | 18:11 |
admin1 | and the databases are back and all are in sync | 18:11 |
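The backup-and-restore step admin1 describes, roughly (a sketch run inside a galera container with passwordless local root access; the filename is illustrative):

    # Taken beforehand: dump every database to a single file
    mysqldump --all-databases > full-backup.sql
    # After the cluster came back empty: replay the dump
    mysql < full-backup.sql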
admin1 | but now when i do openstack endpoint list, i get an error saying keystoneauth1.exceptions.http.NotFound: Could not find project: fcfbdcff910645e2917ffae77bdd3f2a .. and when i run playbooks, they stop at the same error | 18:12 |
admin1 | but grepping for fcf. in the original database backup does not find it | 18:12 |
admin1 | so what project is this actually? | 18:13 |
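One way to check whether that project id exists at all in keystone's database (a sketch; run against the keystone DB, using table and column names from the standard keystone schema):

    # Look the id up in keystone's project table
    mysql keystone -e "SELECT id, name, domain_id FROM project
        WHERE id = 'fcfbdcff910645e2917ffae77bdd3f2a';"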
mgariepy | which galera server did you have to reinstall ? | 18:20 |
mgariepy | was it the first one ? in haproxy, are all the galera servers up ? | 18:21 |
admin1 | c1 c2 c3 = 3 servers .. c1 galera went out of sync .. so i deleted it .. c2 and c3 were in the cluster .. when c1 was added, c2 and c3 went blank .. no dbs .. but i had a backup, so i restored it and all are in sync with the database | 18:26 |
admin1 | all is good in galera .. i can query the databases ( mysql ) and see the data | 18:26 |
admin1 | i tried to run the nova playbook just to test and it stopped at that error .. i have not run any other playbooks | 18:27 |
admin1 | grep fcfbdcff910645e2917ffae77bdd3f2a full-backup.sql does not produce anything .. which is very strange to me | 18:30 |
admin1 | mgariepy is it safe to run os-keystone playbook | 18:31 |
admin1 | at this point | 18:31 |
mgariepy | hmm, did the backup recreate the users in the DB ? | 18:40 |
admin1 | i am able to log in as nova/neutron etc .. but not as root locally | 18:47 |
mgariepy | did you restart your services after the rebuild of the DB ? | 18:47 |
admin1 | the services, no | 18:49 |
admin1 | but i did restart the galera itself | 18:49 |
mgariepy | try restarting keystone and see if you can list the projects ? | 18:50 |
admin1 | only keystone service list works .. the rest does not work | 18:53 |
mgariepy | does openstack endpoint list work ? | 18:54 |
mgariepy | restart the remaining services ? | 18:59 |
admin1 | keystone functions work | 19:00 |
admin1 | the rest of the services do not work .. | 19:00 |
admin1 | what happens now is .. since the whole c3 db was restored and it got into galera .. locally the server cannot log into mysql with a blank pass | 19:01 |
admin1 | so running other playbooks stops because it's unable to log into mysql as admin | 19:01 |
admin1 | i can try to put the other galera nodes in MAINT, run only the c3 one, and then re-run the playbooks and see if it works | 19:02 |
admin1 | i.e. set the backends to MAINT in haproxy | 19:03 |
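Putting individual galera backends into MAINT through the haproxy admin socket would look roughly like this (the socket path and the backend/server names are assumptions; adjust to your haproxy config):

    # Drain c1 and c2 so only c3 serves traffic
    echo "disable server galera-back/c1_galera_container" | socat stdio /var/run/haproxy.stat
    echo "disable server galera-back/c2_galera_container" | socat stdio /var/run/haproxy.stat
    # Re-enable later with "enable server galera-back/<name>"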
admin1 | fails on os_placement : Grant access to the database for the service | 19:10 |
admin1 | at this point, i think it's best for me to stop the 2 databases, continue with only 1, and later try to make it 3 again | 19:11 |
noonedeadpunk | I guess, if you restored from backup, you need to force a re-bootstrap of the cluster to ensure the state and pick a valid node as the bootstrap node | 19:30 |
noonedeadpunk | There are even a bunch of variables for that in the role | 19:31 |
mgariepy | it's kinda weird that it wiped the data on the other nodes. | 19:31 |
noonedeadpunk | But I guess what you did was ignore the cluster state when re-running the role? | 19:31 |
noonedeadpunk | If the role decided that the clean one was the bootstrap node - then it's kind of a fair outcome | 19:32 |
noonedeadpunk | (provided that ignore cluster state was set) | 19:32 |
noonedeadpunk | as without the ignore, the role would fail on verification of the cluster id | 19:33 |
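A sketch of the forced re-bootstrap noonedeadpunk refers to; the variable names here are recalled from the galera_server role and should be checked against its defaults before use:

    # Re-run the galera playbook, forcing a bootstrap and skipping
    # the cluster-state verification
    openstack-ansible galera-install.yml \
        -e galera_ignore_cluster_state=true \
        -e galera_force_bootstrap=true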
admin1 | i did not ignore cluster state | 19:40 |
noonedeadpunk | huh | 19:52 |
noonedeadpunk | it's really weird then why in the world that happened | 19:53 |
mgariepy | if you could reproduce it, that would be quite useful. | 19:54 |
*** dviroel is now known as dviroel|out | 20:45 | |
NeilHanlon | jrosser: a new package called lxc-templates-extra should be available in my copr repo that provides the requested packages from lxc/lxc-templates on github | 21:08 |
NeilHanlon | https://paste.opendev.org/show/bbUBsgdFkqSRjneVpial/ | 21:08 |
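Enabling a copr repo and pulling the new package would look roughly like this (the owner/repo path is a placeholder - the exact repo is in the paste above; the copr subcommand needs dnf-plugins-core):

    # <owner>/<repo> is a placeholder; see the paste for the real path
    dnf copr enable <owner>/<repo>
    dnf install lxc-templates-extra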
opendevreview | Neil Hanlon proposed openstack/openstack-ansible-lxc_hosts master: Add centos-9 support https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/842236 | 21:23 |