admin1 | \o | 08:17 |
---|---|---|
jrosser | hello | 08:20 |
noonedeadpunk | mornings | 08:27 |
admin1 | hi, i have this error -- https://gist.githubusercontent.com/a1git/574abff786cd71a4b636492695e5a9af/raw/06ca4ebe5d03c06d3e59c9898cc242e518369968/gistfile1.txt .. trying to get both the ceph and nfs backends to work on cinder | 08:32 |
admin1 | individually/independently they work just fine | 08:32 |
noonedeadpunk | admin1: so one tricky thing when doing both nfs and ceph is that you'd need to disable active/active mode, as NFS does not support that | 09:01 |
noonedeadpunk | you can do that by setting cinder_active_active_cluster: false | 09:02 |
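A minimal sketch of that override in user_variables.yml; the `cinder_active_active_cluster` variable comes from the discussion above, while the backend definitions are illustrative assumptions rather than admin1's actual config:

```yaml
# /etc/openstack_deploy/user_variables.yml (sketch; backend details assumed)
cinder_active_active_cluster: false   # the NFS driver cannot run active/active

cinder_backends:
  rbd:
    volume_driver: cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name: rbd
    rbd_pool: volumes
  nfs:
    volume_driver: cinder.volume.drivers.nfs.NfsDriver
    volume_backend_name: nfs
    nfs_shares_config: /etc/cinder/nfs_shares
```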
noonedeadpunk | but frankly speaking, when we had both nfs and ceph, we just set up another set of cinder-volume containers, to make use of active/active for ceph | 09:03 |
noonedeadpunk | So I'd suggest to keep them separate... | 09:03 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Update AIO config before performing an upgrade https://review.opendev.org/c/openstack/openstack-ansible/+/885190 | 09:36 |
admin1 | noonedeadpunk, how do I do a separate container? | 09:36 |
admin1 | cinder_active_active_cluster: false -- is there any downside to this? | 09:36 |
noonedeadpunk | Well, without coordination (like zookeeper/etcd/redis), active/active probably should not be used... But we had weird behaviour without active/active enabled, as backends were just ignoring requested actions, so volumes could just stop unmounting or smth like that | 09:39 |
admin1 | i wanted to use gluster initially, but saw that there is no driver for it anymore | 09:41 |
noonedeadpunk | regarding the extra group - we created an env.d file and made the cinder_volumes_nfs_container part of cinder_volumes and cinder_all | 09:41 |
admin1 | 3x nvme on gluster (no replication, just a big raid0 stripe) and then pass it to cinder | 09:42 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: [WIP] Add 'tls-transition' scenario https://review.opendev.org/c/openstack/openstack-ansible/+/885194 | 09:47 |
admin1 | noonedeadpunk , like this ? https://gist.githubusercontent.com/a1git/58f00f62fb9e08d8b15662f5ce69f9a2/raw/a37d9574f76cc8c279455cbaf1cd58e0443714f3/gistfile1.txt | 09:54 |
noonedeadpunk | admin1: iirc smth like that https://paste.openstack.org/show/bOW1Xl4QGyIJmTH0SUM1/ | 09:57 |
noonedeadpunk | but better backup inventory before trying :D | 09:58 |
noonedeadpunk | ah, missed `container_skel:` before `cinder_nfs_volumes_container` | 09:58 |
noonedeadpunk | https://paste.openstack.org/show/bptD7vyaBoxfHdxo5EHp/ | 09:59 |
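The pastes above have since expired; what follows is a rough reconstruction of the env.d file being described, based only on the hints in the log (the container name, the `container_skel:` correction, and membership in `cinder_volumes`/`cinder_all`), so the exact structure is an assumption:

```yaml
# /etc/openstack_deploy/env.d/cinder-nfs.yml (reconstruction, not the original paste)
container_skel:
  cinder_nfs_volumes_container:
    belongs_to:
      - storage_containers
    contains:
      - cinder_volume      # joins the existing cinder service groups
    properties:
      is_metal: false      # build LXC containers instead of running on metal
```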
noonedeadpunk | Another way around - you can set affinity: 2 for already existing `storage_hosts` - then new containers will be created. But then you'd need to have host_vars rather than group_vars, which is less convenient | 10:01 |
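The affinity alternative would look roughly like this in openstack_user_config.yml; the host name and IP are placeholders:

```yaml
# /etc/openstack_deploy/openstack_user_config.yml (sketch; host/IP are placeholders)
storage_hosts:
  storage1:
    ip: 172.29.236.100
    affinity:
      cinder_volumes_container: 2   # spawn two cinder-volume containers on this host
```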
noonedeadpunk | also don't forget to configure storage network to pass to this new group | 10:02 |
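Binding the storage network to the new group would go through `group_binds`, sketched here against a typical provider_networks layout; the bridge and interface names are assumptions:

```yaml
# /etc/openstack_deploy/openstack_user_config.yml (sketch; names assumed)
global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        type: "raw"
        group_binds:
          - cinder_volume
          - cinder_nfs_volumes_container   # the new env.d group
```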
jrosser | as usual AIO is good for messing with this stuff - you can check the inventory very early on | 10:05 |
jrosser | then see if you get the extra container after setup-hosts | 10:05 |
admin1 | i have cinder.yml in env.d with container_skel .... is_metal: false .. do i add this to the same file? | 10:06 |
admin1 | jrosser, good idea .. will spin up an aio for this | 10:06 |
noonedeadpunk | you can add there or create a new one - doesn't really matter | 10:44 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Add quorum support for service https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/875408 | 11:32 |
noonedeadpunk | would be great to have some reviews on https://review.opendev.org/q/topic:bump_osa+status:open | 14:14 |
mgariepy | done. | 14:28 |
NeilHanlon | noonedeadpunk: just confirming something - OVN setup should be available as of Zed? | 14:48 |
noonedeadpunk | yup | 14:48 |
NeilHanlon | I'm responding to a question[1] about OSA on Rocky 9.2 w/ Zed and they're having bridge troubles https://stackoverflow.com/questions/76404439/trouble-creating-a-bridge-device-for-openstack-rocky-linux-9-2-networking | 14:49 |
NeilHanlon | but seems like they should just use OVN ;) | 14:49 |
jrosser | not sure that is really an OVN / not-OVN type of problem they have | 15:05 |
jrosser | independent of the neutron driver, two things..... 1) br-mgmt does not particularly need to be able to access the internet 2) br-mgmt on all the different hosts (VMs in their case) need to be able to communicate | 15:06 |
jrosser | NeilHanlon: ^^ | 15:06 |
NeilHanlon | yeah, their issue is actually that NetworkManager already grabbed their ens160 | 15:07 |
NeilHanlon | so the bridge will never enslave the interface | 15:07 |
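One hedged way out of that situation on an EL9 host is to let NetworkManager own both the bridge and the port rather than fighting it; the interface name matches the question, the address is a placeholder:

```shell
# sketch: create the bridge in NetworkManager and enslave the NIC to it
nmcli connection add type bridge ifname br-mgmt con-name br-mgmt \
    ipv4.method manual ipv4.addresses 172.29.236.11/22   # address is a placeholder
nmcli connection add type ethernet ifname ens160 master br-mgmt con-name br-mgmt-port
nmcli connection up br-mgmt-port
nmcli connection up br-mgmt
```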
NeilHanlon | I commented with a bunch of stuff and pointed them to the channel ;) | 15:07 |
NeilHanlon | also: happy monday folks. hope everyone is having a good week so far | 15:08 |
jrosser | my week is starting interestingly via https://www.theregister.com/2023/06/01/moveit_transfer_zero_day/ | 15:11 |
jrosser | employer -> outsource -> outsource + crap software = DOH | 15:12 |
NeilHanlon | Ouch | 15:13 |
jrosser | gives me a reality check of how un-corporate my dayjob is, to see that there are people out there buying bug-ridden products which are windows server + IIS + crap code, and which basically do what rsync does, badly, apparently | 15:15 |
NeilHanlon | at my $lastjob, my team (infra/ops) would routinely receive tickets from Marketing with something like "There's this AMI we need you to run and also we bought 20 domain names and we're going live Monday, can we get this all ready?" - on a friday | 15:20 |
NeilHanlon | and we'd be like "what do you mean 'there's this AMI'? We don't run workloads on AWS.." | 15:20 |
noonedeadpunk | Btw, I think it would be great to land this doc change https://review.opendev.org/c/openstack/openstack-ansible/+/883488 | 15:24 |
NeilHanlon | 👍 | 15:27 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Start 2023.2 (Bobcat) development https://review.opendev.org/c/openstack/openstack-ansible/+/884924 | 15:48 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Start 2023.2 (Bobcat) development https://review.opendev.org/c/openstack/openstack-ansible/+/884924 | 15:56 |
jlabarre-rh | I keep getting the error "'shell' is not a valid attribute for a Play" when I try to run commands in a playbook (such as trying to import a galaxy collection, using "ansible-galaxy collection install openstack.cloud") | 16:03 |
jlabarre-rh | the playbook is starting with: --- | 16:03 |
jlabarre-rh | - name: add Ansible Galaxy collection for openstack cloud | 16:03 |
jlabarre-rh | shell: | | 16:03 |
jlabarre-rh | ansible-galaxy collection install openstack.cloud | 16:03 |
jlabarre-rh | tried with and without that pipe after "shell:"; we have other playbooks in our system running both ways, so I don't know what the purpose of it is anyway. | 16:05 |
jrosser | jlabarre-rh: you could paste the actual error output somewhere, very hard to read it as lines in IRC | 16:07 |
jrosser | also, this is the IRC channel for the openstack-ansible deployment tool, which is not the same as the ansible collection | 16:08 |
noonedeadpunk | jlabarre-rh: I think you should also have `hosts: localhost\n tasks:` or smth before using `shell` | 16:16 |
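A minimal sketch of what noonedeadpunk is pointing at: `shell` is a task keyword, not a play attribute, so it has to sit under `tasks:` inside a play that also defines `hosts:` (the hosts target here is an assumption):

```yaml
# minimal valid playbook (sketch of the suggested fix)
---
- name: Add Ansible Galaxy collection for openstack cloud
  hosts: localhost
  tasks:
    - name: Install the openstack.cloud collection
      ansible.builtin.shell: |
        ansible-galaxy collection install openstack.cloud
```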
jlabarre-rh | I just took out some of my tasks that are just supposed to list what I just added (for checking that it ran OK) and it got further (but failed with an "Incompatible openstacksdk library found" error). Hosts is defined in the playbook calling this one | 16:18 |
jrosser | jlabarre-rh: you need to take pretty great care to set up an environment with compatible openstack collection and openstacksdk versions | 16:21 |
jlabarre-rh | I'm probably going to have to rely on just shell commands for the user creation rather than a module for now | 16:23 |
opendevreview | Merged openstack/openstack-ansible master: [doc] Update upgrade guide to mention SLURP https://review.opendev.org/c/openstack/openstack-ansible/+/883488 | 18:12 |
depasquale | ciao everybody, I have a strange problem with OpenStack, for the very first time in my life | 18:22 |
depasquale | installing via openstack-ansible: when installing keystone I receive the following error: | 18:23 |
depasquale | SQL connection failed. 10 attempts left.: oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on '172.29.236.10' ([SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate is not yet valid (_ssl.c:1131))") | 18:23 |
noonedeadpunk | depasquale: oh, that's interesting. | 18:24 |
noonedeadpunk | Are you sure that your clocks are synchronized? | 18:24 |
depasquale | On playbook setup-openstack.yml task os_keystone : Check current state of Keystone DB fails because exit code 1 from /openstack/venvs/keystone-20.1.0.dev9/bin/keystone-manage db_sync --check | 18:24 |
depasquale | I am running stable/zed | 18:25 |
noonedeadpunk | as that feels like the deploy host date/time is not in sync with some other hosts | 18:25 |
jrosser | "certificate is not yet valid" <- this is generated on the deploy host | 18:25 |
depasquale | uhm ok. let me check | 18:25 |
jrosser | but the check will be done on a host like the utility container | 18:25 |
noonedeadpunk | though we're installing chrony by default everywhere except the deploy host | 18:25 |
jrosser | so any skew there in the wrong direction will be bad | 18:26 |
depasquale | ok. do you have the command you'd like me to execute on the utility container? | 18:26 |
jrosser | deploy host being in the future, there is no room for error | 18:26 |
noonedeadpunk | `date`?:) | 18:26 |
jrosser | but deploy host being in the past won't be noticed | 18:26 |
depasquale | ok makes sense | 18:26 |
noonedeadpunk | well, or one of the lxc hosts is in the past | 18:27 |
depasquale | but I have no idea why it thinks to be in the past | 18:27 |
depasquale | we are talking about a fresh installation on formatted servers | 18:27 |
noonedeadpunk | yeah, but server time is provided by the BIOS | 18:27 |
depasquale | ok let me check | 18:27 |
noonedeadpunk | after installation | 18:27 |
depasquale | I think it can be an issue on the OSA machine | 18:28 |
noonedeadpunk | until time gets synchronized with chrony or smth like that | 18:28 |
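A quick skew check along the lines discussed above; the ad-hoc form assumes it is run from the OSA playbooks directory so the dynamic inventory resolves:

```shell
# compare the deploy host clock against every managed host/container
date -u                               # on the deploy host itself
ansible all -m command -a 'date -u'   # ad-hoc check via the OSA inventory
```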
depasquale | ok, my OSA host's clock is in the future... :D Tue 13 Jun 2023 09:09:19 AM UTC | 18:28 |
depasquale | let me check if this solves the issue | 18:29 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/2023.1: [doc] Update upgrade guide to mention SLURP https://review.opendev.org/c/openstack/openstack-ansible/+/885257 | 18:29 |
noonedeadpunk | depasquale: you can try running the hardening role against the OSA host I guess... | 18:30 |
noonedeadpunk | At the very least just with a tag to set up chrony | 18:30 |
jrosser | i also think the certificate will not be re-issued, as it already exists | 18:32 |
noonedeadpunk | like `openstack-ansible playbooks/security-hardening.yml -e security_host_group=localhost --tags V-72269` | 18:32 |
noonedeadpunk | nah, it won't | 18:32 |
jrosser | i think `-e haproxy_pki_regen_cert=true` will be needed on the haproxy playbook to do that | 18:33 |
depasquale | time synced; I am regenerating the certs and trying to update everything | 18:33 |
noonedeadpunk | arhg | 18:34 |
noonedeadpunk | `E:Failed to fetch https://ppa1.novemberain.com/rabbitmq/rabbitmq-server/deb/ubuntu/dists/jammy/main/binary-amd64/Packages.gz File has unexpected size (8854 != 9044). Mirror sync in progress? ` | 18:34 |
depasquale | It will take some time. But thanks a lot for the help! Let's see if that's all it was | 18:34 |
noonedeadpunk | You can trigger certificate re-generation | 18:35 |
noonedeadpunk | but you will need to run all playbooks from the beginning | 18:35 |
* noonedeadpunk | having such a bad connection that pages barely load | 18:36 |
noonedeadpunk | depasquale: I think you will need to run `openstack-ansible setup-hosts.yml -e pki_regen_ca=true -e pki_regen_cert=true` | 18:41 |
noonedeadpunk | and same for setup-infrastructure afterwards | 18:41 |
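Consolidating the recovery steps from this exchange into one sequence; the flags are the ones suggested above, while the per-playbook placement and order are an assumption based on the standard OSA run order:

```shell
# re-issue the certificates after fixing the clock (sketch)
openstack-ansible setup-hosts.yml -e pki_regen_ca=true -e pki_regen_cert=true
openstack-ansible haproxy-install.yml -e haproxy_pki_regen_cert=true
openstack-ansible setup-infrastructure.yml -e pki_regen_cert=true
openstack-ansible setup-openstack.yml
```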
NeilHanlon | noonedeadpunk: i've been seeing that error a bunch, too. They need to fix how they are syncing the packages, I think... | 18:46 |
NeilHanlon | I'm honestly debating just making my own mirror of them for us all | 18:46 |