noonedeadpunk | good morning | 06:58 |
jrosser | i wonder if we need a release note for the ceph-ansible 8.0 stuff | 07:12 |
jrosser | also there might be leftover rgw services still running given that the service name seems to have changed now | 07:13 |
jrosser | at upgrade | 07:13 |
noonedeadpunk | oh, yes, upgrade for rgw is a question for sure.... | 07:13 |
*** gaudenz_ is now known as gaudenz | 07:17 | |
jrosser | but then we really don't make any promise about ceph upgrades at all - but user expectation might be that it will work | 07:26 |
jrosser | a release note could say that any duplicate rgw services must be stopped/masked manually? | 07:27 |
jrosser | but it will break too i think, as the IP/port will be in use by the old service | 07:27 |
noonedeadpunk | I'd actually try to deal with that somehow | 07:29 |
noonedeadpunk | as after all we use our own playbooks for rgw deployment, so likely with serial we can stop the service before including the ceph-ansible role | 07:30 |
jrosser | yes - we know what the service name used to be | 07:30 |
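A minimal sketch of that idea, assuming the pre-8.0 unit naming (the exact unit name below is an assumption and needs checking against a real pre-upgrade rgw node): stop and mask the leftover radosgw unit from a serialised pre-task before the ceph-ansible role is included:

```yaml
- name: Stop and mask the leftover radosgw unit from the old naming scheme
  ansible.builtin.systemd:
    # the old unit name is an assumption - verify it on an actual pre-upgrade node
    name: "ceph-radosgw@rgw.{{ ansible_facts['hostname'] }}"
    state: stopped
    enabled: false
    masked: true
  failed_when: false  # the unit will not exist on fresh deployments
```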
noonedeadpunk | ok, so for mariadb - it's this command that triggers the SSL error - `/usr/bin/mariadb --defaults-file=/etc/mysql/debian.cnf` | 08:56 |
noonedeadpunk | but then somehow debian.cnf was not placed..... | 09:02 |
noonedeadpunk | or got overridden.... | 09:02 |
noonedeadpunk | aha, ok, so apparently smth has changed in how TLS communication for sockets happens | 09:07 |
noonedeadpunk | fwiw, just `mariadb` fails in exactly the same way | 09:12 |
noonedeadpunk | so... I'm a bit confused now on how to make the mariadb client respect the TLS setup we've used | 09:25 |
andrewbonney | Has anyone else tried an upgrade to C yet? I've hit a Cinder issue and have reported a bug (https://bugs.launchpad.net/cinder/+bug/2070475), but can't really believe I'd be the first | 09:43 |
noonedeadpunk | we have not yet - planned to test later during summer... | 09:56 |
noonedeadpunk | I've found something alike in masakari as well, though not exactly the same ofc | 09:56 |
noonedeadpunk | I wonder how we didn't catch that in CI | 09:57 |
noonedeadpunk | as upgrade tests for cinder from 2023.1 were passing there iirc | 09:57 |
andrewbonney | I guess it depends how varied the data is in the database after just a few tempest tests | 09:57 |
noonedeadpunk | well, we execute `cinder-manage db sync` kinda? | 10:06 |
noonedeadpunk | At least I'd expect to | 10:06 |
andrewbonney | Yeah, I think the change it makes is probably ok against an empty DB, but throws an error dependent on the existing data | 10:08 |
*** mgoddard- is now known as mgoddard | 10:12 | |
noonedeadpunk | ok, regarding mariadb - `--skip-ssl-verify-server-cert` does help kinda | 10:16 |
noonedeadpunk | so smth just wrong with the cert I assume | 10:17 |
noonedeadpunk | missing smth for socket auth... | 10:17 |
noonedeadpunk | like 127.0.0.1 | 10:18 |
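A quick way to confirm that suspicion is to dump the SANs the server certificate actually carries and look for localhost/127.0.0.1 - a sketch only, and the certificate path below is a guess rather than the role's documented location:

```yaml
- name: Show which SANs the galera server certificate carries
  ansible.builtin.shell: >-
    openssl x509 -in /etc/mysql/ssl/galera.pem -noout -text
    | grep -A1 'Subject Alternative Name'
  register: galera_cert_sans
  changed_when: false

- name: Print the SANs
  ansible.builtin.debug:
    var: galera_cert_sans.stdout_lines
```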
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Update mariadb to 11.4.2 https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/922377 | 10:28 |
noonedeadpunk | andrewbonney: btw, are you already running workloads on top of openstack with ubuntu 24.04 ? | 11:23 |
noonedeadpunk | I'm just trying to find out why virtio_net is missing from kernel modules now, and whether that means it's really missing or it was somehow integrated/merged into the kernel, so it does not show up as a module anymore... | 11:23 |
andrewbonney | In the hypervisor or guest? All our HVs are 22.04. I can have a look if any guests are running 24.04 | 11:24 |
noonedeadpunk | guest | 11:24 |
andrewbonney | Of course, no way to deploy HVs yet... | 11:24 |
noonedeadpunk | yeah | 11:24 |
andrewbonney | Are you seeing a behaviour issue as a result? | 11:26 |
noonedeadpunk | well, I'm not sure yet :D | 11:27 |
noonedeadpunk | I'm also not sure if it's smth wrong with the image to begin with | 11:27 |
noonedeadpunk | or with hv... | 11:27 |
noonedeadpunk | or smth else very obvious | 11:27 |
noonedeadpunk | so was wondering if you have spotted that or maybe have some internal reports about poor performance or anything like that... | 11:28 |
noonedeadpunk | As I tried to find what has happened to the virtio_net module and failed so far | 11:29 |
andrewbonney | I'll spin an instance up | 11:29 |
andrewbonney | Doesn't look like they're being used much yet since we made the images available | 11:30 |
andrewbonney | I see virtio_net and virtio_scsi missing compared to a similar 22.04 instance | 11:34 |
noonedeadpunk | yeah... ok... | 11:36 |
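A quick check to run inside the 24.04 guest (a sketch; stock Ubuntu paths assumed) that tells apart a driver that is genuinely absent from one that is simply built into the kernel and therefore never shows up as a loadable module:

```yaml
- name: Check how the guest kernel provides virtio_net
  ansible.builtin.shell: |
    grep CONFIG_VIRTIO_NET= /boot/config-$(uname -r)
    modinfo virtio_net | grep '^filename'
  register: virtio_net_check
  changed_when: false
  failed_when: false
# CONFIG_VIRTIO_NET=y (and a "filename: (builtin)" line) means the driver is
# compiled in and will never appear in lsmod; =m means a loadable module;
# no match at all means it is really missing from the kernel build.
```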
*** tosky_ is now known as tosky | 12:37 | |
*** kleini_ is now known as kleini | 12:47 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Update mariadb to 11.4.2 https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/922377 | 13:29 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Remove xinetd clean-up tasks https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/922819 | 13:30 |
lowercase | I'm failing in a very particular way, https://paste.centos.org/view/9ce519ca . During bootstrap-ansible, it's failing on a zuul task, which is strange to me as I don't knowingly perform any zuul related tasks | 13:38 |
noonedeadpunk | hey | 13:45 |
noonedeadpunk | what version are you running? | 13:46 |
noonedeadpunk | and do you have user-collection-requirements? | 13:46 |
lowercase | morning! | 13:47 |
lowercase | This is 2023.1 and i do not have any user-collection reqs | 13:47 |
noonedeadpunk | the weird part is why you don't have a source key | 13:49 |
noonedeadpunk | as obviously it does have it: https://opendev.org/openstack/openstack-ansible/src/branch/stable/2023.1/ansible-collection-requirements.yml#L49-L52 | 13:49 |
noonedeadpunk | and the task is basically verifying if you're running in CI or not, as we're trying to use the same path. | 13:52 |
lowercase | Take a second look at the error. That openvswitch one succeeds | 13:52 |
jrosser | could try `debug: var=galaxy_collections_list` just here https://github.com/openstack/openstack-ansible/blob/stable/2023.1/scripts/get-ansible-collection-requirements.yml#L49 | 13:52 |
noonedeadpunk | ah, well.. then this indeed ^ | 13:52 |
jrosser | the error is coming from this task though i think https://github.com/openstack/openstack-ansible/blob/stable/2023.1/scripts/get-ansible-collection-requirements.yml#L50 | 13:52 |
noonedeadpunk | as then there's something off with the list... | 13:52 |
jrosser | not the one that prints the openvswitch line | 13:53 |
lowercase | https://paste.centos.org/view/b5941827 | 13:55 |
lowercase | Here is the result from the debug line | 13:55 |
jrosser | so netcommon does not have a source | 13:56 |
jrosser | but here it is https://opendev.org/openstack/openstack-ansible/src/branch/stable/2023.1/ansible-collection-requirements.yml#L42 | 13:56 |
noonedeadpunk | ansible.netcommon | 13:57 |
noonedeadpunk | maybe we missed that in earlier releases... | 13:57 |
jrosser | lowercase: look at line 2 of your paste? | 13:58 |
lowercase | i am appending a user collection! | 13:58 |
noonedeadpunk | ++ - you for sure should have /etc/openstack_deploy/user-collection-requirements.yml | 13:58 |
lowercase | but.. where | 13:58 |
lowercase | okay, i deleted that | 13:59 |
lowercase | Yep, that works | 13:59 |
lowercase | I do think I remember adding that to fix a dependency a while back. But for what I don't recall | 14:00 |
jrosser | there was a time when the ansible people made quite a big breaking change, moving/renaming all the netcommon stuff around | 14:01 |
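For reference, if an extra collection like that is still needed, each entry in /etc/openstack_deploy/user-collection-requirements.yml has to carry the same keys as the shipped ansible-collection-requirements.yml - roughly like this (the version is only illustrative):

```yaml
collections:
  - name: ansible.netcommon
    version: 5.1.1  # illustrative only
    source: https://galaxy.ansible.com
```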
lowercase | Awesome, thank you guys so much! | 14:03 |
jrosser | no problem :) | 14:04 |
andrewbonney | noonedeadpunk: has the switch to openstack_resources caused a bit of a behaviour change for octavia ssh keypairs? | 14:19 |
andrewbonney | I think previously it was generated by openstack then copied to the deploy host | 14:19 |
andrewbonney | Now it looks like we're generating in a utility container perhaps and uploading it? | 14:19 |
noonedeadpunk | Ideally it should not have changed | 14:21 |
noonedeadpunk | As IIRC idea was to even re-use previously issued keypairs | 14:22 |
noonedeadpunk | and I think that worked.... | 14:22 |
noonedeadpunk | but huh | 14:23 |
noonedeadpunk | it's indeed delegated to openstack_resources_setup_host | 14:23 |
noonedeadpunk | which is utility... | 14:23 |
noonedeadpunk | but it should have been octavia_keypair_setup_host | 14:24 |
andrewbonney | That got removed from the role defaults as part of the change | 14:25 |
noonedeadpunk | yeah, so that's the bug.... | 14:25 |
noonedeadpunk | I guess | 14:25 |
noonedeadpunk | At the very least this should be covered by the upgrade script | 14:25 |
andrewbonney | Yeah, I can certainly work around it by putting the file in the right place on the utility host, but I view them as somewhat disposable so it could come up again in future | 14:26 |
andrewbonney | In fact it looks like we previously removed the private keys from our deployment host and put them in vault so they weren't in plain text. Because the previous tasks only ran if the named key didn't exist in OpenStack, the file didn't need to be present | 14:28 |
andrewbonney | Fwiw, other than this and the Cinder trouble, 2023.2 -> 2024.1 appears to have worked fine | 14:30 |
noonedeadpunk | like the weird thing is that I totally tried to make it compatible with old keys for sure... | 14:32 |
noonedeadpunk | but likely I played on metal deployment back then.... | 14:33 |
noonedeadpunk | as I had to do that to prevent the keypair being overridden by the role: https://opendev.org/openstack/openstack-ansible-os_octavia/src/branch/master/tasks/octavia_resources.yml#L125-L126 | 14:33 |
noonedeadpunk | Or otherwise I somehow decided to merge 2 different includes together at some point... | 14:34 |
noonedeadpunk | nah... I think I just played on metal all that time... | 14:36 |
noonedeadpunk | As I even tried to mimic the path by using lookup('env', 'HOME') ~ '/.ssh') | 14:36 |
noonedeadpunk | wait a sec | 14:37 |
noonedeadpunk | so that is not the deploy host anymore, is it? https://opendev.org/openstack/openstack-ansible-os_octavia/src/branch/stable/2023.2/defaults/main.yml#L302-L307 | 14:38 |
andrewbonney | Yeah, that was confusing me. I think we've had two issues in our case, and the fact the key existence check was done differently before masked the host change | 14:38 |
noonedeadpunk | https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/772559 | 14:39 |
noonedeadpunk | so basically the var was dropped which broke some of your overrides I assume? | 14:39 |
noonedeadpunk | well, what we can do there - is to include openstack_resources twice instead | 14:41 |
noonedeadpunk | and return back octavia_keypair_setup_host variable | 14:41 |
andrewbonney | We were generating the keypair on the utility host as of the patch I wrote, but the tasks after that copied it back to the deploy host using the HOME env based path. I think now it's assuming the HOME env path is actually on the utility container | 14:43 |
andrewbonney | But yes, being able to specify the host where the key gets made ought to work around this, provided that host has any Python/Ansible pre-requisites installed | 14:44 |
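A sketch of what that override could look like in user_variables.yml, assuming a variable along the lines of the old octavia_keypair_setup_host is made available again (it was dropped with the openstack_resources switch, so this only works once the role exposes it):

```yaml
# assumption: octavia_keypair_setup_host is (re)introduced by the role;
# point keypair generation back at the deploy host instead of the
# openstack_resources_setup_host (utility container)
octavia_keypair_setup_host: localhost
```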
noonedeadpunk | Soo.... You wanna propose something or should I take a look at what could be done to return to status quo? andrewbonney? | 15:02 |
andrewbonney | I can take a look tomorrow unless you have chance before then | 15:03 |
noonedeadpunk | unlikely will be able today either... | 15:03 |
andrewbonney | No problem | 15:03 |
noonedeadpunk | but ping me if anything :) | 15:03 |
noonedeadpunk | or well... | 15:07 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: Return usage of custom keypair setup host https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/922837 | 15:11 |
noonedeadpunk | andrewbonney: I was thinking about smth like that ^ | 15:11 |
noonedeadpunk | but would be fine with alternative of upgrade script | 15:11 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: Return usage of custom keypair setup host https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/922837 | 15:12 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Use mariadb client instead of mariadb for healthcheck https://review.opendev.org/c/openstack/openstack-ansible/+/922839 | 15:17 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Update mariadb to 11.4.2 https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/922377 | 15:18 |
andrewbonney | I'm not sure that's quite going to work for us, but I'll check tomorrow and suggest a change if needed | 15:18 |
noonedeadpunk | yeah, ok | 15:19 |
noonedeadpunk | ah | 15:19 |
noonedeadpunk | so you're saying that community.crypto.openssh_keypair can not read your vault-encrypted secret and re-gens it or smth | 15:20 |
noonedeadpunk | here https://review.opendev.org/c/openstack/openstack-ansible/+/922839 | 15:20 |
noonedeadpunk | * https://opendev.org/openstack/openstack-ansible-plugins/src/branch/master/roles/openstack_resources/tasks/keypairs.yml#L17 | 15:20 |
noonedeadpunk | ok, yeah, that's more tricky to handle.... | 15:20 |
noonedeadpunk | so the problem there, is that newer nova api does not support generation of SSH keys, so community.crypto should be leveraged instead... | 15:24 |
noonedeadpunk | but then we'd need more complex logic to load/verify the key if it exists, and generate it if not... | 15:24 |
andrewbonney | I think those tasks are still ok, it's just a matter of where they each run so that their pre-requisites are installed | 15:40 |
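One simple shape that load-or-generate logic could take - a sketch only, not what the role currently does; the key path, key name and cloud entry are all illustrative - generate the key on disk only if it is not there yet, then make sure the public half is registered in nova:

```yaml
- name: Generate the keypair only if it does not exist yet
  community.crypto.openssh_keypair:
    path: "{{ lookup('env', 'HOME') }}/.ssh/octavia_key"  # illustrative path
    regenerate: never
  register: _octavia_keypair

- name: Ensure the public key is present in nova
  openstack.cloud.keypair:
    cloud: default  # assumes a matching clouds.yaml entry
    name: octavia_ssh_key  # illustrative name
    public_key: "{{ _octavia_keypair.public_key }}"
    state: present
```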
andrewbonney | The last patch I wrote meant we generated the keypair on the utility host because it has the OpenStack sdk installed in the venv I think, which we don't or didn't have on the deploy host where we store the primary copy of the key | 15:41 |
andrewbonney | Possible there is something about networks available to the deploy host vs utility container too in our case but I'll have to try running things to confirm that | 15:43 |
andrewbonney | I'm sure we can find something that works without a major change | 15:43 |
noonedeadpunk | yeah, ok, then I don't follow the issue... so just let me know if/how I can help... | 15:43 |
noonedeadpunk | ahhhhhhhhh | 15:44 |
noonedeadpunk | you mean this is now absent: https://opendev.org/openstack/openstack-ansible-os_octavia/src/branch/stable/2023.1/tasks/octavia_keypair.yml#L34-L41 | 15:45 |
noonedeadpunk | I guess there're multiple issues in fact with that then :D | 15:45 |
jrosser | noonedeadpunk: btw did you see this https://mariadb.com/kb/en/securing-connections-for-client-and-server/#requiring-tls | 16:10 |
jrosser | there might be some finer grain control over how to handle localhost connections possible | 16:10 |
jrosser | certificate for localhost feels pretty weird/bad | 16:11 |
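For completeness, the finer-grained control mentioned there appears to boil down to per-account REQUIRE clauses rather than a global require_secure_transport, so local socket accounts can stay exempt - a rough sketch, with a hypothetical account name:

```yaml
- name: Require TLS only for an account that connects over the network
  ansible.builtin.command: >-
    mariadb -e "ALTER USER 'someapp'@'%' REQUIRE SSL"
  changed_when: true
```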