opendevreview | Merged openstack/ansible-role-systemd_networkd stable/2023.2: Fix definition of multiple static routes for network https://review.opendev.org/c/openstack/ansible-role-systemd_networkd/+/903754 | 00:35 |
opendevreview | Merged openstack/openstack-ansible-ceph_client stable/2023.2: Add backwards compatibility of ceph_components format https://review.opendev.org/c/openstack/openstack-ansible-ceph_client/+/904849 | 00:52 |
opendevreview | Merged openstack/openstack-ansible-lxc_hosts master: Fix resolved config on Debian https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/904828 | 00:55 |
opendevreview | Merged openstack/openstack-ansible stable/2023.1: Skip installing curl for EL https://review.opendev.org/c/openstack/openstack-ansible/+/904846 | 02:52 |
opendevreview | Merged openstack/openstack-ansible stable/zed: Skip installing curl for EL https://review.opendev.org/c/openstack/openstack-ansible/+/904847 | 03:08 |
opendevreview | Andrew Bonney proposed openstack/openstack-ansible-lxc_hosts stable/2023.2: Fix resolved config on Debian https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/905081 | 08:40 |
jrosser | good morning | 08:41 |
noonedeadpunk | \o/ | 09:09 |
noonedeadpunk | according to the bug tracker, we have a couple of new folks rolling out deployments right now | 09:09 |
signed | @noonedeadpunk For bug #2048751, I attached the correct log (the log attached at first was the other bug's) | 09:34 |
noonedeadpunk | signed: aha, thanks | 09:46 |
noonedeadpunk | question - have you tried to run the playbook I suggested? | 09:46 |
noonedeadpunk | `openstack-ansible playbooks/lxc-hosts-setup.yml -e lxc_image_cache_refresh=true`? | 09:46 |
noonedeadpunk | As I kinda wonder if it's trying to run against an old LXC image or smth like that | 09:47 |
noonedeadpunk | as you're right - I also don't see a reason why the package is not available | 09:48 |
signed | noonedeadpunk: Saw your answer after destroying the test VM to test AIO (currently in the deploy phase, we're kinda time-limited to roll a test instance), if AIO is validated, I'll try | 09:48 |
noonedeadpunk | But what is concerning is that apt update should list repos in its output | 09:48 |
noonedeadpunk | And there's none | 09:48 |
signed | Yeah, this is definitely weird | 09:49 |
noonedeadpunk | So the output I'd expect would be like https://paste.ubuntu.com/p/ZGRzfyhg4C/ | 09:50 |
noonedeadpunk | (or smth) | 09:50 |
noonedeadpunk | (just provided from localhost) | 09:50 |
noonedeadpunk | So it felt like there're no repos configured at all for the container | 09:51 |
signed | Shouldn't that fail? | 09:51 |
noonedeadpunk | I think no.... | 09:51 |
signed | Gonna try in Docker 2s | 09:51 |
signed | Doesn't fail | 09:53 |
signed | With no repos set | 09:53 |
noonedeadpunk | And like no output for apt update? | 09:54 |
signed | https://paste.ubuntu.com/p/KNQ73Mrb97/ | 09:54 |
signed | I find that bug puzzling | 09:55 |
noonedeadpunk | ah, it also does `apt-get update` which will provide slightly less output | 09:55 |
noonedeadpunk | So yeah, I assume there simply were no repos configured | 09:56 |
signed | Running `apt-get update` on my Docker container returns no repos | 09:56 |
noonedeadpunk | huh | 09:56 |
signed | Returns the same thing* | 09:56 |
signed | as the bug | 09:56 |
noonedeadpunk | aha, ok | 09:56 |
signed | And my AIO deployment failed | 09:58 |
signed | 2024-01-09 09:58:38.007 61410 ERROR cinder.service [-] Manager for service cinder-volume aio1@lvm is reporting problems, not sending heartbeat. Service will appear "down". | 09:58 |
noonedeadpunk | Ok, this is more or less "known" I think | 09:59 |
signed | Any leads on how to fix it? My SCENARIO was "aio_metal" | 09:59 |
noonedeadpunk | Or at least it's not the first time I'm seeing it | 09:59 |
signed | But that leads to a 503 | 09:59 |
noonedeadpunk | 503 on which request? | 10:00 |
signed | Update driver status failed: (config name lvm) is uninitialized.* | 10:00 |
noonedeadpunk | and what release are you running? | 10:00 |
signed | Master | 10:00 |
signed | ae6e59d1e5318cd909771b9bb3a198d321e6a03b | 10:00 |
signed | I have had so many problems with OpenStack in our env. I am hoping I can get those resolved because I feel it's such a great product | 10:01 |
noonedeadpunk | What's in `journalctl -u cinder-volume.service`? There must be some related error I believe | 10:02 |
noonedeadpunk | Though I'd assume this issue should be gone on master... | 10:02 |
signed | Table 'cinder.services' doesn't exist | 10:03 |
noonedeadpunk | Um, can you kindly paste the error from the ansible run then? | 10:03 |
signed | Ansible succeeded | 10:03 |
noonedeadpunk | Then scroll down the log to latest error :) | 10:04 |
noonedeadpunk | Or let me try to reproduce it | 10:04 |
signed | Running jctl -f shows only cinder shitting itself | 10:04 |
signed | openstack-ansible setup-openstack.yml ran just fine | 10:05 |
signed | Without failures | 10:05 |
noonedeadpunk | I assume including tempest tests then | 10:05 |
signed | Should I try os-cinder-install.yml? | 10:06 |
signed | Would that fix the database? | 10:06 |
noonedeadpunk | what's the output of `openstack volume service list`? | 10:06 |
noonedeadpunk | 503? | 10:07 |
noonedeadpunk | `source /root/openrc` first | 10:08 |
noonedeadpunk | signed: ^ | 10:08 |
signed | Succeeds, I see a cinder-scheduler up and a cinder-volume down | 10:08 |
noonedeadpunk | fwiw, I don't see failures for cinder-volume in recently spawned AIOs of mine :( | 10:09 |
noonedeadpunk | Though they're mainly LXC ones, not metal... | 10:09 |
jrosser | you have to be *much* more careful about getting addressing correct for a metal deploy | 10:09 |
noonedeadpunk | or using ceph... | 10:09 |
signed | I didn't modify anything in the var files, so maybe my lvm config is botched | 10:09 |
signed | (The host is installed on LVM) | 10:10 |
signed | Gonna try redeploy | 10:10 |
jrosser | signed: the AIO deploy should "just work" without any modification in a VM, that's how we do our CI | 10:10 |
noonedeadpunk | works on metal as well.... | 10:11 |
jrosser | for both metal and lxc scenarios | 10:11 |
noonedeadpunk | but that was quite old aio... | 10:11 |
noonedeadpunk | well, we don't always test cinder in CI with tempest tests. | 10:11 |
signed | Yeah, I feel that... What does the LVM partitioning need to look like? | 10:11 |
noonedeadpunk | ./scripts/bootstrap_aio.sh does LVM partitioning in general | 10:12 |
signed | All deployments start from a clean 22.04 install with a 128GB disk | 10:12 |
signed | Do I need to shrink Ubuntu PV? | 10:12 |
jrosser | "it depends" | 10:12 |
signed | Because, for all my deployments currently I just used full disk partitioning | 10:13 |
noonedeadpunk | from the loopback device... | 10:13 |
jrosser | there is no prescribed way that OSA works, it is more like a toolbox | 10:13 |
jrosser | the AIO provides a known reference but you are able to construct your production deployment really any way you like | 10:13 |
noonedeadpunk | Here's the part that configures "volumes" for AIO: https://opendev.org/openstack/openstack-ansible/src/branch/master/tests/roles/bootstrap-host/tasks/prepare_loopback_cinder.yml#L57-L76 | 10:13 |
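For reference, the loopback-backed storage those linked tasks set up looks roughly like the sketch below (the file path, size, and task names here are assumptions for illustration; the linked prepare_loopback_cinder.yml is authoritative):

```yaml
# Hedged sketch of an AIO-style loopback volume group for cinder
- name: Create a sparse file to back cinder volumes
  ansible.builtin.command: truncate -s 10G /openstack/cinder.img
  args:
    creates: /openstack/cinder.img

- name: Attach the file to a free loopback device and capture its name
  ansible.builtin.command: losetup --find --show /openstack/cinder.img
  register: cinder_loopback

- name: Create the cinder-volumes VG on the loopback device
  community.general.lvg:
    vg: cinder-volumes
    pvs: "{{ cinder_loopback.stdout }}"
```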
signed | Yeah, I feel that, it's so weird that I have so many issues, be it with OSA or with a traditional deployment | 10:14 |
signed | DevStack left me with no VM networking at all, traditional deployment also had network issues (might very well be my config) | 10:14 |
noonedeadpunk | Well, I still feel that first two issues were pretty much related to cdrom actually | 10:15 |
signed | Probably | 10:15 |
noonedeadpunk | But networking is really a thing | 10:15 |
noonedeadpunk | You must very well understand how you want networking to look for your deployment | 10:15 |
noonedeadpunk | otherwise it will be a mess | 10:15 |
signed | Tbf, it's probably my very bad understanding of it | 10:15 |
jrosser | i think that this is my point about "toolbox" | 10:16 |
jrosser | with OSA pretty much anything is possible, but you have to know what you want | 10:16 |
jrosser | and then express that in the config | 10:16 |
noonedeadpunk | imo, networking is one of the most complex things in openstack | 10:16 |
jrosser | rather than expect a magical installer to make all those choices for you | 10:16 |
signed | I don't really want anything for now, it's really a PoC for a company project | 10:16 |
signed | We're debating between Hypervisors | 10:17 |
jrosser | but you have to make some basic decisions about storage, networking and approach to H/A as a minimum | 10:17 |
noonedeadpunk | yeah, AIO should generally work. For this purpose I usually have VMs with a 100GB disk, 4-8 vCPUs and 12+ GB of RAM. | 10:17 |
noonedeadpunk | (and nested virt) | 10:18 |
noonedeadpunk | fwiw, you should also be able to spawn instances even without cinder, just using nova ephemeral drives | 10:18 |
signed | Yeah I know | 10:18 |
signed | But Horizon spat a 503 at me | 10:19 |
noonedeadpunk | well, you can drop the cinder installation as a whole then I guess... But I wonder if this 503 is related to cinder | 10:19 |
noonedeadpunk | or rather we have issue with horizon deployment... | 10:19 |
jrosser | would be useful to look at hatop output | 10:20 |
signed | Should I try on point release instead of Master? | 10:20 |
jrosser | master is the current development branch for the next release | 10:20 |
jrosser | 2023.2 is the most recently made stable release for OpenStack Bobcat | 10:20 |
noonedeadpunk | frankly speaking I haven't looked into our horizon deployment for a while since we don't use it internally | 10:21 |
signed | I have a lot of space in the Ubuntu PV actually (Ubuntu only stretches its VG to 64G) | 10:22 |
signed | So space is not an issue | 10:22 |
jrosser | you can see how that works in the AIO here https://github.com/openstack/openstack-ansible/blob/master/tests/roles/bootstrap-host/templates/user_variables.aio.yml.j2#L337 | 10:23 |
jrosser | `volume_group: cinder-volumes` | 10:23 |
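For context, the LVM backend definition in the AIO user variables looks roughly like this (a sketch from memory of that template; the exact keys are in the linked file):

```yaml
# Assumed shape of the AIO cinder backend override
cinder_backends:
  lvm:
    volume_backend_name: LVM_iSCSI
    volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group: cinder-volumes   # the VG created by the bootstrap
```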
signed | Destroyed that VM, if that fails again, gonna hyu | 10:24 |
opendevreview | Merged openstack/openstack-ansible-os_magnum stable/2023.2: Add missing magnum octavia client configuration https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/904508 | 10:42 |
hamidlotfi_ | Hi there, | 10:46 |
hamidlotfi_ | What does the value of the `revision_number` field in the port information section indicate? | 10:46 |
hamidlotfi_ | It went from `1 to 500` within half an hour and also I have this message on Neutron server: | 10:46 |
hamidlotfi_ | `Successfully bumped revision number for resource 46436042-db65-4c6b-9f15-4b2f8e9ee942 (type: ports) to 500`? | 10:46 |
signed | noonedeadpunk: With an LXC install on AIO I get actual apt update output | 10:48 |
jrosser | signed: how do you install your operating system - we're likely all doing test environments with Ubuntu cloud image based VMs | 10:49 |
signed | From ISO | 10:49 |
noonedeadpunk | hamidlotfi_: I assume it's bumped by any update of port information, like attach/detach/move between hosts | 10:51 |
noonedeadpunk | though it's better to ask neutron folks here as they may know more | 10:52 |
signed | But it then fails somehow at the same step | 10:52 |
noonedeadpunk | Can you kindly paste sources.list with cdrom record so I could try to reproduce that? | 10:53 |
signed | God, this one is extra weird | 10:54 |
opendevreview | Merged openstack/openstack-ansible-rabbitmq_server stable/zed: Add ability to add custom configuration for RabbitMQ https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/900941 | 10:54 |
signed | It's getting a broken pipe | 10:54 |
noonedeadpunk | for ansible connection? Hard to understand what is the error without seeing what you're talking about | 10:55 |
signed | https://paste.ubuntu.com/p/r77jpJYr8m/ | 10:55 |
signed | And it magically succeeded | 10:56 |
signed | It tried 20+ times and then succeeded | 10:56 |
noonedeadpunk | There could be a reconnect issued afterwards. This is potentially related to SSH connection persistence | 10:56 |
noonedeadpunk | Or well... | 10:56 |
noonedeadpunk | 20+ times is too much | 10:56 |
signed | Went way too fast for me to see how many but I've seen 10 tries | 10:57 |
signed | So might be more | 10:57 |
jrosser | it's very helpful to have some wider context in debug pastes so that we can see which tasks are involved | 10:57 |
noonedeadpunk | ++ | 10:58 |
signed | After this deployment I'll do a `script` record of the ansible output | 10:59 |
signed | In a clone of the base VM | 10:59 |
noonedeadpunk | there's a log stored in /openstack/log/ansible-logging/ansible.log just in case | 11:03 |
signed | noonedeadpunk: https://paste.ubuntu.com/p/smw7P4Vyqz/ | 11:10 |
noonedeadpunk | signed: that is expected to retry | 11:11 |
signed | Yeah, I know, but it succeeded, so no sources.list issue on AIO | 11:12 |
noonedeadpunk | we do some tasks in an async way, and then check the state of the async job. We have like 300 retries for this specific task | 11:12 |
signed | Oh okay | 11:12 |
noonedeadpunk | It usually succeeds within 40 or so for a reasonably fast connection/CPU | 11:13 |
noonedeadpunk | in systemd_mount we even have a task that is usually expected to fail, but is rescued, so you won't see any failure in the result of the play | 11:14 |
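The async-then-poll pattern described here looks roughly like the sketch below (the command and task names are hypothetical; the real tasks live in the OSA roles):

```yaml
- name: Kick off a long-running operation in the background
  ansible.builtin.command: /usr/local/bin/long-running-job  # hypothetical command
  async: 1800  # allow up to 30 minutes
  poll: 0      # return immediately instead of blocking
  register: long_job

- name: Poll the async job until it finishes
  ansible.builtin.async_status:
    jid: "{{ long_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 300  # the "like 300 retries" mentioned above
  delay: 5
```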
opendevreview | Merged openstack/openstack-ansible stable/2023.2: Modify RGW client format https://review.opendev.org/c/openstack/openstack-ansible/+/904855 | 11:24 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts stable/2023.1: Fix resolved config on Debian https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/905082 | 11:51 |
noonedeadpunk | fwiw, blazar patch depends on openstack_resources implementation (the one that handles aggregate creation) | 11:53 |
noonedeadpunk | https://review.opendev.org/c/openstack/openstack-ansible-os_blazar/+/904876 | 11:53 |
jrosser | given we are early in the cycle we should review/merge the openstack_resources stuff asap to get it tested | 11:56 |
noonedeadpunk | the only thing that fails is somehow the TLS upgrade job for magnum: https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/901185 | 11:58 |
noonedeadpunk | I kinda have little idea about reasons to be frank | 11:58 |
noonedeadpunk | `Cluster type (vm, Unset, kubernetes) not supported` | 11:58 |
noonedeadpunk | but how tls is different from non-tls is mysterious to me | 11:59 |
jrosser | that should be the OS type, ubuntu/coreos or whatever magnum wants | 11:59 |
jrosser | so perhaps the issue is actually with the image upload not setting the OS metadata on the image | 12:00 |
noonedeadpunk | oh, it issues http request to keystone | 12:00 |
noonedeadpunk | yeah, indeed. | 12:01 |
noonedeadpunk | but regular upgrade job passes at the same time :( | 12:01 |
noonedeadpunk | But yeah, I see what you mean... | 12:02 |
jrosser | does the upgrade switch to TLS mode or start that way | 12:02 |
noonedeadpunk | I think it switches to tls.... | 12:02 |
noonedeadpunk | damiandabrowski: do you recall?:) ^ | 12:02 |
jrosser | http vs https for keystone could be a magnum.conf error / a service not restarted and still using old settings? | 12:03 |
noonedeadpunk | the sad part is that we somehow don't have /etc in the upgrade job logs | 12:04 |
jrosser | i was just going to say, where are they?! | 12:05 |
jrosser | we did merge my patch that refactored the log collection | 12:07 |
jrosser | that would be no.1 suspect /o\ | 12:07 |
jrosser | looks like we have the /etc/ logs for non-upgrade jobs | 12:08 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_heat master: Deprecate and remove heat_deferred_auth_method variable https://review.opendev.org/c/openstack/openstack-ansible-os_heat/+/905109 | 12:10 |
damiandabrowski | noonedeadpunk: I fixed similar issue some time ago: https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/897526 | 12:10 |
damiandabrowski | so I'm surprised it showed up again :/ | 12:11 |
jrosser | oh well, does the change to add the openstack_resources role remove that wait? | 12:12 |
jrosser | should be ok https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/901185 | 12:12 |
noonedeadpunk | damiandabrowski: I think it's different actually, as things pass down to tempest | 12:15 |
noonedeadpunk | And only during tempest run magnum fails to create a cluster | 12:15 |
damiandabrowski | ah, you're right | 12:15 |
noonedeadpunk | But in tempest log I do see it asks keystone over http | 12:16 |
noonedeadpunk | but it gets a reply as well... | 12:16 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_magnum master: Move insecure param to keystone_auth section https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/905110 | 12:17 |
noonedeadpunk | ^ it's unrelated fwiw | 12:17 |
noonedeadpunk | will try to reproduce I guess... | 12:22 |
signed | AIO LXC deployment succeeded and I have Horizon working | 12:24 |
noonedeadpunk | actually... it might be needed to explicitly specify Horizon in SCENARIO when doing a metal installation... as I guess we still don't have it in "base" for metal | 12:28 |
signed | I don't know, I'm pretty sure it installed (AFAIK it's the only service that listens on HTTP) | 12:30 |
noonedeadpunk | well, haproxy always listens on http these days | 12:31 |
noonedeadpunk | but it does not have backends | 12:31 |
noonedeadpunk | it's also there for serving security.txt and some extra mappings if needed | 12:32 |
noonedeadpunk | So the fact that smth is listening on 80/443 doesn't mean that horizon is installed | 12:33 |
signed | You know better than me, anyway I have a working cluster, thanks a lot | 12:33 |
opendevreview | Merged openstack/openstack-ansible stable/2023.1: Modify RGW client format https://review.opendev.org/c/openstack/openstack-ansible/+/904856 | 12:57 |
opendevreview | Merged openstack/openstack-ansible-os_magnum stable/2023.1: Add missing magnum octavia client configuration https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/904509 | 13:01 |
opendevreview | Merged openstack/openstack-ansible-os_blazar master: Fix Blazar authentication and endpoints definition https://review.opendev.org/c/openstack/openstack-ansible-os_blazar/+/904791 | 13:38 |
noonedeadpunk | #startmeeting openstack_ansible_meeting | 15:00 |
opendevmeet | Meeting started Tue Jan 9 15:00:16 2024 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:00 |
opendevmeet | The meeting name has been set to 'openstack_ansible_meeting' | 15:00 |
noonedeadpunk | #topic rollcall | 15:00 |
noonedeadpunk | o/ | 15:00 |
jrosser | o/ hello | 15:01 |
noonedeadpunk | #topic bug triage | 15:05 |
noonedeadpunk | So, we have quite a few bugs. Some are already patched, but on some I'd love to get input | 15:05 |
noonedeadpunk | #link https://bugs.launchpad.net/openstack-ansible/+bug/2048659 | 15:06 |
jrosser | sounds reasonable | 15:07 |
noonedeadpunk | so I'm thinking about what we want to do here | 15:07 |
noonedeadpunk | Like removing cdrom from the lxc container repos is the bare minimum | 15:08 |
noonedeadpunk | But should we also try to remove that from host repos? | 15:08 |
jrosser | there wasn't an example with the cdrom attached to the bug? | 15:08 |
noonedeadpunk | nope | 15:08 |
NeilHanlon | o/ hiya | 15:09 |
noonedeadpunk | So we likely need to get a CDROM installation somewhere/somehow.... | 15:09 |
noonedeadpunk | to get smth to parse | 15:09 |
noonedeadpunk | going next | 15:13 |
noonedeadpunk | #link https://bugs.launchpad.net/openstack-ansible/+bug/2048209 | 15:13 |
noonedeadpunk | In fact not sure what's going wrong here and was not able to reproduce the issue | 15:14 |
jrosser | `add-apt-repository --remove <whatever> || true` or something | 15:14 |
jrosser | anyway | 15:14 |
noonedeadpunk | I think we should do that only on containers? Not touch metal hosts? | 15:15 |
jrosser | yeah | 15:16 |
jrosser | so the prep script could do that if we know the thing to remove | 15:16 |
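If the cdrom entries are plain sources.list lines, a prep task could be as small as this sketch (assuming the stock Ubuntu /etc/apt/sources.list path):

```yaml
- name: Drop any cdrom entries from apt sources
  ansible.builtin.lineinfile:
    path: /etc/apt/sources.list
    regexp: '^deb\s+cdrom:'  # matches lines like: deb cdrom:[Ubuntu 22.04 ...]
    state: absent
```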
jrosser | regarding the magnum roles thing | 15:17 |
jrosser | andrewbonney has made a magnum test environment yesterday and i don't think we ran into that | 15:17 |
jrosser | and i agree that this looks like a bug that was fixed long long ago | 15:17 |
andrewbonney | Yeah, ran with latest 2023.1 tag | 15:18 |
noonedeadpunk | I will suggest running a test playbook then that will do pretty much the same logic | 15:18 |
jrosser | was there a bug where we handled single roles / lists of roles incorrectly? | 15:20 |
jrosser | this https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/896017 | 15:21 |
jrosser | hmm thats not the same at all | 15:21 |
noonedeadpunk | nah, I was thinking about different issue | 15:22 |
jrosser | could be the SDK version in the utility container somehow messed up | 15:22 |
noonedeadpunk | I was thinking about this: https://opendev.org/openstack/openstack-ansible-plugins/commit/a4357fbb9a43f44bfee72b01db219f080268fbe7 | 15:23 |
noonedeadpunk | but yeah, what confuses me is that the data the plugin claims is missing is in the args | 15:23 |
noonedeadpunk | so it pretty much looks like some collection issue | 15:23 |
noonedeadpunk | though it was reported as reproducible error | 15:24 |
jrosser | because the error actually is from the SDK, not the collection | 15:24 |
noonedeadpunk | Yeah, could be actually | 15:24 |
noonedeadpunk | that is the good point | 15:24 |
jrosser | well what i mean is consistency between the collection on the controller and the SDK on the utility host needs to be correct | 15:24 |
noonedeadpunk | I think, in 27.3.0 I did bump the collection version.... | 15:25 |
damiandabrowski | hi! (sorry, my meeting delayed) | 15:25 |
noonedeadpunk | nah..... only plugins and config_template https://opendev.org/openstack/openstack-ansible/commit/80f717a1d82bd5561d2a71a820ee4999d7abf87d | 15:26 |
noonedeadpunk | and u-c seems to be left intact as well.... | 15:26 |
noonedeadpunk | Ok, next | 15:27 |
noonedeadpunk | #link https://bugs.launchpad.net/openstack-ansible/+bug/2048284 | 15:28 |
noonedeadpunk | this is very good one | 15:28 |
jrosser | suggest to rebuild utility container i think | 15:28 |
noonedeadpunk | which will take us quite some effort to implement I believe | 15:28 |
noonedeadpunk | So the GPG key definition to install should be part of the repo stanza I assume. | 15:29 |
noonedeadpunk | Then I guess we in fact need a module to do apt_key basically | 15:30 |
noonedeadpunk | except it won't use apt_key | 15:30 |
noonedeadpunk | but will be able to convert to gpg format when the key is not in it | 15:30 |
noonedeadpunk | and download/paste regardless of source (data/url/file/etc) | 15:31 |
noonedeadpunk | And I guess we need to cover that sooner rather than later, as I guess apt-key will be absent in 24.04 | 15:31 |
jrosser | well i did see in the latest debian that the structure of the /etc/apt directory is very much changed | 15:34 |
jrosser | it would need looking at but this might make it easier to manage config fragments than before | 15:34 |
jrosser | example https://2b411714dff7c938c230-49130948639ed40b079dd8450de896f5.ssl.cf5.rackcdn.com/878794/33/check/openstack-ansible-deploy-infra_lxc-debian-bookworm/c4d34b5/logs/etc/host/apt/trusted.gpg.d/ | 15:36 |
noonedeadpunk | so, I guess the idea now is to have an explicit path to a GPG key per repo | 15:40 |
noonedeadpunk | and have `signed-by` | 15:41 |
noonedeadpunk | ie https://www.digitalocean.com/community/tutorials/how-to-handle-apt-key-and-add-apt-repository-deprecation-using-gpg-to-add-external-repositories-on-ubuntu-22-04#option-1-adding-to-sources-list-directly | 15:41 |
noonedeadpunk | or are you saying that it should be just enough to put the key into trusted.gpg.d? | 15:42 |
jrosser | well actually i wonder if we should migrate to this everywhere https://docs.ansible.com/ansible/latest/collections/ansible/builtin/deb822_repository_module.html | 15:45 |
jrosser | maybe the `signed_by` parameter allows us enough flexibility | 15:46 |
jrosser | if we were to change anything at all it would be worth migrating to the modern ansible module and regularising the data around that | 15:46 |
noonedeadpunk | Ah, I fully missed the existence of this module | 15:47 |
jrosser | `Either a URL to a GPG key, absolute path to a keyring file, one or more fingerprints of keys either in the trusted.gpg keyring or in the keyrings in the trusted.gpg.d/ directory, or an ASCII armored GPG public key block.` | 15:47 |
jrosser | is that good enough? | 15:47 |
noonedeadpunk | yes, should be totally fine actually | 15:48 |
jrosser | cool | 15:48 |
noonedeadpunk | that really slipped my attention | 15:48 |
noonedeadpunk | (module name is also slightly cumbersome) | 15:48 |
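For illustration, a deb822_repository task with an explicit signing key might look like this (the repo name and URLs here are hypothetical):

```yaml
- name: Define a repo in deb822 format together with its signing key
  ansible.builtin.deb822_repository:
    name: example                       # hypothetical repository name
    types: deb
    uris: https://repo.example.com/apt  # hypothetical URL
    suites: jammy
    components: main
    signed_by: https://repo.example.com/key.asc  # URL, keyring path, or armored key block
```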
noonedeadpunk | #topic office hours | 15:48 |
noonedeadpunk | I guess we're about out of time for bugs, despite there being a couple still around (less important) | 15:49 |
jrosser | so i did want to discuss cluster-api a bit | 15:49 |
jrosser | andrewbonny and i are putting quite some effort into the patches for the vexxhost driver | 15:50 |
jrosser | and we need to be sure that we do the right thing by putting it in the ops repo | 15:51 |
jrosser | the integration for an 'easy' AIO is pretty wild https://review.opendev.org/c/openstack/openstack-ansible-ops/+/902178 | 15:52 |
jrosser | and this is only the start as it will need overridable playbook hooks put into a bunch of the existing openstack-ansible/playbooks/* | 15:53 |
noonedeadpunk | Suggest placing that all in the integrated repo from the beginning? | 15:55 |
jrosser | well perhaps | 15:55 |
jrosser | even today there is talk in #openstack-containers of keeping the other driver out of tree | 15:56 |
jrosser | though there are probably downsides to putting it in os_magnum as well | 15:57 |
jrosser | maybe we don't always want to be carrying collection dependencies on the k8s stuff | 15:57 |
jrosser | though correct use of include vs import might make that not be an issue | 15:58 |
noonedeadpunk | as well as installing a bunch of vexxhost collections for everyone... | 15:59 |
jrosser | right | 15:59 |
noonedeadpunk | frankly speaking I don't know at this point | 16:00 |
jrosser | i am a bit sad about it all tbh | 16:00 |
jrosser | because for the first time ever magnum "just worked" in our environment | 16:01 |
noonedeadpunk | On one side I do fully understand and share your concerns. On the other, I don't want to support or be in any way "responsible" for having that as a supported part of the project.... | 16:01 |
noonedeadpunk | And at the same time I will highly likely install one of the capi drivers this year as well | 16:01 |
noonedeadpunk | so pretty much interested in a good outcome as well | 16:01 |
jrosser | i can continue with what i'm doing and add a bunch of playbook hooks | 16:04 |
jrosser | that might be a good feature anyway | 16:04 |
noonedeadpunk | If you think it's unreasonably cumbersome - we may in fact prefer adding it all in tree somehow | 16:07 |
jrosser | this is why i'd really like someone else to have a go with it | 16:07 |
noonedeadpunk | maybe through naming/comments/documentation we could remove liability for the feature from ourselves.... | 16:07 |
jrosser | i think my tolerance of a complex setup is quite high | 16:07 |
jrosser | but that might not be good for everyone | 16:08 |
noonedeadpunk | then it should not be me lol | 16:08 |
jrosser | and ideally it should be trivial to make an AIO | 16:08 |
jrosser | and trivial to make a CI job | 16:08 |
noonedeadpunk | yeah.... | 16:08 |
jrosser | eventually for a production deployment we are managing really fine with it out of tree, in a collection in the ops repo | 16:09 |
jrosser | but making a CI job in os_magnum is pretty challenging | 16:10 |
jrosser | as we have to "call out" to the collection | 16:10 |
jrosser | at some point after magnum and before tempest | 16:11 |
noonedeadpunk | #endmeeting | 16:28 |
opendevmeet | Meeting ended Tue Jan 9 16:28:09 2024 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:28 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2024/openstack_ansible_meeting.2024-01-09-15.00.html | 16:28 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2024/openstack_ansible_meeting.2024-01-09-15.00.txt | 16:28 |
opendevmeet | Log: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2024/openstack_ansible_meeting.2024-01-09-15.00.log.html | 16:28 |