15:00:40 <noonedeadpunk> #startmeeting openstack_ansible_meeting
15:00:40 <opendevmeet> Meeting started Tue May 17 15:00:40 2022 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:40 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:40 <opendevmeet> The meeting name has been set to 'openstack_ansible_meeting'
15:00:47 <noonedeadpunk> #topic rollcall
15:00:55 <noonedeadpunk> o/
15:01:46 <damiandabrowski[m]> hi!
15:04:17 <jrosser> o/ hello
15:04:38 <noonedeadpunk> #topic hours
15:05:09 <noonedeadpunk> so, eventually, as you might all guess, I haven't done anything of what I promised to do last week, which upsets me a lot....
15:06:02 <noonedeadpunk> Like 10 mins before the meeting I started looking into the PKI stuff for octavia as that seemed pretty close
15:07:06 <noonedeadpunk> Didn't have a chance to play with the gluster stuff though
15:07:19 <noonedeadpunk> but from the codebase I don't see any big issues right now
15:08:21 <jrosser> i nearly have the whole stack passing CI now https://review.opendev.org/q/topic:osa-gluster
15:08:40 <jrosser> just a problem where uninstalling rsync on metal jobs breaks the CI log upload :)
15:09:25 <jrosser> but only for ubuntu /o\
15:10:44 <noonedeadpunk> I'm not sure I'd fully realized the whole mess with phased packages :( https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/840518
15:11:05 <noonedeadpunk> Do we need to enable phased updates also for Focal?
15:11:37 <jrosser> oh yes there was a big mess with that
15:11:48 <jrosser> phasing is only after Focal
15:12:32 <noonedeadpunk> then should we add some conditional inside the script for the os release?
15:12:49 <noonedeadpunk> as now it's a global prep script
15:13:00 <noonedeadpunk> (which also targets Debian)
15:13:32 <jrosser> we could do, though it does seem to be OK
15:13:46 <noonedeadpunk> ok then
15:14:02 <noonedeadpunk> I guess it's the last patch to land for jammy?
15:14:03 <jrosser> APT appears to disregard that config on the other OS
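(A minimal sketch of the kind of override being discussed, for context: APT::Get::Always-Include-Phased-Updates is a real apt option (apt >= 2.1.16), but the conditional, version check and file path below are illustrative only, not the actual lxc_hosts prep script.)

    # Hedged sketch - force apt to always include phased updates so CI
    # hosts don't diverge from the mirrors; gate it on the OS release,
    # since phasing only applies to Ubuntu after Focal and apt ignores
    # the option elsewhere anyway.
    . /etc/os-release
    if [ "${ID}" = "ubuntu" ] && dpkg --compare-versions "${VERSION_ID}" gt 20.04; then
        echo 'APT::Get::Always-Include-Phased-Updates "true";' \
            > /etc/apt/apt.conf.d/99-always-include-phased-updates
    fi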
15:14:30 <jrosser> yes, except for this https://review.opendev.org/c/openstack/openstack-ansible/+/839483
15:14:45 <jrosser> and i don't know how many jobs we want to introduce right now
15:15:25 <jrosser> that only puts jammy in the check pipeline, and non-voting at that
15:15:59 <jrosser> which might be reasonable for 'experimental' support then it gets switched to voting/gate for Z?
15:16:38 <noonedeadpunk> yup, I guess it's fair atm
15:16:52 <jrosser> i expect that the patches generated during branching will throw up a bunch of role failures for jammy that we've not tried yet
15:16:52 <noonedeadpunk> And indeed we should switch to voting in Z
15:17:13 <noonedeadpunk> I won't be surprised :)
15:17:38 <noonedeadpunk> And indeed NV sounds quite suitable right now for it
15:17:53 <noonedeadpunk> considering there's no maria/rabbit at the very least
15:18:13 <jrosser> then we have centos-9 i guess
15:18:43 <jrosser> https://review.opendev.org/c/openstack/openstack-ansible/+/823417 \o/
15:19:07 <jrosser> and a few small things left in here https://review.opendev.org/q/topic:osa%252Fcentos-9
15:19:17 <noonedeadpunk> And eventually we should make it voting kind of....
15:19:28 <noonedeadpunk> as centos-8 is being dropped in Z
15:19:39 <noonedeadpunk> (or well, py3.6 is)
15:19:40 <jrosser> right - though we have to decide what to do about LXC
15:20:00 <noonedeadpunk> let me guess - it's absent for it?
15:20:04 <jrosser> i just don't know, other than having some experimental code path that uses a snap, and just saying "don't use this"
15:20:36 <jrosser> i could add code to lxc_hosts to install it via snap
15:20:42 <jrosser> use at own risk :)
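(A minimal sketch of what that opt-in snap path could boil down to on an EL host, assuming snapd comes from EPEL; package names and paths are illustrative:)

    # Hedged sketch - LXD from the snap store on CentOS/Rocky.
    # NB: snaps auto-refresh by default, which is exactly the risk
    # discussed below.
    dnf install -y epel-release snapd
    systemctl enable --now snapd.socket
    ln -sf /var/lib/snapd/snap /snap   # conventional snap mount point
    snap install lxd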
15:21:47 <noonedeadpunk> so it's provided only by lxd?
15:22:55 <noonedeadpunk> well yeah, the lxc for centos 8 that we're using isn't actually maintained either
15:23:06 <jrosser> oh hmm
15:23:18 <jrosser> maybe i misunderstood - it seems lxc can't be installed on its own with a snap
15:26:31 <noonedeadpunk> oh, so you mean there's only lxd option now?
15:28:00 <jrosser> snapcraft.io only seems to show a LXD snap
15:28:24 <jrosser> that's going to bring in a whole load of stuff we don't want to manage, i think
15:28:39 <jrosser> annoyingly there is lxc4.x in fedora
15:32:06 <jrosser> i wonder how hard it is to use copr to build what we want
15:32:18 <jrosser> as really that's what we already use for c8 i think
15:33:14 <damiandabrowski[m]> i wonder what long-term solutions we have to make our lives easier... switch to LXD? maybe systemd-nspawn?
15:33:22 <jrosser> NeilHanlon: would you have any tips on how we could get lxc packages for centos9?
15:33:29 <jrosser> given that we already do this https://github.com/openstack/openstack-ansible-lxc_hosts/blob/9a4004169450a0bb68ffe0fec31e6887a9c075f3/vars/redhat-host.yml#L19-L20
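(For reference, consuming such a repo is just a couple of dnf calls; the copr project name below is a placeholder, not a real repo:)

    # Hedged sketch - enable a hypothetical copr project and install lxc.
    dnf install -y dnf-plugins-core
    dnf copr enable -y someuser/lxc4   # placeholder project name
    dnf install -y lxc lxc-templates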
15:33:45 <jrosser> damiandabrowski[m]: i made an initial start on LXD support maybe a year ago
15:34:00 <jrosser> but it would suffer from the same problem that the only viable installation on all platforms is a snap
15:34:06 <jrosser> and that's got its own problems
15:35:17 <noonedeadpunk> damiandabrowski[m]: we just got rid of nspawn as its support is absent on centos because btrfs was dropped
15:35:41 <noonedeadpunk> everything leads to containerd lol
15:35:51 <damiandabrowski[m]> jrosser: ahh :/ have we noted these problems somewhere?
15:35:53 <jrosser> fwiw we use LXD here for a bunch of things outside OSA, and before we "deliberately broke" the mandatory auto-update thing we had everything break simultaneously when new buggy lxd versions got pushed onto us
15:36:02 <damiandabrowski[m]> noonedeadpunk: fair point, thanks
15:36:53 <noonedeadpunk> yeah, auto-update of snap is nightmare
15:37:03 <noonedeadpunk> I can imagine that for galera, for example....
15:37:25 <noonedeadpunk> jrosser: you're right, fedora even has lxc4....
15:37:59 <jrosser> i understand zero about copr but i am guessing we can use that to compile the .spec file for fedora into a centos9 package
15:38:02 <noonedeadpunk> Basically, if somebody would volunteer to maintain lxc in epel for centos/rocky that would be awesome
15:38:05 <jrosser> or something similar
15:39:18 <noonedeadpunk> I also just found this: https://koschei.fedoraproject.org/package/lxc?collection=epel8 - is it the same thing?
15:40:10 <jrosser> that looks like where it comes from i think
15:40:55 <jrosser> it looks like we should even be able to find lxc3 for centos8 in epel
15:40:56 <noonedeadpunk> Oh, ok I even submitted a bug there.... https://bugzilla.redhat.com/show_bug.cgi?id=2034709
15:44:08 <jrosser> it's all here https://src.fedoraproject.org/rpms/lxc/blob/f36/f/lxc.spec
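(If anyone wanted to try that, the copr workflow for rebuilding the Fedora spec is roughly the following; the project name and chroot are illustrative:)

    # Hedged sketch - build the Fedora lxc dist-git branch for el9 in copr.
    copr-cli create lxc-el9 --chroot epel-9-x86_64
    copr-cli buildscm lxc-el9 \
        --clone-url https://src.fedoraproject.org/rpms/lxc.git \
        --commit f36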
15:45:01 <jrosser> anyway maybe we should look at bugs, we're going to be out of time
15:46:17 <noonedeadpunk> #topic bug triage
15:46:31 <noonedeadpunk> I'm looking at https://bugs.launchpad.net/openstack-ansible/+bug/1967270 at the moment
15:47:28 <noonedeadpunk> I think it's quite fair
15:48:01 <jrosser> indeed
15:48:50 * jrosser fixes
15:49:41 <noonedeadpunk> Another one that bothers me was https://bugs.launchpad.net/openstack-ansible/+bug/1971179
15:49:51 <noonedeadpunk> but I don't find it _that_ trivial
15:50:07 <noonedeadpunk> andrewbonney wrote that the related fix doesn't really cover the issue
15:50:23 <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-os_octavia master: Fix condition for deleting old amp images  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/842145
15:50:55 <noonedeadpunk> but I didn't have time to do a deep dive into CSP. And the X default worked for us nicely, but in fact we don't use horizon
15:51:30 <jrosser> i think andrewbonney may have addressed the CSP stuff
15:52:43 <jrosser> i have seen other reports of that RBD size mismatch too and never understood what was actually happening there
15:53:52 <noonedeadpunk> damiandabrowski[m] was playing with that for a while
15:54:07 <noonedeadpunk> But basically it's indeed all about chunking
15:54:25 <noonedeadpunk> disabling uwsgi helps here if anybody is facing it
15:54:47 <noonedeadpunk> basically glanceclient doesn't support chunking, while openstackclient does.
15:55:08 <noonedeadpunk> So there can be a great confusion/mess between services about who uses what
15:55:11 <damiandabrowski[m]> yeah, but i haven't found out yet why uwsgi is causing these issues :/
15:55:28 <jrosser> it is a shame there is no activity from the glance people on that bug for > 1 year
15:55:51 <jrosser> there was some stuff from the debian packaging which was about enabling chunking for uwsgi?
15:55:56 <noonedeadpunk> well... they were not very active in fixing uwsgi at all
15:56:16 <noonedeadpunk> well yes, but the client should also support that
15:56:20 <noonedeadpunk> (I guess)
15:57:36 <jrosser> "If you plan to put uWSGI behind a proxy/router be sure it supports chunked input requests (or generally raw HTTP requests)."
15:57:42 <noonedeadpunk> At least in openstacksdk I see that  https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/image/iterable_chunked_file.py
15:58:02 <noonedeadpunk> And there's nothing close to that in glanceclient. And it has its own implementation of the client object as well
15:58:48 <damiandabrowski[m]> but on the other hand, if i recall correctly, i noticed these issues only when using uwsgi+ceph backend
15:58:57 <damiandabrowski[m]> uwsgi+file backend worked fine
15:59:07 <damiandabrowski[m]> so i don't really understand what's going on
15:59:52 <noonedeadpunk> so damiandabrowski[m] was able to reproduce the issue nicely using glance image-create, but not with openstack image create
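(i.e. the comparison was along these lines; image name and source file are just examples:)

    # Hedged sketch of the two reproduction paths.
    # Legacy glanceclient - single unchunked body, hit the size mismatch:
    glance image-create --name test-img --disk-format qcow2 \
        --container-format bare --file cirros.img
    # openstackclient (openstacksdk underneath) chunks the upload - worked:
    openstack image create --disk-format qcow2 \
        --container-format bare --file cirros.img test-img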
16:00:41 <noonedeadpunk> well yeah, maybe ceph adds some extra thing here as well....
16:01:27 <noonedeadpunk> As then the upload goes client -> haproxy -> uwsgi -> ceph and each hop should chunk correctly
16:01:38 <noonedeadpunk> maybe we're somewhere out of sync in that chain....
16:01:52 <jrosser> that bug is talking about snapshots written to the ceph backend, isn't it?
16:02:16 <noonedeadpunk> if you take a snapshot of an instance - it's kind of the same?
16:02:40 <noonedeadpunk> as you have a file you need to upload to glance as an image
16:03:01 <jrosser> but potentially a different client code there?
16:03:07 <noonedeadpunk> yeah
16:03:26 <damiandabrowski[m]> +1
16:03:51 <jrosser> right now i'm not totally clear what is / is not working
16:03:52 <noonedeadpunk> and I guess horizon still uses glance client as well
16:04:02 <jrosser> might be worth updating the bug with what we know
16:04:15 <noonedeadpunk> damiandabrowski[m]: can you do that?
16:04:45 <jrosser> like if `openstack` cli is fine then thats at least one data point
16:04:50 <damiandabrowski[m]> yeah, i plan to come back to this issue when I have some time
16:04:58 <damiandabrowski[m]> our internal ticket about that is still open
16:05:26 <noonedeadpunk> yeah, I know :) But it's a good point - we should note that we're working on it and what the progress is (even if it's close to none)
16:06:07 <noonedeadpunk> and we're out of time...
16:06:10 <noonedeadpunk> #endmeeting