15:01:02 <noonedeadpunk> #startmeeting openstack_ansible_meeting
15:01:02 <opendevmeet> Meeting started Tue Feb 27 15:01:02 2024 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:02 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:02 <opendevmeet> The meeting name has been set to 'openstack_ansible_meeting'
15:01:08 <noonedeadpunk> #topic rollcall
15:01:10 <noonedeadpunk> o/
15:01:27 <damiandabrowski> hi!
15:01:43 <jrosser> o/ hello
15:03:18 <noonedeadpunk> #topic office hours
15:03:33 <noonedeadpunk> so, it feels it's really high time for new point releases
15:03:48 <noonedeadpunk> though I saw some "blockers" which would be nice to handle first
15:04:12 <noonedeadpunk> seems https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/909868 was quite important, for instance
15:04:45 <jrosser> 2023.1 is totally blocked i think
15:05:20 <noonedeadpunk> yep, by Yoga upgrade
15:05:42 <noonedeadpunk> So we need to land the Yoga upgrade disablement first: https://review.opendev.org/c/openstack/openstack-ansible/+/910220
15:05:53 <jrosser> i looked at how to handle stable|unmaintained but that was just /o\ complicated
15:06:18 <noonedeadpunk> Yeah, I also failed to get us access to unmaintained.
15:06:30 <noonedeadpunk> And frankly - this branch removal/adding is quite confusing...
15:09:21 <NeilHanlon> o/ sorry i'm late
15:10:35 <noonedeadpunk> I also didn't check either the docs for the ops repo, or the octavia and ovn scenario in AIO
15:11:13 <jrosser> i need some direction on the magnum patches
15:11:33 <jrosser> well not so much magnum, but the fixing * else that seems to be also involved :(
15:11:55 <jrosser> specifically tempest resource creation, it's just a gigantic mess now
15:12:49 <noonedeadpunk> yup
15:12:59 <noonedeadpunk> I know...
15:13:08 <jrosser> i think that i can make time this week to just strip everything to do with resource creation out of os_tempest
15:13:14 <jrosser> and port it to openstack_resources
15:13:31 <jrosser> but we should decide if that is a good idea or not
15:14:23 <noonedeadpunk> that is a very good question
15:14:39 <noonedeadpunk> as the problematic part is that plenty of logic and weirdness lies in the tempest role itself
15:14:59 <jrosser> i am wondering if that is just historical accumulation
15:14:59 <noonedeadpunk> and I guess the end goal of all that would be to just skip tempest, but do have some resources?
15:15:10 <jrosser> yes that's right
15:15:30 <noonedeadpunk> And basically only the public network is needed iirc
15:15:32 <jrosser> but you can't do that just now without making the logic in the tempest role even more complicated
15:16:03 <jrosser> ultimately there is actually not much needed in tempest.conf
15:16:16 <jrosser> flavor / image id * 2, network id
15:16:19 <jrosser> maybe one more
15:17:03 <jrosser> so i was thinking to make it possible to pass in a name -> os_tempest looks up the id
15:17:09 <jrosser> or pass the id directly
15:17:30 <jrosser> and move all the creation stuff out of the role completely
15:18:04 <jrosser> as even if we use openstack_resources that doesn't really return the id to re-use later
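
A minimal sketch of the name-or-id approach jrosser describes above, assuming the openstack.cloud collection (>= 2.x, where networks_info returns results under a "networks" key) and hypothetical tempest_public_net_name / tempest_public_net_id variables; this illustrates the lookup idea only and is not the existing os_tempest code:

# Illustrative sketch only - not the current os_tempest implementation.
# Accept either an id or a name; when only a name is given, resolve it to
# an id with openstack.cloud.networks_info.
- name: Look up the public network by name when no id was given
  openstack.cloud.networks_info:
    cloud: default
    name: "{{ tempest_public_net_name }}"      # hypothetical variable
  register: _public_net_lookup
  when: tempest_public_net_id is not defined   # hypothetical variable

- name: Use the looked-up id when none was passed in directly
  ansible.builtin.set_fact:
    tempest_public_net_id: "{{ _public_net_lookup.networks[0].id }}"
  when: tempest_public_net_id is not defined

Either way the role would only ever consume an id, and all resource creation could live outside of it, for example in openstack_resources or a playbook-level include.
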
15:18:04 <noonedeadpunk> but why do you still try to install it at all instead of just disabling it as a whole and including openstack_resources just here https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906363/14/mcapi_vexxhost/playbooks/install_and_test.yml#14 ?
15:18:33 <noonedeadpunk> yeah, the output of the openstack_resources result is actually a good topic on its own
15:18:44 <noonedeadpunk> and if that should be covered
15:19:27 <noonedeadpunk> maybe registering results or output to some local facts might be useful...
15:19:32 <jrosser> well maybe you are right and i was trying too hard to make a general solution
15:19:57 <noonedeadpunk> I mean - doing a general solution is the perfect scenario
15:20:06 <noonedeadpunk> But given the amount of overhead...
15:20:33 <noonedeadpunk> Maybe it should not be a blocker and we just need to iterate over things
15:20:44 <jrosser> yes tbh this is a better way to look at it
15:20:45 <noonedeadpunk> I still think we should do smth with tempest.
15:21:03 <jrosser> seems everyone is busy++ so need to take a tractable path
15:21:15 <noonedeadpunk> but this should not really block capi from my perspective. Or at least if there's a way to unblock - better do that
15:21:31 <noonedeadpunk> Yes, until end of March I'm really just /o\
15:21:40 <noonedeadpunk> So is damiandabrowski
15:22:31 <noonedeadpunk> I do hope to be able to catch up though once the thing we're working on is done.
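
On the "registering results to some local facts" idea mentioned above, a minimal sketch of one way that could look, assuming a hypothetical _created_network variable holding an openstack_resources result; the fact file name is also chosen purely for illustration:

# Illustrative sketch only: persist ids created during resource provisioning
# into /etc/ansible/facts.d so later plays can read them back via ansible_local.
- name: Persist created resource ids as a local fact
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/openstack_resources.fact
    content: "{{ {'public_network_id': _created_network.id} | to_nice_json }}"
    mode: "0644"
  become: true

# After the next fact gathering the value is available to any play as
#   ansible_local['openstack_resources']['public_network_id']
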
15:24:14 <noonedeadpunk> Also, I guess it's time to start populating the PTG etherpad....
15:24:34 <noonedeadpunk> Let it be the link
15:24:36 <noonedeadpunk> #link https://etherpad.opendev.org/p/osa-dalmatian-ptg
15:24:53 <NeilHanlon> 🥳
15:25:47 <noonedeadpunk> and I'm adding ceph-ansible right away.
15:25:53 <NeilHanlon> yes.
15:26:04 <noonedeadpunk> Will populate it with leftovers from the caracal ptg as well
15:26:27 <noonedeadpunk> but also - we probably should pick a timeframe for the PTG
15:27:16 <noonedeadpunk> We can do "as usual" Tuesday - 14 - 17 UTC?
15:28:03 <noonedeadpunk> or 15 - 18
15:28:39 <noonedeadpunk> or should I make some kind of poll to vote on it?
15:29:02 <jrosser> what actual date is this?
15:30:04 <noonedeadpunk> good question
15:30:23 <noonedeadpunk> April 9
15:30:28 <NeilHanlon> April 8-12, 2024
15:30:36 <NeilHanlon> yep, so the 9th
15:31:00 <NeilHanlon> i'm flexible, but will be traveling to Texas for a conference on 4/11
15:33:04 <jrosser> hmm that is during school holidays for me so 50/50 at best for the whole week
15:33:26 <noonedeadpunk> ouch
15:33:37 <noonedeadpunk> that's definitely bad timing for the PTG then...
15:34:12 <noonedeadpunk> but eventually, looking at the scope for Caracal, it slightly feels that not much will be delivered out of it
15:34:24 <noonedeadpunk> like - incus for sure won't be done
15:34:36 <jrosser> tbh i think this is a large job
15:34:55 <noonedeadpunk> yeah...
15:35:03 <jrosser> and requires some pretty good thinking, as it is an opportunity to modernise things rather than just a drop-in replacement
15:35:41 <noonedeadpunk> I've close to never used LXD at scale, so it's hard to judge what the best practice would be
15:36:03 <jrosser> i think that personally i can only commit to smaller things than that for maybe the next cycle or two
15:36:15 <noonedeadpunk> But also I guess it should be not a drop-in but indeed smth modern which can be done as an option to the old legacy
15:36:43 <jrosser> my hunch is that we can collapse many many ansible tasks into native things in LXD/incus
15:37:35 <NeilHanlon> I think incus is reasonable for next cycle, fwiw (on the Fedora/EL side)
15:44:10 <noonedeadpunk> well, will see about time/prios for that
15:44:32 <noonedeadpunk> as that would totally be very-very appealing to have and quite a logical evolution of what we have today
15:44:39 <noonedeadpunk> with LXC
15:44:47 <jrosser> are there any bugs to look at?
15:44:55 <ThiagoCMC> I have experience with LXD, I am currently running part of my OSA (Compute, Network, and OSDs) on top of LXD Containers. I want to help!
15:45:05 <jrosser> i had a report from hamburgler3 yesterday which i have just put into launchpad
15:45:40 <noonedeadpunk> well, I mean, we also have an etherpad from the bug triage day that needs to be looked at
15:46:29 <noonedeadpunk> #link https://bugs.launchpad.net/openstack-ansible/+bug/2055178
15:46:39 <noonedeadpunk> ok, I had a very similar one lately
15:46:52 <noonedeadpunk> I didn't get to the point of finding out wtf is going on
15:47:18 <noonedeadpunk> eventually, /var/lib/haproxy/dev/log is a "chroot"
15:48:05 <noonedeadpunk> And actually... not being idempotent might be the root cause
15:48:16 <noonedeadpunk> so that is potentially a good catch
15:48:29 <jrosser> my thoughts were why we needed to do any of this
15:49:00 <jrosser> as i would expect the distro packages to do the necessary stuff when haproxy is installed
15:49:34 <noonedeadpunk> well... there's your note there....
15:49:51 <jrosser> well indeed, but it has been a while and that might no longer be true
15:50:26 <noonedeadpunk> Yep, we had this exact issue being reproduced, so I for sure can look there with some priority
15:50:43 <jrosser> even needing to make the bind mount surprises me, as haproxy does this chroot thing as part of its own functionality
15:51:01 <jrosser> but i kind of feel i miss something important here
15:51:23 <noonedeadpunk> yep, true, I did just rmdir and it was created with proper permissions on restart
15:51:33 <noonedeadpunk> and well, after systemd-journald restart as well
15:52:27 <noonedeadpunk> but again - that was all on ubuntu
15:53:04 <noonedeadpunk> worth trying dropping all that for sure
15:53:22 <jrosser> maybe it is as simple as boot a centos / ubuntu vm and check that haproxy can log to the journal out of the box
15:53:27 <jrosser> if so we can delete all of this
15:54:35 <noonedeadpunk> ++
15:55:00 <noonedeadpunk> btw, we've also tested and slightly adapted andrew's patch to keystone: https://review.opendev.org/c/openstack/keystone/+/910337
15:55:14 <noonedeadpunk> so if you can check if it still works for you - that would be great :)
15:55:21 <noonedeadpunk> but yes
15:57:29 <jrosser> oh i did see that yes
15:57:35 <jrosser> we can look at that maybe next week
15:59:08 <noonedeadpunk> #endmeeting
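
As a follow-up to the haproxy logging discussion above, a minimal sketch of the check jrosser suggests (boot a fresh centos/ubuntu VM and see whether the stock haproxy package logs to the journal without any extra socket or bind-mount handling); the test_vm inventory group is a placeholder:

# Illustrative check only: install the distro haproxy package on a fresh VM,
# restart it, and look for its messages in the journal. If they show up
# without any /var/lib/haproxy/dev/log handling, the role's extra socket
# and bind-mount logic may indeed be droppable.
- name: Check that stock haproxy can log to the journal out of the box
  hosts: test_vm          # hypothetical inventory group
  become: true
  tasks:
    - name: Install the distro haproxy package
      ansible.builtin.package:
        name: haproxy
        state: present

    - name: Restart haproxy to generate fresh log lines
      ansible.builtin.service:
        name: haproxy
        state: restarted

    - name: Read recent haproxy entries from the journal
      ansible.builtin.command: journalctl -u haproxy --no-pager -n 20
      register: _haproxy_journal
      changed_when: false

    - name: Show what landed in the journal
      ansible.builtin.debug:
        var: _haproxy_journal.stdout_lines
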