15:02:27 <noonedeadpunk> #startmeeting openstack_ansible_meeting
15:02:27 <opendevmeet> Meeting started Tue Jun 11 15:02:27 2024 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:02:27 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:27 <opendevmeet> The meeting name has been set to 'openstack_ansible_meeting'
15:02:33 <noonedeadpunk> #topic rollcall
15:02:35 <noonedeadpunk> o/
15:02:45 <damiandabrowski> hi! good to be back after 2 weeks of absence \o/
15:02:50 <hamburgler> o/
15:03:08 <jrosser> o/ hello
15:04:07 <mgariepy> hey
15:04:25 <NeilHanlon> o/
15:06:13 <noonedeadpunk> #topic office hours
15:06:25 <noonedeadpunk> so. we currently have a bug with magnum and qmanager
15:06:29 <noonedeadpunk> at least with magnum
15:06:34 <noonedeadpunk> and there are kind of 2 ways to handle it.
15:06:45 <jrosser> yeah - i think this is more widespread if you look in codesearch
15:07:02 <noonedeadpunk> first - revert the revert of enabling it (disable qmanager again)
15:07:10 <noonedeadpunk> technically - the final release hasn't been cut yet
15:08:48 <noonedeadpunk> so if we merge that now - the final release can contain it disabled and help avoid a mass bug
15:08:54 <noonedeadpunk> #link https://review.opendev.org/c/openstack/openstack-ansible/+/921726
15:09:18 <noonedeadpunk> and then we can take time to do patches like https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/921690 and backport them
15:09:23 <noonedeadpunk> (or not)
15:10:00 <noonedeadpunk> and I also proposed to set jobs to NV: https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/921727
15:10:10 <jrosser> i think it would be ok to backport those with the qmanager disabled
15:10:37 <jrosser> then we can opt in to test it easily, which is kind of how we ended up in trouble - by not really having a testing window for it
15:11:40 <noonedeadpunk> yeah
15:12:01 <jrosser> what to think about is when we can settle on a new set of defaults and remove a lot of the complexity for switching queue types
15:12:05 <noonedeadpunk> let me backport right away
15:12:05 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/2024.1: Revert "Enable oslomsg_rabbit_queue_manager by default"  https://review.opendev.org/c/openstack/openstack-ansible/+/921775
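For context, the revert above flips the default back off; a deployment that wants to keep exercising the queue manager can opt back in via user_variables.yml. A minimal sketch, assuming the standard OSA override mechanism - the variable name is taken from the patch title above, the placement and comments are illustrative:

    # user_variables.yml
    # keep the queue manager disabled (the default again after the revert)
    oslomsg_rabbit_queue_manager: false

    # or opt back in on a test deployment to help shake out bugs like the magnum one
    # oslomsg_rabbit_queue_manager: true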
15:12:26 <jrosser> making preparations for rabbitmq 4
15:12:32 <noonedeadpunk> so, ha queues are dropped with rabbitmq 4.0
15:12:58 <noonedeadpunk> but then it could be a valid thing to just use CQv2 without HA
15:13:17 <jrosser> 'late 2024' for that
15:13:43 <jrosser> so really for D we need to be getting everything really solid for quorum queues and considering removing support for HA
15:13:45 <noonedeadpunk> I think, we'll release 2025.1 with rabbit 3.* still...
15:13:56 <noonedeadpunk> or that
15:14:20 <noonedeadpunk> but given we keep the option for CQv2 - all the migration complexity will kind of stay
15:14:40 <jrosser> it depends on whether we want to allow handling rabbitmq3 -> 4 and HA -> quorum in the same upgrade, which might be all kinds of /o\
15:16:03 <noonedeadpunk> nah, I guess HA->quorum should be finished before 4
15:16:36 <noonedeadpunk> and that's why I was thinking to still have 3.* for 2025.1...
15:17:06 <noonedeadpunk> but then the CQ<->QQ migration will still be there, right?
15:17:43 <jrosser> well it's kind of a decision to make about what we support
15:17:51 <jrosser> and that vs complexity
15:18:19 <noonedeadpunk> historically there were a bunch of deployments that opted out of HA
15:18:31 <noonedeadpunk> which might still be a valid choice with quorum
15:18:55 <noonedeadpunk> as CQv2 is still going to be more performant, I assume
15:20:04 <noonedeadpunk> dunno
15:20:28 <jrosser> ok - well whichever way we have some fixing up to do
15:22:53 <noonedeadpunk> Eventually, what we can do in 2024.2 is remove the HA policy
15:23:24 <noonedeadpunk> then what you're left with is either migrating to quorum or regular CQv2 with no HA
15:23:42 <noonedeadpunk> this potentially opens the path to the 4.0 upgrade whenever we're confident in it
15:24:10 <noonedeadpunk> but yes, first we potentially have some fixing to do...
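To make the two end states concrete, a hedged sketch of what the operator-facing choice could look like in user_variables.yml once the HA policy is gone - the quorum toggle name is assumed here from the oslomsg_rabbit_* naming convention and should be checked against the actual role defaults:

    # assumed variable name - verify against the oslo.messaging role defaults
    # option 1: quorum queues (replicated, the intended long-term default)
    oslomsg_rabbit_quorum_queues: true

    # option 2: plain CQv2 without HA (more performant, no replication)
    # oslomsg_rabbit_quorum_queues: false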
15:26:09 <jrosser> ok so the other thing i found today was broken cinder/lvm in the aio
15:26:16 <noonedeadpunk> so, are we landing 921726 and backporting right away?
15:26:32 <noonedeadpunk> cinder/lvm/lxc I guess?
15:26:39 <noonedeadpunk> yeah...
15:26:58 <jrosser> yes i think we merge 921726
15:28:08 <noonedeadpunk> ok, then I'm blocking https://review.opendev.org/c/openstack/releases/+/921502/2
15:28:12 <noonedeadpunk> and we do RC3
15:33:03 <noonedeadpunk> regarding cinder/lvm - were you able to find why it's in fact borked?
15:33:10 <noonedeadpunk> or still checking on that?
15:35:11 <jrosser> it is because the initiator id is not set
15:35:17 <jrosser> so pretty simple
15:35:43 <noonedeadpunk> so we should do some lineinfile or something?
15:37:51 <jrosser> perhaps
15:37:58 <jrosser> i think this is what i am not sure of
15:38:40 <jrosser> from the charms patch `Cloud images including MAAS ones have "GenerateName=yes" instead of "InitiatorName=... on purpose not to clone the initiator name.`
15:38:45 <jrosser> and on debian/ubuntu there is a script run as part of starting the iscsid service to ensure that the ID is generated, if needed
15:39:27 <jrosser> but i can't see anything like this on centos/rocky
15:39:31 <jrosser> NeilHanlon: ^ ?
15:40:43 <jrosser> tbh i have never set up iscsi myself so i don't know where responsibility lies for creating the ID in a real deployment
15:41:09 <jrosser> so this might be a CI specific thing
15:41:36 <noonedeadpunk> though, people would expect it to work...
15:41:45 <noonedeadpunk> I bet I've seen bugs
15:42:06 <noonedeadpunk> #link https://bugs.launchpad.net/openstack-ansible/+bug/1933279
15:42:25 <noonedeadpunk> but there was more....
15:43:40 <jrosser> oh actually `service iscsi start` is enough to generate the initiator name on rocky
15:43:51 <jrosser> so maybe this is just what we need to do for LVM
15:44:39 <noonedeadpunk> on side of ... cinder-volume I assume?
15:44:44 <jrosser> yeah
15:45:21 <jrosser> in here i guess https://github.com/openstack/openstack-ansible-os_cinder/blob/master/tasks/cinder_lvm_config.yml
15:46:11 <jrosser> ok i will make a patch for this
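A minimal sketch of what such a task in cinder_lvm_config.yml could look like, assuming the fix is simply making sure the iscsi service has run once so the initiator name gets generated; the task name, the cinder_backend_lvm_inuse condition and the exact unit name ("iscsi" vs "iscsid" differs per distro) are illustrative assumptions, not the actual patch:

    # hypothetical task - start the initiator service once so
    # /etc/iscsi/initiatorname.iscsi gets a generated InitiatorName
    - name: Ensure the iSCSI initiator service is started
      ansible.builtin.service:
        name: iscsid
        state: started
        enabled: true
      when: cinder_backend_lvm_inuse | bool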
15:48:00 <noonedeadpunk> jrosser: we have that: https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/tasks/cinder_post_install.yml#L150-L158
15:48:31 <noonedeadpunk> so probably it's wrong or not enough now...
15:48:44 <jrosser> not quite https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/vars/debian.yml#L21
15:48:52 <noonedeadpunk> as these are just lioadm/tgtadm, which is a different thing
15:49:41 <noonedeadpunk> but potentially having another service started somewhere nearby might make sense...
15:50:31 <jrosser> i think iscsid is for persistent config, and probably cinder makes exported volumes as needed on the fly
15:53:03 <NeilHanlon> hm. i'm not really sure why that would happen or be the case... i don't really use that much iscsi myself, either
15:58:27 <NeilHanlon> actually, there's `iscsi-iname` from iscsi-initiator-utils -- perhaps this is what is needed
15:58:39 <NeilHanlon> >iscsi-iname generates a unique iSCSI node name on every invocation.
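Tying that back to the lineinfile idea above, a hedged sketch of the explicit alternative - generating a name with iscsi-iname only when one isn't already set (as the charms note says, cloud images ship GenerateName=yes instead of InitiatorName); all task details here are illustrative:

    # hypothetical alternative - write an InitiatorName only if the file doesn't already have one
    - name: Generate an iSCSI initiator name if missing
      ansible.builtin.shell: |
        if ! grep -q '^InitiatorName=' /etc/iscsi/initiatorname.iscsi 2>/dev/null; then
          echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
        fi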
15:59:56 <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible master: Collect iscsid config for CI jobs  https://review.opendev.org/c/openstack/openstack-ansible/+/921778
16:00:15 <noonedeadpunk> makes sense
16:00:21 <noonedeadpunk> #endmeeting