15:01:26 #startmeeting openstack_ansible_meeting
15:01:26 Meeting started Tue Feb 15 15:01:26 2022 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:26 The meeting name has been set to 'openstack_ansible_meeting'
15:01:26 which could be a factor here
15:01:29 osa version was the same on all nodes - 22.3.2
15:01:38 #topic bug triage
15:02:05 uwsgi version was the problem here as its version is not pinned in 22.3.2 ;) so bionic and focal had different uwsgi versions in constraints.txt
15:02:13 but maybe let's leave it until after the meeting
15:02:34 or we can discuss it during the meeting)
15:02:40 as it's supposed to be a bug?
15:02:40 +1
15:02:51 haha, that's right
15:03:09 i just didn't want to make a mess during your meeting :D
15:07:58 i guess i was just pointing out that the uwsgi pin had been backported down all the stable branches
15:09:29 oh yes, it was
15:09:50 That said, I haven't dug into that yet
15:10:01 So no idea about the bug yet :)
15:10:03 Merged openstack/openstack-ansible-lxc_container_create master: Allow redhat.yml to support any distribution and major release https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/829062
15:10:47 ok, then I think it would be great to discuss bugs from LP
15:10:51 #link https://bugs.launchpad.net/openstack-ansible/+bug/1960587
15:11:43 So jrosser, you think we should comment out the 127.0.1.1 record in hosts and make that a requirement?
15:12:05 As then for ubuntu the hostname might become a bit messy
15:12:23 or well, it can change during deployment this way)
15:12:35 at least the fqdn
15:12:35 i think i'm slightly not following the whole thread in the bug there
15:13:08 so rabbitmqctl uses port 25672 for managing rabbit.
15:13:24 by default it connects as user@hostname
15:14:02 and with https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/stable/xena/templates/rabbitmq.config.j2#L63 we are ensuring that it listens only on the management ip
15:14:47 so when there's a record for `127.0.1.1 hostname`, the hostname resolves to 127.0.1.1 and not to the management ip
15:14:50 at the simplest level i can see that on a metal host the 127.0.1.1 entry is there, and on a container it is not
15:14:56 for focal
15:16:03 when we drop 127.0.1.1 and have a `<management ip> hostname` record instead, the hostname starts resolving to the IP where 25672 listens
15:16:41 or we can drop that record, then we get 0.0.0.0:25672 and rabbitmqctl has no issues connecting either
15:17:09 yeah, I believe it's a metal-only issue
15:17:21 i guess i had been on a bit of a quest to remove all binding to 0.0.0.0 everywhere
15:17:23 we don't add such a record to containers
15:17:41 i would be wondering what that meant for the external VIP on that port for a standard layout deployment
15:17:48 where infra is also the LB
15:18:00 But that's rabbit? It's not balanced)
15:18:10 except the monitoring port
15:18:12 no, but it's listening on 0.0.0.0
15:18:35 The VIP belongs to the host, not to haproxy exclusively?
15:19:19 ah
15:19:35 oh, indeed, so we expose the port to the world
15:19:46 yes, that's my concern
15:20:02 fair. I missed that bit indeed
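For context on the resolution problem discussed above: on a metal Ubuntu host a `127.0.1.1 <hostname>` alias typically exists in /etc/hosts, while rabbit's distribution port (25672) is bound to the management address only, so `rabbitmqctl` connecting to `rabbit@<hostname>` resolves to the loopback alias and fails. Below is a minimal, hypothetical Ansible sketch of the workaround being discussed (dropping the alias on hosts that run rabbit); the task, hostnames and addresses are illustrative only, not something the roles ship today.

```yaml
# Hypothetical sketch only - not a task the OSA roles currently include, and it
# would only matter on metal hosts (containers never get the 127.0.1.1 alias).
# A metal Ubuntu host usually has something like this in /etc/hosts:
#   127.0.1.1      infra1.example.com infra1
#   172.29.236.11  infra1.example.com infra1   # management address
# rabbitmqctl connects to rabbit@infra1 on port 25672, which is bound to the
# management IP, so resolving the hostname to 127.0.1.1 breaks the connection.
- name: Drop the 127.0.1.1 hostname alias on hosts running rabbitmq-server
  ansible.builtin.lineinfile:
    path: /etc/hosts
    regexp: '^127\.0\.1\.1\s'
    state: absent
```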
15:20:19 there were two things for bind-to-mgmt really: all the metal port conflicts, but also exposing * on all interfaces before
15:20:59 larger deploys might have a separate haproxy node, so this would not matter so much
15:21:35 ok, fair
15:22:02 then another issue is that this seems to be gone on master after my config refactoring
15:22:06 will check that
15:23:04 and yeah, then pretty much the only way is to prohibit 127.0.1.1 in hosts if rabbit is there
15:24:22 I think that's it for bugs for now
15:24:27 #topic office hours
15:24:37 we have exciting news for rocky linux
15:24:43 oh?
15:24:49 have we merged it ?:)
15:24:52 lxc and metal are working
15:24:55 ah not quite :)
15:25:10 hooo nice work
15:25:16 indeed!
15:25:20 i did some refactoring in lxc_hosts to make redhat-like things easier
15:25:28 and also lxc_container_create
15:26:09 I bet I voted on some patches...
15:26:24 And zuul jobs are still not there?
15:26:42 not yet
15:26:50 we still have some hidden issues for centos
15:27:04 repo config is messy, so i made this https://review.opendev.org/c/zuul/zuul-jobs/+/829028
15:27:32 and once that merges we can do this https://review.opendev.org/c/openstack/openstack-ansible/+/829111
15:28:47 oh!
15:28:50 i expect that is going to break our centos jobs a bit, but that's good as it's then the same behaviour as outside zuul
15:28:54 so we can pass vars)
15:28:58 yes i think so
15:32:34 i think there is a similar refactoring we can do in the lxc roles for the debian/ubuntu vars
15:33:00 yeah, likely we can indeed
15:33:03 there's very little difference between them, and perhaps even some merging of debian/ubuntu/redhat as well
15:34:34 once the centos repos are done properly we really need a point release
15:35:47 Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/xena: Allow fast SSH cipher for upgrade jobs https://review.opendev.org/c/openstack/openstack-ansible/+/829258
15:36:13 I think we need to release 24.1.0 now?
15:36:23 Do we have anything we want to merge before that?
15:37:01 i think that the centos upgrade jobs will go green again with these: 829167 829167 829167
15:37:11 would be nice to see the upgrades working
15:37:17 we should fix the repos for centos
15:38:03 yep fair. Also backported some recently merged fixes.
15:38:05 and i think that this is a hidden bug from the enabling of PowerTools in CI https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/829021
15:39:20 uh. we should use a test there instead of a comparison imo
15:40:24 not critical to use it, but then it won't occur
15:40:29 here? https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/829021/2/defaults/main.yml#163
15:41:00 nah, here https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/829019/1/tasks/openstack_hosts_configure_dnf.yml
15:41:33 `is version('8', '==')`
15:42:06 but yeah, there might be no point in that... a comparison is likely more lightweight...
15:42:27 overall i think we are quite close to being able to make 24.1.0
15:42:29 but tbh I'd expected `ansible_facts['distribution_major_version']` to be an int...
15:42:30 just a few details
15:42:52 yeah, agree. The https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/829253 backport would be nice as well I believe
15:43:17 `"ansible_distribution_major_version": "8"`
15:43:29 I see...
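To illustrate the exchange just above: the distribution major version fact is gathered as a string ("8"), not an int, so a plain string comparison works, and the `version()` test is the heavier alternative that was suggested. The task below is a sketch with an illustrative name, not the actual task from openstack_hosts_configure_dnf.yml.

```yaml
# Illustrative task only - not the real task in the openstack_hosts role.
# The fact is a string, so compare against '8', not the integer 8.
- name: Apply EL8-specific dnf configuration
  ansible.builtin.debug:
    msg: "EL8 detected"
  when: ansible_facts['distribution_major_version'] == '8'

# Equivalent condition using the Jinja version() test, which does a
# version-aware comparison rather than plain string equality:
#   when: ansible_facts['distribution_major_version'] is version('8', '==')
```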
15:46:52 we still have a lot of os_tempest patches from damiandabrowski[m]
15:47:06 are they still blocked on there being a comment to resolve on the bottom one in the stack?
15:47:28 i was wondering if it was necessary to stack them like that, or if they could merge separately to make reviewing easier
15:47:49 most of them can be independent, yes
15:48:19 and yeah, if it wasn't updated then the whole batch is blocked :(
15:48:41 is there anything else we are trying to land this cycle?
15:48:50 i'll try to find out which ones can be independent and fix it soon :/
15:49:00 the ssh keys stuff is on the way
15:49:14 proxysql?
15:49:44 I didn't have time to finish it yet :( But yes, will try to get some time quite soon
15:50:15 What about internal ssl?
15:50:58 Are we ready to merge https://review.opendev.org/c/openstack/openstack-ansible-specs/+/822850 ?
15:51:54 ah right yes
15:52:18 one of the things james gibson is looking at is a proposal for an intermediate haproxy config
15:52:38 one which supports http and https backends at the same time, for use during a migration
15:53:02 perhaps seeing that helps us know if the spec is right
15:53:14 oh, ok
15:53:31 btw i've implemented a simple fix for mariabackup: https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/828977
15:53:43 I hope we will also be able to replace some cert management with the pki role - like keystone and octavia
15:54:41 lgtm
15:54:54 oh wow i didn't even look at keystone :)
15:55:08 and fwiw we're really interested in getting internal ssl going, as we're eager to use it :)
15:55:26 switching the internal VIP to SSL is a relatively OK step
15:55:38 switching over the backends is much more involved
15:56:29 the internal VIP has been running via SSL for quite some time, but nasty hacks are being used for backend<->haproxy encryption
15:57:28 so yeah - if some help is needed - let us know)
15:58:13 we only have james for another couple of weeks
15:58:22 so we will have the spec and a haproxy setup proposal
15:58:40 though turning that into a viable in-place upgrade is the real heavy lifting i think
15:59:05 and we will need some really clear docs for this
15:59:13 yeah, agree
15:59:24 plus maybe a PTG topic is whether we keep supporting a choice of SSL / non-SSL
16:00:00 damn
16:00:07 what is the Z release name :D ?
16:01:42 let's start populating this https://etherpad.opendev.org/p/osa-Z-ptg
16:01:52 "OpenStack next release name is final - OpenStack Zed"
16:01:57 found it on the ml
16:02:56 Lol, it's today's one :)
16:03:20 Since there's no public voting, I stopped keeping track of the progress
16:04:23 #endmeeting
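On the "intermediate haproxy config" mentioned at 15:52: the idea is that during the plain-HTTP to TLS migration a single service backend has to point at a mix of TLS and non-TLS members, so backends can be switched one node at a time. A hypothetical sketch of that as per-node data is below; every key name, host and address is illustrative only and not the haproxy role's actual schema or the eventual proposal.

```yaml
# Hypothetical sketch only - key names are illustrative, not the real schema
# of the openstack-ansible haproxy configuration.
glance_api_intermediate_backend:
  haproxy_service_name: glance_api
  haproxy_port: 9292
  haproxy_balance_type: http
  haproxy_backend_nodes:
    - name: infra1                # still serving plain HTTP during the migration
      ip_addr: 172.29.236.11
      backend_ssl: false
    - name: infra2                # already switched to TLS on its internal endpoint
      ip_addr: 172.29.236.12
      backend_ssl: true
```

Once every backend node speaks TLS, the per-node flag would collapse back into a single backend-wide SSL setting.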