15:12:14 #startmeeting openstack_ansible_meeting
15:12:14 Meeting started Tue Feb 22 15:12:14 2022 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:12:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:12:14 The meeting name has been set to 'openstack_ansible_meeting'
15:12:22 #topic rollcall
15:12:24 o/
15:12:27 hey!
15:17:34 sorry, I moved between timezones, so my alarm misbehaved)
15:17:59 #topic office hours
15:18:20 o/ hello
15:18:29 damiandabrowski[m]: have you had a chance to check out the comments on the tempest patches?
15:18:42 as I believe one down the line blocks everything
15:19:04 no, sorry :/ but I remember about it
15:19:06 also it's possible to decouple them where things are not conflicting
15:19:13 ok, gotcha
15:19:24 yeah, i'll try to remove this huge relation chain
15:21:08 so we're progressing on internal tls
15:21:19 the wip looks pretty fair to me atm
15:21:51 except the redirect include is never placed on the host
15:23:57 o/ heyo folks
15:24:11 \o/ hey there!
15:24:23 how are things going with Rocky? :)
15:25:15 Goodly! Some issues with the nodepool, but I think those should be resolved now
15:25:25 ok, great!
15:25:30 noonedeadpunk: the redirect is a jinja include into service.j2
15:25:40 it's not another separate template
15:26:17 oh!
15:26:21 I'm blind(
15:26:25 :)
15:26:37 I still need to try another lxc install or two to check the patches jrosser put in last week
15:26:49 I read that as a haproxy include
15:26:56 when do we think there will be a rocky node?
15:27:08 i was also worried about the epel stuff yesterday
15:27:17 that's another repo we very specifically manage
15:27:18 Just realized it's not possible there :D
15:27:43 afaik JamesGibo has tested this in an AIO
15:27:59 well, I haven't (yet)
15:28:14 unfortunately i am double-booked for meetings now
15:28:19 But I'd say it's fair enough atm and we can always fix it later
15:28:36 but it would be great for us all to agree on what the plan is for the TLS stuff
15:29:00 for example "Y release will be a mandatory transition to internal TLS" <- discuss
15:29:40 "We support internal VIP on http or https" <- different discussion
15:30:01 I'd say we should likely discuss that at the PTG?
15:30:27 and in the meantime we can carry on TLS'ing the roles and building an upgrade job
15:31:43 I'd say it can be fine to make VIPs TLS-only now, but forcibly moving to internal TLS - dunno if it makes sense for everybody, considering we don't really have a process yet to rotate the root CA
15:32:03 (or do we?)
15:32:30 jrosser: i'm hoping by the end of this week. looks like there was a dib release this morning so it should be ok to test the build now. https://review.opendev.org/c/zuul/nodepool/+/830345
15:32:38 i'll ask someone in infra-root if they can unpause it
15:32:47 btw regarding the bug we discussed last week - I pushed some patches to cover it https://review.opendev.org/q/topic:bug/1960587+status:open
15:33:37 damiandabrowski[m]: btw that's also related to the question you asked this morning about the 127.0.1.1 record ^
15:34:08 great!
15:35:25 btw, do we have any other idea for fixing this?
15:35:25 https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/829270
15:35:33 i'm not sure if we came up with any conclusion
15:37:08 So with different operating systems, the constraints file should still be the same?
15:38:19 It may only happen with different OSA versions or when the u-c version was updated and the repo_server role wasn't run
15:38:27 for the last case that won't help
15:38:45 and you actually shouldn't run different osa versions in the same deployment?
15:39:07 so my question here is more - are we sure we understood the reason why this was affecting us?
15:39:41 in my case i was using the same osa version everywhere
15:39:48 as the patch atm sounds a bit unrelated to the real issue
15:40:29 * damiandabrowski[m] trying to find more info about it, give me 1min pls
15:40:31 as long as _constraints_file_slurp is a registered var for all hosts in the play, we shouldn't care which host it was gathered from
15:41:48 in the meanwhile, I've adjusted the bump script a bit to improve the diff for openstack_services and also bumped the SHAs for the release https://review.opendev.org/q/topic:bump_osa+status:open
15:42:22 we hit this issue because 22.3.2 doesn't have a pinned uWSGI version, that's why the uWSGI version was different for bionic and focal in my case
15:42:32 actually I will update the master bump right after the meeting as I also added a command to bump the collections versions
15:42:35 i'm not sure if there may be more cases like this
15:43:05 um.. and how does that patch fix that?
15:43:09 if we're certain that constraints should be the same for all supported operating systems, then we can abandon my change
15:43:46 constraints are defined here https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/defaults/repo_packages/openstack_services.yml#L33-L35
15:44:06 by slurping constraints for each host separately
15:44:10 and they depend on the osa version and openstack stable release...
15:44:33 on top of that, they are cached on the repo container early during setup
15:45:07 https://opendev.org/openstack/openstack-ansible-repo_server/src/branch/master/tasks/repo_install_constraints.yml#L23-L28
15:45:23 NeilHanlon: we need 830345 to merge so we get new nodepool-builder images, clarkb approved it a few minutes ago
15:45:56 So I can kind of imagine that different repo containers had different constraints files because of different OS versions on them...
15:46:02 once those images end up on dockerhub and get deployed to our servers, then we can unpause the rocky builds
15:46:16 but that sounds like a slightly different issue...
15:47:26 damiandabrowski[m]: but still, u-c are OS-independent
15:48:57 but are upper constraints == the constraints slurped in my change?
15:49:04 fungi: gotcha. I see that workflow in the zuul cfg now
15:50:46 `Slurp up the constraints file for later re-deployment` is delegated to `venv_build_host` so we can be sure they will be slurped from the "right" repo host
15:51:02 but only when we disable `run_once`
15:51:11 oh, hm
15:52:52 I think you're actually right
15:53:07 especially for the cross-OS case
15:55:04 well, at least in my case disabling `run_once` helped :D
15:55:07 fungi: as you're here - can I ask you for a review of https://review.opendev.org/c/openstack/project-config/+/829278 ? :)
15:56:13 yeah, indeed, it wouldn't be required if we had a repo_container that was the destination of the sync from all build_hosts
15:56:19 you can ask, sure ;)
15:57:01 lgtm
15:57:09 and now we have a focal container which is the build_host for hostA and a bionic container for hostB, which indeed doesn't work with run_once
15:58:14 fungi: thanks! I clean forgot about the patch in the discussion, as it wasn't on the review board because of my mistake and a missing repo ACL
15:59:23 ok, awesome, we made some progress! :)
15:59:44 jrosser: https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/829270 is indeed worth having another look at
15:59:50 #endmeeting
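
For readers following the `run_once` discussion above, here is a minimal sketch of the slurp pattern being debated. Only the task name, `venv_build_host` and `_constraints_file_slurp` are taken from the log; the play wrapper and the `example_constraints_file_path` variable are placeholders, and the actual task in the python_venv_build role and in review 829270 may differ.

# sketch_slurp_constraints.yml - an illustrative sketch, not the role's real code
- hosts: all
  gather_facts: false
  tasks:
    - name: Slurp up the constraints file for later re-deployment
      ansible.builtin.slurp:
        src: "{{ example_constraints_file_path }}"   # placeholder variable, not the role's real one
      delegate_to: "{{ venv_build_host }}"           # read from each host's own build/repo host
      register: _constraints_file_slurp
      # With "run_once: true" on this task, the file is read once, from a
      # single host's venv_build_host, and that one result is shared with
      # every host in the play. Dropping run_once (the change discussed
      # above) makes each host slurp from its own venv_build_host, so a
      # focal and a bionic repo container no longer end up sharing one,
      # possibly mismatched, constraints file.

The run_once form saves a few delegated reads when every host points at the same repo container, which seems to be why it was there in the first place; the per-host form trades that for correctness when build hosts differ, matching the conclusion reached in the meeting.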