15:00:51 #startmeeting kolla
15:00:51 Meeting started Wed Feb 2 15:00:51 2022 UTC and is due to finish in 60 minutes. The chair is mnasiadka. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:51 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:51 The meeting name has been set to 'kolla'
15:01:00 in other meeting at same time
15:01:01 #topic rollcall
15:01:17 o/
15:01:20 o/
15:02:00 (.Y.)
15:04:11 #topic agenda
15:04:11 * Review action items from the last meeting
15:04:11 * CI status
15:04:11 * Release tasks
15:04:11 * Current cycle planning
15:04:13 * Additional agenda (from whiteboard)
15:04:13 * Open discussion
15:04:18 #topic Review action items from the last meeting
15:04:39 * frickler sneaks in late
15:04:58 mnasiadka to triage security bugs and update them with resolution plan (if needed)
15:04:58 mnasiadka post a patch for docs - standard topics that should be discussed over PTG and then revisited in mid-cycle
15:04:58 kevko to let frickler know whether osism's solution is fine for his use case
15:04:58 yoctozepto to review going-podman patches
15:04:58 halomiva/hinermar propose changes for podman
15:04:59 hrw to prepare patches for R-8 Switch binary images to current release
15:05:05 triage sec bugs in progress
15:05:12 haven't posted patch for docs
15:05:26 I have reviewed the one about systemd
15:05:39 #action to triage security bugs and update them with resolution plan (if needed)
15:05:41 and consider that action done from my side ;-)
15:05:49 #action mnasiadka post a patch for docs - standard topics that should be discussed over PTG and then revisited in mid-cycle
15:05:54 #action mnasiadka to triage security bugs and update them with resolution plan (if needed)
15:06:01 hrw is working on R-8 patches
15:06:55 frickler: what about your task? can we stop tracking this?
:)
15:07:16 it's more like kevko's task
15:07:26 well I still haven't heard anything from kevko, so I would advocate treating this as a non-issue for now
15:07:43 non-issue it is
15:08:33 good
15:08:36 let's move on :)
15:08:43 #topic CI status
15:09:05 I guess we should discuss the CentOS 8 issue here?
15:09:16 seems sensible to
15:09:36 So, I think it's time to drop all centos8 related CI jobs on any active branches?
15:10:10 yes, infra dropped centos8 images, so jobs that were using these will get node failures
15:10:13 c8 is gone
15:10:21 So victoria and earlier
15:10:24 no repos on mirrors
15:10:30 that too
15:10:46 I have proposed the changes
15:10:48 didn't they merge?
15:11:04 I think we are more likely hit by the mirror issue for image building INSIDE of the images
15:11:11 meaning container images
15:11:29 I think Kolla still has centos8 jobs
15:11:39 we also have some issue where we try to reference c8 repos within images, those will need to move to c8s I assume
15:11:42 like https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_f8f/827240/1/check/kolla-build-centos8s-binary/f8f761a/kolla/build/000_FAILED_openstack-base.log
15:12:26 Anyway, any volunteer to drop all centos8 based zuul jobs between stable/train and stable/victoria?
15:12:50 (or find patches that were raised and bring them to core reviewers' attention)
15:13:27 dropping centos sounds like something I should be able to do
15:13:30 the only one left unmerged is in k-a, hanging on bifrost https://review.opendev.org/c/openstack/kolla-ansible/+/823860
15:13:44 k-a train
15:14:09 yoctozepto: there are also some periodic jobs like I mentioned this morning
15:14:45 frickler: periodic?
15:15:02 e.g.
https://8d86acaaae403d298a22-243f33d84352232828abd4618be541dc.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/kolla/stable/ussuri/kolla-publish-centos8-source-quay/44f9e53/job-output.txt
15:15:12 that's kolla
15:15:22 and I'm sure there is something in Kayobe
15:15:31 frickler: it's running on bionic
15:15:31 ok, I can take this, doesn't sound like a half-year project
15:15:33 I switched it
15:15:40 but we need to take care of image building
15:15:50 can we continue those with frozen packages?
15:16:06 I mean, frozen base distro packages
15:16:12 so you want to keep building c8 images?
15:16:13 obviously need to stop using mirrors
15:16:24 frickler: honestly, I don't know
15:16:32 maybe I should have dropped that entirely already
15:16:40 felt good to keep around
15:16:48 ok, let's get to the later topic - what do we want to do with c8 images and how to inform users :)
15:17:09 any CI issues other than C8-related ones?
15:17:22 cephadm?
15:17:30 yeah, that one is on me
15:17:38 god knows why it's freaking out on udevadm
15:18:07 sorry, I'm just reading the communication right now... I had an internal meeting :(
15:18:10 But working on it
15:18:20 hrw: what about those missing repos for c8s like opstools?
15:19:13 again because of CentOS I feel like I'm on a battlefield...
15:19:36 there was something in the mailing list from tripleo about opstools, but it doesn't seem to have fixed our failures
15:19:53 mnasiadka: ++ on that battlefield
15:20:09 anyway, that's issue number three I guess - C8 is gone and we now know which repos we were using from C8 in C8S builds :)
15:20:33 let's try to resolve those in the coming weeks...
15:20:49 the no. 3 is pretty important
15:20:49 #topic Release tasks
15:20:54 as the CI is blocked
15:21:15 I was wondering whether we should make c8s n-v until this is resolved
15:21:24 to unblock CI
15:21:38 I don't mind if there really is not a quick fix for opstools
15:21:41 don't they exist for stream?
15:22:17 mrunge said "The centos-release-opstools package has been updated to provide the centos 8 stream repositories."
15:22:21 * frickler really knows even less about centos than everyone else
15:22:30 maybe the mirrors did not propagate
15:22:35 let me check how we do it
15:23:09 yeah, we are installing that pkg
15:23:39 yoctozepto: I found out today that there was an issue in the build
15:23:45 sorry, had to check the other meeting
15:23:46 an update will follow today
15:24:20 ^^ frickler
15:24:25 mrunge: you mean an update of the contents in the repos?
15:24:31 nope
15:24:38 an updated repo configuration
15:24:50 irccloud decided to have database issues ;-)
15:24:57 centos-opstools-testing has most of the collectd-related packages. some are missing still
15:25:03 mrunge: ah, based on the mail I thought it was already done
15:25:23 so we will need to report the missing ones and/or adapt the image contents
15:25:39 I thought so too. I cannot do the builds myself, and the person doing them forgot a change in a file
15:25:52 hrw: what is missing?
15:26:02 mrunge: ah, interesting, thanks for clarifying
15:26:14 Error: Unable to find a match: collectd-dpdk_telemetry collectd-libpod-stats collectd-sensubility python3-sqlalchemy-collectd
15:26:25 mrunge: ^^
15:26:33 I did the necessary changes 9 days ago
15:26:46 dpdk_telemetry is gone, it won't come back
15:27:04 I'll have a check on sensubility and the other ones
15:27:20 ok
15:27:39 good to know, thanks mrunge
15:27:46 I just got the confirmation, the updated centos-release-opstools package (1.12) works like a charm
15:27:53 \o/
15:28:03 thanks guys
15:28:14 team work, dream work
15:28:22 Marcin Juszkiewicz proposed openstack/kolla master: collectd: dpdk telemetry is gone for good https://review.opendev.org/c/openstack/kolla/+/827516
15:28:58 (and dtantsur dropped the bifrost part of c8 reliance so the k-a patch will merge)
15:29:04 a very productive meeting :-)
15:29:16 fantastic
15:29:19 mrunge: cool ;) my image had 1.11 and I enabled centos-opstools-testing by hand to check how the situation looks
15:30:39 UCA/Yoga enablement is stuck on horizon: https://bugs.launchpad.net/cloud-archive/+bug/1959402
15:31:05 Egh, irccloud is not getting better.
15:31:19 hrw: is only binary affected?
15:31:22 hrw: I'm watching that bug
15:31:23 yoctozepto: yes
15:31:30 hrw: sweet
15:32:16 #topic Current cycle planning
15:32:17 Anybody want to talk about any particular priority/patches?
15:32:27 yoctozepto: https://review.opendev.org/c/openstack/kolla/+/826488 - zuul says ubuntu source is fine
15:32:48 hrw: yeah :-)
15:33:30 https://review.opendev.org/c/openstack/kolla/+/826906 is bothering me (stop using --system-site-packages).
it helps when we need to build wheels, but doesn't help as we have python-libvirt from the distros
15:34:05 #action hrw to discuss with pynacl upstream to release binary wheel of 1.4.0 for aarch64
15:34:18 small reminder, this week is R-8 - feature freeze is R-2 :)
15:34:55 so not that much time left
15:35:56 exactly
15:36:40 Ok, if there are no interested parties to talk about any features we should be merging, then let's go further.
15:36:54 #topic Additional agenda (from whiteboard)
15:37:05 (mnasiadka): CentOS 8 situation
15:37:24 Currently any CentOS 8 image is unbuildable due to the repositories being moved to vault.centos.org
15:37:38 Question - what should we do with that (if anything)
15:37:54 c8 is gone. we should abandon any support
15:38:52 I do not know how messy the c7->c8s update path is now
15:39:05 Agree, should we add renos in the respective branches? an entry in the docs?
15:39:25 c7/* -> c7/train -> c8/train and then quick update to old c8/ussuri -> c8s/victoria?
15:39:36 maybe a temporary workaround like for the "missing" repositories would help? RDO has provided temporary repositories for tripleo
15:40:03 (that may solve only the immediate pain, it is not a permanent solution)
15:41:39 hmm
15:42:01 train and ussuri are only c8 (as in, not c8+c8s), right?
15:42:11 basically a user needs to use vault.centos.org for package sources
15:42:13 well, we are an open source project, we can always use the "come help us" card ;-)
15:42:18 Merged openstack/kolla-ansible stable/train: [CI] Stop testing non-stream CentOS https://review.opendev.org/c/openstack/kolla-ansible/+/823860
15:42:49 train and ussuri are dead, no?
15:42:57 hrw: victoria's images are also c8
15:42:58 the existing images will work, unless it's the bifrost image that is installing packages as part of bootstrap...
15:43:08 only since wallaby do we have centos stream in images
15:43:11 yoctozepto: victoria has c8s on ci iirc
15:43:18 yoctozepto: we have victoria c8s
15:43:26 mnasiadka: images too?
15:43:29 yes
15:43:33 I remember victoria c8s on host
15:43:36 then great
15:43:42 let's kill support for EM
15:43:53 and we are golden
15:43:59 mrunge: now, as we managed to get train running again, no one wants to touch it again
15:44:07 kill support for centos8 in train and ussuri?
15:44:21 mnasiadka: or kill train and ussuri 8-)
15:44:21 ubuntu and debian still work
15:44:24 just kidding
15:44:38 just leave the old c8 image sets so the upgrade path is possible without FFU
15:44:44 yes, I guess it's the reasonable choice considering our workforce
15:45:02 hrw: yeah, I guess only killing off CI
15:45:13 I would rather propose a reno and a change in the docs stating that CentOS 8 is EOL and we don't support it anymore - and users are on their own... + kill CI :)
15:45:25 mnasiadka: ++
15:45:44 you have my CR+2 ticket which you can redeem at a later date
15:45:58 no expiration date on the ticket? ;-)
15:46:15 Anyway, is there a volunteer to do this work?
15:46:25 mnasiadka: you have to use it yourself in the next 2 weeks
15:46:34 and on the centos drop change
15:47:19 Ok then, you convinced me to do it myself ;)
15:47:36 btw, https://review.opendev.org/c/openstack/kolla-ansible/+/823860 merged
15:47:56 (but I've seen no opendevreview comment about that)
15:48:04 #action mnasiadka to propose a reno and change in docs claiming that CentOS 8 is EOL and we don't support it anymore on train/ussuri/victoria - and users are on their own... + stop any CI related to CentOS 8 (non-stream)
15:48:13 nodesets are gone from k and k-a
15:48:38 yes, saw that
15:48:55 #topic Open discussion
15:49:06 11 minutes of open discussion - anybody?
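The "a user needs to use vault.centos.org for package sources" workaround discussed above can be sketched roughly as below. This is a minimal illustration only, not anything the team agreed to ship: the `8.5.2111` vault tag, the `CentOS-*.repo` glob, and the helper names are assumptions - check vault.centos.org for the actual final release tag of your system.

```python
import re
from pathlib import Path

VAULT = "http://vault.centos.org"


def point_repo_at_vault(text: str, release: str = "8.5.2111") -> str:
    """Rewrite one yum .repo file so an EOL CentOS 8 host pulls from the vault.

    Comments out mirrorlist= lines (the mirrors no longer carry c8) and
    replaces the stock mirror.centos.org baseurl with the archived tree.
    """
    text = re.sub(r"^mirrorlist=", "#mirrorlist=", text, flags=re.M)
    text = re.sub(
        r"^#?baseurl=http://mirror\.centos\.org/\$contentdir/\$releasever",
        f"baseurl={VAULT}/{release}",
        text,
        flags=re.M,
    )
    return text


def fix_all(repo_dir: str = "/etc/yum.repos.d") -> None:
    # Apply the rewrite to every stock CentOS repo file in place.
    for repo in Path(repo_dir).glob("CentOS-*.repo"):
        repo.write_text(point_repo_at_vault(repo.read_text()))
```

The same substitution could be done with a one-line `sed` in a Dockerfile; the point is only that the frozen packages remain reachable, not that the distro is supported again.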
15:49:09 (ah, /me blind, I've seen it too, just did not notice)
15:49:26 (in the heat of the discussion)
15:50:38 nothing to discuss from my side
15:50:43 what about the old debian-binary:master images that kevko mentioned yesterday, should we drop them or still leave them in place to confuse users
15:50:44 Ok, maybe I'll start the discussion - around the Python 3.6 drop in Z - should we go the extra mile and run Python-based services on Python 3.9 in CentOS Stream 8?
15:51:02 i think they should be dropped
15:51:15 DROP is the answer to both questions
15:51:26 who is going to do the DROP? :)
15:51:36 mnasiadka: proponents!
15:51:39 :D
15:51:45 frickler and kevko, good
15:52:04 so
15:52:11 are we dropping binary?
15:52:14 so which one is it for the action item?
15:52:21 I think it stalled on that kevko~frickler action
15:52:23 Well, we are, first day of the new development cycle :)
15:52:40 mnasiadka: did we deprecate properly?
15:52:51 frickler: drop
15:52:59 I think we did, the Kolla Whiteboard says we did most of the stuff
15:53:11 so how to we drop images from quay.io, is there some precedent?
15:53:19 s/to/do/
15:53:19 "Binary images are deprecated in Yoga and any support for them will be removed in the next cycle. Users are requested to migrate to source based images."
15:53:20 yeah
15:53:35 I don't think there's a precedent, but the credentials are in Zuul secrets
15:54:04 If there's a way to use them to delete those images
15:54:10 frickler: there is no precedent; we can use a Zuul job to do that; or I can go there and (I guess semi-)manually delete all the cruft
15:54:32 I have quay credentials that nobody else wanted
15:54:37 I protect them with my life
15:54:57 probably would be nice to have a cleanup job in the future, but this time we can just delete them using yoctozepto's hands?
15:55:22 https://docs.openstack.org/releasenotes/kolla/unreleased.html does not list the deprecation of binary images ;(
15:55:50 ah. no it does.
15:55:52 prelude ;d
15:55:52 xD
15:55:57 first line :D
15:56:00 13.0.0.0rc1-52
15:56:00 Prelude
15:56:00 Binary images are deprecated in Yoga and any support for them will be removed in the next cycle. Users are requested to migrate to source based images.
15:56:04 yeah, "najciemniej pod latarnią" (Polish: "it's darkest right under the lantern")
15:56:11 yoctozepto: exactly
15:56:11 very dark
15:56:22 yoctozepto: do you agree to do the removal?
15:56:39 mnasiadka: yeah
15:56:42 action meeee
15:57:03 #action yoctozepto to remove debian master binary images from container registry
15:57:36 3 minutes left, anything else guys?
15:57:37 and maybe check the other out-of-date binary images (I've checked only debian)
15:57:54 only debian is so special that it has outdated binary images in master
15:58:06 yeah, but I can check images in general and report
15:58:28 we probably will want to remove all binary master images next cycle ;)
15:58:53 yep
15:59:10 ok then, all is clear
15:59:13 thanks for the meeting
15:59:16 #endmeeting
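The "remove debian master binary images" action discussed above would, in practice, mean deleting tags through the quay.io HTTP API. A minimal sketch follows; the endpoint shape (`DELETE /api/v1/repository/{namespace}/{repo}/tag/{tag}` with an OAuth Bearer token) reflects the Quay API as documented, but the namespace and repository names used here are illustrative assumptions, and the actual Kolla cleanup (whether manual or via a Zuul job) was still to be decided in the meeting.

```python
import urllib.request

QUAY_API = "https://quay.io/api/v1"


def tag_delete_url(namespace: str, repo: str, tag: str) -> str:
    """Build the quay.io API endpoint for deleting a single image tag."""
    return f"{QUAY_API}/repository/{namespace}/{repo}/tag/{tag}"


def delete_tag(token: str, namespace: str, repo: str, tag: str) -> None:
    """Delete one tag; the token needs repository write permissions.

    urlopen raises HTTPError on any non-2xx response, so failures surface
    immediately instead of being silently ignored.
    """
    req = urllib.request.Request(
        tag_delete_url(namespace, repo, tag),
        method="DELETE",
        headers={"Authorization": f"Bearer {token}"},
    )
    urllib.request.urlopen(req)
```

Looping `delete_tag` over the list of `*-binary-*` repositories would cover the "check other binary out-of-date images" follow-up as well.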