*** bauzas_ is now known as bauzas | 01:17 | |
*** bauzas_ is now known as bauzas | 04:10 | |
tkajinam | hmm it seems the release notes job is broken now? I guess something went wrong when it was moved to noble https://zuul.opendev.org/t/openstack/build/7845d37b907b4d458cbd70828832752b | 06:00 |
tkajinam | https://paste.opendev.org/show/bJb1vWv9OCnRmKZ8jMz7/ | 06:25 |
*** bauzas_ is now known as bauzas | 06:29 | |
frickler | tkajinam: clarkb proposed https://review.opendev.org/c/openstack/releases/+/926860 but it seems like we would need to do that for every repo then, which doesn't sound like a good solution to me. maybe we can look into getting rid of that pcre dependency instead. or likely work around this for now by pinning that job to jammy I guess | 06:39 |
tkajinam | yeah | 07:18 |
tkajinam | pushing the job back to jammy may be a short-term solution until we find something better than distributing the same bindep update to every repo | 07:20 |
opendevreview | Jens Harbott proposed openstack/openstack-zuul-jobs master: Pin build-openstack-releasenotes job to jammy https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/926879 | 07:22 |
frickler | tonyb: ^^ please have a short look if you're still around, otherwise I'll self-approve if zuul passes, since this seems to affect every openstack project | 07:23 |
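For illustration, a pin like the one proposed in 926879 is typically just an explicit nodeset on the job definition; a minimal sketch, with the job and nodeset names assumed here rather than taken from the actual change:

```yaml
# Sketch only: pin the release notes build job to a jammy nodeset so it
# stops running on noble; names are assumptions, not the merged change.
- job:
    name: build-openstack-releasenotes
    nodeset: ubuntu-jammy
```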
opendevreview | James Page proposed openstack/project-config master: Retire unmaintained charm repositories https://review.opendev.org/c/openstack/project-config/+/926791 | 07:24 |
frickler | hmm, got a linter failure on my fix, I thought that that had been fixed according to the backlog in #opendev, but maybe not for that repo? anyway I'm going to force-merge the fix now and look into the linter issue later | 07:52 |
opendevreview | Merged openstack/openstack-zuul-jobs master: Pin build-openstack-releasenotes job to jammy https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/926879 | 07:54 |
*** __ministry is now known as Guest1136 | 09:00 | |
opendevreview | Elod Illes proposed openstack/project-config master: [release-tool] Workaround for getting dist_name https://review.opendev.org/c/openstack/project-config/+/926885 | 09:28 |
opendevreview | Merged openstack/ci-log-processing master: Add url subpath for Logsender to reach OpenSearch https://review.opendev.org/c/openstack/ci-log-processing/+/926717 | 09:49 |
opendevreview | Merged openstack/project-config master: [release-tool] Workaround for getting dist_name https://review.opendev.org/c/openstack/project-config/+/926885 | 11:11 |
fungi | looks like it's a dependency of whereto? | 12:16 |
fungi | python-pcre is, i mean | 12:17 |
fungi | the releasenotes playbook could just install the supporting packages on relevant platforms in that case | 12:18 |
fungi | i have a feeling the reason it worked on earlier platforms is we had pre-built and cached wheels of it but haven't done that for noble | 12:18 |
fungi | the playbook workaround would only last us until projects start running their normal docs jobs on noble next cycle though | 12:21 |
fungi | pinning the releasenotes job back to jammy makes the most sense short-term, i guess, and we need to decide what this means more generally for the whereto utility depending on a python lib that's only distributed as a decade-old sdist yet builds c extensions. that's not good in this era | 12:22 |
fungi | maybe the longer-term solution (for early next cycle) is to switch whereto to a different regular expression parser | 12:23 |
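For illustration, the playbook workaround fungi mentions above would amount to a pre-install task along these lines; the package name and platform check are assumptions, not an actual proposed change:

```yaml
# Sketch only: install the PCRE headers python-pcre needs to build its C
# extension before the release notes job runs pip. The package name may
# differ on noble; this is an assumption for illustration.
- name: Install PCRE development headers needed to build python-pcre
  become: true
  ansible.builtin.apt:
    name: libpcre3-dev
    state: present
  when: ansible_facts['os_family'] == 'Debian'
```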
ttx | we also seem to have failures on the propose-update-constraints job now | 12:32 |
ttx | See https://lists.openstack.org/archives/list/release-job-failures@lists.openstack.org/ | 12:32 |
ttx | and we might need to reenqueue a few things | 12:33 |
frickler | ttx: this should already be fixed, see the discussion in the release channel | 12:34 |
ttx | oh, wrong channel! sorry | 12:35 |
frickler | and we cannot re-enqueue those, will need to create the u-c bumps manually | 12:35 |
*** bauzas_ is now known as bauzas | 12:48 | |
clarkb | frickler: re doing it for every repo, that is the correct thing to do if you have the dependency, because even if we built and hosted wheels in our wheel caches, that is for CI only, not for local developer workstations | 15:12 |
clarkb | and yes the dependency comes from whereto | 15:12 |
clarkb | I actually think spinning things up initially without a wheel cache has been educational. There have been a number of things where we're relying on that when we shouldn't be | 15:12 |
frickler | yes, the wheel cache has been masking issues in the past, I wouldn't mind getting rid of it completely | 15:44 |
frickler | clarkb: btw. did you see the failure on https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/926879 ? I think I'm running out of time to look at it today | 15:44 |
frickler | oh, and that's also an error I didn't notice before https://zuul.opendev.org/t/openstack/build/47e76090e20b4e768ae58933f0f921b4. I was referring to the linter failure above | 15:46 |
clarkb | frickler: the ozj linters issue is the one that requires us to update the hacking version to latest (because it pulls in newer flake8 which works on python3.12). I can look into doing that for ozj later today (I've got a meeting then a school thing first) | 15:48 |
clarkb | the docs promote job failure is new to me. And also one I suspect isn't related to the nodeset change since it appears to be ansible setup related and we didn't change ansible | 15:49 |
clarkb | a quick skim would indicate the zuul artifact return data is maybe in a format that the ansible there doesn't expect? | 15:50 |
clarkb | looks like we will also have to bump up the ansible and ansible-lint versions in ozj which means I think the best path forward there is to pin to jammy and update hacking, then bump ansible and ansible-lint and fix all the resulting errors | 15:55 |
clarkb | then we can unpin jammy | 15:55 |
clarkb | fungi: have you seen the issue in https://zuul.opendev.org/t/openstack/build/47e76090e20b4e768ae58933f0f921b4 before? That is a new one to me and as mentioned I don't think it is related to the nodeset change | 15:55 |
frickler | oh, actually that may have happened because I force-merged the change, so there wasn't a gate build that could be promoted | 15:55 |
frickler | I can propose the hacking bump for ozj though | 15:55 |
fungi | that would indeed do it | 15:55 |
clarkb | frickler: oh yup that would do it | 15:55 |
opendevreview | Jens Harbott proposed openstack/openstack-zuul-jobs master: Bump hacking to latest version https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/926933 | 15:57 |
clarkb | as noted you may still need to pin to jammy because now that hacking won't fail on python things it will continue to run and run ansible-lint and fail on python things later | 15:57 |
clarkb | but we can see what the CI results say about that to confirm the behavior before we assume it will occur | 15:58 |
frickler | ah, yes I didn't read the last comments before submitting. but let's see CI results first | 15:58 |
frickler | actually let me test this on my noble node | 15:58 |
frickler | Failed: 2021 failure(s), 46 warning(s) on 2049 files. | 16:15 |
frickler | that's with all caps removed | 16:15 |
frickler | guess I'll pin to jammy first :-/ | 16:18 |
opendevreview | Jens Harbott proposed openstack/openstack-zuul-jobs master: Bump hacking to latest version https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/926933 | 16:21 |
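For illustration, the jammy pin folded into the updated patch would typically be a nodeset override on the linters job inside openstack-zuul-jobs itself, similar to the release notes pin above; the job name here is an assumption:

```yaml
# Sketch only: keep the ozj linters job on jammy until the ansible and
# ansible-lint bumps land; the real job name in 926933 may differ.
- job:
    name: openstack-zuul-jobs-linters
    nodeset: ubuntu-jammy
```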
frickler | though I'm not sure ansible<6 will even work with python3.10? | 16:22 |
clarkb | frickler: we should bump ansible to >=8,<9 since that is what the jobs run with in zuul | 16:50 |
clarkb | and that solves that problem | 16:50 |
clarkb | ok now I'm off to do school things back in a bit | 16:51 |
fungi | yeah, it's what we did in some of the other jobs already | 16:51 |
fungi | i need to disappear for a few minutes to run some quick errands but should be back within the hour | 16:51 |
fungi | seems quiet here still | 17:40 |
frickler | my version of the patch passed CI, I would suggest going with that for now. I still get ansible-lint errors for all variants I tried on noble | 17:45 |
clarkb | frickler: ya I don't think there is any way of avoiding the linter errors when running on python3.12 as we need a newer version. Your patch lgtm and we can follow up with linter improvements | 17:57 |
opendevreview | Merged openstack/openstack-zuul-jobs master: Bump hacking to latest version https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/926933 | 18:03 |
*** bauzas_ is now known as bauzas | 18:26 | |
opendevreview | Maksym Medvied proposed openstack/project-config master: Mirror charm-ceph-nfs to GitHub https://review.opendev.org/c/openstack/project-config/+/926949 | 19:18 |
*** melwitt is now known as jgwentworth | 19:38 | |
*** jgwentworth is now known as melwitt | 19:38 | |
opendevreview | Merged openstack/project-config master: Mirror charm-ceph-nfs to GitHub https://review.opendev.org/c/openstack/project-config/+/926949 | 20:41 |
*** bauzas_ is now known as bauzas | 22:59 | |
opendevreview | Clark Boylan proposed openstack/openstack-zuul-jobs master: Run latest ansible-lint on Ubuntu Noble https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/926970 | 23:05 |
clarkb | That ^ will definitely not pass, but I don't have a noble install locally so just wanted a quick set of results that I can use to determine where to skip rules and where to apply fixes | 23:06 |
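For illustration, skipping ansible-lint rules is normally done through the repo's .ansible-lint config; a sketch of the kind of skip list a change like 926970 might carry, with these specific rule names being examples rather than the actual set chosen:

```yaml
# .ansible-lint (sketch): rule names below are illustrative examples of
# real ansible-lint rule ids, not the set actually skipped in 926970.
skip_list:
  - name[casing]        # task names that are not capitalized
  - yaml[line-length]   # long lines in legacy playbooks
exclude_paths:
  - .tox/
```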
opendevreview | Clark Boylan proposed openstack/project-config master: Split min ready nodes between jammy and noble https://review.opendev.org/c/openstack/project-config/+/926972 | 23:12 |
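For illustration, splitting min-ready between the two images is a nodepool launcher labels setting; a sketch with assumed label names and counts, which may differ from what 926972 actually proposes:

```yaml
# Sketch of a nodepool labels section keeping warm nodes for both images;
# the actual counts in change 926972 may differ.
labels:
  - name: ubuntu-jammy
    min-ready: 4
  - name: ubuntu-noble
    min-ready: 4
```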
opendevreview | Clark Boylan proposed openstack/openstack-zuul-jobs master: Run latest ansible-lint on Ubuntu Noble https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/926970 | 23:43 |
clarkb | that might pass assuming the list of errors on the first run was complete | 23:43 |
clarkb | oh except a whole bunch of extra jobs run now so we'll see how that goes | 23:44 |