opendevreview | Merged openstack/diskimage-builder master: rocky-container: use Quay.io instead of Docker Hub https://review.opendev.org/c/openstack/diskimage-builder/+/943337 | 01:22 |
*** liuxie is now known as liushy | 06:15 | |
opendevreview | Karolina Kula proposed openstack/diskimage-builder master: WIP: Add support for CentOS Stream 10 https://review.opendev.org/c/openstack/diskimage-builder/+/934045 | 07:37 |
opendevreview | Karolina Kula proposed openstack/diskimage-builder master: WIP: Add support for CentOS Stream 10 https://review.opendev.org/c/openstack/diskimage-builder/+/934045 | 08:52 |
opendevreview | Fabien Boucher proposed zuul/zuul-jobs master: zuul_log_path: allow override for emit-job-header and upload-logs https://review.opendev.org/c/zuul/zuul-jobs/+/943586 | 09:16 |
*** iurygregory__ is now known as iurygregory | 11:50 | |
opendevreview | Karolina Kula proposed opendev/glean master: WIP: Add support for CentOS 10 keyfiles https://review.opendev.org/c/opendev/glean/+/941672 | 12:18 |
fungi | both flex regions look good now and have recent history using up to their max-servers values | 14:02 |
fungi | gonna go grab lunch, should be back in an hour | 15:51 |
clarkb | that's excellent news | 16:02 |
clarkb | fungi: when you've consumed lunch what do you think about a gitea upgrade? | 16:03 |
clarkb | https://review.opendev.org/c/opendev/system-config/+/943352 this change if you think we should proceed. It's just a bugfix release so I think it will be ok | 16:04 |
clarkb | we can probably also land https://review.opendev.org/c/opendev/system-config/+/943625 but that may trigger all the infra-prod jobs due to updates to inventory (just be aware of that being a slow apply) | 16:33 |
fungi | clarkb: gitea upgrade sounds great to me, i'm around for the remainder of the day now | 17:38 |
clarkb | ok I've just approved the change | 17:52 |
clarkb | and I'll be around too, just have to do a school run later which this should hopefully land well in advance of | 17:53 |
fungi | no worries, i can keep an eye on it regardless | 17:54 |
clarkb | I guess the other major question I have is whether we want to proceed with https://review.opendev.org/c/opendev/system-config/+/943509 or wait for additional feedback. I think waiting is fine since stuff is mostly working right now | 17:59 |
fungi | it'd be nice to get at least one more infra-root to have a look at 943509 but also i wouldn't want to leave it dangling for long | 18:03 |
fungi | since otherwise we have some corner case delays for deploying configs | 18:04 |
clarkb | ya maybe on monday plan to merge it without any additional review? the OID NA event is eating into availability for reviews, but maybe by monday that will lessen | 18:05 |
clarkb | I also stacked the parallelization change on top of it | 18:05 |
clarkb | so would need to restack if we want to prioritize that but I think having everything running consistently the way we want serially first is a good thing | 18:05 |
fungi | makes sense to me, sure | 18:09 |
clarkb | a reminder for everyone in this channel: those of us in North America switch to daylight saving time over the weekend. Adjust meeting schedules as necessary | 18:40 |
opendevreview | Merged opendev/system-config master: Upgrade gitea to v1.23.5 https://review.opendev.org/c/opendev/system-config/+/943352 | 18:41 |
clarkb | wow that was quick. I like quiet fridays zoom zoom | 18:43 |
clarkb | https://gitea09.opendev.org:3081/opendev/system-config/ the first server is updated | 18:46 |
clarkb | web ui lgtm there and a git clone was successful | 18:47 |
clarkb | gitea10 is done now too and looks ok as well. I'll keep an eye on them as it iterates through | 18:49 |
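A minimal sketch of the per-backend spot check described above, assuming the gitea09-style hostnames and the port 3081 web UI shown in the link; the clone destination is just a scratch path:

```
# check that the web UI responds and that an anonymous clone works against
# one upgraded backend directly (hostname and port taken from the gitea09
# example above; repeat per backend as each one finishes)
curl -sSI https://gitea09.opendev.org:3081/opendev/system-config/ | head -n1
git clone --depth 1 https://gitea09.opendev.org:3081/opendev/system-config /tmp/gitea-check
```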
clarkb | the shutdown phase of the upgrade seems to be slower than usual and I think we may be hitting the docker compose timeout | 18:49 |
clarkb | I wonder if memcached did that or something else | 18:49 |
clarkb | hrm gitea11 went more quickly so probably not memcached as it's the same across all of them | 18:51 |
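The docker compose timeout referred to here is the stop grace period compose allows before force-killing a container. A hedged sketch of how it could be lengthened for a slow gitea shutdown; the service name and value are placeholders, not the actual deployment settings:

```
# give a slow but graceful gitea shutdown more than the default 10 seconds
# before compose falls back to SIGKILL; "gitea-web" is a hypothetical
# service name for illustration only
docker-compose stop --timeout 120 gitea-web
```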
fungi | 09 is testing okay for me so far | 18:52 |
clarkb | 11, 12, and 13 have been quick so whatever the slowness for shutdown is doesn't seem to be generic to the setup | 18:54 |
fungi | yeah, that's odd | 18:54 |
clarkb | maybe it's a long-running clone/fetch operation and gitea wants to wait for that to complete before gracefully stopping | 18:54 |
fungi | possible | 18:54 |
clarkb | fungi: you don't happen to have a change to push or update when the deployment is done so that we can check replication do you? | 18:54 |
clarkb | I'm not sure I have anything that needs an update right now | 18:54 |
fungi | no, nothing handy | 18:54 |
clarkb | all of the backends are done just waiting on the buildset to complete | 18:55 |
clarkb | job was a success | 18:56 |
clarkb | let me dig around for something that I can push a new patchset to | 18:56 |
opendevreview | Clark Boylan proposed opendev/system-config master: Update Hound image to python3.12 https://review.opendev.org/c/opendev/system-config/+/943819 | 19:03 |
clarkb | I came up with ^ | 19:03 |
clarkb | I'll check that has replicated in a minute | 19:03 |
clarkb | origin https://opendev.org/opendev/system-config (fetch) then git fetch origin refs/changes/19/943819/1 && git show FETCH_HEAD lgtm | 19:05 |
clarkb | I think I'm happy with this. Let us know if you notice any problems | 19:06 |
fungi | yeah, looks fine, no issues spotted | 19:11 |
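Spelled out, the replication check from the log above amounts to roughly the following, assuming a local clone whose origin points at opendev.org and patchset 1 of change 943819:

```
# confirm the remote is the gitea frontend, then fetch the freshly pushed
# patchset and inspect it to verify gerrit -> gitea replication completed
git remote -v        # expect: origin https://opendev.org/opendev/system-config (fetch)
git fetch origin refs/changes/19/943819/1
git show FETCH_HEAD
```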
clarkb | we still need to do a bindep release with the pyproject.toml updates? | 19:19 |
clarkb | I think that is the case but I'm not positive | 19:19 |
fungi | correct | 19:22 |
fungi | master is 5 changes ahead of 2.12.0 currently | 19:22 |
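A quick way to reproduce the "5 changes ahead of 2.12.0" observation, as a sketch against a fresh bindep checkout (the repo URL is assumed from its opendev hosting):

```
# count commits on master since the last bindep release tag
git clone https://opendev.org/opendev/bindep /tmp/bindep
git -C /tmp/bindep log --oneline 2.12.0..origin/master | wc -l
```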
*** winicius is now known as wncslln | 21:39 | |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Use required-projects in bootstrap-bridge https://review.opendev.org/c/opendev/system-config/+/943509 | 22:02 |
fungi | frickler: ^ sorted now | 22:03 |
frickler | thx | 22:05 |
opendevreview | Merged opendev/system-config master: Use required-projects in bootstrap-bridge https://review.opendev.org/c/opendev/system-config/+/943509 | 22:21 |
Clark[m] | I'm doing the school run but we will want to check hourlies still look good for ^ | 22:24 |
fungi | yeah, 2300 batch will kick off in another 35 minutes | 22:25 |
fungi | starting in exactly 1 minute now | 22:59 |
fungi | enqueued | 23:01 |
fungi | the bootstrap-bridge job is still starting first | 23:02 |
fungi | it paused and service-bridge is running now, nodepool/registry/zuul/eavesdrop are still waiting | 23:04 |
fungi | service-bridge succeeded, nodepool started | 23:05 |
clarkb | the streaming log also shows that project-config updated from master, which it would now that it is in required-projects | 23:06 |
clarkb | so I think this is looking how we expect it | 23:06 |
clarkb | maybe monday we try running things in parallel? | 23:08 |
fungi | yeah, sounds like a plan | 23:10 |
clarkb | and for clarification some infra-prod-* jobs did update project-config from master in periodic and hourly pipelines but not all (basically those that set the required-projects value to include project-config did). This was fine when every infra-prod job was updating repos. Now we've consolidated that all into the parentmost paused job | 23:15 |
clarkb | this should be roughly equivalent but doing it just the once so that we can then run jobs in parallel | 23:15 |
clarkb | before we approve the parallelization change it might be worth checking we don't have any obvious conflicts on the remote side of the ansible runs. We think we've handled conflicts on the local side of ansible (primarily the git repo updates). But we may have playbooks that talk to the same hosts and touch the same stuff at the same time? infra-prod-base is the obvious one but the | 23:16 |
clarkb | jobs should already depend on that one | 23:16 |
clarkb | Not sure if we have any others like maybe manage-projects and service-gitea/service-gerrit need a relationship if they don't already have one | 23:17 |
clarkb | manage projects already seems to have those rules in place | 23:43 |
clarkb | the base job which touches all servers to set up users etc and manage projects which updates gitea and gerrit state are the two that immediately come to mind as potentially impacting across servers on the ansible remote side. I'm not currently coming up with any others | 23:45 |
clarkb | oh letsencrypt as well but we have strict ordering on that because we need certs before deploying services | 23:47 |
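One way to do the audit suggested above, spotting playbooks that target overlapping hosts before enabling parallel infra-prod runs; the checkout path and the service-*.yaml playbook naming are assumptions about the repo layout:

```
# list the host groups each service playbook targets so any two playbooks
# that hit the same servers can be given explicit job dependencies first
cd ~/src/opendev.org/opendev/system-config
grep -H '^- hosts:' playbooks/service-*.yaml | sort -t: -k2
```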