xinliang | ianw: The openEuler mirror repo we sync from is fixed. https://ru-repo.openeuler.org/openEuler-20.03-LTS-SP2/update/x86_64/repodata/ | 01:30 |
fungi | thanks xinliang! | 01:34 |
fungi | ianw: did we disable anything for that we need to put back now? | 01:36 |
ianw | xinliang/fungi: thanks, i don't think so | 01:36 |
fungi | okay, great | 01:36 |
xinliang | The openEuler mirror repo we sync from is fixed. https://ru-repo.openeuler.org/openEuler-20.03-LTS-SP2/update/x86_64/repodata/ | 01:47 |
xinliang | fungi: that repo is maintained by openEuler, should be good to sync with | 01:49 |
xinliang | They just ran out of space yesterday and fixed it | 01:49 |
fungi | yep, thanks for the added detail. i was just making sure we hadn't disabled any tests or something in the meantime while it was down | 02:59 |
*** ysandeep|away is now known as ysandeep | 05:39 | |
*** iurygregory_ is now known as iurygregory | 06:28 | |
opendevreview | Merged opendev/system-config master: Retire openstack-i18n-de mailing list https://review.opendev.org/c/opendev/system-config/+/805646 | 06:50 |
*** rpittau|afk is now known as rpittau | 07:24 | |
*** jpena|off is now known as jpena | 07:37 | |
*** ykarel is now known as ykarel|lunch | 08:28 | |
*** sshnaidm|afk is now known as sshnaidm | 09:38 | |
*** ykarel|lunch is now known as ykarel | 10:18 | |
dtantsur | fungi: yep, the job seems to pass now, thank you! | 10:23 |
ykarel | frickler, the infra mirror pypi issue is happening again now, recall the issue with boto3 some days back? | 10:40 |
ykarel | failing: pip install --index-url=https://mirror.mtl01.inap.opendev.org/pypi/simple horizon==20.0.0 | 10:41 |
ykarel | example log https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_55d/806101/1/check/openstack-tox-py38/55d0a3c/tox/py38-1.log | 10:41 |
ykarel | #infra-root if someone can check ^ what's wrong with mirrors | 10:41 |
ykarel | also it's specific to certain mirrors, as i see some working, like the https://mirror.gra1.ovh.opendev.org/pypi/simple one | 10:48 |
ykarel | https://mirror.bhs1.ovh.opendev.org/pypi/simple and https://mirror.mtl01.inap.opendev.org/pypi/simple are failing, haven't checked others | 10:49 |
*** ksambor__ is now known as ksambor | 10:55 | |
*** ysandeep is now known as ysandeep|mtg | 10:57 | |
*** dviroel|out is now known as dviroel|ruck | 11:14 | |
opendevreview | Slawek Kaplonski proposed openstack/project-config master: Update neutron-lib grafana dasboard https://review.opendev.org/c/openstack/project-config/+/806138 | 11:20 |
fungi | ykarel: we don't mirror pypi. the problem is almost always an issue with some fastly cdn endpoints in specific parts of the world returning old index copies | 11:25 |
fungi | you'd see the same thing from the same clouds whether you went through the proxy there or not | 11:25 |
fungi | it's not specific to our mirrors, it's specific to where on the internet you're trying to reach pypi from | 11:25 |
fungi | also it seems to happen with fastly's endpoints near montreal canada more than anywhere else, from what we've seen. it's possible they have one or more bad systems there | 11:26 |
fungi | note that ovh's bhs1 region and inap's mtl01 region are both in the montreal area, which is a strong indication it's this same issue again | 11:28 |
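A quick way to see which fastly endpoint is serving a given index (a diagnostic sketch; the x-served-by and x-cache headers are fastly conventions):

    # from the affected mirror host, see which fastly cache node served the index
    curl -sI https://pypi.org/simple/horizon/ | grep -iE 'x-served-by|x-cache'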
*** jpena is now known as jpena|lunch | 11:42 | |
dtantsur | hi folks! I have a zuul job that for one task has "Waiting on logger" instead of the task output. Still, the task and the job end successfully. What could cause that? | 11:52 |
dtantsur | https://zuul.opendev.org/t/openstack/build/3a2fe16398d54a3aa22a4756720c4c52/log/job-output.txt#30966 | 11:52 |
*** dviroel|ruck is now known as dviroel|ruck|brb | 11:58 | |
*** dviroel|ruck|brb is now known as dviroel|ruck | 12:20 | |
mordred | fungi: https://bugs.launchpad.net/qemu/+bug/1749393?comments=all | 12:36 |
*** jpena|lunch is now known as jpena | 12:38 | |
fungi | dtantsur: zuul polls trying to connect to the log streamer to build that log, and logs that it's doing so. if the log streamer process doesn't start or is unreachable, you'll see that printed | 12:49 |
fungi | that's more for the benefit of people watching the live log stream, but generally it's so that you don't simply see silence or a gap in timestamps and think zuul sat and did nothing for that span of time | 12:50 |
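If you suspect the streamer itself, one hedged check (assuming zuul_console's default port of 19885 and network reachability to the build node; the node address is a placeholder):

    # probe whether the zuul_console log streamer is listening on the build node
    nc -zv <node-ip> 19885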
fungi | mordred: neat! so it really was an emulation bug? | 12:51 |
mordred | fungi: it seems so? I'm working on reproducing it locally so I can try the fix | 12:54 |
mordred | (I have a local focal box) | 12:54 |
mordred | hrm | 12:54 |
mordred | it does not fail locally \o/ | 12:54 |
fungi | same qemu package/version as we have on the job node? | 12:59 |
mordred | fungi: it's been a while - what is the current correct way to create an autohold? | 12:59 |
fungi | sudo docker-compose -f /etc/zuul-scheduler/docker-compose.yaml exec scheduler zuul autohold --tenant openstack --project opendev/system-config --job system-config-run-gitea --change 805458 --reason "fungi testing matrix cors" --count 1 | 13:00 |
fungi | that's one i set recently | 13:00 |
fungi | (from the scheduler's command line) | 13:00 |
mordred | thanks | 13:00 |
fungi | might also be possible to do it with zuul-client now | 13:00 |
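A rough zuul-client equivalent of the docker-compose command above, assuming a configured API URL and admin credentials (untested sketch):

    zuul-client --zuul-url https://zuul.opendev.org autohold \
        --tenant openstack --project opendev/system-config \
        --job system-config-run-gitea --change 805458 \
        --reason "fungi testing matrix cors" --count 1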
mordred | and not sure about the version ... | 13:00 |
lucasagomes | morning/afternoon folks, if you have some time mind taking a quick look at this new project proposal https://review.opendev.org/c/openstack/project-config/+/805802 ? Thanks in advance | 13:15 |
*** ysandeep|mtg is now known as ysandeep | 13:29 | |
ykarel | fungi, Thanks for checking, how long does it take to clear? i see it's still failing | 13:48 |
ykarel | frickler last time said that once it reproduces we can have them try to flush those from their cdn network | 13:49 |
opendevreview | Riccardo Pittau proposed openstack/diskimage-builder master: Promote debian bullseye job to non-voting https://review.opendev.org/c/openstack/diskimage-builder/+/806188 | 13:57 |
opendevreview | Riccardo Pittau proposed openstack/diskimage-builder master: Add non-voting debian stable job https://review.opendev.org/c/openstack/diskimage-builder/+/806188 | 14:04 |
opendevreview | Riccardo Pittau proposed openstack/diskimage-builder master: Add non-voting debian stable job https://review.opendev.org/c/openstack/diskimage-builder/+/806188 | 14:05 |
clarkb | ykarel: I think you run a curl -XPURGE against the resource url on pypi | 14:06 |
opendevreview | Riccardo Pittau proposed openstack/diskimage-builder master: Add non-voting debian stable job https://review.opendev.org/c/openstack/diskimage-builder/+/806188 | 14:06 |
clarkb | eg curl -XPURGE https://pypi.org/simple/pbr but I'm not sure if that works on indexes like it does on package data | 14:07 |
clarkb | the problem has to do with cdn fallback iirc so it may just fall back to the broken nodes and reset the state to broken | 14:07 |
fungi | it varied. in recent events it wasn't serving copies of indices without relevant metadata, it was just serving old copies of indices which didn't include the versions of things we were pinning to | 14:10 |
fungi | though i suppose that still might be a fallback, if they fixed the fallback site to include the same metadata warehouse serves but it just happens to have some old indices | 14:11 |
fungi | like maybe it stopped getting synced or something | 14:11 |
ykarel | yes ^ seems correct | 14:18 |
opendevreview | Monty Taylor proposed zuul/zuul-jobs master: Update binfmt support image used https://review.opendev.org/c/zuul/zuul-jobs/+/806057 | 14:18 |
ykarel | fungi, last time you said you can ask them to flush that cached data from their cdn network | 14:26 |
ykarel | can you do that | 14:26 |
fungi | ykarel: i'll need to find what specific page/url needs flushing, but i can try to do it myself as clarkb mentioned above. if they've got a fallback backend with outdated content however that's not going to fix it | 14:27 |
ykarel | fungi, okk Thanks | 14:28 |
fungi | ykarel: do you happen to know what it downloaded (or wanted to download) which didn't exist or was too old? the pip error is bubbling up from the dep solver unfortunately so it will take some tracking down, and if you've already done that it will save me the time | 14:30 |
ykarel | fungi, horizon==20.0.0 | 14:30 |
fungi | oh, it's coming from the horizon===20.0.0 constraint, okay | 14:31 |
ykarel | yes | 14:32 |
fungi | so it'll be the horizon index page i guess | 14:32 |
fungi | https://pypi.org/simple/horizon/ | 14:32 |
fungi | $ curl -XPURGE https://pypi.org/simple/horizon | 14:34 |
fungi | { "status": "ok", "id": "12826-1629899565-9246" } | 14:34 |
fungi | so the api replied to that purge request with an ok status at least | 14:35 |
fungi | i did it from the inap-mtl01 mirror, no idea if that matters | 14:35 |
fungi | i also did one with a trailing / because without the / it's a redirect if you do a get | 14:36 |
Clark[m] | I think it is supposed to do a global purge when you do that so location shouldn't matter | 14:38 |
fungi | also the inap-mtl01 and ovh-bhs1 mirrors currently include horizon 20.0.0 links in the simple index they're proxying | 14:39 |
fungi | so we may need to wait a while to see it happen again | 14:39 |
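A spot check like the following (illustrative) can confirm whether a proxied index now lists the pinned release:

    # count links mentioning the pinned version in the proxied simple index
    curl -s https://mirror.mtl01.inap.opendev.org/pypi/simple/horizon/ | grep -c '20.0.0'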
ykarel | fungi, now install is working | 14:39 |
fungi | ykarel: thanks, hopefully it stays that way | 14:40 |
ykarel | Thanks fungi Clark[m] | 14:47 |
fungi | mordred: you've got a node held now: 104.130.135.208 | 15:16 |
mordred | fungi: yes! I have used it to verify a fix for https://review.opendev.org/806057 thanks! | 15:33 |
fungi | oh awesome | 15:35 |
fungi | and i just approved it. thanks! | 15:36 |
corvus | \o/ | 15:36 |
fungi | now i need to get ready for the lists.k.i server upgrade starting in 20 ~minutes | 15:37 |
*** jpena is now known as jpena|off | 15:39 | |
fungi | clark is otherwise engaged, so i'll be doing the upgrading on the server, but we tested it very thoroughly and he made extensive notes in https://etherpad.opendev.org/p/listserv-inplace-upgrade-testing-2021 | 15:40 |
opendevreview | Riccardo Pittau proposed openstack/diskimage-builder master: Add non-voting debian stable job https://review.opendev.org/c/openstack/diskimage-builder/+/806188 | 15:55 |
opendevreview | Riccardo Pittau proposed openstack/diskimage-builder master: Fix debian-minimal security repos https://review.opendev.org/c/openstack/diskimage-builder/+/806188 | 15:57 |
rpittau | hello everyone! Can anyone please check this ^ ? It's quite urgent as at the moment debian-minimal is broken and as a consequence also the ironic-python-agent-builder CI, thanks! | 15:59 |
fungi | starting on the lists.katacontainers.io server upgrade now | 16:00 |
fungi | taking the server offline for a clean image | 16:02 |
*** rpittau is now known as rpittau|afk | 16:02 | |
opendevreview | Merged zuul/zuul-jobs master: Update binfmt support image used https://review.opendev.org/c/zuul/zuul-jobs/+/806057 | 16:03 |
fungi | #status log The lists.katacontainers.io service is offline for scheduled upgrades over the next two hours | 16:05 |
opendevstatus | fungi: finished logging | 16:05 |
mordred | mnaser: ^^ the zuul-jobs change there should fix the issues with arm docker builds | 16:07 |
opendevreview | Merged openstack/project-config master: Retire puppet-monasca - Step 3: Remove Project https://review.opendev.org/c/openstack/project-config/+/805103 | 16:11 |
fungi | image creation complete, bringing the server back up with mailman/exim4 disabled | 16:19 |
fungi | server is running again, and i've started a root screen session in which i'll perform the ubuntu upgrade steps | 16:21 |
fungi | there's an openssl security update today, and for some reason the server is getting a 401 unauthorized trying to upgrade libssl/openssl packages even though `ua status` claims esm-infra channel is entitled and enabled | 16:28 |
fungi | i don't think this is a critical blocker for the xenial to bionic upgrade, so i'm going to ignore it, but would be good to circle back around after and see if any of our other xenial servers with esm set up are having similar problems | 16:29 |
fungi | oh, except do-release-upgrade refuses to proceed | 16:30 |
Clark[m] | Not really here, but maybe the disable on the snapshot server did disable things, and the other server just didn't know about it | 16:31 |
Clark[m] | You could try disabling then reenabling or just disable entirely since you are upgrading | 16:32 |
fungi | yes, i'm thinking the same, i'm detaching it from esm now | 16:32 |
fungi | just need to clean up the sources.list entries | 16:32 |
Clark[m] | Be careful enabling again because by default it enrolls in fips type stuff we don't want | 16:32 |
Clark[m] | The Ansible we do to enable has the steps to avoid that | 16:33 |
fungi | yeah, ua detach, removing the corresponding sources.list.d file and then updating the package list again was enough to get do-release-upgrade to proceed | 16:34 |
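The detach sequence, roughly (the sources.list.d filename is illustrative and may differ):

    sudo ua detach
    sudo rm /etc/apt/sources.list.d/ubuntu-esm-infra.list
    sudo apt-get update
    sudo do-release-upgrade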
fungi | i also have the server in the emergency disable list so that ansible won't try to undo anything while i'm working | 16:35 |
*** ysandeep is now known as ysandeep|away | 16:39 | |
fungi | infra-root: expedited review of https://review.opendev.org/806050 would be much appreciated, i've got the mirror-update lock held for opensuse and already synced from that source; it appears to have solved the package update problem for ironic's jobs | 16:46 |
fungi | the server is at bionic now and sanity checks are good, so i've initiated do-release-upgrade to take it to focal | 16:59 |
opendevreview | Monty Taylor proposed opendev/system-config master: Add backports repos to base and builder images https://review.opendev.org/c/opendev/system-config/+/800318 | 16:59 |
fungi | we're at the one hour mark for the lists.k.i upgrade, so doing well | 17:00 |
mordred | woot. the zuul-jobs fix fixed the python-builder issue. now we just have a uwsgi library version issue from the bullseye update to deal with. I betcha this is going to turn green! | 17:01 |
fungi | until we found the sea of green | 17:03 |
fungi | the mailman package upgrade from bionic to focal complains about /var/lib/mailman/qfiles not being empty, investigating | 17:05 |
fungi | started mailman services but it's continually filling /var/lib/mailman/qfiles/out/ with new files and removing them | 17:10 |
fungi | looks like it's because i didn't stop apache and someone tried to use the webui to subscribe to a list while exim was down but the upgrade had restarted the mailman qrunners | 17:14 |
fungi | okay, cleaned out that looping qfile with mailman stopped again, and continued the upgrade process | 17:16 |
fungi | i also stopped apache once i realized that was problematic | 17:17 |
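The cleanup amounted to something like this sketch (sysv service names as used on these servers; paths per the log above):

    sudo service apache2 stop
    sudo service mailman stop
    sudo rm -f /var/lib/mailman/qfiles/out/*
    # then resume the dist upgrade and re-enable the services afterwards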
fungi | final reboot onto focal is underway now | 17:25 |
fungi | i lied, one more reboot after i enable exim4 and mailman again | 17:27 |
fungi | #status log Upgrades completed for the lists.katacontainers.io server, which is now fully back in service | 17:30 |
opendevstatus | fungi: finished logging | 17:30 |
fungi | i sent a message to the kata-dev ml letting them know everything's done, and received a copy of it as a list subscriber. also the archives still look intact | 17:31 |
fungi | i've taken the server back out of the emergency disable list now as well | 17:33 |
mordred | corvus, fungi, Clark: https://review.opendev.org/c/opendev/system-config/+/800318 is green! | 19:06 |
Clark[m] | Ok things are slowing down around here. I'm running errands and then lunch then another appointment then hopefully I can look at some changes in like 3 hours | 19:11 |
Clark[m] | mordred the issue was the python base images bumped to bullseye? | 19:15 |
fungi | well, the trigger was that the images updated to bullseye | 19:15 |
fungi | the problem was a couple of new and wondrous bugs that it exposed | 19:15 |
Clark[m] | Fun | 19:15 |
mordred | yeah. you'll love the solution ;) | 19:16 |
mordred | it's nice and generally opaque | 19:16 |
fungi | segfaults during package installation are always a sign your day is looking up | 19:23 |
*** sshnaidm is now known as sshnaidm|afk | 19:32 | |
opendevreview | Shnaidman Sagi (Sergey) proposed openstack/diskimage-builder master: Add building for CentOS 9 https://review.opendev.org/c/openstack/diskimage-builder/+/805848 | 19:37 |
mordred | fungi: oh - should I rebase that backports change on master - and then also update the matrix-eavesdrop dockerfile to appropriately drop the buster-backports line? | 20:06 |
corvus | mordred: are the updated opendev images published -- or will the first bullseye publication be after 800318 merges? | 20:07 |
mordred | yeah - I think so - we reference a few different buster things in there | 20:07 |
mordred | corvus: after | 20:07 |
corvus | mordred: ack. also, i had just approved 800318, but based on your rebase comment i have downgraded that to a +2 | 20:08 |
corvus | but let me know if you want me to put the +w back | 20:08 |
mordred | cool. I'll rebase, then push up a modified patch (for ease of review) | 20:08 |
fungi | mordred: you could do it as a separate change, but i expect any builds of the matrix-eavesdrop image in the interim may break | 20:08 |
mordred | or - I suppose I could just do a followup change to update the matrix-eavesdrop stuff | 20:09 |
mordred | yeah | 20:09 |
mordred | just approving what's there now and doing a followup will waste fewer nodes | 20:09 |
corvus | +3 restored | 20:09 |
mordred | woot | 20:10 |
opendevreview | Monty Taylor proposed opendev/system-config master: Update matrix-eavesdrop for bullseye https://review.opendev.org/c/opendev/system-config/+/806269 | 20:12 |
mordred | corvus, fungi : ^^ used depends-on to stack it :) | 20:13 |
opendevreview | Merged opendev/system-config master: Add backports repos to base and builder images https://review.opendev.org/c/opendev/system-config/+/800318 | 20:37 |
fungi | #status log Used rmlist to remove the openstack-i18n-de mailing list from lists.openstack.org now that https://review.opendev.org/805646 has merged | 20:44 |
opendevstatus | fungi: finished logging | 20:44 |
mordred | corvus: :sadpanda: system-config-promote-image-python-base-3.8 https://zuul.opendev.org/t/openstack/build/f429f1990e7f405aab9dd458bd0529bc : FAILURE in 58s | 21:41 |
mordred | corvus: got a 500 from dockerhub on the promote: https://zuul.opendev.org/t/openstack/build/f429f1990e7f405aab9dd458bd0529bc/log/job-output.txt#94-118 | 21:42 |
corvus | mordred: ugh, we've been getting more of those lately. good news is that it was the cleanup phase, so it should actually be published | 21:43 |
corvus | mordred: i don't think you need to do anything and you can just ignore it | 21:44 |
mordred | oh cool | 21:53 |
mordred | I think this: https://review.opendev.org/c/opendev/system-config/+/806269 is good to go now then | 21:53 |
corvus | off it goes :) | 21:54 |
mordred | woohoo | 21:56 |
ianw | if there's a tl;dr and i can help with the images/arm64 issues etc. LMN | 22:12 |
mordred | ianw: I think we got it sorted: https://review.opendev.org/c/zuul/zuul-jobs/+/806057 | 22:15 |
mordred | ianw: tl;dr is - upstream python image is based on debian - and updated from buster to bullseye - which broke something in qemu/binfmt support | 22:16 |
mordred | waves hands furiously | 22:16 |
mordred | so - updating the binfmt config support image to use qemu-*-static instead fixed that - the rest was just followups to bindep versions | 22:17 |
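For reference, re-registering binfmt handlers with statically linked qemu typically looks like this (the image name here is the commonly used multiarch one, an assumption rather than the exact image from the change):

    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes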
*** dviroel|ruck is now known as dviroel|out | 22:18 | |
ianw | excellent. well hopefully another 2 years of stable operation :) | 22:18 |
fungi | ianw: https://review.opendev.org/806050 is mirror related and could use a look | 22:26 |
clarkb | I just +2'd it | 22:27 |
ianw | sure, i feel like we got leaseweb fixed before via a contact, can't remember | 22:28 |
clarkb | ya it is logan- related iirc | 22:29 |
ianw | that sounds right :) | 22:30 |
fungi | oh, is leaseweb a.k.a. limestone? | 22:31 |
clarkb | I don't know if it is aka/== but there is a relationship there iirc | 22:31 |
fungi | anyway, the "official" opensuse mirrors list is a minefield of incomplete and outdated mirrors, so the one i picked was a compromise of having what we needed, being up to date, and not being too far from afs01.dfw | 22:32 |
fungi | where "not too far" is defined as "on the same continent" | 22:33 |
opendevreview | Merged opendev/system-config master: Update matrix-eavesdrop for bullseye https://review.opendev.org/c/opendev/system-config/+/806269 | 22:34 |
ianw | hah in .au we consider "not too far" as "only crosses 1 or 2 oceans" :) | 22:34 |
fungi | "same hemisphere" (latitudinally or longitudinally? you may only pick one) | 22:35 |
fungi | from australia you get a choice of mirrors in siberia or chile | 22:36 |
opendevreview | Merged opendev/system-config master: Update OpenSUSE mirror source https://review.opendev.org/c/opendev/system-config/+/806050 | 22:57 |
fungi | once deploy is done i'll release the flock for that | 23:09 |
mordred | I choose to believe that clarkson.edu is somehow related to clarkb | 23:16 |