opendevreview | Steve Baker proposed openstack/diskimage-builder master: Move grubenv to EFI dir https://review.opendev.org/c/openstack/diskimage-builder/+/804000 | 02:12 |
opendevreview | Steve Baker proposed openstack/diskimage-builder master: Support grubby and the Bootloader Spec https://review.opendev.org/c/openstack/diskimage-builder/+/804002 | 02:12 |
opendevreview | Steve Baker proposed openstack/diskimage-builder master: RHEL/Centos 9 does not have package grub2-efi-x64-modules https://review.opendev.org/c/openstack/diskimage-builder/+/804816 | 02:12 |
opendevreview | Steve Baker proposed openstack/diskimage-builder master: Add policycoreutils package mappings for RHEL/Centos 9 https://review.opendev.org/c/openstack/diskimage-builder/+/804817 | 02:12 |
opendevreview | Steve Baker proposed openstack/diskimage-builder master: Add reinstall flag to install-packages, use it in bootloader https://review.opendev.org/c/openstack/diskimage-builder/+/804818 | 02:12 |
opendevreview | Steve Baker proposed openstack/diskimage-builder master: Add DIB_YUM_REPO_PACKAGE as an alternative to DIB_YUM_REPO_CONF https://review.opendev.org/c/openstack/diskimage-builder/+/804819 | 02:12 |
opendevreview | Steve Baker proposed openstack/diskimage-builder master: Use non-greedy modifier for SUBRELEASE grep https://review.opendev.org/c/openstack/diskimage-builder/+/805545 | 02:12 |
*** ysandeep|away is now known as ysandeep | 06:32 | |
*** sshnaidm|afk is now known as sshnaidm | 06:34 | |
*** jpena|off is now known as jpena | 07:37 | |
*** rpittau|afk is now known as rpittau | 07:51 | |
gthiemonge | Hi Folks, opendevreview disconnected from #openstack-lbaas 2 days ago and still hasn't reappeared | 07:56 |
opendevreview | Oleg Bondarev proposed openstack/project-config master: Update grafana to reflect dvr-ha job is now voting https://review.opendev.org/c/openstack/project-config/+/805594 | 08:08 |
*** ykarel is now known as ykarel|lunch | 08:23 | |
opendevreview | Jing Li proposed openstack/diskimage-builder master: Add new element rocky https://review.opendev.org/c/openstack/diskimage-builder/+/805601 | 08:27 |
*** ykarel|lunch is now known as ykarel | 09:36 | |
opendevreview | Jing Li proposed openstack/diskimage-builder master: Add new element rocky https://review.opendev.org/c/openstack/diskimage-builder/+/805609 | 09:50 |
*** ykarel is now known as ykarel|afk | 10:53 | |
*** arxcruz|off is now known as arxcruz | 11:01 | |
*** dviroel|out is now known as dviroel|ruck | 11:31 | |
*** jpena is now known as jpena|lunch | 11:33 | |
*** jpena|lunch is now known as jpena | 12:29 | |
opendevreview | sean mooney proposed openstack/project-config master: Add review priority label to nova deliverables https://review.opendev.org/c/openstack/project-config/+/787523 | 12:46 |
opendevreview | sean mooney proposed openstack/project-config master: Add review priority label to nova deliverables https://review.opendev.org/c/openstack/project-config/+/787523 | 12:47 |
fungi | gthiemonge: it doesn't stay in channels indefinitely; it joins and parts from them in a sort of lru order, based on whichever channels it needs to announce things in next, so it can stay under the network's 120 max joined channel limit | 12:55 |
fungi | was there a change uploaded which should have been announced in #openstack-lbaas in the past two days and wasn't? | 12:55 |
gthiemonge | fungi: oh i didn't know that! I think the activity was low during the weekend | 12:56 |
fungi | if you notice something get pushed which should have been announced and isn't, let us know and we can check logs to see if something broke, but it's been continuing to announce changes in here so i think it's still working normally | 12:58 |
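fungi's description above, taken at face value, amounts to a least-recently-used rotation of joined channels. A minimal Python sketch of that idea follows; the class, the `irc` client object, and its `join()`/`part()`/`privmsg()` methods are illustrative assumptions, not the actual opendevreview bot code.

```python
from collections import OrderedDict

MAX_JOINED = 120  # network-imposed limit on simultaneously joined channels


class ChannelLRU:
    """Track joined channels, parting the least recently used one
    when a new channel is needed and the join limit is reached."""

    def __init__(self, irc):
        self.irc = irc               # assumed IRC client with join()/part()/privmsg()
        self.joined = OrderedDict()  # channel name -> None, in LRU order

    def announce(self, channel, message):
        if channel in self.joined:
            # Mark the channel as recently used.
            self.joined.move_to_end(channel)
        else:
            if len(self.joined) >= MAX_JOINED:
                # Part the channel that has gone longest without an announcement.
                oldest, _ = self.joined.popitem(last=False)
                self.irc.part(oldest)
            self.irc.join(channel)
            self.joined[channel] = None
        self.irc.privmsg(channel, message)
```

This is why a quiet channel like #openstack-lbaas can appear to lose the bot for a couple of days: it simply has not been the target of an announcement recently.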
opendevreview | Ananya proposed opendev/elastic-recheck rdo: Fix ER bot to report back to gerrit with bug/error report https://review.opendev.org/c/opendev/elastic-recheck/+/805638 | 13:10 |
opendevreview | sean mooney proposed openstack/project-config master: Add review priority label to nova deliverables https://review.opendev.org/c/openstack/project-config/+/787523 | 13:20 |
opendevreview | Jing Li proposed openstack/diskimage-builder master: Add new element rocky https://review.opendev.org/c/openstack/diskimage-builder/+/805639 | 13:27 |
opendevreview | Jing Li proposed openstack/diskimage-builder master: Add new element rocky https://review.opendev.org/c/openstack/diskimage-builder/+/805641 | 13:28 |
opendevreview | sean mooney proposed openstack/project-config master: Add review priority label to nova deliverables https://review.opendev.org/c/openstack/project-config/+/787523 | 13:34 |
opendevreview | Andreas Jaeger proposed opendev/system-config master: Retire openstack-i18n-de mailing list https://review.opendev.org/c/opendev/system-config/+/805646 | 13:55 |
clarkb | fungi: I don't know why these things occur to me on Monday mornings immediately before actually starting work but suddenly I was concerned that our test lists.kc.io was also backing up to the backup server with the same target backup location as prod. Turns out prod doesn't have backups so this is a "non issue". | 14:53 |
clarkb | fungi: I think that enabling backups will be simpler once we have finished the upgrade (we'll be sure to do a snapshot though) | 14:53 |
fungi | yeah, sounds good | 14:53 |
clarkb | otherwise we'll have to ensure the borg virtualenv install stuff is redone post upgrade (will have to sort that out on lists.o.o) | 14:53 |
clarkb | Anyway just added that to the checklist of things | 14:53 |
fungi | alternatively we could plan to move the lists.k.i site onto lists.o.o sooner | 14:54 |
fungi | since they'll be in sync as far as what version of software they're running | 14:54 |
clarkb | ya, though it serves as a good in place upgrade first platform | 14:54 |
fungi | right, i mean after we upgrade both | 14:54 |
clarkb | oh ya ++ | 14:54 |
fungi | but only needing to back up one server would be preferable | 14:55 |
fungi | we'll also want to let them run for a bit before we do that anyway so we can make sure we're not more resource-constrained after the upgrade | 14:56 |
fungi | as we've seen mailman processes get oom-killed on a semi-regular basis | 14:56 |
fungi | (that may not be due to an under-sized server, might have been a runaway memory leak or similar) | 14:57 |
clarkb | good point | 14:57 |
clarkb | Ok time for breakfast and watering the garden now that I don't have to fix up backups immediately. | 15:02 |
clarkb | infra-root landing https://review.opendev.org/c/opendev/system-config/+/805471 today to get new gerrit point release image published that we can upgrade to would be nice | 15:02 |
clarkb | https://review.opendev.org/c/opendev/system-config/+/805243 is half related to that in that it will test upgrading from our 3.2 image to our 3.3 image. Both of them get updated in the previous change | 15:03 |
*** dviroel|ruck is now known as dviroel|ruck|lunch | 15:17 | |
*** jpena is now known as jpena|off | 15:36 | |
mordred | clarkb: that looks good - any reason I shouldn't land it? | 15:43 |
Clark[m] | I think they should both be landable | 15:47 |
*** ysandeep is now known as ysandeep|dinner | 15:51 | |
*** rpittau is now known as rpittau|afk | 15:58 | |
*** ysandeep|dinner is now known as ysandeep | 16:10 | |
clarkb | Wednesday will be the three-weeks-after-disabling-accounts mark for the last batch of gerrit account cleanups. This means I'll plan to do the second step (which is far less reversible) either Wednesday or Thursday to complete that batch | 16:12 |
clarkb | Then the real fun begins as we'll have ~30 accounts that we can do surgery on after communicating with their owners | 16:13 |
*** dviroel|ruck|lunch is now known as dviroel|ruck | 16:19 | |
clarkb | Zuul is busy this morning. I guess we're seeing OpenStack's feature freeze build up | 16:35 |
fungi | in positive news, that means we'll get a good scale test of recent zuul/nodepool development | 16:35 |
clarkb | fungi: Looking at my todo list realistically I'm not sure where time has gone and don't think I'll be able to do the thing for ricolin_ and OID. Do you think that may still be salvageable? I feel like if it were a month later I'd have all the time in the world but with school starting up and trying to get some of my backlog completed it seems undoable in the current timeline | 16:36 |
fungi | no, i'm in the same boat you are. unanticipated (as well as poorly-anticipated) tasks have jumped up my todo list | 16:37 |
clarkb | alright I'll write ricolin_ an email to follow up on that | 16:45 |
fungi | thanks, it's unfortunate and i'm sorry it looks like i won't have time for it | 16:56 |
opendevreview | Merged opendev/system-config master: Update Gerrit images to most recent releases https://review.opendev.org/c/opendev/system-config/+/805471 | 16:58 |
opendevreview | Merged opendev/system-config master: Test a gerrit 3.2 -> 3.3 upgrade https://review.opendev.org/c/opendev/system-config/+/805243 | 17:01 |
opendevreview | Clark Boylan proposed opendev/system-config master: Preserve zuul executor SIGTERM behavior https://review.opendev.org/c/opendev/system-config/+/800315 | 17:04 |
clarkb | corvus: ^ updated zuul executor sigterm config change | 17:04 |
corvus | clarkb: thanks! | 17:08 |
clarkb | infra-root restarting gerrit to pick up https://review.opendev.org/c/opendev/system-config/+/805471 is probably a good idea. Maybe early afternoon my time when hopefully zuul has caught up a bit? | 17:29 |
fungi | #status log ze04 was rebooted at 02:16 UTC today due to a hypervisor host outage in that provider, but appears to be running normally | 17:29 |
fungi | mmm, looks like we've lost openstackstatus | 17:29 |
clarkb | I wonder if the dns issues on that executor are related | 17:30 |
fungi | er, opendevstatus i mean | 17:30 |
fungi | 2021-08-21 17:12:58 <-- opendevstatus (~opendevst@104.239.144.232) has quit (Remote host closed the connection) | 17:30 |
fungi | looks like we updated /etc/statusbot/statusbot.config and the container was restarted at that time | 17:31 |
clarkb | couldn't reconnect for some reason? | 17:32 |
fungi | dunno, statusbot_debug.log also ends at the same time | 17:33 |
fungi | i'll try manually restarting the container again | 17:33 |
fungi | it's joining channels now | 17:34 |
fungi | maybe something happened when ansible tried to restart it | 17:34 |
fungi | docker-compose logs was empty though | 17:34 |
fungi | seems to be done | 17:36 |
fungi | #status log Restarted the statusbot container on eavesdrop01 just now, since it seemed to not start cleanly immediately following a configuration update at 2021-08-21 17:12 | 17:36 |
opendevstatus | fungi: finished logging | 17:36 |
fungi | #status log ze04 was rebooted at 02:16 UTC today due to a hypervisor host outage in that provider, but appears to be running normally | 17:36 |
opendevstatus | fungi: finished logging | 17:36 |
opendevreview | Clark Boylan proposed opendev/system-config master: Upgrade gitea to 1.15.0 https://review.opendev.org/c/opendev/system-config/+/803231 | 17:53 |
opendevreview | Clark Boylan proposed opendev/system-config master: DNM force gitea failure for interaction https://review.opendev.org/c/opendev/system-config/+/800516 | 17:53 |
clarkb | gitea 1.15.0 exists now. I'm going to put a hold on the jobs running against 800516 for human verification. In particular we want to ensure the image files are hosted as gerrit and paste expect them on top of any other gitea verification | 17:54 |
clarkb | oh I should also double check the template files have no delta between rc3 and the release | 17:57 |
clarkb | yup no template deltas for the templates we update | 17:58 |
*** timburke__ is now known as timburke | 18:14 | |
*** ysandeep is now known as ysandeep|away | 19:05 | |
clarkb | https://23.253.56.223:3081/opendev/system-config LGTM. https://23.253.56.223:3081/assets/img/opendev-sm.png is being served as expected too; that image is consumed by the gerrit theme, though not at that url. I don't know if updating the url in the gerrit theme (which https://review.opendev.org/c/opendev/system-config/+/803231 does do) will require we restart gerrit | 19:45 |
clarkb | also the openstack gate queues are getting longer. Looks like they have had some resets :/ | 19:45 |
*** slaweq is now known as slaweq_ | 19:50 | |
clarkb | I'd still like to do the gerrit restart in about an hour or two. I don't think we should combine the gitea and gerrit updates | 19:52 |
clarkb | Much better to do the gerrit update and address any issues that might come up and then sort out gitea later even if gerrit needs a restart to see the new url for the theme | 19:52 |
fungi | i concur | 19:53 |
fungi | and i should be around, though i might be mid-dinner depending on exactly what time | 19:53 |
clarkb | I'm happy to be flexible if there is a time that works best for you. I'm finishing up some lunch now but should be around the rest of the afternoon | 19:54 |
clarkb | I was also hoping the gate queues would go in the opposite direction before restarting gerrit but now worry there may not be enough hours in the day (or week) for that :) | 19:56 |
fungi | they have shrunk a bit in the last few minutes | 20:02 |
clarkb | I've done a quick update to our meeting agenda. Anything else to add to that before I sned it out? | 21:06 |
fungi | nothink i can thing of, sned away! | 21:24 |
clarkb | snet! | 21:27 |
clarkb | Alright, once the current change at the tip of openstack's gate queue lands we should have about 15 minutes to restart gerrit | 21:28 |
clarkb | infra-root ^ is now a good time to do that? | 21:28 |
fungi | standing by | 21:28 |
fungi | good with me | 21:28 |
clarkb | ok I'll run the pull now | 21:28 |
fungi | opendevorg/gerrit 3.2 4a2af078eb62 5 hours ago 810MB | 21:31 |
fungi | that's the one we want i guess | 21:31 |
clarkb | yup | 21:31 |
clarkb | the mariadb image updated as well, so I'll do a full down then up -d | 21:31 |
fungi | we also ended up with a new mariadb, yeah saw that | 21:31 |
fungi | mariadb 10.4 59f9f97d14ce 2 weeks ago 400MB | 21:31 |
clarkb | the change I'm waiting on is still doing its thing | 21:32 |
clarkb | there is also a release-post job | 21:32 |
clarkb | I should warn the release team I guess | 21:32 |
fungi | good thinkin | 21:33 |
clarkb | ok looks like I have a window of about 6 minutes | 21:40 |
clarkb | fungi: you good for me to down then up -d now? | 21:40 |
fungi | go for it | 21:40 |
fungi | that should be enough | 21:40 |
fungi | the next change merging is an approximate time anyway | 21:40 |
clarkb | done | 21:40 |
fungi | could wrap up its last job in the next few seconds, you never know | 21:40 |
fungi | want me to send a status notice? | 21:40 |
clarkb | sure | 21:41 |
clarkb | the web ui loads for me again | 21:41 |
fungi | #status notice The Gerrit service on review.opendev.org has been restarted for a patch version upgrade, resulting in a brief outage | 21:41 |
opendevstatus | fungi: sending notice | 21:41 |
clarkb | and shows the expected version | 21:41 |
fungi | good, good | 21:41 |
-opendevstatus- NOTICE: The Gerrit service on review.opendev.org has been restarted for a patch version upgrade, resulting in a brief outage | 21:41 | |
fungi | also a good test of the status notices for matrix i suppose | 21:41 |
clarkb | do we have that working? | 21:42 |
fungi | no idea, finding out now | 21:42 |
clarkb | I wish I understood these java jvm log rotation errors so that I could clean them up | 21:43 |
clarkb | I suppose we could stop recording a jvm log entirely now that we're not under memory pressure | 21:43 |
fungi | yeah, i don't know that we need it now, and we can always turn it back on again if we decide we do | 21:56 |
zigo | Hi! Looks like I haven't received any announcements from release-announce since the 15th of June. | 21:58 |
zigo | fungi: Could you check? | 21:58 |
clarkb | zigo: did you double check that you are subscribed and that the emails aren't getting sent to spam or similar? | 21:59 |
zigo | clarkb: I just did try to re-register, and mailman sent me a mail saying I was already registered. | 21:59 |
zigo | clarkb: It's my own mail server, and I well know my antispam settings are kind of super nice for spammers ... :P | 22:00 |
zigo | (ie: half of the spam can go through... except the *very* obvious spams) | 22:00 |
zigo | clarkb: I don't think it's possible that *all* of the release-announce emails would go to spam... | 22:01 |
* zigo checks anyways | 22:01 | |
clarkb | zigo: 451 Greylisted | 22:01 |
zigo | clarkb: Nothing wrong with that, is there? | 22:01 |
zigo | 451 means: please retry later ... | 22:02 |
zigo | clarkb: I confirm: a quick grep in my .SPAM folder shows no release-announce ... | 22:02 |
clarkb | looks like it does eventually retry and deliver at least some of those 451 error'd messages | 22:03 |
clarkb | I'm not sure how to find release-announce specific emails in the exim logs. I guess I need to find their entries in the mailman log somewhere first | 22:03 |
zigo | clarkb: If I'm not mistaken, I'm registered with zigo@debian.org, so it should go through muffat.debian.org / mailly.debian.org, which I have whitelisted in my postfix setup. | 22:04 |
clarkb | zigo: disabling due to bounce score 5.0 >= 5.0 | 22:04 |
zigo | ?!? | 22:04 |
clarkb | that happened on june 29 | 22:04 |
fungi | can also check mailman's unsub/bounce logs to see if the address got bounce-disabled | 22:04 |
fungi | or check the subscription status | 22:04 |
fungi | aha, you beat me to it | 22:04 |
zigo | clarkb: Can you revert this? | 22:04 |
zigo | My mail server doesn't bounce... :) | 22:05 |
fungi | zigo: even you can undo it, just have to use the mailman webui to adjust your subscription preferences | 22:05 |
clarkb | then in july it says openstack-announce: zigo@debian.org has stale bounce info, resetting | 22:05 |
clarkb | but I guess it didn't undo the disable just reset the county? | 22:05 |
clarkb | *counter | 22:05 |
zigo | fungi: To do that, I have to set a password in mailman, right? | 22:05 |
fungi | unless you recorded it | 22:06 |
zigo | I haven't set one... :/ | 22:06 |
fungi | but yeah, we can undo it for you too | 22:06 |
clarkb | I guess if the greylisting persists long enough then eventually we give up and count it as a bounce? | 22:06 |
clarkb | so in that case it is an issue | 22:06 |
zigo | fungi: That'd be nice, thanks. | 22:07 |
zigo | I'm ok-ish not having the backlog from last June. | 22:07 |
zigo | But usually, the release-announce mails are very helpful, close to a release. | 22:07 |
zigo | (ie: I can track what package I missed...) | 22:07 |
clarkb | sure, but we don't decide when you bounce us right? | 22:08 |
zigo | :) | 22:08 |
zigo | I don't bounce !!! | 22:08 |
zigo | I only greylist ... | 22:08 |
clarkb | mailman disagrees | 22:08 |
clarkb | right but if an email fails to deliver for $time eventually it is treated as a bounce ? fungi can confirm that | 22:08 |
zigo | Let me see if I can add more stuff in my Postfix's "my-network" ... | 22:09 |
fungi | it would have to be a long time | 22:09 |
fungi | basically the server would need to retry for days, i doubt the greylisting persists that long per delivery | 22:09 |
clarkb | looks like the counter only increments once per day | 22:09 |
fungi | yeah, multiple bounces in a day don't add more than one bounce point | 22:10 |
fungi | and then the bounce points decay over time too | 22:10 |
clarkb | looks like there were 12 bounces over 5 days | 22:10 |
clarkb | and that is what tripped it | 22:10 |
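The behaviour clarkb and fungi describe here (at most one bounce point per day, bounce info that is reset once it goes stale, and delivery disabled when the score reaches 5.0) can be modelled roughly as below. This is a simplified sketch based only on the numbers quoted in this log; the stale-info window is an assumption, and Mailman's real bounce processor is more involved.

```python
from datetime import date, timedelta

BOUNCE_SCORE_THRESHOLD = 5.0                  # "disabling due to bounce score 5.0 >= 5.0"
BOUNCE_INFO_STALE_AFTER = timedelta(days=7)   # assumed decay window, for illustration only


class Subscription:
    def __init__(self, address):
        self.address = address
        self.score = 0.0
        self.last_bounce = None
        self.delivery_enabled = True

    def register_bounce(self, today=None):
        today = today or date.today()
        # Multiple bounces on the same day only count once.
        if self.last_bounce == today:
            return
        # Stale bounce info is reset before counting the new bounce.
        if self.last_bounce and today - self.last_bounce > BOUNCE_INFO_STALE_AFTER:
            self.score = 0.0
        self.last_bounce = today
        self.score += 1.0
        if self.score >= BOUNCE_SCORE_THRESHOLD:
            self.delivery_enabled = False
```

Under this model, greylist-induced bounces on five distinct days are enough to reach the threshold and suspend delivery, which is consistent with the pattern clarkb found in the logs.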
fungi | but we only have something like 10 days of mta log retention, so not much chance of getting details from when it transpired | 22:10 |
zigo | I confirm that I have both debian MX in my whitelist, so I don't get why this is happening. | 22:10 |
clarkb | oh sorry it was over a longer period june 21 to 29 | 22:11 |
zigo | My greylisting is set at 600 seconds I believe. | 22:11 |
zigo | Ok, so now you've reset the counter, it should be all good to go, no? :) | 22:12 |
clarkb | then on july 12 the bounce score had gone back down to 1.0 and on July 29 it was reset entirely. No new bounces since | 22:12 |
clarkb | zigo: I don't know. the counter reset on the 29th of july and there have been emails since. | 22:12 |
clarkb | I think we have to tell mailman to undisable your account | 22:12 |
zigo | Oh, got an idea: maybe it stopped when the lists where in spamhaus ? | 22:13 |
fungi | yes, it can be adjusted on the subscription. an admin can either do it on the subscribers list in the webui or via the cli | 22:13 |
zigo | Because then, effectively, it has bounced ... | 22:13 |
fungi | zigo: yes, that seems likely to have been the cause | 22:13 |
clarkb | fungi: but zigo can do it too right? | 22:13 |
fungi | clarkb: yeah, he'd have to set a password and then log in | 22:13 |
zigo | How do I set a password for my account if it's already registered? | 22:14 |
fungi | same place | 22:14 |
zigo | Oh, I just enter my addr, and set a password? | 22:14 |
fungi | well, enter your addr and ask it to send you one, yes | 22:14 |
fungi | http://lists.openstack.org/cgi-bin/mailman/options/release-announce | 22:15 |
fungi | zigo: the password reminder at the bottom | 22:15 |
zigo | Ok, thanks ! | 22:16 |
zigo | Hopefully, it will send me a password even if it thought my address bounces? :) | 22:16 |
clarkb | we can find out :) | 22:16 |
zigo | Got it ! :) | 22:17 |
fungi | it should, the bounce outcome is merely to suspend delivery for your subscription | 22:17 |
zigo | I'm in. | 22:18 |
zigo | My account was disabled indeed. | 22:18 |
zigo | I mean, receiving. | 22:18 |
zigo | Thanks a lot guys, this was very helpful. | 22:18 |
fungi | you're welcome, and sorry about the disruption | 22:19 |
clarkb | fungi: the ipv4 addr was never added to spamhaus right? maybe we should force ipv4 after all | 22:20 |
zigo | Not your fault, IMO. | 22:20 |
clarkb | and ya it got added by rackspace in a bulk block iirc | 22:20 |
clarkb | or someone added rackspaces bulk ipv6 block | 22:20 |
fungi | clarkb: well, the v4 block was in the policy blocklist, and we filed an exclusion for it with spamhaus | 22:21 |
clarkb | fungi: recently? | 22:21 |
fungi | a while back | 22:21 |
clarkb | got it. The recent thing was ipv6 specific. We didn't have an exclusion preexisting I guess? | 22:21 |
clarkb | then when the block was added we got hit? | 22:21 |
zigo | I've long disabled IPv6 on mail servers: it's sadly more trouble than help. | 22:21 |
fungi | the recent v6 block was i think spamhaus assuming all addresses in a /64 are controlled by the same entity | 22:21 |
fungi | which isn't the case in service providers who assign individual /128 addresses instead of networks | 22:22 |
zigo | fungi: Not only spamhaus does this. At least Google does that too. | 22:22 |
clarkb | aha, making assumptions about ipv6 allocations that don't hold in the real world | 22:23 |
zigo | They all base reputation on /64 blocks, which indeed is silly, but there's little one can do about it. | 22:23 |
fungi | the v6 one wasn't the pbl, it was apparently that they added the entire /64 after some address in it started trying to deliver e-mail with a helo of "localhost.localdomain" | 22:23 |
fungi | (wasn't us) | 22:23 |
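The /64 aggregation being discussed can be demonstrated with Python's ipaddress module. The snippet below only illustrates why unrelated customers holding individual /128 addresses from the same provider network end up sharing one reputation bucket; it is not anything Spamhaus or Google publishes, and the addresses are documentation examples.

```python
import ipaddress


def reputation_key(addr: str) -> str:
    """Return the /64 network an IPv6 address falls in, which is the
    granularity many reputation systems reportedly score at."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 6:
        return str(ipaddress.ip_network(f"{ip}/64", strict=False))
    return str(ip)  # IPv4 reputation is typically tracked per address


# Two unrelated tenants given /128 addresses out of the same provider /64
# share a single reputation bucket:
print(reputation_key("2001:db8:0:1::10"))    # 2001:db8:0:1::/64
print(reputation_key("2001:db8:0:1::beef"))  # 2001:db8:0:1::/64 (same bucket)
```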
fungi | workaround would be to get a global /64 for a neutron network i suppose and put a new listserver in it | 22:24 |
clarkb | or force ipv4 | 22:24 |
fungi | sure | 22:24 |
clarkb | I think we may already do that for google because they don't trust any mail over v6 | 22:24 |
fungi | i doubt there are any v6-only mailservers anyway, so we should be able to assume not using v6 wouldn't cut out deliveries for any subscribers | 22:24 |
clarkb | hrm, that is a good point though; that is the risk of going with that workaround | 22:25 |
zigo | .oO(The Internet would have been a lot nicer if there were a way for a v6 address to communicate with a v4) | 22:30 |
fungi | the problem there is on the v4 side. there's a standard way to encode a v4 address as v6, but no v4-only system is going to be able to communicate outside the v4 address space without some way to map/translate v6 addresses into v4 | 22:36 |
*** dviroel|ruck is now known as dviroel|out | 22:37 | |
clarkb | re the gerrit restart, it going so quickly makes me much more confident that we can land https://review.opendev.org/c/opendev/system-config/+/803231 and just restart gerrit to pick up the theme change if necessary. That means the focus goes back to ensuring the deployed gitea is functional. The test instance is up at https://23.253.56.223:3081/opendev/system-config | 22:38 |
clarkb | fungi: ianw: do one of you want to review https://review.opendev.org/c/opendev/system-config/+/800315 to update our executor configs to handle the change in sigterm behavior for zuul executors? | 22:39 |
clarkb | the depends on change has merged. The current running zuul should ignore the extra config entry then once we've restarted in the future we want to have it there to avoid graceful stops unless explicitly requested | 22:40 |
ianw | i feel like i looked at that, did it change? | 22:40 |
clarkb | ianw: yes, zuul switched from using an env var to a config file entry to select the behavior | 22:40 |
clarkb | ps1 handled the env var; ps2 does the current config file selector | 22:41 |
ianw | ah, right. cool, lgtm | 22:42 |
clarkb | thanks | 22:43 |
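For context, the pattern being discussed (selecting SIGTERM behaviour from a config file entry rather than an environment variable) can be sketched as follows. The section and option names are illustrative assumptions and this is not Zuul's actual implementation, just the general shape of the change.

```python
import configparser
import signal
import sys


def install_sigterm_handler(config_path):
    """Pick the SIGTERM behaviour from a config file entry instead of an
    environment variable (option name here is illustrative only)."""
    config = configparser.ConfigParser()
    config.read(config_path)
    method = config.get("executor", "sigterm_method", fallback="graceful")

    def graceful(signum, frame):
        # Finish running jobs, accept no new ones, then exit.
        print("SIGTERM: stopping gracefully")
        sys.exit(0)

    def immediate(signum, frame):
        # Abort running jobs and exit right away.
        print("SIGTERM: stopping immediately")
        sys.exit(1)

    signal.signal(signal.SIGTERM, graceful if method == "graceful" else immediate)
```

The advantage of the config-file approach is that the running service can be told which behaviour to use without re-exporting environment variables into its container or unit file.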
stevebaker | clarkb, ianw: hey it looks like the openeuler mirror is missing some sqlite.bz2 files, which is breaking dib jobs https://zuul.opendev.org/t/openstack/build/24c013fc5ee049379f968c59f43c74d2/log/logs/openeuler-minimal_20.03-LTS-SP2-build-succeeds.FAIL.log#296 | 23:14 |
stevebaker | http://mirror.mtl01.inap.opendev.org/openeuler/openEuler-20.03-LTS-SP2/update/x86_64/repodata/ | 23:14 |
stevebaker | https://repo.openeuler.org/openEuler-20.03-LTS-SP2/update/x86_64/repodata/ | 23:14 |
ianw | stevebaker: hrm, thanks for pointing out, let's see ... | 23:14 |
stevebaker | ianw: thanks. Also fedora mirrors sometimes break dib-functests-bionic-python3 without this fix https://review.opendev.org/c/openstack/diskimage-builder/+/805545 | 23:15 |
ianw | https://static.opendev.org/mirror/logs/rsync-mirrors/openeuler.log looks like it is correctly mirroring | 23:16 |
clarkb | looks like those updates are from today. But I'm not sure what timezone the timestamp belongs to or if that is 24 or 12 hour time :/ | 23:16 |
clarkb | if it is 12 hour time and we assume a utc timezone the update would've been from an hour and a half ago? | 23:16 |
stevebaker | it was failing yesterday too fwiw | 23:17 |
clarkb | also with the rsync updates we rely on them updating files "safely" | 23:18 |
clarkb | with debuntu we have reprepro producing complete mirrors each time we publish but with rpm repos we rely on the publishers not updating out of sequence | 23:18 |
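A crude way to detect the kind of breakage stevebaker reported (repomd.xml pointing at *-sqlite.bz2 files the mirror does not actually serve) is to verify that every file referenced by a repo's repomd.xml is retrievable. The helper below is a hypothetical sketch along those lines and is not part of opendev's mirror tooling.

```python
import urllib.error
import urllib.request
import xml.etree.ElementTree as ET

NS = {"repo": "http://linux.duke.edu/metadata/repo"}


def check_repo(base_url):
    """Fetch repodata/repomd.xml and HEAD every file it references,
    returning any that the mirror fails to serve."""
    repomd = urllib.request.urlopen(f"{base_url}/repodata/repomd.xml").read()
    root = ET.fromstring(repomd)
    missing = []
    for location in root.iterfind(".//repo:location", NS):
        href = location.get("href")  # e.g. repodata/<hash>-primary.sqlite.bz2
        req = urllib.request.Request(f"{base_url}/{href}", method="HEAD")
        try:
            urllib.request.urlopen(req)
        except urllib.error.HTTPError as exc:
            missing.append((href, exc.code))
    return missing


# Hypothetical usage: compare the opendev mirror against an upstream repo.
# for url in ("http://mirror.mtl01.inap.opendev.org/openeuler/openEuler-20.03-LTS-SP2/update/x86_64",
#             "https://repo.openeuler.org/openEuler-20.03-LTS-SP2/update/x86_64"):
#     print(url, check_repo(url))
```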
ianw | it looks like we're in sync with https://ru-repo.openeuler.org/openEuler-20.03-LTS-SP2/update/x86_64/repodata/ | 23:19 |
ianw | which looks like it is out of sync with "repo.openeuler.org" | 23:19 |
clarkb | http://ru-repo.openeuler.org/openEuler-20.03-LTS-SP2/update/x86_64/repodata/ oh yup you just found that | 23:19 |
stevebaker | should repo.openeuler.org be the source instead of ru-repo? | 23:20 |
ianw | ... maybe? | 23:21 |
clarkb | the issue was authentication | 23:21 |
clarkb | repo.openeuler.org requires authentication. We would prefer to not be a special mirror because we don't want people using our mirrors as actual distro mirrors | 23:22 |
ianw | yeah, not sure if that was fixed though | 23:22 |
clarkb | at least that was the situation when this was set up iirc. They pointed us at that mirror because we didn't need to auth iirc | 23:22 |
ianw | i think we need to reach out and if we can't resolve in a timely fashion we can override testing | 23:22 |
clarkb | ya we should be able to stop using our mirror as well? | 23:23 |
clarkb | though that might also be flaky depending on how things go | 23:23 |
ianw | yeah, tbh i think "this is turned off until mirror is fixed" is a better situation for everyone | 23:23 |
stevebaker | I just checked every mirror and ru-repo is the only one missing those files https://openeuler.org/zh/mirror/list/ | 23:24 |
clarkb | kevinz: ^ fyi | 23:24 |
fungi | i'm in favor of disabling those optional jobs until they have working mirrors | 23:24 |
ianw | xinliang has been the only contact via irc | 23:24 |
clarkb | ianw: oh I thought kevinz was involved too? | 23:24 |
ianw | i think so; xinliang doesn't appear signed in | 23:25 |
fungi | i mean, until they're back to having working mirrors | 23:25 |
ianw | we can try the admin@ address there too. i'll send a mail to that. in the mean time we can comment out the tests | 23:26 |
ianw | mail sent ... | 23:30 |
clarkb | thanks | 23:30 |
opendevreview | Merged opendev/system-config master: Preserve zuul executor SIGTERM behavior https://review.opendev.org/c/opendev/system-config/+/800315 | 23:34 |
clarkb | We should be ready to restart zuul again whenever that is appropriate for queues and the state of zuul things | 23:34 |
*** yoctozepto4 is now known as yoctozepto | 23:37 |