Monday, 2022-01-31

<opendevreview> Merged opendev/system-config master: Use versioned URL for Ubuntu Bionic
<opendevreview> Merged opendev/system-config master: Add openstack-skyline channel in statusbot/meetbot/logging
00:57 <fungi> finally! i started out my day approving that change ;)
01:06 <ianw> haha yak shaving of the highest order
<opendevreview> Merged opendev/system-config master: Use grafyaml container image
<opendevreview> Merged opendev/grafyaml master: Generate and use UID for accessing dashboards
03:46 <ianw> since ^ has merged, I might as well push the change that updates to use the latest upstream grafana container.  if it doesn't work, we can revert (if too hard to fix expediently)
03:49 <ianw> (merged and deployed, the current page has been synced with the new grafyaml)
<opendevreview> Merged openstack/diskimage-builder master: Fix openSUSE images and bump them to 15.3
04:08 <fungi> awesome, thanks!
<opendevreview> Merged opendev/system-config master: grafana: update to oss latest release
05:58 *** ysandeep|out is now known as ysandeep
06:02 *** marios is now known as marios|ruck
<opendevreview> Ian Wienand proposed opendev/system-config master: infra-prod-grafana: drop system-config-promote-image-grafana
06:15 <ianw> ^ that is why it didn't deploy
06:19 <ianw> ok, did it manually and lgtm.  all the dashboards are there, it's running 8.3.4 and the fonts look more ... fonty
06:24 <ianw> i'm not 100% sure the urls have stayed the same, i forgot to check
06:25 <ianw> however, from now on, they shouldn't ever change as we make a UID by hashing the title and explicitly set that
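A minimal sketch of the stable-UID idea described above (the exact hash and truncation grafyaml uses are assumptions here; Grafana limits UIDs to 40 characters):

```python
import hashlib

def dashboard_uid(title: str) -> str:
    # Hypothetical sketch: derive a deterministic dashboard UID from
    # the dashboard title so its /d/<uid>/ URL never changes across
    # re-uploads.  grafyaml's real hash and length may differ;
    # Grafana caps UIDs at 40 characters.
    return hashlib.sha256(title.encode("utf-8")).hexdigest()[:40]
```

Because the digest depends only on the title, re-uploading a dashboard yields the same UID, so existing links keep working.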
06:27 <ianw> the centos volume has run out of quota
06:28 <ianw> i think we were discussing this before
06:29 <ianw> afs01 has ~500gb free.  i think let's bump it by 100gb for now, this growth might just be the steady increase of 9-stream
06:30 <ianw> hrm, although 9-stream is a separate volume, actually
06:31 <ianw> i'll bump it to get it re-syncing for now, and then on my todo list is to investigate further
06:32 <ianw> I'm going to manually run with timeout
06:34 <ianw> i should be able to parse the recent history logs to see what made it grow when i have time
06:42 *** ysandeep is now known as ysandeep|brb
07:02 *** amoralej|off is now known as amoralej
07:22 *** ysandeep|brb is now known as ysandeep
07:40 <frickler> ianw: when I looked last week it seemed to be steady almost linear growth for the centos volume, but I didn't check what part of it might have been the cause
07:41 <frickler> also clarkb wanted to drop the no-stream centos8 things, but I'm not sure; we might want a bit more time for devstack consumers to clean up all stable things
07:41 <ianw> Released volume mirror.centos successfully
07:42 <ianw> #status log bumped centos volume quota to 450gb, did a manual run to get it back in sync
07:42 <opendevstatus> ianw: finished logging
07:42 <ianw> i've dropped the locks
08:18 <dpawlik> ianw: hey, did you change something on afs mirror?
08:18 <dpawlik> ah, I see there was some mention about that
08:18 * dpawlik reading
08:32 *** odyssey4me is now known as Guest1217
08:38 *** jpena|off is now known as jpena
08:44 <ianw> dpawlik: centos had stopped syncing, so it is now back in sync
08:47 <dpawlik> ianw: ok, thanks
10:04 *** ysandeep is now known as ysandeep|coffee
<opendevreview> Alfredo Moralejo proposed zuul/zuul-jobs master: Install OVS from RDO Train Testing repository for CS8
10:51 *** ysandeep|coffee is now known as ysandeep
11:14 *** rlandy is now known as rlandy|ruck
12:40 <dpawlik> tristanC, ianw, fungi: hi, could you check please
<opendevreview> daniel.pawlik proposed zuul/zuul-jobs master: Change RDO train repository for Centos 8 stream
13:10 *** amoralej is now known as amoralej|lunch
<opendevreview> Merged zuul/zuul-jobs master: Install OVS from RDO Train Testing repository for CS8
13:19 <fungi> dpawlik: ^
13:28 <dpawlik> thanks fungi!
13:54 *** amoralej|lunch is now known as amoralej
<opendevreview> Tristan Cacqueray proposed zuul/zuul-jobs master: Add cargo-test job
<opendevreview> Tristan Cacqueray proposed zuul/zuul-jobs master: Add cargo-test job
<opendevreview> Tristan Cacqueray proposed zuul/zuul-jobs master: Add packages support to ensure-cargo
<opendevreview> Tristan Cacqueray proposed zuul/zuul-jobs master: Update cargo-test job to support system packages
15:39 *** dviroel is now known as dviroel|lunch
15:46 *** ysandeep is now known as ysandeep|dinner
15:51 <clarkb> frickler: ya end of january was mostly to give people a concrete target. If people are still actively working to clean up centos 8 usage I don't mind waiting a bit longer
16:06 *** ysandeep|dinner is now known as ysandeep
16:14 <clarkb> once I've settled into the day a bit I'll merge the upstream fix for signed tag acls (assuming nothing pops up that distracts me)
16:14 <clarkb> Then we can rebuild our image, deploy that, and update the acl on a single project and double check it works as expected in production (the test node seemed to show it worked for me there though)
16:30 *** amoralej is now known as amoralej|off
16:34 <fungi> awesome, thanks!
16:36 <clarkb> ok I submitted that change
16:36 <clarkb> I'll get a change up to rebuild gerrit images
16:38 *** dviroel|lunch is now known as dviroel
<opendevreview> Neil Hanlon proposed openstack/diskimage-builder master: Add new container element - Rocky Linux
<opendevreview> Clark Boylan proposed opendev/system-config master: Rebuild gerrit images for signed tag acl fix
16:42 *** marios|ruck is now known as marios|out
17:16 <clarkb> Looks like Zuul is quite busy this morning. I don't see anything in grafana that looks like we've broken anything, just big demand for node requests; then all executors stop accepting new jobs as they get nodes and start running ansible en masse
17:25 <clarkb> hrm though I am noticing if I change tenant status pages I'm seeing changes that should be removed from various tenants quickly hanging out there for a while, implying that maybe the schedulers aren't keeping up for some reason?
17:25 <clarkb> corvus: ^ not sure if that is known
<opendevreview> Michal Nasiadka proposed openstack/diskimage-builder master: Add new container element - Rocky Linux
17:30 <corvus> clarkb: the merger queue looks weird
17:30 *** ysandeep is now known as ysandeep|out
17:31 <fungi> cacti graphs for both schedulers definitely show they're active, but neither looks remotely at risk of running out of cpu/memory
17:32 <corvus> zm01 is processing jobs...
17:32 *** jpena is now known as jpena|off
17:33 <corvus> so is zm02.  i have confirmed there are a lot of merge job requests in zk
17:33 <fungi> zk servers don't look overloaded either
17:34 <corvus> did something happen around 15:48?  bunch of new branches or tags or anything like that?
17:35 <clarkb> Not that I am aware of, but I also hadn't thought to look for merger activity
<fungi> 14:44 <opendevreview> Merged openstack/releases master: Release manila for stable/xena
17:36 <fungi> er, that was an hour earlier
17:36 <clarkb> a nova stack made it into check around then
17:37 <corvus> the mergers are running, but very slowly; is gerrit slow?
17:37 <fungi> for keystone stable/victoria was 15:03z, and that was the most recent thing to merge for openstack releases
17:38 <fungi> gerrit cpu utilization in cacti looks typical for a weekday
17:38 <clarkb> system load is currently acceptable on gerrit for the last 15 minutes. I didn't notice issues with the UI. It could be network related between vexxhost and rax though?
17:40 <corvus> what is accounts-daemon and why does it use 0.5 cpu?
17:40 <clarkb> internet says it is a dbus service
17:41 <fungi> accountsservice: /usr/lib/accountsservice/accounts-daemon
17:41 <fungi> Description: query and manipulate user account information
17:42 <fungi> it's in Section: gnome
17:42 <clarkb> apparently high cpu usage from that daemon is a known thing :/
17:42 <fungi> seems like a great choice for a headless server
17:44 <fungi> i agree, the merger queue graph in grafana is definitely a concerning sign
17:45 <fungi> the mergers definitely look like they're running flat out, just not keeping up
17:45 <clarkb> if I clone nova from review to home I'm getting an initially quite slow download speed but then it picks up to about 1MiB/s
17:45 <clarkb> hrm seems to have fallen back off again to about 500 KiB/s
17:45 <clarkb> basically not great but also not completely off of where I would expect it (I think 2MiB/s is common)
17:46 <clarkb> and considering we're not completely overloaded on the review server (historically system load would shoot up when IO was slow) I suspect something networky
17:47 <fungi> i don't suppose the mergers relied at all on gerrit's mergeable metadata we're no longer providing since the upgrade?
17:48 <clarkb> I don't believe so
17:48 <corvus> nah, this should just be git ops
17:48 <clarkb> they always need to calculate their own merges so relied on determining that locally
17:49 <fungi> yeah, that's what i thought, just making sure
17:50 <clarkb> once this clone to home is done I'll do a clone on review02 and compare if removing the network produces a different result
17:50 <clarkb> I guess I could do them concurrently because it isn't like zuul stopped operating either. Initially I thought I should avoid that to avoid extra noise but that's meaningless I think
<corvus> inside the container this has a huge startup delay: GIT_SSH_COMMAND='ssh -i /var/lib/zuul/ssh/id_rsa' git clone git+ssh://
17:51 <corvus> outside the container on zm02 has the same behavior
17:52 <clarkb> doing the clone locally on review02 started extremely quickly
17:52 <corvus> it's about 25 seconds before any response
17:52 <clarkb> then around 80% of object download started to get slower
17:52 <fungi> i'm seeing signs of memory pressure, but it looks like it's a desire for more cache memory
17:52 <fungi> there's little ticks in paging activity
17:52 <clarkb> it's doing about 5MiB/s locally
17:52 <fungi> (on the mergers)
<opendevreview> Tristan Cacqueray proposed zuul/zuul-jobs master: Add cargo-test job
<opendevreview> Tristan Cacqueray proposed zuul/zuul-jobs master: Add packages support to ensure-cargo
<opendevreview> Tristan Cacqueray proposed zuul/zuul-jobs master: Update cargo-test job to support system packages
17:54 <corvus> there are a lot of zuul login/logout logs on gerrit ssh log.  do we have a cap for user ssh sessions that zuul is hitting?
17:54 <clarkb> corvus: there is a cap of 96 concurrent
17:54 <fungi> 96 per account, 100 per ip address
17:55 <fungi> (the former is enforced by gerrit, the latter by iptables/conntrack)
17:55 <clarkb> Receiving objects: 100% (602947/602947), 1.04 GiB | 6.08 MiB/s, done. <- that is the gerrit local clone of nova
17:55 <clarkb> which isn't terrible but I would've expected it to be closer to the 25MiB/s that it showed when starting up
17:55 <corvus> i think the startup delay is key here; i'm seeing the issue with small repos, not large ones
17:56 <clarkb> corvus: fungi: gerrit show-queue shows zuul waiting on pulls
17:56 <clarkb> which might be the startup delay that you see
17:57 <clarkb> it also shows a several day old clone for a third party ci system I'm inclined to kill to free up an extra gerrit thread
17:57 <corvus> watching the sshd logs, i see the connection show up immediately, so it does not seem like the delay is before the ssh connection handshake; it's after
17:58 <clarkb> I'm going to kill that pull from a few days ago for that third party ci system
17:59 <clarkb> 7f22c7e1              Jan-28 15:08      git-upload-pack /openstack/nova
17:59 <clarkb> (I removed the user name from the end of that entry)
17:59 <fungi> thanks, please do
17:59 <fungi> though i doubt that alone will fix it, i suppose it could have been holding onto resources which gerrit wants to free or something
18:00 <corvus> weird, "show-queue -w" doesn't seem to be giving me user names?
18:00 <clarkb> corvus: they are in parentheses at the end of the tasks
18:00 <clarkb> for example (zuul) for zuul tasks
18:00 <corvus> ssh review gerrit show-queue -w
18:01 <clarkb> corvus: oh you may need to be admin
18:01 <corvus> ah right :)
18:01 <fungi> confirmed, being admin gets me those, normal user doesn't see them
18:01 <clarkb> what I'm noticing is there are only like 5 pull tasks running at a time
18:02 <clarkb> and we allow significantly more threads
18:02 <fungi> maybe config options changed in 3.4?
18:02 <corvus> maybe there's a new limit?  that would jive with the behavior we're seeing
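The running-vs-waiting tally above can be scripted against `gerrit show-queue -w` output; a hedged sketch (the embedded sample and its column layout are illustrative, real Gerrit output differs in detail):

```python
# Hypothetical sketch: tally running vs waiting git-upload-pack tasks
# from `ssh review gerrit show-queue -w` output.  The sample below is
# fabricated for illustration.
SAMPLE = """\
7f22c7e1              Jan-28 15:08      git-upload-pack /openstack/nova (zuul)
8a9b0c1d  waiting ..   17:58            git-upload-pack /openstack/nova (zuul)
2e3f4a5b  waiting ..   17:59            git-upload-pack /openstack/neutron (zuul)
"""

def tally_upload_packs(text):
    running = waiting = 0
    for line in text.splitlines():
        if "git-upload-pack" not in line:
            continue
        if "waiting" in line:
            waiting += 1
        else:
            running += 1
    return running, waiting
```

Feeding it the captured queue output gives a quick count of how many pulls are actually being serviced versus queued.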
18:04 <clarkb> sshd.commandStartThreads maybe? Though it isn't clear if that affects git ops too
18:05 <clarkb> also we didn't have problems with this after the 3.4 upgrade until now? Maybe it is related to Friday's update instead of the 3.4 update?
18:05 <clarkb> or maybe we just missed it during the week last week
18:05 <fungi> last week might have been quieter. also the spike earlier may have been anomalous
18:05 <clarkb> ok I see 4 pulls now
18:06 <clarkb> sshd.threads says "By default, 2x the number of CPUs available to the JVM (but at least 4 threads)."
18:06 <clarkb> we set it to 100
18:07 <corvus> do we have melody?
18:07 <fungi> not any longer, it was axed over log4j concerns
18:07 <fungi> we could probably safely readd it at this point
18:08 <clarkb> yes I think we can safely readd it. After we removed it we were reassured that it was fine since gerrit forces its deps
18:09 <clarkb> I'm still semi skeptical because I'm that way, but they seemed confident of it
18:09 <corvus> i'm wondering if there's a way (other than melody) to confirm the sshd.threads setting is working as expected; or if batchthreads is being used
18:10 <fungi> was the change triggered at the time of the spike, along with several of its peers. it does seem to have rather a lot of explicit and implicit deps, so could account for a significant number of merge requests
18:10 <corvus> zuul is not a service user, right?
18:10 <clarkb> corvus: oh that is an interesting possibility. It says if set to 0 then you share the pool with interactive users but by default it is 1 or 2 depending on core count
18:10 <clarkb> corvus: I think it may be now because of attention sets
18:11 <corvus> how did attention sets make it a service user?
18:11 <clarkb> if you don't put CI systems in the service users group then the attention set system treats them as a human and expects them to take action and stuff. It breaks the workflow
18:12 <corvus> so you're saying someone made it a service user?
18:12 <clarkb> maybe. I'm trying to do a listing now
18:12 <clarkb> looks like zuul is not in the service users group so maybe something to consider for the future but unlikely to cause this issue
18:14 <corvus> i just ran jstack on gerrit to get a thread dump
18:15 <clarkb> cool I'm working on a change to get melody back should we decide to go that route
18:16 <corvus> most of our "SSH-Interactive-Worker" threads are idle
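Counting idle versus busy workers from a dump like that can be scripted; a sketch assuming typical HotSpot `jstack` formatting and Gerrit's SSH worker thread naming (both assumptions; the embedded dump is fabricated for illustration):

```python
import re
from collections import Counter

# Fabricated fragment in the shape of a `jstack <pid>` dump.
SAMPLE_DUMP = '''\
"SSH-Interactive-Worker-1" #41 prio=5 tid=0x1 nid=0x2 waiting on condition
   java.lang.Thread.State: WAITING (parking)
"SSH-Interactive-Worker-2" #42 prio=5 tid=0x3 nid=0x4 runnable
   java.lang.Thread.State: RUNNABLE
"HTTP-77" #77 prio=5 tid=0x5 nid=0x6 runnable
   java.lang.Thread.State: RUNNABLE
'''

def ssh_worker_states(dump):
    # Tally Thread.State values, but only for threads whose name
    # starts with "SSH-" (an assumption about Gerrit's naming).
    states = Counter()
    name = None
    for line in dump.splitlines():
        m = re.match(r'"([^"]+)"', line)
        if m:
            name = m.group(1)
        elif "Thread.State:" in line and name and name.startswith("SSH-"):
            states[line.split("Thread.State:")[1].split()[0]] += 1
    return states
```

Run against the real `/tmp/dump`, a large WAITING count with a small RUNNABLE count would match the "mostly idle workers" observation.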
18:17 <corvus> btw, can we install less in our images? :)
18:18 <clarkb> If it doesn't make the images twice the size I'm not opposed :) vi nox should be pretty small and I think that may provide a less?
18:18 <clarkb> corvus: ya so I wonder if sshd.commandStartThreads could be the bottleneck
18:19 <corvus> it looks like there are distinct SSH git-upload-pack threads, and there are 4 of them
18:19 <clarkb> oh are we using ssh or http?
18:19 <clarkb> maybe the difference is on the http side
18:20 <corvus> pretty sure we're using ssh
<opendevreview> Clark Boylan proposed opendev/system-config master: Revert "Remove melody"
18:21 <corvus> fwiw, jstack is easy; i don't think i need anything more right now
18:22 <clarkb> btw for my local clone the final result was Receiving objects: 100% (602947/602947), 1.04 GiB | 1.01 MiB/s, done. but I think now we suspect less parallelism and not individual throughput
18:22 <clarkb> corvus: noted, maybe we can add that info to our gerrit sysadmin docs when things settle down. I'll WIP that change to prevent accidental merging
18:23 <corvus> oh it looks like the worker threads turn into ssh threads...
18:24 <clarkb> according to grafana it looks like we may have caught up
18:24 <clarkb> and I'm seeing 9 pulls processed at once now
18:24 <fungi> oh, wow it burned down very quickly just out of nowhere
18:24 <corvus> i observe a 7 second delay starting a clone
18:24 <clarkb> and now up to 18
18:25 <clarkb> fungi: ya it seems we're making use of a lot more threads now
18:26 <clarkb> whatever it was seems to have corrected itself at least temporarily
18:26 <corvus> in the thread dump, both command start threads are idle
18:26 <clarkb> I wonder if the thread dump unstuck something
18:26 <corvus> i think the problem is still observable.
18:26 <corvus> doubt it
18:27 <corvus> it's just fast enough now that it's able to service the queue, but it's still right on the edge
18:27 <clarkb> corvus: I guess the impact is lower but the 7 seconds to start cloning is still quite a delay
18:27 <corvus> should be 0
18:28 <fungi> and that seems to be the client waiting on jgit to start talking?
18:28 <corvus> fungi: hard to say; all i know is the ssh connection is immediate, but the output from git starts X seconds later.  don't know what happens during that time.
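That startup delay can be quantified as time-to-first-byte of the subprocess's output; a sketch using a stand-in command (for a real measurement you would point it at the `git clone` invocation and read stderr, since git writes progress there):

```python
import subprocess
import time

def time_to_first_output(cmd):
    # Measure seconds until `cmd` produces its first byte on stdout.
    # For `git clone --progress ...` you would read proc.stderr
    # instead, since git writes progress output there.
    start = time.monotonic()
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    proc.stdout.read(1)  # blocks until the first byte (or EOF)
    delay = time.monotonic() - start
    proc.wait()
    return delay

# Stand-in for a slow-to-respond server: ~0.2s before any output.
delay = time_to_first_output(["sh", "-c", "sleep 0.2; echo ready"])
```

Repeated against small and large repos this would separate per-connection startup cost from transfer throughput, matching the observation that small repos show the problem too.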
18:29 <corvus> the thread dump is in /tmp/dump inside the gerrit container if anyone wants a look
18:30 <corvus> down to about 5 seconds delay now
18:31 <corvus> i'm out of ideas
18:32 <fungi> this does, at least, seem unlikely to be a result of changes on zuul's side, given the behavior
18:32 <clarkb> corvus: fungi: maybe the thing to do is try to capture as much about this as possible in a gerrit issue and see if upstream has ideas
18:32 <clarkb> They have been super helpful recently when we bring stuff like this to them. Considering Zuul has caught up we can probably live with waiting on their input for a bit?
18:33 <fungi> just to make sure i understand, we did see the number of parallel active threads for ssh increase once that episode passed?
18:33 <clarkb> fungi: yes I observed one show-queue -w output with 18 pulls running and not waiting.
18:34 <fungi> okay, so it seems like gerrit temporarily reduced the number of tasks it was willing to perform
18:34 <clarkb> all but 2 were for zuul
18:35 <clarkb> ya it's like the requests go into a waiting state according to show-queue -w
18:35 <clarkb> and for a while we were only processing ~4-5 of them at a time
18:35 <clarkb> which led to a backlog
18:35 <clarkb> we still see a backlog but it is much shorter
18:38 <fungi> i wonder if it's something like an unintentional throttle which we brought on with the spike from the nova/placement stack at ~15:55z and from that point on requests were being processed in a diminished capacity until gerrit managed to recover itself
18:39 <fungi> or whether the threads count had been constricted prior to that, and it was simply the event which generated enough requests to push the mergers over the edge into backlog
18:40 <clarkb> ya I think what we can do is write an issue explaining what we observed and ask for any guidance on settings/tunings or behaviors we might not be aware of that could explain this. Then continue to monitor and try to determine what is going on if it happens more regularly
18:40 <clarkb> it could also potentially be related to that pull I killed if it holds some lock that everyone has to contend with?
18:41 <clarkb> it was for nova which is a popular repo and maybe all pulls for the same repo have to synchronize with each other?
18:43 *** tkajinam is now known as Guest1307
18:43 <clarkb> corvus: is the process to make the thread dump just `jstack $PID > file` ?
18:46 <clarkb> cool I can work on a docs update for that today. I'm probably not the best person to file the issue as I'm not sure what my availability will end up being this week due to local stuff. I can help with an issue draft though
18:46 <clarkb> (basically thinking it would be good for someone who can be responsive to upstream to file that)
<opendevreview> Tristan Cacqueray proposed zuul/zuul-jobs master: Add packages support to ensure-cargo
<opendevreview> Tristan Cacqueray proposed zuul/zuul-jobs master: Update cargo-test job to support system packages
18:57 <clarkb> fungi: corvus I copied the template from gerrit's issue tracker for a bug report if we want to draft something up there together
19:12 <clarkb> fungi: corvus something like that maybe? feel free to edit, but I need to step away for a bit. If you would prefer I file that I can do that just with the caveat that my attentiveness to that bug may be lacking
19:21 <corvus> clarkb: lgtm!
19:42 <fungi> clarkb: i've suggested a few edits. feel free to take them or leave them
19:48 <clarkb> fungi: I s/jgit/gerrit/ since there is more than just jgit involved with things like mina
19:48 <clarkb> I don't want to indicate this is a jgit issue as I think we may be hitting this before jgit is involved (but maybe not)
19:48 <clarkb> anyway should I file that or do one of yall wish to do that?
19:49 <fungi> yeah, just trying to make it clear we're not pulling refs from gitiles or some other backend
19:50 <fungi> i'm not sure how many ways gerrit exposes things, but we're specifically cloning via ssh to gerrit's dedicated service port it shares with its ssh cli
19:51 <fungi> maybe that?
19:51 <clarkb> I think that + the port indication should make it clear
19:51 <fungi> oh, you added the port number, that's probably enough
19:54 <fungi> if you have a moment to file it, feel free. otherwise i can probably get to it in a bit but trying to juggle a few things
19:55 <clarkb> ok. I'm working on the jstack thing first. Then I need to step out again but if you haven't beaten me to it by then I can file it
<opendevreview> Steve Baker proposed openstack/diskimage-builder master: Allow the bootloader element with the default block device
<opendevreview> Clark Boylan proposed opendev/system-config master: Add info on running jstack against gerrit to docs
20:01 <clarkb> corvus: ^ fyi that is the jstack documentation change
20:04 <corvus> clarkb: text lgtm but needs more cowbell er dash
<opendevreview> Clark Boylan proposed opendev/system-config master: Add info on running jstack against gerrit to docs
20:06 <clarkb> thanks fixed
20:48 *** dviroel is now known as dviroel|brb
20:52 <ianw> so from the merger graph i see it suddenly dropped ~ 18:00 but am i correct that we didn't really find a smoking gun for what was happening, just that gerrit decided to go slow for a while?
20:56 <fungi> from outward appearances, yes
21:11 <ianw> infra-root: if anyone could take a quick look at just to confirm i haven't totally mixed things up.  it modifies the .repo files we use after i realised yum's "$releasever" doesn't include -stream on -stream distros
21:12 <ianw> i'd like to get that testing stack there in before a release, which i'd like to do before cleaning up our fedora gate images, etc.
<opendevreview> Gage Hugo proposed openstack/project-config master: Retire security-specs repo - Step 3
21:26 <clarkb> ianw: ya that looks sane. Hardcoding should be fine
<opendevreview> Clark Boylan proposed opendev/base-jobs master: Remove centos-8 as it is EOL
21:28 <clarkb> infra-root ^ frickler mentioned holding off on that for now since people are still actively working to remove it from their jobs. I'm just proposing changes now so that we have them ready when we are ready to go for it
<opendevreview> Gage Hugo proposed openstack/project-config master: Retire security-specs repo - Step 3
<opendevreview> Clark Boylan proposed openstack/project-config master: Set centos-8 min-ready to 0
<opendevreview> Clark Boylan proposed openstack/project-config master: Remove centos-8
21:38 <clarkb> fungi: your gerrit gitea stack needed monitoring to land the first few changes iirc? Then that would allow us to depends-on test the fix I pushed upstream
<opendevreview> Clark Boylan proposed opendev/system-config master: Stop mirroring centos-8
21:58 <fungi> clarkb: oh, i thought we already merged those
21:59 <clarkb> fungi: we merged the first one. But I didn't recheck the second since it affected gerrit configs and I wasn't able to babysit
21:59 <clarkb> but I think if we recheck the whole stack in order we should be able to test the depends-on upstream gerrit properly
21:59 <clarkb> I've got the meeting agenda edited. Any last minute additions before I send that out?
22:00 <fungi> oh, yes i only saw the approvals, didn't notice the lack of merging
22:04 <clarkb> oh I'll add a thing about the gerrit issues
22:06 <clarkb> fungi: did you manage to file an issue for the ssh pull thing yet? If not I'll do that nowish and add it to the infra agenda
22:07 <fungi> i have not yet, i've just wrapped up other chores and am starting to cook dinner, but could do it after probably if you're busy
22:08 <clarkb> No worries. I've got it as part of the agenda prep
22:09 <clarkb> if you star that issue you'll get email notification which can help you keep on top of it if I am not around though
22:12 <ianw> clarkb: did you want to merge the centos-8 retirement stuff now?
22:12 <clarkb> ianw: no I was going to check in with people tomorrow to see if we are ready yet. frickler mentioned devstack may still need cleanup
22:12 <clarkb> ianw: that said should be safe to land now. Just not the others
22:15 <ianw> yeah, i'll do a double check but given it s/8/8-stream/ should fix most everything i think a quick deprecation is reasonable
22:16 <jrosser> we are having a fairly torrid time over in openstack-ansible dropping our centos-8 jobs
22:17 <jrosser> because of a succession of external factors happening one after the other, the mirrors were out of sync, now we are stuck for a couple of days because erlang-solutions repos are broken again
22:18 <clarkb> ya I mostly wanted to make sure we were ready on our side since ~now was when I said it would happen. But if people are actively working on it and struggling I don't mind waiting/helping as necessary to do the removal cleanly
22:19 <ianw> ++; if people are actively working on things that's good
22:34 <ianw> clarkb/fungi: do we need to revert anything wrt to (signed acl fix?).  should i deploy it this afternoon?
22:35 <clarkb> ianw: we need to revert the acl updates that allowed annotated tags in openstack/project-config for all the things. fungi has a change to do that, but it might be prudent to revert for a single project first and test that it works then revert for all
22:36 <clarkb> but that happens after the update of gerrit
22:39 <clarkb> alright last call for the meeting agenda. I'll get that sent out in about 5 minutes
22:46 <fungi> ianw: yeah, we need a gerrit restart first on the new image, at a minimum
22:48 <fungi> once it's restarted, we could make sure 826335 is still passing tests and then merge it, or we could take a more measured approach and update one repo first as clarkb suggests
22:48 <fungi> i'll un-wip 826335 now regardless so that's not blocking it
22:48 <clarkb> ya I guess reverting the revert if we got something wrong is easy too
22:49 <clarkb> it's just a lot of diff to look at :)
22:52 <fungi> i'm inclined to trust our testing and just make sure #openstack-release knows to test one tag first, not approve an entire batch, as they're likely to be the first to find problems
22:53 <fungi> but if others are more comfortable with a conservative, incremental reversal, that's cool with me too
22:54 <clarkb> that plan wfm. I did test it. Happy for others to test it too. The held node is still up and running if they want to do that
23:21 <ianw> hrm, centos-8 job just started failing and i think things have moved to vault
23:22 <ianw> Error: Failed to download metadata for repo 'centos-base': Cannot prepare internal mirrorlist: No URLs in mirrorlist
23:22 <fungi> i guess someone has removed centos 8 for us!
<opendevreview> Merged opendev/system-config master: Rebuild gerrit images for signed tag acl fix
<opendevreview> Merged opendev/system-config master: Add info on running jstack against gerrit to docs
<opendevreview> Merged opendev/system-config master: infra-prod-grafana: drop system-config-promote-image-grafana
23:23 <ianw> i think our CI will be somewhat isolated while we run in our little mirror eco-system.  but the dib job that pulls from upstream fails
23:26 <fungi> once it propagates to afs this might hit jobs harder?
23:27 <fungi> i have a sinking suspicion rsync will just delete the files, call that success, and we'll release the empty result
23:27 <ianw> still investigating, still seems to be there
23:28 <ianw> i'm wondering if just the "metalink" mirror choosing thing is failing to return
23:29 <fungi> unless it happened since our last sync
23:29 <fungi> oh, sorry, i think i see what you mean now
23:30 <fungi> i'm not familiar with what role metalink plays
23:30 <fungi> i guess we use that as a convenience redirector in the dip element?
23:30 *** ysandeep|out is now known as ysandeep
23:31 <fungi> dib element
23:31 <fungi> though now i want crisps and dip
23:31 <ianw> ", we intend to bend the usual EOL process by keeping this content available until January 31st."
23:31 * fungi shakes fist at nighttime snacking tendencies
23:31 <ianw> that sounds suspiciously like today
23:32 <fungi> if the people in charge of taking it down are in cest, then it's already been feb 1 there for half an hour
23:33 <ianw> it does explain why it chose this moment to break
23:41 <fungi> no crisps, but i found a tin of savory biscuits. in the states we call them "crackers" but i assume that's an american colloquialism which is not shared by any other english-speaking culture (and what we call "biscuits" is more akin to shortcakes)
23:41 <fungi> i'm not going to defend the ways my ancestors have destroyed this language
<opendevreview> Ian Wienand proposed openstack/project-config master: Stop building CentOS 8
23:43 <ianw> crackers would be understood in australian english.  biscuits is definitely a sweet thing here, scone ~= biscuit, although they have a little sugar and are eaten with jam and cream
23:44 <ianw> i don't know why we generally don't have a non-sweet biscuit/scone equivalent.  sausage, white gravy and biscuits is good stuff and i still make it for weekend breakfast sometimes
23:44 <fungi> ahh, maybe that was a 16th century working class difference brought to our respective homes by the unpaid labor the english shipped here
23:46 <fungi> (17th, 18th century... the practice did rather linger more in some places than others)
23:50 <fungi> ocracoke, two islands to the south of me, has only ever been accessible by boat, and the locals there have retained some very colorful variations on the language
