Wednesday, 2025-11-05

00:00 <clarkb> https://github.com/ether/etherpad-lite/issues/7211
00:00 <clarkb> I think they did move the tag
00:00 <clarkb> when I first tested it the tag was "correct" for some value of correct, in that it contained the fix for our css issue
00:01 <clarkb> and since then the tag appears to have moved, and now it does not include the fix for the css issue, which is https://github.com/ether/etherpad-lite/pull/7204
00:01 <clarkb> I'm going to respond to 7211 with what I've found
00:03 <clarkb> as mentioned I think we're ok. The 2.5.2 tag that exists now appears to be 2.5.1 with a single commit to fix the changelog script
00:03 <clarkb> the page is usable under firefox, it just looks bad; hopefully they make a 2.5.3 that we can upgrade to
00:06 <clarkb> corvus: do you know why the docker image build stuff with buildx doesn't emit the actual build logs to the job-output.txt file? they are in the json file, so the console viewer in the zuul web ui renders them
00:07 <corvus> can you share a build url you're looking at?
00:08 <clarkb> corvus: https://zuul.opendev.org/t/openstack/build/3ec999c2acde470e959985f7340ae108/console#4/0/76/ubuntu-noble is where it is rendered, but if you switch to job-output.txt the actual docker build output doesn't show up
00:10 <corvus> "zuul_log_id":"in-loop-ignore"
00:11 <corvus> short answer: loops are complicated and the live log streaming isn't sophisticated enough to handle that case
00:11 <corvus> but that could conceivably change
00:12 <clarkb> ack, thanks. fwiw I'm able to get what I need from the console so it isn't urgent
00:12 <clarkb> also I discovered I have a local cached version of that tag, so I'm forever out of sync with upstream unless I intervene explicitly to fix that tag
00:15 <corvus> to anyone reading who may be unfamiliar with the subtext: never change a git tag (on a public repo).
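The stale-tag trap described above is easy to reproduce locally: `git fetch` refuses to clobber a tag you already have, so a moved upstream tag stays silently out of date until you delete and re-fetch it. A minimal sketch with throwaway repos (all repo paths and tag names here are invented for illustration, not the real etherpad repos):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# "upstream" publishes a tag...
git init -q upstream
git -C upstream -c user.email=x@example.com -c user.name=x \
    commit -q --allow-empty -m 'first'
git -C upstream tag v1.0

# ...and we clone it, picking up the tag.
git clone -q upstream local

# Upstream then moves the tag to a new commit (the thing one should never do).
git -C upstream -c user.email=x@example.com -c user.name=x \
    commit -q --allow-empty -m 'second'
git -C upstream tag -f v1.0 >/dev/null

# A routine fetch does NOT update our copy: git rejects the update
# ("would clobber existing tag"), so we stay silently out of sync.
git -C local fetch -q origin 2>/dev/null || true
before=$(git -C local rev-parse v1.0)

# Explicit intervention: delete the local tag and fetch it fresh.
git -C local tag -d v1.0 >/dev/null
git -C local fetch -q origin tag v1.0
after=$(git -C local rev-parse v1.0)

echo "before: $before"
echo "after:  $after"
```

After the re-fetch the local tag matches upstream again, which is the "intervene explicitly" step clarkb mentions.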
00:16 <clarkb> https://github.com/ether/etherpad-lite/issues/7211 has been updated
00:19 <clarkb> to summarize: we're effectively running v2.5.1, which does not include our desired fix, because upstream moved the tag
00:19 <clarkb> this is ok other than the known issue with firefox
00:20 <clarkb> I've also discovered we're setting an explicit tag for etherpad docker images which is still 2.2.6. We deploy :latest so we're basically ignoring that, but when upstream fixes this I'll try to remember to update that explicit tag value to match
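The pinned-tag-vs-:latest tradeoff being discussed can be sketched as a compose fragment (the image name and file layout here are hypothetical placeholders, not the actual system-config deployment):

```yaml
# Hypothetical sketch only -- not the real opendev deployment config.
services:
  etherpad:
    # floating: every redeploy picks up whatever 'latest' points at,
    # including a tag upstream quietly re-pushed
    # image: docker.io/example/etherpad:latest
    #
    # pinned: redeploys stay reproducible until the tag is bumped on purpose
    image: docker.io/example/etherpad:2.5.3
```

The situation in this log is the downside of floating: a :latest deploy silently tracked a tag upstream had moved, while the unused pinned value drifted to 2.2.6.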
00:21 <clarkb> we could build with the old commit that v2.5.2 pointed to, or grab the image from the intermediate registry that I built and tested already and then promote it. But upstream needs to fix this anyway, so we should just rely on that imo
00:28 <clarkb> and to confirm, I cleaned up /etc/hosts on etherpad02
00:29 <clarkb> so I think we're basically in steady state there now
00:32 <clarkb> fungi: I've got a todo list item to drop review03 from the emergency file, but you did that already right?
13:13 <opendevreview> Dr. Jens Harbott proposed opendev/irc-meetings master: Revert "kolla: Move meeting one hour backwards (DST)"  https://review.opendev.org/c/opendev/irc-meetings/+/966172
13:37 <opendevreview> Merged opendev/irc-meetings master: Revert "kolla: Move meeting one hour backwards (DST)"  https://review.opendev.org/c/opendev/irc-meetings/+/966172
14:54 <opendevreview> Michal Nasiadka proposed zuul/zuul-jobs master: Use mirror_info in configure-mirrors role  https://review.opendev.org/c/zuul/zuul-jobs/+/966187
14:55 <fungi> clarkb: yes, review03 was dropped from the emergency file yesterday during the meeting when we discussed it, same time i removed the jitsi-meet servers
14:59 <fungi> the main page at https://etherpad.opendev.org/ is readable for me on qutebrowser (also webkit-based), though it looks like it's missing styling. text isn't trying to escape from any of the buttons
15:00 <fungi> ah, sounds like they (accidentally) rolled back the tag, so not a useful test
15:03 <fungi> unrelated, should we consider mirroring the etherpad container image into quay? or is the forced ipv4 workaround our go-to now?
15:22 <Clark[m]> fungi: that image isn't pushed to quay yet because the server is still on jammy running docker, not podman. Moving it to quay would break speculative testing
15:23 <Clark[m]> which, to be fair, upstream just negated most of the benefits of
15:25 <fungi> oh right
15:41 <mnasiadka> Clark[m], fungi: While doing Alma 10 I posted a glean patch (https://review.opendev.org/c/opendev/glean/+/962567) - it seems it's not required - should I abandon it?
15:43 <mnasiadka> Yeah, it matches el10 for NM on Alma, so I'll just abandon it.
15:43 <fungi> if things are working without it, then sure, no need to add further complexity
15:45 <clarkb> ++
15:45 <opendevreview> Michal Nasiadka proposed zuul/zuul-jobs master: Use mirror_info in configure-mirrors role  https://review.opendev.org/c/zuul/zuul-jobs/+/966187
15:49 <clarkb> infra-root: I'm going to approve https://review.opendev.org/c/opendev/system-config/+/964728 now to force gitea access through the lb. That will take about an hour to gate, so we have time to block it if there is reason to
15:50 <clarkb> then I can look at increasing the memcached memory limit next
15:50 <clarkb> then we can look at the 1.25.1 upgrade
15:50 <clarkb> I decided that doing our best to improve performance generally is good for the upgrade process too, so I want to do them in that order
15:54 <opendevreview> Michal Nasiadka proposed zuul/zuul-jobs master: Use mirror_info in configure-mirrors role  https://review.opendev.org/c/zuul/zuul-jobs/+/966187
16:02 <opendevreview> Michal Nasiadka proposed opendev/zuul-providers master: Add trixie-arm64  https://review.opendev.org/c/opendev/zuul-providers/+/966200
16:20 <fungi> gonna go grab a quick lunch, back soon
16:49 <opendevreview> Michal Nasiadka proposed opendev/zuul-providers master: Add trixie-arm64  https://review.opendev.org/c/opendev/zuul-providers/+/966200
16:53 <opendevreview> Michal Nasiadka proposed opendev/zuul-providers master: Add trixie-arm64  https://review.opendev.org/c/opendev/zuul-providers/+/966200
17:07 <opendevreview> Merged opendev/system-config master: Force gitea http(s) connectivity through the load balancer  https://review.opendev.org/c/opendev/system-config/+/964728
17:10 <clarkb> that is currently waiting for the hourly deployments to complete
17:20 <clarkb> iptables has updated on gitea09 at least
17:20 <clarkb> I think iptables has updated on all backends now
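For readers unfamiliar with what "force gitea access through the lb" means at the packet level, the rule shape is roughly the following (addresses, chain name, and port are illustrative placeholders, not the actual rules the system-config iptables role deploys):

```
# iptables-save style fragment, illustrative only:
# accept gitea's web port from the load balancer, drop everyone else
-A INPUT -s 203.0.113.10/32 -p tcp -m tcp --dport 3081 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 3081 -j DROP
```

SSH is left untouched by such rules, which matches fungi's later observation that he can still ssh into the backend while the web ports are unreachable directly.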
17:21 <clarkb> I'm still able to reach https://opendev.org via my browser
17:22 <clarkb> doesn't appear that we've broken anything. Not sure if anyone else wants to do a quick test, but once this deployment is done and I can check the overall result, I'll plan to approve https://review.opendev.org/c/opendev/system-config/+/965420 if everything continues to look happy
17:22 <clarkb> git clone works too
17:24 <clarkb> hrm, the deployment failed
17:25 <clarkb> mirror01.dfw3.raxflex.opendev.org is unreachable
17:25 <clarkb> which I knew, but I forgot that had impacts here. I'll put it in the emergency file now
17:25 <clarkb> but I don't think I need to recheck or reenqueue anything, as the firewall updates are in place and things seem to be working
17:26 <clarkb> I've approved https://review.opendev.org/c/opendev/system-config/+/965420 to increase the memcached cache size
18:09 <fungi> quickly tested gitea and it seems to still be working for me through haproxy, but i get icmp network unreachable responses for gitea09 directly on 80, 443, 3000, 3080 and 3081
18:09 <fungi> so just what we wanted
18:09 <fungi> i can also still ssh into it
18:12 <clarkb> thanks for checking
18:20 <opendevreview> Merged opendev/system-config master: Increase Gitea memacached limit to 4096MB  https://review.opendev.org/c/opendev/system-config/+/965420
18:43 <clarkb> all giteas are running with the bigger cache now and the deployment job succeeded
18:43 <clarkb> and the service is still accessible to me
18:45 <fungi> me as well
18:46 <clarkb> if you notice opendev.org slowness please let us know. It might be helpful for us to evaluate how impactful these two changes are in improving general service uptime and responsiveness
19:13 <clarkb> vexxhost has responded and confirms it was a memory consumption problem. They have indicated steps have been taken on the hypervisor to free up resources and better accommodate the instance
19:15 <fungi> thanks! (to them and to you for reaching out)
19:37 <mnasiadka> clarkb: I'd say the bigger cache made the UI and git clone operations somewhat faster, but maybe it's just a late hour EU-wise and there's less traffic from my side of the planet
19:37 <mnasiadka> I'll have a look tomorrow in EU office hours to see if it persists
19:40 <fungi> i'd refrain from judging cache performance until things run uninterrupted for a few days, so we're looking at a more populated/full cache
20:23 <clarkb> I think only the UI is affected by this cache fwiw. And yes, we probably need to wait a bit to measure
23:03 <clarkb> I've just realized that when testing held gitea nodes we now need to hit the haproxy due to the firewall. My current 1.25.1 held node was held before the firewall update so isn't affected, but talking to its haproxy works fine as a check that we can do so in the future: https://158.69.67.86/opendev/system-config
23:03 <clarkb> but also ^ is an easy way to test gitea 1.25.1
23:11 <clarkb> etherpad upstream reports the release/tag issue was due to a broken script, which they have now fixed and made a 2.5.3 release to address
23:11 <clarkb> I'll work on getting another set of changes up to upgrade to that version now
23:14 <corvus> good to hear that was accidental
23:16 <opendevreview> Clark Boylan proposed opendev/system-config master: Upgrade to etherpad 2.5.3  https://review.opendev.org/c/opendev/system-config/+/966233
23:17 <clarkb> corvus: ya, I think 2.5.3 is basically what 2.5.2 was expected to be plus the fix to the release script. but I'll hold a node for testing to double check, since we've already been surprised once this week :)
23:17 <opendevreview> Clark Boylan proposed opendev/system-config master: DNM force etherpad failure to hold node  https://review.opendev.org/c/opendev/system-config/+/840972

Generated by irclog2html.py 4.0.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!