Friday, 2025-06-27

06:08 *** ykarel__ is now known as ykarel
11:33 *** darmach2 is now known as darmach
17:36 -opendevstatus- NOTICE: Gerrit is being restarted to pick up a configuration change. You may notice a short outage.
20:55 <clarkb> fungi: what is the plan for the DCO implementation next week? Just start approving things first thing Tuesday (July 1st) and go until everything is landed?
21:13 <fungi> i plan to self-approve any tooling changes during my day on monday, then tuesday morning (my time) approve the acl change
21:13 <fungi> that is, self-approve any tooling changes that haven't merged before then
21:13 <fungi> the openstack/releases change merged earlier today and seems to be working
21:13 <clarkb> got it, the things that can go in prior to enforcement go in early, then on enforcement day land the acl updates
21:14 <fungi> right-o
21:14 <fungi> i was tempted to merge the acl change at midnight utc, but that would mean me not being around for potential fallout from it
21:14 <fungi> so thought better of that plan
21:14 <clarkb> ya I suspect this is the sort of change that would be best with people around to answer questions
21:14 <clarkb> agreed that not doing that is for the best
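The acl change itself is not quoted in the log; in Gerrit, requiring a DCO sign-off on pushed commits is normally expressed in a project's project.config (refs/meta/config) via the receive.requireSignedOffBy option. A minimal sketch of what such a stanza looks like, which may differ from the actual OpenDev change:

    [receive]
        requireSignedOffBy = true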
21:19 <clarkb> I'm trying to make sense of whether or not it is safe for me to try the zookeeper server replacements early next week if that is going on
21:19 <clarkb> If everything goes well it should be fine :)
21:20 <clarkb> maybe you want to weigh in on https://review.opendev.org/c/opendev/system-config/+/951164 (the associated etherpad is https://etherpad.opendev.org/p/opendev-zookeeper-upgrade-2025) with any timing concerns if you have them?
21:20 * fungi puts "remember to `git commit -s`" on repeat and walks away
21:20 <fungi> lookin'
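The `-s` flag fungi keeps repeating is what adds the DCO trailer that will be required once enforcement lands; the name and email below are illustrative, taken from the committer's user.name and user.email settings:

    # -s appends a Signed-off-by trailer built from user.name / user.email
    git commit -s -m "Describe the change"
    # the resulting commit message ends with a line like:
    #   Signed-off-by: Jane Developer <jane@example.org>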
21:21 <clarkb> my goal is to sit at the computer early one morning and then just work through that all day, ensuring it gets done in an 8-12 hour timespan and no longer
21:25 <fungi> as in 3-4 hours per cluster member?
21:26 <fungi> taking the leader down last will force an election onto one of the two newer replacements right?
21:48 <clarkb> sorry had to step away for a bit
21:49 <clarkb> yes, when we eventually stop the leader, one of the other two (which will be new servers by that point) should become leader, then the new server that replaces the old leader should join as a follower
21:49 <clarkb> I don't actually think it will take 3-4 hours per cluster member. More just thinking that it is likely to take at least 4-6 hours, so planning for it to take a day allows for debugging or unexpected slowness
21:50 <clarkb> the first change will be slower than the others because it updates the test jobs so they all have to run. The subsequent changes should only modify the inventory and run the base job and linters iirc
21:50 <clarkb> but then in deploy we have to let it run through all the jobs which is still a bit slow (like half an hour?)
21:51 <clarkb> so figure a minimum of at least an hour per node and that's 3 hours alone, then you've got additional overhead waiting for test nodes and so on. I suspect 4 hours is a good best case scenario
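A quick way to confirm which member is currently the leader, and that the replacements joined as followers, is ZooKeeper's `srvr` four-letter command; the hostnames below are placeholders rather than the real OpenDev servers, and `srvr` has to be allowed via 4lw.commands.whitelist on newer ZooKeeper releases:

    # print each member's role; expect one "Mode: leader" and two "Mode: follower"
    for host in zk01.example.org zk02.example.org zk03.example.org; do
        printf '%s: ' "$host"
        echo srvr | nc "$host" 2181 | grep Mode
    done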
22:27 <fungi> clarkb: any guess how long the data will take to replicate to a new cluster member?
22:28 <fungi> i'm assuming we don't really have a good way to gauge that other than to jdi
22:34 <clarkb> I expect that to be almost instantaneous
22:34 <clarkb> the entire data set is under 30MB currently
22:34 <clarkb> I don't think we ever go over 100MB
23:02 <fungi> ah, okay, so highly space-efficient database then
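One way to sanity-check that size estimate before and after swapping a member is the `mntr` four-letter command, which reports an approximate data size in bytes (again with a placeholder hostname, and the command must be whitelisted like `srvr` above):

    # zk_approximate_data_size is the in-memory data tree size in bytes
    echo mntr | nc zk01.example.org 2181 | grep zk_approximate_data_size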
