*** ykarel__ is now known as ykarel | 06:08 | |
*** darmach2 is now known as darmach | 11:33 | |
-opendevstatus- NOTICE: Gerrit is being restarted to pick up a configuration change. You may notice a short outage. | 17:36 | |
clarkb | fungi: what is the plan for the DCO implementation next week? Just start approving things first thing Tuesday (July 1st) and go until everything is landed? | 20:55 |
fungi | i plan to self-approve any tooling changes during my day on monday, then tuesday morning (my time) approve the acl change | 21:13
fungi | that is, self-approve any tooling changes that haven't merged before then | 21:13 |
fungi | the openstack/releases change merged earlier today and seems to be working | 21:13 |
clarkb | got it, the things that can go in prior to enforcement go in early, then on enforcement day land the acl updates | 21:13
fungi | right-o | 21:14 |
fungi | i was tempted to merge the acl change at midnight utc, but that would mean me not being around for potential fallout from it | 21:14 |
fungi | so thought better of that plan | 21:14 |
clarkb | ya I suspect this is the sort of change that would be best with people around to answer questions | 21:14 |
clarkb | agreed that not doing that is for the best | 21:14 |
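For context, Gerrit can enforce a DCO sign-off requirement directly in a project's access configuration. A minimal sketch of what such an ACL change might look like in project.config, assuming it relies on Gerrit's built-in receive.requireSignedOffBy option (the actual change under review may differ):

```
[receive]
    requireSignedOffBy = true
```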
clarkb | I'm trying to make sense of whether or not it is safe for me to try the zookeeper server replacements early next week if that is going on | 21:19 |
clarkb | If everything goes well it should be fine :) | 21:19 |
clarkb | maybe you want to weigh in on https://review.opendev.org/c/opendev/system-config/+/951164 (the associated etherpad is https://etherpad.opendev.org/p/opendev-zookeeper-upgrade-2025) with any timing concerns if you have them? | 21:20 |
* fungi puts "remember to `git commit -s`" on repeat and walks away | 21:20 | |
fungi | lookin' | 21:20 |
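For illustration of the `git commit -s` reminder above: that flag appends a `Signed-off-by: Name <email>` trailer to the commit message, which is what DCO enforcement looks for. A minimal, hypothetical local commit-msg hook that catches a missing trailer before push (not part of any OpenDev tooling, just a sketch):

```python
#!/usr/bin/env python3
# Hypothetical commit-msg hook: reject commits that lack the
# Signed-off-by trailer that `git commit -s` adds automatically.
import re
import sys


def has_signoff(message: str) -> bool:
    # A DCO sign-off looks like: Signed-off-by: Jane Doe <jane@example.com>
    return bool(re.search(r"^Signed-off-by: .+ <.+@.+>$", message, re.MULTILINE))


def main() -> int:
    commit_msg_path = sys.argv[1]  # git passes the message file path as the first argument
    with open(commit_msg_path, encoding="utf-8") as handle:
        message = handle.read()
    if not has_signoff(message):
        print("Commit message lacks a Signed-off-by trailer; use `git commit -s`.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```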
clarkb | my goal is to sit at the computer early one morning and then just work through that all day, ensuring it gets done in an 8-12 hour timespan and no longer | 21:21
fungi | as in 3-4 hours per cluster member? | 21:25 |
fungi | taking the leader down last will force an election onto one of the two newer replacements right? | 21:26 |
clarkb | sorry had to step away for a bit | 21:48 |
clarkb | yes, when we eventually stop the leader one of the other two (which will be new servers by that point) should become leader, then the new server that replaces the old leader should join as a follower | 21:49
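A small sketch of how the leader/follower roles discussed above could be confirmed during the rollover, using ZooKeeper's `srvr` four-letter command (it has to be enabled via 4lw.commands.whitelist; the hostnames below are placeholders, not the real cluster members):

```python
#!/usr/bin/env python3
# Hypothetical helper: ask each ZooKeeper server for its role so you can
# confirm which member is the leader before and after each replacement.
import socket

ZK_SERVERS = ["zk01.example.org", "zk02.example.org", "zk03.example.org"]  # placeholder names


def zk_role(host: str, port: int = 2181, timeout: float = 5.0) -> str:
    """Return the Mode line (leader/follower) reported by the srvr command."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"srvr")
        data = b""
        while chunk := sock.recv(4096):
            data += chunk
    for line in data.decode().splitlines():
        if line.startswith("Mode:"):
            return line.split(":", 1)[1].strip()
    return "unknown"


if __name__ == "__main__":
    for host in ZK_SERVERS:
        print(f"{host}: {zk_role(host)}")
```

Running something like this between steps makes it easy to see when the election has settled on one of the new members.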
clarkb | I don't actually think it will take 3-4 hours per cluster member. More just thinking that it is likely to take at least 4-6 hours so planning for it to take a day allows for debugging or unexpected slowness | 21:49 |
clarkb | the first change will be slower than the others because it updates the test jobs so they all have to run. The subsequent changes should only modify the inventory and run the base job and linters iirc | 21:50
clarkb | but then in deploy we have to let it run through all the jobs which is still a bit slow (like half an hour?) | 21:50
clarkb | so figure at least an hour per node and that's 3 hours alone, then you've got additional overhead waiting for test nodes and so on. I suspect 4 hours is a good best case scenario | 21:51
fungi | clarkb: any guess how long the data will take to replicate to a new cluster member? | 22:27 |
fungi | i'm assuming we don't really have a good way to gauge that other than to jdi | 22:28 |
clarkb | I expect that to be almost instantaneous | 22:34
clarkb | the entire data set is under 30MB currently | 22:34 |
clarkb | I don't think we ever go over 100MB | 22:34 |
fungi | ah, okay, so highly space-efficient database then | 23:02 |
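Along the same lines, the approximate data size a new member has to sync can be read from ZooKeeper's `mntr` four-letter command (again assuming it is whitelisted; the hostname is a placeholder):

```python
#!/usr/bin/env python3
# Hypothetical sketch: read zk_approximate_data_size from the "mntr"
# command to sanity-check how much data a new member must replicate.
import socket


def zk_data_size_bytes(host: str, port: int = 2181) -> int:
    with socket.create_connection((host, port), timeout=5.0) as sock:
        sock.sendall(b"mntr")
        data = b""
        while chunk := sock.recv(4096):
            data += chunk
    for line in data.decode().splitlines():
        key, _, value = line.partition("\t")  # mntr output is tab-separated key/value pairs
        if key == "zk_approximate_data_size":
            return int(value)
    raise RuntimeError("zk_approximate_data_size not reported")


if __name__ == "__main__":
    size = zk_data_size_bytes("zk01.example.org")  # placeholder hostname
    print(f"approximate data size: {size / 1e6:.1f} MB")
```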