*** tosky has quit IRC | 00:39 | |
*** DSpider has quit IRC | 01:05 | |
*** ykarel|away has joined #opendev | 04:34 | |
*** ykarel|away has quit IRC | 04:36 | |
*** chandan_kumar has joined #opendev | 04:37 | |
*** chandan_kumar is now known as chandankumar | 04:37 | |
*** ysandeep is now known as ysandeep|brb | 05:24 | |
*** brinzhang has joined #opendev | 05:59 | |
*** marios has joined #opendev | 06:10 | |
*** ysandeep|brb is now known as ysandeep | 06:34 | |
*** sboyron has joined #opendev | 06:49 | |
*** ralonsoh has joined #opendev | 07:06 | |
*** whoami-rajat__ has joined #opendev | 07:21 | |
*** slaweq has joined #opendev | 07:27 | |
*** eolivare has joined #opendev | 07:37 | |
*** hashar has joined #opendev | 07:48 | |
prometheanfire | is gertty working? I'm getting 401s; reset my http password just in case. Running git master gertty, same issue on py37 and py38 https://gist.github.com/prometheanfire/85ef04eb64047088b09ace04223eb900 | 07:59 |
*** andrewbonney has joined #opendev | 08:05 | |
*** tosky has joined #opendev | 08:37 | |
*** mgoddard has joined #opendev | 08:54 | |
*** DSpider has joined #opendev | 09:15 | |
*** rpittau|afk is now known as rpittau | 09:27 | |
*** fressi has joined #opendev | 10:07 | |
*** mgoddard has quit IRC | 10:47 | |
*** brinzhang_ has joined #opendev | 10:49 | |
*** brinzhang has quit IRC | 10:53 | |
*** cgoncalves has quit IRC | 11:18 | |
*** cgoncalves has joined #opendev | 11:21 | |
*** dtantsur|afk is now known as dtantsur | 11:31 | |
*** mgoddard has joined #opendev | 11:55 | |
*** mgoddard has quit IRC | 12:07 | |
*** tosky has quit IRC | 12:24 | |
*** tosky has joined #opendev | 12:25 | |
*** hashar has quit IRC | 12:30 | |
*** sshnaidm|off is now known as sshnaidm | 12:38 | |
*** hashar has joined #opendev | 12:46 | |
fungi | prometheanfire: did you switch the server entry in your config from digest auth to basic auth? | 13:04 |
fungi | prometheanfire: see item #3 in http://lists.opendev.org/pipermail/service-announce/2020-November/000014.html | 13:05 |
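For anyone hitting the same 401s, the fix lives in gertty's YAML config: the server entry needs to switch from digest to basic auth. A minimal sketch, assuming gertty's documented auth-type option (the name, username, and password values are placeholders, and other keys such as git-root are omitted):

    servers:
      - name: opendev
        url: https://review.opendev.org/
        username: myuser
        password: "<http password generated in Gerrit settings>"
        auth-type: basic    # was: digest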
*** d34dh0r53 has joined #opendev | 13:58 | |
*** redrobot has joined #opendev | 13:59 | |
fungi | heads up everyone, pip 20.3 showed up today and turns on the new dependency resolver by default. keep an eye out for any unexpected changes in behavior | 13:59 |
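For jobs that do break under the new resolver, pip 20.3 still ships an escape hatch; roughly (the package name is a placeholder):

    # temporarily fall back to the old resolver behavior
    pip install --use-deprecated=legacy-resolver somepackage
    # or pin pip itself until the breakage is sorted out
    pip install 'pip<20.3'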
*** fressi has quit IRC | 14:12 | |
*** ysandeep is now known as ysandeep|brb | 14:24 | |
*** ysandeep|brb is now known as ysandeep | 14:45 | |
*** d34dh0r53 has quit IRC | 14:48 | |
*** hashar has quit IRC | 14:51 | |
fungi | infra-root: possible data integrity issue in gerrit reported in #openstack-infra just now, this is what it looks like from the server side when a push is failing: http://paste.openstack.org/show/800546/ | 14:57 |
*** d34dh0r53 has joined #opendev | 14:59 | |
fungi | first occurrence of any "Missing tree" in the error_log was at 14:24:52 today and it looks like all of them are for that same 466cd024745bbeffebf3f8e598759fcc9b75df4d id | 15:00 |
fungi | double-checking those assertions now | 15:00 |
fungi | nope, there were also two for b5ecb7fe4fa73b1cd1d6e87142ce359c5124be90 in openstack/neutron | 15:03 |
fungi | so separate projects | 15:03 |
fungi | `git show <id>` in the corresponding bare repos in ~gerrit2 returns tree contents for both of those | 15:07 |
fungi | also the user was eventually able to push their changes, it looks like this may have been transient | 15:08 |
fungi | the most recent missing tree error logged was at 14:57:02 | 15:08 |
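For reference, the check described above amounts to asking git whether the object the client reports as missing actually exists server-side; a sketch (the exact repository path under ~gerrit2 is an assumption):

    cd ~gerrit2/review_site/git/openstack/nova.git    # path is an assumption
    git cat-file -t 466cd024745bbeffebf3f8e598759fcc9b75df4d    # prints "tree" if the object exists
    git show 466cd024745bbeffebf3f8e598759fcc9b75df4d           # lists the tree contents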
prometheanfire | fungi: thanks, missed that | 15:16 |
*** gibi has joined #opendev | 15:16 | |
openstackgerrit | Matthew Thode proposed ttygroup/gertty master: update examples for new opendev gerrit version https://review.opendev.org/c/ttygroup/gertty/+/764774 | 15:20 |
fungi | prometheanfire: i think there might already be a duplicate of that waiting for corvus to review when he returns | 15:21 |
prometheanfire | heh | 15:22 |
fungi | infra-root: also seeing the gerrit load average climbing again like we observed early last week | 15:23 |
fungi | it's over 60 at the moment | 15:24 |
fungi | and again cpus are ~80% idle most of that time | 15:24 |
fungi | with the rest somewhat evenly split between user and system activity | 15:24 |
*** ysandeep is now known as ysandeep|away | 15:26 | |
fungi | seeing lots of change queries from (mostly openstack/cinder) third-party ci systems in the show-queue output again too | 15:29 |
openstackgerrit | Sean McGinnis proposed openstack/project-config master: [check-release-approval] Handle PTL-less projects https://review.opendev.org/c/openstack/project-config/+/764775 | 15:37 |
fungi | gibi: at 15:34:22z we logged a missing tree 466cd024745bbeffebf3f8e598759fcc9b75df4d in openstack/nova | 15:37 |
gibi | yes that was me again | 15:38 |
gibi | then on a second try it worked | 15:38 |
fungi | that's the same tree id it was complaining about earlier | 15:38 |
fungi | system load is scarily high on the server though so i'm wondering if it's timing out some internal read task and that's bubbling up as a git error | 15:39 |
gibi | could be the case as I see that the push is slow even when it succeeds | 15:39 |
fungi | having it only hit nova and neutron so far (and the first more often than the second) isn't surprising if whatever action is failing scales with repository size | 15:41 |
fungi | we've got a smallish backlog of send-email tasks in the queue, though that's probably just a symptom of overall resource starvation on the server | 15:43 |
fungi | that backlog seems to be falling steadily over the past few minutes | 15:44 |
clarkb | fungi: https://groups.google.com/g/repo-discuss/c/7CemrH4lVJE I wonder if that is related (note they are seeing it on 2.16) | 15:44 |
clarkb | fungi: I think we should look at melody again and show-queue? | 15:46 |
fungi | that doesn't mention if they were using notedb, so hard to know if it's related | 15:46 |
clarkb | fungi: ya | 15:46 |
clarkb | "The projects cache just exploded, with 10% of the projects being out of the cache coverage." | 15:47 |
clarkb | was in one of luca's replies. I'm not sure how that was determined though or how we check if we are in the same situation | 15:47 |
clarkb | fungi: `gerrit show-caches --show-jvm --show-threads` ? I think that is what luca was looking at to make that determination | 15:49 |
fungi | takes a while to return | 15:50 |
clarkb | projects | 1024 | 7.9ms | 90% | | 15:50 |
clarkb | was from the upstream thread so ya I think that was the source of the info | 15:50 |
fungi | projects | 1024 | 9.9ms | 99% | | 15:51 |
fungi | is what it gave me | 15:51 |
clarkb | fungi: any other caches look like low hit rates though? | 15:51 |
fungi | changeid_project | 1024 | | 24% | | 15:51 |
fungi | that's probably our culprit right there | 15:51 |
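The statistics being quoted here come from Gerrit's show-caches SSH command run by an administrator; roughly (the user name is a placeholder and needs Gerrit admin/ViewCaches capability):

    ssh -p 29418 admin@review.opendev.org gerrit show-caches --show-jvm --show-threads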
clarkb | cache.changeid_project.memoryLimit is the config value to set to change that | 15:52 |
clarkb | 1024 at 24% would mean that ~4096 may be enough? but maybe we do 8192 to be cautious? | 15:53 |
fungi | seems reasonable | 15:53 |
clarkb | fungi: also note that 1024 is the default limit for many of these caches | 15:53 |
clarkb | so if others are showing 1024 entries maybe we should double them? | 15:54 |
fungi | lots of them are: | 15:54 |
fungi | groups_bysubgroup, groups_byuuid(2400), permission_sort, projects | 15:55 |
*** brinzhang0 has joined #opendev | 15:55 | |
fungi | those are the exclusively-memory-resident caches at 1024 entries or greater | 15:55 |
clarkb | groups_byuuid is unlimited apparently | 15:55 |
fungi | there are also disk-backed caches with a lot more entries | 15:55 |
fungi | the diff cache for example has a 65% cache hit rate, but it's disk-based and there may be little we can do to improve that | 15:56 |
fungi | also it will likely improve once the cache warms more | 15:56 |
clarkb | ya I imagine that one will warm as it goes. | 15:57 |
*** brinzhang_ has quit IRC | 15:58 | |
fungi | okay, so we want to try upping to changeid_project=8192 groups_bysubgroup=2048 permission_sort=2048 projects=2048 | 15:58 |
clarkb | based on that thread I linked luca seems to think poor cache hit rates are a bad thing (not surprising) so maybe we start with setting cache.changeid_project.memoryLimit to 8196 and cache.permission_sort.memoryLimit to 2048 and cache.projects.memoryLimit to 2048 ? The groups_* I think may already have high limits based on docs | 15:58 |
clarkb | fungi: ya that sounds right (though I'm not sure we need to go _bysubgroup) | 15:59 |
clarkb | unless you see it stuck at 1024 implying it isn't one of the unlimited caches | 15:59 |
clarkb | in that case lets bump it to 2048 too | 15:59 |
fungi | groups_bysubgroup | 1024 | 4.6ms | 99% | | 15:59 |
clarkb | ya lets set that to 2048 then | 15:59 |
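Collected together, the limits agreed on above would land in gerrit.config roughly like this (a sketch following Gerrit's cache.<name>.memoryLimit documentation; the values actually merged may differ):

    [cache "changeid_project"]
        memoryLimit = 8192
    [cache "permission_sort"]
        memoryLimit = 2048
    [cache "projects"]
        memoryLimit = 2048
    [cache "groups_bysubgroup"]
        memoryLimit = 2048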
fungi | are you writing that change or shall i? | 15:59 |
clarkb | fungi: I've not yet loaded my ssh keys | 16:00 |
fungi | on it | 16:00 |
clarkb | if you want to start on it while I do that, that would be great | 16:00 |
*** mlavalle has joined #opendev | 16:00 | |
gibi | fungi: now got an internal error when pushed a longer commit chain | 16:08 |
gibi | fungi: http://paste.openstack.org/show/800554/ | 16:08 |
*** d34dh0r53 has quit IRC | 16:09 | |
openstackgerrit | Jeremy Stanley proposed opendev/system-config master: Increase some of Gerrit's in-memory cache pools https://review.opendev.org/c/opendev/system-config/+/764779 | 16:10 |
fungi | gibi: thanks! we're hoping that ^ improves things | 16:10 |
gibi | cool | 16:10 |
clarkb | fungi: do you want to manually apply that and restart? or should I approve it and we wait for it to apply through the normal process? | 16:11 |
clarkb | fungi: re 8196 I meant to type 8192 :) but an off by 4 number shouldn't matter much here :) | 16:12 |
openstackgerrit | Jeremy Stanley proposed opendev/system-config master: Increase some of Gerrit's in-memory cache pools https://review.opendev.org/c/opendev/system-config/+/764779 | 16:13 |
fungi | oops! that's what i get for copying and pasting from your numbers instead of mine ;) | 16:13 |
fungi | fixed in ps2 | 16:13 |
fungi | i hadn't even noticed | 16:13 |
clarkb | fungi: I've got ssh keys loaded if you want to manually apply that and restart fwiw | 16:14 |
fungi | clarkb: and yeah, we may as well apply it manually if you think it looks right. we've got users experiencing extremely sub-par performance and errors (i saw it take nearly a minute to push the first patchset for that change) | 16:14 |
clarkb | fungi: yes that change lgtm | 16:15 |
fungi | status notice The Gerrit service on review.opendev.org is being restarted quickly to troubleshoot high load and poor query caching performance, downtime should be less than 5 minutes | 16:15 |
clarkb | that lgtm too | 16:15 |
fungi | that work for a notice to users? | 16:15 |
fungi | cool, jumping in the root screen session now | 16:15 |
clarkb | I'm there too if you want to drive I'll follow along | 16:16 |
*** d34dh0r53 has joined #opendev | 16:16 | |
fungi | screen -x 123851 | 16:16 |
fungi | yeah. i'm in | 16:16 |
clarkb | put a #hi in the window I see to make sure we're looking at the same buffer | 16:17 |
*** eolivare has quit IRC | 16:17 | |
fungi | yeah, i saw it ;) | 16:17 |
*** brinzhang0 has quit IRC | 16:18 | |
clarkb | fungi: that looks correct to me | 16:19 |
fungi | thanks, writing | 16:19 |
*** brinzhang0 has joined #opendev | 16:19 | |
fungi | #status notice The Gerrit service on review.opendev.org is being restarted quickly to troubleshoot high load and poor query caching performance, downtime should be less than 5 minutes | 16:19 |
openstackstatus | fungi: sending notice | 16:19 |
-openstackstatus- NOTICE: The Gerrit service on review.opendev.org is being restarted quickly to troubleshoot high load and poor query caching performance, downtime should be less than 5 minutes | 16:19 | |
clarkb | fungi: I say go for it whenever you are ready | 16:20 |
fungi | okay, ready for me to down the container? | 16:20 |
fungi | cool, doing it now | 16:20 |
*** osmanlicilegi has joined #opendev | 16:20 | |
fungi | and it's on its way back up now | 16:20 |
openstackstatus | fungi: finished sending notice | 16:22 |
clarkb | fungi: `WARN com.google.gerrit.server.logging.LoggingContextAwareRunnable : Logging context is not empty` <- is it mad about not being able to rotate the replication log or something? | 16:23 |
fungi | could be | 16:24 |
clarkb | seems like it is logging to the replication log just fine though | 16:26 |
fungi | system load average is fairly low at the moment | 16:26 |
clarkb | I'm going to run the show caches command to see what that looks like out of curiosity | 16:26 |
fungi | yeah, i just started that as well | 16:27 |
fungi | caches are still warming, other than projects which has already topped out | 16:27 |
fungi | i guess something's querying all our projects regularly | 16:27 |
fungi | i guess we could increase that even more. presumably it will never have more entries than the total number of repos in the system? | 16:28 |
clarkb | fungi: "Caches the project description records, from the refs/meta/config branch of each project." | 16:29 |
clarkb | I think the answer to that question is yes | 16:29 |
clarkb | note that changeid_project doesn't have a documentation entry describing its purpose | 16:29 |
clarkb | fungi: looks like load is climbing | 16:32 |
clarkb | but maybe that is due to needing to rewarm the caches? | 16:33 |
clarkb | another thought I had was that maybe we should look at deploying java 11 just to get up to speed there and rule out gc performance as a potential cause | 16:33 |
fungi | makes sense | 16:35 |
fungi | also load isn't that high right now comparatively | 16:36 |
fungi | it's been hovering around 8 when i've checked after the restart | 16:36 |
clarkb | ya it is rising and falling | 16:36 |
clarkb | definitely better than before | 16:36 |
fungi | mmm... back up to 35 now | 16:40 |
fungi | changeid_project cache has only 246 entries, cache hit rate for it is 72% at the moment | 16:41 |
clarkb | fungi: its starting to fall back down again now fiw | 16:42 |
clarkb | it seems bursty | 16:42 |
*** hashar has joined #opendev | 17:00 | |
fungi | changeid_project | 2183 | | 38% | | 17:01 |
openstackgerrit | Merged zuul/zuul-jobs master: Fix typo with container_images siblings logic https://review.opendev.org/c/zuul/zuul-jobs/+/764230 | 17:01 |
fungi | oof | 17:01 |
fungi | granted that's a better hit rate than we had before the restart, and that pool is only ~ a quarter full so far | 17:01 |
fungi | but we may still want to consider bumping it up even more | 17:02 |
clarkb | ya and I think a lot of that population is happening in response to actions in the system | 17:02 |
clarkb | basically that cache is cold right now and things people are doing are populating it | 17:02 |
clarkb | Probably too early to tell if we'll plateau around the earlier predicted ~4k | 17:02 |
clarkb | fungi: maybe lets wait and see if we go above 4k? and if so then ya bump it up further | 17:03 |
fungi | sure | 17:03 |
fungi | also groups_byuuid has reached 2048 now | 17:03 |
clarkb | I expect that 38% should climb as more things use what was recently added to the cache | 17:03 |
clarkb | fungi: interestingly that one should be 32k I think | 17:03 |
fungi | could just be coincidental | 17:04 |
clarkb | fungi: I wonder if by setting a bound on bysubgroup < 32k we've set a lower bound for that one too | 17:04 |
clarkb | something to watch, its hit rate is still good | 17:04 |
fungi | yep | 17:04 |
*** marios is now known as marios|out | 17:11 | |
fungi | borg is running now too and eating an entire cpu, so that's contributing some of the load | 17:13 |
fungi | the last "Missing tree" exception in gerrit's error_log was 16:39:44z, so we're still seeing it after the restart | 17:22 |
*** hamalq has joined #opendev | 17:23 | |
*** mgoddard has joined #opendev | 17:30 | |
clarkb | fungi: that was not long after the restart though, maybe we still had cold caches? | 17:32 |
*** venkatakrishnath has joined #opendev | 17:32 | |
clarkb | fungi: that one was associated to what gibi was doing | 17:34 |
clarkb | gibi: ^ have you had more recent issues or is it happier now? | 17:34 |
venkatakrishnath | Facing an issue: I need 'Create' rights to create new references when submitting code to openstack-cinder through Gerrit. More details: http://paste.openstack.org/show/800557/ | 17:35 |
venkatakrishnath | Can someone help? | 17:35 |
*** mgoddard has quit IRC | 17:35 | |
clarkb | venkatakrishnath: line 11 is the hint you need. git-review needs to be upgraded | 17:36 |
venkatakrishnath | Upgraded to 1.28.0 | 17:36 |
clarkb | ah the next issue is you can't push drafts | 17:36 |
clarkb | we haven't allowed drafts because they create confusion and problems that people don't expect. Just push it as a regular change. You can mark it WIP in the web ui from there if you like | 17:37 |
venkatakrishnath | I have cleared all drafts on my username and here is the latest error message http://paste.openstack.org/show/800561/ | 17:38 |
clarkb | venkatakrishnath: you are pushing to refs/drafts/ (see line 28 of the latest paste) this is not allowed | 17:39 |
clarkb | this is because you are using git-review -D | 17:39 |
clarkb | stop using -D | 17:39 |
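A sketch of the non-draft workflow being suggested (the "gerrit" remote name is what git-review normally configures; the %wip push option assumes Gerrit 2.15 or newer):

    git review                                    # pushes HEAD to refs/for/<branch>
    # or, without git-review:
    git push gerrit HEAD:refs/for/master
    # optionally mark the change work-in-progress at push time:
    git push gerrit HEAD:refs/for/master%wip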
clarkb | fungi: gibi fwiw I was able to push a change to nova without getting that error | 17:41 |
venkatakrishnath | I was using the draft push to check that the git-review command works. It's working now without -D | 17:41 |
venkatakrishnath | Thanks | 17:41 |
clarkb | venkatakrishnath: we've not enabled that functionality in years | 17:42 |
*** rpittau is now known as rpittau|afk | 17:42 | |
clarkb | it was just too confusing for CI and people thought it was private when it wasn't so we disabled it | 17:42 |
*** hashar is now known as hasharAway | 17:44 | |
fungi | venkatakrishnath: clarkb: yes, git review -D hasn't worked with our gerrit since https://review.openstack.org/255050 merged almost 5 years ago | 17:45 |
venkatakrishnath | clarkb: I have been submitting code since this August and following that. Thanks for the information | 17:45 |
openstackgerrit | Jeremy Stanley proposed opendev/system-config master: Disable Gerrit's automatic Git GC on push https://review.opendev.org/c/opendev/system-config/+/764807 | 17:54 |
fungi | clarkb: ^ | 17:54 |
clarkb | fungi: my only nit is the whitespace difference :P | 17:55 |
clarkb | fungi: do you want to change that? | 17:55 |
fungi | oh, sure thing | 17:56 |
openstackgerrit | Jeremy Stanley proposed opendev/system-config master: Disable Gerrit's automatic Git GC on push https://review.opendev.org/c/opendev/system-config/+/764807 | 17:57 |
*** andrewbonney has quit IRC | 17:59 | |
clarkb | fungi: that's weird, ps2 shows the lines above as not having tabs anymore but there is no diff there. I wonder if that is a rendering bug | 18:00 |
fungi | could be | 18:00 |
clarkb | +2 from me anyway | 18:00 |
fungi | also that file has a mix of spaces and tabs already | 18:00 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: WIP: Zuul Cache role https://review.opendev.org/c/zuul/zuul-jobs/+/764808 | 18:01 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: WIP: Zuul Cache role https://review.opendev.org/c/zuul/zuul-jobs/+/764808 | 18:03 |
*** venkatakrishnath has quit IRC | 18:07 | |
*** mlavalle has quit IRC | 18:11 | |
fungi | changeid_project | 4571 | | 43% | 18:12 |
fungi | we're over 4096 entries now | 18:12 |
fungi | load average has fallen back to reasonable levels | 18:12 |
*** dtantsur is now known as dtantsur|afk | 18:13 | |
fungi | that cache continues to grow and the hit rate continues to fall... i'm starting to wonder if it's effectively cacheable | 18:22 |
*** marios|out has quit IRC | 18:24 | |
melwitt | oof, gonna take awhile to get used to this new gerrit ui | 18:24 |
melwitt | mine eyes @_@ | 18:24 |
fungi | tried the dark theme under user prefs? | 18:24 |
melwitt | not yet | 18:25 |
fungi | doesn't blast my retinas with quite as much radiation at least | 18:25 |
melwitt | :D | 18:25 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: WIP: Zuul Cache role https://review.opendev.org/c/zuul/zuul-jobs/+/764808 | 18:28 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: WIP: Zuul Cache role https://review.opendev.org/c/zuul/zuul-jobs/+/764808 | 18:44 |
*** mgoddard has joined #opendev | 19:19 | |
fungi | changeid_project | 7450 | | 40% | 19:26 |
fungi | i have a feeling we're going to top out our new 8192 limit and not see a >50% cache hit ratio | 19:26 |
clarkb | fungi: ya we should probably consider another restart to apply the receive auto gc disable and bump up that cache value further? | 19:27 |
fungi | so increase changeid_project to... 16384? and maybe bump the projects cache to 4096? | 19:29 |
clarkb | 16k or even 32k maybe? We appear to have plenty of memory according to show-caches | 19:29 |
clarkb | but ya that and the receive autogc disable? | 19:30 |
clarkb | corvus: ^ not sure if you want to review any of that | 19:30 |
fungi | i'll update the cache change with those values now | 19:30 |
*** mgoddard has quit IRC | 19:33 | |
openstackgerrit | Jeremy Stanley proposed opendev/system-config master: Increase some of Gerrit's in-memory cache pools https://review.opendev.org/c/opendev/system-config/+/764779 | 19:34 |
openstackgerrit | Jeremy Stanley proposed opendev/system-config master: Disable Gerrit's automatic Git GC on push https://review.opendev.org/c/opendev/system-config/+/764807 | 19:34 |
fungi | clarkb: corvus: ^ | 19:34 |
clarkb | fungi: lgtm | 19:35 |
fungi | changeid_project | 7900 | | 41% | 19:38 |
fungi | so it's just about topped out | 19:38 |
clarkb | fungi: I'm about to grab lunch. Do you want to do those updates and restart nowish or wait for later? | 19:43 |
*** tosky has quit IRC | 19:45 | |
fungi | i would maybe give it a little longer so we can observe behavior with warmed caches, and also because frequent restarts aren't helping users get work done. i think the urgency is somewhat lessened as the system load graph is looking a lot more sane and i haven't seen more "missing tree" exceptions since 16:39:44z | 19:46 |
clarkb | ok I'll grab lunch then | 19:46 |
fungi | maybe in a few hours once activity levels are trailing off again, a restart would be slightly less disruptive too | 19:47 |
clarkb | wfm | 19:47 |
clarkb | fungi: I suppose it's possible that we'll evict from the cache and still be reasonably happy at the current level | 19:49 |
clarkb | whereas before maybe the level was too low/too much eviction | 19:50 |
fungi | changeid_project | 8192 | | 44% | 20:24 |
fungi | at least the hit rate has climbed a bit | 20:24 |
fungi | and that's nearly double the hit rate we saw at ~steady state with a smaller cache | 20:25 |
fungi | load has been nominal for over two hours now as well | 20:26 |
fungi | well under the cpu count | 20:26 |
*** ralonsoh has quit IRC | 20:36 | |
fungi | and back down to 41% cache hit ratio now | 20:46 |
clarkb | but load seems to be fairly stable? I wonder if that cache size is large enough that evictions aren't super costly and we update as we need to? | 20:47 |
clarkb | still would've expected a much better hit rate given the previous data | 20:47 |
fungi | yeah, load average is still reasonable | 20:47 |
*** tosky has joined #opendev | 20:48 | |
fungi | it's been running somewhere between 2 and 15 that i've seen | 20:48 |
fungi | 5-minute average just dipped below 1 moments ago | 20:49 |
fungi | watched it dip as low as 0.72 for a bit | 20:50 |
fungi | but it's back up around 4 again now | 20:51 |
*** sboyron has quit IRC | 20:54 | |
*** sboyron has joined #opendev | 20:56 | |
*** whoami-rajat__ has quit IRC | 21:08 | |
*** DSpider has quit IRC | 21:46 | |
openstackgerrit | Matthew Treinish proposed opendev/subunit2sql master: Fix compatibility with latest oslo.config https://review.opendev.org/c/opendev/subunit2sql/+/764832 | 22:19 |
fungi | clarkb: seems the hit ratio for changeid_project continues to hover in the 40-44% range for the past few hours. also we haven't maxed out any of the other newer cache memory limits besides projects, so i've staged those increases and the autogc false in the root screen session we've been using if you want to double-check it | 22:30 |
clarkb | fungi: the grep you've got there looks good to me | 22:30 |
fungi | status notice The Gerrit service on review.opendev.org is being restarted quickly to make further query caching and Git garbage collection adjustments, downtime should be less than 5 minutes | 22:32 |
fungi | that work for the restart notice? | 22:32 |
clarkb | yes that lgtm too | 22:32 |
fungi | i can restart whenever folks are ready, zuul seems relatively idle | 22:33 |
clarkb | I'm deep into writing emails right now but don't wait on me | 22:35 |
fungi | #status notice The Gerrit service on review.opendev.org is being restarted quickly to make further query caching and Git garbage collection adjustments, downtime should be less than 5 minutes | 22:35 |
openstackstatus | fungi: sending notice | 22:35 |
fungi | i'll give that a minute and then down/up -d the container | 22:35 |
-openstackstatus- NOTICE: The Gerrit service on review.opendev.org is being restarted quickly to make further query caching and Git garbage collection adjustments, downtime should be less than 5 minutes | 22:35 | |
fungi | and it's down | 22:36 |
fungi | and it's starting again | 22:36 |
TheJulia | ugh | 22:36 |
fungi | sorry TheJulia, hopefully it'll perform better with this configuration update | 22:37 |
TheJulia | no worries | 22:37 |
fungi | we're fighting internal gerrit queue backlogs and cache tuning needs for the new version now that it's being put under additional load as lots of people are returning from a week off | 22:38 |
openstackstatus | fungi: finished sending notice | 22:38 |
TheJulia | c'est la vie | 22:38 |
TheJulia | and it is back | 22:39 |
fungi | it's one of those look at the full caches and hit ratios, guess how big they need to be, reconfigure and restart, let things settle for a while, look at the caches again, realize you guessed poorly, lather, rinse, repeat | 22:39 |
*** slaweq has quit IRC | 22:40 | |
TheJulia | fun :( | 22:40 |
TheJulia | and then GC fun too | 22:40 |
fungi | TheJulia: yeah, this is a classic quote from the gerrit docs: "By default, git-receive-pack will run auto gc after receiving data from git-push and updating refs. You can stop it by setting this variable to false. This is recommended in gerrit to avoid the additional load this creates." or in other words "this option defaults to true, but we recommend you set it to false" | 22:42 |
TheJulia | heh, that sounds familiar | 22:42 |
fungi | it goes on to suggest a periodic repack instead, which we already do, so it was somewhat redundant anyway | 22:43 |
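Per the documentation quoted above, the change being discussed boils down to a single gerrit.config setting; roughly:

    [receive]
        autogc = false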
*** slaweq has joined #opendev | 22:45 | |
mordred | "the default behavior of git pull is to to pull and merge, which you probably don't want, so instead please remember to always type --ff-only when you type pull" | 22:49 |
*** slaweq has quit IRC | 22:50 | |
clarkb | they at least made that the default in new gerrit now iirc | 22:50 |
fungi | joke's on them, i always just `git remote update && git reset --hard origin/master` instead | 22:50 |
mordred | fungi: me too! | 22:50 |
mordred | fungi: I use reset --hard instead of checkout most of the time :) | 22:51 |
TheJulia | hard resets ftw | 22:51 |
mordred | who needs named branches when you have git reflog? | 22:51 |
TheJulia | hehehe | 22:51 |
fungi | oh, i do also sometimes `git checkout -B mytopicbranch --no-track origin/master` | 22:52 |
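For completeness, the habits being joked about here can also be made defaults; a sketch:

    # make git pull refuse anything that isn't a fast-forward
    git config --global pull.ff only
    # the "never merge on pull" workflow: fetch everything, then hard-reset
    git remote update
    git reset --hard origin/master
    # start a fresh topic branch from origin/master without tracking it
    git checkout -B mytopicbranch --no-track origin/master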
TheJulia | which reminds me, I need to go dig through my reflog on one of my repos after I finish my current change | 22:52 |
fungi | ...to fill out your weekly progress report? ;) | 22:52 |
fungi | sounds like something i'd do | 22:52 |
TheJulia | no, could have sworn I pushed up a decently sized change | 22:53 |
TheJulia | well, one with words that was more than a quick edit | 22:53 |
TheJulia | and I don't know what happened ot it | 22:53 |
TheJulia | s/ot/to/ | 22:53 |
fungi | tried a message:"words you thought you used" search in gerrit? | 22:54 |
TheJulia | no, I haven't | 22:54 |
TheJulia | I know the repo, so I guess I just never pushed it up | 22:54 |
TheJulia | or gerrit ate it which is something I don't want to think about | 22:55 |
TheJulia | so I'll go with "I forgot to run git review" | 22:55 |
fungi | also occasionally i can't find my changes because someone actually approved them. and then i get happily surprised | 22:55 |
mordred | maybe you pushed it up to a honeypot gerrit by mistake | 22:55 |
mordred | fungi: ++ | 22:55 |
TheJulia | huh, that is a good point, I thought I checked all changes already | 22:55 |
fungi | also (thankfully less often) because someone abandoned them | 22:56 |
fungi | TheJulia: also if you know what file you meant to change you can search for project:some/project file:path/to/thatfile | 22:56 |
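The search operators mentioned here can be combined in the Gerrit search box, for example (the project name, message text, and file path are placeholders):

    owner:self project:openstack/ironic-specs message:"words you thought you used"
    owner:self project:some/project file:path/to/thatfile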
mordred | what I love is when I write a big patch, push it up, then can't find it, then rewrite it and push it up - then find the old patch because the two are in conflict and show up in the gerrit conflicts list | 22:57 |
fungi | oh, yep, i've definitely had that happen. or more often i forget i pushed a fix and then push the same fix again, with varying degrees of agreement vs the original | 22:57 |
TheJulia | oh yeah, it is a specs repo, so not much happens there anyway | 22:57 |
*** hasharAway has quit IRC | 22:58 | |
TheJulia | I just need to finish reorganizing my desk since I started this past weekend and never finished | 22:59 |
* TheJulia grumbles about needing landscape screens now | 22:59 | |
mordred | TheJulia: my first reaction to landscape screens was assuming you were talking about gardening | 23:00 |
fungi | that sounds like a very healthy first reaction | 23:00 |
TheJulia | It does | 23:00 |
clarkb | I'm about to send out the infra meeting agenda for tomorrow. Anything else to add to it? | 23:01 |
TheJulia | not the "my screen is now too narrow to all fit and make optical sense of what I'm looking at" | 23:01 |
fungi | i have two 2560x1440 screens in portrait orientation and just keep a browser in one, so it gets 1440px width to do what it needs, and i've seen that be generally happy | 23:02 |
fungi | i was previously doing that with 1920x1080 screens but a lot of stuff that's happy with 1440px width isn't so good at 1080px width | 23:03 |
TheJulia | yeah, these are 1080 so sadness | 23:05 |
TheJulia | I've got new arms on the way so I can change up the mounting. It should work out a lot better | 23:05 |
clarkb | agenda sent as I didn't hear any need for updates | 23:06 |
TheJulia | clarkb: out of curiosity, any resolution to ui glitching with browser graphics acceleration? | 23:07 |
clarkb | TheJulia: no, sorry I don't know that we have filed a bug yet. | 23:07 |
clarkb | it does sound like Google has no intention of enabling hardware acceleration on linux by default though | 23:08 |
clarkb | I half expect the response will be "the browser doesn't support this so we can't" | 23:08 |
*** tkajinam has quit IRC | 23:10 | |
TheJulia | That is... extreme sadness | 23:11 |
TheJulia | since I can actually feel the impact on my browser and hear my CPU fan having to throttle more | 23:11 |
TheJulia | :( | 23:11 |
clarkb | I don't know if it is possible with chrome but firefox allows you to toggle that per user session I think | 23:11 |
*** tkajinam has joined #opendev | 23:11 | |
clarkb | maybe you can have the gerrit window without acceleration and the everything else window with it | 23:12 |
fungi | was anyone able to determine if it was similar to the chromium crashes for webrtc? | 23:12 |
clarkb | I didn't | 23:12 |
fungi | just wondering if the next release will quietly fix it again | 23:12 |
clarkb | I actually don't have chromium just chrome and firefox. I use firefox for 99% of my browsing then chrome for when that doesn't work (flash/nonfree codecs/etc) | 23:15 |
clarkb | fungi: should we approve your changes to reflect the config we're running in prod? I don't know that corvus or ianw will end up reviewing them today? | 23:16 |
fungi | yeah, probably a good idea just to keep configuration management in sync with configuration reality | 23:18 |
clarkb | fungi: did you want to push the +A button then? | 23:19 |
fungi | oh, yup, can do | 23:20 |
fungi | clarkb: this week is probably a good time for 763473 as well? | 23:24 |
clarkb | fungi: I think we took care of that when we removed the extra images we were building | 23:25 |
clarkb | double check against the current system-config state | 23:25 |
fungi | oh, good call, i forgot we merged that already | 23:25 |
clarkb | we should only be building 3.2 now off the regular 3.2 branch | 23:25 |
clarkb | sorry I should've called out your change when that was happening but we had a few things in the fire at the time | 23:25 |
fungi | yeah, no sweat, just abandoned it with a mention of the change which addressed it | 23:28 |
TheJulia | quick question, w/r/t new gerrit. Any bulk way to nuke draft comments? | 23:58 |