*** walshh__ is now known as walshh_ | 02:11 | |
opendevreview | Cody Lee Cochran proposed opendev/bindep master: feat: Add test support for Manjaro Linux https://review.opendev.org/c/opendev/bindep/+/814246 | 02:59 |
opendevreview | Merged opendev/bindep master: feat: Add support for Manjaro Linux https://review.opendev.org/c/opendev/bindep/+/814171 | 04:10 |
opendevreview | Merged opendev/bindep master: feat: Add test support for Manjaro Linux https://review.opendev.org/c/opendev/bindep/+/814246 | 05:27 |
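For context on the Manjaro changes above: bindep picks packages through platform profiles that are derived from the Python `distro` library. A minimal sketch of checking what identifiers a Manjaro host reports (the assumption here is that bindep keys its profile off `distro.id()`/`distro.like()`; the `manjaro`/`arch` values in the comments are illustrative, not confirmed from the merged change):

```python
# Sketch: inspect the platform identifiers a host reports, which is the
# information bindep-style platform profiles are derived from.
# Assumption: 'manjaro'/'arch' are what the distro library returns on
# Manjaro Linux; they are illustrative, not confirmed here.
import distro

def platform_profile_hints():
    """Return the distro id and its 'like' chain, e.g. ('manjaro', ['arch'])."""
    return distro.id(), distro.like().split()

if __name__ == "__main__":
    dist_id, like_chain = platform_profile_hints()
    print(f"distro id: {dist_id}")        # e.g. 'manjaro' on Manjaro Linux
    print(f"derived from: {like_chain}")  # e.g. ['arch'] (pacman-based)
```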
yoctozepto | hmm, any idea why regular changes enter the release-approval pipeline? | 17:10 |
yoctozepto | https://pasteboard.co/dOWVcN0bLcan.png | 17:11 |
Clark[m] | yoctozepto: if you remove your Killa filter it becomes more clear. There is a microstack change that has plugged up the queue somehow. Basically all changes for projects in the tenant are evaluated by every pipeline if they match the trigger criteria. But then in most cases they are gone as quickly as they entered because they don't have jobs configured for the pipeline. | 17:42 |
Clark[m] | Heh kolla got autocorrected to Killa | 17:43 |
Clark[m] | The microstack change doesn't touch zuul config. I suspect some sort of zuul issue. I'm not able to debug more than that right now | 17:44 |
yoctozepto | thanks Clark[m], now I understand | 17:44 |
yoctozepto | it's simply that I have never observed it | 17:44 |
yoctozepto | :-) | 17:44 |
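What Clark describes can be confirmed from the Zuul web status endpoint, which lists every item currently sitting in each pipeline of a tenant. A rough sketch follows; the JSON field names (`pipelines`, `change_queues`, `heads`) are an assumption about the status API layout and should be verified against a live response:

```python
# Sketch: count how many items each pipeline currently holds for a tenant,
# using Zuul's public status endpoint. The field names used below
# ("pipelines", "change_queues", "heads") are assumed, not guaranteed.
import json
import urllib.request

STATUS_URL = "https://zuul.opendev.org/api/tenant/openstack/status"

def pipeline_item_counts(url=STATUS_URL):
    with urllib.request.urlopen(url, timeout=30) as resp:
        status = json.load(resp)
    counts = {}
    for pipeline in status.get("pipelines", []):
        items = 0
        for queue in pipeline.get("change_queues", []):
            for head in queue.get("heads", []):
                items += len(head)
        counts[pipeline["name"]] = items
    return counts

if __name__ == "__main__":
    for name, count in pipeline_item_counts().items():
        print(f"{name}: {count} queued item(s)")
```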
fungi | i'll see if i can spot what's going on with it. if i can't and it's keeping things from generally working (is this what frickler saw earlier in the week?) then i'll make sure to get two thread dumps and the yappi stats before any restart | 18:08 |
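On the thread dumps fungi mentions: the same information can be captured generically from any Python process you control, and the yappi stats would come from yappi's own `yappi.start()`/`yappi.get_func_stats()` calls. A minimal standard-library sketch of what a "thread dump" contains (this is a generic illustration, not zuul's built-in dump handler):

```python
# Sketch: capture a stack trace for every live thread in the current
# Python process, the kind of "thread dump" useful before a restart.
# This is a generic illustration, not zuul's own dump mechanism.
import sys
import threading
import traceback

def dump_threads(out=sys.stderr):
    frames = sys._current_frames()  # maps thread id -> current stack frame
    names = {t.ident: t.name for t in threading.enumerate()}
    for ident, frame in frames.items():
        print(f"--- thread {names.get(ident, '?')} ({ident}) ---", file=out)
        traceback.print_stack(frame, file=out)

if __name__ == "__main__":
    dump_threads()
```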
fungi | so far it appears to only be a pileup in the release-approval pipeline for the openstack tenant, which thankfully is just a convenience mechanism for one job that runs on comments to changes in openstack/release in order to check whether the ptl or release liaison for a particular team has positively acknowledged the change | 18:12 |
fungi | i don't think that job even uses any nodes, just needs executors, so nothing there is waiting on node requests | 18:13 |
fungi | merger queue is flatlined at 0 too so doesn't look like it's waiting for merges | 18:15 |
fungi | we have a full complement of executors and mergers, and all the zk stats are steady, not increasing noticeably | 18:15 |
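The "zk stats are steady" check can be reproduced directly against a ZooKeeper server with the `mntr` four-letter command (it must be in the server's allowed command list). A small sketch, with the host/port as placeholders:

```python
# Sketch: poll ZooKeeper's "mntr" four-letter command and print a few
# counters that indicate whether the cluster is backing up.
# ZK_HOST/ZK_PORT are placeholders for a real ZooKeeper server address.
import socket

ZK_HOST, ZK_PORT = "zk01.example.org", 2181

def zk_mntr(host=ZK_HOST, port=ZK_PORT):
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(b"mntr")
        sock.shutdown(socket.SHUT_WR)
        data = b""
        while chunk := sock.recv(4096):
            data += chunk
    stats = {}
    for line in data.decode().splitlines():
        key, _, value = line.partition("\t")
        stats[key] = value
    return stats

if __name__ == "__main__":
    stats = zk_mntr()
    for key in ("zk_outstanding_requests", "zk_num_alive_connections",
                "zk_znode_count"):
        print(key, stats.get(key))
```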
fungi | i think it's event 868ac96e71954c0b9edc2ee00d0fcd34 in the scheduler debug log | 18:32 |
fungi | i think this could be the reason? https://paste.opendev.org/show/810047 | 18:35 |
fungi | i brought it up in the zuul matrix channel | 18:39 |
fungi | i think it's safe to leave it in this state for a while in case it helps further identify any bug responsible, but i need to step away for a bit | 18:41 |
fungi | also, i see we have 60 nodes which nodepool thinks are locked in-use for jobs since roughly 2d15h ago | 21:51 |
fungi | around the same time stuff got "stuck" in zuul leading to a restart of the scheduler | 21:51 |
fungi | so maybe fallout from zk connectivity disruption also? | 21:51 |
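One way to cross-check the 60 stuck nodes is to read the node records straight out of ZooKeeper and flag in-use nodes whose state hasn't changed in a long time. A sketch using kazoo; the `/nodepool/nodes` path and the `state`/`state_time` fields are assumptions about nodepool's znode layout, and the ZooKeeper address is a placeholder:

```python
# Sketch: flag nodepool nodes that have been "in-use" for an unusually long
# time. Assumptions: node records live under /nodepool/nodes, each znode
# holds JSON with "state" and "state_time" fields, and the ZooKeeper
# address below is a placeholder.
import json
import time

from kazoo.client import KazooClient

ZK_HOSTS = "zk01.example.org:2181"
NODES_PATH = "/nodepool/nodes"
MAX_AGE = 24 * 3600  # flag anything in-use for more than a day

def stale_in_use_nodes():
    client = KazooClient(hosts=ZK_HOSTS)
    client.start()
    try:
        stale = []
        for node_id in client.get_children(NODES_PATH):
            data, _ = client.get(f"{NODES_PATH}/{node_id}")
            record = json.loads(data)
            age = time.time() - record.get("state_time", time.time())
            if record.get("state") == "in-use" and age > MAX_AGE:
                stale.append((node_id, age / 3600))
        return stale
    finally:
        client.stop()

if __name__ == "__main__":
    for node_id, hours in stale_in_use_nodes():
        print(f"node {node_id} in-use for ~{hours:.1f}h")
```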