Saturday, 2021-10-16

02:11 *** walshh__ is now known as walshh_
<opendevreview> Cody Lee Cochran proposed opendev/bindep master: feat: Add test support for Manjaro Linux
<opendevreview> Merged opendev/bindep master: feat: Add support for Manjaro Linux
<opendevreview> Merged opendev/bindep master: feat: Add test support for Manjaro Linux
17:10 <yoctozepto> hmm, any idea why regular changes enter the release-approval pipeline?
17:42 <Clark[m]> yoctozepto: if you remove your Killa filter it becomes more clear. There is a microstack change that has plugged up the queue somehow. Basically all changes for projects in the tenant are evaluated by every pipeline if they match the trigger criteria. But then in most cases they are gone as quickly as they entered because they don't have jobs configured for the pipeline.
17:43 <Clark[m]> Heh kolla got autocorrected to Killa
17:44 <Clark[m]> The microstack change doesn't touch zuul config. I suspect some sort of zuul issue. I'm not able to debug more than that right now
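[Editor's note: the behavior Clark describes can be sketched as a Zuul pipeline definition. This is a hypothetical reconstruction, not the actual openstack tenant config; the trigger event and reporter details are assumptions.]

```yaml
# Hypothetical sketch of an independent Zuul pipeline. Every change in the
# tenant that matches the trigger criteria is enqueued in the pipeline; any
# change with no jobs configured for it is dequeued as quickly as it entered.
- pipeline:
    name: release-approval
    manager: independent
    trigger:
      gerrit:
        - event: comment-added
    success:
      gerrit: {}
    failure:
      gerrit: {}
```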
17:44 <yoctozepto> thanks Clark[m], now I understand
17:44 <yoctozepto> it's simply that I have never observed it
18:08 <fungi> i'll see if i can spot what's going on with it. if i can't and it's keeping things from generally working (is this what frickler saw earlier in the week?) then i'll make sure to get two thread dumps and the yappi stats before any restart
18:12 <fungi> so far it appears to only be a pileup in the release-approval pipeline for the openstack tenant, which thankfully is just a convenience mechanism for one job which runs on comments to changes in openstack/release in order to check whether the ptl or release liaison for a particular team has positively acknowledged the change
18:13 <fungi> i don't think that job even uses any nodes, just needs executors, so nothing there is waiting on node requests
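[Editor's note: a job that "just needs executors" corresponds in Zuul to a job with an empty nodeset, which runs directly on an executor and never issues a nodepool node request. A minimal hypothetical sketch; the job name and project stanza are assumptions, not the real config.]

```yaml
# Hypothetical sketch: an empty nodeset means the job runs on a Zuul
# executor itself, so no node request is ever sent to nodepool.
- job:
    name: check-release-approval  # assumed job name
    nodeset:
      nodes: []

- project:
    name: openstack/release  # repo name as mentioned in the log above
    release-approval:
      jobs:
        - check-release-approval
```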
18:15 <fungi> merger queue is flatlined at 0 too so doesn't look like it's waiting for merges
18:15 <fungi> we have a full complement of executors and mergers, and all the zk stats are steady, not increasing noticeably
18:32 <fungi> i think it's event 868ac96e71954c0b9edc2ee00d0fcd34 in the scheduler debug log
<fungi> i think this could be the reason?
18:39 <fungi> i brought it up in the zuul matrix channel
18:41 <fungi> i think it's safe to leave it in this state for a while in case it helps further identify any bug responsible, but i need to step away for a bit
21:51 <fungi> also, i see we have 60 nodes which nodepool thinks are locked in-use for jobs since roughly 2d15h ago
21:51 <fungi> around the same time stuff got "stuck" in zuul leading to a restart of the scheduler
21:51 <fungi> so maybe fallout from zk connectivity disruption also?
