fungi | recommends is "this should be installed except in unusual circumstances" | 00:00 |
---|---|---|
clarkb | right, but not always, and for dnssec to work you really want this installed. it should be a proper dep imo | 00:00 |
fungi | if it were, then it might conflict with unbound-anchor or something, i dunno what led to that change | 00:00 |
fungi | though unbound could have certainly done a depends: (dns-root-data|unbound-anchor) in that case | 00:01 |
fungi | or both could provide a common virtual package name or something | 00:01 |
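For readers not steeped in Debian packaging, the two options fungi sketches above look roughly like this in a debian/control file (a hedged illustration using the package names from this discussion, not unbound's actual packaging; the virtual package name is made up):

```
# Option 1: an alternative dependency; either package satisfies it
Package: unbound
Depends: dns-root-data | unbound-anchor

# Option 2: both packages Provide a common virtual name
# ("dns-trust-anchor" is a hypothetical example, not a real package)
Package: dns-root-data
Provides: dns-trust-anchor

Package: unbound
Depends: dns-trust-anchor
```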
opendevreview | Tony Breeds proposed opendev/system-config master: [base/unbound] Install dns-root-data package https://review.opendev.org/c/opendev/system-config/+/925311 | 00:05 |
opendevreview | Merged opendev/system-config master: [base/unbound] Install dns-root-data package https://review.opendev.org/c/opendev/system-config/+/925311 | 01:12 |
fungi | deploy for ^ failed | 01:28 |
fungi | translate01.openstack.org hit a dpkg lock collision during the base deploy job | 01:28 |
tonyb | Whom do I contact at OSUOSL to up the quota for the opendevci project so I can test/add a new mirror server for that cloud/region? | 02:14 |
tonyb | fungi: will the next prod update recover translate01? | 02:15 |
fungi | tonyb: Ramereth[m] but he's on Pacific (Americas) time, so it might be easier for you to sync up next week since it's already your friday (or am i screwing up my timezone math again?) | 02:18 |
tonyb | It's still Thursday | 02:19 |
tonyb | (here) | 02:19 |
fungi | ah, okay, so your friday morning will be his thursday afternoon. right | 02:19 |
tonyb | Yup. | 02:19 |
fungi | my fault for still being on the computer at this time of night ;) | 02:20 |
fungi | anyway, as to your earlier question, we rerun the base deploy in the daily periodic pipeline, so it probably enqueued about half an hour ago (2am utc) | 02:30 |
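For context, a daily periodic pipeline like the one fungi mentions is driven by Zuul's timer trigger; a minimal sketch per the Zuul docs (the real opendev definition lives in project-config and may differ):

```yaml
# Minimal sketch of a timer-triggered periodic pipeline
# (illustrative, not opendev's actual definition)
- pipeline:
    name: periodic
    manager: independent
    trigger:
      timer:
        - time: '0 2 * * *'  # enqueue daily at 02:00 UTC
```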
tonyb | fungi: cool beans. Thanks | 02:31 |
fungi | any time | 02:42 |
opendevreview | Tony Breeds proposed opendev/zone-opendev.org master: Add DNS for new Vexxhost mirrors https://review.opendev.org/c/opendev/zone-opendev.org/+/925437 | 04:01 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add inventory entries for new Vexxhost mirrors https://review.opendev.org/c/opendev/system-config/+/925438 | 04:10 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add inventory entries for new Vexxhost mirrors https://review.opendev.org/c/opendev/system-config/+/925438 | 04:36 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add inventory entries for new Vexxhost mirrors https://review.opendev.org/c/opendev/system-config/+/925438 | 08:33 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add Noble nodes to system-config-run testing https://review.opendev.org/c/opendev/system-config/+/925447 | 08:33 |
tonyb | Can I get some reviews on: https://review.opendev.org/q/(topic:ansible-devel+OR+topic:noble-mirror)+is:open | 08:36 |
tonyb | I note that https://review.opendev.org/c/opendev/system-config/+/924012 is marked WIP and failing, but I'd appreciate feedback on the idea of testing ARA master in the ansible-devel job | 08:36 |
ykarel | Hi, does Depends-On on github.com pull requests not work / is it not configured? I was trying it in https://review.opendev.org/c/openstack/neutron/+/925446 | 08:40 |
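(For context: Zuul picks up cross-repo dependencies from a `Depends-On:` footer in the commit message; for GitHub targets it is the pull request URL, e.g. the PR examined later in this log:)

```
Depends-On: https://github.com/sqlalchemy/sqlalchemy/pull/11639
```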
opendevreview | Attila Fazekas proposed openstack/diskimage-builder master: Not duplicate grub defaults https://review.opendev.org/c/openstack/diskimage-builder/+/925451 | 09:27 |
tonyb | ykarel: It ... should, I can't see anything wrong with that change / the general neutron setup | 09:28 |
ykarel | tonyb, what do the zuul logs say on why it's not triggering? | 09:30 |
tonyb | Still checking | 09:30 |
ykarel | also to note, this depends-on had impacted our downstream zuul too, as sqlalchemy/sqlalchemy was not added in the downstream zuul tenant config | 09:44 |
tonyb | ykarel: I'm not having much luck determining why zuul isn't enacting the Depends-On. | 09:55 |
tonyb | ykarel: Like I said it all looks right to me | 09:56 |
tonyb | ykarel: I'm still looking | 09:56 |
ykarel | tonyb, reading https://opendev.org/zuul/zuul/src/branch/master/doc/source/drivers/github.rst iiuc it mentions api_token needs to be defined to get Depends-On working | 09:57 |
ykarel | not sure if it's related here and if that config is set already | 09:58 |
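The zuul.conf connection block those docs describe looks roughly like this (a sketch following the Zuul github driver documentation; opendev's actual connection settings aren't shown here):

```ini
# Sketch of a github connection per the Zuul docs;
# the token value is a placeholder
[connection github]
driver=github
server=github.com
api_token=XXXXXXXXXXXXXXXXXXXX
```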
tonyb | zuul is pulling master (or main) into src/github.com/sqlalchemy/sqlalchemy so I'm pretty sure that the github driver/connection is correctly set up | 10:00 |
ykarel | yes, our jobs are working fine with sqlalchemy main | 10:00 |
tonyb | ykarel: Sorry I am out of my depth here. I can work with other infra-roots to debug this and get back to you. | 10:08 |
ykarel | thx tonyb for looking. for now i have removed the depends-on as it was impacting our downstream zuul, but i think the logs should have the required bits; patchsets 1, 2 and 4 had the depends-on added | 10:10 |
tonyb | ykarel: I'm sure someone more familiar with zuul will know what to look for. | 10:13 |
tonyb | based on what I know it basically looks like zuul isn't doing anything to indicate that it saw the "Depends-On" at all | 10:14 |
frickler | tonyb: ykarel: https://paste.opendev.org/show/bNlEBwvyoGpipnvMCQp9/ not sure though if that's really an issue with our token or with the permissions for the sqla repo. in general, working with github dependencies has been flaky at best, so I'd advise avoiding it if possible | 10:39 |
frickler | ykarel: I'm also confused by your comment about downstream zuul, do you have a zuul instance that runs jobs triggered from opendev gerrit, but doesn't report back there? | 10:40 |
tonyb | Gah! bitten by python tracebacks not showing up in grep | 10:41 |
tonyb | frickler: We (ykarel and I are both at Red Hat) have an internal zuul (several, actually). I believe that comment was meant to reference them not liking the Depends-On for config reasons; it's not actionable by anyone else here | 10:42 |
ykarel | yes ^ right, zuul stopped triggering when it identified a depends-on from an unknown project (unknown as in not defined in the downstream zuul tenant config) | 10:44 |
ykarel | frickler, does the flaky part of github dependencies only apply when reporting to Gerrit? we're using zuul with github projects and there Depends-On works fine | 10:45 |
tonyb | frickler: Actually that traceback isn't from the same PR as the one ykarel mentioned. | 10:45 |
tonyb | ykarel: Nope gerrit doesn't know anything about Depends-On | 10:46 |
ykarel | tonyb, ack, I was just trying to understand the flaky part mentioned above, as we haven't seen that in general with github projects | 10:48 |
tonyb | frickler: Now that I know what to look for, the problem is the same. | 10:49 |
ykarel | tonyb, i also used PR 11584 once to check the behavior with closed PRs | 10:50 |
ykarel | the other one was with the open PR 11639 | 10:50 |
tonyb | ykarel: Ah okay | 10:51 |
ykarel | can you check if the error was the same even for the open PR 11639? | 10:51 |
tonyb | ykarel: Yup it's the same | 10:52 |
tonyb | https://paste.opendev.org/raw/bEVvubAqJdKWMU62wSqM/ | 10:52 |
ykarel | :( seems we need some token adjustment | 10:54 |
ykarel | thx tonyb++ frickler++ for checking | 10:54 |
tonyb | ykarel: frickler: We can look at the token that's being used and see if it needs updating, or whether we need some buy-in from sqlalchemy | 10:54 |
ykarel | tonyb, yes if it's possible to update the token to include scope: public_repo | 11:11 |
frickler | checking my logs, I updated our github token on 2024-01-20 with a lifetime of 1y, so it should still be valid. not sure about the scoping, but it would be a bit of a surprise if that issue was only showing up now | 11:18 |
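One low-risk way to check the scoping frickler mentions: GitHub reports a classic token's granted scopes in the `X-OAuth-Scopes` response header (a sketch; this only works for classic PATs, as fine-grained tokens don't report scopes this way):

```sh
# Show which scopes a classic GitHub token actually carries
curl -sI -H "Authorization: token $GITHUB_TOKEN" \
    https://api.github.com/user | grep -i '^x-oauth-scopes'
```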
tonyb | It's running a 'canmerge' query, which I kinda expect to fail. I think the api_token we have should generally be able to pull a PR but not merge it back, so I think that error is ok/expected. I do wonder if its failing aborts the rest of the fetch process | 11:21 |
tonyb | I'll check after this meeting | 11:21 |
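For debugging outside production, the kind of mergeability data a canmerge check needs can be probed directly against GitHub's GraphQL API (an illustrative query, not zuul's actual internal one; the PR number is from the discussion above):

```sh
# Illustrative mergeability probe, not zuul's actual canmerge query
curl -s -X POST https://api.github.com/graphql \
    -H "Authorization: bearer $GITHUB_TOKEN" \
    -d '{"query": "{ repository(owner: \"sqlalchemy\", name: \"sqlalchemy\") { pullRequest(number: 11639) { mergeable reviewDecision } } }"}'
```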
opendevreview | Attila Fazekas proposed openstack/diskimage-builder master: Not duplicate grub defaults https://review.opendev.org/c/openstack/diskimage-builder/+/925451 | 12:26 |
fungi | tonyb: the periodic base deploy did succeed, so the translate01.openstack.org dpkg lock error really does seem to have just been a temporary collision (probably ran at the same moment an unattended upgrade was in progress) | 13:00 |
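Should it recur, a quick way to see what is holding the lock at the time (a sketch; these are the standard Debian/Ubuntu dpkg lock paths):

```sh
# Show which process holds the dpkg locks; during a collision this is
# typically unattended-upgrades
sudo fuser -v /var/lib/dpkg/lock-frontend /var/lib/dpkg/lock
pgrep -af unattended-upgrade
```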
opendevreview | James E. Blair proposed zuul/zuul-jobs master: Remove get_md5 parameter from stat module. https://review.opendev.org/c/zuul/zuul-jobs/+/921637 | 14:13 |
opendevreview | Attila Fazekas proposed openstack/diskimage-builder master: Not duplicate grub defaults https://review.opendev.org/c/openstack/diskimage-builder/+/925451 | 14:24 |
opendevreview | Merged zuul/zuul-jobs master: Remove get_md5 parameter from stat module. https://review.opendev.org/c/zuul/zuul-jobs/+/921637 | 14:28 |
fungi | python 3.13.0rc1 was tagged today | 16:46 |
fungi | time to get compiling | 16:47 |
fungi | https://discuss.python.org/t/59703 | 16:48 |
opendevreview | Clark Boylan proposed openstack/project-config master: Rotate the wheel kerberos keytab for afs operations https://review.opendev.org/c/openstack/project-config/+/925511 | 16:59 |
opendevreview | Merged openstack/project-config master: Rotate the wheel kerberos keytab for afs operations https://review.opendev.org/c/openstack/project-config/+/925511 | 17:21 |
clarkb | tonyb: I've reviewed the stack. I have a concern on https://review.opendev.org/c/opendev/system-config/+/923686 in that we're crossing the streams a bit too much between production and CI, and we probably want to avoid doing that. But if others have differing opinions I can probably be swayed | 18:42 |
clarkb | everything else looks fine, and I think what 923686 aims to accomplish is doable; we just need to be careful how we do it | 18:42 |
clarkb | I think the wheel jobs that use the keytab only run in the periodic queue so we won't have results on 925511 until after those run later today/early tomorrow | 19:41 |
fungi | yeah, i tend to be around at 0200z when they get enqueued, but may be asleep by the time they actually run | 19:43 |
clarkb | https://review.opendev.org/c/opendev/base-jobs/+/924786/ and child are a stack I've got hanging around to update our base jobs for the new zuul cleanup handling. Not really urgent but a nice cleanup I think | 19:51 |
clarkb | the first change is to base-test, and we can check things look good before landing the followup which affects base | 19:52 |
fungi | approved the first one, let's see how it fares | 19:56 |
opendevreview | Merged opendev/base-jobs master: Use modern cleanup tooling in base-test https://review.opendev.org/c/opendev/base-jobs/+/924786 | 19:59 |
clarkb | I think I have a change somewhere to recheck and test that | 20:00 |
clarkb | https://review.opendev.org/c/zuul/zuul-jobs/+/680178 has been rechecked | 20:01 |
clarkb | https://zuul.opendev.org/t/zuul/stream/759a1ac61305446c831dc2165d2631b8?logfile=console.log (I think you need to watch the live stream or look at executor debug logs to get the full log content) | 20:09 |
clarkb | https://zuul.opendev.org/t/zuul/build/759a1ac61305446c831dc2165d2631b8/log/job-output.txt that did about what I expected it to | 20:10 |
clarkb | corvus: ^ not sure if you want to review those results, you were helpful in reviewing earlier iterations of that stack | 20:11 |
opendevreview | James E. Blair proposed zuul/zuul-jobs master: Update test-prepare-workspace-git to use a module https://review.opendev.org/c/zuul/zuul-jobs/+/925539 | 21:49 |
opendevreview | James E. Blair proposed zuul/zuul-jobs master: Synchronize test-prepare-workspace-git to prepare-workspace-git https://review.opendev.org/c/zuul/zuul-jobs/+/925540 | 21:49 |
tonyb | clarkb: Thanks for the reviews. I'll look at better ways to accomplish the goal. Originally I did not have install-ansible using ensure-python. Then I thought it was better to treat roles repos like a library and re-use them more. I see that's less true in this case due to the differences in CI vs 'long lived bastion host' | 21:57 |
corvus | clarkb: i left a comment on https://review.opendev.org/c/opendev/base-jobs/+/924786 even though it was already merged. i think it's worth doing that, though. | 21:58 |
tonyb | Last night (for me) ykarel reported an issue with a github depends-on not triggering in a neutron job (https://meetings.opendev.org/irclogs/%23opendev/%23opendev.2024-08-01.log.html#t2024-08-01T08:40:56). I said I'd follow up with you all as I don't understand the details of that interaction. | 22:08 |
tonyb | based on: https://paste.opendev.org/show/bCcK4PxPOdgeA8vVjGag/ it looks like there may be new requirements for our token? but it also looks like perhaps the canmerge query is "trying too hard" and perhaps it could have stopped looking for details somewhere around L11 of the paste above. | 22:10 |
tonyb | Thoughts or pointers for how to debug (probably not in production) would be good. | 22:11 |
tonyb | If this isn't OpenDev specific I'm happy to move to the zuul matrix | 22:11 |
clarkb | tonyb: if there are new token requirements it wouldn't surprise me. I think hitting issues like that when github mysteriously changes things was one of the things that made the kata zuul stuff difficult | 22:27 |
clarkb | but ya that error seems to imply this is the case | 22:27 |
clarkb | corvus: the first part of your comment makes sense but for the second I wanted to note we're already getting these in the archived logs | 22:29 |
clarkb | I'm indifferent as to whether or not that should be a cleanup or post set of tasks | 22:29 |
tonyb | clarkb: any idea if those requirements are documented anywhere for us to verify? | 22:30 |
clarkb | tonyb: no, I don't think they are (and github changes them without a changelog, last I remember) | 22:31 |
clarkb | corvus: I think we had these tasks in a cleanup playbook because when the disk fills, things can fail in weird ways, and cleanup ensures they always run? | 22:34 |
clarkb | corvus: so maybe we keep it as a cleanup but separate the two cleanups? | 22:34 |
tonyb | clarkb: Thanks I'll do some research/testing. | 22:39 |
fungi | tonyb: were you going to try to get in touch with Ramereth[m] before eow? or wait for next week? | 22:41 |
tonyb | fungi: Oh, I pinged him yesterday | 22:42 |
fungi | aha, okay | 22:42 |
tonyb | if he doesn't reply (via IRC), I'll email him next week | 22:43 |
fungi | makes sense, there's no rush on this | 22:43 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add Noble nodes to system-config-run testing https://review.opendev.org/c/opendev/system-config/+/925447 | 22:50 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add inventory entries for new Vexxhost mirrors https://review.opendev.org/c/opendev/system-config/+/925438 | 22:50 |
opendevreview | Clark Boylan proposed opendev/base-jobs master: Update cleanups for base and base-minimal https://review.opendev.org/c/opendev/base-jobs/+/924787 | 22:58 |
opendevreview | Clark Boylan proposed opendev/base-jobs master: More improvement to base-test cleanup and post playbooks https://review.opendev.org/c/opendev/base-jobs/+/925549 | 22:58 |
clarkb | corvus: ^ that is what I ended up with based on your comment | 22:58 |
clarkb | now just need to remember to not let base and base-test sit diverged for too long | 22:59 |
corvus | clarkb: yeah, if there's something that would necessitate that, then 2 cleanups is fine. but maybe that's not necessary now? i think the only difference should be "runs on job abort" and we really don't need to do that for these tasks, so i like the idea of trying 925549 until we find out why we need it in cleanup | 23:02 |
clarkb | corvus: cool that should be what is in the change now | 23:04 |
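For readers following along: the "modern cleanup tooling" under discussion is, as I understand the newer Zuul semantics, a post-run playbook flagged for cleanup instead of the old separate cleanup-run section; a hedged sketch with hypothetical playbook paths:

```yaml
# Hedged sketch of the newer Zuul cleanup form: a post-run playbook
# carrying cleanup: true also runs when the job is aborted
- job:
    name: base-test
    post-run:
      - playbooks/base-test/post.yaml
      - name: playbooks/base-test/cleanup.yaml
        cleanup: true
```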