*** dasm is now known as Guest7895 | 00:38 | |
*** ykarel_ is now known as ykarel | 07:46 | |
frickler | heads up: a git update in ubuntu likely causes failures like https://zuul.opendev.org/t/openstack/build/ec8d4d066c9941f7a2f270a811bd7e9d | 12:42 |
fungi | for some reason i thought that had already been addressed years back in some jobs | 13:05 |
jrosser | this is new / different https://github.com/git/git/commit/f4aa8c8bb11dae6e769cd930565173808cbb69c8 | 13:06 |
fungi | oh, right it was the older ensure_valid_ownership() check we dealt with in the past | 13:13 |
fungi | where we cloned as one user but then accessed contents from that clone as another user | 13:14 |
fungi | while this is cloning from a repo where the original is owned by a different user | 13:15 |
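For context, the new check jrosser links above fires when the *source* of a local clone is owned by a different user, not just when operating inside such a repo. A minimal sketch of the failure mode and the per-path opt-out (paths and usernames here are illustrative, not taken from the actual jobs):

```console
# As the 'zuul' user, cloning from a root-owned cache (hypothetical path):
$ git clone /opt/git/opendev.org/opendev/system-config workspace/system-config
# With the patched git this aborts with an ownership error unless the source
# path has been marked trusted, e.g.:
$ git config --global --add safe.directory /opt/git/opendev.org/opendev/system-config
```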
JayF | What's the proper place to get support for bitergia? adamcarthur5 has updated all his information in openstack/opendev profile, including adding associated emails, but is still showing up as "Unknown" organization | 14:27 |
JayF | I can fetch screenshots and all for proof and share privately if it's helpful | 14:28 |
JayF | he is also showing up as author name "Sharpz7" despite that only being the email/name he used for some commits; so I wonder if there's some kinda deduplication issue | 14:29 |
JayF | (searching on Author_name: Sharpz7 shows his commits, unattributed to G-Research) | 14:29 |
clarkb | JayF: what do you mean by an openstack/opendev profile? we don't really have opendev profiles so want to be clear on what got updated | 14:30 |
JayF | there's the one form on openstack.org which has some info, and some other info is piped in via the opendev provider | 14:31 |
JayF | https://www.openstack.org/profile/ and https://id.openinfra.dev/accounts/user/profile | 14:31 |
JayF | ah, openinfra.dev not opendev | 14:32 |
JayF | I guess this means I'm in the wrong venue :D | 14:32 |
clarkb | ack thanks, ya those would be the places I would expect updates to need to happen through. They are also run by the foundation as is the relationship with bitergia. We (fungi and myself) can pass along these questions though or I think you can reach out directly via support@someemail address or similar (I would need to dig that address up) | 14:32 |
JayF | email would be ideal, as I'm outta normal tz so async is wonderful | 14:33 |
clarkb | I have to do a school run now but can figure that out after | 14:34 |
JayF | if you wanna just email it to me at jay at gr-oss.io that'll be fine, no urgency at all | 14:34 |
JayF | or here in a DM | 14:34 |
ildikov | JayF: Hi, thank you for bringing up the Bitergia dashboard issue. As clarkb mentioned, we're still working on an official channel to report dashboard issues to us. In the meantime, to move this one along, can you please send me a summary of the issue in email (ildiko at openinfra dot dev)? Please feel free to attach screenshots, etc. | 15:23 |
ildikov | JayF: if you haven't sent the email yet, can you please send it to community@openinfra.dev instead? And if there are any other issues with the dashboard in the future, please use that email address to report them. Thank you! | 15:48 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add golang based docker compose tool. https://review.opendev.org/c/opendev/system-config/+/920760 | 15:49 |
opendevreview | Tony Breeds proposed opendev/zone-opendev.org master: Remove old meetpad and jvb servers https://review.opendev.org/c/opendev/zone-opendev.org/+/920761 | 15:59 |
opendevreview | Tony Breeds proposed opendev/system-config master: Remove old meetpad and jvb servers https://review.opendev.org/c/opendev/system-config/+/920762 | 16:01 |
opendevreview | Clark Boylan proposed opendev/system-config master: Increase the number of mailman3 outgoing runners to 4 https://review.opendev.org/c/opendev/system-config/+/920765 | 17:39 |
clarkb | fungi: ^ fyi I didn't want to forget that we had discussed this briefly | 17:39 |
fungi | thanks! yes i meant to look into that | 17:39 |
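For reference, the runner count lives in mailman core's configuration; a rough sketch of the relevant stanza, assuming the stock section name and that the deployment feeds it in via an extra config file (the file name here is hypothetical, not the actual system-config layout):

```ini
# mailman-extra.cfg (assumed location) -- run four outgoing runners.
# mailman requires per-runner instance counts to be a power of two.
[runner.out]
instances: 4
```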
opendevreview | Albin Vass proposed zuul/zuul-jobs master: prepare-workspace-git: fix loop logic for older ansible versions https://review.opendev.org/c/zuul/zuul-jobs/+/920769 | 18:58 |
clarkb | tonyb: I did leave a couple small comments on the docker compose change. Testing seemed to look good for that one though, which is a great sign | 19:46 |
clarkb | I need to run an errand so it may be a bit before I can answer any questions related to that though | 19:47 |
tonyb | clarkb: all good. I have research to do to answer your points anyway | 19:56 |
tonyb | Actually going to finish early. I have a local mediawiki working with OpenID (against LP); all the extensions load, though I haven't verified they work. | 20:09 |
tonyb | Even though I'm sticking with the same version as the current wiki, I can't get the skin working, which is a little strange but also no huge loss at this stage | 20:10 |
tonyb | I'm working on a rough pass at getting the inner (MW container) and outer (VM) apache configs to play ball and keep the same URLs | 20:11 |
tonyb | All in all reasonably solid progress. | 20:11 |
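To illustrate the inner/outer arrangement tonyb describes: the VM's apache terminates the public URLs and proxies to the apache inside the MediaWiki container. A minimal sketch, assuming the container listens on localhost:8080 and the usual /wiki and /w paths (both assumptions; TLS and other vhost directives omitted):

```apache
# Outer (VM) vhost -- proxy the wiki paths to the MediaWiki container
<VirtualHost *:443>
    ServerName wiki.openstack.org
    ProxyPreserveHost On
    ProxyPass        /w/    http://127.0.0.1:8080/w/
    ProxyPassReverse /w/    http://127.0.0.1:8080/w/
    ProxyPass        /wiki/ http://127.0.0.1:8080/wiki/
    ProxyPassReverse /wiki/ http://127.0.0.1:8080/wiki/
</VirtualHost>
```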
fungi | sounds awesome, thanks for picking that up! | 20:12 |
tonyb | Oh and I did do the linaro cert: | 20:38 |
tonyb | tony@thor:~$ openssl s_client -connect openinfraci.linaro.cloud:8004 < /dev/null |& grep -i before | 20:38 |
tonyb | v:NotBefore: May 29 19:24:40 2024 GMT; NotAfter: Aug 27 19:24:39 2024 GMT | 20:38 |
tonyb | v:NotBefore: Sep 4 00:00:00 2020 GMT; NotAfter: Sep 15 16:00:00 2025 GMT | 20:38 |
tonyb | tony@thor:~$ | 20:38 |
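A slightly tidier variant of the same check, printing just the served certificate's validity window:

```console
$ openssl s_client -connect openinfraci.linaro.cloud:8004 </dev/null 2>/dev/null \
    | openssl x509 -noout -dates
```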
fungi | thanks tonyb! i guess we should expect to not get a reminder about that later today/tonight | 20:42 |
clarkb | ok back | 21:00 |
clarkb | thank you for rotating the cert | 21:01 |
clarkb | there is a lot of git permissions fallout on the zuul dashboard. I guess I was sort of ignoring that, but now am wondering if I need to look closer | 21:02 |
clarkb | it looks like it only complains in some cases? do we know why? | 21:02 |
clarkb | cc corvus this may generally present a problem for prepare-workspace-git as well though I'm not sure yet: https://zuul.opendev.org/t/openstack/build/a10312d659ee4c2c85927f83224fff96/console | 21:03 |
corvus | clarkb: thanks, i'm looking but i don't know any more than you yet | 21:07 |
corvus | (in particular, i don't know what changed) | 21:08 |
clarkb | corvus: I think it only affects ubuntu nodes | 21:08 |
corvus | could it be related to a new image build? | 21:08 |
clarkb | frickler mentioned earlier today that an ubuntu git package update may be causing it | 21:08 |
clarkb | ya | 21:08 |
clarkb | debian nodes seem fine fwiw and that seems to explain why some unittest jobs are ok and others are not (as they use different platforms to get access to different python versions) | 21:08 |
clarkb | https://changelogs.ubuntu.com/changelogs/pool/main/g/git/git_2.34.1-1ubuntu1.11/changelog that seems to show related changelog entries | 21:09 |
corvus | i'm assuming our cache in /opt/git is root-owned/world readable? and if we want to keep it that way, we may need a fix to zuul-jobs; but if we don't, we could chown it zuul? maybe? | 21:10 |
clarkb | https://github.com/git/git/security/advisories/GHSA-xfc6-vwr8-r389 | 21:10 |
clarkb | corvus: ya I think chowning to zuul will likely address it as long as the things breaking are running as zuul | 21:10 |
clarkb | it's possible that may break devstack-like tools in separate ways (but that's probably ok) | 21:11 |
clarkb | (probably ok since they are likely already broken too) | 21:11 |
corvus | this specific case is the global cache being used as an input to prepare-workspace-git; and i would expect devstack not to use that but instead use the results of prepare-workspace-git | 21:11 |
clarkb | corvus: looks like 755 root:root is what things are owned by in the git cache | 21:12 |
clarkb | corvus: oh ya good point | 21:12 |
clarkb | so ya I think we can maybe chown to zuul:zuul and have that error go away | 21:13 |
clarkb | alternatively set the safe directory flag in the images or as part of the jobs | 21:13 |
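Concretely, the two candidate fixes being weighed here look roughly like this (a sketch, not the eventual change; the cache path is /opt/git as discussed above):

```console
# Option 1: hand the cache to the user the jobs run as
sudo chown -R zuul:zuul /opt/git
# Option 2: explicitly trust the cached repos (one safe.directory entry per
# path, or the '*' wildcard discussed a bit further down)
sudo git config --system --add safe.directory /opt/git/opendev.org/opendev/system-config
```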
tonyb | I feel like setting the flag is okay for our use case | 21:14 |
clarkb | I think so, considering the files are owned by root and you have to trust root. | 21:14 |
corvus | yeah... part of me thinks that maybe a zuul-jobs fix would be good because it seems like this should be fine... but then if we did that, would we be making assumptions about other systems that might not be true? maybe we can trust it because it's owned by root, but maybe someone else set up a git repo cache owned by another user and it would be bad to blindly set the safe flag in zuul-jobs | 21:17 |
tonyb | We could set the flag at image build time. | 21:19 |
tonyb | but I do agree that not making assumptions is a good plan. | 21:21 |
clarkb | I want to say we already did something similar to this somewhere too | 21:23 |
clarkb | but that may have been specific cases like devstack, after looking at codesearch | 21:23 |
clarkb | looks like safe.directory can't be done recursively (but it may do globbing?). if we decide to modify our images then changing git cache ownership to zuul:zuul may be the best option there | 21:26 |
corvus | i feel like there's a high probability that a fix in zuul-jobs would be universally okay, but we may not be able to guarantee it, so it may be better for opendev to fix this on images, and then for us to update the zuul-jobs docs for those roles to suggest other people do the same assuming their security posture is the same. | 21:26 |
corvus | (or we could do an opt-in fix for zuul-jobs; "evil-bit: false") | 21:26 |
jrosser | I think you can list as many individual dirs as being safe as you like, or set “*” to disable the check | 21:27 |
jrosser | but not /all/my/repos/* | 21:27 |
clarkb | jrosser: ya that's a problem for us when we have like 2k things to list | 21:28 |
jrosser | indeed I had to address this for osa this morning and settled on “*” | 21:28 |
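The wildcard jrosser settled on simply disables the ownership check for whichever scope it is configured in, e.g.:

```console
# trust every repository path -- defensible on single-use CI nodes,
# questionable anywhere else
$ git config --global --add safe.directory '*'
```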
clarkb | side note: I feel like git clone should be something that can be done safely as the current user and create a new copy of a thing with appropriate permissions. But ya | 21:28 |
clarkb | I can totally understand not making operating in someone else's repo safe. but cloning should be the safe firewall | 21:29 |
corvus | yeah, this feels like the wrong solution in git. | 21:29 |
corvus | okay, i think 2 viable solutions: 1) chown in images; 2) opt-in safe.directory additions in prepare-workspace-git | 21:31 |
clarkb | I'm working on a project-config patch to chown the entire git cache recursively in dib image builds to zuul:zuul | 21:31 |
corvus | ok | 21:32 |
corvus | i can delete the current ubuntu images so we use the fallbacks | 21:33 |
corvus | is it just jammy? or all ubuntu? | 21:33 |
clarkb | corvus: so far I've only confirmed jammy is affected | 21:35 |
clarkb | we may end up immediately trying to rebuild the images once you delete them fwiw. Might also want to pause them | 21:35 |
opendevreview | Clark Boylan proposed openstack/project-config master: Chown the /opt/git repo cache to zuul:zuul https://review.opendev.org/c/openstack/project-config/+/920779 | 21:38 |
clarkb | something like that maybe. Not really tested :/ | 21:38 |
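Roughly, the shape of that change is a chown step during the dib build; the sketch below illustrates the idea and is not the literal content of 920779 (the element, the phase, and the zuul user already existing in the chroot at that point are all assumptions):

```bash
#!/bin/bash
# Hypothetical dib post-install/finalise script: hand the pre-seeded
# /opt/git repo cache to the zuul user so git's ownership check passes.
set -eu
if [ -d /opt/git ]; then
    chown -R zuul:zuul /opt/git
fi
```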
corvus | #status log paused ubuntu-jammy builds and deleted all most-recent ubuntu-jammy images due to git package update | 21:39 |
opendevstatus | corvus: finished logging | 21:39 |
clarkb | if 920779 looks good we can probably merge it then unpause jammy and have it rebuild with that change in place? | 21:40 |
clarkb | slightly concerned this will end up creating some sort of unexpected failure elsewhere, but I'm not changing perms so should still be 755 meaning we're no worse than before in terms of access | 21:41 |
clarkb | looks like focal was also updated: https://changelogs.ubuntu.com/changelogs/pool/main/g/git/git_2.25.1-1ubuntu3.12/changelog | 21:44 |
corvus | anyone else want to review that, or should we +w it now? | 21:45 |
clarkb | fungi and tonyb are probably the only other folks still around at this time of day? | 21:46 |
corvus | #status log deleted the ubuntu-jammy-e57f97d15e0b4878afd4d262b4f8ba75 dib build | 21:47 |
opendevstatus | corvus: finished logging | 21:47 |
clarkb | corvus: I guess we go for it and worst case we pause and delete the new image again? | 21:50 |
clarkb | though I suppose my change will affect all image builds | 21:50 |
clarkb | so the fallout could be much wider reaching than just ubuntu if there is a problem with chowning it for jobs (if the chown itself fails then image builds will fail) | 21:50 |
corvus | image build failure wouldn't be bad; failing everything would be worse, but we're already in a slow motion train wreck anyway | 21:55 |
corvus | i think we should go for it | 21:55 |
clarkb | ya I'm leaning that direction too | 21:56 |
clarkb | I feel like we're unlikely to make anything significantly worse | 21:56 |
clarkb | another option is to pause all image builds and then only unpause jammy once that is landed and see how it does | 21:57 |
clarkb | to limit the blast radius. What do you think of that? | 21:57 |
corvus | yeah, that may be the most conservative thing | 21:59 |
corvus | i'll execute that | 22:00 |
clarkb | corvus: thanks, I was trying to determine if I could show which images are paused and which are not paused but it seems that info isn't in dib-image-list? | 22:01 |
clarkb | oh wait it is there sorry | 22:01 |
corvus | | ubuntu-jammy-e57f97d15e0b4878afd4d262b4f8ba75 | ubuntu-jammy | nb02.opendev.org | qcow2,raw,vhd | paused | 00:00:00:32 | | 22:01 |
clarkb | I wanted to make sure that we didn't unpause any image builds later that should stay paused, but jammy is the only thing currently paused | 22:02 |
corvus | #status log paused all image builds: https://paste.opendev.org/show/bxOJQAnEGwCHmeBs4tiU/ | 22:02 |
opendevstatus | corvus: finished logging | 22:02 |
corvus | that has the list for easy unpausing later | 22:03 |
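For anyone following along, the builder-side commands in play are along these lines (how pausing/unpausing is toggled varies by nodepool version, so that step is described in the log rather than shown):

```console
# list builds and their state (the 'paused' column corvus pasted above)
$ nodepool dib-image-list
# drop a bad build so providers fall back to the previous image
$ nodepool dib-image-delete ubuntu-jammy-e57f97d15e0b4878afd4d262b4f8ba75
# request a fresh build once the chown change is on the builders
$ nodepool image-build ubuntu-jammy
```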
clarkb | we haven't built a new focal image yet since we only build it weekly | 22:03 |
clarkb | it's possible noble is affected. Its newest image is 7 hours old. Unsure on that, but again its impact is limited at this point so I think we're mostly fine to focus on jammy, get that working, then unpause the rest and rebuild noble | 22:04 |
corvus | ++ | 22:04 |
tonyb | FWIW I +2d that change even though it's already got the +A | 22:04 |
corvus | thanks! | 22:06 |
opendevreview | Merged openstack/project-config master: Chown the /opt/git repo cache to zuul:zuul https://review.opendev.org/c/openstack/project-config/+/920779 | 22:08 |
clarkb | that has enqueued the job we need to update the nodepool builders but it is behind the hourly jobs. Hopefully in 10-15 minutes we can unpause jammy and request a rebuild | 22:09 |
clarkb | slightly worried I'm still going to be firefighting this stuff on Friday and will need to push the gerrit upgrade back. But one thing at a time | 22:11 |
fungi | sorry, stepped out to dinner but looking at the git fixes now | 22:20 |
fungi | looks like 920779 already merged | 22:21 |
clarkb | yup and the deploy job for it is still running but it looks like nb01 and nb02 both have the new content on disk | 22:21 |
clarkb | corvus: I think we can unpause jammy and request a build nowish. Did you want to do that or should I? | 22:22 |
clarkb | fungi: assuming you think that plan is reasonable, saying something before we unpause and rebuild is a good idea :) | 22:23 |
fungi | fwiw, i would probably expect all images to end up impacted by this eventually, but with the chown in place that should be insurance against the distros/versions that haven't pushed backports yet | 22:23 |
clarkb | fungi: yup, for sure | 22:23 |
fungi | and yes, unpause and trigger a new build sgtm | 22:23 |
clarkb | I'll go ahead and do it after the deploy job finishes if corvus doesn't say anything before then | 22:24 |
clarkb | and it just finished so I'm proceeding | 22:24 |
clarkb | nb02 is building the image if anyone wants to follow along with the logfile | 22:25 |
clarkb | I may take a break here and then check back in an hour or two. Since we're just waiting otherwise | 22:28 |
corvus | sorry went afk; sounds good | 23:25 |
clarkb | the image build is nearing completion; it is doing all the format conversions now | 23:41 |
fungi | when i was waiting for the noble builds to finish, conversions took far longer than i expected | 23:52 |
clarkb | looks like it took 10 minutes to go from raw to vhd intermediate. So hopefully no longer than 10 minutes to go from intermediate to final vhd | 23:54 |
clarkb | and then we have to wait for uploads so probably at least another hour away from determining if this sufficiently satisfies git's demands | 23:55 |