corvus | main things: 1) different org names possible; 2) copy all architectures; 3) copy all tags | 00:00 |
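The three requirements above (different org names, copy all architectures, copy all tags) can be sketched with skopeo. This is a dry-run illustration: the image names are hypothetical, and the `run` wrapper only prints the commands instead of executing them.

```shell
# Dry-run sketch of mirroring an image between registries under possibly
# different org names, copying every architecture (--all) for each tag.
# SRC/DST are hypothetical examples; replace echo in run() to actually execute.
SRC=docker.io/oldorg/example-image
DST=quay.io/neworg/example-image
run() { echo "+ $*"; }
# Real usage would enumerate tags with: skopeo list-tags docker://$SRC
for tag in latest 3.6 3.7; do
  run skopeo copy --all "docker://$SRC:$tag" "docker://$DST:$tag"
done
```

`skopeo copy --all` copies the whole manifest list (all architectures) rather than just the architecture of the host running the copy.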
---|---|---|
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: build-container-image: directly push with buildx https://review.opendev.org/c/zuul/zuul-jobs/+/878487 | 00:08 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: build-container-image: enhance buildx documentation https://review.opendev.org/c/zuul/zuul-jobs/+/878494 | 00:08 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: test-registry: split docker and container paths https://review.opendev.org/c/zuul/zuul-jobs/+/878497 | 00:47 |
opendevreview | Merged zuul/zuul-jobs master: push-to-intermediate-registry: look for container_images variable https://review.opendev.org/c/zuul/zuul-jobs/+/878492 | 01:15 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: test-registry: split docker and container paths https://review.opendev.org/c/zuul/zuul-jobs/+/878497 | 01:19 |
opendevreview | James E. Blair proposed zuul/zuul-jobs master: Move container-image-promote login block https://review.opendev.org/c/zuul/zuul-jobs/+/878499 | 01:20 |
opendevreview | Merged zuul/zuul-jobs master: Move container-image-promote login block https://review.opendev.org/c/zuul/zuul-jobs/+/878499 | 01:47 |
opendevreview | James E. Blair proposed zuul/zuul-jobs master: Add --insecure-policy to skopeo promote command https://review.opendev.org/c/zuul/zuul-jobs/+/878501 | 01:57 |
opendevreview | Merged zuul/zuul-jobs master: Add --insecure-policy to skopeo promote command https://review.opendev.org/c/zuul/zuul-jobs/+/878501 | 02:39 |
hitesh1409_ | Hi Everyone, | 06:04 |
hitesh1409_ | I've made changes in the python-jenkins library. Please review it and let me know if more changes are needed: https://review.opendev.org/c/jjb/python-jenkins/+/877926 | 06:05 |
frickler | github changed their hostkey, not sure if we have that cached somewhere. for sure it will affect anyone doing git ops via ssh-rsa with them https://github.blog/2023-03-23-we-updated-our-rsa-ssh-host-key/ | 06:42 |
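For anyone hit by the rotation, the fix is to drop the cached RSA key and re-learn the new one. The sketch below uses a scratch known_hosts file so it is self-contained; the key string is a made-up placeholder, not the real old or new GitHub key.

```shell
# Remove a stale github.com entry from a known_hosts file with ssh-keygen.
# A scratch file stands in for ~/.ssh/known_hosts; the key is a placeholder.
KNOWN_HOSTS=$(mktemp)
echo "github.com ssh-rsa AAAAB3NzaC1yc2EAAAAplaceholder" > "$KNOWN_HOSTS"
ssh-keygen -R github.com -f "$KNOWN_HOSTS"
# The current key would then be re-learned on the next connect, or fetched
# explicitly with: ssh-keyscan -t rsa github.com >> "$KNOWN_HOSTS"
```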
*** jpena|off is now known as jpena | 08:41 | |
hashar | hi for jjb/python-jenkins I am in the `core` group and I am wondering how I can get myself added to the `release` group https://review.opendev.org/admin/groups/a4cef9ae068bc26c907a61ed32e9b88d6378b380,members | 13:24 |
hashar | should I reach out to people there so they can add me via the Web UI or are there some YAML files to define group membership | 13:24 |
frickler | hashar: the groups are self-managed by the members, if you cannot contact anyone there, feel free to ping infra-root for help | 13:28 |
hashar | ahh that explains it thank you frickler ! | 13:28 |
NeilHanlon | fyi https://github.blog/2023-03-23-we-updated-our-rsa-ssh-host-key/ | 13:59 |
fungi | just to follow up here as well, but as mentioned in #openstack-jjb i added hashar's gerrit account to the python-jenkins-release group since i am familiar with the history for that project and the current situation | 13:59 |
hashar | thank you fungi. And for record tracking: I have sent an email to the jjb list asking for jjb/python-jenkins to be maintained by the same set of people that maintain jjb/jenkins-job-builder https://groups.google.com/g/jenkins-job-builder/c/ock5Oztp0L4 | 14:43 |
fungi | cool, happy to help | 14:47 |
clarkb | fungi: if you have a moment this morning can you double check the backups moved from gitea01 to gitea09 in a manner that looks good to you then we can probably approve https://review.opendev.org/c/opendev/system-config/+/877301. Once that is approved I'll shutdown services on those hosts and if that doesn't turn up any problems after a bit we can delete the servers | 14:56 |
clarkb | pretty sure you reviewed that before the backups got moved. I did check the backups moved myself already but am hoping for a second set of eyeballs on that. We primarily care about the database iirc | 14:56 |
opendevreview | James E. Blair proposed zuul/zuul-jobs master: WIP: Update promote-container-image to copy from intermediate registry https://review.opendev.org/c/zuul/zuul-jobs/+/878538 | 14:59 |
fungi | sure, taking a peek | 15:13 |
fungi | clarkb: when i try to borg list it tells me "Warning: The repository at location /opt/backups-202010/borg-gitea09/backup was previously located at /opt/backups/borg-gitea09/backup" | 15:38 |
fungi | is that normal? | 15:38 |
fungi | i get the same warning on both backup servers | 15:38 |
clarkb | I think that has to do with how things were mounted? | 15:39 |
clarkb | on the backup server | 15:39 |
fungi | right, following the initial part of https://docs.opendev.org/opendev/system-config/latest/sysadmin.html#restore-from-backup just to check the listings | 15:40 |
clarkb | fungi: thats on the client side though | 15:40 |
clarkb | I think the warning is entirely server side, saying these backups were at location X but are now at location Y (I don't know yet if this is a problem) | 15:41 |
fungi | it says "log into the backup server, sudo su - to switch to the backup user for the host to be restored, run /opt/borg/bin/borg list ./backup" | 15:41 |
clarkb | oh so it does | 15:41 |
clarkb | I think that is why you are getting the warning: it's mounting the backups at location /opt/backups/borg-gitea09/backup under /opt/backups-202010/borg-gitea09/backup, or vice versa | 15:42 |
clarkb | I've only ever done the mounting from the client nodes myself | 15:42 |
clarkb | anyway borg is sensitive about the location of its repos | 15:43 |
clarkb | if things have moved it may complain. This happens to me when dynamic dhcp IPs change with my offsite backups location it warns about the ip change | 15:43 |
fungi | well, if i proceed, it does give me a listing that includes filesystem and mysql backups from 2023-03-24 on the vexxhost backup server and 2023-03-23 on the rackspace server, so lgtm | 15:44 |
clarkb | ok, as far as that specific warning goes I think we need to determine if we've moved things inappropriately | 15:44 |
clarkb | looking at fstab /opt/backups-202010 is where we host the device | 15:46 |
fungi | lrwxrwxrwx 1 root root 20 Oct 8 2020 /opt/backups -> /opt/backups-202010/ | 15:46 |
fungi | i think it's that we're accessing it via symlink | 15:46 |
fungi | so we say "use /opt/backups" and then borg dereferences the symlink and says oh but this is /opt/backups-202010 | 15:46 |
clarkb | ya the homedir is listed as /opt/backups | 15:46 |
clarkb | ya that makes sense. I don't think the warning is an issue then | 15:47 |
clarkb | we might make note of it in the docs | 15:47 |
fungi | so probably not moved incorrectly or anything, just borg being paranoid about symlinks | 15:47 |
clarkb | yup | 15:47 |
clarkb | it's information, then we have to decide if it is a problem or not. Another good example: my recovery plan for personal backups is to drive over to my brother's house, retrieve the device, bring it home, plug it in, and mount it locally. Borg would warn me then too | 15:48 |
clarkb | but that is also ok because the change is due to me mounting the device under a new name locally to speed up recovery (cut out the network) | 15:48 |
clarkb | in this case I think we are using the symlinks so that we can move old devices to the side and mount new devices without updating all the configs | 15:49 |
clarkb | all that to say I agree it is a non-issue here, and annoying because unlike an intentional move or a known problem with dynamic IPs it's a bit more subtle and unexpected | 15:51 |
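The symlink explanation above can be demonstrated directly, and borg provides an env var to acknowledge a relocated repo non-interactively. The paths here are throwaway scratch directories mirroring the /opt/backups -> /opt/backups-202010 layout.

```shell
# Show how a symlinked mount point dereferences to a different canonical
# path, which is what makes borg think the repo moved.
REAL=$(realpath "$(mktemp -d)")          # stands in for /opt/backups-202010
LINK=$REAL-link                          # stands in for /opt/backups
mkdir -p "$REAL/borg-gitea09/backup"
ln -sfn "$REAL" "$LINK"
realpath "$LINK/borg-gitea09/backup"     # prints the dereferenced path
# For scripted access, borg accepts the relocation without prompting via:
# export BORG_RELOCATED_REPO_ACCESS_IS_OK=yes
```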
clarkb | fungi: should we go ahead and approve the change then? I'm not sure anyone else is likely to review it this morning | 16:00 |
fungi | done | 16:01 |
clarkb | thanks! | 16:02 |
clarkb | that's going to take a bit due to its modification of the inventory | 16:02 |
*** jpena is now known as jpena|off | 17:22 | |
opendevreview | Merged opendev/system-config master: Remove gitea01-04 from configuration management https://review.opendev.org/c/opendev/system-config/+/877301 | 17:23 |
clarkb | that's going to spend some time running through all the jobs due to the inventory update, but I think I'm good to go ahead and shut down the docker services on those 4 hosts in the meantime | 17:39 |
clarkb | that's done. On gitea01 I restarted the mariadb container to prevent email from failed db backup cron jobs from spamming us | 17:42 |
clarkb | please keep an eye out for any unexpected behavior but these should have all been removed from the load balancer and gerrit replication so nothing should care about this change. | 17:43 |
*** Guest8553 is now known as atmark | 17:48 | |
*** atmark is now known as Guest8755 | 17:49 | |
*** Guest8755 is now known as atmark | 17:58 | |
clarkb | I found good reason to start my python3.11 migration locally. The new tumbleweed x86-64-v3 support finally works on my hardware (still need to track down what the issue was) and python3.11 on tumbleweed includes x86-64-v3 packages. In theory this makes the python 3.11 install even faster than normal | 21:03 |
clarkb | I now have a new python3.11 venv with fancy newer cpu support and git-review tox nox reno installed into it | 21:09 |
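A minimal sketch of that setup. The tool list comes from the log; the install line is commented out since it needs network access, and plain `python3` stands in for a 3.11 interpreter.

```shell
# Create a fresh venv and show its interpreter; mirrors the setup above.
VENV=$(mktemp -d)/tools-venv
python3 -m venv "$VENV"
"$VENV/bin/python" --version
# "$VENV/bin/pip" install git-review tox nox reno
```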
clarkb | not that I need those tools to be particularly fast, but maybe when I run things with tox/nox I'll see a difference | 21:09 |
fungi | i've just been compiling my own 3.11 in my homedir (and 3.12 alphas) | 21:19 |
fungi | python 3.11 for workgroups | 21:20 |
clarkb | and I guess turning on flags to take advantage of your cpu features | 21:20 |
clarkb | looking at my installed packages, a number of boost libs, all the compression libs, cryptography libs (like openssl) and some image stuff (jpeg, not vm variety) seem to have the add-on packages to overlay the cpu extra vroom stuff | 21:23 |
Hitesh1409 | Can someone tell me the channels for the jjb/python-jenkins repository? | 21:24 |
fungi | Hitesh1409: #openstack-jjb | 21:24 |
clarkb | assuming they haven't found a new home since that was used? | 21:25 |
Hitesh1409 | I’ve pushed changes over the last two days, but I’m unable to contact people to review them. | 21:25 |
fungi | they were using that channel this morning when i talked to hashar and added him to the python-jenkins-release group | 21:25 |
fungi | they also have a mailing list | 21:25 |
clarkb | fungi: oh cool | 21:26 |
fungi | today's discussion log from there: https://meetings.opendev.org/irclogs/%23openstack-jjb/%23openstack-jjb.2023-03-24.log.html | 21:26 |
Hitesh1409 | Hey fungi, thanks for helping out. Can you please ask them to review my changes: https://review.opendev.org/c/jjb/python-jenkins/+/877926 | 21:26 |
fungi | Hitesh1409: i was just helping them with some access changes to the code review system. if you /join #openstack-jjb you can ask them, or it looks like their mailing list is https://groups.google.com/g/jenkins-job-builder | 21:28 |
hashar | Hitesh1409: I will eventually, over the week-end | 21:28 |
clarkb | also it looks like the change was actively being reviewed and the reviewer asked for some time | 21:28 |
hashar | unless I end up too busy with family duties, but I haven't played with Python in a while so reviewing a few changes would probably be a great nap exercise | 21:29 |
hashar | at least i got access to approve changes and even cut new releases (python-jenkins 1.8.0, which I cut earlier today) | 21:29 |
Hitesh1409 | Your time will be highly appreciated for reviewing it. Thanks in advance. | 21:30 |
hashar | also that library is not necessarily actively developed, nor are there that many people still using jjb, we might be just a few dozen | 21:30 |
clarkb | hashar: I've definitely run into people with jenkins installs still using jjb | 21:31 |
hashar | I do! :] | 21:31 |
clarkb | I think kata is using it for example. But I'm not sure how many of them feel a need to develop it as well | 21:31 |
hashar | I guess Jenkins REST API has barely changed over the last decade or so and the python lib is thus rather low maintenance | 21:32 |
hashar | on another subject, I have noticed Zuul has different tenants ( https://zuul.opendev.org/tenants ) and jjb/python-jenkins happens to be in the "openstack" tenant | 21:34 |
fungi | i would have assumed jenkinsx changed everything | 21:34 |
clarkb | https://www.docker.com/developers/free-team-faq/ | 21:34 |
hitesh1409__ | Yes, I'm aware that the repo is not that active. | 21:34 |
hashar | should it be moved to "opendev" as a non openstack project? | 21:34 |
clarkb | "As of March 24, 2023, Docker has reversed its decision to sunset the “Docker Free Team” plan." | 21:34 |
fungi | hashar: it would probably be trivial to move to a different tenant, yes, depending on whether it reuses any openstack job definitions (i doubt it does) | 21:34 |
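For context, a tenant move like the one discussed here is expressed in Zuul's tenant configuration (main.yaml). The fragment below is a hypothetical sketch of what listing the project under a different tenant would look like; the actual opendev tenant config and project lists live in project-config and differ in detail.

```yaml
# Hypothetical sketch: listing jjb/python-jenkins under the opendev tenant
# instead of openstack in Zuul's main.yaml tenant configuration.
- tenant:
    name: opendev
    source:
      gerrit:
        untrusted-projects:
          - jjb/python-jenkins
```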
fungi | clarkb: hah, i guess we can take our time on the quay.io move then | 21:35 |
hitesh1409__ | How can we contribute to the jenkins rest api, any ideas? | 21:35 |
clarkb | it definitely makes it a lower priority, but at this point I think we're really close to working things out so probably need to keep the momentum going | 21:35 |
clarkb | I guess I can prioritize more server replacements instead? I'll see what corvus thinks as losing momentum is probably not a good thing | 21:36 |
hashar | fungi: jjb/jenkins-job-builder has some `openstack-*` jobs but I think I got a change to remove those. Then I don't know what the criteria are for being in opendev, I am guessing it is simply the default for anything not tied to OpenStack | 21:36 |
clarkb | hashar: we host open source projects. We're happy to have you as long as the license remains FOSS | 21:37 |
hashar | hitesh1409__: through the Jenkins project itself I guess? Looks like https://www.jenkins.io/participate/ is their starting page. Their repo is on https://github.com/jenkinsci/jenkins/ and they have a few mailing lists one for developers at https://groups.google.com/g/jenkinsci-dev | 21:38 |
hashar | clarkb: can I bring the thousands of MediaWiki related repos? :-] | 21:39 |
hitesh1409__ | hashar: thanks for the info, I'll check. | 21:39 |
fungi | hashar: it might need some discussion around logistics, but if you're serious we could probably work it out | 21:39 |
clarkb | hashar: its theoretically possible :) we might need to do some sizing and planning etc | 21:40 |
clarkb | ya what fungi said | 21:40 |
fungi | we already have thousands of repos, what's a few thousand more? ;) | 21:40 |
hashar | I just need Zuul v2.5 | 21:41 |
* hashar ducks | 21:41 | |
hashar | I am kidding, we do self host on principle and for historical reason. Albeit our stack is miles behind the OpenDev one | 21:41 |
hashar | but it is something I am sometimes contemplating on my own. At least if I have an idea for a new project outside of Wikimedia I will consider OpenDev for sure | 21:42 |
fungi | if there's ever interest in openinfra foundation/wikimedia foundation collaboration on infrastructure though, i can help get the right people in touch | 21:43 |
fungi | i know we all have great respect for what wmf and its communities have accomplished | 21:44 |
hashar | fungi: thanks I will keep that in mind :] | 21:51 |
hashar | and the respect goes the other way. Our OpenStack cluster powers a lot of the tools developed by our volunteers and is key in feeding content to the wiki. I also got so much help setting up the CI stack eleven years ago, which in all honesty was merely me copy pasting from all the work openstack-infra has done | 21:53 |
clarkb | github had a number of expired and mis served certs today | 22:07 |
clarkb | in addition to the rsa key rotation | 22:07 |
fungi | hashar: the openinfra foundation did somewhat recently add a new (non-paying) associate member organization class for nonprofits and educational/research institutions we partner and collaborate with, if that might be something wmf is interested in: https://openinfra.dev/members/#associate | 22:18 |
ianw | i'd agree that the work we're doing to abstract away from dockerhub is generically useful, sunsetting of free hosting or not | 22:20 |
ianw | we've found corners in image creation, tagging, deleting that it's nice to work out | 22:21 |
hashar | fungi: thanks! :) | 22:23 |
fungi | ianw: maybe now we have time to develop and popularize a dockerhub read-only api replacement which is redirectable ;) | 22:24 |
fungi | with no root level namespace so dockerhub is no longer an implicit root for clients implementing the new image fetching protocol | 22:26 |
fungi | and also fix the proxying/caching problems with the existing protocol while we're at it | 22:27 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!