@jreed:matrix.org | fungi: Is there any precedent for building deb packages in Zuul pipelines? I'm curious to learn whether or not we can update repos to build any/all deb packages that live in the repo as a check. This would help prevent build-breaking changes from being merged. | 14:41 |
---|---|---|
@fungicide:matrix.org | jreed: once upon a time, the primary debian developer maintaining the official debian packages of openstack tried to migrate his packaging repositories to opendev's gerrit. the problem is that his approach was based on requiring the packaging repository to be a fork of the git repository for the project being packaged, so we ended up with hundreds upon hundreds of copies of repositories in our gerrit, some quite huge | 14:53 |
@fungicide:matrix.org | there are definitely ways to use opendev's code review and ci to do debian packaging, but it would be good to avoid following a workflow that requires redundant copies of projects | 14:54 |
@fungicide:matrix.org | i don't think we have any great examples of doing it though, it would need to be built up from scratch | 14:54 |
@fungicide:matrix.org | the speculative build and test processes we have for container images might be a good template for what that would look like: basically have a job that builds your packages and creates a temporary debian package repository served from the test node, returning the deb repository url for that test node as zuul data and pausing the job, then dependent jobs can add that test node to their debian sources.list and install packages from the paused test node for the first job, running whatever tests they want to with those packages. the upshot is that your packaging recipes can be directly tested with depends-on from changes in other repositories that use those packages, and you'll get speculative test results as if the packaging changes had already merged and been published | 15:00 |
@fungicide:matrix.org | then once your changes merge, that can trigger a promote job to publish the packages to wherever you officially serve them | 15:01 |
@fungicide:matrix.org | actually, since deb repositories are really just flat file trees with indices served from a webserver, you wouldn't even really need to pause the first job. just create the temporary package repository and make it a job artifact (like the test logs), then dependent jobs can add the temporary url specific to that build and consume those packages from that location | 15:05 |
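As fungi notes, a flat deb repository is just files plus an index. A minimal sketch of what the packaging job could publish, and how a dependent job could consume it, might look like the following (paths, URLs, and package names are placeholders; it assumes dpkg-dev is installed on the node):

```sh
# In the packaging job, after the built .deb files land in ./deb-repo:
cd deb-repo
dpkg-scanpackages --multiversion . > Packages   # index the flat file tree
gzip -9c Packages > Packages.gz
# expose ./deb-repo as a build artifact alongside the job logs

# In a dependent job, point apt at that artifact URL (placeholder shown):
echo 'deb [trusted=yes] https://logs.example.test/some-build/deb-repo ./' \
  | sudo tee /etc/apt/sources.list.d/speculative.list
sudo apt-get update
sudo apt-get install some-package-under-test
```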
@fungicide:matrix.org | the reason we have to get more fancy with container images is that docker registries aren't just files behind a webserver; there's special negotiation and such which requires special software to serve them | 15:06 |
@jreed:matrix.org | I agree, some sort of containerized process would be best. Otherwise packages would creep onto the host if a job like this were run directly on it. | 16:48 |
The only tests I was curious about were 1) does the deb build, and 2) does it install without error? The existing unit tests already have their own jobs and work fine. | ||
Are you saying package dependencies would be a pain to handle? In the past, I would make a base image with the deps pre-installed, but that is not always the best thing to do. | ||
@jreed:matrix.org | The motivation is that the build breaks from time to time because a developer (myself included) makes a change that passes unit tests and all other checks but breaks the debian packaging for a given package. Then it's not noticed until, say, the next day when someone syncs up master and the "downloader" or "build-pkgs" step fails. | 16:49 |
@jreed:matrix.org | I had hoped it could all be done in one job. | 16:50 |
@jreed:matrix.org | In Jenkins, one can look at nodes by label, e.g. all nodes with a 'docker' label. Can I find a docker-capable node with Zuul the same way? | 16:51 |
@fungicide:matrix.org | well, back up, i said the jobs we have for building containers could serve as a template for building packages. i didn't say anything about using containers at all. you could (somehow?) incorporate containers into your package building process i guess, but i don't personally see where they would fit into this | 19:00 |
@fungicide:matrix.org | and sorry, i was at lunch, so i'll try to address each of your followup questions separately, but let me know if i miss one | 19:02 |
@fungicide:matrix.org | so you want to test whether building a deb works. you could do this directly on a debian test node, but opendev only supplies nodes running stable debian releases while the debian community expects official package builds to occur in unstable (or at least in a chroot of unstable). you could make a job that checks out the project being packaged, checks out/copies your debian packaging files on top of that, and then runs debuild (or pdebuild if you want to use a chroot inside the node) | 19:04 |
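A rough sketch of that build step, with hypothetical repository names and assuming a recent apt plus the devscripts package on the node:

```sh
# Hypothetical layout: upstream source in one repo, debian/ files in another.
git clone https://example.org/project upstream
git clone https://example.org/project-packaging packaging
cp -r packaging/debian upstream/debian
cd upstream
sudo apt-get build-dep -y ./   # needs apt >= 1.1 to read debian/control from a directory
debuild -us -uc                # unsigned build directly on the node (or pdebuild for a chroot)
ls ../*.deb ../*.changes
```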
@fungicide:matrix.org | Chuck Short probably has input on this idea if he's around | 19:05 |
@fungicide:matrix.org | you could then, i suppose, just `dpkg -i` the package you built onto the test node (or into a chroot) to make sure it installs. debian has also, in recent years, started integrating what they call autopkgtests (a.k.a. dep-8), which it might make sense to try to leverage: https://salsa.debian.org/ci-team/autopkgtest/-/blob/master/doc/README.package-tests.rst | 19:09 |
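For illustration only (filenames are placeholders; assumes a recent apt and the autopkgtest package), an install smoke test on the node might be as simple as:

```sh
# apt resolves dependencies when given a local .deb, unlike bare dpkg -i:
sudo apt-get install -y ./your-package_1.0-1_all.deb

# if the packaging ships DEP-8 tests under debian/tests/, run them in place
# with the "null" virtualization backend (no chroot/VM isolation):
autopkgtest ../your-package_1.0-1_amd64.changes -- null
```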
@fungicide:matrix.org | where i was talking about getting fancy, and maybe it's overkill initially but would probably eventually become useful, is for other jobs to be able to install packages that were built with your proposed changes, without first having to merge your change and officially publish the package that includes it. zuul allows jobs to be interdependent on one another, and jobs can directly utilize artifacts generated by other jobs. that opens you up to being able to test your proposed packaging change in broader ways than you could directly in the job that built the package, and still prevent that change from actually merging if the package it built broke other jobs that tried to use it | 19:12 |
@fungicide:matrix.org | i rely on those sorts of features every day, but if you're not accustomed to having them then it may be hard to fully imagine how indispensable they can become | 19:14 |
@fungicide:matrix.org | i agree though, a job that just applies your proposed change, tries to build the package, maybe also tries to install the package, and perhaps runs some basic acceptance tests on the software once the package is installed, is a good place to start. there would be more work necessary to make it possible for other jobs to speculatively utilize the built package, but those features can be incorporated incrementally when you want them | 19:17 |
@fungicide:matrix.org | as for labels, zuul (via nodepool) also has that concept... in fact, earlier versions of zuul were just an intelligent scheduler for orchestrating a fleet of jenkins masters and adding/removing jenkins slaves. about 7 years ago we replaced jenkins with ansible, but the general execution model is still pretty similar (just far more flexible now). in opendev, our node labels correspond to versions of gnu/linux distributions (debian-bookworm, ubuntu-focal, rocky-9, openeuler-8, and so on). all our test nodes are "docker-capable" insofar as you can install docker on them and then use it. zuul supplies convenience roles so you don't have to work out the steps for doing that on different distros, e.g. https://zuul-ci.org/docs/zuul-jobs/latest/container-roles.html#role-ensure-docker | 19:22 |
@fungicide:matrix.org | jreed: does that answer all your questions? | 19:22 |
@jreed:matrix.org | Thank you for all the input. It was a lot to digest. Yes, for now. I think I just need to explore doing this with a simple starlingx repo with one package. I am currently trying it out on my laptop, but what bugs me the most is that I have to install dependencies into my linux environment that I would rather not have to. There's no debian equivalent of a python virtual environment that I know of. | 20:10 |
For more complicated repos, like config, it won't be easy because some packages in that repo rely on others in the same repo, so there's an ordering that matters. | ||
@fungicide:matrix.org | debian's equivalent of a python venv is a chroot. but yes, usually the expectation is that you at least install devscripts locally on the machine where you're doing your packaging work so you have their standard package development tools available | 21:04 |
@fungicide:matrix.org | jreed: if it helps, what i use for local test builds of debian packages is https://pbuilder-team.pages.debian.net/pbuilder/ since it allows you to create and archive chroots of multiple different debian versions and distros, will auto-clean the chroot between builds, and so on | 21:05 |
@fungicide:matrix.org | it at least keeps all the package dependencies off of your rootfs (merely installing them temporarily into a chroot) | 21:08 |
@fungicide:matrix.org | it also supports managing and building in, say, an ubuntu chroot if you're trying to target multiple debian derivative distros, though for starlingx that's probably not relevant | 21:09 |
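For reference, the usual pbuilder workflow looks roughly like this (distribution and file names are just examples):

```sh
sudo apt-get install -y pbuilder devscripts
sudo pbuilder create --distribution sid        # build and cache a base chroot tarball
sudo pbuilder build ../your-package_1.0-1.dsc  # build in a throwaway copy of that chroot
# or, from inside the unpacked source tree:
pdebuild
# refresh the cached chroot periodically:
sudo pbuilder update
```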
@jreed:matrix.org | > <@fungicide:matrix.org> jreed: if it helps, what i use for local test builds of debian packages is https://pbuilder-team.pages.debian.net/pbuilder/ since it allows you to create and archive chroots of multiple different debian versions and distros, will auto-clean the chroot between builds, and so on | 21:34 |
Will check it out. Thanks! :) | ||