mhu | hey there, ez-pz patch to review, documenting cURL examples of using the tenant-scoped admin REST API: https://review.opendev.org/#/c/727785/ | 08:34 |
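For context, a tenant-scoped admin call of the kind that change documents looks roughly like this (hypothetical sketch: the URL, tenant, project, pipeline, and token variable are all placeholders, not taken from the review):

```shell
#!/bin/sh
# Hypothetical sketch of enqueueing a change via Zuul's tenant-scoped
# admin REST API. All names below are illustrative placeholders.
ZUUL_URL="${ZUUL_URL:-https://zuul.example.com}"

zuul_enqueue() {
  # POST an enqueue request for change $1 (e.g. "1234,1"), authenticating
  # with a bearer token supplied in $ZUUL_AUTH_TOKEN.
  curl -sf -X POST \
    -H "Authorization: Bearer ${ZUUL_AUTH_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"pipeline\": \"check\", \"change\": \"$1\"}" \
    "${ZUUL_URL}/api/tenant/mytenant/project/myorg/myproject/enqueue"
}
```

The review linked above is the authoritative source for the exact endpoints and payloads.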
avass | checking if anyone has time to review a nodepool change that adds support for attaching instance profiles for ec2 instances: https://review.opendev.org/#/c/734774/ | 11:45 |
avass | I've tested it by running nodepool from that change so it works, but sadly the moto test framework doesn't support attaching instance profiles yet | 11:47 |
openstackgerrit | Felix Edel proposed zuul/zuul master: Introduce Patternfly 4 https://review.opendev.org/736225 | 13:12 |
openstackgerrit | Felix Edel proposed zuul/zuul master: PF4: Update "fetching info ..." animation using empty state pattern https://review.opendev.org/738010 | 13:12 |
openstackgerrit | Felix Edel proposed zuul/zuul master: PF4: Update buildset result page (new layout and styling) https://review.opendev.org/738011 | 13:12 |
openstackgerrit | Felix Edel proposed zuul/zuul master: PF4: Update "fetching info ..." animation using empty state pattern https://review.opendev.org/738010 | 13:14 |
openstackgerrit | Felix Edel proposed zuul/zuul master: PF4: Update buildset result page (new layout and styling) https://review.opendev.org/738011 | 13:14 |
openstackgerrit | Simon Westphahl proposed zuul/nodepool master: wip: reproduce lock race w/ deleteUpload https://review.opendev.org/738013 | 13:20 |
openstackgerrit | Felix Edel proposed zuul/zuul master: PF4: Update buildset result page (new layout and styling) https://review.opendev.org/738011 | 13:59 |
openstackgerrit | Felix Edel proposed zuul/zuul master: PF4: Add new Zuul logo with text https://review.opendev.org/738033 | 13:59 |
felixedel | zuul-maint: I've proposed some new Patternfly 4 changes https://review.opendev.org/#/q/topic:patternfly-4+(status:open+OR+status:merged) to update the spinner animation on initial page load and the buildset result page. Both with a new layout and design. I hope you like it. As I'm on vacation next week I will continue with PF4 the week after. It | 14:05 |
felixedel | would be cool if you could review those changes until then and provide me some feedback :) | 14:05 |
openstackgerrit | Merged zuul/zuul-jobs master: upload-git-mirror: check after mirror operation https://review.opendev.org/737533 | 14:32 |
clarkb | mordred: I'm doing the community update tonight. Anything specific worth calling out after this morning's edition? maybe there were good questions or ? | 16:13 |
mordred | clarkb: nope - it was straightforward and no questions for us | 16:19 |
mordred | clarkb: the primary question that came up was more openstack related and was about third-party CI (3pci) migration to Zuul v3 | 16:19 |
corvus | gmann and rpioso chatted a bit about that | 16:31 |
gmann | corvus: yeah, let me find the spec link; it is on my other laptop. | 16:32 |
gmann | he needs help on this - https://opendev.org/opendev/infra-specs/src/branch/master/specs/zuulv3-3rd-party-ci.rst | 16:36 |
gmann | and after that tosky can help on job migration part | 16:37 |
corvus | mordred, avass: sorry i got distracted yesterday; i'd like to resume work on the buildx stuff. yesterday, avass mentioned that we could build oci tarballs with buildx and therefore may not need a registry (we could pull the tarball into the local image cache). mordred -- did you look into whether that would work with the manifest images needed for multiarch? | 17:31 |
mordred | corvus: I did not - although I have vague cloudy memories of that not working either | 17:32 |
mordred | but - I don't have specific memories of it not working | 17:32 |
mordred | corvus: I think perhaps importing oci tarballs into docker is weird? and we'd need skopeo to do that? | 17:33 |
corvus | mordred: do you have a buildx build env handy? | 17:35 |
corvus | 19:38 < avass> corvus: looks like you can do '--output type=oci,dest=<file>' | 17:36 |
corvus | mordred: maybe we can run that real quick and check? i never got the cross-platform stuff running locally so it's not easy for me | 17:36 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: Web UI: add i18n support, french translations https://review.opendev.org/737290 | 17:37 |
mordred | corvus: 104.130.135.78 is still there | 17:38 |
mordred | hrm. maybe that's not the right server... | 17:39 |
mordred | corvus: nope - nevermind. I do not have that environment available anymore | 17:39 |
mordred | corvus: maybe I should spin up another test env for us | 17:40 |
corvus | mordred: ok thanks | 17:40 |
mnaser | i have a really interesting problem. we use zuul's speculative image builds to build images for services (in this case, openstack services + one for an openstack operator). the challenge i'm having is that if i build images only if they change and always use 'latest' in my environment, latest might not always mean the same thing and it's possible that in autoscaling, k8s would end up with a mix of newer and older images | 17:43 |
mordred | corvus: booting a new test server | 17:44 |
mnaser | would it make sense to add some custom code that determines the pbr tag and tag the images that way? that also means that i would sort of have to build an image every single time too? | 17:44 |
mnaser | it sounds to me like "i need to build a bunch of images that all need to work together with a long-lived tag" is something that zuul users might benefit from? | 17:44 |
corvus | mnaser: can you upgrade your deployment when an image builds so latest does mean latest? | 17:45 |
corvus | mnaser: are we talking pre-release or releases? | 17:45 |
mnaser | corvus: that's kinda been the challenge, because if the deployment hasn't updated yet, autoscaling events will pick up newer images which makes for a sad time | 17:46 |
mnaser | and i mean technically i'm trying to build things in a way so that it doesn't depend on the user ensuring their deployment is updated anytime the upstream repo changes | 17:46 |
mordred | mnaser: I'm getting a sandwich - but will ponder this use case in my brain while doing so | 17:47 |
mnaser | corvus: we don't currently release anything, it's all cd'd, i was thinking of using pbr's own self-versioning to gather a tag | 17:47 |
mnaser | mordred: bon appetit! | 17:47 |
corvus | mnaser: i'm unclear about the goal. my personal preference for deployment is continuous deployment of the latest code that has merged, because i have confidence that it works because of gating. so i want everything to be running latest always, which means kicking the deployment when new 'latest' images are built. | 17:47 |
corvus | mnaser: but you're talking about tags, which are based on releases... | 17:48 |
mnaser | sorry, i should clarify 'docker image tags' | 17:48 |
corvus | right, i would expect docker image tags to match git repo tags, which are usually releases, in the pbr world at least | 17:48 |
mnaser | i'm thinking the 1.0.0git543858 type of pbr thing | 17:49 |
corvus | oh, a pre-release version | 17:49 |
fungi | mnaser: and the assumption is that you would never rebuild the image when code around it (dependencies, say) changes without the primary software being packaged in the image itself changing? | 17:49 |
corvus | mnaser: that tag is only going to be for that one image, and constantly changing | 17:49 |
mnaser | fungi: at the moment, yes, that's the assumption. i could also be wrong in all of this and taking a different direction may be a better approach | 17:50 |
corvus | that may not really be any different than pinning to a sha256 | 17:50 |
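The "1.0.0git543858 type of pbr thing" mnaser mentions can be approximated from `git describe` output. A minimal sketch, not pbr's real versioning algorithm; note the `+` of a PEP 440 local version is replaced with `.` here, since Docker tags only allow `[A-Za-z0-9_.-]`:

```shell
#!/bin/sh
# Turn "1.2.3-5-gabc1234" (the shape of `git describe --tags --long`)
# into a docker-safe pre-release tag like "1.2.3.dev5.gabc1234".
# Purely illustrative; pbr computes its dev versions differently.
pbr_style_tag() {
  describe="$1"
  version=${describe%%-*}     # last release tag, e.g. 1.2.3
  rest=${describe#*-}
  commits=${rest%%-*}         # commits since that tag
  sha=${rest#*-}              # abbreviated sha, g-prefixed
  echo "${version}.dev${commits}.${sha}"
}
```

Usage would be something like `pbr_style_tag "$(git describe --tags --long)"`.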
mnaser | corvus: right, but at least i can use pbr inside my operator and get the 'version of the operator' and start the services with the tag matching the version of the operator | 17:50 |
corvus | mnaser: but the operator version will have a different git sha than each of the services | 17:51 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: Web UI: add i18n support, french translations https://review.opendev.org/737290 | 17:51 |
mnaser | but then i would either have to rebuild all the images every single time (not cool), or in the promote pipeline, promote the ones that were built, and 'fake-promote' the ones that weren't built | 17:51 |
mnaser | sigh but i just realized that now i can't do speculative testing inside zuul because the tag i'm trying to test doesn't exist | 17:51 |
mnaser | the super annoying inefficient way seems to be 'build all images all the time' but that just doesn't seem like a good use of computer time | 17:52 |
corvus | mnaser: i'm sorry i'm still confused about something; it seems like "i'm trying to build things in a way so that it doesn't depend on the user ensuring their deployment is updated anytime the upstream repo changes" is incompatible with wanting to continuously deploy | 17:53 |
mnaser | corvus: that's fair. i guess i'm trying to 'make every commit deployable' | 17:55 |
mnaser | and perhaps that's not necessarily continuously deploying. | 17:55 |
corvus | gating will make every commit deployable... but maybe you're getting at the problem of wanting to know the state of the entire set of images for each gated test | 17:56 |
corvus | mnaser: like what we do with the openstack/openstack repo for git commits? | 17:56 |
corvus | basically, if you look at the integrated gate pipeline as a linear timeline, getting the complete state of the system for each point in that timeline? | 17:57 |
mnaser | corvus: i think that's what i'm struggling to describe, correct | 17:57 |
mnaser | so that ideally i can just revert to a previously-pinned tag and the system would also use the older images | 17:58 |
fungi | this sounds like a case for a ledger | 17:58 |
corvus | okay -- if, hypothetically we had that for docker images as we do for git repos, how would you tell k8s to deploy them? would you have the operator update service descriptions to point to that? | 17:58 |
mnaser | so instead of running vexxhost/openstack-operator:latest cause something broke, i switch to vexxhost/openstack-operator:some-sha .. and then when it starts, it will update all the k8s manifests to use whatever some-sha was tested with | 17:59 |
corvus | mnaser: okay, i think i fully understand now. :) | 17:59 |
fungi | or some similar timestamping service so that you know what the full set looked like for any given point in time, and the sequence of changes which occurred to the set when | 17:59 |
mnaser | fungi: yeah, something pretty much similar to that, because sometimes a git revert and wait for pipeline is a very time expensive operation | 18:00 |
corvus | mnaser: if you want the openstack operator to do this, you could potentially build the openstack operator on every commit, and burn the sha256s of all the images into it | 18:00 |
corvus | mnaser: so the operator itself would be fungi's ledger | 18:00 |
fungi | if a list of image shas were recorded in a git repository, that would give you the properties you describe (though at the expense of needing to create tooling around picking the point in the revision history containing the set of checksums you're wanting to reset to) | 18:00 |
corvus | mnaser: and as long as you're doing that in gate, and using gate/promote for images, then you know that the sha256s of the images used in the gate test are going to be the ones published | 18:01 |
fungi | and yeah, if the operator contained a list of checksums, and was maintained in a git repository, i guess that answers the consumer tooling part | 18:01 |
mnaser | corvus: so the "build operator on every commit" already happens right now, the burn the sha256 is where it might get tricky i think.. or maybe not, because i will splice that into the docker image build, not the code itself | 18:02 |
mnaser | so its not like that needs to end up being committed to the git repo | 18:02 |
corvus | fungi: well, i wasn't thinking the checksums would be in the git repo, but rather, the operator image build job would run on every commit to the whole system, so if there's a nova commit, we build the nova image, then start building the operator image, and as part of that process, we ask docker what the nova sha256 is, and include that in the operator build process. | 18:03 |
corvus | yeah, i think mnaser and i just said the same thing | 18:03 |
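The "burn the sha256s into the operator" idea sketched above might look roughly like this in the operator's image-build job (image names and the build arg are made up for illustration; note `RepoDigests` is only populated for images that have been pushed to or pulled from a registry):

```shell
#!/bin/sh
# Hypothetical sketch: after the nova image is built and pushed, resolve
# its registry digest and hand it to the operator image build as a build
# arg, so the operator "knows" the exact image set it was tested with.
build_operator_with_pinned_nova() {
  nova_digest=$(docker image inspect \
    --format '{{index .RepoDigests 0}}' myorg/nova:latest)
  docker build \
    --build-arg NOVA_IMAGE="${nova_digest}" \
    -t myorg/openstack-operator:latest "$1"
}
```

The operator's Dockerfile would then record `NOVA_IMAGE` (a `repo@sha256:...` reference) wherever it templates out service manifests.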
mnaser | and then dropping the 'cleanup code' inside promote.. or at least spacing it out to leave some headway for reverts | 18:03 |
corvus | mnaser: right -- so there's a tag so the sha doesn't get orphaned | 18:04 |
fungi | corvus: oh, sure if the checksums are baked into the operator image and it has sequence numbers/timestamps/something that could also work, for sure | 18:04 |
fungi | so you know you're broken at operator image #12345 you can just say to roll back to operator image #12344 | 18:05 |
corvus | yep, and each of those has the sha256s of the images it was tested with. | 18:05 |
corvus | mnaser: i think that's a good solution; i think a slightly more explicit solution would be your idea of re-tagging images (you described it as 're-promoting' or 'fake promoting' or something like that) | 18:06 |
mnaser | and i think that might be easier than the whole burn-in stuff because all i need to do is $current_version and then point images at $current_version | 18:07 |
mnaser | and i think all i need to touch in this scenario is the promote job | 18:08 |
corvus | mnaser: i think that would be effectively the same thing, except a little more work since you'd need to do more registry interactions, but in exchange, you'd get more explicitness/visibility. | 18:08 |
mnaser | well i could just _always_ run the promote jobs all the time, and they would promote 'latest' and '$current_version' | 18:08 |
avass | corvus, mordred: 158.69.68.12 is still up, reading through the scrollback to catch up | 18:09 |
mnaser | i guess the only caveat is to make sure that doesn't fail if review-xxx does not exist | 18:09 |
mnaser | just take latest and retag it with $current_version | 18:09 |
avass | corvus, mordred: the last patchset is working though, should we merge it and try retriggering the nodepool release and clean it up a bit in a follow-up change? | 18:10 |
corvus | mnaser: i think there are two options there: you could tag every image with a pbr tag, then just use that instead of the sha256 (ie, you lookup that tag and use that instead of the sha256); or you could name the tested-set and tag everything with that name. for example, you could tag everything with the pbr pre-release tag of the operator. | 18:10 |
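mnaser's retag-in-promote variant is, in sketch form (repository name and version string are placeholders; the real promote job would run this against the images it just promoted):

```shell
#!/bin/sh
# Hypothetical promote-time retag: take the image already promoted to
# 'latest' and give it a second, long-lived version tag so older
# deployments stay pinnable even as 'latest' moves on.
retag_promoted() {
  repo="$1"
  version="$2"
  docker pull "${repo}:latest"
  docker tag "${repo}:latest" "${repo}:${version}"
  docker push "${repo}:${version}"
}
```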
corvus | avass: oh cool, that's probably the priority yes :) | 18:11 |
mnaser | corvus: well in my case, the images and operator code is all in the same repo, so technically the 'pbr version' will naturally be identical | 18:11 |
corvus | mnaser: oh because you're building nova images out of your operator repo? | 18:11 |
mnaser | corvus: yeah... for now | 18:11 |
fungi | the up side to using pbr-esque autoversions is that it makes an attempt at providing some sense of serial ordering, though ultimately that's not 100% possible since the source of information it has to work from is nonlinear | 18:12 |
avass | corvus: it does pull from the buildset-registry explicitly in build-docker-image now for buildx though, I'm not sure why it didn't work as it should | 18:12 |
mnaser | i think for a future state where images are provided by openstack upstream, the 'burn-in' option is the best one | 18:12 |
corvus | mnaser: okay, yeah, i was imagining otherwise, but maybe it's better to keep imagining that so if nova builds its own images in the future, this works with it. | 18:12 |
mnaser | yeeep | 18:12 |
corvus | avass: and yeah, i'd still like to remove the buildset registry requirement from the role, since that's a trap for users to fall into | 18:13 |
mnaser | probably a simple python script to interact with docker and get the sha's and generate a manifest inside docker build | 18:13 |
corvus | avass: so let's see about landing your change, then either doing oci tarball or internal temporary registry | 18:13 |
avass | corvus: ah yeah, then that part doesn't really matter too much since we want to remove the buildset-registry requirement completely | 18:13 |
avass | yeah | 18:13 |
corvus | avass: left a debug thing in, otherwise lgtm | 18:15 |
avass | corvus: oh yep | 18:15 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Test multiarch release builds https://review.opendev.org/737315 | 18:16 |
avass | done | 18:16 |
mordred | corvus: 198.101.251.60 is booted and latest docker is installed | 18:37 |
mordred | corvus: in case we need a place to poke | 18:37 |
corvus | mordred: yeah, i think this is the status: can you review https://review.opendev.org/737315 ? if that's good, we can re-run the nodepool tags; then i think we should poke at image tarballs, and either move to that, or if it doesn't work, move to an internal temporary registry | 18:38 |
avass | mordred: running skopeo inspect --raw shows two manifests with arch arm64 and i386 after building those platform and outputting an oci archive | 18:40 |
mordred | corvus, avass: yeah. I think that's good | 18:42 |
mordred | avass: cool - so exporting to an oci tarball gets us a thing that at least skopeo knows how to deal with | 18:42 |
mordred | or, rather, at least has the manifests | 18:42 |
mordred | avass: hrm. here's a question - does the oci archive have the full contents of both arch? or is it just a tarball with the manifests in it | 18:43 |
mordred | that refer to images that may not exist anywhere but inside the buildx context? (wondering if we have to do additional exports or something) | 18:43 |
avass | mordred: I was able to upload it with 'skopeo copy --all oci-archive:oci docker://158.69.68.12:5000/testarch' | 18:44 |
avass | and there are blobs in the oci archive | 18:45 |
mordred | avass: neat! | 18:45 |
corvus | cool -- can we do this without skopeo? | 18:46 |
avass | gonna go get some food, I'll be back in ~30min | 18:46 |
corvus | we might also try the 'tar' output format | 18:47 |
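For reference, the export-and-push flow avass verified above can be summarized like this (the registry address and image name are placeholders, and this still relies on skopeo, which is what corvus would like to avoid):

```shell
#!/bin/sh
# Build a multi-arch image with buildx, export it as an OCI archive,
# then copy the archive (all manifests) to a registry with skopeo.
# Registry address is an illustrative placeholder.
multiarch_oci_push() {
  registry="$1"
  DOCKER_CLI_EXPERIMENTAL=enabled docker buildx build . \
    --platform=arm64,i386 \
    --output type=oci,dest=image.oci
  skopeo inspect --raw oci-archive:image.oci   # one manifest per arch
  skopeo copy --all oci-archive:image.oci "docker://${registry}/testarch"
}
```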
avass | corvus, mordred: right docker doesn't do ipv6, should we just reverify that and fix that later? | 19:25 |
corvus | hrm, i think not -- we're going to have lots of issues with this if we do | 19:26 |
corvus | let's plow on with not using the buildset registry | 19:27 |
corvus | avass: i've logged into 158.69.68.12. what was your command to build the image? i'd like to try various formats | 19:28 |
avass | that's the fake publication registry that is failing though, so we need the same fix that buildx uses for the buildset-registry now | 19:28 |
corvus | avass: can you just use 'localhost' ? | 19:28 |
avass | corvus: no since that would be localhost inside the buildx container | 19:29 |
avass | i think | 19:29 |
avass | I believe i tried it | 19:29 |
avass | corvus: 'DOCKER_CLI_EXPERIMENTAL=enabled docker buildx build . --tag 158.69.68.12:5000/test --platform=arm64,i386 --output type=oci,dest=oci' | 19:29 |
avass | corvus: but it's in the history for the zuul user | 19:30 |
corvus | avass: thanks :) | 19:30 |
corvus | does the local docker image cache even have support for images of a different arch? | 19:34 |
avass | corvus: I'm not sure, I tried importing it but it shows up as 'Architecture: amd64' | 19:35 |
corvus | like, can i 'docker pull' the arm64 nodepool image from dockerhub? | 19:35 |
corvus | avass: yeah i see that too | 19:35 |
corvus | so i wonder if it only imported the one arch | 19:35 |
avass | corvus: I enabled experimental docker daemon support, so you can import the platforms with '--platform 386' or something like that | 19:36 |
avass | or you should be able to | 19:36 |
avass | but it still shows up as amd64 | 19:36 |
avass | mordred: why do we push to the buildset-registry, pull it, retag it, then push it again anyway? | 19:38 |
avass | mordred: shouldn't it be enough to keep it in the buildx cache and push it later in upload-docker-image | 19:38 |
avass | ? | 19:38 |
avass | if we can do that we won't need to do the docker push,pull,push dance or skopeo | 19:39 |
avass | or output an oci image archive | 19:39 |
corvus | avass: the 'docker manifest' command only interacts with a registry; that seems to support the idea that the local image cache can't handle multi-arch | 19:41 |
avass | corvus: so the 'docker pull' in build-docker-image for buildx might not do what we want, and could be the reason for the arch mixup clarkb explained yesterday | 19:43 |
avass | I'm guessing | 19:43 |
avass | mordred: is it because buildx doesn't care about the buildset-registry mirror? | 19:44 |
corvus | avass: the oci image export is the test-playbook which is supposed to be based on debian, right? | 19:48 |
avass | corvus: the oci image export isn't used in the test-playbook | 19:49 |
corvus | docker image import --platform amd64 oci test-amd64; docker run --rm -it test-amd64 /bin/sh | 19:49 |
corvus | avass: i did that ^ | 19:49 |
corvus | and got this: docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown. | 19:49 |
corvus | so i'm wondering if that export is valid | 19:50 |
avass | corvus: how did you build the image? | 19:50 |
corvus | avass: i didn't; that's your oci export (i think) | 19:50 |
corvus | in /home/zuul/src/opendev.org/zuul/zuul-jobs/test-playbooks/container/docker on the held node | 19:50 |
avass | corvus: that's only i386 and arm64, so I guess that's why | 19:51 |
corvus | i386 and not amd64? | 19:51 |
corvus | i s/amd64/i386/ and got the same result | 19:51 |
avass | corvus: I think I built it when checking what happens if you import a non amd64 image | 19:51 |
avass | corvus: hmm, what happens if you rebuild the image for amd64? | 19:52 |
corvus | oh, so i should probably rebuild with amd64,arm64 | 19:52 |
avass | yeah | 19:52 |
corvus | will do | 19:52 |
mordred | corvus, avass: I could definitely see local registry not supporting multi-arch being related to our multi-arch weirdness | 19:53 |
corvus | mordred: local cache you mean? | 19:54 |
mordred | (sorry - I got waylaid AFK and am just catching up) | 19:54 |
mordred | corvus: yeah | 19:54 |
corvus | avass: still no joy | 19:54 |
avass | and what if it's built for amd64 only? | 19:54 |
corvus | avass, mordred: i started a screen session on 158.69.68.12 if you care to join | 19:55 |
corvus | run 'screen -x' as root | 19:55 |
avass | looking :) | 19:56 |
corvus | same result | 19:56 |
avass | I wonder what's going on with the oci export then | 19:57 |
corvus | maybe we can try the tar export | 19:59 |
avass | corvus: I believe you can also do --output=<dir> | 19:59 |
corvus | oh that's what that does | 19:59 |
corvus | that's not what we want | 20:00 |
corvus | (tar output makes a tarball of the resulting filesystem; it's not image layers/blobs/manifests/etc) | 20:00 |
mordred | yah -that's def not what we want | 20:00 |
corvus | and output=dir is the same thing but extracted; not a tarball | 20:01 |
avass | yep | 20:02 |
mordred | so - still catching up - skopeo copy worked, but is skopeo, right? so the current thing is "is there a docker incantation that will work with these oci images" yeah? | 20:04 |
corvus | mordred: skopeo also hit the ipv6 issue, which means we need to duplicate even more infrastructure for testing. and yes, that's the current question. | 20:05 |
avass | mordred: yeah, and if we can keep the images in the buildx cache until we want to upload them somewhere that would be the best, no? | 20:05 |
mordred | kk. I think I'm up to speed | 20:05 |
mordred | avass: WELL - sort of | 20:05 |
corvus | and if that fails, we'll look into the temp registry idea | 20:05 |
mordred | if all we're doing is building and uploading, yes | 20:05 |

avass | ah, I guess it wouldn't work if you actually want to test something :) | 20:06 |
mordred | but if the job expects to build the image and then use the image in some way on the same vm, we'd need it in local cache | 20:06 |
mordred | yeah. it will work if tests happen on different vms | 20:06 |
avass | but we don't really need buildx for building an image for the system we're on, so can we do some dumb logic around that? | 20:07 |
avass | because we won't actually be able to run the images built for the other arches right? | 20:08 |
corvus | (back to the oci image import issue -- inspecting the filesystem export, it looks like we have binaries for the right arch, so i don't know why docker run is failing) | 20:09 |
avass | corvus: maybe it's importing the wrong image? | 20:09 |
avass | or did you build it for amd64 only? | 20:09 |
corvus | avass: i built it for amd64 only | 20:09 |
corvus | let me try omitting the platform args entirely | 20:10 |
corvus | nope | 20:11 |
mordred | corvus: is it importing the oci tarball as a root filesystem and not an oci tarball? | 20:11 |
avass | oh strange | 20:11 |
corvus | mordred: yes i think so | 20:11 |
mordred | corvus: that is not very helpful | 20:11 |
corvus | i'd just like to point out that help message is nonsensical and incorrect | 20:13 |
avass | corvus: yes I got very confused about that message hah | 20:13 |
*** harrymichal has quit IRC | 20:13 | |
corvus | wow finally | 20:13 |
*** harrymichal has joined #zuul | 20:14 | |
avass | oh so, load works but not import | 20:14 |
corvus | so we can export a single-arch oci tarball and 'docker image load' it. | 20:14 |
corvus | now maybe let's see what happens with multi-arch | 20:14 |
corvus | it refuses to load a multi-arch oci image export | 20:15 |
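A minimal sketch of what was just established (image names hypothetical): a single-arch OCI export can be brought into the local cache with `docker image load`, but not with `docker import` (which treats the tarball as a root filesystem), and a multi-arch export is refused by load entirely.

```shell
# single-arch: export an OCI tarball from buildx, then load it
docker buildx build --platform=linux/amd64 --output=type=oci,dest=img.tar .
docker image load --input img.tar

# 'docker import img.tar' would NOT do the right thing here: it imports the
# tarball contents as a filesystem, not as an OCI image layout

# multi-arch: the export succeeds, but 'docker image load' refuses it
# (as observed above)
# docker buildx build --platform=linux/amd64,linux/arm64 --output=type=oci,dest=multi.tar .
# docker image load --input multi.tar
```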
corvus | one more thing i want to try | 20:16 |
avass | that should not work right? | 20:17 |
corvus | i think i just did an oci export of the arm64 image, imported it, and ran it on an amd64 host; i would not expect that to work, but maybe it works as a side effect of the binfmt_ stuff we do to make the cross-arch build work in the first place? | 20:17 |
corvus | root 16275 0.6 0.0 123416 6256 pts/0 Ssl+ 20:16 0:00 /usr/bin/qemu-aarch64 /bin/dash | 20:17 |
corvus | yep | 20:17 |
corvus | that's what's going on there | 20:17 |
corvus | so i'm running an emulated arm64 dash shell | 20:18 |
mordred | yeah | 20:18 |
avass | oh, so we need to build one arch at a time, load it to docker and then push it | 20:18 |
corvus | i think that means that what we can do is export the arch-specific images from buildx one at a time and import one or more of them into the local cache | 20:18 |
mordred | which is how we'll eventually be able to run these (in theory) on an amd64 nodepool-builder | 20:18 |
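The binfmt_misc/qemu setup that makes the emulated arm64 shell above (and cross-arch builds generally) work can be sketched like this. The helper image used here is one common choice for registering qemu user-mode emulators, not necessarily what these jobs use.

```shell
# register qemu static binaries as binfmt_misc handlers (privileged, one-shot)
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# after this, an arm64 image can run on an amd64 host; the process shows up
# under /usr/bin/qemu-aarch64, as seen in the ps output above
docker run --rm -it arm64v8/debian /bin/dash
```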
corvus | if we import them all into the cache, does that mean we can use the docker manifest command to do the final push? | 20:19 |
avass | there's a registry running on that machine so you can test it :) | 20:20 |
corvus | root 16543 5.3 0.0 2396 836 pts/0 Ss+ 20:20 0:00 /bin/dash | 20:21 |
mordred | cool. so that's a mechanism to build each arch one at a time | 20:21 |
corvus | no qemu that time | 20:21 |
mordred | and get them into local cache | 20:21 |
corvus | mordred: yep | 20:21 |
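The per-arch build-and-load mechanism just agreed on might look roughly like this (image name and tags are hypothetical; the `-t` tag is expected to survive the OCI export/load round-trip, but retag manually if it does not):

```shell
# build each arch separately, export an OCI tarball, and load it into the
# local docker cache under an arch-specific tag
for arch in amd64 arm64; do
  docker buildx build --platform=linux/${arch} \
      -t nodepool:latest-${arch} \
      --output=type=oci,dest=nodepool-${arch}.tar .
  docker image load --input nodepool-${arch}.tar
done
```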
corvus | "docker manifest create MANIFEST_LIST MANIFEST [MANIFEST...]" | 20:21 |
corvus | i just see "docker docker manifest manifest" :) | 20:22 |
mordred | corvus: yeah ... one sec - I have example | 20:22 |
mordred | corvus: docker manifest create toolboc/azure-iot-edge-device-container:latest \ | 20:22 |
mordred | --amend toolboc/azure-iot-edge-device-container:x64-latest \ | 20:22 |
mordred | --amend toolboc/azure-iot-edge-device-container:arm32-latest \ | 20:22 |
mordred | --amend toolboc/azure-iot-edge-device-container:aarch64-latest | 20:22 |
mordred | corvus: there's a really. nice side effect to this - we could also have this same system push manifests - as well as per-arch tags individually (in case it's helpful to have a per-arch tag for specifying in a run command) | 20:23 |
corvus | do the component images need to already be in the registry? | 20:23 |
corvus | (ftr, i'm going to try some commands i don't expect to work :) | 20:24 |
*** harrymichal has quit IRC | 20:24 | |
mordred | kk | 20:24 |
mordred | https://dev.to/toolboc/publish-multi-arch-docker-images-to-a-single-repo-with-docker-manifests-329n | 20:24 |
mordred | is what I was reading | 20:24 |
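Following the article linked above, the full manifest flow might look like this hedged sketch. The per-arch component images must already exist in the registry (per the question above), and `docker manifest` is a CLI experimental feature at this point; registry, image, and tag names are hypothetical, reusing the localhost:5000 test registry from the discussion.

```shell
export DOCKER_CLI_EXPERIMENTAL=enabled

# push the per-arch tags first -- manifest create references them in-registry
docker push localhost:5000/nodepool:latest-amd64
docker push localhost:5000/nodepool:latest-arm64

# create the manifest list pointing at the per-arch manifests
docker manifest create localhost:5000/nodepool:latest \
    --amend localhost:5000/nodepool:latest-amd64 \
    --amend localhost:5000/nodepool:latest-arm64

# push the manifest list itself
docker manifest push localhost:5000/nodepool:latest
```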
corvus | avass: what's the local registry? :5000? | 20:24 |
avass | corvus: yup | 20:25 |
avass | corvus: localhost:5000 should work | 20:25 |
avass | corvus: might need to login with user: zuul, password: testpassword | 20:25 |
corvus | okay, that worked | 20:27 |
avass | nice :) | 20:28 |
corvus | huh, docker pull localhost:5000/test-manifest | 20:28 |
corvus | that should work right? | 20:28 |
avass | I guess so? | 20:29 |
*** harrymichal has joined #zuul | 20:33 | |
corvus | ohh | 20:35 |
corvus | i think there are more steps | 20:35 |
avass | corvus: yeah test-manifest doesn't exist, I'm testing with curl locally :) | 20:35 |
avass | corvus: against that registry that is | 20:35 |
corvus | i think it's an ssl cert mismatch | 20:37 |
avass | corvus: ah right, it might be installed for ansible_host and not localhost, check /etc/docker/certs.d/ | 20:38 |
corvus | yay! | 20:39 |
avass | (it's also installed as a system certificate so that's why it might have worked for some commands) | 20:39 |
avass | but yep, looks like we can drop the buildset-registry requirement :) | 20:40 |
corvus | cool, let me try to sketch this out real quick, and i'll get it in an etherpad for us to look at | 20:40 |
*** harrymichal has quit IRC | 20:47 | |
*** harrymichal has joined #zuul | 20:48 | |
*** harrymichal has quit IRC | 20:54 | |
*** saneax has quit IRC | 21:02 | |
corvus | mordred: i think i see a bug in the buildx task list -- this task pulls but doesn't have the buildset registry in the image id: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/build-docker-image/tasks/buildx.yaml#L57 | 21:06 |
avass | corvus: yes that's part of my change | 21:06 |
corvus | avass: ha :) | 21:06 |
avass | :) | 21:06 |
corvus | above we would have built and tagged "zuul-jobs.buildset_registry/nodepool:latest" and then we would have pulled "nodepool:latest" | 21:07 |
corvus | so we're very likely pulling an upstream image | 21:07 |
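The bug just described can be sketched as follows, reusing the shorthand registry prefix from the lines above (names are illustrative, not the role's actual templated values):

```shell
# the role builds and tags with the buildset-registry prefix...
docker tag nodepool:latest zuul-jobs.buildset_registry/nodepool:latest
docker push zuul-jobs.buildset_registry/nodepool:latest

# ...but the buggy task then pulls the bare name, which docker resolves
# against Docker Hub -- very likely fetching an upstream image instead
docker pull nodepool:latest

# intended behavior: pull from the buildset registry, then retag locally
docker pull zuul-jobs.buildset_registry/nodepool:latest
docker tag zuul-jobs.buildset_registry/nodepool:latest nodepool:latest
```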
avass | corvus: ah! that's why it didn't work for my test, because it shouldn't work :) | 21:07 |
corvus | i'm almost done writing this up | 21:08 |
*** harrymichal has joined #zuul | 21:14 | |
corvus | avass, mordred: https://etherpad.opendev.org/p/build-docker-image-role | 21:17 |
mordred | corvus: so many words! | 21:17 |
corvus | well, it's pseudo code for what i think the roles need to do | 21:17 |
mordred | yeah | 21:17 |
mordred | I think it's great | 21:17 |
corvus | i put 2 approaches in there; one using the oci exports, and the other using a temporary registry | 21:18 |
mordred | corvus: so - for the things mentioning tag, that's really "for each tag" yeah? | 21:18 |
corvus | basically, the temp registry is what we're doing now, but relying on that instead of the buildset registry | 21:18 |
corvus | yep | 21:18 |
mordred | (I think it's ok to elide that for readability) | 21:18 |
corvus | i also omitted all the wacky change_{changeid} tag stuff too for the same reason | 21:19 |
mordred | corvus: so - in the oci export approach we wind up with foo:latest (a manifest) and foo:latest-amd64 and foo:latest-arm64 | 21:19 |
corvus | correct, i think that's the big observable difference between the approaches | 21:19 |
mordred | yeah. I actually really like that as a thing | 21:19 |
corvus | the other one obviously being that we'll be running a docker container with a registry on localhost | 21:19 |
mordred | because it allows for explicit targeting of an arch with simple docker runs just with a tag | 21:20 |
corvus | i kinda prefer the temp registry because it doesn't have the extra tags :) | 21:20 |
mordred | fwiw - I think we'll want extra tags for nodepool - but we could just do them explicitly | 21:20 |
corvus | cause 'docker run' should just dtrt, right? like, why would you want to 'docker run nodepool-arm64'? | 21:20 |
corvus | k maybe that's the piece i'm missing | 21:21 |
mordred | so that you can run an arm64 nodepool-builder on an amd64 host | 21:21 |
mordred | to build arm dib images without running nodepool-builder itself on an arm host | 21:21 |
corvus | ohh | 21:21 |
mordred | which will be brilliant | 21:21 |
avass | yeah that sounds good | 21:21 |
corvus | yeah, i recall now we discussed that emulating arm for our dib builders in production was an acceptable cost (esp compared to trying to keep an arm builder running long term) | 21:22 |
mordred | ++ | 21:22 |
corvus | mordred: and there's no way to say "docker run --platform=arm64 nodepool" ? | 21:22 |
mordred | corvus: not to my knowledge at the moment | 21:23 |
mordred | I believe the podman folks may have added on | 21:23 |
mordred | one | 21:23 |
mordred | or are working on one | 21:23 |
corvus | yeah, i'd be surprised if there were for docker, but just thought i'd check | 21:23 |
avass | mordred, corvus: there is, but only for experimental docker features | 21:23 |
corvus | oh neat | 21:23 |
corvus | experimental is fine | 21:23 |
mordred | yeah - if that's a thing, then I'm more ok with the temp registry approach | 21:23 |
corvus | yeah https://docs.docker.com/engine/reference/commandline/run/ --platform | 21:24 |
mordred | sweet | 21:24 |
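A hedged sketch of the experimental `--platform` flag being discussed (Docker 19.03 / API 1.40 era): the daemon needs experimental features enabled, and the flag then selects which arch of a manifest list to use. Image names are hypothetical; note that writing daemon.json this way clobbers any existing daemon config.

```shell
# enable experimental daemon features (one common way; overwrites daemon.json)
echo '{"experimental": true}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

# run the arm64 variant of a multi-arch image on an amd64 host
docker run --platform=linux/arm64 --rm -it localhost:5000/test-manifest /bin/dash
```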
*** dennis_effa has joined #zuul | 21:24 | |
corvus | i'm trying it out now | 21:24 |
avass | and docker manifest needed to image to already exist in the publication registry right? | 21:25 |
corvus | avass: sorry, having trouble parsing that question | 21:25 |
avass | replace the first 'to' with 'the' :) | 21:25 |
corvus | avass: yes | 21:26 |
corvus | i wonder how we find out what docker api level we have | 21:26 |
corvus | docker version. we have 1.40 so this should work | 21:26 |
mordred | I agree | 21:27 |
corvus | however when i run "docker run --platform=arm64 --rm -it 158.69.68.12:5000/test-manifest /bin/dash" i don't see qemu | 21:27 |
avass | corvus: I enabled experimental on that machine, so we'd have to add that to ensure-docker | 21:27 |
corvus | also it accepted --platform=arm64aeu without error :/ | 21:28 |
avass | or some other role that enabled experimental features | 21:28 |
mordred | corvus: run uname -a | 21:28 |
mordred | yeah. not so much | 21:28 |
mordred | corvus: platform linux/arm64? | 21:28 |
corvus | hrm -- given that the local image cache doesn't support multi-platform (image manifests) how could this even work? | 21:29 |
mordred | oh - good point | 21:29 |
corvus | oh, i'm suspecting this may be passed through to pull | 21:30 |
mordred | oh! | 21:31 |
corvus | at least, based on some text in captain Illig's blog: https://www.paraesthesia.com/archive/2019/06/12/windows-and-linux-docker-containers-side-by-side/ | 21:31 |
corvus | "If you forget to specify the --platform flag, it will default to Windows unless you’ve already downloaded the container image" makes me think that | 21:31 |
corvus | bingo | 21:32 |
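The behavior just confirmed ("bingo") can be sketched like this: `--platform` is effectively a pull-time selector, and if an image is already in the local cache, a plain run uses whatever arch was cached. Hypothetical names, reusing the test registry above.

```shell
# pull the arm64 variant explicitly; a subsequent run uses it (under qemu)
docker pull --platform=linux/arm64 localhost:5000/test-manifest
docker run --rm localhost:5000/test-manifest uname -m

# pulling the amd64 variant replaces the cached arch for that name
docker pull --platform=linux/amd64 localhost:5000/test-manifest
docker run --rm localhost:5000/test-manifest uname -m
```

This is also why the local cache can't hold both platforms under one name at once, as noted below.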
mordred | corvus: can you pull both platforms into the same cache? | 21:32 |
corvus | mordred: i can't think of a way to do that | 21:33 |
mordred | that would be less cool for having a single nodepool that invoked dib as docker run --platform= zuul/nodepool-builder disk-image-create | 21:33 |
mordred | but | 21:34 |
corvus | mordred: "docker pull --platform=arm64,amd64 158.69.68.12:5000/test-manifest" is the best i could come up with, and it rejected that. | 21:34 |
mordred | maybe shrug | 21:34 |
avass | different tags would solve that wouldn't it :) | 21:34 |
corvus | yeah, i think the idea we're exploring is whether they are necessary | 21:34 |
mordred | yeah - different tags definitely work - but have the downside of being ugly and not taking advantage of the honestly nice stuff | 21:34 |
mordred | in manifests | 21:34 |
avass | ah yeah | 21:35 |
corvus | (because my eyes glaze over after seeing a list of about 4 tags, so i think they aren't very user friendly, but can live with them if they are important) | 21:35 |
corvus | mordred said better words | 21:35 |
corvus | mordred: oh are you thinking of docker-in-docker for nodepool dib? | 21:36 |
mordred | corvus: we _could_ | 21:36 |
mordred | corvus: not sure which is the best | 21:36 |
mordred | but if we just do it in the nodepool.yaml | 21:36 |
corvus | (i mean, we run bwrap in docker, it's not crazy :) | 21:36 |
mordred | then we can tie a given arch to a given image | 21:36 |
mordred | and all of the builders can be identical | 21:36 |
mordred | rather than needing a builder for arm - whether that's one running arm or amd | 21:37 |
mordred | now - I haven't found the cantrip for docker-in-docker with containerd | 21:37 |
corvus | have we explored the idea of a standalone dib docker image? | 21:37 |
mordred | because /var/run/docker.sock isn't the sock yet | 21:37 |
mordred | corvus: I think we should make one if we go down this road | 21:37 |
mordred | but - it's not like using the nodepool-builder image would be onerous -it'll already be in the local cache | 21:38 |
mordred | so it's not any more bytes | 21:38 |
corvus | agreed, but in the long run, way simpler nodepool image, and generally useful dib image. | 21:39 |
mordred | even though docker run -v /var/run/docker.sock:/var/run/docker.sock zuul/nodepool-builder docker run zuul/nodepool-builder disk-image-create | 21:39 |
mordred | IS | 21:39 |
corvus | (if we made nodepool's dependency on dib be entirely run-time binary via docker or other system installation) | 21:39 |
mordred | silly | 21:39 |
corvus | anyway, popping up the stack... | 21:40 |
mordred | (actually - I think iirc - I was thinking podman | 21:40 |
mordred | so in nodepool.yaml it would be podman run zuul/nodepool-builder disk-image-create ... because you can skip the "pass in the docker sock" that way | 21:40 |
mordred | but | 21:40 |
mordred | we can come back to that | 21:40 |
corvus | a single builder that runs multiple docker containers to do dib building is going to either need explicit tags, or some special prep | 21:41 |
mordred | yeah | 21:42 |
corvus | like "docker pull --platform A; docker tag; docker rm; docker pull --platform B" -- iow, making the explicit tags on the consumer side | 21:42 |
mordred | nod - good point | 21:42 |
mordred | that wouldn't be impossible to do | 21:42 |
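The consumer-side retagging corvus describes might look like this sketch: pull each platform in turn and give it an explicit local tag, since the local cache holds only one arch per name at a time (tag names are hypothetical):

```shell
for arch in amd64 arm64; do
  # select the arch at pull time (experimental --platform, as above)
  docker pull --platform=linux/${arch} zuul/nodepool-builder:latest
  # make the arch explicit in a local tag
  docker tag zuul/nodepool-builder:latest zuul/nodepool-builder:latest-${arch}
  # drop the bare tag so the next pull doesn't shadow it
  docker rmi zuul/nodepool-builder:latest
done
```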
corvus | i wish dockerhub had invisible tags | 21:43 |
corvus | how do they sort? | 21:44 |
corvus | most recent by default | 21:45 |
mordred | I think a lot of people publish readmes with lists of tags | 21:46 |
mordred | because many projects have a BAJILLION tags | 21:46 |
mordred | corvus: didn't you say you'd found a dockerhub api for pushing readme content? | 21:46 |
corvus | so we'll end up with 9 tags for every release, instead of 3 | 21:47 |
corvus | mordred: yeah, i think it's possible | 21:47 |
avass | 9 tags? | 21:47 |
corvus | i pasted them in the etherpad (line 27) | 21:47 |
avass | technically 10 | 21:48 |
corvus | i mean 9 for each release | 21:48 |
corvus | so there will be 9 more for the 'latest' cohort | 21:48 |
corvus | er | 21:48 |
corvus | 3 more sorry | 21:48 |
corvus | so 3 for latest, then 9 for each release | 21:48 |
avass | ah yeah because the latest needs arch specific tags as well | 21:49 |
avass | brb | 21:50 |
avass | is the manifests list just a pointer to the other manifests or are they new copies of the manifests? | 21:54 |
corvus | pointer | 21:55 |
avass | if so there could be latest-<arch> and the releases are manifests | 21:55 |
avass | ah | 21:55 |
avass | sorry, getting late. "there could be latest-<arch> and the releases are just tags" 3,3.19,3.19.0 etc but that won't work then | 21:56 |
corvus | i may have misinterpreted the question | 21:56 |
corvus | manifest lists are lists of shas, where each sha is a manifest for that platform | 21:56 |
avass | I may have phrased it in an ambiguous way I see what you mean | 21:56 |
corvus | a manifest list doesn't require a tag either way | 21:56 |
avass | yes | 21:56 |
corvus | (it's just the 'manifest create' program does require tags already exist -- or... i guess technically we didn't try feeding it shas, that could work) | 21:57 |
avass | so there's no actual need for 3-<arch>, just latest-<arch> and the tags can be created from those | 21:57 |
corvus | well, i think if we're going to offer latest-arch as being available for end-users, we should do that for the releases too | 21:58 |
*** harrymichal has quit IRC | 21:58 | |
*** harrymichal has joined #zuul | 21:59 | |
corvus | it seems like not having -arch tags is generally fine, except that it makes running cross-arch builders slightly more complicated. so i think the question for us is whether we should serve that use case by having explicit arch tags, or just some instructions saying "you can run cross-arch builders, but you'll need to run some extra docker pull commands to get all the necessary arch images first" | 21:59 |
corvus | having said that, i bet doing that in k8s would be hard | 22:00 |
corvus | though i imagine that getting k8s to do binfmt qemu emulation would also be hard, so maybe that's not worth worrying about | 22:01 |
avass | it could also be a good reason to not make it harder | 22:02 |
avass | it's way too late for me to stay up, I'll check in tomorrow :) | 22:03 |
corvus | i'm guessing that if you're okay with installing the binfmt extensions to make it work, you can probably docker pull onto the k8s nodes too... | 22:04 |
corvus | avass: thanks for your help! i think we understand this now :) | 22:04 |
avass | corvus: no problem, it's been fun :) | 22:04 |
mordred | corvus: yeah - I imagine it just works in k8s | 22:07 |
*** harrymichal has quit IRC | 22:08 | |
*** harrymichal has joined #zuul | 22:09 | |
mordred | corvus: I mean - it's also worth noting that we could do the temp registry (no arch tags) version - and then just list an additional docker image in docker_images for nodepool_builder which is latest-arm64 with only arm64 in its arch list | 22:09 |
mordred | so we could still use the system just in our job config to make explicitly tagged per-arch images | 22:09 |
corvus | yeah, it's probably easier to expand the temp registry system to explicitly do arch tags than the other way around | 22:09 |
corvus | (i bet we could do the other way around, but may involve deleting tags from dockerhub) | 22:10 |
mordred | yeah | 22:10 |
corvus | so maybe tomorrow we should start on the temp registry idea, and evaluate whether we want explicit nodepool tags separately later. | 22:11 |
corvus | (and maybe the way we do that is to try it without explicit arch tags, and see how onerous it is?) | 22:11 |
mordred | corvus: ++ | 22:11 |
mordred | corvus: well - for us in opendev I'm certain it'll be easy | 22:11 |
mordred | because we can totally do the pull-and-retag thing in ansible | 22:12 |
mordred | without really even blinking an eye | 22:12 |
corvus | we could always document what we come up with, let people know about it, and ask folks to let us know if they want to do that but its awkward in their environment | 22:12 |
mordred | yeah | 22:12 |
openstackgerrit | Monty Taylor proposed zuul/zuul master: Add krb5-user to bindep for the images https://review.opendev.org/738119 | 22:15 |
mordred | corvus: ^^ also, lest we forget about that | 22:15 |
corvus | hopefully we can try that again tomorrow | 22:16 |
mordred | corvus: I hope so! | 22:18 |
*** rlandy|ruck is now known as rlandy|ruck|bbl | 22:25 | |
*** tosky has quit IRC | 22:40 | |
*** harrymichal has quit IRC | 22:48 | |
*** harrymichal has joined #zuul | 22:49 | |
*** harrymichal has quit IRC | 23:03 | |
*** harrymichal has joined #zuul | 23:04 | |
*** jamesmcarthur has quit IRC | 23:10 | |
mordred | corvus: uhm - https://zuul.opendev.org/t/zuul/build/35152121198a4519a03b23bc0e41b9d9 | 23:13 |
mordred | corvus: that's for https://review.opendev.org/#/c/738119/ | 23:13 |
mordred | corvus: oh - that's a misleading error | 23:14 |
mordred | fix coming | 23:14 |
openstackgerrit | Monty Taylor proposed zuul/zuul master: Add krb5-user to bindep for the images https://review.opendev.org/738119 | 23:17 |
corvus | ++ | 23:19 |
openstackgerrit | Monty Taylor proposed zuul/zuul master: Remove noninteractive flag from Dockerfile https://review.opendev.org/738122 | 23:20 |
mordred | corvus: also fixed it in the python-builder image | 23:20 |
mordred | since - you know - that's just how that should work there | 23:20 |
*** dennis_effa has quit IRC | 23:44 | |
*** cloudnull has quit IRC | 23:57 | |
*** cloudnull has joined #zuul | 23:58 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!