*** swest has quit IRC | 01:19 | |
*** swest has joined #zuul | 01:34 | |
*** bhavikdbavishi has joined #zuul | 03:13 | |
*** bhavikdbavishi1 has joined #zuul | 03:16 | |
*** bhavikdbavishi has quit IRC | 03:17 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 03:17 | |
*** saneax has joined #zuul | 05:26 | |
*** hashar has joined #zuul | 06:22 | |
*** yolanda has joined #zuul | 06:43 | |
*** bhavikdbavishi has quit IRC | 06:49 | |
*** ysandeep is now known as ysandeep|afk | 06:57 | |
*** iurygregory has joined #zuul | 07:10 | |
*** jcapitao has joined #zuul | 07:13 | |
*** rpittau|afk is now known as rpittau | 07:14 | |
*** bhavikdbavishi has joined #zuul | 07:27 | |
*** tosky has joined #zuul | 07:39 | |
*** dennis_effa has joined #zuul | 07:44 | |
*** hashar has quit IRC | 07:47 | |
*** jpena|off is now known as jpena | 07:56 | |
*** nils has joined #zuul | 08:07 | |
*** wuchunyang has joined #zuul | 08:08 | |
*** ysandeep|afk is now known as ysandeep | 08:08 | |
*** ttx has quit IRC | 08:19 | |
*** ttx has joined #zuul | 08:19 | |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Fix test race in test_client_dequeue_ref https://review.opendev.org/734032 | 08:27 |
tobiash | zuul-maint: I just got hit by this test race in a local run ^ | 08:30 |
*** sshnaidm|afk is now known as sshnaidm | 08:44 | |
openstackgerrit | Sorin Sbarnea (zbr) proposed zuul/zuul master: Make task errors expandable https://review.opendev.org/723534 | 08:47 |
*** wuchunyang has quit IRC | 09:08 | |
*** sgw has quit IRC | 09:54 | |
*** dennis_effa has quit IRC | 10:04 | |
*** bhavikdbavishi has quit IRC | 10:12 | |
*** rpittau is now known as rpittau|bbl | 10:15 | |
*** avass has quit IRC | 10:18 | |
*** bhavikdbavishi has joined #zuul | 10:23 | |
zbr | corvus: mordred tristanC: https://review.opendev.org/#/c/723534/ about more/less on errors seems ready. | 10:24 |
*** masterpe has joined #zuul | 10:42 | |
masterpe | I want to set zuul_log_cloud_config but where do I define the secret? | 10:42 |
*** wuchunyang has joined #zuul | 11:05 | |
*** jcapitao is now known as jcapitao_lunch | 11:08 | |
*** fbo|off is now known as fbo | 11:18 | |
*** wuchunyang has quit IRC | 11:26 | |
*** jpena is now known as jpena|lunch | 11:32 | |
*** rfolco has joined #zuul | 11:37 | |
*** bhavikdbavishi has quit IRC | 11:45 | |
*** dennis_effa has joined #zuul | 11:53 | |
*** rfolco is now known as rfolco|rover | 11:54 | |
*** jcapitao_lunch is now known as jcapitao | 12:01 | |
*** rlandy has joined #zuul | 12:06 | |
*** rpittau|bbl is now known as rpittau | 12:13 | |
*** avass has joined #zuul | 12:43 | |
*** saneax is now known as saneax_AFK | 12:54 | |
openstackgerrit | Fabien Boucher proposed zuul/zuul master: gitlab - add driver documentation https://review.opendev.org/733880 | 12:57 |
*** jpena|lunch is now known as jpena | 13:03 | |
*** bhavikdbavishi has joined #zuul | 13:07 | |
*** rlandy is now known as rlandy|mtg | 13:20 | |
*** iurygregory is now known as iurygregory|afk | 13:20 | |
openstackgerrit | Sagi Shnaidman proposed zuul/zuul-jobs master: Add ansible collection roles https://review.opendev.org/730360 | 13:31 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: [WIP] web UI: user login with OpenID Connect https://review.opendev.org/734082 | 13:32 |
*** sgw has joined #zuul | 13:45 | |
tobiash | masterpe: typically next to the job using it | 13:58 |
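For reference, a purely hypothetical sketch of what "next to the job" can look like; all names below are invented, sensitive values would really be !encrypted/pkcs1-oaep blobs encrypted with the project's public key, and the exact data the log-upload role expects should be checked against its documentation:

```bash
# Hypothetical sketch only: a secret and the job consuming it, side by side
# in the same config file of the same project (names are made up).
cat >> zuul.d/jobs.yaml <<'EOF'
- secret:
    name: example_log_cloud
    data:
      # clouds.yaml-style cloud settings go here; secret values are
      # !encrypted/pkcs1-oaep blobs encrypted with the project's public key
      auth_url: https://keystone.example.com/v3

- job:
    name: example-base
    post-run: playbooks/base/post-logs.yaml
    secrets:
      - name: zuul_log_cloud_config
        secret: example_log_cloud
EOF
```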
masterpe | I figured out how the secrets work. When I try to upload the logs to swift I get a failure: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed | 14:00 |
masterpe | The swift server has a private CA | 14:00 |
tobiash | then you need to add this to the ca certs on the executor | 14:01 |
masterpe | the ca is in /usr/local/share/ca-certificates/ | 14:01 |
masterpe | on the executor | 14:01 |
tobiash | (or disable ca cert verification in clouds.yaml) | 14:01 |
tobiash | mordred: openstacksdk uses requests for accessing swift right? | 14:03 |
tobiash | masterpe: maybe this helps: https://stackoverflow.com/questions/42982143/python-requests-how-to-use-system-ca-certificates-debian-ubuntu | 14:03 |
openstackgerrit | Sagi Shnaidman proposed zuul/zuul-jobs master: Add ansible collection roles https://review.opendev.org/730360 | 14:05 |
mordred | tobiash: yes it does. looking | 14:07 |
mordred | masterpe: yes - what tobiash said about telling requests to use system bundle. you can also put the path to the ca into clouds.yaml for that cloud. one sec, getting a link with an example | 14:09 |
*** bhavikdbavishi has quit IRC | 14:09 | |
mordred | masterpe: https://opendev.org/opendev/system-config/src/branch/master/playbooks/templates/clouds/bridge_all_clouds.yaml.j2#L154-L174 | 14:10 |
mordred | we use the cacert option there for the limestone cloud | 14:11 |
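A minimal sketch of the two options discussed above, assuming a Debian/Ubuntu executor; the file names and cloud name are placeholders:

```bash
# Option 1: add the private CA to the system bundle and point
# requests/openstacksdk at it (see the stackoverflow link above).
sudo cp example-private-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt

# Option 2: reference the CA file directly in clouds.yaml for that one
# cloud, like the cacert entry in the opendev example linked above.
cat >> ~/.config/openstack/clouds.yaml <<'EOF'
clouds:
  example-log-cloud:
    cacert: /usr/local/share/ca-certificates/example-private-ca.crt
    # auth settings omitted
EOF
```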
*** rlandy|mtg is now known as rlandy | 14:15 | |
tobiash | corvus, mordred: should we provide tagged docker images? Thinking about the release plan for scale out scheduler I think we should consider this because it's not really possible to stick with a release when running a dockerized zuul with the official images. | 14:26 |
fungi | tobiash: we do provide tagged docker images: https://hub.docker.com/r/zuul/zuul-scheduler/tags | 14:27 |
fungi | or do you mean specifically versioned tags? | 14:27 |
tobiash | fungi: we have only the latest tag. I mean versioned tags | 14:28 |
mordred | tobiash: yeah - it's a thing we've talked about before - there's still an unsolved problem to work out | 14:30 |
fungi | i want to say we've talked through this before and the main logistical challenge to solve is mapping a git tag back to a gerrit change since our docker uploads are triggered by change-merged events... but then the version number itself isn't properly baked into the image either | 14:30 |
mordred | yes - that | 14:30 |
fungi | so we likely need to actually build new images from the git tags | 14:30 |
mordred | it's a problem we should solve, because it's an important use case | 14:30 |
fungi | but... how do we test those? | 14:30 |
mordred | exactly | 14:30 |
mordred | I feel like corvus had an idea recently about how we could find the right image sha to tag | 14:31 |
tobiash | oh hrm | 14:31 |
fungi | i suppose we could do a build->test->upload cycle in a tag-triggered release pipeline | 14:31 |
avass | couldn't you do the reverse, trigger a release and tag + upload if the tests pass? | 14:31 |
fungi | the risk is that if a tag fails the test portion, we have a git tag with no corresponding docker image tag | 14:32 |
mordred | avass: but releases are triggered by git tags | 14:32 |
mordred | yeah | 14:32 |
mordred | git tags don't have a speculative aspect currently | 14:32 |
mordred | and once a tag is pushed it should be considered permanent, because tag deletes don't propagate to people's clones | 14:32 |
avass | mordred: yeah, currently, but they don't have to, do they? | 14:32 |
fungi | right, what we're really lacking is some way for zuul to test a git tag which isn't applied to the repository, and then apply that tag | 14:32 |
mordred | yah | 14:33 |
tobiash | mordred: openstack is doing that differently right? | 14:33 |
tobiash | via a release repo? | 14:33 |
mordred | tobiash: yeah - openstack has a release repo | 14:33 |
fungi | right, they propose releases into yaml files which list the desired tag name and commit | 14:33 |
*** bhavikdbavishi has joined #zuul | 14:33 | |
mordred | but I think it would be a really good experience for users of the zuul-jobs image build jobs if they could handle mapping a git tag to the appropriate already-tested docker image | 14:34 |
tobiash | with that workflow we could request a release, build and push docker images, and tag at the end | 14:34 |
avass | yeah we do the same with some projects | 14:34 |
fungi | and then release jobs can make preliminary tags and exercise those before having automation create the final git tags | 14:34 |
fungi | but it basically means a human can no longer sign and push git tags | 14:35 |
mordred | there's another thing - which is that docker tags aren't tags like git tags are - they're really more like branches | 14:36 |
*** iurygregory|afk is now known as iurygregory | 14:36 | |
fungi | ahh, so you'd have one tag per release series and recreate that tag onto the latest point release image for that series | 14:37 |
fungi | and then people would specify that tag and get the latest image for that particular release series? | 14:37 |
mordred | yeah. like - frequently if you have like a 3.0.0 - you might have a docker image with :3.0.0 and :3.0 and :3 | 14:38 |
corvus | it's an option; you don't have to do it that way | 14:38 |
*** bhavikdbavishi has quit IRC | 14:38 | |
mordred | and when you do 3.0.1 - :3.0 and :3 update but there might be a new tag called :3.0.1 | 14:38 |
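For illustration, the "tags that behave like branches" scheme is just ordinary docker tags being re-pointed at newer images (the image name is only an example):

```bash
# After releasing 3.0.1, :3.0 and :3 are re-pointed at the new image while
# :3.0.0 keeps referring to the previous one.
docker tag zuul/zuul-scheduler:3.0.1 zuul/zuul-scheduler:3.0
docker tag zuul/zuul-scheduler:3.0.1 zuul/zuul-scheduler:3
docker push zuul/zuul-scheduler:3.0
docker push zuul/zuul-scheduler:3
```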
avass | having changes for tags in gerrit would have been nice to be honest. | 14:38 |
mordred | avass: +1000 | 14:38 |
mordred | avass: I've wanted that for so long | 14:39 |
corvus | the practical thing we can do today is build new docker images from tags just like we do release artifacts | 14:39 |
mordred | yeah | 14:39 |
corvus | someone just needs to set up the jobs for that :) | 14:39 |
corvus | the harder thing is what mordred was talking about wrt retagging | 14:39 |
mordred | corvus: do we worry that image rebuilds are more dangerous than artifact rebuilds due to dependency shift? | 14:40 |
tobiash | if the image build fails we could re-trigger the release pipeline (if it's not broken due to a package update at least) | 14:40 |
corvus | mordred: isn't it roughly the same risk as the release artifacts? | 14:40 |
corvus | those are currently rebuilt as well. the additional risk is only the opendev base images and underlying debian images | 14:41 |
mordred | yeah - and I guess since we're pinning to specific python image versions that risk is also pretty low | 14:41 |
mordred | yeah - maybe just doing rebuilds for now is fine and we can improve that later if we figure out retagging | 14:42 |
corvus | i think so. at least, i think i'm comfortable with a rebuild model until we get around to implementing retagging | 14:42 |
corvus | i think for retagging, we'd have to find the change for the latest non-merge commit in the git tag, then query the api to find the gate build for it, then re-upload the image from the intermediate registry; so i guess not so much retagging as re-uploading. | 14:46 |
fungi | though that image won't have the right zuul version number internally since its creation predates the git tag, right? | 14:47 |
mordred | that is correct | 14:47 |
mordred | the git sha portion of its version number will be accurate - but it will not know about the tag | 14:48 |
openstackgerrit | Fabien Boucher proposed zuul/zuul master: gitlab - add driver documentation https://review.opendev.org/733880 | 14:48 |
fungi | as for generating new images at tag push, that seems like a reasonable compromise, just means (counterintuitively) that our non-release images are better tested than our release images, so we may want to explain that somewhere | 14:49 |
corvus | fungi: we don't explain that for our tarballs | 14:51 |
corvus | the version number issue may be enough to say that just rebuilding is better | 14:52 |
fungi | good point. as you said earlier, our tarballs have the same exact situation | 14:53 |
corvus | tobiash, fungi, mordred: if we don't implement rebuild tags by the time we want to cut 4.0, we could probably manually make a one-time 3.x tag in docker just to keep the release going | 14:55 |
mordred | corvus: we should maybe do that anyway - we're not planning another 3.x git tag are we? | 14:57 |
fungi | wild idea... what if we had some magic whereby a job could build release artifacts from a prerelease tag but manipulate the version to bake in the final version number. we could upload those images tagged with prerelease tags and then later the release tag could trigger a job which re-tags the image with the release tag | 14:58 |
mordred | fungi: ooh. I actually like that ... | 14:58 |
corvus | mordred: yeah, that's probably a good idea, because re-enqueuing it may get messy? not sure what would happen with the tarball releases in that case | 14:58 |
fungi | so, e.g., 4.0.0rc1 tag tells pbr the version is actually 4.0.0 when building the image but we tag it on dockerhub as 4.0.0rc1, and then later if/when we tag the same commit as 4.0.0 we replace or add to the existing rc tag on the image with 4.0.0 | 14:59 |
corvus | fungi, mordred: version munging would be pretty language specific right? like, to do that, we'd need to do some kind of python setuptools thing, but that isn't going to translate to something in rust, etc? | 15:00 |
fungi | yeah, this would probably be pbr-specific even | 15:00 |
fungi | so not a general solution | 15:00 |
mordred | I think it could _just_ be local git-tag based | 15:00 |
mordred | but we'd want 2 versions of the job | 15:00 |
mordred | one that did not munge tags (so you could have an image built with the pre-release tag as tagged) | 15:01 |
fungi | oh, that's a fair point, we could stick a fabricated git tag in the local copy of the git repo on the worker | 15:01 |
mordred | and one that was built with a fabricated additional git tag | 15:01 |
mordred | fungi: exactly | 15:01 |
fungi | so you have the rc1 image and the rc1-for-release image or whatever | 15:01 |
mordred | yeah | 15:01 |
corvus | in order to use this, you would need to know that you are going to tag a certain commit ahead of time? | 15:01 |
mordred | no - I think you do the image-rebuild-on-tag approach | 15:02 |
fungi | well, you'd need to know what you plan to tag it as, if it's viable | 15:02 |
mordred | to rebuild images on tags - but on tags matching a pre-release regex | 15:02 |
fungi | hence tying it to rc tags | 15:02 |
mordred | yeah | 15:02 |
corvus | i don't understand what problem this solves | 15:02 |
mordred | corvus: it solves having the wrong version reported by zuul in a retagged workflow | 15:03 |
fungi | the "existing image predates official release tag" problem | 15:03 |
mordred | if you build an image from the -rc tag as-if it was retagged without the -rc - then the retag job could grab that image and it would be correct for the target final tag | 15:04 |
fungi | so we can build and publish an image which internally considers itself to really be 4.0.0 but is generated from an rc tag and tagged as rc on dockerhub | 15:04 |
fungi | and then later all we have to do is update the tag on dockerhub *if* we decide to retag rc1 as the actual release in git | 15:04 |
corvus | wow i am so completely lost | 15:04 |
clarkb | the benefit to that is being able to separately verify the rc1 before calling it the release? | 15:04 |
corvus | i think i need one of you to explain step by step how an image gets created | 15:05 |
mordred | corvus: ok. let me do that | 15:05 |
clarkb | corvus: ya I'm with you on not understanding the problem this solves | 15:05 |
avass | wouldn't you need to store a lot of images or only be able to tag the latest commit then? | 15:05 |
mordred | before I explain step by step, let me explain the problem again just so we're on the same page | 15:05 |
fungi | clarkb: mainly that we can test the image before upload but not end up burning release tags if that image isn't viable (due to changes in dependencies or whatever) | 15:06 |
mordred | this is specifically talking about retagging in docker previously built images when we push a git tag | 15:06 |
mordred | and the issue is that we have code in our software that derives the version reported to the user from the git repository | 15:06 |
mordred | so if you re-tag a docker image that was built from a git repo that was not tagged, you will get a version reported that is based on that git repo state - and that will continue being the case even if the docker image gets retagged to match the git tag | 15:07 |
mordred | does that part make sense as an issue? | 15:07 |
mordred | so like zuul.opendev.org currently reports Zuul version: 3.18.1.dev166 319bbacf - which is fine for us, but if we tagged 319bbacf as 3.19.0 and then did a docker re-tag of the image that corresponded to 319bbacf and published to dockerhub as 3.19.0 - their zuul would still report 3.18.1.dev166 and that might be confusing | 15:10 |
clarkb | right but building on tag in the release pipeline solves that? | 15:12 |
mordred | right. I am not talking about that | 15:12 |
mordred | I am talking about the retagging workflow | 15:12 |
clarkb | ok reading scrollback what I read was "don't worry about retagging" | 15:12 |
clarkb | because it requires speculative tagging which we don't have, but maybe I misread that | 15:13 |
mordred | I believe that's "don't worry about retagging for now - we'll rebuild on tag" | 15:13 |
mordred | I think ultimately if we can have a retagging solution it jives much better with the story of "the image you publish is the image that was gated", which we otherwise go to great pains to implement - but I agree, due to the lack of speculative git tags it is not a quick implementation. I think, however, fungi's suggestion can greatly simplify some parts of the equation | 15:14 |
mordred | so let me describe stepwise how this could work | 15:15 |
clarkb | I'm guessing the way most deal with this is to always use the speculative version on "release" | 15:15 |
clarkb | so even for change 123456 we'd tag that as 3.next | 15:16 |
clarkb | and drop the .dev | 15:16 |
mordred | the way most people deal with this is garbage | 15:16 |
clarkb | ya I'm not assigning any value to it, merely thinking about how others must be addressing this in the wild | 15:16 |
clarkb | and I'm guessing as soon as development begins on version 3.3.2 that is the version used for all published images until 3.3.2 is cut and 3.3.3 starts | 15:17 |
mordred | no - they usually do a git commit dance | 15:17 |
openstackgerrit | Sagi Shnaidman proposed zuul/zuul-jobs master: Add ansible collection roles https://review.opendev.org/730360 | 15:17 |
mordred | so there is a version of 3.3.2-dev in the git repo, then they make a commit changing the version to 3.3.2 with a message "prepare for 3.3.2 release", then they hopefully remember to tag that commit, then they make another commit changing the version to 3.3.3-dev | 15:18 |
mordred | like - that's basically 98% of why we wrote pbr :) | 15:18 |
clarkb | right, that was all before dockerhub though? | 15:19 |
clarkb | on dockerhub what you've described is that others are retagging as things move | 15:19 |
clarkb | and if you do that having the .dev versions is ugly | 15:19 |
mordred | well - what I'm describing is us retagging | 15:19 |
mordred | other people rebuild | 15:19 |
clarkb | gotcha | 15:19 |
mordred | so - the process to do fungi's suggestion ... we tag 319bbacf as 3.19.0-rc1 | 15:22 |
mordred | two build/publish jobs are triggered | 15:22 |
mordred | one builds, tests and publishes the image as-is as 3.19.0-rc1. we can test here because if we have a git rc tag with no published artifact it's not really that big of a deal, we can always make an rc2 - that's fine | 15:22 |
fungi | 3.19.0.0rc1 but who's counting ;) | 15:23 |
fungi | (or is that 3.19.0rc1? i never can remember) | 15:23 |
mordred | the other job has a pre-playbook that looks at the tag, strips the rc and locally re-tags the git repo without the rc suffix - then runs the build/test/publish sequence | 15:23 |
mordred | it publishes it as, let's say 3.19.0-rc1-retagged (or something like that) | 15:24 |
mordred | and the image contents of 3.19.0-rc1-retagged would be zuul software with a 3.19.0 tag applied | 15:24 |
mordred | then - when we decide 3.19.0-rc1 was good, we tag that same commit as 3.19.0 | 15:25 |
fungi | note that we can also run tests in the release pipeline if we want, maybe create the image and stick it in a build registry, run our integration tests on it, then upload | 15:25 |
mordred | fungi: yes - absolutely | 15:25 |
fungi | if tests fail due to a problem with the image, we may not upload some rcs | 15:26 |
fungi | which doesn't seem like a problem to me at least | 15:26 |
mordred | the 3.19.0 tag triggers a retagging job. that job looks at the tag, and now it doesn't have to look for latest non-merge commit - it can just look for the most recent rc commit matching the same tag | 15:26 |
fungi | just fix and move on to rc2 | 15:26 |
mordred | or - really - scratch that last part | 15:26 |
corvus | mordred: i understand now, thanks. | 15:27 |
mordred | it can just look for the docker image matching the most recent rc with the -retagged suffix - and it can do the retag on dockerhub | 15:27 |
mordred | corvus: cool. | 15:27 |
corvus | the part i was missing was that we would tag the same commit twice, one with -rc, then promote an edited version of the rc | 15:28 |
corvus | and both of those tag events would be retroactive | 15:28 |
*** iurygregory has quit IRC | 15:29 | |
fungi | that pattern is borrowed from openstack's rc model... you keep tagging a new rc and when you're happy with one, you re-tag the same commit as the release | 15:29 |
*** iurygregory has joined #zuul | 15:29 | |
*** bhavikdbavishi has joined #zuul | 15:29 | |
fungi | but building the rc artifact as if it's the release artifact is the novel bit | 15:29 |
fungi | anyway, i did call it a "wild idea" and it may be more complexity than it's worth | 15:30 |
mordred | yeah - I could see it working, and also see it being too much :) | 15:30 |
corvus | mordred: why bother with the "one builds, tests and publishes the image as-is as 3.19.0-rc1." part? | 15:31 |
fungi | the rc bit does sort of mirror how we've been logistically releasing zuul though... we do come up with a candidate commit of sorts, decide it's been sufficiently exercised, and then tag it as the release. tagging an rc there first when we identify a potential release point is just a slightly more formal declaration | 15:31 |
mordred | corvus: good point | 15:31 |
mordred | corvus: that is probably wasted | 15:31 |
corvus | k. could be useful if we really wanted an rc for people to test, but that's not how we roll :) | 15:31 |
fungi | i didn't originally suggest parallel jobs for rc-built-as-rc and rc-built-as-release (only the latter) | 15:32 |
mordred | corvus: yeah - in fact we could just publish the images as 3.19.0-rc1 but with the git retag done | 15:32 |
corvus | (so maybe a component of a more general-purpose complete system, but not something we'd use for the zuul project itself) | 15:32 |
mordred | yeah | 15:32 |
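A rough shell sketch of the local re-tag step in the rc workflow sketched above; the tag format and image names are assumptions, and a real implementation would live in a job's pre-playbook:

```bash
# Sketch only: strip the rc suffix and re-tag the checkout locally so the
# built software reports the final version. The local tag is never pushed.
rc_tag=$(git describe --tags --exact-match)   # e.g. 3.19.0-rc1 (assumed format)
final_tag=${rc_tag%-rc*}                      # -> 3.19.0
git tag -f "$final_tag"                       # local-only tag on HEAD

# The build/test/publish sequence then runs as usual; the resulting image
# could be pushed under an rc docker tag (e.g. :3.19.0-rc1-retagged) and
# later just re-tagged as :3.19.0 on dockerhub when the real git tag lands.
```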
*** bhavikdbavishi has quit IRC | 15:42 | |
avass | well, it still seems to me that the tags shouldn't be what triggers the release (unless you want to save every build from a merged change). but the tags should be created after a release build succeeds | 15:51 |
avass | but then you'd need a manual trigger that can start testing any commit | 15:52 |
mordred | avass: yeah - the manual trigger process you describe is what openstack uses a releases repo for - someone adds a commit id there and jobs then run stuff | 15:54 |
*** ysandeep is now known as ysandeep|afk | 15:54 | |
mordred | avass: for instance: https://review.opendev.org/#/c/730663/2/deliverables/victoria/osc-lib.yaml | 15:55 |
mordred | but that's a bit harder to generalize | 15:56 |
avass | mordred: yeah, we do the same. but I'm not sure I like that solution entirely | 15:56 |
mordred | I think one of the things I like about the above rc-based system is that you only have to save the rc builds | 15:56 |
mordred | and - you have the build system build those rc's as if they actually were the release - which makes them even closer to being true release candidates :) | 15:57 |
avass | yep | 15:57 |
mordred | they're still not 100% speculative - in that you couldn't propose a tag, and then have a depends-on somewhere else in a patch that bumps a minimum... but we don't have that anywhere | 15:58 |
avass | I guess what I don't like about it is having more tags in the repo, but that's a bit nitty :) | 15:58 |
mordred | (I wish we did - I actually have need for that right now) | 15:58 |
avass | you could technically create a new stub branch that tests against HEAD^ | 15:58 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: zuul-web: support OPTIONS for protected endpoints https://review.opendev.org/734134 | 15:59 |
avass | and tags that if it succeeds, but that would leave a lot of branches. Well unless you delete them | 15:59 |
mordred | yeah - and then that's kind of similar to an out-of-band file - it might not be clear what's happening, and possibly hard to implement in a reproducible manner | 16:00 |
avass | yeah | 16:00 |
mordred | we could pretty easily make some jobs that implement the rc idea and just say "if you tag things using this strategy, the following will happen" | 16:00 |
mordred | at the cost of some extra tags :) | 16:01 |
avass | except that the jobs could have changed between the tagged commit and the head of the branch, and if you really want to release that commit you would need to branch out anyway | 16:02 |
fungi | that's true, however the image isn't being rebuilt | 16:04 |
avass | speaking more generally :) | 16:04 |
fungi | oh, well sure. i think the idea with this model is that you build artifacts from the rc tag as if they're the release (by locally retagging in the jobs) and then don't rebuild again when the actual release is tagged from the same commit | 16:05 |
fungi | just update external references to the rc-as-release artifacts | 16:06 |
avass | yup | 16:06 |
avass | I guess having changes for tags would solve all of this really. | 16:07 |
*** rpittau is now known as rpittau|afk | 16:08 | |
corvus | mordred: what's your need for more tags in the repo right now? | 16:08 |
avass | I believe it's building + testing without using the release tag (if I got that right) | 16:09 |
openstackgerrit | Sagi Shnaidman proposed zuul/zuul-jobs master: Add ansible collection roles https://review.opendev.org/730360 | 16:09 |
corvus | mordred, avass: ah, i misread that, thanks :) | 16:11 |
*** ysandeep|afk is now known as ysandeep | 16:15 | |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: zuul-web: refactor auth token handling code https://review.opendev.org/734139 | 16:23 |
*** dennis_effa has quit IRC | 16:24 | |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: zuul-web: support OPTIONS for protected endpoints https://review.opendev.org/734134 | 16:26 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: zuul-web: refactor auth token handling code https://review.opendev.org/734139 | 16:29 |
*** ysandeep is now known as ysandeep|away | 16:30 | |
*** sshnaidm is now known as sshnaidm|afk | 16:34 | |
zbr | corvus: avass : https://review.opendev.org/#/c/723534/ ? (more) | 16:36 |
sshnaidm|afk | cores, please take a look: https://review.opendev.org/#/c/730360/ | 16:37 |
avass | zbr: oh, nice | 16:38 |
avass | sshnaidm|afk: looking :) | 16:41 |
zbr | avass: i wonder if you could create another linter rule that fails if someone adds a new role that does not have testing jobs. | 16:41 |
avass | zbr: you don't need a linter rule for that, just something like "find roles -mindepth 1 -maxdepth 1 | xargs -I {} bash -c 'git grep -q {} -- test-playbooks'" | 16:44 |
zbr | it's not so easy, we still have a few without tests | 16:45 |
zbr | i think it's a bit tricky because you could have a broken trigger rule and still fail to test the change | 16:46 |
zbr | not sure how to enforce something like this, we may even have jobs that test a group of roles | 16:46 |
zbr | there is no 1-1 match, afaik | 16:47 |
avass | zbr: replace find with: git diff-tree --name-only -r --diff-filter A HEAD | grep roles | awk '{split($0,a,"/"); print a[2]}' | uniq | 16:51 |
avass | :) | 16:52 |
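Roughly combining the two one-liners above into a single check (this assumes a role only counts as tested if its name appears verbatim somewhere under test-playbooks/):

```bash
# Sketch: list roles added by the commit under test and flag any whose name
# never shows up in test-playbooks/.
git diff-tree --name-only -r --diff-filter=A HEAD \
  | awk -F/ '/^roles\// {print $2}' | sort -u \
  | while read -r role; do
      git grep -q "$role" -- test-playbooks \
        || echo "no test playbook references role: $role"
    done
```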
corvus | zbr, avass: i think it's probably better that we ask zuul-jobs reviewers to think about the test coverage when adding roles. determining whether a test is adequate should be part of any change. | 16:52 |
avass | corvus: yeah I agree :) | 16:52 |
zbr | indeed, not everything needs to be automated | 16:52 |
avass | but automating everything is the goal isn't it? ;) | 16:54 |
corvus | how about a compromise: we automate the tests after we've automated writing the code to start with? :) | 16:54 |
*** jcapitao has quit IRC | 16:55 | |
avass | or we write the tests first and let an ai write the code to fulfill the tests! | 17:00 |
fungi | "automate all the things it makes sense to automate" is a mouthful, and not as catchy ;) | 17:01 |
openstackgerrit | Fabien Boucher proposed zuul/zuul master: WIP gitlab: implement git push support https://review.opendev.org/734159 | 17:05 |
*** fbo is now known as fbo|off | 17:08 | |
*** bhavikdbavishi has joined #zuul | 17:17 | |
*** jpena is now known as jpena|off | 17:20 | |
*** nils has quit IRC | 17:25 | |
*** dennis_effa has joined #zuul | 17:37 | |
openstackgerrit | Merged zuul/zuul master: Make task errors expandable https://review.opendev.org/723534 | 17:49 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: zuul-web: support OPTIONS for protected endpoints https://review.opendev.org/734134 | 17:51 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: zuul-web: refactor auth token handling code https://review.opendev.org/734139 | 17:52 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: zuul-web: refactor auth token handling code https://review.opendev.org/734139 | 17:58 |
openstackgerrit | Merged zuul/zuul master: Fix quickstart gating, Add git name and email to executor https://review.opendev.org/724096 | 18:03 |
*** saneax_AFK has quit IRC | 18:03 | |
*** dustinc has joined #zuul | 18:16 | |
*** rlandy is now known as rlandy|mtg | 18:20 | |
avass | corvus: would it make sense to extend nodepool-builder for dib with ec2? I found out how nice that tool actually is and might set up some roles to import images to aws :) | 18:23 |
avass | corvus: and do we want to support multiple builders, like amazon image builder as well? | 18:24 |
Shrews | avass: there has been talk in the past about having a "build API" similar to how nodepool has a "driver API", and i think that would be a welcome change. It's not something I had in mind when designing the initial code, so there will be some significant (I'm guessing) reworking of that to allow such an API. It would take someone familiar with more than one or two different build processes, I would think, to make it a good, extensible API. | 18:29 |
avass | Shrews: I'm looking into it a bit at a time, we're not using it right now since it only supports dib and building something in AWS is a bit easier than importing it through s3. | 18:30 |
mordred | avass: for context- the very original version of nodepool made images by booting vms in cloud and then saving them (the approach things like packer use) and we abandoned it due to its fragility at scale - and also so that we could make images that were actually consistent no matter who the cloud provider was | 18:32 |
avass | mordred: yeah it makes sense | 18:32 |
mordred | avass: so I'd still advocate for building images and uploading them to ec2 rather than snapshotting an image based on a base ami | 18:32 |
mordred | BUT | 18:32 |
mordred | that's what I'd suggest- I can't see a reason to make a hard rejection of a driver that did it the other way | 18:33 |
mordred | it just seems much much harder | 18:33 |
avass | not needing credentials and extra resources to build images is a plus as well :) | 18:33 |
mordred | yah | 18:33 |
mordred | also - corvus has been working on code for dib to be able to use arbitrary docker image as the source of an initial root filesystem | 18:34 |
mordred | end-goal there being that you should be able to just express what you want in your cloud images in a dockerfile | 18:34 |
avass | that would be cool | 18:34 |
mordred | and then have dib transform that into a bootable cloud image | 18:34 |
mordred | all that said - we should totally grow the support for nodepool-builder to interact with aws :) | 18:35 |
avass | yeah, I'm planning on creating some roles for s3 log uploads and things like that 'whenever I get time'. but there's always so much to do :) | 18:36 |
fungi | the biggest reason for the current builder design is that we wanted to be able to upload consistent images to a dozen or more regions of independent service providers | 18:37 |
fungi | and the "easiest" way to do that was build once upload everywhere, rather than trying to force builds in multiple locations to produce identical results | 18:37 |
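For context, a hedged sketch of the build-locally-and-import-through-S3 flow being advocated here; the dib elements, bucket, device and snapshot IDs are all placeholders, and none of this is existing nodepool-builder functionality:

```bash
# Build the image with diskimage-builder, then import it into EC2 via S3.
disk-image-create -t raw -o zuul-node ubuntu-minimal vm simple-init growroot

aws s3 cp zuul-node.raw s3://example-bucket/zuul-node.raw
aws ec2 import-snapshot \
  --disk-container "Format=raw,UserBucket={S3Bucket=example-bucket,S3Key=zuul-node.raw}"
# ...wait for the import task to finish, note the resulting snapshot id, then:
aws ec2 register-image --name zuul-node --ena-support \
  --root-device-name /dev/sda1 \
  --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-0123456789abcdef0}"
```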
mordred | avass: "whenever I get time" is my favorite schedule | 18:38 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: zuul-web: support OPTIONS for protected endpoints https://review.opendev.org/734134 | 18:43 |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: zuul-web: refactor auth token handling code https://review.opendev.org/734139 | 18:44 |
*** rlandy|mtg is now known as rlandy | 18:56 | |
*** hashar has joined #zuul | 18:59 | |
avass | oh and getting support for other builders might be necessary unless dib wants to support windows :) | 19:04 |
fungi | sure... i mean, basically nodepool already has support for using whatever process you like to create images and then just referencing those in your configuration | 19:05 |
fungi | you're on the hook for making sure they exist in your provider(s) | 19:05 |
clarkb | ya I think the key bit is what shrews pointed out, we'll need someone with enough familiarity with more than one tool/process to produce something that makes sense. But the basic design is already set up to support various builders | 19:06 |
fungi | but i do think extending the builder could make sense | 19:06 |
clarkb | I expect that we could get pretty far by making the command to run configurable (and maybe removing elements as top level config). Then rely on the command configuration as well as env vars to produce the expected result | 19:07 |
clarkb | but having never used amazon image builder I couldn't say for sure. I think that would work for packer though | 19:07 |
avass | yeah amazon image builder might have been a bad example | 19:08 |
clarkb | then we end up with the tricky bit being upload if the builder operates like dib (because right now we can only upload to openstack clouds) | 19:08 |
fungi | the only non-openstack cloud i've ever touched was a vmware-based one i used to run for a hosting company ~10 years ago, so i'm really no help | 19:09 |
avass | I haven't actually used amazon image builder myself, so we'll probably stick with packer for now | 19:15 |
avass | and since we'll probably end up using both azure and aws we might contribute with something from that :) | 19:15 |
*** bhavikdbavishi has quit IRC | 19:20 | |
mhu | hey there, could I have some reviews on these: https://review.opendev.org/#/q/status:open+project:zuul/zuul+branch:master+topic:CORS_OPTIONS please? | 19:43 |
AJaeger | mhu: could you review https://review.opendev.org/#/c/731606/, please? | 19:46 |
AJaeger | mhu: that's a policy update that you might be interested in. | 19:46 |
*** dennis_effa has quit IRC | 19:52 | |
*** hashar has quit IRC | 20:11 | |
*** harrymichal has joined #zuul | 20:23 | |
*** y2kenny has joined #zuul | 21:04 | |
openstackgerrit | James E. Blair proposed zuul/zuul master: Contain pipeline exceptions https://review.opendev.org/733917 | 21:09 |
openstackgerrit | James E. Blair proposed zuul/zuul master: Detect Gerrit gate pipelines with the wrong connection https://review.opendev.org/733929 | 21:09 |
corvus | tristanC: ^ that should be ready to go now | 21:09 |
y2kenny | Can someone tell me how to specify a tox test environment to test? (for example, testenv:functional_kubernetes) I just started doing driver dev and I am totally new to tox | 21:13 |
y2kenny | (nodepool driver) | 21:14 |
fungi | the generic tox job from zuul-jobs takes a tox env name as a parameter i think... pulling up docs | 21:15 |
fungi | oh, wait, you mean locally? | 21:15 |
fungi | tox -e functional_kubernetes | 21:15 |
fungi | sorry i thought you were asking about jobs there for a moment, my bad ;) | 21:16 |
y2kenny | yes locally... Ah ok thanks. For some reason I thought only py3/pep8, etc. applies to the -e | 21:17 |
y2kenny | and looks like I also had a typo before... I can continue now, thanks! | 21:20 |
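For reference, the standard tox flags for discovering and running environments locally (nothing here is nodepool-specific):

```bash
tox -l                         # environments in the default envlist (pep8, py3, ...)
tox -a                         # every defined environment, including the functional ones
tox -e functional_kubernetes   # run one specific environment
```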
y2kenny | um... I am getting yaml load error (load() without Loader is deprecated, as the default Loader is unsafe)... am I missing some dependencies? | 21:33 |
y2kenny | and for some reason that error comes from kube_config.py... | 21:34 |
corvus | y2kenny: that should just be a warning which you can probably ignore | 21:40 |
y2kenny | for some reason it shows up under stderr | 21:40 |
corvus | y2kenny: yes, python warnings get written to stderr, but they generally don't affect execution of the program | 21:41 |
y2kenny | another error I see... load() takes 3 positional arguments but 4 were given... | 21:41 |
y2kenny | a lot of those... | 21:41 |
corvus | now that's a problem :) | 21:41 |
y2kenny | Oh... | 21:41 |
y2kenny | I think it's my driver | 21:41 |
y2kenny | ok... | 21:41 |
y2kenny | both k8s and openshift functional requires minikube right? | 21:42 |
corvus | the openshift functional test job in the gate uses an openshift installation rather than minikube | 21:44 |
corvus | y2kenny: you can check the playbooks/nodepool-functional-openshift/pre.yaml playbook to see how it's set up | 21:44 |
y2kenny | ok. | 21:44 |
y2kenny | thanks | 21:44 |
*** harrymichal has quit IRC | 22:04 | |
*** threestrands has joined #zuul | 22:15 | |
*** threestrands has quit IRC | 22:16 | |
*** threestrands has joined #zuul | 22:17 | |
*** threestrands has quit IRC | 22:18 | |
*** threestrands has joined #zuul | 22:18 | |
*** threestrands has quit IRC | 22:19 | |
*** threestrands has joined #zuul | 22:20 | |
*** threestrands has quit IRC | 22:21 | |
*** threestrands has joined #zuul | 22:21 | |
openstackgerrit | Merged zuul/zuul master: Contain pipeline exceptions https://review.opendev.org/733917 | 22:22 |
*** threestrands has quit IRC | 22:22 | |
*** threestrands has joined #zuul | 22:23 | |
*** threestrands has quit IRC | 22:24 | |
*** threestrands has joined #zuul | 22:24 | |
*** threestrands has quit IRC | 22:25 | |
*** threestrands has joined #zuul | 22:26 | |
*** threestrands has quit IRC | 22:27 | |
*** threestrands has joined #zuul | 22:27 | |
*** threestrands has quit IRC | 22:28 | |
*** threestrands has joined #zuul | 22:29 | |
*** threestrands has quit IRC | 22:30 | |
*** threestrands has joined #zuul | 22:30 | |
*** dmellado_ has joined #zuul | 23:10 | |
*** dmellado has quit IRC | 23:11 | |
*** dmellado_ is now known as dmellado | 23:11 | |
*** tosky has quit IRC | 23:13 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!