SpamapS | #FryingImportantFish | 00:36 |
*** mordred has quit IRC | 00:49 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: run-test-command: support list in test_command variable https://review.openstack.org/610888 | 01:23 |
*** bhavikdbavishi has joined #zuul | 02:46 | |
*** bjackman has joined #zuul | 03:22 | |
*** bhavikdbavishi has quit IRC | 03:33 | |
*** bhavikdbavishi has joined #zuul | 03:38 | |
*** rlandy|bbl is now known as rlandy | 04:21 | |
openstackgerrit | Ian Wienand proposed openstack-infra/nodepool master: Add Fedora 29 testing https://review.openstack.org/618671 | 04:58 |
bjackman | Are there still bugs related to https://storyboard.openstack.org/#!/story/2002080 ? | 05:15 |
bjackman | My Depends-On only works if I use the deprecated Change-Id format. I may have something misconfigured, only thing I'm sure of is that my connection['gerrit'].baseurl matches the URL I'm putting in my Depends-On field | 05:18 |
bjackman | However my connection['gerrit'].baseurl is very different from my connection['gerrit'].server (the former is the external URL, the latter is just the name of the docker service) | 05:18 |
*** chandankumar has joined #zuul | 05:33 | |
*** chandankumar is now known as chkumar|ruck | 05:33 | |
tristanC | bjackman: what version of gerrit are you using? | 05:58 |
bjackman | The one from the gerritcodereview/gerrit Docker image, looks to be 2.16.rc0-1 | 05:59 |
bjackman | Ah yeah which is the latest AFAICS | 06:00 |
bjackman | (@ tristanC) | 06:00 |
tristanC | bjackman: perhaps the schema changed, here is what was needed to close the 2002080 story: https://review.openstack.org/433748 and https://review.openstack.org/577154 | 06:01 |
tristanC | (to support gerrit-2.14) | 06:01 |
tristanC | bjackman: it seems like the gerrit url format changed, what are you using as depends-on url? | 06:03 |
bjackman | tristanC, So.. my URLs look like "$baseurl/c/$repo/+/1021" | 06:03 |
bjackman | tristanC, looking in the zuul source looks like the regex is r"/%s(\#\/c\/)?(\d+)[\w]*" % prefix_ui | 06:04 |
tristanC | bjackman: yeah, that's probably the issue. the regex needs to be updated for '/c/$repo/+/' | 06:05 |
bjackman | I wonder if I can get Gerrit to exclude it | 06:05 |
bjackman | Actually probably better to update zuul to work with gerrit defaults | 06:06 |
tristanC | (\#\/c\/|c\/.*\/\+\/)? may be enough? | 06:07 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool master: Only setup zNode caches in launcher https://review.openstack.org/619440 | 06:07 |
bjackman | * Slowly turns the regex-parsing crank on my brain * | 06:07 |
*** bhavikdbavishi has quit IRC | 06:07 | |
bjackman | tristanC, Yeah that looks about right (although I think you might need brackets around the subexpressions either side of the '|' ??) | 06:08 |
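The combined pattern discussed above can be exercised with a short Python sketch; the base URL, project name, and change numbers here are made up for illustration:

```python
import re

# Combined pattern from the discussion: old "#/c/<number>" URLs
# (Gerrit <= 2.15) and new "c/<project>/+/<number>" URLs (2.16).
# The alternation group is optional so a bare trailing number still matches.
change_re = re.compile(r"(\#\/c\/|c\/.*\/\+\/)?(\d+)")

def change_number(url, base="https://gerrit.example.com/"):
    """Return the change number from a Gerrit change URL, or None."""
    if not url.startswith(base):
        return None
    m = change_re.match(url[len(base):])
    return int(m.group(2)) if m else None
```

Extra brackets around each alternative are not needed: the `|` already splits the group into the two prefixes, and the trailing `?` applies to the whole group.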
bjackman | tristanC, At some point I think I'm going to have to get a dev version of zuul up & running so maybe I can test it then | 06:08 |
bjackman | tristanC, ATM I'm just using the docker images from the registry | 06:09 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool master: Fix print-zk tool for python3 https://review.openstack.org/619441 | 06:10 |
tristanC | bjackman: it seems like you should be able to build them locally using pbrx. i don't know how it works though | 06:10 |
bjackman | tristanC, OK will give that a go at some point, probably have to be next week though. | 06:15 |
bjackman | Hopefully there's a way to bind-mount your source directory or something | 06:15 |
*** bhavikdbavishi has joined #zuul | 06:46 | |
*** bhavikdbavishi has quit IRC | 06:51 | |
*** bjackman has quit IRC | 07:01 | |
*** quiquell|off is now known as quiquell | 07:01 | |
*** bjackman has joined #zuul | 07:01 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool master: Implement an OpenShift resource provider https://review.openstack.org/570667 | 07:02 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool master: Implement an OpenShift Pod provider https://review.openstack.org/590335 | 07:02 |
openstackgerrit | Ian Wienand proposed openstack-infra/nodepool master: Add Fedora 29 testing https://review.openstack.org/618671 | 07:08 |
*** bjackman has quit IRC | 07:11 | |
*** pcaruana has joined #zuul | 07:22 | |
*** bjackman has joined #zuul | 07:27 | |
*** eumel8 has joined #zuul | 07:51 | |
*** bhavikdbavishi has joined #zuul | 08:12 | |
*** gtema has joined #zuul | 08:17 | |
*** ParsectiX has joined #zuul | 08:23 | |
*** jpena|off is now known as jpena | 08:37 | |
bjackman | tristanC, found a lazy way to build a container with my own copy of the zuul source, just made a Dockerfile that uses the default zuul/zuul-scheduler image then overwrites the Python package: https://paste.gnome.org/p5detyvee | 08:59 |
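The paste link above has since expired; a minimal sketch of the approach being described (image tag and paths are assumptions, not the actual paste contents) could look like:

```dockerfile
# Reuse the published scheduler image, then overwrite the installed
# zuul package with the local source checkout.
FROM zuul/zuul-scheduler
COPY . /tmp/zuul-src
RUN pip install --no-deps --force-reinstall /tmp/zuul-src
```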
*** hashar has joined #zuul | 09:01 | |
tristanC | bjackman: that's smart :) | 09:02 |
*** mrhillsman is now known as openlab | 09:19 | |
*** openlab is now known as mrhillsman | 09:20 | |
quiquell | bjackman: that's cool | 09:29 |
quiquell | tristanC, bjackman: I am running the executor + merger on the same node; could running them on different nodes reduce job start times? | 09:30 |
quiquell | I mean parallelizing clones per project and the like | 09:30 |
quiquell | Like having multiple mergers | 09:31 |
tobiash | corvus, Shrews, clarkb: just found out that our launcher registration in zk is broken. We call it /nodepool/launchers but what we actually register is each pool with the supported labels of the whole launcher. I see two ways to solve it. Either really register pools with all supported labels or really register launchers. | 09:31 |
tobiash | corvus, Shrews, clarkb: I think what we want is to actually register the launcher right? | 09:31 |
tristanC | tobiash: not sure if it's related, but node request allocation to provider currently goes through each provider, even if the provider doesn't have the label | 09:37 |
tristanC | which makes the declined_by list hard to investigate when you have many providers with different labels | 09:38 |
tobiash | tristanC: that's defined behavior, a node request is marked as failed once *all* providers declined | 09:39 |
tristanC | couldn't we just check all providers actually providing the labels? :) | 09:40 |
tobiash | that requires a more complicated algorithm to determine if a node request failed | 09:42 |
tobiash | that would probably be a tradeoff between many declines and more complicated failure detection | 09:42 |
tobiash | but I could see some benefit in changing this because it turns out that lock,decline,lock can produce quite some load on the system if you have many providers and pools | 09:45 |
tobiash | Shrews, corvus: what do you think about this idea? ^ | 09:45 |
tristanC | tobiash: yes, so back to your original suggestion, shouldn't we need to register pool and labels? | 09:46 |
tobiash | tristanC: the original idea was to register the launcher. Your idea above would require that we register pools with labels too (that's exactly what we do now but with the wrong name). | 09:47 |
tobiash | so maybe we should 'fix' launcher registration and add pool registration if we want to change the decline algorithm | 09:48 |
tobiash | for the record, this was the change that introduced the launcher registration: https://review.openstack.org/548376 | 09:50 |
tobiash | so judging from the commit message this really was meant to be the launcher | 09:51 |
*** electrofelix has joined #zuul | 09:52 | |
tobiash | oh actually the current decline algorithm treats the registered launchers as pools | 10:04 |
*** bhavikdbavishi has quit IRC | 10:10 | |
*** bjackman has quit IRC | 10:10 | |
*** ianychoi has quit IRC | 10:25 | |
*** jpena is now known as jpena|off | 10:37 | |
*** bhavikdbavishi has joined #zuul | 10:41 | |
*** bjackman has joined #zuul | 10:48 | |
*** toabctl has joined #zuul | 10:54 | |
*** sshnaidm|afk is now known as sshnaidm | 10:54 | |
quiquell | I have a Connection reset by peer at zuul_log.streamer | 10:55 |
quiquell | Exactly here https://github.com/openstack-infra/zuul/blob/33699fa31689b2b40cff5b103a55f7cbd80fc096/zuul/lib/log_streamer.py#L139 | 10:55 |
quiquell | tobiash, tristanC: ^ any idea what can produce it ? | 10:56 |
tristanC | quiquell: does the instance accept tcp connection on 19885? | 10:58 |
*** bhavikdbavishi has quit IRC | 11:01 | |
quiquell | tristanC: Yep, opened the port at my openstack cloud | 11:01 |
quiquell | tristanC: I see the "shell" ansible module working | 11:01 |
quiquell | tristanC: Do we need something else special about the port? Is it just a tcp ingress port? | 11:02 |
tristanC | quiquell: oh, so the shell command works but it fails after? | 11:03 |
quiquell | tristanC: Something like that, it also gets stuck there and the zuul job never ends | 11:05 |
quiquell | tristanC: If I go to the node, I see different playbooks being executed | 11:05 |
quiquell | tristanC: Me getting the reset by peer is bad, but the real issue is how zuul handles it, leaving the job stuck | 11:06 |
tristanC | quiquell: that's odd, is this happening with the static node from the quickstart? | 11:06 |
quiquell | tristanC: nope openstack node | 11:06 |
quiquell | tristanC: I am connecting to my rdo personal tenant | 11:07 |
tristanC | there shouldn't be multiple playbooks being executed on an openstack node | 11:07 |
tristanC | perhaps your job is changing the firewall or killing the console log stream? | 11:07 |
*** zigo has joined #zuul | 11:08 | |
tristanC | quiquell: you say the job is getting stuck, maybe there is something going on with docker-compose outbound connection from the zuul-executor container? | 11:09 |
tristanC | is there connection error in executor.log? | 11:09 |
quiquell | tristanC: I have this https://paste.fedoraproject.org/paste/bIJF03NhF5nUuL1iiG-eAg | 11:14 |
quiquell | tristanC: between the good line and the reset by peer there are 5 idle minutes... maybe there is a timeout | 11:14 |
tristanC | quiquell: perhaps it's because of https://success.docker.com/article/ipvs-connection-timeout-issue ? | 11:16 |
quiquell | tristanC: Could be, so I have to change the systctl.conf of the container ? | 11:17 |
tristanC | quiquell: probably from the host, i don't know much how docker actually works... | 11:18 |
quiquell | looks like alpine is tuning the tcp stuff | 11:19 |
tristanC | does docker-compose use ipvs? | 11:20 |
quiquell | tristanC: no, but the sysctl is valid I can set it in the executor | 11:21 |
quiquell | going to try it | 11:21 |
*** dkehn has quit IRC | 11:22 | |
*** hashar has quit IRC | 11:24 | |
bjackman | Do I have the right remote URL to submit patches here: git://git.openstack.org/openstack-infra/zuul ? | 11:27 |
*** ianychoi has joined #zuul | 11:31 | |
AJaeger | bjackman: you submit to gerrit via git-review, see http://docs.openstack.org/infra/manual/developers.html - but that's the right URL for the Zuul repo | 11:33 |
AJaeger | bjackman: http://git.openstack.org/cgit/openstack-infra/zuul/tree/README.rst#n33 | 11:34 |
bjackman | Yeah I'm getting "fatal: remote error: access denied or repository not exported: /openstack-infra/zuul". Will scour the docs to see if I've missed something | 11:34 |
openstackgerrit | Fabien Boucher proposed openstack-infra/nodepool master: Add ip-pool option to the openstack provider https://review.openstack.org/619525 | 11:38 |
fbo | Shrews: pabelanger hi, I was surprised not to find an option to set the floating ip pool in the openstack provider pool definition. I ran into trouble in a new cloud I was testing where multiple floating ip pools were available to the tenant. | 11:44 |
tobiash | fbo: you can set this in the clouds.yaml | 11:44 |
fbo | tobiash: how ? | 11:45 |
tobiash | let me dig it up | 11:45 |
tobiash | fbo: https://review.openstack.org/602618 | 11:46 |
tobiash | you probably want nat_source | 11:46 |
tobiash | but that's very new in the openstacksdk so you should check if you have a new-enough version | 11:47 |
fbo | tobiash: ok I have the 0.19 that should be fine. I don't see any doc in the patch. I'm testing it | 11:53 |
fbo | I guess I only add nat_source: <floating-ip-pool-name> in clouds.yaml | 11:54 |
openstackgerrit | Brendan proposed openstack-infra/zuul master: Update change URL for Gerrit v2.16 https://review.openstack.org/619533 | 11:55 |
AJaeger | bjackman: worked? ^ Great! | 11:55 |
tristanC | bjackman: gg :) | 11:56 |
bjackman | Got there in the end :D | 11:57 |
bjackman | tristanC, I stole your regex BTW, is there a way to credit you? In linux I've put a "Suggested-By" | 11:58 |
tobiash | fbo: yes, that should work | 12:00 |
tristanC | bjackman: oh that's ok. In openstack we use "Co-Authored-By: ", but no need here :) | 12:01 |
fbo | tobiash: It does not work. I added it in clouds.yaml clouds.<name>.nat_source: <name> | 12:01 |
fbo | I see in the nodepool launcher log (DEBUG) that a POST is done on os-floating-ips endpoint on the wrong floating ip pool (not the one I specified) | 12:03 |
fbo | tobiash: With the patch I submitted the ip-pool is taken into account and the right pool is used. Another advantage is that the setting is per provider pool, not global to the provider. | 12:04 |
bjackman | So, I'd like to get the Gerrit driver to optionally behave a bit more like the Github driver, in that it doesn't need to run every individual commit through every pipeline (my team has lots of pressure on test resources due to specialised HW) | 12:29 |
bjackman | Obviously the semantics of Gerrit are pretty different from Github so there's some discussion to be had on what that behaviour would even look like. | 12:29 |
bjackman | What's the best way to start that discussion? Mailing list? Or are people happy discussing that kind of thing here? | 12:29 |
tristanC | bjackman: how would you like zuul to behave then? | 12:31 |
tristanC | bjackman: there has been discussion to be able to skip the gate pipeline when check ran on the same state as when approved (e.g. if no change got merged after the check result) | 12:33 |
bjackman | tristanC, well I'm not at all sure of the details but what I ultimately want is to run check against every commit, but "chunk" the gate pipeline wrt. commits. So if you submit 10 commits in a single stack, it can just run the gate pipeline once against the whole stack | 12:35 |
bjackman | One way I can see that behaving is that the changes don't run in the gate pipeline until the whole patch stack has the pipeline.require.gerrit satisfied | 12:37 |
bjackman | Then it just runs the tip of the stack | 12:37 |
tristanC | bjackman: is the stack part of a single project? | 12:38 |
bjackman | Yeah I'd imagine if you submitted changes to multiple projects it would consider them as multiple stacks | 12:38 |
bjackman | tristanC, so I think when I say "stack" I mean what Gerrit calls a "relation chain" and not the "same topic" or "submitted together" | 12:40 |
tristanC | bjackman: that's an interesting idea, it may be possible to implement it without a major refactor, but you should ask corvus, or maybe write it down the mailing list | 12:41 |
bjackman | tristanC, OK I will send a little proposal to the mailing list as I'm heading off soon (UTC+7). | 12:44 |
*** gtema has quit IRC | 12:45 | |
*** hashar has joined #zuul | 12:48 | |
quiquell | tristanC: after setting the tcp keepalive, looks like it's working :-) | 12:59 |
quiquell | tristanC: you can set it up in docker-compose, so maybe you guys want it in the zuul quickstart | 12:59 |
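A hedged docker-compose fragment along those lines (service name and exact values are assumptions; the sysctls mirror the IPVS keepalive article linked earlier):

```yaml
services:
  executor:
    image: zuul/zuul-executor
    # Send TCP keepalives well inside the ~900 s idle timeout so the
    # finger/log-stream connections are not silently dropped.
    sysctls:
      net.ipv4.tcp_keepalive_time: 600
      net.ipv4.tcp_keepalive_intvl: 30
      net.ipv4.tcp_keepalive_probes: 10
```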
tristanC | quiquell: nice | 13:00 |
*** hashar has quit IRC | 13:21 | |
*** gtema has joined #zuul | 13:37 | |
*** bjackman has quit IRC | 13:41 | |
*** hashar has joined #zuul | 13:45 | |
*** pcaruana has quit IRC | 13:50 | |
*** dkehn has joined #zuul | 13:56 | |
*** chkumar|ruck has quit IRC | 14:02 | |
*** nilashishc has joined #zuul | 14:04 | |
*** rfolco has quit IRC | 14:15 | |
*** rfolco has joined #zuul | 14:17 | |
*** pcaruana has joined #zuul | 14:25 | |
quiquell | tristanC: now I don't have the tcp issue but log streaming is not closing https://paste.fedoraproject.org/paste/NJPokn-PbQpv0vYpy-nL1Q | 14:33 |
quiquell | tristanC: But don't know if it causes the 'ERROR' | 14:35 |
tobiash | quiquell: did you start the zuul_stream module in your base job? | 14:37 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool master: Add second level cache of nodes https://review.openstack.org/619025 | 14:37 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool master: Add second level cache to node requests https://review.openstack.org/619069 | 14:37 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool master: Only setup zNode caches in launcher https://review.openstack.org/619440 | 14:37 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool master: Asynchronously update node statistics https://review.openstack.org/619589 | 14:37 |
quiquell | tobiash: I run "start-zuul-console" | 14:39 |
quiquell | tobiash: at pre | 14:39 |
quiquell | tobiash: this is it ? | 14:39 |
tobiash | quiquell: yes, that was what I meant | 14:39 |
* tobiash keeps forgetting this name | 14:40 | |
tobiash | quiquell: is that a shell/command task on a remote host? | 14:40 |
quiquell | tobiash: but looking at the code it does not look related to the job failing, it's just a message at cleanup | 14:40 |
quiquell | tobiash: yep | 14:40 |
tobiash | with ssh connection? | 14:40 |
quiquell | tobiash: yep | 14:40 |
tobiash | hrm | 14:40 |
quiquell | tobiash: Looking here https://github.com/openstack-infra/zuul/blob/5ae158a4f65cb4e3ed1558e9e12a36b021386340/zuul/ansible/callback/zuul_stream.py#L283 | 14:41 |
quiquell | tobiash: It cannot break the job, it's just a message about lingering stuff | 14:41 |
tobiash | yes, so if the console stream is missing you might want to check the buildlog.json file | 14:41 |
tobiash | this often contains more information if log streaming had a problem | 14:41 |
tobiash | the log streaming is only used for the buildlog.txt file and live streaming | 14:42 |
quiquell | tobiash: You're right, found it. I don't have it in the .txt because the log stream is broken, but I do see the task that fails | 14:44 |
quiquell | tobiash: thanks | 14:44 |
quiquell | tobiash: Question is why the log streamer goes bad | 14:45 |
tobiash | quiquell: the zuul action module sends an exit code via the log stream. That is the stop indication the executor waits for. | 14:46 |
tobiash | quiquell: so the interesting part would be, is there an error in the ansible module so it doesn't send it? | 14:46 |
tobiash | quiquell: are you able to share the part of the json with the failed task? | 14:46 |
tobiash | quiquell: just to double check, do you have this problem for all shell/command tasks or just for this one? | 14:47 |
* quiquell preparing a pastebin | 14:49 |
quiquell | tobiash: https://paste.fedoraproject.org/paste/XI0koy2neiDuggiVs0hh4g | 14:49 |
quiquell | tobiash: it's huge | 14:49 |
quiquell | tobiash: look for "TASK [standalone : Lint playbooks]" | 14:50 |
quiquell | tobiash: The failure is expected and legit | 14:50 |
quiquell | tobiash: that's why zuul is failing, but I don't see it at .log | 14:50 |
quiquell | .txt I mean | 14:50 |
*** quiquell is now known as quiquell|off | 14:53 | |
quiquell|off | tobiash: Have to drop now | 14:54 |
quiquell|off | tobiash: I will read the stuff tomorrow in case you find something | 14:54 |
quiquell|off | tobiash: thanks a lot !!! | 14:54 |
*** bhavikdbavishi has joined #zuul | 14:55 | |
evrardjp | I am curious about the isolation of the pre-run and run plays. If I set_fact in a pre-run, will the fact be available/set on the 'run' play? | 15:01 |
dmsimard | evrardjp: it won't | 15:03 |
dmsimard | evrardjp: that's not a zuul thing, that's an ansible thing | 15:03 |
dmsimard | evrardjp: variables do not carry from one play to the next, even if it's two plays in the same playbook | 15:03 |
evrardjp | that's not true , except if that changed | 15:04 |
dmsimard | is it ? | 15:04 |
evrardjp | you can define it in a play, it will be accessible in next play of the same playbook | 15:04 |
evrardjp | (so you have to keep the same runner) | 15:04 |
dmsimard | ah, I guess you're right | 15:04 |
dmsimard | but they definitely wouldn't carry across different playbooks | 15:04 |
evrardjp | ofc it is depending on where you set said fact :) | 15:05 |
evrardjp | that's true | 15:05 |
dmsimard | maybe they would carry with import_playbook but not through different ansible-playbook executions for sure | 15:05 |
evrardjp | so that's why I am asking if pre-run, run and post-run are basically 'assembled' into a single run of a runner, or if they spawn different runners | 15:05 |
dmsimard | zuul currently runs a single ansible-playbook execution per playbook | 15:05 |
evrardjp | ok | 15:05 |
dmsimard | different | 15:05 |
evrardjp | that clarifies it :) | 15:05 |
evrardjp | yeah so that won't work | 15:06 |
evrardjp | thanks! | 15:06 |
dmsimard | what are you trying to do ? | 15:06 |
evrardjp | weird stuff again :) | 15:06 |
tobiash | evrardjp: if you want to do that you currently need to dump the vars to disk and load them in the next playbook | 15:06 |
evrardjp | tobiash: yeah that would do the trick | 15:06 |
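A minimal sketch of that dump-and-reload pattern (the path and variable names are invented for illustration):

```yaml
# pre-run playbook: compute the value and persist it on the remote node
- hosts: all
  tasks:
    - set_fact:
        chosen_backend: postgres
    - copy:
        content: "{{ {'chosen_backend': chosen_backend} | to_nice_json }}"
        dest: /tmp/zuul-vars.json

# run playbook (a separate ansible-playbook execution): read it back
- hosts: all
  tasks:
    - slurp:
        src: /tmp/zuul-vars.json
      register: saved
    - set_fact:
        chosen_backend: "{{ (saved.content | b64decode | from_json).chosen_backend }}"
```

`slurp` is used rather than `include_vars` because it reads the file from the remote node, where the first playbook wrote it.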
dmsimard | I wonder if ara_record could be used for this | 15:06 |
dmsimard | you'd still need a way to refer to the previous playbook or something | 15:07 |
evrardjp | dmsimard: I wanted to re-use a single playbook: defining a parent job with pre-run and run, and a child job which defined another pre-run, so it would automatically alter behavior of run without changing code of run | 15:07 |
evrardjp | basically adding extra tasks in between conditionally on jobs | 15:08 |
evrardjp | but I will do this differently :) | 15:08 |
dmsimard | evrardjp: I've done something like this before iirc | 15:08 |
dmsimard | let me check | 15:08 |
evrardjp | what I did was already use the "onion layers" part of zuul to make things more modular, but never use it as an injector of tasks :p | 15:09 |
evrardjp | never used* | 15:09 |
evrardjp | dmsimard: you don't have holidays? | 15:09 |
dmsimard | Canadian and American thanksgiving are different :) | 15:09 |
evrardjp | I thought that kind of tradition extended to Canada | 15:09 |
dmsimard | like, literally different date | 15:09 |
evrardjp | I see | 15:09 |
evrardjp | makes sense | 15:10 |
dmsimard | well, I mean, stores are all over black friday | 15:10 |
tobiash | quiquell|off: that log looks a bit weird to me, are you running ansible within a command task? | 15:10 |
evrardjp | dmsimard: even here we have "black friday" at every corner | 15:10 |
evrardjp | that doesn't mean ANYTHING. | 15:10 |
dmsimard | evrardjp: ah, so the way I've done it wasn't with set_fact | 15:11 |
dmsimard | evrardjp: but rather using vars in the job | 15:11 |
dmsimard | and job vars carry across all playbooks | 15:11 |
evrardjp | yeah | 15:12 |
evrardjp | I have done that already :) | 15:12 |
dmsimard | tobiash: could evrardjp use zuul_return for passing data from a playbook to the next ? | 15:13 |
evrardjp | It was task injection that had my interest, not really var injection. But I guess I can add tasks conditionally and pass a var to trigger that behaviour | 15:13 |
dmsimard | pabelanger: ^ I think you were doing something like that at one point ? | 15:13 |
tobiash | dmsimard, evrardjp: currently only to the next job | 15:13 |
tobiash | passing to the next playbook is an open issue | 15:13 |
dmsimard | tobiash: ah, next job -- not next playbook | 15:13 |
evrardjp | that's still interesting | 15:13 |
tobiash | open means open for contribution ;) | 15:13 |
evrardjp | ofc | 15:14 |
dmsimard | oh yeah I remember now | 15:14 |
dmsimard | we were trying to dynamically trigger jobs based on the results of a "parent" job | 15:14 |
tobiash | I think that was already discussed some months ago but I didn't see a patch that add this | 15:14 |
evrardjp | Is there a doc pointing to this kind of "next job" pipeline? | 15:14 |
dmsimard | because the parent job had business logic | 15:14 |
evrardjp | pipeline is probably the wrong term | 15:14 |
evrardjp | certainly the wrong term | 15:14 |
*** quiquell|off is now known as quiquell | 15:15 | |
dmsimard | evrardjp: https://zuul-ci.org/docs/zuul/user/jobs.html?highlight=zuul_return#return-values ? | 15:15 |
tobiash | yes, that describes what you can do with zuul_return | 15:15 |
tobiash | it's a lot :) | 15:15 |
evrardjp | :) | 15:16 |
evrardjp | Gives me enough bullets :D | 15:16 |
dmsimard | tobiash: next step is a zuul_job ansible module which triggers a job through the zuul api :P | 15:17 |
evrardjp | dmsimard: interesting :) | 15:19 |
evrardjp | that's a way to keep things between runs easily :p | 15:19 |
tobiash | it can at least already be used to skip child jobs | 15:19 |
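For instance, per the return-values docs linked above, a hedged sketch of a playbook task that tells Zuul to skip all child jobs:

```yaml
- hosts: localhost
  tasks:
    # An empty child_jobs list means no child jobs are run;
    # a non-empty list selects which children to keep.
    - zuul_return:
        data:
          zuul:
            child_jobs: []
```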
quiquell | tobiash: Yep, we are not fully integrated with zuulv3 using pure ansible | 15:37 |
quiquell | tobiash: we have some glue bash code still | 15:37 |
*** quiquell is now known as quiquell|off | 15:49 | |
*** hashar has quit IRC | 16:10 | |
*** bhavikdbavishi has quit IRC | 16:19 | |
*** bhavikdbavishi has joined #zuul | 16:20 | |
*** bhavikdbavishi1 has joined #zuul | 16:23 | |
*** bhavikdbavishi has quit IRC | 16:24 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 16:24 | |
*** bhavikdbavishi has quit IRC | 16:48 | |
*** nilashishc has quit IRC | 17:38 | |
pabelanger | dmsimard: evrardjp: because of the fact cache we enable in zuul, you should be able to use set_fact: foo, cachable: true and the next playbook should get access to it. However, I haven't actually tried it | 17:43 |
pabelanger | https://docs.ansible.com/ansible/2.5/modules/set_fact_module.html | 17:43 |
openstackgerrit | Markus Hosch proposed openstack-infra/zuul master: Add config parameters for WinRM timeouts https://review.openstack.org/619630 | 17:46 |
dmsimard | pabelanger: TIL about cacheable but I'm not convinced that was the intended purpose | 17:47 |
dmsimard | which is okay, I just wonder if we can rely on it | 17:47 |
pabelanger | actually, the docs say: "These variables will be available to subsequent plays during an ansible-playbook run, but will not be saved across executions even if you use a fact cache." | 17:48 |
dmsimard | persisting facts would typically be done through /etc/ansible/facts.d | 17:49 |
dmsimard | pabelanger: ah, there you go | 17:49 |
pabelanger | not sure why it wouldn't persist across executions, that is the purpose of the fact cache too | 17:50 |
pabelanger | I'd give it a test, just to be sure | 17:50 |
pabelanger | but yah, facts.d is likely the best way | 17:51 |
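A hedged sketch of the facts.d approach (the fact name is invented): a JSON file dropped under /etc/ansible/facts.d shows up as `ansible_local.<name>` after the next fact gathering.

```yaml
- hosts: all
  become: true
  tasks:
    - file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"
    - copy:
        content: '{"stage": "pre-run-done"}'
        dest: /etc/ansible/facts.d/zuul_stage.fact
# In a later playbook, after facts are gathered again, the value is
# available as ansible_local.zuul_stage.stage
```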
*** gtema has quit IRC | 18:07 | |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul master: Fix manual dequeue of github items https://review.openstack.org/619272 | 18:39 |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul master: Retry queries for commits https://review.openstack.org/575137 | 18:40 |
*** electrofelix has quit IRC | 18:53 | |
tobiash | pabelanger: I saw that you abandoned your fedmsg change, shall we close https://storyboard.openstack.org/#!/story/2001350 too? | 19:08 |
*** hashar has joined #zuul | 19:55 | |
tobiash | pabelanger: left a comment/question on 593150 | 20:01 |
openstackgerrit | Merged openstack-infra/nodepool master: Move k8s install to pre playbook https://review.openstack.org/617699 | 20:12 |
openstackgerrit | Merged openstack-infra/zuul master: Add allowed-triggers and allowed-reporters tenant settings https://review.openstack.org/554082 | 20:14 |
openstackgerrit | Merged openstack-infra/zuul master: encrypt_secret: support self-signed certificates via --insecure argument https://review.openstack.org/617281 | 20:14 |
openstackgerrit | Merged openstack-infra/zuul-jobs master: upload-logs-swift: Turn FileList into a context manager https://review.openstack.org/592850 | 20:41 |
openstackgerrit | Merged openstack-infra/zuul-jobs master: upload-logs-swift: Keep the FileList in the indexer class https://review.openstack.org/592851 | 20:41 |
openstackgerrit | Ian Wienand proposed openstack-infra/nodepool master: [wip] Add Fedora 29 testing https://review.openstack.org/618671 | 20:55 |
pabelanger | tobiash: re: fedmsg, I only abandoned it because it seemed some of the fedmsg libraries in fedora were going away. Given we have mqtt now, I figured we'd just use that | 21:15 |
*** hashar has quit IRC | 22:14 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!