*** swest has quit IRC | 02:48 | |
*** bhavikdbavishi has joined #zuul | 02:53 | |
*** swest has joined #zuul | 03:03 | |
*** sdake has quit IRC | 03:08 | |
*** sdake has joined #zuul | 03:10 | |
openstackgerrit | Merged openstack-infra/nodepool master: bindep: add sudo https://review.openstack.org/635876 | 03:54 |
*** chandankumar is now known as chkumar|ruck | 04:32 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: add-build-sshkey: remove previously authorized build-sshkey https://review.openstack.org/632620 | 04:44 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: DNM: trying to test add-build-sshkey https://review.openstack.org/636087 | 04:44 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: DNM: trying to test add-build-sshkey https://review.openstack.org/636087 | 04:54 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: DNM: trying to test add-build-sshkey https://review.openstack.org/636087 | 05:00 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: DNM: trying to test add-build-sshkey https://review.openstack.org/636087 | 05:16 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: DNM: trying to test add-build-sshkey https://review.openstack.org/636087 | 05:27 |
*** swest has quit IRC | 05:29 | |
*** bjackman has joined #zuul | 06:00 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: add-build-sshkey: remove previously authorized build-sshkey https://review.openstack.org/632620 | 06:00 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: DNM: trying to test add-build-sshkey https://review.openstack.org/636087 | 06:06 |
*** swest has joined #zuul | 06:16 | |
*** swest has quit IRC | 06:20 | |
*** swest has joined #zuul | 06:34 | |
*** sshnaidm|off is now known as sshnaidm|afk | 06:45 | |
*** quiquell|off is now known as quiquell|rover | 06:48 | |
*** saneax has joined #zuul | 06:53 | |
*** AJaeger has quit IRC | 07:10 | |
*** AJaeger has joined #zuul | 07:23 | |
*** toabctl has joined #zuul | 07:26 | |
*** saneax is now known as saneax|lunch | 07:29 | |
*** quiquell|rover is now known as quique|rover|brb | 07:39 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: DNM: trying to test add-build-sshkey https://review.openstack.org/636087 | 07:50 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: DNM: trying to test add-build-sshkey https://review.openstack.org/636087 | 08:00 |
*** saneax|lunch is now known as saneax | 08:01 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: DNM: trying to test add-build-sshkey https://review.openstack.org/636087 | 08:05 |
*** gtema has joined #zuul | 08:09 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: DNM: trying to test add-build-sshkey https://review.openstack.org/636087 | 08:11 |
bjackman | Does the name of a project in Zuul have to match its URL suffix wrt. the connection's base URL? | 08:17 |
*** electrofelix has joined #zuul | 08:18 | |
bjackman | e.g. if my connection.baseurl is https://git.kernel.org , and I want to add the Linux kernel stable tree as a project, do I have to call it "pub/scm/linux/kernel/git/stable/linux.git/" | 08:18 |
*** jpena|off is now known as jpena | 08:22 | |
bjackman | Had a look in the code and it does indeed seem that way.. I think I'll just rather cheekily use a longer base URL - if it turns out we need repos from a different section of git.kernel.org I'll just add a separate connection. ¯\_(ツ)_/¯ | 08:34 |
bjackman | Long base URL is hidden from users but long project names aren't | 08:34 |
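A rough sketch of the workaround bjackman describes, assuming a git-driver connection whose baseurl in zuul.conf already carries the long path prefix (the connection and tenant names below are hypothetical):

```yaml
# Hypothetical tenant config sketch: with the connection's baseurl set to
# https://git.kernel.org/pub/scm/linux/kernel/git/stable in zuul.conf,
# the project name that shows up in job configuration stays short.
- tenant:
    name: example
    source:
      kernel-stable:              # hypothetical connection name
        untrusted-projects:
          - linux.git
```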
*** electrofelix has quit IRC | 08:35 | |
*** electrofelix has joined #zuul | 08:37 | |
*** quique|rover|brb is now known as quiquell|rover | 08:39 | |
*** themroc has joined #zuul | 08:58 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul master: executor: enable zuul_return to update Ansible inventory https://review.openstack.org/590092 | 09:08 |
openstackgerrit | Matthieu Huin proposed openstack-infra/zuul master: [WIP] web: add tenant and project scoped, JWT-protected actions https://review.openstack.org/576907 | 09:11 |
*** bhavikdbavishi has quit IRC | 09:16 | |
*** avass has joined #zuul | 09:20 | |
fbo | Hello! We have some Zuul users that would like to trigger jobs based on the status of a web resource (an artifact, for instance). They want to monitor when the artifact has changed and then trigger job(s). | 09:25 |
*** sshnaidm|afk is now known as sshnaidm | 09:26 | |
fbo | To provide the feature I'm proposing a new driver https://review.openstack.org/#/c/635567/ | 09:26 |
fbo | Is this a feature that could be acceptable and eventually land in Zuul? Or is there a better solution to trigger jobs based on an external resource change? | 09:28 |
tobiash | fbo: I think that can be useful | 09:33 |
avass | I know you can use job.files to run a job if certain files are affected but does zuul supply information about which files were affected to the jobs? | 09:46 |
avass | affected by a change | 09:51 |
tobiash | avass: you can git diff against origin/<target branch> | 09:55 |
tobiash | avass: origin/<target branch> will contain the speculative future state of the change ahead (or tip of the branch if there is no change ahead) | 09:56 |
avass | tobiash: I was figuring either that or querying gerrit. was just wondering if zuul were already carrying that information | 09:58 |
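A minimal sketch of what tobiash suggests (not something Zuul hands to jobs directly): an Ansible task that lists the files a change touches by diffing against the speculative target branch Zuul prepared; zuul.branch and zuul.project.src_dir are standard Zuul job variables.

```yaml
# Sketch: list the files this change touches, relative to the speculative
# state of the target branch that Zuul prepared in the repo checkout.
- name: Collect the list of changed files
  command: "git diff --name-only origin/{{ zuul.branch }}"
  args:
    chdir: "{{ ansible_user_dir }}/{{ zuul.project.src_dir }}"
  register: changed_files

- name: Show the changed files
  debug:
    var: changed_files.stdout_lines
```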
fbo | tobiash: alright, great, well what I wanted is a first agreement from a core that says "it's not a completely dumb change" :) so then I can move forward and implement unit tests. | 10:08 |
*** bhavikdbavishi has joined #zuul | 10:08 | |
tobiash | fbo: but you should also discuss it with corvus ;) | 10:10 |
fbo | tobiash: yes sure. corvus what do you think about this idea to have a new driver URLTrigger like https://review.openstack.org/#/c/635567/ ? | 10:14 |
*** bjackman has quit IRC | 10:33 | |
openstackgerrit | Fabien Boucher proposed openstack-infra/zuul-jobs master: Add the skip_bindep option to the tox job https://review.openstack.org/635877 | 10:37 |
openstackgerrit | Fabien Boucher proposed openstack-infra/zuul-jobs master: Add the tox_install_bindep option to the tox job https://review.openstack.org/635877 | 10:38 |
AJaeger | fbo: your change is missing tests and documentation and thus will not merge as is. But sure, we can discuss whether we want this first... | 10:54 |
*** bjackman has joined #zuul | 10:58 | |
fbo | AJaeger: yes I know, but I prefer to get first feedback on the idea and implementation, then move forward with unit tests and docs if the feedback is good. | 10:59 |
AJaeger | fbo: good practice is to mention this in commit message or mark as WIP with comment. That way it's clear what your intention is... | 11:00 |
fbo | AJaeger: yes you're right I'll amend the change then | 11:01 |
openstackgerrit | Fabien Boucher proposed openstack-infra/zuul master: [WIP] URLTrigger driver time based - artifact change jobs triggering driver https://review.openstack.org/635567 | 11:07 |
*** bjackman has quit IRC | 11:18 | |
*** gtema has quit IRC | 11:18 | |
*** gtema has joined #zuul | 11:27 | |
*** bjackman has joined #zuul | 11:28 | |
*** bhavikdbavishi has quit IRC | 11:56 | |
*** goern has joined #zuul | 11:57 | |
*** bjackman has quit IRC | 12:07 | |
*** jpena is now known as jpena|lunch | 12:32 | |
*** swest has quit IRC | 12:41 | |
*** swest has joined #zuul | 12:43 | |
*** rlandy has joined #zuul | 13:10 | |
*** jpena|lunch is now known as jpena | 13:35 | |
*** quiquell|rover is now known as quique|rover|eat | 13:49 | |
*** quique|rover|eat is now known as quiquell|rover | 14:17 | |
*** gtema has quit IRC | 14:33 | |
jkt | I wonder how much work it would be to convert the static provider in nodepool to execute an action such as a node reboot after a job finishes | 14:36 |
jkt | my TL;DR use case can be roughly characterized as "I would prefer not to install OpenStack with all its layers on these eight physical servers" | 14:37 |
jkt | so I was thinking of VMs provisioned via libvirt with an auto-reimage-on-VM-reboot :) | 14:38 |
pabelanger | you could do a post-run playbook today, with reboot (node) and wait_for. When the job finishes, your static node is in the correct state. | 14:44 |
jkt | pabelanger: thanks, I didn't know that ansible can do *that* | 14:51 |
jkt | pabelanger: the cosmetic issue is that it ties image-specific bits into a base job, which will become a problem once I migrate to that openstack that $other_department has been promising for quite some time | 14:52 |
jkt | but let me try that, I think I like it :) | 14:53 |
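A rough sketch of pabelanger's suggestion, assuming a reasonably recent Ansible that ships the reboot module (older versions would need a shell reboot plus wait_for instead):

```yaml
# Post-run playbook sketch: reboot the static node so the
# reset-to-known-good-on-reboot mechanism jkt describes kicks in before
# the node is handed out again.
- hosts: all
  tasks:
    - name: Reboot the node and wait for it to come back
      become: true
      reboot:
        reboot_timeout: 1200
```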
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul master: Fix excluded config if excluded in earlier tenant https://review.openstack.org/636146 | 15:03 |
*** bhavikdbavishi has joined #zuul | 15:03 | |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul master: Don't exclude config if excluded in earlier tenant https://review.openstack.org/636146 | 15:04 |
*** snapiri has quit IRC | 15:08 | |
*** ParsectiX has joined #zuul | 15:20 | |
*** saneax has quit IRC | 15:33 | |
*** avass has quit IRC | 15:38 | |
*** quiquell|rover is now known as quiquell|off | 15:39 | |
*** electrofelix has quit IRC | 16:05 | |
*** ParsectiX has quit IRC | 16:16 | |
SpamapS | jkt: you could write an ovirt driver. :) | 16:21 |
*** chkumar|ruck is now known as chandankumar | 16:24 | |
*** themroc has quit IRC | 16:31 | |
*** bhavikdbavishi has quit IRC | 16:32 | |
*** bjackman has joined #zuul | 16:42 | |
corvus | fbo: sorry i should have responded to your email. i think a urltrigger in principle is a good idea. basing it on the timer trigger makes sense. i wonder even if you could actually inherit from the timer trigger. because basically it's a timer trigger which only triggers if the url has changed. that might save some code. | 16:43 |
dkehn | tobiash: when setting the executor's hostname in zuul.conf & docker-compose.yaml (example), is the assumption that it will resolve so other containers (zuul-web) can find it? | 16:48 |
*** dmsimard6 has joined #zuul | 16:48 | |
jkt | SpamapS: :) | 16:50 |
*** dmsimard6 is now known as dmsimard | 16:51 | |
fbo | corvus: hi, alright yes I should be able to refactor to use most of the existing timer driver. thanks for the feedback I'll provide a new PS :) | 16:54 |
tobiash | dkehn: yes | 16:55 |
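For reference, a minimal docker-compose sketch of the setup dkehn asks about; image names and the hostname value are illustrative, not taken from a specific quick-start. On a shared compose network, containers resolve each other by service/host name.

```yaml
# Illustrative compose sketch (image names are placeholders): the
# hostname given to the executor service here should match the hostname
# configured for the executor in zuul.conf so that zuul-web can resolve
# it over the shared compose network, e.g. for console log streaming.
version: "3"
services:
  executor:
    image: zuul/zuul-executor
    hostname: executor
  web:
    image: zuul/zuul-web
```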
*** ParsectiX has joined #zuul | 16:58 | |
*** sdake has quit IRC | 17:12 | |
jkt | do I have access to some info from nodepool within a job definition in ansible? | 17:13 |
jkt | to e.g. call a certain playbook just on nodes which originated within a given nodepool static pool | 17:13 |
corvus | jkt: there are two things available to help with that | 17:14 |
corvus | 1 -- there is some information from nodepool available; it's not formally specified, but you can see it in the inventory file. things like cloud, region, etc. we've used that to, for example, configure region-local mirrors. | 17:15 |
*** sdake has joined #zuul | 17:16 | |
corvus | 2 -- when zuul requests a node from nodepool, you can assign it a name, or assign it to a group. so if you request nodes which only come from a specific region, you can name or group those in such a way that you can write a play that targets them. | 17:16 |
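A sketch of what corvus's option (2) can look like in a job definition (label, node, and group names below are made up); plays can then target the group instead of all hosts:

```yaml
# Hypothetical job using named nodes and a group: plays can target the
# 'needs-reboot' group instead of all hosts.
- job:
    name: example-job
    nodeset:
      nodes:
        - name: static-worker
          label: fedora-29-static
      groups:
        - name: needs-reboot
          nodes:
            - static-worker
```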
jkt | corvus: thanks. My TL;DR use case is that I'm too lazy to deploy openstack, and I've setup my static VMs to reset to a known-good state after each reboot, so I would like to reboot them from within a pre-run playbook as per pabelanger's suggestion | 17:17 |
corvus | jkt: for an example of (1) see the "nodepool" section at the top of http://logs.openstack.org/24/635924/5/check/netlify-sandbox-build-site/926bb0a/zuul-info/inventory.yaml | 17:17 |
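Roughly the shape of the per-host nodepool section corvus points at in that inventory (keys abbreviated, values illustrative):

```yaml
# Per-host variables the executor writes into the inventory for an
# OpenStack-provided node; jobs can branch on these, e.g. to pick
# region-local mirrors.
nodepool:
  label: fedora-29
  provider: some-provider
  cloud: some-cloud
  region: some-region
  az: nova
```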
corvus | jkt: oh, with (1) i was assuming openstack driver | 17:17 |
corvus | jkt: i see the word "static" in your question now. sorry. | 17:17 |
jkt | corvus: in future I will get access to a random openstack cloud from $another_dept, and I would like to do my base job properly, and only reboot/init/whatever that VM when it's coming from my crap pool | 17:19 |
corvus | jkt: would option (2) work for you? if the label you are requesting from nodepool only returns these static nodes, you should be able to do that | 17:19 |
corvus | jkt: but if the label might return non-static nodes, then it won't work | 17:19 |
jkt | corvus: I thought that I would use labels for bits like identifying a linux distro + version | 17:20 |
jkt | so I was thinking about a label like fedora-29 for that image, regardless whether it comes from my static pool or from openstack | 17:21 |
corvus | ok then #2 isn't suitable | 17:21 |
jkt | but perhaps it's a wrong usage of labels/tags, dunno? | 17:21 |
corvus | jkt: no i think you've got the right idea about labels | 17:21 |
corvus | jkt: for the openstack driver we have this: https://zuul-ci.org/docs/nodepool/configuration.html#attr-providers.[openstack].pools.node-attributes | 17:21 |
corvus | maybe that needs to be ported to the other drivers | 17:21 |
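The option corvus links to, sketched for an OpenStack pool (attribute names and values below are arbitrary); as the rest of the discussion notes, other drivers and the executor's inventory would still need changes to make this usable for static nodes:

```yaml
# Sketch of providers.[openstack].pools.node-attributes: free-form
# key/value metadata attached to every node launched from this pool
# (other required provider/pool/label settings omitted for brevity).
providers:
  - name: example-cloud
    driver: openstack
    pools:
      - name: main
        node-attributes:
          node_class: reimage-on-reboot
```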
jkt | that sounds easy enough | 17:22 |
jkt | thanks! | 17:22 |
corvus | i'm double checking that that gets passed through to the executor... | 17:22 |
corvus | jkt: it is not. but i think it would be a reasonable change to do so. so that would need a change to nodepool to add that to the static driver, then a change to the zuul executor to include that information in the inventory. | 17:24 |
corvus | i think those would both be fairly small changes. we should run this by Shrews too. | 17:24 |
jkt | thanks for looking into this | 17:26 |
corvus | clarkb, mordred: a review of https://review.openstack.org/634825 would help me move the docker registry work along | 17:28 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul-jobs master: Add intermediate registry push/pull roles https://review.openstack.org/634829 | 17:29 |
Shrews | corvus: jkt: i think that's one of the common config options (node-attributes) so my config changes from a couple weeks ago should get that in every driver | 17:32 |
corvus | Shrews: awesome -- do you know if the other drivers are putting that info into zk, or do we need to update them to do that? | 17:33 |
Shrews | confirmed, that is a common attr | 17:33 |
Shrews | corvus: i think the driver(s) would need to be updated to actually use it | 17:33 |
corvus | at least we're halfway there | 17:33 |
Shrews | currently only openstack driver does | 17:33 |
Shrews | yeah, should be easy enough | 17:33 |
*** jpena is now known as jpena|off | 17:35 | |
Shrews | corvus: but that only guarantees data shows up in zookeeper. does the executor dump all node-attributes values to the inventory file? | 17:35 |
Shrews | might be a question for pabelanger ^^^ | 17:36 |
corvus | Shrews: it does not, but it would be a small change to do so | 17:37 |
clarkb | corvus: in https://review.openstack.org/#/c/634825/5/zuul/driver/sql/sqlconnection.py why rename metadata to meta? | 17:41 |
*** panda is now known as panda|off | 17:41 | |
corvus | clarkb: ah, that probably warrants a note. sqlalchemy reserves the attribute 'metadata' in the way we're using it | 17:42 |
corvus | that seemed the easiest way around the issue | 17:42 |
clarkb | gotcha | 17:44 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul master: Add a note about sqlalchemy metadata https://review.openstack.org/636189 | 17:45 |
corvus | clarkb: ^ | 17:45 |
*** bhavikdbavishi has joined #zuul | 17:46 | |
clarkb | Thanks. I went ahead and approved both changes (the first already had a +2 and the second is only a comment addition) | 17:47 |
*** bhavikdbavishi has quit IRC | 17:47 | |
corvus | clarkb: w00t, thanks :) | 17:47 |
corvus | when we restart openstack's zuul with that, we can land the other 2 changes and start testing it out. the opendev intermediate registry is running now. | 17:48 |
corvus | though... apparently i forgot to docker the docker network dockers or something | 17:49 |
clarkb | is that our first service running on top of a docker image in production? | 17:49 |
corvus | clarkb: yep. | 17:49 |
corvus | hopefully soon to be our first service running and listening on a network port :) | 17:50 |
Shrews | corvus: i think you have to be wearing dockers to docker properly | 17:50 |
Shrews | so, ya know... put some pants on | 17:51 |
* corvus goes to rectify the situation | 17:51 | |
*** saneax has joined #zuul | 17:53 | |
*** sdake has quit IRC | 18:01 | |
*** sdake has joined #zuul | 18:03 | |
openstackgerrit | Merged openstack-infra/zuul master: Return artifacts as dicts and add metadata https://review.openstack.org/634825 | 18:07 |
openstackgerrit | Matthieu Huin proposed openstack-infra/zuul master: [WIP] Allow operator to generate auth tokens through the CLI https://review.openstack.org/636197 | 18:14 |
*** bjackman has quit IRC | 18:14 | |
*** ParsectiX has quit IRC | 18:14 | |
*** sdake has quit IRC | 18:27 | |
openstackgerrit | Merged openstack-infra/nodepool master: Rename aws flavor-name to instance-type https://review.openstack.org/635153 | 18:28 |
*** sdake has joined #zuul | 18:30 | |
*** sdake has quit IRC | 18:32 | |
* mordred now tries to imagine corvus wearing dockers | 18:42 | |
*** openstackgerrit has quit IRC | 18:51 | |
SpamapS | BTW, GoodMoney's zuul and nodepool are now patch free! | 18:52 |
*** rlandy is now known as rlandy|brb | 18:52 | |
corvus | SpamapS: congrats! | 18:52 |
SpamapS | I'm working on a patch to the EC2 driver to use EC2's capacity reservation system to act like quota management in OpenStack... | 18:53 |
mordred | SpamapS: woot! | 18:54 |
SpamapS | Because just yesterday I ran into instance type limits... somebody spun up a t3.medium in the same region as nodepool, and I had max-servers: 10, but I can only run 10x t3.medium's. This resulted in NODE_FAILURE rather than queued jobs. If I use a capacity reservation, that user would have been denied the t3.medium. | 18:54 |
SpamapS | Not sure if there's a way to also just decline a request if we try to take it, but then get that error back from EC2 actually. | 18:55 |
*** sdake has joined #zuul | 18:55 | |
*** rlandy|brb is now known as rlandy | 19:09 | |
*** sdake has quit IRC | 19:09 | |
jkt | can I have zuul stage content somewhere else than into the user's home dir? | 19:17 |
corvus | jkt: yes. zuul itself only stages it on the executor; it's the prepare-workspace role in zuul-jobs that's typically in the base job that pushes it on to the remote hosts | 19:18 |
jkt | corvus: and how do I override bits like ~/.ansible? | 19:19 |
corvus | jkt: i'm not sure i understand that question; i don't think ~/.ansible is required on remote nodes | 19:20 |
jkt | 2019-02-11 20:15:56.846380 | f29 | "msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo /home/ci/.ansible/tmp/ansible-tmp-1549912556.7046826-158828137740828 `\" && echo ansible-tmp-1549912556.7046826-158828137740828=\"` echo /home/ci/.ansi | 19:22 |
jkt | in case that was truncated, https://ci-logs.gerrit.cesnet.cz/t/public//89/1389/3/check/test-shell-f29/4005ebc/job-output.txt.gz | 19:22 |
mordred | ah - remote_tmp_path | 19:23 |
jkt | I can whitelist it pretty easily, but that's one more image rebuild for me :) | 19:23 |
corvus | jkt: if you can do that, i think that's going to be the best option. because support for an alternate remote_tmp_path in zuul might be a bit complex (in that it's probably more of a node attribute than an executor attribute, which probably means we have to plumb it all the way through nodepool). | 19:25 |
corvus | at the very least, we would need to discuss it a bit :) | 19:25 |
mordred | we don't currently have any mechanism for configuring remote_tmp_path in the ansible.cfg differently - if we wanted to grow one, we'd need to think it through - cause it might be one of those things that should be a node attribute | 19:25 |
mordred | corvus: jinx | 19:25 |
jkt | ack, no problem, let me hack around it | 19:25 |
corvus | mordred: that's very reassuring :) | 19:26 |
*** saneax has quit IRC | 19:29 | |
jkt | what am I doing wrong that this change: http://paste.openstack.org/show/744886/ on my base job won't really affect the prepare-workspace playbook? | 19:59 |
jkt | it is not getting propagated to the inventory as far as I can tell, https://ci-logs.gerrit.cesnet.cz/t/public//89/1389/3/check/test-shell-f29/77ace83/zuul-info/inventory.yaml | 20:01 |
corvus | jkt: it looks like the prepare-workspace-git and mirror-workspace-git roles are hard-coded to put the repos in ansible_user_dir, so you may need to update those roles to support another location. | 20:02 |
corvus | jkt: http://git.zuul-ci.org/cgit/zuul-jobs/tree/roles/prepare-workspace-git/tasks/main.yaml | 20:02 |
corvus | jkt: and i can't seem to connect to that host | 20:03 |
jkt | corvus: sorry, it's ipv6-only :( | 20:03 |
corvus | jkt: cool, no apology necessary :) | 20:04 |
jkt | I'm actually using prepare-workspace, not prepare-workspace-git | 20:04 |
corvus | (my tunnel is offline at the moment) | 20:04 |
jkt | that one looks correct | 20:04 |
corvus | jkt: hrm, i agree that should work. i'll look at your inventory | 20:04 |
jkt | here it is, http://paste.openstack.org/show/744887/ | 20:05 |
corvus | jkt: did you merge that change, or is it in review? | 20:06 |
jkt | corvus: the job definition comes from a change that's under review | 20:06 |
jkt | corvus: but I've force-pushed the change to the base job | 20:06 |
jkt | does Zuul like force-pushes to trusted project-config repos? :) | 20:07 |
corvus | jkt: it should be fine | 20:07 |
corvus | well. actually... | 20:07 |
* jkt restarts | 20:07 | |
corvus | jkt: i'm sure that if you force-merged the change it would work | 20:07 |
corvus | jkt: but if you *pushed* it, i'm not positive. | 20:07 |
jkt | corvus: I restarted executor and scheduler, and it helped :) | 20:09 |
corvus | jkt: cool, 'zuul-scheduler full-reconfigure' should fix that with a little less interruption in the future | 20:11 |
corvus | someday we'll add a cli tenant reconfig option too | 20:11 |
jkt | however, it seems that {{ zuul.project.src_dir }} still points to the home dir | 20:13 |
jkt | so apparently just overriding zuul_workspace_root within a base job is not the way to go | 20:13 |
pabelanger | you might be hitting a variable precedence issue, you could try overriding the variable from your playbook | 20:15 |
mnaser | mordred: regarding zuul-preview, https://github.com/openresty/lua-nginx-module could be an interesting alternative :) | 20:18 |
jkt | well, this actually makes sense -- if I create a variable within the scope of the base job, a variable called 'zuul_workspace_root', how is Zuul supposed to feed that data back to, e.g., zuul.project.src_dir? | 20:18 |
jkt | so this probably has to be done via site-wide variables | 20:20 |
mordred | mnaser: yeah - but then I'd have to write lua | 20:20 |
jkt | I think it would make a lot of sense to pass this zuul_workspace_root from nodepool, though | 20:20 |
mnaser | mordred: you did write c++ :) | 20:20 |
mnaser | :P | 20:20 |
pabelanger | jkt: http://paste.openstack.org/show/744889/ should work | 20:21 |
pabelanger | or https://zuul-ci.org/docs/zuul/user/jobs.html#user-jobs-sitewide-variables | 20:21 |
jkt | pabelanger: let me rephrase my question -- how does Zuul know what the "work dir" is going to be at the target node? | 20:27 |
corvus | jkt: zuul.project.src_dir is actually a relative path | 20:29 |
corvus | jkt: it's relative to zuul.executor.src_root, which is on the executor | 20:29 |
corvus | jkt: allow me to back up and preface this by saying two things: 1) there's a lot of stuff that probably expects that the user's home dir on the remote node is accessible. making that work is going to be the best experience. :) | 20:31 |
corvus | jkt: 2) but zuul is designed to account for any configuration on the remote node, so you can absolutely do whatever you want. but it's going to be harder and may require you to implement (or re-implement) more things yourself | 20:32 |
corvus | jkt: so with that out of the way, the key thing to remember here is that *zuul itself* only writes things into a work directory on the executor it controls. everything else that happens after that is controlled by ansible jobs and roles. | 20:32 |
corvus | jkt: so all of those path variables in the inventory are executor paths. | 20:33 |
corvus | they are, however, conveniently structured in such a way that they can and are also used by the roles in zuul-jobs. | 20:33 |
corvus | so for example, on the executor, a repo might be checked out into /var/lib/zuul/builds/8b9b85d62fb14f3cab8f9d62274436fe/work/src/git.openstack.org/openstack-infra/system-config | 20:34 |
corvus | which is zuul.executor.work_root + zuul.project.src_dir | 20:34 |
corvus | the workspace setup roles in zuul-jobs re-use the latter part of that -- zuul.project.src_dir | 20:35 |
*** quiquell|off is now known as quiquell|rover | 20:35 | |
corvus | so on the remote node you end up with the repo at ansible_user_dir + zuul.project.src_dir | 20:35 |
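A tiny illustration of the composition corvus walks through: on the executor the repo sits under zuul.executor.work_root, while the standard zuul-jobs workspace roles put it under the remote user's home using the same relative src_dir.

```yaml
# Debug task sketch: print where the standard zuul-jobs workspace roles
# place this project's repository on the remote node.
- name: Show the remote repo location
  debug:
    msg: "{{ ansible_user_dir }}/{{ zuul.project.src_dir }}"
```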
jkt | can I easily have zuul launch ansible with its cwd pointing "somewhere else"? that sounds like what I'm looking for | 20:35 |
corvus | jkt: ansible is run on the executor, so that won't help here | 20:35 |
jkt | and there's no easy override for ansible_user_dir because that one comes from the remote system's config at runtime, and even if there were, that would break bits such as ssh key provisioning, okay | 20:37 |
corvus | jkt: yeah, that goes into the same bucket as a configureable remote_tmp -- possible but needs thought. | 20:38 |
jkt | sounds like I will find a way to just have the home dir prepopulated on that throw-away filesystem | 20:38 |
*** gouthamr has quit IRC | 20:38 | |
*** dmellado has quit IRC | 20:39 | |
*** quiquell|rover is now known as quiquell|off | 21:00 | |
tobias-urdin | corvus: fungi friendly ping on https://review.openstack.org/#/c/635941/ if you have some minutes over today :) https://review.openstack.org/#/c/635965/ is related project-config change | 21:15 |
tobias-urdin | pabelanger: ^ new approach without removing revoke-sudo, so secret never leaves executor | 21:15 |
mordred | mhu: :) | 21:25 |
pabelanger | jkt: sorry, stepped away from keyboard, but agree with corvus | 21:35 |
*** gouthamr has joined #zuul | 21:39 | |
*** dmellado has joined #zuul | 21:43 | |
*** openstackgerrit has joined #zuul | 21:54 | |
openstackgerrit | Matthieu Huin proposed openstack-infra/zuul master: [WIP] Allow operator to generate auth tokens through the CLI https://review.openstack.org/636197 | 21:54 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul master: Add a note about sqlalchemy metadata https://review.openstack.org/636189 | 21:55 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul master: Fix typo in build api endpoint https://review.openstack.org/636227 | 21:55 |
*** sdake has joined #zuul | 22:19 | |
*** sdake has quit IRC | 22:51 | |
*** sdake has joined #zuul | 22:56 | |
*** sdake has quit IRC | 22:59 | |
*** sdake has joined #zuul | 23:06 | |
clarkb | corvus: were there still potential test node performance issues being found by the zuul test suite? | 23:12 |
clarkb | I'm trying to take stock of the current set of known failures now that e-r has some data in it again | 23:13 |
corvus | clarkb: unclear. i suspect so, but my non-zuul cup overfloweth at the moment. :) | 23:13 |
clarkb | ok I'll keep an eye out for related issues. Right now I'm mostly just doing high level "what types of bugs are affecting us?" sorting | 23:14 |
corvus | clarkb: yeah http://logs.openstack.org/89/636189/2/gate/tox-py36/27ccaf4/testr_results.html.gz had a performance sad | 23:15 |
*** sdake has quit IRC | 23:31 | |
corvus | zuul maintainers (should we define an irc highlight? zuul-maint?), i have uploaded https://review.openstack.org/636238 to import zuul-preview, please review :) | 23:38 |
openstackgerrit | Matthieu Huin proposed openstack-infra/zuul master: [WIP] Allow operator to generate auth tokens through the CLI https://review.openstack.org/636197 | 23:46 |
*** mrhillsman has quit IRC | 23:46 | |
*** TheJulia has quit IRC | 23:46 | |
*** jtanner has quit IRC | 23:46 |