dmsimard | https://review.openstack.org/#/c/536319/ | 00:00 |
dmsimard | By mhu | 00:00 |
ianw | dmsimard: oh, right, i'm talking about the puppet to deploy that though | 00:01 |
ianw | currently, the launchers don't have an apache | 00:01 |
dmsimard | Oh, I suppose there isn't anything for that yet | 00:01 |
ianw | and the builders are redirecting /dib-image-list etc to the webapp which isn't running on them | 00:01 |
*** weshay is now known as weshay_PTO | 00:04 | |
corvus | let's talk about that in detail at the ptg... | 00:21 |
corvus | ianw: for now, for openstack-infra, let's maybe just set up apache on the launchers? | 00:22 |
ianw | corvus: yep, i'll send a KISS change soon ... what i'm really interested in is having those daily build logs exposed for my automated monitoring | 00:25 |
corvus | ianw: yeah, i guess it's going to be a bit weird for a little while -- api on launchers, logs on builders | 00:27 |
*** rlandy is now known as rlandy|bbl | 00:48 | |
*** elyezer has quit IRC | 00:51 | |
*** elyezer has joined #zuul | 00:55 | |
*** openstackgerrit has quit IRC | 01:03 | |
dmsimard | Interesting: Go accepts patches from GitHub pull requests now https://news.ycombinator.com/item?id=16363511 | 01:26 |
dmsimard | They bridge Github and Gerrit | 01:26 |
dmsimard | With a bot that synchronizes between the two. | 01:26 |
clarkb | dmsimard: there is a plugin for gerrit for that | 01:27 |
dmsimard | And they named it... *drumroll* gerritbot | 01:27 |
clarkb | it's the plugin that nuked all of jenkins on github | 01:27 |
dmsimard | Oh ? I'll admit I'm not familiar with that | 01:28 |
ianw | corvus: log/webapp setup (modulo any puppet issues i'm sure i have) - https://review.openstack.org/543671 | 01:29 |
clarkb | https://gerrit.googlesource.com/plugins/github/+/master/README.md | 01:30 |
clarkb | I don't know how well supported it is, I just remember jenkins/ getting nuked by the development of it | 01:30 |
ianw | "the pull-request model adopted by GitHub is often used as “easy shortcut” to the more comprehensive and structured code-review process in Gerrit." ... tell us what you really think gerrit :) | 01:34 |
*** elyezer has quit IRC | 01:51 | |
*** elyezer has joined #zuul | 01:53 | |
*** elyezer has quit IRC | 02:00 | |
*** elyezer has joined #zuul | 02:01 | |
dmsimard | ianw: hah, an old post from jd_ is even in that doc | 02:10 |
dmsimard | ianw: jd_ should send a *pull* request to tell them about all the hacks he did to get their stuff working properly once they took gnocchi out | 02:11 |
dmsimard | https://julien.danjou.info/blog/2017/git-pull-request-command-line-tool and https://julien.danjou.info/blog/2017/pastamaker | 02:11 |
clarkb | I mean you are trying to smash two incompatible methodologies together | 02:19 |
clarkb | that are both moving in different directions at the same time | 02:20 |
clarkb | seems like just not doing it is the only real way to avoid those types of problems | 02:20 |
clarkb | reading the golang implementation, comments aren't synced, so it's a halfway thing which can be useful if the big need is more change ingestion | 02:43 |
*** elyezer has quit IRC | 02:59 | |
*** openstackgerrit has joined #zuul | 03:01 | |
openstackgerrit | Ian Wienand proposed openstack-infra/nodepool master: Add native distro test jobs https://review.openstack.org/543268 | 03:01 |
openstackgerrit | Ian Wienand proposed openstack-infra/nodepool master: Revert fixes for legacy boot jobs https://review.openstack.org/543350 | 03:01 |
*** elyezer has joined #zuul | 03:02 | |
*** harlowja has quit IRC | 03:04 | |
*** rlandy|bbl is now known as rlandy | 03:37 | |
*** rlandy has quit IRC | 03:37 | |
openstackgerrit | Ian Wienand proposed openstack-infra/zuul master: Add target=_blank to log links https://review.openstack.org/543782 | 03:53 |
*** threestrands has quit IRC | 04:50 | |
*** elyezer has quit IRC | 05:11 | |
*** elyezer has joined #zuul | 05:14 | |
*** harlowja has joined #zuul | 05:52 | |
*** sshnaidm|off has quit IRC | 05:54 | |
*** sshnaidm|off has joined #zuul | 05:54 | |
*** sshnaidm|off has quit IRC | 06:06 | |
*** sshnaidm|off has joined #zuul | 06:21 | |
*** harlowja has quit IRC | 06:33 | |
*** sshnaidm|off has quit IRC | 06:33 | |
*** elyezer has quit IRC | 06:36 | |
*** elyezer has joined #zuul | 06:46 | |
*** chrnils has joined #zuul | 07:28 | |
*** AJaeger has quit IRC | 07:29 | |
*** AJaeger has joined #zuul | 07:31 | |
*** sshnaidm|off has joined #zuul | 07:40 | |
openstackgerrit | Ian Wienand proposed openstack-infra/zuul master: Add target=_blank to log links https://review.openstack.org/543782 | 07:49 |
openstackgerrit | Matthieu Huin proposed openstack-infra/zuul master: zuul web: add admin endpoint, enqueue commands https://review.openstack.org/539004 | 07:57 |
openstackgerrit | Merged openstack-infra/zuul master: Make timeout value apply to entire job https://review.openstack.org/541485 | 08:03 |
openstackgerrit | Ian Wienand proposed openstack-infra/nodepool master: Add native distro test jobs https://review.openstack.org/543268 | 08:06 |
openstackgerrit | Ian Wienand proposed openstack-infra/nodepool master: Revert fixes for legacy boot jobs https://review.openstack.org/543350 | 08:06 |
*** jpena|off is now known as jpena | 08:06 | |
*** hashar has joined #zuul | 08:08 | |
openstackgerrit | Matthieu Huin proposed openstack-infra/zuul master: zuul web: add admin endpoint, enqueue commands https://review.openstack.org/539004 | 08:18 |
openstackgerrit | Matthieu Huin proposed openstack-infra/zuul master: zuul web: add admin endpoint, enqueue commands https://review.openstack.org/539004 | 08:21 |
*** xinliang has quit IRC | 08:23 | |
*** sshnaidm|off is now known as sshnaidm|ruck | 08:44 | |
openstackgerrit | Matthieu Huin proposed openstack-infra/nodepool master: Clean held nodes automatically after configurable timeout https://review.openstack.org/536295 | 09:07 |
*** electrofelix has joined #zuul | 10:09 | |
*** elyezer has quit IRC | 10:11 | |
*** elyezer has joined #zuul | 10:15 | |
*** bhavik1 has joined #zuul | 10:33 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool master: config: add statsd-server config parameter https://review.openstack.org/535560 | 10:51 |
*** bhavik1 has quit IRC | 10:53 | |
kklimonda | I'd like to split our "project-config" moving most roles to either separate zuul-jobs, or even per-repository config - any pointers what to look out for when doing that? For example, jobs are not shadowed by defaults, how about playbooks and roles? If the same role exists in project-config and zuul-jobs which will be picked? | 11:23 |
*** mwhahaha has quit IRC | 11:51 | |
*** mwhahaha has joined #zuul | 11:52 | |
tobiash | kklimonda: regarding roles, you pick them by repo: https://github.com/openstack-infra/project-config/blob/master/zuul.d/jobs.yaml#L57 | 11:54 |
tobiash | playbooks are directly referenced by the jobs and should be no problem | 11:55 |
tobiash | if you want to move jobs you may need temporary shadowing | 11:55 |
tobiash | kklimonda: so moving roles out of that should be pretty simple, first add the role in the new repo, second change/add the role reference in the job definition | 11:57 |
*** robcresswell has quit IRC | 11:58 | |
*** robcresswell has joined #zuul | 11:59 | |
tobiash | kklimonda: related in the doc: https://docs.openstack.org/infra/zuul/user/config.html#attr-job.roles.zuul | 11:59 |
tobiash | this misses an example so far | 12:00 |
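A minimal sketch of the kind of example that doc section could use, following the jobs.yaml tobiash links above (the job name and playbook path here are illustrative):

```yaml
# Illustrative only: pull Ansible roles into a job from another repo.
- job:
    name: example-job
    run: playbooks/example.yaml
    roles:
      # Exposes the roles/ directory of openstack-infra/zuul-jobs
      # to this job's playbooks.
      - zuul: openstack-infra/zuul-jobs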
*** jtanner has quit IRC | 12:00 | |
*** jtanner has joined #zuul | 12:01 | |
kklimonda | thanks | 12:14 |
openstackgerrit | Matthieu Huin proposed openstack-infra/zuul master: zuul web: add admin endpoint, enqueue commands https://review.openstack.org/539004 | 12:19 |
*** jpena is now known as jpena|lunch | 12:48 | |
rcarrillocruz | odyssey4me: put nodepool.yaml support on https://github.com/ansible/ansible/pull/35602 | 12:54 |
odyssey4me | rcarrillocruz awesome, looks good with my eyes :) | 12:56 |
*** Wei_Liu has quit IRC | 13:18 | |
dmsimard | rcarrillocruz: that's sexy, but why | 13:38 |
*** rlandy has joined #zuul | 13:38 | |
*** jpena|lunch is now known as jpena | 13:48 | |
rcarrillocruz | dmsimard: non-zuul/just-nodepool CI envs | 14:17 |
openstackgerrit | Matthieu Huin proposed openstack-infra/nodepool master: Clean held nodes automatically after configurable timeout https://review.openstack.org/536295 | 14:22 |
*** openstackgerrit has quit IRC | 14:33 | |
*** JasonCL has quit IRC | 14:44 | |
*** JasonCL has joined #zuul | 14:46 | |
*** elyezer has quit IRC | 14:48 | |
dmsimard | rcarrillocruz: hmmm... you got me curious -- how do you assign a particular set of nodes to someone ? and then destroy them ? | 14:48 |
*** JasonCL has quit IRC | 14:49 | |
*** elyezer has joined #zuul | 14:49 | |
*** JasonCL has joined #zuul | 14:50 | |
*** JasonCL has quit IRC | 14:50 | |
rcarrillocruz | dmsimard: if you mean how to do the lock/unlock dance, you can use zuul zk.py. However, since now more parties are showing interest in accessing nodepool in non-Zuul envs, it's been 'agreed' to decouple zk.py into 'nodepool-lib', post 3.0 | 14:50 |
*** JasonCL has joined #zuul | 14:50 | |
rcarrillocruz | if you ask how we do it today, it's super simple and rough: Our nodepool has 1 node per type, so the scheduling is simple :-). Then we run the tests against an ansible group by doing a combination of the openstack dynamic inventory plus static inventory | 14:51 |
rcarrillocruz | like | 14:51 |
rcarrillocruz | on static | 14:51 |
rcarrillocruz | you'd have | 14:51 |
rcarrillocruz | [ios] | 14:51 |
rcarrillocruz | ios-15.blah | 14:51 |
rcarrillocruz | the ios-15.blah is itself another group coming from openstack dyn inventory, which contains the instances with that label (i added that patch to nodepool, to get per-label groups) | 14:52 |
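The static/dynamic combination rcarrillocruz describes could be sketched like this (group names are the hypothetical ones from the chat): the static file declares `ios-15.blah` as a child group, and the OpenStack dynamic inventory populates that group from the nodepool label of the same name.

```ini
; static inventory fragment (hypothetical names)
; "ios-15.blah" is itself a group supplied by the dynamic inventory
[ios:children]
ios-15.blah
```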
rcarrillocruz | odyssey4me was working on something similar, he posted a gist not a while ago for the lock/unlock dance | 14:52 |
dmsimard | rcarrillocruz: interesting | 14:54 |
dmsimard | rcarrillocruz: I think there's definitely a use case for standalone nodepool :) | 14:54 |
rcarrillocruz | in retrospect, i should have written that inventory earlier | 14:54 |
rcarrillocruz | it's faster to poke at zk, compared to openstack dyn inventory | 14:55 |
rcarrillocruz | moreover, the nodepool inv is intended to have nodepool node logic | 14:55 |
rcarrillocruz | now that we have static driver (and more coming) | 14:55 |
rcarrillocruz | dmsimard: to me it's a no brainer, been using it at home for a while to test my patches. Rather than vagrant or doing manual nova boot, i just have a nodepool keeping min-ready 1 for the usual node types | 14:56 |
*** openstackgerrit has joined #zuul | 15:02 | |
openstackgerrit | Matthieu Huin proposed openstack-infra/zuul master: zuul web: add admin endpoint, enqueue & autohold commands https://review.openstack.org/539004 | 15:02 |
dmsimard | rcarrillocruz: yeah, I basically do the same thing | 15:03 |
dmsimard | rcarrillocruz: except it's a bash alias that does an "openstack server rebuild" with the snapshot | 15:04 |
rcarrillocruz | heh, i have an alias for 'ssh nodepool && nodepool delete' | 15:05 |
*** JasonCL has quit IRC | 15:07 | |
rcarrillocruz | corvus, mordred: reading up the etherpad, just saw your containers item. Just in case it wasn't on your radar, k8s/openshift connection plugins landed recently in ansible https://github.com/ansible/ansible/pull/26668/files | 15:07 |
pabelanger | rcarrillocruz: somebody at RHT recently created: https://github.com/mpryc/devnest/ Same idea, reservation system but with jenkins. If you look at code, most could be replaced with nodepool for managing nodes, while reservations handled in their app. | 15:08 |
rcarrillocruz | guess they can be leveraged at some point (linked that in the etherpad for ptg reference) | 15:08 |
rcarrillocruz | pabelanger: yah, i did something similar at gozer, lol. It was called 'slavemanager' | 15:09 |
rcarrillocruz | so, i used ssh | 15:10 |
rcarrillocruz | by restricting a given user on that machine | 15:10 |
rcarrillocruz | https://research.kudelskisecurity.com/2013/05/14/restrict-ssh-logins-to-a-single-command/ | 15:10 |
rcarrillocruz | i wrote down a bash script, which you could do things like 'hold node for $time', unhold | 15:10 |
rcarrillocruz | the hold, would put an at job for deleting the node at $time | 15:10 |
rcarrillocruz | crazy hacky stuff | 15:11 |
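The forced-command trick from the linked post looks roughly like this (the `slavemanager` wrapper and its subcommands are the hypothetical parts): every login with the key runs a fixed wrapper, which dispatches on whatever command the client asked for.

```shell
# ~/.ssh/authorized_keys on the management host (sketch, not the real setup)
command="/usr/local/bin/slavemanager \"$SSH_ORIGINAL_COMMAND\"",no-port-forwarding,no-pty ssh-rsa AAAA... ci@example

# Inside the wrapper, a "hold <node> <time>" subcommand could queue the
# delayed delete with an at job, as described above:
echo "nodepool delete node0001" | at now + 2 hours
```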
rcarrillocruz | electrofelix: ^ you must remember it | 15:11 |
rcarrillocruz | :-) | 15:11 |
*** JasonCL has joined #zuul | 15:11 | |
rcarrillocruz | pabelanger: we should sit down at PTG and talk about a nodepool beaker driver, which i think is what devnest uses under the covers ? | 15:12 |
pabelanger | rcarrillocruz: the cylons were right, history is destined to repeat :) | 15:12 |
rcarrillocruz | lulz | 15:12 |
pabelanger | rcarrillocruz: Yes! | 15:12 |
rcarrillocruz | yeap | 15:12 |
pabelanger | rcarrillocruz: when do you get into PTG? | 15:13 |
rcarrillocruz | sunday | 15:13 |
rcarrillocruz | returning friday | 15:13 |
pabelanger | cool | 15:13 |
pabelanger | rcarrillocruz: any of that code public? | 15:14 |
rcarrillocruz | you mean the slavemanager thingy? | 15:14 |
rcarrillocruz | that got lost i guess, it was on gozer's gerrit | 15:14 |
rcarrillocruz | yolanda: you must remember that slavemanager too :D | 15:15 |
rcarrillocruz | hmm | 15:15 |
rcarrillocruz | https://storyboard.openstack.org/#!/story/2001125 | 15:16 |
*** JasonCL has quit IRC | 15:16 | |
rcarrillocruz | hoping we can chat about it at PTG | 15:16 |
corvus | rcarrillocruz: add to https://etherpad.openstack.org/p/infra-rocky-ptg | 15:17 |
rcarrillocruz | i think that may be what i need for 'private' launcher/executor | 15:17 |
electrofelix | rcarrillocruz: reading backscroll | 15:17 |
*** JasonCL has joined #zuul | 15:18 | |
corvus | tobiash, clarkb: are we agreed on using git remotes for http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-February/000023.html or do we want further discussion? | 15:18 |
rcarrillocruz | corvus: adding | 15:20 |
tobiash | corvus: from my point of view I think we can agree on that but haven't talked with my collegue about that yet | 15:22 |
corvus | tobiash: okay, no rush. let me know when you do. | 15:25 |
tobiash | kk | 15:30 |
corvus | tobiash: in https://review.openstack.org/535716 why doesn't .remote_url reflect what's on disk? why can't we use it to determine if we don't have to update the remote url? | 15:30 |
tobiash | corvus: the repo object just takes the remote url on construction and not necessarily from the git repo on disk | 15:34 |
corvus | tobiash: i can see one case: when we initialize a Repo() object, we don't know what's on disk. but maybe we could always update it in that case, but otherwise, use remote_url to avoid updating when not needed once we've cached the Repo object. | 15:34 |
tobiash | corvus: that's also what I thought but then I thought, that it's just easier to set that regardless | 15:35 |
corvus | tobiash: well, part of the reason they're cached is to reduce some of this overhead | 15:35 |
tobiash | corvus: I could also change that to set the remote url regardless in _ensure_cloned | 15:37 |
tobiash | would that be better? | 15:37 |
tobiash | (and add 'if unchanged: return' to setRemoteUrl) | 15:40 |
corvus | tobiash: i would do it in __init__. ensure_cloned is called often (in case the operator removes the repo while the process is running). if ensure_cloned actually clones, we'll automatically get the right value. the only time we'd be unsure is on __init__. so i'd just put it right after the ensure_cloned in __init__. | 15:40 |
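What corvus describes might be modeled like this (a toy, not Zuul's actual Repo class: the dict stands in for the on-disk git config, and the method names are invented):

```python
class Repo:
    """Toy model of the remote-url caching decision discussed above."""

    def __init__(self, remote_url, on_disk_config):
        self._config = on_disk_config  # stand-in for .git/config on disk
        self.remote_url = remote_url
        self._ensure_cloned()
        # Only on construction are we unsure what's on disk, so sync once
        # here, right after the ensure-cloned step.
        self._set_remote_url(remote_url)

    def _ensure_cloned(self):
        # A fresh clone would record the remote url itself; simulate that.
        self._config.setdefault("url", self.remote_url)

    def _set_remote_url(self, url):
        # Called often; skip the (relatively expensive) update when the
        # url is unchanged.
        if self._config.get("url") == url:
            return False
        self._config["url"] = url
        self.remote_url = url
        return True
```

A cached Repo object can then keep calling `_set_remote_url` cheaply, since it only touches "disk" when the url actually changed.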
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul master: Set remote url on every getRepo in merger https://review.openstack.org/535716 | 15:53 |
tobiash | corvus: like this? ^ | 15:53 |
corvus | tobiash: yep! | 15:54 |
tobiash | :) | 15:54 |
openstackgerrit | Merged openstack-infra/zuul master: Add tenant project group definition example and definition in the doc https://review.openstack.org/535730 | 15:55 |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul master: Set remote url on every getRepo in merger https://review.openstack.org/535716 | 16:01 |
tobiash | corvus: docstring fixup in the tests ^ | 16:01 |
*** openstackgerrit has quit IRC | 16:04 | |
*** tosky has joined #zuul | 16:07 | |
andreaf | corvus pabelanger rcarrillocruz about group vars at nodeset level | 16:07 |
dmsimard | andreaf: ansible automatically loads the group_vars, host_vars and vars directories in the directory from where the playbook is executed (so for a playbook in foo/playbooks/playbook.yml, it would load foo/playbooks/group_vars, etc.) | 16:08 |
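So for the convention dmsimard mentions, the layout next to the playbook would look like this (directory, group, and host names here are illustrative):

```
foo/playbooks/
├── playbook.yml
├── group_vars/
│   ├── all.yml        # vars applied to every host
│   └── subnodes.yml   # vars applied to hosts in the "subnodes" group
└── host_vars/
    └── controller.yml # vars applied only to the host named "controller"
```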
rcarrillocruz | corvus: to my understanding, nodepool job 'vars' => ansible vars. Now, i think there's a valid use case to have vars at node level ( == ansible host_vars) and groups level (== ansible group_vars) | 16:08 |
dmsimard | rcarrillocruz: job vars are in the inventory | 16:08 |
rcarrillocruz | i got a very similar question today from dmellado , which was about multigate job | 16:08 |
pabelanger | personally, i try to keep variables out of inventory files when possible. And do what dmsimard suggests with group_vars folders | 16:08 |
pabelanger | however, that might mean the job would have to be updated, since playbooks are not shared between jobs | 16:08 |
dmsimard | pabelanger: depends where the playbook(s) live | 16:08 |
rcarrillocruz | dmsimard: fair, but net net what i'm after is that nodepool vars are always global | 16:09 |
pabelanger | right, using group_vars/foo.yaml would have to live next to playbook | 16:09 |
rcarrillocruz | wondering if it's worth having a zuul job construct to have host_vars/group_vars | 16:09 |
rcarrillocruz | rather than having that at playbook level | 16:09 |
dmsimard | rcarrillocruz: what is "global" ? variables in the inventory are global but they are fairly low in terms of precedence | 16:09 |
pabelanger | I believe devstack works today by just setting up vars, with no run playbook; the parent has it | 16:10 |
dmsimard | rcarrillocruz: I believe we've discussed that before, let me search logs | 16:10 |
rcarrillocruz | asking, cos i got a very similar question today from dmellado | 16:10 |
rcarrillocruz | re: multigate jobs | 16:11 |
rcarrillocruz | today it seems the services are set in 'vars' | 16:11 |
andreaf | Just re-sharing the link with my use case https://etherpad.openstack.org/p/zuulv3-group-variables | 16:11 |
pabelanger | rcarrillocruz: my issue with that, is it makes it harder to run ansible playbooks locally, without depending on zuul. since variables are stored in zuul.yaml | 16:11 |
rcarrillocruz | but if you want to have per-node services, you have to craft that yourself at playbook level | 16:11 |
andreaf | rcarrillocruz | 16:11 |
rcarrillocruz | and i think the limitation is due to have just 'vars' as global | 16:11 |
dmsimard | rcarrillocruz: but it doesn't have to be that way | 16:11 |
rcarrillocruz | pabelanger: that's fair | 16:11 |
andreaf | rcarrillocruz: yeah multinode is one of the main use cases | 16:11 |
dmsimard | rcarrillocruz: either you can do it at the playbook level or through var files | 16:11 |
rcarrillocruz | dmsimard: sure, i know it can be done, just raising if the use case is worth that have the ability to express host_vars/group_vars at zuul job def | 16:12 |
dmsimard | rcarrillocruz: I mean, we've even discussed doing something at the beginning of the playbook like - include_vars: path: foo/{{ inventory_hostname }}.yml | 16:12 |
pabelanger | however, I recently did run into the issue where I needed to use extra-vars, so this topic is interesting :) | 16:12 |
andreaf | rcarrillocruz, pabelanger, dmsimard: variables in group vars are not job specific so using group vars does not fulfill the requirement | 16:12 |
corvus | pabelanger: as long as we have job-level variables in zuul, i think we've already crossed the threshold of making it harder to run locally. the solution is still the same though -- create a local inventory with the variables you need (or copy the one zuul created) | 16:12 |
dmsimard | rcarrillocruz: nevermind there was no real discussion about group_vars before, it was for a use case pabelanger had for putting the same host in two groups which was getting complicated | 16:13 |
andreaf | if I want to have two jobs that run a different setup of services on multinodes for instance I would have to create different host groups one for each job to be able to set different things in group_vars in the repo | 16:13 |
pabelanger | corvus: agree, however (personal preference) I've not found a need to use inventory variables when I could instead use playbooks/group_vars | 16:14 |
dmsimard | pabelanger: I think job-level vars are simpler to use for people not familiar with Ansible | 16:14 |
rcarrillocruz | right | 16:14 |
andreaf | besides, splitting the job vars definition between job and group vars would make things hard to read | 16:14 |
rcarrillocruz | i would imagine people more used to ansible will just use pb vars | 16:14 |
dmsimard | pabelanger: group_vars/host_vars/vars is a concept only ansible users would know about and it's not necessarily straightforward | 16:14 |
pabelanger | dmsimard: right, totally agree, however I think we also want users to learn more ansible. Over having zuul grow some of the concepts | 16:15 |
*** elyezer has quit IRC | 16:15 | |
dmsimard | rcarrillocruz: I think if we implement something like this in zuul, it would be defined at the nodeset declaration -- like when you declare a host, you have a vars: field, when you declare a group, you have a vars: field | 16:15 |
dmsimard | rcarrillocruz: and then we would need to plumb that back into a hostvars/groupvars file on the executor or something like that. | 16:16 |
rcarrillocruz | yeah, that's exactly what i had in mind | 16:16 |
dmsimard | rcarrillocruz: nevermind the files, we can just use the inventory that we already have | 16:16 |
dmsimard | pabelanger: yeah we definitely have to be careful about zuul needing to expose every parameter and knob ansible has | 16:17 |
*** elyezer has joined #zuul | 16:17 | |
dmsimard | pabelanger: like playbook strategy, serial, max failure rate and whatnot.. could get complicated quickly | 16:17 |
dmsimard | although all the stuff we're talking about here can be set at a playbook level | 16:17 |
pabelanger | well, host, serial, etc can be controlled via extra-vars (or secrets in zuul) today. I even have a use case for serial, but that is a different discussion | 16:18 |
dmsimard | pabelanger: what's your use case for extra-vars ? | 16:20 |
andreaf | dmsimard rcarrillocruz I would consider the possibility of attaching variables to a nodeset when it's used in a job, else a job would have to define its own nodeset to define group vars on it | 16:21 |
pabelanger | dmsimard: https://review.openstack.org/526708/ | 16:21 |
corvus | i'm inclined to think that because we allow defining variables, and we allow defining nodesets, we should allow defining variables on nodesets. i think an implementation like what's in the etherpad looks pretty close to ideal. | 16:22 |
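The etherpad proposal corvus refers to is roughly along these lines (purely speculative syntax at this point; none of it is implemented, and the node/group/variable names are made up):

```yaml
# Speculative: per-host and per-group variables on a nodeset.
- nodeset:
    name: multinode
    nodes:
      - name: controller
        label: ubuntu-xenial
        vars:              # would land in Ansible host_vars
          role: primary
      - name: compute1
        label: ubuntu-xenial
    groups:
      - name: subnodes
        nodes: [compute1]
        vars:              # would land in Ansible group_vars
          devstack_services:
            nova-compute: true
```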
rcarrillocruz | ++ | 16:22 |
dmsimard | andreaf: at this point you have to manage different nodesets though.. with unique names and all | 16:22 |
dmsimard | andreaf: It might get confusing, I don't know | 16:22 |
corvus | dmsimard: i think what andreaf is saying is that if we permit setting group_vars on the *job*, we don't have to have a lot of named nodesets. | 16:23 |
dmsimard | andreaf: something you could consider is doing something like a "scenarios" folder in which you have variable files, and then in your playbook you do something like - include_vars: scenarios/{{ scenario }}.yml -- and then "scenario" ends up being defined at the job level | 16:23 |
dmsimard | corvus: how would that work in practice ? Zuul would overload the original nodeset a bit like how job variants work ? | 16:24 |
corvus | dmsimard: i think andreaf is suggesting (in line 59-75 of the etherpad) that we could copy named nodesets and augment them for a specific job | 16:26 |
pabelanger | yah, I would be a fan of that. It would make http://git.openstack.org/cgit/openstack/windmill/tree/.zuul.d/jobs.yaml#n42 easier to maintain, since all 3 jobs have the same nodeset stanza, except for label | 16:28 |
dmsimard | corvus: ah, "ephemeral" nodeset definition inside the job | 16:28 |
corvus | (though we could also consider making group_vars and host_vars job attributes and leaving the anonymous nodeset definition alone; i think that would work just as well) | 16:30 |
dmsimard | corvus: it's probably more straightforward at the nodeset level -- otherwise you need a dict for the host and group names ? | 16:33 |
corvus | dmsimard: you still do. see the etherpad. | 16:34 |
dmsimard | corvus: ah, you're considering both you mean, I read that as an either or | 16:35 |
rcarrillocruz | y, not mutually exclusive | 16:36 |
* rcarrillocruz afk for a bit | 16:36 | |
SpamapS | dmsimard: for the scenario thing that works fine with just regular job vars. | 16:58 |
SpamapS | vars: {scenario: deploy} | 16:59 |
SpamapS | idea for today: I wish there was a short-circuit around gate running the same jobs as check when check and gate are effectively running the same repo state. | 17:00 |
SpamapS | In a slower moving repo, it's entirely possible only 1 or 2 patches are ever being check'd per day.. and often it's check->gate immediately. For that scenario, waiting for the tests twice is often annoying. | 17:00 |
SpamapS | So I wonder if zuul could like, keep a cache of hashes that passed check, and if they end up right back in gate.. just skip the ones that already passed with that hash. | 17:01 |
SpamapS | Could set something on a job to say it's ok, like: fast-forward-pipeline: check | 17:04 |
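SpamapS' idea could be sketched like so (all names invented; a real key would have to hash the full repo state across required projects, not just one sha):

```python
import hashlib

def state_hash(repo_shas):
    """Collapse a mapping of repo -> commit sha into one cache key."""
    blob = ",".join("%s:%s" % (r, s) for r, s in sorted(repo_shas.items()))
    return hashlib.sha256(blob.encode()).hexdigest()

class CheckResultCache:
    """Remember which (job, repo-state) pairs already passed in check,
    so an equivalent gate run could skip re-running them."""

    def __init__(self):
        self._passed = set()

    def record_pass(self, job, key):
        self._passed.add((job, key))

    def can_skip(self, job, key):
        return (job, key) in self._passed
```

Sorting the repo/sha pairs makes the key independent of dict ordering, so check and gate compute the same hash for the same effective repo state.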
*** elyezer has quit IRC | 17:14 | |
*** elyezer has joined #zuul | 17:14 | |
dmellado | rcarrillocruz, just read your message | 17:33 |
dmsimard | SpamapS: the scenario thing I propose shifts the logic from the job definition to the variable files | 17:33 |
dmellado | It'd be awesome to clarify that, thanks for bringing that up | 17:33 |
dmsimard | I don't have a strong opinion one way or another, I think I'll really need to hit that use case myself to make up my mind as to what I think would be the best idea | 17:34 |
*** jpena is now known as jpena|off | 17:34 | |
corvus | SpamapS: you could just skip check and go to gate | 17:37 |
corvus | SpamapS: openstack only has a 'must-pass check' requirement because it's big enough and moves fast enough that gate failures are expensive. for smaller systems, just relying on the gate to reject failures works great. | 17:38 |
corvus | SpamapS: but also, i like your idea and think it's worth exploring | 17:39 |
clarkb | is that info the sql db is already trackign? (commit hashes to job results) | 17:40 |
dmellado | dmsimard: rcarrillocruz | 17:43 |
dmellado | so, regarding those options on the subnodes | 17:43 |
dmellado | on the devstack-multinode gate | 17:44 |
corvus | clarkb: i imagine we'd need a bit more info, like sibling repo states | 17:44 |
dmellado | I was wondering if there's any plan going on the make that selectable via a default job | 17:44 |
dmellado | + I've realized that the support for centos-7 nodes is broken due to virtualenv installation | 17:44 |
dmellado | it's done using the package ansible module but the names are different, so kaboom xD | 17:45 |
clarkb | dmellado: see https://review.openstack.org/#/c/540704/ and its parent | 17:45 |
clarkb | virtualenv install should be getting fixed and 540704 shows you how to swap out different nodesets | 17:45 |
pabelanger | https://review.openstack.org/541010/ | 17:46 |
dmellado | clarkb but IIUC that last patch would just show a failure | 17:48 |
clarkb | dmellado: ? | 17:48 |
dmellado | clarkb: I put this up for centos/fedora where the package name is different | 17:48 |
clarkb | it shouldn't because the images come with virtualenv | 17:48 |
clarkb | so you have to preinstall virtualenv | 17:48 |
clarkb | is essentially what ianw is saying | 17:48 |
dmellado | got it | 17:48 |
dmellado | so then I shouldn't install anything | 17:49 |
dmellado | but the centos7 image is wrong | 17:49 |
clarkb | dmellado: I'm not sure I have enough context to answer those questions | 17:49 |
clarkb | but 540704 shows things working on our centos7 images | 17:49 |
dmellado | clarkb: sure, let me give you a little bit of context then | 17:49 |
clarkb | and suse and fedora | 17:49 |
dmellado | I've got a WIP patch for a multinode gate based on centos7 | 17:50 |
dmellado | and it failed on the check | 17:50 |
dmellado | basically because it was looking for a package called virtualenv | 17:50 |
dmellado | which is called, python-virtualenv in fedora and rhel | 17:50 |
clarkb | yes which is now in the process of being addressed. If you depends on 541010 like 540704 does you should be fine | 17:51 |
dmellado | ianw's patch should resolve it, as it removed the dependency on it | 17:51 |
dmellado | cool then | 17:51 |
dmellado | I'll abandon mine | 17:51 |
clarkb | and then 540704 shows you how to mix in different node sets against existing jobs | 17:51 |
dmellado | clarkb: thanks! I'll take a look | 17:52 |
dmellado | clarkb: now that you're around I've also another question | 17:54 |
dmellado | is there any way to disable *all services* in devstack_services: | 17:55 |
dmellado | and just enable some selected ones? | 17:55 |
dmellado | I didn't get to see any sample about that | 17:55 |
dmellado | and it might just be a one-liner | 17:55 |
*** chrnils has quit IRC | 17:55 | |
clarkb | I think you have to define your own base job with no services set, unless one of those exists already | 17:56 |
clarkb | I'm looking at the role | 17:56 |
dmellado | basically I'd like to mimic what was ENABLED_SERVICES= | 17:56 |
dmellado | and then enable_service foo and so | 17:57 |
dmellado | in order to be explicit | 17:57 |
corvus | you can disable them individually, but there isn't a way to override a set to disable everything. | 17:58 |
dmellado | I see, so then I'll just create a base disabling everything as a parent for that | 17:59 |
dmellado | thanks! | 17:59 |
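A base job like the one dmellado plans could look roughly like this (job and service names are illustrative; the idea relies on a child's devstack_services entries being merged over the parent's, so a child only flips individual services back on):

```yaml
# Sketch: a parent disabling everything, mimicking ENABLED_SERVICES="".
- job:
    name: devstack-no-services
    parent: devstack
    vars:
      devstack_services:
        # every service inherited from the parent, explicitly off
        horizon: false
        tempest: false
        cinder: false
        # ...

# A child then enables only what it needs.
- job:
    name: devstack-keystone-only
    parent: devstack-no-services
    vars:
      devstack_services:
        mysql: true
        rabbit: true
        keystone: true
```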
*** openstackgerrit has joined #zuul | 18:05 | |
openstackgerrit | Merged openstack-infra/zuul master: Retry more aggressively if merger can't fetch refs https://review.openstack.org/536151 | 18:05 |
openstackgerrit | Matthieu Huin proposed openstack-infra/zuul master: zuul web: add admin endpoint, enqueue & autohold commands https://review.openstack.org/539004 | 18:17 |
*** sshnaidm|ruck is now known as sshnaidm|off | 18:18 | |
dmsimard | dmellado: what's the use case in using the devstack job if you don't want to enable any service ? | 18:31 |
*** tosky has quit IRC | 18:33 | |
*** harlowja has joined #zuul | 19:03 | |
rcarrillocruz | He wants to enable on a node-by-node basis, not disable everything for the test's purpose. So disable in parent, then enable in child | 19:44 |
dmsimard | rcarrillocruz: might be easier to just spin off a new job without this default configuration instead of trying to override it | 19:50 |
clarkb | zuulians. Trying to work up a rough schedule for the PTG and about half the proposed infra topics are zuul or nodepool related. Would wednesday and half of thursday or half of thursday and friday be better for focused zuul work? | 20:56 |
clarkb | my hunch is that wednesday and half of thursday may be better due to travel on friday and generally not being completely exhausted already | 20:56 |
corvus | clarkb: i favor earlier for those reasons, but don't have anything else on the calendar, so nothing blocking friday. | 20:57 |
pabelanger | clarkb: wfm | 20:59 |
openstackgerrit | Merged openstack-infra/zuul master: Cleanly shutdown zuul scheduler if startup fails https://review.openstack.org/542159 | 21:00 |
openstackgerrit | Merged openstack-infra/zuul master: Run zuul-stream-functional on every change to ansible https://review.openstack.org/542203 | 21:01 |
*** pabelanger has quit IRC | 21:05 | |
*** pabelanger has joined #zuul | 21:05 | |
corvus | clarkb: can you review https://review.openstack.org/543619 and https://review.openstack.org/540489 when you get a sec? | 21:08 |
corvus | SpamapS: can you take a look at my revisions on https://review.openstack.org/542518 ? | 21:09 |
rcarrillocruz | Yeah I am off early on Friday, would prefer wed and thu | 21:18 |
*** myoung is now known as myoung|bbl | 21:33 | |
openstackgerrit | Merged openstack-infra/zuul master: Don't decode str in LogStreamingHandler https://review.openstack.org/543365 | 21:34 |
clarkb | ok stuck zuul in the first half of the hacking days https://ethercalc.openstack.org/cvro305izog2 | 21:34 |
openstackgerrit | Merged openstack-infra/zuul master: Fix broken fail_json in zuul_console https://review.openstack.org/543370 | 21:40 |
*** elyezer has quit IRC | 21:45 | |
*** openstackstatus has joined #zuul | 21:46 | |
*** ChanServ sets mode: +v openstackstatus | 21:46 | |
*** elyezer has joined #zuul | 21:47 | |
clarkb | corvus: https://review.openstack.org/#/c/543619/2/zuul/model.py line 1076 won't variants have the same name? | 21:54 |
clarkb | corvus: put another way should that be == instead of != | 21:55 |
corvus | clarkb: yes they will have the same name -- that's trying to detect when we cross from one job to another and so it resets the attribute on that transition | 21:56 |
clarkb | corvus: but its setting it to the parent's job value which may be true? | 21:56 |
clarkb | oh but that is specifically in apply variant | 21:57 |
* clarkb needs to wrap head around this | 21:57 | |
corvus | clarkb: for example, we start by copying the base job, then apply the unittest job on top of that. so the first time we call that method, "self" is a copy of the base job, and "other" is "unittests". 1076 is true, therefore we forget whatever value of abstract we already had (which we got from "base") and instead set it to whatever unittest is. | 21:58 |
jhesketh | Morning | 21:58 |
corvus | clarkb: if there's a second unittest variant, then the next invocation "self" is what we ended up with from the last call. 1076 is false (since the names are both "unittest"). so we either want to stick with the value we had before, or, if the second unittest variant set abstract to true, we should be set to true (once a variant sets abstract, we don't want to unset it) | 22:00 |
clarkb | corvus: ok thanks, I think I get it now | 22:00 |
clarkb | I think I had self and other mixed up | 22:00 |
corvus | i wrote that code 5 times | 22:01 |
andreaf | dmellado, corvus: actually I added a mechanism for that, with devstack_base_services | 22:03 |
andreaf | dmellado, corvus: sorry with the special "base" devstack_service | 22:04 |
corvus | andreaf: i don't see devstack_base_services set in the devstack repo | 22:05 |
andreaf | dmellado, corvus: https://github.com/openstack-dev/devstack/blob/master/roles/write-devstack-local-conf/README.rst | 22:05 |
andreaf | corvus, dmellado: devstack_base_services can be used to add or remove features as defined in the test-matrix | 22:06 |
corvus | andreaf, dmellado: can we move this to #openstack-qa? | 22:06 |
openstackgerrit | Merged openstack-infra/zuul master: Add abstract job attribute https://review.openstack.org/543619 | 22:12 |
openstackgerrit | Merged openstack-infra/nodepool master: Add native distro test jobs https://review.openstack.org/543268 | 22:29 |
*** hashar has quit IRC | 22:43 | |
*** elyezer has quit IRC | 23:10 | |
*** elyezer has joined #zuul | 23:11 | |
openstackgerrit | Ian Wienand proposed openstack-infra/nodepool master: Unpause Xenial build for non-src functional test https://review.openstack.org/544125 | 23:22 |
*** jimi|ansible has quit IRC | 23:33 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!