jamielennox | pabelanger: quick question, https://github.com/openstack-infra/zuul/blob/feature/zuulv3/playbooks/base-post.yaml any reason you use a synchronize in one place and a copy in the other? | 00:00 |
jamielennox | they would seem to be doing the same job | 00:00 |
Shrews | jamielennox: actually, hrm... we'd have to get to the Node that fails in order to test that | 00:00 |
pabelanger | jamielennox: yes, it is mostly a hack right now | 00:00 |
pabelanger | we should add a fat warning on the copy | 00:00 |
Shrews | not quite sure how to do that offhand | 00:00 |
pabelanger | because, we are actually modifying code outside the job DIR | 00:01 |
pabelanger | from an untrusted job | 00:01 |
jamielennox | pabelanger: right, there's a few things i'm discovering here where trusted vs untrusted is going to be an interesting problem | 00:01 |
pabelanger | jamielennox: tl;dr, it should be synchronize, but the job is untrusted and it will fail to run on executor | 00:01 |
jamielennox | pabelanger: wait, why will one fail and the other succeed ? | 00:02 |
pabelanger | jamielennox: that is also going to change once we add our ssh keys to logs.o.o | 00:02 |
pabelanger | jamielennox: first syncs to jobdir | 00:02 |
pabelanger | which we allow | 00:02 |
pabelanger | if you sync outside of it, we don't allow that | 00:02 |
jamielennox | pabelanger: oh, right, i see, the copy is a local copy which is the equivalent of place it on a web server somewhere | 00:02 |
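The distinction being worked out above can be sketched as two Ansible tasks. This is a rough illustration, not the actual base-post.yaml; the paths and variable names are invented:

```yaml
# Illustrative only -- not the real base-post.yaml. synchronize pulls
# logs from the test node back into the executor's job dir, which is
# allowed for untrusted jobs; copy is a purely local copy on the
# executor out to a published path, i.e. writing outside the job dir,
# which is the part slated to be blocked for untrusted jobs.
- hosts: all
  tasks:
    - name: Collect logs from the node into the job dir
      synchronize:
        mode: pull
        src: "{{ ansible_user_dir }}/workspace/logs/"
        dest: "{{ log_root }}/"

- hosts: localhost
  tasks:
    - name: Publish logs with a local copy (trusted context only)
      copy:
        src: "{{ log_root }}/"
        dest: "/srv/static/logs/{{ build_uuid }}/"
```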
pabelanger | jamielennox: right, but we are going to block (plug) that. just haven't done so yet | 00:03 |
jamielennox | so publish logs will become something else | 00:03 |
pabelanger | correct | 00:03 |
pabelanger | we'll sync to a remote host | 00:03 |
jamielennox | it's really interesting to have that section in a user-modifiable ansible | 00:04 |
jamielennox | similar to success-url and failure-url, i can see no reason i would want that to be user modifiable | 00:04 |
pabelanger | most of these playbooks will move to trusted, we just have them untrusted for now to iterate faster | 00:05 |
jamielennox | is there a way to specify running pre and post on every job? | 00:06 |
jamielennox | some of the prepare-workspace stuff would seem like you wouldn't want a user job to have to parent base to get that information | 00:07 |
jeblair | jamielennox: inherit from base is the way | 00:07 |
jamielennox | jeblair: is pre and post something overridable from an untrusted job? | 00:08 |
jeblair | jamielennox: the vision is that the site admin configures the base job with pre and post playbooks that do all this. most of this should also be in zuul's stdlib, so that site admins don't even have to write it either. i'd like success/failure url to be populated by the log publishing playbook, but we haven't plumbed that yet. | 00:08 |
jeblair | jamielennox: you can't override pre/post within an inheritance hierarchy, only add to them. if you don't want a parent's pre/post, then you have to avoid inheriting from it. | 00:09 |
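A minimal sketch of that additive inheritance model (job names and playbook paths are illustrative):

```yaml
# Child pre/post playbooks nest inside the parent's; they cannot
# replace them. Run order here: base-pre, tox-pre, the job itself,
# tox-post, base-post.
- job:
    name: base
    pre-run: playbooks/base-pre
    post-run: playbooks/base-post

- job:
    name: tox
    parent: base
    pre-run: playbooks/tox-pre
    post-run: playbooks/tox-post
```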
jamielennox | jeblair: yea, i'm aware that everything i'm playing with atm is still a moving target - just in some places i'm trying to distill the vision from the progress | 00:09 |
jamielennox | jeblair: oh, nice - excellent | 00:09 |
jeblair | jamielennox: yeah, so generally, i would expect every job to have a single base job as the ultimate parent. | 00:10 |
jeblair | jamielennox: i'd also expect some prototypical parent jobs too; like: base -> tox -> python27 | 00:11 |
jeblair | jamielennox: (where base has git repo copy and log copy; tox adds on to that installation of deps and subunit log processing, etc) | 00:11 |
jamielennox | that makes sense, and it's really not an impediment to users creating jobs to have to specify a parent to get basics like log publishing | 00:12 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Revert "Configure regional mirrors for our jobs" https://review.openstack.org/446786 | 00:13 |
Shrews | k, n-l running better now | 00:16 |
pabelanger | cool | 00:16 |
Shrews | jeblair: if you're interested, i'm still seeing IN_USE nodes that are locked w/o requests (0000000856 and 0000000861) | 00:17 |
Shrews | there are currently no requests at all | 00:17 |
jeblair | i will look | 00:18 |
Shrews | 856 allocated to 100-0000002522, 861 allocated to 100-0000002524 | 00:18 |
pabelanger | sorry for the pending noise | 00:18 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Create run-cover role https://review.openstack.org/441332 | 00:18 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Create zuul_workspace_root job variable https://review.openstack.org/441441 | 00:18 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Organize playbooks folder https://review.openstack.org/441547 | 00:18 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Rename prepare-workspace role to bootstrap https://review.openstack.org/441440 | 00:18 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Add run-docs role and tox-docs job https://review.openstack.org/441345 | 00:18 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Add net-info to bootstrap role https://review.openstack.org/441617 | 00:18 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Add revoke-sudo role and update tox jobs https://review.openstack.org/441467 | 00:18 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Create tox-tarball job https://review.openstack.org/441609 | 00:18 |
Shrews | node AZs are populated now \o/ | 00:19 |
* Shrews calling it an evening | 00:19 | |
jamielennox | pabelanger: aww, i just spent a day or two emulating all that :) | 00:20 |
jeblair | Shrews: locked and in use seems pretty normal, yeah? | 00:21 |
jeblair | 2017-03-17 00:13:41,588 INFO zuul.ExecutorClient: Execute job tox-cover (uuid: f722dcbc76274a9b9e7aa88741995c8e) on nodes <NodeSet OrderedDict([('ubuntu-xenial', <Node 0000000856 ubuntu-xenial:ubuntu-xenial>)])> for change <Change 0x7f1e1830e450 446785,1> with dependent changes [] | 00:21 |
jeblair | Shrews: zuul thinks it's currently running a job on it | 00:22 |
jeblair | 2017-03-17 00:21:51,977 INFO zuul.nodepool: Returning nodeset <NodeSet OrderedDict([('ubuntu-xenial', <Node 0000000856 ubuntu-xenial:ubuntu-xenial>)])> | 00:22 |
Shrews | jeblair: hrm, yeah. maybe i expected the request to exist still. that seems wrong | 00:22 |
Shrews | things are progressing now | 00:22 |
Shrews | jeblair: sorry | 00:22 |
jeblair | Shrews: oh, zuul deletes the req as soon as it is fulfilled | 00:22 |
jeblair | np | 00:22 |
jeblair | "everything's fine" are the best bugs to find | 00:23 |
Shrews | right? | 00:23 |
Shrews | imma REALLY leave before we find more bugs :) | 00:23 |
Shrews | night | 00:23 |
jeblair | hey this is fun to watch | 00:23 |
jeblair | Shrews: goodnight! | 00:24 |
Shrews | yeah, i've been watching it all day! :) | 00:24 |
pabelanger | jamielennox: I'll look at 446749 tomorrow, I've see that locally too | 00:24 |
jamielennox | pabelanger: yea, i went down a bit of an ansible rabbit hole looking for a better way to do that | 00:25 |
jamielennox | but it probably doesn't matter until we have a way for users to specify ansible groups | 00:25 |
jamielennox | again, it's working in my install - but i've no idea how to write a test for that | 00:26 |
pabelanger | remote: https://review.openstack.org/446791 Bump nl01.o.o to 10 servers | 00:28 |
jeblair | jhesketh: i just noticed that the individual job progress bars at http://zuulv3-dev.openstack.org/ don't popup the times when you mouseover them | 00:29 |
pabelanger | jeblair: Shrews: if you think we are ready^ | 00:29 |
jeblair | pabelanger: that's what we reserved, right? | 00:29 |
pabelanger | jeblair: yes, the total is 100 servers, and nodepool.o.o is using 90 ATM | 00:29 |
jeblair | pabelanger: tbh, i wouldn't mind spending a bit more time in our artificially constrained capacity. we found some good bugs that way. | 00:30 |
jhesketh | jeblair: that looks like a regression, yes... will take a look | 00:30 |
pabelanger | sure, that's fine with me too | 00:30 |
jamielennox | pabelanger: is there some way to template things into project-config from the host? | 00:41 |
jamielennox | pabelanger: so for example, my use case is i don't want to put the equivalent of logs.o.o into project-config because it makes it harder to replicate that environment for testing | 00:41 |
jamielennox | is there a way we can like -e @/etc/host-data.yml in plays that we can inject deployment specific stuff into project-config? | 00:42 |
jamielennox | or something out of the executor specific config ? | 00:43 |
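What the question above is reaching for, in plain-Ansible terms, is an extra vars file supplied per deployment. The file name and keys here are hypothetical:

```yaml
# /etc/host-data.yml -- per-deployment values, injected with something
# like: ansible-playbook -e @/etc/host-data.yml ...
# so project-config playbooks can say "{{ log_host }}" without
# hardcoding a particular site.
log_host: logs.example.org
log_path: /srv/static/logs
```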
jhesketh | jeblair: mind if I rebase your sql forward port that is in merge conflict? | 00:52 |
* jhesketh assumes it's okay | 00:58 | |
openstackgerrit | Joshua Hesketh proposed openstack-infra/zuul feature/zuulv3: Port SQLAlchemy reporter to v3 driver structure https://review.openstack.org/442114 | 00:58 |
openstackgerrit | Joshua Hesketh proposed openstack-infra/zuul feature/zuulv3: Remove more swift configurations https://review.openstack.org/446056 | 00:58 |
openstackgerrit | Joshua Hesketh proposed openstack-infra/zuul feature/zuulv3: Move alembic into sql driver https://review.openstack.org/442124 | 00:58 |
openstackgerrit | Joshua Hesketh proposed openstack-infra/zuul feature/zuulv3: Port SQLAlchemy reporter to v3 driver structure https://review.openstack.org/442114 | 01:13 |
openstackgerrit | Joshua Hesketh proposed openstack-infra/zuul feature/zuulv3: Remove more swift configurations https://review.openstack.org/446056 | 01:13 |
openstackgerrit | Joshua Hesketh proposed openstack-infra/zuul feature/zuulv3: Move alembic into sql driver https://review.openstack.org/442124 | 01:13 |
openstackgerrit | Jamie Lennox proposed openstack-infra/zuul feature/zuulv3: Re-add the ability to set username on zuul-executor https://review.openstack.org/446308 | 01:24 |
SpamapS | jamielennox: you're saying you want variables that vary by.. what? Node provider? | 01:26 |
jamielennox | SpamapS: no, by deployment | 01:26 |
jamielennox | so i'm doing a POC now that basically just lets you add an additional file to the ansible run | 01:27 |
jamielennox | so that we can say something like "{{ log_host }}" in the project-config playbooks | 01:27 |
jamielennox | but have "{{ log_host }}" point to logs.o.o only on a production deploy | 01:28 |
jamielennox | and, i don't know, logs-testing the rest of the time | 01:28 |
jamielennox | will let us do things like actually test log publishing in vagrant deploys | 01:28 |
jamielennox | using real project-config files | 01:28 |
SpamapS | jamielennox: ew, vagrant. :-P | 01:29 |
jamielennox | well vagrant as an example, but in our case most of us have spun up some form of dev environment and we need to be able to override the logs url by environment | 01:30 |
jamielennox | previously we did that by templating layout.yaml, but that doesn't work anymore | 01:30 |
SpamapS | Anyway, so what you want is a well tested set of standard jobs, that get customized by local site configs? | 01:30 |
jamielennox | well put, yes | 01:30 |
SpamapS | I think you can just do that now, call it std-py27 and then on site have py27 inherit std-py27, no? | 01:32 |
jamielennox | SpamapS: there's nowhere i can see to inject the site configs though | 01:32 |
SpamapS | And have most of it in roles with defaults, but the py27 role would depend on std-py27 with values for the site | 01:33 |
jamielennox | zuul writes out a vars file with job_vars and it looks like you can add to that via individual job configuration, but not per site | 01:33 |
SpamapS | Roles is where you'd inject the site bits | 01:33 |
SpamapS | Or if you can add to them in the job config, just do it in the site specific one? | 01:34 |
jamielennox | SpamapS: modifying roles still means making project-config aware that you run different things in different environments though right? | 01:35 |
SpamapS | But it might be a good idea to be able to override at the site level rather than have to make a new site specific job | 01:36 |
jamielennox | SpamapS: i'm not sure if we're on the same page | 01:38 |
jamielennox | SpamapS: i'll see if my POC works and then point you at it to show you what i'm trying to accomplish | 01:38 |
SpamapS | K | 01:38 |
SpamapS | I should probably follow along via laptop instead of phone | 01:38 |
*** jamielennox is now known as jamielennox|away | 01:56 | |
*** jamielennox|away is now known as jamielennox | 02:01 | |
jamielennox | SpamapS: ok, so using https://github.com/jamielennox/zuul/commit/eb9a63ca342f64c6a973b15b06a9e36789f19a55 | 04:12 |
jamielennox | i made an /etc/zuul/site-variables.yaml containing: http://paste.openstack.org/show/603046/ | 04:12 |
jamielennox | then my project-config looks like: https://github.com/jamielennox/project-config/blob/master/playbooks/roles/base/tasks/post.yaml#L12 | 04:13 |
jamielennox | the idea is simply to split some things that are site specific vs those that are job/specific so we can reuse the job definitions across multiple sites | 04:14 |
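A rough sketch of that split (the actual file contents sit behind the paste/github links above, so these names and values are illustrative):

```yaml
# /etc/zuul/site-variables.yaml -- site-specific, lives on the
# executor host, never in project-config:
log_server: logs.example.org

# A project-config task can then reference it and stay reusable
# across deployments, e.g.:
#   - name: Upload logs
#     synchronize:
#       dest: "rsync://{{ log_server }}/logs/"
```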
openstackgerrit | Jamie Lennox proposed openstack-infra/nodepool feature/zuulv3: Split webapp into its own nodepool application https://review.openstack.org/445675 | 04:27 |
openstackgerrit | Jamie Lennox proposed openstack-infra/nodepool feature/zuulv3: Refactor nodepool apps into base app https://review.openstack.org/445674 | 04:27 |
openstackgerrit | Jamie Lennox proposed openstack-infra/nodepool feature/zuulv3: Allow loading logging config from yaml https://review.openstack.org/445656 | 04:41 |
*** bhavik1 has joined #zuul | 04:53 | |
*** bhavik1 has quit IRC | 05:15 | |
SpamapS | jamielennox: kk, I think this deserves a wider discussion because variables and ansible make me go o_O | 05:39 |
jamielennox | SpamapS: yea, i very much expect it to be a wider discussion, there are questions about trusted/untrusted etc as well and whether you publish that file with the logs | 05:40 |
SpamapS | jamielennox: if I weren't using Zuul, and just running ansible on multiple sites, I'd have my inventory set group vars. | 05:43 |
SpamapS | http://docs.ansible.com/ansible/intro_inventory.html#group-variables | 05:43 |
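The plain-Ansible mechanism SpamapS links to looks roughly like this (paths and values illustrative):

```yaml
# inventory/group_vars/prod.yml -- applies to hosts in the [prod]
# inventory group:
log_host: logs.o.o

# inventory/group_vars/dev.yml would carry the dev value instead, e.g.
# log_host: logs-testing.example.org
```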
SpamapS | jamielennox: but since Zuul generates inventories, seems like it's going to have to take the reins on this anyway. | 05:45 |
jamielennox | SpamapS: the problem here is that because we're cloning project-config independent of our deploy scripts there's really no way you could distinguish between the groups | 05:47 |
jamielennox | like if i want to re-use that project-config between a dev/prod environment i would still need some way of informing the ansible what environment we are in | 05:47 |
SpamapS | Agreed, though you can have multiple config repos | 05:49 |
SpamapS | so you can have a common-config and then prod-config and dev-config | 05:49 |
SpamapS | but I'm still not sure how that would actually work here :-P | 05:50 |
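The multi-repo idea could look something like this in a tenant config (the syntax is approximate for the v3 of this era, and the repo names are invented):

```yaml
# common-config holds the shared jobs; the site repo layered on top
# supplies the deployment-specific bits. A dev deployment would list
# dev-config here instead of prod-config.
- tenant:
    name: example
    source:
      gerrit:
        config-projects:
          - common-config
          - prod-config
```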
jamielennox | yep, but even then dev- is still one person's dev so you basically end up having to replicate the job for everyone who actually does a custom install | 05:59 |
SpamapS | jamielennox: replicate? 'inherit: jobname' ? | 06:03 |
jamielennox | SpamapS: we can do this later, but the problem isn't the running, it's that publishing is part of the same ansible | 06:08 |
jamielennox | and publishing is deploy specific not job specific | 06:10 |
jamielennox | like i shouldn't need to recreate or re-inherit the tox job just because the parent: base log ip has changed | 06:11 |
SpamapS | jamielennox: i wonder if that could then be handled by a setting in the base job? | 06:18 |
jamielennox | SpamapS: right, that's what https://github.com/jamielennox/project-config/blob/master/playbooks/roles/base/tasks/post.yaml#L19 is, a way to set log ip in the base task | 06:21 |
jamielennox | and bonnyci job inherits from that base: https://github.com/jamielennox/project-config/blob/master/zuul.yaml#L24 | 06:22 |
jamielennox | so now i need something that's not defined in project-config that lets me change that settings | 06:22 |
jamielennox | hence the site specific variables file | 06:24 |
*** yolanda has joined #zuul | 06:45 | |
*** Cibo_ has joined #zuul | 07:56 | |
*** hashar has joined #zuul | 08:17 | |
*** Cibo_ has quit IRC | 08:57 | |
*** bhavik1 has joined #zuul | 09:04 | |
*** bhavik1 has quit IRC | 09:10 | |
*** jesusaur has quit IRC | 09:13 | |
*** jesusaur has joined #zuul | 09:15 | |
*** rcarrillocruz has quit IRC | 11:00 | |
*** zaro has quit IRC | 12:21 | |
*** zaro has joined #zuul | 12:24 | |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Refactor nodepool apps into base app https://review.openstack.org/445674 | 12:51 |
*** rcarrillocruz has joined #zuul | 12:51 | |
tobiash_ | hi | 12:57 |
tobiash_ | just noticed that nodepool-builder in my environment creates md5sum and sha256 sum of the images | 12:58 |
tobiash_ | are these used somewhere in nodepool? | 12:59 |
Shrews | tobiash_: shade uses those to check for duplicate images. I don't *think* nodepool uses it anywhere else | 13:05 |
tobiash_ | Shrews: my images are approaching 30gb and take a bit for checksumming. So I wondered whether it would break things to inhibit that. | 13:06 |
Shrews | i'm not sure what would happen, tbh. it would probably allow you to upload duplicate images, at least | 13:09 |
Shrews | tobiash_: actually, i think shade is going to calculate those values anyway | 13:11 |
tobiash_ | Shrews: ok, then I'll leave it as is and maybe give the nodepool-builder more ram :) | 13:12 |
Shrews | yeah | 13:12 |
Shrews | jeblair: mordred's https://review.openstack.org/297950 lgtm, but giving you another chance to review | 13:28 |
Shrews | actually, nm. that's failing the dsvm jobs | 13:30 |
pabelanger | tobiash_: if you remove DIB_CHECKSUM: '1' from nodepool.o.o, diskimage-builder will stop creating them. However, shade will just create it. And if you are uploading to more than 1 provider, it's better to create the checksum files with DIB than shade (since shade does it for each provider) | 13:35 |
tobiash_ | pabelanger: thanks, then I'll leave it on as checksumming has to be paid anyway | 13:39 |
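For reference, the knob pabelanger describes is a diskimage-builder environment variable set in the builder's diskimage config (the image name here is illustrative):

```yaml
# With DIB_CHECKSUM: '1', diskimage-builder writes .md5 / .sha256
# files alongside the image once; without it, shade recomputes the
# checksums itself at upload time, once per provider.
diskimages:
  - name: ubuntu-xenial
    env-vars:
      DIB_CHECKSUM: '1'
```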
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Rename NodeCleanupWorker to DeletedNodeWorker https://review.openstack.org/446649 | 14:07 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Split DeleteNodeWorker into two threads https://review.openstack.org/446651 | 14:07 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Create BaseCleanupWorker class https://review.openstack.org/446650 | 14:07 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Add ansible-lint to tox (pep8) https://review.openstack.org/438276 | 14:36 |
mordred | Shrews: maybe we should just give up on that patch for master | 15:01 |
Shrews | mordred: that's your call | 15:02 |
mordred | Shrews: I think we should probably just get the shim done so that v3 can be master | 15:03 |
*** bhavik1 has joined #zuul | 15:28 | |
SpamapS | jamielennox: yeah, I'm just suggesting you have one more base between base and bonnyci, and that is what varies per site. | 15:44 |
pabelanger | SpamapS: mordred: jeblair: Do we want to try and land topic:zuulv3-ansible today? I think that gives us a good base of tox jobs to maybe think about including nodepool in zuulv3 testing too. | 15:50 |
jeblair | mordred: i like that. | 15:52 |
jeblair | SpamapS, jamielennox: note that the "git" driver i recently added may be useful here. that lets you put a local git repo into your configuration without needing it to be in github/gerrit/etc. | 15:54 |
jeblair | SpamapS, jamielennox: so you can add a low-overhead site-local config repo | 15:55 |
jeblair | pabelanger: i'd like to review that stack when i finish up the current problem i'm on | 15:56 |
SpamapS | jeblair: yeah that's what I was thinking. Maybe jamielennox didn't realize that. | 16:03 |
SpamapS | I haven't looked closely, but if I pointed two projects at the same git repo, would it freak out? | 16:04 |
jeblair | SpamapS: it's kind of muddled now. i think i have a proposal to clarify things a bit. with that in place, you shouldn't be able to do that (project == repo and is globally unique) | 16:06 |
* jeblair digs | 16:07 | |
jeblair | SpamapS: http://lists.openstack.org/pipermail/openstack-infra/2017-March/005208.html | 16:07 |
SpamapS | jeblair: because I can see a situation where for dev I just want to use what's in my tree for base and for site, but then in prod I obviously want site-specific things in their own repo | 16:10 |
SpamapS | but that likely wouldn't work anyway | 16:10 |
* SpamapS unwinding things in my head | 16:10 | |
pabelanger | for example (single playbook), you could set up a reasonable default in your playbook (maybe testing mode), then from your job definition in .zuul.yaml assign production variables. Then do a variable check in your playbook to use those variables if the playbook is passed them, otherwise just set up using testing vars. | 16:18 |
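That default-plus-override pattern might look like this (variable names and hosts are invented):

```yaml
# If the job's .zuul.yaml vars section supplies log_host, it wins;
# otherwise the playbook falls back to a testing value.
- hosts: all
  tasks:
    - name: Publish logs to the configured or default host
      debug:
        msg: "publishing to {{ log_host | default('logs-testing.example.org') }}"
```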
SpamapS | jeblair: so this is more a question about the model of inheritance than anything else. If I have two jobs, say tox-py27 and tox-py35, that both inherit from base, I have to edit base to change anything for both jobs. | 16:19 |
SpamapS | So that behooves me to make all my jobs inherit from a job in the middle.. site-base | 16:20 |
SpamapS | And I need to be able to replace site-base relatively easily. | 16:20 |
pabelanger | you could edit both tox-py27 and tox-35 playbooks to add the same code | 16:20 |
pabelanger | or update base | 16:20 |
SpamapS | pabelanger: I could, but that wouldn't exactly scale to 1000 jobs and 5 sites. | 16:20 |
pabelanger | right, if all jobs need the same update, base is the place for it | 16:21 |
SpamapS | Except I want to test base as-is because I'm testing changes to my source repo. | 16:21 |
pabelanger | right, I think this is where you need to look at setting some vars in .zuul.yaml | 16:22 |
pabelanger | I see some potential magic using set_fact too | 16:24 |
SpamapS | I can't set anything in .zuul.yaml | 16:25 |
SpamapS | I have 1000 users | 16:26 |
SpamapS | with their own .zuul.yaml | 16:26 |
SpamapS | (hypothetically) | 16:26 |
SpamapS | But I want to be able to run a zuul in a new region, with a new logs server, and have their jobs work. | 16:26 |
SpamapS | I think a layer between base, and all-jobs, is necessary. | 16:27 |
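The layering being proposed, sketched out (names illustrative):

```yaml
# Every job parents site-base instead of base, so swapping out one
# site-base definition retargets all 1000 jobs for a new region or
# logs server without touching the jobs themselves.
- job:
    name: site-base
    parent: base
    vars:
      log_server: logs.region-a.example.org

- job:
    name: tox-py27
    parent: site-base
```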
clarkb | couldn't you use your nodepool var data for that? | 16:27 |
pabelanger | group vars in ansible too | 16:27 |
*** bhavik1 has quit IRC | 16:27 | |
clarkb | that gives you az region provider | 16:27 |
SpamapS | so just name my logs server after the region always? | 16:28 |
clarkb | not necessarily (yay DNS) | 16:28 |
clarkb | you could have txt records, CNAMEs whatever | 16:28 |
clarkb | but ya if you need per region data then using the per region data supplied seems like a good place to start | 16:29 |
SpamapS | also when I'm doing dev testing of the playbook.. | 16:29 |
SpamapS | we should probably let jamie explain the problem he's trying to solve | 16:29 |
SpamapS | because I still don't think I understood it | 16:29 |
jeblair | SpamapS: regardless of all the other ideas, it seems that 'base' and 'site-base' would be a way of solving the problem you describe. do you foresee a problem with that? | 16:31 |
SpamapS | 17:41 <jamielennox> pabelanger: is there some way to template things into project-config from the host? | 16:32 |
SpamapS | 17:41 <jamielennox> pabelanger: so example, my use case is i don't want to put the equivalent of logs.o.o into project-config because it makes it harder to replicate that environment for testing | 16:32 |
SpamapS | 17:42 <jamielennox> is there a way we can like -e @/etc/host-data.yml in plays that we can inject deployment specific stuff into project-config? | 16:32 |
SpamapS | that's jamie's description of the problem | 16:32 |
jeblair | i read it :) | 16:32 |
SpamapS | ok good | 16:32 |
pabelanger | the -e@ /etc/host-data.yaml would be the vars section from zuul.yaml, so that part is covered | 16:33 |
SpamapS | Except what Jamie was wanting to test, I think, was "Run zuul with all the jobs in all the .zuul.yamls, but on my laptop zuul" | 16:34 |
pabelanger | I'm not sure I understand the 'template things' part for project-config | 16:34 |
SpamapS | or s/laptop/private cloud tenant/ | 16:34 |
jeblair | this is a big conversation that is starting to feel like it's going in circles. SpamapS, i was asking you if you saw any problems with the scenario you were constructing with the base and site-base construct. i asked that because a bunch of people threw out ideas, all of which are valid and interesting. but it diverted the conversation from the very specific suggestion of multiple git repos. | 16:35 |
jeblair | so i'm just trying to figure out if i can mentally close that out, or if there's some engineering on that which still needs to be done. :) | 16:35 |
SpamapS | jeblair: sorry for going in circles. I'm worried I haven't characterized the original issue well. | 16:35 |
jeblair | SpamapS: no need to apologize, i'm just trying to keep thing straight in my head. and i think you have put forth an issue that's worth considering whether or not it's the same one as jamielennox | 16:36 |
SpamapS | jeblair: I think multiple git repos was me thrashing toward quick fixes before I really understood the problem. | 16:36 |
SpamapS | I'm going to set it aside and let jamielennox write up a clear use case so we can all get on the same page | 16:37 |
SpamapS | (BTW IIRC this came up because jamielennox is deploying v3 in our cloud so yay for that) | 16:38 |
jeblair | cool. i don't think the config syntax is quite ready for this kind of thing yet; the stuff in my email as well as some other things we need to do to make it safe for the 'third-party ci' case are probably going to be necessary before we start making very complex config structures. | 16:39 |
jeblair | other than mental exercises, we should probably not push it too much further than the somewhat simple things we've done so far | 16:40 |
jeblair | but the mental exercises are good, so we can say ("once this is done, we should be able to ...") | 16:40 |
jeblair | the third-party-ci stuff i'm referring to is basically that we need to update the main config file to allow a user to specify what top-level objects should be read from a given repo. that way a third-party ci can point their zuul at openstack-infra/devstack and get the job definitions but *not* the project definitions. | 16:41 |
jeblair | that sort of thing may be useful here as well. solutions to these kinds of problem may involve something like "point your dev zuul at this repo to get the jobs but not the projects, then point it at this repo to get the projects" | 16:42 |
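A sketch of that per-repo filtering. This knob did not exist yet at the time, so the syntax below is invented to illustrate the idea:

```yaml
# Load only job definitions, not project stanzas, from a foreign repo,
# so a third-party ci can reuse devstack's jobs without inheriting its
# project pipeline configuration.
- tenant:
    name: third-party-ci
    source:
      gerrit:
        untrusted-projects:
          - openstack-infra/devstack:
              include: [job]
```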
SpamapS | Oh I like that. | 16:48 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Decrypt secrets and plumb to Ansible https://review.openstack.org/446688 | 16:48 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Serve public keys through webapp https://review.openstack.org/446756 | 16:48 |
* SpamapS goes back to getting the current implementation done | 16:48 | |
*** Cibo_ has joined #zuul | 16:56 | |
openstackgerrit | Clark Boylan proposed openstack-infra/zuul master: Check ssh keyfile readability up front https://review.openstack.org/447066 | 16:58 |
*** nt has quit IRC | 17:15 | |
*** nt has joined #zuul | 17:15 | |
openstackgerrit | Clark Boylan proposed openstack-infra/zuul master: Close paramiko connections explicitly https://review.openstack.org/447066 | 17:25 |
openstackgerrit | Clark Boylan proposed openstack-infra/zuul master: Close paramiko connections explicitly https://review.openstack.org/447066 | 17:34 |
*** Cibo_ has quit IRC | 17:37 | |
*** hashar has quit IRC | 17:46 | |
openstackgerrit | Merged openstack-infra/zuul master: Close paramiko connections explicitly https://review.openstack.org/447066 | 17:46 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Isolate encryption-related methods https://review.openstack.org/447087 | 18:12 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Augment references of pkcs1 with oaep https://review.openstack.org/447088 | 18:12 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Create tox-db jobs https://review.openstack.org/447089 | 18:13 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Create tox-db jobs https://review.openstack.org/447089 | 18:14 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Create tox-db jobs https://review.openstack.org/447089 | 18:25 |
clarkb | pabelanger: re ^ weren't we moving away from tox-db and just going to run the setup script if found? | 18:26 |
pabelanger | clarkb: for JJB ya, but I think we could make it into a playbook | 18:27 |
pabelanger | I'm just testing things at this point, to see what it would look like | 18:27 |
pabelanger | but easy to add a role for the test script way | 18:28 |
clarkb | pabelanger: well I think in general we didn't want to have a separate set of JJB configs (or playbooks) for running unittests | 18:28 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Create tox-db jobs https://review.openstack.org/447089 | 18:29 |
pabelanger | clarkb: sure, don't have a preference currently, except to maybe move away from bash scripts | 18:29 |
pabelanger | it's also possible the job for zuul could have a pre-run for a tox-py27 variant for this | 18:30 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Create tox-db jobs https://review.openstack.org/447089 | 18:47 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: [WIP] Create tox-db jobs https://review.openstack.org/447089 | 19:04 |
jeblair | pabelanger: i don't see a reply to jhesketh's comment on 441440 | 19:12 |
pabelanger | jeblair: looking | 19:13 |
pabelanger | oh, we talked in channel about that. Let me find it | 19:14 |
pabelanger | jeblair: http://eavesdrop.openstack.org/irclogs/%23zuul/%23zuul.2017-03-15.log.html#t2017-03-15T20:41:14 is some discussion about it | 19:15 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: [WIP] Create tox-db jobs https://review.openstack.org/447089 | 19:17 |
jeblair | pabelanger: hrm. i remain unconvinced. :) i feel like roles should have descriptive names. | 19:18 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Create run-cover role https://review.openstack.org/441332 | 19:18 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Add run-docs role and tox-docs job https://review.openstack.org/441345 | 19:20 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Isolate encryption-related methods https://review.openstack.org/447087 | 19:24 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Augment references of pkcs1 with oaep https://review.openstack.org/447088 | 19:24 |
Shrews | pabelanger: jeblair: Not asking for it right now, but a good next phase for testing nodepool-launcher would be to add a second provider. We could do that, yeah? | 19:27 |
Shrews | seems to be performing well right now. might need to shake things up soon | 19:27 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Create zuul_workspace_root job variable https://review.openstack.org/441441 | 19:30 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Organize playbooks folder https://review.openstack.org/441547 | 19:30 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Add net-info to bootstrap role https://review.openstack.org/441617 | 19:30 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Add revoke-sudo role and update tox jobs https://review.openstack.org/441467 | 19:30 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Create tox-tarball job https://review.openstack.org/441609 | 19:30 |
jeblair | pabelanger, Shrews: we can probably siphon some off of vanilla? | 19:30 |
Shrews | wow, that produced a lot of requests | 19:30 |
pabelanger | jeblair: Shrews: no objection here | 19:31 |
jeblair | they're adding more jobs :) | 19:31 |
Shrews | 21 from those 5 | 19:31 |
pabelanger | Let me make changes to nodepool.o.o | 19:31 |
Shrews | neat | 19:31 |
pabelanger | propose* | 19:31 |
pabelanger | 447108 and 447109 for when we are ready for another cloud | 19:37 |
Shrews | pabelanger: umm, i thought we were using different provider names b/w production and v3? | 19:40 |
pabelanger | Shrews: not now. nodepool-id in v2 allows us to keep the provider name the same | 19:42 |
pabelanger | otherwise, if we change provider name, we need to upload images too | 19:42 |
pabelanger | Shrews: should we be using a different provider name? | 19:43 |
Shrews | the only reason we aren't seeing production instances as leaked v3 instances is b/c v3 looks for the provider name in the nodepool_provider_name instance meta, which thankfully doesn't exist in | 19:44 |
Shrews | production | 19:44 |
Shrews | nodepool_id support is not in v3 yet | 19:44 |
Shrews | that's yet to be merged from https://review.openstack.org/#/c/445325/1/nodepool/nodepool.py | 19:44 |
Shrews | jhesketh's master merge | 19:44 |
Shrews | so production would delete v3's nodes, but v3 could totally delete production's if the meta changed in production | 19:45 |
Shrews | production wouldn't* delete v3 | 19:46 |
Shrews | unless i've missed something somewhere | 19:46 |
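The leak check Shrews describes can be sketched roughly like this (a hedged illustration; function and metadata key names here are illustrative, not Nodepool's actual API): v3 only treats an instance as leaked if its metadata carries this provider's `nodepool_provider_name`, which production (v2) instances never set.

```python
# Hedged sketch of the leak check described above; names are
# illustrative, not Nodepool's real code.
def is_leaked(instance_meta, provider_name, known_node_ids):
    # Production (v2) instances lack nodepool_provider_name metadata,
    # so they never match and are left alone.
    if instance_meta.get('nodepool_provider_name') != provider_name:
        return False
    # Instances we own but no longer track are considered leaked.
    return instance_meta.get('nodepool_node_id') not in known_node_ids
```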
jeblair | nodepool_id *was* needed when v3 was creating json data | 19:46 |
jeblair | now that it isn't anymore, the fact that v0 uses json and v3 uses plain means they don't see each other | 19:47 |
Shrews | right, but if we had merged mordred's change into master... | 19:47 |
jeblair | so that's yet another reason not to merge mordred's change to master :) | 19:47 |
jeblair | yeah that ): | 19:47 |
Shrews | i think he may have decided to abandon that | 19:48 |
pabelanger | okay, cool. Cause I was going to ask why we haven't had issues yet :) | 19:49 |
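jeblair's point about the two metadata formats can be illustrated with a toy reader (an assumption for illustration: the old side stored a JSON blob while v3 stores a plain string; the function name is made up):

```python
import json

# Toy illustration: a v2-style reader that expects JSON metadata simply
# fails to parse v3's plain-text value, so the two sides don't "see"
# each other's nodes.
def read_v2_meta(value):
    try:
        return json.loads(value)  # old side wrote a JSON blob
    except ValueError:
        return None               # plain v3 value: invisible to v2
```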
jeblair | pabelanger, clarkb, fungi: mordred has reviewed the job tree -> graph change. do one of you have a few minutes to look it over? i'd like to merge it asap, because i'm about to write a bunch of code that has conceptual conflicts with it. | 19:51 |
jeblair | https://review.openstack.org/443973 and https://review.openstack.org/444055 | 19:51 |
Shrews | pabelanger: changes looked good, i just -1'd the first so no one merges it for a bit | 19:51 |
jeblair | my plan is to squash them after review but before merge. but i can squash now if you prefer. | 19:52 |
pabelanger | Shrews: sure, merge when ready | 19:52 |
clarkb | jeblair: as long as its follows the specs I reviewed I'm happy :) | 19:52 |
clarkb | (currently distracted by kernel samepage merging in an effort to make gate testing more reliably by reducing memory consumption) | 19:52 |
jeblair | i think so :) | 19:53 |
fungi | ooh, that reminds me, i have some zuul v3 spec updates to approve | 19:54 |
jeblair | clarkb, fungi, pabelanger: okay, unless you or someone else speaks up in a few minutes, i'll go ahead and assume mordred's code review + everyone else's spec review is sufficient and then squash and self-approve. | 19:56 |
fungi | i'm taking a look in moments, just skimming back over the specs real quick | 19:56 |
jeblair | fungi: ok cool, thanks :) | 19:57 |
pabelanger | ya, just looking too | 19:57 |
jeblair | i'll go ahead and rebase my secrets work on it then | 19:57 |
fungi | the main spec update is approved, jhesketh has a suggestion on the clarification change https://review.openstack.org/445022 which could either go in a new patchset or another follow-on change | 19:59 |
jeblair | i've generally left "report on this error condition" out of the spec as being unnecessarily detailed (it's quite long enough). but the implementation already does include that. i'll leave a response for the record. | 20:01 |
fungi | cool, with that review comment i'm happy to approve it now | 20:01 |
fungi | as for 443973... | 20:01 |
fungi | jeblair: the edit in doc/source/zuul.rst doesn't match the example it's commenting on. did you intend to adjust the example along with it? | 20:02 |
fungi | looks like the example is still the old-style yaml dep tree, and the project-final-test job (which was intended to demonstrate having multiple dependencies) isn't there at all | 20:04 |
jeblair | fungi: the docs are all wrong and need a rewrite anyway. that's a partial carryover from the original patch, which did update the jobs, but with the old syntax. i didn't want to spend time correcting an example which would still be wrong without much more work, but i also didn't want to lose that bit of prose which i thought would be useful. | 20:05 |
fungi | okay, just so long as it was intentional ;) | 20:06 |
jeblair | (my feeling, btw, is that we're probably a couple weeks out from being worthwhile to start working on docs) | 20:07 |
fungi | makes sense | 20:07 |
fungi | jeblair: the unsquashed pair have my +1 (which when added together, conveniently make a +2!) | 20:13 |
jeblair | fungi: hah! | 20:15 |
*** yolanda has quit IRC | 20:26 | |
*** yolanda has joined #zuul | 20:26 | |
*** Cibo_ has joined #zuul | 20:29 | |
*** Cibo_ has quit IRC | 20:35 | |
*** Cibo_ has joined #zuul | 20:36 | |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Decrypt secrets and plumb to Ansible https://review.openstack.org/446688 | 20:49 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Isolate encryption-related methods https://review.openstack.org/447087 | 20:49 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Add secret top-level config object https://review.openstack.org/446156 | 20:49 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Improve job dependencies using graph instead of tree https://review.openstack.org/443973 | 20:49 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Augment references of pkcs1 with oaep https://review.openstack.org/447088 | 20:49 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Associate secrets with jobs https://review.openstack.org/446687 | 20:49 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Add per-repo public and private keys https://review.openstack.org/406382 | 20:49 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Serve public keys through webapp https://review.openstack.org/446756 | 20:49 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Add support for job allowed-projects https://review.openstack.org/447134 | 20:49 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Add support for job allowed-projects https://review.openstack.org/447134 | 21:01 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Add 'allow-secrets' pipeline attribute https://review.openstack.org/447138 | 21:01 |
jeblair | that's the "Secrets" part of the spec implemented \o/ | 21:02 |
jeblair | and it collided with executor rename | 21:04 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Decrypt secrets and plumb to Ansible https://review.openstack.org/446688 | 21:05 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Add 'allow-secrets' pipeline attribute https://review.openstack.org/447138 | 21:05 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Isolate encryption-related methods https://review.openstack.org/447087 | 21:05 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Add secret top-level config object https://review.openstack.org/446156 | 21:05 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Improve job dependencies using graph instead of tree https://review.openstack.org/443973 | 21:05 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Augment references of pkcs1 with oaep https://review.openstack.org/447088 | 21:05 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Associate secrets with jobs https://review.openstack.org/446687 | 21:06 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Add support for job allowed-projects https://review.openstack.org/447134 | 21:06 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Add per-repo public and private keys https://review.openstack.org/406382 | 21:06 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Serve public keys through webapp https://review.openstack.org/446756 | 21:06 |
openstackgerrit | Jamie Lennox proposed openstack-infra/nodepool feature/zuulv3: Split webapp into its own nodepool application https://review.openstack.org/445675 | 21:20 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Rename NodeCleanupWorker to DeletedNodeWorker https://review.openstack.org/446649 | 21:26 |
openstackgerrit | Jamie Lennox proposed openstack-infra/nodepool feature/zuulv3: Allow loading logging config from yaml https://review.openstack.org/445656 | 21:27 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Create BaseCleanupWorker class https://review.openstack.org/446650 | 21:28 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Split DeleteNodeWorker into two threads https://review.openstack.org/446651 | 21:30 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Improve job dependencies using graph instead of tree https://review.openstack.org/443973 | 21:33 |
openstackgerrit | K Jonathan Harker proposed openstack-infra/zuul feature/zuulv3: Reenable test_build_configuration_conflict https://review.openstack.org/446275 | 22:07 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Create zuul_workspace_root job variable https://review.openstack.org/447152 | 22:17 |
jeblair | pabelanger: ^ i made an alternative suggestion | 22:18 |
pabelanger | jeblair: sure I'll look in a bit | 22:19 |
pabelanger | jeblair: left a comment on 441441 but have to run away. I am sure we'll talk about it more :) | 22:42 |
jeblair | pabelanger: goot chat :) | 22:56 |
jeblair | good chat even | 22:56 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Simplify the log url https://review.openstack.org/438028 | 23:11 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Remove url_pattern config parameter https://review.openstack.org/447165 | 23:11 |
jeblair | SpamapS, jhesketh: ^ i gave that a fresh pass. | 23:12 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Port SQLAlchemy reporter to v3 driver structure https://review.openstack.org/442114 | 23:18 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Remove more swift configurations https://review.openstack.org/446056 | 23:18 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: [WIP] Create tox-db jobs https://review.openstack.org/447089 | 23:18 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Move alembic into sql driver https://review.openstack.org/442124 | 23:18 |
SpamapS | jeblair: are you around? | 23:19 |
SpamapS | jeblair: I'm trying to understand the stuck jobs test.. I don't really understand how or when gate-noop gets enqueued. | 23:19 |
jeblair | SpamapS: sure! | 23:20 |
SpamapS | as I typed that I realized I haven't actually watched the 2.5 test run | 23:20 |
jeblair | SpamapS: it's because of the reconfiguration with the second config file | 23:20 |
jeblair | SpamapS: the first config file says "run project-merge in gate", so we start that (we don't get very far because we pause it in queue) | 23:21 |
jeblair | SpamapS: the second config says "run gate-noop in gate". notably, it *does not* say to run any other jobs, such as project-merge | 23:21 |
SpamapS | jeblair: ok, what has me confused is, when we reconfigure and add gate-noop, there's no new event that would enqueue it that I see | 23:21 |
jeblair | SpamapS: so the reconfiguration handler works out the difference and says "i have an item in the pipeline; i need to stop any jobs which don't exist anymore [project-merge]". | 23:23 |
jeblair | SpamapS: as for starting -- that will happen as long as the, erm, NNFI handler runs (it always tries to run as many jobs as it can) | 23:23 |
jeblair | SpamapS: so presumably something in the reconfiguration process triggers that; let me look up the exact path | 23:23 |
SpamapS | I think what I was missing was that the item would still be pending somewhere | 23:23 |
SpamapS | I guess I thought once all the current jobs were enqueued, the item was handled and we wouldn't go back and re-evaluate it on reconfiguration. | 23:24 |
jeblair | SpamapS: ah, it stays in the pipeline until it's reported, and anything in the pipeline is subject to reconfiguration. so yeah, things can change mid-stream for a change. | 23:25 |
SpamapS | cool ok, this is my first time stumbling into that reality. :) | 23:26 |
SpamapS | reading through the 2.5 test output logs now to get some sense of how it was when the test worked :) | 23:26 |
SpamapS | anybody else use ccze (or another colorizer) to look at test logs? :) | 23:27 |
SpamapS | I find it helps me to break it up visually | 23:27 |
jeblair | SpamapS: the event that causes NNFI (processQueue in manager/__init__.py) is actually the reconfiguration admin event. the scheduler run handler runs nnfi on every pipeline after every event is handled, including those. | 23:28 |
jeblair | SpamapS: so basically, the reconfigure is going to re-enqueue everything, cancel any obsolete jobs, then run the pipeline queue manager on all the pipelines which will start any new jobs | 23:28 |
jeblair | (the 'run the pipeline queue manager bit' is in Scheduler.run) | 23:29 |
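The reconfiguration flow jeblair describes might be sketched roughly like this (all class, function, and job names here are illustrative, not Zuul's real ones): items stay enqueued until reported, obsolete builds get canceled, and a queue-processor (NNFI-style) pass starts whatever the new layout defines.

```python
# Hedged sketch of pipeline reconfiguration; names are illustrative.
class Build:
    def __init__(self, job):
        self.job = job
        self.canceled = False

    def cancel(self):
        self.canceled = True

def reconfigure_item(running_builds, new_job_names):
    """Cancel builds whose jobs vanished from the new layout, then
    report which new jobs a queue-processor pass would start."""
    for build in running_builds:
        if build.job not in new_job_names:
            build.cancel()  # e.g. project-merge no longer exists
    still_running = {b.job for b in running_builds if not b.canceled}
    # the scheduler processes every pipeline after the reconfiguration
    # event, starting any layout jobs not already running
    return [j for j in new_job_names if j not in still_running]
```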
SpamapS | 1764 stuck.nodates.txt | 23:29 |
SpamapS | 524 stuckv3.nodates.txt | 23:29 |
SpamapS | v3's tests are less chatty | 23:29 |
SpamapS | btw, stripping off the date/time stamps and vimdiffing the test outputs actually works pretty well | 23:31 |
SpamapS | dejavu | 23:31 |
SpamapS | I've said that before :-P | 23:32 |
clarkb | vimdiff is one of the best things ever | 23:32 |
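SpamapS's diffing trick is easy to reproduce. Assuming a leading `YYYY-MM-DD HH:MM:SS,ms`-style timestamp prefix (an assumption about the log format, shown here on a synthetic sample line), a `sed` pass strips the stamps so runs from different times line up:

```shell
# Create a sample log line in the assumed format, then strip its
# leading date/time stamp before diffing.
printf '2017-03-17 23:33:01,123 zuul.Scheduler DEBUG Re-enqueueing changes\n' > stuck.txt
sed -E 's/^[0-9-]+ [0-9:,]+ //' stuck.txt > stuck.nodates.txt
cat stuck.nodates.txt
# with two such files: vimdiff stuck.nodates.txt stuckv3.nodates.txt
```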
SpamapS | jeblair: so in v2.5, we get this after reconfiguration: | 23:33 |
SpamapS | zuul.Scheduler DEBUG Re-enqueueing changes for pipeline gate | 23:33 |
SpamapS | zuul.DependentPipelineManager DEBUG Re-enqueing change <Change 0x7f2b1c1455d0 1,1> in queue <ChangeQueue gate: org/project> | 23:33 |
SpamapS | but in v3... | 23:33 |
SpamapS | zuul.Scheduler DEBUG Re-enqueueing changes for pipeline gate | 23:33 |
SpamapS | zuul.DependentPipelineManager ERROR Unable to find change queue for project org/project | 23:33 |
SpamapS | zuul.Scheduler WARNING Canceling build <Build f59e40cdf6954d2dbc461d0cc70a99e9 of project-merge on <Worker Unknown | 23:34 |
SpamapS | oh | 23:34 |
SpamapS | haha | 23:34 |
SpamapS | do we need a queue: integrated? | 23:34 |
SpamapS | I bet we do | 23:34 |
SpamapS | hrm no | 23:35 |
SpamapS | common-config doesn't have a queue: for org/project | 23:35 |
jeblair | that's fine since there's only one project involved | 23:35 |
jeblair | canceling the build is correct.... i'm not sure about that error line... | 23:36 |
SpamapS | may be something missing in layout-no-jobs pipelines | 23:37 |
* SpamapS checking now | 23:37 | |
SpamapS | One tiny difference is we trade a success-message for a failure-message | 23:39 |
SpamapS | zomg | 23:39 |
SpamapS | jeblair: n/m | 23:39 |
SpamapS | layout-no-jobs has... | 23:40 |
SpamapS | no jobs | 23:40 |
SpamapS | bad name | 23:40 |
SpamapS | it's not no-jobs | 23:40 |
jeblair | SpamapS: aha, yeah, that would explain that error | 23:43 |
SpamapS | but now doesn't seem like the job is getting released correctly | 23:45 |
* SpamapS still on it | 23:45 | |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Port SQLAlchemy reporter to v3 driver structure https://review.openstack.org/442114 | 23:47 |
SpamapS | aaaand no valid playbook | 23:47 |
SpamapS | hrm | 23:48 |
SpamapS | aha! because commitLayoutUpdate only commits zuul.yaml changes | 23:49 |
* SpamapS hopes you all enjoy seeing inside his brain | 23:49 | |
jeblair | SpamapS: i guess we can add the gate-noop playbook to the standard config repo? | 23:54 |
jeblair | (which has a striking similarity to needing to add the job to gearman in the old version :) | 23:54 |
SpamapS | I got it done in commitLayoutUpdate | 23:56 |
SpamapS | but I don't love it | 23:56 |
openstackgerrit | Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Re-enable test_stuck_job_cleanup https://review.openstack.org/446783 | 23:56 |
SpamapS | however, hard stop now | 23:56 |
SpamapS | jeblair: ^^ that passes for me | 23:56 |
SpamapS | no idea if it breaks other tests ;) | 23:57 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!