pabelanger | jeblair: mordred: Shrews: I am noticing odd behavior with nl01 / nl02. Currently have 2 trusty nodes marked ready (unlocked) and we built a new image. I would have expected the existing ready nodes to have been used first | 00:10 |
pabelanger | jeblair: mordred: Shrews: also see 2 opensuse nodes online, even though we have min-ready: 1 for labels | 00:12 |
*** Shrews has quit IRC | 01:22 | |
*** Shrews has joined #zuul | 01:23 | |
*** xinliang has quit IRC | 02:04 | |
*** mordred has quit IRC | 02:09 | |
*** xinliang has joined #zuul | 02:21 | |
*** xinliang has joined #zuul | 02:21 | |
*** mordred has joined #zuul | 02:37 | |
*** jamielennox has quit IRC | 03:08 | |
*** jamielennox has joined #zuul | 03:13 | |
* SpamapS wishes he was heading to DEN to work on the transition :-P | 03:54 | |
*** yolanda has quit IRC | 04:15 | |
* tobiash sits at the airport (MUC) and waits for departure | 06:08 | |
openstackgerrit | Dirk Mueller proposed openstack-infra/nodepool master: Reorder except statements to avoid unnecessary traces https://review.openstack.org/502312 | 12:13 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul-jobs master: Fix typo - the word is "available" https://review.openstack.org/502320 | 14:37 |
mordred | jeblair: http://logs.openstack.org/65/500365/10/check/shade-functional-devstack/7e8d1d3/job-output.txt.gz - which is one of the attempts to use the new devstack job - is failing here: | 14:51 |
mordred | http://logs.openstack.org/65/500365/10/check/shade-functional-devstack/7e8d1d3/job-output.txt.gz | 14:51 |
mordred | gah | 14:51 |
mordred | http://logs.openstack.org/65/500365/10/check/shade-functional-devstack/7e8d1d3/job-output.txt.gz#_2017-09-09_20_44_47_810768 | 14:52 |
mordred | jeblair: it looks like it's failing in a pre-run task, we're not logging anything, then we shift to the post-run | 14:52 |
mordred | jeblair: I can't see anything in the logs to indicate what might have gone wrong | 14:52 |
jeblair | mordred: what's the last log lines from that playbook? | 14:53 |
mordred | jeblair: ah! I think: 2017-09-09 20:44:47,933 DEBUG zuul.AnsibleJob: [build: 7e8d1d39913a4d67b94c1c9d1c99f903] Ansible output: b'ERROR! A worker was found in a dead state' | 15:00 |
jeblair | likely another segfault :( | 15:01 |
clarkb | one newer python? | 15:01 |
clarkb | *on | 15:01 |
mordred | jeblair, clarkb: I'm not convinced we have the right python installed | 15:03 |
mordred | k. nevermind. apt-cache was just showing me weird things | 15:03 |
mordred | python3.5 is already the newest version (3.5.2-2ubuntu0~16.04.2~openstackinfra). | 15:03 |
mordred | jeblair, clarkb: "luckily" it seems to be consistent and reproducible - I think all of those jobs failed out in the same place | 15:04 |
*** jkilpatr has joined #zuul | 15:04 | |
jeblair | mordred: "cool" | 15:04 |
*** jkilpatr has quit IRC | 15:10 | |
mordred | jeblair, clarkb, fungi: if you have a sec: https://review.openstack.org/#/c/502270 | 15:12 |
* mordred would like to clear that stack if possible | 15:12 | |
mordred | also https://review.openstack.org/#/c/502256/ is an easy one | 15:14 |
fungi | mordred: you've got some earlier changes adding generate-log-url which seem to run counter to that. should they be abandoned now? | 15:25 |
* fungi is mildly confused about which ones we're keeping | 15:26 | |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Add migration tool for v2 to v3 conversion https://review.openstack.org/491805 | 15:40 |
mordred | fungi: yah - one sec | 15:42 |
mordred | fungi: so, https://review.openstack.org/#/c/502270 https://review.openstack.org/#/c/502271 and https://review.openstack.org/#/c/502320 are good as is https://review.openstack.org/#/c/502272/ | 15:45 |
fungi | thanks! | 15:45 |
mordred | fungi: I've marked the https://review.openstack.org/#/c/502266 and https://review.openstack.org/#/c/502265 WIP - once the others have landed and we've verified, we can rework those | 15:45 |
fungi | knowing what not to review is at least as important as knowing what to review at this point ;) | 15:45 |
mordred | fungi: +100 | 15:45 |
mordred | update https://review.openstack.org/502265 to actually be correct and to depend on that base-test patch | 15:49 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul feature/zuulv3: Add max-job-timeout tenant setting https://review.openstack.org/502332 | 16:22 |
mordred | clarkb: I can do some hacking here in the tc/board room, but going all the way to tracking down a python segfault is a bit too much - any chance you can help with that one? | 16:30 |
clarkb | mordred: sure what can I do to help? | 16:30 |
clarkb | I'm slowly booting this morning so haven't quite followed all the segfault stuff | 16:30 |
mordred | clarkb: I think we just need to trigger/run one of the changes with core files enabled and then gdb a little bit | 16:31 |
mordred | clarkb: you can see all the devstack jobs here: https://review.openstack.org/#/c/500365/ hit the retry limit (because they're all hitting the segfault) | 16:31 |
mordred | clarkb: so turning on keep on the executors and rechecking that should make it easy to reproduce the segfault (it seems to be happening every time) | 16:32 |
mordred | actually - I betcha we could make a much smaller patch that reproduces it - lemme see if I can make you one of those real quick | 16:32 |
clarkb | ok | 16:34 |
mordred | clarkb: https://review.openstack.org/502335 is a smaller one but still uses the devstack_local_conf parameter (the python module implementing that seems to be where the crashes are happening) | 16:35 |
mordred | I also wonder if there's $something we can do, maybe in our callback plugin, that puts in a segfault handler so that we can maybe emit a $something into the logs | 16:36 |
clarkb | ya catching sigsegv and recording state might be helpful | 16:37 |
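A minimal sketch of the segfault handler mordred is describing, using Python's standard faulthandler module; whether it would belong in the callback plugin or would have to be enabled in each Ansible worker process is exactly the open question here, and the log path is purely illustrative:

```python
import faulthandler
import signal

# Dump Python tracebacks for every thread on fatal signals such as SIGSEGV,
# writing them somewhere the executor could collect afterwards (the path is
# illustrative, not anything zuul actually does today).
fault_log = open('/tmp/ansible-fault.log', 'w')
faulthandler.enable(file=fault_log, all_threads=True)

# Optionally also allow an on-demand dump via a harmless signal.
faulthandler.register(signal.SIGUSR1, file=fault_log, all_threads=True)
```

Note that faulthandler prints the Python-level traceback of the crashing frame, which complements rather than replaces a core file plus gdb.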
clarkb | mordred: so I should see 502335 hit the retry limits ya? | 16:38 |
mordred | clarkb: yah | 16:40 |
mordred | clarkb: OR - if we don't, then we can compare it to the previous patch and see what it's doing that doesn't segfault when the others do | 16:40 |
clarkb | ok | 16:42 |
clarkb | and keep keeps the bubblewrap env around? | 16:42 |
mordred | clarkb: yah -if you run "zuul-executor keep" on the executor it tells it to not delete the build dir from /var/lib/zuul/builds | 16:45 |
clarkb | and is that global? or can I scope it to a specific job? | 16:46 |
mordred | clarkb: there's a zuul-bwrap command you can use to manually run a playbook via bubblewrap | 16:46 |
mordred | clarkb: it's global | 16:46 |
mordred | clarkb: scoping it to a job might be a nice improvement though | 16:46 |
clarkb | mordred: and you double checked the version of python? | 16:49 |
* clarkb looks at python | 16:49 | |
clarkb | Version: 3.5.1-3 is what apt cache says we have | 16:50 |
mordred | clarkb: yah - apt-cache seems to be lying | 16:52 |
mordred | clarkb: I tried apt-get update ; apt-get install python3.5 and it said: "python3.5 is already the newest version (3.5.2-2ubuntu0~16.04.2~openstackinfra)" | 16:53 |
clarkb | Python 3.5.2 (default, Aug 18 2017, 13:17:48) | 16:53 |
clarkb | is what python itself reports which is closer to ^ than 3.5.1-3 | 16:53 |
mordred | clarkb: yah - I think we should figure out what's up with that - or see if the right one has hit the proposed bucket yet | 16:54 |
clarkb | any objections to installing aptitude? I find it easier to debug these sorts of things using aptitude than apt-* | 16:55 |
clarkb | also finger streaming really kills the browser | 16:58 |
clarkb | I wonder if we can have it poll every say 10 seconds | 16:58 |
clarkb | mordred: dpkg -s also says 3.5.1-3 | 16:59 |
clarkb | aha dpkg -l to the rescue | 17:01 |
clarkb | mordred: package confusion seems to be python3 vs python3.5 | 17:01 |
clarkb | and python3 is not a real package, it is a dependency package that depends on the python3.5 package | 17:02 |
clarkb | mordred: your reproduction change seems to have failed properly | 17:03 |
clarkb | user jenkins tried to write to zuul's homedir | 17:04 |
mordred | clarkb: oh - goodie | 17:07 |
clarkb | also /new/shade no such file or directory | 17:08 |
clarkb | so its mostly just dir movement | 17:08 |
mordred | cool. so - where did you see that error and was there actually a segfault? | 17:09 |
mordred | clarkb: oh - that actually didn't reproduce the failure | 17:09 |
clarkb | mordred: http://logs.openstack.org/35/502335/1/check/shade-functional-devstack/2b1bdd8/job-output.txt.gz#_2017-09-10_16_57_59_497595 ya, no reproduced segfault | 17:09 |
clarkb | its a legit fail | 17:09 |
mordred | clarkb: that failed like a normal job ...yah | 17:09 |
mordred | clarkb: whereas the previous change did fail with the segfault | 17:10 |
mordred | clarkb: https://review.openstack.org/#/c/500365 | 17:10 |
clarkb | oh well that particular job failed with http://logs.openstack.org/65/500365/11/check/shade-functional-devstack/fe7c728/job-output.txt.gz#_2017-09-10_16_48_59_505509 which is another legit fail | 17:11 |
clarkb | no segfault there that I see | 17:11 |
clarkb | I see, that is why you added heat to that one | 17:12 |
clarkb | you sure it wasn't just the error on clone causing problems? | 17:13 |
* clarkb looks at different logs | 17:13 | |
mordred | clarkb: "./stack.sh: line 512: generate-subunit: command not found" also a different thing, but I don't thnk the main thing | 17:17 |
clarkb | mordred: thats happening because the error on clone is bailing everything out early I think | 17:17 |
mordred | gotcha | 17:17 |
clarkb | mordred: so fixing the heat clone I think may fix it in general? I'm not seeing segfaults? | 17:18 |
mordred | clarkb: cool. I'm not sure why we didn't get good logging on that previous run - but lemme fix that in the real one and see how it goes | 17:18 |
clarkb | ya let's do that | 17:18 |
mordred | clarkb: there's a flag I can set to turn off fail on clone, right? | 17:19 |
clarkb | mordred: ya ERROR_ON_CLONE is the devstack flag and I think it's set by default in jeblair's base job? Not sure how setting it twice goes in the conflict resolution | 17:19 |
mordred | let's find out :) | 17:19 |
clarkb | mordred: but you shouldn't need it just add heat to your jobs like you did in the recent job | 17:21 |
clarkb | then you get to the path problems in your functional test script | 17:21 |
mordred | ah - yes | 17:23 |
openstackgerrit | Merged openstack-infra/zuul-jobs master: Add zuul_log_url to emit-job-header https://review.openstack.org/502270 | 17:36 |
openstackgerrit | Merged openstack-infra/zuul-jobs master: Rename log_path and generate-log-url https://review.openstack.org/502271 | 17:36 |
clarkb | mordred: http://logs.openstack.org/65/500365/12/check/shade-ansible-devel-functional-devstack/cb514cc/job-output.txt.gz#_2017-09-10_17_43_56_361478 I think you are in business now? | 17:46 |
clarkb | mordred: new errors | 17:46 |
clarkb | fun fun fun | 17:46 |
clarkb | I'm gonna push a new patchset | 17:48 |
mordred | clarkb: I actually just realized I can kill both of these hook scripts ... | 17:53 |
clarkb | ya I haven't looked at those yet | 17:53 |
clarkb | fixed a playbook problem in latest patchset | 17:54 |
mordred | one sec - patch coming | 17:54 |
mordred | (I can do a thing in v3 I couldn't do well before) | 17:54 |
mordred | clarkb: check out https://review.openstack.org/500365 now | 18:02 |
openstackgerrit | Clark Boylan proposed openstack-infra/zuul-jobs master: Actually set zuul_log_path https://review.openstack.org/502340 | 18:04 |
mordred | clarkb: ^^ related to that, https://review.openstack.org/#/c/502272 | 18:06 |
openstackgerrit | Merged openstack-infra/zuul-jobs master: Actually set zuul_log_path https://review.openstack.org/502340 | 18:09 |
mordred | tristanC: https://review.openstack.org/#/c/502332 has a nit - but I think we should fix it - the patch otherwise looks great to me | 18:15 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul-jobs master: Override tox requirments with zuul git repos https://review.openstack.org/489719 | 18:19 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul-jobs master: Remove extra quotation mark https://review.openstack.org/502343 | 18:27 |
tristanC | mordred: oops, fixing it now | 18:39 |
openstackgerrit | Merged openstack-infra/zuul-jobs master: Fix typo - the word is "available" https://review.openstack.org/502320 | 18:40 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul feature/zuulv3: Add max-job-timeout tenant setting https://review.openstack.org/502332 | 18:40 |
tristanC | mordred: actually, next we should think about how to generalize tenant resource limits, having a list of 'max_' settings doesn't look clean | 18:42 |
tristanC | for example, we probably also want to limit more things such as available labels | 18:44 |
tobiash | tristanC: you intend a limits setting like this: http://paste.openstack.org/show/620830/ ? | 18:44 |
tristanC | tobiash: yes, something like that would look better imo | 18:45 |
tristanC | maybe the structure could support min, max, in types of limit | 18:45 |
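For illustration only, a grouped tenant limits block along the lines tristanC and tobiash are discussing could be validated with voluptuous, which zuul already uses for configuration schemas; every key name below is hypothetical, not an agreed design:

```python
import voluptuous as vs

# Hypothetical grouped 'limits' section instead of separate top-level
# max-* keys; the key names are illustrative only.
limits = {
    'max-job-timeout': int,
    'max-nodes-per-job': int,
    'allowed-labels': [str],
}

tenant = vs.Schema({
    vs.Required('name'): str,
    'limits': limits,
}, extra=vs.ALLOW_EXTRA)

# Validates and returns the data, or raises voluptuous.Invalid.
tenant({'name': 'example', 'limits': {'max-job-timeout': 10800}})
```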
mordred | tristanC, tobiash: I think we need to consider label/tenant mapping and label/job or project mapping fairly broadly - realized it when working on some of the job conversions ... | 18:47 |
tristanC | mordred: on another topic, is there some work already started to move the status.json controller to zuul-web (iirc through gearman)? | 18:48 |
mordred | one of the things seemed like it would be a good use for static nodes - until we realized there is no way (currently) to prevent some random job in some random repo from requesting a given static node | 18:48 |
mordred | tristanC: there is not - I wrote down some thoughts on it somewhere, but I do not believe anyone has started work on it yet | 18:48 |
mordred | tristanC: I also wrote this: https://review.openstack.org/#/c/494655/ which doesn't work and was just me poking around in general at what paste->aiohttp conversion looks like | 18:49 |
tobiash | mordred: for this there was the idea of an 'allowed labels' setting | 18:49 |
mordred | tobiash: for a tenant in main.yaml? | 18:49 |
tobiash | mordred: yes | 18:50 |
tobiash | mordred: didn't have time for this yet unfortunately | 18:50 |
mordred | tobiash: I'd also like to ponder the idea of teaching nodepool about tenants - so that you could have different max quotas per tenant for instance | 18:51 |
tobiash | mordred: but this might also fit into a limits description | 18:51 |
tobiash | mordred: jeblair was against this as nodepool also has other users than zuul, this also could be handled directly in zuul I think | 18:51 |
tristanC | mordred: how about starting with making formatStatusJSON a gearman function and create the per-tenant status endpoint in zuul-web? | 18:54 |
mordred | tristanC: I think that's a great idea | 18:55 |
tristanC | i ask those things because it will help the work on the dashboard, it seems like having a per-tenant status page will help with figuring out what we want the zuul interface to look like for jobs and builds | 18:56 |
openstackgerrit | Merged openstack-infra/zuul-jobs master: Remove extra quotation mark https://review.openstack.org/502343 | 18:56 |
tristanC | mordred: ok great, i'll try to get that formatStatusJSON function exposed over gearman now | 18:57 |
mordred | tristanC: when I last spoke with jeblair about status.json and gearman, I believe his suggestion was to keep the status cache on the web side - so move the _refresh_status_cache and whatnot to zuul-web too | 19:02 |
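A rough sketch of the scheduler side of that idea, using the gear library zuul already depends on; the function name, arguments, and payload here are hypothetical placeholders, and the real implementation would serve formatStatusJSON's cached output rather than building a dict inline:

```python
import json
import gear

# Register a (hypothetical) gearman function that zuul-web could call to
# fetch the status JSON for one tenant.
worker = gear.TextWorker('zuul-scheduler-status')
worker.addServer('127.0.0.1', 4730)
worker.registerFunction('zuul:tenant_status_get')
worker.waitForServer()

while True:
    job = worker.getJob()
    args = json.loads(job.arguments)
    # The real thing would return the scheduler's cached status
    # (formatStatusJSON / _refresh_status_cache) for args['tenant'].
    status = {'tenant': args.get('tenant'), 'pipelines': []}
    job.sendWorkComplete(json.dumps(status))
```

zuul-web would then act as a plain gearman client: submit the job per HTTP request (or per cache refresh) and return the resulting JSON to the browser.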
fungi | in retrospect, wondering whether 500320 should really have just been constraints instead of upper constraints, given it's in zuul-jobs and the idea of the constraints list being a rendering of transitive upper versions from a global requirements list is quite openstack-specific | 19:21 |
*** dkranz has quit IRC | 19:34 | |
mordred | fungi: we could do that | 20:07 |
fungi | we can do it later for all i care, just keeping a mental tally | 20:08 |
mordred | yah. I think renaming it is a fine idea | 20:08 |
*** olaph has quit IRC | 20:16 | |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul-jobs master: Rename tox_upper_constraints_file to tox_constraints_file https://review.openstack.org/502348 | 20:17 |
mordred | fungi: ^^ | 20:17 |
*** olaph has joined #zuul | 20:19 | |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Add migration tool for v2 to v3 conversion https://review.openstack.org/491805 | 20:26 |
*** olaph1 has joined #zuul | 20:36 | |
*** olaph has quit IRC | 20:38 | |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Add migration tool for v2 to v3 conversion https://review.openstack.org/491805 | 20:50 |
jeblair | i am at my hotel in denver and have caught up with scrollback here. | 22:07 |
dmsimard | need some ci_mirrors.sh reviews merged to fix fedora in v3 still: https://review.openstack.org/#/c/502268/ and (once tested) https://review.openstack.org/#/c/501954/ | 22:20 |
mordred | jeblair, pabelanger, clarkb: http://logs.openstack.org/05/491805/13/check/zuul-migrate/274648b/job-output.txt.gz - is reported as POST_FAILURE but I don't see anything that failed | 22:27 |
mordred | however, trying to figure out what was wrong led me down a little rabbit hole ... | 22:27 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul-jobs master: Produce subunit and html at the end of tox https://review.openstack.org/502359 | 22:28 |
dmsimard | mordred: the end of that text file is truncated, no? | 22:30 |
jeblair | dmsimard: i went ahead and approved 268 since it's just a base-test add | 22:30 |
dmsimard | jeblair: \o/ | 22:30 |
mordred | dmsimard: it's always truncated - we can't log to the file anymore after we gzip it | 22:31 |
jeblair | mordred: we already have support for non-python subunit | 22:31 |
dmsimard | mordred: yeah but I mean, if there'd be a failure after that point you wouldn't see it in the log ? :) | 22:31 |
jeblair | mordred: if you change the filenames, you're going to have to change the subunit stuff i'm working on too | 22:31 |
mordred | jeblair: I can unchange the file names - I was more concerned about the todo around the fact that the roles were using the pre-existing virtualenv as well as the tox_envlist variable | 22:33 |
jeblair | mordred: thanks, i think that'd be best (i elaborated in a review comment) | 22:34 |
mordred | jeblair: kk - will do | 22:34 |
dmsimard | jeblair, mordred: would it be okay to send a patch to gzip the json-output.json file ? We have the mimetype for it ( https://github.com/openstack-infra/puppet-openstackci/blob/master/templates/logs.vhost.erb#L49 ) and it's >=1MB for each run | 22:38 |
mordred | dmsimard: wfm | 22:38 |
jeblair | ++ | 22:38 |
dmsimard | but of course the post-logs role is one of those things in base that can easily break | 22:39 |
dmsimard | so... ugh | 22:39 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul-jobs master: Produce subunit and html at the end of tox https://review.openstack.org/502359 | 22:41 |
mordred | dmsimard: yup. it's the multi-step dance of joy! | 22:43 |
dmsimard | mordred: I'm not sure at what point the job-output.json file ends up being created, grepfu is failing and only returning things about job-output.txt. | 22:43 |
dmsimard | mordred: is it just continually written to by zuul_json ? | 22:44 |
dmsimard | oh wait nevermind, found it -- it's in v2_on_stats of zuul_json | 22:45 |
jeblair | dmsimard: yep | 22:45 |
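For reference, a stripped-down Ansible callback showing where that hook sits; this is only a sketch of the mechanism, not zuul's actual zuul_json plugin, and the output filename is hard-coded here just for illustration:

```python
import json
from ansible.plugins.callback import CallbackBase


class CallbackModule(CallbackBase):
    """Collect task results and write one JSON report at the end of the run."""
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'aggregate'
    CALLBACK_NAME = 'minimal_json'

    def __init__(self):
        super(CallbackModule, self).__init__()
        self.results = []

    def v2_runner_on_ok(self, result):
        self.results.append(result._result)

    def v2_playbook_on_stats(self, stats):
        # Called once when the playbook finishes; this is the point at
        # which the JSON output file actually gets written.
        summary = {h: stats.summarize(h) for h in stats.processed}
        with open('job-output.json', 'w') as out:
            json.dump({'stats': summary, 'results': self.results}, out, indent=2)
```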
openstackgerrit | Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Add migration tool for v2 to v3 conversion https://review.openstack.org/491805 | 22:45 |
Shrews | Is there early check-in somewhere? | 22:45 |
dmsimard | Shrews: for what, the badges ? | 22:46 |
Shrews | Yeah | 22:47 |
dmsimard | on the lobby floor down the escalator on the left hand side (when coming in the hotel) | 22:47 |
Shrews | thx | 22:48 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Decode stdout from ansible-playbook before logging https://review.openstack.org/502362 | 23:01 |
openstackgerrit | David Moreau Simard proposed openstack-infra/zuul-jobs master: Gzip the job-output.json file https://review.openstack.org/502363 | 23:04 |
dmsimard | jeblair: I'm not terribly confident about ^ mostly confused about whether this should occur on localhost or not | 23:04 |
jeblair | dmsimard: i think it would be better to gzip on localhost like the text console log. | 23:05 |
dmsimard | ah I think I get it now | 23:06 |
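The compression step itself is small; in Python terms it amounts to roughly the following, the point of jeblair's comment being that it should run on the executor (localhost) once the file exists there, the same way the text console log is handled:

```python
import gzip
import shutil

def gzip_file(path):
    # Compress a finished log artifact, leaving <path>.gz next to it.
    with open(path, 'rb') as src, gzip.open(path + '.gz', 'wb') as dst:
        shutil.copyfileobj(src, dst)

gzip_file('job-output.json')
```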
pabelanger | jeblair: mordred: 491805 is a dead worker | 23:09 |
pabelanger | 2017-09-10 21:21:20,087 DEBUG zuul.AnsibleJob: [build: 274648b3571a4cfa82a7aeb61fd9f447] Ansible output: b'ERROR! A worker was found in a dead state' | 23:09 |
jeblair | clarkb: ^ | 23:09 |
openstackgerrit | David Moreau Simard proposed openstack-infra/zuul-jobs master: Gzip the job-output.json file https://review.openstack.org/502363 | 23:09 |
dmsimard | jeblair: I think that should work ^ | 23:09 |
jeblair | clarkb: did you set it to save core files? | 23:10 |
clarkb | jeblair: I didn't do anything because I couldn't find segfaults | 23:11 |
clarkb | they were legit fails in pre that caused it to retry limit fail | 23:11 |
jeblair | well, we need to set it up to save core files before it segfaults | 23:11 |
jeblair | Sep 10 23:10:23 ze02 kernel: [2183616.902108] ansible-playboo[4575]: segfault at a9 ip 000000000050eaa4 sp 00007ffe55dfc808 error 4 in python3.5[400000+3a7000] | 23:13 |
jeblair | that's the corresponding segfault to pabelanger's error | 23:13 |
jeblair | pabelanger: what executors are running now? | 23:14 |
pabelanger | jeblair: I haven't changed anything over the last few days, but looking at the SRU SpamapS pushed, we sure enable xenial-proposed and pull in the new python3.5 | 23:16 |
pabelanger | https://launchpad.net/ubuntu/+source/python3.5/3.5.2-2ubuntu0~16.04.2 | 23:16 |
pabelanger | s/sure/should/ | 23:16 |
jeblair | i thought it was well established that we are running the correct python 3.5. is that still in doubt? | 23:17 |
pabelanger | I _think_ we still are running our patched version but I haven't really looked into the state of the current failures | 23:18 |
jeblair | clarkb, mordred: didn't i see you do that earlier? are you satisfied the correct python is being used? | 23:19 |
dmsimard | python -c "import sys; print(sys.version_info)" ? | 23:19 |
pabelanger | however python3 and python3-all are 3.5.1, where we have python3.5 package for ppa | 23:19 |
clarkb | yes I think it is correct python | 23:19 |
pabelanger | http://paste.openstack.org/show/620834/ | 23:20 |
jeblair | clarkb: great, thanks. did you check all 4 executors? pabelanger do we still have 4, is that correct? | 23:20 |
pabelanger | jeblair: yes 4 executors, but only ze01, ze02 are online | 23:21 |
dmsimard | clarkb: I guess the executor processes did get restarted (to run on the new python?) | 23:21 |
jeblair | pabelanger: thanks | 23:21 |
pabelanger | unless somebody has started ze03 / ze04 | 23:21 |
jeblair | pabelanger: well, if you're not sure, let's check | 23:21 |
jeblair | it's not running on 03 | 23:21 |
jeblair | and it's not running on 04 | 23:22 |
pabelanger | k | 23:22 |
pabelanger | that pastebin for apt packaging is a little odd however, I don't know why we have both 3.5.1 and 3.5.2 installed | 23:23 |
pabelanger | maybe we should think about removing the unpacked version? | 23:23 |
pabelanger | unpatched* | 23:23 |
clarkb | jeblair: I only checked the first two | 23:23 |
clarkb | pabelanger: we have both versions because python3 deps on python3.5 | 23:24 |
pabelanger | clarkb: so, possible this is a new traceback | 23:30 |
jeblair | for pid in `pgrep zuul-executor`; do prlimit --core=unlimited:unlimited --pid $pid; done | 23:32 |
jeblair | clarkb, pabelanger: I have done that on both executors | 23:32 |
jeblair | zuul-executor keep | 23:32 |
jeblair | and i did that on both as well | 23:32 |
jeblair | so the next time there's a segfault, we should have a core file in the build directory | 23:33 |
jeblair | echo -n "core" >/proc/sys/kernel/core_pattern | 23:33 |
jeblair | oh, also that ^, but that was only needed on ze02 | 23:33 |
jeblair | to get rid of the crazy apport thing | 23:33 |
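For context, the programmatic equivalent of that prlimit loop, if it were done at executor startup rather than poked in from outside, would look roughly like this (a sketch, not something zuul currently does):

```python
import resource

# Allow unlimited-size core files for this process and anything it forks,
# so a segfaulting ansible-playbook child leaves a core file in its working
# directory (the build dir, given core_pattern is simply "core").
resource.setrlimit(resource.RLIMIT_CORE,
                   (resource.RLIM_INFINITY, resource.RLIM_INFINITY))
```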
pabelanger | jeblair: k | 23:36 |
openstackgerrit | Merged openstack-infra/zuul-jobs master: Gzip the job-output.json file https://review.openstack.org/502363 | 23:37 |
jeblair | mordred: can you re-review https://review.openstack.org/502273 ? clarkb can you review that as well? | 23:40 |
dmsimard | oh yeah I wanted to look at that. Looking now. | 23:43 |
pabelanger | jeblair: should we kick off the rechecks and try to reproduce the segfault now? | 23:47 |
jeblair | pabelanger: go for it | 23:48 |