clarkb | jeblair: looking at the cpu graph indicates possible goodness too | 00:01 |
fungi | gothicmindfood/colette is just north of there as well? | 00:01 |
*** sarob has quit IRC | 00:01 | |
clarkb | yeah I think she may be around too | 00:02 |
jeblair | clarkb: at least nothing alarming so far | 00:02 |
* fungi is trying to keep track of where all the new blood sits | 00:02 | |
clarkb | aiui mordred is getting several of us together to braindump/onboard | 00:02 |
fungi | too awesome | 00:02 |
*** sarob has joined #openstack-infra | 00:03 | |
clarkb | jeblair: hrm, does nodepool need to be restarted? | 00:03 |
clarkb | or is the lack of nodes on 01 just lag while it shifts load from 02 to 01 | 00:03 |
sdague | we just had another reset | 00:04 |
jeblair | looking | 00:04 |
sdague | so everything is tearing down | 00:04 |
sdague | fwiw | 00:04 |
clarkb | sdague: https://jenkins02.openstack.org/job/gate-grenade-dsvm/1857/console did it | 00:04 |
sdague | yep | 00:04 |
*** sarob has quit IRC | 00:04 | |
sdague | which is actually *not* the error I'd be skipping | 00:04 |
clarkb | right, that was cinder failing to grenade | 00:05 |
jeblair | clarkb: nodepool is receiving data from jenkins01 zmq | 00:05 |
clarkb | jeblair: cool, probably just lag then | 00:05 |
clarkb | as all of the resources allocated to host 02 slowly die off they can be distributed more evenly | 00:05 |
*** reed has quit IRC | 00:06 | |
jeblair | clarkb: yeah, you should see new nodes there soon, it's putting all the new ones there | 00:06 |
mordred | jeblair, clarkb, AaronGr, fungi: yeah - clarkb may be less useful on Monday than usually, as I've asked him to help me bootstrap AaronGr while I run around and do a bazillion other things too | 00:06 |
sdague | clarkb: and actually a completely unique failure | 00:06 |
jeblair | http://paste.openstack.org/show/54992/ | 00:06 |
mordred | but I'm hoping that the payoff of a bootstrapped AaronGr will be worth the disruption | 00:06 |
fungi | mordred: as long as we put in one clarkb and get two back, i'm cool with that formula | 00:07 |
clarkb | mordred: you didn't tell me about the bazillion other things /me wonders what he is getting into | 00:07 |
clarkb | sdague: interesting | 00:07 |
clarkb | :) | 00:07 |
sdague | clarkb: though that job failed on another neutron error as well | 00:07 |
mordred | clarkb: bwahahaha. you've fallen prey to my madness... | 00:08 |
fungi | if the clarkb multiplication experiment works out, we should figure out how to duplicate jeblair next | 00:08 |
fungi | you know, for science | 00:08 |
sdague | so because this is really #2 gate reset today (after the ssh one) - how would people feel about jumping it to the gate and popping to the top - https://review.openstack.org/#/c/62107/ | 00:09 |
*** sarob has joined #openstack-infra | 00:09 | |
fungi | i'm game | 00:10 |
fungi | i haven't tried out the magic promote command yet | 00:10 |
* fungi flexes his fingers and waits for people to say no | 00:11 | |
jeblair | maybe wait for check to complete? | 00:11 |
fungi | yeah, probably | 00:11 |
*** sarob_ has joined #openstack-infra | 00:11 | |
sdague | jeblair: sure, though that's going to be a couple hours | 00:11 |
*** sarob has quit IRC | 00:11 | |
sdague | it hasn't managed to get any devstack nodes allocated to it yet | 00:12 |
sdague | oh, it just got them | 00:12 |
fungi | sdague: it has | 00:12 |
jeblair | sdague: it's been running for like 5 mins | 00:12 |
sdague | yeh, I was looking a few minutes ago | 00:12 |
fungi | oh ye of little faith | 00:12 |
sdague | sorry, my faith is burnt out today | 00:12 |
*** sarob_ has quit IRC | 00:13 | |
* fungi always wears an extra tank | 00:13 | |
*** sarob has joined #openstack-infra | 00:13 | |
*** jcooley_ has joined #openstack-infra | 00:15 | |
fungi | anyway, if it comes back +1 (or with unrelated known nondeterministic failures) i'm cool with promoting it to the head of the gate | 00:16 |
openstackgerrit | Matt Riedemann proposed a change to openstack-infra/elastic-recheck: Add query for bug 1260894 https://review.openstack.org/62112 | 00:17 |
uvirtbot | Launchpad bug 1260894 in swift "ERROR root [-] Timeout talking to memcached" [Undecided,New] https://launchpad.net/bugs/1260894 | 00:17 |
anteaya | fungi: this is the patch we have available for the ssh bug, it passed the first time and then failed on what I consider to be a known unrelated nondeterministic failure: https://bugs.launchpad.net/nova/+bug/1254890 | 00:17 |
uvirtbot | Launchpad bug 1254890 in tempest ""Timed out waiting for thing" causes tempest-dsvm-neutron-* failures" [Low,In progress] | 00:17 |
anteaya | what do you suggest I do? | 00:18 |
anteaya | salv-orlando: wanted it to go through check twice | 00:18 |
anteaya | which I support | 00:18 |
fungi | sounds cautious | 00:18 |
anteaya | we are trying | 00:18 |
*** sarob has quit IRC | 00:18 | |
* fungi looks | 00:18 | |
*** sarob has joined #openstack-infra | 00:19 | |
fungi | also, sdague: you don't have a grenade bugtask on bug 1259907 | 00:19 |
uvirtbot | Launchpad bug 1259907 in openstack-ci "check-grenade-dsvm marked as FAILED - n-api/g-api Logs have errors" [Undecided,New] https://launchpad.net/bugs/1259907 | 00:19 |
jeblair | anteaya: where's the change? | 00:19 |
anteaya | jeblair: no change, we put it through check twice | 00:19 |
jeblair | anteaya: what did you put through check twice? | 00:19 |
anteaya | didn't want to hit the ssh bug | 00:19 |
anteaya | we didn't want to add to the ssh bug problem | 00:19 |
anteaya | if this revert would do so | 00:20 |
fungi | jeblair: anteaya: https://review.openstack.org/59769 ? | 00:20 |
fungi | oh, abandoned | 00:20 |
fungi | yeah, review link? | 00:20 |
anteaya | dims said he abandoned that since the patch wasn't going to address the bug anyway | 00:20 |
anteaya | https://review.openstack.org/#/c/62098 | 00:20 |
*** sarob has quit IRC | 00:21 | |
anteaya | that is the hopeful patch to address the ssh bug | 00:21 |
*** sarob has joined #openstack-infra | 00:21 | |
salv-orlando | the timeout for becoming active appears unrelated to neutron, and especially to the patch we're reverting. The latest failure seems to me unrelated to both the original patch and the revert | 00:21 |
markmcclain | yeah I would agree w/ salv-orlando | 00:21 |
markmcclain | I'm inclined to do the revert | 00:21 |
fungi | Timed out waiting for thing 19fcdc4c-a60a-4bab-9380-b16ea8656779 | 00:21 |
anteaya | salv-orlando: just wanted some eyes and opinions before next steps | 00:21 |
fungi | looks familiar | 00:21 |
anteaya | yes | 00:22 |
openstackgerrit | Khai Do proposed a change to openstack-infra/config: use parameters from project.yaml https://review.openstack.org/62113 | 00:22 |
salv-orlando | I would either +2 and +A based on the fact that the other jobs passed (and the tests which usually fail passed too in the failing job) | 00:22 |
clarkb | jeblair: going back to notmyname's discussion about how eventually statistically a change will go through at least one reset, remember when I was trying to say we should have a throttle mechanism in zuul? the idea behind that is to not waste resources in the scenario notmyname found | 00:22 |
anteaya | fungi: bug https://bugs.launchpad.net/nova/+bug/1254890 | 00:22 |
uvirtbot | Launchpad bug 1254890 in tempest ""Timed out waiting for thing" causes tempest-dsvm-neutron-* failures" [Low,In progress] | 00:22 |
salv-orlando | or if we want to be extra cautious , another recheck | 00:22 |
sdague | fungi: it's actually a tempest fix | 00:22 |
salv-orlando | but I think I am confident enough to approve even though we had this failure after the recheck | 00:22 |
clarkb | throttle zuul so that it will only start jobs for X changes at any given time in a dependent pipeline to reduce thrash | 00:22 |
fungi | sdague: oh, so it is ;) | 00:22 |
*** dcramer_ has joined #openstack-infra | 00:22 | |
sdague | but you are right, I didn't add it to the bug | 00:23 |
* fungi nods | 00:23 | |
jeblair | clarkb: it's worth considering if you can fully automate it, however, i don't think it's the best use of our time to continually twiddle such a knob. | 00:23 |
clarkb | sdague: can I have a link here too? | 00:23 |
notmyname | clarkb: +1 to only starting XX jobs at a time. seems like a happy medium to try (I suspect "XX" will actually be pretty low, like 3-5) | 00:23 |
markmcclain | salv-orlando: also given the item we're proposing to revert specifically triggered the gate bug during the review process | 00:23 |
jeblair | clarkb: but moreover, why do you care? | 00:23 |
fungi | clarkb: https://review.openstack.org/62107 | 00:24 |
clarkb | jeblair: because the extra unneeded load stresses the rest of the systems, making it harder to deal with problems | 00:24 |
jeblair | clarkb: i'm not really buying that | 00:24 |
clarkb | jeblair: I was thinking it could work like TCP maybe | 00:24 |
fungi | markmcclain: anteaya: salv-orlando: if it has a greater chance of fixing bugs than introducing them, i'm in favor of putting it in | 00:25 |
clarkb | do slow start (probably not so slow start), then backoff if we hit consistent resets | 00:25 |
jeblair | clarkb: if it's not implemented well, it's an anti-optimization -- it would serve to make sure that we can't ever actually hit the best case, which we normally do quite often. | 00:25 |
anteaya | fungi: thank you | 00:25 |
salv-orlando | markmcclain, fungi, anteaya: the revert patch already has my +2 - I'm disconnecting for the night. markmcclain can give the other +2 | 00:25 |
*** sarob has quit IRC | 00:25 | |
clarkb | jeblair: yes TCP's big problem is it rarely runs in the best case | 00:25 |
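A minimal sketch of the window arithmetic clarkb is describing, hypothetical since zuul had no such throttle at the time (names and numbers invented for illustration):

    # TCP-style throttle: grow the number of changes allowed to run jobs
    # concurrently while the gate is healthy, shrink it on a reset
    window=2
    max_window=20

    on_change_passed() {
        # slow start: double the window, capped at max_window
        window=$(( window * 2 > max_window ? max_window : window * 2 ))
    }

    on_gate_reset() {
        # multiplicative backoff: halve the window, floor of 1
        window=$(( window / 2 < 1 ? 1 : window / 2 ))
    }

The tradeoff jeblair raises above is that any window smaller than the queue length forfeits the best case, where every queued change passes and merges in a single pass.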
anteaya | salv-orlando: thank you | 00:26 |
sdague | question: is 56994,2 going to get requeued if something else fails ahead of it? | 00:26 |
jeblair | sdague: yes | 00:26 |
sdague | it's another stable/grizzly change that can't pass because docs is broken on neutron stable/grizzly | 00:26 |
sdague | ok, I'll snipe it out | 00:26 |
*** sarob has joined #openstack-infra | 00:27 | |
fungi | sdague: yeah, trivial rebase a new patchset for it or something | 00:27 |
fungi | markmcclain: sdague: so reverting https://review.openstack.org/60805 , that merged just over 24 hours ago--is that when we saw the sharp increase in these issues? | 00:28 |
sdague | fungi: yeh, typically I just update the commit message with a comment | 00:28 |
fungi | wfm | 00:28 |
markmcclain | fungi: yes | 00:28 |
markmcclain | also that change had 4 rechecks for the exact bug that is currently breaking the gate | 00:29 |
fungi | oh wow | 00:29 |
markmcclain | I'm going to send out an email to our team reminding them that just rechecking/reverifying until something passes isn't good | 00:29 |
markmcclain | and that playing a game of chance makes the gate more unstable | 00:30 |
jeblair | markmcclain: so it's not the origination of the bug, but it exacerbated it? | 00:30 |
*** sarob has quit IRC | 00:30 | |
markmcclain | jeblair: correct | 00:30 |
sdague | jeblair: right, this bug has been around for a long time | 00:30 |
fungi | that's seeming likely... clarkb filed that one weeks ago | 00:30 |
*** sarob has joined #openstack-infra | 00:30 | |
sdague | yeh, but this has been pernicious | 00:30 |
sdague | it's at least 6 months of behavior | 00:30 |
fungi | so either exacerbated it or this is a more voracious doppelganger | 00:31 |
sdague | we turned off the ssh tests completely in tempest for a while because of it | 00:31 |
markmcclain | fungi: my opinion is that this exacerbated it | 00:31 |
sdague | but I'm pretty convinced that the ssh check methodology is solid now | 00:31 |
fungi | yeah, i remember it was disabled for the longest time | 00:31 |
jeblair | fungi: if you promote, maybe include both of these changes | 00:31 |
*** sarob has quit IRC | 00:31 | |
fungi | jeblair: oh, it takes a list? great! | 00:32 |
sdague | mine should be good in 22 minutes | 00:32 |
markmcclain | and the next step is to do a postmortem and understand why it exacerbated because it might give insight into the original bug | 00:32 |
fungi | markmcclain: i couldn't agree more | 00:32 |
fungi | it could lead us to some underlying cause | 00:32 |
sdague | markmcclain: so was moving back to the metadata server discounted? | 00:32 |
*** sarob has joined #openstack-infra | 00:33 | |
markmcclain | yeah because metadata server verification still failed | 00:33 |
jeblair | fungi: yep, ordered list so "zuul promote --pipeline gate --changes 1234,5 6789,10" puts change 1234,5 at the top and then 6789,10 right after it | 00:33 |
sdague | ok | 00:33 |
salv-orlando | from my analysis this caused a different failure mode. I checked all the known issues we've had causing the timeouts (missing dhcp notifications, slowness in agent loop, wiring of the floating ip) but the failures here occurred while everything was apparently perfectly wired | 00:33 |
fungi | jeblair: that's marvellous. agreed these two might make our weekend more pleasant | 00:33 |
*** UtahDave has quit IRC | 00:34 | |
*** sarob has quit IRC | 00:34 | |
salv-orlando | anyway, the truth is that in my opinion bug 1253896 is an accumulation bug, where factors from each agent concur. Removing these factors pushes us away from the timeout threshold, but 'bad' patches then push us back past the timeout threshold | 00:35 |
uvirtbot | Launchpad bug 1253896 in neutron "Attempts to verify guests are running via SSH fails. SSH connection to guest does not work." [Critical,In progress] https://launchpad.net/bugs/1253896 | 00:35 |
jeblair | fungi: it should handle dependencies too, so you don't have to track those down | 00:35 |
*** jerryz has joined #openstack-infra | 00:35 | |
fungi | even better | 00:35 |
sdague | salv-orlando: so the timeout is 196 seconds | 00:35 |
sdague | are you saying the cirros image should take that long to be sshable? | 00:36 |
* fungi gives up and hits ctrl-c after about 15 seconds of not getting an ssh banner, for the record | 00:36 | |
sdague | yeh, the cirros image boots in about 7s iirc | 00:36 |
clarkb | sdague: in this case it is sshable | 00:36 |
clarkb | authentication is failing, but the server responds back with bits | 00:37 |
*** sarob has joined #openstack-infra | 00:37 | |
salv-orlando | sdague: not at all, I was just talking about the nature of the bug; the 196 seconds timeout is fine | 00:37 |
fungi | oh, sounds like it could be i/o starvation then | 00:37 |
sdague | ok | 00:37 |
jerryz | sdague: in large-ops , i have seen longer | 00:37 |
sdague | salv-orlando: ok, sorry which timeout. Just so I understand more | 00:37 |
fungi | sshd in a typical configuration will most commonly block on processing the password prompt if it can't read from the filesystem | 00:37 |
*** sarob has quit IRC | 00:38 | |
fungi | (or write, trying to update wtmp) | 00:38 |
*** sarob has joined #openstack-infra | 00:38 | |
fungi | clarkb: or does it respond in the negative? | 00:38 |
clarkb | fungi: this is dropbear doing key auth with authorized keys supplied via metadata server or config drive. I have a hunch that either the authorized keys file isn't showing up properly or doing the RSA maths is too slow on qemu | 00:38 |
clarkb | fungi: it responds negatively | 00:39 |
salv-orlando | sdague: sorry I must be a bit confused; I was just saying the current timeouts are fine, they should not be changed. It's that when changes are made to the various neutron agents, often the agent also becomes slightly 'slower' | 00:39 |
fungi | ohhhhhh | 00:39 |
jerryz | sdague: for some of my devstack slaves , io wait takes 40% cpu for large-ops tests. | 00:39 |
salv-orlando | thus pushing the whole process closer to the timeout. | 00:39 |
sdague | jerryz: sure, but that's large-ops | 00:39 |
sdague | which doesn't use a real hypervisor | 00:39 |
sdague | which takes time :) | 00:39 |
jerryz | sdague salv-orlando: i don't think a common build_timeout var should apply in large-ops | 00:39 |
sdague | jerryz: so in the gate, we don't do large-ops on real hypervisor | 00:40 |
sdague | we only run it on fakevirt | 00:40 |
fungi | clarkb: my money would still be on file access then. rsa calculations are expensive... when implemented on a smartcard or microcontroller maybe | 00:40 |
sdague | because a real hypervisor actually bogs down too much, so doesn't move the load back to the control plane | 00:40 |
* salv-orlando decides it's better to go to sleep as I've just realised I was discussing a specific bug hitting neutron smoke jobs and not the large-ops job | 00:40 | |
sdague | which is actually its intent | 00:40 |
jerryz | sdague: for large-ops to succeed in my env, i increased the timeout from 196 back to 400 for the test to pass | 00:41 |
fungi | clarkb: ~200 seconds to do rsa is not the cpu in this case, i'd guess | 00:41 |
clarkb | fungi: right | 00:41 |
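As an aside, one way to check clarkb's authorized-keys hunch from inside a guest, a hypothetical check rather than anything run in this log (the paths and the availability of curl in the cirros image are assumptions):

    # the key the EC2-style metadata service is handing out
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
    # versus what actually landed on disk for dropbear to use
    cat /home/cirros/.ssh/authorized_keys

If the two differ, or the first request hangs, the problem is key delivery rather than the RSA math.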
salv-orlando | that's why we were getting mixed up with timeouts | 00:41 |
sdague | jerryz: yep, that's a different issue though :) | 00:41 |
jerryz | otherwise, the image would be deleted | 00:41 |
*** sarob_ has joined #openstack-infra | 00:41 | |
*** sarob has quit IRC | 00:41 | |
fungi | salv-orlando: sleep well, we need you rested ;) | 00:41 |
clarkb | 191 slaves 914 threads | 00:41 |
sdague | yes, thanks much salv-orlando | 00:42 |
salv-orlando | goodnight and thanks | 00:42 |
clarkb | not the greatest ratio but pretty sure that is better than 02 | 00:42 |
*** sarob_ has quit IRC | 00:42 | |
*** sarob has joined #openstack-infra | 00:43 | |
fungi | Threads on jenkins02.openstack.org@166.78.48.99: Number = 1,399, Maximum = 2,530, Total started = 133,995 | 00:44 |
*** sarob has quit IRC | 00:45 | |
*** sarob has joined #openstack-infra | 00:45 | |
*** salv-orlando has quit IRC | 00:46 | |
fungi | the total executors graph for 02 looks like about 135 | 00:46 |
*** salv-orlando has joined #openstack-infra | 00:46 | |
fungi | oh, short timespan graph shows more like 155 | 00:47 |
*** sarob has quit IRC | 00:48 | |
*** sarob_ has joined #openstack-infra | 00:48 | |
*** sarob_ has quit IRC | 00:49 | |
torgomatic | the python 2.6 check job for https://review.openstack.org/60215 seems to be hanging around a really long time | 00:50 |
torgomatic | like, I could have sworn I saw it there yesterday too | 00:50 |
*** jergerber has quit IRC | 00:50 | |
clarkb | https://jenkins02.openstack.org/job/gate-swift-python26/2082/console it does look stuck, I think the red bar means jenkins tried to kill it but failed | 00:51 |
clarkb | fungi: ^ that look right to you? | 00:51 |
clarkb | torgomatic: it is only 50 minutes old | 00:51 |
fungi | yeah, i was just looking... says 51 minutes | 00:51 |
*** sarob has joined #openstack-infra | 00:51 | |
sdague | torgomatic: and swift 2.6 tests have been timing out at 90 minutes a lot | 00:51 |
torgomatic | clarkb: yeah, but that change has been at the top left of the check column all day... maybe it gets retried for some reason? | 00:51 |
fungi | i was watching things clear from the top of the check pipeline earlier today, so it's not been there all day | 00:52 |
clarkb | torgomatic: must've | 00:52 |
torgomatic | eh, I'll keep an eye on it and see if it flushes out... if not, I'll say something | 00:52 |
fungi | but it may have returned for more abuse | 00:52 |
fungi | torgomatic: https://jenkins02.openstack.org/job/gate-swift-python26/2082/console | 00:52 |
clarkb | nose is still running | 00:52 |
openstackgerrit | Monty Taylor proposed a change to openstack-infra/config: Don't attempt to open an interactive editor https://review.openstack.org/62116 | 00:52 |
fungi | definitely still active | 00:52 |
clarkb | I have a hunch that change is broken | 00:52 |
clarkb | or tickling other broken | 00:53 |
fungi | looks like it's been silent for ~50 minutes | 00:53 |
*** salv-orlando has quit IRC | 00:53 | |
fungi | so probably a broken change resulting in a timeout, yes | 00:53 |
*** sarob has quit IRC | 00:53 | |
*** sarob has joined #openstack-infra | 00:54 | |
torgomatic | ok, thanks | 00:54 |
clarkb | timeout is 60 minutes we should see it die here shortly | 00:54 |
fungi | sdague: gah, fail | 00:58 |
sdague | fungi: yeh, it's the RESCUE timeout nova issue | 00:58 |
fungi | sdague: so, money on promoting 62098,1 followed by 62107,1 i guess? | 00:59 |
sdague | so your call. This change was clearly not responsible for that... as it changed nothing but the log checker | 00:59 |
*** mriedem has quit IRC | 00:59 | |
sdague | fungi: sure | 00:59 |
sdague | I just put it in the gate | 00:59 |
jeblair | did it run the log checker? | 00:59 |
sdague | yes | 01:00 |
sdague | on all the jobs | 01:00 |
jeblair | cool, that's the main thing i think | 01:00 |
sdague | yep | 01:00 |
fungi | waiting to see if any of the remaining changes at the head of the gate make it through, and then i'll promote at the next reset | 01:00 |
sdague | fungi: 5 changes about to merge in gate | 01:00 |
sdague | yeh | 01:00 |
fungi | yep | 01:00 |
*** sarob has quit IRC | 01:00 | |
fungi | wow, so much green for once | 01:01 |
*** mrodden has quit IRC | 01:02 | |
fungi | 61442,1 is failing, but the changes ahead of it are making it in at least | 01:02 |
fungi | nevermind--it has a chance. i was looking at the wrong job | 01:03 |
sdague | that last neutron change at the head probably has about 2 more minutes | 01:03 |
sdague | oh, actually 5 - 8 more minutes | 01:04 |
fungi | yup. once it reports i'll promote the two we discussed | 01:04 |
sdague | it's in scenario | 01:04 |
sdague | funny to realize the non-parallel jobs are still there :) | 01:04 |
clarkb | fungi: btw I think we should keep nodepool running on jenkins-dev just to exercise that server for a bit | 01:07 |
sdague | fungi: I think that job is screwed. It's been 8 minutes since it last logged | 01:07 |
fungi | clarkb: i'm happy to leave it running indefinitely | 01:07 |
sdague | I think the test it is in is stuck | 01:07 |
fungi | sdague: i was thinking the same | 01:07 |
sdague | "that job" top of gate | 01:07 |
fungi | and FAIL | 01:08 |
fungi | shall i wait for it to fail out, or bump these two ahead of it in case they fix it? | 01:09 |
clarkb | BUMP BUMP BUMP | 01:09 |
* fungi looks for a second in favor | 01:09 | |
jeblair | fungi: bump | 01:10 |
fungi | done | 01:10 |
*** eharney has quit IRC | 01:10 | |
fungi | so we've got the potential neutron fix for the ssh timeouts at the head, followed by the tempest fix for grenade error filtering | 01:10 |
fungi | and our other neutron friend formerly in front gets a stay of execution | 01:11 |
sdague | cool, time to run away. | 01:11 |
fungi | flee | 01:12 |
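Given the syntax jeblair spelled out earlier, the promote fungi ran was presumably along these lines (reconstructed; the exact invocation isn't captured in the log):

    zuul promote --pipeline gate --changes 62098,1 62107,1

That puts the neutron ssh-timeout fix at the head of the gate with the grenade log-filter fix immediately behind it.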
*** mriedem has joined #openstack-infra | 01:12 | |
torgomatic | aha, something happened to the check job for 60215; python 2.6 is rerunning here: https://jenkins02.openstack.org/job/gate-swift-python26/2089/ | 01:12 |
fungi | torgomatic: very interesting | 01:13 |
torgomatic | yes | 01:13 |
openstackgerrit | Clark Boylan proposed a change to openstack-infra/config: Use even bigger hpcloud region b nodes. https://review.openstack.org/62122 | 01:13 |
clarkb | mordred: ^ | 01:13 |
fungi | torgomatic: more interesting still, https://review.openstack.org/60215 which it's testing there was last updated almost a week ago... | 01:14 |
*** sarob has joined #openstack-infra | 01:14 | |
fungi | over a week ago | 01:15 |
fungi | and has no reviews... I SMELL A DRAFT | 01:15 |
*** sarob has quit IRC | 01:15 | |
torgomatic | also, https://review.openstack.org/60215 doesn't show a patch set 2 at all | 01:15 |
torgomatic | but the job shows 60215,2 on the status page | 01:16 |
*** sarob has joined #openstack-infra | 01:16 | |
fungi | yeah, did it get submitted as a non-draft and switched to a draft while it was being tested, and now zuul is infinitely looping it? | 01:16 |
fungi | i think that may be it | 01:16 |
*** AaronGr is now known as AaronGr_afk | 01:16 | |
clarkb | loldrafts | 01:16 |
torgomatic | I'm not sure; I don't recall it going to draft, but I haven't paid this one any attention | 01:16 |
* fungi digs in the db | 01:16 | |
notmyname | torgomatic: seems that maybe we can't actually see drafts | 01:17 |
*** talluri has joined #openstack-infra | 01:17 | |
torgomatic | notmyname: yeah, I think you have to be explicitly invited to see one | 01:17 |
fungi | you can't, that's part of the fail about drafts | 01:17 |
fungi | specialsecretpatch | 01:17 |
*** yamahata_ has joined #openstack-infra | 01:17 | |
torgomatic | fungi: aha, I have Gerrit emails about a patch set 2, so that supports your hypothesis | 01:18 |
*** ryanpetrello has joined #openstack-infra | 01:19 | |
*** rwsu has quit IRC | 01:19 | |
* clarkb AFKs good night | 01:20 | |
*** paul-- has quit IRC | 01:20 | |
*** sarob has quit IRC | 01:20 | |
fungi | night clarkb | 01:20 |
*** rwsu has joined #openstack-infra | 01:20 | |
fungi | yep, i'm going to un-draft 60215,2 on the pretense that i don't want zuul burning test resources indefinitely | 01:21 |
*** paul-- has joined #openstack-infra | 01:21 | |
torgomatic | sounds good to me | 01:21 |
fungi | i'll add an apologetic comment on the change after | 01:21 |
fungi | torgomatic: good eye, btw ;) | 01:21 |
torgomatic | heh, thanks :) | 01:21 |
*** ryanpetrello has quit IRC | 01:24 | |
*** sarob has joined #openstack-infra | 01:24 | |
clarkb | wip it and suggest wip instead | 01:26 |
*** apevec has quit IRC | 01:28 | |
*** sarob has quit IRC | 01:29 | |
jeblair | http://paste.openstack.org/show/54995/ | 01:29 |
jeblair | fungi, clarkb, mordred: ^ downloading get-pip from github failed. | 01:30 |
jeblair | mordred: erm, did you add something to devstack-gate that downloads something from the internet? | 01:30 |
clarkb | devstack does it iirc | 01:31 |
fungi | yeah, it's devstack. and i think there are devstack+devstack-gate (or maybe nodepool prep script) changes to cache that but i guess they either haven't landed or they're not working | 01:32 |
fungi | i will go in search of them | 01:32 |
jeblair | fungi: i recall the change to devstack-gate to use get-pip; i don't recall a change from mordred to cache it | 01:33 |
fungi | markmcclain: btw 62098 is failing out on https://jenkins02.openstack.org/job/gate-tempest-dsvm-neutron/2378/ | 01:33 |
jeblair | i also recall lifeless asking mordred where it was cached and getting no response | 01:34 |
*** julim_ has quit IRC | 01:34 | |
fungi | jeblair: came later, but i do remember it... i think that was on the caching change | 01:34 |
jeblair | fungi: "git grep get.pip" returns nothing in my config tree | 01:34 |
*** jerryz has quit IRC | 01:35 | |
jeblair | fungi: oh, is it this one? | 01:35 |
jeblair | https://review.openstack.org/#/c/58099/ | 01:35 |
fungi | ahh, i think so | 01:35 |
jeblair | mordred: please finish this change: https://review.openstack.org/#/c/58099/2 | 01:37 |
jeblair | i really prefer it when we start caching a thing _before_ we start using it | 01:38 |
fungi | or https://review.openstack.org/#/c/51425/ | 01:38 |
fungi | nevermind, that's puppet bits | 01:38 |
fungi | (me too) | 01:38 |
*** ryanpetrello has joined #openstack-infra | 01:39 | |
*** esker has joined #openstack-infra | 01:40 | |
fungi | jeblair: salient bits found in bug 1254275 | 01:45 |
uvirtbot | Launchpad bug 1254275 in openstack-ci "check-swift-devstack-vm-functional fails fetching requirements from pypi.python.org" [Undecided,Fix released] https://launchpad.net/bugs/1254275 | 01:45 |
fungi | https://review.openstack.org/58106 was the part i remembered | 01:46 |
fungi | but yes, the config bit you mentioned is pending | 01:47 |
*** ArxCruz has joined #openstack-infra | 01:50 | |
fungi | ummm | 01:51 |
fungi | USE_GET_PIP=${USE_GET_PIP:-0} | 01:51 |
fungi | if [[ -n "$USE_GET_PIP" ]]; then | 01:51 |
fungi | ... | 01:51 |
fungi | maybe i'm just misreading that, but won't it *always* match? | 01:52 |
fungi | ENOTPYTHON | 01:53 |
StevenK_ | fungi: Yes | 01:54 |
StevenK_ | -n is "length of string is non-zero" | 01:55 |
fungi | right, i was more meaning "am i misreading openstack-dev/devstack:install_pip.sh" but that was more or less my point | 01:56 |
*** mrodden has joined #openstack-infra | 01:56 | |
fungi | though in this particular failure it was irrelevant. the not-yet-merged https://review.openstack.org/58099 was the bigger issue, since get-pip.py was not found in the local cache, got downloaded (poorly) and led to a syntax error when executed | 01:58 |
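The obvious correction for that test, sketched here by the editor rather than quoted from a merged devstack patch, is to branch on the variable's value instead of its length:

    # default to 0, then test the value, not string non-emptiness
    USE_GET_PIP=${USE_GET_PIP:-0}
    if [[ "$USE_GET_PIP" != "0" ]]; then
        echo "would fetch and run get-pip.py here"
    fi

With the original -n test, the default of 0 makes the string non-empty, so the branch is taken unconditionally.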
*** reed has joined #openstack-infra | 02:00 | |
*** paul-- has quit IRC | 02:01 | |
*** StevenK_ is now known as StevenK | 02:03 | |
*** CaptTofu has quit IRC | 02:03 | |
*** CaptTofu has joined #openstack-infra | 02:03 | |
*** paul-- has joined #openstack-infra | 02:04 | |
fungi | sdague's 62107 is going in at least | 02:07 |
fungi | hopefully that'll help | 02:07 |
*** spredzy_ has joined #openstack-infra | 02:10 | |
*** ryanpetrello has quit IRC | 02:15 | |
*** ryanpetrello has joined #openstack-infra | 02:15 | |
*** ryanpetrello has quit IRC | 02:18 | |
*** markmcclain1 has joined #openstack-infra | 02:24 | |
*** markmcclain has quit IRC | 02:24 | |
*** sarob has joined #openstack-infra | 02:38 | |
mordred | jeblair: k. I'm on it | 02:40 |
*** rongze has joined #openstack-infra | 02:42 | |
*** sarob has quit IRC | 02:43 | |
anteaya | fungi: can you look at this log: http://logs.openstack.org/98/62098/1/gate/gate-tempest-dsvm-neutron/b08831f/console.html#_2013-12-14_01_19_59_738 | 02:47 |
anteaya | get-pip.py invalid syntax | 02:47 |
clarkb | anteaya: that is a github error | 02:47 |
*** praneshp has quit IRC | 02:48 | |
mriedem | i just opened a bug for that | 02:48 |
anteaya | mriedem: have you a number for that one? | 02:48 |
mriedem | https://bugs.launchpad.net/devstack/+bug/1260913 | 02:48 |
anteaya | thanks | 02:48 |
uvirtbot | Launchpad bug 1260913 in devstack "SyntaxError calling get-pip" [Undecided,New] | 02:48 |
mriedem | clarkb: i thought you were saying the weekend was near? | 02:49 |
anteaya | elastic-recheck suggested https://bugs.launchpad.net/nova/+bug/1210483 for that error | 02:49 |
uvirtbot | Launchpad bug 1210483 in neutron "ServerAddressesTestXML.test_list_server_addresses FAIL" [Critical,Confirmed] | 02:49 |
clarkb | it was, but the thing I thought started earlier starts in 45 minutes | 02:49 |
mriedem | ha, ok | 02:49 |
anteaya | ah I hit both | 02:51 |
*** paul-- has quit IRC | 02:53 | |
*** paul-- has joined #openstack-infra | 02:53 | |
anteaya | before I go round again, just as a sanity check | 02:55 |
anteaya | https://review.openstack.org/#/c/62098/ doesn't contain code that created the github bug | 02:55 |
anteaya | what are the chances there is something in the patch that would make https://bugs.launchpad.net/nova/+bug/1210483 worse | 02:55 |
uvirtbot | Launchpad bug 1210483 in neutron "ServerAddressesTestXML.test_list_server_addresses FAIL" [Critical,Confirmed] | 02:55 |
anteaya | if there is a chance I will hold off reverifying until after talking with markmcclain and salvatore | 02:56 |
clarkb | anteaya you probably need to ask the teams involved | 02:58 |
anteaya | very good | 02:59 |
anteaya | will do so tomorrow | 02:59 |
anteaya | good night | 02:59 |
*** tma996 has joined #openstack-infra | 03:03 | |
*** syerrapragada has quit IRC | 03:06 | |
*** ryanpetrello has joined #openstack-infra | 03:16 | |
*** talluri has quit IRC | 03:20 | |
*** sandywalsh has quit IRC | 03:22 | |
*** mriedem has quit IRC | 03:25 | |
openstackgerrit | Darragh Bailey proposed a change to openstack-infra/jenkins-job-builder: Use yaml local tags to support including files https://review.openstack.org/48783 | 03:26 |
*** Ryan_Lane has quit IRC | 03:27 | |
sdague | man, apevec just self-approved a heat stable/grizzly change ... and their docs job was still broken | 03:28 |
*** talluri has joined #openstack-infra | 03:29 | |
mordred | wow really? | 03:33 |
*** Ryan_Lane has joined #openstack-infra | 03:33 | |
*** pcrews has quit IRC | 03:33 | |
portante | kinda like crossing the streams, huh? ;) | 03:34 |
*** sandywalsh has joined #openstack-infra | 03:34 | |
*** ryanpetrello has quit IRC | 03:35 | |
sdague | yeh, I just sent an email to the list | 03:40 |
sdague | I pulled a keystone one out earlier today as well | 03:40 |
sdague | and I think a cinder one yesterday | 03:40 |
*** Ryan_Lane has quit IRC | 03:40 | |
sdague | I also just went through and ran recheck no bug on everything in stable/grizzly | 03:40 |
sdague | that was older than Dec 10 | 03:41 |
sdague | so then at least there will be fresh -1s on things that can't pass | 03:41 |
*** ryanpetrello has joined #openstack-infra | 03:41 | |
sdague | a big part of the problem is a ton of those changes have their last valid Jenkins run from October | 03:41 |
sdague | then people +A them | 03:41 |
*** ryanpetrello has quit IRC | 03:42 | |
*** SergeyLukjanov has joined #openstack-infra | 03:43 | |
sdague | on the flip side 60643,4 is a special kind of fun :) | 03:43 |
sdague | a glance change that fails *everything* except the docs job | 03:44 |
sdague | literally, everything | 03:44 |
portante | so ,4 has an ORM change and a rename, and it is still failing? | 03:47 |
*** harlowja has quit IRC | 03:48 | |
portante | sorry, now I see the updated result | 03:50 |
portante | sdague: there are a number of folks who just don't run tox on their code before committing | 04:00 |
portante | have folks considered a git review type check before it makes it into the review queues? | 04:00 |
*** SergeyLukjanov has quit IRC | 04:02 | |
*** aardvark has joined #openstack-infra | 04:11 | |
*** mkoderer_ has quit IRC | 04:12 | |
*** vishy has quit IRC | 04:12 | |
*** Alex_Gaynor_ has joined #openstack-infra | 04:13 | |
*** mkoderer_ has joined #openstack-infra | 04:13 | |
*** Alex_Gaynor has quit IRC | 04:14 | |
*** WarrenUsui has quit IRC | 04:14 | |
*** Adri2000 has quit IRC | 04:14 | |
*** morganfainberg has quit IRC | 04:14 | |
*** Alex_Gaynor_ is now known as Alex_Gaynor | 04:14 | |
*** yaguang has joined #openstack-infra | 04:15 | |
*** Adri2000 has joined #openstack-infra | 04:15 | |
*** morganfainberg has joined #openstack-infra | 04:19 | |
*** vishy has joined #openstack-infra | 04:20 | |
*** yaguang has quit IRC | 04:20 | |
*** yaguang has joined #openstack-infra | 04:20 | |
*** jcooley_ has quit IRC | 04:25 | |
*** talluri has quit IRC | 04:28 | |
*** ArxCruz has quit IRC | 04:28 | |
openstackgerrit | Joe Gordon proposed a change to openstack-dev/hacking: Move hacking guide to root directory https://review.openstack.org/62132 | 04:31 |
openstackgerrit | Joe Gordon proposed a change to openstack-dev/hacking: Cleanup HACKING.rst https://review.openstack.org/62133 | 04:31 |
openstackgerrit | Joe Gordon proposed a change to openstack-dev/hacking: Re-Add section on assertRaises(Exception https://review.openstack.org/62134 | 04:31 |
openstackgerrit | Joe Gordon proposed a change to openstack-dev/hacking: Turn Python3 section into a list https://review.openstack.org/62135 | 04:31 |
openstackgerrit | Joe Gordon proposed a change to openstack-dev/hacking: Add Python3 deprecated assert* to HACKING.rst https://review.openstack.org/62136 | 04:31 |
openstackgerrit | Joe Gordon proposed a change to openstack-dev/hacking: Remove unnecessary headers https://review.openstack.org/62137 | 04:33 |
*** Ryan_Lane has joined #openstack-infra | 04:33 | |
*** Ryan_Lane has quit IRC | 04:38 | |
mordred | jog0 seems to be off the plane | 04:50 |
*** ryanpetrello has joined #openstack-infra | 04:51 | |
jeblair | i put 62098,1 back in the queue; if anyone wants to promote it next time there's a reset, i think that would be a good thing. i'm afk though. | 04:54 |
openstackgerrit | A change was merged to openstack/requirements: HTTPretty: update to 0.7.1 https://review.openstack.org/61981 | 07:58 |
enikanorov_ | hi folks | 08:21 |
enikanorov_ | what's the best way to do a recheck, if elastic recheck doesn't find the proper bug? (doesn't leave a comment at all) | 08:21 |
*** Abhishek has quit IRC | 08:27 | |
notmyname | enikanorov_: that means you get to read through logs. start by clicking on the job that failed, then look at the console logs (link on the top left). there should be some instructions there on what to do IIRC | 08:31 |
enikanorov_ | notmyname: well, that is not something new. I looked through the logs and found the failing tempest test. It doesn't give much of a clue, not to mention it is unrelated to my patch. | 08:32 |
enikanorov_ | so elastic is silent | 08:32 |
notmyname | ah. ok | 08:32 |
notmyname | is this a gate job or a check job | 08:32 |
notmyname | ? | 08:32 |
notmyname | ie was it approved and trying to land, or just the check? | 08:33 |
enikanorov_ | check | 08:34 |
notmyname | ok, so you have two options. (1) search Launchpad for a bug related to what you saw failing and, if nothing is found, file a new bug, or (2) recheck no bug | 08:35 |
*** Ryan_Lane has joined #openstack-infra | 08:35 | |
enikanorov_ | ah, i found the correct trace in the logs. it really fails due to a known bug | 08:35 |
notmyname | option 1 is better, but it does take more work on your part | 08:35 |
notmyname | oh, cool | 08:35 |
enikanorov_ | thanks | 08:35 |
notmyname | ok, so if elastic recheck didn't find it, it might be something that clarkb or sdague can add | 08:36 |
enikanorov_ | elastic should have found it, because it usually does find this bug | 08:36 |
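For reference, the retest comments under discussion took one of these forms when left on a gerrit review (the convention of the time):

    recheck bug 1260913
    recheck no bug
    reverify bug 1260913

recheck re-runs the check pipeline; reverify re-runs the gate jobs for an already-approved change.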
*** Ryan_Lane has quit IRC | 08:39 | |
*** Abhishek has joined #openstack-infra | 08:42 | |
*** jcooley_ has quit IRC | 08:47 | |
*** jcooley_ has joined #openstack-infra | 08:49 | |
*** SergeyLukjanov has joined #openstack-infra | 08:54 | |
*** basha has joined #openstack-infra | 08:54 | |
*** basha_ has joined #openstack-infra | 09:03 | |
*** AlexF has quit IRC | 09:03 | |
*** basha has quit IRC | 09:05 | |
*** basha_ is now known as basha | 09:05 | |
*** jcooley_ has quit IRC | 09:05 | |
*** praneshp has quit IRC | 09:06 | |
*** Abhishek has quit IRC | 09:24 | |
openstackgerrit | A change was merged to openstack-infra/devstack-gate: Don't save mysql logs https://review.openstack.org/61820 | 09:25 |
openstackgerrit | Sean Dague proposed a change to openstack-infra/elastic-recheck: remove the linebreak to see if this helps on matching https://review.openstack.org/62147 | 11:15 |
openstackgerrit | A change was merged to openstack-infra/elastic-recheck: remove the linebreak to see if this helps on matching https://review.openstack.org/62147 | 11:17 |
openstackgerrit | ChangBo Guo proposed a change to openstack-dev/hacking: Add check for removed modules in Python 3 https://review.openstack.org/61049 | 14:17 |
*** basha has joined #openstack-infra | 14:18 | |
*** adalbas has joined #openstack-infra | 14:27 | |
portante | anybody around? | 14:31 |
portante | need help with a failed grenade gate job, logs got cut off | 14:31 |
*** dcramer_ has quit IRC | 14:31 | |
portante | http://logs.openstack.org/96/61896/2/gate/gate-grenade-dsvm/5f10eb6 | 14:31 |
*** elasticio has quit IRC | 14:31 | |
portante | when i looked via jenkins it showed me that the error would have been saved in some smoke log file, but i am not able to find that jenkins reference again | 14:32 |
portante | jeblair, clarkb | 14:32 |
anteaya | 62098 got merged in at 11:52:18 utc Dec. 14, two failures show up on kibana since then - timestamps 12:00:56 and 14:14:46 | 14:36 |
anteaya | I am going with the notion that the job that failed at 12:00:56 probably didn't have the 62098 patch when it started the job | 14:37 |
openstackgerrit | ChangBo Guo proposed a change to openstack-dev/hacking: Add check for removed modules in Python 3 https://review.openstack.org/61049 | 14:37 |
anteaya | would the failure at 14:14:46 have had the patch by then? | 14:37 |
anteaya | hi portante, I can look | 14:38 |
*** CaptTofu has quit IRC | 14:38 | |
anteaya | after the jenkins job finishes, the node is deleted, so the jenkins console for that node would also not exist anymore | 14:40 |
anteaya | I don't know how to go about finding a referenced file after the node is gone, as in the link it was referencing | 14:41 |
anteaya | sorry I am not more help | 14:41 |
anteaya | enikanorov_: elastic-recheck bugs are added by hand | 14:42 |
portante | okay, I just filed a bug to track that the job got cut off so that we can see if this happens again, and reverified against that bug, https://bugs.launchpad.net/openstack-ci/+bug/1260983 | 14:43 |
uvirtbot | Launchpad bug 1260983 in openstack-ci "console log file appears cut off for gate job result" [Undecided,New] | 14:43 |
anteaya | anyone can add them and they are encouraged to do so: http://git.openstack.org/cgit/openstack-infra/elastic-recheck/tree/README.rst | 14:43 |
anteaya | if elastic-recheck is not performing to your expectations, you are welcome to offer any patch you feel would improve it | 14:43 |
anteaya | I think that sdague and the rest of the elastic-recheck contributors would welcome the help | 14:43 |
anteaya | thanks enikanorov_ | 14:44 |
anteaya | portante: thanks, I added some of the console output as well | 14:46 |
anteaya | I was reading a bug report the other day and the link to the logs was stale | 14:46 |
portante | great, thanks | 14:47 |
portante | yes, I have had to add .gz to the ends | 14:47 |
anteaya | my pleasure, thanks for filing the bug | 14:47 |
anteaya | ah | 14:47 |
anteaya | I will try that next time | 14:47 |
portante | i tried it with this one, but the .gz version is missing, or not yet created | 14:47 |
anteaya | might be a consequence of the fact the logs are cut off | 14:48 |
*** basha_ has joined #openstack-infra | 14:48 | |
anteaya | okay, the ssh patch got merged, thanks for the reverify jeblair | 14:49 |
anteaya | I'm going offline to see if I can find a life | 14:49 |
*** basha has quit IRC | 14:51 | |
*** basha_ is now known as basha | 14:51 | |
mriedem | anyone ever seen a failure like this? not even sure where to start. http://logs.openstack.org/07/59607/4/gate/gate-tempest-dsvm-neutron/5415965/console.html | 14:51 |
mriedem | weird, console log is garbage but testr_results has information in it | 14:52 |
portante | looks like it is getting cut off | 14:52 |
portante | see bug filed above | 14:53 |
*** rongze has quit IRC | 14:53 | |
*** rongze has joined #openstack-infra | 14:53 | |
openstackgerrit | ChangBo Guo proposed a change to openstack-dev/hacking: Add check for removed modules in Python 3 https://review.openstack.org/61049 | 14:59 |
sdague | mriedem: looks like a failure to build a node in your case | 15:00 |
mriedem | sdague: but it's weird, there are lots of other logs - when i see console have a sudden fail like that i expect there to not really be anything else showing up for logs | 15:01 |
sdague | mriedem: over what time period? | 15:02 |
sdague | there were a bunch of upgrades of jenkins | 15:02 |
sdague | due to crashes | 15:02 |
openstackgerrit | A change was merged to openstack-dev/pbr: Fix typos in documents https://review.openstack.org/61110 | 15:04 |
mriedem | sdague: http://logs.openstack.org/07/59607/4/gate/gate-tempest-dsvm-neutron/5415965/console.html | 15:05 |
*** rongze has quit IRC | 15:05 | |
*** rongze has joined #openstack-infra | 15:06 | |
openstackgerrit | ChangBo Guo proposed a change to openstack-dev/hacking: Add check for removed modules in Python 3 https://review.openstack.org/61049 | 15:06 |
*** CaptTofu has joined #openstack-infra | 15:10 | |
*** jcooley_ has quit IRC | 15:12 | |
openstackgerrit | Max Rydahl Andersen proposed a change to openstack-infra/jenkins-job-builder: Add support for sidebar-links plugin via properties. https://review.openstack.org/60710 | 15:31 |
fungi | anteaya: portante: the console log in jenkins for that job was actually https://jenkins01.openstack.org/job/gate-grenade-dsvm/1845/console | 16:41 |
fungi | not sure why it got truncated on the logserver | 16:41 |
fungi | (they don't get deleted when the slave where the job ran gets deleted) | 16:41 |
*** rongze has joined #openstack-infra | 16:45 | |
fungi | i think mriedem's is https://jenkins01.openstack.org/job/gate-tempest-dsvm-neutron/2415/console | 16:45 |
fungi | i wonder if something in jenkins 1.543 is causing problems for the console log collection routine in the scp publisher plugin | 16:46 |
*** Abhishek has joined #openstack-infra | 16:47 | |
mordred | fungi: so - I didn't exist yesterday - what were you guys doing with jenkins-dev and whatnot? | 16:47 |
fungi | so far the two examples we have are from jobs which ran on the upgraded jenkins master. too small of a sample size to call it a hunch | 16:47 |
fungi | mordred: we were testing nodepool on jenkins-dev upgraded to jenkins 1.543 to see if it worked okay | 16:47 |
jeblair | fungi: but considering console logs are copied by a separate thread after a job finishes, massive thread changes in jenkins could account for such a change | 16:47 |
mordred | fungi: ah! jenkins upgrade needed? | 16:48 |
fungi | jenkins downgrade needed, i thinkl | 16:48 |
fungi | or jenkins scp plugin fixes needed perhaps | 16:48 |
mordred | jenkins version change needed | 16:48 |
mordred | I'm pretty sure upgrade and downgrade are arbitrary words in that release stream | 16:48 |
fungi | do we want to hunt for more evidence, or put jenkins01 in shutdown to contain additional damage? | 16:49 |
jeblair | :) let's downgrade 01 to match 02, then next week we can try to repro on -dev | 16:49 |
fungi | k | 16:49 |
jeblair | maybe writing a job to spew 24k lines of text will do the trick. | 16:49 |
fungi | i've put it in prepare for shutdown | 16:49 |
fungi | i'll downgrade it once it quiesces | 16:50 |
fungi | mordred: unrelated, but i've reopened bug 1254275 and added notes based on revelations last night | 16:50 |
clarkb | :( the old war is in /usr/share/jenkins if you want the dirty downgrade | 16:50 |
uvirtbot | Launchpad bug 1254275 in openstack-ci "check-swift-devstack-vm-functional fails fetching requirements from pypi.python.org" [High,In progress] https://launchpad.net/bugs/1254275 | 16:50 |
mordred | fungi: awesome. thank you | 16:51 |
mordred | speaking of that failure ... | 16:51 |
jeblair | 02 is Jenkins ver. 1.525 | 16:51 |
mordred | oh - nevermind. I figured it out | 16:52 |
clarkb | should be able to reproduce well enough on jenkins-dev so that is nice | 16:52 |
fungi | mordred: also, noticed a logic error in the get-pip additions to devstack (though they're not relevant to our tests)... looks like we unconditionally set the get_pip var to 0 if it's empty, then later check to see whether it's nonempty (rather than whether it's nonzero). haven't had time to write a patch yet, but might be of interest to you | 16:52 |
*** _SergeyLukjanov is now known as SergeyLukjanov | 16:52 | |
*** rongze has quit IRC | 16:53 | |
mordred | fungi: ok. I'll look at that too | 16:53 |
fungi | thanks | 16:55 |
fungi | too bad we have to downgrade jenkins01... memory utilization looks to be improved somewhat (even looks like it actually reclaimed a little when activity waned) http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=819&rra_id=all | 16:58 |
*** dcramer_ has quit IRC | 17:00 | |
*** DennyZhang has quit IRC | 17:01 | |
*** DennyZhang has joined #openstack-infra | 17:02 | |
clarkb | :/ I am sure we can get it sorted out relatively quickly as reproducing the problem shouldn't be hard | 17:02 |
jeblair | ++ | 17:02 |
*** cody-somerville has joined #openstack-infra | 17:03 | |
*** cody-somerville has quit IRC | 17:03 | |
*** cody-somerville has joined #openstack-infra | 17:03 | |
jeblair | we'll want to consider nodepool's impact too -- did the scp stop when nodepool deleted the node? | 17:03 |
clarkb | jeblair: I don't think so because the master has the complete console log | 17:04 |
clarkb | I can't look at it today, about to start weekend things | 17:05 |
*** jcooley_ has joined #openstack-infra | 17:07 | |
fungi | portante: (or anybody) do you remember which bug was tracking swift container server not starting after a grenade upgrade from grizzly to havana? | 17:07 |
*** rcarrillocruz1 has joined #openstack-infra | 17:07 | |
fungi | we had a havana change hit it and just want to make sure i reverify against the right bug... http://logs.openstack.org/33/61533/1/gate/gate-grenade-dsvm/318aa6f/ | 17:07 |
fungi | sdague: if you're around, do you recall? ^ | 17:08 |
*** rcarrillocruz2 has joined #openstack-infra | 17:09 | |
fungi | nevermind--found. seems to be bug 1253886 | 17:09 |
uvirtbot | Launchpad bug 1253886 in swift "swift container server fails to start in grenade gate job" [Undecided,New] https://launchpad.net/bugs/1253886 | 17:09 |
*** rcarrillocruz has quit IRC | 17:09 | |
*** rcarrillocruz1 has quit IRC | 17:12 | |
fungi | looks like we're about 10 minutes out from being able to downgrade jenkins01 | 17:12 |
*** kashyap` has joined #openstack-infra | 17:12 | |
*** praneshp has joined #openstack-infra | 17:13 | |
*** chandankumar_ has joined #openstack-infra | 17:13 | |
*** kashyap has quit IRC | 17:15 | |
*** chandankumar has quit IRC | 17:16 | |
jeblair | i just checked the running job, it also suffered a truncated log, but the node is still there, so i don't think nodepool is part of the issue | 17:19 |
*** Abhishek has quit IRC | 17:20 | |
jeblair | fungi: 01 is idle | 17:21 |
fungi | yep, all idle | 17:21 |
fungi | heh | 17:21 |
*** kashyap has joined #openstack-infra | 17:24 | |
fungi | okay, jenkins stopped | 17:24 |
*** sdake has joined #openstack-infra | 17:24 | |
fungi | i've retrieved http://pkg.jenkins-ci.org/debian/binary/jenkins_1.525_all.deb into my homedir | 17:24 |
*** chandankumar__ has joined #openstack-infra | 17:24 | |
jeblair | fungi: let me know if you need anything; otherwise i'll cheer you on from the sidelines and check up on nodepool and log workers | 17:24 |
fungi | no problem. just confirming that's the deb we want | 17:25 |
jeblair | i think so | 17:25 |
fungi | okay, downgraded and starting | 17:25 |
fungi | seems up | 17:26 |
fungi | running jobs | 17:26 |
*** kashyap` has quit IRC | 17:27 | |
*** chandankumar_ has quit IRC | 17:28 | |
*** oubiwann has quit IRC | 17:28 | |
fungi | i'll update bug 1260983 with specifics | 17:28 |
uvirtbot | Launchpad bug 1260983 in openstack-ci "console log file appears cut off for gate job result" [Undecided,New] https://launchpad.net/bugs/1260983 | 17:28 |
jeblair | fungi: logstash client and nodepool are both getting events | 17:29 |
fungi | excellent | 17:30 |
fungi | http://logs.openstack.org/81/62181/1/check/gate-python-novaclient-pypy/cca17a4/console.html ran from jenkins01 after the downgrade, and has its whole console log archived, fwiw | 17:31 |
fungi | same for http://logs.openstack.org/81/62181/1/check/gate-python-novaclient-python33/8c4221b/console.html | 17:32 |
*** pcrews has joined #openstack-infra | 17:33 | |
clarkb | I guess that confirms the source of the problem | 17:35 |
*** basha has joined #openstack-infra | 17:35 | |
*** basha has quit IRC | 17:37 | |
*** Abhishek has joined #openstack-infra | 17:38 | |
*** oubiwann has joined #openstack-infra | 17:39 | |
fungi | seems so | 17:40 |
*** jcooley_ has quit IRC | 17:41 | |
*** dcramer_ has joined #openstack-infra | 17:44 | |
*** DennyZhang has quit IRC | 17:47 | |
*** pcrews has quit IRC | 17:59 | |
*** dstufft_ has joined #openstack-infra | 17:59 | |
*** persia has joined #openstack-infra | 18:04 | |
*** persia is now known as Guest78365 | 18:04 | |
*** tsufiev_ has joined #openstack-infra | 18:04 | |
*** belliott_ has joined #openstack-infra | 18:05 | |
*** portante_ has joined #openstack-infra | 18:05 | |
*** dstufft has quit IRC | 18:05 | |
*** morganfainberg has quit IRC | 18:05 | |
*** bknudson has quit IRC | 18:05 | |
*** mkoderer_ has quit IRC | 18:05 | |
*** Alex_Gaynor has quit IRC | 18:05 | |
*** cyeoh has quit IRC | 18:05 | |
*** sdague has quit IRC | 18:05 | |
*** sirushti has quit IRC | 18:05 | |
*** jog0 has quit IRC | 18:05 | |
*** Guest96097 has quit IRC | 18:05 | |
*** changbl has quit IRC | 18:05 | |
*** tsufiev has quit IRC | 18:05 | |
*** belliott has quit IRC | 18:05 | |
*** belliott_ is now known as belliott | 18:05 | |
*** briancline has quit IRC | 18:05 | |
*** portante has quit IRC | 18:05 | |
*** jog0 has joined #openstack-infra | 18:05 | |
*** briancline has joined #openstack-infra | 18:05 | |
*** morganfainberg has joined #openstack-infra | 18:06 | |
*** mkoderer_ has joined #openstack-infra | 18:06 | |
*** cyeoh has joined #openstack-infra | 18:06 | |
*** bknudson has joined #openstack-infra | 18:06 | |
*** sirushti has joined #openstack-infra | 18:06 | |
*** sdague has joined #openstack-infra | 18:06 | |
*** Alex_Gaynor_ has joined #openstack-infra | 18:06 | |
*** changbl has joined #openstack-infra | 18:06 | |
*** oubiwann has quit IRC | 18:07 | |
*** Abhishek has quit IRC | 18:24 | |
*** CaptTofu has joined #openstack-infra | 18:24 | |
*** basha has joined #openstack-infra | 18:27 | |
*** rcarrillocruz2 has quit IRC | 18:40 | |
*** cyeoh has quit IRC | 18:45 | |
*** yolanda has joined #openstack-infra | 18:47 | |
*** mdenny has quit IRC | 18:50 | |
*** rongze has joined #openstack-infra | 18:53 | |
*** basha has quit IRC | 18:57 | |
*** CaptTofu has quit IRC | 18:58 | |
*** rongze has quit IRC | 18:59 | |
portante_ | fungi: that container start failure one is kinda weird. It does not seem to happen too often, but it might indicate another problem | 19:04 |
fungi | portante_: agreed. it also may be fixed on master. i ran into it on a stable/havana change | 19:04 |
fungi | trying to pitch in cramming the last of the sphinx caps and cve fixes through for the 2013.2.1 release | 19:05 |
fungi | because, you know, it's always wednesday somewhere | 19:06 |
*** basha has joined #openstack-infra | 19:06 | |
*** praneshp has quit IRC | 19:07 | |
*** CaptTofu has joined #openstack-infra | 19:07 | |
*** praneshp has joined #openstack-infra | 19:08 | |
*** rcarrillocruz has joined #openstack-infra | 19:13 | |
*** enikanorov___ has joined #openstack-infra | 19:15 | |
*** CaptTofu has quit IRC | 19:16 | |
*** locke1051 has joined #openstack-infra | 19:16 | |
*** locke105 has quit IRC | 19:16 | |
*** dstanek has quit IRC | 19:17 | |
*** lifeless has quit IRC | 19:17 | |
*** moted_ has joined #openstack-infra | 19:17 | |
*** lifeless has joined #openstack-infra | 19:17 | |
*** moted has quit IRC | 19:17 | |
*** moted_ is now known as moted | 19:17 | |
*** Guest78365 has quit IRC | 19:18 | |
*** persia has joined #openstack-infra | 19:18 | |
*** beekneemech has quit IRC | 19:18 | |
*** enikanorov_ has quit IRC | 19:18 | |
*** persia is now known as Guest93136 | 19:18 | |
*** bnemec has joined #openstack-infra | 19:19 | |
*** sergtimofeev has joined #openstack-infra | 19:21 | |
*** rcarrillocruz1 has joined #openstack-infra | 19:22 | |
*** rcarrillocruz has quit IRC | 19:24 | |
*** enikanorov___ has quit IRC | 19:28 | |
*** saschpe_ has joined #openstack-infra | 19:28 | |
*** basha has quit IRC | 19:29 | |
*** Hefeweiz1n has joined #openstack-infra | 19:30 | |
*** NobodyCa1 has joined #openstack-infra | 19:30 | |
*** devanand1 has joined #openstack-infra | 19:30 | |
*** mestery_ has joined #openstack-infra | 19:31 | |
*** Hefeweizen has quit IRC | 19:32 | |
*** NobodyCam has quit IRC | 19:32 | |
*** devananda has quit IRC | 19:32 | |
*** saschpe has quit IRC | 19:32 | |
*** mestery has quit IRC | 19:32 | |
*** bogdando has joined #openstack-infra | 19:33 | |
*** dcramer_ has quit IRC | 19:38 | |
*** rainya has joined #openstack-infra | 19:45 | |
*** dcramer_ has joined #openstack-infra | 19:52 | |
*** rcarrillocruz1 has quit IRC | 20:04 | |
mordred | ok. it's become time to go unsubscribe from mailing lists I was subscribed to but had been sorting into hidden folders | 20:10 |
*** dstanek has joined #openstack-infra | 20:13 | |
*** enikanorov has joined #openstack-infra | 20:15 | |
*** dstanek has quit IRC | 20:18 | |
fungi | it's like when you move and never get around to unpacking some boxes you stashed in the attic/garage... after a year or two, you can be pretty sure you didn't really need whatever was in them anyway | 20:18 |
*** dcramer_ has quit IRC | 20:25 | |
*** reed has joined #openstack-infra | 20:29 | |
*** slong has joined #openstack-infra | 20:35 | |
mordred | yup | 20:35 |
*** slong has quit IRC | 20:35 | |
mordred | I'm still, it turns out, subscribed to the autoconf list | 20:36 |
*** harlowja has joined #openstack-infra | 20:37 | |
*** zehicle has joined #openstack-infra | 20:56 | |
*** mrodden has quit IRC | 20:57 | |
*** mrodden has joined #openstack-infra | 20:59 | |
*** zehicle_at_dell has quit IRC | 20:59 | |
*** rcarrillocruz has joined #openstack-infra | 21:02 | |
*** boris-42 has quit IRC | 21:03 | |
mordred | fungi: hey - so, you know how we don't really read emails from root? | 21:08 |
mordred | fungi: best I can tell - manage-projects cron has not worked in QUITE some time | 21:08 |
mordred | because it needs to read the github key, which is in a file owned and only readable by root | 21:08 |
mordred | but the cronjob is owned by gerrit2 | 21:09 |
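The failure mode mordred describes is a plain permissions mismatch: the cron job runs as gerrit2, but the key is readable only by root. A small sanity check in that spirit, with an assumed key path (not the real file location):

```python
import os

GITHUB_KEY = "/etc/github/github.secure.config"  # assumed path; real file is root-owned

# If the cron user (gerrit2 here, not root) cannot read the key, every
# manage-projects run from cron fails before it can do anything useful.
if not os.access(GITHUB_KEY, os.R_OK):
    raise SystemExit(f"uid {os.getuid()} cannot read {GITHUB_KEY}; "
                     "the manage-projects cron would die right here")
print("key file readable; cron user can proceed")
```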
openstackgerrit | Monty Taylor proposed a change to openstack-infra/config: Split config from projects list https://review.openstack.org/62187 | 21:09 |
openstackgerrit | Monty Taylor proposed a change to openstack-infra/config: Finish the projects.yaml.erb rename https://review.openstack.org/62188 | 21:09 |
openstackgerrit | Monty Taylor proposed a change to openstack-infra/config: Unlaunchpadify projects.yaml https://review.openstack.org/62189 | 21:09 |
openstackgerrit | Monty Taylor proposed a change to openstack-infra/config: Track direct-release projects in projects.yaml https://review.openstack.org/62190 | 21:09 |
*** markmcclain has quit IRC | 21:10 | |
*** markmcclain1 has joined #openstack-infra | 21:10 | |
*** dcramer_ has joined #openstack-infra | 21:16 | |
*** SergeyLukjanov has quit IRC | 21:16 | |
*** zehicle has quit IRC | 21:20 | |
*** zehicle has joined #openstack-infra | 21:21 | |
openstackgerrit | Matt Riedemann proposed a change to openstack-infra/elastic-recheck: Add query for bug 1260644 https://review.openstack.org/62192 | 21:23 |
uvirtbot | Launchpad bug 1260644 in nova "ServerRescueTest may fail due to RESCUE taking too long" [High,Confirmed] https://launchpad.net/bugs/1260644 | 21:23 |
chmouel | fungi: when you say "need an embargo" what do you mean by embargo? | 21:28 |
*** dcramer_ has quit IRC | 21:29 | |
*** rcarrillocruz has quit IRC | 21:31 | |
reed | chmouel: embargo is the american word for 'you're a communist, won't talk to you' | 21:36 |
*** dcramer_ has joined #openstack-infra | 21:43 | |
*** harlowja has quit IRC | 21:50 | |
*** rongze has joined #openstack-infra | 21:54 | |
*** rongze has quit IRC | 21:59 | |
*** harlowja has joined #openstack-infra | 22:12 | |
*** dstanek has joined #openstack-infra | 22:14 | |
*** flor3n has joined #openstack-infra | 22:16 | |
*** dstanek has quit IRC | 22:18 | |
*** harlowja has quit IRC | 22:23 | |
*** fbo is now known as fbo_away | 22:26 | |
anteaya | fungi: thanks for the url. yes, I wasn't able to express myself accurately - I didn't think the logs themselves got deleted (as you point out, they don't), but rather that the gui link to those logs evaporates | 22:31 |
*** harlowja has joined #openstack-infra | 22:32 | |
anteaya | which I still believe accurately represents events | 22:32 |
anteaya | chmouel: if embargo meant everyone from that project needs to stop approving patches until the <patch_number> patch merges | 22:33 |
anteaya | would that make sense? | 22:33 |
*** flor3n has quit IRC | 22:43 | |
openstackgerrit | A change was merged to openstack-infra/elastic-recheck: Add query for bug 1260644 https://review.openstack.org/62192 | 22:46 |
uvirtbot | Launchpad bug 1260644 in nova "ServerRescueTest may fail due to RESCUE taking too long" [High,Confirmed] https://launchpad.net/bugs/1260644 | 22:46 |
*** ArxCruz has joined #openstack-infra | 22:50 | |
*** dims has quit IRC | 22:55 | |
*** dims has joined #openstack-infra | 23:11 | |
fungi | mordred: good find on the cron job. obviously we should probably drop it anyway, at least until we put some locking around manage-projects to keep it from running concurrently with itself | 23:17 |
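The locking fungi suggests can be as small as an exclusive, non-blocking flock, so an overlapping run exits instead of racing. A sketch under assumed names (lock path and entry point are placeholders, not the real manage-projects internals):

```python
import fcntl
import sys

LOCK_PATH = "/tmp/manage-projects.lock"  # hypothetical lock location

def run_manage_projects() -> None:
    print("manage-projects body would run here")  # stand-in for the real entry point

with open(LOCK_PATH, "w") as lockfile:
    try:
        # LOCK_NB makes a contended lock raise instead of block.
        fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit("another manage-projects run holds the lock; exiting")
    run_manage_projects()
```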
fungi | chmouel: "embargo" in the context of a security vulnerability means keeping it private to a specific audience for some brief period before allowing it to become public | 23:19 |
*** thomasem has joined #openstack-infra | 23:23 | |
openstackgerrit | Joe Gordon proposed a change to openstack/requirements: Unpin keyring, just skip keyring 2.0 https://review.openstack.org/58362 | 23:29 |
*** flor3n has joined #openstack-infra | 23:35 | |
*** flaper87|afk has quit IRC | 23:52 | |
*** adarazs has quit IRC | 23:52 | |
*** shardy has quit IRC | 23:55 |