*** ociuhandu has quit IRC | 00:01 | |
*** openstack has joined #openstack-gate | 00:02 | |
mriedem | i.e. test_rescue_server does a lot of global setup in the class with resources that aren't necessarily needed in all test cases | 00:03 |
mriedem | that all runs in parallel making a load on the server when it's unnecessary | 00:03 |
mriedem | mtreinish: ^ | 00:05 |
mriedem | i also don't know why that test worries about adding an _unpause/_unrescue cleanup when it's just going to delete the server when the test exits anyway, seems like a waste of time | 00:08 |
*** ken1ohmichi has joined #openstack-gate | 00:11 | |
cyeoh | mriedem: it's only deleted at tearDownClass isn't it? | 00:11 |
cyeoh | mriedem: so it's just making sure the server is put back in the correct state before the next rescue test | 00:11 |
mriedem | cyeoh: hmm you might be right about that, still i'm not sure this test is very stable | 00:14 |
mriedem | cyeoh: like it creates a server to rescue in the setup, but then there are tests that rescue a different server | 00:15 |
mriedem | seems like this test is trying to do too much | 00:15 |
mriedem | or cover too many different scenarios | 00:15 |
mriedem | cyeoh: you might want to look at this: https://review.openstack.org/#/c/69455/ | 00:15 |
*** markmcclain has quit IRC | 00:16 | |
cyeoh | mriedem: looking now. Would be a bit concerning if cinder is causing this slowness though... | 00:17 |
cyeoh | mriedem: it looks like the second server is only used for negative tests, but could probably just use the original server | 00:19 |
cyeoh | and just put it into rescue state first | 00:19 |
*** sc68cal has joined #openstack-gate | 00:23 | |
mriedem | cyeoh: yeah, with the _unrescue cleanup i guess | 00:25 |
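The pattern being discussed — rescue the class-level shared server inside a negative test and register an unrescue cleanup so the server is back in ACTIVE state before the next test in the class runs — could be sketched roughly as below. This is a minimal stand-alone illustration, not the actual Tempest code: `FakeServersClient`, `RescueNegativeTest`, and the state strings are invented stand-ins.

```python
import unittest

# FakeServersClient is a stand-in for the real Tempest servers client;
# it tracks only the one piece of state this sketch cares about.
class FakeServersClient:
    def __init__(self):
        self.state = 'ACTIVE'

    def rescue_server(self, server_id):
        self.state = 'RESCUE'

    def unrescue_server(self, server_id):
        self.state = 'ACTIVE'

class RescueNegativeTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # one server shared across the whole class, as in test_rescue_server
        cls.client = FakeServersClient()
        cls.server_id = 'fake-server-uuid'

    def test_negative_action_on_rescued_server(self):
        # rescue the shared server, and register a cleanup so it is
        # unrescued (back to ACTIVE) before the next test in the class runs
        self.client.rescue_server(self.server_id)
        self.addCleanup(self.client.unrescue_server, self.server_id)
        self.assertEqual('RESCUE', self.client.state)

suite = unittest.TestLoader().loadTestsFromTestCase(RescueNegativeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because `addCleanup` runs after each test, the shared server ends the test back in ACTIVE without a second server ever being created.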
mriedem | cyeoh: i'm not sure how to tell if cinder is causing the slowness | 00:25 |
cyeoh | mriedem: timestamps on the cinder-api logs? | 00:25 |
mriedem | cyeoh: especially considering we did this for another volume-related race fail: https://review.openstack.org/#/c/69443/ | 00:25 |
mriedem | yeah, it's just this has been showing up since last september at least | 00:26 |
mriedem | the pause/rescue one at least has | 00:26 |
mriedem | this is the bug i'm tracking the pause fail: https://bugs.launchpad.net/nova/+bug/1226412 | 00:26 |
cyeoh | mriedem: ok I guess my concern about patches like 69443 is I don't think we really have an understanding of why it takes so long sometimes | 00:28 |
cyeoh | is it just a property of the the way we are doing testing in the gate or is it going to be a problem in real setups too? | 00:28 |
*** dims has quit IRC | 00:34 | |
*** dims has joined #openstack-gate | 00:36 | |
jgriffith | mriedem: cyeoh cinder causing slowness? | 00:41 |
cyeoh | jgriffith: looking at changes like this: https://review.openstack.org/#/c/69455/ which are trying to avoid creating/deleting volumes because it's suspected that the timeouts are caused waiting for cinder to create/delete volumes | 00:44 |
jgriffith | cyeoh: I don't think that's accurate | 00:45 |
jgriffith | cyeoh: is there data to back that up? | 00:45 |
jgriffith | cyeoh: I mean... for example the last "cinder is too slow to attach bug" had absolutely nothing to do with cinder | 00:46 |
cyeoh | jgriffith: yea that's what I've been asking :-) | 00:46 |
jgriffith | cyeoh: ahhh... :) | 00:46 |
jgriffith | cyeoh: well I can surely help figure that out | 00:46 |
jgriffith | cyeoh: FWIW, I run all sorts of tests that create delete batches of 100 vols without problems... BUT | 00:47 |
jgriffith | it all changes if you throw in things like instances and attaches | 00:47 |
cyeoh | ok, which is what we have here. | 00:47 |
jgriffith | cyeoh: yeah, looking | 00:48 |
jgriffith | cyeoh: do you have an example failure? | 00:48 |
jgriffith | cyeoh: never mind | 00:48 |
jgriffith | cyeoh: I remember looking at one of these | 00:49 |
jgriffith | cyeoh: does this mean anything to you: http://logs.openstack.org/77/56577/9/check/check-tempest-devstack-vm-postgres-full/f5fe3ff/logs/screen-n-cpu.txt.gz#_2013-11-25_15_17_36_732 | 00:49 |
cyeoh | ... just looking... | 00:50 |
cyeoh | jgriffith: you're referring to the tracebacks in there? | 00:51 |
jgriffith | cyeoh: yes | 00:51 |
jgriffith | cyeoh: also a glance failure or two | 00:51 |
jgriffith | cyeoh: some "WARNING" traces from instance not found as well | 00:52 |
jgriffith | cyeoh: so IIRC the create volume is a create bootable via nova | 00:52 |
jgriffith | cyeoh: and the fetch to glance is what's failing | 00:53 |
jgriffith | cyeoh: completely independent of the volume | 00:53 |
jgriffith | cyeoh: http://logs.openstack.org/77/56577/9/check/check-tempest-devstack-vm-postgres-full/f5fe3ff/logs/screen-n-cpu.txt.gz#_2013-11-25_15_18_55_585 | 00:53 |
jgriffith | cyeoh: and the "slowness" is of course the timeout waiting for the fetch from glance to fail | 00:54 |
cyeoh | ah ok. (so those InstanceNotFound ones I think are caused by some negative tests and I think we have a fix for it, but I'll need to track it back to the api logs to double check) | 00:54 |
jgriffith | cyeoh: sure... those are fine | 00:54 |
jgriffith | cyeoh: the get image one though... that's another story | 00:54 |
mriedem | mtreinish: about when did parallel testing go live in the gate? | 00:55 |
mriedem | ~september? | 00:55 |
cyeoh | jgriffith: ah ok, I'll look into it more. | 00:57 |
jgriffith | cyeoh: wish that tempest output had the volume ID | 00:58 |
jgriffith | cyeoh: that would sure help | 00:58 |
cyeoh | we have so many errors/warnings in the logs still and I'm pretty sure at least a few of them are just spurious logging caused by negative tests (eg we're logging errors where we shouldn't be) | 00:58 |
jgriffith | but anyway.. the attach is the real fail I think | 00:59 |
jgriffith | cyeoh: yeah, but I've noticed the last week it's gotten WAYYY better | 00:59 |
jgriffith | cyeoh: I think most of the negative tests are captured now | 00:59 |
jgriffith | cyeoh: if not all of them | 00:59 |
jgriffith | cyeoh: and IIRC there's now a gate job that fails if traces are present no? | 00:59 |
cyeoh | jgriffith: yea we're trying to clean up the logs.... | 00:59 |
cyeoh | jgriffith: that got turned off for $REASONS | 01:00 |
jgriffith | :( | 01:00 |
jgriffith | or :) | 01:00 |
jgriffith | depending which side of the patch you're on | 01:00 |
cyeoh | heh :-) Hopefully can get it on again soon | 01:00 |
jgriffith | cyeoh: I'd agree | 01:02 |
jgriffith | hmm | 01:02 |
jgriffith | there's an awful lot going on in this one, I think I was incorrect about the bootable volume piece here | 01:04 |
jgriffith | without the uuid it's kinda hellish to trace though | 01:04 |
*** masayukig has quit IRC | 01:06 | |
jgriffith | cyeoh: oh dear.... | 01:06 |
jgriffith | cyeoh: this is prior to the cinder cleanup for negatives | 01:06 |
jgriffith | cyeoh: so the c-api is full of garbage | 01:06 |
cyeoh | ah :-( | 01:07 |
jgriffith | cyeoh: alright, well if we get a recent version of this please let me know. I'm happy to help | 01:08 |
jgriffith | cyeoh: I don't see much sense in trying to work off of something from November at this point though | 01:08 |
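Since the tempest output doesn't include the volume ID, tracing a single volume means grepping every service's log for its UUID by hand. A hypothetical helper for that could look like the sketch below; the service names and log lines here are invented, and a real run would read the (gzipped) `screen-*.txt` files from logs.openstack.org instead.

```python
# grep_uuid and the log contents below are invented for illustration only
def grep_uuid(uuid, named_logs):
    """named_logs maps service name -> iterable of log lines.
    Returns (service, line) pairs that mention the given UUID."""
    hits = []
    for service, lines in named_logs.items():
        for line in lines:
            if uuid in line:
                hits.append((service, line))
    return hits

logs = {
    'screen-c-api': ['2013-11-25 15:17:36 volume 1234-abcd created'],
    'screen-n-cpu': ['2013-11-25 15:18:55 attach of 1234-abcd failed',
                     '2013-11-25 15:18:56 unrelated message'],
}
for service, line in grep_uuid('1234-abcd', logs):
    print(service, '|', line)
```

Interleaving the hits by their timestamps would then give a rough cross-service timeline for one volume's lifecycle.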
*** alexpilotti_ has quit IRC | 01:08 | |
cyeoh | jgriffith: thanks - yea, agreed | 01:09 |
*** masayukig has joined #openstack-gate | 01:09 | |
fungi | eep, nova stable/havana needs a backport of the stevedore mock patch, looks like | 01:52 |
fungi | there's a stable nova change failing unit tests in the gate at this moment | 01:52 |
mriedem | fungi: crapola, link? | 02:08 |
mriedem | fungi: looks like sdague already has a backport: https://review.openstack.org/#/c/69515/ | 02:08 |
fungi | aha, i guess https://review.openstack.org/64521 just needs to be rebased onto that (along with every other nova stable/havana change which is open) | 02:09 |
mriedem | dansmith: you're a stable maintainer right? can you +2 sdague's stevedore backport ^ ? | 02:10 |
*** masayukig has quit IRC | 03:27 | |
*** mriedem has left #openstack-gate | 03:55 | |
*** mriedem has quit IRC | 03:55 | |
*** masayukig has joined #openstack-gate | 04:31 | |
*** david-lyle has joined #openstack-gate | 04:34 | |
*** gsamfira has quit IRC | 06:46 | |
*** coolsvap has joined #openstack-gate | 07:17 | |
*** ndipanov_gone is now known as ndipanov | 07:27 | |
*** flaper87|afk is now known as flaper87 | 07:44 | |
*** david-lyle has quit IRC | 08:27 | |
*** ken1ohmichi has quit IRC | 09:05 | |
*** SergeyLukjanov_ is now known as SergeyLukjanov | 09:08 | |
*** marun has quit IRC | 09:19 | |
*** coolsvap has quit IRC | 10:47 | |
*** coolsvap has joined #openstack-gate | 11:00 | |
*** SergeyLukjanov is now known as SergeyLukjanov_a | 11:19 | |
*** SergeyLukjanov_a is now known as SergeyLukjanov | 11:19 | |
*** coolsvap has quit IRC | 11:30 | |
chmouel | sdague: i think we may need to classify this one http://is.gd/lFruQX | 11:51 |
chmouel | sdague: ah it's actually referenced here https://github.com/openstack-infra/elastic-recheck/blob/master/queries/1254772.yaml | 11:52 |
chmouel | sdague: but it didn't seem to catch up on my review https://review.openstack.org/#/c/41450/ | 11:52 |
sdague | chmouel: sure, I'm trying to figure out the differences | 11:55 |
sdague | chmouel: so a bunch of those are different issues | 11:57 |
sdague | they aren't actually a volume setup failure, they are a network setup failure | 11:57 |
*** masayukig has quit IRC | 11:57 | |
chmouel | really? I guess i need to grab all the screen output as well to grep it | 11:59 |
sdague | yeh, I'm pretty sure the big spikes yesterday were dansmith's nova network series | 12:03 |
chmouel | that was just an hour or two ago but well it's perhaps still a WIP | 12:18 |
sdague | yeh | 12:33 |
*** markmcclain has joined #openstack-gate | 13:00 | |
*** dhellmann_ is now known as dhellmann | 13:16 | |
*** dhellmann is now known as dhellmann_ | 13:36 | |
*** dhellmann_ is now known as dhellmann | 13:37 | |
*** dims has quit IRC | 13:50 | |
*** dims has joined #openstack-gate | 13:52 | |
*** markmcclain has quit IRC | 13:57 | |
*** markmcclain has joined #openstack-gate | 13:57 | |
*** dims has quit IRC | 14:03 | |
*** mestery has quit IRC | 14:03 | |
*** dims has joined #openstack-gate | 14:04 | |
anteaya | mriedem, thanks for the link | 14:11 |
russellb | we keeping this channel? | 14:11 |
russellb | so many openstack channels ... | 14:11 |
russellb | surely -qa / -infra / -project channels suffice :) | 14:12 |
russellb | someone yell if i need to return | 14:12 |
*** russellb has left #openstack-gate | 14:12 | |
sdague | yeh, I'm ok abandoning this channel now | 14:14 |
*** sdague has left #openstack-gate | 14:15 | |
*** mestery has joined #openstack-gate | 14:15 | |
*** dansmith has left #openstack-gate | 14:28 | |
*** mestery_ has joined #openstack-gate | 14:40 | |
*** mestery__ has joined #openstack-gate | 14:42 | |
mtreinish | mriedem: it was right before h-3 which was late august I think | 14:42 |
*** meste____ has joined #openstack-gate | 14:43 | |
*** mestery has quit IRC | 14:43 | |
*** mestery_ has quit IRC | 14:46 | |
*** mestery__ has quit IRC | 14:46 | |
*** flaper87 is now known as flaper87|afk | 14:50 | |
*** portante has left #openstack-gate | 14:54 | |
*** RelayChatInfo has joined #openstack-gate | 14:56 | |
*** RelayChatInfo has left #openstack-gate | 14:56 | |
*** mestery has joined #openstack-gate | 14:57 | |
*** mestery_ has joined #openstack-gate | 14:58 | |
*** meste____ has quit IRC | 15:00 | |
*** mestery__ has joined #openstack-gate | 15:01 | |
*** mestery has quit IRC | 15:02 | |
*** mestery_ has quit IRC | 15:03 | |
*** mestery has joined #openstack-gate | 15:05 | |
*** mestery__ has quit IRC | 15:09 | |
*** coolsvap has joined #openstack-gate | 15:16 | |
*** sc68cal has left #openstack-gate | 15:18 | |
*** david-lyle has joined #openstack-gate | 15:41 | |
*** rossella_s has joined #openstack-gate | 15:55 | |
*** markmcclain has left #openstack-gate | 15:58 | |
*** SergeyLukjanov is now known as SergeyLukjanov_ | 16:09 | |
*** flaper87|afk is now known as flaper87 | 16:12 | |
*** marun has joined #openstack-gate | 17:03 | |
*** marun has quit IRC | 17:05 | |
*** dtroyer_zz has left #openstack-gate | 17:24 | |
*** flaper87 is now known as flaper87|afk | 17:28 | |
fungi | if it's decided that this channel no longer serves a real purpose, someone please submit a change to openstack-infra/config reverting 95c630f so that our meetbot doesn't hang out in here logging forever | 17:34 |
*** SergeyLukjanov_ is now known as SergeyLukjanov | 17:36 | |
*** therve has left #openstack-gate | 17:41 | |
*** rossella_s has quit IRC | 18:41 | |
*** jmeridth has quit IRC | 18:53 | |
*** alexpilotti has joined #openstack-gate | 18:54 | |
*** ndipanov has quit IRC | 19:09 | |
*** alexpilotti has quit IRC | 19:13 | |
*** alexpilotti has joined #openstack-gate | 19:16 | |
*** alexpilotti has quit IRC | 19:21 | |
ttx | agreed, this should be a transient channel | 19:25 |
*** ndipanov has joined #openstack-gate | 19:32 | |
*** mtreinish has left #openstack-gate | 19:33 | |
*** david-lyle has quit IRC | 19:42 | |
*** asadoughi has left #openstack-gate | 19:52 | |
*** salv-orlando has left #openstack-gate | 20:14 | |
chmouel | done: https://review.openstack.org/69714 | 20:29 |
*** ndipanov has quit IRC | 20:49 | |
*** ttx has left #openstack-gate | 20:57 | |
*** jog0 has left #openstack-gate | 21:27 | |
*** coolsvap has quit IRC | 21:35 | |
*** SergeyLukjanov is now known as SergeyLukjanov_ | 21:59 | |
fungi | chmouel: thanks! +2'd. just waiting for a few more people from here to +1 it so we're sure | 22:00 |
* fungi makes like a banana and leaves | 22:01 | |
*** fungi has left #openstack-gate | 22:01 | |
*** bnemec has left #openstack-gate | 22:20 | |
anteaya | https://review.openstack.org/#/c/69714/ dhellmann dims HenryG jgriffith mestery obondarev roaet SergeyLukjanov_ can you +1 this patch to remove logging from this channel? | 22:46 |
anteaya | we don't need it anymore, and can reinstate it again if we need to log this channel again | 22:47 |
dims | done. thx | 22:50 |
roaet | ditto | 22:50 |
dhellmann | anteaya: done | 22:51 |
anteaya | thank you | 22:52 |
*** ndipanov has joined #openstack-gate | 22:52 | |
*** ndipanov has quit IRC | 23:03 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!