*** CaptTofu has quit IRC | 00:00 | |
*** CaptTofu has joined #openstack-infra | 00:00 | |
openstackgerrit | Maru Newby proposed a change to openstack-infra/config: Run neutron functional job as tempest user https://review.openstack.org/81400 | 00:01 |
*** che-arne has joined #openstack-infra | 00:01 | |
marun | reviews welcome ^^ | 00:01 |
dstufft | my brain pattern matched d-g stuff with dstufft and I thought y'all were trying to get rid of me :[ | 00:02 |
marun | I'd like to see that functional job gating before too long so we can start migrating some of our 'unit tests' over. | 00:02 |
* clarkb looks | 00:02 | |
clarkb | dstufft: no we like you | 00:02 |
dstufft | :D | 00:02 |
clarkb | dstufft: please don't go | 00:02 |
*** alex-away is now known as cantgetnicknamew | 00:03 | |
dstufft | clarkb: if I go where will I complain about pbr! | 00:03 |
*** andreaf has joined #openstack-infra | 00:03 | |
clarkb | thats the spirit | 00:03 |
*** cantgetnicknamew is now known as alexalex | 00:04 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Add a net-info job builder macro https://review.openstack.org/81403 | 00:08 |
fungi | jeblair: clarkb: ^ (i simply added it to every job which had a link-logs macro) | 00:08 |
*** wchrisj has quit IRC | 00:08 | |
fungi | i'll draft a similar patch to rip that out of d-g | 00:09 |
fungi | (not rip the stuffing out of dstufft) | 00:09 |
jogo | fungi: looks like you're right about the override branch thing | 00:10 |
openstackgerrit | Joe Gordon proposed a change to openstack-dev/pbr: Make tools/integration.sh take a branch https://review.openstack.org/80723 | 00:11 |
fungi | jogo: yay! i'll take whatever small victories the universe flings my way. thanks! | 00:11 |
*** alexalex is now known as alexandra__ | 00:13 | |
*** hogepodge has joined #openstack-infra | 00:15 | |
*** mfer has joined #openstack-infra | 00:16 | |
clarkb | marun: reviewed. left a couple comments that are mostly for my understanding of stuff. +2'd | 00:16 |
*** mfer has quit IRC | 00:16 | |
jogo | fungi: also https://review.openstack.org/#/c/80687/ passed ! | 00:16 |
fungi | jogo: congrats! | 00:17 |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/devstack-gate: Stop printing network diagnostics https://review.openstack.org/81404 | 00:17 |
clarkb | jeblair: fungi https://review.openstack.org/#/c/81178/2 isn't that a noop now? iirc the /opt/git cache stuff is in use on git review hosts | 00:17 |
*** jergerber has quit IRC | 00:17 | |
* clarkb double checks | 00:18 | |
jeblair | clarkb: no, that's the list of projects that devstack-gate is going to manage | 00:18 |
clarkb | oh nevermind we use for PROJECT in PROJECTS to determine what to run setup_project on | 00:18 |
jeblair | yep | 00:18 |
fungi | yup | 00:18 |
jeblair | uh huh | 00:18 |
clarkb | ya, it's just that there is a comment in functions.sh saying we should stop doing that once the cache does its thing | 00:19 |
clarkb | had me confused | 00:19 |
clarkb | er maybe I am just misreading the comment | 00:19 |
clarkb | in any case I understand now | 00:19 |
jeblair | brrring | 00:19 |
jogo | jeblair: devstack-vm-gate-wrap.sh:export OVERRIDE_ZUUL_BRANCH=${OVERRIDE_ZUUL_BRANCH:-$ZUUL_BRA | 00:20 |
jogo | that's what I based the commit msg on | 00:20 |
*** matsuhashi has joined #openstack-infra | 00:20 | |
jeblair | jogo: oh interesting. there may be a problem then, because you have a default value for that, but it seems that d-g will default that to zuul branch | 00:21 |
jeblair | jogo: basically, your default value of master will never be used | 00:21 |
*** mwagner_lap has joined #openstack-infra | 00:21 | |
fungi | that's a good point which hadn't crossed my mind | 00:21 |
jeblair | jogo: (in cases where integration.sh runs from devstack-gate) | 00:21 |
fungi | however, for changes to the master branch of projects, ZUUL_BRANCH will be master, right? | 00:22 |
fungi | if so, i think it's moot (and the default is still potentially useful if someone runs this outside of a d-g context) | 00:23 |
jeblair | fungi: yes. i think i agree with all of that. | 00:23 |
*** toddmore_ has joined #openstack-infra | 00:24 | |
jogo | yeah that is why I added a default, so it works if run outside of a d-g context | 00:24 |
jogo | so the catch with all this is I haven't tested any of it ... | 00:24 |
*** SumitNaiksatam has quit IRC | 00:24 | |
fungi | the summary is that we want this script to use the branch of openstack/requirements which matches the branch of whatever change is being tested on a project, rather than always assuming master even when testing changes to not-master. i think the fix there should do what we want | 00:25 |
jeblair | fungi: moreover, for changes to stable branches, the default in d-g will set it to be the stable branch (not master, as one might expect from the pbr change) | 00:25 |
jeblair | fungi: at least, for the main job (rather than the stable cross-check jobs) | 00:25 |
jeblair | fungi: agreed | 00:25 |
fungi | perfect | 00:25 |
fungi | now let's just hope it works like we expect ;) | 00:26 |
*** alexandra__ is now known as alex-afk | 00:26 | |
fungi | the reason i suggested using OVERRIDE_ZUUL_BRANCH instead of just ZUUL_BRANCH there is so that we *can* override it from the job definition later if we need, on special occasions (birthdays, anniversaries, bank holidays) | 00:27 |
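For context, a minimal sketch of the ${VAR:-default} cascade being discussed above. The export line is reconstructed from the truncated quote at 00:20; the rest is an illustrative assumption, not verbatim devstack-gate or pbr code.

```bash
# devstack-gate fills in OVERRIDE_ZUUL_BRANCH from ZUUL_BRANCH unless the job
# definition already exported it (reconstructed from devstack-vm-gate-wrap.sh):
export OVERRIDE_ZUUL_BRANCH=${OVERRIDE_ZUUL_BRANCH:-$ZUUL_BRANCH}

# integration.sh can then default to master, but that default only takes effect
# when the script runs outside a d-g context, where OVERRIDE_ZUUL_BRANCH is
# unset (REQUIREMENTS_BRANCH is a hypothetical local name):
REQUIREMENTS_BRANCH=${OVERRIDE_ZUUL_BRANCH:-master}
echo "openstack/requirements branch: $REQUIREMENTS_BRANCH"
```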
*** CaptTofu has quit IRC | 00:27 | |
*** rhsu has joined #openstack-infra | 00:28 | |
*** peddamat has joined #openstack-infra | 00:31 | |
clarkb | is ip a thing on centos yet? | 00:31 |
* clarkb tests | 00:31 | |
*** alex-afk is now known as _alexandra_ | 00:32 | |
mattoliverau | ip as in the command ip? | 00:32 |
clarkb | ya | 00:33 |
clarkb | seems like centos is behind the times on stuff like that whenever I try to make changes to centos | 00:33 |
clarkb | eg grub 1 | 00:33 |
clarkb | my hpcloud centos6 box has ip | 00:34 |
clarkb | in /sbin, which I am going to assume is in the jenkins user's path | 00:34 |
clarkb | is that a bad assumption? | 00:34 |
clarkb | fungi: ^ | 00:34 |
mattoliverau | LOL yeah, but ip has been around for ages, people just got stuck using ifconfig :P So I am almost 100% sure it does. as in I used it a few years ago in another job and it was a redhat shop. | 00:35 |
clarkb | mattoliverau: but why grub 1 :P | 00:35 |
fungi | clarkb: i'll try a centos node on hpcloud and rax to be sure | 00:35 |
mattoliverau | clarkb: lol, good question, to be honest though, grub 1 was pretty awesome, and when grub 2 came out the slight changes in syntax for the commandline (when you got stuck in the bootloader) and the config file was annoying. | 00:38 |
*** Ryan_Lane has joined #openstack-infra | 00:38 | |
mattoliverau | s/was/were | 00:38 |
fungi | clarkb: yep, tested all four ip commands unqualified from the environment path of the jenkins user on centos6 nodes in hpcloud and rax. both worked | 00:38 |
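A rough way to reproduce the check described here; the node layout and the exact ip subcommands tested are assumptions, since they are not listed in the conversation.

```bash
# Confirm 'ip' resolves from the jenkins user's default PATH on the node:
sudo -i -u jenkins sh -c 'command -v ip && ip addr show && ip route show'

# Fall back to the fully qualified path if PATH turns out not to include /sbin:
/sbin/ip addr show
```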
clarkb | fungi: awesome | 00:38 |
mattoliverau | but now I sound like an old man.. in my day, etc etc :P | 00:39 |
clarkb | mattoliverau: config file is annoying but it helps protect you from screwing things up with a single file edit | 00:39 |
fungi | mattoliverau: i debugged korn shell on sco uphill both ways in the snow | 00:39 |
mattoliverau | fungi: lol, exactly :P | 00:39 |
* fungi shakes his cane at these young linux whipper snappers | 00:40 | |
*** rpodolyaka1 has joined #openstack-infra | 00:40 | |
fungi | clarkb: we so need coremudgeon tee shirts | 00:40 |
clarkb | fungi: change lgtm | 00:41 |
clarkb | fungi: YES! | 00:41 |
*** andreaf has quit IRC | 00:41 | |
*** thuc has quit IRC | 00:42 | |
*** thuc has joined #openstack-infra | 00:42 | |
mattoliverau | ip existed in centos5 and probably 4 as well. It's like there is also a "replacement" for netstat but trying to supersede netstat is blasphemy :P | 00:42 |
clarkb | yes I need my netstat | 00:43 |
clarkb | netstat -np, the only thing you ever need | 00:43 |
*** Ryan_Lane has quit IRC | 00:43 | |
fungi | mattoliverau: the ip command comes from iproute2. it's useful because it actually uses "new" (linux 2.6.x i think, but maybe it was 2.4 or 2.2) kernel interfaces to gather the data | 00:43 |
mattoliverau | yeah i agree, I love ip :) | 00:43 |
clarkb | how does one do netstat -np with ip? | 00:44 |
fungi | whereas ifconfig, route, arp and friends use (well probably not any longer) the deprecated interfaces | 00:44 |
*** rpodolyaka1 has quit IRC | 00:44 | |
mattoliverau | but around the same time ss came out, and is a replacement for netstat. It even takes most of the same switches.. but I'm a netstat man myself :) | 00:44 |
clarkb | fungi: honestly I think the biggest problem with ip is the man page | 00:44 |
mattoliverau | you use ss not ip | 00:44 |
clarkb | all of those replacements have good man pages | 00:44 |
clarkb | ip has a terrible man page | 00:45 |
fungi | clarkb: i will not disagree that $RANDOM_CLI_TOOL has incomplete manpages. it's all the rage with kids these days to not document things, after all | 00:46 |
jesusaurus | it takes some getting used to, but i kinda like that the ip manpage defines a grammar that defines the options | 00:46 |
openstackgerrit | Joe Gordon proposed a change to openstack-infra/elastic-recheck: Loosen the fingerprint for 1291926 https://review.openstack.org/81406 | 00:46 |
*** sarob has joined #openstack-infra | 00:47 | |
*** thuc has quit IRC | 00:47 | |
*** yamahata has quit IRC | 00:47 | |
mattoliverau | lol, yeah, ip has the worst.. it reminds me of http://xkcd.com/1343/ | 00:47 |
jesusaurus | clarkb: you dont use ip for netstat -np, you use lsof | 00:47 |
fungi | jesusaurus: i do concur that the standardization it brings to getting at a lot of that data is useful. i only use ifconfig on my openbsd boxes these days, and wish it had iproute2 | 00:47 |
fungi | jesusaurus: or ss, as mattoliverau noted | 00:47 |
* jesusaurus is unfamiliar with ss | 00:48 | |
clarkb | jesusaurus: it defines the grammar but none of the behavior | 00:48 |
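For reference, roughly equivalent invocations of the tools mentioned in this exchange (common flags only; output formats differ):

```bash
netstat -np        # classic net-tools: sockets plus owning PIDs
ss -np             # iproute2 replacement; accepts most of the same switches
sudo lsof -i -nP   # per-process listing of open network sockets
ip addr show       # interface data that ifconfig used to provide
ip route show      # routing table that route used to provide
```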
*** sarob_ has joined #openstack-infra | 00:48 | |
openstackgerrit | A change was merged to openstack-infra/config: Really stop supplying the zuul url to ggp https://review.openstack.org/81299 | 00:49 |
*** medieval1 has quit IRC | 00:49 | |
fungi | clarkb: any opinion on 80723 before i approve it and recheck a stable and a master requirements change? | 00:49 |
jesusaurus | clarkb: sure it does... scroll down... | 00:50 |
clarkb | jesusaurus: there are one-line blurbs that aren't even sentences for each thing | 00:51 |
clarkb | fungi: looking | 00:51 |
*** nosnos has joined #openstack-infra | 00:51 | |
*** wchrisj has joined #openstack-infra | 00:52 | |
*** sarob has quit IRC | 00:52 | |
clarkb | fungi: nothing aside from jeblair's comment. I will approve | 00:52 |
clarkb | fungi: done | 00:53 |
fungi | clarkb: and as we discussed after that comment, the commit message is arguably correct (d-g does actually ensure it's set to the value of zuul_branch if not provided in the calling environment) | 00:53 |
*** nosnos_ has joined #openstack-infra | 00:53 | |
clarkb | gotcha | 00:54 |
openstackgerrit | A change was merged to openstack-infra/elastic-recheck: Loosen the fingerprint for 1291926 https://review.openstack.org/81406 | 00:55 |
fungi | proof that the universe is actually shrinking rather than expanding: every couple years i discover that i no longer have small enough torx drivers for whatever new piece of technology i want to open up | 00:55 |
fungi | last year i needed a t5 for something. now i need a t4 | 00:56 |
*** mwagner__ has joined #openstack-infra | 00:56 | |
mattoliverau | lol | 00:56 |
clarkb | fungi: clearly you need one really big hammer | 00:56 |
*** _alexandra_ is now known as alex-theedge | 00:57 | |
fungi | i'm just going to get a precision set which also has a t3, so i'll be prepared for next year when the universe shrinks further | 00:57 |
*** nosnos has quit IRC | 00:57 | |
*** maxbit has quit IRC | 01:01 | |
openstackgerrit | Sean Dague proposed a change to openstack-infra/config: turn off voting on pypy jobs https://review.openstack.org/81409 | 01:02 |
sdague | ok, that's the only medium grumpy version of that, just blanket turn off voting | 01:02 |
clarkb | sdague: come on we can do full on grumpy :) | 01:03 |
* fungi is already full-on grumpy | 01:03 | |
fungi | it's not just a mood, it's a lifestyle ;) | 01:04 |
sdague | I feel like I've been beating up too many people of late, so decided only medium grumpy | 01:04 |
sdague | I can stack full grumpy on top of it :) | 01:04 |
*** Daisy_ has joined #openstack-infra | 01:04 | |
fungi | sdague: fwiw i try not to be too hard on the pypy folk, as i suspect they're innocent bystanders in this. i'm getting a stronger and stronger sense that this is something wrong in setuptools | 01:05 |
sdague | I agree | 01:05 |
sdague | however, making all the projects land a local fix is problematic | 01:06 |
fungi | however, something about pypy sure is exposing whatever this is in ways which other interpreters are not | 01:06 |
sdague | yeh | 01:06 |
sdague | and, honestly, I'm not sure I understand the justification for testing pypy anyway. Because it's not default python on any system I know. | 01:06 |
fungi | and yes, i don't want to have to land and revert a tox.ini patch in every project gating on pypy | 01:06 |
clarkb | sdague: because it makes things more performant | 01:07 |
clarkb | sdague: apparently swift + pypy is happy times | 01:07 |
sdague | clarkb: ok, do any other servers run on it ? | 01:07 |
fungi | some people want to run openstack things in pypy for fastgood | 01:07 |
*** jhesketh__ has quit IRC | 01:07 | |
clarkb | sdague: vishy got nova to run under it once | 01:07 |
*** jhesketh__ has joined #openstack-infra | 01:08 | |
sdague | I thought it was only half running under it | 01:08 |
sdague | because of eventlet | 01:08 |
fungi | clarkb: i think that's just one of those motivational posters | 01:08 |
sdague | heh | 01:08 |
clarkb | sdague: ya something like that | 01:08 |
clarkb | it was a feat of programming prowess | 01:08 |
dstufft | pypy is pretty fast but it's probably not going to be the default python anywhere since it's not compatible with c-exts | 01:08 |
dstufft | er all c-exts* | 01:08 |
*** malini is now known as malini_afk | 01:08 | |
sdague | dstufft: so that means not compatible with pyopenssl? | 01:09 |
dstufft | sdague: pyopenssl 0.14 it is | 01:09 |
dstufft | Not sure about before that | 01:09 |
sdague | ok | 01:09 |
dstufft | pyopenssl 0.14 switched to a cffi based binding to OpenSSL | 01:10 |
dstufft | (one of the benefits of cffi is that it works with PyPy) | 01:10 |
sdague | dstufft: cffi isn't on my good side | 01:10 |
dstufft | some c-exts work afaik, some segfault as I understand it | 01:10 |
dstufft | sdague: cffi has some serious faults :/ They are fixing them though afaik | 01:11 |
sdague | after losing a friday debugging the whole "2 cffi versions on one system causes explosions" thing | 01:11 |
sdague | dstufft: yeh, I saw the bug you were on for that | 01:11 |
dstufft | implicit compile and the fact that they are version locked to the exact cffi version are the two big ones | 01:11 |
sdague | dstufft: thanks for injecting sanity | 01:11 |
dstufft | :) | 01:12 |
*** maxbit has joined #openstack-infra | 01:12 | |
dstufft | cffi is real nice to work with as the person developing the thing | 01:12 |
dstufft | but the implicit compile makes it near impossible to actually tell if things are working sanely | 01:12 |
dstufft | I monkeypatch it disabled on my stuff | 01:12 |
clarkb | dstufft: the implicit compile gets monkey patched? how do you make the things that need C work? | 01:13 |
dstufft | clarkb: cffi can build stuff in setup.py just like a c-ext | 01:13 |
clarkb | oh gotcha, so rather than running when you execute the python thing you have it build and package into the egg? | 01:14 |
clarkb | or wheel or whatever | 01:14 |
dstufft | yea | 01:14 |
dstufft | that's supported always, but if you don't remove the implicit compile bugs in doing that are hard to discover | 01:15 |
dstufft | because when you use it locally it'll just implicit compile | 01:15 |
clarkb | right | 01:15 |
clarkb | and it may not be talking to the thing it expects at that point | 01:15 |
clarkb | whereas the build time linkage should continue to just work | 01:15 |
dstufft | yea | 01:15 |
dstufft | one bug i found was it was compiling with one extension, but looking for another on Python 3+ | 01:15 |
dstufft | took me 3 hours to find that and that's what finally made me kill implicit compile | 01:16 |
clarkb | :/ | 01:16 |
dstufft | because it wasn't till I killed that I actually figured out what it was | 01:16 |
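A hedged sketch of the workflow dstufft describes; the module name is hypothetical, and only the build-in-setup.py versus implicit-compile distinction comes from the conversation.

```bash
# Build the cffi extension ahead of time, like any other c-ext, so packaging
# mistakes surface at build time instead of being papered over at import time:
python setup.py build_ext --inplace

# With implicit compile disabled, a plain import should succeed without ever
# invoking a compiler ('mypkg._bindings' is a made-up module name):
python -c "import mypkg._bindings"
```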
sdague | dstufft: is there a reasonable fix that we could apply in the gate? | 01:16 |
sdague | because that was a bear to track down | 01:16 |
dstufft | sdague: to the PyPy setuptools issue? | 01:16 |
dstufft | or cffi? | 01:16 |
sdague | the cffi issue | 01:17 |
fungi | dstufft: to https://github.com/pypa/pip/issues/1632 yeah | 01:17 |
fungi | oh, are we still hitting cffi problems? | 01:17 |
sdague | fungi: no | 01:17 |
dstufft | the cffi issue I just spoke of is fixed afaik | 01:17 |
sdague | but I want to protect us from it | 01:17 |
fungi | ahh | 01:17 |
sdague | dstufft: is that version of cffi released? | 01:17 |
dstufft | ya | 01:17 |
dstufft | it was like 0.7 or 0.6 or so | 01:17 |
dstufft | it was months ago | 01:17 |
sdague | dstufft: we hit this 2 weeks ago | 01:17 |
fungi | sdague: that was setuptools 3.0 removing the feature class | 01:18 |
dstufft | yea | 01:18 |
fungi | which cffi still used | 01:18 |
dstufft | I'm real sorry about all that :( | 01:18 |
sdague | fungi: no, I mean the tempest wedge | 01:18 |
sdague | which was a different thing | 01:18 |
sdague | where we had 0.8.1 of cffi installed in site | 01:18 |
dstufft | I have it on my list of things to try and make a CI infra that will try to install a known set of popular and working packages on all pieces | 01:18 |
dstufft | but it's kind of a complicated thing to do in general :/ | 01:19 |
clarkb | ES continues to recover, slowly. I am going to stop staring at it for the day | 01:19 |
sdague | and 0.8.2 installed in the tempest venv | 01:19 |
* clarkb & | 01:19 | |
sdague | and explode | 01:19 |
fungi | sdague: oh, the system site packages venv and cffi thing | 01:19 |
sdague | yeh | 01:19 |
sdague | that one I'm sure will screw us again some time | 01:19 |
fungi | system site packages in venv == teh eeeeevil | 01:19 |
sdague | yeh, well, so it is | 01:19 |
fungi | fruits of teh debbil | 01:20 |
sdague | we do that all the time though | 01:20 |
sdague | so that's the one I want to protect us from | 01:20 |
*** khyati_ has quit IRC | 01:21 | |
fungi | i have a feeling we could encounter the same class of failure from other uses of system site packages where we're mixing different versions of the same libs getting accessed via different paths | 01:21 |
dstufft | yea | 01:22 |
dstufft | system site packages are fragile | 01:23 |
dstufft | virtualenv itself is a big pile of giant hacks | 01:23 |
dstufft | and system site packages add some extra hacks ontop of it | 01:23 |
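A sketch of the failure mode sdague describes, using the cffi versions mentioned at 01:18-01:19; the venv path is illustrative.

```bash
# One cffi release in the system site-packages...
sudo pip install 'cffi==0.8.1'

# ...and a different one inside a venv that can still see system packages:
virtualenv --system-site-packages /tmp/tempest-venv
/tmp/tempest-venv/bin/pip install 'cffi==0.8.2'

# Code in the venv can now pick up pieces of either install depending on path
# ordering, which is the mixed-version explosion discussed above.
```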
*** wchrisj has quit IRC | 01:25 | |
*** gtt116_ has quit IRC | 01:25 | |
*** bada has quit IRC | 01:25 | |
*** mgagne has quit IRC | 01:26 | |
*** bada has joined #openstack-infra | 01:26 | |
*** ryanpetrello has quit IRC | 01:31 | |
*** yaguang has joined #openstack-infra | 01:33 | |
*** amotoki has joined #openstack-infra | 01:38 | |
*** Daisy_ has quit IRC | 01:39 | |
*** Ryan_Lane has joined #openstack-infra | 01:39 | |
*** rpodolyaka1 has joined #openstack-infra | 01:40 | |
*** sarob_ has quit IRC | 01:43 | |
*** Ryan_Lane has quit IRC | 01:43 | |
*** rpodolyaka1 has quit IRC | 01:46 | |
openstackgerrit | Steve Baker proposed a change to openstack-infra/config: Open ports 8003, 8004 for heat API calls from compute https://review.openstack.org/81375 | 01:48 |
*** thuc has joined #openstack-infra | 01:52 | |
*** thuc_ has joined #openstack-infra | 01:57 | |
*** maxbit has quit IRC | 01:57 | |
*** dkehn has quit IRC | 01:58 | |
*** wenlock has quit IRC | 01:59 | |
fungi | jhesketh_: did you see my follow up to your comment on 78504 or did you have any other related concerns? | 01:59 |
*** thuc has quit IRC | 01:59 | |
*** wchrisj has joined #openstack-infra | 02:00 | |
jhesketh_ | fungi: oh yeah, so I agree with the comments there (that a) it should be cleaned up and b) that you're just fixing the bug at this point) | 02:00 |
jhesketh_ | so I don't really mind if it's fixed or not in this change but would like to see it cleaned up | 02:00 |
jhesketh_ | I'll update my review to reflect that | 02:00 |
fungi | jhesketh_: cool, agreement then ;) | 02:01 |
jhesketh_ | done | 02:01 |
*** dstanek has quit IRC | 02:02 | |
*** wchrisj has quit IRC | 02:04 | |
jhesketh__ | sdague: just wondering if you have a second to discuss 76796? | 02:04 |
*** miqui has quit IRC | 02:05 | |
*** dstanek has joined #openstack-infra | 02:06 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/zuul: zuul.gerrit.Gerrit.isMerged should not return None https://review.openstack.org/81418 | 02:07 |
fungi | jhesketh__: ^ | 02:07 |
*** dkehn has joined #openstack-infra | 02:07 | |
fungi | don't say i never play requests ;) | 02:07 |
*** dkliban has joined #openstack-infra | 02:08 | |
*** yamahata has joined #openstack-infra | 02:08 | |
jhesketh__ | fungi: nice stuff :-) | 02:08 |
jhesketh__ | THanks! | 02:08 |
*** zhiyan_ is now known as zhiyan | 02:09 | |
*** thuc has joined #openstack-infra | 02:11 | |
*** thuc_ has quit IRC | 02:12 | |
*** thuc_ has joined #openstack-infra | 02:12 | |
*** khyati_ has joined #openstack-infra | 02:13 | |
*** thuc_ has quit IRC | 02:15 | |
*** thuc has quit IRC | 02:16 | |
*** thuc has joined #openstack-infra | 02:16 | |
*** sweston has quit IRC | 02:17 | |
*** che-arne has quit IRC | 02:17 | |
*** Ryan_Lane has joined #openstack-infra | 02:18 | |
*** Ryan_Lane has quit IRC | 02:22 | |
*** thuc_ has joined #openstack-infra | 02:24 | |
*** CaptTofu has joined #openstack-infra | 02:25 | |
*** thuc has quit IRC | 02:27 | |
*** shakamunyi has joined #openstack-infra | 02:28 | |
openstackgerrit | A change was merged to openstack-infra/zuul: Submitted is _not_ necessarily merged in Gerrit https://review.openstack.org/78504 | 02:29 |
*** thuc_ has quit IRC | 02:29 | |
*** Daisy_ has joined #openstack-infra | 02:33 | |
*** chandan_kumar has joined #openstack-infra | 02:37 | |
*** coolsvap has quit IRC | 02:39 | |
*** markwash has quit IRC | 02:45 | |
*** amcrn has quit IRC | 02:49 | |
*** fandi has joined #openstack-infra | 02:58 | |
*** gyee has quit IRC | 02:59 | |
*** jepoy has joined #openstack-infra | 03:05 | |
*** jepoy_ has quit IRC | 03:08 | |
*** thuc has joined #openstack-infra | 03:12 | |
lifeless | should git.o.o be available on ipv6 ? | 03:12 |
clarkb | yes | 03:13 |
*** matsuhashi has quit IRC | 03:13 | |
lifeless | hmm, worked second time | 03:14 |
lifeless | pulling/updating diskimage-builder | 03:14 |
lifeless | error: Failed to connect to 2001:4800:7813:516:3bc3:d7f6:ff04:aacb: Network is unreachable (curl_result = 7, http_code = 0, sha1 = 46a19cec345eb1a032c1a88ff125fc432dc93d2e) | 03:14 |
lifeless | is what I saw | 03:14 |
clarkb | do you haev a global ipv6 address? | 03:14 |
clarkb | we have seen git do weird stuff on nodes that don't | 03:14 |
clarkb | or at least shouldn't (hpcloud VMs) | 03:14 |
*** ryanpetrello has joined #openstack-infra | 03:14 | |
lifeless | hmm I thought I did, but it's not currently configured | 03:15 |
clarkb | yup so I think in some cases git will get the AAAA record and do the wrong thing | 03:16 |
clarkb | but we haven't been able to pin it down to anything more specific than that | 03:17 |
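Some quick diagnostics for the situation lifeless hit; these are generic checks, not steps the conversation prescribes.

```bash
ip -6 addr show scope global     # is any global IPv6 address configured?
ip -6 route get 2001:4800:7813:516:3bc3:d7f6:ff04:aacb   # route to the AAAA from the error?
getent ahosts git.openstack.org  # which address families does resolution return?
```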
*** changbl has joined #openstack-infra | 03:17 | |
*** sweston has joined #openstack-infra | 03:17 | |
*** Ryan_Lane has joined #openstack-infra | 03:19 | |
*** matsuhashi has joined #openstack-infra | 03:20 | |
*** SumitNaiksatam has joined #openstack-infra | 03:22 | |
*** harlowja is now known as harlowja_away | 03:23 | |
*** Ryan_Lane has quit IRC | 03:23 | |
*** matsuhashi has quit IRC | 03:25 | |
*** ryanpetrello has quit IRC | 03:26 | |
*** ryanpetrello has joined #openstack-infra | 03:26 | |
*** thuc has quit IRC | 03:35 | |
*** thuc has joined #openstack-infra | 03:35 | |
*** weshay has quit IRC | 03:39 | |
*** toddmore_ has quit IRC | 03:39 | |
*** thuc has quit IRC | 03:40 | |
*** nosnos_ has quit IRC | 03:41 | |
*** chandan_kumar has quit IRC | 03:41 | |
*** rpodolyaka1 has joined #openstack-infra | 03:44 | |
*** Daisy__ has joined #openstack-infra | 03:46 | |
*** Daisy_ has quit IRC | 03:46 | |
*** pcrews has quit IRC | 03:47 | |
*** rpodolyaka1 has quit IRC | 03:48 | |
*** _Alexandra_ has joined #openstack-infra | 03:53 | |
*** vkozhukalov_ has joined #openstack-infra | 03:55 | |
*** sandywalsh_ has quit IRC | 03:55 | |
*** sabari has joined #openstack-infra | 03:57 | |
*** matrohon has quit IRC | 03:58 | |
*** matrohon has joined #openstack-infra | 03:59 | |
*** rhsu has quit IRC | 04:01 | |
*** CaptTofu has quit IRC | 04:02 | |
*** CaptTofu has joined #openstack-infra | 04:03 | |
*** thuc has joined #openstack-infra | 04:03 | |
*** jepoy_ has joined #openstack-infra | 04:04 | |
*** jepoy has quit IRC | 04:05 | |
*** CaptTofu has quit IRC | 04:07 | |
*** ryanpetrello has quit IRC | 04:15 | |
*** ryanpetrello has joined #openstack-infra | 04:17 | |
*** Ryan_Lane has joined #openstack-infra | 04:19 | |
*** ryanpetrello has quit IRC | 04:22 | |
*** matsuhashi has joined #openstack-infra | 04:23 | |
*** Ryan_Lane has quit IRC | 04:24 | |
*** mihgen has joined #openstack-infra | 04:29 | |
*** nosnos has joined #openstack-infra | 04:35 | |
*** thuc has quit IRC | 04:37 | |
*** thuc has joined #openstack-infra | 04:38 | |
*** jepoy has joined #openstack-infra | 04:42 | |
*** thuc has quit IRC | 04:42 | |
*** dkliban has quit IRC | 04:43 | |
*** rpodolyaka1 has joined #openstack-infra | 04:45 | |
*** jepoy_ has quit IRC | 04:45 | |
*** mrmartin has joined #openstack-infra | 04:47 | |
*** rpodolyaka1 has quit IRC | 04:49 | |
*** mrmartin has quit IRC | 04:50 | |
*** alff has joined #openstack-infra | 04:52 | |
*** amcrn has joined #openstack-infra | 04:53 | |
*** alff has quit IRC | 04:58 | |
*** _nadya_ has joined #openstack-infra | 04:58 | |
*** wchrisj has joined #openstack-infra | 05:00 | |
*** talluri has joined #openstack-infra | 05:01 | |
*** _nadya_ has quit IRC | 05:02 | |
*** Daisy__ has quit IRC | 05:09 | |
*** Daisy_ has joined #openstack-infra | 05:10 | |
*** jepoy_ has joined #openstack-infra | 05:13 | |
*** _Alexandra_ has quit IRC | 05:13 | |
*** _Alexandra_ has joined #openstack-infra | 05:13 | |
*** jepoy has quit IRC | 05:15 | |
*** nicedice has quit IRC | 05:16 | |
*** jepoy has joined #openstack-infra | 05:16 | |
*** jepoy_ has quit IRC | 05:19 | |
*** Ryan_Lane has joined #openstack-infra | 05:20 | |
*** ArxCruz_ has joined #openstack-infra | 05:20 | |
*** ArxCruz has quit IRC | 05:20 | |
*** jesusaur has joined #openstack-infra | 05:22 | |
*** Ryan_Lane has quit IRC | 05:25 | |
*** jesusaurus has quit IRC | 05:26 | |
*** Daisy_ has quit IRC | 05:30 | |
*** andreaf has joined #openstack-infra | 05:35 | |
*** ArxCruz_ has quit IRC | 05:37 | |
*** ArxCruz has joined #openstack-infra | 05:37 | |
*** jesusaur is now known as jesusaurus | 05:37 | |
*** ArxCruz has quit IRC | 05:37 | |
*** wchrisj has quit IRC | 05:37 | |
*** _nadya_ has joined #openstack-infra | 05:38 | |
*** alff has joined #openstack-infra | 05:40 | |
*** sabari has quit IRC | 05:43 | |
*** _Alexandra_ has quit IRC | 05:44 | |
*** Daisy_ has joined #openstack-infra | 05:44 | |
*** _nadya_ has quit IRC | 05:44 | |
*** rpodolyaka1 has joined #openstack-infra | 05:45 | |
*** andreaf has quit IRC | 05:48 | |
*** thuc has joined #openstack-infra | 05:48 | |
*** rpodolyaka1 has quit IRC | 05:50 | |
*** thuc has quit IRC | 05:53 | |
*** yjiang has joined #openstack-infra | 05:55 | |
*** mihgen has quit IRC | 05:56 | |
*** saju_m has joined #openstack-infra | 05:58 | |
*** matsuhashi has quit IRC | 05:59 | |
*** Ryan_Lane has joined #openstack-infra | 06:00 | |
*** matsuhashi has joined #openstack-infra | 06:01 | |
*** rpodolyaka1 has joined #openstack-infra | 06:01 | |
*** nati_ueno has joined #openstack-infra | 06:03 | |
*** CaptTofu has joined #openstack-infra | 06:03 | |
*** jayvee has joined #openstack-infra | 06:03 | |
jayvee | referred here from #openstack-dev | 06:05 |
jayvee | I'm re-using the module https://git.openstack.org/cgit/openstack-infra/config/tree/modules/asterisk in a project of my own, and I noticed the package asterisk-sounds-extra-en-g722 that it depends on doesn't seem to exist any more in the Digium repo | 06:05 |
jayvee | would anyone be opposed to making $sounds a parameter to the class, so one can include the class with only some sounds (or none), e.g. class { 'asterisk': sounds => [], }? | 06:05 |
nibalizer | jayvee: sure! | 06:05 |
*** Ryan_Lane has quit IRC | 06:05 | |
*** ryanpetrello has joined #openstack-infra | 06:05 | |
nibalizer | do you know how to send commits to openstack? | 06:05 |
jayvee | no, although I presume it's some form of "spend about 40 minutes wrangling with `git pull --rebase`", "git format-patch", "mail", and "pray it gets accepted" :-) | 06:06 |
kashyap | nibalizer, https://wiki.openstack.org/wiki/GerritWorkflow | 06:07 |
jayvee | I haven't actually finished hacking on the module yet, but if I am happy with that modification, I'll be sure to send a patch(es) | 06:07 |
*** CaptTofu has quit IRC | 06:08 | |
*** Daisy_ has quit IRC | 06:09 | |
*** ryanpetrello has quit IRC | 06:10 | |
nibalizer | jayvee: check out the gerrit workflow kashyap posted | 06:11 |
nibalizer | if you get stuck come find us | 06:11 |
nibalizer | no patches to email for us! :) | 06:11 |
*** alff has quit IRC | 06:12 | |
*** Daisy_ has joined #openstack-infra | 06:14 | |
nibalizer | how many tests do we run a day (on average?) | 06:16 |
clarkb | nibalizer uh 15k? maybe | 06:17 |
clarkb | the zuul status page has graphite stuff you can manipulate to get better answers | 06:17 |
clarkb | in particular the jobs launched graph can be made to give daily data | 06:17 |
*** alex-theedge is now known as _alexandra_ | 06:18 | |
*** ildikov_ has quit IRC | 06:18 | |
openstackgerrit | Andreas Jaeger proposed a change to openstack-infra/config: Fix horizon-upstream-translation-update https://review.openstack.org/81450 | 06:20 |
*** _alexandra_ has quit IRC | 06:23 | |
*** _Alexandra_ has joined #openstack-infra | 06:27 | |
*** mrda is now known as mrda_away | 06:28 | |
*** nati_uen_ has joined #openstack-infra | 06:28 | |
*** nati_ueno has quit IRC | 06:31 | |
*** jepoy_ has joined #openstack-infra | 06:33 | |
SergeyLukjanov | sdague, I've approved "turn off voting on pypy jobs" | 06:34 |
*** _Alexandra_ has quit IRC | 06:34 | |
*** sabari has joined #openstack-infra | 06:35 | |
*** jepoy has quit IRC | 06:35 | |
*** _alexandra_ has joined #openstack-infra | 06:37 | |
openstackgerrit | A change was merged to openstack-infra/config: turn off voting on pypy jobs https://review.openstack.org/81409 | 06:37 |
*** nati_uen_ has quit IRC | 06:38 | |
openstackgerrit | Joshua Hesketh proposed a change to openstack-infra/os-loganalyze: Add support to check swift for log files https://review.openstack.org/76796 | 06:39 |
*** Daisy_ has quit IRC | 06:42 | |
*** rlandy has joined #openstack-infra | 06:43 | |
jayvee | nibalizer: cheers | 06:47 |
*** matsuhashi has quit IRC | 06:47 | |
*** matsuhashi has joined #openstack-infra | 06:50 | |
nibalizer | jayvee: :) | 06:58 |
*** khyati_ has quit IRC | 07:02 | |
openstackgerrit | Joshua Hesketh proposed a change to openstack-infra/os-loganalyze: Add support to check swift for log files https://review.openstack.org/76796 | 07:06 |
*** mihgen has joined #openstack-infra | 07:08 | |
*** Daisy_ has joined #openstack-infra | 07:09 | |
*** flaper87|afk is now known as flaper87 | 07:14 | |
*** yolanda has joined #openstack-infra | 07:17 | |
*** rpodolyaka1 has quit IRC | 07:17 | |
*** ildikov_ has joined #openstack-infra | 07:18 | |
*** _nadya_ has joined #openstack-infra | 07:22 | |
*** bada has quit IRC | 07:25 | |
*** bada has joined #openstack-infra | 07:26 | |
*** jepoy has joined #openstack-infra | 07:29 | |
*** matsuhashi has quit IRC | 07:30 | |
*** matsuhashi has joined #openstack-infra | 07:31 | |
*** _alexandra_ is now known as alex-away | 07:31 | |
*** jepoy_ has quit IRC | 07:33 | |
*** Daisy_ has quit IRC | 07:38 | |
*** alff has joined #openstack-infra | 07:43 | |
*** morganfainberg is now known as morganfainberg_Z | 07:43 | |
openstackgerrit | Clint "SpamapS" Byrum proposed a change to openstack-infra/pypi-mirror: Fix run-mirror writing wrong project directory https://review.openstack.org/81469 | 07:49 |
*** e0ne has joined #openstack-infra | 07:52 | |
*** ryanpetrello has joined #openstack-infra | 07:54 | |
*** nati_ueno has joined #openstack-infra | 07:55 | |
*** ryanpetrello has quit IRC | 07:58 | |
*** CaptTofu has joined #openstack-infra | 08:04 | |
*** _nadya_ has quit IRC | 08:05 | |
*** shakamunyi has quit IRC | 08:07 | |
*** CaptTofu has quit IRC | 08:09 | |
*** e0ne has quit IRC | 08:11 | |
lifeless | SpamapS: have you tested that? It DTRT ? | 08:14 |
*** jgallard has joined #openstack-infra | 08:21 | |
*** jcoufal has joined #openstack-infra | 08:22 | |
*** rpodolyaka1 has joined #openstack-infra | 08:24 | |
*** uvirtbot has joined #openstack-infra | 08:25 | |
SpamapS | lifeless: yes I did test it | 08:25 |
SpamapS | lifeless: and my images built w/o going through apache :) | 08:25 |
*** mrmartin has joined #openstack-infra | 08:27 | |
*** rpodolyaka1 has quit IRC | 08:28 | |
lifeless | SpamapS: great :) | 08:31 |
*** shakamunyi has joined #openstack-infra | 08:33 | |
*** Daisy_ has joined #openstack-infra | 08:34 | |
*** chandankumar_ has quit IRC | 08:37 | |
*** shakamunyi has quit IRC | 08:38 | |
*** chandan_kumar has joined #openstack-infra | 08:39 | |
*** amcrn has quit IRC | 08:42 | |
*** Daisy_ has quit IRC | 08:46 | |
*** yamahata has quit IRC | 08:51 | |
*** jcoufal has quit IRC | 08:54 | |
*** andreaf has joined #openstack-infra | 08:54 | |
*** vkozhukalov_ has quit IRC | 08:55 | |
*** afazekas has joined #openstack-infra | 08:56 | |
*** hashar has joined #openstack-infra | 08:57 | |
openstackgerrit | Ricardo Carrillo Cruz proposed a change to openstack-infra/config: Initial openstackdroid commit for Stackforge https://review.openstack.org/81234 | 08:57 |
*** Daisy_ has joined #openstack-infra | 08:58 | |
*** ominakov has joined #openstack-infra | 08:58 | |
*** mihgen has quit IRC | 08:59 | |
*** ams0 has joined #openstack-infra | 09:01 | |
*** e0ne has joined #openstack-infra | 09:03 | |
*** markmcclain has joined #openstack-infra | 09:03 | |
*** mihgen has joined #openstack-infra | 09:04 | |
*** jlibosva has joined #openstack-infra | 09:04 | |
*** ildikov_ has quit IRC | 09:13 | |
*** ildikov_ has joined #openstack-infra | 09:13 | |
*** ominakov has quit IRC | 09:14 | |
*** johnthetubaguy has joined #openstack-infra | 09:16 | |
*** Daisy_ has quit IRC | 09:19 | |
*** saschpe_ has joined #openstack-infra | 09:19 | |
*** e0ne has quit IRC | 09:20 | |
*** dizquierdo has joined #openstack-infra | 09:20 | |
*** yaguang has quit IRC | 09:21 | |
*** e0ne has joined #openstack-infra | 09:22 | |
jamespage | dims, fungi: something to be aware of - the qemu-kvm packages in the icehouse cloud archive and for 14.04 no longer provide the 'kvm' package - which breaks devstack | 09:22 |
jamespage | the right package to install is actually 'qemu-kvm' (that works on precise as well) | 09:23 |
jamespage | dims, fungi: one of my team is working on a change for that right now | 09:23 |
*** jgallard has quit IRC | 09:23 | |
*** jgallard has joined #openstack-infra | 09:24 | |
*** rpodolyaka1 has joined #openstack-infra | 09:24 | |
*** jcoufal has joined #openstack-infra | 09:24 | |
*** jcoufal has quit IRC | 09:25 | |
*** amotoki has quit IRC | 09:26 | |
*** jcoufal has joined #openstack-infra | 09:26 | |
*** rpodolyaka1 has quit IRC | 09:28 | |
*** sileht has quit IRC | 09:33 | |
*** nati_ueno has quit IRC | 09:33 | |
*** nati_ueno has joined #openstack-infra | 09:34 | |
*** jp_at_hp has joined #openstack-infra | 09:38 | |
*** nati_ueno has quit IRC | 09:38 | |
*** ryanpetrello has joined #openstack-infra | 09:42 | |
*** akscram has quit IRC | 09:43 | |
*** ams0 has quit IRC | 09:45 | |
*** e0ne_ has joined #openstack-infra | 09:46 | |
*** ryanpetrello has quit IRC | 09:47 | |
*** nati_ueno has joined #openstack-infra | 09:47 | |
*** e0ne has quit IRC | 09:49 | |
*** sileht has joined #openstack-infra | 09:50 | |
*** mrmartin has quit IRC | 09:53 | |
*** yjiang has quit IRC | 10:01 | |
sdague | SergeyLukjanov: thanks | 10:04 |
*** CaptTofu has joined #openstack-infra | 10:05 | |
*** jcoufal has quit IRC | 10:07 | |
*** CaptTofu has quit IRC | 10:09 | |
*** e0ne_ has quit IRC | 10:10 | |
*** weiwei has quit IRC | 10:11 | |
zigo | vishy: FYI, when there's no ./run_tests.sh command, I just indeed run testr directly this way: | 10:13 |
zigo | set -e && \ | 10:13 |
zigo | TEMP_REZ=`mktemp -t` && \ | 10:13 |
zigo | python setup.py testr --slowest --testr-args='--subunit ' \ | 10:13 |
zigo | | tee $$TEMP_REZ | subunit2pyunit || true ; \ | 10:13 |
zigo | cat $$TEMP_REZ | subunit-filter -s --no-passthrough | subunit-stats ; \ | 10:13 |
zigo | rm -f $$TEMP_REZ ; | 10:13 |
zigo | lifeless: Can you confirm that the above is correct, and that there's no better way? | 10:13 |
zigo | lifeless: Also, yesterday I was looking for the --python-version option, what is it exactly? As an option to testr or... ? | 10:14 |
*** ociuhandu has quit IRC | 10:14 | |
zigo | I'm sure I saw it somewhere, but can't find the right syntax. | 10:14 |
*** e0ne has joined #openstack-infra | 10:17 | |
*** KurtMartin has quit IRC | 10:19 | |
lifeless | zigo: thats not running testr directly | 10:21 |
lifeless | zigo: 'testr' is running testr directly :) | 10:22 |
zigo | Oh ok! :) | 10:22 |
zigo | This was as opposed to using run_tests.sh, but never mind. | 10:22 |
zigo | lifeless: What's best btw? Using testr directly, or through python setup.py? | 10:22 |
lifeless | directly | 10:23 |
*** johnthetubaguy1 has joined #openstack-infra | 10:23 | |
zigo | ok | 10:23 |
lifeless | the python setup.py shim really just gets in the way, it's got its uses but ... | 10:23 |
zigo | lifeless: What's the way if I want to do python3 tests? | 10:24 |
*** weiwei has joined #openstack-infra | 10:24 | |
lifeless | zigo: 'testr'; make sure the .testr.conf will run the python you want to use for the backends; the python version of the frontend is only loosely coupled (by the subunit wire protocol) | 10:25 |
*** rpodolyaka1 has joined #openstack-infra | 10:25 | |
zigo | lifeless: Oh ok. So then, basically, I shouldn't care about this in my packaging, and upstream will write the correct thing in .testr.conf, right? | 10:25 |
lifeless | all the openstack .testr.conf should have ${PYTHON:-python} in them | 10:26 |
lifeless | so yeah | 10:26 |
lifeless | just export PYTHON=python3 and run testr | 10:26 |
*** johnthetubaguy has quit IRC | 10:26 | |
zigo | Ah, ok, easy enough. | 10:26 |
zigo | Cheers! | 10:26 |
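A minimal sketch of the pattern lifeless describes; the ${PYTHON:-python} test_command is the OpenStack .testr.conf convention he mentions, the rest is illustrative.

```bash
# OpenStack .testr.conf files invoke the test backend via ${PYTHON:-python},
# so the interpreter can be swapped from the environment:
grep -n 'PYTHON' .testr.conf

# Run the suite under Python 3 by overriding that default:
PYTHON=python3 testr run --parallel
```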
zigo | BTW, testr & subunit are GREAT. | 10:27 |
zigo | Thanks for them. | 10:27 |
lifeless | you're most welcome | 10:27 |
zigo | The only thing is that it'd be nice to have man pages. :) | 10:27 |
*** rpodolyaka1 has quit IRC | 10:29 | |
*** johnthetubaguy1 is now known as johnthetubaguy | 10:30 | |
*** vkozhukalov_ has joined #openstack-infra | 10:33 | |
ekarlso | is there support for trusty yet in devstack ? | 10:38 |
*** jhesketh_ has quit IRC | 10:40 | |
*** jhesketh__ has quit IRC | 10:41 | |
sdague | ekarlso: no, patches welcomed | 10:53 |
*** derekh has joined #openstack-infra | 10:54 | |
pleia2 | good morning | 10:56 |
*** bada_ has joined #openstack-infra | 10:58 | |
*** bada has quit IRC | 11:00 | |
anteaya | pleia2: early for you | 11:03 |
pleia2 | anteaya: I'm in Maine this week, and doing long lunch with my mother around noon eastern | 11:04 |
anteaya | nice | 11:04 |
anteaya | still early for maine even | 11:04 |
pleia2 | I'm staying with a 2 year old, turns out I can't sleep through baby-cry | 11:05 |
pleia2 | fortunately he sleeps pretty well overall :) | 11:05 |
*** nati_ueno has quit IRC | 11:06 | |
anteaya | ah baby-cry, things click into place | 11:07 |
anteaya | you must welcome the quietness of irc then | 11:07 |
anteaya | and it is rather quiet atm | 11:07 |
anteaya | and I'm glad overall he is a happy chap | 11:08 |
* pleia2 nods | 11:09 | |
*** eharney has joined #openstack-infra | 11:11 | |
*** jcoufal has joined #openstack-infra | 11:13 | |
dims | jamespage, ah coo. thanks for the heads up | 11:13 |
jamespage | dims, np | 11:13 |
jamespage | dims, digging a bit it also looks like we could make the use of openvswitch dkms optional - 3.13 kernel in 14.04 has everything we should need | 11:15 |
* jamespage goes to have a hack on that | 11:15 | |
*** flaper87 is now known as flaper87|afk | 11:17 | |
*** jooools has joined #openstack-infra | 11:18 | |
*** jgallard has quit IRC | 11:18 | |
dims | jamespage, ack. any ETA on the 0.9.8+ fix landing in an official repo? sdague indicates this is breaking us badly (see last 2 comments in https://review.openstack.org/#/c/79816/) | 11:20 |
jamespage | dims, it will be recorded on the bug when it enters proposed | 11:20 |
*** e0ne has quit IRC | 11:21 | |
jamespage | which I now can't find | 11:21 |
dims | jamespage, cool. thanks (don't want to miss it!) | 11:22 |
dims | jamespage, https://bugs.launchpad.net/nova/+bug/1254872 | 11:22 |
uvirtbot | Launchpad bug 1254872 in libvirt "libvirtError: Timed out during operation: cannot acquire state change lock" [High,Fix released] | 11:22 |
jamespage | dims, I think the sru team agreed to accept without a full test case | 11:22 |
*** matsuhashi has quit IRC | 11:22 | |
jamespage | it's prob just waiting on review and acceptance | 11:23 |
* jamespage looks | 11:23 | |
dstufft | https://github.com/pypa/pip/issues/1632#issuecomment-38027275 < PyPy + Setuptools | 11:24 |
*** matsuhashi has joined #openstack-infra | 11:24 | |
dstufft | Marcus is a hero | 11:24 |
dstufft | fungi: clarkb sdague etc | 11:24 |
*** rpodolyaka1 has joined #openstack-infra | 11:25 | |
sdague | jamespage: well this is our #2 bug at the moment so some urgency here would be appreciated. | 11:28 |
jamespage | sdague, I've pinged an SRU team member | 11:28 |
sdague | jamespage: thanks | 11:28 |
sdague | dstufft: nice | 11:29 |
*** rpodolyaka1 has quit IRC | 11:29 | |
*** fandi has quit IRC | 11:30 | |
*** ryanpetrello has joined #openstack-infra | 11:30 | |
*** ryanpetrello has quit IRC | 11:35 | |
*** markmcclain has quit IRC | 11:36 | |
*** matsuhashi has quit IRC | 11:37 | |
*** akscram has joined #openstack-infra | 11:39 | |
*** apevec has joined #openstack-infra | 11:42 | |
*** CaptTofu has joined #openstack-infra | 11:50 | |
*** e0ne has joined #openstack-infra | 11:52 | |
openstackgerrit | Nikita Konovalov proposed a change to openstack-infra/storyboard: Added db_api for comments https://review.openstack.org/81232 | 11:53 |
*** e0ne has quit IRC | 11:56 | |
*** rfolco has joined #openstack-infra | 11:57 | |
openstackgerrit | Sean Dague proposed a change to openstack-infra/os-loganalyze: wrap log lines https://review.openstack.org/80843 | 11:58 |
*** Daisy_ has joined #openstack-infra | 12:01 | |
*** talluri has quit IRC | 12:02 | |
*** talluri has joined #openstack-infra | 12:02 | |
*** hashar has quit IRC | 12:04 | |
*** sandywalsh has joined #openstack-infra | 12:05 | |
sdague | fungi: once this passes tests - https://review.openstack.org/#/c/81503/ it should get promoted in the gate | 12:07 |
*** e0ne has joined #openstack-infra | 12:07 | |
sdague | that's bug #3 in gate resets | 12:07 |
uvirtbot | Launchpad bug 3 in launchpad "Custom information for each translation team" [Low,Fix released] https://launchpad.net/bugs/3 | 12:07 |
*** flaper87|afk is now known as flaper87 | 12:08 | |
*** yassine has joined #openstack-infra | 12:08 | |
*** e0ne_ has joined #openstack-infra | 12:10 | |
*** e0ne has quit IRC | 12:10 | |
*** dprince has joined #openstack-infra | 12:11 | |
*** aysyd has joined #openstack-infra | 12:11 | |
*** pdmars has joined #openstack-infra | 12:12 | |
*** hashar has joined #openstack-infra | 12:14 | |
*** jcoufal has quit IRC | 12:19 | |
ttx | something wrong is going on at the gate, but i guess most people around here know that | 12:21 |
ttx | we definitely need to make more use of openstackstatus | 12:21 |
*** salv-orlando has quit IRC | 12:21 | |
anteaya | ttx what are you seeing in the gate? | 12:22 |
ttx | would avoid a lot of people pinging folks and asking wtf | 12:22 |
ttx | 15 hours top-of-gate queuing at 11:00 UTC, not a good sign | 12:22 |
SergeyLukjanov | ttx, yup, that's a good idea | 12:22 |
anteaya | I see three failing patches, a swift, a nova, a ceilometer | 12:22 |
ttx | anteaya: ^ | 12:22 |
*** chandan_kumar has quit IRC | 12:23 | |
*** lcostantino has joined #openstack-infra | 12:24 | |
anteaya | zuul memory usage looks concerning to me: http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=23&page=2 | 12:25 |
*** Daisy_ has quit IRC | 12:25 | |
anteaya | zuul cache looks high and growing | 12:25 |
anteaya | it hasn't gotten into swap though | 12:25 |
ttx | yeah, don't think we can blame load and usual gate bugs | 12:26 |
*** rpodolyaka1 has joined #openstack-infra | 12:26 | |
ttx | something weird happened at 6 UTC | 12:26 |
ttx | top of gate job been running for 2h30 | 12:27 |
anteaya | ttx so far I am not seeing the weird thing | 12:27 |
anteaya | ttx you witnessed it at 6 UTC? | 12:27 |
ttx | no, just looking at the graphs | 12:28 |
anteaya | I wonder if the weird thing made it into the logs | 12:28 |
* anteaya goes back to the graphs | 12:28 | |
ttx | anteaya: that gate-tempest-dsvm-postgres-full job on the swift change at top of gate is definitely wedged | 12:28 |
*** e0ne has joined #openstack-infra | 12:28 | |
ttx | been running for 2hours 32 min now | 12:29 |
ttx | but then it still logs | 12:29 |
ttx | https://jenkins04.openstack.org/job/gate-tempest-dsvm-postgres-full/5496/console | 12:29 |
*** ociuhandu has joined #openstack-infra | 12:30 | |
anteaya | if memory serves, the timeout is 3 hours | 12:30 |
anteaya | so 28 minutes remaining to finish | 12:30 |
*** rpodolyaka1 has quit IRC | 12:30 | |
ttx | Looks like usual ETA is ~32min | 12:30 |
anteaya | the question I would have is why is it taking over 2 hours to run | 12:30 |
anteaya | and yes, in the zuul job queue graph there was a steep jump up which looks like it took place around 6 UTC | 12:31 |
*** ildikov_ has quit IRC | 12:31 | |
ttx | Not the first time I see that bump. Isn't it when we renew the images? If yes, is it a good idea to do that while most infra-core are going to sleep? | 12:32 |
anteaya | there was a huge jump up in the check queue about that time | 12:32 |
*** e0ne_ has quit IRC | 12:32 | |
anteaya | I don't know what time the image cron job runs, but I do believe it occurs during night time north america time yes | 12:33 |
ttx | fungi: I fear your breakfast is served :) | 12:33 |
anteaya | I think the reason for that was that north america sun up time is the time of greatest activity on the system | 12:33 |
anteaya | a hearty morning meal of logs | 12:33 |
anteaya | his favourite dish | 12:34 |
*** _nadya_ has joined #openstack-infra | 12:34 | |
anteaya | I am also seeing the check tests taking over 2 hours to run | 12:34 |
ttx | couldn't spot a weirdness in that job log. It's just painfully slow. No big timeouts/jumps in time | 12:35 |
anteaya | we increased timeouts on small things here and there to deal with job failures due to timeouts | 12:35 |
anteaya | perhaps we need to reassess some of those timeout increases globally | 12:36 |
ttx | maybe the image is borked and randomly makes tests go way too long | 12:36 |
anteaya | since I do remember a time when tests took about 40 minutes to run and now we are over 2 hours | 12:36 |
*** shakamunyi has joined #openstack-infra | 12:36 | |
ttx | anteaya: same tests took 32 min in the nova change that goes in position #3 | 12:36 |
anteaya | ttx possibly, but borked images have in the past hit our git farm hard, and nothing is hitting git.o.o hard atm: http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=2 | 12:37 |
ttx | hmm that was stable/havana | 12:37 |
* fungi bellies up to the table to see what's for breakfast | 12:38 | |
anteaya | ha ha ha | 12:38 |
ttx | gate-tempest-dsvm-postgres-full took 1h12 on devstack-precise-hpcloud-az2-2852200 | 12:38 |
ttx | fungi: symptom: 15-hour pileup | 12:39 |
ttx | fungi: symptom: top of gate blocking job ran for 2h40 | 12:39 |
fungi | i see plenty of random devstack job failures in the gate currently | 12:39 |
*** dkliban has joined #openstack-infra | 12:39 | |
ttx | still running | 12:39 |
ttx | + more than average resets | 12:39 |
ttx | + weird graphs trends starting around 6:00 UTC | 12:40 |
*** weshay has joined #openstack-infra | 12:40 | |
ttx | fungi: suggestion: do not run remarkable events at 0600 UTC | 12:40 |
fungi | oh yeah, whatever the issue on that postgres job is, doesn't seem to be failing, just running nice and slow (approaching 3 hours now) | 12:41 |
fungi | Branch: feature/ec | 12:41 |
ttx | because if fail, won't be fixed before 12:00 UTC | 12:41 |
*** shakamunyi has quit IRC | 12:41 | |
ttx | fungi: i'm pretty sure the swift branch is not the issue explaining slowness :) | 12:41 |
* ttx should definitely go to lunch now | 12:42 | |
fungi | just wondering if ec is making it run a lot more slowly as an image backend for glance in tempest tests or something, but if so i'd expect the other jobs on that change to have run similarly long | 12:43 |
anteaya | seeing ERROR: gate_hook failed in 3 of the 4 gate failing jobs | 12:43 |
ttx | https://jenkins06.openstack.org/job/gate-tempest-dsvm-postgres-full/2264/ ran for 1h12, sounds long too | 12:43 |
fungi | anteaya: yep, "gate_hook" is the tests | 12:43 |
fungi | anteaya: you'll have to look in the devstack log for details on what tests fail | 12:44 |
openstackgerrit | A change was merged to openstack-infra/elastic-recheck: Don't separate bug links with ',' https://review.openstack.org/81312 | 12:44 |
fungi | we don't echo them into the console any longer, because it was getting way too huge | 12:44 |
ttx | but yeah, most took ~50min | 12:44 |
fungi | agreed, so this slow job must be node-specific not change specific | 12:45 |
ttx | fungi: anyway, that slow job won't account for the &5-hour pileup all by itself | 12:45 |
ttx | 15* | 12:45 |
openstackgerrit | Sean Dague proposed a change to openstack-infra/elastic-recheck: remove er query for 1248757 https://review.openstack.org/81518 | 12:45 |
fungi | it was piling up since yesterday. sdague attributed it primarily to increasing nondeterministic failures in jobs | 12:45 |
ttx | fungi: ok. and the weird trends lines at 6am are just regular activity from daily image refresh ? | 12:46 |
ttx | could be | 12:47 |
ttx | if you do that while the gate is very busy | 12:47 |
fungi | ttx: that spike is actually when the auto-abandon runs. there's a bug in the auto-abandon script right now where it leaves a comment separate from the abandon message, and zuul sees that as a reason to re-run freshness checks on hundreds of changes at once | 12:48 |
ttx | fungi: ok | 12:48 |
fungi | so it's a surge in check jobs, but generally has been clearing fairly quickly (i guess replaced by actual activity this time) | 12:48 |
ttx | so maybe there is no one big issue, more a pileup of small ones reaching critical mass | 12:48 |
fungi | 81503,1 is looking good in check so far, so i'll promote that here in the next 30 minutes or so hopefully | 12:49 |
ttx | fungi: that swift change will timeout soon, expect one reset to come up soon | 12:49 |
anteaya | funny I can't find a match for "gate_hook" in tempest | 12:50 |
fungi | ttx: it's actually wrapping up | 12:50 |
*** dims has quit IRC | 12:50 | |
ttx | ERROR: gate_hook failed | 12:50 |
fungi | anteaya: gate_hook is a function in the functions file of devstack-gate | 12:50 |
anteaya | ah | 12:50 |
sdague | ttx: that's normal run length on those jobs now | 12:50 |
anteaya | thanks | 12:50 |
anteaya | sdague: how did we get from about 40 minutes to over 2 hours? | 12:51 |
sdague | 1h12 minutes is a normal time | 12:51 |
ttx | sdague: OK, still doesn't explain the 2h58 run | 12:51 |
fungi | ahh, yep, that swift change with the 3-hour postgres job did hit a degradation error of some sort i think... | 12:51 |
fungi | Details: Server 464028c7-69eb-45cc-b065-e68a00e49e6e failed to reach ACTIVE status and task state "None" within the required time (196 s). Current status: BUILD. Current task state: None. | 12:51 |
fungi | that's the libvirt issue, right? | 12:52 |
sdague | fungi: maybe | 12:52 |
ttx | so that will reset anytime now | 12:52 |
fungi | yep | 12:52 |
mordred | morning all | 12:52 |
sdague | ttx: a run to 2h58 is probably a fail in a way that goes very far south | 12:53 |
sdague | fungi: we could probably pull back in the timeouts, I have to get to a car appointment though | 12:53 |
ttx | sdague: couldn't spot any big jump in time in that one, it's just slower | 12:53 |
fungi | sdague: hope your car feels better soon | 12:53 |
ttx | ok, so the pileup is ~16h30 now | 12:54 |
fungi | morning, mordred. are you in a western-hemisphere timezone again, or are you just saying good morning for the sake of people for whom it actually is morning? | 12:54 |
mordred | fungi: I'll let you decide | 12:54 |
mordred | either could be and both probably are true | 12:54 |
mordred | unless they aren't | 12:54 |
* mordred is sleep deprived and silly | 12:54 | |
fungi | mordred: i'm going to wager the last bar just closed and you had nothing better to do ;) | 12:55 |
pleia2 | hah | 12:55 |
*** adalbas has joined #openstack-infra | 12:55 | |
mordred | fungi: why would a bar close? | 12:55 |
openstackgerrit | A change was merged to openstack-infra/elastic-recheck: remove er query for 1248757 https://review.openstack.org/81518 | 12:55 |
*** chandan_kumar has joined #openstack-infra | 12:56 | |
fungi | mordred: it could be situated in a country which doesn't truly understand bars | 12:56 |
mordred | weird | 12:57 |
*** thuc has joined #openstack-infra | 12:57 | |
hashar | most of France cities enforce bar closing at 1am / 2am | 12:58 |
*** Daisy_ has joined #openstack-infra | 12:58 | |
*** thuc_ has joined #openstack-infra | 12:58 | |
fungi | hashar: same for most of the usa, with the french-most city being its primary exception, strangely enough | 12:59 |
*** dkranz has joined #openstack-infra | 12:59 | |
*** jepoy_ has joined #openstack-infra | 12:59 | |
hashar | :-] | 12:59 |
*** mbacchi has joined #openstack-infra | 13:00 | |
fungi | actually, i think new orleans does require bars to close for a minimum of 15 minutes every day, though the patrons aren't required to leave. some weird city ordinance | 13:00 |
fungi | or at least it used to be that way, but it's been a while since i visited so maybe they've fixed that bug now | 13:00 |
hashar | Ryan Lane could confirm, he used to live here | 13:01 |
hashar | there | 13:01 |
hashar | and coffeeee time | 13:01 |
*** thuc has quit IRC | 13:01 | |
*** adalbas has quit IRC | 13:02 | |
*** jepoy has quit IRC | 13:02 | |
*** dcramer_ has joined #openstack-infra | 13:02 | |
*** ildikov_ has joined #openstack-infra | 13:03 | |
ociuhandu | mordred: hi, can you please have a look at the updates on the blocking pbr bug for Windows? it's at https://review.openstack.org/#/c/81322/ Thank you | 13:03 |
*** ryanpetrello has joined #openstack-infra | 13:05 | |
*** roeyc has joined #openstack-infra | 13:05 | |
openstackgerrit | A change was merged to openstack-infra/reviewstats: Add Oleg Bondarev to Neutron core https://review.openstack.org/75764 | 13:08 |
*** salv-orlando has joined #openstack-infra | 13:11 | |
*** ildikov_ has quit IRC | 13:11 | |
*** hashar has quit IRC | 13:12 | |
*** julim has joined #openstack-infra | 13:12 | |
*** _nadya_ has quit IRC | 13:13 | |
*** adalbas has joined #openstack-infra | 13:14 | |
*** hashar has joined #openstack-infra | 13:14 | |
*** jgallard has joined #openstack-infra | 13:15 | |
mordred | ociuhandu: looking | 13:15 |
mordred | ociuhandu: +2 - maybe fungi will +2 you too | 13:17 |
*** Daisy_ has quit IRC | 13:17 | |
*** krotscheck has quit IRC | 13:18 | |
fungi | mordred: ociuhandu: that is indeed a very interesting portability issue. i'll have to keep that in mind for the future | 13:19 |
mordred | fungi: is it too strange of me to think that the subprocess call should handle it for us? | 13:20 |
fungi | mordred: maybe it does in later python interpreters? | 13:20 |
*** esker has joined #openstack-infra | 13:21 | |
*** mfer has joined #openstack-infra | 13:22 | |
*** yassine has quit IRC | 13:23 | |
fungi | mordred: the only thing i find in the 2.7 module reference about subprocess environment on windows is the statement "If specified, env must provide any variables required for the program to execute. On Windows, in order to run a side-by-side assembly the specified env must include a valid SystemRoot." | 13:23 |
*** nosnos has quit IRC | 13:24 | |
*** mwagner_lap is now known as mwagner_dontUseM | 13:25 | |
mordred | fungi: that doesn't seem as helpful as other things | 13:26 |
*** rpodolyaka1 has joined #openstack-infra | 13:26 | |
*** mriedem has joined #openstack-infra | 13:27 | |
*** ildikov_ has joined #openstack-infra | 13:29 | |
*** mwagner__ has quit IRC | 13:30 | |
fungi | ociuhandu: did you have a link to any python documentation describing the portability concerns with casting environment contents to b'' on windows? | 13:30 |
fungi | i mainly just want to make sure we continue doing the right thing in clients and libs which might get run there | 13:31 |
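For context, a minimal sketch of the portability point being discussed, assuming nothing about the actual pbr patch: on Windows an environment passed to subprocess should keep native str keys and values rather than b'' bytes, and should preserve SystemRoot as the 2.7 docs quoted above require. The helper name and the example command are made up for illustration.

```python
# Minimal sketch of the portability point discussed above; this is NOT the
# actual pbr change. On Windows, an env dict passed to subprocess should use
# native str keys/values (not b'' bytes) and should keep SystemRoot, which
# the Python 2.7 docs call out as required for side-by-side assemblies.
import os
import subprocess
import sys


def run_with_env(cmd, extra_env=None):
    # Start from a copy of the current environment so required variables
    # such as SystemRoot survive on Windows.
    env = dict(os.environ)
    for key, value in (extra_env or {}).items():
        # Keep everything as native str; casting to bytes breaks on Windows.
        env[str(key)] = str(value)
    return subprocess.check_output(cmd, env=env)


if __name__ == '__main__':
    out = run_with_env(
        [sys.executable, '-c',
         'import os; print(os.environ.get("SystemRoot"))'])
    print(out)
```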
*** rpodolyaka1 has quit IRC | 13:31 | |
fungi | Alex_Gaynor: when you wake up, it looks like the ball is squarely back in your court on https://github.com/pypa/pip/issues/1632 | 13:31 |
*** alff has quit IRC | 13:32 | |
fungi | Alex_Gaynor: "...pypy is importing a Python 3 feature into Python 2 and without the accompanying API to manage or detect the feature..." | 13:32 |
*** bknudson has joined #openstack-infra | 13:32 | |
fungi | sdague: mtreinish: did one of you want to approve 81503 so i can shove it to the front of the gate? check tests look good on it | 13:37 |
*** talluri has quit IRC | 13:38 | |
*** yassine has joined #openstack-infra | 13:39 | |
openstackgerrit | Nikita Konovalov proposed a change to openstack-infra/storyboard: [WIP] Comments controller https://review.openstack.org/81505 | 13:41 |
*** thuc has joined #openstack-infra | 13:41 | |
*** dcramer_ has quit IRC | 13:43 | |
*** thuc_ has quit IRC | 13:44 | |
*** thuc has quit IRC | 13:45 | |
sdague | fungi: just did | 13:46 |
*** alexpilotti has joined #openstack-infra | 13:47 | |
ociuhandu | mordred, fungi: sorry, was out for a few minutes. All the documentation we could find on this is the official python one, so we relied on extensive testing for validation (both on 2.7 and 3.X) | 13:48 |
*** david-lyle has quit IRC | 13:48 | |
*** saschpe__ has joined #openstack-infra | 13:49 | |
*** santibaldassin has joined #openstack-infra | 13:49 | |
*** jayvee has left #openstack-infra | 13:49 | |
fungi | ociuhandu: it might be worthwhile to also submit a patch to the official python docs adding a warning about that potential pitfall | 13:50 |
*** nati_ueno has joined #openstack-infra | 13:50 | |
sdague | fungi: something else occurred to me that might optimize our system this morning. What if we kept test results on commit message changes. | 13:51 |
ociuhandu | fungi: sure, going to check on how to do it | 13:51 |
ociuhandu | fungi, mordred: we're looking into adding a CI for pbr as well, as soon as possible | 13:52 |
fungi | sdague: well, right now we consider trivial rebases and whitespace-only changes to be a subset of commit message changes. and that's not exposed in the gerrit event stream in any practical way either (unless you want to consider parsing the contents of the comment left by the hook script) | 13:53 |
*** jgrimm has joined #openstack-infra | 13:53 | |
*** caleb_ has joined #openstack-infra | 13:53 | |
*** thuc has joined #openstack-infra | 13:54 | |
*** wchrisj has joined #openstack-infra | 13:54 | |
*** caleb_` has joined #openstack-infra | 13:55 | |
*** jpeeler has quit IRC | 13:55 | |
*** jpeeler has joined #openstack-infra | 13:56 | |
sdague | fungi: sure, I just have seen plenty of instances, especially with clean check, where there is too much disincentive to get the commit message right because of the time delay it will add | 13:57 |
*** thuc has quit IRC | 13:57 | |
fungi | sdague: well, that and it wipes all review votes too | 13:57 |
sdague | sure | 13:58 |
fungi | i think that adds more of a disincentive than the recheck it triggers | 13:58 |
*** caleb_ has quit IRC | 13:58 | |
mordred | ociuhandu: that would make me super happy - I hate it when I break you guys | 13:58 |
sdague | honestly, not as much | 13:58 |
*** salv-orlando_ has joined #openstack-infra | 13:59 | |
sdague | fungi: the issue is, besides the couple of hacking rules, the commit message is only the realm of humans. | 13:59 |
sdague | anyway, it was a thought, probably part of an optimizing the gate session in Atlanta | 14:00 |
fungi | sdague: though that does bring up an important point... if you're fixing a hacking compliance issue in your commit message, you'd have to explicitly recheck | 14:00 |
sdague | fungi: sure, or we disconnect the commit message checking from the pep8 jobs | 14:01 |
*** salv-orlando has quit IRC | 14:01 | |
*** nati_ueno has quit IRC | 14:03 | |
*** salv-orlando_ has quit IRC | 14:03 | |
fungi | also, we just cleared a batch of jobs at the head of the gate, so i'm promoting that fix now | 14:04 |
sdague | fungi: thanks | 14:04 |
sdague | that should help some | 14:04 |
fungi | here's hoping | 14:05 |
*** vkozhukalov_ has quit IRC | 14:05 | |
santibaldassin | hey folks.....quick question....I have a review which is failing due to a couple of tests in tempest which have to be updated according to the changes in my review... | 14:05 |
santibaldassin | how's the procedure in this case... | 14:05 |
santibaldassin | ? | 14:05 |
santibaldassin | this is my review https://review.openstack.org/#/c/81356/ | 14:05 |
*** thomasem has joined #openstack-infra | 14:06 | |
*** saschpe__ has quit IRC | 14:06 | |
*** saschpe__ has joined #openstack-infra | 14:06 | |
fungi | santibaldassin: you probably want to talk to the tempest developers in #openstack-qa about submitting patches to update tests in it | 14:06 |
*** thuc has joined #openstack-infra | 14:07 | |
santibaldassin | fungi: thanks | 14:07 |
openstackgerrit | Sean Dague proposed a change to openstack-infra/os-loganalyze: wrap log lines https://review.openstack.org/80843 | 14:07 |
*** saschpe__ has quit IRC | 14:07 | |
*** saschpe_ has quit IRC | 14:07 | |
*** Ajaeger1 has joined #openstack-infra | 14:07 | |
*** vkozhukalov_ has joined #openstack-infra | 14:08 | |
zaro | hashar: around? | 14:10 |
*** markmcclain has joined #openstack-infra | 14:10 | |
hashar | zaro: yes! | 14:10 |
zaro | hashar: would you have time to review this one? https://review.openstack.org/#/c/52080 | 14:11 |
zaro | hashar: it's been on review for a while and other good things are waiting on it. | 14:12 |
hashar | yeah sorry I am lagging out | 14:12 |
*** markmcclain1 has joined #openstack-infra | 14:12 | |
hashar | zaro: I haven't looked at that patch yet to be honest. It scares me :] | 14:13 |
*** chandan_kumar has quit IRC | 14:13 | |
zaro | hashar: np. i can get someone else if you don't want to. | 14:14 |
*** markmcclain has quit IRC | 14:15 | |
openstackgerrit | Sean Dague proposed a change to openstack-infra/config: change 3 hr jobs to 2 hrs https://review.openstack.org/81538 | 14:16 |
sdague | fungi: that's probably worth getting in soonish | 14:17 |
*** chandan_kumar has joined #openstack-infra | 14:17 | |
*** shakamunyi has joined #openstack-infra | 14:18 | |
hashar | zaro: yeah it is safer to ask someone else. Will attempt to review it though | 14:18 |
*** freyes has joined #openstack-infra | 14:20 | |
*** malini_afk is now known as malini | 14:22 | |
*** ArxCruz has joined #openstack-infra | 14:23 | |
Ajaeger1 | Hi infra team, my recent translation fixes work - but now I found the next broken one, could you review https://review.openstack.org/#/c/81450/ , please? | 14:25 |
Ajaeger1 | mordred: thanks for your quick fix this morning, I'm still waiting to see whether it worked... | 14:25 |
*** freyes has quit IRC | 14:25 | |
*** rpodolyaka1 has joined #openstack-infra | 14:27 | |
*** ryanpetrello has quit IRC | 14:27 | |
*** ryanpetrello has joined #openstack-infra | 14:27 | |
*** rcleere has joined #openstack-infra | 14:28 | |
*** thuc_ has joined #openstack-infra | 14:28 | |
*** ryanpetrello has quit IRC | 14:28 | |
Ajaeger1 | thanks, fungi! | 14:30 |
*** thuc has quit IRC | 14:30 | |
*** rpodolyaka1 has quit IRC | 14:31 | |
*** thuc_ has quit IRC | 14:32 | |
*** che-arne has joined #openstack-infra | 14:33 | |
*** maxbit has joined #openstack-infra | 14:34 | |
*** miqui has joined #openstack-infra | 14:34 | |
*** saschpe- has joined #openstack-infra | 14:36 | |
*** markmcclain1 has quit IRC | 14:40 | |
*** medieval1 has joined #openstack-infra | 14:40 | |
*** unicell has quit IRC | 14:40 | |
*** saschpe- has quit IRC | 14:42 | |
*** saschpe- has joined #openstack-infra | 14:42 | |
*** saschpe- is now known as saschpe_ | 14:43 | |
*** saschpe_ is now known as saschpe__ | 14:43 | |
*** dims has joined #openstack-infra | 14:43 | |
openstackgerrit | Matt Farina proposed a change to openstack-infra/config: Updating the group config for the golang-client project to reflect the core and ptl structure. https://review.openstack.org/81548 | 14:46 |
jeblair | fungi, ttx: also the periodic jobs run at 0600, so there should always be a little bump there | 14:46 |
*** luis_ has quit IRC | 14:47 | |
*** pcrews has joined #openstack-infra | 14:47 | |
*** thedodd has joined #openstack-infra | 14:49 | |
dims | sdague, looks like we are seriously considering https://review.openstack.org/#/c/79816/, want me to yank the log_level out from there? | 14:49 |
*** shakamunyi has quit IRC | 14:52 | |
*** shakamunyi has joined #openstack-infra | 14:52 | |
dstufft | sdague: fungi clarkb whoever So more information about PyPy + setuptools | 14:53 |
dstufft | it's not PyPy's fault, it's a patch debian makes against PyPy | 14:53 |
*** thuc has joined #openstack-infra | 14:53 | |
fungi | dstufft: yeah, so the debian pypy package maintainer(s) backported pep 3147 support to it? | 14:53 |
fungi | bizarre... | 14:53 |
dstufft | yea | 14:53 |
dstufft | and it's not quite right | 14:54 |
dstufft | well most likely they did that so you can run pypy and python next to each other without them overwriting each other's .pyc files | 14:54 |
fungi | i'm digging up the package changelog entries associated with that now to find rationale (and targets of ire) | 14:54 |
dstufft | https://gist.github.com/dstufft/9643283 | 14:54 |
dstufft | a script that shows the problem | 14:54 |
*** markmcclain has joined #openstack-infra | 14:54 | |
*** salv-orlando has joined #openstack-infra | 14:54 | |
*** reed has joined #openstack-infra | 14:55 | |
dstufft | they just need to make sure they only create a __pycache__ directory if they have a file to put in it | 14:55 |
*** unicell has joined #openstack-infra | 14:55 | |
openstackgerrit | Solly Ross proposed a change to openstack/requirements: Limit psutil to <2.0.0 https://review.openstack.org/81373 | 14:56 |
*** ryanpetrello has joined #openstack-infra | 14:57 | |
*** david-lyle has joined #openstack-infra | 14:58 | |
*** roeyc has quit IRC | 14:58 | |
Alex_Gaynor | fungi: We're working with the PPA maintainer now to get this figured out | 15:00 |
fungi | Alex_Gaynor: awesome--thanks! | 15:00 |
*** caleb_` has quit IRC | 15:00 | |
fungi | Alex_Gaynor: should we go ahead and switch our pypy tests to non-voting in the interim so projects which were gating on it are no longer blocked or inserting unsavory workarounds in tox configs? | 15:01 |
Alex_Gaynor | fungi: that would make me sad, since it gives us a window in which they might break :-( | 15:01 |
openstackgerrit | Nikita Konovalov proposed a change to openstack-infra/storyboard: Added db_api for comments https://review.openstack.org/81232 | 15:02 |
fungi | Alex_Gaynor: well, if you expect the ppa will have fixed packages today, then it's probably not worth the effort to make them non-voting and switch them right back | 15:02 |
Alex_Gaynor | fungi: let me figure how long this is expected to take | 15:03 |
openstackgerrit | Nikita Konovalov proposed a change to openstack-infra/storyboard: [WIP] Comments controller https://review.openstack.org/81505 | 15:03 |
*** kmartin has joined #openstack-infra | 15:04 | |
*** mwagner__ has joined #openstack-infra | 15:05 | |
*** yamahata has joined #openstack-infra | 15:07 | |
dstufft | Alex_Gaynor: fungi we can probably patch this in pip, but not sure if that's reasonable for openstack to depend on pip 1.5.5 or not | 15:07 |
Alex_Gaynor | dstufft: If there's some way to also get a fix into pip, that'd be super cool, this seems pretty implementation detail-y | 15:08 |
jeblair | mordred: maybe we should chat in irc about https://review.openstack.org/#/c/81197/ when you have a few minutes | 15:08 |
fungi | dstufft: it's likely to reach us faster if solved in the pypy packages | 15:08 |
*** dkranz has quit IRC | 15:08 | |
dstufft | Alex_Gaynor: probably pip should record which directories it installs too | 15:08 |
dstufft | not just which files | 15:08 |
dstufft | Oh look, I did it right I think https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=742132 | 15:09 |
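As a rough illustration of the behaviour dstufft describes (not Debian's patch or pip's internals): the symptom is an empty PEP 3147 __pycache__ directory left behind because the installer recorded the files it created but not the directories. A small walker like the sketch below would spot such orphans; the example path is hypothetical.

```python
# Illustrative only -- not Debian's patch or pip's internals. The symptom
# discussed above is an empty PEP 3147 __pycache__ directory left behind
# after uninstall, because the installer tracked files but not the
# directories it created. This helper just walks a tree and reports orphans.
import os


def find_orphan_pycache_dirs(root):
    orphans = []
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        if (os.path.basename(dirpath) == '__pycache__'
                and not filenames and not dirnames):
            orphans.append(dirpath)
    return orphans


if __name__ == '__main__':
    # Hypothetical example path; adjust for the interpreter being inspected.
    for path in find_orphan_pycache_dirs('/usr/lib/pypy/dist-packages'):
        print(path)
```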
*** dcramer_ has joined #openstack-infra | 15:09 | |
reed | good morning yall | 15:09 |
*** luisg has joined #openstack-infra | 15:10 | |
fungi | dstufft: that looks right | 15:10 |
fungi | (debianista hat on) | 15:10 |
openstackgerrit | Nikita Konovalov proposed a change to openstack-infra/storyboard: Added db_api for comments https://review.openstack.org/81232 | 15:11 |
*** flaper87 is now known as flaper87|afk | 15:11 | |
*** gokrokve has joined #openstack-infra | 15:11 | |
dstufft | fungi: bug reported from my mac, don't tell them that though they might not fix it ! | 15:11 |
*** medieval1 has quit IRC | 15:13 | |
*** medieval1 has joined #openstack-infra | 15:13 | |
*** atiwari has joined #openstack-infra | 15:14 | |
*** mgagne has joined #openstack-infra | 15:16 | |
*** caleb_ has joined #openstack-infra | 15:17 | |
*** medieval1 has quit IRC | 15:18 | |
*** flaper87|afk is now known as flaper87 | 15:18 | |
anteaya | morning reed | 15:19 |
* anteaya is willing to answer to yall | 15:19 | |
*** medieval1 has joined #openstack-infra | 15:20 | |
reed | :) | 15:21 |
*** CaptTofu has quit IRC | 15:21 | |
*** CaptTofu has joined #openstack-infra | 15:22 | |
*** dkranz has joined #openstack-infra | 15:22 | |
*** saschpe__ is now known as saschpe | 15:24 | |
openstackgerrit | Elizabeth Krumbach Joseph proposed a change to openstack-infra/config: Move channel reg requirement to top of IRC docs https://review.openstack.org/81286 | 15:24 |
*** caleb_ has quit IRC | 15:24 | |
openstackgerrit | A change was merged to openstack-infra/config: Remove obsolete third argument check from ggp https://review.openstack.org/81300 | 15:25 |
*** CaptTofu has quit IRC | 15:26 | |
openstackgerrit | Malini Kamalambal proposed a change to openstack-infra/config: Add post test hook to copy marconi logs https://review.openstack.org/81249 | 15:27 |
*** medieval1 has quit IRC | 15:27 | |
sdague | fungi: any idea what's up with ES? | 15:28 |
*** rpodolyaka1 has joined #openstack-infra | 15:28 | |
*** medieval1 has joined #openstack-infra | 15:28 | |
sdague | like why we still aren't recovering | 15:28 |
*** CaptTofu has joined #openstack-infra | 15:28 | |
jeblair | fungi, clarkb: question on https://review.openstack.org/#/c/81301/ | 15:28 |
fungi | sdague: high log volume? | 15:28 |
*** caleb_ has joined #openstack-infra | 15:29 | |
sdague | fungi: it's not that high | 15:29 |
pleia2 | fungi: can I get your eyes on this one? https://review.openstack.org/#/c/69510/ I still have pip problems to work through, but it would be nice to get this much in so I don't need to rebase again | 15:29 |
*** e0ne has quit IRC | 15:29 | |
fungi | sdague: it does look like we never made it completely out of the backlog from last night | 15:29 |
sdague | agreed | 15:30 |
*** rpodolyaka1 has quit IRC | 15:32 | |
*** medieval1 has quit IRC | 15:32 | |
dstufft | sdague: fungi how hard would it be to add an environment variable to the gate that would fix the PyPy problem? | 15:34 |
fungi | pleia2: does 'sudo yum -y install wget' no-op and return 0 successfully on systems where it's already installed? | 15:34 |
fungi | dstufft: what environment variable? we might be able to set it in the scripts which run tox, as long as exports in the calling environment convey through tox itself | 15:35 |
*** rpodolyaka has quit IRC | 15:35 | |
fungi | dstufft: right now the challenge is that pip is getting invoked by virtualenv/tox which is where we're running into issues | 15:36 |
jeblair | sdague: can we make rstcheck an out of repo tool so that all the -spec repos can use it? | 15:36 |
dstufft | fungi: I have to test it still - I may be wrong, but I think disabling Wheel will work around the issue | 15:36 |
*** rpodolyaka has joined #openstack-infra | 15:36 | |
fungi | dstufft: i was still hitting it with pip --no-use-wheel | 15:36 |
dstufft | hm | 15:36 |
dstufft | ah | 15:36 |
dstufft | you know | 15:36 |
dstufft | yea I'm wrong | 15:37 |
dstufft | because pip/setuptools are installed into the virtualenv using wheel | 15:37 |
fungi | i didn't mention it in the bug report because it seemed irrelevant | 15:37 |
SpamapS | ugh.. hacking on pypi-mirror is.. scary.. no unit tests.. | 15:39 |
SpamapS | (or well hidden unit tests) | 15:39 |
*** jcoufal has joined #openstack-infra | 15:39 | |
sdague | jeblair: probably, like bash8 I wanted to incubate it a little first, because right now it was 30 minutes of hacking | 15:39 |
*** jcoufal has quit IRC | 15:39 | |
sdague | I'll probably try to do the bash8 extract in down week before summit | 15:40 |
jeblair | sdague: yeah, though there are two specs repos in active use now, and i think it's important they don't evolve conflicting conventions | 15:40 |
openstackgerrit | Thierry Carrez proposed a change to openstack-infra/storyboard: Remove Branch and Milestone legacy tables https://review.openstack.org/81562 | 15:41 |
sdague | well, I think it's still very much an experimental phase, so I think they might for a while to figure out what works for them | 15:41 |
sdague | right now, I'd rather try a few things, and figure out what seems like common checking | 15:42 |
jeblair | sdague: then running those jobs is premature. we don't normally have more than one team diverging from our standard tools at a time; part of the reason is to make it less confusing for contributors (that's why we standardize all this) | 15:44 |
jeblair | sdague: since we do have two teams trying this divergence, i think you ought to be talking to each other and trying to keep things as in-sync as possible | 15:44 |
jeblair | sdague: so that contributing to nova and tempest aren't two completely different things (which are, in turn, both different than contributing to neutron) | 15:45 |
sdague | jeblair: so qa-specs repo isn't allowed to introduce its own tests? | 15:45 |
sdague | I really don't understand how this is any different from the fact that we allow local hacking rules in source trees | 15:46 |
*** krotscheck has joined #openstack-infra | 15:46 | |
SpamapS | sdague: funny, I was working on a thing to check markdown for common mistakes too.. (I'm assuming thats what rstcheck is) | 15:47 |
sdague | SpamapS: it's actually not even doing that yet | 15:48 |
jeblair | sdague: this isn't about allowing or not allowing anything | 15:48 |
SpamapS | sdague: just parsing? | 15:48 |
sdague | it's enforcing line length and the file ends in .rst | 15:48 |
SpamapS | lovely, that's basically what I wanted to do too | 15:48 |
SpamapS | lost it though.. because it got -1'd for being in the wrong repo and not having unit tests.. and I was like "meh" :-P | 15:48 |
SpamapS | (of course I also wanted to extract source code blocks and try to parse them as well) | 15:49 |
sdague | jeblair: review #2 in qa-specs had those 2 issues, which I wanted to have automatically failed instead of manually failed | 15:49 |
jeblair | sdague: i really don't want to end up with two ways of dealing with specs repos that are completely different. let's look at this another way -- how are you working together with nova-specs? | 15:49 |
jeblair | sdague: if you had those issues, i bet they will/have too | 15:49 |
sdague | jeblair: maybe, and maybe they don't care | 15:49 |
openstackgerrit | Clint "SpamapS" Byrum proposed a change to openstack-infra/pypi-mirror: Fix run-mirror writing wrong project directory https://review.openstack.org/81469 | 15:50 |
jeblair | sdague: can you ask? | 15:50 |
sdague | so I'm participating on that side as well | 15:50 |
sdague | however the point here was to experiment with what works and what doesn't, I thought | 15:50 |
jeblair | sdague: my main concern about starting these both up at once was ending up with a divergence. what's the plan to make sure we don't end up with that? | 15:51 |
sdague | jeblair: in the experimental phase divergence is fine | 15:51 |
sdague | because the whole point is to figure out what works and what doesn't | 15:52 |
sdague | and if we're constantly saying one repo can't try something unless the other one signs up for it | 15:52 |
sdague | then we're not going to get those answers about good ideas or not | 15:52 |
jeblair | sdague: so at some point, you'd be okay adopting one standard for qa-specs, even if it's ditching everything you have come up with in favor of what's happened in nova specs? | 15:52 |
jeblair | sdague: and will reformat the repo if necessary? | 15:52 |
sdague | jeblair: why? We don't make tempest rewrite to conform to nova | 15:53 |
*** e0ne has joined #openstack-infra | 15:53 | |
jeblair | sdague: actually we do. we're asking new projects to ditch falcon to use pecan, etc.. | 15:53 |
jeblair | sdague: but regardless... | 15:53 |
sdague | there are common things between the code bases | 15:54 |
sdague | and there are divergences | 15:54 |
jeblair | sdague: this is about replacing part of the development process, and it's very important that the development process for openstack be consistent | 15:54 |
jeblair | sdague: that's who we are and what we do | 15:54 |
sdague | but i think, 4 days after these repos exist, is not the time to standardize | 15:54 |
jeblair | sdague: yeah, i'm trying to work with you on that | 15:54 |
*** coolsvap has joined #openstack-infra | 15:54 | |
*** chandankumar_ has joined #openstack-infra | 15:54 | |
SpamapS | clarkb: commented in the review too, but https://review.openstack.org/#/c/81469/ fixes file:/// usage of pypi-mirror-created pypi mirrors. | 15:54 |
jeblair | sdague: i'm talking about in many months time, at the J or K summit perhaps. | 15:55 |
jeblair | sdague: after the "experimental phase", let's talk about what that means | 15:55 |
SpamapS | clarkb: which we have been trying, and failing, to use for a while for TripleO. One less thing that we have to do to the developer's box. | 15:55 |
*** reed has quit IRC | 15:55 | |
openstackgerrit | Nikita Konovalov proposed a change to openstack-infra/storyboard: Added db_api for comments https://review.openstack.org/81232 | 15:55 |
sdague | jeblair: sure, I expect that the experiences from both of these groups are going to come together to figure out a common pattern that works, and what local variance is needed | 15:55 |
sdague | but we need a lot more experience to do that first | 15:55 |
sdague | I'd expect that to be at K summit | 15:55 |
*** chandan_kumar has quit IRC | 15:56 | |
*** dkranz has quit IRC | 15:56 | |
sdague | hacking was copy and pasted between projects for 6 months before we extracted it | 15:56 |
sdague | which is fine | 15:56 |
sdague | and it actually meant you could figure out "so, you liked that part, but deleted this other part, ok, so that's where the natural commonality is" | 15:56 |
*** rossella_s has quit IRC | 15:57 | |
jeblair | sdague: right. but in the long run, it's not going to be okay if someone has to write .rst for nova and .md for qa, and other random pointless differences. | 15:57 |
sdague | jeblair: we're both using .rst | 15:57 |
jeblair | just an example | 15:57 |
*** rossella_s has joined #openstack-infra | 15:57 | |
pleia2 | heading out to extended lunch with mom, bbl | 15:58 |
sdague | anyway, lunch time, away for a bit | 15:58 |
jeblair | sdague: so i need to know if you're okay with standardizing at some point, just like everything else in the project (bugs on launchpad, code passes pep8, etc.) | 15:58 |
*** dims has quit IRC | 15:59 | |
*** reed has joined #openstack-infra | 16:00 | |
openstackgerrit | Davanum Srinivas (dims) proposed a change to openstack-infra/devstack-gate: Temporary workaround for Time out errors from libvirt https://review.openstack.org/79816 | 16:04 |
derekh | quick question, are there any plans to do a new release of gear? | 16:04 |
openstackgerrit | A change was merged to openstack-infra/config: pbx: Add a shared account https://review.openstack.org/81380 | 16:05 |
jeblair | sdague: https://review.openstack.org/#/c/81400/ | 16:05 |
jeblair | derekh: there's an outstanding change here: https://review.openstack.org/#/c/80304/ | 16:05 |
jeblair | derekh: but after that, we should, as a couple of important bugfixes have landed. | 16:06 |
derekh | jeblair: ok, thanks | 16:06 |
*** malini has left #openstack-infra | 16:06 | |
fungi | jeblair: on a related note, is https://launchpad.net/bugs/1289432 benign noise in the logs? | 16:07 |
uvirtbot | Launchpad bug 1289432 in openstack-ci "toci_devtest.sh failure: 'Exception in poll loop'" [Medium,Triaged] | 16:07 |
*** thuc has quit IRC | 16:07 | |
*** thomasem has quit IRC | 16:07 | |
*** rhsu has joined #openstack-infra | 16:07 | |
*** moted has joined #openstack-infra | 16:07 | |
*** thuc has joined #openstack-infra | 16:08 | |
*** Sukhdev has joined #openstack-infra | 16:08 | |
jeblair | fungi: well "Exception in poll loop" is usually gear's way of saying "someone disconnected". | 16:08 |
jeblair | fungi: we should probably capture the normal case of that and log it without a traceback | 16:09 |
jeblair | fungi: so it's usually benign. I can't really tell about that bug though, because maybe they weren't expecting it to disconnect? | 16:09 |
*** dkranz has joined #openstack-infra | 16:09 | |
fungi | okay. i spotted it in the logs a couple weeks ago when i was troubleshooting unrelated issues, and then coincidentally saw that bug report | 16:09 |
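A hedged sketch of the logging change jeblair suggests, not gear's actual code: treat an ordinary peer disconnect inside the poll loop as a debug-level event, and keep the full traceback only for genuinely unexpected failures.

```python
# A hedged sketch of the idea above -- NOT gear's actual code. A plain peer
# disconnect during the poll loop is logged at debug without a traceback,
# while anything unexpected still surfaces as "Exception in poll loop".
import logging
import socket

log = logging.getLogger('poll-loop-sketch')


def poll_once(conn):
    try:
        return conn.recv(4096)
    except socket.error as e:
        # Expected when the other side simply went away; no traceback needed.
        log.debug("Peer disconnected during poll: %s", e)
        return None
    except Exception:
        # Anything else really is an exception in the poll loop.
        log.exception("Exception in poll loop")
        raise
```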
openstackgerrit | A change was merged to openstack-infra/config: Add a net-info job builder macro https://review.openstack.org/81403 | 16:10 |
*** flaper87 is now known as flaper87|afk | 16:11 | |
*** saschpe has quit IRC | 16:11 | |
*** rlandy has quit IRC | 16:11 | |
*** Ryan_Lane has joined #openstack-infra | 16:12 | |
*** thuc has quit IRC | 16:12 | |
*** saschpe has joined #openstack-infra | 16:12 | |
openstackgerrit | A change was merged to openstack-infra/zuul: zuul.gerrit.Gerrit.isMerged should not return None https://review.openstack.org/81418 | 16:12 |
*** medieval1 has joined #openstack-infra | 16:13 | |
*** ociuhandu has quit IRC | 16:13 | |
*** khyati has joined #openstack-infra | 16:13 | |
ttx | jeblair, fungi: when you have free cycles, would welcome your help in debugging http://logs.openstack.org/93/72093/3/check/gate-oslo.rootwrap-python33/ebbf14e/console.html | 16:14 |
ttx | Looks like some pip version conflict on the py33 test nodes | 16:14 |
*** saschpe has quit IRC | 16:14 | |
ttx | dhellmann looked into it but couldn't find the issue | 16:14 |
dhellmann | ttx, jeblair, mordred : it looks like a similar problem to the one reported for translation jobs on the -infra ML recently | 16:15 |
*** saschpe has joined #openstack-infra | 16:15 | |
ttx | he launched two bottles at sea: https://review.openstack.org/#/c/81570/1 and https://review.openstack.org/#/c/81567/ | 16:15 |
dhellmann | mordred's proposal is to "revert the pbr >= 1.4 patch" (which I think means pip in pbr, rather than pbr itself) | 16:15 |
*** rhsu has quit IRC | 16:16 | |
dhellmann | if we agree that's the right approach, I can put together a patch | 16:16 |
*** ociuhandu has joined #openstack-infra | 16:16 | |
* anteaya pictures dhellmann in a liferaft hefting bottles into the water, à la Robert Redford in All Is Lost | 16:17 | |
* dhellmann does not view being compared to redford unfavorably | 16:17 | |
anteaya | :D | 16:18 |
dhellmann | oh, and mordred's follow-up points out that there is an "unmerged puppet patch" that will remove the apt-installed pip, so I wonder if that's the more correct fix | 16:18 |
*** mrmartin has joined #openstack-infra | 16:19 | |
jeblair | dhellmann: yeah, the problem is that patch could break every system we run, and mordred hasn't been around to babysit it. our latest thinking was that we should get puppetboard up and running before merging that so that more people can find and fix errors | 16:19 |
jeblair | dhellmann: so that probably won't happen today | 16:20 |
*** ihrachys is now known as ihrachys|afk | 16:20 | |
dhellmann | jeblair: ok, I understand the risk. We can't make any changes to oslo.rootwrap right now because of this. Is there some other work-around we can use in the meantime? | 16:20 |
*** bada_ has quit IRC | 16:21 | |
*** bada_ has joined #openstack-infra | 16:21 | |
jeblair | dhellmann: however, i'm a little confused because i thought the single use slaves were already doing what his unmerged puppet patch does | 16:21 |
jeblair | dhellmann: (the transifex slave is not single use, so i expect it to be different) | 16:22 |
fungi | it's also unclear to me why tox is not using the version of pip in the virtualenv when running setup.py sdist-make | 16:22 |
*** jlibosva has quit IRC | 16:22 | |
dhellmann | fungi: according to a later error, pip is not being installed in the virtualenv tox creates | 16:22 |
dhellmann | http://logs.openstack.org/93/72093/3/check/gate-oslo.rootwrap-python33/ebbf14e/console.html | 16:22 |
fungi | dhellmann: right, which i also find equally odd | 16:22 |
dhellmann | right | 16:22 |
*** dprince has quit IRC | 16:23 | |
dhellmann | I could change pbr, but that would force a release, and I'd hate to go through all of that if we don't think it would fix anything. | 16:23 |
dhellmann | fungi: keep in mind this is python3, so I don't know if there's any difference in the node setup | 16:23 |
fungi | jeblair: i think the change you're thinking of is https://review.openstack.org/75213 which also isn't merged yet | 16:23 |
dhellmann | fungi: that does look like it might be related | 16:24 |
*** thomasem has joined #openstack-infra | 16:25 | |
fungi | dhellmann: i can try manually re-running that job with and without first trying the fix_pip.sh from 75213 to see whether it helps the situation | 16:25 |
*** maxbit has quit IRC | 16:25 | |
dhellmann | fungi: ok | 16:25 |
*** thomasem has left #openstack-infra | 16:26 | |
jeblair | fungi: oh, i had forgotten about that change. that's a _different_ change that could break lots of things that mordred hasn't been around to babysit | 16:26 |
*** thomasem has joined #openstack-infra | 16:26 | |
fungi | jeblair: right. different but similar but different but similar | 16:26 |
jeblair | fungi: also, i'm guessing that merging https://review.openstack.org/#/c/51425/ means we wouldn't need to merge https://review.openstack.org/75213 | 16:26 |
*** maxbit has joined #openstack-infra | 16:27 | |
jeblair | but https://review.openstack.org/75213 is the more expedient fix. | 16:27 |
jeblair | sigh. | 16:27 |
jeblair | fungi: so yeah, why don't you test that out, and i'll rebase https://review.openstack.org/75213 to get it ready to merge if we want to do that | 16:27 |
fungi | right, the fix_pip.sh will simply tell us whether this is the class of problem we think it might be | 16:27 |
*** melwitt has joined #openstack-infra | 16:28 | |
openstackgerrit | Maru Newby proposed a change to openstack-infra/config: Run neutron functional job as tempest user https://review.openstack.org/81400 | 16:28 |
*** rpodolyaka1 has joined #openstack-infra | 16:28 | |
fungi | jeblair: i think you held a few nodes for image update tests several days ago. do you still need them held (nodepool seems not to be reaping held nodes)? | 16:29 |
jeblair | fungi: i need none of those nodes, you can delete them | 16:29 |
fungi | done | 16:29 |
marun | sdague: Do you have concerns about this? https://review.openstack.org/#/c/81400/ | 16:29 |
openstackgerrit | Ricardo Carrillo Cruz proposed a change to openstack-infra/config: Initial openstackdroid commit for Stackforge, with review changes https://review.openstack.org/81234 | 16:30 |
fungi | pleia2: ignore my earlier question about yum install... seems to do what we expect (i was worried about causing image updates to fail on bare-centos6 nodes) | 16:31 |
*** rpodolyaka1 has quit IRC | 16:32 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Fix pip on py3k/pypy nodes https://review.openstack.org/75213 | 16:33 |
*** SumitNaiksatam has quit IRC | 16:33 | |
jeblair | ttx, dhellmann, fungi: ^ | 16:33 |
*** UtahDave has joined #openstack-infra | 16:34 | |
ttx | bit scary, but I guess that's far from the only thing you run as root downloaded from the internet | 16:35 |
*** UtahDave has left #openstack-infra | 16:35 | |
fungi | "zipimport.ZipImportError: bad local file header in /usr/local/lib/python2.7/dist | 16:36 |
fungi | -packages/setuptools-3.3-py2.7.egg" | 16:36 |
*** thuc_ has joined #openstack-infra | 16:36 | |
fungi | ick | 16:36 |
jeblair | ttx: it's actually the second time that same script is downloaded and run. | 16:36 |
ttx | jeblair: twice the fun! | 16:36 |
*** thuc__ has joined #openstack-infra | 16:37 | |
*** yassine has quit IRC | 16:37 | |
*** hashar has quit IRC | 16:39 | |
*** shakamunyi has quit IRC | 16:40 | |
*** thuc_ has quit IRC | 16:40 | |
*** salv-orlando has quit IRC | 16:40 | |
*** hogepodge has quit IRC | 16:41 | |
*** nithyag has joined #openstack-infra | 16:41 | |
*** salv-orlando has joined #openstack-infra | 16:42 | |
fungi | dstufft: you may be interested in this...? http://paste.openstack.org/show/73832 (ez_setup.py throwing "zipimport.ZipImportError: bad local file header") | 16:42 |
dhellmann | wow | 16:42 |
openstackgerrit | A change was merged to openstack-infra/config: change 3 hr jobs to 2 hrs https://review.openstack.org/81538 | 16:42 |
*** hogepodge has joined #openstack-infra | 16:43 | |
*** gyee has joined #openstack-infra | 16:43 | |
fungi | i wonder whether we should also be removing any local python-pkg-resources distro packages | 16:44 |
fungi | trying now | 16:44 |
*** rhsu has joined #openstack-infra | 16:45 | |
*** markmcclain has quit IRC | 16:45 | |
*** dprince has joined #openstack-infra | 16:47 | |
*** salv-orlando has quit IRC | 16:47 | |
*** salv-orlando has joined #openstack-infra | 16:48 | |
fungi | seems unrelated... and the pkg_resources.py throwing it is the one in the tempdir anyway | 16:48 |
*** dstanek_afk has joined #openstack-infra | 16:48 | |
*** dstanek is now known as Guest64110 | 16:49 | |
*** dstanek_afk is now known as dstanek | 16:49 | |
clarkb | SpamapS: right, my comment was more that I think you are attacking a symptom, not a cause | 16:50 |
clarkb | SpamapS: so likely to run into more problems with that | 16:50 |
clarkb | SpamapS: projects have the wrong case in their requirements for example and so on | 16:50 |
clarkb | SpamapS: so while it is a bug and I have no problem with it getting fixed I don't think fixing that particular bug will fix your problems | 16:51 |
* clarkb reads scrollback to see if something is broken | 16:51 | |
*** caleb_ has quit IRC | 16:52 | |
*** tstevenson has quit IRC | 16:52 | |
*** yassine has joined #openstack-infra | 16:52 | |
*** salv-orlando has quit IRC | 16:52 | |
fungi | clarkb: SpamapS: i think i recall hearing that newer pip does (or soon will?) look case-insensitively for packages in local filesystem mirror trees, but the current workaround for it in integration jobs is that we test the resulting local copy by fronting it with apache and tell pip to access it through that | 16:53 |
clarkb | fungi: correct | 16:53 |
*** mrmartin has quit IRC | 16:53 | |
clarkb | but more importantly, making the mirror use the correct case doesn't mean individual projects will, so you will still be broken | 16:54 |
fungi | right, we have situations already where we develop package A which depends on package B we don't develop which depends on package C using an inaccurate naming pattern. you can endeavor to fix the entire web of those dependencies with various parties, or you can find a better solution | 16:55 |
*** SumitNaiksatam has joined #openstack-infra | 16:55 | |
mtreinish | jeblair: so on 81366, what do you think of run_run_tests.sh? Or is that too weird | 16:55 |
*** amcrn has joined #openstack-infra | 16:55 | |
*** krotscheck has quit IRC | 16:55 | |
clarkb | SpamapS: so my comment was trying to say that while fixing that bug isn't a bad thing it won't help either | 16:55 |
clarkb | SpamapS: which I may have failed to say correctly because it was early | 16:56 |
*** mrmartin has joined #openstack-infra | 16:56 | |
jeblair | mtreinish: weird but i understand it. :) run_tests.sh or run_test_script.sh work for me too. | 16:57 |
jeblair | clarkb, fungi, SergeyLukjanov: ^ opinions? | 16:57 |
clarkb | jeblair: ++ run_bash8 is confusing | 16:58 |
clarkb | and those two suggestions seem reasonable to me (and I can't come up with anything better) | 16:58 |
*** andreaf has quit IRC | 16:59 | |
fungi | test-wrapper.sh? | 16:59 |
fungi | notox.sh? ;) | 16:59 |
jeblair | oh, yeah, should be hyphens, not underscores (run-X.sh) | 17:00 |
*** santibaldassin has left #openstack-infra | 17:01 | |
*** ociuhandu has quit IRC | 17:01 | |
mtreinish | jeblair: heh so my first suggestion would become run-run_tests.sh | 17:01 |
*** sweston has quit IRC | 17:01 | |
jeblair | mtreinish: yep. that would be the blindly consistent version. :) | 17:01 |
jeblair | mtreinish: or run-run_tests.sh.sh :) | 17:02 |
jeblair | okay, but seriously, maybe run-test-script.sh or run-test-wrapper.sh are the best bets | 17:02 |
rcarrillocruz | jeblair: could you please +2 https://review.openstack.org/#/c/81234/ ? I changed the topic to new-project as reviewed... | 17:02 |
fungi | the problem for me is that we generally don't call run_tests.sh directly from job builders/slave scripts... perhaps run_tests.sh in that repository could use a little name tweaking too for consistency? basically we have a non-python test runner we want to invoke, right? | 17:02 |
*** sweston has joined #openstack-infra | 17:02 | |
*** mihgen has quit IRC | 17:03 | |
*** markmc has quit IRC | 17:03 | |
mtreinish | jeblair: ok I'll push out a rename patch with one of those in front of the rstcheck job patch | 17:03 |
fungi | so instead of using tox to call testr and other things, we're using a different entry point which is not tox. having a standardized name for it which is maybe not the name of something we also have in some projects which we don't use to run jobs could help | 17:04 |
mtreinish | fungi: yeah basically. Blame sdague for the run_tests naming | 17:04 |
clarkb | ES is green again by the way | 17:04 |
fungi | run_tests.sh is a bit overloaded in its use elsewhere to make a good standard as a job backend name too, in my opinion | 17:04 |
mtreinish | fungi: it's really only non-python projects that do this though | 17:04 |
mtreinish | clarkb: cool | 17:05 |
fungi | mtreinish: right, but we're accumulating more and more of those | 17:05 |
fungi | so this is a good time to pick something consistent and clear before it spreads too far | 17:05 |
*** chandan_kumar has joined #openstack-infra | 17:05 | |
jeblair | fungi: so does 'run-test-wrapper' fit the bill? | 17:06 |
fungi | since whatever we're doing in the initial few will get cargo-culted everywhere later | 17:06 |
*** yassine has quit IRC | 17:06 | |
fungi | jeblair: maybe run-shell-tests.sh? | 17:06 |
mtreinish | fungi: that's a fair point | 17:06 |
jeblair | fungi: i'm hoping that bash8 and the rst checker will split out of their respective repos, and then we can have proper scripts for them. | 17:06 |
mtreinish | well the tests are actually normally python | 17:06 |
mtreinish | and the wrapper is shell | 17:06 |
fungi | mmm | 17:06 |
*** mrmartin has quit IRC | 17:07 | |
*** jamielennox is now known as jamielennox|away | 17:07 | |
fungi | but we don't run those python-based tests via tox? | 17:07 |
*** salv-orlando has joined #openstack-infra | 17:07 | |
jeblair | fungi: because the projects themselves aren't python | 17:07 |
jeblair | fungi: which doesn't stop us from running javascript tests for storyboard with tox | 17:07 |
jeblair | amusingly enough | 17:07 |
*** derekh has quit IRC | 17:07 | |
*** mrmartin has joined #openstack-infra | 17:08 | |
jeblair | but i don't think we have scaffolding to make a shell script project (or loose collection of rst files) a python project | 17:08 |
mtreinish | fungi: I think it was that tox requires a setup.py, or something like that | 17:08 |
jeblair | we could make the specs repos a sphinx project | 17:08 |
mtreinish | which is why sdague didn't pull it in | 17:08 |
mtreinish | because that caused other problems | 17:08 |
clarkb | mtreinish: it does, because it installs your code into the venvs | 17:08 |
clarkb | mtreinish: not sure how it causes problems though. our puppet repo has one | 17:08 |
dhellmann | jeblair: that's what I was planning to do for the oslo specs repo, so we could publish approved docs (esp. with images) | 17:09 |
dhellmann | jeblair: but now that I see there is some discussion of standardization, I'll wait to see how that falls out | 17:09 |
*** e0ne has quit IRC | 17:09 | |
*** harlowja_away is now known as harlowja | 17:09 | |
fungi | just making sure we have a good reason for special snowflakes which aren't tox-based... because if minor work could make them run via tox we already have the job running mechanisms for that standardized anyway | 17:09 |
dhellmann | jeblair: fwiw, for oslo I was going to do /$release/$libname/$bpname.rst or /$release/$libname/$bpname/index.rst (for bps with lots of attachments) | 17:10 |
dhellmann | fungi: is that py33 test still running? | 17:10 |
jeblair | fungi: ++, dhellmann: let's explore the idea of making specs repos sphinx projects, and just using tox to run the tests (and build and eventually publish them) | 17:10 |
mtreinish | fungi: I'm fine with it on the specs repo, especially if we start using sphinx (that's a really good idea) because that'll make it easier, just use the python job. | 17:10 |
mtreinish | I remember there was some objection with devstack but we can keep that separate from the specs stuff | 17:11 |
*** krotscheck has joined #openstack-infra | 17:11 | |
fungi | dhellmann: i'm still trying to figure out how to get mordred's proposed fix to work. apparently setuptools 3.3 doesn't play nice with ez_setup.py | 17:11 |
jeblair | russellb: ^ what do you think of having nova-specs be a sphinx project? | 17:11 |
dhellmann | fungi: ok, thanks, I just wanted to make sure I hadn't missed anything while I was in a meeting | 17:11 |
fungi | dhellmann: http://paste.openstack.org/show/73832/ if you want to join in the fun | 17:12 |
* dhellmann may already be having too much fun for today | 17:12 | |
dhellmann | fungi: why is that installing to python2.7 for a python 3.3 job? | 17:12 |
*** krotscheck_ has joined #openstack-infra | 17:13 | |
*** hogepodge has quit IRC | 17:13 | |
fungi | dhellmann: we have both python 2.7 and python 3.3 installed on these systems, and try to run ez_setup.py for both of them | 17:13 |
dhellmann | fungi, jeblair : I see https://review.openstack.org/#/c/75213/5/modules/openstack_project/files/nodepool/scripts/fix_pip.sh running ez_setup.py for both versions but get-pip only for one | 17:13 |
fungi | dhellmann: ubuntu won't run if you gut python 2.7 out of it | 17:13 |
*** krotscheck has quit IRC | 17:13 | |
*** krotscheck_ is now known as krotscheck | 17:13 | |
*** vkozhukalov_ has quit IRC | 17:14 | |
dhellmann | fungi: ok | 17:14 |
fungi | dhellmann: supposedly pip is python-version-agnostic so you need only one of it | 17:14 |
dhellmann | interesting | 17:14 |
*** dcramer_ has quit IRC | 17:15 | |
russellb | jeblair: i saw someone mention that before ... would love to see it prototyped | 17:15 |
fungi | dhellmann: and anyway, sudo python3 ez_setup.py fails in exactly the same way for me as sudo python ez_setup.py | 17:15 |
russellb | i think all options are still on the table | 17:15 |
dhellmann | fungi: you're taking all of the fun out of this | 17:15 |
clarkb | pleia2: https://review.openstack.org/#/c/81286/4 I am going to approve that, but fungi's inline suggestion is a good one for a follow-up change | 17:15 |
*** morganfainberg_Z is now known as morganfainberg | 17:15 | |
*** hogepodge has joined #openstack-infra | 17:15 | |
clarkb | pleia2: (adding a new change on top of jeblair's change that documents the channel management stuff) | 17:15 |
*** hogepodge has quit IRC | 17:16 | |
dhellmann | fungi: is /usr/local/lib/python2.7/dist-packages/setuptools-3.3-py2.7.egg a valid zip file? | 17:16 |
*** dstanek has quit IRC | 17:16 | |
*** chandan_kumar has quit IRC | 17:17 | |
fungi | dhellmann: it seems to be a shell-stubbed executable | 17:17 |
openstackgerrit | A change was merged to openstack-infra/config: Move channel reg requirement to top of IRC docs https://review.openstack.org/81286 | 17:18 |
clarkb | fungi: by the way we are getting near where I think we may want to bump the ES volume backed FS to 2TB | 17:18 |
fungi | dhellmann: these are the initial lines of the egg before it switches to 8-bit data: http://paste.openstack.org/show/73838/ | 17:19 |
clarkb | fungi: individual nodes have between 200GB and 300GB free space currently which is about 1.2 TB across the cluster which is getting closer to the 800GB needed to recover when a node fails | 17:19 |
clarkb | fungi: basically we need to make sure that the cluster wide free space is > max(data on any given node) so that if a node fails we can recover cleanly | 17:19 |
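A toy version of the capacity rule clarkb states, with made-up numbers in the same ballpark as the figures mentioned here:

```python
# A toy check of the rule stated above: to recover cleanly from a single node
# failure, cluster-wide free space must exceed the data held on any one node.
# The figures below are rough numbers from this conversation, not measurements.
def can_recover_from_one_failure(free_gb, used_gb):
    # clarkb's rule as stated: sum of free space > data on the largest node.
    # (A stricter check would exclude the failed node's own free space.)
    return sum(free_gb) > max(used_gb)


# ~200-300GB free on each of six 1TB nodes, so ~700-800GB of data per node.
free_gb = [250, 220, 300, 260, 210, 240]
used_gb = [750, 780, 700, 740, 790, 760]
print(can_recover_from_one_failure(free_gb, used_gb))
```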
dhellmann | fungi: oh, boy | 17:20 |
clarkb | sdague: jogo ^ btw any idea why we are indexing so many documents recently? we are up to 3/4 billion per day | 17:20 |
sdague | clarkb: not really, the only recent add was that http_errors one | 17:20 |
fungi | clarkb: so, we have a 25tb cinder quota at rackspace currently... we've got 13tb attached to static.o.o, 1tb each on the 6 new elasticsearch cluster members, 1tb on graphite.o.o... | 17:20 |
mtreinish | clarkb: billion!? | 17:20 |
sdague | clarkb: do we have a trend line? | 17:20 |
*** mrodden has joined #openstack-infra | 17:21 | |
sdague | mtreinish: 1 log line == 1 document | 17:21 |
fungi | clarkb: we could probably add 0.5tb volumes to each of them? | 17:21 |
clarkb | mtreinish: oui, numbers for yesterday: 713,117,965 documents, 489GB (that includes the replica) | 17:21 |
mtreinish | sdague: ahh that makes sense | 17:21 |
*** julienvey has quit IRC | 17:21 | |
jogo | clarkb: we don't graph file size trends do we? | 17:21 |
clarkb | jogo: we don't | 17:21 |
clarkb | fungi: yeah .5tb per node gives us an additional 3tb cluster wide which should help | 17:22 |
clarkb | fungi: but at this rate I wonder if we might also want to consider adding nodes instead | 17:22 |
jeblair | clarkb: i bet we could have the workers spit out statsd numbers for sizes | 17:22 |
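A hedged sketch of jeblair's statsd idea: have the log workers emit counters for lines and bytes indexed per filename. It speaks the plain statsd UDP wire format directly; the metric names and the statsd host are illustrative assumptions, not infra's actual configuration.

```python
# A hedged sketch of the suggestion above: emit statsd counters for how much
# each filename contributes to the logstash index. Metric names and the
# statsd host/port are assumptions; this is not the actual infra worker code.
import socket


def emit_counter(name, value, host='localhost', port=8125):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Plain statsd counter wire format: <metric>:<value>|c
        sock.sendto(('%s:%d|c' % (name, value)).encode('utf-8'), (host, port))
    finally:
        sock.close()


def record_indexed(filename, lines, size_bytes):
    # Turn e.g. "logs/syslog.txt.gz" into a dotted metric path.
    metric = filename.replace('/', '.').replace('.txt.gz', '')
    emit_counter('logstash.lines.%s' % metric, lines)
    emit_counter('logstash.bytes.%s' % metric, size_bytes)
```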
jogo | clarkb: ceilometer-collector is 7.7MBs | 17:23 |
clarkb | jogo: we don't index ceilometer anything | 17:23 |
*** dims has joined #openstack-infra | 17:23 | |
dhellmann | fungi: too bad ez_setup.py doesn't seem to have an option to turn off the egg archive creation | 17:23 |
jogo | clarkb: ohh right indexing | 17:23 |
dhellmann | fungi: running it locally produces the same header, and running the egg through bash does a bootstrap installation of setuptools | 17:24 |
*** ociuhandu has joined #openstack-infra | 17:24 | |
jogo | clarkb: http://logstash.openstack.org/#eyJzZWFyY2giOiIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE0NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5NTI0OTg2ODk3NywibW9kZSI6InNjb3JlIiwiYW5hbHl6ZV9maWVsZCI6ImZpbGVuYW1lIn0= | 17:24 |
clarkb | fungi: jeblair: I think we have some breathing room to consider options, I do not think this needs to be done today | 17:25 |
clarkb | jogo: I think I am going to start a thing where people post queries instead of links | 17:25 |
jogo | clarkb: oh that is only 2k most recent never mind | 17:25 |
jogo | clarkb: it's not a query, it's a page | 17:25 |
clarkb | what do you mean? | 17:25 |
sdague | jogo: it is a query | 17:25 |
jogo | blank | 17:26 |
jogo | trying to see which filename has the most hits | 17:26 |
jogo | in a given time window | 17:26 |
*** harlowja has quit IRC | 17:26 | |
sdague | clarkb: how I wish logstash used actual params instead of the uuencoded json | 17:26 |
clarkb | sdague: me too | 17:26 |
sdague | clarkb: can we get documents / filename / build ? | 17:27 |
sdague | as some kind of statistics | 17:27 |
*** dcramer_ has joined #openstack-infra | 17:27 | |
clarkb | sdague: with some queries I am sure | 17:27 |
clarkb | do a query by filename then map-reduce based on build | 17:27 |
jogo | looks like kibana can't get us the information you want | 17:28 |
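Outside kibana, clarkb's "query by filename then map-reduce by build" could be expressed as a raw Elasticsearch terms aggregation, roughly as sketched below. The field names (filename, build_uuid), the index name and the endpoint are assumptions, and it presumes an Elasticsearch version with aggregation support.

```python
# A hedged sketch of the "documents per filename per build" question above,
# written as a nested Elasticsearch terms aggregation. Field names, index
# name and endpoint are assumptions about the logstash schema in use.
import json
import urllib2  # urllib.request on Python 3

query = {
    "size": 0,
    "aggs": {
        "per_filename": {
            "terms": {"field": "filename", "size": 20},
            "aggs": {
                "per_build": {"terms": {"field": "build_uuid", "size": 10}}
            }
        }
    }
}

req = urllib2.Request(
    "http://localhost:9200/logstash-2014.03.19/_search",
    data=json.dumps(query),
    headers={"Content-Type": "application/json"})
print(urllib2.urlopen(req).read())
```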
clarkb | fungi: jeblair: in any case if we have an emergency disk situation short term fix is to delete old indexes to make room | 17:28 |
clarkb | fungi: jeblair: this is easy to do via elasticsearch-head or curl on one of the machines | 17:29 |
*** rpodolyaka1 has joined #openstack-infra | 17:29 | |
ttx | top of gate now at 17h46 min, not sure what our record was | 17:29 |
clarkb | ttx: record was close to a few days iirc | 17:29 |
*** hogepodge has joined #openstack-infra | 17:29 | |
ttx | clarkb: great! | 17:30 |
sdague | clarkb: ceilometer reports into syslog | 17:31 |
ttx | at least it's mostly stable now. Was 15 hours 5 hours ago | 17:31 |
sdague | ttx: we topped 60hrs in january | 17:31 |
ttx | sdague: how quick I can forget | 17:31 |
*** wenlock has joined #openstack-infra | 17:31 | |
sdague | though I'm not sure I'd count it currently as stable | 17:32 |
clarkb | sdague: wait really? | 17:32 |
jogo | a quick count shows we had about 75 known gate resets in the last 24 hours | 17:32 |
clarkb | sdague: that might do it | 17:32 |
sdague | the tempest change at position #4 was at position #6 3 hrs ago | 17:32 |
jeblair | jogo: known gate reset == reset due to classified bug? | 17:32 |
jogo | jeblair: correct | 17:32 |
jogo | jeblair: sum from this page http://status.openstack.org/elastic-recheck/gate.html | 17:33 |
sdague | we're at 83% classification over 14 days | 17:33 |
*** rpodolyaka1 has quit IRC | 17:33 | |
ttx | looks like it's time to ring the gate bug bell again | 17:33 |
ttx | before it adversely impacts our capacity to release | 17:34 |
openstackgerrit | A change was merged to openstack-infra/devstack-gate: Add django_openstack_auth https://review.openstack.org/81178 | 17:34 |
sdague | clarkb: it also looks like someone is running amqp debug level in syslog | 17:34 |
sdague | http://logs.openstack.org/32/81332/2/check/check-tempest-dsvm-neutron/cab7fc5/logs/syslog.txt.gz | 17:34 |
* clarkb cries | 17:35 | |
sdague | though proxy-server, I thought that was swift | 17:35 |
clarkb | sdague: proxy-server is swift | 17:35 |
sdague | Mar 18 23:25:35 devstack-precise-rax-dfw-2836361 proxy-server: STDOUT: 2014-03-18 23:25:35.864 27814 DEBUG amqp [-] Closed channel #1 _do_close /usr/local/lib/python2.7/dist-packages/amqp/channel.py:104 (txn: tx38decff13287498aa41ab-005328d5ef) (client_ip: 127.0.0.1) | 17:35 |
ttx | sdague: that may explain those 2h30 runs | 17:35 |
sdague | so that totally confuses me then | 17:35 |
jogo | clarkb: n-cpu is massive too | 17:35 |
sdague | jogo: not at info | 17:35 |
clarkb | jogo: we filter n-cpu to > DEBUG though | 17:35 |
jogo | sdague: oh right | 17:35 |
sdague | ttx: no, that wouldn't impact run time | 17:36 |
*** harlowja has joined #openstack-infra | 17:36 | |
sdague | clarkb: I didn't think swift used amqp | 17:36 |
ttx | anyway, time to get some exercise. Talk to you all later | 17:36 |
notmyname | sdague: it doesn't | 17:36 |
clarkb | sdague: neither did I | 17:36 |
*** mrodden has quit IRC | 17:37 | |
notmyname | sdague: maybe some middleware installed? (ceilometer?) | 17:37 |
sdague | so that can't be the same proxy server, right? | 17:37 |
clarkb | so proxy-server must be something else | 17:37 |
jogo | ttx: o/ btw we have been pushing on bug fixing quietly | 17:37 |
jogo | cinder fixed two already | 17:37 |
sdague | notmyname: yeh, that could be | 17:37 |
sdague | ceilometer middleware could cause it | 17:37 |
jogo | oh ceilometer | 17:37 |
notmyname | sdague: actually, it's only going to be ceilometer or keystone | 17:37 |
sdague | notmyname: well keystone wouldn't use amqp either | 17:37 |
notmyname | sdague: then ceilometer it is :-) | 17:38 |
sdague | clarkb: so yeh, that's probably the culprit | 17:38 |
sdague | yeh, if you look at s-proxy logs - http://logs.openstack.org/32/81332/2/check/check-tempest-dsvm-neutron/cab7fc5/logs/screen-s-proxy.txt.gz | 17:39 |
fungi | so if we curtail that in ceilo, utilization on the cluster should fall back off somewhat over the next couple weeks, yeah? | 17:39 |
sdague | and remember that everything in swift is going to syslog as well | 17:39 |
sdague | fungi: it will definitely have an impact | 17:40 |
sdague | that's ~80k log lines indexed per devstack run | 17:41 |
*** sandywalsh has quit IRC | 17:41 | |
sdague | actually scratch that | 17:41 |
sdague | 80k is a neutron run | 17:42 |
*** weshay has quit IRC | 17:42 | |
*** weshay has joined #openstack-infra | 17:42 | |
sdague | this tempest full run I'm looking at is 180k | 17:42 |
sdague | lines | 17:42 |
*** weshay has quit IRC | 17:43 | |
sdague | so we're probably talking about 3/4 million documents per changeset | 17:43 |
*** weshay has joined #openstack-infra | 17:43 | |
sdague | and I think 600 changesets a day was a working average | 17:43 |
sdague | so that's probably 50% of our content | 17:44 |
sdague | in logstash | 17:44 |
clarkb | nice | 17:44 |
sdague | if 3/4 billion documents is our daily | 17:44 |
sdague | ok, so how do we do this right? | 17:45 |
sdague | is there a sane way to turn ceilometer logging down in the swift middleware | 17:45 |
*** dcramer__ has joined #openstack-infra | 17:45 | |
openstackgerrit | Cedric Brandily proposed a change to openstack-infra/git-review: Add http(s) protocol support to fetch_review and list_reviews https://review.openstack.org/79280 | 17:46 |
*** dcramer_ has quit IRC | 17:47 | |
sdague | I'm also completely happy just removing the ceilometer swift middleware and making them sort this out later | 17:47 |
*** khyati has quit IRC | 17:47 | |
clarkb | sdague: well it isn't an emergency yet :) | 17:47 |
clarkb | sdague: that said I find it somewhat awesome that we can be more aware of this stuff now | 17:47 |
jeblair | sdague: is swift logging to syslog and screen? | 17:47 |
clarkb | because I would consider this an actual deployment bug | 17:47 |
sdague | it is, in the sense that it's crippled | 17:48 |
sdague | jeblair: yes | 17:48 |
clarkb | however we don't consume swift-project from screen | 17:48 |
clarkb | so having swift *proxy go to syslog is kinda nice | 17:48 |
sdague | clarkb: so in my mind this is an emergency | 17:48 |
clarkb | sdague: ok | 17:48 |
sdague | because ER is now blind | 17:48 |
sdague | as we are 30k events behind | 17:48 |
jeblair | sdague: so my first thought is can we turn off swift syslogging? | 17:48 |
clarkb | sdague: well it isn't blind, it is mostly keeping up (the biggest behindness was cluster recovery after the OOM) | 17:49 |
clarkb | sdague: but agreed this makes recovery much slower | 17:49 |
sdague | clarkb: the bot keeps timing out | 17:49 |
sdague | so it's not reporting on changes | 17:49 |
openstackgerrit | Cedric Brandily proposed a change to openstack-infra/git-review: Git review assumes the wrong ssh default port https://review.openstack.org/79281 | 17:49 |
clarkb | sdague: yeah we haven't been able to catch up since recovery happened | 17:49 |
sdague | which is why I am calling it an emergency | 17:49 |
clarkb | wfm | 17:49 |
sdague | jeblair: if we disable syslogging on swift we don't have timestamps | 17:50 |
sdague | because swift doesn't use oslo logging | 17:50 |
clarkb | sdague: but only for swift proxy | 17:50 |
clarkb | the other swift files mostly work | 17:50 |
fungi | dhellmann: so, anyway regardless of what zipimport thinks, the unzip utility is able to extract the contents of that egg just fine | 17:50 |
sdague | clarkb: sure, though isn't swift proxy the place we normally are finding the issues? | 17:50 |
*** talluri has joined #openstack-infra | 17:50 | |
clarkb | sdague: yes I think so :/ | 17:50 |
*** bada_ has quit IRC | 17:51 | |
*** afazekas has quit IRC | 17:51 | |
jeblair | sdague, clarkb: would we normally deal with this by not indexing DEBUG on the swift logs? | 17:52 |
*** bada_ has joined #openstack-infra | 17:52 | |
sdague | jeblair: correct | 17:52 |
*** dcramer___ has joined #openstack-infra | 17:52 | |
sdague | jeblair: though this case is probably also different | 17:52 |
*** medieval1 has quit IRC | 17:52 | |
sdague | because ceilometer log levels .... not good | 17:52 |
clarkb | rcarrillocruz: reviewed. sorry if there was confusion | 17:52 |
sdague | which is why we don't index any of ceilometer logs today | 17:53 |
clarkb | right | 17:53 |
sdague | because they are basically completely unuseful | 17:53 |
sdague | and giant | 17:53 |
openstackgerrit | A change was merged to openstack-dev/pbr: Make tools/integration.sh take a branch https://review.openstack.org/80723 | 17:53 |
jeblair | clarkb, sdague: two ideas: reconfigure rsyslog to filter them? grok filters to ignore them? | 17:54 |
sdague | jeblair: well, the experience is if you got to grok, it's too late | 17:54 |
notmyname | I only noticed this conversation when swift was mentioned, but I don't have any context. what's the problem du jour? | 17:55 |
*** sandywalsh has joined #openstack-infra | 17:55 | |
jeblair | sdague: though, i mean the theoretical "CEILOMETER_LOG_SPAM=disabled" in localrc sounds nice. | 17:55 |
sdague | notmyname: we were trying to figure out why we were up to 3/4 billion documents a day in logstash in the gate | 17:55 |
*** dcramer__ has quit IRC | 17:55 | |
sdague | and we think we narrowed down that 50 - 60% of that is ceilometer middleware in swift | 17:56 |
sdague | that 3/4 B documents is currently exceeding the capacity for our logstash cluster to keep up | 17:56 |
fungi | dhellmann: also, inserting zipfile.is_zipfile('/usr/local/lib/python2.7/dist-packages/setuptools-3.3-py2.7.egg') immediately after the point where the install phase fails in ez_setup.py returns True, so that's really weird | 17:56 |
notmyname | sdague: ah, ok | 17:57 |
sdague | jeblair: rm -rf /opt/ceilometer ? :) | 17:57 |
sdague | just to show an example, this is what ceilometer logging looks like - http://logs.openstack.org/32/81332/2/gate/gate-tempest-dsvm-full/4e4e235/logs/screen-ceilometer-anotification.txt.gz#_2014-03-19_16_46_43_740 | 17:57 |
*** dizquierdo has quit IRC | 17:57 | |
jeblair | and my browser's dead | 17:57 |
clarkb | sdague: does ceilometer use local* for its level? maybe we can filter that way? | 17:58 |
clarkb | just have syslog dump it on the floor for us | 17:58 |
sdague | clarkb: so the problem is, it's in the swift address space right? | 17:58 |
Ajaeger1 | jeblair: our transifex version is not able to create new resources - and I need to have for trove an initial resource created. | 17:58 |
sdague | so it's just using whatever swift is using | 17:58 |
clarkb | sdague: oh right so ya that | 17:58 |
Ajaeger1 | Can you do this, please? | 17:58 |
notmyname | clarkb: sdague: which is configurable per-middleware via the config | 17:59 |
Ajaeger1 | jeblair: you're admin for trove in transifex... | 17:59 |
jogo | in last 12 hours 107 million logstash entries tagged syslog | 17:59 |
clarkb | Ajaeger1: fifieldt and daisy have been doing most of that | 17:59 |
jogo | out of 279 million hits in same duration | 17:59 |
sdague | ok, so 35% | 17:59 |
notmyname | sdague: clarkb: any idea if the swift ceilometer middleware is using swift's get_logger? or maybe it's using oslo? (I don't know) | 17:59 |
sdague | notmyname: I don't know | 18:00 |
jogo | notmyname: when did swift ceilometer middleware get turned on? | 18:00 |
Ajaeger1 | clarkb: jeblair has the permissions on trove - and only he. I've been debugging transifex problems with both... | 18:00 |
Ajaeger1 | (exactly: Jenkins and jeblair) | 18:00 |
jeblair | Mar 18 23:47:51 devstack-precise-rax-dfw-2836361 proxy-server: STDOUT: 2014-03-18 23:47:51.661 27814 DEBUG amqp [-] Closed channel #1 _do_close /usr/local/lib/python2.7/dist-packages/amqp/channel.py:104 (txn: tx1366227f15414831bcc11-005328db27) (client_ip: 127.0.0.1) | 18:00 |
clarkb | Ajaeger1: oh gotcha | 18:00 |
fungi | dstufft: any hints on why zipimport might think that the shell-stubbed setuptools-3.3-py2.7.egg isn't valid even though zipfile.is_zipfile says it's legit? | 18:00 |
notmyname | jogo: I have no idea. I'm generally not aware of changes that are made to swift in the gate :-/ | 18:00 |
notmyname | jogo: ie devstack | 18:00 |
jeblair | sdague: is that the kind of line you're interested in? | 18:00 |
sdague | jeblair: right those are things we should be dumping | 18:01 |
*** medieval1 has joined #openstack-infra | 18:01 | |
sdague | though I was actually pointing to the notification service which we don't index | 18:01 |
jeblair | Ajaeger1: ack | 18:01 |
*** vhoward has left #openstack-infra | 18:01 | |
sdague | because it is equally unhelpful | 18:01 |
*** wenlock has quit IRC | 18:01 | |
*** maxbit has quit IRC | 18:01 | |
*** vhoward has joined #openstack-infra | 18:02 | |
*** maxbit has joined #openstack-infra | 18:02 | |
*** mrmartin has quit IRC | 18:02 | |
*** wenlock has joined #openstack-infra | 18:02 | |
Ajaeger1 | jeblair: just create the resource "trove-translations", please | 18:02 |
*** amcrn has quit IRC | 18:02 | |
Ajaeger1 | jeblair: Here's a quick URL: https://www.transifex.com/projects/p/trove/ | 18:02 |
jeblair | we _can_ apply regexes in rsyslog... | 18:02 |
jeblair | i'm not sure we want to start chasing that though | 18:03 |
dstufft | fungi: not a clue, i don't know much about zipimport | 18:03 |
dstufft | fungi: there was a bug in 3.3 though if it's that version of python | 18:03 |
fungi | dstufft: running into it when trying to run ez_setup.py | 18:03 |
fungi | dstufft: i hit it with both python 2.7 and 3.3 | 18:03 |
jeblair | Ajaeger1: trove is part of the openstack hub project, so all the openstack maintainers can manage it | 18:04 |
*** caleb_ has joined #openstack-infra | 18:04 | |
jogo | jeblair: this sounds like a ceilometer bug why not disable the ceilometer swift patch until its resolved | 18:04 |
openstackgerrit | Joshua Harlow proposed a change to openstack/requirements: Add a file that can detail the requirements files https://review.openstack.org/81589 | 18:04 |
fungi | dstufft: the ez_setup.py output looks like http://paste.openstack.org/show/73832/ | 18:04 |
sdague | jogo: I'm fine with that | 18:04 |
jeblair | jogo: that sounds like what sdague is proposing; i'm just brainstorming. | 18:04 |
sdague | it was added a long time ago, but it's just an include of the ceilometer egg | 18:04 |
*** homeless has quit IRC | 18:04 | |
sdague | https://github.com/openstack-dev/devstack/blob/master/lib/swift#L337-L341 | 18:05 |
jogo | sdague: do we have a bug for this that I can use? | 18:05 |
sdague | jogo: no, there is no bug | 18:05 |
jogo | I was just going to comment out those lines | 18:05 |
jogo | and say skipped until bug x is fixed | 18:05 |
*** maxbit has quit IRC | 18:05 | |
*** wenlock has quit IRC | 18:05 | |
Ajaeger1 | jeblair: Shall I ping Daisy instead? | 18:05 |
jogo | sdague: thoughts? | 18:06 |
*** wenlock has joined #openstack-infra | 18:06 | |
*** maxbit has joined #openstack-infra | 18:06 | |
jeblair | Ajaeger1: adding a new resource requires uploading a .po file | 18:06 |
sdague | jogo: so why don't you propose that, please file the bug as well, and we'll see if anything in our tests requires that running | 18:06 |
openstackgerrit | Joshua Harlow proposed a change to openstack/requirements: Add a file that can detail the requirements files https://review.openstack.org/81589 | 18:06 |
sdague | it will take an hour to get those results anyway | 18:07 |
jogo | sdague: ack | 18:07 |
sdague | we can decide then | 18:07 |
Ajaeger1 | jeblair: I know. Unfortunately, the transifex client is broken in its latest release and does not automatically create a new resource. This is fixed in git but not released yet. | 18:07 |
Ajaeger1 | jeblair: I can give you the current po file. | 18:08 |
*** Sukhdev has quit IRC | 18:08 | |
jeblair | clarkb: can you verify that you have access to this? https://www.transifex.com/projects/p/trove/edit/ | 18:09 |
jeblair | just to make sure the hub thing works as expected | 18:09 |
clarkb | jeblair: checking now | 18:09 |
*** rpodolyaka1 has joined #openstack-infra | 18:09 | |
Ajaeger1 | clarkb, thanks! | 18:09 |
*** rfolco has quit IRC | 18:10 | |
clarkb | jeblair: Ajaeger1 yup works for me | 18:10 |
Ajaeger1 | Great! | 18:10 |
jeblair | Ajaeger1: so the regular admins should be able to help in the future. if this is blocking you, i can upload the po file if you tell me how | 18:11 |
Ajaeger1 | Yes, due to the bug in the transifex client and me not having the permissions, it's blocking. ;) | 18:11 |
Ajaeger1 | jeblair: easiest thing: Give me rights on trove, let me upload - and revoke again ;) I'm jaegerandi on transifex | 18:12 |
Ajaeger1 | Otherwise, I can email you the po file- let me check the webui... | 18:12 |
jeblair | Ajaeger1: you have access now | 18:13 |
clarkb | Ajaeger1: is the bug in the client fixed? e.g. should we be updating the client somewhere | 18:13 |
dhellmann | fungi: too bad http://hg.python.org/cpython/file/be1e015a8405/Modules/zipimport.c#l879 doesn't include the file_offset value | 18:13 |
Ajaeger1 | The bug is fixed in git. | 18:14 |
fungi | dhellmann: at this point i'm just trying to figure out how/why ez_setup.py is working for anyone... | 18:14 |
jogo | sdague: lets see how this goes https://review.openstack.org/#/c/81592/1 | 18:14 |
clarkb | Ajaeger1: but hasn't been released yet? I suppose we should wait for the release | 18:14 |
dhellmann | fungi: it definitely works for me locally using 2.7 and 3.3 | 18:14 |
Ajaeger1 | Yeah, updating the client would be nice - just don't know how you do it. Let me create the resource now. | 18:14 |
*** khyati has joined #openstack-infra | 18:14 | |
Ajaeger1 | clarkb: not released - correct | 18:14 |
sdague | jogo: cool, thanks | 18:14 |
clarkb | Ajaeger1: https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/manifests/proposal_slave.pp#n17 will do it for us when they make a new release | 18:15 |
fungi | dhellmann: ubuntu 12.04? | 18:15 |
*** homeless has joined #openstack-infra | 18:15 | |
Ajaeger1 | jeblair: Uploaded, you can revoke my permissions again. | 18:15 |
dhellmann | fungi: precise is 12.04, right? | 18:15 |
jeblair | Ajaeger1: done, thanks! | 18:15 |
fungi | dhellmann: yep | 18:16 |
clarkb | fwiw I am happy to let Ajaeger1 have those perms permanently | 18:16 |
clarkb | but may need to run that by daisy, not sure how they are organizing those | 18:16 |
dhellmann | fungi: python 2.7.3 and 3.3.3 (the latter installed from deadsnakes) | 18:16 |
fungi | dhellmann: if so, then there's something pythonbroken on our nodepool nodes | 18:16 |
*** khyati has quit IRC | 18:16 | |
dhellmann | fungi: bad zlib? | 18:16 |
Ajaeger1 | clarkb: hope there are not too many missing resources - Daisy already gave me permission to fix openstack-manuals-i18n where 4 resources were missing. | 18:16 |
dhellmann | fungi: s/bad/different/ | 18:16 |
fungi | dhellmann: zlib1g 1:1.2.3.4.dfsg-3ubuntu4 | 18:17 |
Ajaeger1 | clarkb: will ask daisy if I notice more things to fix. I didn't know about the Hub thingie and that only clarkb had permissions... | 18:17 |
dhellmann | fungi: same | 18:17 |
fungi | dhellmann: that *does* look like a made-up version number ;) | 18:18 |
*** caleb_` has joined #openstack-infra | 18:18 | |
dhellmann | heh | 18:18 |
clarkb | Ajaeger1: I just approved https://review.openstack.org/#/c/81450/ so hopefully stuff moves on that front too | 18:18 |
*** caleb_ has quit IRC | 18:18 | |
*** gokrokve has quit IRC | 18:18 | |
*** khyati has joined #openstack-infra | 18:18 | |
Ajaeger1 | clarkb: Great, thanks! I'm trying to fix what I can in this area right now... | 18:19 |
*** vhoward has left #openstack-infra | 18:19 | |
openstackgerrit | A change was merged to openstack-infra/config: Fix horizon-upstream-translation-update https://review.openstack.org/81450 | 18:21 |
*** bada_ has quit IRC | 18:21 | |
*** rfolco has joined #openstack-infra | 18:21 | |
*** bada_ has joined #openstack-infra | 18:21 | |
fungi | dhellmann: dstufft: even weirder... python -c 'import sys;sys.path.insert(0, "/usr/local/lib/python2.7/dist-packages/setuptools-3.3-py2.7.egg");import setuptools;print(setuptools.__file__)' returns /usr/local/lib/python2.7/dist-packages/setuptools-3.3-py2.7.egg/setuptools/__init__.pyc | 18:21 |
fungi | so zipimport *seems* to be working with that file | 18:22 |
*** dcramer___ has quit IRC | 18:22 | |
fungi | which makes me wonder whether there's some sort of sync race on the file | 18:22 |
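A minimal sketch of the comparison being made here, assuming the egg path from the log; it contrasts zipfile's view of the archive with zipimport's:

```python
# Sketch only: compare zipfile's and zipimport's opinion of the same egg.
# The path is the one from the log above; everything else is illustrative.
import zipfile
import zipimport

EGG = '/usr/local/lib/python2.7/dist-packages/setuptools-3.3-py2.7.egg'

print(zipfile.is_zipfile(EGG))               # reportedly True on the failing nodes

try:
    importer = zipimport.zipimporter(EGG)    # raises ZipImportError on a bad archive
    print(importer.find_module('setuptools') is not None)
except zipimport.ZipImportError as exc:
    print('zipimport rejected the egg: %s' % exc)
```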
clarkb | fungi: jeblair for changes like https://review.openstack.org/#/c/81548/1 we are good to approve those during the week right? then they will get caught in the new project stuff on friday? (the manage-projects script must be manually triggered today so I think this is fine) | 18:22 |
fungi | clarkb: seems fine to me | 18:23 |
*** dcramer_ has joined #openstack-infra | 18:23 | |
jeblair | clarkb, fungi: wfm | 18:23 |
*** khyati has quit IRC | 18:23 | |
fungi | clarkb: though that group won't exist and acl won't be effective until then | 18:23 |
fungi | er, acl won't be updated. it'll be as effective as it is now until then | 18:24 |
*** khyati has joined #openstack-infra | 18:24 | |
*** vhoward has joined #openstack-infra | 18:25 | |
clarkb | fungi: yup | 18:25 |
*** Adri2000 has quit IRC | 18:26 | |
jogo | never seen this bug before: http://logs.openstack.org/68/78568/5/gate/gate-tempest-dsvm-neutron/99686c9/console.html#_2014-03-18_21_31_06_972 | 18:27 |
jogo | ValueError: Unable to compute factors p and q from exponent d | 18:28 |
fungi | jogo: rsa? | 18:28 |
fungi | sounds like an rsa-related operation anyway | 18:29 |
* fungi looks at the log | 18:29 | |
jogo | yeah from paramiko | 18:29 |
fungi | ahh, yep, busted ssh key | 18:29 |
jogo | fungi: how does that happen? | 18:30 |
*** bhuvan has joined #openstack-infra | 18:30 | |
*** salv-orlando has quit IRC | 18:31 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Revoke sudo from most jobs https://review.openstack.org/72779 | 18:31 |
fungi | jogo: i'm going to guess a paramiko bug, or issue in some underlying crypto primitive system lib it's relying on | 18:32 |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Allow bare-precise nodes to sudo https://review.openstack.org/72780 | 18:32 |
fungi | jogo: i think it's trying to derive the public key string from a generated private key | 18:32 |
jogo | fungi: yeah | 18:32 |
jogo | anyway next bug | 18:32 |
jogo | http://logs.openstack.org/11/79511/5/gate/gate-tempest-dsvm-neutron/ebf4074/console.html | 18:33 |
jogo | http://logs.openstack.org/11/79511/5/gate/gate-tempest-dsvm-neutron/ebf4074/console.html#_2014-03-18_19_08_56_765 | 18:33 |
jogo | ERROR: Failed to upload files | 18:33 |
*** andreaf has joined #openstack-infra | 18:33 | |
*** dcramer_ has quit IRC | 18:33 | |
jogo | 83 hits in 2 days | 18:34 |
fungi | jogo: pycrypto more likely than paramiko, and i suspect paramiko is failing to preserve the original traceback for the exception it caught and rethrew | 18:34 |
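For context, a hedged sketch of the general operation the traceback points at (generating an RSA key with paramiko and deriving its public half); this is illustrative only, not the actual tempest code path that failed:

```python
# Sketch: round-trip a generated RSA key through PEM and derive the public
# key string -- the kind of step where paramiko raised
# "Unable to compute factors p and q from exponent d" in the gate log.
import io
import paramiko

key = paramiko.RSAKey.generate(bits=2048)          # new private key
pem = io.StringIO()                                # Python 3 text buffer
key.write_private_key(pem)                         # serialize to PEM
pem.seek(0)

reloaded = paramiko.RSAKey.from_private_key(pem)   # re-load it
print(reloaded.get_name(), reloaded.get_base64())  # "ssh-rsa AAAA..."
```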
jogo | message:"Failed to upload files" AND tags:"console" | 18:34 |
*** yassine has joined #openstack-infra | 18:34 | |
jogo | is that a valid infra bug? | 18:35 |
*** ildikov_ has quit IRC | 18:35 | |
jeblair | dims: why do you remove the ubuntu cloud archive in https://review.openstack.org/#/c/79816 ? | 18:35 |
fungi | jogo: ERROR: gate_hook failed | 18:35 |
fungi | jogo: i think you're looking at a cascade failure there | 18:35 |
*** melwitt has quit IRC | 18:36 | |
jeblair | 2014-03-18 19:08:56.792 | Caused by: java.io.InterruptedIOException | 18:36 |
jogo | http://logs.openstack.org/09/80409/7/gate/gate-keystone-docs/f5ff4ad/console.html | 18:36 |
*** talluri has quit IRC | 18:36 | |
*** jgallard has quit IRC | 18:36 | |
jogo | http://logs.openstack.org/35/81335/1/gate/gate-tempest-dsvm-neutron-heat-slow/fba8c6f/console.html | 18:37 |
jogo | although those both got marked as unstable | 18:37 |
jeblair | jogo: see the interrupted exception? | 18:37 |
jeblair | jogo: i don't believe that job was reported to gerrit | 18:37 |
jogo | jeblair: ahh makes sense | 18:38 |
jeblair | jogo: which means it's probably an aborted zuul build | 18:38 |
*** sabari2 has joined #openstack-infra | 18:38 | |
jogo | I thought we had a hidden bug for that | 18:38 |
jeblair | jogo: maybe it doesn't quite match that signature? | 18:38 |
fungi | jogo: jeblair: correct, that failure never ended up hitting https://review.openstack.org/80409 (occurred somewhere in the vast span of time it was in the gate, so almost certainly related to a gate reset) | 18:39 |
openstackgerrit | A change was merged to openstack-infra/config: Updating the group config for the golang-client project to reflect the core and ptl structure. https://review.openstack.org/81548 | 18:40 |
jogo | jeblair: yup message:"java.io.InterruptedIOException" | 18:40 |
jogo | AND message:"hudson.Launcher$RemoteLauncher.launch" | 18:40 |
jogo | AND filename:console.html | 18:40 |
jeblair | jogo: i think you should get rid of hudson.Launcher$RemoteLauncher.launch | 18:40 |
*** dcramer_ has joined #openstack-infra | 18:40 | |
jeblair | jogo: i'm pretty sure any java.io.InterruptedIOException in the console log is going to be related to zuul aborts | 18:40 |
jeblair | jogo: (possibly job timeouts, but i don't think it's important to distinguish for this purpose) | 18:41 |
*** talluri has joined #openstack-infra | 18:41 | |
jogo | jeblair: agreed | 18:42 |
openstackgerrit | Joe Gordon proposed a change to openstack-infra/elastic-recheck: Loosen fingerprint for bug 1260311 https://review.openstack.org/81597 | 18:43 |
uvirtbot | Launchpad bug 1260311 in openstack-ci "hudson.Launcher exception causing build failures" [Undecided,Invalid] https://launchpad.net/bugs/1260311 | 18:43 |
*** talluri has quit IRC | 18:44 | |
openstackgerrit | Joshua Harlow proposed a change to openstack/requirements: Add a file that can detail the requirements files https://review.openstack.org/81589 | 18:44 |
*** apevec has quit IRC | 18:46 | |
*** johnthetubaguy has quit IRC | 18:46 | |
*** julim has quit IRC | 18:47 | |
*** Adri2000 has joined #openstack-infra | 18:48 | |
*** Adri2000 has quit IRC | 18:48 | |
*** Adri2000 has joined #openstack-infra | 18:48 | |
*** gokrokve has joined #openstack-infra | 18:53 | |
*** Ryan_Lane has quit IRC | 18:56 | |
*** bada_ has quit IRC | 18:57 | |
fungi | dhellmann: i'm testing now to see whether i can recreate this issue running 'sudo python ez_setup.py' on an otherwise pristine ubuntu 12.04 vm in rackspace (i've confirmed it fails on both our bare-precise and py3k-precise nodepool node types) | 18:57 |
fungi | hopefully i can separate the wheat from the chaff this way | 18:58 |
*** jp_at_hp has quit IRC | 18:58 | |
dhellmann | fungi: thank you for spending so much time on this | 18:58 |
*** bada_ has joined #openstack-infra | 18:58 | |
*** mrmartin has joined #openstack-infra | 18:58 | |
openstackgerrit | Andreas Jaeger proposed a change to openstack-infra/config: Only initialize transifex if there's no .tx directory https://review.openstack.org/81599 | 18:58 |
dstufft | fungi: sorry I'm not more help :[ | 18:58 |
fungi | dhellmann: no problem. i've also confirmed that directly zipimport'ing from that egg fails in exactly the same way if i stick it at the end of the script, but running the exact same commands in an interactive python shell imports from it just fine. inserting a time.sleep(5) also made no difference | 18:59 |
fungi | dstufft: 'sokay. i'll have a bug report against *something* soon enough | 18:59 |
Ajaeger1 | clarkb, fungi: The change I did in https://review.openstack.org/#/c/81450/ is needed elsewhere as well ;(, seems nobody checks that all translation jobs are working properly ;(. See 81599 for the next patch. | 19:00 |
clarkb | Ajaeger1: yup reviewed | 19:00 |
fungi | Ajaeger1: yeah, they've been mostly broken for a while. proof that nobody usually looks at jobs that run in post | 19:00 |
*** Ryan_Lane has joined #openstack-infra | 19:00 | |
clarkb | jeblair: mordred: so the mysterious zuul merger dying phenomenon appears to be a deadlock around git operations | 19:00 |
Ajaeger1 | clarkb: you're ultra-quick - thanks. | 19:00 |
clarkb | jeblair: mordred: I have a thread that is >30 minutes old trying to do a git operation via subprocess.communicate | 19:01 |
Ajaeger1 | fungi: I agree, failures in post jobs need some visibility | 19:01 |
*** vhoward has quit IRC | 19:01 | |
clarkb | jeblair: mordred: and is just stuck there. Good news is I think that means the zuul merger itself is sound \o/ but now need to figure out why git is unhappy with pygit | 19:01 |
fungi | Ajaeger1: maybe we should have an e-mail reporter like we use for periodic jobs and use it to pester the -dev ml | 19:01 |
jeblair | clarkb: do you have something else touching those repos? | 19:01 |
clarkb | jeblair: have you seen that before with pygit? I can paste the threaddump too if you are interested in looking at it | 19:01 |
dstufft | fungi: what are you running in the python interactive btw? | 19:02 |
dstufft | which lines | 19:02 |
clarkb | jeblair: good question. I don't think so but somehow we did end up with two zuul mergers (the older one wouldn't even respond to sigusr2 and the pid in the pidfile was for the newer one) | 19:02 |
fungi | dstufft: python -c 'import sys;sys.path.insert(0, "/usr/local/lib/python2.7/dist-packages/setuptools-3.3-py2.7.egg");import setuptools;print(setuptools.__file__)' | 19:02 |
*** derekh has joined #openstack-infra | 19:02 | |
clarkb | jeblair: so I won't rule that out, certainly a possibility given ^ | 19:02 |
dstufft | fungi: try importing pkg_resources first | 19:03 |
fungi | dstufft: those commands raise the same exception from within ez_setup.py but not when run outside its context | 19:03 |
fungi | dstufft: trying | 19:03 |
openstackgerrit | Davanum Srinivas (dims) proposed a change to openstack-infra/devstack-gate: Temporary workaround for Time out errors from libvirt https://review.openstack.org/79816 | 19:03 |
fungi | dstufft: python -c 'import pkg_resources, sys;sys.path.insert(0, "/usr/local/lib/python2.7/dist-packages/setuptools-3.3-py2.7.egg");import setuptools;print(setuptools.__file__)' works too | 19:04 |
clarkb | jeblair: I think my next step is to check if the git process is still around when it happens and strace that | 19:04 |
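For reference, a minimal sketch of a SIGUSR2 stack-dump handler of the kind used to spot the stuck thread; this is not zuul's actual handler, just the general technique:

```python
# Sketch: dump every thread's stack on SIGUSR2, so a process stuck in
# subprocess.communicate() on a git call can be inspected without killing it.
import signal
import sys
import threading
import traceback

def dump_threads(signum, frame):
    names = {t.ident: t.name for t in threading.enumerate()}
    for ident, stack in sys._current_frames().items():
        print('--- thread %s (%s) ---' % (ident, names.get(ident, '?')))
        traceback.print_stack(stack)

signal.signal(signal.SIGUSR2, dump_threads)
```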
dims | jeblair, sorry, in this review, i don't use UCA at all, it's a leftover from another review. | 19:04 |
jeblair | dims: ok, so those lines can go away? | 19:04 |
*** e0ne has joined #openstack-infra | 19:04 | |
dims | jeblair, y i nuked 'em | 19:04 |
jeblair | dims: awesome, thanks! | 19:04 |
dstufft | fungi: python -c 'import sys;sys.path.insert(0, "/usr/local/lib/python2.7/dist-packages/setuptools-3.3-py2.7.egg"); import pkg_resources; import setuptools;print(setuptools.__file__)' ? | 19:05 |
*** freyes has joined #openstack-infra | 19:05 | |
SlickNik | sdague: Just discussed mockito at the trove meeting. | 19:05 |
SlickNik | sdague: timeline to move away is next 3-4 days. | 19:05 |
fungi | dstufft: just tried that and it works too | 19:05 |
sdague | SlickNik: nice! | 19:05 |
SlickNik | sdague: I'll be taking care of it. | 19:06 |
Ajaeger1 | fungi: yeah, might help. But don't send it for each failing job, create some rate-limit. | 19:06 |
sdague | thanks much | 19:06 |
dstufft | fungi: ok that was my brilliant idea anyway s:/ | 19:06 |
dstufft | pkg_resources does some bad things sometimes | 19:06 |
sdague | so we'll make sure to get the others through to requirements | 19:06 |
SlickNik | sdague: No worries. It's something that we need to do sooner rather than later. :) | 19:06 |
SlickNik | sdague: Thank you! | 19:07 |
fungi | dstufft: ooh! i also once got this out of ez_setup.py a moment ago... "zlib.error: Error -5 while decompressing data: incomplete or truncated stream" | 19:09 |
dstufft | fungi: you might want to put a print here and see what downloader is being used https://bitbucket.org/pypa/setuptools/src/f90c6708ef51520e8fa6783a330dfcf81f382262/ez_setup.py?at=default#cl-262 | 19:09 |
dstufft | are you checking the exact file that is downloaded, or did you download it yourself | 19:10 |
fungi | dstufft: downloading https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py directly | 19:10 |
fungi | i can't reproduce this on a pristine ubuntu 12.04 vm though, so it's something to do with what combination of system packages we do or don't have installed i think | 19:10 |
* fungi will brb | 19:11 | |
*** dcramer_ has quit IRC | 19:11 | |
vishy | dhellmann, dstufft, et. al.: anyone familiar with entry points? | 19:13 |
dstufft | familar enough to know i'll probably regret saying yes | 19:14 |
dstufft | :) | 19:14 |
lifeless | jeblair: https://review.openstack.org/#/c/80304/ and a release with that is still at the top of our jeblair christmas list | 19:14 |
vishy | dstufft: so I'm trying to run the nova tests | 19:15 |
vishy | from a source code checkout and stevedore isn't picking up the api extensions | 19:15 |
vishy | it appears that perhaps nova needs to be installed for the tests to work? | 19:15 |
dstufft | yes | 19:15 |
vishy | does that make sense? | 19:15 |
jeblair | lifeless: that may not be the change you're thinking of (it has already merged) | 19:15 |
dstufft | extension points are in the package metadata | 19:15 |
dstufft | it has to be installed, but you can install it while still running locally | 19:16 |
jeblair | lifeless: but yes, i plan on merging that one and releasing soon | 19:16 |
dhellmann | vishy: right, what dstufft said | 19:16 |
dstufft | pip install -e or setup.py develop will "install" your local directory | 19:16 |
dhellmann | devstack uses pip install -e | 19:16 |
jeblair | lifeless: https://review.openstack.org/#/c/76588/1 may be the one you were thinking of | 19:16 |
lifeless | jeblair: https://review.openstack.org/#/c/80304/ shows 'review in progress' in gerrit | 19:16 |
lifeless | jeblair: oh, yes 76588 was our xmas present | 19:17 |
lifeless | jeblair: 80304 is a new different leak | 19:17 |
dstufft | pip install -e is probably what you want | 19:17 |
dstufft | executing setup.py directly is typically a recipe for some kind of pain | 19:17 |
lifeless | jeblair: thank you for 76588 | 19:17 |
jeblair | lifeless: yeah, seeing https://review.openstack.org/#/c/76588/1 made me look for other possible instances. https://review.openstack.org/#/c/80304/ should never be an issue but is nice to clean up anyway | 19:18 |
dstufft | it might be minor pain, like you bumped into a table, it might be major pain like you cracked open your skull and got poison ivy at the same time | 19:18 |
jeblair | (it would only affect a process that stops and starts the gear server a lot) | 19:18 |
dims | jeblair, question for you - is there anywhere we can host the files from serge? - http://people.canonical.com/~serge/libvirt-0.9.8-2ubuntu18/ i'd hate for it to go away or someone blocking network access to it | 19:18 |
fungi | vishy: our tox-based tests mostly use setup.py develop | 19:18 |
fungi | but either is probably fine | 19:18 |
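A minimal sketch of why the install step matters, assuming an illustrative entry-point group name (not necessarily nova's real one): stevedore resolves extensions through pkg_resources, which only sees entry points recorded in installed package metadata:

```python
# Sketch: entry points come from installed package metadata (*.egg-info /
# *.dist-info), so a plain source checkout exposes none of them.
import pkg_resources

GROUP = 'nova.api.extensions'   # illustrative group name

for ep in pkg_resources.iter_entry_points(GROUP):
    print(ep.name, '->', ep.module_name)

# Running `pip install -e .` (or `python setup.py develop`) in the checkout
# registers its egg-info, after which the loop above starts yielding entries.
```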
*** caleb_` has quit IRC | 19:19 | |
jeblair | dims: i'm not sure we have a location that could stand up to our own use; maybe we could stick it in swift and use the rackspace cdn... | 19:19 |
dims | jeblair, something i can do myself? pointers please | 19:20 |
jeblair | dims: no, we generally don't do things like this because we only test with released software so we don't have anything set up for it | 19:21 |
jeblair | dims: when do you think it will make it to the ubuntu archive? | 19:21 |
*** saju_m has quit IRC | 19:21 | |
*** rpodolyaka1 has quit IRC | 19:22 | |
*** saju_m has joined #openstack-infra | 19:22 | |
jeblair | dims: (well, if you have access to a swift cluster with a cdn somewhere, you could put it there yourself) | 19:22 |
*** ildikov_ has joined #openstack-infra | 19:23 | |
*** vkozhukalov_ has joined #openstack-infra | 19:23 | |
*** dcramer_ has joined #openstack-infra | 19:24 | |
openstackgerrit | Sean Dague proposed a change to openstack-infra/elastic-recheck: convert to filename from tags https://review.openstack.org/81604 | 19:24 |
openstackgerrit | Sean Dague proposed a change to openstack-infra/elastic-recheck: convert to filename from tags https://review.openstack.org/81604 | 19:25 |
*** rpodolyaka1 has joined #openstack-infra | 19:25 | |
dims | jeblair, it will get into proposed today hopefully from what i know. not sure when it will get promoted from precise-proposed to precise-updates | 19:25 |
*** bada_ has quit IRC | 19:25 | |
jeblair | dims: i'll stick it in a swift container in the openstackci account | 19:25 |
*** bada_ has joined #openstack-infra | 19:26 | |
dims | jeblair, thanks! let me know the url and i'll update the review | 19:26 |
jeblair | dims, sdague: i just realized something | 19:26 |
jeblair | dims, sdague: devstack-gate is distro-agnostic now | 19:27 |
sdague | jeblair: I suppose so | 19:27 |
jeblair | dims, sdague: so we shouldn't add ubuntu-specific commands (at least, not without protecting fedora/centos) | 19:27 |
sdague | it would be nice if those other distros reported test results :) | 19:27 |
*** hub_cap has quit IRC | 19:28 | |
dims | lol, jeblair i'll add checks for ubuntu only :) | 19:28 |
jeblair | sdague: i think pleia2 is working on that | 19:28 |
sdague | jeblair: cool, will be extra happy to see it | 19:29 |
sdague | jeblair: yeh, so force installing that update is basically a stop the bleeding measure | 19:29 |
jeblair | dims, sdague: how bad is this problem, btw? | 19:29 |
sdague | #2 gate bug | 19:29 |
sdague | http://status.openstack.org/elastic-recheck/gate.html | 19:29 |
sdague | 9 resets in the last 24hrs | 19:30 |
jeblair | because i'm a bit uneasy about violating the 'only test on released artifacts' thing; i'm not really keen on trying to fix the whole world | 19:30 |
sdague | jeblair: I'm with you | 19:30 |
jeblair | ok cool | 19:31 |
jeblair | dims: wait before you push up a new version with the check, and i'll give you a cdn url for the debs | 19:31 |
sdague | but I really think that given the current reset rate we should try to get over this hump, especially as ubuntu has acknowledged the bug, and is working the fix through process | 19:31 |
dims | jeblair, yep. will wait | 19:32 |
*** sarob has joined #openstack-infra | 19:35 | |
openstackgerrit | lifeless proposed a change to openstack-infra/config: Add Ironic check jobs via tripleo-ci https://review.openstack.org/81608 | 19:36 |
lifeless | devananda: ^ may interest you | 19:37 |
lifeless | also NobodyCam and GheRivero ^ | 19:37 |
lifeless | devananda: I'm wondering if you want those jobs on Ironic to be check rather than experimental (but still nonvoting) | 19:37 |
devananda | lifeless: i'm late for my bus... but taking a quick look | 19:38 |
*** julim has joined #openstack-infra | 19:38 | |
*** bhuvan has quit IRC | 19:39 | |
devananda | lifeless: experimental is fine if you're still working out things in tripleo to make them start functioning | 19:39 |
devananda | lifeless: check-nv would be good as soon as you think they should be passing with some regularity | 19:39 |
*** jooools has quit IRC | 19:39 | |
*** maxbit has quit IRC | 19:40 | |
*** alff has joined #openstack-infra | 19:42 | |
adam_g | what part of the gate infra is responsible for generating the localrc used in the devstack jobs? specifically the ENABLED_SERVICES setting? | 19:43 |
sdague | adam_g: devstack-gate-wrap | 19:44 |
*** medieval1 has quit IRC | 19:45 | |
adam_g | sdague, ah! | 19:45 |
sdague | adam_g: actually, d-g itself - https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L36 | 19:45 |
sdague | and the next 50 lines | 19:45 |
*** dcramer_ has quit IRC | 19:45 | |
adam_g | sdague, great, taking a look. thanks | 19:45 |
openstackgerrit | Derek Higgins proposed a change to openstack-infra/elastic-recheck: Add a fingerprint for bug 1292141 https://review.openstack.org/81609 | 19:46 |
uvirtbot | Launchpad bug 1292141 in tripleo "Cannot fetch index base URL http://pypi.openstack.org/openstack/" [Critical,Triaged] https://launchpad.net/bugs/1292141 | 19:46 |
*** hashar has joined #openstack-infra | 19:47 | |
*** bhuvan has joined #openstack-infra | 19:48 | |
lifeless | devananda: we require CI to pass on all landings | 19:50 |
lifeless | devananda: its nonvoting as an escape clause for systematic failures at the moment - e.g. that we have only one cloud | 19:51 |
*** yolanda has quit IRC | 19:52 | |
*** amcrn has joined #openstack-infra | 19:52 | |
*** lcostantino has quit IRC | 19:52 | |
*** zzelle has joined #openstack-infra | 19:55 | |
*** melwitt has joined #openstack-infra | 19:55 | |
openstackgerrit | Adam Gandelman proposed a change to openstack-infra/config: Enable Neutron for Ironic Devstack jobs https://review.openstack.org/81611 | 19:57 |
*** melwitt has quit IRC | 19:58 | |
*** melwitt has joined #openstack-infra | 20:01 | |
clarkb | ok back from lunch | 20:02 |
lifeless | devananda: done | 20:04 |
openstackgerrit | lifeless proposed a change to openstack-infra/config: Add Ironic check jobs via tripleo-ci https://review.openstack.org/81608 | 20:04 |
lifeless | clarkb: would quite like to get ^ in, once it passes verify :) | 20:05 |
*** bhuvan has quit IRC | 20:05 | |
jeblair | i am apparently incapable of uploading a file to rackspace cloud files | 20:05 |
*** CaptTofu has quit IRC | 20:06 | |
*** weiwei has quit IRC | 20:07 | |
*** saju_m has quit IRC | 20:08 | |
*** caleb_ has joined #openstack-infra | 20:08 | |
jeblair | yeah, the rackspace cloud files web ui is not working for me when i try to upload a file | 20:08 |
*** saju_m has joined #openstack-infra | 20:08 | |
*** krotscheck has quit IRC | 20:10 | |
JayF | jeblair: you had commented on this, wanted to make sure it was good and get the reviews aligned for new-project-friday | 20:12 |
JayF | jeblair: https://review.openstack.org/#/c/79088/ | 20:12 |
*** bhuvan has joined #openstack-infra | 20:13 | |
zzelle | Hi everyone | 20:14 |
clarkb | zzelle: hello | 20:14 |
zzelle | i have a trouble with git-review testing on py33 | 20:15 |
zzelle | i get the following pip requirement error | 20:15 |
zzelle | http://logs.openstack.org/80/79280/7/check/gate-git-review-python33/d640e43/console.html | 20:15 |
zzelle | clarkb, hi | 20:15 |
clarkb | fungi: is ^ related to the pypy thing? | 20:16 |
fungi | clarkb: not the pypy thing, but related to whatever i was trying to test whether running ez_setup.py and get-pip.py would solve | 20:17 |
fungi | clarkb: the issue oslo.rootwrap is running into with python33 unit test jobs | 20:17 |
zzelle | i did some research using kibana, the trouble seems to be 2 days old | 20:17 |
fungi | and it's all fairly spontaneous, which leads me to believe this is possibly fallout from the setuptools 3.3 release | 20:18 |
clarkb | lifeless: where does USE_IRONIC come from? | 20:19 |
*** rpodolyaka1 has quit IRC | 20:20 | |
lifeless | clarkb: https://review.openstack.org/#/c/72969/ | 20:20 |
*** _nadya_ has joined #openstack-infra | 20:21 | |
*** rpodolyaka1 has joined #openstack-infra | 20:24 | |
*** weiwei has joined #openstack-infra | 20:26 | |
jeblair | notmyname: i'm trying to use python-swiftclient to talk to rax cloud files and it seems stuck at "INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): storage101.dfw1.clouddrive.com" | 20:26 |
jeblair | notmyname: do you know of someone who would be interested in that^ ? | 20:26 |
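For context, a sketch of the kind of upload being attempted with python-swiftclient; the auth endpoint, credentials, container name, and the timeout keyword are all assumptions, and this is not a diagnosis of the hang:

```python
# Sketch only: upload one of the rebuilt libvirt debs to a Cloud Files
# container. Endpoint, credentials, and container name are placeholders.
from swiftclient import client

conn = client.Connection(
    authurl='https://identity.api.rackspacecloud.com/v2.0/',  # assumed endpoint
    user='USERNAME',
    key='API_KEY',
    auth_version='2',
    timeout=60,   # assumes a swiftclient release new enough to accept timeout
)

conn.put_container('libvirt-debs')                            # placeholder name
with open('libvirt0_0.9.8-2ubuntu17.18_amd64.deb', 'rb') as f:
    conn.put_object('libvirt-debs', 'libvirt0_0.9.8-2ubuntu17.18_amd64.deb', f)
```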
*** nicedice has joined #openstack-infra | 20:26 | |
*** vhoward has joined #openstack-infra | 20:27 | |
notmyname | jeblair: interested? or who could fix it :-) | 20:28 |
dstufft | fungi: with the latest version of get-pip.py you don't need ez_setup.py first btw | 20:28 |
dstufft | since like | 20:28 |
dstufft | 1.5.1? I think | 20:28 |
dstufft | or .2 | 20:28 |
*** vhoward has left #openstack-infra | 20:28 | |
jeblair | notmyname: "interested in and capable of fixing it immediately" would be the ideal, but you know, however close to that we can get. :) | 20:28 |
*** rpodolyaka1 has quit IRC | 20:28 | |
openstackgerrit | Khai Do proposed a change to openstack-infra/config: upgrade review-dev.o.o to gerrit 2.8.3 https://review.openstack.org/81618 | 20:29 |
*** thomasem has quit IRC | 20:29 | |
notmyname | jeblair: I'm very interested, and I'd probably look to pandemicsyn or ahale (or maybe gholt or creiht) or talk to someone in support in #rackspace | 20:29 |
fungi | dstufft: oh! i'll see if that helps get me out of this rathole--thanks! | 20:29 |
dstufft | since 1.5.whatever pip can install Wheels without setuptools isntalled at all | 20:30 |
dstufft | still needs it for sdists | 20:30 |
dstufft | so we exploit that to have get-pip.py install setuptools and pip from Wheels | 20:30 |
morganfainberg | lifeless, for testr (or anyone else) is there a way to get the worker ID from within python? | 20:30 |
clarkb | lifeless: thank you, wasn't seeing it in d-g so figured it was something in tripleo | 20:30 |
openstackgerrit | lifeless proposed a change to openstack-infra/config: Add Ironic check jobs via tripleo-ci https://review.openstack.org/81608 | 20:30 |
*** dstanek has joined #openstack-infra | 20:31 | |
lifeless | morganfainberg: within a backend? For DB isolation I presume? You can use your pid :) | 20:31 |
morganfainberg | lifeless, yeah i was going to use the PID but i liked the worker-id concept better ;) | 20:31 |
morganfainberg | lifeless, if it was available | 20:31 |
morganfainberg | lifeless, but PID was my fallback alternative | 20:31 |
lifeless | morganfainberg: so worker id is actually 'routing path'- for now pid is better. | 20:32 |
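A minimal sketch of the PID-based isolation being suggested; the naming scheme is hypothetical:

```python
# Sketch: derive a per-worker database/schema name from the process id so
# parallel testr workers never share state.
import os

def isolated_db_name(prefix='test'):
    return '%s_%d' % (prefix, os.getpid())

print(isolated_db_name())   # e.g. test_27814
```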
openstackgerrit | Ricardo Carrillo Cruz proposed a change to openstack-infra/config: Initial openstackdroid commit for Stackforge, with review changes https://review.openstack.org/81234 | 20:32 |
*** CaptTofu has joined #openstack-infra | 20:32 | |
fungi | dstufft: works! | 20:32 |
fungi | dhellmann: ^ | 20:32 |
morganfainberg | lifeless, ok sounds good, PID it is | 20:32 |
*** _nadya_ has quit IRC | 20:32 | |
fungi | seems to solve the oslo.rootwrap python33 test issues... i'll fix up mordred's patch some | 20:32 |
* dhellmann reads backlog | 20:33 | |
dstufft | fungi: cool :) sorry I didn't think of it earlier, it didn't really register why you were (probably) running ez_setup.py | 20:33 |
fungi | dstufft: should we also be ripping out distro-supplied python-pkg-resources first too, or is removing python-setuptools python-pip first enough? | 20:33 |
dstufft | fungi: well pip will probably stomp all over it | 20:34 |
dstufft | and replace it with whatever comes with setuptools | 20:34 |
dstufft | you're probably gonna have a harder time getting rid of it though | 20:34 |
fungi | basically trying to make sure we cleanly gut any preinstalled python packaging debs before running get-pip.py | 20:34 |
dstufft | pip itself won't use it | 20:34 |
dstufft | it has a bundled copy of pkg_resources it uses | 20:34 |
dstufft | setuptools will use it though | 20:34 |
fungi | but will pip call things which might call their own copies? | 20:35 |
dstufft | setuptools via setup.py yea | 20:35 |
fungi | pip -> some setup.py -> setuptools -> system pkg_resources | 20:35 |
fungi | got it | 20:35 |
fungi | setuptools will bootstrap a new pkg_resources into the system though, correct? | 20:36 |
* dhellmann adds a chalk mark to the tally of beers owed fungi and dstufft | 20:36 | |
dstufft | fungi: yea installing setuptools will overwrite the system pkg_resources.py | 20:36 |
dstufft | because you know, silently overwriting things is a cool thing to do | 20:37 |
dstufft | :/ | 20:37 |
dstufft | well I slightly lie | 20:37 |
*** dprince has quit IRC | 20:37 | |
dstufft | on Debian based system you'll end up with the system pkg_resources.py located in /usr/lib/pythonX.Y/site-packages/pkg_resources.py, and one installed by pip in /usr/local/lib/pythonX.Y/site-packages/pkg_resources.py | 20:38 |
dstufft | on like Fedora and shit it'll just overwrite | 20:38 |
fungi | dstufft: so the one other wrinkle here... we're trying to support pip-installing into a python 3.3 virtualenv... any special things we need to do to make sure appropriate setuptools/pkg_resources get installed for all interpreter versions on the system? do we need to re-run get-pip.py under multiple interpreter versions for that? | 20:38 |
dstufft | so lets back up a second | 20:38 |
dstufft | are you installing anything into the system python? or only into virtualenvs | 20:39 |
fungi | dstufft: well, we're running tox and possibly some testr bits outside of the virtualenv | 20:39 |
dstufft | any reason you don't run them inside the virtualenv? | 20:39 |
fungi | well, for tox it's a bootstrapping question. we run tox and virtualenv outside of a virtualenv because they're creating the virtualenv we're going to run things inside | 20:40 |
openstackgerrit | Matt Riedemann proposed a change to openstack-infra/elastic-recheck: Add query for cinder bug 1294824 https://review.openstack.org/81621 | 20:40 |
uvirtbot | Launchpad bug 1294824 in tempest "Volume test quota race condition" [Undecided,In progress] https://launchpad.net/bugs/1294824 | 20:40 |
dstufft | so tox handles being called from inside of a virtualenv just fine | 20:40 |
fungi | basically we need newish tox on the system. so that we can run it to create the virtualenv we'll test inside of | 20:40 |
fungi | and the way we're getting new tox is with pip, in the system context | 20:41 |
clarkb | dstufft: right but then you have to manage a virtualenv which is what tox is for | 20:41 |
openstackgerrit | Matt Riedemann proposed a change to openstack-infra/elastic-recheck: Add query for cinder bug 1294824 https://review.openstack.org/81621 | 20:41 |
uvirtbot | Launchpad bug 1294824 in tempest "Volume test quota race condition" [Undecided,In progress] https://launchpad.net/bugs/1294824 | 20:41 |
fungi | or are you saying we should create a virtualenv, install tox into that, then use the virtualenv'd tox to create other virtualenvs? | 20:41 |
dstufft | yes that one fungi | 20:41 |
*** dcramer_ has joined #openstack-infra | 20:41 | |
jeblair | http://b64a126b01b637c4fdf4-11809a5fee9c1af804008df022f3a2d9.r93.cf2.rackcdn.com/libvirt-bin_0.9.8-2ubuntu17.18_amd64.deb | 20:41 |
jeblair | http://b64a126b01b637c4fdf4-11809a5fee9c1af804008df022f3a2d9.r93.cf2.rackcdn.com/libvirt0_0.9.8-2ubuntu17.18_amd64.deb | 20:41 |
jeblair | http://b64a126b01b637c4fdf4-11809a5fee9c1af804008df022f3a2d9.r93.cf2.rackcdn.com/python-libvirt_0.9.8-2ubuntu17.18_amd64.deb | 20:41 |
dstufft | virtualenv shouldn't clash with any of the preinstalled python packaging packages | 20:41 |
jeblair | dims: ^ | 20:41 |
dstufft | and even if it did, virtualenv can run without being installed | 20:42 |
fungi | dstufft: ahh, okay, i do that on my workstation so i don't have to pip install it system wide there, so i know it works fine for me | 20:42 |
jeblair | dims: sorry it took so long | 20:42 |
openstackgerrit | Khai Do proposed a change to openstack-infra/config: Gerrit-2.8: Add secondary index support https://review.openstack.org/60080 | 20:42 |
openstackgerrit | Khai Do proposed a change to openstack-infra/config: Gerrit-2.8: Allow encoded path separators in URLs https://review.openstack.org/60893 | 20:42 |
dstufft | and you isolate yourself from having to worry about ripping out all these packages | 20:42 |
clarkb | jeblair: are we hosting specific debs for libvirt now? | 20:42 |
dims | jeblair, thanks, will give it a whirl | 20:42 |
fungi | dstufft: so this takes us back to the fact that we're using puppet's pip package provider to install virtualenv and tox | 20:42 |
fungi | which i guess we would just have to not do | 20:43 |
dstufft | does puppet have a git provider? | 20:43 |
dstufft | to clone a repo? | 20:43 |
clarkb | it has a thing to do that yes | 20:43 |
dstufft | I assume so yea? | 20:43 |
fungi | yeah, we use vcsrepo for other things | 20:43 |
fungi | so it's possible we could grab it that way | 20:43 |
jeblair | clarkb: apparently the fix is making its way through ubuntu process, sdague thinks it's worth the emergency measure | 20:43 |
dstufft | clone github.com/pypa/virtualenv.git and do python virtualenv.py -ppython2.7 /pah/to/virtualenv | 20:43 |
clarkb | jeblair: roger, I saw mention of emergency measure didn't realize this was the way to emergence it | 20:43 |
dstufft | no need to install anything then | 20:43 |
dstufft | and you isolate yourself from the system python :) | 20:44 |
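A sketch of the bootstrap being described, wrapped in Python for illustration; the clone path and target directories are placeholders, and it assumes the (then-current) virtualenv layout with virtualenv.py at the repo root:

```python
# Sketch: build an isolated tox without installing anything into the system
# Python -- clone virtualenv, create an env from the checkout, pip install
# tox into it, then run tox from there.
import subprocess

subprocess.check_call(
    ['git', 'clone', 'https://github.com/pypa/virtualenv.git', '/tmp/virtualenv'])
subprocess.check_call(
    ['python', '/tmp/virtualenv/virtualenv.py', '-p', 'python2.7', '/tmp/toxenv'])
subprocess.check_call(['/tmp/toxenv/bin/pip', 'install', 'tox'])
subprocess.check_call(['/tmp/toxenv/bin/tox', '-e', 'py27'])
```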
jeblair | clarkb: it's on people.ubuntu.com, but dims thought it would be nice for us to not hammer/rely on that | 20:44 |
sdague | clarkb: yeh, it's our #2 gate failure, and something that needs a new libvirt to fix | 20:44 |
clarkb | dstufft: but now you have two problems instead of 1 | 20:44 |
dstufft | clarkb: how? | 20:44 |
clarkb | dstufft: because now you have to manage system python and the virtualenv python | 20:44 |
dstufft | if you pretend the system python is for the OS you don't have to manage it :) | 20:45 |
*** gokrokve has quit IRC | 20:45 | |
*** gokrokve has joined #openstack-infra | 20:46 | |
fungi | clarkb: it makes things harder to update, but this might be fine for single-use worker nodes since we build those and then throw them away again when we're done. heck, run-tox.sh could possibly even bootstrap the virtualenv'd tox from cached sources on the filesystem | 20:46 |
*** _nadya_ has joined #openstack-infra | 20:46 | |
clarkb | fungi: yeah I think it is definitely doable. But giving up on system python seems oh so very unlinuxy to me | 20:47 |
openstackgerrit | Matt Riedemann proposed a change to openstack-infra/elastic-recheck: Add query for cinder bug 1291162 https://review.openstack.org/81621 | 20:47 |
uvirtbot | Launchpad bug 1291162 in tempest "tempest failure in tempest.api.volume.admin.test_volume_quotas" [Undecided,In progress] https://launchpad.net/bugs/1291162 | 20:47 |
dstufft | it's a chroot :D | 20:47 |
dstufft | sort of | 20:47 |
openstackgerrit | Davanum Srinivas (dims) proposed a change to openstack-infra/devstack-gate: Temporary workaround for Time out errors from libvirt https://review.openstack.org/79816 | 20:47 |
clarkb | at this point why bother testing on any particular distro? we are just going to slurp in a different python anyways | 20:47 |
jeblair | clarkb, fungi, dstufft: remember that whatever it is we do on the test nodes is what we expect developers to do | 20:47 |
*** gokrokve_ has joined #openstack-infra | 20:47 | |
fungi | clarkb: well, the moment we try https://review.openstack.org/#/c/75213/5/modules/openstack_project/files/nodepool/scripts/fix_pip.sh we're basically giving up on system python for this purpose | 20:47 |
dstufft | clarkb: you're not testing the distro if you're ripping out the system installed packaging tools anyways | 20:48 |
jeblair | clarkb, fungi, dstufft: so this means we expect that developers can't bootstrap their own test environment from the distro | 20:48 |
*** gokrokv__ has joined #openstack-infra | 20:48 | |
dstufft | you're testing a particular openstack flavored version of the distro | 20:48 |
clarkb | dstufft: no see jeblairs comment | 20:48 |
*** maxbit has joined #openstack-infra | 20:48 | |
clarkb | I mean yes it is still in a chroot venv thingy. but getting there from distro is straightforward for everyone | 20:48 |
dstufft | Never use the system python has been advice for every Python developer ever fwiw | 20:49 |
*** mrmartin has quit IRC | 20:49 | |
dstufft | that way lies pain | 20:49 |
dstufft | (otoh distro maintainers yell at me sometimes so there's that) | 20:50 |
jeblair | dstufft: you and i disagree on that point. we don't need to hash it out here and now though. :) | 20:50 |
dstufft | jeblair: :) | 20:50 |
*** gokrokve has quit IRC | 20:50 | |
jeblair | dstufft: well, i'm pretty sure we agree that it's pain, actually. :) | 20:50 |
openstackgerrit | Khai Do proposed a change to openstack-infra/config: upgrade review.o.o to gerrit ver 2.8.x https://review.openstack.org/81622 | 20:50 |
*** mrda_away is now known as mrda | 20:50 | |
dstufft | When the python packaging tools fail most of the time I hear about it in some form, I get less complaints when people stop using the OS Python and isolate themselves (this is on any OS that provides Python, not just Linux). I want to make this better but as it stands shit just stomps all over each other and sooner or later somebody cries | 20:51 |
*** gokrokve has joined #openstack-infra | 20:51 | |
*** jamielennox|away is now known as jamielennox | 20:51 | |
*** coolsvap has quit IRC | 20:51 | |
dstufft | (I'm equating less complaints with it works better in practice, maybe they just give up too idk) | 20:51 |
*** gokrokve_ has quit IRC | 20:51 | |
*** harlowja is now known as harlowja_away | 20:52 | |
*** gokrokv__ has quit IRC | 20:52 | |
dstufft | but it's up to y'all of course :) If you want to do what you're doing you should probably get rid of pkg_resources before doing this too | 20:52 |
morganfainberg | any infra folks able to provide a ban in #openstack? got some lovely language going on there | 20:52 |
morganfainberg | jeblair, ++ thanks | 20:53 |
*** rpodolyaka1 has joined #openstack-infra | 20:53 | |
*** e0ne has quit IRC | 20:53 | |
jeblair | np, we're putting together an irc ops team with global access so there should be a bunch of people able to help out with that soon | 20:53 |
jeblair | morganfainberg: not quite fully organized yet | 20:53 |
morganfainberg | jeblair, let me know if you need help w/ it | 20:53 |
morganfainberg | jeblair, i lurk around a lot | 20:53 |
*** vkozhukalov_ has quit IRC | 20:54 | |
*** e0ne has joined #openstack-infra | 20:54 | |
jeblair | morganfainberg: cool, thanks. | 20:54 |
morganfainberg | jeblair, /me should learn not to volunteer :P | 20:54 |
morganfainberg | jeblair, ^_^ | 20:54 |
*** _nadya_ has quit IRC | 20:56 | |
*** rpodolyaka1 has quit IRC | 20:57 | |
*** alff has quit IRC | 20:57 | |
*** mbacchi has quit IRC | 20:57 | |
*** e0ne has quit IRC | 20:57 | |
mattoliverau | Morning all | 21:00 |
*** alff has joined #openstack-infra | 21:00 | |
*** nati_ueno has joined #openstack-infra | 21:01 | |
jeblair | mattoliverau: good morning | 21:02 |
openstackgerrit | A change was merged to openstack-infra/gear: Close server connect pipes on cleanup https://review.openstack.org/80304 | 21:05 |
jeblair | lifeless, derekh: gear 0.5.4 pushed | 21:06 |
clarkb | jeblair: that includes a bunch of other fixes including the function reregistration fix right? | 21:07 |
lifeless | \o/ thanks | 21:07 |
jeblair | clarkb: yep, i'm bumping it in zuul now | 21:07 |
clarkb | cool, I will plan to restart the geard on logstash.o.o in the near future | 21:08 |
openstackgerrit | James E. Blair proposed a change to openstack-infra/zuul: Require gear 0.5.4 https://review.openstack.org/81628 | 21:08 |
clarkb | and all of the workers too I guess | 21:08 |
jeblair | clarkb: it's not really urgent, it won't affect them since they only have one function | 21:08 |
*** dstanek has quit IRC | 21:08 | |
clarkb | oh right that loop will do the right thing in that case | 21:08 |
derekh | jeblair: thanks, lifeless will rebuild the image | 21:09 |
lifeless | derekh: do you mean you will or I will ? | 21:09 |
derekh | lifeless: I will | 21:10 |
clarkb | jeblair: https://review.openstack.org/#/c/79212/1 has a response you may be interested in when gear is sorted | 21:10 |
jeblair | fungi: oops https://jenkins.openstack.org/job/post-mirror-python26/516/console | 21:10 |
lifeless | derekh: argh. Do you mean 'derek will' or 'lifeless will' ! | 21:10 |
derekh | lifeless: I have kicked off a new image build :-) | 21:11 |
lifeless | cool | 21:11 |
jeblair | lifeless: clearly the answer to that question is "yes"! :) | 21:11 |
fungi | jeblair: um... neat! apparently very old centos image | 21:12 |
*** blamar is now known as blamar-away | 21:12 | |
fungi | jeblair: i'll whip up a quick patch to make sure we install iproute2 everywhere | 21:12 |
clarkb | wow we even checked that | 21:13 |
*** dcramer_ has quit IRC | 21:13 | |
clarkb | just not on that machine... | 21:13 |
fungi | clarkb: yep, i guess rax started adding it to later centos images | 21:13 |
clarkb | fungi: could it possibly be a path issue? | 21:13 |
clarkb | image changes seem more likely I suppose | 21:13 |
*** dstanek has joined #openstack-infra | 21:14 | |
fungi | clarkb: oh, good point... /sbin/ip is on mirror26 | 21:14 |
fungi | and it's in my path | 21:14 |
*** aysyd has quit IRC | 21:14 | |
fungi | why is it not in jenkins's path? | 21:14 |
*** hashar has quit IRC | 21:14 | |
*** thuc__ has quit IRC | 21:14 | |
*** harlowja_away is now known as harlowja | 21:14 | |
fungi | it's in the jenkins user's path | 21:15 |
openstackgerrit | A change was merged to openstack-infra/elastic-recheck: convert to filename from tags https://review.openstack.org/81604 | 21:15 |
fungi | on that machine | 21:15 |
*** thuc has joined #openstack-infra | 21:15 | |
clarkb | huh | 21:15 |
fungi | is the jenkins agent sanitizing the path there? | 21:15 |
clarkb | does that script set PATH? | 21:15 |
clarkb | yeah I wonder if jenkins is doing something funny | 21:15 |
fungi | maybe i can just export PATH=$PATH:/sbin in that builder macro? | 21:16 |
openstackgerrit | A change was merged to openstack-infra/elastic-recheck: Loosen fingerprint for bug 1260311 https://review.openstack.org/81597 | 21:16 |
uvirtbot | Launchpad bug 1260311 in openstack-ci "hudson.Launcher exception causing build failures" [Undecided,Invalid] https://launchpad.net/bugs/1260311 | 21:16 |
*** thuc_ has joined #openstack-infra | 21:17 | |
*** asalkeld has joined #openstack-infra | 21:17 | |
*** thuc_ has quit IRC | 21:17 | |
clarkb | fungi: or maybe we give ip a fully rooted path in the macro? | 21:17 |
clarkb | my saucy box has it in /sbin/ip too | 21:17 |
*** mwagner__ has quit IRC | 21:17 | |
clarkb | either way wfm | 21:17 |
fungi | can we assume it's in /sbin on every platform i wonder | 21:17 |
clarkb | probably not | 21:18 |
*** thuc_ has joined #openstack-infra | 21:18 | |
clarkb | distros like Arch put everything in /bin or was it /usr/bin | 21:18 |
fungi | appending to the current env path seems safer | 21:18 |
*** thuc_ has quit IRC | 21:18 | |
jeblair | i'm not worried about arch | 21:18 |
fungi | we can certainly spot-check the set of systems we care about, but... | 21:18 |
*** thuc_ has joined #openstack-infra | 21:18 | |
clarkb | jeblair: but rax has arch images >_> | 21:19 |
jeblair | if it's in /sbin on centos and precise, isn't that good enough? | 21:19 |
clarkb | maybe we need to check fedora too then it iwll be good enough | 21:19 |
*** thuc_ has quit IRC | 21:19 | |
*** sabari2 is now known as sabari | 21:19 | |
clarkb | since pleia2 is close to having fedora images working | 21:19 |
asalkeld | hi guys, I think solum's dsvm tests are getting terminated early (devstack is running and the job just gets killed) http://logs.openstack.org/69/81169/2/check/gate-solum-devstack-dsvm/f7b7f5b/logs/devstacklog.txt.gz | 21:19 |
fungi | the question is whether we use /sbin/ip on several commands or add PATH=$PATH:/sbin at the start of the builder | 21:19 |
*** thuc has quit IRC | 21:19 | |
clarkb | jeblair: I see you +2'd https://review.openstack.org/#/c/69510/ but didn't approve is that because babysitting? | 21:20 |
asalkeld | is there a timeout I can increase anywhere? | 21:20 |
*** thuc has joined #openstack-infra | 21:20 | |
clarkb | pleia2: I think you are semi vacationing but if you are up for it we can approve that change tomorrow and deal with fallout if it happens | 21:20 |
clarkb | asalkeld: in the job are two timeout values. the jenkins timeout and the devstack timeout. you can tweak those values as appropriate | 21:20 |
asalkeld | o, cool - thanks clarkb | 21:21 |
*** Ajaeger1 has quit IRC | 21:22 | |
fungi | asalkeld: out of curiosity, why do you think that job hit a timeout? looks like devstack only ran for about 10 minutes... | 21:22 |
asalkeld | well, why did it get killed? | 21:23 |
fungi | is "ERROR: Invalid OpenStack Nova credentials. | 21:23 |
fungi | " a benign warning or something? | 21:23 |
asalkeld | I see that all the time | 21:23 |
fungi | i see it a bunch in that particular devstack log | 21:23 |
asalkeld | ok | 21:24 |
asalkeld | just not sure why it works sometimes but not others | 21:24 |
*** derekh has quit IRC | 21:24 | |
*** hub_cap has joined #openstack-infra | 21:24 | |
fungi | "ERROR: gate_hook failed" suggests something failed during "Running devstack" | 21:25 |
openstackgerrit | A change was merged to openstack-infra/zuul: Make zuul more worker agnostic https://review.openstack.org/80152 | 21:25 |
asalkeld | ok, I'll double check that locally | 21:25 |
asalkeld | thanks fungi | 21:25 |
*** dkranz has quit IRC | 21:26 | |
openstackgerrit | A change was merged to openstack-infra/zuul: Add turbo-hipster to gearman-launchers https://review.openstack.org/80153 | 21:26 |
clarkb | dstufft: https://bugs.launchpad.net/openstack-ci/+bug/1272417 have an opinion on that? | 21:27 |
uvirtbot | Launchpad bug 1272417 in openstack-ci "Unable to install software dependencies due to: "Cannot fetch index base URL"" [Medium,Incomplete] | 21:27 |
clarkb | dstufft: mostly the last couple comments | 21:27 |
*** sweston has quit IRC | 21:27 | |
lifeless | clarkb: whats up with http://logs.openstack.org/08/81608/3/check/check-projects-yaml-alphabetized/e7f47f9/console.html ? | 21:27 |
*** saju_m has quit IRC | 21:28 | |
dstufft | clarkb: if it's a timeout then increasing the timeout is a reasonable solution, you can probably add -vvv to the pip call to get the exact error | 21:28 |
dstufft | oh it's via tox | 21:29 |
fungi | lifeless: looks like a network issue getting from a worker in rax-dfw to git.openstack.org (which is also in rax-dwf) | 21:29 |
fungi | er, dfw | 21:29 |
*** esker has quit IRC | 21:29 | |
*** mfer has quit IRC | 21:30 | |
*** alff has quit IRC | 21:30 | |
fungi | lifeless: on my workstation just a few minutes ago i also got a similar "Fetching origin ... error: fetch died of signal 13 ... error: Could not fetch origin" | 21:31 |
fungi | maybe network problems in dfw? | 21:31 |
*** che-arne has quit IRC | 21:33 | |
jeblair | clarkb: yes | 21:33 |
jeblair | clarkb: and image rebuild timing, etc; which i think is improved now :) | 21:33 |
*** maxbit_ has joined #openstack-infra | 21:35 | |
*** caleb_ has quit IRC | 21:35 | |
*** maxbit has quit IRC | 21:36 | |
*** zhiyan is now known as zhiyan_ | 21:36 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Add /sbin to the net-info job builder path https://review.openstack.org/81641 | 21:38 |
*** caleb_ has joined #openstack-infra | 21:38 | |
clarkb | jeblair: ok, I will work with pleia2 to get that in tomorrow | 21:39 |
clarkb | pleia2: assuming tomorrow is good for you | 21:39 |
jeblair | clarkb: or we could aprv it now and it will be rebuilt in time for us to see the results in the morning | 21:40 |
clarkb | jeblair: yeah I am slightly worried it will just go sideways overnight and we will wake up to massive logs and leaked ssh keys again | 21:40 |
jeblair | clarkb: leaked ssh keys? | 21:40 |
clarkb | jeblair: yeah remember the key leakage when builds don't happen properly. I don't think we fixed that by moving to a single key for everything yet | 21:41 |
jeblair | oh, that kind of ssh keys | 21:41 |
jeblair | clarkb: yeah, but builds are now timed so that they finish when we're all awake | 21:41 |
clarkb | that did make it in. Won't the builds start now though since there is no existing image? | 21:42 |
clarkb | nodepool ignores the cron when there is no image | 21:42 |
jeblair | clarkb: for the new stuff, yes. | 21:42 |
jeblair | clarkb: but we won't start running jobs on those yet, so it's not going to affect zuul | 21:43 |
clarkb | ok /me hits the go button | 21:43 |
*** pdmars has quit IRC | 21:43 | |
clarkb | you have convinced me that there is less to worry about than I thought :) | 21:43 |
jeblair | the key leakage would be annoying, but we can deal with that. i'll see if i can track that down real quick | 21:44 |
*** rhsu1 has joined #openstack-infra | 21:45 | |
*** jooools has joined #openstack-infra | 21:45 | |
*** SumitNaiksatam has quit IRC | 21:46 | |
*** freyes has quit IRC | 21:46 | |
*** krtaylor has quit IRC | 21:46 | |
*** rhsu has quit IRC | 21:46 | |
*** msuriar_ is now known as msuriar | 21:46 | |
*** mriedem has quit IRC | 21:47 | |
*** jhesketh_ has joined #openstack-infra | 21:47 | |
*** dhellmann is now known as dhellmann_ | 21:48 | |
*** sarob has quit IRC | 21:48 | |
jhesketh_ | Morning | 21:48 |
*** sarob has joined #openstack-infra | 21:49 | |
jeblair | jhesketh_: good morning! | 21:49 |
clarkb | jhesketh_: mattoliverau you guys must've come off DST as we went on it | 21:49 |
clarkb | because you wake up really late now :P | 21:49 |
jeblair | jhesketh_: if you have a minute to review https://review.openstack.org/#/c/81311/ that would be great (should help a teensy bit with our test node quota) | 21:50 |
jhesketh_ | clarkb: we don't come off until the 6th of April | 21:50 |
clarkb | huh so it will be even later later | 21:50 |
jeblair | clarkb: hrm, testing a failed image build in az2 does not result in a leaked keypair | 21:51 |
jeblair | i stuck "exit 1" in the prepare script | 21:51 |
jhesketh_ | jeblair: sure, looking | 21:52 |
*** krtaylor has joined #openstack-infra | 21:52 | |
jhesketh_ | clarkb: it's just before 9am here | 21:52 |
clarkb | jeblair: hrm. Maybe it only leaks when nova boot fails? | 21:53 |
*** adalbas has quit IRC | 21:53 | |
*** sarob_ has joined #openstack-infra | 21:53 | |
clarkb | jeblair: perhaps this problem is less nasty than I thought | 21:53 |
*** sarob has quit IRC | 21:53 | |
*** rpodolyaka1 has joined #openstack-infra | 21:53 | |
*** sarob_ is now known as sarob | 21:53 | |
morganfainberg | lifeless, running into a strange issue, it seems like the group_regex i'm doing in testr.conf is causing me to run really fast through some tests but then all ungrouped tests end up on a single worker? | 21:55 |
jeblair | clarkb: that might be it | 21:55 |
*** mrodden has joined #openstack-infra | 21:55 | |
morganfainberg | lifeless, or that somehow a bunch of tests are being lumped onto one worker (bad balance) | 21:56 |
*** zzelle has left #openstack-infra | 21:56 | |
*** zzelle has quit IRC | 21:56 | |
clarkb | morganfainberg: the initial run should be naive and do a 1:1 ration across all workers. But subsequent runs use timing data in the testrepository to balance more properly. Is it possible that is causing the imbalance? | 21:57 |
morganfainberg | clarkb, that doesn't seem to be what i'm seeing | 21:57 |
morganfainberg | clarkb, let me start with a clean .testrepository | 21:57 |
openstackgerrit | A change was merged to openstack-infra/config: Add support for Fedora 20 to nodepool https://review.openstack.org/69510 | 21:57 |
*** rpodolyaka1 has quit IRC | 21:57 | |
*** reed has quit IRC | 21:58 | |
morganfainberg | clarkb, ideally i would like to see the more nieve approach of even balancing as long as the few specific tests are grouped together | 21:58 |
morganfainberg | clarkb, naive | 21:58 |
*** alex-away is now known as _alexandra_ | 21:58 | |
* morganfainberg tries to spell | 21:58 | |
clarkb | its ok I spelled ratio as "ration" | 21:59 |
sdague | morganfainberg: so first pass through it should be binning alphabetically | 21:59 |
sdague | 1, 2, 3, 4, 1, 2, 3, 4 .... through all the tests | 21:59 |
morganfainberg | sdague, hmm. | 21:59 |
morganfainberg | sdague, except where explicitly grouped i assume | 21:59 |
openstackgerrit | A change was merged to openstack/requirements: Add saharaclient to global requirements https://review.openstack.org/81083 | 21:59 |
sdague | after that it bins similarly, but sorts the list from longest to shortest run time | 22:00 |
morganfainberg | sdague, so longer tests run at the end? | 22:00 |
morganfainberg | erm start | 22:00 |
sdague | start | 22:01 |
morganfainberg | sdague, aha i found the issue, a clean testrepository made my weird grouping go away | 22:01 |
morganfainberg | must have just gotten wedged with bad data over my multiple runs. | 22:01 |
sdague | morganfainberg: yeh, you maybe had some pathologic training data | 22:01 |
*** fifieldt has joined #openstack-infra | 22:02 | |
morganfainberg | now i just need to figure out this DB migrate issue and life will be good. | 22:02 |
*** bhuvan_ has joined #openstack-infra | 22:02 | |
jhesketh_ | jeblair: reviewed | 22:03 |
*** e0ne has joined #openstack-infra | 22:04 | |
*** zehicle_at_dell has quit IRC | 22:05 | |
*** zehicle_at_dell has joined #openstack-infra | 22:05 | |
*** bhuvan has quit IRC | 22:05 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/nodepool: Delete created keypairs if nova boot fails https://review.openstack.org/81647 | 22:07 |
jeblair | clarkb: ^ | 22:07 |
*** maxbit_ has quit IRC | 22:08 | |
*** e0ne has quit IRC | 22:08 | |
clarkb | jeblair: do we need that special case in the normal vm launch code? I am trying to remember if we get a generated key there too | 22:09 |
jeblair | jhesketh_: also you may want to check out gear 0.5.4 which fixes a bug that could affect turbo-hipster re-registering functions on reconnect | 22:09 |
jeblair | jhesketh_: also affected the zuul merger | 22:10 |
jeblair | thus https://review.openstack.org/#/c/81628/ | 22:10 |
jhesketh_ | jeblair: yes, I've been waiting for those new gear changes to come in :-) | 22:10 |
jeblair | clarkb: no, we don't generate a keypair for that, we expect the setup scripts to ensure we can log in, so we log in with jenkins key | 22:10 |
jhesketh_ | nice stuff :-) | 22:10 |
clarkb | jeblair: and nova won't insist on giving us one? | 22:11 |
jeblair | clarkb: how could it? it doesn't store the private key; we generate both and upload the public one | 22:12 |
jeblair | (keypair seems to be a bit of a misnomer in this context) | 22:12 |
clarkb | oh right | 22:12 |
clarkb | jeblair: I think because of the way passwords work I thought maybe it did the same with keys | 22:12 |
jeblair | clarkb: amusingly, even hpcloud gives us a password. it doesn't work. | 22:13 |
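A minimal sketch of the pattern behind change 81647, purely illustrative and not nodepool's actual code (it assumes a python-novaclient-style client object and paramiko): generate the keypair locally, upload only the public half, and delete the uploaded key again if the boot request fails so nothing leaks.

    import uuid

    import paramiko


    def boot_with_temp_key(nova, image, flavor):
        # The private key never leaves this process; nova only ever sees the
        # public half, which is why "keypair" is a bit of a misnomer here.
        name = 'tmp-key-%s' % uuid.uuid4().hex
        key = paramiko.RSAKey.generate(2048)
        public_key = 'ssh-rsa %s' % key.get_base64()

        nova.keypairs.create(name, public_key=public_key)
        try:
            server = nova.servers.create(name, image, flavor, key_name=name)
        except Exception:
            # Clean up the uploaded key if the boot call blows up, so a
            # failed launch doesn't leave an orphaned keypair behind.
            nova.keypairs.delete(name)
            raise
        return server, key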
openstackgerrit | Joe Gordon proposed a change to openstack-infra/elastic-recheck: Add fingerprint for bug 1294874 https://review.openstack.org/81649 | 22:13 |
uvirtbot | Launchpad bug 1294874 in openstack-ci "Sub-process /usr/bin/dpkg returned an error code" [Undecided,New] https://launchpad.net/bugs/1294874 | 22:13 |
jeblair | jhesketh_: also finally got around to doing the alternate impl you suggested https://review.openstack.org/#/c/81320/ | 22:14 |
jhesketh_ | oh awesome, you've had a busy day! | 22:14 |
clarkb | jeblair: huh, weird. | 22:15 |
clarkb | I should review that change. | 22:17 |
clarkb | jeblair: I think https://review.openstack.org/#/c/76057/ is ready for your review now. Especially since the zuul merger issues we are having seem to be git related and not related to that change | 22:18 |
clarkb | jeblair: mostly curious to see what you have to say about the test changes | 22:18 |
*** david-lyle has quit IRC | 22:19 | |
jeblair | clarkb: ack; i'm nearing review burnout for today though. maybe tomorrow? | 22:21 |
clarkb | jeblair: sure, I am getting there myself | 22:22 |
clarkb | so wouldn't be able to address comments quickly anyways | 22:22 |
asalkeld | clarkb, re: Invalid nova credentials (in my devstack gate output). I noticed something odd | 22:23 |
asalkeld | openstack user create swiftusertest1 --password=testing --project e41446fcdd1c41a89ce17614c305fb3f --email=test@example.com | 22:23 |
asalkeld | nova --os-password secret --os-username swiftusertest1 --os-tenant-name swifttenanttest1 x509-create-cert | 22:23 |
*** bhuvan_ has quit IRC | 22:23 | |
asalkeld | seems to use the wrong password | 22:23 |
asalkeld | (not at all related to my stuff) | 22:24 |
clarkb | devstack bug? | 22:24 |
*** bknudson has quit IRC | 22:24 | |
asalkeld | probably | 22:24 |
asalkeld | maybe we can just turn off swift | 22:24 |
*** rcleere has quit IRC | 22:26 | |
*** sweston has joined #openstack-infra | 22:26 | |
pleia2 | clarkb: tomorrow is fine, I'll be around all day | 22:27 |
clarkb | pleia2: well we went ahead and did it | 22:28 |
clarkb | pleia2: I need to check nodepool shortly to see how far it has gotten | 22:28 |
pleia2 | hah, oh ok :) | 22:28 |
pleia2 | I'll have a look at backlog once I'm back at my laptop | 22:28 |
*** bhuvan has joined #openstack-infra | 22:29 | |
clarkb | pleia2: jeblair convinced me it isn't as scary as I thought and pushed a nodepool patch to fix the one issue I was actually worried about | 22:31 |
*** mrmartin has joined #openstack-infra | 22:32 | |
*** thuc has quit IRC | 22:33 | |
clarkb | pleia2: though now I realize that the change doesn't actually add the image type to nodepool so we are mostly safe | 22:33 |
*** thuc has joined #openstack-infra | 22:33 | |
clarkb | pleia2: maybe push that change up and we will get that in tomorrow? | 22:33 |
clarkb | jhesketh_: https://review.openstack.org/#/c/70636/6/tests/test_scheduler.py the assertTrue on line 3702. are the two args to the 'or' identical? | 22:35 |
jhesketh_ | clarkb: not quite | 22:36 |
clarkb | ok I will reread it | 22:36 |
jhesketh_ | clarkb: we don't know the order of the dict so it's testing both possible orders of smtp/gerrit or gerrit/smtp | 22:36 |
clarkb | aha! I get it now | 22:37 |
clarkb | yay python, python dicts are secure | 22:37 |
jhesketh_ | if you have a neater way of testing that, let me know | 22:37 |
*** _alexandra_ is now known as alex-afk | 22:37 | |
clarkb | meh wfm | 22:37 |
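For reference, a toy version of the pattern being discussed (names invented here, this is not the zuul test itself): when output order follows dict iteration order, the test accepts either ordering. assertIn is one slightly neater alternative to the or-chain.

    import unittest


    def render(reporters):
        # Stand-in for code whose output order follows dict iteration order.
        return ', '.join(reporters)


    class TestEitherOrder(unittest.TestCase):
        def test_report_order(self):
            out = render({'smtp': None, 'gerrit': None})
            # The pattern under review: assert either acceptable ordering.
            self.assertTrue(out == 'smtp, gerrit' or out == 'gerrit, smtp')
            # A slightly neater equivalent.
            self.assertIn(out, ('smtp, gerrit', 'gerrit, smtp'))


    if __name__ == '__main__':
        unittest.main()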
*** thuc has quit IRC | 22:38 | |
*** mgagne has quit IRC | 22:41 | |
*** thedodd has quit IRC | 22:42 | |
*** mwagner__ has joined #openstack-infra | 22:42 | |
*** dkliban has quit IRC | 22:45 | |
fungi | dhellmann_: so... having reproduced the issue a few different ways, the root of the problem seems to be that the distro-installed python3-pip 1.3.1 package is in some way getting picked up from /usr/lib/python3/dist-packages when tox is first trying to set up the venv for the job, even though there's also a pip 1.5.4 in /usr/local/bin | 22:47 |
fungi | dhellmann_: removing the python3-pip package seems to allow the job to run successfully | 22:48 |
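A quick diagnostic sketch for this kind of shadowing (the interpreter names and the presence of an importable pip are assumptions): print which pip module each interpreter actually picks up, which makes a stale copy in /usr/lib/python3/dist-packages stand out next to the newer one under /usr/local.

    import subprocess

    for python in ('python2.7', 'python3'):
        out = subprocess.check_output(
            [python, '-c', 'import pip; print(pip.__version__ + " " + pip.__file__)'])
        print(python, out.decode().strip())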
*** alex-afk is now known as _alexandra_ | 22:49 | |
*** CaptTofu has quit IRC | 22:49 | |
*** CaptTofu has joined #openstack-infra | 22:50 | |
*** dcramer_ has joined #openstack-infra | 22:50 | |
*** zhiyan_ is now known as zhiyan | 22:51 | |
*** dims has quit IRC | 22:51 | |
*** zhiyan is now known as zhiyan_ | 22:53 | |
*** SumitNaiksatam has joined #openstack-infra | 22:53 | |
*** CaptTofu has quit IRC | 22:54 | |
*** rpodolyaka1 has joined #openstack-infra | 22:54 | |
*** harlowja is now known as harlowja_away | 22:59 | |
*** rpodolyaka1 has quit IRC | 22:59 | |
lifeless | morganfainberg: it's possible. untimed tests get round-robined; timed tests fill buckets up, fastest bucket first | 22:59 |
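A rough sketch, not testrepository's real scheduler, of the balancing lifeless summarizes here: tests with recorded timings are placed longest-first into whichever worker bucket currently has the least total time, while tests with no timing data are simply round-robined.

    def partition(tests, timings, workers):
        buckets = [[] for _ in range(workers)]
        totals = [0.0] * workers

        timed = sorted((t for t in tests if t in timings),
                       key=lambda t: timings[t], reverse=True)
        untimed = [t for t in tests if t not in timings]

        for test in timed:
            i = totals.index(min(totals))   # bucket that would finish soonest
            buckets[i].append(test)
            totals[i] += timings[test]

        for n, test in enumerate(untimed):  # no timing data yet: round-robin
            buckets[n % workers].append(test)

        return buckets


    # e.g. partition(['a', 'b', 'c', 'd'], {'a': 9.0, 'b': 1.0}, 2)
    # -> [['a', 'c'], ['b', 'd']]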
*** sarob has quit IRC | 23:00 | |
*** sarob has joined #openstack-infra | 23:00 | |
*** mrodden has quit IRC | 23:02 | |
jogo | I am having trouble finding 'gate-tempest-dsvm-neutron-heat-slow' in graphite | 23:02 |
jogo | oh woops | 23:03 |
*** yamahata has quit IRC | 23:04 | |
*** sarob has quit IRC | 23:05 | |
*** mriedem has joined #openstack-infra | 23:05 | |
*** sarob has joined #openstack-infra | 23:05 | |
*** nati_ueno has quit IRC | 23:07 | |
*** dims has joined #openstack-infra | 23:07 | |
*** caleb_ has quit IRC | 23:08 | |
morganfainberg | lifeless, all working now, i just had a lot of old / bad data in .testrepository | 23:08 |
*** asalkeld has quit IRC | 23:12 | |
*** bhuvan has quit IRC | 23:13 | |
*** derekh has joined #openstack-infra | 23:15 | |
sdague | jogo: https://review.openstack.org/#/c/81663/ - I think that's the right devstack fix | 23:17 |
openstackgerrit | Derek Higgins proposed a change to openstack-infra/config: Increase the default pip socket timeout https://review.openstack.org/81664 | 23:17 |
openstackgerrit | Derek Higgins proposed a change to openstack-infra/devstack-gate: Increase the default pip socket timeout https://review.openstack.org/81665 | 23:17 |
*** weshay has quit IRC | 23:19 | |
openstackgerrit | Adam Gandelman proposed a change to openstack-infra/config: Enable Neutron for Ironic Devstack jobs https://review.openstack.org/81611 | 23:20 |
*** che-arne has joined #openstack-infra | 23:20 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Fix pip on py3k/pypy nodes https://review.openstack.org/75213 | 23:21 |
*** mrodden has joined #openstack-infra | 23:22 | |
*** asalkeld has joined #openstack-infra | 23:25 | |
*** maxbit has joined #openstack-infra | 23:26 | |
*** gokrokve_ has joined #openstack-infra | 23:27 | |
*** yassine has quit IRC | 23:27 | |
*** sabari has quit IRC | 23:27 | |
jogo | sdague: cool, any insight from git blame on why it was that way at all | 23:28 |
dhellmann_ | fungi: makes sense -- is that something we can do to the nodes "permanently"? | 23:28 |
*** thuc has joined #openstack-infra | 23:28 | |
fungi | dhellmann_: https://review.openstack.org/75213 | 23:28 |
sdague | jogo: I didn't look | 23:29 |
sdague | but now it's time to make dinner, so I'm probably done for the night | 23:30 |
*** bhuvan has joined #openstack-infra | 23:30 | |
dhellmann_ | fungi: lgtm | 23:30 |
*** gokrokve has quit IRC | 23:30 | |
*** krotscheck has joined #openstack-infra | 23:30 | |
sdague | fungi: can you promote - https://review.openstack.org/#/c/81603/ next time there is a chance | 23:31 |
*** sarob has quit IRC | 23:31 | |
kevinbenton | hi, i just filed this today and i've hit it a couple of times https://bugs.launchpad.net/neutron/+bug/1294715 | 23:31 |
uvirtbot | Launchpad bug 1294715 in neutron ""Build timed out" gate-neutron-python2[6,7]" [Undecided,New] | 23:31 |
devananda | sdague: feel like ticking off a few small devstack patches for ironic? | 23:31 |
*** sarob has joined #openstack-infra | 23:32 | |
*** maxbit_ has joined #openstack-infra | 23:32 | |
*** rhsu1 has quit IRC | 23:32 | |
sdague | fungi: it's #7 in gate resets today | 23:32 |
*** rhsu has joined #openstack-infra | 23:32 | |
sdague | devananda: morning, need to make dinner | 23:32 |
devananda | sdague: ack | 23:32 |
fungi | dhellmann_: oh, testing that shows that it's still not quite right... apparently removing pkg-resources rips out tox | 23:33 |
*** maxbit has quit IRC | 23:33 | |
fungi | sdague: okay | 23:34 |
*** sweston has quit IRC | 23:34 | |
*** sarob has quit IRC | 23:36 | |
*** bknudson has joined #openstack-infra | 23:37 | |
*** harlowja_away is now known as harlowja | 23:38 | |
*** vhoward has joined #openstack-infra | 23:39 | |
*** dkliban has joined #openstack-infra | 23:42 | |
*** jooools has quit IRC | 23:43 | |
pleia2 | clarkb: so I am still having pip issues on fedora preventing it from fully completing the nodepool scripts, puppet on fedora doesn't see the pip that's installed, even though /usr/bin/pip exists and other users can see it | 23:44 |
*** jooools has joined #openstack-infra | 23:45 | |
pleia2 | was working on it this morning, works if the python-pip package in fedora gets installed, need to figure out what the difference is as far as puppet is concerned | 23:45 |
jeblair | pleia2: do we need to revert that change? | 23:46 |
pleia2 | jeblair: no, it's fine as long as we don't have tripleo using it yet | 23:47 |
clarkb | jeblair: no we shouldn't it didn't actually add it to the nodepool.yaml | 23:47 |
jeblair | ok | 23:48 |
*** bhuvan has quit IRC | 23:49 | |
*** maxbit_ has quit IRC | 23:50 | |
clarkb | fungi: jeblair what hotel are people staying in at the summit? | 23:51 |
* clarkb books trip | 23:51 | |
*** maxbit has joined #openstack-infra | 23:51 | |
fungi | clarkb: you want the omni | 23:52 |
clarkb | fungi: you speak from experience? :) | 23:52 |
fungi | clarkb: yeah, i've stayed before. it's not bad | 23:52 |
* clarkb realizes the flight back will be on his birthday... meh | 23:53 | |
fungi | atlanta is a hole (no offense to anyone unfortunate enough to live there) but the omni isn't too bad of a hotel | 23:53 |
fungi | i'm planning to get into atlanta on saturday, but i have to fly to asheville friday afternoon because my brother's getting married on that saturday after the summit | 23:54 |
clarkb | I am probably going to fly in sunday | 23:54 |
fungi | so my friday there at the end will be sort of rushed | 23:54 |
clarkb | and fly out saturday | 23:54 |
Alex_Gaynor | What does one do when a build feels with UNSTABLE? | 23:55 |
fungi | Alex_Gaynor: feel harder | 23:55 |
*** rpodolyaka1 has joined #openstack-infra | 23:55 | |
Alex_Gaynor | fails* :-) | 23:55 |
fungi | Alex_Gaynor: if you have a link, i can try to hunt down why it didn't report | 23:55 |
fungi | assuming it's fresh, the trail in jenkins console logs may not have grown cold yet | 23:55 |
Alex_Gaynor | fungi: https://review.openstack.org/#/c/81319/ | 23:56 |
lifeless | fungi: https://review.openstack.org/#/c/81608/ if you have a minute | 23:56 |
clarkb | ugh alaska flight leaves at 6:50pm from atlanta saturday | 23:56 |
clarkb | meh I can do a late checkout and find BBQ or something | 23:56 |
*** mwagner_dontUseM is now known as mwagner | 23:56 | |
*** rfolco has quit IRC | 23:57 | |
jeblair | clarkb: i'm in a similar situation | 23:57 |
jeblair | and that was pretty much my thought as well. :) | 23:58 |
jeblair | (hey, there's always bbq) | 23:58 |
fungi | there is *always* room for bbq | 23:58 |
fungi | we should just crash a pig pickin' | 23:58 |
jeblair | i'm getting in on saturday so i'm around for any potential board/tc/defcore shenanigans on sunday | 23:58 |
Shrews | fungi++ | 23:58 |
jeblair | but maybe i'll just get bbq then instead. | 23:59 |
*** rpodolyaka1 has quit IRC | 23:59 | |
fungi | sounds like an excellent plan | 23:59 |
pleia2 | ooh, snowing | 23:59 |
fungi | we can probably get Shrews on board with it too, sounds like ;) | 23:59 |