jeblair | mordred: http://tinyurl.com/qd2wcvk | 00:04 |
jeblair | mordred: 900 test minutes per hour peak for all jobs | 00:05 |
fungi | that's crazytown | 00:06 |
jeblair | if that's correct that's about equivalent to 15 machines running full time | 00:06 |
jeblair | (that needs double checking) | 00:07 |
*** johnthetubaguy has joined #openstack-infra | 00:07 | |
*** johnthetubaguy has quit IRC | 00:09 | |
*** Ryan_Lane has quit IRC | 00:13 | |
jeblair | i think it does have a bug. there is a mean that needs to be multiplied by a count. but getting close. | 00:16 |
jog0 | anyone have any insight into the babel 1.0 issue? | 00:17 |
jeblair | sorry im about to board a plane and am on bad wifi. | 00:19 |
jog0 | clarkb: ? | 00:19 |
*** zaro0508 has quit IRC | 00:19 | |
jog0 | jeblair: thanks anyway | 00:19 |
*** Ryan_Lane has joined #openstack-infra | 00:19 | |
*** Ryan_Lane has quit IRC | 00:20 | |
*** Ryan_Lane has joined #openstack-infra | 00:20 | |
jog0 | when I run locally it all works | 00:24 |
reed | rockstar, any news from LOSA? | 00:26 |
rockstar | reed, not yet. I'll let you know. | 00:27 |
rockstar | I'll have some time blocked out with him, so I can iterate with him. | 00:27 |
*** harlowja has joined #openstack-infra | 00:27 | |
fungi | jog0: http://blog.vrplumber.com/b/2013/07/25/pippytz-fails-14-and-2013b/ | 00:27 |
reed | rockstar, do you need me for anything? | 00:27 |
rockstar | reed, nope. I think I've got this. | 00:28 |
jog0 | fungi: ahhhh https://bugs.launchpad.net/openstack-ci/+bug/1205546 | 00:28 |
uvirtbot | Launchpad bug 1205546 in openstack-ci "babel 1.0 dependency pytz isn't found" [Undecided,New] | 00:28 |
jog0 | fungi: so we can blame pip haha | 00:28 |
jog0 | that explains why it worked for me | 00:28 |
jog0 | I have pip 1.3.1 somehow | 00:28 |
reed | rockstar, thanks. I'll hold on to publishing the newsletter until you tell me either abort or go | 00:28 |
jog0 | fungi: now what? | 00:29 |
fungi | jog0: further up in the scrollback it was jocularly remarked by bknudson that "pytz is the pits" | 00:30 |
jog0 | fungi: haha | 00:30 |
fungi | i suppose we need to temporarily cap babel until pytz versioning is fixed or pip 1.4.1 comes? | 00:30 |
jog0 | fungi: that only fixes infra not people using pip 1.4 without our mirrors | 00:31 |
jog0 | unless we cap it everywhere ... | 00:31 |
fungi | or temporarily add pytz==2013b | 00:31 |
jog0 | fungi: once again do we do that everywhere? | 00:31 |
jog0 | maybe just put that in reqs (one of those options) and email people warning them if they use pip 1.4? | 00:32 |
fungi | anywhere tickling that transitive dependency | 00:32 |
fungi | unless anyone can think of something more elegant | 00:32 |
jog0 | fungi: we should have an emergency stub package that allows us to pin things transitively | 00:33 |
jog0 | something like openstack-ohshitpackagesbroke | 00:33 |
fungi | heh. python-openstack-zomg | 00:33 |
jog0 | oh wait that may not work, because we do pip install -U | 00:33 |
fungi | ahh, jog0: https://review.openstack.org/34239 | 00:34 |
jog0 | unless pip is smart enough to not fall apart over that (which I think it may be now that I think about it) | 00:34 |
fungi | seems requirements has pytz>=2010h for a few weeks now | 00:34 |
jog0 | that didn't work because its a babel dep | 00:35 |
jog0 | ohhh so we have to pin babel | 00:35 |
*** ant-tree-ya is now known as anteaya | 00:35 | |
jog0 | so babel is bad boy, | 00:35 |
*** anteaya has quit IRC | 00:36 | |
fungi | or pin pytz in each project of ours which depends on babel before the babel dependency is declared? | 00:36 |
jog0 | fungi: I am not sure if that works | 00:36 |
jog0 | do you have pip 1.4 somewhere? | 00:36 |
jog0 | if so we can test that out | 00:36 |
jog0 | either way i think the easiest thing going forward is pin something in our mirrors and email ppl about 1.4 | 00:37 |
fungi | jog0: i can have pip 1.4 somewhere momentarily, or i can pull a slave out of service and try it | 00:37 |
jog0 | fungi: cool, so try pip install babel | 00:37 |
jog0 | that should fail | 00:37 |
*** bclifford has left #openstack-infra | 00:37 | |
jog0 | then pip install pytz==2013b | 00:37 |
jog0 | then pip install babel again | 00:37 |
jog0 | and see | 00:37 |
* fungi tries that into a venv on precise-dev.slave | 00:37 | |
* jog0 is glad he hasn't moved to the new hotness, pip 1.4 yet on his local box | 00:39 | |
fungi | you know you want this ;) | 00:41 |
jog0 | I am trying the same thing on a VM btw | 00:45 |
jog0 | fungi: it worked | 00:46 |
jog0 | pip install pytz=2013b ; pip install babel | 00:46 |
jog0 | err pytz==2013b | 00:46 |
fungi | just about finished testing in a virtualenv with an upgraded pip inside it | 00:46 |
fungi | Successfully installed babel | 00:48 |
fungi | so virtualenv on our slaves should be similarly happy | 00:48 |
jog0 | yeah | 00:48 |
jog0 | I tried pip install babel==0.9.6 | 00:48 |
*** vipul-away is now known as vipul | 00:48 | |
jog0 | and that works too | 00:48 |
jog0 | I think that is easier | 00:48 |
fungi | gonna try pytz>=2010h | 00:48 |
fungi | like is currently in openstack/requirements | 00:49 |
jog0 | I think that should work too, but the question is, do we think this will be around long enough that we *need* to fix it for everyone or just the gate | 00:49 |
*** DinaBelova has joined #openstack-infra | 00:50 | |
jog0 | as in can we just say people running pip 1.4 use our mirrors or do this step first | 00:50 |
jog0 | or do we have to put this in every project | 00:50 |
fungi | seems to work as well | 00:51 |
jog0 | cool | 00:51 |
jog0 | fungi: have to jet, BBIAB but I am hoping we can at least get openstack-ci back up before monday | 00:52 |
jog0 | I may be back online in a bit | 00:52 |
jog0 | fungi: thanks you rock | 00:52 |
fungi | i get the impression the reason for adding it to openstack/requirements a few weeks ago was so that we could add it to any projects depending on it. new babel simply introduced it as a transitive dependency to things not already listing it | 00:52 |
fungi | jog0: any time | 00:52 |
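For reference, the workaround fungi and jog0 just verified amounts to the following (a minimal sketch, assuming pip 1.4 in a fresh virtualenv; the exact error output will vary):

```sh
# reproduce and work around the Babel 1.0 / pytz / pip 1.4 breakage
pip install Babel            # fails: pip 1.4 treats pytz "2013b" as a pre-release and won't pick it
pip install 'pytz==2013b'    # pinning the pre-release-looking version explicitly lets pip install it
pip install Babel            # succeeds now that the pytz requirement is already satisfied
```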
*** DinaBelova has quit IRC | 00:54 | |
*** blamar has joined #openstack-infra | 00:59 | |
*** nati_ueno has joined #openstack-infra | 01:02 | |
*** prad has joined #openstack-infra | 01:06 | |
fungi | jog0: added you to https://review.openstack.org/38896 seeing if tests succeed | 01:07 |
*** michchap has joined #openstack-infra | 01:08 | |
mordred | jog0: do you grok the pytz issue now? | 01:19 |
mordred | and/or do you need the info? (sorry, my plane landed) | 01:19 |
mordred | jog0, fungi: you're going to need to do this everywhere - and probably file a bug on Babel too | 01:19 |
mordred | the underlying thing is that pip 1.4 does not install pre-release software by default | 01:20 |
mordred | but will if you tell it to | 01:20 |
*** michchap has quit IRC | 01:20 | |
mordred | foo>=1.2a flips on the install-pre-release bit | 01:20 |
mordred | so will work | 01:20 |
mordred | letters in versions are all automatically pre-release | 01:20 |
fungi | makes sense | 01:20 |
mordred | which tags all of pytz's releases as pre-release | 01:20 |
*** michchap has joined #openstack-infra | 01:20 | |
mordred | so pip >= 1.4 will never install just plain pytz | 01:20 |
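In requirements-file terms, the two mitigations being weighed look roughly like this (a sketch; the actual entries under review live in openstack/requirements and the per-project files):

```text
pytz>=2010h        # the letter in the specifier flips pip 1.4's allow-pre-release bit, so 2013b becomes installable
Babel>=0.9.6,<1.0  # alternative: cap Babel so its unversioned pytz dependency is never pulled in
```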
*** nati_uen_ has quit IRC | 01:21 | |
*** mgagne has joined #openstack-infra | 01:21 | |
fungi | i'll have to see if i remember my sourceforge login | 01:22 |
*** mriedem has joined #openstack-infra | 01:24 | |
fungi | opening a bug for them | 01:27 |
*** ^demon has quit IRC | 01:29 | |
*** beanemec is now known as bnemec_away | 01:29 | |
jog0 | fungi: you are handling the babel bug? | 01:32 |
jog0 | mordred: I was thinking there are probably not many people using pip 1.4 yet, so we could pin babel in requirements instead; that would fix everything on infra, then we can fix things properly depending on how fast babel gets it fixed | 01:33 |
jog0 | as babel 0.9.6 doesn't have a pytz dep | 01:34 |
dstufft | isn't pytz on launchpad | 01:35 |
*** mgagne1 has joined #openstack-infra | 01:35 | |
*** mgagne1 has joined #openstack-infra | 01:35 | |
*** dkliban has quit IRC | 01:35 | |
fungi | jog0: i am, but inadvertently opened a bug against something else called babel on sourceforge. opening one against the right babel on github | 01:35 |
fungi | assuming someone else hasn't already | 01:36 |
dstufft | https://bugs.launchpad.net/pytz/+bug/1204837 | 01:36 |
uvirtbot | Launchpad bug 1204837 in pytz "pytz version number (2013b) is considered a pre-release by pip 1.4" [Undecided,New] | 01:36 |
dstufft | fungi: What's the Babel issue? | 01:36 |
jog0 | dstufft: that babel just requires pytz and not a versioned copy | 01:37 |
dstufft | oh | 01:37 |
jog0 | cool https://bugs.launchpad.net/openstack-ci/+bug/1205546 | 01:37 |
uvirtbot | Launchpad bug 1205546 in nova "babel 1.0 dependency pytz isn't found" [Critical,In progress] | 01:37 |
*** mgagne has quit IRC | 01:37 | |
jog0 | I opened that to track this | 01:37 |
dstufft | count down till armin yells at me for breaking his install_requires :| | 01:38 |
Alex_Gaynor | Heh | 01:39 |
fungi | pytz could solve this by releasing 2013.0.1 or something | 01:39 |
jog0 | fungi: so in your opinion what is the best approach to unblock all of openstack? | 01:40 |
mordred | I think we should pin babel | 01:40 |
mordred | and/or add pytz to nova's list before Babel | 01:41 |
fungi | mordred: i already uploaded the latter as a review for nova on that lp bug | 01:41 |
mordred | fungi: great. | 01:42 |
jog0 | well then we have to fix every other project ... | 01:42 |
mordred | hrm | 01:42 |
fungi | at least until pytz and/or babel fix themselves | 01:42 |
jog0 | fungi: right, I doubt they will be faster than us | 01:42 |
mordred | we could pin Babel in the requirements file | 01:43 |
fungi | because unlike us, they take their weekends off ;) | 01:43 |
mordred | (will have to to land the nova pin patch anyway) | 01:43 |
mordred | and then remove Babel 1.0 from our mirror | 01:43 |
jog0 | mordred: not if we assume no one uses pip 1.4 yet | 01:43 |
mordred | the gate uses 1.4 | 01:43 |
dstufft | https://github.com/pypa/pip/issues/974 here's relevant information for linking to people | 01:43 |
mordred | wait- what thing were you responding to? | 01:43 |
jog0 | and gate uses our mirrors which we can pin | 01:43 |
jog0 | as a quick and dirty get the code flowing | 01:44 |
fungi | the temporary versioned pytz dependency has the benefit of not having to muck with the mirror contents | 01:44 |
mordred | jog0: I meant we have to land the change to openstack/requirements | 01:44 |
jog0 | mordred: ahh yeah | 01:44 |
mordred | adding pytz is 'easy' - but we would have to land the change on every project we have that uses Babel | 01:44 |
jog0 | mordred: I will propose that patch so we have the option | 01:44 |
fungi | dstufft: thanks, i'll add that to the upstream babel bug i opened | 01:44 |
mordred | and then we'd have temporary depends in all the projects | 01:44 |
mordred | that we'd want to remember to go remove | 01:44 |
mordred | honestly, in the world of yuck- I think deleting Babel 1.0.0 from our mirror is a better short-term choice | 01:45 |
mordred | because the roll-it-back-out step is really low cost | 01:45 |
fungi | mordred: but that leaves our projects uninstallable with current pip for anyone not using our mirror | 01:45 |
mordred | this is true | 01:45 |
mordred | dammit | 01:45 |
fungi | unless we go and add that babel cap to all the projects where we'd instead add the temporary pytz minimum version entry | 01:46 |
mordred | yeeks | 01:46 |
jog0 | mordred: true but a simple if you use pip 1.4 do this .tox/bin/pip install pytz=... | 01:46 |
fungi | how many of our projects currently use babel without a version cap? lots? | 01:46 |
mordred | fungi: I'm going to say all | 01:46 |
fungi | k | 01:46 |
dstufft | FWIW pip 1.4 has 26k downloads from PyPI and Virtualenv 1.10 (which has pip 1.4 bundled) has 11k | 01:47 |
jog0 | dstufft: how long has pip 1.4 been out for | 01:48 |
Alex_Gaynor | And users with them I suspect are more likely to try to install OpenStack repos on a weekend than average | 01:48 |
jog0 | oh 3 days | 01:48 |
dstufft | jog0: 3 days? | 01:48 |
dstufft | give or take some hours | 01:48 |
*** melwitt has quit IRC | 01:49 | |
mordred | ok. SO ... | 01:49 |
mordred | I think we have to do both things | 01:49 |
mordred | we need to remove Babel from our mirror and pin requirements so that we can _land_ the other patches | 01:49 |
mordred | then I think we need to land patches to the projects either pinning Babel or pinning pytz | 01:50 |
mordred | to unbreak people who are not in our gate | 01:50 |
*** michchap has quit IRC | 01:50 | |
dstufft | historically pip has near immediate uptake as far as which versions people download (e.g. once a near version is released downloads for old versions basically fall off to near zero). Unknown how many people just don't bother upgrading | 01:50 |
dstufft | s/near/new/ | 01:50 |
*** michchap has joined #openstack-infra | 01:50 | |
jog0 | hmm not many things use Babel somehow | 01:51 |
jog0 | ceilo, cinder, heat neutron, nova and keystoneclient | 01:51 |
mordred | http://paste.openstack.org/show/42079/ | 01:51 |
jog0 | ignore me | 01:52 |
jog0 | that wasn't a full list | 01:52 |
mordred | ironic and nova are the only problem children | 01:52 |
fungi | mordred: are you saying the pytz entry in each package won't help by itself? | 01:52 |
mordred | for people consuming not on our mirror | 01:52 |
mordred | because the other uses are in test-requirements | 01:52 |
mordred | we have a nova core here, so we can land a fix to nova | 01:53 |
jog0 | o/ | 01:53 |
mordred | no offense, but I think if ironic is broken due to a req for a weekend, it will not impact anyway | 01:53 |
mordred | anyone | 01:53 |
mordred | so I'm back to - let's fix our mirror, then fix nova | 01:53 |
dstufft | Sorry for breaking this for you guys. In my defense the pytz guy knew about this 2 months ago :[ | 01:53 |
mordred | dstufft: hey man, I put the pytz pin in our file in prep for this a while ago | 01:54 |
mordred | Babel is the one who broke us | 01:54 |
fungi | mordred: in that order for expediency? because i think nova will be able to land 38896 unless i've mischaracterized the issue | 01:54 |
Alex_Gaynor | mordred: If it'd be helpful I can blast the rackspace dev-list asking for any openstack core reviewers to come help out? I imagine dstufft can do the same for nebula if he's comfortable and you think it'd be useful | 01:55 |
mordred | jog0: any clue which version we should pin to? >=0.9.6,<1.0.0 ? | 01:55 |
*** erfanian has joined #openstack-infra | 01:55 | |
mordred | Alex_Gaynor: nah, I think we can land the nova change most likely | 01:55 |
mordred | and that's the only one we really need help with | 01:55 |
Alex_Gaynor | okey doke | 01:55 |
openstackgerrit | Joe Gordon proposed a change to openstack/requirements: Pin Babel to <1.0 since it doesn't play well with pip 1.4 https://review.openstack.org/38898 | 01:55 |
dstufft | Yea I can if folks need it but looks like you got it covered | 01:55 |
jog0 | mordred: yeah I checked pytz isn't a dep in 0.9.6 | 01:55 |
fungi | jog0: trying that as an alternative to 38896 or should we do both? | 01:56 |
fungi | oh, right, that's to requirements | 01:56 |
jog0 | fungi: I agree with mordred both | 01:56 |
fungi | yeah, me too. this will get things moving again | 01:56 |
jog0 | so the full list: cinder, ceilo, heat, ironic, keystone, neutron, nova, keystoneclient | 01:56 |
fungi | not as many as i expected | 01:57 |
jog0 | fungi: yeah | 01:57 |
dstufft | mordred: I expect similar fallout to 1.5 as a lot of people probably won't start hosting on PyPI until pip is failing by default | 01:57 |
dstufft | FWIW | 01:57 |
jog0 | I guess we don't translate clients | 01:57 |
mordred | ok. mirror cleaned | 01:58 |
mordred | and the mirror builders are cleaned from their caches | 01:58 |
jog0 | so should I go ahead and A+ the nova patch | 01:58 |
jog0 | as well | 01:58 |
mordred | yah | 01:58 |
jog0 | kk | 01:58 |
jog0 | can you +1 it too | 01:58 |
mordred | will it pass though? | 01:58 |
jog0 | mordred: fungi and I tried that approach out | 01:59 |
mordred | I think requirements has to land before the nova change .. | 01:59 |
mordred | oh - are you doing the pytz pin in nova? | 01:59 |
mordred | I'd think that you'd want the Babel pin so that we know what's going on, no? | 01:59 |
mordred | or - whichever, honestly | 01:59 |
jog0 | fungi: https://jenkins.openstack.org/job/gate-nova-pep8/34547/console haha | 01:59 |
jog0 | mordred: good point | 02:00 |
*** ^demon has joined #openstack-infra | 02:00 | |
mordred | HAHAHAHAHAHA | 02:00 |
mordred | jog0: I think you wrote that hacking check too | 02:00 |
jog0 | lets pin babel everywhere to make it easier to debug | 02:00 |
mordred | ++ | 02:00 |
mordred | I've aprv'd the Babel change in requirements | 02:00 |
fungi | okay, i'll abandon my change | 02:00 |
mordred | I'm kinda tempted to super-user approve it bypassing the gate - but that's just because I'm impatient | 02:01 |
fungi | or repropose it with the babel pin instead i guess | 02:01 |
jog0 | fungi: repropose with babel | 02:01 |
fungi | doing | 02:02 |
jog0 | so when I A+ it, its not all coming from me | 02:02 |
mordred | fungi: btw - COMPLETELY unrelated - https://review.openstack.org/#/c/38871/ - I did plane hacking | 02:02 |
jog0 | and do forget the period (btw that test wasn't me) | 02:02 |
openstackgerrit | A change was merged to openstack/requirements: Pin Babel to <1.0 since it doesn't play well with pip 1.4 https://review.openstack.org/38898 | 02:02 |
mordred | jog0: you might find that patch interesting too | 02:02 |
mordred | w00t! | 02:02 |
jog0 | mordred: puppet? | 02:03 |
mordred | jog0: that's step one in making out devstack-gate nodes using diskimage-builder | 02:03 |
mordred | our | 02:03 |
mordred | which, once done, will mean there will be downloadable copies of the image that runs d-g | 02:04 |
mordred | so that debugging locally in a kvm or something will be easy :) | 02:04 |
jog0 | d-g is that god spelled backwards without the o? | 02:04 |
mordred | yes | 02:04 |
jog0 | mordred: ohh nice | 02:04 |
jog0 | (to the download images and kvm) | 02:05 |
mordred | also, it's devstack-gate without as many letters | 02:05 |
jog0 | mordred: ahh | 02:05 |
mordred | but now I'm thinking we should re-name d-g to god | 02:05 |
jog0 | fungi: let me know when your patch is up and I will A+ | 02:05 |
mordred | but spell it g-d - or gate-devstack for short | 02:05 |
fungi | jog0: already there | 02:05 |
jog0 | fungi: title is wrong | 02:06 |
jog0 | pin babel | 02:06 |
fungi | yep, just spotted that | 02:06 |
fungi | fixing | 02:06 |
fungi | there now | 02:07 |
*** markmcclain has quit IRC | 02:08 | |
jog0 | thanks, mordred can you +1 that | 02:08 |
*** ^demon has quit IRC | 02:08 | |
mordred | yes. link? | 02:08 |
fungi | https://review.openstack.org/38896 | 02:09 |
jog0 | or any other infra, so its at least not just me https://review.openstack.org/#/c/38896/ | 02:09 |
mordred | +!'d | 02:09 |
mordred | +1'd | 02:09 |
mordred | it's too bad I let my nova core status lapse | 02:09 |
fungi | !!!1!!1eleven | 02:09 |
openstack | fungi: Error: "!!1!!1eleven" is not a valid command. | 02:09 |
fungi | openstack: it should be | 02:10 |
jog0 | dansmith: is around | 02:11 |
fungi | i'll get started proposing the same to the other listed affected projects if someone isn't already | 02:11 |
jog0 | and getting him to do a proper second review | 02:12 |
dansmith | it's done | 02:12 |
mordred | oh wow. nice | 02:12 |
mordred | it's almost like none of us take time off! :) | 02:12 |
Alex_Gaynor | So now we have to wait for it to work its way up the review stack | 02:13 |
jog0 | I need to run out and buy some whisky soon actually, but this took priority | 02:13 |
jog0 | Alex_Gaynor: the openstack mirror is fixed | 02:13 |
jog0 | so recheck away | 02:13 |
fungi | jog0: but now you need twice as much whiskey | 02:13 |
*** koolhead17 has quit IRC | 02:13 | |
Alex_Gaynor | jog0: no joke, my patch didn't make its way up to the top yet because stuff was so slow | 02:13 |
Alex_Gaynor | jog0: is it recheck for gate jobs? I thought there was some other command | 02:14 |
Alex_Gaynor | reverify, that's it | 02:14 |
jog0 | yeah | 02:16 |
jog0 | woot things are going green http://status.openstack.org/zuul/ | 02:17 |
jog0 | ok time to get that whisky | 02:17 |
mordred | jog0: enjoy! | 02:18 |
clarkb | jeblair: ++ to that graph | 02:18 |
clarkb | why is there so much sb on friday evening. I grab a couple beers and this happens | 02:18 |
*** hartsocks has quit IRC | 02:19 | |
Alex_Gaynor | the gating system totally needs a "Your estimated wait time is N minutes" display | 02:20 |
clarkb | Alex_Gaynor: https://review.openstack.org/#/c/38598/ | 02:21 |
clarkb | Alex_Gaynor: tl;dr maybe next week | 02:21 |
Alex_Gaynor | clarkb: awesome. | 02:21 |
fungi | clarkb: beat me to the link | 02:21 |
Alex_Gaynor | clarkb: You know this is just encouraging me to propose harebrained ideas right? | 02:21 |
fungi | mordred: jog0: can one of you confirm i didn't miss adding an effected project to https://launchpad.net/bugs/1205546 | 02:22 |
uvirtbot | Launchpad bug 1205546 in nova "babel 1.0 dependency pytz isn't found" [Critical,In progress] | 02:22 |
fungi | er, affected | 02:22 |
fungi | getting started uploading changes for each of those now | 02:23 |
*** vipul is now known as vipul-away | 02:25 | |
clarkb | fungi: nova, glance, swift, ? | 02:25 |
fungi | clarkb: nova's there. glance and swift don't seem to be the ones jog0 mentioned above | 02:26 |
fungi | but let me know if you see babel dependencies specified in them | 02:27 |
fungi | or i'll check once i get these knocked out | 02:27 |
clarkb | fungi: I think they both use babel | 02:27 |
clarkb | maybe | 02:27 |
clarkb | also wow babel made a new release? | 02:27 |
jog0 | fungi: that looks right, I only checked repos under github.com/openstack | 02:28 |
*** sdake_ has joined #openstack-infra | 02:29 | |
fungi | jog0: thanks | 02:30 |
mgagne1 | clarkb: mitsuhiko took over the maintenance of Babel | 02:31 |
*** mgagne1 is now known as mgagne | 02:31 | |
mgagne | clarkb: Armin Ronacher in fact | 02:32 |
jog0 | fungi: still seeing failures http://status.openstack.org/zuul/ | 02:32 |
*** changbl has joined #openstack-infra | 02:33 | |
jog0 | does it take time for the Babel change in mirror to propagate out? | 02:33 |
mordred | it shouldn't | 02:34 |
jog0 | because dansmith is still angry | 02:34 |
fungi | jog0: are those new or did those jobs run before we fixed the mirror? | 02:34 |
mordred | I don't like it when dansmith is angry | 02:34 |
dansmith | um, my *patches* are still angry :) | 02:34 |
dansmith | it's friday, so I'm happy :) | 02:34 |
fungi | dansmith: that's a healthy outlook | 02:35 |
mordred | oh. crappit. it's back in the mirror | 02:35 |
jog0 | fungi: not sure but the mirror change is in and these are still running | 02:35 |
jog0 | mordred: haha | 02:35 |
* mordred cries | 02:35 | |
fungi | mordred: was there perhaps a mirror job running when you cleaned the mirror copy? | 02:35 |
mordred | maybe I missed a cached file somewhere | 02:35 |
mordred | fungi: yeah. or that | 02:35 |
jog0 | btw my browser thinks http://pypi.openstack.org/openstack/ is danish | 02:36 |
mordred | ok. removed from the mirror again | 02:36 |
jog0 | will it stay removed this time? | 02:37 |
mordred | and removed from the mirror builder caches again | 02:37 |
mordred | sorry bout that | 02:37 |
dstufft | babel refuses to be constrained by your petty attempts | 02:40 |
jog0 | now if only we had a way to use logstash to find out which jobs failed because of this and autoretry all of em | 02:40 |
Alex_Gaynor | I can imagine a thing to scrape all reviews in the last 8 hours and do things. | 02:40 |
jog0 | I think you just volunteered | 02:41 |
dstufft | I saw it | 02:41 |
mordred | jog0: I believe if clarkb was here, you could TOTALLY do that with logstash | 02:41 |
Alex_Gaynor | It only looked like that, but I didn't. | 02:41 |
Alex_Gaynor | I think clarkb was definitely just volunteered though | 02:42 |
mordred | also, I feel like fungi has scripts to retry things ... | 02:42 |
Alex_Gaynor | mordred: do the test runners have use-wheels = true yet? | 02:42 |
mordred | Alex_Gaynor: not yet | 02:42 |
mordred | that's coming | 02:42 |
dansmith | there is a zuul tool that will recheck everything currently in a pipeline | 02:42 |
mordred | I wanted to get one cycle of madness done before we started down that road | 02:42 |
Alex_Gaynor | dstufft: have you done a timing of use-wheels = true vs. false for openstack/requirements? | 02:43 |
jog0 | dansmith: nice, we also want things out of the pipeline | 02:43 |
jog0 | aka all of your patches | 02:43 |
dansmith | all of mine are in the pipeline right now | 02:43 |
fungi | mordred: i do not, in fact. but it would just be a matter of multiple gerrit review api calls | 02:43 |
dstufft | Alex_Gaynor: nope | 02:43 |
fungi | iterated from a list of change numbers | 02:43 |
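A minimal sketch of what fungi is describing (host and port are the usual OpenStack Gerrit ones; the change numbers and comment text are illustrative, and whether to use recheck or reverify depends on the pipeline, as noted earlier):

```sh
# post a recheck comment on a list of changes via the Gerrit ssh API
for change in 38896 38898; do          # change numbers are illustrative
    ssh -p 29418 review.openstack.org \
        gerrit review --message '"recheck bug 1205546"' "${change},1"
done
```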
Alex_Gaynor | dstufft: how do I disable use-wheel without commenting it out? | 02:44 |
mordred | Alex_Gaynor: I expect it will be several minutes quicker | 02:44 |
Alex_Gaynor | mordred: sounds about right, that should be nice | 02:44 |
dstufft | Alex_Gaynor: edit the file and set it to false? | 02:44 |
Alex_Gaynor | dstufft: ... | 02:44 |
dstufft | Alex_Gaynor: hey that's without commenting it out ;P | 02:44 |
Alex_Gaynor | what do normal people do at 8PM on a friday? | 02:44 |
mordred | I don't know! | 02:44 |
*** zehicle_at_dell has joined #openstack-infra | 02:44 | |
mordred | I mean, it's 10:45pm here - which is WAY too early to go out drinking | 02:44 |
dstufft | I dunno but probably the same thing they are doing at 11 | 02:45 |
mordred | I think the first step for wheel is that I want to change our mirror builder to build wheels of everything and upload those | 02:45 |
mordred | because I believe mirror builder is working on 1.4 already | 02:45 |
mordred | I also want to modify out publish-to-pypi jobs to publish wheels | 02:46 |
mordred | our | 02:46 |
dstufft | mordred: universal = 1 | 02:46 |
Alex_Gaynor | Oooh, yes please | 02:46 |
mordred | dstufft: aroo? | 02:46 |
dstufft | mordred: wheels are specific to the python you create them with, unless you set universal in the setup.cfg | 02:47 |
mordred | you're kidding | 02:47 |
dstufft | mordred: no. They are a binary file format | 02:47 |
mordred | even if they are pure-python? | 02:47 |
dstufft | mordred: yes | 02:47 |
mordred | awesome | 02:47 |
mordred | is there a way to set that at build time? | 02:47 |
Alex_Gaynor | any reason not to just put it in setup.cfg, we already use that format? | 02:48 |
mordred | so - I can add universal = 1 to setup.cfg for all of our stuff, so that when we upload, we upload universal stuff | 02:48 |
mordred | for stuff from pypi of which we are building wheels for our mirror consumption | 02:48 |
mordred | we actually do a py26 and a py27 run already | 02:48 |
mordred | so those being non-universal is probably fine | 02:48 |
dstufft | https://bitbucket.org/dholth/wheel/src/a160305fb0624a2b54f7bad6443fcc007e07bc8c/setup.cfg?at=default | 02:49 |
dstufft | ^ example use | 02:49 |
uvirtbot | dstufft: Error: "example" is not a valid command. | 02:49 |
mordred | I will slightly apologize to anyone on osx - I can't help you | 02:49 |
dstufft | I think it's smart enough not to make a platform specific wheel | 02:49 |
*** Ryan_Lane has quit IRC | 02:49 | |
dstufft | for pure python | 02:49 |
mordred | neat | 02:49 |
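For reference, the setup.cfg knob dstufft is pointing at looks like this (a sketch based on the example he linked; the section has been spelled [wheel] in the wheel project's own setup.cfg, and later tooling also accepts [bdist_wheel]):

```ini
# setup.cfg sketch for a pure-Python project (project layout assumed)
[wheel]
universal = 1    ; bdist_wheel tags the result py2.py3-none-any instead of per-interpreter
```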
mordred | dstufft: what's consuming that metadata section in there. anything yet? | 02:50 |
dstufft | I think the universal flag is because it doesn't know if you have any sort of transformation like 2to3 | 02:50 |
dstufft | mordred: I think bdist_wheel does. | 02:50 |
mordred | neat. I should make sure our metadata is compat | 02:50 |
*** DinaBelova has joined #openstack-infra | 02:51 | |
dstufft | it allows specifying conditional dependencies as well | 02:51 |
mordred | yah. does pip grok those yet? | 02:51 |
fungi | okay, all changes uploaded now... https://review.openstack.org/#/q/status:open+topic:bug/1205546,n,z | 02:52 |
clarkb | logstash is too far behind and you really want to query gerrit for that info anyways | 02:52 |
*** changbl has quit IRC | 02:52 | |
fungi | i didn't bother with stable branches of anything yet | 02:52 |
jog0 | clarkb: you can check if pep8 py26 and py27 failed | 02:53 |
jog0 | if so its prob this bug | 02:53 |
fungi | maybe one of the upstreams will obviate the issue before we need another stable patch to something | 02:53 |
clarkb | jog0: not until tomorrow :( logstash is about 20 hours behind and the monmeth | 02:53 |
clarkb | *moment | 02:53 |
dstufft | mordred: Pip does not use a setup.cfg and the Wheel format does not include the setup.cfg in the metadata | 02:53 |
mordred | dstufft: ok. so, what processes the conditional requirements? | 02:53 |
dstufft | mordred: wheel will read the setup.cfg and use it to feed the Wheel format | 02:53 |
jog0 | clarkb: you can use the gerrit api | 02:53 |
mordred | wheel? | 02:53 |
jog0 | anyway i'm out for the weekend | 02:54 |
dstufft | the wheel format supports conditional requirements afaik | 02:54 |
dstufft | it's not stored as setup.cfg inside the wheel though | 02:54 |
jog0 | have a good one everybody | 02:54 |
dstufft | setup.cfg is just an input is what I'm trying to say | 02:54 |
*** jog0 is now known as jog0-up-up-and-a | 02:55 | |
dstufft | I'm looking to see if pip understands conditional reqs from a wheel | 02:55 |
*** jog0-up-up-and-a is now known as jog0-away | 02:55 | |
*** DinaBelova has quit IRC | 02:55 | |
mordred | so in _theory_ if I only published wheels, I could start getting conditional reqs | 02:56 |
clarkb | were we giving pytz a version or pinning babel? | 02:57 |
* mordred no have current use for this - just grokking | 02:57 | |
mordred | clarkb: pinning babel | 02:57 |
clarkb | I mean I think I would be ok with the same version of babel that has been around for the last 5 years | 02:57 |
Alex_Gaynor | for at least a few days | 02:57 |
mordred | clarkb: yeah. we'll unpin it when it fixes its pytz depend | 02:57 |
clarkb | also why do people do releases on friday? | 02:58 |
mordred | because | 02:58 |
mordred | chicken butt | 02:58 |
dstufft | mordred: you can emulate conditional reqs in a setup.py too | 02:59 |
dstufft | just do the same conditions in code | 02:59 |
mordred | totally | 02:59 |
mordred | was just wondering what the current state of the art is | 02:59 |
dstufft | make them match and you have conditional requirements in all systems, and in wheel it's all without executing code | 02:59 |
mordred | I mean, we already process those files ... we could totally do markerlib matching same as wheel | 03:00 |
mordred | but for now - extra complex - not needed - too many changes already - aaaaaa | 03:01 |
fungi | aaaaaatotally | 03:02 |
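A sketch of the "same conditions in code" approach dstufft mentions (package names and the condition are illustrative, not taken from any particular OpenStack project):

```python
# setup.py sketch: emulate a conditional requirement imperatively
import sys
from setuptools import setup

install_requires = ['Babel>=0.9.6']
if sys.version_info < (2, 7):
    # the same condition could be expressed declaratively for wheel metadata
    install_requires.append('ordereddict')

setup(name='example', version='0.1', install_requires=install_requires)
```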
mordred | ooh! I see green things in zuul | 03:02 |
fungi | slimy green things? | 03:03 |
* mordred wants jeblair's new status bar patch | 03:03 | |
*** mriedem has quit IRC | 03:05 | |
*** erfanian has quit IRC | 03:09 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Add branch tarball job for tempest. https://review.openstack.org/38910 | 03:16 |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Clean local jeepyb repo when installing/upgrading https://review.openstack.org/38066 | 03:23 |
*** afazekas_zz is now known as afazekas | 03:34 | |
mikal | I take it from scroll back that the gate, she is broken? | 03:34 |
fungi | mikal: the gate she should be fixed now | 03:39 |
fungi | babel decided to release a new version with an unversioned dependency on pytz, which breaks under latest pip | 03:40 |
*** zehicle_at_dell has quit IRC | 03:44 | |
Alex_Gaynor | grumble, tests that fail non-deterministically really screw up the efficiency of gating | 03:48 |
*** ladquin has quit IRC | 03:48 | |
*** DinaBelova has joined #openstack-infra | 03:51 | |
fungi | Alex_Gaynor: yes, no argument there. bad tests should either be fixed or removed | 03:51 |
Alex_Gaynor | fungi: I guess any failure really, it costs you an hour per failure basically. We should have a test performance hackday or something. | 03:52 |
fungi | yup | 03:52 |
Alex_Gaynor | I wonder if there's some optimization that could be done so that you don't lose everything after the failure as well. Hmmm | 03:53 |
fungi | mordred: no luck on https://review.openstack.org/38066 with removing the cached eggs. so far the only solution i've found is manually upgrading pbr first. even pip install -U . is broken | 03:54 |
fungi | Alex_Gaynor: well, you don't want to continue testing subsequent changes on top of one you know is being ejected from the series | 03:55 |
*** DinaBelova has quit IRC | 03:55 | |
Alex_Gaynor | fungi: well, you don't know it's being ejected until it's at the top of stack | 03:56 |
fungi | Alex_Gaynor: yeah, but the one failing furthest up will either be ejected because those in front of it succeed, or will be restarted because one ahead of it failed | 03:57 |
Alex_Gaynor | right, but if something down the stack fails, as happens with non-deterministic ones, everything after it gets cancelled | 03:58 |
Alex_Gaynor | and then rerun after the failing change becomes ToS | 03:58 |
fungi | which is inevitable anyway, so no point in continuing to waste computing resources running a test which will have to be discarded and rerun | 03:58 |
fungi | if at least one change ahead of you is failing, your tests will be rerun because either it will fail to merge *or* it will succeed on a retry because one ahead of it fails to merge. either way the list of patches ahead of you is guaranteed to change before you merge so you need to rerun | 03:59 |
fungi | and with the current single-branch-prediction model, you have to wait to rerun until you know which patch ahead of you is going to get removed from the series | 04:01 |
fungi | we've talked about multiple-branch prediction, but that would ultimately multiply the combinations of changes being speculatively tested and use even more resources | 04:02 |
Alex_Gaynor | Right, you need more cloud then | 04:02 |
Alex_Gaynor | better wall clock, more CPU time | 04:03 |
fungi | it's a trade-off, plus more code | 04:03 |
Alex_Gaynor | more code is the most compelling argument :) | 04:03 |
fungi | not necessarily out of the cards though, just not something we have today and needs further discussion | 04:03 |
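A self-contained toy model of the single-branch speculation being discussed (illustrative pseudologic only, not Zuul's actual code):

```python
queue = ['A', 'B', 'C', 'D']   # dependent pipeline order; 'A' is at the top of stack (ToS)
failed = {'B'}                 # 'B' hit a non-deterministic failure

def needs_rerun(change):
    """Anything behind a failing change has its results discarded and rerun,
    but only once the failing change reaches ToS and is ejected."""
    ahead = queue[:queue.index(change)]
    return any(c in failed for c in ahead)

print([c for c in queue if needs_rerun(c)])   # -> ['C', 'D']
```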
*** leifmadsen has quit IRC | 04:06 | |
*** pcrews has joined #openstack-infra | 04:22 | |
*** SergeyLukjanov has joined #openstack-infra | 04:26 | |
*** mgagne has quit IRC | 04:46 | |
*** DinaBelova has joined #openstack-infra | 04:52 | |
*** DinaBelova has quit IRC | 04:57 | |
SpamapS | 2013-07-27 01:18:33.592 | Downloading/unpacking pytz (from babel) | 04:57 |
SpamapS | 2013-07-27 01:18:33.592 | Could not find a version that satisfies the requirement pytz (from babel) (from versions: 2012c, 2012c, 2012c, 2012d, 2012d, 2012d, 2012f, 2012f, 2012f, 2012g, 2012g, 2012g, 2012h, 2012h, 2012j, 2012j, 2012j, 2013b, 2013b, 2013b, 2009r, 2008b, | 04:57 |
SpamapS | Known bug? | 04:57 |
SpamapS | https://bugs.launchpad.net/openstack-ci/+bug/1205546 | 04:57 |
uvirtbot | Launchpad bug 1205546 in python-keystoneclient "babel 1.0 dependency pytz isn't found" [Critical,In progress] | 04:57 |
SpamapS | ahh n/m | 04:57 |
* fungi nods | 04:59 | |
fungi | we all dogpiled onto it fairly quickly, but someone will need to round up a few core reviewers for the various projects affected | 05:01 |
SpamapS | di-b is affected too.. making patches now | 05:01 |
*** reed has quit IRC | 05:04 | |
fungi | yeah, we scanned for affected stuff in the openstack org hierarchy but didn't look for other projects. this is likely a very short-term situation until pytz and/or babel address it properly | 05:07 |
fungi | in the meantime, enjoy the round of ugly hacks | 05:07 |
*** sarob has joined #openstack-infra | 05:09 | |
*** sarob_ has joined #openstack-infra | 05:11 | |
*** sarob has quit IRC | 05:11 | |
*** nati_ueno has quit IRC | 05:12 | |
*** vogxn has joined #openstack-infra | 05:16 | |
*** pcrews has quit IRC | 05:21 | |
SpamapS | hrm, hacking seems to be confused | 05:28 |
*** yaguang has joined #openstack-infra | 05:28 | |
SpamapS | ./build/lib.linux-x86_64-2.7/os_apply_config/apply_config.py:26:1: H302 import only modules.'from os_apply_config import collect_config' does not import a module | 05:28 |
SpamapS | except.. it does | 05:28 |
*** yaguang has quit IRC | 05:37 | |
SpamapS | ah picking up a build dir.. weird | 05:37 |
*** Ryan_Lane has joined #openstack-infra | 05:39 | |
*** yaguang has joined #openstack-infra | 05:49 | |
*** yaguang has quit IRC | 05:51 | |
*** yaguang has joined #openstack-infra | 05:51 | |
*** DinaBelova has joined #openstack-infra | 05:52 | |
*** DinaBelova has quit IRC | 05:57 | |
*** shhu has joined #openstack-infra | 05:57 | |
*** westmau5 is now known as westmaas | 06:10 | |
*** dkehn has quit IRC | 06:34 | |
*** dkehn has joined #openstack-infra | 06:35 | |
*** dkehn has joined #openstack-infra | 06:36 | |
openstackgerrit | Sergey Lukjanov proposed a change to openstack-dev/pbr: Improve AUTHORS file generation https://review.openstack.org/37625 | 06:44 |
*** pcrews has joined #openstack-infra | 06:49 | |
*** sarob_ has quit IRC | 06:50 | |
*** sarob has joined #openstack-infra | 06:51 | |
*** DinaBelova has joined #openstack-infra | 06:53 | |
*** sarob has quit IRC | 06:56 | |
*** DinaBelova has quit IRC | 06:57 | |
*** jjmb has joined #openstack-infra | 06:59 | |
openstackgerrit | Sergey Lukjanov proposed a change to openstack-dev/pbr: Fix .mailmap file search location https://review.openstack.org/37562 | 07:00 |
*** SergeyLukjanov has quit IRC | 07:01 | |
*** DinaBelova has joined #openstack-infra | 07:02 | |
*** pcrews has quit IRC | 07:03 | |
*** plomakin has quit IRC | 07:03 | |
*** mirrorbox has quit IRC | 07:03 | |
*** ogelbukh has quit IRC | 07:03 | |
*** sshturm__ has quit IRC | 07:03 | |
*** ogelbukh has joined #openstack-infra | 07:04 | |
*** sshturm__ has joined #openstack-infra | 07:04 | |
*** mirrorbox has joined #openstack-infra | 07:05 | |
*** plomakin has joined #openstack-infra | 07:05 | |
*** mirrorbox has quit IRC | 07:06 | |
*** mirrorbox has joined #openstack-infra | 07:06 | |
*** jjmb has quit IRC | 07:31 | |
*** afazekas has quit IRC | 07:34 | |
*** ogelbukh has quit IRC | 07:36 | |
*** mirrorbox has quit IRC | 07:36 | |
*** plomakin has quit IRC | 07:36 | |
*** sshturm__ has quit IRC | 07:36 | |
*** sshturm__ has joined #openstack-infra | 07:36 | |
*** plomakin has joined #openstack-infra | 07:36 | |
*** ogelbukh has joined #openstack-infra | 07:37 | |
*** mirrorbox has joined #openstack-infra | 07:37 | |
*** afazekas has joined #openstack-infra | 07:38 | |
*** dkliban has joined #openstack-infra | 07:40 | |
*** Ryan_Lane has quit IRC | 07:45 | |
*** dkliban has quit IRC | 07:46 | |
*** dkliban has joined #openstack-infra | 07:52 | |
*** vogxn has quit IRC | 07:55 | |
*** SergeyLukjanov has joined #openstack-infra | 08:04 | |
*** DinaBelova has quit IRC | 08:07 | |
*** dkliban has quit IRC | 08:12 | |
*** michchap has quit IRC | 08:14 | |
*** michchap has joined #openstack-infra | 08:15 | |
*** UtahDave has joined #openstack-infra | 08:15 | |
*** afazekas has quit IRC | 08:19 | |
*** afazekas has joined #openstack-infra | 08:23 | |
*** dkliban has joined #openstack-infra | 08:41 | |
*** dkliban has quit IRC | 08:50 | |
*** ogelbukh has quit IRC | 08:51 | |
*** ogelbukh has joined #openstack-infra | 08:51 | |
*** fbo_away is now known as fbo | 08:57 | |
*** fbo is now known as fbo_away | 08:58 | |
*** DinaBelova has joined #openstack-infra | 09:07 | |
*** DinaBelova has quit IRC | 09:12 | |
*** DinaBelova has joined #openstack-infra | 09:18 | |
*** SergeyLukjanov has quit IRC | 09:19 | |
*** DinaBelova has quit IRC | 09:23 | |
*** afazekas has quit IRC | 09:26 | |
*** afazekas has joined #openstack-infra | 09:31 | |
*** yaguang has quit IRC | 09:32 | |
*** fbo_away is now known as fbo | 10:08 | |
*** fbo is now known as fbo_away | 10:08 | |
*** DinaBelova has joined #openstack-infra | 10:19 | |
*** DinaBelova has quit IRC | 10:24 | |
*** DinaBelova has joined #openstack-infra | 11:19 | |
*** DinaBelova has quit IRC | 11:24 | |
*** jjmb has joined #openstack-infra | 11:43 | |
*** UtahDave has quit IRC | 11:49 | |
*** w_ has joined #openstack-infra | 11:51 | |
*** olaph has quit IRC | 11:53 | |
*** DinaBelova has joined #openstack-infra | 11:57 | |
*** DinaBelova has quit IRC | 12:02 | |
*** DinaBelova has joined #openstack-infra | 12:15 | |
*** hashar has joined #openstack-infra | 12:15 | |
*** jjmb has quit IRC | 12:16 | |
*** hughsaunders_ has joined #openstack-infra | 12:20 | |
*** hughsaunders has quit IRC | 12:21 | |
*** hughsaunders_ is now known as hughsaunders | 12:21 | |
*** DinaBelova has quit IRC | 12:27 | |
*** DinaBelova has joined #openstack-infra | 12:39 | |
*** SergeyLukjanov has joined #openstack-infra | 12:39 | |
*** hashar has quit IRC | 12:53 | |
*** SergeyLukjanov has quit IRC | 12:56 | |
*** jjmb has joined #openstack-infra | 13:08 | |
*** jjmb has quit IRC | 13:10 | |
*** afazekas has quit IRC | 13:23 | |
*** afazekas has joined #openstack-infra | 13:25 | |
*** DinaBelova has quit IRC | 13:27 | |
*** DinaBelova has joined #openstack-infra | 13:30 | |
*** DinaBelova has quit IRC | 13:31 | |
*** krtaylor has quit IRC | 13:42 | |
*** DinaBelova has joined #openstack-infra | 13:44 | |
*** vogxn has joined #openstack-infra | 13:59 | |
*** DinaBelova has quit IRC | 14:24 | |
*** afazekas has quit IRC | 14:33 | |
*** yaguang has joined #openstack-infra | 14:35 | |
*** afazekas has joined #openstack-infra | 14:38 | |
*** nati_ueno has joined #openstack-infra | 14:41 | |
openstackgerrit | A change was merged to openstack-dev/pbr: Also patch easy_install script creation https://review.openstack.org/38391 | 14:42 |
*** dkliban has joined #openstack-infra | 14:51 | |
*** DinaBelova has joined #openstack-infra | 14:55 | |
openstackgerrit | A change was merged to openstack-dev/pbr: Revert include_package_data change https://review.openstack.org/38328 | 14:57 |
openstackgerrit | A change was merged to openstack-dev/pbr: Add support for globbing in data files https://review.openstack.org/35730 | 14:57 |
openstackgerrit | A change was merged to openstack-dev/pbr: Loop over test output for better readability https://review.openstack.org/38325 | 14:57 |
*** sarob has joined #openstack-infra | 14:59 | |
*** sarob has quit IRC | 15:04 | |
*** dkliban has quit IRC | 15:07 | |
*** vogxn has quit IRC | 15:07 | |
*** mriedem has joined #openstack-infra | 15:27 | |
*** DinaBelova has quit IRC | 15:29 | |
Alex_Gaynor | mordred, clarkb, dstufft: New version of babel has been released | 15:30 |
*** SergeyLukjanov has joined #openstack-infra | 15:40 | |
*** vogxn has joined #openstack-infra | 15:42 | |
*** DinaBelova has joined #openstack-infra | 15:43 | |
*** SergeyLukjanov has quit IRC | 15:52 | |
*** DinaBelova has quit IRC | 15:58 | |
*** sarob has joined #openstack-infra | 16:00 | |
*** sarob has quit IRC | 16:04 | |
*** SergeyLukjanov has joined #openstack-infra | 16:09 | |
mordred | Alex_Gaynor: w00t! | 16:15 |
mordred | also, woot my pbr patches landed | 16:15 |
rockstar | mordred, did you just get an email from me about the mailing list upgrade being complete? | 16:16 |
mordred | rockstar: yup | 16:16 |
rockstar | mordred, \o/ | 16:16 |
*** mriedem1 has joined #openstack-infra | 16:17 | |
*** mriedem has quit IRC | 16:18 | |
*** yaguang has quit IRC | 16:23 | |
*** changbl has joined #openstack-infra | 16:29 | |
*** vogxn has left #openstack-infra | 16:32 | |
*** DinaBelova has joined #openstack-infra | 16:32 | |
*** sarob has joined #openstack-infra | 16:36 | |
*** sarob has quit IRC | 16:41 | |
openstackgerrit | Monty Taylor proposed a change to openstack-infra/devstack-gate: Build a devstack-gate image with diskimage-builder https://review.openstack.org/38871 | 16:42 |
openstackgerrit | Monty Taylor proposed a change to openstack-infra/devstack-gate: Proof of concept of kexec/takeovernode https://review.openstack.org/38887 | 16:42 |
openstackgerrit | Monty Taylor proposed a change to openstack-infra/devstack-gate: Build a devstack-gate image with diskimage-builder https://review.openstack.org/38871 | 16:43 |
openstackgerrit | Monty Taylor proposed a change to openstack-infra/devstack-gate: Proof of concept of kexec/takeovernode https://review.openstack.org/38887 | 16:43 |
*** DinaBelova has quit IRC | 16:44 | |
*** CaptTofu has joined #openstack-infra | 16:46 | |
*** dkliban has joined #openstack-infra | 16:49 | |
*** SergeyLukjanov has quit IRC | 16:54 | |
*** nati_ueno has quit IRC | 16:56 | |
*** michchap has quit IRC | 16:57 | |
*** DinaBelova has joined #openstack-infra | 16:59 | |
*** reed has joined #openstack-infra | 17:05 | |
reed | hi all | 17:05 |
reed | rockstar, thanks for persisting :) | 17:07 |
*** DinaBelova has quit IRC | 17:09 | |
*** UtahDave has joined #openstack-infra | 17:15 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack/requirements: Revert "Pin Babel to <1.0 since it doesn't play well with pip 1.4" https://review.openstack.org/38932 | 17:19 |
*** locke1051 has joined #openstack-infra | 17:21 | |
*** mrodden1 has joined #openstack-infra | 17:21 | |
*** locke105 has quit IRC | 17:22 | |
*** mrodden has quit IRC | 17:22 | |
fungi | mordred: SpamapS: you should be able to revert https://review.openstack.org/38903 safely now | 17:23 |
fungi | i've confirmed that the latest babel installs in a virtualenv using the latest pip and no longer trips over pytz | 17:24 |
*** hartsocks has joined #openstack-infra | 17:25 | |
*** CaptTofu has quit IRC | 17:25 | |
*** michchap has joined #openstack-infra | 17:27 | |
*** yaguang has joined #openstack-infra | 17:30 | |
*** SergeyLukjanov has joined #openstack-infra | 17:31 | |
*** dkliban has quit IRC | 17:34 | |
*** michchap has quit IRC | 17:35 | |
*** sarob has joined #openstack-infra | 17:37 | |
*** dkliban has joined #openstack-infra | 17:40 | |
*** sarob has quit IRC | 17:42 | |
*** yaguang has quit IRC | 17:43 | |
*** CaptTofu has joined #openstack-infra | 17:45 | |
*** dkliban has quit IRC | 17:52 | |
*** michchap has joined #openstack-infra | 18:10 | |
*** michchap_ has joined #openstack-infra | 18:11 | |
*** michcha__ has joined #openstack-infra | 18:14 | |
*** michchap has quit IRC | 18:14 | |
*** michchap has joined #openstack-infra | 18:15 | |
*** michchap_ has quit IRC | 18:16 | |
*** prad has quit IRC | 18:16 | |
*** michchap_ has joined #openstack-infra | 18:16 | |
*** michch___ has joined #openstack-infra | 18:17 | |
*** michcha__ has quit IRC | 18:18 | |
*** DinaBelova has joined #openstack-infra | 18:19 | |
*** michchap has quit IRC | 18:19 | |
*** UtahDave has quit IRC | 18:20 | |
*** michchap_ has quit IRC | 18:21 | |
*** michch___ has quit IRC | 18:21 | |
*** jesusaurus has quit IRC | 18:22 | |
*** DinaBelova has quit IRC | 18:24 | |
*** jesusaurus has joined #openstack-infra | 18:24 | |
mordred | fungi: awesome. thanks | 18:26 |
fungi | above is a revert for the requirements repo. none of the per-project changes besides dib's merged (due to a combination of too few core reviewers and constant failures because of bug 1205344) so i've abandoned them all | 18:29 |
uvirtbot | Launchpad bug 1205344 in tempest "mkfs error in test_stamp_pattern" [Critical,In progress] https://launchpad.net/bugs/1205344 | 18:29 |
openstackgerrit | A change was merged to openstack/requirements: Revert "Pin Babel to <1.0 since it doesn't play well with pip 1.4" https://review.openstack.org/38932 | 18:36 |
*** sarob has joined #openstack-infra | 18:37 | |
*** CaptTofu has quit IRC | 18:40 | |
*** sarob has quit IRC | 18:42 | |
*** prad has joined #openstack-infra | 18:55 | |
*** UtahDave has joined #openstack-infra | 19:08 | |
*** DinaBelova has joined #openstack-infra | 19:24 | |
*** DinaBelova has quit IRC | 19:28 | |
fungi | heh, dstufft called our ci infrastructure "massive" on distutils-sig a couple days ago (i'm still catching up on e-mail) | 19:28 |
*** jpeeler has quit IRC | 19:29 | |
dstufft | fungi: I did? | 19:30 |
clarkb | fungi: if you need numbers to back it up http://goo.gl/2XzK6k *may take a while to load | 19:30 |
fungi | dstufft: in the deprecating official pypi mirrors thread | 19:30 |
dstufft | fungi: aah | 19:31 |
dstufft | right | 19:31 |
*** pcrews has joined #openstack-infra | 19:31 | |
* dstufft forgets what things he's saying a lot | 19:32 | |
fungi | http://mail.python.org/pipermail/distutils-sig/2013-July/022000.html | 19:32 |
fungi | i just found it amusing, that's all | 19:32 |
fungi | i don't often expect to see openstack infra plugged on that list | 19:33 |
mordred | fungi, clarkb: update-image hasn't worked since July 23 | 19:33 |
fungi | oh fun | 19:34 |
* fungi looks at the log | 19:34 | |
clarkb | is that when the setuptools upgrade stuff went in/out? | 19:34 |
mordred | nope | 19:34 |
clarkb | hmm no 23rd was tuesday | 19:34 |
mordred | it's because it's trying to download an 11.04 openvz image | 19:34 |
mordred | which does not exist | 19:34 |
clarkb | ugh | 19:34 |
fungi | this airplane doesn't make log browsing easy | 19:34 |
mordred | I cannot find the reference | 19:34 |
mordred | I mean - I cannot find where we're asking for that image | 19:35 |
mordred | there it is | 19:35 |
fungi | in az2 it was still working into the 24th | 19:35 |
clarkb | is it a devstack image url? | 19:36 |
clarkb | mordred: yup. its in devstack stackrc | 19:36 |
fungi | presumably we need http://download.openvz.org/template/precreated/ubuntu-12.04-x86_64.tar.gz instead anyway | 19:37 |
fungi | but yeah, the 11.10 one has been yanked | 19:37 |
clarkb | I will propose the devstack fix as I don't have +2 there | 19:37 |
mordred | https://review.openstack.org/38936 | 19:37 |
clarkb | and I think some of you do | 19:37 |
mordred | I do not either | 19:37 |
clarkb | mordred: wins | 19:37 |
clarkb | ah | 19:37 |
mordred | btw - building devstack-gate image locally is awesome | 19:37 |
fungi | i only have +2 on devstack when i get the okay to temporarily add myself (like for the folsom fix) | 19:38 |
*** sarob has joined #openstack-infra | 19:38 | |
mordred | like, it made me notice that we're installing puppet3 on our devstack-gate nodes | 19:38 |
clarkb | mordred: noice | 19:39 |
mordred | because none of our install puppet logic has been ported in there | 19:39 |
*** DinaBelova has joined #openstack-infra | 19:39 | |
clarkb | I guess only jeblair has the +2 power | 19:39 |
openstackgerrit | Monty Taylor proposed a change to openstack-infra/devstack-gate: Build a devstack-gate image with diskimage-builder https://review.openstack.org/38871 | 19:39 |
mordred | btw - that almost works | 19:39 |
clarkb | I don't think this is super critical to merge this weekend though. | 19:39 |
mordred | I've got one more thing to track down | 19:39 |
clarkb | Probably before we merge my setuptools puppet change though | 19:39 |
mordred | but it works, and is honestly really tight | 19:40 |
mordred | the dib guys have done some excellent work on local caching of things | 19:40 |
mordred | in a general manner | 19:40 |
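For anyone wanting to try what mordred describes, the invocation is roughly this (a sketch: the ubuntu and vm elements and the ELEMENTS_PATH/DIB_RELEASE variables are standard diskimage-builder pieces, while the devstack-gate element name and local path come from the change under review and are assumptions here):

```sh
# build a devstack-gate-style image locally with diskimage-builder
export ELEMENTS_PATH=~/src/devstack-gate/elements    # hypothetical local checkout path
export DIB_RELEASE=precise                           # target the Ubuntu release the slaves run
disk-image-create -o devstack-gate-precise ubuntu vm devstack-gate
```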
fungi | mordred: not sure if you saw in last night's scrollback, but the chicken-and-egg issue with the jeepyb puppet module needing to pull in newer non-broken pbr was not as we thought | 19:41 |
mordred | fungi: did not see that | 19:41 |
fungi | clearing the local eggs in the repo had no effect | 19:41 |
mordred | oh. what? | 19:41 |
clarkb | fungi: :( I didn't see that either | 19:41 |
fungi | and we also talked about switching to calling pip install -U . instead, but that also breaks in the same way as python setup.py install | 19:42 |
*** w_ is now known as olaph | 19:42 | |
mordred | fungi: what about upping the lower requirement? | 19:42 |
fungi | i'll give that a shot | 19:42 |
fungi | forgot to test that option | 19:43 |
*** sarob has quit IRC | 19:43 | |
*** DinaBelova has quit IRC | 19:43 | |
mordred | I'm getting this, btw: | 19:44 |
mordred | FATAL: Could not load /lib/modules/3.8.0-25-generic/modules.dep: No such file or directory | 19:44 |
mordred | in trying to install iptables-persistent | 19:44 |
mordred | anybody ever seen that or know what I might want to investigate? | 19:44 |
clarkb | mordred: its complaining about your kernel modules and that isn't the precise kernel | 19:44 |
Mithrandir | probably an upgraded kernel without a reboot | 19:44 |
Mithrandir | with the old kernel image removed in the meantime | 19:45 |
mordred | ah. interesting. cool | 19:45 |
fungi | mordred: can you put version matches on setup_requires list items? pkg_resources.VersionConflict: (pbr 0.5.17 (/usr/local/lib/python2.7/dist-packages), Requirement.parse('pbr>=0.5.19')) | 19:46 |
mordred | you can - but I would expect that to do the right thing and upgrade | 19:46 |
mordred | not just conflict | 19:46 |
mordred | wtf? | 19:46 |
fungi | easy to recreate this... cd into a current clone of the jeepyb repo, pip install pbr 0.5.17 and then try to install jeepyb from the local source tree | 19:47 |
fungi | pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7) | 19:48 |
fungi | fwiw | 19:48 |
fungi | ubuntuprecise | 19:49 |
mordred | k. I will work on it | 19:49 |
fungi | manually upgrading pbr first allows jeepyb to install just fine, but so far that's the *only* solution i've gotten to work | 19:51 |
dstufft | ouch pip 1.0 | 19:52 |
fungi | dstufft: welcome to ubuntu lts | 19:52 |
*** dkliban has joined #openstack-infra | 19:53 | |
mordred | it might just be that we have to upgrade pbr first - the interactions here don't seem great - but I'm going to see if I can figure something out | 19:54 |
openstackgerrit | Monty Taylor proposed a change to openstack-infra/devstack-gate: Build a devstack-gate image with diskimage-builder https://review.openstack.org/38871 | 19:57 |
mordred | clarkb, fungi: woot! that ^^ creates a d-g image! | 19:57 |
fungi | mordred: worst case, i'll roll https://review.openstack.org/38066 back to patchset #1, which did actually work, just didn't meet with aesthetic standards and sanity | 19:58 |
mordred | fungi: I thnk that's probably the best thing for moving forward | 19:59 |
mordred | the other thing is likely to take some pbr dev work on my part | 19:59 |
*** hartsocks has quit IRC | 20:00 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Work around catch-22 in jeepyb with broken pbr https://review.openstack.org/38066 | 20:06 |
dansmith | sometimes you guys pull out a stat about "test foo is failing at 40% now" -- is that actually calculated anywhere automatically? | 20:14 |
mordred | dansmith: not that I'm aware of - I believe clarkb can do some magic | 20:15 |
dansmith | okay.. was wondering how bad the grenade fail is, because it seems to be hitting lots of stuff, | 20:16 |
dansmith | but was then thinking, | 20:16 |
dansmith | a graph of fails-per-day for each test for the last 30 days might be a good "oh, that's breaking now" indicator | 20:16 |
fungi | might be more accurate if only limited to gate pipeline and not reported for check pipeline in that case | 20:17 |
fungi | jenkins will give you some of that from its web interface, but it's quite neutered since we can't have it preserve results for any reasonable timeframe | 20:19 |
clarkb | I use logstash to get those numbers. I currently only have 2 weeks of semi broken logs :/ | 20:19 |
clarkb | I can tell you that for yesterday UTC time grenade failed in the gate ~10% of the time | 20:20 |
clarkb | 83 successes to 9 failures | 20:20 |
dansmith | hmm, okay, I guess it seems much higher than that, so I was thinking the departure from the norm would be more visually striking | 20:24 |
clarkb | dansmith: I think you are right that the recent rate is much higher than that | 20:29 |
clarkb | but logstash is too far behind to check current numbers :( | 20:29 |
mordred | we could probably report job name + success or fail to statsd | 20:31 |
mordred | to make those sorts of charts | 20:31 |
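A minimal sketch of that idea (module, host, and metric names are illustrative; this assumes the plain Python statsd client package, not any existing infra code):

```python
import statsd   # the "statsd" client package, assumed available

client = statsd.StatsClient('statsd.example.org', 8125)   # hypothetical collector host

def report_result(job_name, success):
    outcome = 'SUCCESS' if success else 'FAILURE'
    # one counter per job+outcome is enough to chart fails-per-day per job
    client.incr('zuul.job.{0}.{1}'.format(job_name, outcome))
```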
mordred | wow. | 20:34 |
mordred | d-g adds 500M on top of the base ubuntu vm | 20:34 |
pleia2 | yeah, that's why we've talked about caching the packages | 20:37 |
*** sarob has joined #openstack-infra | 20:39 | |
fungi | mordred: is that after an apt-get clean? | 20:39 |
pleia2 | (whether we run a full ubuntu mirror as openstack, or a partial one) | 20:39 |
fungi | a partial caching proxy mirror is very easy to accomplish for apt, luckily... i run one at home since i have so many machines | 20:40 |
*** jaypipes has quit IRC | 20:41 | |
*** Ryan_Lane has joined #openstack-infra | 20:41 | |
mordred | fungi: no - because we pre-cache all the packages we're going to install - this is doing the d-g dib elements I've been poking at | 20:41 |
fungi | ahh | 20:42 |
*** sarob has quit IRC | 20:43 | |
fungi | so /var/cache/apt/archives/ is empty in the resultant image? or has the packages cached but not installed there? | 20:44 |
*** dkliban has quit IRC | 20:44 | |
mordred | has the packages cached there but not installed | 20:48 |
mordred | it has _all_ of the packages cached that _any_ devstack run _might_ want to install | 20:48 |
fungi | rockstar: the new openstack@ list seems not to add a list-id header? | 20:48 |
mordred | so that it can install them during devstack under the control of devstack | 20:48 |
mordred | but withouth talking to the internets | 20:48 |
fungi | mordred: okay, makes sense. so that's an additional 500mib of compressed packages, cached in the image, not yet installed | 20:49 |
mordred | yup | 20:51 |
mordred | and also all the git repos | 20:51 |
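What mordred describes amounts to something like this inside the image build (a sketch; the package list is illustrative, not the actual devstack list):

```sh
# pre-seed /var/cache/apt/archives/ without installing anything,
# so devstack can later install from the local cache instead of the network
apt-get update
apt-get -y --download-only install libvirt-bin qemu-kvm rabbitmq-server mysql-server
```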
mordred | fungi: I have bad news | 20:52 |
mordred | fungi: I cannot reproduce your problem | 20:52 |
mordred | with jeepyb | 20:52 |
*** prad has quit IRC | 20:52 | |
fungi | mordred: well, at a minimum review and review-dev are currently still in that state | 20:52 |
fungi | but i will see if i can recreate it with a fresh precise vm | 20:52 |
* mordred goes to look at review-dev | 20:53 | |
*** DinaBelova has joined #openstack-infra | 20:54 | |
*** DinaBelova has quit IRC | 20:58 | |
*** hartsocks has joined #openstack-infra | 21:08 | |
fungi | rockstar: nevermind my previous comment. i finally got far enough through my inbox to see that reed mentioned it to you already | 21:08 |
*** afazekas has quit IRC | 21:13 | |
*** mindjiver has joined #openstack-infra | 21:17 | |
openstackgerrit | Bob Ball proposed a change to openstack-infra/reviewday: Added time-based score for patches https://review.openstack.org/38836 | 21:17 |
*** SergeyLukjanov has quit IRC | 21:21 | |
*** koobs has quit IRC | 21:28 | |
*** koobs has joined #openstack-infra | 21:31 | |
openstackgerrit | Bob Ball proposed a change to openstack-infra/reviewday: Added time-based score for patches https://review.openstack.org/38836 | 21:33 |
*** sarob has joined #openstack-infra | 21:35 | |
*** sarob has quit IRC | 21:37 | |
*** sarob_ has joined #openstack-infra | 21:37 | |
*** dkliban has joined #openstack-infra | 21:40 | |
mordred | clarkb: w00t. I have takeovernode'd and kexec'd and it worked | 21:46 |
*** johnthetubaguy has joined #openstack-infra | 21:49 | |
*** johnthetubaguy has left #openstack-infra | 21:50 | |
fungi | mordred: btw i was able to reproduce... vanilla precise vm with git, pip and python-mysql packages added. clone jeepyb and cd into it, pip install setuptools pbr==0.5.17 | 22:13 |
fungi | at that point python setup.py install will work the first time, but will fail when run a second time | 22:13 |
fungi | clearing the cached eggs in the local source tree doesn't fix it | 22:14 |
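Spelled out as commands, the reproduction fungi describes is roughly (a sketch: the clone URL and apt package names are assumptions, while the pbr versions and the failure mode come from the log above):

```sh
# on a vanilla Ubuntu precise VM
sudo apt-get install -y git python-pip python-mysqldb        # "git, pip and python-mysql" per above
git clone https://github.com/openstack-infra/jeepyb && cd jeepyb
sudo pip install setuptools 'pbr==0.5.17'
sudo python setup.py install    # works the first time
sudo python setup.py install    # second run: pkg_resources.VersionConflict (pbr 0.5.17 vs pbr>=0.5.19)
sudo pip install -U pbr         # the only workaround found so far: upgrade pbr by hand first
```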
mordred | ah! I got it | 22:15 |
fungi | also, i always forget that there's nothing quite like a summer afternoon layover in chicago when it comes to turbulence | 22:17 |
mordred | fungi: yes. I believe we are just going to have to upgrade by hand this time | 22:18 |
fungi | at the moment, it looks like jeepyb is the only infra project which uses pbr and is being deployed in our environment with vcsrepo, so i think that's the only puppet module which needs the workaround thankfully | 22:31 |
*** zaro0508 has joined #openstack-infra | 22:48 | |
*** zaro0508 has quit IRC | 23:06 | |
*** UtahDave has quit IRC | 23:29 | |
*** sarob_ has quit IRC | 23:38 | |
*** sarob has joined #openstack-infra | 23:39 | |
*** sarob has quit IRC | 23:39 | |
*** michchap has joined #openstack-infra | 23:39 | |
*** sarob has joined #openstack-infra | 23:40 | |
*** koolhead17 has joined #openstack-infra | 23:49 | |
*** sarob has quit IRC | 23:50 | |
*** sarob has joined #openstack-infra | 23:51 | |
clarkb | mordred: nice | 23:54 |
clarkb | fungi: fwiw our cgroups should prevent OOMing :) | 23:54 |
clarkb | fungi: or rather the tests can OOM all they want but the slave should escape unharmed | 23:54 |
fungi | clarkb: good point. just worth noting that tmpfs contents will occupy virtual memory | 23:55 |
*** sarob has quit IRC | 23:55 | |
fungi | and will most likely fall outside the enforcement of cgroups for that user's processes | 23:55 |
clarkb | I am actually not sure how tmpfs contents will get accounted against the cgroups | 23:56 |
fungi | but we do cap their normal memory usage artificially low for the amount of ram we get on those slaves, so maybe not a concern unless we start going 4g+ on the tmpfs sizes | 23:56 |
clarkb | I am slightly worried about the dirty /tmp situation we have | 23:58 |
clarkb | it might be worth mounting the tmpfs under a different path and letting specific things use that rather than /tmp | 23:58 |
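The alternative clarkb suggests would look something like this (a sketch; the mount point and size are illustrative, with the log noting sizes should stay well under 4G on those slaves):

```sh
# dedicated tmpfs for the tests that need it, leaving /tmp alone
sudo mkdir -p /opt/ramdisk
sudo mount -t tmpfs -o size=2G tmpfs /opt/ramdisk
```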