*** masayukig has quit IRC | 00:00 | |
*** thuc_ has joined #openstack-infra | 00:00 | |
mikal | anteaya: sorry, wandered off for coffee | 00:01 |
* mikal counts | 00:01 | |
*** thuc has quit IRC | 00:01 | |
mikal | It's the 2nd of March here (and in Tokyo) | 00:02 |
mikal | So, this is 4:36pm on 28 February | 00:03 |
*** CaptTofu has quit IRC | 00:03 | |
mikal | >>> age = datetime.datetime.now() - datetime.datetime(2014,2,28,6,35) | 00:04 |
mikal | >>> print age | 00:04 |
mikal | 2 days, 4:28:55.394416 | 00:04 |
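For reference, a minimal sketch (not from the conversation) of the same age calculation pinned to UTC rather than local time, assuming the 2014-02-28 06:35 devstack log timestamp is UTC — which is exactly the property the "timestamps must be in UTC" requirement discussed below would guarantee:

```python
# Minimal sketch: same age calculation as above, done explicitly in UTC.
# Assumes the 2014-02-28 06:35 log timestamp is UTC (an assumption here).
import datetime

log_time = datetime.datetime(2014, 2, 28, 6, 35)   # assumed UTC
age = datetime.datetime.utcnow() - log_time
print(age)
```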
anteaya | what is the timezone of the 4:36pm? | 00:04 |
*** cody-somerville has joined #openstack-infra | 00:05 | |
anteaya | gerrit server timezone? | 00:05 |
*** yongli has quit IRC | 00:05 | |
mikal | 4:36pm was the timestamp on the log entry on the ryu jenkins, right? | 00:05 |
*** pcm_ has quit IRC | 00:05 | |
anteaya | stack.sh log /var/lib/jenkins/jobs/ryuplugin-tempest/workspace/logs/devstack.2014-02-28-003806 | 00:06 |
anteaya | March 2, just after midnight tokyo time | 00:06 |
mikal | Oh, well that was only about 9 hours ago | 00:07 |
mikal | It's March 2 at about 9am in Tokyo now | 00:07 |
anteaya | and the time of 4:36pm was coming from the gerrit account page for that account | 00:07 |
anteaya | March 1st | 00:07 |
anteaya | is 24 hours worth | 00:08 |
anteaya | so 24 hours plus 9 hours | 00:08 |
anteaya | sorry February 28 just after midnight | 00:08 |
anteaya | 2014-02-28-003806 | 00:08 |
mikal | So 48 hours + 9 hours | 00:09 |
anteaya | 48? | 00:09 |
anteaya | ah yes, Feb 28 and March 1 | 00:10 |
anteaya | from this I have a new requirement of 3rd party tests | 00:10 |
*** sarob_ has joined #openstack-infra | 00:10 | |
anteaya | all tests must have a timestamp of the test run in utc time | 00:10 |
mikal | Sounds fair to me | 00:11 |
anteaya | I still don't know how gerrit is showing that neutron ryu voted -1 on a patch at 4:36pm - I don't know what timezone that time is in | 00:11 |
*** alexpilotti has quit IRC | 00:12 | |
mikal | Did ryu vote twice perhaps? | 00:13 |
anteaya | I am uncertain | 00:13 |
*** sarob_ has quit IRC | 00:15 | |
*** CaptTofu has joined #openstack-infra | 00:17 | |
clarkb | gerrit shows you your local time (as stated by the browser) | 00:17 |
openstackgerrit | Anita Kuno proposed a change to openstack-infra/config: Adding a utc timestamp requirement for 3rd party test logs https://review.openstack.org/77376 | 00:20 |
anteaya | clarkb: is there any way I can convince gerrit to show me utc time? | 00:20 |
*** banix has joined #openstack-infra | 00:21 | |
anteaya | banix: hi | 00:22 |
anteaya | how is neutron ryu doing for your patches? | 00:22 |
clarkb | anteaya: set your browser timezone to utc | 00:23 |
anteaya | clarkb: I'm looking how to do that | 00:24 |
mikal | anteaya: http://www.rcbops.com/gerrit/reports/neutron-cireport.html | 00:25 |
clarkb | for chrom* I think it reads the system timezone. Firefox may make it configurable in the browser | 00:25 |
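One common workaround (an assumption on the editor's part, not something suggested in the conversation) is to launch the browser with the TZ environment variable overridden on Linux, so Gerrit's timestamps render in UTC without changing the system clock:

```bash
# Hypothetical workaround, not from the log: on Linux the browser inherits
# the TZ environment variable at startup, so this shows Gerrit's timestamps
# in UTC without touching the OS timezone.
TZ=UTC firefox https://review.openstack.org/ &
```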
mikal | anteaya: a first cut at a CI report for neutron | 00:25 |
mikal | clarkb: do you have any idea when we might get to a more modern gerrit? | 00:29 |
clarkb | rsn, fixes pointed out by review dev are mostly in now. We are still sorting out build stuff for newer gerrit though | 00:29 |
clarkb | once we have that we should be able to schedule a time and upgrade | 00:29 |
mikal | That would be nice | 00:30 |
mikal | I'd like timestamps for comments in the event stream specifically | 00:30 |
anteaya | mikal: thank you for that | 00:30 |
mikal | anteaya: "unknown" means my parser couldn't work out what the vote was, which is generally because its a zero vote comment with non-standard text | 00:31 |
* anteaya nods for timestamps | 00:31 | |
mikal | I shall clean those up when I can | 00:31 |
anteaya | mikal: it looks like a good start, thank you for doing this | 00:31 |
mikal | NP | 00:31 |
*** masayukig has joined #openstack-infra | 00:31 | |
anteaya | so banix was the dev that told me about neutron ryu failing | 00:32 |
anteaya | I hope to hear from him if he is still getting -1 votes from a broken ci | 00:32 |
*** masayukig has quit IRC | 00:33 | |
banix | anteaya: hi | 00:33 |
*** masayukig has joined #openstack-infra | 00:33 | |
*** reed has joined #openstack-infra | 00:35 | |
banix | anteaya: looks like, the Ryu now votes +1 on my patchset; I see the NEC CI voting incorrectly now; I think at some point it was decided that we don't post -1s but that does not seem to have been picked up by everybody. | 00:35 |
*** lnxnut has joined #openstack-infra | 00:35 | |
*** lnxnut has quit IRC | 00:40 | |
anteaya | banix: okay I am glad that ryu is voting on your patchset correctly (at least i hope it is correctly). Can you please update your email to them and the infra ml with your findings on neutron ryu | 00:43 |
banix | anteaya; Will do shortly. | 00:44 |
anteaya | if NEC ci is now voting incorrectly on your patch, please email them nec-openstack-ci@iaas.jp.nec.com and cc the infra ml | 00:45 |
anteaya | funny I am looking at the NEC account page: https://review.openstack.org/#/dashboard/10116 | 00:45 |
anteaya | and I am not seeing any -1 votes for their system | 00:45 |
mikal | anteaya: we have a report for that! | 00:46 |
anteaya | mikal: can we get NEC added to http://www.rcbops.com/gerrit/reports/neutron-cireport.html | 00:46 |
* mikal adds NEC to the neutron list | 00:46 | |
anteaya | so ahead of me | 00:46 |
*** morganfainberg_Z is now known as morganfainberg | 00:47 | |
mikal | Done | 00:47 |
mikal | NEC OpenStack CI voted on 260 patchsets (95.24%), passing 214 (78.97%), failing 57 (21.03%) and unparsed 0 (0.00%) | 00:47 |
anteaya | clarkb: so far it looks like mozilla wants me to change my os time http://kb.mozillazine.org/Time_and_time_zone_settings | 00:47 |
* anteaya refreshes | 00:47 | |
mikal | Noting that when the report changes it can take a couple of refreshes | 00:48 |
mikal | As the web backends take non-zero time to sync | 00:48 |
*** banix has quit IRC | 00:49 | |
*** banix has joined #openstack-infra | 00:50 | |
*** dolphm_503 is now known as dolphm | 00:50 | |
anteaya | mikal: you are wonderful, thank you | 00:50 |
anteaya | I have to run away now | 00:51 |
anteaya | last night's plans that fell down and made me sad seem to be working out for tonight | 00:51 |
anteaya | banix: good luck | 00:51 |
anteaya | see you later | 00:51 |
banix | anteaya: thanks. good night. | 00:52 |
morganfainberg | anteaya, clarkb, so I briefly talked w/ fungi about monitoring stuff for infra, I'd like to discuss a bit more (e.g. eavesdrop/meetbot and moving to other things as needed) - it seems like we might want some mechanism for monitoring stuff like that beyond "oh it's offline", is there any interest? should i just toss it on the -infra meeting and show up on tuesday? | 00:53 |
*** banix has quit IRC | 00:53 | |
morganfainberg | anteaya, clarkb, mordred, i bring this up because meetbot/eavesdrop was offline for the better part of a day last week and while it's not super insanely mission critical, it's a service we all do rely on / enjoy the benefits of having | 00:53 |
clarkb | I do think more monitoring / alerts would be a good thing. But also stress that I am pretty sure we don't intend on being on pager duty. But having better monitoring will give us info on what needs fixing/attention/replacement/etc | 00:54 |
clarkb | that said have you seen how flaky freenode has been lately? | 00:55 |
morganfainberg | no no i wouldn't want that | 00:55 |
morganfainberg | clarkb, yeah i know :( | 00:55 |
morganfainberg | clarkb, I don't see a benefit to PD type service, but something that could help us identify issues (especially if it helps save someone hours diagnosing) or making it easy to see that some resource is missing, would be beneficial | 00:56 |
clarkb | yup | 00:56 |
mordred | ++ | 00:57 |
clarkb | if we want to be super hip we could run sensu | 00:57 |
mordred | what's that? | 00:57 |
morganfainberg | clarkb, not familiar with it | 00:57 |
clarkb | mordred: its basically salt with alerts | 00:57 |
clarkb | but written in ruby and using amqp | 00:57 |
morganfainberg | oh interesting | 00:57 |
mordred | it would be better if it was not in ruby | 00:57 |
morganfainberg | mordred, ++ | 00:57 |
clarkb | I think AaronGr and pleia2 have had ideas about monitoring stuff | 00:58 |
morganfainberg | clarkb, do you want me to toss this up on the infra meeting for tuesday and just hang out post keystone meeting? get more eyes on it? | 00:58 |
clarkb | iirc they both run local home monitoring systems because awesome | 00:58 |
morganfainberg | clarkb, hehe. | 00:58 |
clarkb | morganfainberg: sure, throw it up on the agenda beforehand | 00:59 |
morganfainberg | cool, will add it to agenda now. | 00:59 |
openstackgerrit | Allison Randal proposed a change to openstack-infra/elastic-recheck: Adds time-view filter to uncategorized page https://review.openstack.org/77377 | 00:59 |
*** dolphm is now known as dolphm_503 | 01:00 | |
*** dolphm_503 is now known as dolphm | 01:01 | |
morganfainberg | clarkb, added to agenda | 01:03 |
morganfainberg | clarkb, :) | 01:04 |
*** julim has joined #openstack-infra | 01:04 | |
morganfainberg | mordred, why does everything seem to come back to "<something something cool/useful> but implemented in Ruby"? :P | 01:05 |
clarkb | ruby makes sense for that sort of thing since dealing with regexes in it isn't terrible | 01:05 |
clarkb | and I bet you use a lot of regexes to monitor all the things | 01:06 |
morganfainberg | clarkb, sure. but you could make the same argument for perl ;) | 01:06 |
morganfainberg | clarkb, I guess i am just biased...I like Python | 01:06 |
clarkb | I have the same bias but python is terrible for dealing with string parsing | 01:07 |
morganfainberg | clarkb, now let me see if I can refactor the apache stuff in devstack so we can run keystone on port 80 (shared with horizon) | 01:07 |
clarkb | ooh | 01:07 |
*** masayukig has quit IRC | 01:08 | |
morganfainberg | clarkb, and clean up the icky port issue | 01:08 |
morganfainberg | clarkb, it's a massive refactor though :( need to change how we build VHOSTS because it has to share the same vhost since we don't really use "hosts" in this case | 01:08 |
morganfainberg | clarkb, i'll def tag you on the review chain so you can weigh in. | 01:09 |
*** julim has quit IRC | 01:09 | |
clarkb | you can have different vhosts if you do localhost/keystone or keystone.localhost or something | 01:09 |
morganfainberg | clarkb, sure. but i wasn't sure if we wanted to need to hack in things to say... hostfile? | 01:10 |
morganfainberg | clarkb, we kindof rely on everything being IPs or same IP/HOST right now | 01:10 |
*** homeless_ has joined #openstack-infra | 01:10 | |
clarkb | hrm yeah we probably wouldn't want to do that in devstack | 01:10 |
*** homeless has quit IRC | 01:10 | |
*** dangers_away has quit IRC | 01:11 | |
*** dolphm is now known as dolphm_503 | 01:11 | |
morganfainberg | clarkb, i'm looking at making http://<host>/keystone/main go to keystone public and http://<host>/keystone/admin go to admin | 01:11 |
morganfainberg | clarkb, the hard part is that it mixes up ErrorLog | 01:11 |
*** dangers_away has joined #openstack-infra | 01:11 | |
morganfainberg | clarkb, it would be way better if we had distinct Vhosts. | 01:11 |
morganfainberg | clarkb, but it doesn't play nice if you want / to go to horizon and same vhost /keystone to be keystone | 01:11 |
morganfainberg | s/vhost/host-ip | 01:12 |
morganfainberg | clarkb, though... would it be terrible to muck with the hosts file in devstack? make it so you always refer to "keystone" and you set keystone IP as a var? | 01:13 |
morganfainberg | would even work well for multi-node devstack. | 01:13 |
morganfainberg | since you could specify the <ip> that 'keystone' points to | 01:13 |
clarkb | not sure, dtroyer and sdague could comment on that | 01:15 |
*** masayukig has joined #openstack-infra | 01:15 | |
clarkb | you could default to 127.0.0.0 + 35357 >_> | 01:15 |
morganfainberg | clarkb, we do that now, don't we? | 01:15 |
morganfainberg | clarkb, the port issue still occurs | 01:15 |
clarkb | no I mean the ip address | 01:16 |
morganfainberg | ooooh | 01:16 |
clarkb | as an homage | 01:16 |
clarkb | is that the right word? /me fails at english | 01:16 |
clarkb | ya thats right | 01:16 |
morganfainberg | so int(127.0.0.0) + 35357 | 01:16 |
morganfainberg | hehe | 01:16 |
clarkb | ya :) | 01:16 |
morganfainberg | clarkb, hmm. might actually be best to just support that idea. use an alternate IP (localhost) for keystone in devstack-gate and also support port 80 httpd for it | 01:17 |
*** reed has quit IRC | 01:17 | |
morganfainberg | clarkb, ++ i like :) | 01:18 |
morganfainberg | no host-file mucking really needed then. though, it would be easier to do that way | 01:18 |
morganfainberg | still want to fix the way we config apache services, so i'll do that first | 01:18 |
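For readers following the devstack discussion above: the idea is to serve keystone's public and admin APIs under http://<host>/keystone/main and http://<host>/keystone/admin on the same vhost that horizon owns. A minimal hypothetical sketch of that single-vhost layout is below; the WSGI script paths are illustrative assumptions, not the eventual devstack change.

```apache
# Hypothetical single-vhost layout for the discussion above.
# WSGI script paths are placeholders, not actual devstack configuration.
<VirtualHost *:80>
    # Horizon keeps the root of the shared vhost
    WSGIScriptAlias / /opt/stack/horizon/openstack_dashboard/wsgi/django.wsgi

    # Keystone public (main) and admin APIs live under /keystone/ on the same vhost
    WSGIScriptAlias /keystone/main  /var/www/cgi-bin/keystone/main
    WSGIScriptAlias /keystone/admin /var/www/cgi-bin/keystone/admin

    # ErrorLog is per-vhost, which is the log-mixing limitation mentioned above
    ErrorLog /var/log/apache2/horizon_error.log
</VirtualHost>
```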
*** rwsu has joined #openstack-infra | 01:20 | |
mordred | morganfainberg, clarkb: https://github.com/sensu/sensu-puppet | 01:23 |
morganfainberg | mordred, hey, that looks like it makes sensu way easier to use | 01:24 |
morganfainberg | mordred :) | 01:24 |
morganfainberg | don't need to start from scratch with it | 01:24 |
jeblair | what problem does sensu solve? | 01:24 |
morganfainberg | jeblair, the same as any monitoring software really | 01:24 |
mordred | jeblair: it's the newer 'hipper' thing to use to monitor things apparently | 01:24 |
jeblair | morganfainberg: there's a lot of different monitoring software and they solve different problems | 01:24 |
morganfainberg | jeblair, right, this seems like notifications of events | 01:25 |
jeblair | mordred: that hasn't really sold me on it. :) | 01:25 |
morganfainberg | jeblair, but the core of monitoring with a "hip" way of notification | 01:25 |
morganfainberg | jeblair, amqp | 01:25 |
jeblair | morganfainberg: who would act on these notifications? | 01:25 |
morganfainberg | jeblair, i am not sure it's a useful tool for what i see as a benefit to infra. i think something that can help save hours diagnosing issues would be more useful. but i never heard of sensu until clarkb talked about it today | 01:26 |
*** pcrews has joined #openstack-infra | 01:26 | |
clarkb | I just mentioned it as the hip thing. They "sell" it as a replacement for nagios | 01:27 |
morganfainberg | jeblair, so i'm far from the expert on sensu vs other systems i've used. | 01:27 |
clarkb | because it fixes a lot of problems that nagios has apparently | 01:27 |
jeblair | morganfainberg: i love having tools that help diagnose issues. we have found cacti to be fairly instrumental in this sort of thing: http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1 | 01:27 |
*** KurtMartin has joined #openstack-infra | 01:28 | |
jeblair | morganfainberg: however i'm skeptical of systems like nagios, or sensu, having run them in the past. they produce quite a bit of noise (unless you spend quite a bit of time tuning them) | 01:28 |
morganfainberg | jeblair, aye cacti is good. i also think there are some (perhaps) gaps for things such as meetbot etc that should be monitored in a fashion besides "oh look <openstack> left the irc channel" | 01:28 |
*** dangers_away has quit IRC | 01:29 | |
jeblair | morganfainberg: to my knowledge, meetbot has never died | 01:29 |
*** DuncanT has quit IRC | 01:29 | |
morganfainberg | jeblair, last week for over a day | 01:29 |
jeblair | morganfainberg: meetbot gets on the wrong side of netsplits | 01:29 |
morganfainberg | jeblair, or it was on the wrong netsplit | 01:29 |
jeblair | morganfainberg: right, and that's not a problem that can be solved with monitoring | 01:29 |
*** kmartin has quit IRC | 01:29 | |
mordred | we could run another bot that could check to see if it was in the same channel as openstack ... ;) | 01:29 |
*** homeless_ has quit IRC | 01:30 | |
*** thuc has joined #openstack-infra | 01:30 | |
*** dangers_away has joined #openstack-infra | 01:30 | |
jeblair | mordred: and run 100 of those and say that if the majority of them are in the same channel as openstack, everything is okay? :) | 01:30 |
mordred | jeblair: yes | 01:30 |
*** DuncanT- has joined #openstack-infra | 01:30 | |
jeblair | so if the problem at issue is that freenode is unreliable, perhaps we should consider moving to oftc instead. i'd rather fix the underlying problem... | 01:31 |
*** CaptTofu has quit IRC | 01:31 | |
*** homeless has joined #openstack-infra | 01:31 | |
mordred | jeblair: we _could_ configure meetbot to look for you and me and fungi and clarkb and if it's not in the same channel with all four of us, to ping the remaining ones | 01:32 |
mordred | which, on the one hand, is a bit crazy - but on the other hand, - if the four of us aren't here, something has gone sideways | 01:32 |
* mordred is mostly kidding though | 01:32 | |
jeblair | mordred: i hope :) | 01:32 |
mordred | I support our new oftc overlords | 01:32 |
morganfainberg | i'm still shocked that freenode has maintained 127* address space in their RRDNS for chat.freenode / irc.freenode | 01:33 |
morganfainberg | it really makes it hard to use them sometimes. | 01:33 |
*** thuc_ has quit IRC | 01:33 | |
morganfainberg | or well, those addrs keep getting added/removed (sure it's for DDOS mitigation but ugh) | 01:34 |
morganfainberg | not sure if there is a better way | 01:34 |
jeblair | morganfainberg: how interesting, i didn't know they were doing that | 01:34 |
*** thuc has quit IRC | 01:34 | |
morganfainberg | jeblair, that is why people were complaining about freenode being down | 01:34 |
morganfainberg | jeblair, 127.* takes priority for most connections | 01:34 |
jeblair | morganfainberg: no, i mean i knew about the ddos, i didn't know that was one of their mitigation strategies | 01:35 |
morganfainberg | jeblair, so my client would never connect until i hard-coded a list of "real" servers | 01:35 |
morganfainberg | jeblair, yeah that was what i heard them say. | 01:35 |
morganfainberg | jeblair, or at least some people in the freenode channel who seeeeeemed to be an authority | 01:35 |
*** lnxnut has joined #openstack-infra | 01:35 | |
*** pcrews has quit IRC | 01:35 | |
morganfainberg | jeblair, it all happened about the same time as that host that they evicted from the network for adding a +o magic rule for themselves or such | 01:35 |
* morganfainberg read something on the blog | 01:36 | |
jeblair | sounds like a class act | 01:36 |
*** masayukig has quit IRC | 01:36 | |
morganfainberg | well it started is back last year around then | 01:37 |
morganfainberg | but yeah classy | 01:37 |
morganfainberg | anyway, DDOS etc being "fun" for the network | 01:37 |
*** lnxnut has quit IRC | 01:41 | |
clarkb | foundation folk re http://www.openstack.org/blog/2014/01/openstack-t-shirt-design-contest/ I think it would be about a billion times more excellent if the winning design was part of the summit swag | 01:54 |
*** dstufft|laptop has joined #openstack-infra | 01:54 | |
clarkb | reed isn't here :( maybe I need a twitter account just to point that out | 01:54 |
*** dstufft|laptop has quit IRC | 01:57 | |
*** dstufft|laptop has joined #openstack-infra | 01:58 | |
*** CaptTofu has joined #openstack-infra | 01:58 | |
*** dolphm_503 is now known as dolphm | 02:02 | |
mordred | :( | 02:03 |
*** dolphm is now known as dolphm_503 | 02:12 | |
*** nati_ueno has joined #openstack-infra | 02:16 | |
*** CaptTofu has quit IRC | 02:27 | |
*** lnxnut has joined #openstack-infra | 02:35 | |
*** nati_ueno has quit IRC | 02:36 | |
*** nati_ueno has joined #openstack-infra | 02:37 | |
*** oubiwan__ has joined #openstack-infra | 02:38 | |
*** lnxnut has quit IRC | 02:40 | |
*** nati_ueno has quit IRC | 02:42 | |
*** dolphm_503 is now known as dolphm | 03:02 | |
*** rwsu has quit IRC | 03:06 | |
*** dolphm is now known as dolphm_503 | 03:14 | |
*** lnxnut has joined #openstack-infra | 03:35 | |
*** thedodd has joined #openstack-infra | 03:36 | |
*** thedodd has quit IRC | 03:37 | |
*** dstufft|laptop has quit IRC | 03:38 | |
*** lnxnut has quit IRC | 03:40 | |
*** jhesketh_ has joined #openstack-infra | 03:41 | |
*** CaptTofu has joined #openstack-infra | 03:43 | |
*** yamahata has joined #openstack-infra | 03:52 | |
*** pcm_ has joined #openstack-infra | 03:53 | |
*** lnxnut has joined #openstack-infra | 03:57 | |
*** pmathews has joined #openstack-infra | 03:57 | |
lifeless | fungi: / SpamapS: any news? | 03:58 |
pcm_ | Anyone? I'm hitting an issue with tox multiple threads and mocks. Can anyone advise? | 04:05 |
*** dolphm_503 is now known as dolphm | 04:05 | |
pcm_ | s/tox/tox's/ | 04:05 |
pleia2 | morganfainberg: yeah, AaronGr and I have talked about getting a nagios instance running but I think we've both been pretty busy, it's definitely something I think we should be doing :) | 04:11 |
*** marun has joined #openstack-infra | 04:11 | |
morganfainberg | pleia2, :) | 04:12 |
morganfainberg | pleia2, well i toss it on the agenda for infra meeting on tuesday | 04:12 |
pleia2 | \o/ | 04:12 |
morganfainberg | pleia2, maybe we can work towards it :) | 04:12 |
pleia2 | morganfainberg: that would be great | 04:12 |
morganfainberg | pleia2, ^_^ | 04:12 |
morganfainberg | s/toss/tossed | 04:12 |
morganfainberg | pleia2, i am in process of setting up a near mirror to -infra stuffs for CI internal to my company | 04:13 |
pleia2 | morganfainberg: cool | 04:13 |
morganfainberg | pleia2, mostly because we need to test our code + what we plan to submit upstream | 04:13 |
* pleia2 nods | 04:13 | |
morganfainberg | pleia2, so figure if we can combine efforts to define a monitoring footprint | 04:13 |
pleia2 | yeah definitely | 04:14 |
morganfainberg | :) | 04:14 |
pleia2 | I don't have a good strategy for managing nagios files, at $old_job we used a homebrewed yaml solution | 04:14 |
lifeless | pleia2: Heat | 04:15 |
lifeless | :P | 04:15 |
morganfainberg | nagios files suck to manage in ${cms_solution} | 04:15 |
pleia2 | so honestly I haven't looked at what's out there recently | 04:15 |
pleia2 | lifeless: hehe | 04:15 |
*** dolphm is now known as dolphm_503 | 04:15 | |
lifeless | pleia2: no really, we have a nagios element. Of course, it needs love... | 04:15 |
morganfainberg | lifeless, i... */me bows out, no good response* | 04:15 |
pleia2 | lifeless: here yet, or between airplanes? | 04:15 |
lifeless | pleia2: between planes. | 04:15 |
pleia2 | lifeless: good to know :) | 04:15 |
morganfainberg | lifeless or on a plane? :P | 04:15 |
lifeless | morganfainberg: no internet on planes I get, mostly. | 04:16 |
lifeless | morganfainberg: trans pacific... | 04:16 |
morganfainberg | lifeless, oh right | 04:16 |
morganfainberg | lifeless, i forgot the _one_ vendor didn't see that as profitable. so they cancelled the service (not that it was widely available) | 04:16 |
lifeless | yeah | 04:17 |
morganfainberg | next trans-pacific flight i do... will need to be business class | 04:17 |
morganfainberg | cause i am tired of cattleclass for 10+hrs | 04:17 |
lifeless | problem is you need business class or at least premium economy for enough space for laptop work | 04:17 |
morganfainberg | not that i have had to do that flight much (HK summit has been the only excuse so far) | 04:17 |
lifeless | (with most laptops) | 04:17 |
morganfainberg | lifeless, yeah | 04:17 |
lifeless | morganfainberg: also preach it, HP fly us cattle :( | 04:17 |
morganfainberg | awwww :( | 04:17 |
morganfainberg | dude i'm so sorry | 04:18 |
morganfainberg | that is brutal | 04:18 |
pleia2 | I just take sleeping pills and it's over fast :) | 04:18 |
*** CaptTofu has quit IRC | 04:18 | |
morganfainberg | at least for Paris summit, i can stop in NYC for a night (or at least a few hours) and cut the flight in 1/2 | 04:18 |
morganfainberg | and stretch | 04:18 |
morganfainberg | heck, for ATL i'm almost planning a layover :P | 04:19 |
*** SumitNaiksatam_ has joined #openstack-infra | 04:19 | |
morganfainberg | actually prob will just spring for business/first since it's domestic to ATL | 04:19 |
lifeless | pleia2: Expect Zombie Lifeless on monday. | 04:20 |
morganfainberg | hehe | 04:20 |
*** SumitNaiksatam has quit IRC | 04:20 | |
*** SumitNaiksatam_ is now known as SumitNaiksatam | 04:20 | |
morganfainberg | lifeless, should we head to the winchester, have a cold pint and wait for this to blow over? | 04:21 |
morganfainberg | zombie apocalypse and all | 04:21 |
morganfainberg | /shaun of the dead response to all things zombie | 04:21 |
pleia2 | lifeless: me too, I have a cold again! (I'll try to steer clear of everyone) | 04:22 |
lifeless | pleia2: get a mask | 04:23 |
pleia2 | :) | 04:25 |
lifeless | pleia2: serious - if you have reasonable expectation that you're contagious | 04:25 |
lifeless | pleia2: basic public health measures - mask, wash hands regularly, 1m space around you, and the likelihood of infection drops to ~0 | 04:26 |
lifeless | pleia2: and *everyone* will be super appreciative | 04:26 |
morganfainberg | lifeless, is 1m enough? | 04:27 |
* morganfainberg gets sick infrequently enough that it's hard to remember these things | 04:27 | |
lifeless | morganfainberg: yes | 04:27 |
lifeless | morganfainberg: I have a friend with bone marrow cancer, name escapes me | 04:27 |
morganfainberg | i've been "really" sick (e.g. no something like a migrane, which isn't contagious) 1 time in ~5yrs | 04:27 |
lifeless | morganfainberg: he has no immune system | 04:28 |
morganfainberg | now... it flattened me for 8 days | 04:28 |
morganfainberg | but it's rare | 04:28 |
openstackgerrit | lifeless proposed a change to openstack-dev/cookiecutter: Remove .gitreview https://review.openstack.org/77383 | 04:28 |
morganfainberg | lifeless, cool, if 1m is enough for that guy, enough said. | 04:28 |
openstackgerrit | lifeless proposed a change to openstack-dev/cookiecutter: Remove version constraints in setup.py https://review.openstack.org/77384 | 04:28 |
lifeless | morganfainberg: when combined w/mask and sterigel on hands that is ;) | 04:28 |
morganfainberg | oh yeah | 04:28 |
lifeless | you can get masks cheaply at the chemist usually | 04:28 |
morganfainberg | sterigel = alcohol stuff, right? | 04:28 |
lifeless | yeah, squeeze onto hands, washing motion, dries in a few seconds | 04:29 |
morganfainberg | lifeless, yep. | 04:29 |
morganfainberg | lifeless, i am in favor of that stuff. | 04:29 |
morganfainberg | lifeless, how often you travel state-side? | 04:30 |
*** marun has quit IRC | 04:31 | |
*** dstanek has quit IRC | 04:32 | |
*** banix has joined #openstack-infra | 04:34 | |
lifeless | morganfainberg: 3-4 times a year | 04:36 |
morganfainberg | lifeless, ah cool. | 04:36 |
lifeless | had a lull recently | 04:36 |
lifeless | but there's this trip | 04:36 |
lifeless | then in 3 weeks later florida | 04:36 |
morganfainberg | lifeless, hehe | 04:36 |
lifeless | then ATL | 04:36 |
lifeless | then I'm sure mordred will find something :) | 04:36 |
lifeless | Lynne and I are just debugging Cynthias reaction to my absences at the moment. | 04:37 |
lifeless | She gets a little discombobulated, which is stressful all around. | 04:37 |
morganfainberg | hehe | 04:38 |
lifeless | She's old enough to know I'm not there, but not old enough to really get 'business trip' as a thing or be patient for my return. | 04:38 |
morganfainberg | zigo, ping | 04:41 |
morganfainberg | lifeless, ahh | 04:41 |
morganfainberg | lifeless, no one in my life that age, but a ton of friends who have them, and they all know that | 04:41 |
morganfainberg | zigo, wanted to make sure i'm duplicating SQLA stuff correctly and get that fixed ASAP | 04:42 |
zigo | morganfainberg: pong | 04:42 |
morganfainberg | zigo, so for keystone, you're just doing a straight update to 0.9.3 SQLA and trying to run tests? | 04:42 |
zigo | morganfainberg: Correct. The only thing which broke is migration from 14 to 16, afaict. | 04:43 |
morganfainberg | k | 04:43 |
morganfainberg | zigo, makes me want to smush those migrates down | 04:43 |
morganfainberg | really pull a nova and say "no we don't keep pre-H migrations for I" | 04:43 |
morganfainberg | so i'll fix that test then worry about collapsing the migrations if possible | 04:44 |
*** jhesketh__ has joined #openstack-infra | 04:44 | |
morganfainberg | zigo, cool easy. let me finish coffee and head to my office (a block away) and i'll get some power and see if i can smush that out for you tonight | 04:44 |
zigo | Cool! :) | 04:45 |
zigo | I'll try to git review the patches for migrate today as well. | 04:45 |
morganfainberg | sounds good | 04:45 |
morganfainberg | zigo, i'll tag you on the review when it's posted | 04:45 |
zigo | Cheers! :) | 04:46 |
*** jhesketh_ has quit IRC | 04:46 | |
lifeless | clarkb: are you around by chance? | 04:47 |
zigo | lifeless: There's no .gitreview in sqla-migrate, how do I "Please manually create a remote named "gerrit" and try again" ? | 04:47 |
zigo | (what's the URL...) | 04:48 |
zigo | Just adding: | 04:48 |
zigo | [gerrit] | 04:48 |
zigo | host=review.openstack.org | 04:48 |
zigo | port=29418 | 04:48 |
zigo | project=openstack/horizon.git | 04:48 |
zigo | is ok? | 04:48 |
zigo | (obviously, with migrate instead) | 04:48 |
zigo | project=stackforge/sqlalchemy-migrate.git ? | 04:49 |
lifeless | zigo: yes, exactly | 04:49 |
lifeless | thats covered in http://ci.openstack.org/stackforge.html btw | 04:50 |
zigo | Oh, my bad... | 04:50 |
zigo | It was there. | 04:50 |
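For readability, the .gitreview content zigo assembles piecemeal above, consolidated (this is just the horizon example from the conversation with the project swapped for stackforge/sqlalchemy-migrate):

```ini
# .gitreview for stackforge/sqlalchemy-migrate, as pieced together above
[gerrit]
host=review.openstack.org
port=29418
project=stackforge/sqlalchemy-migrate.git
```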
morganfainberg | oh wonderful we can't run test_sql_upgrade in isolation | 04:50 |
morganfainberg | *grumbles* | 04:50 |
morganfainberg | DatabaseAlreadyControlledError yay i get to unwind that | 04:51 |
morganfainberg | zigo, do you have a paste of the specific error you have? | 04:51 |
*** jhesketh__ has quit IRC | 04:51 | |
zigo | Sure. | 04:51 |
zigo | morganfainberg: http://pastebin.com/ik4HTXsC | 04:53 |
*** pcm_ has quit IRC | 04:53 | |
morganfainberg | zigo, cool | 04:53 |
zigo | I have lots of other unicode strings error which would be nice to fix too, but I don't think that's blocking. | 04:54 |
morganfainberg | zigo, yeah i need to make it so we can run our migrate tests independent of any other tests...alas... so let me work on solving your specific issue first | 04:54 |
zigo | (and I had them before) | 04:54 |
zigo | Cool! :) | 04:55 |
morganfainberg | zigo, 2013.2.2 that's Havana right? | 04:56 |
morganfainberg | nvm i'll just checkout the tag | 04:57 |
*** lnxnut has quit IRC | 04:59 | |
zigo | morganfainberg: Yeah, because I need patches for havana. | 05:05 |
morganfainberg | zigo, np. | 05:05 |
zigo | morganfainberg: Though the issue should be in Icehouse too. | 05:05 |
morganfainberg | zigo, looking at H first. icehouse has a far bigger issue w/ that test | 05:05 |
morganfainberg | and by bigger, i mean... i can't run that test w/o running the entire test suite | 05:06 |
*** dolphm_503 is now known as dolphm | 05:06 | |
zigo | morganfainberg: With SQLA 0.9? | 05:06 |
morganfainberg | no | 05:06 |
morganfainberg | at all | 05:06 |
morganfainberg | it's a test isolation issue | 05:06 |
morganfainberg | looks like it cropped up in I sometime | 05:06 |
morganfainberg | i'll deal with that before we hit RC | 05:07 |
morganfainberg | actually prob by tuesday. | 05:07 |
morganfainberg | but still, separate issue | 05:07 |
morganfainberg | zigo, i also see an error with test_downgrade_to_0 | 05:08 |
morganfainberg | oh | 05:08 |
morganfainberg | nvm | 05:08 |
morganfainberg | sorry | 05:08 |
*** dolphm is now known as dolphm_503 | 05:15 | |
*** thedodd has joined #openstack-infra | 05:22 | |
*** thuc has joined #openstack-infra | 05:30 | |
morganfainberg | zigo, do you have a bug ID for this? | 05:30 |
morganfainberg | zigo, also, you do know that H specifies SQLAlchemy>=0.7.8,<=0.7.99 as the requirement, right? | 05:30 |
*** thuc has quit IRC | 05:35 | |
morganfainberg | zigo, https://review.openstack.org/#/c/77392/ and https://review.openstack.org/#/c/77391/ | 05:39 |
morganfainberg | zigo, i need a bug id if you have it, justifying the port of this to stable/havana | 05:40 |
morganfainberg | but the fix is posted and should solve your concern | 05:40 |
*** e0ne has quit IRC | 05:47 | |
*** yamahata has quit IRC | 05:54 | |
*** nicedice_ has quit IRC | 05:55 | |
*** nicedice has joined #openstack-infra | 05:56 | |
*** yamahata has joined #openstack-infra | 05:57 | |
zigo | morganfainberg: I don't have a bug ID. | 05:57 |
morganfainberg | zigo, if you could file one and explain why this is important for both havana and icehouse (grizzly?) that would help get it through | 05:58 |
zigo | morganfainberg: I do know that Havana specifies SQLAlchemy>=0.7.8,<=0.7.99, though this doesn't mean I think it's a good idea to have such specs, when SQLA 0.9 was released long ago! :) | 05:58 |
zigo | I don't care about Grizzly. | 05:58 |
morganfainberg | zigo, haha | 05:58 |
zigo | Currently filling-up the review patches for -migrate | 05:59 |
morganfainberg | zigo, well I'll need more than "this is a good idea" to get it into stable/havana, you might get away with a packaging patch file based on the change | 05:59 |
zigo | Sure! :) | 05:59 |
zigo | Will do. | 05:59 |
morganfainberg | zigo, but it's posted for you. | 05:59 |
morganfainberg | hopefully it passes (don't see why it wouldn't) | 05:59 |
zigo | In fact, I'm ok with package specific patches, as long as Icehouse is fixed. | 05:59 |
zigo | (and that we start gating on SQLA 0.9) | 05:59 |
morganfainberg | still a bug for I would be perfect. | 05:59 |
zigo | Sure. | 06:00 |
morganfainberg | thanks :) | 06:00 |
morganfainberg | i can abandon the havana one if it's not justifiable (the requirements make it a hard sell) | 06:00 |
morganfainberg | but you have the fix :) | 06:00 |
zigo | My fixes for -migrate: https://review.openstack.org/#/c/77387/ https://review.openstack.org/#/c/77388/ https://review.openstack.org/#/c/77396/ https://review.openstack.org/#/c/77397/ | 06:06 |
zigo | Test with current keystone in trunk: https://review.openstack.org/#/c/75723/ <--- shows problems with Keystone db migration. | 06:06 |
*** dolphm_503 is now known as dolphm | 06:07 | |
*** marun has joined #openstack-infra | 06:09 | |
*** jcooley_ has joined #openstack-infra | 06:14 | |
*** dolphm is now known as dolphm_503 | 06:16 | |
*** reed has joined #openstack-infra | 06:17 | |
*** CaptTofu has joined #openstack-infra | 06:19 | |
*** gokrokve has quit IRC | 06:21 | |
*** homeless has quit IRC | 06:23 | |
*** homeless has joined #openstack-infra | 06:23 | |
*** CaptTofu has quit IRC | 06:24 | |
*** jcooley_ has quit IRC | 06:24 | |
*** jcooley_ has joined #openstack-infra | 06:24 | |
*** gokrokve has joined #openstack-infra | 06:28 | |
*** yamahata has quit IRC | 06:29 | |
*** jcooley_ has quit IRC | 06:29 | |
*** yamahata has joined #openstack-infra | 06:32 | |
zigo | morganfainberg: Your patch fails in Jenkins. | 06:35 |
morganfainberg | the havana one? it's a transient | 06:35 |
zigo | morganfainberg: What's that mean? | 06:36 |
zigo | transient? | 06:36 |
morganfainberg | http://logs.openstack.org/92/77392/1/check/check-tempest-dsvm-neutron/45084f0/console.html#_2014-03-02_06_15_51_534 | 06:36 |
morganfainberg | it is a transient bug, something didn't become "Ready" | 06:36 |
morganfainberg | when it was expected, unrelated to keystone unit test(s) | 06:36 |
morganfainberg | i need to chase down what bug to "recheck" against | 06:37 |
zigo | Ok. | 06:37 |
zigo | Got it. | 06:38 |
*** reed has quit IRC | 06:49 | |
*** banix has quit IRC | 06:52 | |
zigo | morganfainberg: https://bugs.launchpad.net/keystone/+bug/1286717 | 06:54 |
morganfainberg | zigo, if you can comment on the reviews i'll get them updated with the bug id | 06:55 |
zigo | morganfainberg: Not sure what to add as a comment ... | 06:55 |
morganfainberg | zigo, just add "bug id for this is <link>" | 06:55 |
morganfainberg | :) | 06:55 |
zigo | :) | 06:56 |
morganfainberg | just in gerrit, no specific line number | 06:56 |
morganfainberg | just will be easier for me to find that way rather than hunting through irc logs | 06:56 |
*** banix has joined #openstack-infra | 06:56 | |
*** gokrokve has quit IRC | 06:57 | |
*** banix has quit IRC | 06:58 | |
*** dolphm_503 is now known as dolphm | 07:08 | |
*** jcooley_ has joined #openstack-infra | 07:12 | |
*** lttrl has joined #openstack-infra | 07:16 | |
*** dolphm is now known as dolphm_503 | 07:17 | |
*** pmathews has quit IRC | 07:18 | |
*** thedodd has quit IRC | 07:18 | |
*** jcooley_ has quit IRC | 07:18 | |
*** gokrokve has joined #openstack-infra | 07:28 | |
*** gokrokve has quit IRC | 07:33 | |
*** oubiwan__ has quit IRC | 07:43 | |
*** valentinbud has joined #openstack-infra | 07:47 | |
*** SumitNaiksatam has quit IRC | 07:50 | |
*** vkozhukalov has joined #openstack-infra | 07:56 | |
*** SumitNaiksatam has joined #openstack-infra | 08:00 | |
*** vkozhukalov has quit IRC | 08:03 | |
*** dolphm_503 is now known as dolphm | 08:08 | |
*** lttrl has quit IRC | 08:12 | |
*** rcarrillocruz has joined #openstack-infra | 08:12 | |
*** yolanda_ has joined #openstack-infra | 08:13 | |
*** oubiwan__ has joined #openstack-infra | 08:13 | |
*** rcarrillocruz1 has quit IRC | 08:13 | |
*** oubiwan__ has quit IRC | 08:18 | |
*** dolphm is now known as dolphm_503 | 08:18 | |
*** CaptTofu has joined #openstack-infra | 08:20 | |
*** CaptTofu has quit IRC | 08:24 | |
*** rcarrillocruz1 has joined #openstack-infra | 08:29 | |
*** gokrokve has joined #openstack-infra | 08:29 | |
*** rcarrillocruz has quit IRC | 08:32 | |
*** gokrokve has quit IRC | 08:34 | |
*** valentinbud has quit IRC | 08:34 | |
*** e0ne has joined #openstack-infra | 08:42 | |
*** crank has quit IRC | 08:45 | |
*** e0ne has quit IRC | 08:47 | |
*** e0ne has joined #openstack-infra | 08:47 | |
*** rlandy has joined #openstack-infra | 08:51 | |
*** rcarrillocruz has joined #openstack-infra | 08:59 | |
*** rcarrillocruz1 has quit IRC | 09:01 | |
*** jcooley_ has joined #openstack-infra | 09:04 | |
*** amotoki_ has joined #openstack-infra | 09:08 | |
openstackgerrit | Antoine Musso proposed a change to openstack-infra/jenkins-job-builder: Test for email-ext publisher https://review.openstack.org/75872 | 09:08 |
openstackgerrit | A change was merged to openstack-infra/jenkins-job-builder: ArtifactDeployer Plugin support added https://review.openstack.org/75015 | 09:09 |
*** dolphm_503 is now known as dolphm | 09:09 | |
*** jcooley_ has quit IRC | 09:09 | |
*** gema has quit IRC | 09:10 | |
*** yamahata has quit IRC | 09:11 | |
openstackgerrit | Antoine Musso proposed a change to openstack-infra/jenkins-job-builder: Added send-to options support to email-ext plugin https://review.openstack.org/73601 | 09:11 |
*** yamahata has joined #openstack-infra | 09:12 | |
openstackgerrit | Antoine Musso proposed a change to openstack-infra/jenkins-job-builder: Add attachment pattern expression to email-ext. https://review.openstack.org/70148 | 09:13 |
openstackgerrit | Antoine Musso proposed a change to openstack-infra/jenkins-job-builder: Content-Type can now be set for email-ext publisher https://review.openstack.org/75919 | 09:13 |
*** oubiwan__ has joined #openstack-infra | 09:14 | |
*** morganfainberg is now known as morganfainberg_Z | 09:15 | |
*** gema has joined #openstack-infra | 09:17 | |
*** dolphm is now known as dolphm_503 | 09:19 | |
openstackgerrit | A change was merged to openstack-infra/jenkins-job-builder: Added support for Delivery Pipeline Plugin https://review.openstack.org/71658 | 09:19 |
*** oubiwan__ has quit IRC | 09:21 | |
openstackgerrit | A change was merged to openstack-infra/jenkins-job-builder: Added support for python virtualenv plugin https://review.openstack.org/71926 | 09:26 |
*** gokrokve has joined #openstack-infra | 09:30 | |
*** gokrokve has quit IRC | 09:35 | |
*** valentinbud has joined #openstack-infra | 09:35 | |
openstackgerrit | Antoine Musso proposed a change to openstack-infra/jenkins-job-builder: Use venv to build documentation https://review.openstack.org/76190 | 09:56 |
*** jcooley_ has joined #openstack-infra | 10:00 | |
*** jcooley_ has quit IRC | 10:04 | |
*** dolphm_503 is now known as dolphm | 10:10 | |
*** dolphm is now known as dolphm_503 | 10:19 | |
*** CaptTofu has joined #openstack-infra | 10:21 | |
*** rcarrillocruz1 has joined #openstack-infra | 10:23 | |
*** rcarrillocruz has quit IRC | 10:25 | |
*** CaptTofu has quit IRC | 10:25 | |
*** gokrokve has joined #openstack-infra | 10:29 | |
*** gokrokve_ has joined #openstack-infra | 10:31 | |
*** gokrokve has quit IRC | 10:34 | |
*** gokrokve_ has quit IRC | 10:35 | |
*** e0ne has quit IRC | 10:35 | |
*** e0ne has joined #openstack-infra | 10:36 | |
*** e0ne has quit IRC | 10:37 | |
*** e0ne has joined #openstack-infra | 10:40 | |
*** dripton__ has joined #openstack-infra | 10:58 | |
*** dripton_ has quit IRC | 10:58 | |
*** dripton__ has quit IRC | 11:05 | |
*** Ryan_Lane has quit IRC | 11:06 | |
*** dripton__ has joined #openstack-infra | 11:07 | |
*** dolphm_503 is now known as dolphm | 11:10 | |
*** rcarrillocruz has joined #openstack-infra | 11:17 | |
*** rcarrillocruz1 has quit IRC | 11:20 | |
*** dolphm is now known as dolphm_503 | 11:20 | |
*** alexpilotti has joined #openstack-infra | 11:21 | |
*** ildikov_ has joined #openstack-infra | 11:22 | |
*** salv-orlando has quit IRC | 11:23 | |
*** gokrokve has joined #openstack-infra | 11:28 | |
*** gokrokve has quit IRC | 11:33 | |
openstackgerrit | Sean Dague proposed a change to openstack-infra/elastic-recheck: update grenade old fails to stop bug https://review.openstack.org/77422 | 11:36 |
*** e0ne has quit IRC | 11:36 | |
*** che-arne has quit IRC | 11:38 | |
*** CaptTofu has joined #openstack-infra | 11:44 | |
*** rcarrillocruz1 has joined #openstack-infra | 11:45 | |
*** rcarrillocruz has quit IRC | 11:46 | |
*** yamahata has quit IRC | 11:46 | |
*** valentinbud has quit IRC | 11:48 | |
openstackgerrit | Sean Dague proposed a change to openstack-infra/elastic-recheck: python-dateutil requires six, be explicit about it https://review.openstack.org/77423 | 11:49 |
openstackgerrit | Sean Dague proposed a change to openstack-infra/elastic-recheck: fix quoting of the filename https://review.openstack.org/77424 | 11:51 |
*** yamahata has joined #openstack-infra | 11:55 | |
openstackgerrit | A change was merged to openstack-infra/elastic-recheck: update grenade old fails to stop bug https://review.openstack.org/77422 | 11:55 |
openstackgerrit | A change was merged to openstack-infra/elastic-recheck: python-dateutil requires six, be explicit about it https://review.openstack.org/77423 | 11:55 |
openstackgerrit | A change was merged to openstack-infra/elastic-recheck: fix quoting of the filename https://review.openstack.org/77424 | 11:55 |
*** dolphm_503 is now known as dolphm | 12:11 | |
*** yamahata has quit IRC | 12:12 | |
*** yamahata has joined #openstack-infra | 12:15 | |
*** DeeJay1 has joined #openstack-infra | 12:16 | |
DeeJay1 | hi. I have a problem setting up zuul, it seems it cannot read the ssh keyfile and I get an SSHException: not a valid DSA private key file error :( | 12:18 |
*** yamahata has quit IRC | 12:20 | |
*** dolphm is now known as dolphm_503 | 12:21 | |
*** valentinbud has joined #openstack-infra | 12:27 | |
*** gokrokve has joined #openstack-infra | 12:28 | |
*** gokrokve has quit IRC | 12:33 | |
*** lttrl has joined #openstack-infra | 12:35 | |
*** yolanda_ has quit IRC | 12:38 | |
*** rcarrillocruz has joined #openstack-infra | 12:40 | |
*** rcarrillocruz1 has quit IRC | 12:42 | |
*** e0ne has joined #openstack-infra | 12:52 | |
*** e0ne has quit IRC | 12:54 | |
zigo | sdague: What do you mean by "put these in a series"? Is there a series feature in gerrit that I didn't know of? | 12:59 |
sdague | zigo: if they are a series in git, that dependency chain will go through to gerrit | 13:00 |
zigo | sdague: I have no idea what you're talking about! :) | 13:00 |
* zigo is out for 30 minz, kid's bath ... | 13:00 | |
sdague | instead of 4 commits against master | 13:00 |
sdague | build a branch, make the commits one after another on it | 13:01 |
sdague | then you get something like this in gerrit - https://review.openstack.org/#/c/72372/ | 13:01 |
sdague | note the "Depends On" | 13:02 |
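A minimal sketch of the "series" workflow sdague describes: stack the commits on one local branch and push them together, so Gerrit records the dependency chain and each later change shows "Depends On" the one before it. The branch name and commit messages below are placeholders, and git-review is assumed as the push tool (it is what the conversation's projects use).

```bash
# Placeholder sketch of building a dependent series for Gerrit.
git checkout -b sqla-0.9-fixes origin/master
git commit -am "First fix in the series"
git commit -am "Second fix, which depends on the first"
git review    # pushes both commits; the second change depends on the first
```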
*** CaptTofu has quit IRC | 13:02 | |
openstackgerrit | Lukasz Jernas proposed a change to openstack-infra/config: Periodic documentation build for translators https://review.openstack.org/73806 | 13:07 |
*** mburned is now known as mburned_out | 13:10 | |
*** dolphm_503 is now known as dolphm | 13:12 | |
*** e0ne has joined #openstack-infra | 13:17 | |
*** alexpilotti has quit IRC | 13:19 | |
*** dolphm is now known as dolphm_503 | 13:22 | |
openstackgerrit | Lukasz Jernas proposed a change to openstack-infra/config: Periodic documentation build for translators https://review.openstack.org/73806 | 13:23 |
*** oubiwan__ has joined #openstack-infra | 13:27 | |
*** gokrokve has joined #openstack-infra | 13:28 | |
*** e0ne has quit IRC | 13:30 | |
*** gokrokve has quit IRC | 13:33 | |
*** lttrl has quit IRC | 13:35 | |
*** oubiwan__ has quit IRC | 13:45 | |
*** sileht has quit IRC | 13:48 | |
*** sileht has joined #openstack-infra | 13:50 | |
*** valentinbud has quit IRC | 13:55 | |
*** dolphm_503 is now known as dolphm | 14:13 | |
*** dolphm is now known as dolphm_503 | 14:22 | |
*** gokrokve has joined #openstack-infra | 14:28 | |
*** gokrokve has quit IRC | 14:33 | |
*** oubiwan__ has joined #openstack-infra | 14:42 | |
*** oubiwan__ has quit IRC | 14:47 | |
*** pcm_ has joined #openstack-infra | 14:49 | |
*** yamahata has joined #openstack-infra | 14:54 | |
*** medieval1 has joined #openstack-infra | 14:54 | |
*** medieval1_ has quit IRC | 14:58 | |
*** talluri has joined #openstack-infra | 15:00 | |
*** CaptTofu has joined #openstack-infra | 15:03 | |
*** CaptTofu has quit IRC | 15:07 | |
*** dolphm_503 is now known as dolphm | 15:13 | |
*** thuc has joined #openstack-infra | 15:14 | |
*** dcramer__ has quit IRC | 15:20 | |
*** ildikov_ has quit IRC | 15:21 | |
*** dolphm is now known as dolphm_503 | 15:23 | |
*** jcooley_ has joined #openstack-infra | 15:24 | |
*** rcarrillocruz1 has joined #openstack-infra | 15:25 | |
*** rcarrillocruz has quit IRC | 15:27 | |
*** thuc has quit IRC | 15:27 | |
*** thuc has joined #openstack-infra | 15:28 | |
*** gokrokve has joined #openstack-infra | 15:28 | |
*** jcooley_ has quit IRC | 15:28 | |
*** gokrokve has quit IRC | 15:33 | |
*** thuc has quit IRC | 15:35 | |
*** thuc has joined #openstack-infra | 15:35 | |
fungi | i've taken a stack trace of nodepoold and am restarting/cleaning up after it | 15:36 |
fungi | seemed not to be successfully building any nodes in the tripleo-ci overcloud | 15:36 |
fungi | and any of my attempts to perform deletions from the cli met with sqlalchemy lock errors | 15:37 |
*** thuc has quit IRC | 15:40 | |
*** exz has quit IRC | 15:41 | |
*** exz has joined #openstack-infra | 15:44 | |
SergeyLukjanov | fungi, oh | 15:46 |
SergeyLukjanov | fungi, could it affect other providers? | 15:46 |
jeblair | i'll go fix graphite | 15:47 |
fungi | SergeyLukjanov: it didn't seem to be affecting any other providers | 15:47 |
fungi | jeblair: i started to look at the graphite traceback... any idea what the cause is? (no module named timezone?) | 15:47 |
jeblair | fungi: not sure yet, but we do cd master there... | 15:48 |
jeblair | i think they may have just bumped their django requirement to 1.4 | 15:48 |
fungi | oh, it seems master has moved to no longer supporting django 1.3 | 15:48 |
fungi | yeah, that | 15:48 |
fungi | and we currently install django from ubuntu-provided debs. though perhaps precise also has django 1.4 | 15:49 |
jeblair | does it? | 15:50 |
fungi | i was going to check. no idea yet | 15:51 |
fungi | something is wonky with my wireless here, so without going down to my office this is very slow going | 15:52 |
anteaya | precise-updates django 1.3.1: http://packages.ubuntu.com/precise-updates/python-django | 15:53 |
jeblair | filed this https://github.com/graphite-project/graphite-web/issues/650 | 15:54 |
jeblair | looks like it's this commit https://github.com/jcsp/graphite-web/commit/ac6d56a53324c04e9ebe192dc0dad98d0262336e | 15:59 |
*** rcarrillocruz has joined #openstack-infra | 15:59 | |
fungi | yeah, i'm not finding any django1.4 or similar backports for precise | 16:00 |
*** rcarrillocruz1 has quit IRC | 16:02 | |
clarkb | Cloud archive may have it due to horizon | 16:03 |
*** dstanek has joined #openstack-infra | 16:04 | |
*** dcramer__ has joined #openstack-infra | 16:08 | |
jeblair | #status log stopped puppet and manually rolled back graphite-web on graphite.o.o pending resolution of https://github.com/graphite-project/graphite-web/issues/650 | 16:12 |
jeblair | so i'm doing that until we see if they want to support 1.3 or if we should upgrade to 1.4 | 16:13 |
anteaya | the git blame for graphite's requirements.txt gives a timestamp of | 16:13 |
anteaya | 2013-08-19 | 16:13 |
anteaya | for the django move to 1.4 | 16:13 |
anteaya | why would that only hit us now? | 16:13 |
jeblair | oh, sorry, we track 0.9.x which lists 1.3 as a requirement | 16:14 |
*** dolphm_503 is now known as dolphm | 16:14 | |
fungi | lifeless: SpamapS: after the nodepoold restart, i can see in our debug log that we're frequently getting overlimit errors (OverLimit: Quota exceeded for instances: Requested 1, but already used 100 of 100 instances) and the few that do manage to build never come up/become reachable (Exception: Timeout waiting for ssh access) | 16:15 |
jeblair | fungi: what prompted you to restart? | 16:16 |
fungi | jeblair: all attempts to delete nodes from the command line failed with sqlalchemy waiting for lock timeout | 16:16 |
fungi | for the past several days | 16:16 |
jeblair | fungi: were you trying to delete machines that were already in the delete state? | 16:16 |
fungi | so i took a stacktrace a little bit ago and restarted | 16:16 |
fungi | they were in a building state | 16:17 |
jeblair | fungi: for how long? | 16:17 |
fungi | hours | 16:17 |
jeblair | fungi: i think the current timeout for a building node is 8 hours | 16:17 |
fungi | trying to either switch them to a delete state or delete --now both met with the same traceback from sqlalchemy | 16:17 |
fungi | but yes, the cleanup thread seemed to be reaping them at the 8 hour mark just fine | 16:18 |
fungi | and i didn't see similar lock timeouts in the log | 16:18 |
jeblair | fungi: so i think that's because those nodes all had active threads working on them | 16:18 |
fungi | so i have to assume nodepoold was locking for update and never released | 16:18 |
fungi | yeah, makes sense | 16:18 |
jeblair | fungi: i don't think nodepool has a way to switch a building node to delete | 16:19 |
fungi | nodepool delete (the command line) is what i was trying to use to switch them to delete state | 16:19 |
jeblair | fungi: right, but since there was still a build thread running, there's nothing that would make it stop | 16:20 |
*** thuc has joined #openstack-infra | 16:21 | |
fungi | but yes, i didn't seem to be able to forcibly delete them without restarting nodepool to release the lock on one of those records so i could delete and watch it try to build a current node in that cloud | 16:21 |
jeblair | fungi: so afaict, nodepool was operating correctly, but it doesn't seem to be meeting your expectations... | 16:22 |
jeblair | fungi: why did you want to delete the currently building nodes? | 16:22 |
fungi | because they'd been sitting in that state for hours, and i wanted to try to get it to build one fresh | 16:23 |
jeblair | fungi: so would you like a shorter timeout for building a node? | 16:23 |
*** dolphm is now known as dolphm_503 | 16:24 | |
fungi | under normal circumstances, i'm not sure | 16:24 |
fungi | the tripleo people were begging me to find out why nodepool was not assigning nodes to jenkins from their cloud, and insisting they had everything fixed and back in working order so something must be stuck on our side | 16:25 |
fungi | pretty sure at this point they're wrong | 16:25 |
jeblair | fungi: oh, i didn't see any of that | 16:25 |
fungi | it was yesterday/last night, but i only just glanced at a computer for a few minutes, it being the weekend | 16:25 |
*** thuc has quit IRC | 16:26 | |
*** thuc has joined #openstack-infra | 16:26 | |
*** pcrews has joined #openstack-infra | 16:27 | |
*** gokrokve has joined #openstack-infra | 16:28 | |
sdague | did something hiccup? | 16:28 |
jeblair | sdague: what? | 16:29 |
sdague | https://review.openstack.org/#/c/77435/ didn't seem to get assigned anything for tests | 16:29 |
*** thuc_ has joined #openstack-infra | 16:29 | |
sdague | and it's er, so I expect it to get processed in a couple minutes | 16:29 |
sdague | uploaded a review 1.5hrs ago | 16:29 |
jeblair | fungi: i don't think there's a bug to fix in nodepool. i'm not sure why those nodes never left the building state, but if we want to have nodepool handle more manual interaction (like aborting a build thread while in progress), i think that would be an enhancement | 16:29 |
jeblair | sdague: i'll check | 16:30 |
sdague | jeblair: thanks | 16:31 |
notmyname | I saw some import errors when trying to do some graphite graphs from openstack about 20 minutes ago. seems to be working now, but ya, it seems something hiccuped | 16:31 |
sdague | notmyname: this would be a different issue | 16:31 |
*** thuc has quit IRC | 16:31 | |
fungi | jeblair: okay, thanks. mainly i was just confused and didn't anticipate nodepoold holding a record lock for the duration of the node build. what i was seeing makes more sense now | 16:31 |
anteaya | notmyname: could you paste your errors? | 16:31 |
jeblair | anteaya: it's the thing i just fixed | 16:31 |
notmyname | sdague: not saying they are directly related. just that I too saw something | 16:32 |
anteaya | jeblair: how did you fix it? | 16:32 |
notmyname | anteaya: I don't have it anymore. IIRC, something about "couldn't import module timezone" | 16:32 |
anteaya | notmyname: yes it was the same issue affecting status.openstack.org/zuul | 16:32 |
jeblair | anteaya: "stopped puppet and manually rolled back graphite-web on graphite.o.o pending resolution of https://github.com/graphite-project/graphite-web/issues/650" | 16:32 |
jeblair | notmyname: ^ | 16:32 |
*** gokrokve has quit IRC | 16:33 | |
anteaya | jeblair: oh okay, I read that but it didn't click, thanks | 16:33 |
sdague | from an er perspective, it will be very good to get to the end of the week and age out the old results. It looks like about 80% of the uncategorized content is failures in logs we didn't used to index, but started last weekend. | 16:33 |
sdague | I think the one remaining log we really need is horizon error log | 16:34 |
jeblair | zuul doesn't seem to be seeing any gerrit events | 16:34 |
notmyname | jeblair: I pushed a tag for python-swiftclient a little more than 10 minutes ago. normally it would be on pypi by now, but it isn't yet. should I wait, or is there something else going on? | 16:34 |
sdague | but that will require clarkb to help, because it's a different time format | 16:34 |
sdague | jeblair: odd | 16:34 |
sdague | jeblair: just recheck? | 16:34 |
anteaya | check and gate queues are empty | 16:34 |
jeblair | sdague: no events | 16:34 |
sdague | well I just ran a recheck no bug | 16:35 |
jeblair | zuul has 2 established tcp connections to review:29418; review has 3 established connections to zuul | 16:35 |
anteaya | notmyname: zuul isn't seeing the gerrit event | 16:35 |
sdague | still doesn't seem to pop up | 16:35 |
notmyname | ya, just saw jeblair say that. I'll check in later | 16:35 |
anteaya | k | 16:36 |
sdague | jeblair: yeh, looks like at least 3 changes in gerrit that have been completely missed: https://review.openstack.org/#/q/status:open,n,z | 16:36 |
jeblair | the last event it saw from gerrit was at 14:42:17 | 16:37 |
* DeeJay1 just uploaded a new patchset - nothing happened | 16:37 | |
anteaya | DeeJay1: yes we are tracking why now | 16:37 |
anteaya | zuul has been deaf for the last two hours almost | 16:37 |
sdague | jeblair: actually, 18 | 16:38 |
sdague | https://review.openstack.org/#/q/status:open+-Verified-1+-Verified%252B1+-Verified-2+-Verified%252B2,n,z | 16:38 |
anteaya | 18 hours? | 16:38 |
sdague | 18 reviews with no Gerrit vote since 14:42:17 UTC | 16:39 |
sdague | there's one at 14:43 | 16:39 |
anteaya | I saw changes in both gate and check last night | 16:39 |
sdague | so that's a pretty solid indicator | 16:39 |
jeblair | i think this is probably another half-closed connection like what we saw when rax was having those network issues | 16:39 |
anteaya | with might have been about 18 hours ago | 16:39 |
fungi | no obvious blips on its cacti graphs | 16:39 |
sdague | anteaya: no, it's 2 hrs ago | 16:39 |
jeblair | paramiko is waiting for a packet | 16:39 |
sdague | 18 missing events | 16:39 |
anteaya | ah | 16:40 |
fungi | nor on the review.o.o graph | 16:40 |
jeblair | fungi: so i think there's little left to do other than restart zuul. i think the fix for this is figuring out how to add tcp keepalive to the ssh connection | 16:42 |
fungi | jeblair: agreed--i was just looking at paramiko's set_keepalive | 16:42 |
jeblair | fungi: would you agree that the 2 established connections on zuul's side vs 3 connections on gerrit's side points in that direction? | 16:42 |
fungi | agreed | 16:42 |
fungi | yes, probably an asymmetric connection reset | 16:43 |
jeblair | restarting zuul now | 16:43 |
fungi | misbehaving router between them or something | 16:43 |
fungi | http://www.lag.net/paramiko/docs/paramiko.Transport-class.html#set_keepalive | 16:43 |
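The approach fungi and jeblair are converging on here is paramiko's Transport.set_keepalive(). Below is a minimal sketch of what that looks like for a gerrit stream-events consumer; the host, port, username and key path are assumptions for illustration, not zuul's actual configuration.

    # Minimal sketch, not zuul's real code: open an ssh connection to gerrit,
    # enable application-level ssh keepalives, and read stream-events.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect('review.openstack.org', port=29418, username='zuul',
                   key_filename='/var/lib/zuul/.ssh/id_rsa')  # assumed values

    # Send an SSH keepalive every 60 seconds so a half-closed TCP connection
    # is noticed instead of paramiko waiting forever for a packet.
    client.get_transport().set_keepalive(60)

    stdin, stdout, stderr = client.exec_command('gerrit stream-events')
    for line in stdout:
        print(line, end='')

With keepalives enabled, a connection silently dropped by a misbehaving router would error out within a couple of intervals instead of leaving zuul deaf to gerrit events.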
jeblair | zuul and its mergers have been restarted | 16:44 |
fungi | i can retrigger notmyname's tag jobs | 16:45 |
notmyname | fungi: thanks | 16:45 |
jeblair | fungi: cool, thx | 16:45 |
fungi | then after that, i have to disappear. i'm supposed to be cleaning ;) | 16:46 |
anteaya | best way to spend a Sunday | 16:46 |
anteaya | if I should be seeing anything on the status page, I am not yet | 16:46 |
jeblair | fungi: oh, i think we were wrong | 16:47 |
jeblair | i still didn't see any events in the zuul log | 16:47 |
jeblair | even after i left a 'recheck no bug' | 16:47 |
fungi | jeblair: connectivity problems then? | 16:47 |
jeblair | so i ran "ssh review gerrit stream-events" locally | 16:47 |
jeblair | it connected, i left another comment, but nothing | 16:47 |
jeblair | so i think gerrit may actually be at fault | 16:47 |
anteaya | why would gerrit suddenly stop streaming events? | 16:48 |
*** talluri has quit IRC | 16:48 | |
jeblair | fungi: same behavior running stream-events on review.o.o connecting to localhost | 16:48 |
jeblair | ls-projects works | 16:49 |
jeblair | fungi: ssh review gerrit show-queue | 16:50 |
jeblair | is interesting | 16:50 |
fungi | i can ping6 -Q0x10 in both directions between them, so it's not our qos/dscp friend | 16:50 |
jeblair | fungi: have you run the tag command yet? | 16:51 |
jeblair | fungi: if not, don't | 16:52 |
fungi | i haven't no | 16:52 |
jeblair | fungi: i'm curious whether running 'gerrit kill 9e0a9030' will get it unstuck? | 16:52 |
fungi | currently looking at gerrit bug reports about stream-events hanging | 16:52 |
jeblair | or rather just 'kill 9e0a9030' i think | 16:53 |
jeblair | fungi: shall i try that? | 16:53 |
fungi | worth a shot. i was trying to interpret which of the threads was at fault there | 16:53 |
jeblair | fungi: before i do that, should we look at the javamelody page? | 16:53 |
fungi | the first 9 of them aren't waiting | 16:53 |
jeblair | fungi: is that what you are doing? | 16:53 |
fungi | oh, no i was just trying to interpret show-queue | 16:54 |
fungi | i can't seem to find my notes on how to access the javamelody interface for gerrit | 16:54 |
jeblair | https://review.openstack.org/monitoring | 16:55 |
fungi | based on my reading so far, when all the available stream handler threads are occupied, the others will sit in a waiting state until one is freed up | 16:56 |
jeblair | fungi: are they all waiting for an ssh write operation to finish? | 16:57 |
fungi | i'm trying to figure that out from the gerrit logs | 16:59 |
jeblair | https://etherpad.openstack.org/p/K2qiY8oCqw | 16:59 |
jeblair | the tracebacks for all of the 'SSH-Stream-Worker-#' threads look like that | 16:59 |
jeblair | 'gerrit show-connections' isn't returning very quickly for me | 17:01 |
*** banix has joined #openstack-infra | 17:02 | |
jeblair | it returned | 17:02 |
*** ok_delta has joined #openstack-infra | 17:02 | |
*** ok_delta has quit IRC | 17:02 | |
*** ociuhandu has quit IRC | 17:03 | |
*** CaptTofu has joined #openstack-infra | 17:03 | |
jeblair | i wish i knew which hosts those threads were trying to write to | 17:04 |
fungi | yeah, the hash values for them seem to not really be related to those in the queue at all | 17:06 |
jeblair | fungi: gerrit docs say that tcpkeepalive defaults to true for gerrit's sshd | 17:07 |
fungi | which would cause gerrit to at least tear down dead ssh client sockets, though without keepalive from the client the other end might still not be aware | 17:08 |
*** CaptTofu has quit IRC | 17:08 | |
fungi | we at least know 98f5ade1 should be dead at this point | 17:09 |
fungi | started at 22:26:59, idle 02:24:09 | 17:09 |
jeblair | true | 17:10 |
fungi | corresponds to roughly when zuul stopped receiving events | 17:10 |
anteaya | would the number of systems we have connected to gerrit play any role in this? | 17:12 |
fungi | i'm wondering whether queue task 522c096f (the last one in the list which isn't waiting) is at fault? | 17:12 |
fungi | anteaya: no idea... we've got about 70 established ssh connections gerrit knows about at the moment | 17:12 |
anteaya | k | 17:12 |
jeblair | fungi: i don't think it's the events that are at fault, i think it's the connections | 17:13 |
*** cody-somerville has quit IRC | 17:13 | |
fungi | ahh | 17:13 |
jeblair | fungi: i think those threads take events from the queue, format them, and push them to the ssh connections | 17:13 |
jeblair | fungi: and i think they are all trying to do that | 17:13 |
jeblair | fungi: so i'm pretty sure if we killed one of those processes, it would just get stuck on the next one | 17:14 |
jeblair | fungi: we can try that if you want | 17:14 |
fungi | and in theory if we started killing connections, we might eventually find that one in particular un-sticks the remaining tasks in the queue? | 17:14 |
*** dolphm_503 is now known as dolphm | 17:15 | |
jeblair | fungi: except i don't know how to kill a connection | 17:15 |
jeblair | fungi: only the gerrit task (or thread) (which doesn't map 1:1 to an ssh connection) | 17:16 |
jeblair | fungi: but if we could destroy connections, yes | 17:16 |
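jeblair's theory, in other words, is that gerrit has a fixed pool of stream-worker threads doing blocking writes, and once every worker is stuck on a dead consumer the whole event stream stalls. A toy Python sketch of that failure mode (Gerrit itself is Java; this only illustrates the shape of the problem):

    import queue
    import threading

    events = queue.Queue()   # stand-in for gerrit's internal event queue
    connections = []         # one socket per 'gerrit stream-events' client
    NUM_WORKERS = 9          # matches the SSH-Stream-Worker count seen above

    def stream_worker():
        while True:
            event = events.get()
            for conn in connections:
                # Blocking write: a half-closed peer that never ACKs leaves the
                # thread parked here, and without keepalive nothing ever tears
                # the socket down.
                conn.sendall(event)

    for _ in range(NUM_WORKERS):
        threading.Thread(target=stream_worker, daemon=True).start()

Killing a queued task only hands the next event to the same stuck pool, which matches what happens below when tasks are killed and all nine workers stay parked at the same spot.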
fungi | looks like hashar saw something similar at wikimedia? https://bugzilla.wikimedia.org/show_bug.cgi?id=46917 | 17:18 |
*** rcarrillocruz1 has joined #openstack-infra | 17:19 | |
*** rcarrillocruz has quit IRC | 17:21 | |
* fungi really wishes his wireless network would stop flaking out. a neighbor must have started running something on the same frequency today | 17:22 |
*** ildikov_ has joined #openstack-infra | 17:23 | |
*** rcarrillocruz has joined #openstack-infra | 17:24 | |
*** rcarrillocruz1 has quit IRC | 17:24 | |
*** dolphm is now known as dolphm_503 | 17:25 | |
*** thuc_ has quit IRC | 17:25 | |
anteaya | their bug seems to hinge on the use of apache sshd 0.6.0 | 17:25 |
anteaya | what version of apache sshd is our gerrit using? | 17:26 |
jeblair | i believe ours is still using 0.5.1 | 17:26 |
*** thuc has joined #openstack-infra | 17:26 | |
anteaya | do we have something that automatically closes idle ssh connections? | 17:27 |
*** ociuhandu has joined #openstack-infra | 17:27 | |
fungi | we might have simply had an unfortunate confluence of network issues triggering a similar symptom | 17:28 |
*** gokrokve has joined #openstack-infra | 17:28 | |
jeblair | fungi: yeah | 17:28 |
anteaya | should we open a bug to track our findings in case this happens again? | 17:29 |
fungi | unfortunately they were resolving it with gerrit restarts, which is a fairly hefty hammer | 17:29 |
*** thuc has quit IRC | 17:30 | |
fungi | i have no doubt that would clear the situation, but won't help us preserve any pending events and get them sent out to the hung streams | 17:31 |
*** yolanda_ has joined #openstack-infra | 17:31 | |
*** gokrokve has quit IRC | 17:33 | |
*** oubiwan__ has joined #openstack-infra | 17:34 | |
jeblair | fungi: ever used tcpkill? | 17:34 |
fungi | it's been a while. i think it runs and continues to issue tcp/rst packets for any connection matching a particular pattern | 17:37 |
fungi | rereading the manpage now | 17:37 |
jeblair | oh it might not work for idle connections | 17:37 |
*** rcarrillocruz1 has joined #openstack-infra | 17:38 | |
*** rcarrillocruz has quit IRC | 17:40 | |
fungi | but yeah it seems to loop indefinitely and kill all connections matching the given pattern as soon as they arise | 17:40 |
anteaya | so you can't kill tcpkill? | 17:43 |
fungi | might be able to use hping to spoof a tcp/rst back to the client | 17:43 |
jeblair | fungi: i'm about ready to say this isn't worth deeper java debugging; try killing the gerrit task(s), and then restart gerrit if that doesn't work... | 17:46 |
fungi | yeah, i was starting to dig into how we might cause a "connection reset by peer" using hping3, but that's getting into the weeds | 17:47 |
*** dcramer__ has quit IRC | 17:47 | |
jeblair | ok, i'll kill the top gerrit task and we'll see what happens | 17:47 |
fungi | wfm | 17:47 |
fungi | at this point we can kill as many of them as we like for experimentation purposes. the gerrit restart's going to do that in the end anyway | 17:48 |
jeblair | fungi: yeah, they all just shifted up and all 9 workers are still waiting at the same spot | 17:49 |
fungi | okay | 17:49 |
*** e0ne has joined #openstack-infra | 17:49 | |
jeblair | fungi: same result with killing the bottom one in the non-waiting stack | 17:51 |
jeblair | fungi: restart now? | 17:51 |
fungi | yeah | 17:51 |
fungi | i'm zeroing in on ways we might be able to diagnose this better next time, but let's not hold up the boat | 17:52 |
*** gokrokve has joined #openstack-infra | 17:52 | |
jeblair | fungi: installing jstack might be helpful (it's better about diagnosing lock contention), but that would pull in the jdk and LOTS of packages | 17:52 |
fungi | did you want me to restart it? | 17:52 |
jeblair | fungi: i'll do it | 17:53 |
fungi | k | 17:53 |
mordred | morning guys - sounds like a fun morning | 17:53 |
mordred | but it sounds like it's in hand - anything you want me to jump in on? | 17:54 |
*** banix has quit IRC | 17:54 | |
jeblair | fungi: if you have the prep for notmyname's tag job handy, i think it's safe to do that now | 17:55 |
jeblair | fungi: if you need to go, i can do it | 17:55 |
jeblair | mordred: not that i can think of | 17:55 |
fungi | i'll get it running | 17:55 |
openstackgerrit | Lukasz Jernas proposed a change to openstack-infra/zuul: Minor fixes to styling and coherency https://review.openstack.org/77442 | 17:56 |
DeeJay1 | seems to work | 17:56 |
jeblair | sdague: the problem was gerrit; we restarted it. events were lost. | 17:56 |
sdague | jeblair: ok | 17:56 |
sdague | well, at least it didn't happen on a busy day :) | 17:57 |
*** thuc has joined #openstack-infra | 17:58 | |
openstackgerrit | A change was merged to openstack-infra/elastic-recheck: added query for broken rax mirror https://review.openstack.org/77435 | 17:59 |
anteaya | did you want me to recheck no bug this list? https://review.openstack.org/#/q/status:open+-Verified-1+-Verified%252B1+-Verified-2+-Verified%252B2,p,002b7a3800012e3f | 18:00 |
jeblair | anteaya: that would be swell, thanks | 18:01 |
anteaya | k | 18:01 |
fungi | notmyname: https://pypi.python.org/pypi/python-swiftclient/2.0.3 | 18:02 |
fungi | notmyname: should be all set now. sorry about the delay | 18:02 |
*** e0ne has quit IRC | 18:03 | |
*** pcrews has quit IRC | 18:03 | |
*** dcramer__ has joined #openstack-infra | 18:03 | |
DeeJay1 | anteaya: hmm, wasn't that this list instead https://review.openstack.org/#/q/status:open+-Verified-1+-Verified%252B1+-Verified-2+-Verified%252B2,p,002b681c00012ae7 | 18:04 |
DeeJay1 | ? | 18:04 |
sdague | https://review.openstack.org/#/q/status:open+-Verified-1+-Verified%252B1+-Verified-2+-Verified%252B2+-age:4h,n,z is actually the right query I think to pick up everything that's left | 18:05 |
*** jcooley_ has joined #openstack-infra | 18:06 | |
sdague | jeblair / fungi: thanks for digging in on a Sunday | 18:06 |
jeblair | sdague: no prob. i'm trying to enjoy the wknd, but i also planned to be around to make sure things are in shape for this coming week. | 18:08 |
sdague | yeh, honestly, I was sifting through ER data this weekend, I think we're actually in pretty good shape right now | 18:09 |
*** jcooley_ has quit IRC | 18:11 | |
anteaya | I did some on the link I posted and then switched to the link sdague posted | 18:11 |
anteaya | I may have missed some | 18:11 |
anteaya | DeeJay1: thanks | 18:12 |
fungi | looks like the hping3 trick would require some scripted timing to be able to plug in the right seq and ack for the forged packet. also hping3 doesn't support ipv6. to really do this effectively we'd probably need to use something like scapy instead | 18:12 |
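For reference, the scapy variant fungi is alluding to would be forging a TCP RST toward gerrit as though it came from the dead client. This is only a hedged sketch: the addresses, ports and sequence number are placeholders, the seq has to match what gerrit expects next (e.g. lifted from a tcpdump of the stuck connection), and it needs root.

    # Hypothetical illustration of spoofing a RST to tear down one stuck
    # connection; every value below is made up for the example.
    from scapy.all import IP, TCP, send

    rst = (IP(src='203.0.113.10', dst='203.0.113.20') /      # client -> gerrit
           TCP(sport=54321, dport=29418, flags='R', seq=123456789))
    send(rst, verbose=False)

    # scapy also speaks IPv6 (swap IP for IPv6), which is where hping3 falls short.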
openstackgerrit | Lukasz Jernas proposed a change to openstack-infra/config: Periodic documentation build for translators https://review.openstack.org/73806 | 18:13 |
* DeeJay1 has a love/hate relationship with zuul after patchset nr 7... | 18:14 | |
*** wchrisj has joined #openstack-infra | 18:14 | |
*** dolphm_503 is now known as dolphm | 18:16 | |
*** DeeJay1 has quit IRC | 18:24 | |
*** dolphm is now known as dolphm_503 | 18:25 | |
*** rlandy has quit IRC | 18:28 | |
anteaya | I went through the list again, I don't think I missed any that were listed | 18:29 |
anteaya | I didn't see the swift tag job, I think fungi had gotten to it already | 18:29 |
*** dcramer__ has quit IRC | 18:31 | |
*** fifieldt has joined #openstack-infra | 18:32 | |
*** reed has joined #openstack-infra | 18:34 | |
openstackgerrit | Andreas Jaeger proposed a change to openstack-infra/config: Change operations-guide-jobs to not use maven https://review.openstack.org/77446 | 18:41 |
*** dcramer__ has joined #openstack-infra | 18:44 | |
*** jcooley_ has joined #openstack-infra | 18:48 | |
*** rossella_s has joined #openstack-infra | 18:50 | |
*** jcooley_ has quit IRC | 18:50 | |
*** rossella_s has quit IRC | 18:56 | |
*** jcooley_ has joined #openstack-infra | 18:56 | |
*** wchrisj has quit IRC | 18:58 | |
*** reed has quit IRC | 18:59 | |
*** CaptTofu has joined #openstack-infra | 19:04 | |
*** CaptTofu has quit IRC | 19:09 | |
*** jcooley_ has quit IRC | 19:15 | |
*** dolphm_503 is now known as dolphm | 19:16 | |
*** dcramer__ has quit IRC | 19:19 | |
*** david-lyle has joined #openstack-infra | 19:21 | |
*** gokrokve has quit IRC | 19:22 | |
kevinbenton | is something wrong with the gate? there was an approved patch this morning that disappeared | 19:24 |
kevinbenton | https://review.openstack.org/#/c/74306/ | 19:24 |
kevinbenton | but another patch made it in now so I will need to update the alembic migration script before this can merge | 19:25 |
kevinbenton | i just want to make sure everything is back to normal if there was a problem | 19:25 |
*** e0ne has joined #openstack-infra | 19:25 | |
*** dolphm is now known as dolphm_503 | 19:26 | |
*** thuc has quit IRC | 19:27 | |
kevinbenton | anteaya: ^^ | 19:28 |
*** thuc has joined #openstack-infra | 19:28 | |
clarkb | kevinbenton: see scrollback. gerrit was unable to emit events for a few hours and was restarted | 19:30 |
kevinbenton | clarkb: ok, it's good now, right? | 19:31 |
kevinbenton | clarkb: because i can't see anything in the gate. http://status.openstack.org/zuul/ | 19:32 |
*** thuc has quit IRC | 19:32 | |
clarkb | kevinbenton: yes it should be fine | 19:32 |
kevinbenton | clarkb: ok. thanks | 19:33 |
*** gokrokve has joined #openstack-infra | 19:38 | |
*** yolanda_ has quit IRC | 19:44 | |
*** salv-orlando has joined #openstack-infra | 19:49 | |
*** hashar_ has joined #openstack-infra | 19:51 | |
anteaya | kevinbenton: I see 2 in the gate right now, a cinder and a neutron | 19:57 |
anteaya | and 74306,12 in check | 19:58 |
*** thuc has joined #openstack-infra | 19:58 | |
*** thuc has quit IRC | 20:03 | |
*** lttrl has joined #openstack-infra | 20:03 | |
*** fifieldt has quit IRC | 20:07 | |
*** thuc has joined #openstack-infra | 20:16 | |
*** dolphm_503 is now known as dolphm | 20:17 | |
*** gokrokve has quit IRC | 20:21 | |
*** thuc has quit IRC | 20:23 | |
*** thuc_ has joined #openstack-infra | 20:23 | |
*** gokrokve_ has joined #openstack-infra | 20:23 | |
*** dolphm is now known as dolphm_503 | 20:27 | |
*** e0ne has quit IRC | 20:28 | |
*** thuc has joined #openstack-infra | 20:28 | |
*** gokrokve_ has quit IRC | 20:31 | |
*** nicedice has quit IRC | 20:31 | |
*** thuc_ has quit IRC | 20:31 | |
*** thuc has quit IRC | 20:32 | |
*** morganfainberg_Z is now known as morganfainberg | 20:33 | |
*** HenryG_ has quit IRC | 20:38 | |
*** hashar_ has quit IRC | 20:45 | |
*** thuc has joined #openstack-infra | 20:46 | |
notmyname | can someone give me some help on https://review.openstack.org/#/c/77094/ ? console log on the failure is http://logs.openstack.org/94/77094/1/gate/gate-swift-python26/4a29787/console.html.gz | 20:47 |
anteaya | No distributions at all found for pastedeploy>=1.3.3 (from -r /home/jenkins/workspace/gate-swift-python26/requirements.txt (line 5)) | 20:49 |
notmyname | right. but what's the bug? | 20:49 |
notmyname | I need some invocation to put into the gerrit comment box | 20:49 |
anteaya | https://bugs.launchpad.net/openstack-ci/+bug/1272417 | 20:50 |
anteaya | don't forget to throw salt over your left shoulder as well | 20:50 |
notmyname | thanks | 20:51 |
anteaya | np | 20:51 |
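For the record, the invocation notmyname was asking for is just a Gerrit review comment; following the convention used elsewhere in this log ("recheck no bug"), it would presumably be:

    recheck bug 1272417

which retriggers the failed check jobs and associates the failure with that launchpad bug.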
*** wchrisj has joined #openstack-infra | 20:59 | |
*** CaptTofu has joined #openstack-infra | 21:05 | |
*** CaptTofu has quit IRC | 21:10 | |
SpamapS | fungi: thanks for looking into it btw. Not sure we were "begging", but we definitely didn't have insight into why jobs weren't starting/vms weren't being built | 21:12 |
SpamapS | fungi: I think we found a quota bug where resources aren't being freed ... | 21:12 |
SpamapS | fungi: if there is somewhere that we can look to see nodepoold's state in the future we won't have to bug you ;) | 21:15 |
*** lttrl has quit IRC | 21:17 | |
*** dolphm_503 is now known as dolphm | 21:18 | |
*** mrda_away is now known as mrda | 21:19 | |
*** e0ne has joined #openstack-infra | 21:20 | |
SpamapS | fungi: btw, I have 9789 deleted ssh keypairs for openstack-nodepool .. I wonder if the quota system has a bug around that (that is the only quota that is 100) | 21:21 |
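A rough way to sanity-check the kind of quota drift SpamapS is describing, sketched with python-novaclient; the credentials, tenant name and keystone endpoint are assumptions. If keypair creates are rejected for quota while far fewer keypairs are actually visible than the limit allows, it is the provider's usage accounting, not the resources themselves, that needs correcting.

    # Hedged sketch: compare configured quota limits against the resources the
    # API can actually see. Credentials and endpoint below are placeholders.
    from novaclient import client

    nova = client.Client('2', 'nodepool', 'SECRET', 'openstack-nodepool',
                         'http://keystone.example.com:5000/v2.0')

    quota = nova.quotas.get('openstack-nodepool')
    print('keypairs:  limit', quota.key_pairs, 'visible', len(nova.keypairs.list()))
    print('instances: limit', quota.instances, 'visible', len(nova.servers.list()))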
*** ianw has joined #openstack-infra | 21:22 | |
*** wchrisj has quit IRC | 21:22 | |
SpamapS | looks like floating ips aren't working hmmm | 21:23 |
SpamapS | well.. l3 agent not running would PROBABLY have something to do with that | 21:23 |
*** gokrokve has joined #openstack-infra | 21:31 | |
*** gokrokve has quit IRC | 21:36 | |
*** jhesketh_ has joined #openstack-infra | 21:40 | |
*** wchrisj has joined #openstack-infra | 21:41 | |
lifeless | fungi: thanks | 21:45 |
*** pmathews has joined #openstack-infra | 21:47 | |
*** e0ne has quit IRC | 21:47 | |
*** e0ne has joined #openstack-infra | 21:48 | |
*** e0ne has quit IRC | 21:48 | |
*** e0ne has joined #openstack-infra | 21:54 | |
*** e0ne has quit IRC | 21:56 | |
*** dcramer__ has joined #openstack-infra | 21:57 | |
*** e0ne_ has joined #openstack-infra | 21:58 | |
*** rossella_s has joined #openstack-infra | 22:00 | |
*** jhesketh_ has quit IRC | 22:00 | |
*** e0ne_ has quit IRC | 22:04 | |
*** wchrisj has quit IRC | 22:06 | |
SpamapS | fungi: quotas "fixed" to reflect reality now.. and nodepool seems to be spinning things up | 22:09 |
SpamapS | fungi: and I can get to SSH on the vms | 22:09 |
SpamapS | so we may be back "up" | 22:10 |
*** rossella_s has quit IRC | 22:10 | |
*** gokrokve has joined #openstack-infra | 22:23 | |
*** reed has joined #openstack-infra | 22:23 | |
*** gokrokve has quit IRC | 22:27 | |
*** rcleere has joined #openstack-infra | 22:28 | |
*** pmathews has quit IRC | 22:28 | |
*** pcm_ has quit IRC | 22:35 | |
*** thuc has quit IRC | 22:36 | |
*** thuc has joined #openstack-infra | 22:36 | |
*** oubiwan__ has quit IRC | 22:37 | |
*** jhesketh has joined #openstack-infra | 22:39 | |
jhesketh | Morning | 22:40 |
*** oubiwan__ has joined #openstack-infra | 22:40 | |
*** thuc has quit IRC | 22:41 | |
*** jcooley_ has joined #openstack-infra | 22:43 | |
anteaya | morning jhesketh | 22:43 |
*** blamar has quit IRC | 22:43 | |
jhesketh | hey anteaya, how's things in Canada? | 22:43 |
anteaya | cold | 22:45 |
openstackgerrit | Stefano Maffulli proposed a change to openstack-infra/config: Add track chairs mailing list https://review.openstack.org/77458 | 22:45 |
anteaya | keep having to wear my coveralls when I go outside | 22:45 |
*** jcooley_ has quit IRC | 22:45 | |
*** oubiwan__ has quit IRC | 22:45 | |
anteaya | how are things with you? | 22:45 |
jhesketh | heh | 22:46 |
jhesketh | not bad thanks, summer finished over the weekend though | 22:46 |
jhesketh | but autumn (or do you call it fall?) is my favourite season so I'm happy | 22:47 |
anteaya | nice | 22:47 |
anteaya | I like autumn/fall too | 22:47 |
anteaya | I use both | 22:47 |
anteaya | did they ever get the shark hunt drum line issue resolved? | 22:47 |
jhesketh | not sure, I haven't heard anything about it | 22:48 |
* jhesketh isn't very good at following news | 22:48 | |
anteaya | maybe it was just in perth | 22:48 |
*** thuc has joined #openstack-infra | 22:51 | |
*** oubiwan__ has joined #openstack-infra | 22:55 | |
*** jcooley_ has joined #openstack-infra | 22:56 | |
*** thuc has quit IRC | 22:56 | |
*** e0ne has joined #openstack-infra | 22:58 | |
*** pmathews has joined #openstack-infra | 23:00 | |
*** e0ne has quit IRC | 23:02 | |
*** CaptTofu has joined #openstack-infra | 23:06 | |
*** HenryG has joined #openstack-infra | 23:07 | |
*** jcooley_ has quit IRC | 23:08 | |
*** morganfainberg is now known as morganfainberg_Z | 23:09 | |
*** jcooley_ has joined #openstack-infra | 23:09 | |
*** CaptTofu has quit IRC | 23:11 | |
*** jcooley_ has quit IRC | 23:13 | |
*** jcooley_ has joined #openstack-infra | 23:18 | |
*** jamielennox|away is now known as jamielennox | 23:18 | |
*** thuc has joined #openstack-infra | 23:22 | |
*** jcooley_ has quit IRC | 23:25 | |
*** jcooley_ has joined #openstack-infra | 23:26 | |
*** pmathews has quit IRC | 23:26 | |
*** dolphm is now known as dolphm_503 | 23:27 | |
*** zns has joined #openstack-infra | 23:29 | |
*** david-lyle has quit IRC | 23:29 | |
*** thuc has quit IRC | 23:30 | |
*** jcooley_ has quit IRC | 23:31 | |
*** david-lyle has joined #openstack-infra | 23:34 | |
*** banix has joined #openstack-infra | 23:44 | |
*** oubiwan__ has quit IRC | 23:57 | |
*** oubiwan__ has joined #openstack-infra | 23:58 | |
*** e0ne has joined #openstack-infra | 23:59 |