sdague | clarkb: it's modified more than you think | 00:03 |
sdague | es has very complicated query structure | 00:03 |
*** sarob_ has quit IRC | 00:04 | |
*** sarob has joined #openstack-infra | 00:05 | |
*** sarob has quit IRC | 00:09 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/jeepyb: Create scratch git repos https://review.openstack.org/65400 | 00:10 |
fungi | jeblair: there's a first stab ^ | 00:10 |
jeblair | fungi: awesome; i've written the failing test for zuul replication; the code shouldn't be too much longer, though it's getting to be breakfast/transit time here | 00:11 |
*** esker has joined #openstack-infra | 00:13 | |
jeblair | fungi: that looks great; is it worth not having a default, and if it isn't set, not creating them? | 00:13 |
fungi | jeblair: fair enough, we just need to make sure to pass the environment variable in puppet | 00:14 |
fungi | i'll do that | 00:14 |
*** kraman has quit IRC | 00:14 | |
jeblair | fungi: ok. i think we should call it '/zuul' to be clear about what the contents of the repos are | 00:15 |
fungi | jeblair: agreed. i just wanted to go with something generic about the code and then we can override it to whatever name is most effective for our use case | 00:16 |
*** pballand has quit IRC | 00:16 | |
jeblair | ++ | 00:16 |
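[editor's note] jeepyb is Python, so the off-by-default behavior fungi and jeblair settle on above can be sketched roughly as follows. This is not the actual jeepyb code; the variable name `SCRATCH_SUBPATH` and the helper are hypothetical, chosen only to illustrate "no default, and if it isn't set, don't create them":

```python
# Sketch (not actual jeepyb code) of the off-by-default scratch-repo
# creation discussed above: read an optional environment variable and
# skip creation entirely when it is unset. SCRATCH_SUBPATH and
# maybe_create_scratch_repos() are hypothetical names.
import os


def maybe_create_scratch_repos(projects):
    subpath = os.environ.get('SCRATCH_SUBPATH')  # deliberately no default
    if not subpath:
        return []  # variable unset: create nothing
    # Puppet would pass e.g. SCRATCH_SUBPATH=zuul to get /var/lib/git/zuul/...
    return [os.path.join('/var/lib/git', subpath, p) for p in projects]
```

As discussed above, the code stays generic and the deployment (Puppet, here) chooses the actual name, e.g. `zuul`.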
*** pmathews has quit IRC | 00:17 | |
*** pmathews1 has joined #openstack-infra | 00:17 | |
*** eharney has quit IRC | 00:19 | |
*** fifieldt has joined #openstack-infra | 00:20 | |
*** ruhe is now known as _ruhe | 00:21 | |
clarkb | sdague: mriedm gotcha | 00:23 |
*** banix has quit IRC | 00:23 | |
clarkb | zaro the problem is everyone has access | 00:23 |
clarkb | zaro that is not a good thing as melody lets you do stuff | 00:24 |
fungi | clarkb: everyone, not just admins? | 00:25 |
clarkb | fungi it let me in without logging in | 00:25 |
*** yamahata has joined #openstack-infra | 00:26 | |
clarkb | unless I was ninja logged in | 00:26 |
fungi | oh, ew | 00:26 |
*** mriedem has joined #openstack-infra | 00:27 | |
*** yidclare1 has quit IRC | 00:27 | |
*** kraman has joined #openstack-infra | 00:29 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/jeepyb: Create scratch git repos https://review.openstack.org/65400 | 00:30 |
*** mriedem has quit IRC | 00:30 | |
fungi | jeblair: there's the off-by-default version ^ | 00:31 |
*** mriedem has joined #openstack-infra | 00:31 | |
*** wenlock has quit IRC | 00:31 | |
fungi | oh, wait, bug :( | 00:32 |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/jeepyb: Create scratch git repos https://review.openstack.org/65400 | 00:34 |
fungi | better ^ | 00:34 |
zaro | clarkb: yikes! | 00:34 |
jeblair | fungi: cool, transit time. biab | 00:35 |
*** SergeyLukjanov is now known as _SergeyLukjanov | 00:37 | |
openstackgerrit | Davanum Srinivas (dims) proposed a change to openstack-infra/devstack-gate: Gather horizon/apache2 logs https://review.openstack.org/64490 | 00:37 |
fungi | just realized i could do that way more efficiently. conditional in a loop is silly | 00:38 |
*** yamahata has quit IRC | 00:40 | |
*** _cjones_ has joined #openstack-infra | 00:41 | |
*** pmathews1 has quit IRC | 00:46 | |
*** hunner1 is now known as Hunner | 00:49 | |
*** pelix has joined #openstack-infra | 00:52 | |
*** nosnos has joined #openstack-infra | 00:53 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/jeepyb: Create scratch git repos https://review.openstack.org/65400 | 00:58 |
*** dcramer_ has joined #openstack-infra | 00:59 | |
*** UtahDave has quit IRC | 01:00 | |
*** devanand1 is now known as devananda | 01:02 | |
*** hogepodge has quit IRC | 01:02 | |
*** slong has joined #openstack-infra | 01:04 | |
*** herndon has quit IRC | 01:11 | |
*** yaguang has joined #openstack-infra | 01:12 | |
*** dkliban is now known as dkliban_afk | 01:14 | |
*** mrodden1 has quit IRC | 01:22 | |
*** ryanpetrello has joined #openstack-infra | 01:24 | |
*** senk has quit IRC | 01:24 | |
*** melwitt has quit IRC | 01:26 | |
*** praneshp has quit IRC | 01:26 | |
*** marun has quit IRC | 01:27 | |
*** yidclare has joined #openstack-infra | 01:27 | |
*** resker has joined #openstack-infra | 01:31 | |
*** yidclare has quit IRC | 01:32 | |
*** esker has quit IRC | 01:33 | |
*** CaptTofu has quit IRC | 01:34 | |
*** CaptTofu has joined #openstack-infra | 01:34 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Pass a zuul scratch subpath to create-cgitrepos https://review.openstack.org/65403 | 01:35 |
*** CaptTofu has quit IRC | 01:35 | |
fungi | jeblair: and that's ^ the config change to make use of it. i'd roll in credentials for zuul to push with, but not sure how you're engineering that to work so i'll hold off a bit | 01:35 |
*** CaptTofu has joined #openstack-infra | 01:35 | |
*** CaptTofu has quit IRC | 01:39 | |
pelix | Like some input on https://review.openstack.org/#/c/63579/ - current regex results in inconsistent behaviour once you have a certain number of levels of xml tags | 01:42 |
pelix | PyXML fixes this by producing consistent XML on python 2.6 but is not maintained (i.e. unlikely to work with python 3), and the 're' module doesn't support recursive regexes, which is probably what's required to apply a regex that doesn't screw up. | 01:42 |
*** yamahata has joined #openstack-infra | 01:42 | |
*** nosnos has quit IRC | 01:43 | |
*** nosnos has joined #openstack-infra | 01:44 | |
*** pcrews has quit IRC | 01:44 | |
pelix | That leaves the 'regex' project on pypi, which supports recursive regexes (likely required to not have the regex screw up on different levels of tags) and is supposedly intended to replace python's re, or writing a prettyprint function that works with elementtree and produces consistent output across multiple versions of python. | 01:45 |
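[editor's note] The elementtree-based prettyprint option pelix mentions is commonly done with a small recursive indent helper; a sketch (not code from the review under discussion) follows. It is deterministic across python versions, which is the property being sought here, since it never re-parses serialized output with regexes:

```python
# Sketch of the elementtree prettyprint option mentioned above: a small
# recursive indent helper gives consistent output across python
# versions, avoiding both PyXML and fragile regexes.
import xml.etree.ElementTree as ET


def indent(elem, level=0):
    pad = '\n' + level * '  '
    if len(elem):
        if not elem.text or not elem.text.strip():
            elem.text = pad + '  '
        for child in elem:
            indent(child, level + 1)
            child.tail = pad + '  '
        elem[-1].tail = pad  # last child closes back at the parent level
    if level and (not elem.tail or not elem.tail.strip()):
        elem.tail = pad


root = ET.fromstring('<project><builders><shell/></builders></project>')
indent(root)
pretty = ET.tostring(root).decode()
print(pretty)
```

Each nesting level gets two spaces of indentation regardless of depth, which is exactly the behavior the regex approach failed to guarantee past a certain number of levels.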
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Allow zuul to push to git servers https://review.openstack.org/65405 | 01:45 |
fungi | jeblair: a guess ^ at the git farm end of granting zuul access | 01:45 |
*** ^d has joined #openstack-infra | 01:46 | |
openstackgerrit | A change was merged to openstack-dev/hacking: Add H904: don't wrap lines using a backslash https://review.openstack.org/64584 | 01:47 |
*** weshay_afk has quit IRC | 01:48 | |
*** senk has joined #openstack-infra | 01:49 | |
openstackgerrit | Mathieu Gagné proposed a change to openstack-infra/config: Remove unit tests against Puppet 3.0.x https://review.openstack.org/65406 | 01:54 |
*** senk has quit IRC | 01:59 | |
*** senk has joined #openstack-infra | 02:00 | |
*** marco-chirico has joined #openstack-infra | 02:01 | |
*** oubiwann has joined #openstack-infra | 02:01 | |
*** reed has quit IRC | 02:15 | |
*** kraman has quit IRC | 02:18 | |
*** oubiwann has quit IRC | 02:19 | |
*** nati_ueno has quit IRC | 02:23 | |
*** ^d has quit IRC | 02:26 | |
*** tianst20 has quit IRC | 02:29 | |
*** resker has quit IRC | 02:32 | |
*** markmcclain has quit IRC | 02:33 | |
*** pcrews has joined #openstack-infra | 02:41 | |
*** tian has joined #openstack-infra | 02:41 | |
*** loq_mac has joined #openstack-infra | 02:42 | |
*** nosnos has quit IRC | 02:43 | |
*** coolsvap has quit IRC | 02:43 | |
*** ryanpetrello has quit IRC | 02:44 | |
*** changbl has joined #openstack-infra | 02:44 | |
*** fallenpegasus has joined #openstack-infra | 02:46 | |
*** talluri_ has joined #openstack-infra | 02:48 | |
*** kraman has joined #openstack-infra | 02:49 | |
*** marco-chirico has left #openstack-infra | 02:51 | |
*** kraman has quit IRC | 02:54 | |
*** dhellmann has quit IRC | 02:55 | |
*** fallenpegasus has quit IRC | 02:56 | |
*** oubiwann has joined #openstack-infra | 02:57 | |
*** fallenpegasus has joined #openstack-infra | 02:59 | |
*** fallenpegasus has quit IRC | 03:06 | |
*** talluri_ has quit IRC | 03:08 | |
*** dcramer_ has quit IRC | 03:09 | |
openstackgerrit | Eli Klein proposed a change to openstack-infra/jenkins-job-builder: Added rbenv-env wrapper https://review.openstack.org/65352 | 03:16 |
*** fallenpegasus has joined #openstack-infra | 03:20 | |
*** loq_mac has quit IRC | 03:21 | |
*** pcrews has quit IRC | 03:24 | |
*** fallenpegasus has quit IRC | 03:34 | |
*** loq_mac has joined #openstack-infra | 03:38 | |
SpamapS | wooooot | 03:40 |
SpamapS | netaddr 0.7.10 is on pypi | 03:40 |
clarkb | nice | 03:44 |
*** fungi has quit IRC | 03:44 | |
*** nicedice has quit IRC | 03:45 | |
openstackgerrit | Eli Klein proposed a change to openstack-infra/jenkins-job-builder: Add local-branch option https://review.openstack.org/65369 | 03:47 |
openstackgerrit | James E. Blair proposed a change to openstack-infra/zuul: Add Zuul ref replication https://review.openstack.org/65410 | 03:48 |
jeblair | fungi: (is gone :( ) ^ | 03:48 |
*** vipul is now known as vipul-away | 03:48 | |
*** nicedice has joined #openstack-infra | 03:48 | |
*** kraman has joined #openstack-infra | 03:50 | |
*** fallenpegasus has joined #openstack-infra | 03:50 | |
*** marun has joined #openstack-infra | 03:51 | |
*** harlowja is now known as harlowja_away | 03:52 | |
*** vipul-away is now known as vipul | 03:52 | |
*** fungi has joined #openstack-infra | 03:52 | |
*** kraman has quit IRC | 03:54 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Allow zuul to push to git servers https://review.openstack.org/65405 | 03:56 |
*** coolsvap has joined #openstack-infra | 04:00 | |
*** reed has joined #openstack-infra | 04:01 | |
*** praneshp has joined #openstack-infra | 04:07 | |
*** praneshp_ has joined #openstack-infra | 04:08 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Have Zuul replicate to git.o.o https://review.openstack.org/65412 | 04:08 |
*** praneshp has quit IRC | 04:12 | |
*** praneshp_ is now known as praneshp | 04:12 | |
*** loq_mac has quit IRC | 04:13 | |
*** AaronGr is now known as AaronGr_Zzz | 04:15 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/zuul: Add Zuul ref replication https://review.openstack.org/65410 | 04:15 |
fungi | reviewing those | 04:17 |
jeblair | fungi: cool; note i updated your ssh key change | 04:18 |
openstackgerrit | A change was merged to openstack-infra/jenkins-job-builder: Implements: Archive publisher allow-empty setting. https://review.openstack.org/62806 | 04:19 |
jeblair | fungi: (which makes me think we should probably give zuul a new key, but maybe later) | 04:19 |
fungi | jeblair: on 65405, do you think that's safe enough, or should we set separate ownership over /var/lib/git/zuul and use a separate push account from the one gerrit uses so that we don't accidentally destroy one set of repos or the other via misconfiguration? | 04:19 |
jeblair | fungi: that would make me happy. | 04:19 |
*** fallenpegasus has quit IRC | 04:19 | |
jeblair | fungi: 2 accounts. not destroying them. :) | 04:19 |
fungi | seems a marginal risk, so i wasn't overly worried, but can certainly separate them | 04:19 |
*** fallenpegasus has joined #openstack-infra | 04:20 | |
clarkb | fungi jeblair's net blew up, we are headed to lunch | 04:20 |
clarkb | sorry | 04:20 |
fungi | clarkb: no worries. my net keeps blowing up too | 04:21 |
openstackgerrit | Noorul Islam K M proposed a change to openstack-infra/config: Remove oslo.sphinx from test-requirements.txt https://review.openstack.org/65414 | 04:25 |
*** pballand has joined #openstack-infra | 04:26 | |
*** praneshp has quit IRC | 04:27 | |
*** pcrews has joined #openstack-infra | 04:30 | |
*** senk has quit IRC | 04:31 | |
*** fallenpegasus has quit IRC | 04:35 | |
*** fallenpegasus has joined #openstack-infra | 04:36 | |
*** Ryan_Lane has joined #openstack-infra | 04:37 | |
*** fallenpegasus has quit IRC | 04:40 | |
*** rhsu has joined #openstack-infra | 04:41 | |
*** marun has quit IRC | 04:42 | |
*** mriedem has quit IRC | 04:45 | |
*** esker has joined #openstack-infra | 04:48 | |
*** kraman has joined #openstack-infra | 04:50 | |
*** rcarrillocruz has quit IRC | 04:52 | |
*** ryanpetrello has joined #openstack-infra | 04:54 | |
*** tian has quit IRC | 04:54 | |
*** ryanpetrello has quit IRC | 04:55 | |
*** senk has joined #openstack-infra | 04:56 | |
*** kraman has quit IRC | 05:00 | |
*** chandankumar has joined #openstack-infra | 05:00 | |
*** dhellmann has joined #openstack-infra | 05:04 | |
*** talluri has joined #openstack-infra | 05:07 | |
*** talluri has quit IRC | 05:08 | |
*** kraman has joined #openstack-infra | 05:14 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Allow zuul to push to git servers https://review.openstack.org/65405 | 05:20 |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Have Zuul replicate to git.o.o https://review.openstack.org/65412 | 05:20 |
jeblair | fungi: back | 05:22 |
fungi | jeblair: working on the ownership tweak for the jeepyb change now, and then they're probably ready | 05:22 |
clarkb | jeblair: I left a comment on one of the config changes | 05:30 |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/jeepyb: Create scratch git repos https://review.openstack.org/65400 | 05:30 |
jeblair | clarkb: let's change zuul's key later, but that's the situation we have now | 05:30 |
*** dhellmann has quit IRC | 05:31 | |
jeblair | clarkb: (jenkins pushing our tarballs and merges changes, so it's not a huge security profile change) | 05:31 |
jeblair | s/pushing/pushes/ | 05:31 |
clarkb | ok will change my vote in a minute, reviewing the jeepyb change now | 05:32 |
jeblair | clarkb: there's a cron that changes its name in there; won't puppet make a new cron entry without an 'ensure=>absent'? | 05:32 |
jeblair | clarkb, fungi: assuming so, i think we should just leave the original cron name | 05:32 |
clarkb | jeblair: oh right it will | 05:33 |
*** loq_mac has joined #openstack-infra | 05:33 | |
clarkb | the old one will stick around | 05:33 |
jeblair | content => $git_gerrit_ssh_key, | 05:33 |
*** dcramer_ has joined #openstack-infra | 05:33 | |
jeblair | quotes were added around that, should we go back to that version? | 05:34 |
jeblair | (the quotes were added for the ps that had 2 keys) | 05:34 |
*** oubiwann has quit IRC | 05:35 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Allow zuul to push to git servers https://review.openstack.org/65405 | 05:36 |
jeblair | fungi, clarkb: i made those changes ^ | 05:36 |
fungi | jeblair: makes sense on the cron job. i just spotted the same quoting issue as well as a couple other bugs on the 65405 change i'm marking but my internet access issues are maddening | 05:36 |
fungi | gah | 05:36 |
clarkb | jeblair: were you going t ofix the cron job? | 05:37 |
fungi | oh well, i left my comments on the previous patchset | 05:38 |
jeblair | clarkb: didn't it? | 05:39 |
jeblair | clarkb: didn't i? | 05:40 |
clarkb | jeblair: oh you did, I was looking at the wrong diff | 05:40 |
clarkb | fungi: I don't understand the comment that says this should be /var/lib/git | 05:40 |
clarkb | fungi: my eyes tell me the two strings match | 05:41 |
fungi | did i typo it twice? | 05:41 |
fungi | should be /var/lib/git/zuul | 05:41 |
fungi | i should leave this to more awake people with less broken internets | 05:41 |
clarkb | fungi: :) np | 05:41 |
*** dhellmann has joined #openstack-infra | 05:42 | |
jeblair | fungi: are you going to address those comments or should i? | 05:42 |
fungi | jeblair: i can fix it | 05:43 |
*** tian has joined #openstack-infra | 05:43 | |
*** banix has joined #openstack-infra | 05:44 | |
*** nicedice has quit IRC | 05:45 | |
*** nicedice has joined #openstack-infra | 05:46 | |
*** yaguang has quit IRC | 05:47 | |
*** mlipchuk has joined #openstack-infra | 05:48 | |
*** fallenpegasus has joined #openstack-infra | 05:51 | |
*** fallenpegasus has quit IRC | 05:52 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Allow zuul to push to git servers https://review.openstack.org/65405 | 05:52 |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Have Zuul replicate to git.o.o https://review.openstack.org/65412 | 05:53 |
*** fallenpegasus has joined #openstack-infra | 05:53 | |
*** reed has quit IRC | 05:53 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/zuul: Add Zuul ref replication https://review.openstack.org/65410 | 05:56 |
jeblair | fungi, clarkb: i approved the changes to create the repos | 05:58 |
fungi | cool | 05:59 |
*** pelix has left #openstack-infra | 06:00 | |
*** banix has quit IRC | 06:00 | |
*** fallenpegasus has quit IRC | 06:01 | |
fungi | also, not sure if the aussie contingent caught this morning's fun in scrollback (overnight for you), but we accidentally upgraded all the precise slaves to libvirt 1.1.1 for a little while... details in bug 1266711 | 06:02 |
clarkb | jeblair: can you look at the inline comment on ps2 of the zuul change? | 06:02 |
jeblair | fungi: thx | 06:02 |
clarkb | fungi: I did catch that :/ | 06:03 |
fungi | for future reference, removing an apt repo object in puppet does not remove the source.list snippet it installs | 06:04 |
fungi | er, sources.list | 06:04 |
jeblair | aaaaah | 06:07 |
*** morganfainberg has quit IRC | 06:07 | |
*** loq_mac has quit IRC | 06:07 | |
*** fallenpegasus has joined #openstack-infra | 06:08 | |
fungi | that and meetings sucked up a good chunk of my day. sat in on the defcore call, which kept getting into the weeds. i still don't think they're anywhere near being ready for designing the automated scorecard reporting infrastructure | 06:10 |
openstackgerrit | A change was merged to openstack-infra/jeepyb: Create scratch git repos https://review.openstack.org/65400 | 06:11 |
fungi | though i'm starting to wonder whether the tripleo baremetal test cloud should be the exemplar for refstack | 06:11 |
jeblair | lifeless: ^ | 06:12 |
fungi | rather than waiting for someone to make a separate refstack we can baseline | 06:12 |
fungi | seems like something we should push for, to help conserve/combine effort | 06:14 |
*** loq_mac has joined #openstack-infra | 06:14 | |
clarkb | ++ | 06:15 |
clarkb | especially since it is community run | 06:15 |
fungi | i'm watching git01... it's already got that jeepyb update now | 06:15 |
openstackgerrit | A change was merged to openstack-infra/config: Pass a zuul scratch subpath to create-cgitrepos https://review.openstack.org/65403 | 06:15 |
fungi | so once it gets that ^ puppet change, i should see tons of empty repos for zuul appear thereafter | 06:16 |
*** rhsu has quit IRC | 06:16 | |
*** rhsu has joined #openstack-infra | 06:16 | |
*** yaguang has joined #openstack-infra | 06:17 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Allow zuul to push to git servers https://review.openstack.org/65405 | 06:18 |
*** markmc has joined #openstack-infra | 06:19 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Have Zuul replicate to git.o.o https://review.openstack.org/65412 | 06:19 |
jeblair | syntax fix | 06:19 |
*** changbl has quit IRC | 06:20 | |
fungi | oh, right, those repos aren't going to get created unless the cgit list file gets changed (since the exec is subscribed to changes on that file) | 06:20 |
*** rhsu has quit IRC | 06:20 | |
* fungi fakes it out | 06:20 | |
jeblair | fungi: sounds god | 06:22 |
jeblair | good | 06:22 |
jeblair | so when it's time to upgrade zuul, i'm going to save the queue, shut it down, remove all the git repos[1], start it, replace the queue | 06:23 |
*** changbl has joined #openstack-infra | 06:23 | |
jeblair | [1] is just to delete current zuul refs to speed things up a bit | 06:24 |
jeblair | well, we could probably skip that; i don't think they are having a big impact. | 06:25 |
jeblair | consider that stricken from the plan | 06:25 |
*** rhsu has joined #openstack-infra | 06:27 | |
*** senk has quit IRC | 06:27 | |
clarkb | so a funny thing has happened since the new year | 06:29 |
clarkb | the logs/syslog.txt files are being indexed as from 2013 | 06:29 |
clarkb | it is very odd and I haven't sorted out why yet | 06:30 |
jeblair | that is interesting | 06:30 |
clarkb | can you say y2k | 06:30 |
dstufft | fungi: did you see https://github.com/drkjam/netaddr/issues/57#issuecomment-31796111 ? | 06:31 |
*** rhsu has quit IRC | 06:31 | |
*** rhsu1 has joined #openstack-infra | 06:31 | |
*** fallenpegasus has quit IRC | 06:32 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/jeepyb: Correct variables masquerading as strings https://review.openstack.org/65420 | 06:33 |
fungi | adding an extra blank line to /home/cgit/projects.yaml was enough to trigger the exec, but it can haz bug ^ | 06:34 |
fungi | dstufft: yeah, SpamapS mentioned it a little while ago | 06:34 |
lifeless | jeblair: hi, 'sup? | 06:35 |
jeblair | lifeless: fungi had a suggestion about refstack and tripleo; not urgent | 06:35 |
fungi | lifeless: was mainly wondering whether anyone had previously discussed the possibility of refstack becoming the reference implementation for refstack, since nobody seems to be putting time into a separate refstack | 06:36 |
clarkb | I have a hunch that because syslog doesn't include a year the year is coming from when the process started possibly? | 06:36 |
fungi | er, of the tripleo baremetal test cloud | 06:36 |
clarkb | *the logstash indexer process | 06:36 |
fungi | becoming | 06:36 |
clarkb | jeblair: we are currently only using 9-16? | 06:37 |
lifeless | fungi: 'refstack becoming the reference implementation for refstack' ? | 06:37 |
clarkb | jeblair: I am tempted to simply restart processes to see if things get indexed in the proper index | 06:37 |
jeblair | clarkb: we're using 1-X where X is 4 or 8 i think. | 06:37 |
clarkb | jeblair: thanks | 06:38 |
*** rhsu1 has quit IRC | 06:38 | |
fungi | lifeless: tripleo baremetal test cloud becoming the reference implementation in place of refstack, i should have said | 06:38 |
fungi | been going about 18 hours with no break again | 06:38 |
fungi | fingers getting ahead of brain | 06:39 |
lifeless | fungi: ah. Go get sleep ;) | 06:39 |
fungi | soon | 06:39 |
*** Ryan_Lane has quit IRC | 06:39 | |
lifeless | fungi: I've certainly been thinking of tripleo's deployment as a reference implementation :) | 06:40 |
*** Ryan_Lane has joined #openstack-infra | 06:40 | |
*** pblaho has joined #openstack-infra | 06:40 | |
fungi | the openstack provider compat scorecard idea needs a reference implementation against which we can run baseline test sets. that cloud has the benefit that it's not really a public cloud vendor and so can be neutral ground | 06:41 |
dstufft | fungi: ah cool, I just woke up from a nap so didin't see :) | 06:41 |
*** praneshp has joined #openstack-infra | 06:42 | |
*** loq_mac has quit IRC | 06:42 | |
openstackgerrit | Noorul Islam K M proposed a change to openstack-infra/config: Allow normal users to create file in solum source tree https://review.openstack.org/65421 | 06:43 |
clarkb | jeblair: fungi: yup restarting the indexer processes fixed the problem | 06:45 |
clarkb | that is ridiculously brain dead, will need to debug further to come up with a proper fix | 06:45 |
lifeless | fungi: not the baremetal cloud though, a virt cloud deployed on it. | 06:45 |
clarkb | I could just write a once yearly cron for midnight january 1st to restart them all >_> | 06:46 |
*** fallenpegasus has joined #openstack-infra | 06:46 | |
*** zigo_ is now known as zigo | 06:46 | |
fungi | lifeless: ahh, yeah that would make sense | 06:48 |
fungi | clarkb: now that *is* a y2k-esque solution ;) | 06:49 |
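[editor's note] clarkb's hunch above is plausible because syslog timestamps carry no year, so whatever parses them must supply one. This illustration (not the actual logstash indexer code) shows the failure mode in Python: `strptime` with no year field defaults to 1900, and a long-running process that caches the year once at startup stamps events with a stale year after midnight on January 1st:

```python
# Illustration (not the actual logstash code) of the new-year indexing
# bug discussed above: syslog lines have no year, so the parser must
# inject one, and a cached year goes stale after January 1st.
import datetime

line_ts = 'Jan  9 06:29:00'  # typical syslog timestamp, no year

# strptime with no %Y in the format defaults the year to 1900
parsed = datetime.datetime.strptime(line_ts, '%b %d %H:%M:%S')

# Pretend the indexer started in 2013 and cached the year then:
year_at_startup = 2013
event = parsed.replace(year=year_at_startup)
# Events arriving in 2014 now land in the 2013 index until a restart
# re-reads the current year, matching the behavior clarkb observed.
```

Restarting the process works because the cached year is re-read at startup, which is why the restart in the log fixed indexing, and why the half-joking "once yearly cron" would too.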
fungi | i manually tested 65420 on git01 and it works, btw. but went ahead and removed the directory tree again since it won't get created with the right ownership until 65405 merges | 06:58 |
fungi | once that's in place, making a trivial modification to /home/cgit/projects.yaml should cause the empty zuul repos to all get populated | 06:59 |
fungi | and with that, it's 2am in my neighborhood, so i'm going to grab a quick nap | 07:00 |
jeblair | fungi: good night, thanks! | 07:00 |
*** morganfainberg has joined #openstack-infra | 07:02 | |
*** fallenpegasus has quit IRC | 07:03 | |
*** SergeyLukjanov has joined #openstack-infra | 07:05 | |
*** yolanda has joined #openstack-infra | 07:11 | |
*** jamielennox is now known as jamielennox|away | 07:12 | |
openstackgerrit | Noorul Islam K M proposed a change to openstack-infra/config: Run solum tests using sudo https://review.openstack.org/65421 | 07:14 |
*** jcoufal has joined #openstack-infra | 07:20 | |
openstackgerrit | Noorul Islam K M proposed a change to openstack-infra/config: Run solum tests using sudo https://review.openstack.org/65421 | 07:20 |
openstackgerrit | Noorul Islam K M proposed a change to openstack-infra/config: Remove oslo.sphinx from test-requirements.txt https://review.openstack.org/65414 | 07:20 |
openstackgerrit | Noorul Islam K M proposed a change to openstack-infra/config: Remove oslo.sphinx from test-requirements.txt https://review.openstack.org/65414 | 07:24 |
openstackgerrit | Noorul Islam K M proposed a change to openstack-infra/config: Run solum tests using sudo https://review.openstack.org/65421 | 07:25 |
*** rcarrillocruz has joined #openstack-infra | 07:30 | |
*** fallenpegasus has joined #openstack-infra | 07:31 | |
*** loq_mac has joined #openstack-infra | 07:31 | |
AJaeger | clarkb, jeblair : Do we have any Jenkins problems right now? I just had two strange doc failures | 07:31 |
jeblair | AJaeger: i don't think there's anything systemic that would cause failures | 07:34 |
*** ken1ohmichi has joined #openstack-infra | 07:35 | |
AJaeger | jeblair, any idea what causes these two fails: https://review.openstack.org/#/c/65425/ and https://review.openstack.org/#/c/64872/ | 07:35 |
AJaeger | "This change was unable to be automatically merged with the current state of the repository. Please rebase your change and upload a new patchset." | 07:35 |
AJaeger | But the changes are at HEAD - properly rebased. | 07:35 |
*** loq_mac has quit IRC | 07:36 | |
jeblair | AJaeger: i think there was a transient problem with zuul | 07:39 |
jeblair | AJaeger: there was a dns resolution error | 07:39 |
AJaeger | this just happened 10 mins ago - so is it safe to recheck? | 07:40 |
*** talluri has joined #openstack-infra | 07:41 | |
jeblair | AJaeger: actually it's not transient... | 07:42 |
jeblair | AJaeger: i'll fix it in a few mins | 07:42 |
AJaeger | jeblair, thanks! | 07:42 |
openstackgerrit | A change was merged to openstack-infra/zuul: Make smtp tests more robust https://review.openstack.org/64311 | 07:44 |
openstackgerrit | A change was merged to openstack-infra/zuul: Add Zuul ref replication https://review.openstack.org/65410 | 07:45 |
*** praneshp has quit IRC | 07:45 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Have Zuul replicate to git.o.o https://review.openstack.org/65412 | 07:52 |
openstackgerrit | A change was merged to openstack-infra/config: Have Zuul replicate to git.o.o https://review.openstack.org/65412 | 07:52 |
*** kraman has quit IRC | 08:01 | |
*** fallenpegasus has quit IRC | 08:01 | |
*** fallenpegasus has joined #openstack-infra | 08:02 | |
*** fallenpegasus has quit IRC | 08:06 | |
*** Ryan_Lane has quit IRC | 08:11 | |
*** Ryan_Lane has joined #openstack-infra | 08:12 | |
*** coolsvap has quit IRC | 08:12 | |
*** coolsvap has joined #openstack-infra | 08:12 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Fix typo in puppet https://review.openstack.org/65430 | 08:12 |
openstackgerrit | A change was merged to openstack-infra/config: Fix typo in puppet https://review.openstack.org/65430 | 08:13 |
jeblair | clarkb: /home/cgit/projects.yaml | 08:15 |
openstackgerrit | A change was merged to openstack-infra/jeepyb: Correct variables masquerading as strings https://review.openstack.org/65420 | 08:22 |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Fix typo in zuul config template https://review.openstack.org/65431 | 08:26 |
*** fallenpegasus has joined #openstack-infra | 08:26 | |
openstackgerrit | A change was merged to openstack-infra/config: Fix typo in zuul config template https://review.openstack.org/65431 | 08:27 |
*** coolsvap has quit IRC | 08:28 | |
*** coolsvap has joined #openstack-infra | 08:30 | |
*** loq_mac has joined #openstack-infra | 08:30 | |
*** fallenpegasus has quit IRC | 08:31 | |
*** pcrews has quit IRC | 08:31 | |
*** fallenpegasus has joined #openstack-infra | 08:31 | |
*** pcrews has joined #openstack-infra | 08:31 | |
*** kraman has joined #openstack-infra | 08:32 | |
*** fallenpegasus has quit IRC | 08:32 | |
*** fallenpegasus has joined #openstack-infra | 08:33 | |
*** kraman has quit IRC | 08:36 | |
*** Ryan_Lane has quit IRC | 08:37 | |
*** ken1ohmichi has quit IRC | 08:37 | |
*** flaper87|afk is now known as flaper87 | 08:40 | |
*** loq_mac has quit IRC | 08:40 | |
*** mancdaz_away is now known as mancdaz | 08:41 | |
clarkb | jeblair: zuul:/home/clarkb/changes.txt | 08:42 |
*** hashar has joined #openstack-infra | 08:42 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Allow zuul to push to git servers https://review.openstack.org/65405 | 08:43 |
openstackgerrit | A change was merged to openstack-infra/config: Allow zuul to push to git servers https://review.openstack.org/65405 | 08:44 |
*** SergeyLukjanov is now known as _SergeyLukjanov | 08:44 | |
*** _SergeyLukjanov has quit IRC | 08:44 | |
clarkb | jeblair: /home/clarkb/change_projects.txt tab delimited | 08:48 |
*** kraman has joined #openstack-infra | 08:50 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Fix passing zuul public key to git backends https://review.openstack.org/65432 | 08:51 |
openstackgerrit | A change was merged to openstack-infra/config: Fix passing zuul public key to git backends https://review.openstack.org/65432 | 08:51 |
*** kraman has quit IRC | 08:54 | |
*** fallenpegasus2 has joined #openstack-infra | 08:56 | |
*** fallenpegasus has quit IRC | 08:56 | |
*** SergeyLukjanov has joined #openstack-infra | 08:59 | |
*** fallenpegasus2 has quit IRC | 09:00 | |
*** noorul has joined #openstack-infra | 09:02 | |
noorul | https://review.openstack.org/65414 | 09:02 |
noorul | I am not sure why it is not getting queued up in zuul | 09:02 |
*** jpich has joined #openstack-infra | 09:03 | |
AJaeger | noorul, Zuul was down, might not have seen the recheck... | 09:04 |
clarkb | jeblair: I don't grok why that is happening. the other dirs are correct | 09:04 |
clarkb | jeblair: and they are created by the same script correct? | 09:04 |
AJaeger | noorul, it's in the list now | 09:04 |
*** yolanda has quit IRC | 09:04 | |
*** yolanda has joined #openstack-infra | 09:05 | |
*** jroovers has joined #openstack-infra | 09:06 | |
AJaeger | clarkb, jeblair: I would appreciate some review for this one, please https://review.openstack.org/#/c/65391/ | 09:06 |
AJaeger | jeblair, thanks for fixing Zuul! | 09:06 |
clarkb | jeblair: I see what is going on now | 09:08 |
jeblair | clarkb: enqueing open changes now | 09:09 |
jeblair | clarkb: zuul refs are being pushed to git farm | 09:09 |
jeblair | clarkb: the first changes for each are going to be slow because they'll be pushing most of the project history | 09:10 |
*** derekh has joined #openstack-infra | 09:11 | |
clarkb | jeblair: roger | 09:12 |
clarkb | jeblair: I really do not know why the env vars did not work here | 09:12 |
clarkb | jeblair: unless the mapping is wrong | 09:12 |
noorul | AJaeger: Thank you! | 09:12 |
noorul | AJaeger: Did you do anything specific? | 09:15 |
AJaeger | noorul, no. Looking at your patch: I suggest to rmdir as well | 09:17 |
*** CaptTofu has joined #openstack-infra | 09:17 | |
noorul | AJaeger: Because my recheck has no effect, see https://review.openstack.org/#/c/65421/ | 09:18 |
noorul | AJaeger: Are you suggesting to remove temp directory? | 09:18 |
*** CaptTofu has quit IRC | 09:19 | |
AJaeger | noorul, patience: http://status.openstack.org/zuul/ | 09:19 |
*** CaptTofu has joined #openstack-infra | 09:19 | |
AJaeger | "Queue lengths: 155 events, 4 results. " - so there are lots of events that I do not see... | 09:19 |
AJaeger | noorul, yes I do suggest - let me add to the review | 09:19 |
noorul | AJaeger: Usually during IST, zuul responds quickly | 09:20 |
noorul | AJaeger: So I thought something was wrong | 09:20 |
AJaeger | noorul, as I said before: Zuul was down for an hour and is catching up. | 09:20 |
AJaeger | noorul, so, something was wrong but clarkb and jeblair have been fixing it | 09:22 |
*** CaptTofu has quit IRC | 09:24 | |
*** johnthetubaguy has joined #openstack-infra | 09:24 | |
*** jooools has joined #openstack-infra | 09:26 | |
*** johnthetubaguy1 has joined #openstack-infra | 09:26 | |
*** johnthetubaguy has quit IRC | 09:27 | |
*** tian has quit IRC | 09:30 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/devstack-gate: Fetch zuul refs from git.o.o https://review.openstack.org/65438 | 09:33 |
*** yamahata has quit IRC | 09:34 | |
*** homeless has quit IRC | 09:38 | |
*** michchap has quit IRC | 09:39 | |
*** michchap_ has joined #openstack-infra | 09:39 | |
* AJaeger gave the wrong URL when asking for a review - this is the right one https://review.openstack.org/#/c/64795/ | 09:39 | |
*** homeless has joined #openstack-infra | 09:40 | |
*** yolanda has quit IRC | 09:47 | |
flaper87 | fungi: around? | 09:49 |
*** kraman has joined #openstack-infra | 09:50 | |
flaper87 | Actually, let me just ask the question. We're trying to release an alpha version of marconiclient but I guess something went wrong. ttx was helping us and he created / pushed a tag for the client. The thing is that no job was triggered | 09:50 |
jeblair | flaper87: we had to restart zuul | 09:51 |
jeblair | flaper87: an infra root can trigger the job manually, but i don't have time at the moment | 09:51 |
flaper87 | jeblair: we literally just did that like 10 mins ago, did you guys restart zuul around that time? | 09:51 |
jeblair | flaper87: yes | 09:51 |
flaper87 | jeblair: ok, I'll ping again later then | 09:51 |
jeblair | flaper87: send an email to openstack-infra with the project and tag name | 09:52 |
flaper87 | jeblair: awesome, thanks! | 09:52 |
jeblair | fungi: quick summary: everything is in place; zuul ran out of file descriptors so we had to restart it | 09:53 |
jeblair | fungi: we ran with the new feature for a while, but it looks like it adds about 20 seconds to the processing of each change, which is a bit too much at current levels | 09:54 |
*** kraman has quit IRC | 09:55 | |
jeblair | fungi: so i disabled it manually | 09:55 |
jeblair | fungi: we may want to prepare a rax performance node with a lot of cpu capacity to deal with the current crunch | 09:55 |
*** yolanda has joined #openstack-infra | 10:05 | |
jeblair | fungi: i've spun one up; dns ttl is 300 | 10:05 |
*** afazekas has joined #openstack-infra | 10:10 | |
jeblair | fungi: ip is 162.242.154.88 | 10:11 |
jeblair | i have to go to dinner now | 10:11 |
*** SergeyLukjanov has quit IRC | 10:17 | |
*** fallenpegasus has joined #openstack-infra | 10:21 | |
*** fallenpegasus has quit IRC | 10:21 | |
*** fallenpegasus has joined #openstack-infra | 10:21 | |
*** SergeyLukjanov has joined #openstack-infra | 10:22 | |
*** SergeyLukjanov has quit IRC | 10:23 | |
*** che-arne has joined #openstack-infra | 10:45 | |
*** kraman has joined #openstack-infra | 10:50 | |
*** kraman has quit IRC | 10:52 | |
*** kraman1 has joined #openstack-infra | 10:52 | |
*** yamahata has joined #openstack-infra | 10:53 | |
*** yaguang has quit IRC | 10:54 | |
*** jooools has quit IRC | 10:54 | |
*** jooools has joined #openstack-infra | 10:55 | |
*** kraman1 has quit IRC | 10:57 | |
*** SergeyLukjanov has joined #openstack-infra | 10:58 | |
*** noorul has left #openstack-infra | 11:02 | |
*** lcestari has joined #openstack-infra | 11:05 | |
*** tma996 has joined #openstack-infra | 11:07 | |
*** saschpe has joined #openstack-infra | 11:12 | |
*** MIDENN_ has quit IRC | 11:20 | |
*** hashar has quit IRC | 11:33 | |
*** ArxCruz has joined #openstack-infra | 11:36 | |
*** michchap_ has quit IRC | 11:39 | |
*** mlipchuk has quit IRC | 11:39 | |
*** mlipchuk has joined #openstack-infra | 11:40 | |
*** michchap has joined #openstack-infra | 11:40 | |
*** mlipchuk has left #openstack-infra | 11:44 | |
*** roeyc has joined #openstack-infra | 11:50 | |
*** roeyc has quit IRC | 11:50 | |
*** kraman has joined #openstack-infra | 11:50 | |
*** roeyc_ has joined #openstack-infra | 11:52 | |
*** alexpilotti has joined #openstack-infra | 11:53 | |
*** kraman has quit IRC | 11:55 | |
*** CaptTofu has joined #openstack-infra | 11:55 | |
*** ociuhandu has quit IRC | 11:55 | |
*** esker has quit IRC | 12:14 | |
*** dizquierdo has joined #openstack-infra | 12:15 | |
*** openstackstatus has quit IRC | 12:17 | |
*** openstackstatus has joined #openstack-infra | 12:17 | |
jeblair | i'm going to move zuul to the new server now | 12:20 |
*** ociuhandu has joined #openstack-infra | 12:20 | |
*** yolanda has quit IRC | 12:23 | |
*** yolanda has joined #openstack-infra | 12:24 | |
*** rpodolyaka has joined #openstack-infra | 12:31 | |
*** yassine_ has joined #openstack-infra | 12:33 | |
*** hashar has joined #openstack-infra | 12:36 | |
*** thomasem has joined #openstack-infra | 12:38 | |
*** pmathews has joined #openstack-infra | 12:40 | |
*** alexpilotti has quit IRC | 12:43 | |
*** saschpe has quit IRC | 12:44 | |
*** saschpe has joined #openstack-infra | 12:44 | |
*** coolsvap has quit IRC | 12:47 | |
sdague | hey, so I see by the linkedin spam that ryan lane is no longer at wikimedia. Any idea if he's still going to maintain the wiki, or if wikimedia is going to have someone else help with that, or where that stands at all? | 12:50 |
*** kraman has joined #openstack-infra | 12:50 | |
anteaya | sdague: he will continue to maintain our wiki | 12:50 |
sdague | cool | 12:51 |
anteaya | pleia2 and I talked to him at summit about it | 12:51 |
sdague | ah, cool, yeh, I'm just late to the game | 12:51 |
anteaya | no worries | 12:51 |
sdague | anteaya: you are up early/late? | 12:51 |
anteaya | hard to keep up with all the movement | 12:51 |
anteaya | just had one of the most spectacular dinners of my life | 12:51 |
anteaya | catching up on the latest and then off to bed | 12:52 |
anteaya | 8:52 pm | 12:52 |
*** dizquierdo has quit IRC | 12:52 | |
anteaya | I don't think he had changed call signs yet at summit | 12:52 |
*** smarcet has joined #openstack-infra | 12:54 | |
*** kraman has quit IRC | 12:54 | |
*** dcramer_ has quit IRC | 12:55 | |
*** heyongli has joined #openstack-infra | 12:57 | |
*** smarcet has left #openstack-infra | 12:57 | |
*** johnthetubaguy1 is now known as johnthetubaguy | 13:00 | |
chmouel | fyi the rechecks page seems to be down https://bugs.launchpad.net/openstack-ci/+bug/1267098 | 13:01 |
jeblair | chmouel: thx, i'll fix it in a sec | 13:03 |
ttx | jeblair: one of those nights, huh | 13:03 |
jeblair | I'm moving zuul to a new server and i need to copy over that data manually | 13:03 |
*** dizquierdo has joined #openstack-infra | 13:04 | |
jeblair | ttx: yeah, fun times. | 13:04 |
*** yassine_ has quit IRC | 13:15 | |
*** yassine_ has joined #openstack-infra | 13:16 | |
*** yassine_ has quit IRC | 13:16 | |
*** yassine_ has joined #openstack-infra | 13:17 | |
*** yassine_ has quit IRC | 13:17 | |
*** yassine_ has joined #openstack-infra | 13:19 | |
*** yassine_ has quit IRC | 13:19 | |
*** yassine has joined #openstack-infra | 13:21 | |
*** talluri has quit IRC | 13:23 | |
*** sandywalsh_ has joined #openstack-infra | 13:27 | |
*** banix has joined #openstack-infra | 13:30 | |
*** sandywalsh_ has quit IRC | 13:31 | |
*** yassine has quit IRC | 13:31 | |
*** eharney has joined #openstack-infra | 13:32 | |
*** smarcet has joined #openstack-infra | 13:32 | |
*** yassine has joined #openstack-infra | 13:33 | |
*** banix has quit IRC | 13:35 | |
openstackgerrit | Julien Danjou proposed a change to openstack-infra/config: Install Cassandra on OpenStack CI slaves https://review.openstack.org/65466 | 13:36 |
*** sandywalsh_ has joined #openstack-infra | 13:36 | |
*** CaptTofu has quit IRC | 13:36 | |
*** dprince has joined #openstack-infra | 13:36 | |
*** CaptTofu has joined #openstack-infra | 13:37 | |
*** CaptTofu has quit IRC | 13:37 | |
*** markmc has quit IRC | 13:40 | |
*** jasondotstar has joined #openstack-infra | 13:46 | |
*** kraman has joined #openstack-infra | 13:50 | |
*** rossella_s has joined #openstack-infra | 13:52 | |
*** dkranz has joined #openstack-infra | 13:53 | |
openstackgerrit | Antoine Musso proposed a change to openstack-infra/zuul: dequeue abandoned changes https://review.openstack.org/65467 | 13:53 |
*** ryanpetrello has joined #openstack-infra | 13:54 | |
*** kraman has quit IRC | 13:55 | |
*** dims has quit IRC | 13:55 | |
*** dims has joined #openstack-infra | 13:57 | |
*** esker has joined #openstack-infra | 13:59 | |
*** yassine has quit IRC | 14:01 | |
*** yassine has joined #openstack-infra | 14:01 | |
*** yamahata has quit IRC | 14:03 | |
openstackgerrit | Thierry Carrez proposed a change to openstack-infra/config: Support proposed/* branches for milestone-proposed https://review.openstack.org/65103 | 14:04 |
*** sandywalsh_ has quit IRC | 14:04 | |
*** prad_ has joined #openstack-infra | 14:08 | |
*** mfer has joined #openstack-infra | 14:11 | |
hashar | have you guys noticed your node pool seems screwed? It barely has any VMs available :/ since Monday the 6th apparently | 14:11 |
dhellmann | sdague: ping? | 14:13 |
jeblair | hashar: they are in use; we've been running at capacity since about then. | 14:14 |
*** sandywalsh_ has joined #openstack-infra | 14:14 | |
*** mriedem has joined #openstack-infra | 14:15 | |
*** heyongli has quit IRC | 14:17 | |
*** dkliban_afk is now known as dkliban | 14:20 | |
*** mrodden has joined #openstack-infra | 14:21 | |
jeblair | fungi, clarkb, mordred: new zuul host is up; seems to be dealing with the load _very_ well | 14:24 |
jeblair | two things: | 14:24 |
*** mrodden has quit IRC | 14:26 | |
*** miqui has joined #openstack-infra | 14:28 | |
*** mrodden has joined #openstack-infra | 14:28 | |
*** weshay has joined #openstack-infra | 14:29 | |
sdague | dhellmann: pong | 14:29 |
dhellmann | sdague: catching up on that oslo.sphinx issue -- is that affecting the gate? | 14:30 |
sdague | not as far as I know | 14:30 |
dhellmann | ok | 14:30 |
sdague | there is the one guy that wanted to build docs on a devstack nova pull, and it wasn't installed | 14:30 |
sdague | I have no idea why the solum folks are trying to remove it from test-requirements, then monkey patch it back in in dg | 14:31 |
dhellmann | would it help if we had a doc-requirements.txt separate from test-requirements.txt? and then a tox env specifically for docs? | 14:31 |
annegentle_ | dhellmann: nah don't put docs in a ghetto | 14:32 |
dhellmann | annegentle_: ghetto? | 14:32 |
jeblair | a) data from zuul is not making it to statsd. i have no idea why | 14:32 |
annegentle_ | dhellmann: setting docs apart makes for a perception of second class citizen | 14:33 |
fungi | jeblair: you are up *late* | 14:33 |
jeblair | b) we're going to run out of space on /, so we should figure out what on the old zuul server was using space and deal with it. | 14:33 |
jeblair | fungi: i am about to go to bed | 14:33 |
annegentle_ | dhellmann: I saw the post also and I think it's best to run devstack while working on docs | 14:33 |
fungi | i can have a looksie | 14:33 |
jeblair | fungi: cool; any other questions before i crash? | 14:34 |
jeblair | fungi: (we've gone from load avg 150 -> 1.0 with the new server, so i'm pleased with that) | 14:34 |
dhellmann | annegentle_: ok | 14:34 |
mriedem | sdague: dhellmann: i was interested in the oslo.sphinx / nova thing too, but didn't have a good answer | 14:35 |
openstackgerrit | Julien Danjou proposed a change to openstack-dev/pbr: package: read a specific Python version requirement file https://review.openstack.org/63236 | 14:35 |
fungi | jeblair: no, i think i should be able to figure out any gaps. thanks! | 14:35 |
sdague | dhellmann: so you could say that a million ways | 14:35 |
sdague | how about pep8-requirements | 14:35 |
jeblair | fungi: the replication was too slow, so i've disabled it. sorry if that ends up being a dead end. | 14:35 |
*** dstanek has joined #openstack-infra | 14:36 | |
jeblair | fungi: but i did come up with what i think will be a good way to scale zuul horizontally; i'll try to write it up tomorrow. | 14:36 |
sdague | realistically its currently requirements.txt => run time requirements | 14:36 |
dhellmann | sdague: yeah, we should have that one, too -- takes forever to run pep8 against ceilometer because one of our test requirements is nova | 14:36 |
fungi | jeblair: oh, race conditions getting the refs to the git servers in time for tests to pull them? i wondered if it might | 14:36 |
sdague | test-requirements.txt => everything else | 14:36 |
openstackgerrit | Antoine Musso proposed a change to openstack-infra/zuul: test dequeue on abandoned changes https://review.openstack.org/65476 | 14:36 |
jeblair | fungi: no actually it was just that the pushes were too slow | 14:36 |
sdague | dhellmann: then you need to manage all these things and keep them in sync | 14:37 |
fungi | oh, wow | 14:37 |
dhellmann | sdague: we need to address the python2 vs python3 question, too | 14:37 |
mriedem | sdague: dhellmann: avoiding a build-requirements.txt | 14:37 |
mriedem | like rpms | 14:37 |
jeblair | fungi: added about 20 seconds per-change | 14:37 |
*** amotoki has joined #openstack-infra | 14:37 | |
*** malini_afk is now known as malini | 14:37 | |
mriedem | but that's what this sphinx thing really is, it's not test, it's not runtime, it's build | 14:37 |
sdague | dhellmann: well that I consider a pip flaw | 14:37 |
dhellmann | sdague: ? | 14:38 |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Don't have zuul replicate to git.o.o https://review.openstack.org/65477 | 14:38 |
fungi | jeblair: maybe if crufty zuul repos end up being part of the load issue over time, we could look into ways to expire out refs and prune. i'll play around with options when i get time | 14:38 |
hashar | iirc I have to `git gc` zuul repositories from time to time | 14:38 |
jeblair | fungi: pls merge that and restart puppet ^ | 14:38 |
fungi | jeblair: will do | 14:38 |
jeblair | fungi: i think we tried turning on packing; i can't recall right now | 14:39 |
fungi | okay. get some sleep. i'll catch you on the other side of the sun | 14:39 |
jeblair | k. good night! | 14:39 |
*** dims has quit IRC | 14:40 | |
*** ryanpetrello has quit IRC | 14:40 | |
*** rossella_s has quit IRC | 14:40 | |
*** pmathews has quit IRC | 14:40 | |
*** fallenpegasus has quit IRC | 14:40 | |
*** morganfainberg has quit IRC | 14:40 | |
*** bknudson has quit IRC | 14:40 | |
*** prad_ has quit IRC | 14:40 | |
sdague | dhellmann: sorry, just grumpy. And I should get to code and not policy this morning :) | 14:40 |
sdague | mriedem: so sure... but honestly other than *purity* what does that buy you? | 14:40 |
mriedem | nada right now | 14:41 |
jeblair | fungi: oh one more thing; if you need to restart zuul, see ~root/zuul-changes2.py | 14:41 |
jeblair | fungi: you can save the queue and re-enqueue changes with it | 14:41 |
jeblair | fungi: help text should get you started | 14:41 |
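For context, a queue-saving helper like jeblair's ~root/zuul-changes2.py (its actual interface is not shown in this log) essentially has to read Zuul's status JSON and record which changes sit in which pipeline so they can be re-enqueued after a restart. A minimal, hypothetical sketch follows; the JSON shape is an assumption modeled on the public status.openstack.org/zuul feed, and the function name is invented for illustration:

```python
# Hedged sketch: collect the change,patchset ids queued in one Zuul
# pipeline from a status JSON snapshot, preserving head order, so they
# could be re-enqueued after a restart. The feed structure below is an
# assumption, not the documented Zuul API.
import json

status_json = """
{"pipelines": [{"name": "gate", "change_queues": [{"heads": [[
    {"id": "65432,1"}, {"id": "65477,2"}]]}]}]}
"""

def saved_changes(raw, pipeline):
    """Return the change ids queued in one pipeline, head order preserved."""
    status = json.loads(raw)
    ids = []
    for p in status["pipelines"]:
        if p["name"] != pipeline:
            continue
        for q in p["change_queues"]:
            for head in q["heads"]:
                ids.extend(item["id"] for item in head)
    return ids
```

With the sample feed above, `saved_changes(status_json, "gate")` yields the two queued changes, and an unknown pipeline name yields an empty list.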
sdague | I'm also completely grumpy that there is no sane tox + pip + site packages solution | 14:42 |
jeblair | okay, really leaving now | 14:42 |
fungi | jeblair: great--thanks! | 14:42 |
*** rossella_s has joined #openstack-infra | 14:42 | |
*** dims has joined #openstack-infra | 14:42 | |
*** ryanpetrello has joined #openstack-infra | 14:42 | |
*** pmathews has joined #openstack-infra | 14:42 | |
*** fallenpegasus has joined #openstack-infra | 14:42 | |
*** morganfainberg has joined #openstack-infra | 14:42 | |
*** bknudson has joined #openstack-infra | 14:42 | |
mriedem | i thought python had a setup_requires vs install_requires thing? | 14:43 |
mriedem | idk, i did rpm packaging for openstack and its dependencies for about a year and just got used to having to sort out the BuildRequires when i hit them | 14:44 |
annegentle_ | dhellmann: I'm reading the whole rest of the thread this morning and hmming. | 14:44 |
*** herndon_ has joined #openstack-infra | 14:46 | |
dhellmann | annegentle_: I think we're running into a setuptools or pip bug :-/ | 14:47 |
*** changbl has quit IRC | 14:47 | |
fungi | dhellmann: got a link to a summary/repeatable test case? | 14:47 |
dhellmann | sdague: no worries, I'll be grumpy this afternoon | 14:47 |
dhellmann | fungi: there's a mailing list thread "[Solum] Devstack gate is failing" | 14:48 |
annegentle_ | dhellmann: remind me, the reason for oslo.sphinx is theming only? | 14:48 |
annegentle_ | dhellmann: though I guess you can't build the docs without the theme | 14:48 |
fungi | i've been eyeballs deep in the most recent pip/virtualenv/setuptools churn so maybe something will look familiar | 14:48 |
*** rossella_s has quit IRC | 14:49 | |
fungi | sdague: do you know whether havana horizon has bit-rotted on us? https://jenkins01.openstack.org/job/gate-grenade-dsvm/3181/console | 14:49 |
dhellmann | annegentle_: it's the theme, but also meant to hold any sphinx customizations we make (the "feature" of pbr that autogenerates files for the api docs is supposed to move there eventually, iirc) | 14:50 |
*** kraman has joined #openstack-infra | 14:50 | |
dhellmann | fungi: the tl;dr is that when oslo.config is installed globally with "pip install -e" and oslo.sphinx is installed in a virtualenv with "pip install" python in the virtualenv can't find oslo.config | 14:50 |
jeblair | fungi: ipv6 firewall on graphite | 14:51 |
fungi | dhellmann: the only way that would work is with --system-site-packages on the virtualenv right? | 14:51 |
*** burt has joined #openstack-infra | 14:51 | |
fungi | jeblair: ahh, i'll fix | 14:51 |
dhellmann | fungi: they must have that to not have oslo.config installed in the virtualenv at all, yeah | 14:52 |
dhellmann | fungi: but the imports are *not* working | 14:52 |
jeblair | fungi: new zuul doesn't have an AAAA record | 14:52 |
fungi | dhellmann: virtualenv 1.11? --system-site-packages is broken (there's an rc for 1.11.1 up) | 14:52 |
annegentle_ | dhellmann: so, if incubating projects use oslo.sphinx, do they get openstack theming prematurely? | 14:52 |
jeblair | fungi: can probably add it; i just didn't because the old one didn't have one | 14:52 |
dhellmann | fungi: this is in the gate jobs for solum, so I don't know which versions are in use | 14:52 |
fungi | jeblair: good call | 14:53 |
jeblair | fungi: but didn't realize it would be talking to graphite over v6 | 14:53 |
jeblair | ok. back to bed | 14:53 |
dhellmann | annegentle_: I'm not sure whether it counts as premature or not -- they're incubating, so presumably we've given them some sort of nod of approval | 14:53 |
fungi | dhellmann: my guess is it's probably pulling latest virtualenv then (or could be too-old virtualenv, 1.9.x has issues with pip 1.5). lemme dig up the bugs i know about | 14:54 |
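One plausible mechanism for the symptom dhellmann describes (hedged: the real culprit may instead be the virtualenv bugs fungi mentions) is namespace-package shadowing. oslo.config and oslo.sphinx share the top-level `oslo` package, and if the virtualenv gets its own copy of `oslo`, submodule lookup can stop there and never reach the globally installed oslo.config. The simulation below uses fake, empty packages to show the shadowing effect; the directory names are stand-ins, not real site-packages paths:

```python
# Simulated layout: the "venv" site has oslo/ containing only sphinx.py;
# the "global" site has oslo/ containing only config.py. With the venv
# site first on sys.path, oslo.sphinx imports but oslo.config does not.
import importlib
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
venv_site = os.path.join(tmp, "venv-site")      # stand-in for the virtualenv
global_site = os.path.join(tmp, "global-site")  # stand-in for system site-packages

for site, module in [(venv_site, "sphinx.py"), (global_site, "config.py")]:
    pkg = os.path.join(site, "oslo")
    os.makedirs(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    open(os.path.join(pkg, module), "w").close()

sys.path[:0] = [venv_site, global_site]   # venv site shadows the global one
importlib.invalidate_caches()

import oslo.sphinx                        # resolves via the venv's oslo/

try:
    import oslo.config                    # lookup stops at the venv's oslo/
    config_found = False or True
except ImportError:
    config_found = False
```

The real oslo packages used pkg_resources-style namespace `__init__.py` files that are supposed to merge both locations, so this sketch only illustrates the failure mode when that merging does not happen across the venv boundary.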
openstackgerrit | Thierry Carrez proposed a change to openstack-infra/config: Support proposed/* branches for milestone-proposed https://review.openstack.org/65103 | 14:54 |
*** kraman has quit IRC | 14:54 | |
annegentle_ | fungi: much appreciated | 14:56 |
dhellmann | fungi: no rush, I think we're waiting for the solum guys to explain what they're trying to do on the ML | 15:01 |
fungi | dhellmann: k | 15:01 |
jd__ | did someone already see the netaddr download issue? | 15:05 |
jd__ | someone except dhellmann | 15:05 |
*** dkranz has quit IRC | 15:05 | |
fungi | jd__: which netaddr download issue? are you talking about bug 1266513 or something else? | 15:07 |
jd__ | fungi: that looks like it, I got this error http://logs.openstack.org/70/58770/4/check/check-requirements-integration-dsvm/5266bfc/console.html from https://review.openstack.org/#/c/58770/ but maybe I just need to recheck | 15:08 |
*** rwsu has joined #openstack-infra | 15:09 | |
jd__ | or I need to submit a patch to requirements to add --allow-unverified and external? | 15:10 |
*** herndon_ has quit IRC | 15:11 | |
fungi | jd__: i'll have to look at the log to see why you're hitting it there... just getting up to speed and am a bit swamped with my overnight backlog (it was a very busy night) | 15:11 |
jd__ | fungi: no hurry, just let me know if I can help | 15:11 |
jd__ | you seem more aware than me of what could be the problem, but I'm willing to take action to take my part of the load | 15:11 |
openstackgerrit | A change was merged to openstack-infra/config: Don't have zuul replicate to git.o.o https://review.openstack.org/65477 | 15:12 |
fungi | jd__: well, it's been an emergent issue for us going on 6 days now, and we've been plugging it with workarounds in a variety of places | 15:12 |
jd__ | :-( | 15:12 |
jd__ | issues, ya never got enough | 15:13 |
*** rossella_s has joined #openstack-infra | 15:13 | |
*** dkranz has joined #openstack-infra | 15:13 | |
fungi | jd__: if you really want to help, join the crusade to convince the remaining straggler requirements of ours to start (or resume) uploading their releases to pypi.python.org | 15:13 |
fungi | jd__: SpamapS badgered the netaddr devs into assent yesterday (so awesome) | 15:14 |
fungi | but there are still a few more | 15:14 |
jd__ | my sword is yours, fungi | 15:14 |
fungi | it doesn't just help us, after all, but everyone using pip/pypi | 15:15 |
jd__ | we should list them in the requirements repo as a start I guess | 15:16 |
jd__ | that would also allow us to build the --allow-* list dynamically | 15:16 |
fungi | jd__: if it goes on for much longer, we're going to have to do something like that in openstack/requirements (probably in a separate file from global-requirements.txt just to make it easier on our existing tooling) | 15:17 |
* jd__ nods | 15:17 | |
*** julim has joined #openstack-infra | 15:17 | |
fungi | jd__: i confirmed yesterday that you can put just the override options (but not the version specs themselves) in something like a wall-of-shame.txt and then tox can be instructed to do -r wall-of-shame.txt -r requirements.txt -r test-requirements.txt and that works | 15:18 |
jd__ | ah that's indeed unexpected, but handy | 15:19 |
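The layering fungi describes, with the override options in one extra requirements file that tox hands to pip ahead of the normal ones, could look roughly like this hypothetical tox.ini fragment (the file name and its contents are illustrative; in pip 1.5 the --allow-external/--allow-unverified switches apply per package):

```ini
# Hypothetical tox.ini fragment: wall-of-shame.txt holds only pip
# override options such as
#     --allow-external netaddr
#     --allow-unverified netaddr
# while the version specs stay in the usual requirements files.
[testenv]
deps = -r{toxinidir}/wall-of-shame.txt
       -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt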
dims | jog0, sdague - check/gate-grenade-dsvm is keeling over again | 15:19 |
sdague | dims: examples? | 15:20 |
fungi | but there's a lot of places we'd want to update to make that a permanent fixture in requirements enforcement, sync and mirroring scripts, so if we can just convince the devs of those packages to dtrt instead we can possibly avoid that extra technical debt | 15:20 |
fungi | sdague: the one i asked you about earlier. failing havana horizon exercises, probably some sort of bitrot | 15:20 |
*** dcramer_ has joined #openstack-infra | 15:20 | |
sdague | fungi: I don't know | 15:21 |
dims | sdague, logstash query "Horizon front page not functioning!" | 15:21 |
sdague | honestly, if I'm ever going to get the ER stuff working, I need to stop looking at actual gate fails :) | 15:21 |
fungi | sdague: do you know whether havana horizon has bit-rotted on us? https://jenkins01.openstack.org/job/gate-grenade-dsvm/3181/console | 15:21 |
fungi | that one | 15:21 |
fungi | sounds like no, so someone needs to start investigating | 15:21 |
sdague | fungi: I'd start with a bug and an ER check | 15:22 |
dims | fungi, sdague - i had a pending review for capturing the apache2 logs (https://review.openstack.org/#/c/64490/) | 15:22 |
dims | sdague, already filed bug - https://bugs.launchpad.net/openstack-ci/+bug/1265057 | 15:22 |
fungi | dims: you probably need to bring it to the attention of the horizon and stable-maint devs, i'm guessing, so that we can up the priority | 15:23 |
sdague | so if I actually get these data queries sorted, we'll have a nearly instant answer here, so I'm going to go back to try to get that code done | 15:23 |
fungi | sdague: yes, please do. i was only asking in case you'd already seen/looked into it | 15:24 |
*** ttx has quit IRC | 15:25 | |
*** ttx has joined #openstack-infra | 15:25 | |
*** ttx has quit IRC | 15:25 | |
*** ttx has joined #openstack-infra | 15:25 | |
*** AaronGr_Zzz is now known as AaronGr | 15:25 | |
*** kraman has joined #openstack-infra | 15:26 | |
*** prad_ has joined #openstack-infra | 15:26 | |
*** dprince has quit IRC | 15:27 | |
dims | fungi, will hop onto horizon, https://review.openstack.org/#/c/64490/ is against openstack-infra/devstack-gate to get apache2 logs, could use blessings from folks here | 15:29 |
*** markmcclain has joined #openstack-infra | 15:31 | |
fungi | dims: approved, but it'll take some time to go in... i can promote it to the head of the integrated gate queue if there's consensus that the disruption from another reset will be worthwhile to get that extra data | 15:31 |
fungi | dims: developers can also recreate this job failure themselves pretty reliably, i expect, by following https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/README.rst#n100 | 15:32 |
sdague | so we're missing stuff in ES indexes | 15:33 |
dims | thanks fungi ! | 15:33 |
sdague | http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF91dWlkOjIxMGI3N2UzZmFhMTQ1ZGQ4ZTE0ZjNhODNiOTdmOTIyIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzg5MTk1MjMzMTY5fQ== | 15:34 |
sdague | no console log | 15:34 |
*** rpodolyaka has left #openstack-infra | 15:35 | |
*** pmathews has quit IRC | 15:35 | |
*** afazekas has quit IRC | 15:37 | |
*** roaet has joined #openstack-infra | 15:38 | |
*** AaronGr is now known as AaronGr_Zzz | 15:38 | |
roaet | Howdy infra folk! I had a question about the last 3 or 4 failures for my change. It appears that it is failing for random reasons (different tests, different times). Mostly Jenkins issues it appears. Anything I can do to help move this along? https://review.openstack.org/#/c/57517/6 | 15:39 |
openstackgerrit | Flavio Percoco proposed a change to openstack-infra/config: Post pre-releases on pypi for marconiclient https://review.openstack.org/65483 | 15:39 |
*** andreaf has joined #openstack-infra | 15:40 | |
*** CaptTofu has joined #openstack-infra | 15:41 | |
dims | roaet, that last failure is flakiness, you can try a recheck against bug 1265057 (you have the symptom - http://logs.openstack.org/17/57517/7/check/check-grenade-dsvm/612c6c2/logs/new/error.txt.gz) | 15:42 |
roaet | thank you dims, I will try that. | 15:43 |
matel | Hello, what's the best way to try out nodepool changes at home? | 15:43 |
roaet | With the other errors, should I just recheck no bug? | 15:43 |
chmouel | dims: i would imagine it's 1265057, no? | 15:43 |
chmouel | https://bugs.launchpad.net/grenade/+bug/1265057 | 15:43 |
*** wenlock has joined #openstack-infra | 15:44 | |
sdague | if i had to guess, I'd guess the grenade issue was a horizon havana patch that landed | 15:45 |
sdague | forward upgrade gating is still a gap | 15:45 |
*** herndon_ has joined #openstack-infra | 15:45 | |
*** fifieldt has quit IRC | 15:47 | |
*** coolsvap has joined #openstack-infra | 15:47 | |
dims | chmouel, right "recheck bug 1265057" | 15:47 |
matel | jeblair: ping | 15:47 |
chmouel | dims:, roaet: yeah correct | 15:48 |
sdague | so realize recheck is only useful if that job is sometimes passing | 15:50 |
sdague | otherwise it just makes things worse | 15:50 |
openstackgerrit | Matt Riedemann proposed a change to openstack-infra/elastic-recheck: Add query for bug 1265057 https://review.openstack.org/65489 | 15:50 |
mriedem | jog0: clarkb: mtreinish: sdague: ^ | 15:50 |
*** rcleere has joined #openstack-infra | 15:51 | |
roaet | sdague: what if it is sometimes passing/failing random tests? | 15:51 |
*** mdenny has joined #openstack-infra | 15:51 | |
*** fallenpegasus has quit IRC | 15:52 | |
sdague | roaet: that means there is a race either in openstack or in the tests. so it needs to get fixed, otherwise we end up with 80 queue, 24hr delay gates, like we do today | 15:52 |
sdague | mriedem: realize that grenade jobs can't post on reviews yet | 15:52 |
sdague | for ER | 15:52 |
*** chandankumar has quit IRC | 15:53 | |
*** chandankumar_ has joined #openstack-infra | 15:53 | |
roaet | The error prior to my last one was a failed py26 test (as in it couldn't run for some reason). In those situations I should recheck (it passed prior)? | 15:53 |
mriedem | sdague: ah, didn't know that | 15:53 |
fungi | roaet: does the log say it ran a couple days ago on centos6-3? | 15:54 |
*** jd__ has quit IRC | 15:54 | |
roaet | fungi: not sure: http://logs.openstack.org/17/57517/7/check/gate-python-neutronclient-python26/61fa46c/console.html | 15:54 |
fungi | if so, that server had a problem and got disciplined very harshly | 15:54 |
roaet | it was this morning | 15:54 |
fungi | ahh, maybe we have another... | 15:55 |
roaet | yes. centos6-3 something | 15:55 |
*** jd__ has joined #openstack-infra | 15:55 | |
roaet | just randomly failed.. some java hudson failure | 15:55 |
*** chandankumar__ has joined #openstack-infra | 15:55 | |
fungi | roaet: that's not this morning | 15:55 |
roaet | oh. weird. I thought it was run this morning. (is in a time loop) | 15:55 |
fungi | roaet: that's the event from a couple days ago. there was an infra bug for that server (now resolved) | 15:55 |
roaet | Ah got it. Sorry about that. | 15:56 |
sdague | fungi: promote this - https://review.openstack.org/#/c/64491/ | 15:56 |
sdague | next time you have a chance | 15:56 |
fungi | you can recheck against the bug if you go looking at recently resolved bugs in lp for openstack-ci | 15:56 |
fungi | sdague: will do asap | 15:56 |
*** chandankumar_ has quit IRC | 15:57 | |
fungi | sdague: promoted changes 64491,4 64490,2 (in that order) | 15:59 |
dims | whoa - gate is 101 deep | 16:00 |
mriedem | dkranz: ping | 16:00 |
openstackgerrit | A change was merged to openstack-infra/elastic-recheck: Add query for bug 1265057 https://review.openstack.org/65489 | 16:02 |
openstackgerrit | Eli Klein proposed a change to openstack-infra/jenkins-job-builder: Add local-branch option https://review.openstack.org/65369 | 16:02 |
dkranz | mriedem: Give me two minutes please... | 16:02 |
*** dprince has joined #openstack-infra | 16:04 | |
*** CaptTofu has quit IRC | 16:06 | |
*** senk has joined #openstack-infra | 16:06 | |
*** tma996 has quit IRC | 16:06 | |
*** CaptTofu has joined #openstack-infra | 16:07 | |
*** oubiwann has joined #openstack-infra | 16:07 | |
*** yamahata has joined #openstack-infra | 16:08 | |
*** CaptTofu has quit IRC | 16:09 | |
dkranz | mriedem: pong | 16:09 |
*** CaptTofu has joined #openstack-infra | 16:09 | |
mriedem | dkranz: hey, so i was looking into bug 1265906 to add logstash indexing on logs/error.txt.gz | 16:09 |
mriedem | to this: http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/logstash/jenkins-log-client.yaml | 16:09 |
mriedem | however, given the format of that log file: | 16:10 |
mriedem | http://logs.openstack.org/37/61037/9/check/check-tempest-dsvm-neutron/6d98c79/logs/error.txt.gz | 16:10 |
mriedem | looks like i'd need to define a new format to handle it here, right? | 16:10 |
mriedem | http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/templates/logstash/indexer.conf.erb | 16:10 |
mriedem | the existing formats expect timestamps, logs/error.txt.gz doesn't have that | 16:10 |
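The mismatch mriedem describes can be sketched in a few lines; the pattern and sample lines below are illustrative only, not the actual contents of indexer.conf.erb:

```python
import re

# Illustrative only: the existing indexer formats key on a leading timestamp
# like "2014-01-08 16:10:00.123"; lines in logs/error.txt.gz carry no such
# prefix, so they fail to match and would need a new format definition.
TIMESTAMPED = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

print(bool(TIMESTAMPED.match("2014-01-08 16:10:00.123 INFO nova.compute ...")))
print(bool(TIMESTAMPED.match("Could not determine a suitable URL for the plugin")))
```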
zul | how do i make one gerrit review depend on another https://review.openstack.org/#/c/62432/ and https://review.openstack.org/#/c/65493/ | 16:10 |
dkranz | mriedem: The other option would be to change the generator for that log file | 16:11 |
dkranz | mriedem: I didn't even know it existed until recently | 16:11 |
mriedem | zul: https://wiki.openstack.org/wiki/Gerrit_Workflow#Add_dependency | 16:11 |
mriedem | dkranz: where would i change the generator for that log file? | 16:12 |
dkranz | mriedem: I'm not sure. I don't know a lot about this stuff. | 16:12 |
mriedem | dkranz: ok, i'm picking on you due to your ML post on where the indexers are defined. :) | 16:12 |
*** reed has joined #openstack-infra | 16:13 | |
dkranz | mriedem: I figured | 16:13 |
mriedem | that's what you get for helping | 16:13 |
dkranz | mriedem: No good deed goes unpunished :) | 16:13 |
dims | lol | 16:13 |
dkranz | mriedem: Most of the infra folks are far away from home now I think | 16:14 |
*** banix has joined #openstack-infra | 16:16 | |
*** sdake has quit IRC | 16:17 | |
*** gyee_ has joined #openstack-infra | 16:17 | |
*** sdake has joined #openstack-infra | 16:17 | |
*** ttx_ has joined #openstack-infra | 16:18 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Allow statsd hosts to connect to graphite via IPv6 https://review.openstack.org/65497 | 16:18 |
* ttx_ experiences some kind of routing issues | 16:19 | |
fungi | as painful as a very deep gate queue is, it's fun to watch the subway construction underway | 16:23 |
*** SergeyLukjanov has quit IRC | 16:23 | |
openstackgerrit | Flavio Percoco proposed a change to openstack-infra/config: Post pre-releases on pypi for marconiclient https://review.openstack.org/65483 | 16:29 |
*** herndon_ has quit IRC | 16:29 | |
*** esker has quit IRC | 16:31 | |
*** sandywalsh_ has quit IRC | 16:31 | |
portante | fungi: are we in a state where folks should hold off further approvals? | 16:32 |
portante | fwiw: if there is anything I can do to help out, let me know | 16:33 |
*** dstanek has quit IRC | 16:33 | |
fungi | portante: i don't think so. today's main gate blocker (besides a little lost ground while we were down for a zuul replacement with a faster vm) is assumed to be fixed by 64491, which is now at the head of the gate and slated to merge in 10 minutes. fingers crossed | 16:36 |
fungi | the new zuul is performing waaaay better than its predecessor too | 16:37 |
fungi | though i'm hammering through 65497 to fix a problem with graphs | 16:37 |
*** ^d has joined #openstack-infra | 16:38 | |
*** UtahDave has joined #openstack-infra | 16:39 | |
fungi | sdague: 64491 went in | 16:40 |
openstackgerrit | A change was merged to openstack-infra/config: Allow statsd hosts to connect to graphite via IPv6 https://review.openstack.org/65497 | 16:40 |
*** talluri has joined #openstack-infra | 16:43 | |
portante | fungi: great, thanks | 16:43 |
hashar | nice | 16:44 |
*** beagles is now known as beagles_brb | 16:45 | |
*** talluri has quit IRC | 16:45 | |
*** hashar has quit IRC | 16:46 | |
*** jasondotstar has quit IRC | 16:52 | |
*** changbl has joined #openstack-infra | 16:53 | |
*** pcrews has quit IRC | 16:53 | |
*** markmcclain has quit IRC | 16:55 | |
*** esker has joined #openstack-infra | 16:58 | |
fungi | the tabloid headline for this would be "red hat takes over centos" http://lists.centos.org/pipermail/centos-announce/2014-January/020100.html | 16:58 |
ttx_ | fungi: or "redhat eats centos alive" | 16:59 |
ttx_ | (not that I think it's a bad idea :) | 16:59 |
fungi | right next to a fuzzy picture of bat boy | 16:59 |
*** esker has quit IRC | 17:00 | |
*** SergeyLukjanov has joined #openstack-infra | 17:00 | |
*** chandankumar__ has quit IRC | 17:00 | |
ttx_ | ok, I declare my network connection totally useless and end my day | 17:01 |
*** jasondotstar has joined #openstack-infra | 17:01 | |
*** dstanek has joined #openstack-infra | 17:01 | |
fungi | ttx_: have a pleasant evening. my isp issues seem to be clearing up, at least | 17:02 |
*** esker has joined #openstack-infra | 17:02 | |
ttx_ | fungi: if at least it was just failing, I would not keep on trying and getting random loss | 17:02 |
*** sarob has joined #openstack-infra | 17:03 | |
fungi | ttx_: was similar for me. established tcp sessions kept working, but ~50% of new sessions plus most of my udp traffic timed out | 17:03 |
*** sarob has quit IRC | 17:04 | |
*** sarob has joined #openstack-infra | 17:04 | |
ttx_ | yeah, the drops seem to affect encypted connections more, too. Trying not to get too paranoid | 17:04 |
fungi | web browsing to ajaxy sites doing callback or using crap like google analytics was basically impossible | 17:04 |
fungi | and api calls using http(s), like novaclient, also severely broken | 17:05 |
*** ttx_ has quit IRC | 17:06 | |
jog0 | whoa gate queue is at 102 | 17:06 |
fungi | jog0: pretty awesome, yeah? | 17:07 |
jd__ | not sure if that's irony | 17:07 |
jd__ | or rather sarcasm | 17:08 |
fungi | well, the scalability there is pretty grand. maybe i can be a bit more objective because it's not preventing me from getting other things done | 17:08 |
*** sarob has quit IRC | 17:09 | |
*** sarob has joined #openstack-infra | 17:09 | |
fungi | the graphite graphs won't be accurate for another hour, so hard to see what sort of volume we're really doing | 17:10 |
*** sarob has quit IRC | 17:11 | |
*** sarob has joined #openstack-infra | 17:11 | |
fungi | the most recent gate reset was classified by elastic-recheck as bug 1254872 | 17:11 |
*** jcoufal has quit IRC | 17:12 | |
jog0 | libvirt ... sigh | 17:12 |
*** sarob has quit IRC | 17:12 | |
fungi | but the grenade/horizon fix made it in a little while ago, so hopefully things will be back to moving quickly | 17:13 |
*** sarob has joined #openstack-infra | 17:14 | |
jog0 | fungi: cool. that was my patch | 17:15 |
*** oubiwann has quit IRC | 17:15 | |
*** sarob has quit IRC | 17:15 | |
fungi | jog0: just think of all the changes you saved from a reverify with that one | 17:16 |
*** sarob has joined #openstack-infra | 17:17 | |
jog0 | fungi: heh, I am just happy that one of my 20 odd outstanding patches is done | 17:18 |
*** yamahata has quit IRC | 17:18 | |
fungi | jog0: indeed. i think i only have 15 which aren't wip right now | 17:20 |
fungi | clearly i should be writing more patches | 17:21 |
jog0 | ^_^ | 17:21 |
*** sarob has quit IRC | 17:22 | |
*** pballand has quit IRC | 17:24 | |
*** roeyc_ has quit IRC | 17:25 | |
*** dstanek has quit IRC | 17:28 | |
*** herndon has joined #openstack-infra | 17:28 | |
*** hub_cap is now known as carnac | 17:29 | |
*** carnac is now known as hub_cap | 17:30 | |
*** jpich has quit IRC | 17:32 | |
sdague | dhellmann: so is there a concise write up of the oslo.config issue? | 17:32 |
sdague | because there are now a millions things related to it, but I'm not sure I see a real analysis of what is going on, and why it's an issue | 17:33 |
*** pballand has joined #openstack-infra | 17:33 | |
dhellmann | sdague: I think bnemec worked out that it's caused by a combination of pip install -e and something else in the same namespace package not being installed in that mode | 17:33 |
*** herndon has quit IRC | 17:33 | |
dhellmann | sdague: alternatives to fix it seem to be install oslo.sphinx with pip install -e or change its package name | 17:34 |
*** oubiwann has joined #openstack-infra | 17:34 | |
dhellmann | sdague: or don't install it at all, of course | 17:34 |
bnemec | dhellmann: sdague: Right, the problem is that we pip install -e oslo.config in the system site packages, then pip install oslo.sphinx in the venv. | 17:34 |
bnemec | That combination results in oslo.config being unavailable in the venv. | 17:34 |
sdague | so that sounds like a pip or venv bug | 17:35 |
dhellmann | probably setuptools, but yeah | 17:35 |
fungi | dhellmann: is the issue to do with them sharing a common namespace but being split between system and venv? | 17:35 |
dstufft | ouch are you guys using setuptools namespaces? | 17:35 |
bnemec | fungi: Partially, but it doesn't happen if you do a normal pip install of both. | 17:35 |
dhellmann | fungi: common namespace, split, *and* installed with .egg-link in some cases (I think) | 17:35 |
dstufft | namespaces don't play well with -e | 17:35 |
bnemec | It's only a problem if you mix egg-links and regular pip installs. | 17:35 |
fungi | ahhhh | 17:35 |
dstufft | I recommend staying away from namespace packages unless you're python3 only and can use the built in support for them :/ | 17:36 |
dhellmann | so for normal production oslo libs, we'd just install them from devstack with pip install -e and be done | 17:36 |
dhellmann | but because oslo.sphinx is not a production lib, that's not necessarily the best answer | 17:36 |
dhellmann | dstufft: too late :-/ | 17:36 |
dstufft | dhellmann: I figured as much, i'd try to get away from them if you can though. I don't think i've seen a single use of them that didn't end up being a regret eventually | 17:37 |
*** senk has quit IRC | 17:37 | |
dhellmann | bnemec: I have a meeting starting in a minute, can you reply to sdague's email on the list with a summary? | 17:37 |
*** sparkycollier has joined #openstack-infra | 17:38 | |
bnemec | dhellmann: Sure, I was actually just doing that when this discussion started. :-) | 17:38 |
dhellmann | dstufft: I've had fairly good luck, but I try to avoid doing anything tricky with them most of the time | 17:38 |
sdague | dstufft: so can you explain what the issue is? | 17:38 |
dhellmann | bnemec: thanks! | 17:38 |
dstufft | sdague: the issue is they rely on a hack, a hack that breaks down if you install two things with the same namespace in different ways | 17:39 |
dhellmann | sdague: the code building the import path for the oslo namespace package is getting confused because it's finding .egg-link and real directories -- it's probably stopping on the real directory in the venv | 17:39 |
sdague | dstufft: so the issue is it's called oslo.sphinx | 17:39 |
sdague | basically the issue is we used '.' in packages? | 17:39 |
dhellmann | no, a namespace package is a special kind of package that does stuff to the import path | 17:40 |
dhellmann | if we had just an oslo package, and bundled everything into it, this would all work fine | 17:40 |
dstufft | yea | 17:40 |
dstufft | what dhellmann said | 17:40 |
dstufft | https://github.com/openstack/oslo.sphinx/blob/master/oslo/__init__.py#L17 | 17:40 |
dstufft | that line is what does the hack | 17:40 |
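The hack dstufft points at can be demonstrated without setuptools at all. This sketch uses the stdlib `pkgutil.extend_path` variant of the same idea, with throwaway directories standing in for two separately installed distributions that share the `oslo` namespace (all paths and names here are hypothetical):

```python
import os
import sys
import tempfile

# Two "distributions", each shipping its own oslo/ directory with a
# namespace-declaring __init__.py plus one real submodule.
base = tempfile.mkdtemp()
for dist, mod in [("dist_a", "config"), ("dist_b", "sphinx")]:
    pkg = os.path.join(base, dist, "oslo")
    os.makedirs(pkg)
    # every copy of oslo/__init__.py carries the same path-munging boilerplate
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("from pkgutil import extend_path\n"
                "__path__ = extend_path(__path__, __name__)\n")
    open(os.path.join(pkg, mod + ".py"), "w").close()

sys.path[:0] = [os.path.join(base, "dist_a"), os.path.join(base, "dist_b")]

# importing oslo runs the hack, which extends oslo.__path__ to cover both
# install locations, so submodules from either distribution resolve
import oslo.config
import oslo.sphinx
```

This is the fragile part: the merging only works when every portion of the namespace is installed the same way, which is exactly what breaks when one portion is an `.egg-link` from `pip install -e` and another is a regular install.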
sdague | dhellmann: ok, so all these details need to be in that synopsis | 17:40 |
dhellmann | but we have several different oslo directories, and each of those extends the import path so when you import from oslo it looks in all of them | 17:40 |
*** andreaf has quit IRC | 17:40 | |
dhellmann | except that that path munging stuff doesn't like that we're mixing editable installs and regular installs | 17:40 |
*** CaptTofu has quit IRC | 17:40 | |
sdague | because it sounds like oslo is doing stuff that's unwise, so we should stop doing that. | 17:41 |
dhellmann | we should probably just reserve the oslo namespace for production libraries, and rename the theme package | 17:41 |
*** herndon_ has joined #openstack-infra | 17:41 | |
*** CaptTofu has joined #openstack-infra | 17:41 | |
sdague | and then maybe we wouldn't have to uninstall / reinstall oslo.config twice at the beginning of every devstack install to try to get a working env | 17:41 |
sdague | dhellmann: so isn't oslo.config hit by the same issue | 17:41 |
dhellmann | well, it could also be said that if devstack didn't use that -e option this wouldn't be an issue, so it's a combination of all of it | 17:41 |
dhellmann | I don't know what led to us reinstalling oslo.config like that | 17:42 |
dhellmann | is it being brought in as a dependency? | 17:42 |
*** mancdaz is now known as mancdaz_away | 17:42 | |
sdague | dhellmann: because it didn't work until we did that | 17:42 |
dstufft | if I recall correctly, and i'm not well versed in namespace packages, there are 3 cases of how something can be installed | 17:42 |
dstufft | and pip uses one of them normally, another one for -e, and I think easy_install uses a third way | 17:43 |
sdague | you end up with "pip install oslo.config" can't install, already installed | 17:43 |
sdague | import oslo.config | 17:43 |
sdague | explode | 17:43 |
dstufft | and you can have namespace packages compat with -e and the pip way, or the pip way and the easy_install way | 17:43 |
dstufft | IIRC | 17:43 |
sdague | python can't find it | 17:43 |
dstufft | but not all 3 of them | 17:43 |
dstufft | https://github.com/pypa/pip/issues/3 | 17:43 |
dhellmann | sdague: do you know what caused it to be installed and then for something else to try to install it again? | 17:43 |
*** nicedice has quit IRC | 17:44 | |
dhellmann | dstufft: thanks, that's good info | 17:44 |
sdague | dhellmann: because it's packaged in the distro, and some times things drag it in | 17:44 |
sdague | remember, oslo wanted their stuff to be generic | 17:44 |
sdague | so other people might use it | 17:44 |
sdague | honestly, I don't know all the details | 17:44 |
dhellmann | why are we installing the distro package? | 17:44 |
dhellmann | ok | 17:44 |
*** dcramer_ has quit IRC | 17:45 | |
sdague | you can't assume it's not installed | 17:45 |
sdague | in devstack | 17:45 |
sdague | it actually caused all kinds of breaks | 17:45 |
dhellmann | that's not going to be unique to the other libraries, then | 17:45 |
dhellmann | or rather, that's going to affect the other libraries and not be unique to config | 17:45 |
sdague | except oslo.config gets imported freaking everywhere | 17:46 |
* dhellmann needs lunch to english better | 17:46 | |
dhellmann | sdague: so will oslo.log and oslo.text when those are released | 17:46 |
dstufft | dhellmann: np, sorry I don't have good news :| | 17:46 |
*** harlowja_away is now known as harlowja | 17:46 | |
dhellmann | dstufft: "this is not only your issue" is a start :-) | 17:46 |
dstufft | dhellmann: :) | 17:48 |
sdague | dhellmann: so basically, it has to be fixed in a real way, because we were hacking around this on oslo.config, and those hacks are breaking down | 17:48 |
dhellmann | sdague: To start I'll look into the impact of changing the name of oslo.sphinx. Do you have another suggestion for the name? openstack-sphinx? with a python package name "openstacksphinx"? | 17:48 |
sdague | but honestly, it totally blows my mind that it's ok that you can pip install a package, then not be able to import it :P | 17:48 |
sdague | that seems like a fundamental break in the software | 17:49 |
dhellmann | sdague: The multi-install issue would show up with any of our dependencies, right? If we tried to install an editable version of sqlalchemy, that would cause the same issue? | 17:49 |
dhellmann | sdague: yeah, see the bug dstufft linked above -- it is | 17:49 |
sdague | dhellmann: so you have to walk through the details, remember you've got a decade on me in wonky python internals | 17:49 |
*** marun has joined #openstack-infra | 17:50 | |
*** CaptTofu has quit IRC | 17:50 | |
dhellmann | sdague: these are 2 issues, and I was just talking about the "multiple install oslo.config to get it to work" problem | 17:50 |
*** CaptTofu has joined #openstack-infra | 17:50 | |
dhellmann | installing something "for real" leaves it in a different format than installing it in "development mode" where you can edit the files without having to reinstall the package | 17:51 |
dstufft | sdague: packaging is built on a pile of broken, we're trying to sort things out and make it sane, but some stuff is just fundamentally broken and afaik the setuptools namespace hack isn't something that can be fixed reliably :[ The future of namespace packages lives in python3, but it's not useful until 2.x is dead :| | 17:51 |
dhellmann | an os package is going to use the first mode, and devstack is using the second | 17:51 |
dhellmann | that makes sense, because we want to leave the system in a state where the developer can make changes to it | 17:51 |
*** nati_ueno has joined #openstack-infra | 17:51 | |
dhellmann | but what it means is we have to work to make sure there is no previous version of the library, in order to get it cleaned up properly | 17:51 |
sdague | right, so the point of devstack is you have trees of code that you can edit so you can ^C and restart with that code and test your changes | 17:52 |
dhellmann | that's right | 17:52 |
dhellmann | but that may mean we have to say "do not install these other libraries globally if you are using devstack" | 17:52 |
dhellmann | or, have the logic in devstack to forcibly remove them | 17:52 |
dhellmann | for example, if we know the system package names, we could try to delete them | 17:52 |
sdague | however, what you are telling me is that because we need oslo.config in /opt/stack you can never use another program that uses oslo.config on that system which isn't in /opt/stack | 17:52 |
dhellmann | or if that doesn't work, we could just remove the relevant files with rm rather than "pip uninstall" | 17:53 |
sdague | which means we should tell everyone to minimize their use of oslo.config | 17:53 |
dhellmann | no, it doesn't have to do with where the source is on the filesystem | 17:53 |
*** _ruhe is now known as ruhe | 17:53 | |
sdague | it has to do with the fact that it's in development mode | 17:54 |
dhellmann | if you've installed my-fancy-app and that uses oslo.config and then you try to use devstack, you'll have this issue | 17:54 |
sdague | right | 17:54 |
dhellmann | s/oslo.config/any other package/ | 17:54 |
sdague | so that seems completely broken | 17:54 |
*** DennyZhang has joined #openstack-infra | 17:54 | |
dhellmann | yes, I agree | 17:54 |
dstufft | So this may be a dumb question | 17:55 |
dstufft | why's devstack not isolated from the system? | 17:55 |
dhellmann | because what we're doing is installing editable versions of libraries into the global operating system's view of python | 17:55 |
dhellmann | and that's not a great idea | 17:55 |
dhellmann | right | 17:55 |
sdague | dstufft: because we need to make sure it will actually work on the system | 17:55 |
*** dstanek has joined #openstack-infra | 17:55 | |
* dhellmann has a meeting, sorry | 17:55 | |
dstufft | actually work on what system | 17:55 |
dstufft | isn't integrating into the OS the OS packager's job | 17:56 |
openstackgerrit | Noorul Islam K M proposed a change to openstack-infra/config: Move devstack hooks from infra config to solum repo https://review.openstack.org/65414 | 17:56 |
*** pcrews has joined #openstack-infra | 17:56 | |
sdague | dstufft: we're actually trying to not hand them a big pile of fail that makes that hard though. | 17:56 |
dhellmann | sdague: if it helps, libraries that are definitely meant for others to consume will not be in the oslo namespace, and that should make this less of an issue for those | 17:57 |
openstackgerrit | Noorul Islam K M proposed a change to openstack-infra/config: Move devstack hooks from infra config to solum repo https://review.openstack.org/65414 | 17:57 |
sdague | dhellmann: so oslo.config is not meant to be used beyond openstack server packages? | 17:57 |
dhellmann | sdague: if we installed all of our dependencies into a single virtualenv, would that be too far off from installing it globally? | 17:57 |
dhellmann | sdague: god, I hope not | 17:57 |
*** krtaylor has quit IRC | 17:58 | |
sdague | dhellmann: so why is it used by oslo's update script then? | 17:58 |
dhellmann | sdague: because that's also an openstack tool? | 17:58 |
dhellmann | it's not meant to be used outside of openstack | 17:58 |
dhellmann | at least, IMHO | 17:58 |
sdague | so if it was oslo-config instead would this not be an issue? | 17:59 |
sdague | or still an issue? | 17:59 |
*** herndon_ has quit IRC | 17:59 | |
dhellmann | the name issue relates to the python package, so it would have to be osloconfig | 17:59 |
dhellmann | but yeah, if we didn't use namespace packages this would probably be less of an issue | 17:59 |
sdague | less of an issue, or not an issue? | 18:00 |
dhellmann | although I don't know for sure that you'd have less trouble re-installing it in edit mode, I'd have to experiment with that | 18:00 |
dhellmann | I'm not sure | 18:00 |
sdague | because if the answer is "not an issue", I think that's what has to be done | 18:00 |
*** thomasem has quit IRC | 18:01 | |
dstufft | it wouldn't be an issue that you'd install it and not see it because of how you installed it | 18:01 |
dhellmann | I need to understand why a single virtualenv for devstack wouldn't work before I can agree, but we'll hae to talk about that later | 18:01 |
sdague | sure | 18:01 |
sdague | and you need mordred in that conversation | 18:01 |
dstufft | atleast as far as I know | 18:01 |
*** Ajaeger1 has joined #openstack-infra | 18:01 | |
dstufft | this is python packaging we're talking about so there's a giant ass caveat that nobody actually understands how all of this works so | 18:02 |
sdague | because, honestly I'm noobish enough to just know the current situation is terrible. As a user pip install -WHATEVER boo, then not being able to import boo, is fail. :) | 18:02 |
*** medberry is now known as med_ | 18:03 | |
*** jasondotstar has quit IRC | 18:03 | |
sdague | and I'm surprised that "just throw a venv at it" is the solution. Because didn't we make fun of java people for shipping 20 jvms with their software because they couldn't get all the software to play together :) | 18:03 |
*** kraman has quit IRC | 18:03 | |
*** matel has quit IRC | 18:03 | |
dhellmann | we wouldn't use a virtualenv in production, just for devstack | 18:04 |
*** kraman1 has joined #openstack-infra | 18:04 | |
*** pballand has quit IRC | 18:05 | |
sdague | dhellmann: in which case we just made moving from dev -> production that much further way | 18:06 |
sdague | away | 18:06 |
sdague | because we'll stop testing if we can actually build a system where all this stuff plays together with a base OS | 18:06 |
*** senk has joined #openstack-infra | 18:06 | |
*** beagles_brb is now known as beagles | 18:06 | |
dhellmann | the virtualenv lets us say "all of these versions work together". What does using /usr give us over that if we're replacing system package versions of libs anyway? | 18:06 |
sdague | dhellmann: because we don't want to be replacing *all* of the system | 18:07 |
fungi | dstufft: part of the argument for installing it globally in the os is the suggestion that we want to see if it's going to be viable packaged before the distro packagers package it. sort of chicken-and-egg proposition which i don't completely think is sound | 18:07 |
dhellmann | doesn't devstack do that when it syncs the requirements file and then calls pip install -U? | 18:07 |
sdague | dhellmann: it doesn't install pip install -U | 18:07 |
sdague | it does pip install without the -U | 18:08 |
sdague | so if you have sufficient requirements, you get them | 18:08 |
dhellmann | ok | 18:08 |
sdague | not the latest | 18:08 |
dstufft | fungi: yea I don't buy that personally. I've never had anything but problems mixing virtualenv and a system site packages. Now granted i've never done anything as big as openstack so idk :) | 18:08 |
dstufft | although if someone's packaging openstack wouldn't they be packaging it with the next version of their OS? | 18:08 |
dstufft | the *nix's don't usually like putting new stuff into an already released OS | 18:09 |
dhellmann | sdague: i expect we could work around the issue with multiple installs for oslo.config by making devstack understand how the package might be installed, and removing it ourself instead of relying on pip to do it | 18:09 |
sdague | dstufft: most of them provide it via an update channel | 18:09 |
*** thomasem has joined #openstack-infra | 18:09 | |
dstufft | sdague: ah, ok | 18:09 |
sdague | dstufft: because it evolves faster than the base OS | 18:10 |
fungi | dstufft: right now we're so fast-moving/evolving that shipping releases as old as the distro's release is seen by much of the community as broken/insufficient | 18:10 |
fungi | what sdague said | 18:10 |
dhellmann | fungi: could you have a look at https://review.openstack.org/#/c/65180/ when you have a few minutes? | 18:10 |
sdague | dstufft: "I've never had anything but problems mixing virtualenv and a system site packages" unfortunately translates to me as "never use python for system level things" :( | 18:10 |
*** jasondotstar has joined #openstack-infra | 18:10 | |
fungi | as primarily a sysadmin, i believe in: you get what's released with your distro, plus security/usability fixes backported | 18:10 |
dhellmann | fungi: no rush, but I'd like to start gating cliff on openstack consumers | 18:10 |
*** dstanek has quit IRC | 18:11 | |
dstufft | sdague: Eh, it's really an issue because you're using two different package managers to manage the same stuff | 18:11 |
fungi | dhellmann: i think i have that change starred, and i'm getting close to being able to get back to reviewing today if nothing else breaks | 18:11 |
dhellmann | fungi: cool, thanks | 18:11 |
dhellmann | sdague: what dstufft said | 18:12 |
sdague | dstufft: right, but when you have a distro that has a bunch of python stuff to get things done, what do you do? | 18:13 |
dstufft | I always isolate myself from the system site packages | 18:13 |
sdague | dstufft: and how often does that become system level packages? | 18:14 |
*** oubiwann has quit IRC | 18:14 | |
dstufft | If someone wants to put my software in a *nix, it's on them to integrate it, that's their value add over just installing things from PyPI, that they've spent the time to integrate it into the system | 18:14 |
dstufft | sdague: I don't understand that question | 18:15 |
sdague | yeh, I'm not sure I do anymore either. :) | 18:16 |
dstufft | I mean i work on pip so our stuff gets packaged by OSs, but we avoid dependencies like the plague :) | 18:16 |
sdague | :) | 18:17 |
*** sarob has joined #openstack-infra | 18:17 | |
*** sarob has joined #openstack-infra | 18:18 | |
sdague | ok, time to get some lunch before I just start trolling :) | 18:18 |
*** sarob has quit IRC | 18:18 | |
kraman1 | fungi: ping, how's your load today? got some time to answer a few questions? | 18:19 |
*** sarob has joined #openstack-infra | 18:20 | |
*** oubiwann has joined #openstack-infra | 18:20 | |
*** noorul has joined #openstack-infra | 18:21 | |
noorul | https://review.openstack.org/#/c/65414/ | 18:22 |
noorul | If someone can help me to get this in quickly | 18:22 |
dkranz | fungi: How do you investigate why jenkins is not running on a new upload of a patch, for example https://review.openstack.org/#/c/64818/2 | 18:23 |
Ajaeger1 | dkranz: have a look at http://status.openstack.org/zuul/ | 18:24 |
dkranz | AJaeger: I did that but don't see anything for 64818 | 18:24 |
mriedem | sdague: to get a feel for how long a particular ES query takes, you suggested adding debug logging to check_success.py in elastic-recheck, | 18:24 |
mriedem | i was thinking about actually making that something we store in the metrics we collect and dump - thoughts on that? | 18:25 |
Ajaeger1 | dkranz: Zuul was restarted at the time the change was done, you need to retrigger it. | 18:25 |
* Ajaeger1 checked the timestamp | 18:25 | |
dkranz | AJaeger: ok, thanks | 18:25 |
*** sarob has quit IRC | 18:25 | |
Ajaeger1 | dkranz "recheck no bug" should be all it needs - and then some patience ;) | 18:26 |
*** sarob has joined #openstack-infra | 18:26 | |
dkranz | AJaeger: Yup | 18:26 |
*** CaptTofu has quit IRC | 18:26 | |
sdague | mriedem: so we're actually pretty stateless today, I'd rather stay that way as much as possible | 18:30 |
*** sarob has quit IRC | 18:31 | |
mriedem | sdague: not sure i follow, figured getting the query time into the print_metrics output would be better than having to dig through the debug logs per query? | 18:31 |
*** sarob has joined #openstack-infra | 18:31 | |
*** pblaho has quit IRC | 18:31 | |
fungi | kraman1: my day is pretty crazy, but i can probably answer a question or two | 18:31 |
sdague | mriedem: so eventually, sure. the problem is there is a hot/cold data problem | 18:32 |
sdague | so... actually, maybe a different tool is better | 18:32 |
*** sarob has quit IRC | 18:32 | |
sdague | that does 4 runs of each query, throws away the first one, averages the other 3 | 18:32 |
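A minimal sketch of the benchmarking helper sdague describes; `timed_runs` and `query_fn` are hypothetical names, and the real tool would issue the actual elasticsearch query:

```python
import time

def timed_runs(query_fn, runs=4):
    """Time query_fn over several runs, discard the first, average the rest.

    The first run is thrown away because it pays the cold-cache cost; the
    remaining runs give a steadier estimate of warm query latency.
    """
    durations = []
    for _ in range(runs):
        start = time.time()
        query_fn()
        durations.append(time.time() - start)
    warm = durations[1:]
    return sum(warm) / len(warm)
```

Usage would be something like `timed_runs(lambda: es.search(query))` for each tracked bug's query.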
kraman1 | fungi: i got zuul working the way I need it with a bunch of hacks. was hoping to get some time run the hacks by you and see if there was a better way to do it so i could send patches to zuul | 18:33 |
fungi | dkranz: i think that 64818,2 got uploaded while jeblair was replacing zuul, so it probably missed the patchset upload event | 18:33 |
fungi | oh, Ajaeger1 just mentioned that | 18:33 |
mriedem | sdague: maybe a new option for the elastic-recheck CLI | 18:33 |
mriedem | ? | 18:33 |
kraman1 | fungi: if you dont have time (looks like it), then can you please suggest anyone else I could talk to about the hacks? | 18:33 |
sdague | mriedem: honestly, I'd rather go with a smaller set of tools vs. one big cli at this point | 18:34 |
dkranz | fungi: So in the future, if I see something where jenkins does not run and nothing in zuul, I should not bother to report that but just do recheck? | 18:34 |
*** CaptTofu has joined #openstack-infra | 18:34 | |
Ajaeger1 | dkranz: and the zuul queue empty! | 18:34 |
*** sarob has joined #openstack-infra | 18:34 | |
*** praneshp has joined #openstack-infra | 18:34 | |
Ajaeger1 | if there's a long queue, you might not see it ;) | 18:34 |
dkranz | AJaeger: Got it :) | 18:34 |
fungi | kraman1: maybe asking next week would be better when everyone besides me isn't at a conference in australia (jeblair is our resident expert on zuul since he wrote just about all of it) | 18:34 |
*** sarob has quit IRC | 18:35 | |
kraman1 | fungi: ok, will wait for jeblair to return. thanks | 18:36 |
*** sarob has joined #openstack-infra | 18:36 | |
fungi | dkranz: if you search for the change number on http://status.openstack.org/zuul/ and it's not there after a minute, and that page says "Queue lengths: 0 events, 0 results." then chances are good zuul missed the event for it for some reason | 18:36 |
dkranz | fungi: k, thanks | 18:36 |
fungi | dkranz: though there's another corner case... if any of the change's dependencies or reverse-dependencies is a draft patchset (but if you avoid drafts that should be uncommon). currently that breaks zuul's ability to figure out how to test it | 18:37 |
dkranz | fungi: Yes, I have learned to check that too the last time I saw this :) | 18:38 |
*** branen_ has quit IRC | 18:38 | |
*** derekh has quit IRC | 18:38 | |
*** rnirmal has joined #openstack-infra | 18:39 | |
*** sarob has quit IRC | 18:40 | |
*** sarob has joined #openstack-infra | 18:40 | |
*** dstanek has joined #openstack-infra | 18:40 | |
*** praneshp has quit IRC | 18:42 | |
*** kraman1 has left #openstack-infra | 18:43 | |
*** krtaylor has joined #openstack-infra | 18:43 | |
*** sarob has quit IRC | 18:45 | |
*** praneshp has joined #openstack-infra | 18:49 | |
*** dizquierdo has quit IRC | 18:50 | |
*** sandywalsh has quit IRC | 18:54 | |
*** dprince_ has joined #openstack-infra | 18:55 | |
*** dprince has quit IRC | 18:55 | |
openstackgerrit | Matt Riedemann proposed a change to openstack-infra/elastic-recheck: Log how long it takes to run a query when collecting metrics https://review.openstack.org/65514 | 18:55 |
fungi | devs going into a blind reverify loop on consistent failures is not helping our gate throughput... https://review.openstack.org/61924 | 18:57 |
fungi | i think we should call a stop to any havana approvals until https://launchpad.net/devstack/bugs/1266094 is solved (and now it looks like volumes is broken on grizzly as well as aggregates) | 18:58 |
*** oubiwann has quit IRC | 18:58 | |
*** CaptTofu has quit IRC | 18:58 | |
bknudson | fungi: the launchpad link didn't work for me. | 18:59 |
mriedem | https://bugs.launchpad.net/devstack/+bug/1266094 | 18:59 |
fungi | oops... i meant https://launchpad.net/bugs/1266094 | 18:59 |
fungi | or what mriedem corrected it to (that works too) | 18:59 |
mriedem | maybe should be marked as critical... | 19:00 |
fungi | maybe should be triaged at all | 19:00 |
* fungi doesn't have permissions to set importance on devstack bugs | 19:01 | |
fungi | afaik it could be a duplicate (i find it hard to believe this situation has escaped detection by a wider audience) | 19:02 |
fungi | but that was the only obvious one i spotted | 19:02 |
*** markmcclain has joined #openstack-infra | 19:03 | |
mriedem | hmm Error in sys.exitfunc | 19:05 |
*** sandywalsh has joined #openstack-infra | 19:07 | |
*** changbl has quit IRC | 19:08 | |
*** wenlock has quit IRC | 19:10 | |
*** melwitt has joined #openstack-infra | 19:10 | |
openstackgerrit | Jim Branen proposed a change to openstack/requirements: Use new hplefthandclient https://review.openstack.org/65179 | 19:11 |
*** wenlock has joined #openstack-infra | 19:12 | |
*** esker has quit IRC | 19:12 | |
*** herndon_ has joined #openstack-infra | 19:12 | |
*** jroovers has quit IRC | 19:13 | |
*** eharney has quit IRC | 19:15 | |
*** branen has joined #openstack-infra | 19:16 | |
*** mrodden has quit IRC | 19:19 | |
*** changbl has joined #openstack-infra | 19:19 | |
portante | fungi: I just noticed that a swift commit was about to fail due to one of the gate tests failing (all other gate tests for the commit had passed), but when the job ahead of it failed, it was reset and that flaky failure mode was lost | 19:20 |
*** mrodden has joined #openstack-infra | 19:20 | |
portante | that sounds like we are potentially missing the true rate of test flakiness with that, no? | 19:21 |
*** herndon_ has left #openstack-infra | 19:21 | |
*** ruhe is now known as _ruhe | 19:21 | |
fungi | portante: not entirely. the logs for those jobs were still uploaded unless they got cancelled, so elasticsearch has a record of them (it's why the hit volume reported by elastic-recheck's graphs is not reflective of the number of changes which actually failed to merge on the first pass) | 19:22 |
portante | great, thanks for explaining that | 19:22 |
fungi | so we do collect data on jobs which fail, even if they don't result in a change getting kicked out of the gate | 19:23 |
portante | do we have those events recorded in a database somewhere? | 19:23 |
*** syerrapragada1 has joined #openstack-infra | 19:24 | |
fungi | logstash keeps a short-term (currently two week) record of them, which is used in the analysis elastic-recheck performs | 19:24 |
portante | hmm, okay | 19:24 |
fungi | and so they can be queried via lucene expressions through the kibana interface at http://logstash.openstack.org/ | 19:24 |
*** syerrapragada1 has quit IRC | 19:24 | |
*** dcramer_ has joined #openstack-infra | 19:24 | |
fungi | what it's recording and indexing is the logs themselves. i think zuul also reports the job results via statsd to graphite.openstack.org (though at the moment that still may be broken--we thought it was an iptables problem after the zuul replacement last night, but the graphs are still flat-lining) | 19:26 |
*** johnthetubaguy has quit IRC | 19:26 | |
portante | ah | 19:26 |
fungi | i need to look into why. the firewall rules have been correct for a while now i think, so it's got to be something else | 19:27 |
*** thuc has joined #openstack-infra | 19:27 | |
*** jooools has quit IRC | 19:31 | |
portante | k | 19:32 |
*** rossella_s has quit IRC | 19:33 | |
*** DennyZhang has quit IRC | 19:34 | |
*** dcramer_ has quit IRC | 19:34 | |
*** sarob has joined #openstack-infra | 19:34 | |
*** hashar has joined #openstack-infra | 19:35 | |
*** jroovers has joined #openstack-infra | 19:38 | |
*** smarcet has left #openstack-infra | 19:45 | |
*** thuc has quit IRC | 19:45 | |
*** thuc has joined #openstack-infra | 19:46 | |
*** ArxCruz has quit IRC | 19:46 | |
*** dcramer_ has joined #openstack-infra | 19:48 | |
*** ^d has quit IRC | 19:49 | |
*** thuc has quit IRC | 19:50 | |
*** thuc has joined #openstack-infra | 19:51 | |
*** thuc has quit IRC | 19:51 | |
*** thuc has joined #openstack-infra | 19:51 | |
*** eharney has joined #openstack-infra | 19:52 | |
*** sarob has quit IRC | 19:53 | |
*** sarob has joined #openstack-infra | 19:54 | |
*** dripton has joined #openstack-infra | 19:54 | |
*** dripton__ has quit IRC | 19:54 | |
*** sarob has quit IRC | 19:57 | |
*** hashar has quit IRC | 20:00 | |
*** vipul is now known as vipul-away | 20:02 | |
*** vipul-away is now known as vipul | 20:02 | |
*** rfolco has quit IRC | 20:04 | |
*** mrodden1 has joined #openstack-infra | 20:13 | |
*** mrodden has quit IRC | 20:14 | |
*** _ruhe is now known as ruhe | 20:14 | |
*** yolanda has quit IRC | 20:15 | |
*** hashar has joined #openstack-infra | 20:16 | |
*** vipul is now known as vipul-away | 20:17 | |
openstackgerrit | Malini Kamalambal proposed a change to openstack-infra/devstack-gate: Add Support for Marconi https://review.openstack.org/65145 | 20:18 |
*** ^d has joined #openstack-infra | 20:19 | |
*** AaronGr_Zzz is now known as AaronGr | 20:25 | |
fungi | seeing what's currently slowing up the gate, https://launchpad.net/bugs/1232303 is killing quite a few cycles for gate-tempest-dsvm-large-ops (using nova network) | 20:26 |
*** malini is now known as malini_afk | 20:33 | |
openstackgerrit | Zane Bitter proposed a change to openstack-infra/reviewstats: Add Bartosz Gorski to heat-core https://review.openstack.org/65534 | 20:36 |
*** vipul-away is now known as vipul | 20:36 | |
*** freyes__ has quit IRC | 20:39 | |
*** sarob has joined #openstack-infra | 20:41 | |
*** ociuhandu has quit IRC | 20:41 | |
openstackgerrit | Sergey Lukjanov proposed a change to openstack/requirements: Sort tracked projects list https://review.openstack.org/63770 | 20:50 |
*** hogepodge has joined #openstack-infra | 20:54 | |
mriedem | fungi: do we need an e-r query for bug 1232303? | 20:57 |
*** sarob has quit IRC | 20:59 | |
*** sarob has joined #openstack-infra | 21:00 | |
fungi | mriedem: possibly? i haven't checked (too swamped with other tasks still) | 21:00 |
mriedem | fungi: i mean, there isn't one today, but i see you have one with 100% fail rate and 49 hits in the last 7 days | 21:00 |
mriedem | i'll check it out | 21:00 |
*** dstanek has quit IRC | 21:00 | |
mriedem | only thing i'm not sure about is if it must be pinned to that project, but i can look | 21:00 |
*** sarob_ has joined #openstack-infra | 21:01 | |
fungi | mriedem: i really don't know. i saw it shoot the entire ~95 change gate queue in the head, googled the error, found that bug, adjusted the query in it for the new dsvm job names and saw it was pretty bad | 21:02 |
fungi | then griped in here and hoped someone else would do all the real work ;) | 21:02 |
mriedem | i didn't see the griping, but saw the bug report update, so i'll check out at least the e-r query | 21:02 |
fungi | yeah, it hit a horizon change in the past hour | 21:03 |
fungi | this might come back to the console log not getting indexed issue sdague mentioned earlier (i think he posted to the -infra ml on it too) | 21:03 |
*** sarob has quit IRC | 21:04 | |
sdague | fungi: yeh I posted over to -infra list | 21:05 |
sdague | as I think clarkb will need to look into it | 21:05 |
sdague | I expect it's reasonably subtle | 21:05 |
*** CaptTofu has joined #openstack-infra | 21:06 | |
sdague | mriedem: we should also probably only add e-r bugs for things that are not in New state. I think a part of the issue is projects aren't triaging them, or don't realize the bug is bouncing the gate | 21:06 |
fungi | sdague: yeah, i've already got a plate full of subtle (and not-so-subtle) i'm trying to wolf down, so i'm hoping he'll get spare time to respond on it | 21:06 |
sdague | fungi: yep agreed | 21:06 |
*** sarob_ has quit IRC | 21:07 | |
sdague | on the upside, after I got past that issue, getting the data series to report reasonable things based on pandas series is starting to click | 21:07 |
*** sarob has joined #openstack-infra | 21:07 | |
sdague | I've got a rewrite of check_success about 60% complete | 21:08 |
fungi | too awesome | 21:08 |
sdague | it makes some things much simpler and cleaner, though you kind of have to get a feel for how these DataFrame objects work | 21:09 |
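sdague's DataFrame-based rewrite of check_success isn't shown in-channel, but the shape of the computation he describes can be sketched in a few lines of pandas. Column names here are illustrative, not elastic-recheck's actual schema:

```python
import pandas as pd

# Toy frame shaped like per-build records: one row per completed job run
builds = pd.DataFrame({
    "build_uuid": ["a1", "b2", "c3", "d4"],
    "job": ["gate-tempest-dsvm-full"] * 2
           + ["gate-tempest-dsvm-large-ops"] * 2,
    "status": ["SUCCESS", "FAILURE", "SUCCESS", "SUCCESS"],
})

# Success rate per job: a boolean Series grouped by the job column,
# whose mean is the fraction of passing runs
rates = (builds["status"].eq("SUCCESS")
         .groupby(builds["job"])
         .mean())
```

Once the records are in a DataFrame, per-job, per-queue, or per-day breakdowns become one-line `groupby` variations, which is presumably the "simpler and cleaner" payoff mentioned above.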
*** krtaylor has quit IRC | 21:11 | |
*** sarob has quit IRC | 21:12 | |
*** hogepodge_ has joined #openstack-infra | 21:17 | |
*** weshay has quit IRC | 21:17 | |
*** hogepodge has quit IRC | 21:19 | |
*** hogepodge_ is now known as hogepodge | 21:19 | |
openstackgerrit | Matt Riedemann proposed a change to openstack-infra/elastic-recheck: Add query for bug 1232303 https://review.openstack.org/65539 | 21:19 |
jog0 | sdague: have you seen the large-ops test failing | 21:20 |
jog0 | in gate http://logs.openstack.org/43/60443/5/gate/gate-tempest-dsvm-large-ops/ac1b04a/console.html | 21:20 |
sdague | jog0: I saw there were failed jobs. I've basically tried to pull back looking into failures otherwise this ER work is never getting done | 21:21 |
fungi | jog0: we were discussing bug 1232303 as being a likely high-priority one for large-ops fails | 21:21 |
fungi | ooh, SSHTimeout in a tempest job also just caused a massive gate reset | 21:22 |
jog0 | fungi: sshtimeout is always causing gate resets | 21:23 |
* fungi wonders how long it will be until we can design a cloud which can actually boot reachable vms ;) | 21:23 | |
* jog0 wonders too | 21:23 | |
*** prad has joined #openstack-infra | 21:24 | |
jog0 | mriedem: once the tests pass +A on https://review.openstack.org/#/c/65539 | 21:25 |
*** smarcet has joined #openstack-infra | 21:26 | |
mriedem | okey dokey | 21:26 |
fungi | ouch, precise12 went into rapid-fire jenkins agent failure and killed a bunch of jobs, so i unplugged it just now https://jenkins02.openstack.org/computer/precise12/builds | 21:26 |
*** prad_ has quit IRC | 21:26 | |
fungi | i'll wrestle it back into shape | 21:27 |
*** david-lyle has quit IRC | 21:27 | |
*** david-lyle has joined #openstack-infra | 21:28 | |
jog0 | fungi: dumb question I don't see 65539 in status.o.o/zuul | 21:30 |
jog0 | why is that | 21:30 |
Ajaeger1 | wow, "Queue lengths: 120 events, 3746 results. " - that's a lot of events | 21:30 |
Ajaeger1 | jog0: ^ - there are many events not listed, see the queue length... | 21:30 |
jog0 | check queue of 25 is the limit? | 21:31 |
*** weshay has joined #openstack-infra | 21:32 | |
fungi | jog0: no, zuul is busy processing thousands of events from a gate reset in the past few minutes, and hasn't gotten through the backlog far enough to find the patchset event for that change and enqueue it | 21:32 |
jog0 | fungi: ahhhhh | 21:33 |
jog0 | fungi: any way to make zuul not allocate resources to patchsets that are a certain depth down in the gate queue? | 21:33 |
*** SergeyLukjanov_ has joined #openstack-infra | 21:34 | |
*** SergeyLukjanov has quit IRC | 21:35 | |
*** Ajaeger1 has quit IRC | 21:38 | |
*** rcleere has quit IRC | 21:39 | |
fungi | jog0: that has been discussed as a possible pessimistic "optimization" (perhaps dynamically determined by recent gate success rates) but i think the consensus was "let's just fix the bugs instead of making them hurt less" | 21:39 |
fungi | in some ways, broken software slowing down the rate of development serves to redirect some of that effort to addressing the broken, roughly proportional to the pain | 21:40 |
*** dprince_ has quit IRC | 21:40 | |
jog0 | the only thing this would change is less wasted resources in zuul, but I agree about the redirect efforts part | 21:41 |
jog0 | so i'll retract my comment | 21:41 |
portante | fungi: are we seeing more bugs fixed? | 21:41 |
fungi | portante: good question--i don't have relevant numbers on bug fix rates (maybe qa does), but i'm pretty sure that just making them easier to ignore is not going to improve quality | 21:43 |
portante | the fear I have is that the pain has to get too high before the bug fix rate changes and the environment becomes unusable | 21:45 |
portante | perhaps the PTLs can help monitor the fix rate and make that public so we have a feedback loop in place | 21:45 |
fungi | well, we had some consensus from ptls that gate-impacting bugs would get addressed as a top priority, but there has to be some basic rate of bug triage going on for that to happen (unless we're just going to accomplish it by opening bugs and then shouting on the -dev ml until someone notices) | 21:46 |
portante | somehow that bug fix rate has to be tracked and made visible, I would think | 21:47 |
portante | russellb: you around? | 21:47 |
fungi | at least from an infra perspective, i can say that while i may not triage all bugs in a timely fashion, i keep an eye on the gating-related ones and jump on them straight away if they're an actual infra problem (or reroute them quickly if they're not) | 21:48 |
portante | and we try to do the same with swift | 21:48 |
portante | but that assumes that we have a good mechanism in place to get them tracked properly in the first place, which I don't know for sure | 21:49 |
fungi | i think this week, there may still be some post-holiday sluggishness going on with projects keeping on top of bugs | 21:49 |
fungi | but that's just an unfounded guess | 21:49 |
portante | yes, that is not an unreasonable expectation | 21:49 |
fungi | okay, precise12 has been rebooted, brought back online in jenkins and is running tests without any sign of agent failures now | 21:50 |
*** smarcet has left #openstack-infra | 21:51 | |
*** dizquierdo has joined #openstack-infra | 21:52 | |
fungi | precise32 got automatically marked offline for some reason, so i'm prodding it now | 21:54 |
fungi | yep, it's gone unreachable. probably hung | 21:54 |
jeblair | fungi: this is the worst jetlag ever. | 21:55 |
fungi | jeblair: sorry to hear it. the sort of jetlag which shoots you in a dark alley and then fishes in your pockets for loose change? | 21:56 |
jeblair | fungi: loose change is serious money here | 21:56 |
fungi | jeblair: i haven't solved the mystery of the broken statsd reporting yet, what with other stuff cropping up. i added ip6tables rules to graphite, but that host has no aaaa record anyway and tcpdump on zuul shows no sign that it's trying to send any 8125/udp packets | 21:57 |
fungi | jeblair: however, i did confirm that the statsd envvars are present in the calling environment for the currently running zuul daemon pids according to /proc/pid/environ | 21:58 |
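The /proc/&lt;pid&gt;/environ check fungi describes can be scripted; a hedged sketch (Linux-only, and `read_environ` is a hypothetical helper, not infra tooling):

```python
import os

def read_environ(pid):
    # Entries in /proc/<pid>/environ are NUL-separated KEY=VALUE pairs
    # reflecting the environment the process was *started* with (later
    # putenv() calls by the process are not reflected here).
    with open("/proc/%d/environ" % pid, "rb") as f:
        raw = f.read()
    return dict(
        entry.split(b"=", 1)
        for entry in raw.split(b"\x00")
        if b"=" in entry
    )

# Inspect our own environment the same way fungi checked zuul's daemon
env = read_environ(os.getpid())
```

Because this shows the startup environment, it is exactly the right place to confirm that variables like the statsd host/port settings made it past the init script and daemonization.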
harlowja | who killed github, lol | 21:59 |
jeblair | fungi: weird; okay i'll keep looking | 21:59 |
fungi | harlowja: i shot the github and i won | 21:59 |
harlowja | fungi :) | 21:59 |
harlowja | shot em dead | 21:59 |
mgagne | harlowja: https://status.github.com/messages =) | 22:00 |
harlowja | ya mgagne | 22:00 |
harlowja | fungi did it | 22:00 |
fungi | not really, but i did manage to get a sex pistols song stuck in my head now | 22:00 |
harlowja | :) | 22:00 |
jog0 | mriedem: your patch failed | 22:00 |
jog0 | https://review.openstack.org/#/c/65539/1 | 22:01 |
fungi | er, sorry, it was dead kennedys | 22:01 |
jog0 | mriedem: ping me when it passes tests and I will +A | 22:01 |
mriedem | jog0: yeah, hit a bad slave | 22:01 |
fungi | mriedem: precise12 or a different one? | 22:01 |
mriedem | precise37 | 22:01 |
jog0 | bad slave lol | 22:01 |
fungi | ugh, i'll jump on that one next | 22:01 |
*** hogepodge has quit IRC | 22:03 | |
mriedem | so i guess e-r doesn't run on e-r jobs? | 22:03 |
mriedem | :) | 22:03 |
fungi | okay, precise37 is in the corner for a timeout while i look it over | 22:03 |
fungi | looks like it did a fair amount of damage too... https://jenkins01.openstack.org/computer/precise37/builds | 22:04 |
fungi | jeblair: did you just manually generate statsd traffic off the new zuul? | 22:04 |
jeblair | fungi: yes | 22:04 |
fungi | i hadn't gotten around to killing my tcpdump yet and just saw two packets | 22:04 |
jeblair | fungi: and it showed up on graphite :/ | 22:05 |
fungi | so the zuul process simply doesn't want to send them | 22:05 |
jog0 | mriedem: also 1257626 is the same bug | 22:05 |
jog0 | and we already have a query for it | 22:05 |
jog0 | or almost the same bug | 22:05 |
jog0 | not sure why we don't always see that one hit | 22:05 |
mriedem | jog0: because the message has "nova.compute.manager" in it? | 22:06 |
*** sarob has joined #openstack-infra | 22:08 | |
*** SergeyLukjanov_ has quit IRC | 22:08 | |
*** mfer has quit IRC | 22:08 | |
mriedem | jog0: hmm, it hits in logstash: http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwibm92YS5jb21wdXRlLm1hbmFnZXIgVGltZW91dDogVGltZW91dCB3aGlsZSB3YWl0aW5nIG9uIFJQQyByZXNwb25zZSAtIHRvcGljOiBcXFwibmV0d29ya1xcXCIsICBSUEMgbWV0aG9kOiBcXFwiYWxsb2NhdGVfZm9yX2luc3RhbmNlXFxcIlwiIEFORCBmaWxlbmFtZTpcImxvZ3Mvc2NyZWVuLW4tY3B1LnR4dFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlc | 22:09 |
mriedem | jog0: is it that gate-tempest-dsvm-large-ops isn't checked by e-r? | 22:09 |
jog0 | mriedem: doh yes | 22:09 |
jog0 | thats it | 22:09 |
*** SergeyLukjanov has joined #openstack-infra | 22:10 | |
mriedem | alright, i'll abandon my patch as a duplicate, and mark one of the bugs as a dupe of the other | 22:10 |
openstackgerrit | Sean Dague proposed a change to openstack-infra/elastic-recheck: wip: use pandas DataFrames for check_success https://review.openstack.org/65173 | 22:10 |
jog0 | just did that | 22:10 |
jog0 | mriedem: it shouldn't be hard to make e-r support large-ops | 22:11 |
jog0 | want to take a whack at that? | 22:11 |
mriedem | jog0: if you can give some guidance | 22:11 |
jeblair | fungi: and it used ipv4, so nothing relevant should have changed since the last zuul restart, right? | 22:11 |
fungi | jeblair: yep. ipv4 | 22:11 |
fungi | 22:02:03.770385 IP 162.242.154.88.46477 > 198.61.209.112.8125: UDP, length 8 | 22:11 |
*** fbo is now known as fbo_away | 22:11 | |
jog0 | mriedem: search the code for tempest | 22:11 |
jog0 | actually hmmmm | 22:12 |
jog0 | we should be checking large-ops | 22:12 |
jog0 | not sure if we are but we should be | 22:12 |
jog0 | because we check all tempest jobs | 22:12 |
mriedem | jog0: i can open a bug at least to track it | 22:12 |
jeblair | fungi: i'm out of ideas other than to see if a restart changes anything. | 22:13 |
fungi | jeblair: oh, i solved the mystery of the zuul filesystem usage, btw. very, very large zuul/gearman logs from a while back. some from july were 2gb per log. /var/log/zuul accounted for 22gib on the / filesystem. scratch log copies/thread dumps in /root took up another 6+gib | 22:13 |
sdague | jog0: we're also losing stuff in the ER index | 22:13 |
jog0 | sdague: eep | 22:14 |
fungi | jeblair: so i don't think there's too much risk of running out of / for now | 22:14 |
*** weshay has quit IRC | 22:14 | |
sdague | go see my openstack-infra email | 22:14 |
sdague | that was tripping me up all last night | 22:14 |
* jog0 isn't on -infra | 22:14 |
sdague | jog0: you should fix that :P | 22:14 |
* jog0 signs up | 22:14 | |
sdague | http://lists.openstack.org/pipermail/openstack-infra/2014-January/000615.html | 22:15 |
sdague | basically I was trying to do a merge on build_uuids | 22:15 |
jeblair | sdague: do you know if that's the case for very recent builds? | 22:15 |
sdague | jeblair: that build was 7 days ago | 22:15 |
*** thomasem has quit IRC | 22:16 | |
fungi | precise32 and precise37 are back on line not and not insta-failing jobs | 22:16 |
*** krtaylor has joined #openstack-infra | 22:16 | |
fungi | er, back on line now | 22:16 |
jeblair | sdague, fungi: we should correlate that with the jenkins upgrade and scp plugin issue | 22:16 |
sdague | jeblair: I can write a tool to spit out missing build_uuids once I get the check_success bits done | 22:16 |
sdague | which I'm close on, but I ran out of time for the day. Time to run off to linux users group here | 22:17 |
fungi | jeblair: sdague: good point. we had a span of nearly a day where console log files were being truncated | 22:17 |
sdague | fungi: truncated to 0? | 22:17 |
fungi | sdague: not the ones i saw, just various points in the middle of the log | 22:17 |
sdague | there are 0 console.html lines for the build uuid I provided in there | 22:17 |
sdague | I provided a build_uuid and es url in the email | 22:17 |
sdague | for further exploration | 22:18 |
sdague | anyway, need to run. Talk to folks tomorrow | 22:18 |
jeblair | fungi: how do you feel about a zuul restart? | 22:19 |
fungi | the gate just reset, so i suppose now's as good a time as any | 22:19 |
jeblair | done | 22:20 |
* fungi digs up the restart notes | 22:20 |
jeblair | i wish there were a nodepool delete all command | 22:20 |
fungi | ~root/zuul-changes2.py | 22:21 |
jeblair | fungi: oh, i already did that | 22:21 |
jeblair | fungi: i should have phrased my question differently | 22:21 |
fungi | ahh, okay | 22:21 |
jeblair | starting zuul now | 22:21 |
fungi | a'ight | 22:21 |
jeblair | let's see if there's any statsd traffic | 22:21 |
*** rwsu has quit IRC | 22:22 | |
jeblair | it should register empty queue gauges when it starts | 22:22 |
jeblair | i think | 22:22 |
jeblair | wow, still no joy | 22:22 |
fungi | nada | 22:23 |
fungi | 2014-01-08 22:21:50,833 DEBUG zuul.Scheduler: Statsd enabled | 22:25 |
fungi | that's the only reference it seems to log about anything to do with statsd or graphite | 22:25 |
*** sarob has quit IRC | 22:26 | |
mriedem | jog0: looks like e-r is reporting on 1257626 after all, here is an example: https://review.openstack.org/#/c/57358/ | 22:27 |
jeblair | fungi: that was a manual test | 22:28 |
jog0 | mriedem: hmm strange this may be the issues sdague saw | 22:28 |
fungi | jeblair: i figured, since it was solitary. i'm expecting a flood if it's spontaneously fixed | 22:28 |
sdake | hey guys, https://github.com/openstack/heat is 404ing - any tips? | 22:29 |
*** weshay has joined #openstack-infra | 22:29 | |
mriedem | sdake: not for me, but use git.openstack.org | 22:29 |
*** pelix has joined #openstack-infra | 22:29 | |
*** dkranz has quit IRC | 22:30 | |
*** rwsu has joined #openstack-infra | 22:30 | |
fungi | mriedem: sdake: https://status.github.com/messages (but at least if you use git.o.o instead you can pester us to fix it when it breaks) | 22:31 |
*** hogepodge has joined #openstack-infra | 22:31 | |
*** flaper87 is now known as flaper87|afk | 22:32 | |
jeblair | fungi: restarted and queues restored | 22:33 |
*** sarob has joined #openstack-infra | 22:33 | |
pelix | clarkb: re https://review.openstack.org/#/c/63579 would monkey patching the minidom Element class to fix the writexml method for python 2.6 be an acceptable fix? | 22:34 |
fungi | yep, i see the changes popping back up on the status page as zuul catches up with the queue | 22:34 |
pelix | Alternatives involve using something like an ElementTree-based writer, an escape method on all element text fields for html entities and a regex to remove the blank spaces in empty nodes i.e. make '&lt;tag /&gt;' = '&lt;tag/&gt;' | 22:34 |
pelix | fixing minidom until support for python 2.6 is removed seems the least crazy solution I have so far :| | 22:34 |
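A minimal sketch of the monkey-patch pelix is proposing, assuming the Python 2.6 quirk is confined to how `writexml` serializes empty elements; the patch wraps the stock method and normalizes the output (run on Python 3 here, where the substitution is a no-op):

```python
import io
import re
from xml.dom import minidom

_orig_writexml = minidom.Element.writexml

def _patched_writexml(self, writer, indent="", addindent="", newl=""):
    # Serialize with the stock implementation into a buffer, then
    # normalize the empty-element form ("<tag />" -> "<tag/>") before
    # handing it to the real writer.  This keeps the patch forward-
    # compatible: on interpreters that already emit "<tag/>", the
    # substitution changes nothing.
    buf = io.StringIO()
    _orig_writexml(self, buf, indent, addindent, newl)
    writer.write(re.sub(r"\s+/>", "/>", buf.getvalue()))

minidom.Element.writexml = _patched_writexml

doc = minidom.Document()
doc.appendChild(doc.createElement("tag"))
serialized = doc.documentElement.toxml()
```

The appeal of this approach over switching serializers is that all existing callers of `writexml`/`toxml` pick up the fix transparently, and the patch can be deleted wholesale once python 2.6 support is dropped.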
*** sarob has quit IRC | 22:36 | |
clarkb | pelix: that doesnt sound too bad and can probably be done in a future python friendly way | 22:36 |
*** sarob has joined #openstack-infra | 22:37 | |
*** dkranz has joined #openstack-infra | 22:37 | |
*** sarob_ has joined #openstack-infra | 22:38 | |
*** thuc has quit IRC | 22:38 | |
*** thuc has joined #openstack-infra | 22:39 | |
*** pelix has quit IRC | 22:41 | |
*** sarob has quit IRC | 22:41 | |
jeblair | fungi: i have a reproducible test case; it looks like it's related to the python daemon package | 22:42 |
jeblair | there's a new version. maybe it sanitizes the env | 22:42 |
*** dmsimard has joined #openstack-infra | 22:42 | |
fungi | jeblair: i kept thinking it might be an environment issue, which was why i was digging in /proc/pid/environ | 22:42 |
fungi | that does sound entirely possible | 22:43 |
*** thuc has quit IRC | 22:43 | |
*** dkranz has quit IRC | 22:43 | |
fungi | using a deb of it? the latest on pypi was uploaded 2010-03-02 | 22:44 |
fungi | but i do recall a recent cve for python-daemon | 22:45 |
fungi | just not the details | 22:45 |
*** sarob_ has quit IRC | 22:46 | |
dmsimard | Hey guys, maybe you can point me in the right direction ? I'm trying to do a commit that depends on a commit that is already in review. I essentially checked out the commit in review, committed on top of it and sent it for review but it doesn't seem like it's that easy. Tried to dig the documentation but haven't found much. Any ideas ? | 22:46 |
dmsimard | (Maybe I should be asking in #openstack-dev..) | 22:46 |
fungi | jeblair: the deb for python-daemon installed on zuul has its most recent package changelog entry dated Sat, 17 Dec 2011 14:09:14 +0000 | 22:46 |
fungi | dmsimard: it is supposed to be that easy. what errors did you get? | 22:47 |
*** SergeyLukjanov_ has joined #openstack-infra | 22:47 | |
*** SergeyLukjanov_ has quit IRC | 22:47 | |
jeblair | fungi: oh! we're using an _older_ one on the new server, and a newer one on the old server from pypi | 22:48 |
*** SergeyLukjanov has quit IRC | 22:48 | |
*** SergeyLukjanov_ has joined #openstack-infra | 22:48 | |
openstackgerrit | Zane Bitter proposed a change to openstack-infra/reviewstats: Reformat heat.json https://review.openstack.org/65558 | 22:48 |
openstackgerrit | Zane Bitter proposed a change to openstack-infra/reviewstats: Add Bartosz Gorski to heat-core https://review.openstack.org/65534 | 22:48 |
dmsimard | fungi: The error suggests a change-id should be provided in the commit message - but I'm confused since this should be generating a new review - not a new patch set to the ongoing review. | 22:48 |
fungi | jeblair: aha! | 22:48 |
*** SergeyLukjanov_ has quit IRC | 22:48 | |
fungi | dmsimard: does git log show a Change-Id: header in your commit message? | 22:48 |
dmsimard | fungi: No, but none of the commits do, actually | 22:49 |
fungi | dmsimard: what repository? | 22:50 |
dmsimard | fungi: puppet-swift | 22:50 |
dmsimard | stackforge/puppet-swift | 22:50 |
dmsimard | The change-id are usually in the footer. | 22:50 |
*** jroovers has quit IRC | 22:51 | |
fungi | dmsimard: normally, the first time you run git-review in a local repository, it adds a commit hook to automatically update your commit message with a random change-id header when you write it out (if there isn't already one in the commit message), but if you hadn't used git-review in that repository before the last time you ran git commit, it wouldn't have been added | 22:51 |
fungi | dmsimard: right, git header lines go at the end of the commit message (more correctly called a footer, but they get confusingly referred to as header lines most of the time anyway) | 22:52 |
jeblair | fungi: i think the daemon version is a red herring; it seems related to the new gear lib which imports statsd | 22:52 |
fungi | ohhh | 22:52 |
jeblair | fungi: (but we now have daemon 1.6 from pip on the new server) | 22:52 |
fungi | is it importing one of the other python modules called statsd rather than the statsd we want? | 22:52 |
dmsimard | fungi: So is gerrit expecting the change-id from the parent commit ? | 22:53 |
ruhe | dmsimard: "git review -s" and then "git commit --amend" to get a change-id appended to the commit message | 22:53 |
fungi | dmsimard: it expects a change-id on any commit you're pushing which is not already on the target branch | 22:53 |
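The Change-Id footer fungi describes is just a line in the commit message that the commit-msg hook appends; a simplified matcher for it (not Gerrit's actual parser) looks like:

```python
import re

def extract_change_id(commit_message):
    # Gerrit looks for a footer line of the form "Change-Id: I<40 hex>".
    # This regex is a simplification: real Gerrit also requires the
    # footer to sit in the last paragraph of the message.
    m = re.search(r"^Change-Id: (I[0-9a-f]{40})$", commit_message, re.M)
    return m.group(1) if m else None
```

Commits lacking this footer are exactly the ones Gerrit rejects when pushed to refs/for/*, which is the error dmsimard hit above.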
jeblair | fungi: afaict it still gets a correct statsd object | 22:53 |
*** dims has quit IRC | 22:53 | |
fungi | oh, so not the "wrong" statsd, just maybe the wrong version | 22:53 |
jeblair | fungi: i think it's gear statsd + daemonizing that's triggering it | 22:54 |
jeblair | brb | 22:54 |
clarkb | is statsd opening a socket then daemon closing it? | 22:55 |
*** sarob has joined #openstack-infra | 22:55 | |
dmsimard | ruhe: That did the trick. Thanks. | 22:55 |
arosen | Hi, I did recheck bug 1257626 here but it hasn't rerun the tests https://review.openstack.org/#/c/64769/ . I had also tried recheck no bug and same thing. | 22:55 |
fungi | dmsimard: if you're trying to push a series of patches and more than one of them is not yet in gerrit, you may need to rebase -i and switch them from pick to edit so you can commit --amend each of their commit messages | 22:55 |
arosen | It failed with that ssh timeout issue. | 22:55 |
fungi | arosen: you may have done it right when zuul was being restarted around 30 minutes ago | 22:56 |
* fungi looks | 22:56 | |
fungi | yep | 22:56 |
*** pelix has joined #openstack-infra | 22:56 | |
fungi | oh, nope, that was over an hour ago | 22:56 |
fungi | arosen: ahh, i think it was in the middle of being tested, got aborted and restored during the zuul restart, and the status page shows it there now | 22:57 |
*** rnirmal has quit IRC | 22:58 | |
fungi | all its tests are started except py26, which is waiting on an available centos6 slave | 22:58 |
arosen | fungi: ah i see it now on http://status.openstack.org/zuul/ didn't see it there a few min ago. | 22:58 |
arosen | thanks! | 22:58 |
fungi | np | 22:58 |
jeblair | fungi: i'm guessing it's because two processes can't share a socket. :) | 22:59 |
fungi | jeblair: this is not a surprise | 22:59 |
*** jroovers has joined #openstack-infra | 22:59 | |
fungi | presumably they need different local port numbers | 22:59 |
jeblair | i think the current statsd initializes a global object with a socket | 23:00 |
fungi | i mean, it's a surprise in that i didn't think of it | 23:00 |
jeblair | and the zuul server command imports gear which imports statsd before the fork | 23:00 |
jeblair | so both processes end up with a statsd object with an initialized socket | 23:00 |
*** dizquierdo has quit IRC | 23:00 | |
jeblair | i _think_ the newer statsd library changed this, but i think zuul still needs updating to use it | 23:01 |
fungi | i wonder if it wouldn't be cleaner to start with one parent process and then fork the zuul and gearman processes from that before importing statsd | 23:01 |
*** sparkycollier has quit IRC | 23:01 | |
jeblair | so quick fix is just to move the gear import down into the 'start geard' function | 23:02 |
fungi | oh, if they've fixed statsd connection object sharing in the module, then yeah, totally | 23:02 |
*** sarob has quit IRC | 23:02 | |
jeblair | i think we should move the import for now, then move it back if that's fixed later | 23:02 |
fungi | quick fixes until we can upgrade to that, right | 23:02 |
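The fix being settled on amounts to making sure the statsd UDP socket is created after the fork rather than at import time, so parent and child don't share one descriptor. A hedged sketch of that pattern (`make_statsd_sender` is illustrative, not zuul's or the statsd library's API):

```python
import socket

def make_statsd_sender(host, port):
    # The socket is created when the sender is built, so calling this
    # *after* daemonizing/forking gives each process its own descriptor
    # instead of one inherited from (and shared with) the parent -- the
    # failure mode debugged above, where a module-level socket was
    # initialized at import time, before the fork.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send(metric):
        # Plain statsd text protocol: "name:value|type"
        sock.sendto(metric.encode("ascii"), (host, port))

    return send
```

Deferring the `import gear` into the function that runs post-fork, as the review above does, achieves the same effect without touching the statsd library itself.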
*** sarob has joined #openstack-infra | 23:03 | |
*** burt has quit IRC | 23:04 | |
*** sarob_ has joined #openstack-infra | 23:04 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/zuul: Move gear import to a safe place https://review.openstack.org/65561 | 23:05 |
fungi | mikal: if you're near an internet, garyk posted to the -dev ml with something which might be a turbo-hipster implementation bug (problem with dependent patchsets, sounds like) | 23:07 |
fungi | mikal: Subject: [openstack-dev] [nova][turbo hipster] unable to rebase | 23:07 |
*** hashar has quit IRC | 23:07 | |
*** jamielennox|away is now known as jamielennox | 23:07 | |
*** sarob has quit IRC | 23:07 | |
sdake | thanks fungi | 23:08 |
*** dims_ has joined #openstack-infra | 23:08 | |
*** thuc has joined #openstack-infra | 23:09 | |
*** eharney has quit IRC | 23:10 | |
*** mriedem has quit IRC | 23:14 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/zuul: Move gear import to a safe place https://review.openstack.org/65561 | 23:15 |
*** mrodden1 has quit IRC | 23:17 | |
*** thuc has quit IRC | 23:18 | |
*** dmsimard has left #openstack-infra | 23:18 | |
*** yassine has quit IRC | 23:21 | |
*** sarob_ has quit IRC | 23:21 | |
*** sarob has joined #openstack-infra | 23:22 | |
*** ryanpetrello has quit IRC | 23:23 | |
*** ryanpetrello has joined #openstack-infra | 23:25 | |
jeblair | fungi: i'm waiting for a reset then i plan to apply that zuul patch manually | 23:25 |
*** sarob has quit IRC | 23:26 | |
fungi | jeblair: sounds like a fine idea | 23:29 |
fungi | it's still waiting on a py26 slave, but it passed py27 and the rest just fine | 23:30 |
fungi | should be safe | 23:30 |
fungi | especially since we're not running zuul on centos ourselves | 23:30 |
*** thuc has joined #openstack-infra | 23:33 | |
*** dstanek has joined #openstack-infra | 23:33 | |
*** jgrimm has quit IRC | 23:33 | |
jeblair | david-lyle: https://wiki.openstack.org/wiki/GerritJenkinsGithub#Tagging_a_Release | 23:36 |
jeblair | david-lyle: that was written for thierry, but if you follow the same instructions but substitute master instead of milestone-proposed, you should get a release automatically built and uploaded to pypi | 23:37 |
*** dstanek has quit IRC | 23:38 | |
fungi | did i miss a question in scrollback, or was this a contextual change of venue? | 23:38 |
david-lyle | jeblair: thanks | 23:38 |
jeblair | fungi: change of venue | 23:38 |
jerryz | fungi: if you have time, could you please take a look at this: https://review.openstack.org/#/c/65178/ and add initial reviewers to the group? https://bugs.launchpad.net/openstack-ci/+bug/1266603. Thanks | 23:38 |
david-lyle | fungi: I asked about releasing django_openstack_auth | 23:38 |
fungi | oh, good. just making sure i hadn't gone blind | 23:38 |
david-lyle | but not here | 23:39 |
fungi | my network has been terrible the last couple days, so now i'm paranoid i'm dropping from irc | 23:39 |
*** wenlock has quit IRC | 23:39 | |
*** wenlock has joined #openstack-infra | 23:40 | |
david-lyle | django_openstack_auth now uses the __init__.py file to specify the version number; would it be better to switch to pbr.version.VersionInfo? | 23:40 |
fungi | jerryz: i flagged the bug so i wouldn't forget to check it once the change merges | 23:40 |
jeblair | david-lyle: oh, yeah, if you do that, it will get filled in automatically based on the tag | 23:40 |
*** ruhe is now known as _ruhe | 23:41 | |
david-lyle | jeblair: ok, I will make that change, seems like a better model | 23:41 |
jeblair | david-lyle: yeah, i like it -- way less work. :) | 23:41 |
*** prad has quit IRC | 23:41 | |
david-lyle | +1 less work | 23:42 |
*** sarob has joined #openstack-infra | 23:44 | |
jeblair | reset inbound | 23:45 |
* fungi braces | 23:46 | |
jeblair | stopped | 23:50 |
fungi | statsd traffic burst alert! | 23:54 |
jeblair | yay! | 23:54 |
jeblair | reloading queues | 23:54 |
fungi | and there goes a ton more | 23:54 |
openstackgerrit | Eli Klein proposed a change to openstack-infra/jenkins-job-builder: Added rbenv-env wrapper https://review.openstack.org/65352 | 23:55 |
*** slong_ has joined #openstack-infra | 23:57 | |
*** senk has quit IRC | 23:57 | |
jeblair | fungi: precise7 went rogue | 23:57 |
jeblair | fungi: i disconnected it | 23:57 |
*** slong has quit IRC | 23:57 | |
fungi | okay, i'll have a look | 23:57 |
jeblair | fungi: i'm going to redo the restart | 23:58 |
jeblair | since precise7 took out the top of the queue | 23:58 |
*** senk has joined #openstack-infra | 23:58 | |
*** ryanpetrello has quit IRC | 23:59 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!