*** ChanServ has quit IRC | 00:12 | |
myoung | rlandy|rover: ack thx | 00:15 |
*** yolanda_ has quit IRC | 00:19 | |
*** yolanda_ has joined #oooq | 00:20 | |
hubbot | All check jobs are working fine on stable/ocata, stable/ocata, master, stable/queens. | 00:22 |
*** rlandy|rover has quit IRC | 00:33 | |
*** ChanServ has joined #oooq | 00:33 | |
*** barjavel.freenode.net sets mode: +o ChanServ | 00:33 | |
*** atoth has quit IRC | 00:35 | |
hubbot | All check jobs are working fine on stable/queens, stable/ocata, stable/ocata, master. | 02:22 |
*** ykarel|away has joined #oooq | 02:46 | |
*** rlandy has joined #oooq | 02:51 | |
chandankumar | rfolco: I have asked release delivery guys to tag it under unittesting repo | 03:55 |
*** udesale has joined #oooq | 03:58 | |
*** ykarel|away is now known as ykarel | 03:59 | |
*** rlandy has quit IRC | 04:05 | |
*** ykarel is now known as ykarel|afk | 04:07 | |
*** links has joined #oooq | 04:11 | |
hubbot | All check jobs are working fine on stable/ocata, stable/ocata, master, stable/queens. | 04:23 |
chandankumar | rfolco: future is also available under the unittesting repo, you are good to go | 04:38 |
*** marios has joined #oooq | 04:40 | |
*** pgadiya has joined #oooq | 04:42 | |
*** pgadiya has quit IRC | 04:44 | |
*** ykarel|afk is now known as ykarel | 05:18 | |
*** quiquell|off is now known as quiquell | 05:33 | |
Tengu | hello! small question: is there a way to mimic tripleo-ci-centos-7-scenario001-multinode-oooq-container with quickstart? (or any CI env/tests) | 05:56 |
quiquell | Tengu: the reproducer now works with libvirt and RDO cloud | 05:59 |
Tengu | quiquell: oh? | 06:01 |
Tengu | that's really, really good news. Will check how to do that. | 06:01 |
quiquell | Tengu: Do you have a failing build from zuul ? | 06:02 |
Tengu | quiquell: yep | 06:03 |
Tengu | http://logs.openstack.org/27/570627/15/check/tripleo-ci-centos-7-scenario007-multinode-oooq-container/69c694f/logs/ | 06:03 |
Tengu | so I basically download the reproduce-quickstart.sh file - I'll read it in order to find out how to use a libvirt host. | 06:04 |
quiquell | yep there is a --with-libvirt or similar | 06:04 |
Tengu | ... but first I have to bring my builder host back up, apparently it went down -.- | 06:05 |
quiquell | Tengu: It's kinda new, so maybe you'll find some problems, let us know | 06:05 |
Tengu | ok :) | 06:05 |
quiquell | sshnaidm: You there ? | 06:12 |
quiquell | sshnaidm: Going to resize the ruck-rover-dashboard to a bigger flavor | 06:15 |
Tengu | quiquell: erf... apparently it lacks all the env setup... Moreover, it would be great to be able to pass the virthost IP (i.e. I'm not running that on my laptop, I have a desktop dedicated to this kind of heavy task) | 06:15 |
Tengu | quiquell: I just edited the script and replaced the IP address, not a blocker, but the ansible errors... | 06:15 |
quiquell | Tengu: If you use libvirt you have to run it on your desktop | 06:16 |
quiquell | It will be the libvirt host | 06:16 |
Tengu | quiquell: hmmm ok. so it doesn't work like quickstart, copying and running the recipes on the remote node, right? | 06:16 |
quiquell | Tengu: It does, but the remote is just the undercloud, which is a libvirt image running on the host | 06:17 |
quiquell | So the remoting really happens inside the virthost, from the host to the image | 06:18 |
Tengu | quiquell: hmm wait - do I have to run the reproducer from an undercloud node, or from a "naked" BM node with libvirt? | 06:19 |
quiquell | Tengu: the place where libvirt is running | 06:19 |
Tengu | ok, so my BM | 06:20 |
quiquell | Yep | 06:20 |
Tengu | let's try it then :) | 06:20 |
quiquell | Sure, let me know how it goes (I haven't tried the libvirt option yet) | 06:20 |
Tengu | there are some steps to ensure prior to running it: ssh access to localhost without a password, sudo without a password and so on. best to do this as a dedicated user you can drop later. | 06:22 |
hubbot | All check jobs are working fine on stable/ocata, stable/ocata, master, stable/queens. | 06:23 |
*** jtomasek has joined #oooq | 06:25 | |
quiquell | Tengu: I usually run a tmux on the virthost, it helps me keep working | 06:26 |
Tengu | quiquell: ah, well, of course ;). I always run a tmux on remote computers :). | 06:27 |
quiquell | Tengu: Cool cool | 06:28 |
*** jtomasek has quit IRC | 06:28 | |
*** ratailor has joined #oooq | 06:33 | |
*** holser__ has joined #oooq | 06:43 | |
Tengu | quiquell: small question: where's the git repo for "tripleo-ci"? I'd like to see how it works :). | 06:59 |
quiquell | openstack-infra/tripleo-ci | 07:00 |
Tengu | ah, good | 07:00 |
Tengu | thank you! | 07:00 |
quiquell | yw | 07:00 |
*** tosky has joined #oooq | 07:06 | |
*** ccamacho has joined #oooq | 07:11 | |
*** jaosorior has joined #oooq | 07:15 | |
Tengu | quiquell: also, how long are the build logs kept on the CI infra (if you know that, of course)? I guess they become pretty heavy in terms of disk storage and have to be cleaned on a regular basis... | 07:16 |
quiquell | Tengu: I have to leave, be back in a few | 07:17 |
quiquell | Tengu: Don't know those details, you can ask at #openstack-infra | 07:18 |
*** quiquell is now known as quiquell|afk | 07:18 | |
Tengu | quiquell|afk: np, thanks :) | 07:18 |
*** zz_saneax has joined #oooq | 07:18 | |
*** zz_saneax is now known as saneax | 07:19 | |
sshnaidm | quiquell|afk, ok | 07:19 |
*** sshnaidm is now known as sshnaidm_pto | 07:20 | |
*** tesseract has joined #oooq | 07:20 | |
*** apetrich has quit IRC | 07:21 | |
*** apetrich has joined #oooq | 07:23 | |
*** florianf has joined #oooq | 07:23 | |
*** jtomasek has joined #oooq | 07:25 | |
*** amoralej|off is now known as amoralej | 07:27 | |
*** jtomasek has quit IRC | 07:31 | |
*** skramaja has joined #oooq | 07:32 | |
*** ykarel is now known as ykarel|lunch | 07:36 | |
*** ykarel|lunch has quit IRC | 07:47 | |
*** quiquell|afk is now known as quiquell | 08:11 | |
hubbot | All check jobs are working fine on stable/ocata, stable/ocata, master, stable/queens. | 08:23 |
*** ykarel|lunch has joined #oooq | 08:48 | |
*** brault has joined #oooq | 08:55 | |
*** ykarel|lunch is now known as ykarel | 08:57 | |
*** ykarel is now known as ykarel|away | 09:06 | |
*** ykarel|away has quit IRC | 09:12 | |
*** jtomasek has joined #oooq | 09:24 | |
*** jtomasek has quit IRC | 09:25 | |
*** zoli is now known as zoli|lunch | 09:31 | |
*** udesale_ has joined #oooq | 09:31 | |
*** udesale_ has quit IRC | 09:32 | |
*** udesale_ has joined #oooq | 09:32 | |
*** udesale_ has quit IRC | 09:32 | |
*** udesale_ has joined #oooq | 09:33 | |
*** udesale has quit IRC | 09:33 | |
*** dtantsur|afk is now known as dtantsur | 09:38 | |
*** sanjayu_ has joined #oooq | 10:01 | |
*** udesale__ has joined #oooq | 10:02 | |
*** udesale__ has quit IRC | 10:02 | |
*** udesale has joined #oooq | 10:03 | |
*** udesale_ has quit IRC | 10:05 | |
*** jaosorior has quit IRC | 10:14 | |
hubbot | All check jobs are working fine on stable/queens, stable/ocata, stable/ocata, master. | 10:23 |
*** jaosorior has joined #oooq | 10:31 | |
*** sanjayu_ has quit IRC | 10:39 | |
*** saneax has quit IRC | 10:46 | |
*** saneax has joined #oooq | 10:48 | |
*** sanjay__u has quit IRC | 10:54 | |
*** zoli|lunch is now known as zoli | 11:00 | |
*** quiquell is now known as quiquell|lunch | 11:01 | |
*** moguimar has joined #oooq | 11:08 | |
*** ykarel has joined #oooq | 11:28 | |
*** udesale_ has joined #oooq | 11:28 | |
*** udesale has quit IRC | 11:31 | |
*** udesale_ has quit IRC | 11:33 | |
*** ykarel has quit IRC | 11:37 | |
*** amoralej is now known as amoralej|lunch | 11:38 | |
*** rlandy has joined #oooq | 11:56 | |
ssbarnea | apparently using quickstart with ANSIBLE_STRATEGY=debug is a PITA, it triggers the debugger on lots of tasks. | 12:00 |
rlandy | arxcruz|ruck: ping - I'm on the platform meeting | 12:03 |
arxcruz|ruck | rlandy: me too | 12:03 |
*** rlandy is now known as rlandy|rover | 12:03 | |
arxcruz|ruck | rlandy: already updated the doc | 12:03 |
rlandy|rover | arxcruz|ruck: do you need me or should I drop? | 12:03 |
arxcruz|ruck | rlandy|rover: you can drop if you want, we are green :) | 12:03 |
arxcruz|ruck | it will be fast | 12:04 |
rlandy|rover | arxcruz|ruck: what's the latest on https://bugs.launchpad.net/tripleo/+bug/1773445? | 12:06 |
openstack | Launchpad bug 1773445 in tripleo "tripleo-quickstart-extras-gate-newton-delorean-full-minimal fails to install undercloud - Access denied for user 'heat'@'192.168.24.1" [High,Triaged] | 12:06 |
arxcruz|ruck | rlandy|rover: I updated the bug, I have a PR on puppet-certmonger | 12:06 |
arxcruz|ruck | waiting for jaosorior's friends to approve it, then we can move on to the openstack side to fix it | 12:07 |
arxcruz|ruck | rlandy|rover: https://github.com/saltedsignal/puppet-certmonger/pull/20 | 12:07 |
rlandy|rover | cool | 12:07 |
*** panda|off is now known as panda | 12:10 | |
*** ratailor has quit IRC | 12:11 | |
panda | does anyone have today's candidate's resume? | 12:13 |
*** atoth has joined #oooq | 12:16 | |
*** quiquell|lunch is now known as quiquell | 12:17 | |
rlandy|rover | panda: not sure we got one | 12:18 |
arxcruz|ruck | panda: i have his facebook :P | 12:18 |
panda | arxcruz|ruck: even better | 12:18 |
quiquell | humm... did you guys found my facebook before my interview ? | 12:20 |
arxcruz|ruck | quiquell: of course, it was the only reason we hired you lol | 12:22 |
panda | quiquell: I really liked how you decorated the house. | 12:23 |
hubbot | All check jobs are working fine on stable/queens, stable/ocata, stable/ocata, master. | 12:23 |
panda | hubbot: thanks for checking stable/ocata twice | 12:23 |
hubbot | panda: Error: "thanks" is not a valid command. | 12:23 |
rlandy|rover | ok - this is starting to get creepy | 12:23 |
panda | myoung: do you know if it's checking stable/pike at all ? ^ | 12:24 |
arxcruz|ruck | hubbot: kill | 12:25 |
hubbot | arxcruz|ruck: Error: "kill" is not a valid command. | 12:25 |
arxcruz|ruck | at least kill is not a valid command | 12:25 |
arxcruz|ruck | hubbot: love | 12:25 |
hubbot | arxcruz|ruck: Error: "love" is not a valid command. | 12:25 |
*** udesale has joined #oooq | 12:29 | |
panda | I think we need a completely different approach with this candidate. I'm looking at his reviews | 12:29 |
*** trown|outtypewww is now known as trown | 12:31 | |
*** ykarel has joined #oooq | 12:38 | |
myoung | panda: afaik, it's checking the following patches | 12:41 |
*** amoralej|lunch is now known as amoralej | 12:41 | |
myoung | %config plugins.GateStatus.changeIDs | 12:41 |
hubbot | myoung: I0cbf9ffb8552411e4dd891c38702ff8d1f6db5b1 I214272a6f25feb75496e44eb0a16269c6ee4cfe2 I4c5bdf00ce8cf7eabf669b248b99cb8443e82fab If12c8fe9bd0bea98a4842f279399285344f22246 | 12:41 |
myoung | panda: this is... | 12:42 |
myoung | TQE, https://review.openstack.org/#/c/560445, I214272a6f25feb75496e44eb0a16269c6ee4cfe2 | 12:42 |
myoung | THT, https://review.openstack.org/#/c/567224, I0cbf9ffb8552411e4dd891c38702ff8d1f6db5b1, stable/queens | 12:42 |
myoung | THT, https://review.openstack.org/#/c/564285, If12c8fe9bd0bea98a4842f279399285344f22246, stable/pike | 12:42 |
myoung | THT, https://review.openstack.org/#/c/564291, I4c5bdf00ce8cf7eabf669b248b99cb8443e82fab, stable/ocata | 12:42 |
myoung | %gatestatus | 12:42 |
panda | myoung: ok, so it's hubbot that is reporting incorrectly | 12:42 |
hubbot | All check jobs are working fine on stable/queens, stable/ocata, stable/ocata, master. | 12:42 |
myoung | yeah something's a little weird there | 12:42 |
myoung | yeah arxcruz|ruck noticed this yesterday | 12:43 |
myoung | https://review.openstack.org/#/c/564285 isn't being rechecked | 12:43 |
myoung | arxcruz|ruck: do we have a LP for that? | 12:43 |
myoung | ^^ that's a bug / problem | 12:44 |
myoung | arxcruz|ruck, rlandy|rover, if I had to guess it's maybe a typo in the .conf file on the hubbot instance? | 12:44 |
*** udesale has quit IRC | 12:47 | |
*** udesale has joined #oooq | 12:47 | |
arxcruz|ruck | myoung: we don't, but I'll create one :) | 12:49 |
quiquell | panda: I have some logs about the n -> n + 1 job, do you have time to look at them with me? | 12:49 |
myoung | arxcruz|ruck: thx | 12:52 |
panda | quiquell: after the interview | 12:58 |
quiquell | ok | 12:58 |
trown | rlandy|rover: joining interview? | 13:00 |
rlandy|rover | myoung: sshnaidm_pto: hate to ask this question again - any idea why this job was disabled since 09/05 if it was passing at that point? https://ci.centos.org/job/tripleo-quickstart-promote-ocata-rdo_trunk-minimal/ | 13:04 |
rlandy|rover | there must have been a reason to do so | 13:04 |
myoung | amoralej: ^^ ? | 13:05 |
myoung | rlandy|rover: that's in rdo1, I don't have visibility. checking git history | 13:05 |
amoralej | mmmm | 13:05 |
rlandy|rover | it's our (RDO CI) job | 13:05 |
rlandy|rover | not amoralej's | 13:05 |
myoung | rlandy|rover: I just mean I don't recall patching anything to disable it | 13:05 |
rlandy|rover | although if he knows, I'd be grateful | 13:05 |
amoralej | i've seen it failing lately | 13:05 |
amoralej | but i'm not sure why it was disabled | 13:06 |
myoung | rlandy|rover: and I/we don't have access to either the nodes it runs on, or the jenkins server | 13:06 |
rlandy|rover | amoralej: I'm following this failing job here: https://bugs.launchpad.net/tripleo/+bug/1774079 | 13:06 |
openstack | Launchpad bug 1774079 in tripleo "[ocata promotion] phase1 (ci.centos) job tripleo-quickstart-promote-ocata-rdo_trunk-minimal fails introspection/deploy "No valid host found"" [Critical,Triaged] | 13:06 |
rlandy|rover | I am considering dropping it as a promotion criterion but I'd prefer not to | 13:06 |
rlandy|rover | problem is the failure is not consistent | 13:07 |
rlandy|rover | myoung: the only reason I am asking you guys is I am guessing the disable happened during the ruck/rover shift you had | 13:08 |
myoung | rlandy|rover: i'm going thru git history now, was the job disabled by hand using jenkins UI in admin mode (we don't have this)? I'm not seeing any commits that would disable the job during that period | 13:09 |
myoung | also looking back thru notes from that sprint | 13:10 |
ykarel | rlandy|rover, is that ocata job reproducable locally? | 13:10 |
ykarel | i mean has anyone tried that | 13:10 |
rlandy|rover | ykarel: we have no access to the exact hardware - and we don't create a reproducer there | 13:11 |
rlandy|rover | so the closest we can get is a virt job on our own hardware | 13:11 |
* myoung reconstructs history (http://sol.usersys.redhat.com/dlrnapi-reports/ocata-combined.txt) | 13:13 | |
*** ykarel_ has joined #oooq | 13:13 | |
ykarel_ | i think a local reproducer would help in getting to the root cause | 13:15 |
myoung | 2018-05-09 10:28:33, https://trunk.rdoproject.org/centos7-ocata/b8/d5/b8d5d3b2f3937e2063192a4fb3b97e8eabe56383_1edce40d, current-tripleo-rdo looks to be the last rdo1 promotion, with the next current-tripleo promotion not happening until 5/22 | 13:15 |
myoung | 2018-05-22 02:41:04, https://trunk.rdoproject.org/centos7-ocata/5c/13/5c13ff81a5466b3ee8a23e3b910f8cc7a66995b6_5e0d17be, current-tripleo | 13:15 |
myoung | which spawned https://ci.centos.org/job/rdo_trunk-promote-ocata-current-tripleo/452 | 13:15 |
myoung | which had the sub-job https://ci.centos.org/job/tripleo-quickstart-promote-ocata-rdo_trunk-minimal/334, which is the next job after 5/9 | 13:15 |
myoung | so it appears nothing was disabled per se between 5/9 and 5/22, we just didn't have a current-tripleo promotion | 13:16 |
myoung | rlandy|rover: ^^ | 13:16 |
*** ykarel has quit IRC | 13:16 | |
rlandy|rover | oh ok - nothing from 09 | 13:16 |
rlandy|rover | ok | 13:16 |
myoung | i've also confirmed that JJB for those jobs didn't change | 13:16 |
myoung | looks like 13 attempts upstream to promote tripleo-ci-testing --> current-tripleo, there's a good deal of gap | 13:18 |
myoung | if i recall correctly this was also during the period where master and queens were broken due to container issues, as well as the concurrent centos 7.5 release (and the hilarity in gates for a week) during that sprint, so why ocata upstream wasn't promoting between 5/9 and 5/22 isn't clear...our focus wasn't there. I'm reviewing sprint notes to see if there are some clues... | 13:19 |
*** ykarel_ is now known as ykarel | 13:21 | |
myoung | rlandy|rover: finished diving thru notes, and we were focused on master/queens, and a mostly-DOA rdo2 from the beginning of the sprint. the ocata virt minimal job appears to have been failing in introspection starting 5/22 (https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-ocata-rdo_trunk-minimal-334/undercloud/home/stack/overcloud_prep_images.log.gz) and persisting with the same error until today https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-ocata-rdo_trunk-minimal-353/undercloud/home/stack/overcloud_prep_images.log.gz | 13:28 |
*** links has quit IRC | 13:29 | |
rlandy|rover | myoung: thanks - yep - sometimes we fail introspection and sometimes we fail deploy | 13:29 |
myoung | 2018-05-29 21:17:56.478 20823 INFO workflow_trace [-] Workflow 'tripleo.baremetal.v1.introspect_manageable_nodes' [RUNNING -> ERROR, msg=Failure caused by error in tasks: fail_workflow | 13:29 |
myoung | ^^ https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-ocata-rdo_trunk-minimal-353/undercloud/var/log/mistral/engine.log.gz | 13:29 |
*** links has joined #oooq | 13:30 | |
myoung | that error looks to be preceded by | 13:30 |
myoung | 2018-05-29 21:11:56.173 20896 INFO swiftclient [-] REQ: curl -i https://192.168.24.2:13808/v1/AUTH_cd277c3ea8e142aebf89d4c008ac886f/overcloud -I -H "X-Auth-Token: gAAAAABbDcIXgFli..." | 13:30 |
myoung | 2018-05-29 21:11:56.173 20896 INFO swiftclient [-] RESP STATUS: 404 Not Found | 13:30 |
myoung | 2018-05-29 21:11:56.174 20896 INFO swiftclient [-] RESP HEADERS: {u'Date': u'Tue, 29 May 2018 21:11:56 GMT', u'Content-Length': u'0', u'Content-Type': u'text/html; charset=UTF-8', u'X-Openstack-Request-Id': u'tx862953ad45dc456aaea3a-005b0dc21b', u'X-Trans-Id': u'tx862953ad45dc456aaea3a-005b0dc21b'} | 13:30 |
myoung | 2018-05-29 21:11:56.177 20896 WARNING mistral.actions.openstack.base [-] Traceback (most recent call last): | 13:30 |
myoung | File "/usr/lib/python2.7/site-packages/mistral/actions/openstack/base.py", line 127, in run | 13:30 |
myoung | result = method(**self._kwargs_for_run) | 13:30 |
myoung | File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1735, in head_container | 13:30 |
myoung | return self._retry(None, head_container, container, headers=headers) | 13:30 |
myoung | File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1673, in _retry | 13:30 |
myoung | service_token=self.service_token, **kwargs) | 13:30 |
myoung | File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 977, in head_container | 13:30 |
myoung | resp, 'Container HEAD failed', body) | 13:30 |
myoung | ClientException: Container HEAD failed: https://192.168.24.2:13808/v1/AUTH_cd277c3ea8e142aebf89d4c008ac886f/overcloud 404 Not Found | 13:31 |
myoung | in executor log | 13:31 |
myoung | sry i'm probably just retracing your debug steps and spamming the channel | 13:31 |
myoung | could this just be swift / infra on the nodes up there and not a product issue? | 13:32 |
arxcruz|ruck | myoung: could be swift didn't start ? | 13:34 |
arxcruz|ruck | because it's 404 | 13:34 |
* myoung is looking at swift logs too | 13:35 | |
myoung | arxcruz|ruck, rlandy|rover, so swift is up and running, here's the request coming in afaict... | 13:38 |
myoung | 2018-05-29 21:11:56.173 20896 INFO swiftclient [-] REQ: curl -i https://192.168.24.2:13808/v1/AUTH_cd277c3ea8e142aebf89d4c008ac886f/overcloud -I -H "X-Auth-Token: gAAAAABbDcIXgFli..." | 13:38 |
myoung | 2018-05-29 21:11:56.173 20896 INFO swiftclient [-] RESP STATUS: 404 Not Found | 13:38 |
myoung | 2018-05-29 21:11:56.174 20896 INFO swiftclient [-] RESP HEADERS: {u'Date': u'Tue, 29 May 2018 21:11:56 GMT', u'Content-Length': u'0', u'Content-Type': u'text/html; charset=UTF-8', u'X-Openstack-Request-Id': u'tx862953ad45dc456aaea3a-005b0dc21b', u'X-Trans-Id': u'tx862953ad45dc456aaea3a-005b0dc21b'} | 13:38 |
myoung | 2018-05-29 21:11:56.177 20896 WARNING mistral.actions.openstack.base [-] Traceback (most recent call last): | 13:38 |
myoung | File "/usr/lib/python2.7/site-packages/mistral/actions/openstack/base.py", line 127, in run | 13:38 |
myoung | result = method(**self._kwargs_for_run) | 13:38 |
myoung | File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1735, in head_container | 13:38 |
myoung | return self._retry(None, head_container, container, headers=headers) | 13:38 |
myoung | File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1673, in _retry | 13:38 |
myoung | service_token=self.service_token, **kwargs) | 13:38 |
myoung | File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 977, in head_container | 13:38 |
myoung | resp, 'Container HEAD failed', body) | 13:38 |
myoung | ClientException: Container HEAD failed: https://192.168.24.2:13808/v1/AUTH_cd277c3ea8e142aebf89d4c008ac886f/overcloud 404 Not Found | 13:38 |
arxcruz|ruck | :( | 13:38 |
myoung | shiz | 13:38 |
myoung | paste fail | 13:38 |
myoung | this rather | 13:38 |
arxcruz|ruck | pastebin | 13:38 |
myoung | May 29 21:11:55 undercloud haproxy[15814]: Connect from 192.168.24.3:37426 to 192.168.24.3:35357 (keystone_admin/HTTP) | 13:38 |
myoung | May 29 21:11:56 undercloud proxy-server: 192.168.24.1 192.168.24.1 29/May/2018/21/11/56 HEAD /v1/AUTH_cd277c3ea8e142aebf89d4c008ac886f/overcloud HTTP/1.0 404 - python-swiftclient-3.3.0 gAAAAABbDcIXgFli... - - - tx862953ad45dc456aaea3a-005b0dc21b - 0.5826 - - 1527628315.588342905 1527628316.170984983 - | 13:38 |
myoung | ^^ https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-ocata-rdo_trunk-minimal-353/undercloud/var/log/swift/swift.log.gz | 13:38 |
arxcruz|ruck | so the overcloud swift is missing | 13:39 |
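The 404 above is on a container HEAD, so swift itself answered; it is the 'overcloud' container that is absent. Below is a minimal sketch (not taken from this log) of how that check could be reproduced with python-swiftclient; the endpoint, project id and token are placeholders, and `insecure=True` is only an assumption about the undercloud's self-signed TLS.

```python
# Sketch: reproduce the failing container HEAD to tell "swift is down"
# apart from "the 'overcloud' container is missing".
from swiftclient.client import Connection
from swiftclient.exceptions import ClientException

conn = Connection(
    preauthurl="https://192.168.24.2:13808/v1/AUTH_<project-id>",  # placeholder
    preauthtoken="<keystone-token>",                               # placeholder
    insecure=True,  # assumption: self-signed TLS on the undercloud
)

try:
    headers = conn.head_container("overcloud")
    print("container exists:", headers.get("x-container-object-count"), "objects")
except ClientException as exc:
    if exc.http_status == 404:
        # Matches the log above: swift answered, but the 'overcloud'
        # container was never created (or was deleted).
        print("swift is up, but the 'overcloud' container is missing")
    else:
        raise
```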
rlandy|rover | I'd like to blame this on infra | 13:39 |
myoung | rlandy|rover: yeah...i bet if we ran this job on one of our beefier virthosts in RDU it would pass, or at least not fail like this | 13:40 |
rlandy|rover | myoung: the fun thing is that pike/master/queens pass | 13:40 |
myoung | huh... | 13:41 |
* myoung wonders if a backport --> stable/ocata was missed somewhere | 13:41 | |
myoung | or if this is an ocata only issue | 13:41 |
rlandy|rover | only ocata | 13:41 |
myoung | we could ping OSP qe folk, or do a quick bz search to see if it's been found / observed there | 13:42 |
* myoung looks to see last puddle import and flips to #rhos-delivery | 13:42 | |
myoung | i do have fond memories of a pile of introspection random fails in ocata timeframe | 13:42 |
myoung | and by "fond" I mean "My brain is blocking the memories to protect itself" | 13:43 |
myoung | :) | 13:43 |
rlandy|rover | I have been searching | 13:43 |
myoung | rlandy|rover: so imports to ocata have not occurred in 21 days | 13:44 |
myoung | so the last OSP puddle for ocata would be prior to when we started to hit this... | 13:44 |
*** links has quit IRC | 13:44 | |
* myoung confirms | 13:44 | |
myoung | rlandy|rover: huh...dashboard might be off...seeing a pile of puddles for 10 | 13:48 |
myoung | flipping to internal channel | 13:48 |
rlandy|rover | myoung: let's bj - when review is over | 13:48 |
rfolco | chandankumar, hi, still don't see python2-{future,stestr} on http://download-node-02.eng.bos.redhat.com/rel-eng/repos/rhos-12.0-rhel-7-testdeps/x86_64/ | 13:48 |
rlandy|rover | board is correct we have not promoted ocata | 13:48 |
Tengu | hello there! for information, this patch will impact quickstart: https://review.openstack.org/#/c/570627/ - care to have a look, as well as its quickstart-extras part: https://review.openstack.org/#/c/570841/ (especially since there will be a version check)? thank you :) | 13:51 |
myoung | rlandy|rover: aye...but they have been pulling changes and making puddles anyway, see chatter in rhos-delivery - | 13:52 |
ssbarnea | out of curiosity, is it normal to see index.html.gz on ara reports? the browser no longer opens the HTML file as HTML if it is archived. see https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-gate-newton-delorean-quick-basic-5031/ara_oooq/ | 13:58 |
arxcruz|ruck | !gate | 14:02 |
openstack | arxcruz|ruck: Error: "gate" is not a valid command. | 14:02 |
arxcruz|ruck | !check | 14:02 |
openstack | arxcruz|ruck: Error: "check" is not a valid command. | 14:02 |
rlandy|rover | myoung: arxcruz|ruck: want to bj? | 14:02 |
panda | %gatestatus | 14:02 |
arxcruz|ruck | rlandy|rover: sure | 14:02 |
hubbot | All check jobs are working fine on stable/queens, stable/ocata, stable/ocata, master. | 14:02 |
myoung | rlandy|rover, arxcruz|ruck, so https://code.engineering.redhat.com/gerrit/#/c/139256, imports from ocata --> OSP are no longer (ever) happening as osp11 is now officially EOL anyway | 14:02 |
myoung | sure | 14:02 |
arxcruz|ruck | panda: i want to check which changes hubbot is listening to | 14:03 |
panda | ssbarnea: yes, I would solve it using class.mro | 14:03 |
myoung | imho we care even less about this | 14:03 |
myoung | arxcruz|ruck: fyi you can also spam hubbot privately in a direct message to debug | 14:03 |
arxcruz|ruck | rlandy|rover: myoung which bj ? | 14:03 |
panda | arxcruz|ruck: %config plugins.GateStatus.changeIDs | 14:03 |
rlandy|rover | https://bluejeans.com/u/rlandy/ | 14:03 |
rlandy|rover | arxcruz|ruck: myoung: ^^ | 14:03 |
arxcruz|ruck | rlandy|rover: let me just grab more water | 14:03 |
myoung | cool incoming, i need 120 sec tho | 14:03 |
myoung | brt | 14:03 |
myoung | maybe 180 sec | 14:03 |
quiquell | panda: Have to go to the kindergarten | 14:08 |
*** EmilienM_PTO is now known as EmilienM | 14:08 | |
panda | quiquell: oh, ok, we just finished ... | 14:08 |
panda | quiquell: you're here tomorrow morning ? | 14:08 |
quiquell | panda: Yep, looks like n -> n + 1 featureset050 is working with rlandy|rover's change | 14:09 |
panda | quiquell: already ??? | 14:09 |
quiquell | I see build-test-packages and install repo at undercloud install, but have to check with someone else | 14:10 |
panda | quiquell: oh, but without change injection probably | 14:10 |
panda | oh, ok | 14:10 |
panda | yeah | 14:10 |
quiquell | panda: This is the change, take a look also to the Depends-On https://review.openstack.org/#/c/570888/ | 14:10 |
*** marios has quit IRC | 14:16 | |
myoung | rlandy|rover: what's your meeting id | 14:16 |
myoung | the # | 14:16 |
*** quiquell is now known as quique | 14:16 | |
*** quique is now known as quiquell|off | 14:16 | |
*** marios has joined #oooq | 14:16 | |
arxcruz|ruck | rasca: around ? | 14:18 |
myoung | arxcruz|ruck: rlandy|rover https://code.engineering.redhat.com/gerrit/#/c/139256 | 14:18 |
ssbarnea | panda: thanks for the MRO, added a bookmark. I also know why I didn't know it: I avoided multiple inheritance like the plague. i know one case where I avoided it by monkey patching Requests.session to implement retries on it. | 14:21 |
panda | ssbarnea: yes, exactly the case in which it's used, you have your base class but also need to override an existing API | 14:22 |
panda | ssbarnea: anyway it's something you also encounter when you start dealing with metaclasses | 14:23 |
hubbot | All check jobs are working fine on stable/queens, stable/ocata, stable/ocata, master. | 14:23 |
panda | but, advanced and very rare usage, hence the question was "bonus" | 14:23 |
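For the retry case mentioned above, the usual alternative to monkey patching Requests.session is to mount an HTTPAdapter configured with urllib3's Retry onto a Session. The sketch below is illustrative only; the retry counts, backoff and status codes are arbitrary choices, not anything prescribed in this discussion.

```python
# Sketch: retries on a requests Session without monkey patching.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_retrying_session(retries=5, backoff=0.5):
    session = requests.Session()
    retry = Retry(
        total=retries,
        backoff_factor=backoff,
        status_forcelist=(500, 502, 503, 504),  # retry typical transient errors
    )
    adapter = HTTPAdapter(max_retries=retry)
    # Mount on both schemes so every request through this session retries.
    session.mount("https://", adapter)
    session.mount("http://", adapter)
    return session

if __name__ == "__main__":
    s = make_retrying_session()
    print(s.get("https://ci.centos.org/").status_code)
```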
ssbarnea | the best bonus for me would be to avoid having to add jenkins groovy to my daily plate, i got enough of it, i find plain bash 10x better (and portable) | 14:26 |
ssbarnea | out of curiosity, what is your policy/practice regarding ansible compatibility? how bleeding edge or conservative? | 14:29 |
*** skramaja has quit IRC | 14:29 | |
*** moguimar has quit IRC | 14:32 | |
panda | ssbarnea: we are currently one release behind the stable | 14:32 |
*** moguimar has joined #oooq | 14:34 | |
ssbarnea | and i suppose that's usually until they release 2.x.1 to fix the bugs affecting you. i was considering creating a job downstream to test with prereleases in order to spot bugs before they break them. | 14:34 |
ssbarnea | i use devel locally sometimes, a source of joy :D | 14:35 |
panda | ssbarnea: we have to pin because moving means testing that every single job still works | 14:35 |
panda | ssbarnea: testing a single job is not enough unfortunately, we need 100% coverage. | 14:36 |
panda | ssbarnea: we know for example that 2.8 will be a pain, because they are deprecating some stuff we use | 14:36 |
rlandy|rover | panda: trown: are we going public with the libvirt reproducer? if so, I still have this doc review outstanding: https://review.openstack.org/#/c/566155/ | 14:46 |
panda | rlandy|rover: I think more than public, the first step is just test it to check if we are breaking it. Like trown said, if we go public then we need to maintain it. A whole new level of commitment... | 14:47 |
*** saneax has quit IRC | 14:47 | |
*** ykarel is now known as ykarel|away | 14:53 | |
rlandy|rover | quiquell|off: panda: can we w+1 https://review.openstack.org/#/c/568946/? | 14:56 |
rlandy|rover | wes and Emilien voted | 14:56 |
rasca | hey arxcruz|ruck you pinged me above here... Do you still need me? | 14:57 |
rlandy|rover | I'd like someone from the sprint to approve | 14:57 |
arxcruz|ruck | rasca: nah, already have my questions answered | 14:57 |
rasca | arxcruz|ruck, aCK | 14:59 |
panda | rlandy|rover: approved, with comment. | 15:00 |
rlandy|rover | p | 15:01 |
rlandy|rover | panda: thanks | 15:01 |
myoung | arxcruz|ruck: do we have LP tracking the promoter networking / dns issue(s) that are causing us to have to run a private promoter? | 15:01 |
*** ykarel|away is now known as ykarel | 15:02 | |
panda | rlandy|rover: no no, thank you. | 15:06 |
rlandy|rover | myoung: arxcruz|ruck: not as far as I know | 15:14 |
rlandy|rover | I think we should try the rdocloud one again | 15:14 |
*** rfolco_ has joined #oooq | 15:21 | |
*** rfolco has quit IRC | 15:23 | |
arxcruz|ruck | myoung: no, do i need to open the bug for the ocata change ? | 15:29 |
myoung | arxcruz|ruck: we need an LP to track the promoter issue(s), would like to return to using the tripleo-infra instance. regarding the ocata issue we already have https://bugs.launchpad.net/tripleo/+bug/1774079 | 15:34 |
openstack | Launchpad bug 1774079 in tripleo "[ocata promotion] phase1 (ci.centos) job tripleo-quickstart-promote-ocata-rdo_trunk-minimal fails introspection/deploy "No valid host found"" [Critical,Triaged] | 15:34 |
arxcruz|ruck | myoung: https://bugs.launchpad.net/tripleo/+bug/1774220 | 15:35 |
openstack | Launchpad bug 1774220 in tripleo "Promoter server is having DNS issues" [High,Triaged] | 15:35 |
arxcruz|ruck | myoung: i meant the hubbot one | 15:36 |
*** udesale has quit IRC | 15:36 | |
myoung | arxcruz|ruck: updated https://bugs.launchpad.net/tripleo/+bug/1774220 | 15:38 |
openstack | Launchpad bug 1774220 in tripleo "Promoter server is impacted by networking issues and is offline (running private instance presently)" [Critical,Triaged] | 15:38 |
arxcruz|ruck | myoung: nice english words :P | 15:39 |
myoung | arxcruz|ruck: we suspect DNS, but don't know it... | 15:39 |
myoung | arxcruz|ruck: regarding hubbot issues, IMHO yes, please create an LP for that. I'm generally of the opinion that we should track all things in LP/storyboard (when we get there) as it's transparent, and helps us to understand where/how we spend time. Things not tracked are invisible. Lots of invisible effort yields burnout and other sorts of bad things. | 15:40 |
*** udesale has joined #oooq | 15:43 | |
arxcruz|ruck | myoung: do you have the change number ? | 15:43 |
arxcruz|ruck | hubbot: %config plugins.GateStatus.changeIDs | 15:44 |
hubbot | arxcruz|ruck: Error: "%config" is not a valid command. | 15:44 |
arxcruz|ruck | %config plugins.GateStatus.changeIDs | 15:44 |
hubbot | arxcruz|ruck: I0cbf9ffb8552411e4dd891c38702ff8d1f6db5b1 I214272a6f25feb75496e44eb0a16269c6ee4cfe2 I4c5bdf00ce8cf7eabf669b248b99cb8443e82fab If12c8fe9bd0bea98a4842f279399285344f22246 | 15:44 |
myoung | arxcruz|ruck: see the top few lines of sprint-14 etherpad | 15:45 |
arxcruz|ruck | k | 15:46 |
*** ykarel is now known as ykarel|away | 15:50 | |
*** ykarel|away has quit IRC | 16:02 | |
*** zoli is now known as zoli|gone | 16:04 | |
*** zoli|gone is now known as zoli | 16:04 | |
*** saneax has joined #oooq | 16:18 | |
*** panda is now known as panda|off | 16:19 | |
chandankumar | rfolco_: is this one [rhelosp-12.0-unittest] enabled? | 16:21 |
chandankumar | rfolco_: http://download-node-02.eng.bos.redhat.com/rel-eng/repos/rhos-12.0-rhel-7-testdeps/tagged both the packages are available there | 16:22 |
hubbot | All check jobs are working fine on stable/ocata, stable/ocata, master, stable/queens. | 16:23 |
rfolco_ | chandankumar, ok I tried to physically find the rpm | 16:24 |
rfolco_ | http://download-node-02.eng.bos.redhat.com/rel-eng/repos/rhos-12.0-rhel-7-testdeps/x86_64/ | 16:24 |
rfolco_ | chandankumar, looks like the tagged rpms are somewhere else | 16:24 |
*** trown is now known as trown|lunch | 16:26 | |
chandankumar | rfolco_: yup, try this one yum-config-manager --enable <unit test repo name> | 16:26 |
chandankumar | it will work | 16:26 |
*** udesale has quit IRC | 16:26 | |
*** marios has quit IRC | 16:27 | |
rfolco_ | chandankumar, cool thanks :) | 16:27 |
*** rlandy|rover is now known as rlandy|rover|brb | 16:49 | |
*** holser__ has quit IRC | 16:57 | |
*** dtantsur is now known as dtantsur|afk | 17:00 | |
*** amoralej is now known as amoralej|off | 17:07 | |
*** tesseract has quit IRC | 17:10 | |
*** trown|lunch is now known as trown | 17:39 | |
myoung | arxcruz|ruck: do we have the hubbot bug? | 17:40 |
*** rlandy|rover|brb is now known as rlandy|rover | 17:46 | |
*** gvrangan has joined #oooq | 17:53 | |
arxcruz|ruck | myoung: sorry, not yet, let me do that now for you, it's already 8pm here ;) | 17:54 |
rlandy|rover | and now we have a 504 deployment error | 17:56 |
rlandy|rover | it keeps changing | 17:56 |
arxcruz|ruck | myoung: actually, i think it's working, because i'm seeing logs for today, but no comments in gerrit, perhaps something changed? | 17:57 |
myoung | arxcruz|ruck: hubbot? | 18:02 |
myoung | %gatestatus | 18:02 |
hubbot | All check jobs are working fine on stable/ocata, stable/ocata, master, stable/queens. | 18:02 |
* myoung notes the duplicated stable/ocata | 18:02 | |
arxcruz|ruck | hmmm interesting | 18:03 |
arxcruz|ruck | funny part is stable/ocata has failures | 18:04 |
arxcruz|ruck | https://review.openstack.org/#/c/564291/ | 18:04 |
myoung | arxcruz|ruck: afaik it looks for 2 failures in a row | 18:05 |
myoung | so it sees the http://logs.openstack.org/91/564291/14/check/tripleo-ci-centos-7-undercloud-oooq/f420bbc, but isn't alerting until that same job fails a second time (I think) | 18:05 |
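A hypothetical sketch of the "two failures in a row" behaviour described above; this is only the logic as explained here, not hubbot's actual implementation, and the class and method names are invented for illustration.

```python
# Sketch: alert on a job only after it fails twice in a row.
from collections import defaultdict

class ConsecutiveFailureTracker:
    def __init__(self, threshold=2):
        self.threshold = threshold
        self.streaks = defaultdict(int)  # job name -> consecutive failures

    def record(self, job, success):
        """Record one result; return True when the job should be alerted on."""
        if success:
            self.streaks[job] = 0
            return False
        self.streaks[job] += 1
        return self.streaks[job] >= self.threshold

tracker = ConsecutiveFailureTracker()
print(tracker.record("tripleo-ci-centos-7-undercloud-oooq", success=False))  # False: first failure
print(tracker.record("tripleo-ci-centos-7-undercloud-oooq", success=False))  # True: second in a row
```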
arxcruz|ruck | myoung: ok | 18:06 |
arxcruz|ruck | i need to check the code for hubbot cuz i'm a bit lost :/ | 18:06 |
myoung | arxcruz|ruck: this might help | 18:07 |
myoung | https://trello.com/c/vdDrtoee/50-hubbot-is-private-code-running-on-a-private-server-lets-open-this-up-and-run-on-a-shared-instance | 18:07 |
myoung | and this: | 18:07 |
myoung | https://etherpad.openstack.org/p/tripleo-ci-hubbot-configuration | 18:07 |
myoung | ^^ bj recording there as well | 18:07 |
myoung | arxcruz|ruck: if I recall correctly https://trello.com/c/iWAz4ONC/73-hubbot-bot-add-two-dummy-changes-on-tht-to-watch#comment-5ae1a62c4600414413011626 was the last note/item on the hubbot config work | 18:08 |
myoung | (until now) | 18:09 |
*** ccamacho has quit IRC | 18:11 | |
rlandy|rover | can we manually kick the ocata promotion? | 18:20 |
rlandy|rover | from rdocloud | 18:20 |
rlandy|rover | on 24-hr cycle | 18:20 |
rlandy|rover | 4 days delayed | 18:20 |
rlandy|rover | https://logs.rdoproject.org/openstack-periodic-24hr/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-ocata-upload/7175801/undercloud/home/jenkins/overcloud_validate.log.txt.gz | 18:21 |
rlandy|rover | arxcruz|ruck: ^^ | 18:21 |
rlandy|rover | I'd like to rekick that once we figure out the dns issues | 18:21 |
rlandy|rover | could also be a dns problem | 18:21 |
rlandy|rover | myoung: actually - can we rekick ocata on the promoter server? | 18:23 |
rlandy|rover | yours? | 18:23 |
rlandy|rover | promotion should have happened on 05/28 | 18:23 |
hubbot | All check jobs are working fine on stable/ocata, stable/ocata, master, stable/queens. | 18:23 |
rlandy|rover | to current-tripleo | 18:24 |
myoung | rlandy|rover: ack...looking now | 18:37 |
rlandy|rover | myoung: last current-tripleo is 05/26 | 18:37 |
rlandy|rover | I think we should have had a successful promotion 05/28 | 18:38 |
rlandy|rover | maybe that was the latest hash? | 18:38 |
rlandy|rover | failures over the last two days | 18:38 |
myoung | rlandy|rover: here's past 3 days of promotions to ocata... | 18:39 |
myoung | 2018-05-30 00:00:27, https://trunk.rdoproject.org/centos7-ocata/00/91/00914c802ded15a6ad4643a1a8b277a5342ed5a6_e0df978b, tripleo-ci-testing | 18:39 |
myoung | 2018-05-29 00:00:35, https://trunk.rdoproject.org/centos7-ocata/d8/4a/d84a6ecd344b9c3513596b476172fa1c890a2fc6_1558157c, tripleo-ci-testing | 18:39 |
myoung | 2018-05-28 00:00:29, https://trunk.rdoproject.org/centos7-ocata/0a/d7/0ad78bda27846527d1b755cd7a248e35ef6c6932_787a5938, tripleo-ci-testing | 18:39 |
myoung | 2018-05-27 00:20:11, https://trunk.rdoproject.org/centos7-ocata/0a/d7/0ad78bda27846527d1b755cd7a248e35ef6c6932_787a5938, current-tripleo | 18:39 |
myoung | looking at logs now | 18:39 |
rlandy|rover | checking hashes | 18:42 |
myoung | rlandy|rover: most recent hash is missing fs2 | 18:42 |
myoung | https://paste.fedoraproject.org/paste/iiZmHsRP8Nd-cr9mvQV~dQ | 18:42 |
rlandy|rover | myoung: yeah - that is correct - that job failed | 18:43 |
rlandy|rover | should have passed on 05/28 though | 18:43 |
myoung | the hash prior (d84a6ecd344b9c3513596b476172fa1c890a2fc6_1558157c) is missing: 2018-05-30 13:44:11,342 25536 INFO promoter Skipping promotion of tripleo-ci-testing to current-tripleo, missing successful jobs: ['periodic-ovb-1ctlr_1comp-featureset002', 'periodic-ovb-1ctlr_1comp-featureset020', 'periodic-ovb-3ctlr_1comp-featureset001'] | 18:43 |
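A hypothetical sketch of the promotion check behind the "missing successful jobs" message quoted above: a hash is promoted only when every job in the criteria set reported success. The job names come from that log line; the function is illustrative and not the promoter's real code.

```python
# Sketch: promote only when all criteria jobs succeeded for the candidate hash.
CRITERIA = {
    "periodic-ovb-1ctlr_1comp-featureset002",
    "periodic-ovb-1ctlr_1comp-featureset020",
    "periodic-ovb-3ctlr_1comp-featureset001",
}

def can_promote(successful_jobs, criteria=CRITERIA):
    """Return (ok, missing): ok is True only if no criteria job is missing."""
    missing = sorted(criteria - set(successful_jobs))
    return (not missing, missing)

ok, missing = can_promote({"periodic-ovb-1ctlr_1comp-featureset020"})
if not ok:
    print("Skipping promotion of tripleo-ci-testing to current-tripleo, "
          "missing successful jobs: %s" % missing)
```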
* myoung looks | 18:43 | |
rlandy|rover | myoung: sorry for bothering you again on this ... I'll check it tomorrow | 18:43 |
myoung | not a bother | 18:44 |
rlandy|rover | the hash is 4 days out | 18:44 |
rlandy|rover | I would think it would be 2 | 18:44 |
rlandy|rover | nvm | 18:44 |
rlandy|rover | this promotion is making me crazy :) | 18:46 |
myoung | rlandy|rover: i don't see any others that passed tripleo-ci-testing that would be candidates for current-tripleo in past 6 days | 18:47 |
rlandy|rover | https://logs.rdoproject.org/openstack-periodic-24hr/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-ocata-upload/b494da6/ | 18:47 |
rlandy|rover | on 05/28 all promote jobs passed | 18:47 |
myoung | rlandy|rover: correct, that job was testing hash: 0ad78bda27846527d1b755cd7a248e35ef6c6932_787a5938 --> https://logs.rdoproject.org/openstack-periodic-24hr/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-ocata-upload/b494da6/console.txt.gz#_2018-05-28_04_02_38_669 | 18:51 |
myoung | and was promoted --> 2018-05-27 00:20:11, https://trunk.rdoproject.org/centos7-ocata/0a/d7/0ad78bda27846527d1b755cd7a248e35ef6c6932_787a5938, current-tripleo | 18:52 |
myoung | is currently https://trunk.rdoproject.org/centos7-ocata/current-tripleo/commit.yaml | 18:52 |
*** atoth has quit IRC | 18:55 | |
*** yolanda has joined #oooq | 18:57 | |
*** yolanda_ has quit IRC | 18:59 | |
*** gvrangan has quit IRC | 20:09 | |
rlandy|rover | https://rhos-dev-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rdo-promote-queens-rdo_trunk/ | 20:10 |
rlandy|rover | myoung: ^^ why is rdo-queens-promote-rdo_trunk-build-images still marked as a failure? | 20:10 |
* myoung looks | 20:12 | |
myoung | 00:00:30.367 cmd2 requires Python '>=3.4' but the running Python is 2.7.13 | 20:13 |
myoung | 00:00:30.455 python setup.py install failed | 20:13 |
rlandy|rover | I reran it | 20:13 |
myoung | https://rhos-dev-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rdo-queens-promote-rdo_trunk-build-images/72/console | 20:13 |
rlandy|rover | job is marked as green | 20:14 |
myoung | .ahh i see...sec | 20:14 |
rlandy|rover | https://rhos-dev-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rdo-queens-promote-rdo_trunk-build-images/73/consoleFull | 20:14 |
rlandy|rover | it should kick the next test cycle | 20:14 |
myoung | i'm guessing "retry" was kicked on the build job itself | 20:16 |
myoung | of #73 | 20:16 |
myoung | ya | 20:16 |
myoung | Started by Naginator after the failure of build #72 | 20:16 |
rlandy|rover | oh I did that | 20:16 |
rlandy|rover | what should I have done? | 20:16 |
myoung | if you run the top level multijob it should just run | 20:16 |
rlandy|rover | retry there? | 20:16 |
myoung | jenkins naginator / retry on an individual phase of a multijob will just run that phase | 20:16 |
rlandy|rover | or rebuild? | 20:16 |
rlandy|rover | retrying - thanks | 20:17 |
myoung | they are basically the same thing, at least at multijob level | 20:17 |
myoung | so "retry" will rekick a job, feeding it the same parameters as before | 20:17 |
rlandy|rover | cool | 20:17 |
EmilienM | is wes on pto? | 20:17 |
myoung | so if it's a build/test job, things like "current_build" get resent | 20:17 |
EmilienM | yes he is | 20:17 |
myoung | "rebuild" does the same thing, but will give the opportunity (via UI) to change input params | 20:18 |
EmilienM | he's always a pto | 20:18 |
myoung | since top level multijob has no params...retry = rebuild | 20:18 |
EmilienM | in* | 20:18 |
myoung | EmilienM: he's back Monday | 20:18 |
EmilienM | he's always back on Monday :D and then he leaves again | 20:18 |
EmilienM | lol | 20:18 |
rlandy|rover | well - it's rekicking now | 20:18 |
myoung | rlandy|rover: aye watching https://rhos-dev-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rdo-promote-queens-rdo_trunk/93/ | 20:18 |
rlandy|rover | we have a redo on https://bugs.launchpad.net/tripleo/+bug/1750874 | 20:18 |
openstack | Launchpad bug 1750874 in tripleo "[queens promotion] fs001 fails overcloud deploy with 'Authentication failed'" [Critical,Fix released] - Assigned to Alex Schultz (alex-schultz) | 20:18 |
rlandy|rover | on queens | 20:18 |
rlandy|rover | where is he going now? | 20:19 |
rlandy|rover | EmilienM: welcome back | 20:19 |
EmilienM | rlandy|rover: thanks :D | 20:19 |
rlandy|rover | at least some people come back | 20:19 |
EmilienM | rlandy|rover: what did I miss? :-P | 20:19 |
myoung | EmilienM: heh | 20:19 |
rlandy|rover | EmilienM: nothing :( ... still trying to get https://review.openstack.org/#/c/568946/ through gates | 20:20 |
EmilienM | I see you folks doing well, lot of green everywhere | 20:20 |
EmilienM | rlandy|rover: yeah but it's almost merged I think :D | 20:20 |
EmilienM | thanks again for this work | 20:20 |
* myoung blames rlandy|rover for the green | 20:20 | |
rlandy|rover | EmilienM: the green is a lie | 20:20 |
EmilienM | lol | 20:20 |
myoung | there's a joke in there somewhere | 20:20 |
rlandy|rover | we adjust the promotion criteria | 20:20 |
EmilienM | exit 0 | 20:20 |
EmilienM | rlandy|rover: how badly? | 20:20 |
rlandy|rover | EmilienM: you don't want to know | 20:21 |
* rlandy|rover feels shame | 20:21 | |
rlandy|rover | but not upstream | 20:21 |
rlandy|rover | that is truly green | 20:21 |
EmilienM | lol | 20:21 |
EmilienM | can I help into something? | 20:22 |
* EmilienM late in the party | 20:22 | |
rlandy|rover | oh no - lots of party left for us all | 20:22 |
myoung | to be fair...the criteria we've been using in RDO2 hasn't really changed...we just have cognitive dissonance between how we're modelling the promoter config "these jobs must pass" and what reality looks like "this job, and one of these 3 must pass" | 20:22 |
* myoung fetches EmilienM a party hat | 20:22 | |
rlandy|rover | myoung: you should be a political spokesman | 20:23 |
rlandy|rover | ie: we're not really cheating, it's more "cognitive dissonance" | 20:23 |
myoung | EmilienM: no sprint would be complete without the intermittent RDO Cloud networking stuff as well | 20:23 |
hubbot | FAILING CHECK JOBS on stable/ocata: tripleo-ci-centos-7-undercloud-oooq @ https://review.openstack.org/564291 | 20:23 |
myoung | rlandy|rover: "mmmmm...what is cheating really...." | 20:23 |
myoung | :) | 20:24 |
rlandy|rover | oh whatever ... | 20:24 |
myoung | i think TBH there's a bit of technical debt | 20:24 |
EmilienM | I think we have seen worse situations at this stage of the cyle | 20:24 |
myoung | (low priority) | 20:24 |
EmilienM | cycle* | 20:24 |
myoung | to have our promoter config match reality | 20:24 |
rlandy|rover | right - so really - it's ocata and phase two that are not great now | 20:25 |
rlandy|rover | the rest is ok | 20:25 |
myoung | phase 2 is mostly ok | 20:25 |
rlandy|rover | ocata is like a random experiment | 20:25 |
myoung | heh | 20:25 |
rlandy|rover | a new job a new error | 20:25 |
*** trown is now known as trown|outtypewww | 20:25 | |
rlandy|rover | never a repeat | 20:25 |
myoung | did i see/parse correctly btw that now it's a 504? | 20:25 |
myoung | vs. a 404 | 20:26 |
myoung | or a 500 | 20:26 |
rlandy|rover | last job was 504 | 20:26 |
rlandy|rover | the one before the prep-network error | 20:26 |
rlandy|rover | the one after the introspection error | 20:26 |
rlandy|rover | see - party party | 20:26 |
myoung | https://restlet.com/http-status-map | 20:27 |
myoung | ^^ where do you want to go today? | 20:28 |
myoung | SPIN THE WHEEL, kick the job | 20:28 |
EmilienM | oh nice | 20:28 |
rlandy|rover | oh that's hysterical, sad but hysterical | 20:28 |
myoung | if we couldn't laugh...we would have to cry | 20:28 |
* myoung goes back to attempting to get sprint 13 and 14 status out (late) | 20:29 | |
*** florianf has quit IRC | 20:34 | |
rlandy|rover | myoung:??? | 21:50 |
rlandy|rover | fire? | 21:50 |
myoung | yo | 22:00 |
rlandy|rover | ugh - someone just shoot me :( | 22:19 |
*** rlandy|rover is now known as rlandy|rover|bbl | 22:23 | |
hubbot | FAILING CHECK JOBS on stable/ocata: tripleo-ci-centos-7-undercloud-oooq @ https://review.openstack.org/564291 | 22:23 |
*** tosky has quit IRC | 23:04 | |
*** saneax has quit IRC | 23:33 | |
*** sanjayu_ has joined #oooq | 23:33 |