*** ysandeep has joined #oooq | 00:35 | |
*** saneax has quit IRC | 00:51 | |
*** ysandeep has quit IRC | 01:01 | |
*** ysandeep has joined #oooq | 01:03 | |
*** ysandeep has quit IRC | 02:55 | |
*** ysandeep has joined #oooq | 03:13 | |
*** aakarsh has joined #oooq | 03:31 | |
*** ysandeep has quit IRC | 03:36 | |
*** skramaja has quit IRC | 03:44 | |
*** bhagyashris has joined #oooq | 03:54 | |
*** rlandy has quit IRC | 04:07 | |
*** udesale has joined #oooq | 04:29 | |
*** ykarel has joined #oooq | 04:37 | |
*** epoojad1 has joined #oooq | 04:53 | |
*** raukadah is now known as chkumar|rover | 05:08 | |
*** holser has joined #oooq | 05:29 | |
*** surpatil has joined #oooq | 05:35 | |
*** holser has quit IRC | 05:46 | |
*** skramaja has joined #oooq | 06:00 | |
*** soniya29 has joined #oooq | 06:07 | |
*** udesale has quit IRC | 06:25 | |
*** marios|ruck has joined #oooq | 06:31 | |
*** jbadiapa has joined #oooq | 06:43 | |
*** jfrancoa has joined #oooq | 06:51 | |
chkumar|rover | marios|ruck: Good morning | 07:15 |
chkumar|rover | marios|ruck: master and train RHEL8 promotion https://bugs.launchpad.net/tripleo/+bug/1856278 | 07:15 |
openstack | Launchpad bug 1856278 in tripleo "RHEL8 scenario 1 standalone deployment failed with The following containers failed validations and were not started: collectd" for master and train" [Critical,Confirmed] | 07:15 |
chkumar|rover | bug | 07:15 |
marios|ruck | chkumar|rover: o/ man | 07:20 |
marios|ruck | man i am very slow today | 07:20 |
marios|ruck | i have some bug :( | 07:20 |
marios|ruck | chkumar|rover: will check in abit thanks | 07:21 |
chkumar|rover | marios|ruck: for tempest working on proposing patch | 07:23 |
chkumar|rover | actual fix | 07:23 |
*** soniya29 has quit IRC | 07:39 | |
*** soniya29 has joined #oooq | 07:40 | |
*** tesseract has joined #oooq | 07:43 | |
*** saneax has joined #oooq | 07:44 | |
*** ykarel is now known as ykarel|lunch | 07:47 | |
marios|ruck | chkumar|rover: train promoting ... would be nice to get master, gonna check it in a minute. but rhel is totally f***d really red and blocked on that image build :/ | 07:59 |
*** jtomasek has joined #oooq | 08:03 | |
*** amoralej|off is now known as amoralej | 08:27 | |
chkumar|rover | marios|ruck: ah sweet :-) | 08:27 |
marios|ruck | chkumar|rover: the rocky fs2 upload worked too it promoted last night | 08:29 |
marios|ruck | so apart from rhel we are pretty good | 08:29 |
chkumar|rover | marios|ruck: on rhel8 standalone jobs, we have just one failure on master and train | 08:29 |
marios|ruck | chkumar|rover: yeah but image build | 08:30 |
marios|ruck | blocks everything | 08:30 |
chkumar|rover | marios|ruck: reproducing it | 08:30 |
chkumar|rover | *trying to reproduce it | 08:30 |
marios|ruck | chkumar|rover: ack good luck | 08:30 |
marios|ruck | matbu: o/ | 08:40 |
*** tosky has joined #oooq | 08:40 | |
marios|ruck | matbu: did you file a bug for tripleo-ci-centos-7-containerized-undercloud-upgrades train & 2019-12-12 09:11:39.923379 | primary | TASK [validate-services : Fails if we find failed systemd units] *************** | 08:40 |
marios|ruck | matbu: can't see one there ... https://bugs.launchpad.net/tripleo/+bugs?orderby=-date_last_updated&start=0 | 08:42 |
marios|ruck | matbu: going to file one now then | 08:43 |
*** ykarel|lunch is now known as ykarel | 08:45 | |
*** jpena|off is now known as jpena | 08:54 | |
marios|ruck | chkumar|rover: matbu: fyi https://bugs.launchpad.net/tripleo/+bug/1856288 The train tripleo-ci-centos-7-containerized-undercloud-upgrades often fails on validate-services ironic_pxe_tftp healthcheck | 08:55 |
openstack | Launchpad bug 1856288 in tripleo "The train tripleo-ci-centos-7-containerized-undercloud-upgrades often fails on validate-services ironic_pxe_tftp healthcheck" [Critical,Triaged] | 08:55 |
chkumar|rover | aye sir | 08:56 |
marios|ruck | chkumar|rover: matbu is already working there so just fyi for now | 08:58 |
marios|ruck | chkumar|rover: * https://review.opendev.org/#/c/698663/3 i think is part of that | 08:58 |
matbu | marios|ruck: ack you can assigned it to me, im working on it | 09:01 |
marios|ruck | thanks matbu | 09:01 |
*** holser has joined #oooq | 09:18 | |
*** derekh has joined #oooq | 09:30 | |
marios|ruck | sshnaidm|off: chkumar|rover: we should bring this discussion to next scrum https://review.rdoproject.org/r/#/c/24073/1/zuul.d/upstream.yaml see if anyone has better ideas | 09:45 |
chkumar|rover | sounds good to me | 09:46 |
chkumar|rover | marios|ruck: weshay I tried to reproduce the overcloud image build locally, but was not able to reproduce it | 09:49 |
chkumar|rover | time to hold the node | 09:49 |
marios|ruck | chkumar|rover: ack | 09:49 |
marios|ruck | chkumar|rover: can you just point to a review or what do we need to request a hold? | 09:49 |
chkumar|rover | need to send a review then ask the sf-ops to hold the node once we recheck the job again | 09:50 |
marios|ruck | chkumar|rover: ack thats what i mean thanks (only did it once and long time ago) | 09:50 |
chkumar|rover | marios|ruck: please send a review up | 09:51 |
chkumar|rover | with overcloud build image job | 09:51 |
marios|ruck | ack chkumar|rover maybe we can use that sec | 09:51 |
marios|ruck | https://review.rdoproject.org/r/#/c/23919 | 09:51 |
marios|ruck | ? | 09:51 |
marios|ruck | chkumar|rover: or has to be just one job | 09:51 |
chkumar|rover | marios|ruck: good candidate | 09:52 |
chkumar|rover | time to jump on sf-ops | 09:52 |
marios|ruck | chkumar|rover: ack | 09:52 |
marios|ruck | chkumar|rover: going to get some food and medicine will be biab | 10:13 |
chkumar|rover | marios|ruck: sure, | 10:13 |
*** marios|ruck has quit IRC | 10:18 | |
*** marios|ruck has joined #oooq | 10:43 | |
*** yolanda has quit IRC | 10:53 | |
*** ykarel is now known as ykarel|afk | 10:57 | |
marios|ruck | chkumar|rover: time on the logs weird? am looking in the held node now but the build.log has 2019-12-13 10:17:02.041 | but local machine says 06:00:41 EST 2019 | 11:00 |
marios|ruck | chkumar|rover: fyi the ram thing you noticed yesterday is not fatal its a warning e.g. i see it in a green log http://logs.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-rhel-8-buildimage-overcloud-full-master/ae9503e/build.log | 11:04 |
chkumar|rover | marios|ruck: yes | 11:04 |
chkumar|rover | marios|ruck: few more things discovered | 11:04 |
chkumar|rover | marios|ruck: on RDO cloud, a rhel-8.0 image with selinux permissive is available and there it is working fine | 11:05 |
marios|ruck | chkumar|rover: hmm we can test that with semanage i think we have a var for that | 11:05 |
chkumar|rover | marios|ruck: on zuul job, rhel-8.1 image is used selinux enforcing | 11:05 |
chkumar|rover | where it is failing | 11:05 |
marios|ruck | chkumar|rover: nice i hope that's it man would be great | 11:05 |
marios|ruck | friday 13th we promote all the things! | 11:06 |
chkumar|rover | jpena: Hello | 11:06 |
marios|ruck | chkumar|rover: indeed is enforcing in that node | 11:06 |
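The mode check run on the held node above can be sketched portably like this (a hedged sketch: reading `/sys/fs/selinux/enforce` is an assumption about the node; `getenforce` prints the same information where available):

```shell
#!/bin/sh
# Report the effective SELinux mode, roughly what `getenforce` prints.
# Reads the kernel interface directly so it degrades gracefully on hosts
# without SELinux at all.
selinux_mode() {
    if [ -r /sys/fs/selinux/enforce ]; then
        if [ "$(cat /sys/fs/selinux/enforce)" = "1" ]; then
            echo "Enforcing"
        else
            echo "Permissive"
        fi
    else
        echo "Disabled or not present"
    fi
}
selinux_mode
```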
chkumar|rover | jpena: is it possible to change the selinux permissive in rhel8 nodepool image | 11:07 |
chkumar|rover | *to permissive | 11:07 |
chkumar|rover | marios|ruck: I am trying to reproduce the same on rdocloud vm | 11:07 |
marios|ruck | chkumar|rover: ./roles/common/defaults/main.yml:42:selinux_enforcing: false | 11:07 |
jpena | chkumar|rover: by default? I don't think it's a good idea. I'd rather set it as part of the job | 11:07 |
marios|ruck | chkumar|rover: maybe we can just set that in featureset ^ | 11:07 |
marios|ruck | jpena: yeah we suspect it is cause of https://bugs.launchpad.net/tripleo/+bug/1853028 chkumar|rover++ noticed it | 11:07 |
openstack | Launchpad bug 1853028 in tripleo "Build overcloud image for rhel8 fails sometimes on in_target.d/post-install.d/51-enable-network-service" [Critical,Triaged] | 11:07 |
marios|ruck | jpena: s/suspect/really hope | 11:08 |
marios|ruck | jpena: ack we can use a var for testing it | 11:08 |
panda | marios|ruck: anything you want me to do to test the queens manifest ? | 11:08 |
marios|ruck | panda: hmm not really checked that yet panda i posted the patch yesterday but didn't check results will do in a bit | 11:08 |
marios|ruck | panda: it should be trying to pull _manifest there | 11:08 |
marios|ruck | https://review.opendev.org/#/c/687267/ & posted queens tht w/depends-on @ https://review.opendev.org/698747 panda | 11:09 |
marios|ruck | panda: i will have a look at that stuff in a while ^ | 11:09 |
marios|ruck | panda: in particular the updated v3/4 @ https://review.opendev.org/#/c/687267 happened late yesterday not sure it gives us what we want ... originally it was for standalone but no such thing for Q | 11:09 |
marios|ruck | chkumar|rover: gonna update the review so we can rerun? | 11:10 |
marios|ruck | chkumar|rover: hmm let me check is it in allowed overrides? | 11:10 |
chkumar|rover | marios|ruck: can we send a different review | 11:10 |
marios|ruck | chkumar|rover: just to test | 11:10 |
chkumar|rover | jpena: if we recheck the review does the node gets deleted? | 11:11 |
chkumar|rover | i mean holded node | 11:11 |
marios|ruck | chkumar|rover: we can't use featureset_override https://github.com/openstack/tripleo-ci/blob/master/roles/run-test/tasks/main.yaml#L8 will have to update the fs for that | 11:11 |
marios|ruck | chkumar|rover: i suspect it will and we get a new one jpena ? | 11:11 |
jpena | chkumar|rover: do you mean the existing held node? No, it will stay around | 11:11 |
marios|ruck | ah cool | 11:11 |
chkumar|rover | jpena: ok | 11:11 |
jpena | and yes, a new node will be spawned for the new job | 11:12 |
chkumar|rover | marios|ruck: we can send a review on tripleo-ci and add a depends-on with the var here, maybe it will work? | 11:12 |
marios|ruck | jpena: | 11:12 |
marios|ruck | oh | 11:12 |
marios|ruck | jpena: so we need to hold the new node? | 11:12 |
jpena | marios|ruck: if you want, yes | 11:12 |
marios|ruck | jpena: and we can release this one then | 11:12 |
jpena | ack | 11:12 |
marios|ruck | chkumar|rover: k ? | 11:12 |
jpena | let me know when I can delete it | 11:12 |
marios|ruck | jpena: thanks will do | 11:12 |
chkumar|rover | marios|ruck: jpena works for me | 11:12 |
marios|ruck | chkumar|rover: are you doing something or should i post it | 11:12 |
marios|ruck | chkumar|rover: we do need a new patch | 11:12 |
marios|ruck | chkumar|rover: cos we need to add into featureset or somewhere | 11:13 |
marios|ruck | chkumar|rover: 13:11 < marios|ruck> chkumar|rover: we can't use featureset_override https://github.com/openstack/tripleo-ci/blob/master/roles/run-test/tasks/main.yaml#L8 will have to update the fs | 11:13 |
chkumar|rover | marios|ruck: let me send a patch | 11:13 |
marios|ruck | chkumar|rover: OK | 11:13 |
marios|ruck | chkumar|rover: let me know to add it immediately at https://review.rdoproject.org/r/23919 and we can request another hold | 11:14 |
marios|ruck | and release this one | 11:14 |
marios|ruck | chkumar|rover: another tempest-related one on scen4, master pipeline still running but that one failed, posting testproject in a sec | 11:17 |
chkumar|rover | marios|ruck: https://review.opendev.org/#/c/698876/ | 11:19 |
chkumar|rover | I also need to change the flag somewhere na? | 11:19 |
marios|ruck | chkumar|rover: oh you added it to featureset_override? | 11:19 |
marios|ruck | chkumar|rover: ok fine lets just use it for the test. i though you would update the featureset but fine | 11:20 |
marios|ruck | chkumar|rover: is fine i can just pass it with job vars | 11:20 |
marios|ruck | re 13:19 < chkumar|rover> I also need to change the flag somewhere na? | 11:20 |
chkumar|rover | marios|ruck: yes passing it with job var it will work | 11:21 |
chkumar|rover | marios|ruck: for the rhel8 overcloud image build no featureset is used | 11:21 |
marios|ruck | chkumar|rover: k done v6 https://review.rdoproject.org/r/#/c/23919/6/.zuul.yaml | 11:23 |
marios|ruck | jpena: can we please have that? just posted it we need periodic-tripleo-rhel-8-buildimage-overcloud-full-master please thanks | 11:23 |
marios|ruck | chkumar|rover: look ok? | 11:23 |
jpena | marios|ruck: ok, I'll release the old node and hold the new one | 11:24 |
marios|ruck | jpena: fine for me chkumar|rover you need the old one still? | 11:24 |
chkumar|rover | marios|ruck: looks ok but it is not going to work http://codesearch.openstack.org/?q=selinux_enforcing&i=nope&files=&repos= | 11:24 |
chkumar|rover | as that var used in tripleo-quickstart only | 11:24 |
marios|ruck | chkumar|rover: :/ | 11:24 |
marios|ruck | chkumar|rover: so it has to be on the guest | 11:24 |
marios|ruck | image i mean virt-customize type thing | 11:25 |
*** yolanda has joined #oooq | 11:25 | |
marios|ruck | chkumar|rover: maybe we can use tripleo-heat-templates SELinuxMode | 11:26 |
chkumar|rover | marios|ruck: sorry but we need to change the selinux mode to permissive on the job VM that is used and where the playbook runs | 11:26 |
marios|ruck | chkumar|rover: hmmm only in /docker-baremetal-ansible.yaml | 11:27 |
marios|ruck | chkumar|rover: well we just need a setenforce 0 before image build | 11:27 |
marios|ruck | chkumar|rover: we could just hack it | 11:27 |
marios|ruck | chkumar|rover: like | 11:27 |
marios|ruck | chkumar|rover: (for testing the theory i mean at least) | 11:28 |
chkumar|rover | marios|ruck: ok let me do that in the same review | 11:28 |
marios|ruck | chkumar|rover: here chandan https://github.com/openstack/tripleo-ci/blob/master/roles/oooci-build-images/templates/build-images.sh.j2 | 11:28 |
marios|ruck | chkumar|rover: wdyt? | 11:28 |
marios|ruck | chkumar|rover: we can just add setenforce to test it | 11:28 |
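A minimal sketch of the "just hack it" idea being discussed above, for testing the theory. The guard, the restore-on-exit step, and the placement before the build are assumptions on my part; the actual change proposed for the build-images template is at https://review.opendev.org/698883:

```shell
#!/bin/sh
# Before the image build: if SELinux is enforcing, drop to permissive for
# the duration of the build and restore enforcing on exit. setenforce
# requires root, so this is only meaningful on the CI node itself.
maybe_relax_selinux() {
    if command -v getenforce >/dev/null 2>&1 \
            && [ "$(getenforce)" = "Enforcing" ]; then
        setenforce 0
        trap 'setenforce 1' EXIT
        echo "selinux relaxed for image build"
    else
        echo "selinux already permissive or absent, nothing to do"
    fi
}
maybe_relax_selinux
# ... the existing image build steps would follow here ...
```

Flipping enforcement at runtime like this is a diagnostic hack, which is why the chat later turns to making it a proper parameter of the role instead.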
marios|ruck | jpena: sorry | 11:29 |
marios|ruck | jpena: please don't hold the new node | 11:29 |
chkumar|rover | marios|ruck: that will also work | 11:29 |
marios|ruck | jpena: i made a mistake | 11:29 |
jpena | ok, no worries | 11:29 |
marios|ruck | jpena: we will update it and promise this is the last time | 11:29 |
marios|ruck | jpena: sorry for the noise, thanks so much for your support | 11:29 |
jpena | should I remove the old node? | 11:30 |
marios|ruck | chkumar|rover: should i post something for that? | 11:30 |
marios|ruck | jpena: yes for me i don't need it chkumar|rover do you need the old node? | 11:30 |
chkumar|rover | marios|ruck: me neither | 11:30 |
marios|ruck | chkumar|rover: k posting a change to build template for testing | 11:31 |
marios|ruck | chkumar|rover: we don't need to hold node | 11:31 |
marios|ruck | chkumar|rover: lets just run it see what happens | 11:31 |
marios|ruck | we can ping javier again later | 11:31 |
marios|ruck | chkumar|rover: ? | 11:31 |
marios|ruck | chkumar|rover: K? | 11:31 |
chkumar|rover | ok | 11:32 |
*** epoojad1 is now known as epoojad1|afk | 11:39 | |
chkumar|rover | brb | 11:40 |
*** ykarel|afk is now known as ykarel | 11:42 | |
*** epoojad1|afk has quit IRC | 11:44 | |
*** bhagyashris has quit IRC | 11:52 | |
marios|ruck | brb | 11:58 |
*** rfolco has joined #oooq | 11:58 | |
*** dtantsur|afk is now known as dtantsur | 12:02 | |
chkumar|rover | marios|ruck: weshay I reproduced the issue | 12:05 |
marios|ruck | chkumar|rover: image build you mean ? | 12:05 |
chkumar|rover | marios|ruck: yes, in rdo cloud | 12:05 |
chkumar|rover | marios|ruck: change the selinux mode from permissive to enforcing | 12:05 |
marios|ruck | chkumar|rover: perfect | 12:05 |
marios|ruck | chkumar|rover: so thats it then | 12:05 |
marios|ruck | chkumar|rover: still running there https://review.rdoproject.org/zuul/stream/69a9ff6b8dd34e56a2803eb378da85f7?logfile=console.log | 12:06 |
chkumar|rover | marios|ruck: http://paste.openstack.org/show/787545/ | 12:06 |
chkumar|rover | marios|ruck: let it finish then we can update the patch | 12:07 |
marios|ruck | chkumar|rover:++ you found it mate | 12:08 |
chkumar|rover | marios|ruck: we did it together and thanks to jpena also :-) | 12:08 |
marios|ruck | chkumar|rover: maybe we can get some good logs for slaweq fs 1 too at https://review.rdoproject.org/r/#/c/23919/ | 12:08 |
chkumar|rover | marios|ruck: yes | 12:09 |
*** EmilienM has quit IRC | 12:12 | |
*** EmilienM has joined #oooq | 12:12 | |
*** EmilienM is now known as EvilienM | 12:17 | |
chkumar|rover | marios|ruck: it passed | 12:18 |
marios|ruck | chkumar|rover: \o/ | 12:18 |
marios|ruck | chkumar|rover: so what do we do ? add conditional into the build script for rhel8 and setenforce? | 12:22 |
marios|ruck | chkumar|rover: virtcustomize the image? | 12:22 |
marios|ruck | i know weshay will want it on the image by default | 12:22 |
marios|ruck | at least he has done in the past for this kind of thing like rhui for example | 12:23 |
*** jpena is now known as jpena|lunch | 12:29 | |
chkumar|rover | kopecmartin: soniya29 surpatil please have a look at agenda https://hackmd.io/fIOKlEBHQfeTZjZmrUaEYQ?view | 12:32 |
chkumar|rover | marios|ruck: sorry coming back to the above question | 12:36 |
marios|ruck | chkumar|rover: np updated the https://bugs.launchpad.net/tripleo/+bug/1853028/comments/13 | 12:37 |
openstack | Launchpad bug 1853028 in tripleo "Build overcloud image for rhel8 fails sometimes on in_target.d/post-install.d/51-enable-network-service" [Critical,Triaged] | 12:37 |
marios|ruck | chkumar|rover: and trello | 12:37 |
chkumar|rover | marios|ruck: adding condition or a task in build images role itself will help | 12:37 |
chkumar|rover | marios|ruck: maybe in future, if the same role is used downstream, we just need to flip the selinux switch | 12:37 |
marios|ruck | chkumar|rover: yeah but still hacky sure we can just update that https://review.opendev.org/698883 | 12:37 |
marios|ruck | chkumar|rover: but not sure it will be acceptable | 12:37 |
chkumar|rover | marios|ruck: we will get it merged | 12:38 |
chkumar|rover | as it unblocks our ci | 12:38 |
marios|ruck | chkumar|rover: ok let the run finish before i update it | 12:38 |
marios|ruck | chkumar|rover: so we get the fs1 logs for slaweq | 12:38 |
chkumar|rover | marios|ruck: yup | 12:38 |
* marios|ruck looks at clock | 12:38 | |
marios|ruck | don't know if we'll see the rhel8 promote but maybe we can set it up for happening later | 12:38 |
marios|ruck | chkumar|rover: ^ | 12:38 |
chkumar|rover | marios|ruck: if we get one successful run, it will get promoted | 12:39 |
marios|ruck | ;) | 12:39 |
marios|ruck | chkumar|rover: but we also need fs1 | 12:40 |
marios|ruck | chkumar|rover: right? image build and fs1 i think for rhel missing | 12:40 |
* marios|ruck checks promoter logs | 12:40 | |
marios|ruck | chkumar|rover: hm also standalone | 12:41 |
marios|ruck | chkumar|rover: but is still running master pipeline now | 12:41 |
marios|ruck | chkumar|rover: there though fs1,scen1 and buildimage missing successful jobs: [u'periodic-tripleo-ci-rhel-8-scenario001-standalone-master', u'periodic-tripleo-ci-rhel-8-ovb-3ctlr_1comp-featureset001-master', u'periodic-tripleo-rhel-8-buildimage-overcloud-full-master'] | 12:41 |
marios|ruck | http://38.145.34.55/redhat8_master.log | 12:41 |
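The promoter log excerpt above reflects a simple gating rule: a hash only promotes once every job in the promotion criteria has a successful run. A hedged sketch of that rule (the job names are taken from the log excerpt; the function is illustrative, not the promoter's real code):

```shell
#!/bin/sh
# Promotion gate: succeed only if every required job name appears in the
# newline-separated list of successful jobs for this hash.
required_jobs="periodic-tripleo-ci-rhel-8-scenario001-standalone-master
periodic-tripleo-ci-rhel-8-ovb-3ctlr_1comp-featureset001-master
periodic-tripleo-rhel-8-buildimage-overcloud-full-master"

can_promote() {
    successful="$1"
    for job in $required_jobs; do
        printf '%s\n' "$successful" | grep -qx "$job" || return 1
    done
}

# With fs1, scen1 and the image build all still missing, the hash waits:
if can_promote ""; then echo "promote"; else echo "wait"; fi
```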
marios|ruck | chkumar|rover: hmmm hope that scenario 1 is not a new blocker :/ | 12:42 |
marios|ruck | chkumar|rover: it failed on current run for scen1 too | 12:42 |
marios|ruck | checking ... | 12:42 |
marios|ruck | fax * http://logs.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-rhel-8-scenario001-standalone-master/879e792/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz | 12:42 |
marios|ruck | * 2019-12-13 10:06:45 | File "/usr/lib/python3.6/site-packages/tripleoclient/v1/tripleo_deploy.py", line 1323, in _standalone_deploy | 12:42 |
marios|ruck | 2019-12-13 10:06:45 | raise exceptions.DeploymentError('Deployment failed') | 12:42 |
chkumar|rover | marios|ruck: https://review.opendev.org/#/c/698889/ | 12:43 |
* marios|ruck hugs chkumar|rover | 12:43 | |
chkumar|rover | will fix collectd issue | 12:43 |
* chkumar|rover hugs back marios|ruck | 12:43 | |
marios|ruck | sweet | 12:43 |
* chkumar|rover looks for panda | 12:43 | |
marios|ruck | so we wait for current run to finish | 12:43 |
marios|ruck | then post testproject/rdo and done | 12:43 |
marios|ruck | it should promote | 12:43 |
chkumar|rover | yup | 12:43 |
marios|ruck | we have a way to build image | 12:44 |
* marios|ruck starts defrosting mojito | 12:44 | |
*** ykarel is now known as ykarel|away | 12:48 | |
chkumar|rover | zbr: can we close this issue https://github.com/containers/libpod/issues/4580 since it is fixed | 12:51 |
chkumar|rover | ? | 12:51 |
panda | aaawwww | 12:51 |
zbr | chkumar|rover: not really, we did build the rpm by disabling the gpgme! so mainly by shipping less of podman :D | 12:57 |
zbr | i think but is still valid, but I will update it to make it clear | 12:57 |
chkumar|rover | zbr: sure | 12:57 |
*** akahat has joined #oooq | 12:59 | |
zbr | chkumar|rover: thanks for reminding, i updated it. | 13:00 |
*** ykarel_ has joined #oooq | 13:00 | |
chkumar|rover | weshay: kopecmartin zbr soniya29 surpatil meeting time | 13:01 |
marios|ruck | man some patches just DO NOT want to merge aweeeooooooooo recheck! https://review.opendev.org/#/c/695878/1 https://review.opendev.org/#/c/696872/1 | 13:01 |
marios|ruck | recheck | 13:01 |
marios|ruck | recheck | 13:01 |
marios|ruck | each time a different job! | 13:02 |
marios|ruck | can whoever hex those please unhex them so they go through whatever i did to you i am sorry | 13:03 |
*** ykarel|away has quit IRC | 13:03 | |
marios|ruck | panda: 2019-12-12 17:27:43 | 2019-12-12 17:27:43,302 ERROR: Failed running docker push for 192.168.24.1:8787/tripleoqueens/centos-binary-neutron-openvswitch-agent:dc9da4e8d8269365a7af28aeebeaa7579382a132_77cf1e91_manifest | 13:06 |
marios|ruck | https://52a5c01fbc2e5a1c1c79-24704190c22ac93ed09ee07fafdd15be.ssl.cf1.rackcdn.com/698747/1/check/tripleo-ci-centos-7-containers-multinode/3ed648d/logs/undercloud/home/zuul/overcloud_prep_containers.log.txt.gz | 13:06 |
marios|ruck | panda: want to jump in a call? | 13:07 |
marios|ruck | panda: i miss you | 13:07 |
marios|ruck | i promise i won't hug you i have virus | 13:07 |
marios|ruck | well promise is a strong work | 13:08 |
marios|ruck | like i'll take it under very serious consideration anyway | 13:09 |
marios|ruck | s/work/word 15:08 < marios|ruck> well promise is a strong work | 13:09 |
chkumar|rover | marios|ruck: I think the rhel8 fs01 train job has the same issue, I need to find the selinux part and fire up the job http://logs.rdoproject.org/openstack-periodic-latest-released/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-rhel-8-scenario001-standalone-train/493b7a4/job-output.txt | 13:21 |
marios|ruck | chkumar|rover: ack maybe the first thing you posted or just update the featureset | 13:21 |
chkumar|rover | sorry wrong log | 13:22 |
chkumar|rover | marios|ruck: I am first checking whether it is enforcing or permissive | 13:25 |
marios|ruck | chkumar|rover: ack we have it in /extra i believe | 13:25 |
chkumar|rover | from here it is permissive http://logs.rdoproject.org/openstack-periodic-latest-released/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-rhel-8-ovb-3ctlr_1comp-featureset001-train/e2b80c8/logs/undercloud/var/log/extra/selinux.txt.gz | 13:25 |
marios|ruck | chkumar|rover: k | 13:25 |
*** cgoncalves has joined #oooq | 13:31 | |
beagles | k so I have a bit of a mystery and could use some advice on how to proceed. For a couple of weeks now, I've been trying to get the octavia deployment on queens and rocky to pick up a downloaded amphora image. | 13:34 |
*** ykarel_ has quit IRC | 13:34 | |
beagles | I've been hacking away using a few different patches trying to figure out what's going on: https://review.opendev.org/#/c/692579/ | 13:34 |
beagles | https://review.opendev.org/#/c/692857/ | 13:34 |
beagles | and https://review.opendev.org/#/c/692858/ | 13:35 |
beagles | (the first drives the test scenario to try out the other two) | 13:35 |
*** rlandy has joined #oooq | 13:35 | |
beagles | you can see in the journals on the undercloud that the image is downloaded and does in fact exist there.. but the ansible run by mistral via workflow tasks does not see it | 13:36 |
beagles | s/does/cannot/ | 13:36 |
marios|ruck | beagles: tempest fail? | 13:36 |
beagles | afaict the permissions are okay | 13:36 |
marios|ruck | beagles: https://review.opendev.org/#/c/692579/ i see TASK [validate-tempest : Execute tempest fails in https://38fc547d6dc8718028ea-253b0a19be2181811797fa6cbd0c2b8d.ssl.cf5.rackcdn.com/692579/12/check/tripleo-ci-centos-7-scenario010-multinode-oooq-container/2cf8cd5/job-output.txt | 13:36 |
*** jpena|lunch is now known as jpena | 13:37 | |
beagles | marios|ruck, the stat task in the ansible returns false so it doesn't get loaded into glance | 13:37 |
beagles | marios|ruck, so the tempest fail is expected because they cannot run | 13:37 |
marios|ruck | beagles: ack i see | 13:37 |
marios|ruck | scrolling up | 13:37 |
beagles | marios|ruck, I've loaded up a bunch of 'find' tasks etc to get some insight on what is there in the /usr/share on the undercloud (we expect there to be a /usr/share/openstack-octavia-images dir with a qcow in it) | 13:38 |
beagles | marios|ruck, but the dir isn't there - afaict this works ok in a local environment (tried it twice successfully) so I'm kind of stuck | 13:38 |
beagles | (it would be so much better if it also didn't work "at home") | 13:39 |
beagles | fwiw: the same ansible works fine for stein and train | 13:39 |
beagles | but they are pretty different in how the octavia tasks are run - queens is mistral workbook "workflow tasks" - rocky also doesn't work right but that's probably for different reasons (containerized undercloud with the ansible being run *in* the container instead of on the host IIRC) | 13:41 |
marios|ruck | beagles: not sure if that is relevant but errors in mistral log like HeatAction.stacks.get failed: ERROR: The Stack (overcloud) could not be found. | 13:41 |
marios|ruck | beagles: have you tried asking someone from mistral team to check it | 13:42 |
marios|ruck | beagles: or if there are some known queens issue/differeences | 13:42 |
chkumar|rover | weshay: thank you for the guidance on dib yesterday, it helped; we found the issue and it is selinux | 13:43 |
beagles | marios|ruck, the heat stack action get might actually end up being relevant but the problem here is that the file check "stat" operation returns false . and AFAICT the file that I *thought* should be there (and indeed seemed to be there) isn't | 13:43 |
weshay | chkumar|rover, k.. cool :) | 13:43 |
weshay | marios|ruck, chkumar|rover /me works on 8.1 | 13:43 |
chkumar|rover | weshay: rhel8 nodepool image is 8.1 | 13:43 |
weshay | chkumar|rover, aye.. but the qcow2 file is 8.0 | 13:44 |
beagles | marios|ruck, summary - quickstart triggers a role during undercloud install that lives in oooq-extras that downloads an amphora image onto a location in the undercloud | 13:44 |
chkumar|rover | weshay: if we build it with enforcing mode the error gets reproduced | 13:44 |
chkumar|rover | rhel8 nodepool image has selinux enforcing | 13:44 |
beagles | marios|ruck, this file gets picked up by workflow|external tasks during the overcloud deployment if it exists | 13:44 |
chkumar|rover | in rdocloud image has selinux permissive that's why it worked | 13:44 |
chkumar|rover | it was a good exercise | 13:45 |
*** Goneri has joined #oooq | 13:45 | |
panda | marios|ruck: lunch. | 13:45 |
marios|ruck | beagles: could it be permissions? or selinux? | 13:46 |
marios|ruck | beagles: like it can't see the file | 13:46 |
beagles | marios|ruck, the download seems to work but the stat in the workbook isn't working ... what it looks like to me (but how can it be) that the amphora image is being put somewhere on the undercloud other than where it says it is, or the directory is being removed before the overcloud deploy somehow or the mistral process cannot see certain directories | 13:46 |
weshay | chkumar|rover, marios|ruck just FYI.. we'll need to make setting selinux a param.. because we'll use this same job internally to build images https://review.opendev.org/#/c/698883/1/roles/oooci-build-images/templates/build-images.sh.j2 | 13:46 |
beagles | marios|ruck, could be... | 13:46 |
marios|ruck | beagles: not selinux https://38fc547d6dc8718028ea-253b0a19be2181811797fa6cbd0c2b8d.ssl.cf5.rackcdn.com/692579/12/check/tripleo-ci-centos-7-scenario010-multinode-oooq-container/2cf8cd5/logs/undercloud/var/log/extra/selinux.txt.gz | 13:46 |
beagles | marios|ruck, right | 13:47 |
*** soniya29 has quit IRC | 13:47 | |
panda | marios|ruck: I can chat in 30 | 13:48 |
marios|ruck | panda: yes | 13:49 |
marios|ruck | beagles: commented there fyi so you can point to it and not repeat if you ask anyone else to check https://review.opendev.org/#/c/692579/12 | 13:49 |
beagles | marios|ruck, ah good idea thanks | 13:50 |
marios|ruck | beagles: not sure i can have another look see if something reveals itself to me its in my reviews list now anyway | 13:50 |
beagles | marios|ruck, k thanks! | 13:50 |
marios|ruck | beagles: are you sure the download is completed? | 13:51 |
marios|ruck | beagles: i mean is it a sequencing thing | 13:51 |
beagles | marios|ruck, hrmm.. think so because it's a get_url in the undercloud install | 13:51 |
beagles | marios|ruck, so should be done long before we get to that stage of the deploy | 13:52 |
marios|ruck | weshay: chkumar|rover: ack so what should we do with that? https://review.opendev.org/#/c/698883/1/roles/oooci-build-images/templates/build-images.sh.j2 and the bug should we just tidy that up for merge weshay ? with conditional? not sure that is acceptable but also not sure what else to do short of virtcustomize the image | 13:52 |
beagles | marios|ruck, well.. the undercloud install playbook not the undercloud install itself sorry | 13:52 |
marios|ruck | beagles: ack sorry just thinking out loud/ | 13:52 |
beagles | marios|ruck, I stuck a stat after the download to make sure it actually did work... was starting to get paranoid | 13:53 |
marios|ruck | beagles: ack ;) | 13:53 |
beagles | marios|ruck, all that stuff happens around here https://zuul.opendev.org/t/openstack/build/2cf8cd5dc0f540b59b82e726c95a7f04/log/job-output.txt#5812 | 13:54 |
marios|ruck | panda: sent you invite for call in half hour | 13:54 |
beagles | marios|ruck, is there anyway that the /usr/share path might be getting "sanitized" after that playbook is run? | 13:54 |
marios|ruck | panda: RSVP its the polite thing to do | 13:54 |
marios|ruck | beagles: not that i know of | 13:55 |
beagles | marios|ruck, I guess it would prob have to happen in overcloud prep role | 13:55 |
beagles | or playbook rather | 13:55 |
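A hedged diagnostic for this kind of "stat says missing" mystery, runnable on the undercloud: confirm the path really exists where the workflow looks, and if so show its ownership and SELinux context, since a context or permission mismatch can hide a file from a service even when root sees it. The directory is the one mentioned above; the helper name is made up:

```shell
#!/bin/sh
# Report whether a path exists and, if it does, its permissions and
# SELinux context (falls back to plain ls where -Z is unsupported).
check_image_path() {
    path="$1"
    if [ -e "$path" ]; then
        echo "present"
        ls -ldZ "$path" 2>/dev/null || ls -ld "$path"
    else
        echo "missing"
    fi
}
check_image_path /usr/share/openstack-octavia-images
```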
chkumar|rover | marios|ruck: heading home | 14:01 |
marios|ruck | chkumar|rover: ack | 14:01 |
*** skramaja has quit IRC | 14:13 | |
*** epoojad1 has joined #oooq | 14:13 | |
marios|ruck | chkumar|rover: fyi master very green only scen4 failed but we got it with https://review.rdoproject.org/r/24149 now just waiting on fs20 | 14:14 |
*** amoralej is now known as amoralej|lunch | 14:15 | |
marios|ruck | jpena: still around? | 14:16 |
jpena | marios|ruck: yep | 14:16 |
rlandy | jpena: hi ... wrt the naming of the downstream component dlrn .../rhel8-osp17/component/ | 14:16 |
marios|ruck | jpena: do you think we can hold node https://review.rdoproject.org/r/#/c/23919/ periodic-tripleo-ci-rhel-8-ovb-3ctlr_1comp-featureset001-master please | 14:16 |
marios|ruck | chkumar|rover: looks like its going to timeout on image prep which is the bug we want | 14:16 |
rlandy | for rdoproject we mirror rhel and redhat, would that be possible with downstream? | 14:16 |
marios|ruck | jpena: that job is currently running it should timeout in the nexthalf hour or so | 14:17 |
marios|ruck | jpena: would be great if we can have the node for debug | 14:18 |
marios|ruck | jpena: hmm | 14:18 |
marios|ruck | jpena: how long can we hold that? | 14:18 |
marios|ruck | jpena: can it be until monday or far too much? | 14:18 |
marios|ruck | jpena: cos if we can't i don't see slaweq around let me check tripleo | 14:18 |
jpena | marios|ruck: I think it won't be cleaned up automatically, so yes, it's possible | 14:18 |
marios|ruck | jpena: thanks pinged slaweq also in tripleo | 14:20 |
jpena | marios|ruck: ssh zuul@38.145.34.7 | 14:20 |
marios|ruck | jpena: ah thanks | 14:20 |
jpena | rlandy: I guess you want that for the component-based pipeline? | 14:21 |
marios|ruck | thanks jpena in | 14:21 |
rlandy | jpena: please - as we are trying to reuse the promotion code which refers to ansible_distribution | 14:21 |
jpena | rlandy: so with https://review.rdoproject.org/r/23596 that should be covered | 14:22 |
jpena | I just need to fix one little thing in the internal playbooks, and we'll have the URL too | 14:22 |
jpena | the API will be there once it's enabled (it's not at the moment) | 14:22 |
rlandy | jpena: great - thanks | 14:23 |
*** TrevorV has joined #oooq | 14:29 | |
marios|ruck | panda: joining now | 14:30 |
jpena | rlandy: I have enabled the redhat8-* paths in the PoC VM | 14:33 |
marios|ruck | *** new promoter test Queens manifest revisit - some dig updated v3/4 @ https://review.opendev.org/#/c/687267/ & posted queens tht w/depends-on @ https://review.opendev.org/698747 | 14:36 |
marios|ruck | panda: ^^ | 14:36 |
*** dmellado has quit IRC | 14:36 | |
chkumar|rover | marios|ruck: good, then we are on track | 14:37 |
rlandy | jpena: yep - I see the new dirs - thanks | 14:37 |
marios|ruck | weshay: panda: ran the manifest push on queens yesterday | 14:38 |
marios|ruck | and it pushed fine \o/ | 14:38 |
marios|ruck | panda:++ | 14:38 |
weshay | marios|ruck, panda++ | 14:39 |
*** dmellado has joined #oooq | 14:39 | |
weshay | marios|ruck, panda so.. then that begs the question... when are we comfortable updating the promoter server to the latest code? | 14:39 |
weshay | marios|ruck, panda or.. maybe turning off the old promoter and running from the new one.. for a bit | 14:40 |
panda | weshay: better to run the new code on the old server | 14:40 |
weshay | so we have a clean and ready known good working | 14:40 |
marios|ruck | weshay: NOT NOW | 14:40 |
weshay | marios|ruck, if not now when | 14:40 |
marios|ruck | if anyone touches the promoter i will 100% resort to violence | 14:40 |
weshay | lolz | 14:40 |
marios|ruck | weshay: we are chasing rhel8 promotion | 14:40 |
* weshay touches it.. | 14:40 | |
marios|ruck | weshay: and master | 14:41 |
* weshay runs | 14:41 | |
weshay | marios|ruck, ok.. | 14:41 |
marios|ruck | https://www.youtube.com/watch?v=h1PfrmCGFnk | 14:41 |
panda | weshay: but I'm quite confident at this point that it would work, what's left is being sure that jobs can use the _manifest | 14:41 |
weshay | panda, any other steps you can think of or shall we set you free to continue thinking about and working on tests for the promoter? | 14:42 |
panda | weshay: but yes, not on friday | 14:42 |
marios|ruck | panda: well yeah it should be pretty safe as we discussed yesterday, just would rather not disturb it right now, we are waiting on rhel centos master | 14:42 |
weshay | I guess next week all the managers will be away.. so the cats can play | 14:42 |
marios|ruck | like it just pushes the _manifest but replaces/removes nothing, and no-one uses that _manifest anyway | 14:42 |
weshay | rfolco, this is pretty good timing.. considering the ppc guys have their stuff working now | 14:43 |
panda | weshay: no, at this point only the transition is left, then it's all about improving tests so the next iterations of promoter code will be easier to test | 14:43 |
weshay | panda, ok looking forward to hearing your thoughts on the next iteration of tests | 14:45 |
chkumar|rover | marios|ruck: on monday, I will try to make fs01 train rhel8 green | 14:46 |
marios|ruck | chkumar|rover: we should get master promotion | 14:47 |
marios|ruck | chkumar|rover: centos | 14:47 |
marios|ruck | at least | 14:47 |
marios|ruck | chkumar|rover: maybe rhel centos monday | 14:47 |
chkumar|rover | marios|ruck: it is waiting on collectd issue | 14:47 |
chkumar|rover | we will go green on centos | 14:47 |
marios|ruck | chkumar|rover: scenario1 right? | 14:47 |
chkumar|rover | marios|ruck: yes | 14:47 |
*** derekh has quit IRC | 14:48 | |
marios|ruck | chkumar|rover: so it reported just now https://review.rdoproject.org/zuul/builds?pipeline=openstack-periodic-master | 14:48 |
rlandy | rfolco: weshay: for when component and product release differ: https://review.rdoproject.org/r/#/c/24153/ | 14:48 |
*** amoralej|lunch is now known as amoralej | 14:48 | |
marios|ruck | chkumar|rover: damn also fs20 posting now | 14:48 |
marios|ruck | chkumar|rover: https://review.rdoproject.org/r/24149 | 14:49 |
marios|ruck | chkumar|rover: we already have green scen4 centos from last run of that ^ so if new one passes centos master promotes | 14:50 |
chkumar|rover | marios|ruck: cool | 14:52 |
chkumar|rover | marios|ruck: for scenario 4 manila tempest, bugged tbarron already | 14:52 |
*** aakarsh has quit IRC | 14:52 | |
marios|ruck | chkumar|rover: where? didn't see that | 14:52 |
chkumar|rover | marios|ruck: http://logs.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-7-scenario004-standalone-master/5049a8c/logs/undercloud/var/log/tempest/tempest_run.log.txt.gz | 14:53 |
marios|ruck | chkumar|rover: yeah but rerun green | 14:53 |
chkumar|rover | marios|ruck: it is from recent posted log | 14:53 |
marios|ruck | chkumar|rover: periodic-tripleo-ci-centos-7-scenario004-standalone-master SUCCESS in 1h 10m 20s https://review.rdoproject.org/r/#/c/24149/ v1 | 14:53 |
* marios|ruck feel like crap | 14:54 | |
chkumar|rover | marios|ruck: ah, ok | 14:54 |
chkumar|rover | marios|ruck: new patch for scenario 1 https://review.opendev.org/#/c/698914 | 14:59 |
weshay | rlandy, comment added | 15:06 |
rlandy | weshay: updated | 15:09 |
weshay | rlandy, k. .sorry same thing here https://review.rdoproject.org/r/#/c/24153/3/roles/dlrn-report/tasks/dlrn-vars-setup.yml | 15:09 |
weshay | and we'll merge | 15:09 |
rlandy | yeah - sorry - saw that | 15:10 |
weshay | np | 15:10 |
* weshay is paranoid about newbies | 15:10 | |
weshay | panda, oh.. before you go for the day.. do we have enough data on quay to open a ticket? | 15:11 |
* marios|ruck feeling crap going to call it in a bit | 15:13 | |
weshay | marios|ruck, get out of here | 15:13 |
chkumar|rover | marios|ruck: today was a good day, we fixed one of the long-standing issues, let's celebrate | 15:14 |
marios|ruck | weshay: yeah in a bit gonna tidy up/status etc | 15:15 |
marios|ruck | chkumar|rover: \o/ | 15:15 |
marios|ruck | chkumar|rover: another day in paradise ;) | 15:15 |
chkumar|rover | marios|ruck: :-) | 15:15 |
*** ykarel_ has joined #oooq | 15:16 | |
rlandy | weshay: rfolco: https://review.rdoproject.org/r/#/c/24153/ updated | 15:20 |
rlandy | there is more work to do to get tripleo-ci-base-promote-component-to-current-tripleo to be downstream compatible | 15:20 |
rlandy | but will do that in a diff patch | 15:21 |
rfolco | rlandy, just fyi I am working on the promote-hash role that can be reused for any promotion job | 15:21 |
rlandy | rfolco: yeah - ok - I can remove that file | 15:22 |
rlandy | if you like | 15:22 |
rlandy | rfolco: just trying to keep things consistent | 15:23 |
rfolco | rlandy, me too, keep what you are doing, when I get the patch ready we remove dups | 15:23 |
panda | weshay: not much else than "we are failing to push 130 images in a row, with 500 errors, and time outs." | 15:24 |
marios|ruck | jpena: am about to leave not sure if slaweq had time yet | 15:24 |
marios|ruck | jpena: maybe ping him in tripleo when you are leaving? | 15:24 |
marios|ruck | jpena: otherwise id say just kill it | 15:24 |
jpena | marios|ruck: no worries, let's keep the vm until monday | 15:24 |
marios|ruck | jpena: don't wait around for that | 15:24 |
marios|ruck | jpena: ok thanks | 15:24 |
* marios|ruck shutdown sequence | 15:25 | |
weshay | panda, if you can give me the raw data.. I'll handle the ticket | 15:25 |
weshay | panda, we can just email support@quay.io w/ cc me and Emilien | 15:26 |
*** rfolco is now known as rfolco|doctor | 15:27 | |
*** marios|ruck is now known as marios|ruck|out | 15:35 | |
EvilienM | Evilien you mean | 15:35 |
*** ykarel_ is now known as ykarel|away | 15:39 | |
*** marios|ruck|out has quit IRC | 15:44 | |
*** epoojad1 has quit IRC | 15:45 | |
*** ykarel|away has quit IRC | 15:46 | |
*** jbadiapa has quit IRC | 15:48 | |
*** surpatil has quit IRC | 15:49 | |
chkumar|rover | see ya people, have a nice weekend ahead | 15:53 |
*** chkumar|rover is now known as raukadah | 15:53 | |
*** akahat has quit IRC | 15:54 | |
rlandy | migi: hi - so I think I have the right strings now for downstream - just struggling to get nodes to work | 16:03 |
rlandy | jobs die with node_failures | 16:03 |
migi | rlandy: which driver are you using ? | 16:04 |
*** ykarel|away has joined #oooq | 16:04 | |
rlandy | driver? rhel-8 | 16:04 |
rlandy | node type | 16:04 |
migi | so it's openstack | 16:05 |
rlandy | https://sf.hosted.upshift.rdu2.redhat.com/zuul/t/tripleo-ci-internal/status | 16:06 |
rlandy | you can see the test job ^^ | 16:06 |
rlandy | pinged admins | 16:06 |
migi | rlandy: but this job is running | 16:12 |
*** fmount has quit IRC | 16:13 | |
*** fmount has joined #oooq | 16:14 | |
migi | rlandy: I can see in the logs that the upshift got issues with creating IP "No more IP addresses available on network " | 16:15 |
migi | rlandy: so it may be problem with upshift or some left overs from nodepool leaking | 16:16 |
migi | rlandy: interesting that for that tenant current usage is "Floating IPs Allocated 5 of 400" | 16:16 |
migi | rlandy: maybe worth rechecking the jobs as the upshift tenant looks fine imo now | 16:17 |
rlandy | migi: see #sf-ops | 16:19 |
migi | rlandy: http://pastebin.test.redhat.com/821863 | 16:19 |
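The leak migi describes ("No more IP addresses available" while the quota shows only 5 of 400 floating IPs allocated) can be spotted by listing floating IPs that are allocated but not in use. A minimal sketch of that check, assuming the CLI is asked for just the address and status columns (e.g. `openstack floating ip list -c "Floating IP Address" -c Status -f value`, one `ADDRESS STATUS` pair per line -- the exact column set is an assumption, not the team's actual tooling):

```python
def leaked_floating_ips(cli_output):
    """Parse 'ADDRESS STATUS' lines and return the addresses in DOWN
    state, i.e. allocated to the tenant but not attached to any port --
    candidates for nodepool leftovers that can be released."""
    leaked = []
    for line in cli_output.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1] == "DOWN":
            leaked.append(parts[0])
    return leaked
```

Each address this returns could then be released with `openstack floating ip delete <address>` once confirmed unused.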
*** ykarel|away has quit IRC | 16:44 | |
*** tesseract has quit IRC | 16:53 | |
raukadah | kopecmartin: please keep an eye on this patch https://review.opendev.org/#/c/698589/ needed for tripleo ci tempest train mess | 17:00 |
weshay | raukadah, fyi https://review.rdoproject.org/r/#/c/24157/2/zuul.d/tripleoci.yaml | 17:12 |
weshay | raukadah, not sure if we can test that, as it's in config | 17:12 |
raukadah | weshay: what about moving it to rdo-jobs and just calling the job which uses it | 17:13 |
raukadah | weshay: I will take a look at that, rlandy and I discussed making config less bulky, maybe we can do something in the new year | 17:14 |
weshay | raukadah, you mean creating a net new overcloud image build job in rdo-jobs that overrides the vars? | 17:14 |
raukadah | weshay: I mean define the job in rdo-jobs and keep the base (which needs secrets) in config, with the rest in rdo-jobs; test stuff in rdo-jobs itself | 17:15 |
weshay | raukadah, +1 from me | 17:16 |
raukadah | by modifying projects.yaml | 17:16 |
raukadah | weshay: by the way rdo cloud rhel8 nodepool image is running on 8.1 | 17:17 |
raukadah | we need to talk and fix selinux issue there | 17:17 |
weshay | raukadah, any tripleo job should be permissive | 17:19 |
weshay | raukadah, I'll try to follow up the zuul patch and get it in rdo-jobs | 17:19 |
raukadah | weshay: maybe we missed something while creating the rhel8 jobs, it would be a good exercise while working on centos8 | 17:20 |
raukadah | weshay: great, will look at that on monday! | 17:20 |
raukadah | from rhel8 I learned so many things | 17:20 |
weshay | +1 | 17:21 |
raukadah | thanks to rlandy panda sshnaidm|off and marios | 17:21 |
raukadah | weshay: you played with systemd c header files? I need to do some work on podman side | 17:24 |
raukadah | panda: rlandy ^^ | 17:25 |
raukadah | my main usecase is to find out whether systemd is there, not by calling a command but from the header files | 17:26 |
weshay | raukadah, I have not | 17:34 |
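For the check raukadah is after: `sd_booted()` in `<systemd/sd-daemon.h>` does not call any command either -- internally it only tests whether the directory `/run/systemd/system` exists, and that same test can be done directly from any language. A small sketch (not podman's actual code; the `root` parameter is just there to make the functions testable):

```python
import os

def systemd_running(root="/"):
    """Runtime check, equivalent to sd_booted() from sd-daemon.h:
    systemd is PID 1 iff /run/systemd/system exists as a directory."""
    return os.path.isdir(os.path.join(root, "run/systemd/system"))

def systemd_headers_installed(root="/"):
    """Build-time check: is the systemd C header available
    (shipped by the systemd development package)?"""
    return os.path.isfile(
        os.path.join(root, "usr/include/systemd/sd-daemon.h"))
```

Podman itself is Go, so there the directory test would live behind a build tag or cgo, but the logic is the same.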
*** dtantsur is now known as dtantsur|afk | 17:40 | |
EvilienM | panda: thx for the email | 17:43 |
panda | EvilienM: I'm afraid all the updates will be sent to me, I'll see if I can add your emails too, but at least you have the internal ticket number | 17:45 |
raukadah | weshay: np, once I find the solution, we will learn together | 17:48 |
*** amoralej is now known as amoralej|off | 17:51 | |
*** jpena is now known as jpena|off | 17:53 | |
*** holser has quit IRC | 18:09 | |
mjturek | weshay: so baha and I are hitting a new error. Suddenly skydive-base is matching and failing to build with no output as to why :( | 18:11 |
mjturek | weshay do you know where we find the regex that matches containers? | 18:13 |
weshay | mjturek, it's all in the kolla config | 18:14 |
mjturek | cool thanks | 18:14 |
mjturek | like should be in kolla-build.conf | 18:15 |
weshay | https://review.opendev.org/#/c/693390/2/docker/nova/nova-base/Dockerfile.j2 for example | 18:17 |
weshay | mjturek, I think skydive though should be excluded from the build | 18:17 |
weshay | and maybe why ur hitting an error | 18:18 |
mjturek | weshay it definitely should for poer | 18:18 |
mjturek | power* | 18:18 |
weshay | oh.. | 18:18 |
*** soniya29 has joined #oooq | 18:18 | |
* mjturek has a stuck w key | 18:18 | |
weshay | http://logs.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-centos-7-master-containers-build-push/fd6135a/logs/containers-successfully-built.log.txt.gz | 18:19 |
weshay | ya.. I guess we do build it | 18:19 |
weshay | -agent analyzer and base | 18:20 |
weshay | I have to meet a buddy for lunch.. bbiab | 18:20 |
weshay | sorry to bail | 18:20 |
mjturek | np ttyl! | 18:21 |
rlandy | weshay: is there any way to work out the dlrn end point if we don't know it off hand? | 18:36 |
*** yolanda has quit IRC | 18:43 | |
*** jfrancoa has quit IRC | 18:48 | |
*** saneax has quit IRC | 18:48 | |
*** tosky has quit IRC | 18:57 | |
rlandy | raukadah: ^^ if you are around, do you know the dlrn api for downstream? | 19:09 |
*** soniya29 has quit IRC | 19:11 | |
EvilienM | panda: no prob | 19:21 |
EvilienM | thanks for taking care of it | 19:21 |
rlandy | nvm found it | 19:21 |
*** jtomasek has quit IRC | 19:31 | |
*** Goneri has quit IRC | 20:27 | |
*** rfolco|doctor is now known as rfolco | 20:31 | |
weshay | rlandy, the dlrn_api client in the prod dlrn server is throwing 500's too | 20:51 |
weshay | http://promoter.rdoproject.org/centos7_train.log | 20:52 |
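The intermittent 500s here (and the quay push failures earlier in the day) are the classic case for retrying with backoff instead of failing the whole promotion run on the first server error. A generic sketch -- this is not the promoter's actual code, just the pattern:

```python
import time

def retry_on_error(call, attempts=4, delay=0.5, backoff=2.0,
                   retry_on=(RuntimeError,)):
    """Invoke call() up to `attempts` times, sleeping delay,
    delay*backoff, delay*backoff**2, ... between tries; re-raise
    the final error if every attempt fails."""
    for attempt in range(attempts):
        try:
            return call()
        except retry_on:
            if attempt == attempts - 1:
                raise
            time.sleep(delay * (backoff ** attempt))
```

In practice `retry_on` would be the HTTP client's server-error exception, and `call` a closure around the dlrnapi or registry-push request.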
*** EvilienM is now known as EmilienM | 20:57 | |
*** TrevorV has quit IRC | 20:58 | |
rlandy | weshay: ok - will chat with infra guys on monday | 21:10 |
weshay | aye | 21:10 |
*** holser has joined #oooq | 21:10 | |
*** rlandy has quit IRC | 21:10 | |
*** holser has quit IRC | 21:35 | |
*** holser has joined #oooq | 22:10 | |
*** rfolco has quit IRC | 22:19 | |
*** holser has quit IRC | 22:26 | |
*** Goneri has joined #oooq | 22:34 | |
*** rfolco has joined #oooq | 22:50 | |
*** rfolco has quit IRC | 22:55 | |
*** rfolco has joined #oooq | 23:49 |