Thursday, 2018-10-11

*** iurygregory has quit IRC00:04
*** lblanchard has quit IRC00:08
*** dmacpher_ has joined #tripleo00:09
*** dmacpher has quit IRC00:10
*** ooolpbot has joined #tripleo00:10
ooolpbotURGENT TRIPLEO TASKS NEED ATTENTION00:10
ooolpbothttps://bugs.launchpad.net/tripleo/+bug/178968000:10
openstackLaunchpad bug 1789680 in tripleo "mistral MessagingTimeout correlates with containerized undercloud uptime" [Critical,Triaged] - Assigned to Toure Dunnon (toure)00:10
ooolpbothttps://bugs.launchpad.net/tripleo/+bug/179671000:10
ooolpbothttps://bugs.launchpad.net/tripleo/+bug/179675600:10
ooolpbothttps://bugs.launchpad.net/tripleo/+bug/179716700:10
*** ooolpbot has quit IRC00:10
openstackLaunchpad bug 1796710 in tripleo "Tempest tests failed with Read timed out error on tripleo-ci-centos-7-undercloud-containers" [Critical,Triaged]00:10
openstackLaunchpad bug 1796756 in tripleo "Error searching for image docker.io/tripleorocky/centos-binary-ceilometer-compute - UnixHTTPConnectionPool(host='localhost', port=None): Read timed out." [Critical,In progress] - Assigned to Alex Schultz (alex-schultz)00:10
openstackLaunchpad bug 1797167 in tripleo "swift timeout during undercloud deploy" [Critical,Triaged]00:10
*** jhebden has quit IRC00:11
*** jhebden has joined #tripleo00:12
*** panda has quit IRC00:13
*** panda has joined #tripleo00:16
*** lblanchard has joined #tripleo00:17
*** lblanchard has quit IRC00:17
*** artom has quit IRC00:19
*** artom has joined #tripleo00:19
*** rcernin has quit IRC00:29
*** mjturek has joined #tripleo00:57
*** mjturek has quit IRC00:58
*** lblanchard has joined #tripleo00:58
*** tzumainn has quit IRC00:58
*** phuongnh has joined #tripleo01:03
itlinuxhi can someone check this out https://review.openstack.org/#/c/609036/201:14
itlinuxlet me know. Thanks01:14
*** mrsoul has quit IRC01:34
*** mschuppert has quit IRC01:34
*** rcernin has joined #tripleo02:01
*** itlinux_ has joined #tripleo02:28
*** itlinux has quit IRC02:31
*** phuongnh has quit IRC02:33
*** lblanchard has quit IRC02:40
*** rh-jelabarre has quit IRC02:43
*** psachin has joined #tripleo03:08
*** ramishra has joined #tripleo03:16
*** shyamb has joined #tripleo03:57
*** shyamb has quit IRC04:24
*** janki has joined #tripleo04:26
*** shyamb has joined #tripleo04:35
*** shyamb has quit IRC04:53
*** ksambor has joined #tripleo05:00
*** agurenko has joined #tripleo05:04
*** shyamb has joined #tripleo05:05
Tenguhello there05:05
jaosoriorgood morning!05:05
Tenguhow are you today, jaosorior ?05:06
*** ykarel has joined #tripleo05:07
jaosoriorall good05:09
jaosorioryou?05:09
*** slaweq has joined #tripleo05:11
*** slaweq has quit IRC05:16
Tengujaosorior: not bad. Happy with my finding on containers - we can go forward now with overcloud deploy I think.05:18
*** ratailor has joined #tripleo05:19
*** sshnaidm|afk has quit IRC05:19
Tenguchkumar|off: heya! no idea for the tempest failure, but I saw it was discussed as a CIX blocker or something like that, and a patch was just merged - so maybe my run was launched just before the merge itself. I hit another issue now on the same commit, so I've launched a recheck.05:20
*** chkumar|off is now known as chandankumar05:26
chandankumarTengu: let'see how recheck goes.05:37
Tenguchandankumar: yep05:37
*** udesale has joined #tripleo05:39
*** quiquell|off is now known as quiquell05:42
quiquellGood morning05:42
Tenguo/ quiquell05:44
*** yprokule has joined #tripleo05:50
*** pcaruana has joined #tripleo06:01
chandankumarTengu: quiquell \o/06:04
quiquellchandankumar: o/06:04
quiquellmarios: I am checking timeouts, I see deploy overcloud has increased06:04
quiquellmarios: http://dashboard-ci.tripleo.org/d/poOr-d0mk/ansible-exploration?orgId=1&var-influxdb_filter=logs_path%7C%3D~%7C%2Ftripleo-ci-centos-7-scenario001-multinode-oooq-container%2F&var-influxdb_filter=job_branch%7C%3D%7Cmaster&var-influxdb_filter=task_name%7C%3D%7Covercloud-deploy%20:%20Deploy%20the%20overcloud06:04
*** akrivoka has joined #tripleo06:05
* chandankumar is still trying to recover from 5 days long pycon india 2018 conference06:05
chandankumarmarios: hello06:11
chandankumarmarios: https://bugs.launchpad.net/tripleo/+bug/1796710 I think we can remove the alert from this bug as I am not seeing it in CI, just a one time failure06:11
openstackLaunchpad bug 1796710 in tripleo "Tempest tests failed with Read timed out error on tripleo-ci-centos-7-undercloud-containers" [Critical,Triaged]06:11
*** morazi has quit IRC06:18
*** apetrich has quit IRC06:20
marios|rover looking chandankumar06:22
*** mschuppert has joined #tripleo06:23
*** shyamb has quit IRC06:24
marios|roverchandankumar: indeed via cistatus.tripleo it looks green all of yesterday06:24
marios|roverchandankumar: updated the bug... wondering about https://review.openstack.org/#/c/608723 then. perhaps we don't need it looks like06:27
chandankumarmarios|rover: I think the above review is not needed06:28
marios|roverchandankumar: yeah i set workflow -1 on it just now06:28
marios|roverthanks chandankumar06:29
*** jrist has joined #tripleo06:33
*** janki has quit IRC06:34
*** jchhatba_ has joined #tripleo06:35
*** numans has quit IRC06:39
*** apetrich has joined #tripleo06:43
*** aufi has joined #tripleo06:45
*** mburned_out is now known as mburned06:46
*** jchhatba_ has quit IRC06:46
*** iurygregory has joined #tripleo06:46
*** janki has joined #tripleo06:47
*** dalvarez has quit IRC06:49
*** amoralej has quit IRC06:49
*** slaweq has joined #tripleo06:50
*** shyamb has joined #tripleo06:50
*** aufi has quit IRC06:52
*** radez has quit IRC06:52
*** kopecmartin has joined #tripleo06:52
*** aufi has joined #tripleo06:53
*** aufi has quit IRC06:54
*** rcernin has quit IRC07:01
*** slaweq has quit IRC07:03
*** slaweq has joined #tripleo07:05
quiquellPuff puppet- projects CI is a little instable07:05
jaosoriormarios|rover: https://review.openstack.org/608589 is in the gate. still some time to go as there's abunch of tripleo-ui patches in front07:06
jaosoriorand a couple of t-h-t patches07:06
quiquelljaosorior: naif question, why we have only one queue for all those projects ?07:06
*** shardy has joined #tripleo07:06
marios|roverjaosorior: ack thanks07:06
quiquelljaosorior: Could be possible to have multiple queues like we have with puppet- projects ?07:06
jaosoriorquiquell: maybe :D I have no idea how that stuff works07:07
jaosoriorbut it would be good to ask07:07
*** jfrancoa has joined #tripleo07:07
iurygregorygood morning07:08
quiquelljaosorior: Who can know it ?07:08
quiquelliurygregory: o/07:08
jaosoriorasking, I guess :D07:08
jaosoriorgotta ask the infra folks07:08
jaosorioriurygregory: welcome!07:08
quiquelliurygregory: puppet- ci is a little instable ?07:08
iurygregoryjaosorior, tks o/07:09
iurygregoryquiquell, maybe i would say, which patch ?07:09
*** jrist has quit IRC07:09
quiquelliurygregory: It's kind of a noop change https://review.openstack.org/#/c/609300/07:10
*** udesale has quit IRC07:12
*** udesale has joined #tripleo07:13
quiquellmarios|rover: ssl patch has infra issues now :-( http://logs.openstack.org/89/608589/5/check/tripleo-ci-centos-7-standalone/7299cae/logs/undercloud/home/zuul/install_packages.sh.log.txt.gz07:14
*** udesale has quit IRC07:15
*** ssbarnea has joined #tripleo07:17
marios|roverkopecmartin: o/ removed the alert/promotion-blocker at https://bugs.launchpad.net/tripleo/+bug/1797167 see comment #2 looks like it was isolated07:18
openstackLaunchpad bug 1797167 in tripleo "swift timeout during undercloud deploy" [Critical,Triaged]07:18
marios|roverquiquell: :/07:21
*** rdopiera has joined #tripleo07:25
*** kopecmartin is now known as kopecmartin|ruck07:26
kopecmartin|ruckmarios|rover, ok, thanks07:27
*** huynq has joined #tripleo07:30
*** errr has joined #tripleo07:31
*** ssbarn___ has joined #tripleo07:33
quiquelljaosorior: Looks like that's what makes Depends-On work out of the box for tripleo; if you have different queues you have to wait until dependencies are merged07:33
Tenguchandankumar: btw, is your patch for tempest in container working now?07:34
jaosoriorquiquell: oh, got it07:34
quiquelljaosorior: But maybe we prefer to add a little manual to not break gates so often ?07:34
quiquelljaosorior: Puff don't know07:35
jaosoriorquiquell: I suggest you send a mail and bring it up to the wider community07:35
quiquelljaosorior: openstack-dev ?07:35
jaosorioryeah07:36
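
For readers skimming the gate-queue discussion above: the cross-project mechanism being relied on is Zuul's Depends-On commit-message footer. A minimal, illustrative commit message is shown below; the review URL is a placeholder, not a change from this log.

    Example change in one tripleo project that needs a change in another

    Depends-On: https://review.openstack.org/#/c/999999/
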
*** shyamb has quit IRC07:43
*** yolanda has joined #tripleo07:44
yolandahi, good morning... i have a problem with a pair of patches, going on for some time. They don't pass RDO third party ci07:44
yolandais there some problem?07:44
jaosorioryolanda: yes, there is currently, but it's being fixed07:46
yolandaok jaosorior , do you have some estimate? this patch is important for us as the images are not working07:49
yolandaand thx for feedback!07:50
*** kopecmartin|ruck has quit IRC07:51
*** udesale has joined #tripleo07:51
*** fhubik has joined #tripleo07:57
jaosoriormarios|rover: have all the issues with ovb been sorted out? is the TLS fix the last one needed?07:58
*** jpich has joined #tripleo07:59
huynqhi.. i'm new in tripleo. I'm tracking the status of Undercloud OS upgrade from https://etherpad.openstack.org/p/tripleo-upgrade-squad-status08:00
jaosoriorhuynq: OpenStack upgrade or Operating System upgrade?08:00
huynqI do not see any change from Sep 25. How's it going?08:00
marios|roverjaosorior: should be or at least all known issues... latest run reported at cistatus for tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master08:01
jaosoriormarios|rover: crap... it timed out :(08:01
*** kopecmartin has joined #tripleo08:01
*** kopecmartin is now known as kopecmartin|ruck08:01
marios|roverjaosorior: is the ara issue http://logs.rdoproject.org/00/605200/7/openstack-check/tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/38659d4/logs/undercloud/home/zuul/install-undercloud.log.txt.gz but that is from 2 days ago08:01
jaosoriormarios|rover: but we disabled ara already, right?08:02
chandankumarTengu: fixed some path issue08:02
huynqjaosorior: Operating System upgrade08:02
marios|roverjaosorior: right it https://bugs.launchpad.net/tripleo/+bug/1796764 and https://review.openstack.org/#/c/606986/ for that trace ^^ but the trace is from 9th08:02
openstackLaunchpad bug 1796764 in tripleo "undercloud install failing for containers-multinode in the gate "OperationalError: (sqlite3.OperationalError) database is locked "" [Critical,Fix released]08:02
*** amoralej has joined #tripleo08:04
*** cylopez has joined #tripleo08:05
Tenguchandankumar: saw that. lemme know if you have issues :)08:08
*** skramaja has joined #tripleo08:11
*** ssbarnea has quit IRC08:13
*** bogdando has joined #tripleo08:14
*** kopecmartin has joined #tripleo08:14
*** kopecmartin|ruck has quit IRC08:15
*** kopecmartin is now known as kopecmartin|ruck08:15
chandankumarTengu: I need to rewrite the container script part08:16
chandankumarTengu: I am tired of workaround08:16
Tengu:)08:16
Tenguchandankumar: may I help?08:16
chandankumarTengu: nana, I will update my script and let you know, but help will be needed08:17
Tenguchandankumar: just ping me ;)08:17
*** cylopez has left #tripleo08:21
*** fhubik has left #tripleo08:21
*** huynq has quit IRC08:23
*** huynq has joined #tripleo08:24
*** huynq_ has joined #tripleo08:25
*** huynq_ has quit IRC08:25
*** shyamb has joined #tripleo08:27
*** huynq has left #tripleo08:28
Tenguhmmm.08:29
TenguI'm pretty sure I'll get some post_failure in the CI.....08:29
Tenguor timeout.08:29
Tenguquiquell: you were talking about time out increasing a bit earlier?08:29
quiquellTengu: nah forget about it, I was not filtering by failing jobs08:30
Tenguah, ok.08:30
Tenguwell....08:30
*** huynq has joined #tripleo08:31
Tenguah. one last test to finish - might succeed in the end.08:36
Tengu\o/08:36
Tengufu... time_out08:37
jaosorior:(08:37
* Tengu wants his patch for Standalone, Podman and SELinux08:39
*** derekh has joined #tripleo08:42
*** dciabrin has joined #tripleo08:43
*** dtantsur|afk is now known as dtantsur|08:44
*** dtantsur| is now known as dtantsur08:44
*** ssbarnea has joined #tripleo08:49
huynqjaosorior: Could you give me some information or related references?08:52
*** ssbarnea has quit IRC08:56
mandreshardy: regarding your comment on https://review.openstack.org/#/c/609614/, why to stable/queens first? this is for rocky08:58
shardymandre: ah sorry -enocoffee moment, will fix09:02
*** panda is now known as panda|mtg09:05
*** ykarel_ has joined #tripleo09:06
mandreshardy: haha no worries, thanks for the review! Do we really mandate related bugs in the commit message for the backports?09:06
*** tosky has joined #tripleo09:06
shardymandre: we generally try to ref stable branch policy but I'm not blocking on it09:06
* mandre should create more bugs09:07
*** ykarel_ is now known as ykarel|lunch09:07
shardyhelps keep track of things that were backported for each point release, and ensures stealth features don't get backported ;)09:07
mandreI mean more bugs in launchpad, not in the code :)09:07
shardyin this case we know it's all backports to make things functional09:07
shardylol :D09:07
shardymandre: question re https://review.openstack.org/#/c/605127/ - are we sure that would never be desired?09:08
*** ssbarn___ has quit IRC09:08
shardypunting disk cleanup to operators unconditionally seems kind of unhelpful, particularly since it'll potentially break repeated test deployments?09:08
shardyor do we expect ironic cleaning to clean secondary disks etc?09:08
*** ykarel has quit IRC09:08
mandreshardy: I think we expect the operators to use openshift-ansible to clean the disks prior to the deployment09:09
mandrethere is a playbook for that09:09
shardymandre: hmm really?  I expected the openshift-ansible stuff to be more under the hood09:09
shardye.g imagine the demo with UI etc, but no, you have to run this bunch of CLI commands first!09:10
shardyetc09:10
shardymandre: not blocking but I was kinda surprised as IIRC that was needed for local redeploys09:10
shardymaybe we could conditionally run the osa cleanup playbook instead?09:11
*** kopecmartin has joined #tripleo09:12
*** jistr is now known as jistr|call09:12
mandreshardy: hmmm IIRC the cleanup step for osa is part of their "undeploy" playbook, if you pass the option to clean the disk09:13
mandreshardy: we could run the undeploy playbook prior to re-deploying openshift but that seems like a bit of overkill to me09:15
mandreidk09:15
shardymandre: ack - trying to remember the details but IIRC the reason we added that was because deploy, then stack delete, then deploy again failed e.g on quickstart nodes and I guess baremetal as well09:15
shardyflaper87: may have more details but I'm just trying to ensure this doesn't make the user/developer experience worse09:16
*** ssbarnea has joined #tripleo09:20
*** gkadam has quit IRC09:25
mandreshardy: it does make the developer experience worse... personally I'm reverting the patch in my environment :-/09:25
mandreso I suppose it's a good reason to make it optional09:25
shardymandre: Ok cool, perhaps we can just add a conditional then?09:28
*** salmankhan has joined #tripleo09:28
shardyI also wonder if it's really safer, e.g we're about to give the disks to gluster which will presumably write all over them09:29
shardyso cleaning things first doesn't seem *that* unreasonable to me ;)09:29
shardymaybe we can have a YesTrashMyDisksPlease: true parameter :)09:30
bogdandoweshay, therve, d0ugal: https://bugs.launchpad.net/tripleo/+bug/1789680/comments/2309:30
openstackLaunchpad bug 1789680 in tripleo "mistral MessagingTimeout correlates with containerized undercloud uptime" [Critical,Triaged] - Assigned to Toure Dunnon (toure)09:30
*** shyamb has quit IRC09:34
*** shyamb has joined #tripleo09:34
d0ugalbogdando: looking09:35
kopecmartin|ruckmarios|rover, sorry if I missed that, is this reported somewhere? https://logs.rdoproject.org/openstack-periodic/git.openstack.org/openstack-infra/tripleo-ci/master/periodic-tripleo-ci-centos-7-multinode-1ctlr-featureset050-upgrades-master/debcdb2/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz#_2018-10-09_18_46_1909:37
*** mburned is now known as mburned_out09:38
*** shyamb has quit IRC09:39
d0ugalbogdando: sounds good, fingers crossed it fixes it :)09:40
marios|roverkopecmartin|ruck: nope not seen that one09:42
*** shyamb has joined #tripleo09:42
kopecmartin|ruckmarios|rover, hm, but if i get the config right, the job is not part of promotion for master09:43
*** jtomasek has joined #tripleo09:46
*** kopecmartin|ruck has quit IRC09:47
*** hjensas has joined #tripleo09:47
*** kopecmartin is now known as kopecmartin|ruck09:47
jaosoriorhjensas: hey, did you ever get novajoin working with config-drive?09:56
honzajtomasek: florianf: i approved all the refactor patches related to named exports last night because i thought it would put undue stress on jirka to have to rebase everything if we found a nit somewhere in the middle09:56
honzajtomasek: florianf: we can fix any issues in follow up patches09:57
jtomasekhonza: thanks, that moves things forward a lot09:58
*** skramaja has quit IRC09:58
*** skramaja_ has joined #tripleo09:58
mandreshardy: what do you think should be the default for cleaning disks? do it or not?09:59
jtomasekhonza: and I'll rebase the last one09:59
shardymandre: I guess a safer default is not to do it, what was the motivation behind removing the tasks - did we get some feedback on that?10:02
shardybut like I said it's kind of strange since they're about to be written on by gluster so ^o^10:03
quiquelliurygregory: Do you know why this is failing https://review.openstack.org/#/c/609300/ ?10:03
shardyperhaps this should really be an osa variable vs something we do in t-h-t10:04
quiquelliurygregory: Feels to me totally unrelated10:04
iurygregoryquiquell, let me take a look10:04
mandreshardy: the current implementation is broken and I was about to "fix" it with a dmsetup remove_all, when said to myself "wait a minute, this is not right"10:04
quiquelliurygregory: verified passed, gate failed10:05
quiquelliurygregory: going to recheck10:05
mandreshardy: asked flaper87 about it who was under the same impression that wiping the disks was only convenience we made for the demo10:05
iurygregoryquiquell, also the past failure was in integration-4-scenario004-tempest-centos-7-mimic i think10:05
shardymandre: ack, well it seems like something useful outside the context of the demo to me, it's not like nobody else ever reuses hardware :)10:06
shardybut perhaps it should be something handled via the cns support in osa instead?10:07
florianfhonza: thanks!10:08
flaper87mandre: shardy yeah, for the demos and dev. Just cleaning and re-creating the disks every time is painful10:09
mandreshardy: yeah... it would be more useful to make the disk wiping optionally part of the cns install playbook (and not just uninstall) in osa10:09
flaper87I'm not a fan of wiping disks ourselves, if we can move this somewhere else, that'd be awesome10:09
mandrethen we can stop implementing it badly and we'll just have to pass a variable10:09
flaper87but definitely not something we should have in the main templates10:09
shardyOk sounds like a plan10:09
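
As context for the cleanup being punted to openshift-ansible: a rough sketch of what running that cleanup by hand might look like. The playbook path and the glusterfs wipe variable are assumptions from memory of openshift-ansible, not confirmed anywhere in this log, so treat it as illustrative only.

    # run openshift-ansible's uninstall playbook against the deployment inventory,
    # asking it to also wipe the block devices that were handed to CNS/gluster
    # (playbook path and variable name are assumptions, not from this log)
    ansible-playbook -i inventory/hosts \
        /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml \
        -e openshift_storage_glusterfs_wipe=true
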
*** gouthamr has quit IRC10:14
*** stevebaker has quit IRC10:16
*** dmellado has quit IRC10:17
*** ykarel|lunch is now known as ykarel10:19
hjensasjaosorior: yes10:19
*** akrivoka has quit IRC10:19
*** akrivoka has joined #tripleo10:20
hjensasjaosorior: with the caching fix that was merged yesterday + https://review.openstack.org/607492 I made several deployments 1 ctrl+1 cmp where nodes successfully enrolled to FreeIPA10:20
jaosoriorhjensas: OK, I'll review your config-drive patch today. thanks!10:22
hjensasjaosorior: Thanks! btw, owalsh made some comments on it, wanting to move to python only etc. I agree with him, but I think that a follow up doing that would be better.10:24
*** salmankhan has quit IRC10:24
*** salmankhan has joined #tripleo10:32
*** shyamb has quit IRC10:35
jaosorioryeah10:44
jaosoriorlets do that on a follow up instead10:44
*** sshnaidm|afk has joined #tripleo10:45
*** jistr|call is now known as jistr11:00
*** roger2 has joined #tripleo11:00
*** shyamb has joined #tripleo11:01
*** sshnaidm|afk is now known as sshnaidm11:02
*** holser_ has joined #tripleo11:03
roger2I successfully got tripleo to deploy the rocky release of undercloud on a baremetal server. Then I followed instructions (https://blogs.rdoproject.org/2018/05/running-tempest-tests-against-a-tripleo-undercloud/) to run tempest tests against the undercloud. I was surprised to see "Failure 122 Skip 62". Is so much failure normal for containerized?11:04
jaosoriorroger2: it isn't. What errors are you getting?11:05
*** shyamb has quit IRC11:06
chandankumarsshnaidm: Hello11:08
chandankumarsshnaidm: https://review.openstack.org/#/c/602347/ you mean commit message comment?11:08
sshnaidmchandankumar, I meant comment in code11:09
chandankumarsshnaidm: ah I can update that11:09
roger2jaosorior: here is the tempest.html I saw: https://drive.google.com/file/d/1Zl1j4LTUsEAsMKsdY9-WfYw1QZh7v3by/view?usp=sharing11:10
jaosoriorchandankumar: does that look familiar? ^^11:10
chandankumarroger2: which openstack release is used and deployment is done using tripleo-quickstart?11:12
chandankumarroger2: from the error _member_ is not found11:12
roger2chandankumar: It was rocky release. It was not with quickstart.11:12
chandankumarroger2: one thing you can do is run: openstack role create _member_11:14
*** shyamb has joined #tripleo11:14
chandankumarroger2: then re-run the test it will work11:14
roger2chandankumar: thank you I will do that11:14
chandankumarroger2: by the way we have a detailed guide here https://docs.openstack.org/tripleo-docs/latest/install/basic_deployment/tempest.html11:14
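
A minimal sketch of the fix chandankumar suggests, assuming the undercloud admin credentials live in ~/stackrc and tempest is re-run from the workspace that produced the earlier failures; the paths and the test regex are illustrative, not taken from roger2's environment.

    # the failing tests expect the legacy "_member_" keystone role to exist (per the error above)
    source ~/stackrc
    openstack role create _member_

    # re-run the previously failing tests from the existing tempest workspace
    cd ~/tempest
    tempest run --regex '(identity|volume)'
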
chandankumarroger2: thank you for trying that out from blog :-)11:15
roger2chandankumar: thank you so much11:15
roger2yw :)11:15
*** ratailor has quit IRC11:15
*** quiquell is now known as quiquell|afk11:15
*** holser_ has quit IRC11:15
*** holser_ has joined #tripleo11:17
*** amoralej is now known as amoralej|lunch11:18
*** roger2 has quit IRC11:18
*** dtantsur is now known as dtantsur|brb11:21
*** rh-jelabarre has joined #tripleo11:21
*** owalsh_away has quit IRC11:22
*** holser_ has quit IRC11:25
*** holser_ has joined #tripleo11:25
*** holser__ has joined #tripleo11:26
*** mburned_out is now known as mburned11:27
*** owalsh has joined #tripleo11:28
*** holser_ has quit IRC11:30
*** morazi has joined #tripleo11:34
*** udesale has quit IRC11:39
*** janki has quit IRC11:41
*** shyamb has quit IRC11:45
*** shyamb has joined #tripleo11:45
*** ramishra has quit IRC11:46
*** holser__ has quit IRC11:49
*** mcornea has joined #tripleo11:50
*** janki has joined #tripleo11:51
weshayjaosorior, if we don't merge https://review.openstack.org/#/c/608589/ we're never getting a promotion11:52
weshayI've been watching that patch for 48hrs11:52
*** cylopez has joined #tripleo11:52
jaosoriorweshay: keeps timing out. Did we disable ARA already? thought that would help11:53
weshayjaosorior, timing out?11:53
jaosoriorweshay: it was about to merge today and it timed out11:53
weshaydang it did..11:53
weshaythat's the first time I've seen it time out actually11:54
weshayk11:54
weshayara is disabled11:54
jaosoriorand then it was rechecked, and the package installation script failed11:54
jaosoriorweshay: http://logs.openstack.org/89/608589/5/check/tripleo-ci-centos-7-containers-multinode/ee2de19/job-output.txt.gz#_2018-10-11_07_06_30_62132811:54
weshayjaosorior, 2hrs to install the undercloud11:56
weshay:(11:56
weshayomg http://logs.openstack.org/89/608589/5/gate/tripleo-ci-centos-7-containers-multinode/aef1db7/logs/undercloud/home/zuul/install-undercloud.log.txt.gz11:57
jaosoriorweshay: a patch to tripleo-common introducing skopeo recently merged.11:58
weshaythis is nuts man11:59
jaosoriorweshay: https://review.openstack.org/#/c/604664/11:59
jaosoriorand now this https://review.openstack.org/#/c/609586/ might be the next iteration11:59
*** abishop has joined #tripleo11:59
*** derekh has quit IRC12:00
jaosorioruhm... although, judging by the logs, the skopeo output only takes 2 mins12:00
jaosoriorso, must be something else12:00
weshaymarios|rover, http://logs.openstack.org/89/608589/5/gate/tripleo-ci-centos-7-containers-multinode/aef1db7/logs/undercloud/home/zuul/install-undercloud.log.txt.gz12:00
*** derekh has joined #tripleo12:00
*** ramishra has joined #tripleo12:01
*** toure|gone is now known as toure12:01
*** jpena|off has joined #tripleo12:02
jaosoriornope, all the stuff we do with the images definitely eats up most of the time there12:02
*** dprince has joined #tripleo12:05
*** gouthamr has joined #tripleo12:05
*** roger2 has joined #tripleo12:06
weshayapetrich, any update https://trello.com/c/cVtSh2HC/759-cixbz1628319osp14regressionovercloud-deployment-received-504-gateway-time-out-from-mistral12:10
*** dmellado has joined #tripleo12:10
*** cylopez has left #tripleo12:10
*** trown|outtypewww is now known as trown12:10
sshnaidmTengu, what are reasons for using ansible-runner in tripleo?12:11
*** quiquell|afk is now known as quiquell|lunch12:11
Tengusshnaidm: what are reasons not to use it?12:11
apetrichweshay, I did a bit of digging yesterday but didn't find anything obvious. I'm trying to spin an env all day but not getting there.12:12
Tengusshnaidm: more seriously - it will provide a stable interface with ansible and allow to do far more than the subprocess thingy, in a python way.12:12
apetrichweshay, oh wait. that was for another bug12:12
Tengusshnaidm: like Tower integration if needed later.12:12
sshnaidmTengu, couldn't you use python interface of ansible directly?12:14
Tengusshnaidm: nope12:14
Tengusshnaidm: unstable API interface :(.12:14
Tengusshnaidm: thought about it, and discussed that as well, apparently ansible-runner is a glue allowing to NOT change the application code in the event the ansible API changes.12:14
sshnaidmTengu, I see12:15
weshayapetrich, k k.. please update the card w/ that info12:15
Tengusshnaidm: I was disappointed about that fact, but still, runner seems to be the "right" way to handle it. It's done by the ansible team, so we can expect nice maintenance and support.12:15
Tengui.e. it's not a project coming from nowhere.12:15
sshnaidmTengu, yeah, it's surprising to see another tool to cover disadvantages of original tool..12:16
Tenguindeed12:16
Tengubut we can understand in some way.12:16
Tenguif the API changes in order to get a faster run or something like that, providing a stable interface with a 3rd party (call it proxy) makes sense.12:17
Tenguthat prevents having to change all the calls in end-user apps. well, it should, at least ;).12:17
apetrichthrash, morning! do you know if https://review.openstack.org/#/c/605633/1 was ported downstream?12:17
Tengusshnaidm: I didn't go further on that topic. When ansible guys tell you "use runner, don't use ansible public API directly", you'd better follow that advice, I think ;)12:18
sshnaidmTengu, well, it'd b weird that API should change for running something fast :)12:18
Tengusshnaidm: who knows ;). but indeed.12:18
sshnaidmTengu, well, ya12:18
Tenguso.. that's the story about integrating ansible-runner.12:18
Tenguand not calling ansible public API directly12:18
Tengualso, runner offers an easier way to interface with Tower iirc12:19
Tenguand that would be of some use, also iirc12:19
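
For readers who have not used ansible-runner, a small sketch of the interface Tengu is describing, using the runner CLI rather than the unstable ansible Python API; the directory layout and playbook are illustrative.

    # ansible-runner consumes a self-contained "private data dir" holding the
    # playbook and inventory, and writes status and per-task artifacts back into it
    mkdir -p /tmp/demo/project /tmp/demo/inventory
    cat > /tmp/demo/project/site.yml <<'EOF'
    - hosts: localhost
      gather_facts: false
      tasks:
        - debug:
            msg: "hello from ansible-runner"
    EOF
    echo 'localhost ansible_connection=local' > /tmp/demo/inventory/hosts

    ansible-runner run /tmp/demo -p site.yml
    # run status and per-task results land under /tmp/demo/artifacts/
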
*** skramaja_ has quit IRC12:19
*** skramaja has joined #tripleo12:19
*** huynq has quit IRC12:20
sshnaidmTengu, ok, thanks, more clear now12:20
Tenguno problem :)12:20
*** stevebaker has joined #tripleo12:22
*** rh-jelabarre has quit IRC12:23
*** ramishra has quit IRC12:26
*** tzumainn has joined #tripleo12:27
*** lblanchard has joined #tripleo12:28
*** ramishra has joined #tripleo12:29
*** fhubik has joined #tripleo12:29
chandankumarTengu: Does standalone not support cinder volumes?12:29
*** fhubik has quit IRC12:30
Tenguchandankumar: hmm it should, according to https://github.com/openstack/tripleo-heat-templates/blob/master/environments/standalone.yaml#L15-L2312:30
Tenguat least Cinder is present.12:30
weshayjaosorior, mwhahaha EmilienM it may make sense to discuss wtf we're doing for 2+ weeks w/o healthy ovb tests12:30
*** rlandy has joined #tripleo12:32
jaosoriorweshay: there is no way to make them voting as it's third party CI, right?12:33
*** lblanchard has quit IRC12:35
fungijaosorior: third-party ci systems can certainly vote. you just need to make sure the acl for the projects is set to allow verified -1..+1 for a group you put the accounts of your ci systems in12:43
fungithere are tons of third-party ci systems voting on changes at review.openstack.org12:44
fungihttps://review.openstack.org/#/admin/groups/tripleo-ci12:45
*** ansmith has joined #tripleo12:46
fungithe "RDO Third Party CI" can currently vote on any projects which allow verified -1..+1 from that group12:46
*** quiquell|lunch is now known as quiquell12:47
fungifor example, looks like it voted verified +1 on https://review.openstack.org/609678 and another third-party ci system (SUSE CI) voted verified -112:48
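
To illustrate fungi's point, a hedged sketch of the kind of stanza that would appear in a project's Gerrit ACL (project.config on refs/meta/config); the group name and ref pattern below are assumptions, the real values come from the project's actual ACL.

    [access "refs/heads/*"]
        # allow the third-party CI group to leave non-gating Verified votes
        label-Verified = -1..+1 group RDO Third Party CI
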
*** amoralej|lunch is now known as amoralej12:54
*** janki has quit IRC13:00
weshayjaosorior, there is.. but so far we want to see a more stable env.. like not rdo-cloud13:01
*** shyamb has quit IRC13:01
weshayrlandy,13:01
*** sai_p has joined #tripleo13:04
*** ykarel has quit IRC13:05
*** ykarel has joined #tripleo13:05
*** holser_ has joined #tripleo13:09
*** holser_ has quit IRC13:09
*** holser_ has joined #tripleo13:10
*** holser_ has quit IRC13:10
*** dmacpher_ has quit IRC13:11
*** holser_ has joined #tripleo13:11
*** holser__ has joined #tripleo13:12
*** dmacpher has joined #tripleo13:12
*** holser_ has quit IRC13:13
*** holser_ has joined #tripleo13:13
*** dtantsur|brb is now known as dtantsur13:15
*** holser__ has quit IRC13:16
*** ansmith has quit IRC13:16
trownshardy: flaper87 could either of you help review patches in https://etherpad.openstack.org/p/tripleo-openshift-patches we really need to get that list trimmed down while CI is green(ish)13:19
mwhahahaweshay: that would be smart why would we do that13:22
*** vinaykns has joined #tripleo13:22
*** ansmith has joined #tripleo13:24
*** mjturek has joined #tripleo13:25
shardytrown: ack will do, started this morning but will revisit now13:25
*** udesale has joined #tripleo13:28
*** dtrainor_ has quit IRC13:30
*** holser_ has quit IRC13:32
*** dtrainor has joined #tripleo13:33
*** jrist has joined #tripleo13:33
beaglesanybody know where the ::uuid fact is configured? I'm getting a "Warning: Unknown variable: '::uuid'. at /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp:85:8", error when deploying undercloud13:36
*** skramaja has quit IRC13:36
*** skramaja_ has joined #tripleo13:37
*** psachin has quit IRC13:38
mariosmwhahaha: EmilienM did i see an email/review from either of you this morning about the 3 node job or am i making it up13:39
roger2I'm trying to remove 'rocky' so I can deploy 'queens' since 'rocky' isn't working. I ran "set -x ; sudo yum -y remove python-tripleoclient ceph-ansible ; sudo -E tripleo-repos -b queens current ceph ; sudo yum clean all ; sudo rm -rf /var/cache/yum ; sudo yum -y install python-tripleoclient ceph-ansible ; set +x". But now if I do "openstack tripleo container image prepare default --local-push-destination --output-env-file ~/containe13:39
roger2...it returns error saying it "is not an openstack command." Please advise.13:39
mwhahahamarios: we removed it?13:40
shardyroger2: python-tripleoclient provides that openstackclient plugin, and you removed it13:41
roger2shardy: I removed rocky version and then tried to install queens version13:41
shardyroger2: ah the command is openstack overcloud image prepare...13:42
*** mjturek has quit IRC13:42
shardysorry sec13:43
shardyopenstack overcloud container image prepare13:43
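
A hedged sketch of the queens-era form of that command; the flags are recalled from the queens docs and the registry address is the usual undercloud default, so verify them with openstack overcloud container image prepare --help on the installed client.

    # queens client: "overcloud" in the command name, explicit namespace/tag,
    # and separate output files for the heat env parameters and the image list
    openstack overcloud container image prepare \
        --namespace docker.io/tripleoqueens \
        --tag current-tripleo \
        --push-destination 192.168.24.1:8787 \
        --output-env-file ~/containers-default-parameters.yaml \
        --output-images-file ~/overcloud_containers.yaml
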
mariosmwhahaha: ack (I know that :)) i was looking for the email/review in my inbox couldn't find it k thanks13:44
*** mjturek has joined #tripleo13:45
mwhahahamarios: i think it all merged let me check13:46
mariosmwhahaha: so in http://cistatus.tripleo.org/ it last ran last night so...13:47
mwhahahamarios: queens is in the gate, https://review.openstack.org/#/c/608324/13:47
mariosmwhahaha: thanks13:47
marios|roverkopecmartin|ruck: weshay so we can close this one out https://bugs.launchpad.net/tripleo/+bug/1786520 though its a valid bug the job no longer exists :)13:49
openstackLaunchpad bug 1786520 in tripleo "3node jobs failing due to missing file UpgradeInitDeployment" [Critical,Triaged]13:49
roger2shardy: thank you. I'm making progress, but the arguments seem different with older command. Is this fine to create local push registry? "openstack overcloud container image prepare --output-env-file ~/containers-prepare-parameter.yaml"13:50
marios|roverkopecmartin|ruck: i was looking into it yesterday http://logs.openstack.org/33/605033/3/check/tripleo-ci-centos-7-3nodes-multinode/f3ca3e2/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz#_2018-10-10_08_19_41 it is/was still happening but anyway gonna comment and close the bug13:50
weshaymarios|rover, thanks13:52
kopecmartin|ruckmarios|rover, ok, thanks13:52
marios|roverweshay: mwhahaha https://review.openstack.org/#/c/609708/13:53
marios|roverweshay: ah we still have it for tq13:54
jaosoriormarios|rover: seems it needs to be removed from somewhere else too:   Job tripleo-ci-centos-7-3nodes-multinode not defined13:54
marios|roverjaosorior: thanks posting13:56
shardyroger2: I think containers-prepare-parameter.yaml is the input -e option, https://review.openstack.org/#/c/599462/ may help as it shows the diff between the old/new commands13:56
*** Vorrtex has joined #tripleo13:56
trownmandre: any interest in looking at a 3.11 deploy today? I have an environment redeploying now with manually built 3.11 packages ... I was looking at it yesterday and it seems to fail because of an SSL issue13:56
*** skramaja_ has quit IRC13:56
shardyroger2: it was mentioned recently that we should probably include those details as stable branch annotations in the docs but that's not yet happened unfortunately13:57
marios|roverweshay: jaosorior mwhahaha: https://review.openstack.org/#/c/609710/13:57
weshayjaosorior, mwhahaha fyi.. imho once https://review.openstack.org/608589 gets back into the gate queue we should prioritize and then pray13:58
marios|roverjaosorior: will update that other review too thanks for checking it13:59
jaosoriorweshay: agreed.13:59
*** quiquell is now known as quiquell|off13:59
*** mjturek has quit IRC14:00
marios|roverjaosorior: oh i think the depends-on should do it, at least i can't see anything else that needs removing at https://review.openstack.org/#/c/609708/214:02
mandretrown: yeah me too would like to know how bad/easy it is to switch to 3.11, this is something mfedosin started looking into14:02
*** ssbarne__ has joined #tripleo14:03
marios|roverweshay: haha 'fix released'14:04
weshaylol14:05
weshayf it14:05
*** rh-jelabarre has joined #tripleo14:06
*** mjturek has joined #tripleo14:07
roger2shardy: thx for your help. unfortunately undercloud deploy failed anyway, "(os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1]". So I think I will reinstall fresh CentOS and start over.14:09
*** panda|mtg has quit IRC14:12
*** lblanchard has joined #tripleo14:14
*** panda has joined #tripleo14:14
*** jrist has quit IRC14:20
*** artom has quit IRC14:27
weshayssbarnea, where is your patch to handle python virtenv across python2.7/314:32
*** mcornea_ has joined #tripleo14:33
*** cylopez has joined #tripleo14:34
vinayknshello channel...need review for https://review.openstack.org/#/c/600843/ & https://review.rdoproject.org/r/#/c/16848/14:34
*** mcornea has quit IRC14:35
*** jtomasek has quit IRC14:36
*** lblanchard has quit IRC14:36
*** artom has joined #tripleo14:42
weshayjaosorior, mwhahaha failed in check14:43
mwhahahawhat did14:43
weshayhttps://review.openstack.org/60858914:44
weshayhttp://logs.openstack.org/89/608589/5/check/tripleo-ci-centos-7-scenario003-multinode-oooq-container/317ee0f/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz#_2018-10-11_13_58_4014:44
weshaydam it14:44
weshayd0ugal, toure ^14:44
d0ugalweshay: Looking.14:45
* toure looks14:46
weshaytoure, do we just need to bypass the promotion and get the latest client in current-tripleo?14:47
weshayor is this a legit timeout and we have some infra issue?14:47
weshaybogdando, had an update on that bug14:47
* toure digging through logs14:48
bogdandowhat bug14:48
weshayhttps://bugs.launchpad.net/tripleo/+bug/1789680/comments/2314:48
openstackLaunchpad bug 1789680 in tripleo "mistral MessagingTimeout correlates with containerized undercloud uptime" [Critical,Triaged] - Assigned to Toure Dunnon (toure)14:48
weshaybogdando, you think that timeout is caused by a sighup?14:48
weshaymarios|rover, fyi ^14:49
mwhahahano we reverted that code14:51
mariosweshay: ack14:51
mwhahahaand that was in puppet-tripleo  so it should be in place14:51
*** cylopez has left #tripleo14:52
bogdandoweshay: aye14:55
bogdandomwhahaha: waat14:56
bogdandowe did not14:56
bogdandoit was only merged this night14:56
toureweshay this isn't the bug I fixed14:56
tourethis is a timeout issue14:56
mwhahahabogdando: we reverted the sighup stuff didn't we?14:56
bogdandothe copytruncate fallback, should fix those mistrals failing14:56
bogdandomwhahaha: ah, yes, but I basically put all links in that comment14:56
bogdandohttps://bugs.launchpad.net/tripleo/+bug/1789680/comments/2314:57
openstackLaunchpad bug 1789680 in tripleo "mistral MessagingTimeout correlates with containerized undercloud uptime" [Critical,Triaged] - Assigned to Toure Dunnon (toure)14:57
d0ugalweshay, toure: I am seeing some strange swift errors I've not seen before...14:58
*** ramishra has quit IRC14:58
d0ugalweshay, toure: Couple of odd tracebacks in here: http://logs.openstack.org/89/608589/5/check/tripleo-ci-centos-7-scenario003-multinode-oooq-container/317ee0f/logs/undercloud/var/log/containers/mistral/executor.log.1.gz14:58
toured0ugal yeah I have seen those, I originally thought there was a network latency issue14:59
d0ugaland in the swift logs... http://logs.openstack.org/89/608589/5/check/tripleo-ci-centos-7-scenario003-multinode-oooq-container/317ee0f/logs/undercloud/var/log/containers/swift/swift.log.txt.gz#_Oct_11_12_58_4114:59
*** openstackgerrit has joined #tripleo14:59
d0ugalOct 11 12:58:41 undercloud proxy-server: ERROR with Object server 192.168.24.1:6000/d1 re: Trying to get final status of PUT to /v1/AUTH_0366a5269e6c449e9837b4ffaae17fa5/overcloud-messages/tripleo.deployment.v1.config_download_deploy/2018-10-11_12%3A57%3A04.yaml: Timeout (60.0s) (txn: tx2f17ed392da640f18e432-005bbf48a0)14:59
d0ugalOct 11 12:58:41 undercloud proxy-server: Object PUT returning 503 for [503] (txn: tx2f17ed392da640f18e432-005bbf48a0) (client_ip: 192.168.24.1)14:59
weshayd0ugal, not sure if it's related exactly, but fyi https://trello.com/c/0EO9OsKh/775-cixlp1797167tripleociproa-swift-timeout-during-undercloud-deploy15:00
d0ugalweshay: Yeah, that is the timeout I spotted yesterday (was it yesterday?) - I think this is different15:00
weshayk15:00
d0ugalI'm not sure what the error in the swift log means15:00
toured0ugal it does state that there was a timeout hit15:02
tourehttp://logs.openstack.org/89/608589/5/check/tripleo-ci-centos-7-scenario003-multinode-oooq-container/317ee0f/logs/undercloud/var/log/containers/swift/swift.log.txt.gz#_Oct_11_12_58_4115:02
d0ugaltoure: oh, I thought it looked different when I seen the timeout yesterday15:03
d0ugalbut you could well be correct, I'm not very familiar with swift logs15:04
apetrichd0ugal, toure Object replication complete. (1.64 minutes) seems a bit excessive15:04
*** artom has quit IRC15:04
toured0ugal I am digging through the swift code base to see where that timeout lives15:04
d0ugaltoure: ah, I see it is the same now15:05
d0ugalweshay: so yeah, this is the same as the error I found before15:05
toureah15:05
toureso we hit the 6mins mark15:05
weshaywhy though?15:06
*** ssbarne__ has quit IRC15:07
openstackgerritMichael Bayer proposed openstack/puppet-tripleo master: Implement Global Galera database  https://review.openstack.org/60973415:07
d0ugaltoure: I thought it was quicker. Where do you see that?15:08
toured0ugal sorry yeah it looks like 98s15:08
toureor somewhere around that15:08
d0ugaltoure: also, note the end of the message "Timed out waiting for messages from Execution (ID: c5cee15a-93da-41b3-87f9-02d914ddba0a, State: ERROR). The Workflow errored and no messages were received."15:08
openstackgerritBogdan Dobrelya proposed openstack/tripleo-quickstart master: Update docs for building libvirt images modes  https://review.openstack.org/60973715:08
chandankumartbarron: Hello15:09
openstackgerritMichael Bayer proposed openstack/tripleo-specs master: Global Galera Database  https://review.openstack.org/60055515:09
tbarronchandankumar: running upstream manila community meeting atm15:09
tbarronchandankumar: but go ahead, I just may be slow15:09
d0ugaltoure: That means it gave up waiting, and then checked the API for the execution status. so it failed before the client timeout15:09
toured0ugal right15:09
trownmandre: I have an 3.11 env up if you want to take a look15:11
chandankumartbarron: regarding this review https://review.openstack.org/#/c/609550/ -> currently we are keeping the manila related tempest config in the featureset (which is used generally in ci). In order to generate tempest.conf we have a tool, python-tempestconf; can we add all the required code in the tool itself so that other people can benefit and generate tempest.conf easily for manila and run manila tempest tests15:11
*** artom has joined #tripleo15:11
gouthamragreed chandankumar, but i don't want to yank out the tests right away15:11
gouthamri.e, can they keep running while we address the tempestconf?15:12
chandankumargouthamr: since we are going to unskip the tests, we need to copy manila tempest conf on fs020 and fs021 side15:13
chandankumargouthamr: as there it will also get triggered15:13
gouthamrchandankumar: manila isn't deployed in those featuresets15:13
*** sai_p has quit IRC15:13
chandankumarweshay: ^^15:13
gouthamrchandankumar: but you're saying the plugin is installed?15:13
openstackgerritMichael Bayer proposed openstack/tripleo-heat-templates master: Implement Global Galera database  https://review.openstack.org/60973815:14
openstackgerritBen Nemec proposed openstack/tripleo-heat-templates master: Add sample designate environment for ha  https://review.openstack.org/58402615:15
openstackgerritBen Nemec proposed openstack/tripleo-heat-templates master: Split designate envs  https://review.openstack.org/58453215:15
openstackgerritBen Nemec proposed openstack/tripleo-heat-templates master: Add /v2 suffix to Designate uris  https://review.openstack.org/58588215:15
openstackgerritBen Nemec proposed openstack/tripleo-heat-templates master: Set correct project name for designate-neutron integration  https://review.openstack.org/58590215:15
chandankumargouthamr: we are using tempest container to run tempest tests15:15
*** artom has quit IRC15:16
weshaychandankumar, gouthamr manila is scenario04 or fs01915:16
chandankumarin fso9 and fs01815:16
chandankumarweshay: is manila configured on fs020?15:16
*** jfrancoa has quit IRC15:16
gouthamrweshay: +115:16
weshayhttp://logs.openstack.org/50/609550/1/check/tripleo-ci-centos-7-scenario004-multinode-oooq-container/35f540c/logs/tempest.html.gz15:17
weshaychandankumar, /me checks15:17
chandankumarweshay: the thing is that we have a centralized skip list and tempest is runned from container at multiple places15:17
weshayright..s ec15:18
weshaysec15:18
chandankumarweshay: if we unskip some tests it will start breaking in some places if the proper test_regex is not passed15:18
chandankumarfs020 is going to be affected15:18
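The skip-list problem being discussed comes down to how tempest is invoked inside the container. A minimal sketch, assuming tempest run's --regex and --blacklist-file options (the regex and skip-file path are made-up examples, not the actual CI values):

    # run only the manila API tests, skipping anything in the shared skip list
    tempest run --regex 'manila_tempest_tests.tests.api' \
        --blacklist-file /home/stack/skip_list.txt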
*** artom has joined #tripleo15:19
*** openstackgerrit has quit IRC15:22
gouthamrfs020 doesn't deploy manila, how will it be affected?15:24
dpeacockis it just me or is zuul.openstack.org slow today?15:24
weshaychandankumar, don't think so.. just blacklist in fs2015:24
*** mugsie has joined #tripleo15:24
chandankumargouthamr: weshay: https://review.openstack.org/60974115:25
chandankumaronce this is merged we are good to go with your patch15:25
chandankumarotherwise we will break stuff15:25
*** huynq has joined #tripleo15:25
weshayd0ugal, toure updates should go into https://trello.com/c/UbVy6G7k/776-overcloud-deployment-received-504-gateway-time-out-from-mistral15:26
weshaychandankumar, you want that on fs20 and fs21 right?15:26
chandankumarweshay: sorry, review updated15:26
chandankumarweshay: fs020 only15:26
weshayhrm..15:26
weshaysure?15:26
toureweshay: ack15:27
chandankumarweshay: fs021 is a job without a skip list15:27
chandankumarweshay: since we are working towards full tempest, I think we need a fs where we can have all the stuff tested15:27
*** mcornea_ has quit IRC15:28
chandankumarincluding manila + sahara, not as a separate fs15:28
*** huynq has quit IRC15:28
marioshow can /#/c/608589/ not be merged yet, weshay? did you say your prayer yet? bandini, what kinda hex did you put on that patch?15:29
*** mcornea has joined #tripleo15:29
*** dprince has quit IRC15:31
*** chandankumar is now known as chkumar|off15:32
weshaymarios, it failed on the mistral issue that I'm pinging toure d0ugal and bogdando about15:32
weshaymarios, https://trello.com/c/UbVy6G7k/776-overcloud-deployment-received-504-gateway-time-out-from-mistral15:32
*** Chaserjim has joined #tripleo15:32
marios|roverthanks weshay and :/15:32
bogdandoweshay: we'll need to have that https://review.openstack.org/#/c/607491/ puppet-tripleo promoted somehow,  to unblock mistral things, right?15:33
bogdandoIIUC, that failed build of /#/c/608589/ does not have that fix in puppet modules?15:34
weshaybogdando, puppet changes should come through automatically15:34
bogdandohm15:34
weshaybogdando, no need for promotion15:34
bogdandothen, that means my assertion failed and mistral breaks in a different way15:34
bogdandonot cuz of logrotate15:35
d0ugaltoure: I am going to add a retry to that Swift upload. Not sure if it will help... but maybe?15:35
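d0ugal's actual change is the tripleo-common review he links a few lines below; purely as a sketch of the "retry the Swift upload" idea using the swift CLI (the container and object names here are hypothetical):

    # retry a flaky upload a few times before giving up
    for attempt in 1 2 3; do
        swift upload overcloud-artifacts plan.tar.gz && break
        echo "upload failed, retrying (attempt ${attempt})" >&2
        sleep 5
    done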
bogdandoI'll check the logs...15:35
mcorneamwhahaha: weshay do we have any gate job that deploys tripleo allinone?15:36
mwhahahamcornea: the standalone?15:37
* mwhahaha is unsure what you are referring to15:37
bogdandoweshay, d0ugal, therve: mmh, that http://logs.openstack.org/89/608589/5/check/tripleo-ci-centos-7-scenario003-multinode-oooq-container/317ee0f/logs/undercloud/var/log/containers/ does not have logrotate container at all, so nothing even attempts to send signals or do copytruncate15:37
bogdandoI thought we always deploy logrotate container in CI15:37
mcorneamwhahaha: yeah15:37
d0ugalbogdando: Yeah, I don't think this most recent failure is related to logrotate.15:37
bogdandookay15:38
mwhahahamcornea: the standalone job fs 52 or something15:38
mwhahahamcornea: https://github.com/openstack-infra/tripleo-ci/blob/master/zuul.d/standalone-jobs.yaml#L6-L1315:39
d0ugalhttps://review.openstack.org/60974615:40
d0ugaltoure: ^ That could possibly help... but ¯\_(ツ)_/¯15:40
*** yprokule has quit IRC15:44
weshaygouthamr, update your patch w/ the depends to chkumar|off 's patch15:45
weshaythen I can +215:46
gouthamrweshay: ack, will do15:46
gouthamrweshay chkumar|off: ty!15:46
roger2In the undercloud.conf, should I set "masquerade" to true or to false? I was thinking "true", but I was surprised it is set to "false" by default. So I'm a little confused about it.15:46
mwhahaharoger2: depends on if you want the overcloud nodes to use the undercloud for internet access, you likely will want true15:48
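To make mwhahaha's answer concrete: on Queens and later, masquerading is a per-subnet boolean in undercloud.conf, so enabling it looks roughly like this before (re)running the install (assuming the default ctlplane-subnet section name):

    # undercloud.conf -- let overcloud nodes reach the outside world via the undercloud
    [ctlplane-subnet]
    masquerade = true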
tbarronchkumar|off++15:49
tbarronweshay++15:49
tbarrongouthamr++15:49
tbarronkarma happens even w/o a bot15:49
*** jrist has joined #tripleo15:50
*** mjturek has quit IRC15:51
weshay2018-10-11 12:07:27 | time="2018-10-11T12:07:27Z" level=fatal msg="pinging docker registry returned: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 127.0.0.1:53: server misbehaving"15:51
weshayhttp://logs.openstack.org/49/585649/15/check/tripleo-ci-centos-7-standalone/81f644c/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz#_2018-10-11_12_07_2715:51
roger2mwhahaha: yeah, I don't have a separate gateway, so "true" it is then. thank you.15:52
*** bogdando has quit IRC15:53
*** rh-jelabarre has quit IRC15:55
*** jrist has quit IRC15:59
*** dtantsur is now known as dtantsur|afk16:00
chem`Tengu: hey, I've found this issue during ansible-lint of the generated ansible playbook https://bugs.launchpad.net/tripleo/+bug/1797412 and I've uploaded the associated review https://review.openstack.org/#/c/609754/16:03
openstackLaunchpad bug 1797412 in tripleo "Wrong syntax for import_role in the ansible part of the templates." [Critical,In progress] - Assigned to Sofer Athlan-Guyot (sofer-athlan-guyot)16:03
chem`Tengu: I've noticed that you made that import_role change, so I thought I'd tell you, as I may have missed something here16:04
trozetcan someone please help review https://review.openstack.org/#/c/609438/ ? thanks16:04
gouthamrweshay: done wrt https://review.openstack.org/#/c/609550 , ty again for the quick feedback!16:09
*** rpioso has joined #tripleo16:11
*** ksambor has quit IRC16:14
*** mburned is now known as mburned_out16:14
*** morazi has quit IRC16:16
*** panda has quit IRC16:20
*** sri_ has quit IRC16:21
*** panda has joined #tripleo16:21
*** agopi is now known as agopi|brb16:22
*** vinaykns has quit IRC16:24
*** trown is now known as trown|lunch16:34
*** gouthamr has quit IRC16:34
*** stevebaker has quit IRC16:34
*** dmellado has quit IRC16:34
*** abregman has joined #tripleo16:36
*** jpich has quit IRC16:38
*** mjturek has joined #tripleo16:42
*** gouthamr has joined #tripleo16:43
*** agopi|brb is now known as agopi16:57
*** derekh has quit IRC16:59
*** rdopiera has quit IRC16:59
*** mjturek has quit IRC17:01
*** ykarel_ has joined #tripleo17:06
*** ykarel has quit IRC17:07
*** salmankhan1 has joined #tripleo17:07
*** salmankhan1 has quit IRC17:09
*** salmankhan has quit IRC17:09
*** udesale has quit IRC17:11
Tenguchem`: hu? how could it pass the CI ??17:11
*** russellb has joined #tripleo17:12
russellbis there a simple way to clean up a standalone containers deployment?17:12
Tenguchem`: that's.... strange, to say the least :/.17:13
Tenguso now I have to find out why I used "role" (or "roles"), I didn't invent that param :/.17:13
shardyrussellb: you can docker kill/docker rm all containers, but if you update the images and re-run it should restart them all anyway17:14
russellbshardy: OK, that's what i did, and the new deployment failed in the middle, so was wondering if i just didn't clean up fully17:14
shardyrussellb: ack, yeah we don't really have any automated cleanup atm, the idea is you can always modify the config and re-apply it to move forward/update17:15
russellbok17:15
toured0ugal will check it out17:15
shardyrussellb: some state like the DB is mounted from the host so killing the containers might not be enough17:15
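A rough sketch of the manual cleanup shardy describes (there is no automated teardown; the state paths are typical host-mounted locations and may differ per setup):

    # remove every container on the host -- fine on a throwaway standalone node
    docker ps -a -q | xargs -r docker rm -f
    # host-mounted state (e.g. the database) survives container removal;
    # wipe it only for a truly clean slate (paths are typical defaults)
    sudo rm -rf /var/lib/config-data /var/lib/mysql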
russellbshardy: do the images get updated automatically?17:15
toured0ugal so you think we are dropping the client connection or I should say letting it timeout?17:16
toured0ugal I will c17:17
toured0ugal I will cherry pick the patch and run another deployment, as I think I have a local reproducer17:17
roger2My baremetal containerized undercloud installed successfully. So, ‘queens’ wins my vote for stable release.17:17
Tenguchem`: ahhh, ok, so it's ansible lint failing while apparently "role" is working, probably backward compat with a really old version... Can't see other reason :/. Anyway, good catch.17:18
shardyrussellb: we set tag_from_label: rdo_version in containers-prepare-parameters.yaml so I think so, but the standalone workflow is a bit different to the overcloud one now17:19
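For reference, the tag_from_label setting shardy mentions is part of the ContainerImagePrepare parameter; a minimal illustration of the shape of containers-prepare-parameters.yaml (namespace, prefix and tag values are examples, not necessarily what CI uses):

    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: true
        set:
          namespace: docker.io/tripleomaster
          name_prefix: centos-binary-
          tag: current-tripleo
        tag_from_label: rdo_version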
shardytbh I got my env working and didn't dare touch it but I can try if needed :)17:19
russellbno no, don't break it!  :)17:20
*** rh-jelabarre has joined #tripleo17:20
*** trown|lunch is now known as trown17:34
*** vinaykns has joined #tripleo17:41
*** shardy has quit IRC17:47
*** amoralej is now known as amoralej|off17:57
*** ansmith has quit IRC18:04
*** lblanchard has joined #tripleo18:16
*** ansmith has joined #tripleo18:20
*** gouthamr_ has joined #tripleo18:25
*** dmellado has joined #tripleo18:27
*** openstackstatus has quit IRC18:28
*** Chaserjim has quit IRC18:30
*** openstackstatus has joined #tripleo18:31
*** ChanServ sets mode: +v openstackstatus18:31
*** dprince has joined #tripleo18:31
*** panda has quit IRC18:35
*** panda has joined #tripleo18:38
*** agurenko has quit IRC18:38
*** stevebaker has joined #tripleo18:42
*** openstackgerrit has joined #tripleo18:44
openstackgerritBrent Eagles proposed openstack/tripleo-heat-templates master: WIP: make sure octavia flavor gets configured if composable octavia  https://review.openstack.org/60969618:44
*** gouthamr has quit IRC18:45
*** chem` has quit IRC18:52
*** ykarel_ has quit IRC18:54
*** ykarel_ has joined #tripleo18:55
*** rh-jelabarre has quit IRC19:02
*** ansmith has quit IRC19:05
roger2Do you know how to fix "Cannot validate PXE bootloader"? I just did "openstack overcloud node introspect --all-manageable --provide" and introspection completed, but because I have "clean_nodes = true" in my undercloud.conf, it hit this error.19:06
roger2[{u'result': u'Node 6ec17ca3-9eea-438b-888c-8bc9c85ef40f did not reach state "available", the state is "clean failed", error: Failed to prepare node 6ec17ca3-9eea-438b-888c-8bc9c85ef40f for cleaning: Cannot validate PXE bootloader. Some parameters were missing in node\'s driver_info. Missing are: [\'deploy_ramdisk\', \'deploy_kernel\']'},...19:06
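Nobody picks this up in-channel, but the error itself says the node's driver_info is missing deploy_kernel and deploy_ramdisk. One common way to populate them, assuming the overcloud deploy images are already uploaded (the image UUIDs below are placeholders):

    # let tripleoclient fill in deploy_kernel/deploy_ramdisk for all manageable nodes
    openstack overcloud node configure --all-manageable
    # or set them by hand on a single node
    openstack baremetal node set 6ec17ca3-9eea-438b-888c-8bc9c85ef40f \
        --driver-info deploy_kernel=<kernel-image-uuid> \
        --driver-info deploy_ramdisk=<ramdisk-image-uuid>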
roger2How do I change the undercloud to "clean_nodes = false"?19:23
roger2I assume I redeploy the undercloud with that setting, but if there is a better way, I'm listening.19:24
*** rdopiera has joined #tripleo19:31
*** dciabrin has quit IRC19:32
roger2too late. already redeploying undercloud.19:35
*** salmankhan has joined #tripleo19:36
*** salmankhan has quit IRC19:38
*** salmankhan1 has joined #tripleo19:38
mwhahaharoger2: you should have been able to just rerun the undercloud install (or upgrade) and it would have applied the config change19:38
*** salmankhan1 is now known as salmankhan19:40
roger2mwhahaha: thank you. So I removed clean_nodes setting from undercloud.conf and now I'm running openstack undercloud install again. I think that's what you just said. So... yay.19:43
mwhahahayea that should work19:44
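Spelled out, the flow mwhahaha and roger2 settle on is just an edit plus a re-run (clean_nodes lives in the [DEFAULT] section of undercloud.conf):

    # undercloud.conf
    [DEFAULT]
    clean_nodes = false

    # re-running the installer re-applies the configuration in place
    openstack undercloud install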
roger2kthx19:44
roger2I have to go away now for several days. Thanks to everyone. Bye!19:45
*** roger2 has quit IRC19:47
openstackgerritRonelle Landy proposed openstack/tripleo-quickstart-extras master: DNM - Testing changes to https://review.openstack.org/#/c/591652  https://review.openstack.org/60979819:53
*** ansmith has joined #tripleo20:09
openstackgerritSteve Baker proposed openstack/tripleo-quickstart-extras master: Switch the undercloud to deploy Podman by default starting from Stein  https://review.openstack.org/60845220:13
*** tosky__ has joined #tripleo20:16
*** tosky has quit IRC20:17
openstackgerritRonelle Landy proposed openstack/tripleo-quickstart-extras master: DNM - Testing changes to https://review.openstack.org/#/c/591652  https://review.openstack.org/60979820:19
openstackgerritMerged openstack/tripleo-upgrade master: Replace RabbitMQ with OsloMessaging(Rpc|Notify)  https://review.openstack.org/60863720:20
openstackgerritMerged openstack/tripleo-heat-templates stable/pike: [Ocata/Pike] Pass DeployIdentifier in upgrade tasks.  https://review.openstack.org/60608920:20
openstackgerritMerged openstack/paunch master: Set standard mode for systemd generated unit files  https://review.openstack.org/60864420:20
*** tosky__ is now known as tosky20:23
openstackgerritNicolas Hicher proposed openstack-infra/tripleo-ci master: provider: Add vexxhost  https://review.openstack.org/59643220:23
*** rdopiera has quit IRC20:26
openstackgerritRonelle Landy proposed openstack/tripleo-quickstart-extras master: DNM - Testing changes to https://review.openstack.org/#/c/591652  https://review.openstack.org/60979820:34
*** pcaruana has quit IRC20:40
*** ansmith has quit IRC20:41
weshayoh HI openstackgerrit20:44
weshaywelcome back20:44
*** dprince has quit IRC20:45
mwhahahayea they merged a fix for the auth problems causing openstackgerrit's absence20:47
mwhahahaso hopefully it will not leave again :D20:47
*** akrivoka has quit IRC20:53
openstackgerritSteve Baker proposed openstack/tripleo-common master: WIP replace skopeo inspect with python  https://review.openstack.org/60958620:54
mwhahahaweshay: sooo what's the outstanding bugs for the OVB jobs?20:55
mwhahahaweshay: is it just the tls thing or is there more20:55
weshaymwhahaha, I'm sure there are more by now.. but it's hard to tell w/o https://review.openstack.org/#/c/608589/20:55
mwhahahak20:55
weshaymwhahaha, atm we're only getting the undercloud logs in ovb20:56
weshaydue to $cloud20:56
weshaymwhahaha, either infra or ironic https://review.rdoproject.org/zuul/stream/8a92e66d7c754242b98675358d4fd9c9?logfile=console.log20:57
weshaydang.. we lost the console logs20:59
weshayhttps://review.openstack.org/#/c/588488/5/scripts/te-broker/destroy-env21:00
*** mcornea has quit IRC21:01
weshaymwhahaha, so I think ovb ipv6 is working21:05
weshayipv4 is hitting something21:05
weshaybut I don't have logs :(21:05
mwhahahaodd21:05
weshayhttps://review.rdoproject.org/zuul/stream/e06a5aeaa5e248d1a4811591adb42b7d?logfile=console.log21:06
weshayipv621:06
*** fpan has quit IRC21:06
*** ykarel__ has joined #tripleo21:06
weshayqueens fs001 https://logs.rdoproject.org/89/608589/6/openstack-check/tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-queens-branch/e6d6cb4/logs/undercloud/home/zuul/overcloud_prep_images.log.txt.gz21:07
*** fpan has joined #tripleo21:07
*** ykarel_ has quit IRC21:09
mwhahahaweshay: https://logs.rdoproject.org/89/608589/6/openstack-check/tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-queens-branch/e6d6cb4/logs/undercloud/var/log/ironic/ironic-conductor.log.txt.gz#_2018-10-11_19_44_12_41921:09
openstackgerritRonelle Landy proposed openstack/tripleo-quickstart-extras master: DNM - Testing changes to https://review.openstack.org/#/c/591652  https://review.openstack.org/60979821:09
weshaymwhahaha, vbmc issues ?21:10
mwhahahapossibly?21:10
weshaytoure, sorry.. I'm back21:10
mwhahahawe have any stuck stacks?21:10
* weshay tries to bring it up21:10
* weshay looks21:10
weshaymwhahaha, I have the console logs on the server21:11
weshaythey are just not being served21:11
* weshay tries to fix21:17
openstackgerritAlan Bishop proposed openstack/tripleo-heat-templates master: Improve support for deploying ceph on standalone system  https://review.openstack.org/60755821:17
*** ade_lee has quit IRC21:18
*** abishop has quit IRC21:22
*** ykarel__ has quit IRC21:29
*** panda has quit IRC21:34
openstackgerritRonelle Landy proposed openstack/tripleo-quickstart-extras master: DNM - Testing changes to https://review.openstack.org/#/c/591652  https://review.openstack.org/60979821:37
*** panda has joined #tripleo21:37
weshayERROR /var/log/extra/docker/containers/ironic_pxe_http/log/ironic/ironic-conductor.log: 7 ERROR oslo.service.loopingcall [-] Dynamic backoff interval looping call 'ironic.conductor.utils._wait' failed: LoopingCallTimeOut: Looping call timed out after 20.98 seconds21:37
weshayhttps://logs.rdoproject.org/89/608589/6/openstack-check/tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001/8a92e66/logs/undercloud/var/log/extra/errors.txt.gz21:38
weshaymwhahaha, https://logs.rdoproject.org/89/608589/6/openstack-check/tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001/8a92e66/logs/undercloud/var/log/containers/ironic/ironic-conductor.log.txt.gz#_2018-10-11_19_49_32_12821:42
weshay?21:42
weshaydoesn't tell me much21:42
weshayhttps://logs.rdoproject.org/89/608589/6/openstack-check/tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001/8a92e66/logs/undercloud/var/log/containers/ironic/ironic-conductor.log.txt.gz#_2018-10-11_20_55_39_39621:43
mwhahahayea not sure, i actually just saw the same thing on a queens ovb introspection21:43
mwhahahaIMHO it would point to connectivity problems or something between the undercloud and the bmc21:43
mwhahahaor the bmc hanging on the openstack calls21:43
mwhahahaweshay: you can see from the bmc logs that the power thing was called repeatedly21:44
weshayyes21:44
weshaybaremetal-11662_1-co..>  2018-10-11 20:50   20K21:46
weshay[   ] baremetal-11662_2-co..>  2018-10-11 20:50  206K21:46
weshay[   ] baremetal-11662_0-co..>  2018-10-11 20:50  206K21:46
weshay[   ] baremetal-11662_3-co..>  2018-10-11 20:50   20K21:46
weshayso files w/ 206kb have some indication they are booting21:47
weshaybut 20kb is an empty file21:47
weshayso maybe _0 and _2 booted and _1 and _3 did not21:47
*** rbrady has quit IRC21:50
openstackgerritHarald Jensås proposed openstack/tripleo-quickstart-extras master: OVB job's need to masquerade 10.0.0.0/24  https://review.openstack.org/60983021:51
*** trozet has quit IRC21:52
mwhahahahrm that seems like the wrong way to do that masquerade21:52
*** gouthamr_ is now known as gouthamr21:53
hjensasmwhahaha: I am open to suggestions. :)21:54
mwhahahacan't we reflect that in undercloud.conf? or fix teh networking?21:54
hjensasmwhahaha: enabling/disabling masquerading for ctlplane network became a bool in queens. So we cannot just say masquerade_network: <ctlplane-network>,10.0.0.0/24 anymore.21:56
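Spelling out the change hjensas refers to: instack-undercloud's undercloud.conf took a list of CIDRs, while the Queens+ config only exposes a per-subnet boolean, so an extra network like 10.0.0.0/24 can no longer be appended there (the values below are illustrative):

    # pre-Queens (instack-undercloud) style -- a list of networks was accepted
    [DEFAULT]
    masquerade_network = 192.168.24.0/24,10.0.0.0/24

    # Queens+ style -- only a boolean on the subnet definition
    [ctlplane-subnet]
    masquerade = true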
hjensasmwhahaha: to be honest, I'm a bit confused why the rule was not needed in instack-undercloud.22:05
*** lblanchard has quit IRC22:13
mwhahahahjensas: yeah not sure either I'll look into it later. It seems like there is something else going on22:13
*** salmankhan has quit IRC22:15
hjensasmwhahaha: oh, it's not the masquerading we want - we want the RETURN rule. -A POSTROUTING -s 10.0.0.0/24 -d 10.0.0.0/24 -m state --state NEW,RELATED,ESTABLISHED -m comment --comment "137 routed_network return 10.0.0.0/24 ipv4" -j RETURN22:16
hjensasmwhahaha: i.e. we do not want to masquerade, we just want to route.22:16
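As an illustration of the ordering hjensas is describing (he reconsiders right after): with a RETURN rule ahead of the MASQUERADE rule in the nat POSTROUTING chain, traffic that stays inside 10.0.0.0/24 is simply routed, and only traffic leaving the subnet is NATed. A bare-bones sketch, not the actual puppet-tripleo rules:

    # intra-subnet traffic exits the chain before NAT is applied
    iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -d 10.0.0.0/24 -j RETURN
    # everything else from the subnet is masqueraded on the way out
    iptables -t nat -A POSTROUTING -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE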
hjensasmwhahaha: no, actually not that either ... it's just weird. I'll abandon it and write up something there.22:17
mwhahahaHa22:17
mwhahahaOk22:17
openstackgerritEmilien Macchi proposed openstack/tripleo-upgrade master: Set container_cli for undercloud  https://review.openstack.org/60846222:18
openstackgerritEmilien Macchi proposed openstack/tripleo-upgrade master: Set container_cli for undercloud  https://review.openstack.org/60846222:18
openstackgerritEmilien Macchi proposed openstack/tripleo-quickstart master: fs050: upgrade the undercloud to Podman containers  https://review.openstack.org/60846322:18
*** openstackgerrit has quit IRC22:19
*** openstackgerrit has joined #tripleo22:20
*** boazel has quit IRC22:22
*** rcernin has joined #tripleo22:23
*** rpioso is now known as rpioso|afk22:26
*** vinaykns has quit IRC22:26
*** boazel has joined #tripleo22:28
*** openstackstatus has quit IRC22:28
*** openstackstatus has joined #tripleo22:30
*** ChanServ sets mode: +v openstackstatus22:30
*** Vorrtex has quit IRC22:32
openstackgerritHarald Jensås proposed openstack/tripleo-heat-templates master: Remove defaults from masquerade-networks service env  https://review.openstack.org/60984522:42
openstackgerritRonelle Landy proposed openstack-infra/tripleo-ci master: NODES_FILE definition is missing  https://review.openstack.org/60984622:50
openstackgerritIan Wienand proposed openstack/diskimage-builder master: Minor documentation updates  https://review.openstack.org/60963623:13
*** owalsh has quit IRC23:15
*** tosky has quit IRC23:15
*** spotz has quit IRC23:17
*** mcarden has quit IRC23:18
*** owalsh has joined #tripleo23:30
openstackgerritHarald Jensås proposed openstack/puppet-tripleo master: Fix Undercloud masquerading firewall rules  https://review.openstack.org/60985823:39
*** rlandy is now known as rlandy|bbl23:46
*** radez has joined #tripleo23:54
*** spotz has joined #tripleo23:55
itlinux_hello guys, I am trying this on pike openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml23:55
itlinux_but I get an error: openstack: 'tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml' is not an openstack command.23:56
itlinux_I got it..23:58
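For anyone who hits the same thing: the "openstack tripleo container image prepare default" subcommand only exists from Rocky onwards, so Pike's tripleoclient rejects it. The Pike-era equivalent is the older prepare command, roughly as below (the flag name and output filename are from memory, so verify with --help on your version):

    openstack overcloud container image prepare \
        --output-env-file ~/overcloud_images.yaml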

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!