15:01:56 #startmeeting kolla
15:01:57 Meeting started Wed Mar 3 15:01:56 2021 UTC and is due to finish in 60 minutes. The chair is mgoddard. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:58 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:01 The meeting name has been set to 'kolla'
15:02:05 #topic rollcall
15:02:08 \o
15:02:11 o][o
15:02:18 o/
15:02:19 o
15:02:41 {o
15:02:45 o/
15:02:52 o7
15:03:14 |o|
15:03:53 |-o-|
15:04:02 #topic agenda
15:04:11 * Roll-call
15:04:13 * Announcements
15:04:15 ** Combined TC/PTL nomination open http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020811.html
15:04:17 ** OpenStack feature freeze next week
15:04:19 * Review action items from the last meeting
15:04:21 * CI status
15:04:23 * Review requests
15:04:25 * PoC: image build & test pipeline (https://review.opendev.org/c/openstack/kolla/+/777796 and https://review.opendev.org/c/openstack/kolla-ansible/+/777946)
15:04:27 * Wallaby release planning
15:04:29 #topic Announcements
15:04:37 #info Combined TC/PTL nomination open
15:04:40 #link http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020811.html
15:05:17 Anyone is welcome to run for Kolla PTL
15:06:00 #info OpenStack feature freeze next week
15:06:09 #link https://releases.openstack.org/wallaby/schedule.html
15:06:49 #info Kolla feature freeze will be Mar 29 - Apr 02
15:07:10 Any other announcements?
15:07:45 #topic Review action items from the last meeting
15:08:00 yoctozepto to ask openstack-discuss about NTP
15:08:24 #link http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020707.html
15:08:28 Thanks yoctozepto
15:08:56 #topic CI status
15:09:42 Kolla train still broken
15:09:57 The fix keeps getting hit by dockerhub pull limits
15:09:59 https://review.opendev.org/c/openstack/kolla/+/774602
15:10:57 we had an issue with neutron-server builds on master, but it was fixed by https://review.opendev.org/c/openstack/kolla/+/777992
15:11:12 unclear whether it affects other branches
15:11:45 kolla-ansible NFV CI job seems to be failing on master
15:11:51 https://ac90fbbc9cd1b2f919e7-c0288c15cf27fe5a39c9948ecafb7329.ssl.cf1.rackcdn.com/778179/1/check/kolla-ansible-centos8-source-scenario-nfv/e3b78d7/secondary1/logs/docker_logs/tacker_conductor.txt
15:11:59 tacker processes fail to import toscaparser
15:12:18 i think this is a tacker bug
15:12:57 is it not a missing package in our image?
15:13:02 i will add the package requirement to the tacker project.
15:13:14 \o/
15:13:22 this package should be in tacker requirements.
15:13:28 mgoddard: it is but tacker should be listing it
15:13:37 https://pypi.org/project/tosca-parser/
15:13:38 yeah, just what wuchunya_ is saying
15:14:41 probably, unless it is an optional dep
15:14:52 it's not the first time tacker is clumsy
15:15:03 so that's a good question
15:15:26 #action wuchunyang to propose toscaparser in tacker requirements to fix NFV job
15:15:40 ok, no problem
15:16:26 thanks
15:17:15 anyone want to review https://review.opendev.org/c/openstack/kolla-ansible/+/761519/ to enable rabbitmq TLS in CI?
15:18:43 done
15:19:00 clever plug ;p
15:19:03 tacker plot thickens: requirements.txt has tosca-parser>=1.6.0 # Apache-2.0
15:19:23 hmm, that's intriguing
15:19:37 perhaps it's run in a different env?
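A minimal sketch of the kind of check that would answer the "different env?" question from inside the failing tacker container. Only the module name toscaparser and the distribution names tosca-parser and nfv-toscaparser come from the log; the environment you run it in, and everything else, is an assumption.

# Hedged debugging sketch (not from the meeting): check whether the
# "toscaparser" module is importable in this environment, and which
# of the two distributions mentioned in the discussion is installed.
import importlib
import importlib.metadata as md

try:
    mod = importlib.import_module("toscaparser")
    print("toscaparser imported from:", mod.__file__)
except ImportError as exc:
    print("toscaparser import failed:", exc)

# Report which, if either, of the similarly named distributions is present.
for dist in ("tosca-parser", "nfv-toscaparser"):
    try:
        print(dist, md.version(dist))
    except md.PackageNotFoundError:
        print(dist, "not installed")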
15:19:47 the package name is nfv-toscaparser
15:20:03 there are two packages
15:20:32 this should be the one though:
15:20:33 https://github.com/openstack/tosca-parser
15:20:37 hmm
15:20:48 nfv-toscaparser looks old, last release 1.1.1
15:21:15 https://pypi.org/project/tosca-parser/
15:21:39 anyway, we don't all need to solve it
15:21:46 Kayobe CI had a couple of improvements this week
15:22:04 bare metal testing reliability should be improved
15:22:23 we still hit pull limits
15:22:36 most often in the limestone region
15:23:05 #topic Review requests
15:23:19 Does anyone have a patch they would like to be reviewed this week?
15:23:43 Possibly this small one: https://review.opendev.org/c/openstack/kolla-ansible/+/774222
15:24:25 added RP+1
15:24:30 https://review.opendev.org/c/openstack/kolla-ansible/+/767950
15:24:37 'tis still rotting ;p
15:25:41 ELK7 upgrade patches should be fairly easy
15:27:12 RP+1 all round
15:28:04 it's cheating a little bit, but I did some reviews of the healthcheck patches today, so I'll plug those https://review.opendev.org/q/topic:%22container-health-check%22+(status:open%20OR%20status:merged)
15:28:14 would be nice to finish that one off
15:28:49 #topic PoC: image build & test pipeline
15:29:06 This is mine
15:29:42 After fixing some kayobe CI issues last week, the next biggest obstacle to stability is dockerhub pull limits
15:30:13 We've made changes that make CI usable, but it is still annoying when it fails from time to time
15:30:59 So I thought I would put some effort into working out how to use the opendev container registry
15:31:21 Given that we may see this as a potential fix for our dockerhub woes
15:31:45 #link https://review.opendev.org/c/openstack/kolla/+/777796
15:31:52 #link https://review.opendev.org/c/openstack/kolla-ansible/+/777946
15:32:04 the commit message tries to give a high level overview
15:32:58 it's based on this setup:
15:33:02 #link https://docs.opendev.org/opendev/base-jobs/latest/docker-image.html#a-repository-with-producers-and-consumers
15:33:25 and allows different jobs to produce and consume container images
15:33:43 the PoC has one job that builds images, then pushes them to a registry
15:33:58 and another job that pulls images from the registry and tests them
15:34:21 a key part here being that dockerhub is not involved (much)
15:34:37 Pierre Riteau proposed openstack/kayobe master: Change docker_registry network_mode to host https://review.opendev.org/c/openstack/kayobe/+/760371
15:34:42 is anyone listening?
15:35:10 sounds good
15:35:16 ACK
15:35:19 Everyone is looking at the changes :)
15:35:52 yes
15:36:08 I'll give you a few minutes
15:38:02 I'm sure this has been asked before, but was it not possible to request unlimited pulls from Docker Hub?
15:38:39 dougsz: we have to pay with blood
15:38:49 more or less we are doomed
15:38:50 or soul
15:38:59 cool, I see :)
15:39:22 I have read the description of the kolla patch and we may end up pushing GBs of data between CI nodes
15:39:39 life sucks
15:39:49 shucks
15:40:13 right
15:40:27 that is one of my main concerns
15:40:34 there are two tiers of registry involved
15:41:01 the buildset registry, a temporary node running in another job. I believe this should be on the same cloud (but not certain)
15:41:06 the intermediate registry
15:41:21 ^ there is only one of these, and it lives in rackspace
15:41:53 for $reasons, images generally get pushed to both registries
15:41:53 or each k-a job does: start registry, build images and push to local, do own job, destroy
15:42:28 that way no data is sent but all jobs take longer
15:42:28 no, build and deploy are in separate jobs
15:42:53 well, what you describe is what we have already
15:43:35 + caching registry in each opendev cloud to not fetch debian/centos/ubuntu base image
15:43:53 this way we touch docker hub only in publish jobs
15:45:24 there are quite a few options for how it would work
15:46:00 I suppose we ought to try to list them, and work out which ones fit with the changes we want to make
15:46:22 and I assume that opendev already asked dockerhub to get 'unlimited pull' and got rejected
15:46:53 it's possible we could just publish to and pull from the infra registry as well as dockerhub, and keep everything else the same
15:47:19 see earlier discussion about soul and blood
15:48:27 if we think this option looks good, then we probably need to have a conversation with opendev infra team
15:48:57 but while poking around in the opendev config, I found option B
15:49:52 the registry mirrors in opendev are not the official docker registry, just an apache caching proxy
15:50:47 oh, that's bad
15:50:57 #link https://opendev.org/opendev/system-config/src/commit/4310315afe27c040b239a72a1c248ddabf7fdfa5/playbooks/roles/mirror/templates/mirror.vhost.j2#L453
15:51:23 which means that they are able to support quay.io
15:51:58 the lack of a registry mirror was one of my main concerns about switching to quay.io
15:51:58 but also might be the reason they let us hit the limits so often
15:52:06 it could be
15:52:08 yes, that is true
15:52:17 so we could reconsider quay.io
15:52:22 indeed
15:52:28 and tell docker goodbye
15:52:33 well, dockerhub*
15:52:59 are the bases present in quay?
15:54:02 hopefully they have centos, ubuntu and debian
15:54:46 there doesn't seem to be the same 'official' set of images in quay.io though
15:55:02 centos 8 stream is official on quay
15:55:17 and only there
15:55:54 it shouldn't really matter where the base lives
15:56:52 yep
15:57:19 #action mgoddard to write up options for CI registry
15:57:58 I'll try to present some options next week, hopefully we can make a decision
15:57:58 great
15:58:46 2 minutes for open discussion
15:58:49 #topic open discussion
16:00:51 too short window for open discussion :P
16:01:19 closed discussion then
16:01:22 :D
16:01:40 Thanks all
16:01:42 #endmeeting
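The two-tier registry layout discussed under the PoC topic boils down to a pull preference: the temporary buildset registry first, then the intermediate registry, with Docker Hub only as a last resort. Below is a minimal Python sketch of that idea; the registry endpoints and image name are placeholders, and in real Zuul jobs the registries are wired up through the opendev base job roles and registry configuration rather than a fallback script like this.

# Hedged sketch of the pull-preference idea from the PoC discussion.
# Registry endpoints and the image reference are placeholders, not
# real opendev or kolla names.
import subprocess

REGISTRIES = [
    "buildset-registry.example:5000",  # temporary per-buildset registry (assumed name)
    "intermediate-registry.example",   # longer-lived intermediate registry (assumed name)
    "docker.io",                       # Docker Hub: subject to the pull limits discussed above
]

def pull_with_fallback(image: str) -> str:
    """Try each registry in order and return the first reference that pulls."""
    last_error = ""
    for registry in REGISTRIES:
        ref = f"{registry}/{image}"
        result = subprocess.run(
            ["docker", "pull", ref], capture_output=True, text=True
        )
        if result.returncode == 0:
            return ref
        last_error = result.stderr.strip()
    raise RuntimeError(f"all registries failed; last error: {last_error}")

if __name__ == "__main__":
    # Placeholder image name, for illustration only.
    print(pull_with_fallback("kolla/centos-source-base:master"))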