15:01:56 <mgoddard> #startmeeting kolla
15:01:57 <openstack> Meeting started Wed Mar  3 15:01:56 2021 UTC and is due to finish in 60 minutes.  The chair is mgoddard. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:58 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:01 <openstack> The meeting name has been set to 'kolla'
15:02:05 <mgoddard> #topic rollcall
15:02:08 <mgoddard> \o
15:02:11 <hrw> o][o
15:02:18 <parallax> o/
15:02:19 <wuchunya_> o
15:02:41 <dougsz> {o
15:02:45 <headphoneJames> o/
15:02:52 <kplant> o7
15:03:14 <priteau> |o|
15:03:53 <mgoddard> |-o-|
15:04:02 <mgoddard> #topic agenda
15:04:11 <mgoddard> * Roll-call
15:04:13 <mgoddard> * Announcements
15:04:15 <mgoddard> ** Combined TC/PTL nomination open http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020811.html
15:04:17 <mgoddard> ** OpenStack feature freeze next week
15:04:19 <mgoddard> * Review action items from the last meeting
15:04:21 <mgoddard> * CI status
15:04:23 <mgoddard> * Review requests
15:04:25 <mgoddard> * PoC: image build & test pipeline (https://review.opendev.org/c/openstack/kolla/+/777796 and https://review.opendev.org/c/openstack/kolla-ansible/+/777946)
15:04:27 <mgoddard> * Wallaby release planning
15:04:29 <mgoddard> #topic Announcements
15:04:37 <mgoddard> #info Combined TC/PTL nomination open
15:04:40 <mgoddard> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020811.html
15:05:17 <mgoddard> Anyone is welcome to run for Kolla PTL
15:06:00 <mgoddard> #info OpenStack feature freeze next week
15:06:09 <mgoddard> #link https://releases.openstack.org/wallaby/schedule.html
15:06:49 <mgoddard> #info Kolla feature freeze will be Mar 29 - Apr 02
15:07:10 <mgoddard> Any other announcements?
15:07:45 <mgoddard> #topic  Review action items from the last meeting
15:08:00 <mgoddard> yoctozepto to ask openstack-discuss about NTP
15:08:24 <mgoddard> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020707.html
15:08:28 <mgoddard> Thanks yoctozepto
15:08:56 <mgoddard> #topic CI status
15:09:42 <mgoddard> Kolla train still broken
15:09:57 <mgoddard> The fix keeps getting hit by dockerhub pull limits
15:09:59 <mgoddard> https://review.opendev.org/c/openstack/kolla/+/774602
15:10:57 <mgoddard> we had an issue with neutron-server builds on master, but it was fixed by https://review.opendev.org/c/openstack/kolla/+/777992
15:11:12 <mgoddard> unclear whether it affects other branches
15:11:45 <mgoddard> kolla-ansible NFV CI job seems to be failing on master
15:11:51 <mgoddard> https://ac90fbbc9cd1b2f919e7-c0288c15cf27fe5a39c9948ecafb7329.ssl.cf1.rackcdn.com/778179/1/check/kolla-ansible-centos8-source-scenario-nfv/e3b78d7/secondary1/logs/docker_logs/tacker_conductor.txt
15:11:59 <mgoddard> tacker processes fail to import toscaparser
15:12:18 <wuchunya_> I think this is a tacker bug
15:12:57 <mgoddard> is it not a missing package in our image?
15:13:02 <wuchunya_> I will add the package requirement to the tacker project.
15:13:14 <yoctozepto> \o/
15:13:22 <wuchunya_> this package should be in tacker requirements.
15:13:28 <yoctozepto> mgoddard: it is but tacker should be listing it
15:13:37 <mgoddard> https://pypi.org/project/tosca-parser/
15:13:38 <yoctozepto> yeah, just what wuchunya_ is saying
15:14:41 <mgoddard> probably, unless it is an optional dep
15:14:52 <yoctozepto> it's not the first time tacker is clumsy
15:15:03 <yoctozepto> so that's a good question
15:15:26 <mgoddard> #action wuchunyang to propose toscaparser in tacker requirements to fix NFV job
15:15:40 <wuchunyang> ok, no problem
15:16:26 <mgoddard> thanks
15:17:15 <mgoddard> anyone want to review https://review.opendev.org/c/openstack/kolla-ansible/+/761519/ to enable rabbitmq TLS in CI?
15:18:43 <yoctozepto> done
15:19:00 <yoctozepto> clever plug ;p
15:19:03 <mgoddard> tacker plot thickens: requirements.txt has tosca-parser>=1.6.0 # Apache-2.0
15:19:23 <yoctozepto> hmm, that's intriguing
15:19:37 <yoctozepto> perhaps it's run in a different env?
15:19:47 <wuchunyang> the package name is nfv-toscaparser
15:20:03 <mgoddard> there are two packages
15:20:32 <yoctozepto> this should be the one though:
15:20:33 <yoctozepto> https://github.com/openstack/tosca-parser
15:20:37 <yoctozepto> hmm
15:20:48 <mgoddard> nfv-toscaparser looks old, last release 1.1.1
15:21:15 <mgoddard> https://pypi.org/project/tosca-parser/
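A quick way to check which distribution actually ended up in the image (an ad-hoc Ansible sketch, not part of kolla-ansible; the container name comes from the log linked above, and pip3 being on the image's PATH is an assumption):

    - name: List tosca-related packages in the tacker_conductor container
      # grep exits non-zero when nothing matches, which is itself informative
      shell: docker exec tacker_conductor pip3 freeze | grep -i tosca
      register: tosca_pkgs
      failed_when: false

    - name: Show what was found
      debug:
        var: tosca_pkgs.stdout_lines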
15:21:39 <mgoddard> anyway, we don't all need to solve it
15:21:46 <mgoddard> Kayobe CI had a couple of improvements this week
15:22:04 <mgoddard> bare metal testing reliability should be improved
15:22:23 <mgoddard> we still hit pull limits
15:22:36 <mgoddard> most often in the limestone region
15:23:05 <mgoddard> #topic Review requests
15:23:19 <mgoddard> Does anyone have a patch they would like to be reviewed this week?
15:23:43 <parallax> Possibly this small one: https://review.opendev.org/c/openstack/kolla-ansible/+/774222
15:24:25 <mgoddard> added RP+1
15:24:30 <yoctozepto> https://review.opendev.org/c/openstack/kolla-ansible/+/767950
15:24:37 <yoctozepto> 'tis still rotting ;p
15:25:41 <dougsz> ELK7 upgrade patches should be fairly easy
15:27:12 <mgoddard> RP+1 all round
15:28:04 <mgoddard> it's cheating a little bit, but I did some reviews of the healthcheck patches today, so I'll plug those https://review.opendev.org/q/topic:%22container-health-check%22+(status:open%20OR%20status:merged)
15:28:14 <mgoddard> would be nice to finish that one off
15:28:49 <mgoddard> #topic PoC: image build & test pipeline
15:29:06 <mgoddard> This is mine
15:29:42 <mgoddard> After fixing some kayobe CI issues last week, the next biggest obstacle to stability is dockerhub pull limits
15:30:13 <mgoddard> We've made changes that make CI usable, but it is still annoying when it fails from time to time
15:30:59 <mgoddard> So I thought I would put some effort into working out how to use the opendev container registry
15:31:21 <mgoddard> Given that we may see this as a potential fix for our dockerhub woes
15:31:45 <mgoddard> #link https://review.opendev.org/c/openstack/kolla/+/777796
15:31:52 <mgoddard> #link https://review.opendev.org/c/openstack/kolla-ansible/+/777946
15:32:04 <mgoddard> the commit message tries to give a high level overview
15:32:58 <mgoddard> it's based on this setup:
15:33:02 <mgoddard> #link https://docs.opendev.org/opendev/base-jobs/latest/docker-image.html#a-repository-with-producers-and-consumers
15:33:25 <mgoddard> and allows different jobs to produce and consume container images
15:33:43 <mgoddard> the PoC has one job that builds images, then pushes them to a registry
15:33:58 <mgoddard> and another job that pulls images from the registry and tests them
15:34:21 <mgoddard> a key part here being that dockerhub is not involved (much)
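A minimal sketch of that wiring, following the producer/consumer pattern in the opendev docs linked above; Zuul's provides/requires attributes do the coordination between jobs, but the job names here are hypothetical, not the actual PoC jobs:

    - job:
        name: kolla-build-images
        provides: kolla-container-images
        # builds images and pushes them to the buildset registry

    - job:
        name: kolla-ansible-test-images
        requires: kolla-container-images
        # pulls the freshly built images from the buildset registry
        # rather than from dockerhub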
15:34:37 <openstackgerrit> Pierre Riteau proposed openstack/kayobe master: Change docker_registry network_mode to host  https://review.opendev.org/c/openstack/kayobe/+/760371
15:34:42 <mgoddard> is anyone listening?
15:35:10 <hrw> sounds good
15:35:16 <dougsz> ACK
15:35:19 <priteau> Everyone is looking at the changes :)
15:35:52 <wuchunyang> yes
15:36:08 <mgoddard> I'll give you a few minutes
15:38:02 <dougsz> I'm sure this has been asked before, but wasn't it possible to request unlimited pulls from Docker Hub?
15:38:39 <yoctozepto> dougsz: we have to pay with blood
15:38:49 <hrw> more or less we are doomed
15:38:50 <yoctozepto> or soul
15:38:59 <dougsz> cool, I see :)
15:39:22 <hrw> I have read the description of the kolla patch and we may end up pushing GBs of data between CI nodes
15:39:39 <hrw> life sucks
15:39:49 <yoctozepto> shucks
15:40:13 <mgoddard> right
15:40:27 <mgoddard> that is one of my main concerns
15:40:34 <mgoddard> there are two tiers of registry involved
15:41:01 <mgoddard> the buildset registry, a temporary node running in another job. I believe this should be on the same cloud (but not certain)
15:41:06 <mgoddard> the intermediate registry
15:41:21 <mgoddard> ^ there is only one of these, and it lives in rackspace
15:41:53 <mgoddard> for $reasons, images generally get pushed to both registries
15:41:53 <hrw> or each k-a job does: start a registry, build images and push to it, do its own job, destroy it
15:42:28 <hrw> that way no data is sent, but all jobs take longer
15:42:28 <mgoddard> no, build and deploy are in separate jobs
15:42:53 <mgoddard> well, what you describe is what we have already
15:43:35 <hrw> + a caching registry in each opendev cloud so we don't fetch the debian/centos/ubuntu base images
15:43:53 <hrw> this way we touch docker hub only in publish jobs
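Roughly what hrw's caching idea could look like on the CI nodes, assuming kolla-ansible's docker_custom_config (which is rendered into /etc/docker/daemon.json) were pointed at a per-cloud mirror; the hostname and port here are made up, and note that registry-mirrors only applies to docker.io pulls:

    docker_custom_config:
      registry-mirrors:
        - "http://mirror.regionone.example.opendev.org:8082"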
15:45:24 <mgoddard> there are quite a few options for how it would work
15:46:00 <mgoddard> I suppose we ought to try to list them, and work out which ones fit with the changes we want to make
15:46:22 <hrw> and I assume that opendev already asked dockerhub for 'unlimited pulls' and got rejected
15:46:53 <mgoddard> it's possible we could just publish to and pull from the infra registry as well as dockerhub, and keep everything else the same
15:47:19 <mgoddard> see earlier discussion about soul and blood
15:48:27 <mgoddard> if we think this option looks good, then we probably need to have a conversation with opendev infra team
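For reference, pointing a deploy job at the infra registry instead of dockerhub would only need the usual kolla-ansible overrides; docker_registry and docker_namespace are standard variables, but the registry address here is illustrative:

    docker_registry: insecure-ci-registry.opendev.org:5000
    docker_namespace: kolla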
15:48:57 <mgoddard> but while poking around in the opendev config, I found option B
15:49:52 <mgoddard> the registry mirrors in opendev are not the official docker registry, just an Apache caching proxy
15:50:47 <yoctozepto> oh, that's bad
15:50:57 <mgoddard> #link https://opendev.org/opendev/system-config/src/commit/4310315afe27c040b239a72a1c248ddabf7fdfa5/playbooks/roles/mirror/templates/mirror.vhost.j2#L453
15:51:23 <mgoddard> which means that they are able to support quay.io
15:51:58 <mgoddard> the lack of a registry mirror was one of my main concerns about switching to quay.io
15:51:58 <yoctozepto> but also might be the reason they let us hit the limits so often
15:52:06 <mgoddard> it could be
15:52:08 <yoctozepto> yes, that is true
15:52:17 <yoctozepto> so we could reconsider quay.io
15:52:22 <mgoddard> indeed
15:52:28 <yoctozepto> and tell docker goodbye
15:52:33 <yoctozepto> well, dockerhub*
15:52:59 <yoctozepto> are the bases present in quay?
15:54:02 <mgoddard> hopefully they have centos, ubuntu and debian
15:54:46 <mgoddard> there doesn't seem to be the same 'official' set of images in quay.io though
15:55:02 <hrw> centos 8 stream is official on quay
15:55:17 <hrw> and only there
15:55:54 <mgoddard> it shouldn't really matter where the base lives
15:56:52 <hrw> yep
15:57:19 <mgoddard> #action mgoddard to write up options for CI registry
15:57:58 <mgoddard> I'll try to present some options next week, hopefully we can make a decision
15:57:58 <yoctozepto> great
15:58:46 <mgoddard> 2 minutes for open discussion
15:58:49 <mgoddard> #topic open discussion
16:00:51 <kevko> too short window for open discussion :P
16:01:19 <yoctozepto> closed discussion then
16:01:22 <kevko> :D
16:01:40 <mgoddard> Thanks all
16:01:42 <mgoddard> #endmeeting