15:00:19 #startmeeting kolla
15:00:22 Meeting started Wed Mar 24 15:00:19 2021 UTC and is due to finish in 60 minutes. The chair is mgoddard. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:26 The meeting name has been set to 'kolla'
15:00:28 #topic rollcall
15:00:44 ]°[
15:00:45 o/
15:01:01 o/
15:01:04 \o\ |o| /o/
15:01:08 \o
15:01:40 o_
15:02:57 #topic agenda
15:03:07 * Roll-call
15:03:09 * Announcements
15:03:11 ** PTG 19th - 23rd April, registration open | https://april2021-ptg.eventbrite.com | https://www.openstack.org/ptg/
15:03:13 * Review action items from the last meeting
15:03:15 * CI status
15:03:17 * Review requests
15:03:19 * PTG team signup http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020915.html
15:03:21 * Quay.io
15:03:23 * Wallaby release planning
15:03:25 #topic announcements
15:03:28 #info PTG 19th - 23rd April, registration open
15:03:35 #link https://april2021-ptg.eventbrite.com
15:03:40 #link https://www.openstack.org/ptg/
15:04:07 #link https://etherpad.opendev.org/p/kolla-xena-ptg
15:04:18 Please add your name to ^ if you plan to attend
15:04:29 We will discuss PTG more later
15:05:03 #info Kolla feature freeze next week
15:05:19 Any other announcements?
15:06:27 #topic Review action items from the last meeting
15:06:39 yoctozepto try out quay.io
15:06:45 he most certainly did
15:06:49 will discuss later
15:06:55 #topic CI status
15:07:14 * hrw
15:07:40 lots of nice notes on debian issues, thanks hrw
15:07:53 and we need all of them merged
15:09:30 this one makes x86 zuul pass: https://review.opendev.org/c/openstack/kolla/+/782606
15:09:33 then would love to get some help on checking whether things work
15:09:50 I've added a note on rc -13
15:10:39 mgoddard: so s/-1/+2/ ;D
15:11:09 +23
15:11:13 oops
15:11:48 I would need to check Debian in victoria, ussuri, train
15:12:01 yoctozepto: how often is ubuntu cephadm failing
15:12:20 quite often but not 100% I think
15:12:23 let's see the recent runs
15:13:16 https://zuul.openstack.org/builds?job_name=kolla-ansible-ubuntu-source-cephadm&branch=master
15:13:22 seems not horribly often
15:13:51 or perhaps it was not always ubuntu
15:13:53 let me see
15:14:41 yup, bingo
15:14:50 it's just that I remembered ubuntu as failing more often
15:14:52 could be
15:14:55 but it's on both
15:15:16 ok, sounds like more investigation needed
15:15:23 I wonder if it's not rc -13 in disguise
15:15:44 perhaps the block storage is unreliable or something
15:16:05 I think it would make sense to decrease the number of retries
15:16:11 Pierre Riteau proposed openstack/kayobe master: [DNM] Set environment for testing https://review.opendev.org/c/openstack/kayobe/+/773411
15:16:11 especially for non-voting jobs
15:16:24 as we sometimes wait for several runs
15:16:50 wonder what happens there
15:16:55 I don't think we're going to solve this here. Let's move on
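
The "how often is it failing" question above can also be answered mechanically. A minimal sketch, assuming the public /api/builds endpoint that backs the Zuul builds page linked at 15:13:16 and its usual "result" field:

```python
# Count recent failures of the cephadm job via Zuul's REST API.
# Endpoint path and response fields are assumptions based on the
# public Zuul web API serving zuul.openstack.org/builds.
import requests

API = "https://zuul.openstack.org/api/builds"
params = {
    "job_name": "kolla-ansible-ubuntu-source-cephadm",
    "branch": "master",
    "limit": 50,
}

builds = requests.get(API, params=params, timeout=30).json()
results = [b.get("result") for b in builds]
failures = sum(1 for r in results if r != "SUCCESS")
print(f"{failures}/{len(results)} recent runs did not succeed")
```
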
15:17:06 #topic Review requests
15:17:36 You know the drill. One review per person
15:17:43 I will be back on masakari
15:17:54 nothing new today :-(
15:18:27 I do not have one.
Debian needs 3 ;D
15:18:52 found one: https://review.opendev.org/c/openstack/kolla/+/782386 - ussuri backport
15:19:08 I will choose https://review.opendev.org/c/openstack/kolla-ansible/+/781062
15:19:21 as we still use Ussuri and want to upgrade to a newer qemu to test SVE guests
15:19:47 yoctozepto: https://review.opendev.org/c/openstack/kolla-ansible/+/761872 - that one is for you ;)
15:19:52 I have some questions about test case requirements for Let's Encrypt
15:20:11 https://review.opendev.org/c/openstack/kolla-ansible/+/741340
15:20:34 headphoneJames: go for it
15:21:32 Currently, I use "pebble" to create the TLS certificate. Then I distribute that TLS certificate to all haproxy
15:21:42 mnasiadka: why me?
15:22:09 yoctozepto: well, find another active core in k-a that does not work in StackHPC :)
15:22:24 mnasiadka: wuchunyang
15:22:53 is he in the meeting? nope
15:23:11 The certificate is not valid, because pebble is a testing product. Therefore, all subsequent requests to the OpenStack deployment would need to ignore the insecure certificate.
15:23:22 mnasiadka: ok
15:23:47 headphoneJames: why isn't it valid? is there no CA certificate available for it?
15:23:47 Since I don't have access to the certificate authority for the certificate generated by pebble, I added a boolean to allow for insecure curl method executions to get around this for now.
15:25:02 The valid CA cert is generated by pebble - I have not determined a way to pull that certificate out of the pebble docker volume and distribute it to the executor that's running the test
15:26:07 Note, when I run this with Let's Encrypt proper (not the functional test with pebble), the certificate generated is valid and trusted
15:28:02 My first question: is just validating the certbot logs (that a certificate was properly generated) and that the certificate is distributed to all HAProxy instances enough for a test case?
15:28:05 we should be able to get the CA cert from pebble
15:28:24 mnasiadka: y no healthchecks?
15:28:33 yes
15:28:48 yoctozepto: where?
15:29:29 mnasiadka: left a comment
15:29:36 headphoneJames: there's not much point in running a full test suite with insecure mode. We need to either do a more targeted test of the letsencrypt code, or somehow get hold of the CA cert
15:29:44 mnasiadka: on ovn-octavia
15:30:20 headphoneJames: e.g. using something like the openssl suite to grab the cert from haproxy and check that it has come from pebble
15:30:39 yoctozepto: actually we can disable distributed FIP, but where do we communicate what CI does nowadays? :D
15:30:57 mnasiadka: what about the commit message for starters?
:P
15:31:08 commit message sounds good :)
15:31:11 will update
15:31:15 let's do the distr fip
15:31:23 mgoddard: the openssl suite approach would be doable if that feels acceptable
15:31:23 sounds fancier
15:31:36 and get those healthchecks in
15:32:59 headphoneJames: that would be better than nothing
15:33:27 and probably necessary in any case to verify that the cert has been rotated
15:33:42 I think we've derailed a bit
15:33:46 Let's move on
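
The targeted check suggested at 15:30:20 could look roughly like this. A minimal sketch in Python rather than the openssl CLI, with a placeholder haproxy address; the issuer naming is an assumption about pebble's default throwaway CA:

```python
# Fetch the certificate haproxy actually serves and check who issued it.
# ssl.get_server_certificate deliberately skips verification here, since
# pebble's CA is not trusted by the host running the test.
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("192.0.2.10", 443))  # placeholder VIP
cert = x509.load_pem_x509_certificate(pem.encode())

issuer = cert.issuer.rfc4514_string()
print("issuer:", issuer)

# Pebble generates a fresh CA on startup whose names contain "Pebble"
# by default (an assumption about pebble's defaults).
assert "Pebble" in issuer, "certificate was not issued by pebble"
```

Re-running the same check after forcing a renewal would also cover the rotation case mentioned at 15:33:27.
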
15:33:57 #topic PTG team signup
15:34:05 #link http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020915.html
15:34:18 Tomorrow is the deadline to choose time slots
15:34:35 I didn't get any responses regarding using earlier slots
15:34:53 so I propose we stick to the usual plan
15:35:07 13:00-17:00 UTC on Monday and Tuesday for Kolla and Kolla Ansible
15:35:17 13:00-15:00 on Wednesday for Kayobe
15:35:28 #vote
15:35:31 +1
15:35:57 13:00-17:00 UTC is good for me
15:36:34 +1
15:36:43 +1
15:37:23 done
15:37:37 please add your names to https://etherpad.opendev.org/p/kolla-xena-ptg
15:38:26 please also add discussion topics!
15:38:42 * deprecate 'base' image
15:38:48 oops, wrong window
15:39:23 let's deprecate something :-)
15:40:10 deprecate yoctozepto
15:40:31 :-(
15:41:04 mgoddard: you want to be PTL for the rest of your life?
15:41:23 of course
15:41:27 +1
15:41:33 <3
15:41:52 phew, the nova change which makes Debian work just got +w ;D
15:42:05 or rather s/work/fail in known place/
15:42:18 good enuff
15:42:42 #topic Quay.io
15:43:15 yoctozepto has started a nice PoC of using Quay.io in CI
15:43:32 #link https://review.opendev.org/c/openstack/kolla/+/781130
15:43:41 #link https://review.opendev.org/c/openstack/kolla-ansible/+/781546
15:43:43 thanks, I've pushed all master and victoria images there
15:43:51 #link https://review.opendev.org/c/openstack/kolla/+/781899
15:43:51 except for centos binary which was failing at the time
15:44:07 but it's no biggie
15:44:13 so I think we have two things to discuss
15:44:19 1. any concerns
15:44:25 2. plan
15:44:31 yoctozepto: any concerns?
15:44:38 ~> https://review.opendev.org/q/topic:%22quay.io%22+projects:openstack/kolla
15:44:51 there is one limitation
15:45:00 in that new repositories get pushed as private
15:45:19 quay.io is "actively investigating" how to improve this
15:45:42 I have a script that fixes it for all repos
15:45:49 but it has to be run with human user credentials
15:46:14 otoh, we don't create new images that often
15:46:20 and quay.io might fix it sooner or later
15:46:35 other than that, I am quite happy with it
15:46:46 (not to mention having total control over it now)
15:46:50 (though I can share)
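
The repo-fixing script mentioned at 15:45:42 could look roughly like this. A sketch against Quay's v1 REST API as documented, with a placeholder OAuth token and the openstack.kolla namespace; as noted above, the token has to belong to a human user with admin rights on the organization:

```python
# Flip any private repositories in the namespace to public via the Quay API.
import requests

QUAY = "https://quay.io/api/v1"
NAMESPACE = "openstack.kolla"
HEADERS = {"Authorization": "Bearer <oauth-token>"}  # placeholder token

# Lists the first page of repositories; pagination omitted for brevity.
repos = requests.get(
    f"{QUAY}/repository",
    params={"namespace": NAMESPACE},
    headers=HEADERS,
    timeout=30,
).json()["repositories"]

for repo in repos:
    if not repo["is_public"]:
        requests.post(
            f"{QUAY}/repository/{NAMESPACE}/{repo['name']}/changevisibility",
            json={"visibility": "public"},
            headers=HEADERS,
            timeout=30,
        ).raise_for_status()
        print("made public:", repo["name"])
```
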
15:47:02 as for the plan
15:47:17 I would consider adding daily quay.io publishing jobs
15:47:30 leaving dockerhub ones in place to run their weekly sacred dance
15:47:46 then switching kolla-ansible to test from quay.io
15:48:15 we can run back to dockerhub if it proves worse
15:48:18 ;p
15:48:38 so we would keep publishing to dockerhub for the time being
15:48:52 it probably makes sense
15:49:01 less disruption to users
15:49:09 no need to clean up
15:49:26 although we would have no way to test the images
15:49:36 perhaps a weekly test pipeline
15:49:56 mgoddard: we build weekly and publish to dockerhub and quay.io in the same job?
15:50:18 yoctozepto suggests publishing to quay.io daily
15:50:25 yes, quay.io more often
15:50:35 to get on track like we had it before ;p
15:50:47 daily job on mon-sat to quay, weekly on sun to quay, docker?
15:51:09 it would probably be simpler to just have separate publishing jobs
15:51:13 sure
15:51:42 Radosław Piliszek proposed openstack/kolla-ansible master: [CI] Drop the workaround in Masakari client calls https://review.opendev.org/c/openstack/kolla-ansible/+/777182
15:51:50 although potentially we could more easily test and promote to dockerhub
15:52:22 well, we can publish to dockerhub daily
15:52:28 it was not publishing that was broken
15:52:30 it was pulls
15:52:31 (and is)
15:52:38 I think weekly is fine
15:52:50 "is enough"
15:52:59 is both
15:53:07 and doesn't double our CI load
15:53:23 well, we can publish from the same jobs
15:53:24 we may write in the docs "please use quay" and keep dockerhub as a source for those who still use it
15:53:29 but I guess we could time out
15:53:34 and not know what to blame
15:53:37 Pierre Riteau proposed openstack/kayobe master: [DNM] Set environment for testing https://review.opendev.org/c/openstack/kayobe/+/773411
15:53:45 hrw: yes
15:53:48 for the time being
15:53:55 let's see
15:53:59 so we have a rough plan
15:54:05 I will enact it
15:54:12 will make me happy
15:54:14 we can also deprecate dockerhub in Xena and do quay only in Yeti
15:54:17 wonderful
15:54:40 what about account credentials for quay.io
15:54:57 currently I don't think any of us have credentials for dockerhub
15:55:06 which you might argue is a security feature
15:55:22 I can give you admin access
15:55:30 as you are PTL
15:55:33 that's the opposite of what I'm suggesting :)
15:55:39 and I'm just a humble CI servant :D
15:55:50 oh, someone has to have them
15:56:01 someone from the previous team generated them for dockerhub
15:56:02 if any person has credentials, it would allow them to compromise the images
15:56:05 they did not appear magically
15:56:11 mind you
15:57:03 it all boils down to trust
15:57:09 indeed
15:57:12 I can't give you a better answer
15:57:24 I trust myself
15:57:28 I trust the PTL
15:57:37 but quite a lot of effort goes into trust in zuul
15:57:43 this could effectively sidestep that
15:57:50 we can perhaps write something down about who is moderating the images
15:58:02 zuul has it encrypted
15:58:07 as it always has
15:58:08 I think that opendev infra should have it somewhere
15:58:12 and we trust it a lot
15:58:19 as well as all its admins
15:58:26 in case of a bus incident happening with yoctozepto and mgoddard
15:58:32 yeah, they can decrypt the secrets
15:58:44 potentially, a zuul job could rotate the password, encrypt it, and print the encrypted result
15:58:58 for example
15:59:03 then no human would have the password
15:59:31 I know that I do not want to know it
15:59:33 but infra could access it
15:59:42 I would worry about that thing getting broken in the middle
15:59:49 new password and it being nowhere
16:00:00 it's possible
16:00:20 we'd probably need a safety access account until we know it works
16:00:29 anyway, we're out of time
16:00:30 * hrw out
16:00:41 thanks mgoddard
16:00:48 I think this is a concern though, and we shouldn't assume anything about what happened with dockerhub
16:00:59 perhaps we should put something on openstack-discuss
16:01:19 yes, please do; sometimes we get really nice insight
16:01:40 #action mgoddard email openstack-discuss about quay.io credentials
16:01:44 #endmeeting
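
The rotation scheme floated at 15:58:44 might look roughly like this inside a trusted periodic job. A hedged sketch: the registry-side update is a hypothetical helper (no real endpoint assumed), and the encryption step assumes zuul-client's encrypt subcommand against the project's public key:

```python
# Rotate the registry password so no human ever sees the plaintext:
# generate a fresh secret, push it to the registry account, then emit
# only the Zuul-encrypted blob in the job output.
import secrets
import subprocess

def update_registry_password(password: str) -> None:
    # Placeholder for the registry-side change (e.g. a quay.io account
    # API call); intentionally not a real endpoint here.
    pass

new_password = secrets.token_urlsafe(32)
update_registry_password(new_password)

blob = subprocess.run(
    [
        "zuul-client", "--zuul-url", "https://zuul.opendev.org",
        "encrypt", "--tenant", "openstack",
        "--project", "opendev.org/openstack/kolla",
        "--secret-name", "kolla-quay-credentials",
        "--field-name", "password",
    ],
    input=new_password,  # zuul-client reads the plaintext from stdin
    text=True,
    capture_output=True,
    check=True,
).stdout
print(blob)  # paste into the jobs repo; the plaintext is never logged
```

The "safety access account" mentioned at 16:00:20 would cover the failure mode where the registry update succeeds but the encrypted blob is lost.
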