15:00:41 #startmeeting kolla
15:00:42 Meeting started Wed May 26 15:00:41 2021 UTC and is due to finish in 60 minutes. The chair is mnasiadka. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:45 The meeting name has been set to 'kolla'
15:01:24 ok, you can chair as well
15:01:25 :-)
15:01:52 #topic rollcall
15:02:00 \o/
15:02:05 o/
15:02:16 o/
15:02:17 o/
15:02:23 o/
15:03:09 Ok then, let's start
15:03:12 #topic agenda
15:03:22 * Roll-call
15:03:22 * Agenda
15:03:22 * Announcements
15:03:22 ** Freenode IRC drama http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022705.html
15:03:22 * Review action items from the last meeting
15:03:23 * CI status
15:03:23 * Wallaby release planning
15:03:24 ** Debian Bullseye
15:03:24 ** chrony
15:03:25 * Xena cycle planning
15:03:25 ** master branch life cycle https://etherpad.opendev.org/p/kolla-release-process-draft
15:03:26 * Documentation improvements http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022478.html
15:03:41 #topic announcements
15:03:52 yoctozepto: the stage is yours I guess?
15:04:23 #info Freenode IRC drama http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022705.html
15:04:27 so
15:04:37 you may or may not know, but Freenode has been acting weird recently
15:05:10 I don't want to go into details, but all in all we decided to migrate ASAP
15:05:25 we are not leaving IRC as our beloved technology
15:05:38 we are switching the network
15:05:47 the communication will be sent via the openstack-discuss mailing list
15:06:01 and we will communicate it as it happens on the current IRC channels
15:06:24 and all wikis and docs related to us using Freenode will need to get updated
15:06:35 see also https://etherpad.opendev.org/p/openstack-irc
15:06:48 yes, it's going to be a bit of work for sure
15:06:56 PTLs will be asked for help
15:07:43 that's all for this topic, I can discuss more after the meeting if someone is interested
15:08:08 ok then, let's go further, drama will be drama anyway
15:08:18 #topic CI status
15:08:56 So, CentOS Stream loves us and broke the edk2-ovmf package; it's now pinned in master and getting backported to stable/wallaby - that should get us back to green
15:09:24 ++
15:09:53 I think that's it for breakages
15:10:31 #topic Wallaby release planning
15:10:45 So, the first item is Debian Bullseye
15:11:17 I think the OVS 2.15 issue is fixed by changing the CI approach for configuring the Neutron network interface
15:12:05 https://review.opendev.org/c/openstack/kolla-ansible/+/792768 - Wallaby backport - failing due to the CI breakage
15:13:00 And then we have yoctozepto's cgroupns change that allows running libvirt on cgroups v2
15:13:15 https://review.opendev.org/c/openstack/kolla-ansible/+/790678
15:13:51 And then supposedly we can merge this -> https://review.opendev.org/c/openstack/kolla-ansible/+/789569
15:13:58 yoctozepto: have I missed anything?
15:14:22 I don't think so
15:14:42 ah
15:14:43 https://review.opendev.org/c/openstack/kolla-ansible/+/792583
15:14:50 but it's orthogonal
15:14:54 yet Mark wanted it
15:15:26 for the unaware, the OVS 2.15 issue was that OVS 2.15 recycles the switches
15:15:34 and we assigned an IP address there
15:15:52 so you can imagine it was gone and all the wrongs of the world raided our jobs
15:15:56 or so I imagine it
15:15:59 there = br-ex
15:16:05 anyway, they were failing on that
15:16:17 yeah, there = br-ex = the openvswitch switch
15:16:23 and that got recycled
15:16:44 Now we set up a Linux bridge and create a veth pair, and we plug one veth end into OVS - just like Kayobe does
15:17:36 ++
15:20:00 ok then, Debian Bullseye looks feasible - maybe even this week
15:20:12 the second item on the agenda is chrony
15:20:26 https://review.opendev.org/c/openstack/kolla-ansible/+/792119
15:22:08 I've added RP+1 - let's review it once it passes Zuul after publishing images with the pinned edk2-ovmf
15:23:24 RP+1 for sure
15:24:29 Added the docker version bump Gerrit URL to the whiteboard, so we don't miss it as a prerequisite for RC2
15:25:10 It would be ideal if we could request an RC2 at the end of this week/beginning of next
15:25:47 Ok, let's move on
15:25:58 #topic Xena cycle planning
15:26:11 master branch life cycle https://etherpad.opendev.org/p/kolla-release-process-draft
15:26:27 mnasiadka: it's unlikely to happen
15:26:29 but let's try
15:26:35 ok, move on
15:26:52 yoctozepto: do we want to discuss it now, or wait for Marcin and Mark (so I guess in two weeks' time)?
15:28:04 wait
15:28:09 it does not make sense today
15:28:20 ok then
15:29:12 I guess we can also discuss the Documentation improvements (http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022478.html) next time
15:31:10 I don't see the thread author present at the meeting, and we already had a long discussion about it on the Kolla Kalls - so I think we should discuss that (and compare it to the previous plan from the Kolla Kalls) in a bigger group.
15:31:24 agreed
15:33:19 So then let's move on
15:33:22 #topic Open discussion
15:35:24 It would be fun to get started on Let's Encrypt in Xena: https://review.opendev.org/c/openstack/kolla-ansible/+/741340
15:38:15 headphoneJames: great idea, but bear in mind that core reviewers' focus is mainly on changes that will allow us to release Wallaby - so you might see traction on your change only somewhere after next weekend
15:38:33 makes sense
15:38:53 And I thought we had this crazy idea during the PTG to bump up HAProxy and use a dynamic cert store, right?
15:39:29 wasn't so clear on that, but would be nice
15:40:49 let's bump it
15:40:50 That wouldn't affect this change though - since it doesn't touch the HAProxy part. That will be split off into a separate secondary change
15:41:00 oh, ok
15:43:00 ok then, so it's the first half of the feature - I'll try to review it, but not before the 12th of June - next week I'm off from Wed :)
15:43:25 If all goes well, should we expect to see the fixed wallaby nova-libvirt image tomorrow morning around 8 AM UTC?
15:44:19 (according to https://zuul.openstack.org/builds?job_name=kolla-publish-centos8s-binary-quay)
15:45:47 priteau: if it gets merged, yoctozepto offered to help force-publish it sooner than 8am, but I don't know if that makes any sense - we can just wait :)
15:47:08 yeah, I'm waiting now
15:47:19 it's already late, and I'm doing other stuff
15:47:24 I will force-publish if the jobs fail
15:47:32 will see
15:48:45 Tomorrow's OK, I have a fix for Kayobe using Docker Hub (it works!)
15:49:14 using old images? :)
15:49:52 Yes :D
15:49:56 3 days old
15:50:23 yoctozepto: I'm just thinking, could we avoid such failures? can we depend on a kolla-ansible job before doing the publish?
15:52:13 mnasiadka: I didn't get your question
15:52:26 ah
15:52:29 I think I understand
15:52:34 you want to get back to discussing
15:52:39 yoctozepto: so, we periodically run build+publish - if we first tried to deploy the built images, we would not publish broken ones
15:52:40 validating images before publishing
15:52:48 yeah, yeah
15:52:49 I agree
15:52:53 but it needs hands to do it
15:53:15 Well, if we could write down what needs to be done, I think I can find a volunteer
15:55:34 hmm, ok
15:55:43 we could work on that
15:56:10 it would probably mean a bit fewer breakages, and seeing how Stream works... it will probably happen again
15:57:55 +1
15:58:18 Ok, I think that's enough for today.
15:58:29 thanks all
15:58:32 #endmeeting
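
For reference on the OVS 2.15 fix discussed under the Wallaby topic (Linux bridge plus a veth pair plugged into OVS, as Kayobe does), below is a minimal Ansible-style sketch of the idea. The bridge, interface, and address names are illustrative assumptions, not the exact ones used in the kolla-ansible CI jobs.

```yaml
# Hedged sketch only; br-dvr, veth-dvr/veth-ovs and 192.0.2.1/24 are made-up
# names, not taken from the actual kolla-ansible CI configuration.
- name: Create a Linux bridge that holds the test IP address (independent of OVS)
  ansible.builtin.command: "{{ item }}"
  loop:
    - ip link add br-dvr type bridge
    - ip link set br-dvr up
    - ip addr add 192.0.2.1/24 dev br-dvr
  become: true

- name: Create a veth pair and attach one end to the Linux bridge
  ansible.builtin.command: "{{ item }}"
  loop:
    - ip link add veth-dvr type veth peer name veth-ovs
    - ip link set veth-dvr master br-dvr
    - ip link set veth-dvr up
    - ip link set veth-ovs up
  become: true

- name: Plug the other veth end into the OVS external bridge used by Neutron
  ansible.builtin.command: ovs-vsctl --may-exist add-port br-ex veth-ovs
  become: true
```

Because the IP address lives on the Linux bridge rather than on br-ex itself, OVS recreating its switches no longer takes the address (and the jobs) down with it.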
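On the idea of validating images before publishing: one way this could look (an assumption sketched here, not an agreed plan) is to have the periodic publish job depend on a deploy/verify job in Zuul, so publishing only happens when the freshly built images actually deploy. Only kolla-publish-centos8s-binary-quay appears in the log above; the other job names are hypothetical.

```yaml
# Hypothetical Zuul project-pipeline snippet; build and verify job names are made up.
- project:
    periodic:
      jobs:
        - kolla-build-centos8s-binary            # builds the images (assumed job name)
        - kolla-ansible-centos8s-binary-verify:  # hypothetical job deploying the built images
            dependencies:
              - kolla-build-centos8s-binary
        - kolla-publish-centos8s-binary-quay:
            dependencies:
              - kolla-ansible-centos8s-binary-verify
```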