09:02:14 <strigazi> #startmeeting magnum
09:02:14 <opendevmeet> Meeting started Wed Feb  9 09:02:14 2022 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:02:14 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:02:14 <opendevmeet> The meeting name has been set to 'magnum'
09:02:19 <strigazi> #topic Roll Call
09:02:22 <strigazi> o/
09:02:27 <dalees> o/
09:03:50 <strigazi> Hello dalees , first time in this meeting?
09:04:30 <dalees> Yes, I'm working at Catalyst Cloud on Magnum now that Fei Long has moved on.
09:05:30 <strigazi> dalees: nice
09:05:31 <dalees> I think we met at KubeCon Barcelona; Ricardo introduced us a few years back.
09:05:39 <dalees> anyway, hi!
09:06:16 <gbialas> o/
09:06:31 <strigazi> dalees: I remember it, I couldn't tell from the irc nick
09:06:37 <strigazi> o/ gbialas
09:08:34 <strigazi> #topic Stories/Tasks
09:08:52 <strigazi> #link https://etherpad.opendev.org/p/magnum-weekly-meeting
09:09:49 <strigazi> (dalees: Ricardo says hi, he's next to me)
09:10:28 <strigazi> gbialas: I managed to merge some of your patches last week
09:10:44 <strigazi> And I have pushed 827668: fcos-k8s: Update to v1.22 | https://review.opendev.org/c/openstack/magnum/+/827668
09:10:51 <gbialas> Yes, I saw that. Huuuge thanks for this.
09:11:06 <gbialas> This is also very handy :)
09:11:24 <strigazi> I will bump it to 1.23, it shouldn't need anything else
09:11:38 <strigazi> have a look at it if you can
09:11:41 <dalees> strigazi, :-)
09:13:24 <strigazi> gbialas: let's try to update to 1.23.3 and then merge the rest of the patches?
09:15:19 <gbialas> strigazi: 1.23.3 -> you mean k8s version ?
09:15:28 <strigazi> gbialas: yes
09:15:55 <gbialas> Ok. We can do this.
09:16:25 <strigazi> in the patch above, 1.22 was the main pain point since it deprecated quite a few APIs
09:16:29 <tobias-urdin> o/
09:17:28 <strigazi> tobias-urdin: \o
09:17:45 <gbialas> Yeah, I see that it does a lot more than just "update to v1.22" :)
09:18:38 <strigazi> gbialas: api version changes bite in many places
09:18:48 <mnasiadka> o/ - sorry for being late
09:19:42 <gbialas> strigazi:  Yeah, I know how it looks.
09:19:51 <strigazi> mnasiadka: \o
09:21:31 <strigazi> I think we are done with reviews from last week, let's move to:
09:21:33 <strigazi> #topic Open Discussion
09:21:50 * dalees is pleased to see FCOS `35.20220116.3.0` in there too.
09:23:21 <strigazi> For the ClusterAPI driver, since Stig is not here we can skip. But have a look at it 824488: Add Cluster API Kubernetes COE driver | https://review.opendev.org/c/openstack/magnum-specs/+/824488
09:24:12 <strigazi> Anyone wants to bring something up?
09:24:33 <mnasiadka> I saw an old change trying to stop using hyperkube - is there a plan to do anything with it? The current situation is a bit sub-optimal, I would say ;-) (fresh Kubernetes versions don't work out of the box without hyperkube_prefix/kube_tag settings)
09:25:38 <strigazi> mnasiadka: IMO, since we plan to use CAPI and we can get updated hyperkube builds from rancher easily, we can leave it as it is
09:25:48 <mnasiadka> ok then
09:26:10 <strigazi> mnasiadka: we can set the default to the rancher build
09:26:11 <jakeyip> o/ sorry I'm late, getting used to TZ changes
09:26:19 <mnasiadka> but for CAPI a user would need a pre-existing k8s cluster to deploy, which changes things a bit
09:26:52 <mnasiadka> strigazi: would make sense, since the current default is probably an EOL version of Kubernetes
09:28:21 <strigazi> mnasiadka: I added an entry for the next meeting
09:28:41 <strigazi> mnasiadka: in the agenda https://etherpad.opendev.org/p/magnum-weekly-meeting
09:29:03 <mnasiadka> strigazi: I see it, have it opened :)
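(For context on the kube_tag/hyperkube_prefix labels discussed above: a newer Kubernetes can usually be selected on the existing driver by pointing those labels at the Rancher-built hyperkube images. A minimal CLI sketch, with the template name, image, network and exact tag purely illustrative:

    openstack coe cluster template create k8s-v1-23 \
        --coe kubernetes \
        --image fedora-coreos-35 \
        --external-network public \
        --labels hyperkube_prefix=docker.io/rancher/,kube_tag=v1.23.3-rancher1

Whether a given tag actually exists under docker.io/rancher should be verified before relying on it.)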
09:30:02 <dalees> I've got a couple of (basic) questions.
09:31:22 <strigazi> mnasiadka: for the CAPI driver that needs an existing k8s cluster, maybe leave a comment in the spec? I think it can be bring your own cluster or use a magnum cluster
09:31:54 <mnasiadka> strigazi: will do
09:32:15 <strigazi> jakeyip: \o
09:32:26 <strigazi> dalees: shoot
09:33:36 <dalees> How many others in the Magnum community are using rolling upgrades vs recreating clusters? I'm currently interested in FCOS 35 as there's a kernel CVE fixed in there, and I've found a couple of rolling upgrade bugs to fix.
09:34:00 <strigazi> dalees: we (at CERN), always recreate
09:34:43 <strigazi> dalees: we suggest users leverage LBs (OpenStack managed or not) to redirect the workloads
09:35:17 <tobias-urdin> we've had to recreate customers' clusters when moving to rancher hyperkube images :(
09:35:27 <tobias-urdin> even though we just wanted to get a newer k8s into the labels and upgrade
09:35:37 <tobias-urdin> added some info to meeting agenda
09:35:52 <jakeyip> we (Nectar) haven't tested out rolling upgrades properly, so we don't recommend them
09:36:28 <tobias-urdin> (we've previously been able to upgrade when using old community hyperkube images)
09:36:45 <dalees> ok, interesting - thanks. maybe explains some of the codebase author sections :). When I have some patchsets created for these fixes, can I just link against story https://storyboard.openstack.org/#!/story/2008628 or should I use/create another?
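(The rolling-upgrade path dalees refers to is driven through Magnum's cluster upgrade API; a minimal sketch, where the cluster and new template names are illustrative and the template is assumed to carry the newer kube_tag/FCOS image:

    # roll the existing cluster onto the new template, one node at a time
    openstack coe cluster upgrade --max-batch-size 1 mycluster k8s-v1-23

Recreating, as CERN and Nectar describe, instead means building a fresh cluster from the new template and redirecting workloads, e.g. via load balancers.)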
09:37:02 <strigazi> tobias-urdin: we can replicate the images to our registry, would this help?
09:37:25 <strigazi> tobias-urdin: from docker.io/rancher to docker.io/openstackmagnum
09:37:58 <tobias-urdin> hmm, i'm not really sure but i don't think it would solve it, i don't think it defaults to using docker.io/openstackmagnum on existing clusters
09:38:52 <dalees> strigazi, is docker.io/openstackmagnum zero-rated for image pulls, or do normal unauth'd pull limits apply?
09:39:26 <strigazi> tobias-urdin: indeed, it's the old official repo
09:40:14 <strigazi> dalees: limits apply, which is an issue for the heat-agent
09:41:26 <dalees> strigazi, ack
09:47:12 <strigazi> @all, Do you think we should mirror the heat agent to another registry? quay.io or ghcr.io or both
09:49:21 <jakeyip> hosting on a non-rate-limited registry would be nice, but are there any users currently impacted by this? afaict most cloud providers seem to have their own registry?
09:49:43 <dalees> we're working to solve the pull limits issue with an internally hosted registry, so it affects us now but won't soon.
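(One possible way to mirror the heat agent image off Docker Hub's anonymous pull limits, as raised above, is a direct registry-to-registry copy with skopeo; a sketch, assuming push access to a quay.io organisation, with organisation name and tag illustrative:

    # copy the heat-container-agent image to a mirror, preserving the tag
    skopeo copy \
        docker://docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1 \
        docker://quay.io/your-org/heat-container-agent:wallaby-stable-1

Existing clusters would still need labels such as container_infra_prefix pointed at the mirror, which is part of why changing the default alone does not help already-deployed clusters.)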
09:53:23 <strigazi> @all Anything else to bring up?
09:56:22 <strigazi> let's end the meeting then, see you next week!
09:56:25 <strigazi> #endmeeting