09:02:14 #startmeeting magnum
09:02:14 Meeting started Wed Feb 9 09:02:14 2022 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:02:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:02:14 The meeting name has been set to 'magnum'
09:02:19 #topic Roll Call
09:02:22 o/
09:02:27 o/
09:03:50 Hello dalees, first time in this meeting?
09:04:30 Yes, I'm working at Catalyst Cloud on Magnum now that Fei Long has moved on.
09:05:30 dalees: nice
09:05:31 I think we met at KubeCon Barcelona; Ricardo introduced us a few years back.
09:05:39 anyway, hi!
09:06:16 o/
09:06:31 dalees: I remember it, I couldn't tell from the IRC nick
09:06:37 o/ gbialas
09:08:34 #topic Stories/Tasks
09:08:52 #link https://etherpad.opendev.org/p/magnum-weekly-meeting
09:09:49 (dalees: Ricardo says hi, he's next to me)
09:10:28 gbialas: I managed to merge some of your patches last week
09:10:44 And I have pushed 827668: fcos-k8s: Update to v1.22 | https://review.opendev.org/c/openstack/magnum/+/827668
09:10:51 Yes, I saw that. Huuuge thanks for this.
09:11:06 This is also very handy :)
09:11:24 I will bump it to 1.23, it shouldn't need anything else
09:11:38 have a look at it if you can
09:11:41 strigazi, :-)
09:13:24 gbialas: let's try to update to 1.23.3 and then merge the rest of the patches?
09:15:19 strigazi: 1.23.3 -> you mean the k8s version?
09:15:28 gbialas: yes
09:15:55 Ok. We can do this.
09:16:25 in the patch above, 1.22 was the main pain point since it deprecated quite a few APIs
09:16:29 o/
09:17:28 tobias-urdin: \o
09:17:45 Yeah, I see that it does a lot more than just "update to v1.22" :)
09:18:38 gbialas: API version changes bite in many places
09:18:48 o/ - sorry for being late
09:19:42 strigazi: Yeah, I know how it looks.
09:19:51 mnasiadka: \o
09:21:31 I think we are done with reviews from last week, let's move to:
09:21:33 #topic Open Discussion
09:21:50 * dalees is pleased to see FCOS `35.20220116.3.0` in there too.
09:23:21 For the ClusterAPI driver, since Stig is not here we can skip it. But have a look at it: 824488: Add Cluster API Kubernetes COE driver | https://review.opendev.org/c/openstack/magnum-specs/+/824488
09:24:12 Does anyone want to bring something up?
09:24:33 I saw some old change trying to stop using hyperkube - is there a plan to do anything with it? The current situation is a bit sub-optimal, I would say ;-) (fresh Kubernetes versions don't work out of the box without hyperkube_prefix/kube_tag settings)
09:25:38 mnasiadka: IMO, since we plan to use CAPI and we can get updated hyperkube builds from rancher easily, we can leave it as it is
09:25:48 ok then
09:26:10 mnasiadka: we can set the default to the rancher build
09:26:11 o/ sorry I'm late, getting used to TZ changes
09:26:19 but for CAPI a user would need a pre-existing k8s cluster to deploy, which changes things a bit
09:26:52 strigazi: that would make sense, since the current default is probably an EOL version of Kubernetes
09:28:21 mnasiadka: I added an entry for the next meeting
09:28:41 mnasiadka: in the agenda https://etherpad.opendev.org/p/magnum-weekly-meeting
09:29:03 strigazi: I see it, I have it open :)
09:30:02 I've got a couple of (basic) questions.
09:31:22 mnasiadka: for the CAPI driver that needs an existing k8s cluster, maybe leave a comment in the spec? I think it can be bring-your-own-cluster or use a magnum cluster
09:31:54 strigazi: will do
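[Editor's note: a minimal sketch of the hyperkube_prefix/kube_tag labels discussed at 09:24:33, showing how a cluster template can pin a rancher hyperkube build so fresh Kubernetes versions work without relying on the (possibly EOL) default. The template name, image, network, and flavor names are placeholders, and the exact rancher tag is illustrative.]

```shell
# Hypothetical cluster template pinning a rancher-built hyperkube image;
# adjust the image, network, flavors, and kube_tag for your deployment.
openstack coe cluster template create k8s-fcos-v1-22 \
  --coe kubernetes \
  --image fedora-coreos-35 \
  --external-network public \
  --master-flavor m1.medium \
  --flavor m1.medium \
  --labels hyperkube_prefix=docker.io/rancher/,kube_tag=v1.22.6-rancher1
```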
09:32:15 jakeyip: \o
09:32:26 dalees: shoot
09:33:36 How many others in the magnum community are using rolling upgrades vs recreating clusters? I'm currently interested in FCOS 35 as there's a kernel CVE fixed in there, and I have found a couple of rolling upgrade bugs to fix.
09:34:00 dalees: we (at CERN) always recreate
09:34:43 dalees: we suggest users leverage LBs (OpenStack-managed or not) to redirect the workloads
09:35:17 we've had to recreate customers' clusters when moving to rancher hyperkube images :(
09:35:27 even though we just wanted to set a newer k8s in the labels and upgrade
09:35:37 added some info to the meeting agenda
09:35:52 we (Nectar) haven't tested out rolling upgrades properly, so we don't recommend them
09:36:28 (we've previously been able to upgrade when using old community hyperkube images)
09:36:45 ok, interesting - thanks. That maybe explains some of the codebase author sections :). When I have some patchsets created for these fixes, can I just link them against story https://storyboard.openstack.org/#!/story/2008628 or should I use/create another?
09:37:02 tobias-urdin: we can replicate the images to our registry, would this help?
09:37:25 tobias-urdin: from docker.io/rancher to docker.io/openstackmagnum
09:37:58 hmm, i'm not really sure but i don't think it would solve it, i don't think it defaults to using docker.io/openstackmagnum on existing clusters
09:38:52 strigazi, is docker.io/openstackmagnum zero-rated for image pulls, or do normal unauthenticated pull limits apply?
09:39:26 tobias-urdin: indeed, it's the old official repo
09:40:14 dalees: limits apply, which is an issue for the heat-agent
09:41:26 strigazi, ack
09:47:12 @all, do you think we should mirror the heat agent to another registry? quay.io or ghcr.io or both?
09:49:21 hosting on a non-rate-limited registry would be nice, but are there any users currently impacted by this? AFAICT most cloud providers seem to have their own registry?
09:49:43 we're working to solve the pull limits issue with an internally hosted registry, so it affects us now but won't soon.
09:53:23 @all Anything else to bring up?
09:56:22 let's end the meeting then, see you next week!
09:56:25 #endmeeting
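[Editor's note: a minimal sketch of the mirroring idea raised at 09:47:12, assuming skopeo is available; the quay.io namespace and the image tag are illustrative placeholders, not an agreed destination.]

```shell
# Copy the heat agent image from Docker Hub to a registry without
# anonymous pull limits; the destination namespace and the tag are
# placeholders for whatever the team settles on.
skopeo copy \
  docker://docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1 \
  docker://quay.io/openstackmagnum/heat-container-agent:wallaby-stable-1
```

Clusters could then be pointed at such a mirror with Magnum's container_infra_prefix label (e.g. `--labels container_infra_prefix=quay.io/openstackmagnum/`), which prefixes the container images Magnum pulls.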