08:00:42 <dalees> #startmeeting magnum
08:00:42 <opendevmeet> Meeting started Tue Aug 5 08:00:42 2025 UTC and is due to finish in 60 minutes. The chair is dalees. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:00:42 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
08:00:42 <opendevmeet> The meeting name has been set to 'magnum'
08:00:51 <dalees> #topic Roll Call
08:00:59 <dalees> o/
08:01:29 <sd109> o/
08:01:40 <hemanth> o/
08:03:44 <dalees> #topic Reviews
08:04:06 <dalees> hemanth: thanks for adding the topic and link to https://review.opendev.org/c/openstack/magnum-capi-helm/+/955984
08:05:24 <dalees> so to support this, do you have different helm charts that will be deployed for Canonical's k8s?
08:05:33 <hemanth> I am trying to use the magnum-capi-helm driver to deploy Canonical k8s using Cluster API, and saw the driver hardcodes the control plane CRD as KubeadmControlPlane; this should be configurable to allow any other k8s deployment's control plane
08:05:44 <hemanth> Yes, I am preparing new helm charts for this work
08:08:25 <dalees> the part I'm not sure on is the approach of turning CRD details into config; it feels like this is a fixed set of names that should be in code.
08:10:04 <sd109> Do you mean the new api_resources and k8s_control_plane_resource_conditions config options?
08:10:08 <hemanth> If I introduce a set of names in the code, do you expect some functional testing upstream as well, with the whole set of new helm charts?
08:10:15 <dalees> but I'm happy to see expanded use of the driver; be aware there may be breakages until we have CI testing of both CAPO and Canonical k8s
08:11:06 <dalees> sd109: yeah, that is what I mean. What are your thoughts?
08:11:54 <dalees> hemanth: perhaps; maybe config is the easier way to keep these details separate?
08:12:44 <hemanth> yeah, I think config is the simpler way to make the driver support any k8s deployment tool which has corresponding CAPI pieces
08:13:25 <hemanth> so that it will be clear what upstream tests and supports: kubeadm
08:14:29 <sd109> I quite like the idea of being able to at least modify the api_versions for the CAPI CRDs in config, because it further detaches the driver from the mgmt cluster and means we don't need a compatibility matrix in the driver docs
08:15:03 <sd109> The supported CAPI API versions are really the responsibility of the chosen Helm charts anyway, I guess
08:16:43 <sd109> And being able to configure something other than the Kubeadm control plane provider via config is, I think, a nice way of widening the driver's adoption
08:19:11 <dalees> these are good points
08:19:37 <hemanth> +1
08:21:01 <hemanth> If we have consensus on this approach, I'd appreciate further feedback on the PR when you get time
08:21:31 <sd109> I'll try to take a closer look at it this week
08:22:04 <hemanth> thanks
08:22:07 <sd109> hemanth: do you intend to use the Manifest and HelmRelease CRDs for cluster addons too?
08:23:21 <hemanth> My preference right now is to use https://github.com/kubernetes-sigs/cluster-api-addon-provider-helm unless there is something missing there that needs the Manifest and HelmRelease CRDs
08:25:11 <sd109> I thought that might be the case; we've talked about (and still plan to at some point) moving to that upstream project too, so I'd be interested to hear how it works for you
08:25:56 <hemanth> sure, I will let you know
08:27:38 <dalees> I'm not sure the CRD names and details match, but perhaps they can with the proposed config file options. Likewise, we plan to move to using that repo, but currently still use azimuth/cluster-api-addon-provider
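As background for the api_resources and k8s_control_plane_resource_conditions options discussed above: a minimal sketch of how such options might be declared with oslo.config. The option names come from the discussion; the group name, option types, and defaults here are assumptions for illustration, not the schema actually proposed in review 955984.

    # Hypothetical sketch only: configurable Cluster API CRD details for
    # the magnum-capi-helm driver. Option names come from the meeting
    # discussion; group, types, and defaults are assumptions.
    from oslo_config import cfg

    capi_helm_opts = [
        cfg.DictOpt(
            "api_resources",
            default={
                "kubeadmcontrolplane": "controlplane.cluster.x-k8s.io/v1beta1",
            },
            help="Mapping of CRD kinds to the apiVersion the driver should "
                 "use when talking to the management cluster, so a different "
                 "control plane provider can be substituted via config.",
        ),
        cfg.ListOpt(
            "k8s_control_plane_resource_conditions",
            default=["Available"],
            help="Status conditions that must be True on the control plane "
                 "resource before the driver treats it as ready.",
        ),
    ]

    cfg.CONF.register_opts(capi_helm_opts, group="capi_helm")

Declared this way, a deployer targeting something other than kubeadm would only change configuration, which matches sd109's point about detaching the driver from the management cluster and avoiding a compatibility matrix in the docs.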
08:28:43 <dalees> so yeah, it'll be good to hear how you get on with that. Certainly any driver changes to allow supporting both would be welcomed, as it's the expected migration path.
08:29:10 <sd109> +1
08:29:18 <dalees> any more on this topic?
08:29:28 <hemanth> no, thank you both
08:29:33 <sd109> Nothing from me
08:29:46 <dalees> #topic Proposals
08:30:07 <dalees> A brief topic, mostly for the other cores - there are two specs created that could use some feedback.
08:30:38 <dalees> the Flamingo feature freeze is pretty soon, so we'll be getting the first patches up for these two shortly for review.
08:31:08 <dalees> mnasiadka: please have a read, links are in the agenda
08:31:52 <dalees> #topic Open Discussion
08:32:00 <dalees> anything else to raise?
08:32:39 <hemanth> there was a question in the channel a couple of days ago
08:32:44 <hemanth> any idea whether it's possible to override the kubeNetwork:pods:cidrBlocks without having to fork and maintain my own version of that helm chart?
08:32:53 <hemanth> I would like to know your thoughts on this
08:34:54 <sd109> Not at the moment; we've encountered similar problems with other default values. It would be possible via driver config if this patch is accepted, though: https://review.opendev.org/c/openstack/magnum-capi-helm/+/951966
08:36:03 <dalees> yeah, we've hit this with a customer too, where having cluster labels for pod and service CIDR lists would be helpful.
08:36:52 <dalees> sd109: that would be site-wide, which I would avoid. Perhaps it would work for andrewbogott_'s use case, however.
08:37:44 <hemanth> Isn't it a better solution if we can provide those override values in the openstack cluster create command?
08:38:03 <dalees> something like this: https://github.com/stackhpc/magnum-capi-helm/pull/30
08:39:56 <dalees> but also coming from cluster labels. If I were to rework that, I'd perhaps only provide the values if the labels were defined; otherwise they should be left to default to whatever the chart values are.
08:40:53 <hemanth> OK, I am wondering whether the helm chart value overrides should be site-specific or at the user level
08:42:05 <hemanth> especially when I look at the capi-helm-charts at https://azimuth-cloud.github.io/capi-helm-charts; there are so many cluster addons which might not be required by all users
08:42:15 <sd109> IIRC John was quite keen on only making it site-specific, to prevent users from accidentally breaking their clusters with malformed values, but I can see both sides
08:43:23 <sd109> Although it's also worth noting that John's proposed patch does it on a per-cluster-template basis rather than site-wide
08:43:24 <hemanth> ack.. good to know about the ongoing PR..
08:43:53 <hemanth> I will take a close look at the PR
08:45:27 <dalees> sd109: right, of course.
08:48:51 <dalees> that patch, 951966, does allow quite a few helpful things for the deployer. I need to get back to reviewing it
08:57:28 <dalees> anything else? I will close out the meeting if we're all good.
08:57:40 <hemanth> nothing from me
08:58:17 <sd109> Likewise
08:59:26 <dalees> ok, thanks for the discussions
08:59:31 <dalees> #endmeeting
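On the CIDR override discussion in Open Discussion: a rough sketch of dalees's suggestion to inject pod and service CIDR values into the Helm values only when the corresponding cluster labels are actually set, leaving the chart defaults untouched otherwise. The label names, values layout, and the helper itself are hypothetical, not part of the driver today.

    # Hypothetical helper: merge CIDR overrides from Magnum cluster labels
    # into the Helm values dict only when the labels are defined, so unset
    # labels fall through to the chart defaults (per dalees's suggestion).
    # Label names and the values layout are assumptions.
    def apply_network_label_overrides(labels: dict, values: dict) -> dict:
        mapping = {
            "pod_cidr_blocks": "pods",
            "service_cidr_blocks": "services",
        }
        for label, section in mapping.items():
            raw = labels.get(label)
            if raw:
                network = values.setdefault("kubeNetwork", {})
                network.setdefault(section, {})["cidrBlocks"] = [
                    cidr.strip() for cidr in raw.split(",")
                ]
        return values

    # Example: labels={"pod_cidr_blocks": "10.100.0.0/16"} sets
    # values["kubeNetwork"]["pods"]["cidrBlocks"] to ["10.100.0.0/16"];
    # with no labels set, values is returned unchanged.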