09:00:16 <strigazi> #startmeeting magnum
09:00:16 <opendevmeet> Meeting started Wed Jan 19 09:00:16 2022 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:16 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:16 <opendevmeet> The meeting name has been set to 'magnum'
09:00:21 <strigazi> #topic Roll Call
09:00:38 <strigazi> o/
09:00:40 <tobias-urdin> o/
09:00:46 <oneswig> o/
09:00:50 <bbezak> o/
09:00:50 <parallax> \o
09:01:19 <strigazi> Thanks for joining the meeting tobias-urdin oneswig bbezak parallax
09:01:30 <oneswig> Thanks for arranging it :-)
09:01:56 <strigazi> Agenda (from header):
09:01:59 <strigazi> #link https://etherpad.opendev.org/p/magnum-weekly-meeting
09:02:12 <gbialas> o/
09:02:20 <strigazi> o/ gbialas
09:02:22 <strigazi> #topic Stories/Tasks
09:02:45 <strigazi> #topic Add Cluster API Kubernetes COE driver
09:02:56 <strigazi> #link https://review.opendev.org/c/openstack/magnum-specs/+/824488
09:03:15 <strigazi> oneswig: do you want to start the discussion on it?
09:03:32 <oneswig> Thanks for raising this strigazi.  In the team we have the cluster api end of an implementation in good shape
09:03:57 <oneswig> It has been developed with Magnum integration in mind
09:04:36 <oneswig> The concept is to use an external k8s management cluster for kubernetes cluster provisioning and management.
09:04:40 <jakeyip> hi o/
09:05:08 <oneswig> We get to draw on a good deal of operator functionality for relatively little Magnum code.
09:06:43 <oneswig> Our hope is that the development of the driver itself is fairly lightweight, but there may be significant effort needed around regression testing of Magnum, plus any assumptions made in the code for the heat-centric design
09:07:07 <oneswig> Making those changes without disruption is where the art lies
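A minimal sketch of the flow described above, using the upstream clusterctl CLI with the Cluster API OpenStack infrastructure provider; the cluster name, Kubernetes version, and node counts are illustrative:

    # On the external management cluster: install the Cluster API
    # controllers together with the OpenStack infrastructure provider.
    clusterctl init --infrastructure openstack

    # Render a workload-cluster manifest from the provider template and
    # apply it; the management cluster then provisions the OpenStack VMs.
    clusterctl generate cluster demo-cluster \
        --kubernetes-version v1.23.0 \
        --control-plane-machine-count 1 \
        --worker-machine-count 3 > demo-cluster.yaml
    kubectl apply -f demo-cluster.yaml

    # Retrieve the kubeconfig for the workload cluster once it is ready.
    clusterctl get kubeconfig demo-cluster > demo-cluster.kubeconfig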
09:07:45 <strigazi> tobias-urdin and other non-StackHPC folks, are you aware of the spec? What do you think?
09:08:28 <tobias-urdin> I read it just now, I wasn't aware of it but welcome it :)
09:09:45 <jakeyip> I am not familiar with cluster api but I like what it can bring, e.g. less reliance on heat, and it removes the need for keystone trusts
09:10:01 <strigazi> oneswig: what would be nice to add in the SPEC is some direction on the implementation, break it into some tasks
09:10:36 <oneswig> You're right, it's very light on concrete detail.  I'll try to change that this week
09:10:41 <strigazi> eg add the appcreds generation code, the base class for CAPI in drivers, the helm chart maintenance
09:11:37 <oneswig> +1
09:11:40 <strigazi> oneswig: It doesn't have to be in detail, just make some self-contained tasks with an order. eg appcreds generation should be first I guess
09:12:24 <strigazi> jakeyip: trusts and appcreds are not too different as concepts, but appcreds are definitely better
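For context, a sketch of appcred creation with python-openstackclient; the name, role, and expiry here are illustrative. Unlike a trust, the credential is scoped to the creating user's project with no second user involved, and it can carry a restricted role set and an expiry:

    openstack application credential create \
        --role member \
        --expiration 2023-01-01T00:00:00 \
        magnum-demo-cluster

The returned id/secret pair is what a driver could inject into the cluster for components such as the cloud provider to authenticate with.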
09:13:15 <strigazi> oneswig, will we need to host our own helm charts somewhere?
09:13:34 <strigazi> not the code, the tarballs
09:13:52 <oneswig> Not sure what the best plan is there.
09:14:23 <oneswig> Ideally it would dovetail with CERN's work
09:14:44 <jakeyip> what is CERN's work?
09:15:20 <strigazi> jakeyip: internally we have all add-ons in a single helm chart.
09:15:40 <strigazi> a metachart, as it is usually called. It's a common pattern in helm
09:15:57 <strigazi> We kind of do it in magnum ATM but in the code:
09:17:17 <strigazi> https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/install-helm-modules.sh
09:17:54 <strigazi> the ideal thing would be to move this outside the code
09:18:05 <strigazi> eg https://gitlab.cern.ch/helm/releases/cern-magnum
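A minimal sketch of the metachart pattern: a chart with no templates of its own, only a dependency list, with values passed down under each subchart's name or alias. The chart name, version pins, and the single dependency shown are illustrative:

    cat > Chart.yaml <<EOF
    apiVersion: v2
    name: magnum-addons
    version: 0.1.0
    dependencies:
      - name: openstack-cloud-controller-manager
        version: 1.0.0
        repository: https://kubernetes.github.io/cloud-provider-openstack
        alias: occm
        condition: occm.enabled
    EOF

    # Fetch the pinned subcharts and package the result for hosting.
    helm dependency update .
    helm package .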
09:18:58 <strigazi> tobias-urdin: jakeyip: makes sense?
09:20:10 <parallax> +1
09:20:33 <jakeyip> yes
09:20:47 <tobias-urdin> yes
09:21:51 <strigazi> The SPEC implies something similar
09:22:21 <strigazi> https://review.opendev.org/c/openstack/magnum-specs/+/824488/1/specs/yoga/clusterapi-driver.rst#88
09:22:48 <strigazi> The addons mentioned as installed with helm can be tracked in a similar way
09:23:08 <oneswig> It would certainly simplify automation effort, I think
09:23:57 <strigazi> In our case, we host in a private registry. We could ship it in the code base, which would be strange, or host a magnum chart somewhere
09:24:35 <strigazi> I don't think we can solve this now, but maybe we raise it on the ML?
09:25:31 <strigazi> We could use a provider, dockerhub, github, to start
09:26:32 <jakeyip> can you clarify what you mean by this ^ ?
09:27:02 <strigazi> this chart here: https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/install-helm-modules.sh#L72
09:27:21 <strigazi> Is created and installed locally on the node at cluster creation
09:28:15 <strigazi> We could host this chart on a provider and at cluster creation, you can pass just values for this metachart and the chart version
09:29:06 <strigazi> operators can define values for this chart in the CT (cluster template) that control all components, and users can override them on creation
09:30:06 <strigazi> It would replace these bash scripts: https://github.com/openstack/magnum/tree/master/magnum/drivers/common/templates/kubernetes/helm
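A sketch of what cluster creation could then boil down to, assuming the metachart is published to some hosted repository; the repository URL, release name, and values files are illustrative:

    # Operators pin a chart version and ship site defaults; users pass
    # overrides at creation time. Later -f files take precedence in helm.
    helm repo add magnum-addons https://example.org/charts
    helm repo update
    helm upgrade --install addons magnum-addons/magnum-addons \
        --version 0.1.0 \
        --namespace kube-system \
        -f operator-values.yaml \
        -f user-overrides.yaml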
09:30:28 <strigazi> jakeyip: makes sense?
09:30:30 <jakeyip> hm, is it only values that can be overridden? is it possible for magnum to have a default chart while infrastructure providers host and customize their own?
09:31:15 <strigazi> jakeyip: magnum operators should be able to use their own, not only the default
09:31:43 <strigazi> jakeyip: we should provide a default to have a base working
09:31:56 <jakeyip> cool
09:32:28 <strigazi> jakeyip: or not; at CERN we will definitely use a more complicated one, due to things that exist only at CERN
09:33:23 <strigazi> but all clouds will have their own, eg custom storage classes for cinder etc
09:33:56 <strigazi> shall we move on, or are there more questions on it?
09:34:51 <oneswig> +1
09:36:10 <strigazi> oneswig: we'll wait for an update from you and then push code
09:36:27 <strigazi> #topic Open Reviews
09:36:44 <strigazi> from the list in https://etherpad.opendev.org/p/magnum-weekly-meeting
09:37:31 <strigazi> I have already looked at all of them, I'll add feedback today
09:37:46 <gbialas> Thx.
09:37:58 <strigazi> Does anyone want to discuss something about them? Any issues?
09:38:21 <gbialas> For some reason yesterday zuul added -1 because tox failed on docs... No idea why.
09:38:34 <strigazi> parallax: you left a question in the etherpad
09:39:01 <strigazi> gbialas: did you recheck?
09:39:19 <bbezak> I've noticed this recently in wallaby - https://github.com/kubernetes/cloud-provider-openstack/issues/1738. Will take a look at it to fix that.
09:39:37 <strigazi> bbezak: thanks
09:39:57 <gbialas> Not yet. But afaik it is failing locally also.
09:40:07 <parallax> well, yes - I'm wondering what's the future for the particular drivers
09:41:00 <strigazi> fedora-coreos will be with us for a bit longer; the others we can start dropping. Mesos should be first since it's archived
09:41:04 <parallax> e.g swarm
09:41:20 <strigazi> parallax: note that you can disable drivers in magnum.conf so that they cannot be used
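A minimal sketch of that, assuming the [drivers] disabled_drivers option and driver entry-point names matching the installed release; service names vary by distribution:

    # Disable unwanted driver entry points, then restart the services.
    crudini --set /etc/magnum/magnum.conf drivers disabled_drivers \
        "swarm_fedora_atomic_v1,swarm_fedora_atomic_v2,mesos_ubuntu_v1"
    systemctl restart magnum-api magnum-conductor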
09:41:24 <parallax> do you deprecate first, then drop in the next cycle?
09:42:02 <strigazi> parallax: let's deprecate and send an email to the ML? then in Z we drop?
09:42:36 <parallax> that makes sense I think
09:44:02 <strigazi> cool
09:44:35 <strigazi> #topic Open Discussion
09:44:50 <jakeyip> there was also bay and baymodel deprecation
09:45:05 <jakeyip> I think https://review.opendev.org/c/openstack/magnum/+/803779
09:45:34 <strigazi> jakeyip: I think we can drop these, wdyt?
09:46:36 <jakeyip> yes would like to see them gone. need a deprecation strategy though, e.g. what about magnumclient?
09:47:08 <jakeyip> thinking deprecate in the client first, then the API; does that make sense? what is the normal process?
09:47:54 <strigazi> jakeyip: in the past it was two releases, but I think with bay/baymodel we can be flexible since they haven't been maintained in a while
09:48:22 <strigazi> jakeyip: i'd drop and also notify the ML
09:49:20 <jakeyip> great
09:49:33 <jakeyip> i can help with testing out some of the reviews in the etherpad, I am guessing that is priority
09:49:49 <strigazi> excellent
09:50:08 <strigazi> let's wrap up the meeting?
09:51:03 <strigazi> said twice
09:51:25 <strigazi> thanks everyone, see you next week!
09:51:35 <strigazi> #endmeeting