09:00:43 #startmeeting magnum
09:00:43 Meeting started Wed Feb 16 09:00:43 2022 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:43 The meeting name has been set to 'magnum'
09:00:49 #topic Roll Call
09:00:52 o/
09:01:25 hi
09:02:24 o/
09:02:25 mnasiadka: jakeyip: hello :)
09:02:33 o/
09:03:03 o/
09:03:04 \o
09:03:22 o/
09:03:36 o/
09:04:28 #topic Add Cluster API Kubernetes COE driver https://review.opendev.org/c/openstack/magnum-specs/+/824488
09:06:29 Only a couple of internal discussions on that at this end, unfortunately. Not much progress.
09:07:52 oneswig: Is there a first step we can start from? Are you stuck on something in particular?
09:10:57 oneswig: I (or someone in our team) could help with the driver part, up to the point of talking to the Kubernetes cluster running the CAPI controller
09:12:18 Another colleague has been working on an implementation (as part of other work). I'd hoped he would join last week, but I wasn't here
09:13:44 Appreciate the offer and I'll try to make connections
09:13:56 oneswig: ok, thanks
09:14:39 #topic Past Action Items
09:14:46 change the default hyperkube to the rancher build
09:15:12 I didn't manage to push the patch last week, I will do it today
09:16:04 #topic Pending Reviews
09:18:19 I'd need a second pair of eyes for "Mesos driver drop https://review.opendev.org/c/openstack/magnum/+/821213"
09:19:00 LGTM, but I have questions - when we deprecate these, should we start from the client / API first?
09:19:14 I saw that the FC35 update has security implications (i.e. people should move off FC33). Has that been publicised?
09:19:42 oneswig: do you have a link for that?
09:20:19 jakeyip: For the mesos driver, I don't think we do any validations in the client
09:20:55 jakeyip: It hasn't received any patches for some time, and we sent an email to the ML
09:21:25 https://jfrog.com/blog/the-impact-of-cve-2022-0185-linux-kernel-vulnerability-on-popular-kubernetes-engines/
09:21:33 thanks!
09:22:28 strigazi: yeah, for mesos I don't see anything in the client. I am thinking generally, e.g. the related bay/baymodel drop
09:23:44 jakeyip: usually we log a warning on both the API and the client, then drop
09:25:47 then drop meaning one version later?
09:26:43 yes, but do we want to wait for another release?
09:28:26 for mesos I was thinking of dropping it at the API at https://github.com/openstack/magnum/blob/master/magnum/api/validation.py#L259-L260 first... which has the effect of not allowing new clusters, then the driver code will be effectively dead code and can be removed easily
09:29:18 jakeyip: so, in this release we change the validation and in the next one the rest of the code?
09:31:05 seems safer to me, I don't have strong opinions.
09:31:16 ok
09:31:21 let's do that
09:31:48 we can revisit if the code (e.g. tests) is preventing us from moving forward
09:31:59 For bay/baymodel, something similar?
09:32:25 yeap
09:32:36 cool
09:32:40 e.g. we could do the client this version https://review.opendev.org/c/openstack/python-magnumclient/+/803629
09:33:32 ok, let's log these as actions
09:34:02 #action change magnum/api/validation.py#L259-L260 to not allow mesos as a coe option
09:34:41 #action leave a comment to merge https://review.opendev.org/c/openstack/magnum/+/821213 in Z
09:35:00 #undo
09:35:00 Removing item from minutes: #action leave a comment to merge https://review.opendev.org/c/openstack/magnum/+/821213 in Z
09:35:05 #action leave a comment to merge https://review.opendev.org/c/openstack/magnum/+/821213 in Z+1
09:35:33 #action merge 803629: Drop bay and baymodel | https://review.opendev.org/c/openstack/python-magnumclient/+/803629 in Z
09:36:09 #action leave a comment to mere 803780: Drop bay and baymodel from controllers | https://review.opendev.org/c/openstack/magnum/+/803780 in Z+1
09:36:15 #undo
09:36:15 Removing item from minutes: #action leave a comment to mere 803780: Drop bay and baymodel from controllers | https://review.opendev.org/c/openstack/magnum/+/803780 in Z+1
09:36:20 #action leave a comment to merge 803780: Drop bay and baymodel from controllers | https://review.opendev.org/c/openstack/magnum/+/803780 in Z+1
09:37:00 #action change the default hyperkube to the rancher build
09:37:25 let's move to the rest of the list of reviews
09:40:26 For https://review.opendev.org/c/openstack/magnum/+/773923 and https://review.opendev.org/c/openstack/magnum/+/775793 I don't think there is anything to bring up
09:41:30 For 827089: security hardening - kube-hunter(KHV002) | https://review.opendev.org/c/openstack/magnum/+/827089, is it safe to merge, jakeyip? we rely on the healthz of the apiserver to install all addons
09:42:34 if others can have a look it would be great
09:43:15 Finally, for 827668: fcos-k8s: Update to v1.22 | https://review.opendev.org/c/openstack/magnum/+/827668 we can merge
09:44:16 #topic Open Discussion
09:44:29 Does anyone want to bring something up?
09:44:49 oh hm, need to hold that. I saw that the cluster state reports healthy, I did not realise the /healthz endpoint returns 401. I'll check
09:45:37 For the Z-PTL I'll send an email today. I hope we can make the change for the next release :)
09:46:04 we have a couple of patches for quotas that we would like to merge
09:46:04 jakeyip: where do you see the 401? in the conductor?
09:46:31 jakeyip: For the quotas patches, I'll have a look
09:47:27 strigazi: 401 when I curl it as a normal client
09:47:51 jakeyip: that's expected, it's the goal of the patch
09:47:53 strigazi: thanks!
09:48:58 jakeyip: calls like this should work https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/calico-service.sh#L4471
09:49:03 [ "ok" = "$(kubectl get --raw='/healthz')" ]
09:51:10 ok, I was confused. I thought the /healthz output updates the cluster status.
09:52:00 reading the code now... I'll leave a comment on the patch later
09:52:06 jakeyip: thanks
09:53:09 AOB?
09:54:15 thanks for merging the magnumclient robo patches, there are a couple more; I will send them up after the meeting, don't want to pollute the conversation
09:55:48 See you next week everyone
09:55:53 #endmeeting
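Note on the API-side mesos drop agreed at 09:28:26 (reject the COE at validation time in Z so the driver becomes dead code that can be removed in Z+1): the sketch below is a minimal, hypothetical Python illustration of that idea only. The names (SUPPORTED_COES, REMOVED_COES, validate_coe, InvalidParameterValue) are assumptions for illustration and are not magnum's actual validation.py code.

# Illustrative stand-in for an API-side COE check, not magnum's real code.
# Rejecting 'mesos' at create time means no new mesos clusters can be made,
# so the driver code can be removed safely in the following release.

SUPPORTED_COES = {'kubernetes'}  # in magnum the real list comes from the installed drivers
REMOVED_COES = {'mesos': 'the mesos driver is deprecated and will be removed'}


class InvalidParameterValue(Exception):
    """Hypothetical stand-in for the API's validation error type."""


def validate_coe(coe: str) -> None:
    """Reject COEs that are no longer accepted for new cluster templates."""
    if coe in REMOVED_COES:
        raise InvalidParameterValue(
            f"COE '{coe}' is not supported: {REMOVED_COES[coe]}")
    if coe not in SUPPORTED_COES:
        raise InvalidParameterValue(f"Unknown COE '{coe}'")


if __name__ == '__main__':
    validate_coe('kubernetes')   # passes
    try:
        validate_coe('mesos')    # raises, so new mesos clusters are refused
    except InvalidParameterValue as exc:
        print(exc)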