09:03:54 #startmeeting magnum
09:03:54 Meeting started Wed May 29 09:03:54 2024 UTC and is due to finish in 60 minutes. The chair is jakeyip. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:03:54 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:03:54 The meeting name has been set to 'magnum'
09:03:59 #link https://etherpad.opendev.org/p/magnum-weekly-meeting
09:04:04 Please put your topics into the Agenda
09:04:09 #topic Roll Call
09:04:10 o/
09:04:13 o/
09:04:36 let's wait a couple of mins
09:05:53 Natz cheng proposed openstack/magnum-tempest-plugin master: Use py3 as the default runtime for tox https://review.opendev.org/c/openstack/magnum-tempest-plugin/+/895531
09:06:44 #topic Review Action Items
09:07:08 Cluster actions: kubeconfig download and reorder
09:07:11 #link https://review.opendev.org/c/openstack/magnum-ui/+/917913
09:08:34 dalees: this is generally OK, but I do have a comment. keystone_auth_enabled defaults to True, but your patch checks for the label explicitly. do you have a reason for this?
09:09:32 yeah, I think we're specifying it explicitly in our Cluster Templates, so I made the wrong assumption about the default. I'll re-check this and make the change so it accounts for the default
09:09:40 thanks for catching that
09:10:12 I guess you can flip the logic and it'll work, i.e. check for False :)
09:10:20 Merged openstack/python-magnumclient master: Remove extraneous quote in non-TLS kubeconfig https://review.opendev.org/c/openstack/python-magnumclient/+/918950
09:11:38 yep, I'll test the defaults too, though. The CAPI driver (or helm charts) might have a different default, but I hope not and I hope I just got it wrong.
09:12:48 ah I see. ok, I'll wait for your response. thanks! that is good work
09:13:21 It's quite nice to download a kubeconfig file and **just use it** without any openstack CLI auth :)
09:13:45 kudos to ricolin for the first version of that feature
09:15:53 do you have it in use for your production clusters?
09:16:46 Yes, I even updated our quickstart screenshots with all these changes in place :) https://docs.catalystcloud.nz/kubernetes/quickstart.html#creating-a-kubernetes-cluster
09:18:51 actually I thought we needed another patch at https://opendev.org/openstack/python-magnumclient/src/branch/master/magnumclient/common/utils.py#L251
09:19:13 that line is still using v1beta1, and there's another one at line 243
09:21:00 ah, maybe we do. I recall updating it from `v1alpha1` in https://opendev.org/openstack/python-magnumclient/commit/de11f40d0c632ff839e5838ed650d1821c87d8a8
09:22:27 dalees: maybe I am wrong. I couldn't get it to work in devstack (using OS_CLOUD=devstack)
09:22:46 let me try again and comment. good that it is working for you
09:24:08 I haven't reviewed the rest of the magnum-ui changes; I will get to them
09:24:53 it looks like `client.authentication.k8s.io/v1` is what it should be, even if it continues to work with my `kubectl` version. But I'll need to dig up the versions where it was introduced, deprecated and removed to make sure I understand the timeline. https://v1-29.docs.kubernetes.io/docs/reference/config-api/client-authentication.v1/
09:27:12 possibly both are still supported at this point in time and I was facing a different error. I'll update when I find out more
09:28:24 In my quick search, I can't find signs of `ExecCredential` v1beta1 being removed yet...
09:29:37 yep - ping me if you have issues and error messages. Maybe I can also reproduce on my side or help investigate.
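For context on the apiVersion discussion above: ExecCredential gained a stable `client.authentication.k8s.io/v1` API around Kubernetes 1.22, while v1beta1 is still accepted by current kubectl releases, which matches what was observed in the meeting. Below is a minimal sketch (in Python, since the code under discussion lives in magnumclient) of what a v1 exec-credential user entry in a generated kubeconfig could look like. The `magnum-get-token` helper command is a made-up placeholder, not what magnumclient actually emits.

```python
# Minimal sketch, not the actual magnumclient code: a kubeconfig "user"
# entry using the GA (v1) exec-credential API instead of v1beta1.
import yaml


def build_exec_user(cluster_uuid):
    """Return a kubeconfig user entry whose credentials come from an
    exec plugin. The helper command below is hypothetical."""
    return {
        "name": "admin",
        "user": {
            "exec": {
                # The version under discussion: v1 instead of v1beta1.
                "apiVersion": "client.authentication.k8s.io/v1",
                "command": "magnum-get-token",  # placeholder helper, not real
                "args": ["--cluster", cluster_uuid],
                # interactiveMode is required in the v1 API; v1beta1
                # defaulted it, which is one behavioural difference to
                # test when bumping the version.
                "interactiveMode": "Never",
            }
        },
    }


print(yaml.safe_dump({"users": [build_exec_user("<cluster-uuid>")]}))
```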
09:30:17 ok
09:31:02 I don't have much to discuss, just want to bring to attention that I haven't got CI passing for v1.28 with the Heat driver
09:31:19 #link https://review.opendev.org/c/openstack/magnum/+/919560
09:32:19 it passes locally in my devstack, but I don't know what's wrong in Zuul. I might play around with that patch, cutting down the number of tests etc., to try
09:34:20 reading the Zuul logs I notice "0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }"
09:34:55 hm that's wrong
09:35:00 and "0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}".
09:35:15 So I'm not sure which is most relevant to the failure
09:36:15 I was hoping for a `kubectl get nodes`, but that output is in JSON and my eyes went x_x
09:39:47 oh yeah, I remember another place I got stuck
09:40:03 https://zuul.opendev.org/t/openstack/build/bb7a0c2d39ba46feaba68b28093f7dc1/logs occm logs at https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_bb7/919560/3/check/magnum-tempest-plugin-tests-cluster-k8s_fcos_v1-1.28-flannel/bb7a0c2/controller/logs/magnum-nodes/kubernetes/pods/kube-system/openstack-cloud-controller-manager-dhn7p
09:40:18 E0520 09:19:36.923645 1 node_controller.go:240] error syncing 'tempest-cluster-308782461-3hjtwkyajabv-node-0': failed to get instance metadata for node tempest-cluster-308782461-3hjtwkyajabv-node-0: Get "https://199.204.45.177:9696/networking/v2.0/ports?device_id=f26c7b4a-0747-4eb2-8508-1108cf160821": dial tcp 199.204.45.177:9696: connect: no route to host,
09:40:20 requeuing
09:40:41 it looked like the networking was broken, but I couldn't tell why
09:41:35 hmm, it failed trying to talk from the cluster to the OpenStack Neutron service.
09:42:47 yeah. for both Calico and Flannel, so it's not a CNI issue
09:43:11 Merged openstack/magnum master: Change network driver test to use non-default driver. https://review.opendev.org/c/openstack/magnum/+/905632
09:43:12 not sure if that's a Magnum problem or if the VM generally can't reach OpenStack
09:43:42 well, the 1.27 clusters in the same PS worked. just broken for 1.28 clusters
09:43:46 but other tests pass for 1.27? so it must be isolated to 1.28
09:44:11 maybe I'll play around with OCCM, try the 1.27 OCCM with the 1.28 cluster
09:44:25 yeah good plan
09:44:25 what versions do you use?
09:45:09 we didn't run 1.28 in Heat
09:47:37 ok
09:47:38 Jake Yip proposed openstack/magnum master: DNM: Test CI for v1.28.9 https://review.opendev.org/c/openstack/magnum/+/920725
09:47:53 anything else?
09:49:04 that's all from me, thanks again for the effort on reviews
09:50:49 no worries. thanks for staying late (again) :)
09:50:59 let's end the meeting
09:51:09 #endmeeting
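A follow-up note on the `kubectl get nodes` JSON that was painful to read at 09:36:15: a few lines of Python can pull out just the taints and readiness. A rough sketch, assuming the JSON dump from the CI logs has been saved locally (the filename here is made up). For what it's worth, `node.cloudprovider.kubernetes.io/uninitialized` is the taint OCCM removes once it finishes initializing a node, so it lining up with the `no route to host` errors in the OCCM log is consistent.

```python
# Rough sketch: summarize node taints and readiness from a saved
# 'kubectl get nodes -o json' dump instead of eyeballing the raw JSON.
import json

with open("kubectl-get-nodes.json") as f:  # hypothetical local copy of the dump
    nodes = json.load(f)["items"]

for node in nodes:
    name = node["metadata"]["name"]
    # The Ready condition tells us whether the kubelet thinks the node is up.
    ready = next(
        (c["status"] for c in node["status"]["conditions"] if c["type"] == "Ready"),
        "Unknown",
    )
    print(f"{name}: Ready={ready}")
    for taint in node.get("spec", {}).get("taints", []):
        # e.g. node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
        # stays in place while OCCM can't reach the OpenStack APIs.
        print(f"  taint {taint['key']}={taint.get('value', '')}:{taint['effect']}")
```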