09:01:55 #startmeeting magnum
09:01:55 Meeting started Wed Jun 19 09:01:55 2024 UTC and is due to finish in 60 minutes. The chair is jakeyip. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:01:55 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:01:55 The meeting name has been set to 'magnum'
09:02:05 #link https://etherpad.opendev.org/p/magnum-weekly-meeting
09:02:10 #topic Roll Call
09:02:14 o/
09:02:30 o/
09:03:52 thanks for coming :)
09:04:15 don't have much on the agenda, dalees feel free to add something
09:04:37 neither this week, been working on CAPO code. might be a short one
09:05:10 I worked through one of your magnum-ui changes
09:06:00 while doing that I found an issue, I've put it in the agenda
09:06:16 basically the 3.12 SSL change broke devstack magnum-ui
09:07:12 hmm, that's interesting. do you know why?
09:08:18 basically devstack uses an IP, and Horizon using magnumclient can't verify SSL certs against that
09:09:05 I don't know why it wasn't a problem with the old way of setting up httpclient; if we were verifying at all it wouldn't have worked
09:09:18 have you come across the same doing your magnum-ui changes?
09:10:19 No, but I wasn't using devstack, nor that change, as it only merged this month.
09:14:02 ok nvm, I'll work through it
09:14:30 if there are any reviews you need pushed, let me know
09:15:25 lots of bitsy ones, but as last week I need to check them again and make sure they're updated.
09:15:34 ok
09:15:52 I want to talk about something else that's not on the agenda
09:15:54 the main ones for magnum-ui are progressing through, thanks :)
09:15:58 do you have a registry?
09:15:59 ok
09:16:06 an OCI registry? yes
09:16:14 Harbor?
09:16:37 it should be, but for now it's just docker registry:2
09:16:41 we are having an issue with our Harbor registry backed by Swift; I'm trying to hunt down a bug
09:17:15 another team uses Harbor here, not sure if it's Swift-backed though.
09:17:36 I can't push docker.io/library/python:3.10-bookworm to our registry, it errors out with a 503
09:17:38 We use Harbor in Azimuth but right now it is backed by a Cinder PVC
09:18:04 I have used the S3 store before, though not with Swift or Ceph RGW
09:18:05 hi mkjpryor :) can you try that :)
09:18:38 we are running version v2.10.2-1a741cb7
09:19:06 jakeyip: so you can push/pull other images, but just not that one?
09:19:12 yeah
09:19:43 at first I thought it was something affecting our production Swift (we have had other issues), but it turns out it affects our test registry/Swift also
09:19:45 super weird
09:20:54 That is odd
09:21:02 I don't have an environment to replicate with, but there are a few things you could try to narrow that down: separate out tagging, blob pushing, and manifest pushing, read the Swift logs, etc.
09:23:48 yeah I've traced a bunch of things
09:24:12 just finished setting up another separate instance without Swift, and guess what, that worked
09:25:47 anyway, it's ok, I just have to continue digging, I just thought it would be good to check
09:25:55 #topic open discussion
09:28:08 nothing from me this week
09:28:25 I saw that there has been a commit to the magnum-capi-helm-charts repo - who is actively working on that?
09:29:02 I just forked it, was planning to work on it
09:29:27 We are looking to start the paperwork to contribute Azimuth to the CNCF shortly
09:29:53 that's good
09:30:02 I wonder if using the CAPI Helm charts from that project would be sufficiently "not single vendor", and would also allow us to continue to use our working GitHub CI?
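For context on the devstack SSL issue discussed above, here is a minimal sketch of the kind of knob involved, assuming the usual keystoneauth1 session that clients such as magnumclient are built on; this is not the actual magnum-ui code. The endpoint and credentials below are placeholders; the relevant parameter is `verify`, which can be False to skip verification against an IP-only endpoint, or a path to a CA bundle.

    # Minimal sketch (placeholder values throughout): a keystoneauth1 session
    # against a devstack endpoint that is a bare IP.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url="https://192.0.2.10/identity/v3",  # placeholder IP endpoint
        username="demo",
        password="secret",
        project_name="demo",
        user_domain_name="Default",
        project_domain_name="Default",
    )
    # verify=False skips TLS verification entirely; pointing it at a CA bundle
    # whose cert lists the IP as a subjectAltName is the cleaner fix.
    sess = session.Session(auth=auth, verify=False)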
09:30:12 :shrugs:
09:31:26 I find the 'not single vendor' requirement weird
09:31:27 Porting the CI to Zuul is, as we have previously discussed, my main concern with moving the CAPI Helm charts to OpenDev
09:32:16 It was noted by a few members of the community, so it does seem to be an issue
09:32:17 so many things are single vendor. but like if it's Google/Microsoft it's ok, StackHPC is not?
09:32:56 Maybe "belongs to a foundation, with StackHPC as custodians" sits better
09:33:08 * jakeyip shrugs
09:33:17 Mainly, it stops us from unilaterally changing the license
09:33:54 Which I think is what most people are scared of with "single vendor open-source"
09:34:28 After all the recent incidents
09:34:33 nah, nothing preventing that, see ELK
09:34:47 anyway, that's not what I'm concerned with
09:34:52 Elastic never belonged to a foundation
09:35:04 Elasticsearch, rather
09:35:21 So Elastic were able to just change the license as and when they wished
09:35:25 I think the fork makes sense because there are some features which you will want in your version, but Magnum will just ship with the bare bones
09:35:57 and cloud providers will extend from Magnum's
09:36:20 In that case, you should make some effort to port that fork to the upstream Helm addon provider
09:36:21 https://github.com/kubernetes-sigs/cluster-api-addon-provider-helm
09:37:01 It didn't exist when we first developed our Cluster API components, and it is different enough that porting is enough of a pain
09:37:11 ok
09:37:30 I was wondering how that differed from yours
09:37:34 But Magnum should probably make an effort to use as many upstream components as possible, I think
09:37:43 If you are going to fork the charts
09:38:06 It does broadly the same thing
09:40:14 right, I see - it is quite new
09:42:09 In fact, we were involved in the initial conversation, and their template syntax for the values was inspired by our provider
09:42:23 But we just didn't have the time to get involved in the dev
09:43:38 I think this will be good, it will be great if someone can help with this
09:44:21 my priority now is getting CI working, and hopefully that'll unblock people working on this
09:44:26 can't merge changes without CI
09:44:44 or well, we can if we YOLO it
09:47:11 I am planning to add support for producing Flux resources to our addon provider, rather than directly calling Helm ourselves, so we will likely continue to use our provider
09:47:39 Flux has some nice functionality for detecting and correcting drift, which we are interested in for managed clusters
09:48:25 (We might choose to use Argo, which has similar functionality)
09:52:11 what will drift?
09:56:25 The resources installed on the tenant clusters by the addons
09:57:04 So basically, if the user does something stupid like delete or reconfigure the CNI, Flux will detect the "drift" and correct it
09:58:53 ah ok. will this be automatic, or a 'reset my cluster' functionality?
10:01:38 So for us in Azimuth, it will be constantly reconciling
10:02:13 ok, we are at time, I should end the meeting
10:02:17 #endmeeting
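As a footnote to the registry discussion earlier: one way to separate out the stages dalees suggested (tagging, blob pushing, manifest pushing) is to hit the registry's v2 HTTP API directly. This is a rough sketch, not a definitive diagnostic; the registry host and repository are placeholders, and a real Harbor deployment would normally also require a bearer token on these requests.

    # Rough sketch: probe the Docker Registry v2 API to see which stage of a
    # push fails. Host and repo are placeholders; add auth for a real Harbor.
    import requests

    REGISTRY = "https://registry.example.com"
    REPO = "library/python"

    # API version check: 200/401 means the registry answers; a 503 already
    # points at the backend.
    r = requests.get(f"{REGISTRY}/v2/")
    print("version check:", r.status_code)

    # Manifest read (exercises the storage read path, e.g. Swift).
    r = requests.get(
        f"{REGISTRY}/v2/{REPO}/manifests/3.10-bookworm",
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    )
    print("manifest read:", r.status_code, r.headers.get("Docker-Content-Digest"))

    # Blob upload start (exercises the write path): a 202 with a Location
    # header means the upload session opened; a 503 here implicates the
    # storage driver rather than the push client.
    r = requests.post(f"{REGISTRY}/v2/{REPO}/blobs/uploads/")
    print("blob upload start:", r.status_code, r.headers.get("Location"))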