09:00:10 #startmeeting magnum
09:00:11 Meeting started Wed Mar 18 09:00:10 2020 UTC and is due to finish in 60 minutes. The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:12 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:14 The meeting name has been set to 'magnum'
09:00:21 #topic roll call
09:00:23 o/
09:00:37 o/
09:00:42 \o
09:00:57 o/
09:01:30 team, please put your topics on https://etherpad.openstack.org/p/magnum-weekly-meeting
09:02:48 #topic drop py27 for client https://review.opendev.org/713555
09:03:17 the magnum client gate is broken right now because of the py27 job failure; i have proposed the patch above to remove py27
09:03:30 it's a simple one, so please give it a quick review
09:03:44 is this like a community goal?
09:04:33 i think so
09:04:55 for this cycle, but our client didn't get much love ;)
09:05:24 ok
09:06:01 #topic Allow updating health_status https://review.opendev.org/#/c/710384/
09:06:25 strigazi: i have added the policy check for this feature
09:06:36 and i have tested it, it works well
09:07:11 strigazi: brtknr: any questions about this one?
09:07:21 I will try to review. I won't manage today
09:07:39 btw, are you guys safe from the virus?
09:07:44 I will test it this week
09:07:47 do you need a breathing mask?
09:08:03 I'm good
09:08:12 Everyone at stackhpc started working from home last week :)
09:08:28 ok, God bless you
09:08:37 What's it like in NZ?
09:08:42 i'm starting to wfh this week
09:08:52 there are 20 confirmed cases here
09:08:58 3 in wellington
09:09:21 Ah, that's nothing :) we have ~2000
09:09:21 but i can see it rising to 200 within a month
09:09:49 ok, let's continue
09:10:05 #topic selinux disable
09:10:07 It doesn't take very long for it to accelerate though; only 2 weeks ago we had about 200
09:10:08 brtknr: ^
09:10:45 At Catalyst, are you using cinder volumes on fedora coreos yet?
09:11:06 the selinux issue exists for both in-tree and out-of-tree cinder
09:12:04 brtknr: we haven't upgraded to train, we're still using fedora atomic
09:12:09 we're working on the upgrade now
09:12:14 So this volume plugin never had the potential to be secure
09:12:49 strigazi: elaborate?
09:13:06 selinux was never working with this plugin, that's it
09:14:13 since it has had this issue forever, as you said. I haven't checked myself
09:14:50 is it a cinder issue, or may other cloud providers have a similar issue?
09:15:01 not a cinder issue
09:15:05 cinder or CPO?
09:15:06 client only
09:15:12 CPO
09:15:31 is there an issue in CPO to track this?
09:15:43 yes, brtknr opened it
09:15:52 good
09:16:17 on the magnum side, i think what we can do now is add a new label to allow disabling it
09:16:37 yep
09:16:47 anything else about this?
09:16:58 I will test strigazi's suggestion to add the selinux config into the ignition file so we don't need to reload it
09:17:07 disabling zincati was a good call
09:18:00 next topic?
09:18:08 +1 on having selinux on by default. This way we know what forces us to disable it
09:19:06 +1
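A minimal sketch of the ignition approach mentioned above, in Python. The label name was not settled in the discussion, so the selinux_mode parameter and the helper below are hypothetical illustrations, not the actual magnum implementation:

    import urllib.parse

    def selinux_config_file(selinux_mode: str) -> dict:
        """Build an Ignition v3 storage.files entry that writes
        /etc/selinux/config at provisioning time, so the node comes up
        in the requested mode and nothing has to be reloaded later.
        selinux_mode would come from a cluster label, e.g. "permissive"."""
        body = f"SELINUX={selinux_mode}\nSELINUXTYPE=targeted\n"
        return {
            "path": "/etc/selinux/config",
            "mode": 0o644,
            "overwrite": True,
            # Ignition accepts inline file contents as a data: URL
            "contents": {"source": "data:," + urllib.parse.quote(body)},
        }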
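For the health_status policy check discussed earlier in the meeting, a rough sketch of an oslo.policy rule in magnum's style; the rule name, check string, and defaults here are illustrative assumptions, not taken from the patch under review:

    from oslo_policy import policy

    rules = [
        policy.DocumentedRuleDefault(
            # hypothetical rule name; the real patch may differ
            name='cluster:update_health_status',
            check_str='rule:admin_or_owner',
            description='Update the health status of an existing cluster.',
            operations=[{'path': '/v1/clusters/{cluster_ident}',
                         'method': 'PATCH'}],
        ),
    ]

    def list_rules():
        # magnum's policy modules expose their rules through this hook
        return rules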
09:19:17 #topic labels merging
09:19:30 strigazi: did you get a chance to discuss with Ricardo?
09:20:03 yes, we came up with a different plan than the ugly ++ and --
09:20:46 strigazi: so what's the solution?
09:20:57 have cluster labels and nodegroup labels
09:20:58 that was the conclusion
09:21:31 And have a per-driver policy in the config for which labels can be passed as cluster labels
09:21:36 or ng labels
09:22:25 currently, if you pass labels at cluster creation, the CT labels are basically completely ignored.
09:22:40 This way we know what the user passed.
09:22:53 with ++ and --, you can't have this
09:23:05 unless you log the payload of the requests
09:23:20 are you going to pick up the current patch?
09:23:28 in a log or notification, which is a horrible idea
09:24:26 I saw this in helm 3, where helm reports what you have passed and what is default. In helm 2, if you do get values you see all values merged
09:24:43 :)
09:24:54 so which way should we follow?
09:25:23 what i mentioned, having different labels for cluster and ng
09:25:46 I can write it in a spec
09:25:57 brtknr: you can jump in if you want
09:26:02 it would be appreciated if you could propose a spec
09:26:41 brtknr: we can draft the spec together and you take over
09:26:46 i'm happy to review it
09:27:43 anything else we need to discuss?
09:28:23 sorry, i had to answer a colleague's question as i was just catching up
09:28:37 let's wait for brtknr to read
09:30:08 Hmm, I might be able to work on this from June depending on whether we get funded for it
09:30:30 ok
09:30:37 June is a bit late
09:30:42 i can help as well
09:31:14 ok, I'll try to wrap up some networking tests this week and then I will get into it
09:31:45 strigazi: thank you very much
09:31:48 anything else?
09:31:51 One more thing,
09:32:02 Apart from fixing the labels
09:32:34 we could introduce new fields that resemble config maps
09:33:02 explain more?
09:33:11 what do you mean by resemble?
09:33:17 The need we have is similar to the url manifest you added
09:33:46 ok, can you give an example?
09:33:48 sorry guys, I need to get the door
09:33:54 please wait a bit
09:37:26 Merged openstack/python-magnumclient master: Drop py27 tests https://review.opendev.org/713555
09:38:36 I'm back
09:39:20 For example, we deploy a few things with helm and we have some values set
09:39:47 but we expose only a handful of them to the API
09:40:15 nginx-ingress and prometheus have an immense number of config options
09:40:38 we cannot expose everything via labels
09:40:44 you mean saving those config options into an extra cluster field?
09:40:52 yes
09:41:02 so this is for helm
09:41:14 hmm... then it has to be in a json format
09:41:18 we have other addons, e.g. NPD
09:41:25 otherwise, it will be very limited
09:41:54 flwang1: in the magnum db, as a blob
09:42:02 but it can be yaml
09:42:02 if you have a patch or spec, i'm happy to review it
09:42:17 Feilong Wang proposed openstack/python-magnumclient master: Support updating cluster health status https://review.opendev.org/713344
09:42:45 but i can see the pain
09:42:52 the end goal for us at CERN would be to have most addons with helm
09:42:54 even calico
09:43:08 i'm happy to support that goal
09:43:24 and also deployed via fluxcd, so that we can upgrade them too
09:43:35 but the current helm way in magnum is not very good
09:44:11 flwang1: for calico, for example, we added a new label recently
09:44:25 what's the new label?
09:44:38 the IPIP mode
09:44:45 right
09:45:04 we wouldn't need a patch for this
09:45:16 anyway, you get the idea, right?
09:45:21 yes
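To make the cluster-vs-nodegroup label plan above concrete: a rough sketch of the merge it implies, assuming a per-driver allow-list in the config; the function and parameter names are hypothetical, not an agreed design:

    def effective_labels(template_labels: dict, cluster_labels: dict,
                         ng_labels: dict, ng_allowed: set) -> dict:
        """Merge labels with nodegroup > cluster > template precedence,
        letting only allow-listed keys be overridden per nodegroup."""
        merged = dict(template_labels)
        # cluster labels override the template defaults
        merged.update(cluster_labels)
        for key, value in ng_labels.items():
            # a per-driver policy in the config decides which labels
            # may be set at the nodegroup level
            if key in ng_allowed:
                merged[key] = value
        return merged

Because the user-supplied dicts stay intact as inputs, what the user actually passed remains recoverable, in the spirit of the helm 3 behaviour mentioned above; a ++/-- scheme yields only the merged result.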
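And a sketch of the blob-style field idea for addon values; the addon_values field name is hypothetical (magnum has no such field today), and JSON is shown although YAML would carry the same structure:

    import json

    # Only a handful of chart options justify individual labels; the
    # long tail could live in one structured blob per cluster.
    addon_values = {
        "nginx-ingress": {"controller": {"replicaCount": 2}},
        "prometheus": {"server": {"retention": "15d"}},
    }

    blob = json.dumps(addon_values)  # stored as text in the magnum db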
09:45:38 sorry strigazi, i have to leave early today; anything else we need to discuss?
09:45:57 all good here
09:46:06 brtknr: ?
09:46:08 are we going to upgrade to using helm3 at some point?
09:46:25 i'm a big fan of a world without tiller
09:46:35 everyone is
09:46:45 the creators of tiller too
09:46:56 lol
09:46:59 brtknr: do you want to do it?
09:47:31 let me end the meeting, and you guys can continue the topic ;)
09:47:33 i will try :)
09:47:44 good night
09:47:49 thank you for joining
09:47:51 night all
09:47:53 #endmeeting