09:00:10 <flwang1> #startmeeting magnum
09:00:11 <openstack> Meeting started Wed Mar 18 09:00:10 2020 UTC and is due to finish in 60 minutes.  The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:12 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:14 <openstack> The meeting name has been set to 'magnum'
09:00:21 <flwang1> #topic roll call
09:00:23 <flwang1> o/
09:00:37 <strigazi> o/
09:00:42 <strigazi> \o
09:00:57 <brtknr> o/
09:01:30 <flwang1> team, put your topics on https://etherpad.openstack.org/p/magnum-weekly-meeting pls
09:02:48 <flwang1> #topic drop py27 for client https://review.opendev.org/713555
09:03:17 <flwang1> now the magnum client gate is broken because of the py27 job failure, i have proposed the above patch to remove py27
09:03:30 <flwang1> it's a simple one, so please help give it a quick review
09:03:44 <strigazi> is this like a community goal?
09:04:33 <flwang1> i think so
09:04:55 <flwang1> for this cycle, but our client didn't get much love ;)
09:05:24 <strigazi> ok
09:06:01 <flwang1> #topic Allow updating health_status  https://review.opendev.org/#/c/710384/
09:06:25 <flwang1> strigazi: i have added the policy check for this feature
09:06:36 <flwang1> and i have tested it, it works well
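For reference, a minimal sketch of how such a policy check is typically declared with oslo.policy; the rule name, check string, and path below are illustrative assumptions, not necessarily what the patch under review uses:

    # Hypothetical oslo.policy rule guarding health_status updates; the rule
    # name and check string are assumptions for illustration only.
    from oslo_policy import policy

    rules = [
        policy.DocumentedRuleDefault(
            name='cluster:update_health_status',
            check_str='rule:admin_api or rule:cluster_user',
            description='Update the health_status/health_status_reason of '
                        'an existing cluster.',
            operations=[{'path': '/v1/clusters/{cluster_ident}',
                         'method': 'PATCH'}],
        ),
    ]

    def list_rules():
        return rules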
09:07:11 <flwang1> strigazi: brtknr: any question about this one?
09:07:21 <strigazi> I will try to review. I won't manage today
09:07:39 <flwang1> btw, are you guys safe from the virus?
09:07:44 <brtknr> I will test it this week
09:07:47 <flwang1> do you need breathing mask?
09:08:03 <strigazi> I'm good
09:08:12 <brtknr> Everyone at stackhpc started working from home from last week :)
09:08:28 <flwang1> ok, God bless you
09:08:37 <brtknr> What's it like in NZ?
09:08:42 <flwang1> i started to wfh this week
09:08:52 <flwang1> there are 20 confirmed cases here
09:08:58 <flwang1> 3 in wellington
09:09:21 <brtknr> Ah that's nothing :) we have ~2000
09:09:21 <flwang1> but i can see it may rise to 200 after one month
09:09:49 <flwang1> ok, let's continue
09:10:05 <flwang1> #topic selinux disable
09:10:07 <brtknr> It doesn't take very long for it to accelerate though, only 2 weeks ago we had about 200
09:10:08 <flwang1> brtknr: ^
09:10:45 <brtknr> At catalyst are you using cinder volumes on fedora coreos yet?
09:11:06 <brtknr> the selinux issue exists for both in-tree and out-of-tree cinder
09:12:04 <flwang1> brtknr: we haven't upgraded to train, we're still using fedora atomic
09:12:09 <flwang1> we're working on the upgrade now
09:12:14 <strigazi> So this volume plugin never had the potential to be secure
09:12:49 <brtknr> strigazi: elaborate?
09:13:06 <strigazi> selinux was never working with this plugin, that's it
09:14:13 <strigazi> since it had this issue forever as you said. I haven't checked myself
09:14:50 <flwang1> is it a cinder issue or any other cloud provider may have similar issue?
09:15:01 <strigazi> not a cinder issue
09:15:05 <flwang1> cinder or CPO?
09:15:06 <strigazi> client only
09:15:12 <strigazi> CPO
09:15:31 <flwang1> is there an issue in CPO to track this?
09:15:43 <strigazi> yes, brtknr opened it
09:15:52 <flwang1> good
09:16:17 <flwang1> on the magnum side, i think what we can do now is add a new label to allow disabling it
09:16:37 <brtknr> yep
09:16:47 <flwang1> anything else about this?
09:16:58 <brtknr> I will test strigazi's suggestion to add the selinux config into the ignition file so we don't need to reload it
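A rough sketch of that idea, assuming an Ignition v3 fragment is what ends up in the instance user data; the selinux_mode parameter and the helper name are hypothetical:

    # Hedged sketch: write /etc/selinux/config via Ignition at first boot so
    # the chosen mode applies before kubelet starts, with no reload step.
    # The "selinux_mode" parameter/label name is an assumption.
    import json
    from urllib.parse import quote

    def selinux_ignition_fragment(selinux_mode='permissive'):
        config = 'SELINUX=%s\nSELINUXTYPE=targeted\n' % selinux_mode
        return {
            'ignition': {'version': '3.0.0'},
            'storage': {'files': [{
                'path': '/etc/selinux/config',
                'mode': 0o644,  # rendered as 420 in the JSON
                'overwrite': True,
                'contents': {'source': 'data:,' + quote(config)},
            }]},
        }

    print(json.dumps(selinux_ignition_fragment(), indent=2))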
09:17:07 <brtknr> disabling zincati was a good call
09:18:00 <brtknr> next topic?
09:18:08 <strigazi> +1 on having selinux on by default. This way we know what forces us to disable it
09:19:06 <flwang1> +1
09:19:17 <flwang1> #topic labels merging
09:19:30 <flwang1> strigazi: did you get a chance to discuss with Ricardo?
09:20:03 <strigazi> yes, we came up with a different plan than the ugly ++ and --
09:20:46 <flwang1> strigazi: so what's the solution?
09:20:57 <strigazi> have cluster labels and nodegroup labels
09:20:58 <flwang1> conclusion
09:21:31 <strigazi> And have per driver policy in the config for which labels can be passed as cluster labels
09:21:36 <strigazi> or ng labels
09:22:25 <strigazi> currently, if you pass labels at cluster creation, the CT labels are completely ignored.
09:22:40 <strigazi> This way we know what the user passed.
09:22:53 <strigazi> with ++ and --, you can't have this
09:23:05 <strigazi> unless you log the payload of the requests
09:23:20 <flwang1> are you going to pick up the current patch?
09:23:28 <strigazi> in log or notification, which is a horrible idea
09:24:26 <strigazi> I saw this in helm 3, where helm reports what you have passed and what is default, while in helm 2, if you do get values you see all values merged
09:24:43 <flwang1> :)
09:24:54 <flwang1> so which way should we follow?
09:25:23 <strigazi> what i mentioned, having different labels for cluster and ng
09:25:46 <strigazi> I can write it in a spec
09:25:57 <strigazi> brtknr: you can jump in if you want
09:26:02 <flwang1> it would be appreciated if you can propose a spec
09:26:41 <strigazi> brtknr: we can draft the spec together and you take over
09:26:46 <flwang1> i'm happy to review it
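To make the proposal concrete, a sketch of the split being discussed; the option and function names are assumptions, and the real design will come with the spec:

    # Illustrative only: cluster labels and nodegroup labels kept separate,
    # with a per-driver config option deciding which labels may be set at the
    # cluster level.  "allowed_cluster_labels" is an assumed option name.
    def validate_labels(cluster_labels, ng_labels, allowed_cluster_labels):
        bad = set(cluster_labels) - set(allowed_cluster_labels)
        if bad:
            raise ValueError('only valid per nodegroup: %s' % sorted(bad))

    def effective_labels(template_labels, cluster_labels, ng_labels):
        # Explicit user input wins over template defaults, and because the
        # three sources stay separate we can still report what the user
        # actually passed (the helm 3 behaviour mentioned below).
        merged = dict(template_labels)
        merged.update(cluster_labels)
        merged.update(ng_labels)
        return merged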
09:27:43 <flwang1> anything else we need to discuss?
09:28:23 <brtknr> sorry i had to answer a colleague's question as i was just catching up
09:28:37 <strigazi> let's wait for brtknr to read
09:30:08 <brtknr> Hmm I might be able to work on this from June depending on whether we get funded for it
09:30:30 <strigazi> ok
09:30:37 <flwang1> June is a bit late
09:30:42 <flwang1> i can help as well
09:31:14 <strigazi> ok, I try to wrap some networking tests this week and I will get into it
09:31:45 <flwang1> strigazi: thank you very much
09:31:48 <flwang1> anything else?
09:31:51 <strigazi> One more thing,
09:32:02 <strigazi> Apart from fixing the labels
09:32:34 <strigazi> we could introduce new fields that resemble config maps
09:33:02 <flwang1> explain more?
09:33:11 <flwang1> what do you mean resemble?
09:33:17 <strigazi> It's a need we have, similar to the url manifest you added
09:33:46 <flwang1> ok, can you give an example?
09:33:48 <strigazi> sorry guys, I need to get the door
09:33:54 <strigazi> please wait a bit
09:37:26 <openstackgerrit> Merged openstack/python-magnumclient master: Drop py27 tests  https://review.opendev.org/713555
09:38:36 <strigazi> I'm back
09:39:20 <strigazi> For example, we deploy a few things with helm and we have some values set
09:39:47 <strigazi> but we expose only a handful of them to the API
09:40:15 <strigazi> nginx-ingress, prometheus have an immense amount of config options
09:40:38 <strigazi> we can not expose everything via labels
09:40:44 <flwang1> you mean saving those config options into an extra cluster field?
09:40:52 <strigazi> yes
09:41:02 <strigazi> so this is for helm
09:41:14 <flwang1> hmm... then it has to be a json format
09:41:18 <strigazi> we have other addons, eg NPD
09:41:25 <flwang1> otherwise, it will be very limited
09:41:54 <strigazi> flwang1: in magnum db blob
09:42:02 <strigazi> but it can be yaml
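A rough sketch of the kind of field being described here; the blob field, addon names, and defaults are assumptions for illustration:

    # Hypothetical "addon values" blob stored on the cluster (YAML/JSON in the
    # magnum DB) and merged over the defaults a driver ships, instead of one
    # label per chart option.
    import yaml  # PyYAML

    DEFAULT_ADDON_VALUES = {
        'nginx-ingress': {'controller': {'replicaCount': 1}},
        'prometheus': {'server': {'retention': '7d'}},
    }

    def addon_values(cluster_addon_blob):
        overrides = yaml.safe_load(cluster_addon_blob or '') or {}
        merged = {}
        for addon, defaults in DEFAULT_ADDON_VALUES.items():
            # Shallow merge for brevity; a real implementation would merge deeply.
            merged[addon] = {**defaults, **overrides.get(addon, {})}
        return merged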
09:42:02 <flwang1> if you have a patch or spec, i'm happy to review it
09:42:17 <openstackgerrit> Feilong Wang proposed openstack/python-magnumclient master: Support updating cluster health status  https://review.opendev.org/713344
09:42:45 <flwang1> but i can see the pain
09:42:52 <strigazi> the end goal for us at CERN would be to have most addons with helm
09:42:54 <strigazi> even calico
09:43:08 <flwang1> i'm happy to support that goal
09:43:24 <strigazi> and also deployed via fluxcd, so that we can upgrade them too
09:43:35 <flwang1> but the current helm way in magnum is not very good
09:44:11 <strigazi> flwang1: for calico for example we added a new label recently
09:44:25 <flwang1> what's the new label?
09:44:38 <strigazi> the IPIP mode
09:44:45 <flwang1> right
09:45:04 <strigazi> we wouldn't need a patch for this
09:45:16 <strigazi> anyway you get the idea, right?
09:45:21 <flwang1> yes
09:45:38 <flwang1> sorry, strigazi i have to leave earlier today, anything else we need to discuss?
09:45:57 <strigazi> all good here
09:46:06 <flwang1> brtknr: ?
09:46:08 <brtknr> are we going to upgrade to using helm3 at some point?
09:46:25 <brtknr> i'm a big fan of a world without tiller
09:46:35 <strigazi> everyone is
09:46:45 <strigazi> the creators of tiller too
09:46:56 <brtknr> lol
09:46:59 <strigazi> brtknr: do you want to do it?
09:47:31 <flwang1> let me end the meeting, and you guys can continue the topic ;)
09:47:33 <brtknr> i will try :)
09:47:44 <strigazi> good night
09:47:49 <flwang1> thank you for joining
09:47:51 <brtknr> night all
09:47:53 <flwang1> #endmeeting