21:01:06 <strigazi> #startmeeting containers
21:01:07 <openstack> Meeting started Tue Dec  4 21:01:06 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:10 <openstack> The meeting name has been set to 'containers'
21:01:16 <strigazi> #topic Roll Call
21:01:22 <strigazi> o/
21:02:13 <cbrumm_> o/
21:02:20 <colin-> hello
21:02:57 <strigazi> hello guys
21:03:07 <strigazi> #topic Announcements
21:03:36 <strigazi> Following CVE-2018-1002105 https://github.com/kubernetes/kubernetes/issues/71411
21:03:57 <strigazi> I've pushed images for 1.10.11 and 1.11.5
21:04:05 <strigazi> #link http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000501.html
21:04:20 <cbrumm_> thanks
21:04:24 <strigazi> And some quick instructions for upgrading.
21:05:08 <strigazi> After looking into the CVE, it seems that magnum clusters only suffer from the anonymous-auth=true issue on the API server
21:05:50 <strigazi> The default magnum config does not use the kube aggregator API, and in kubelet this option is set to false.
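For context, the mitigation boils down to one API server flag; a minimal sketch follows, assuming a Fedora Atomic-style args file (the path and variable name are illustrative, not stated in the meeting):

    # on each master, add the flag to kube-apiserver's arguments and restart
    # (file path is an assumption; adjust to your image's layout):
    sed -i 's|KUBE_API_ARGS="|KUBE_API_ARGS="--anonymous-auth=false |' \
        /etc/kubernetes/apiserver
    systemctl restart kube-apiserver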
21:06:13 <colin-> understood
21:06:18 <cbrumm_> that's good, we've enabled the aggregator though :(
21:07:19 <flwang> o/
21:07:19 <strigazi> We need to do this anyway, but so far we haven't.
21:07:39 <flwang> strigazi: does the v1.11.5 include the cloud provider support?
21:08:01 <strigazi> flwang: v1.11.5-1, yes it does
21:08:08 <flwang> strigazi: cool, thanks
21:10:39 <strigazi> FYI, today at CERN I counted 141 kubernetes clusters; we plan to set anonymous-auth=false in the API server and then advise users to upgrade manually or migrate to new clusters with v1.12.3
21:11:15 <cbrumm_> 141, nice!
21:11:46 <strigazi> All clusters are inside our private network, so only critical services are advised to take action.
21:12:03 <strigazi> to be on the safe side
21:12:42 <strigazi> final comment about the CVE that monopolised my day and last night:
21:13:40 <strigazi> multi-tenant clusters are also more vulnerable, since non-owners might run custom code in the cluster.
21:14:14 <strigazi> #topic Stories/Tasks
21:14:20 <cbrumm_> luckily we don't have any multi-tenant clusters
21:15:41 <strigazi> last week was too busy for me, so I don't have any updates. I think I missed an issue with the heat-agent in queens or rocky, flwang?
21:16:21 <strigazi> cbrumm_: we all might have some with keystone-auth? it makes giving access easier
21:16:46 <flwang> strigazi: for me, i worked on the keystone auth feature
21:16:52 <flwang> and it's ready for testing
21:16:52 <cbrumm_> I haven't looked into that, great question
21:16:55 <flwang> it works for me
21:17:13 <flwang> and now i'm working on the client side to see if we can automatically generate the config
21:17:37 <strigazi> flwang: shall we add one more label for authz?
21:18:04 <flwang> strigazi: can you remind me of the use case for splitting authN and authZ?
21:19:09 <strigazi> flwang: the user might want to manage RBAC only with k8s; with keystone authz you need to add the rules twice, once in the keystone policy and once in k8s
21:20:40 <flwang> but my point is whether we need two labels here, because if users just want to manage RBAC with k8s, they can simply not update the configmap and leave things as they are
21:20:46 <flwang> keep the default one
21:21:03 <flwang> i'm just hesitant to introduce more labels here
21:21:28 <strigazi> I'll check again if the policy is too restrictive, in general lgtm, thanks
21:23:33 <flwang> strigazi: i'm trying to set a very general policy here, but i'm open for any comments
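A sketch of how the feature would be switched on from the user side, assuming the label name carried by the patch under review (keystone_auth_enabled; treat the name as provisional until merged):

    # create a template whose clusters deploy the k8s-keystone-auth webhook:
    openstack coe cluster template create k8s-keystone \
        --coe kubernetes \
        --image fedora-atomic-latest \
        --external-network public \
        --labels keystone_auth_enabled=true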
21:24:37 <flwang> strigazi: me and lxkong are working on https://review.openstack.org/#/c/497144/
21:24:37 <strigazi> flwang: I'll leave a comment in gerrit if needed. Looks good as a first iteration; we can take it and change something later if needed. I just need to test it again since last time.
21:24:53 <flwang> strigazi: cool, thanks
21:25:12 <flwang> the delete resource feature is also an important one
21:25:31 <strigazi> flwang: I'll review it
21:25:36 <flwang> now we're getting many tickets saying clusters can't be deleted
21:27:14 <strigazi> there is no hook that does something yet, correct?
21:27:33 <flwang> lxkong will submit a patch for LB soon
21:27:48 <flwang> the current patch is just the framework
21:28:01 <strigazi> I'm happy to include it in this patch or merge the two together
21:28:16 <lxkong> flwang: strigazi  the patch was already there, working on fixing the CI https://review.openstack.org/#/c/620761/
21:28:31 <lxkong> i've already tested in the devstack environment
21:28:42 <lxkong> but need to figure out the ut
21:28:45 <lxkong> failure
21:28:48 <strigazi> lxkong: what is the issue in the CI?
21:28:58 <lxkong> strigazi: just unit test
21:29:11 <strigazi> ok
21:29:39 <lxkong> in the real functional test, the lbs created by the services can be properly removed before the cluster deletion
21:30:08 <strigazi> ok
21:30:42 <strigazi> I'll test in devstack, we don't have octavia in our cloud so all my input will come from devstack.
21:30:55 <lxkong> considering the different k8s versions and octavia/neutron-lbaas other people are using, that hook mechanism is totally optional; it's up to the deployer to configure it or not
21:31:09 <lxkong> strigazi: yeah, that patch is for octavia
21:31:09 <strigazi> got it
21:31:44 <lxkong> strigazi: you also need to patch k8s with https://github.com/kubernetes/cloud-provider-openstack/pull/223
21:32:05 <lxkong> which will add the cluster uuid into the lb's description
21:32:25 <strigazi> lxkong: does this work with the out-of-tree cloud-provider?
21:32:28 <lxkong> we will include that PR in our magnum images
21:32:38 <lxkong> strigazi: yeah, sure
21:32:52 <lxkong> latest CCM already has that fix
21:33:00 <strigazi> cool
21:35:24 <strigazi> lxkong: kind of a related question: when using the CCM, do you need the cloud config on the worker nodes too?
21:35:28 <flwang> strigazi: btw, besides the keystone auth, i'm working on the ccm integration
21:35:38 <lxkong> strigazi: no
21:35:47 <lxkong> kubelet should have --cloud-provider=external
21:35:54 <strigazi> only?
21:36:05 <lxkong> yeah
21:36:09 <strigazi> cool
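So the worker-side change amounts to a single kubelet flag; a sketch (all other flags omitted):

    # the kubelet carries no cloud credentials; it only declares that an
    # external controller will initialize the node. until the CCM does so,
    # nodes keep the node.cloudprovider.kubernetes.io/uninitialized taint.
    kubelet --cloud-provider=external ...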
21:36:20 <flwang> lxkong: where does the pod read the cloud config?
21:36:21 <lxkong> it doesn't talk to cloud stuff any more
21:36:37 <lxkong> by com
21:36:37 <flwang> talk to apiserver?
21:36:43 <lxkong> cm
21:36:50 <lxkong> configmap
21:37:02 <lxkong> you need to create a cm with all the cloud config content
21:37:09 <lxkong> and pass that cm to CCM
21:37:17 <strigazi> it only makes sense.
21:37:33 <flwang> ok
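A sketch of the flow lxkong describes, with illustrative names (cloud-config and cloud.conf are assumptions):

    # pack the cloud config into a configmap in kube-system:
    kubectl -n kube-system create configmap cloud-config --from-file=cloud.conf
    # the CCM manifest then mounts this configmap as a volume, so the CCM
    # process still reads an ordinary file such as /etc/kubernetes/cloud.conf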
21:38:49 <cbrumm_> just make sure that if your cm has cloud credentials, you lock down the policies around accessing it.
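One way to do that is an RBAC role scoped to the single configmap, bound only to the CCM's service account; a sketch with illustrative names:

    kubectl apply -f - <<'EOF'
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: cloud-config-reader
      namespace: kube-system
    rules:
    - apiGroups: [""]
      resources: ["configmaps"]
      resourceNames: ["cloud-config"]   # only this configmap, read-only
      verbs: ["get"]
    EOF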
21:39:32 <strigazi> flwang: lxkong cbrumm_ for better security, if the ccm runs as a DS on the master nodes, it can mount the config from the node
21:39:43 <flwang> lxkong: cbrumm_: does the cloud config cm need to be created manually? or it will be read by something and created on behalf?
21:39:51 <strigazi> this way the creds are not accessible via any api
21:40:13 <flwang> strigazi: i'm going to make the ds only running on master
21:40:29 <strigazi> flwang: you can mount the config from the host then
21:40:50 <strigazi> *cloud config
21:41:04 <flwang> yes, but i'm not sure if ccm can still read the cloud config file or is only happy with a configmap now
21:41:11 <cbrumm_> strigazi: we do that too. Just saying that if creds are in cms that they must also be protected.
21:41:27 <strigazi> cbrumm_: +1
21:41:46 <strigazi> flwang:  it is the same from the ccm's point of view
21:41:56 <strigazi> flwang:  the pods will see a file
21:42:11 <strigazi> flwang:  the pods will see a file that may come from the host or the config map
21:42:34 <flwang> strigazi: cool, will double check
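What strigazi suggests would look roughly like the DaemonSet below: pinned to masters, taking the cloud config from the host filesystem rather than from a configmap (image, labels and paths are illustrative, not from the meeting):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: openstack-cloud-controller-manager
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: openstack-ccm
      template:
        metadata:
          labels:
            k8s-app: openstack-ccm
        spec:
          # run only on masters and tolerate their taints
          nodeSelector:
            node-role.kubernetes.io/master: ""
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          - key: node.cloudprovider.kubernetes.io/uninitialized
            value: "true"
            effect: NoSchedule
          serviceAccountName: cloud-controller-manager  # assumes the SA/RBAC exist
          hostNetwork: true
          containers:
          - name: ccm
            image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:latest
            args:
            - /bin/openstack-cloud-controller-manager
            - --cloud-provider=openstack
            - --cloud-config=/etc/kubernetes/cloud-config
            volumeMounts:
            - name: cloud-config
              mountPath: /etc/kubernetes/cloud-config
              readOnly: true
          volumes:
          - name: cloud-config
            hostPath:   # creds stay on the node, never in the API
              path: /etc/kubernetes/cloud-config
    EOF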
21:43:01 <strigazi> flwang: it is better to not put passwords in config maps or even secrets without a KMS
21:43:53 <flwang> strigazi: ack
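For completeness, "secrets with a KMS" means pointing the API server at an EncryptionConfiguration backed by a KMS plugin; a sketch, assuming k8s 1.13 (the apiVersion and flag name changed across releases, and the socket path and plugin name are hypothetical):

    # on the master, write the encryption config:
    cat > /etc/kubernetes/encryption-config.yaml <<'EOF'
    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
    - resources:
      - secrets
      providers:
      - kms:
          name: my-kms-plugin               # hypothetical plugin name
          endpoint: unix:///var/run/kms.sock
          cachesize: 1000
      - identity: {}                        # fallback so old data stays readable
    EOF
    # then start kube-apiserver with:
    #   --encryption-provider-config=/etc/kubernetes/encryption-config.yaml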
21:47:26 <strigazi> fyi, in my tests (which were without the cloud provider), 1.13.0 works without issues with rocky.
21:49:21 <flwang> strigazi: why don't you test the cloud provider? ;)
21:50:02 <strigazi> I have one last question about the cloud provider: is the external provider any better in terms of the number of API calls?
21:50:15 <flwang> strigazi: technically yes
21:50:35 <flwang> because instead of sending api calls from each kubelet, there is only one caller, the ccm
21:50:41 <flwang> lxkong: correct me if i'm wrong
21:50:44 <strigazi> flwang: I tested in our production cloud, we don't have a use case for it there.
21:50:57 <flwang> strigazi: fair enough
21:51:08 * lxkong is reading back the log
21:51:50 <strigazi> I'd like to have one, but we don't :( no lbaas, no cinder, only manila, cvmfs and our internal dns lbaas.
21:52:21 <flwang> strigazi: right, makes sense
21:53:10 <lxkong> flwang: you are right
21:53:21 <lxkong> this picture will help you understand better https://paste.pics/fe51956a0c2605edeaf2d42617fe108e
21:55:27 <strigazi> Anything else for the meeting?
21:55:50 <cbrumm_> not today
21:56:16 <flwang> lxkong: thanks for sharing, good diagram
21:56:21 <flwang> strigazi: all good for me
21:56:30 <flwang> strigazi: are you going to skip next meeting?
21:56:43 <flwang> until the new year?
21:57:04 <strigazi> no, I can do the next two
21:57:23 <strigazi> 11 and 18 of Dec
21:57:30 <cbrumm_> me and my team will miss the 25th and 1st meetings
21:57:50 <strigazi> me too
21:58:06 <strigazi> I'll put it in the wiki
21:59:35 <flwang> we won't have the 25th and 1st meetings anyway :D
21:59:49 <flwang> cool, i will work next week too
22:01:13 <flwang> strigazi: thank you
22:02:29 <strigazi> #link https://wiki.openstack.org/wiki/Meetings/Containers#Weekly_Magnum_Team_Meeting
22:03:37 <strigazi> thanks for joining the meeting everyone
22:03:52 <strigazi> #endmeeting