21:01:06 #startmeeting containers
21:01:07 Meeting started Tue Dec 4 21:01:06 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:10 The meeting name has been set to 'containers'
21:01:16 #topic Roll Call
21:01:22 o/
21:02:13 o/
21:02:20 hello
21:02:57 hello guys
21:03:07 #topic Announcements
21:03:36 Following CVE-2018-1002105 https://github.com/kubernetes/kubernetes/issues/71411
21:03:57 I've pushed images for 1.10.11 and 1.11.5
21:04:05 #link http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000501.html
21:04:20 thanks
21:04:24 And some quick instructions for upgrading.
21:05:08 After looking into the CVE, it seems that magnum clusters suffer only from the anonymous-auth=true issue on the API server
21:05:50 The default config with magnum is not using the kube aggregator API, and in kubelet this option is set to false.
21:06:13 understood
21:06:18 that's good, we've enabled the aggregator though :(
21:07:19 o/
21:07:19 We need to do this anyway, but so far we haven't.
21:07:39 strigazi: does the v1.11.5 include the cloud provider support?
21:08:01 flwang: v1.11.5-1, yes it does
21:08:08 strigazi: cool, thanks
21:10:39 FYI, today at CERN I counted 141 kubernetes clusters; we plan to set anonymous-auth=false in the API and then advise users to upgrade manually or migrate to new clusters with v1.12.3
21:11:15 141, nice!
21:11:46 All clusters are inside our private network, so only critical services are advised to take action.
21:12:03 to be on the safe side
21:12:42 final comment about the CVE that monopolised my day and last night:
21:13:40 multi-tenant clusters are also more vulnerable since non-owners might run custom code in the cluster.
21:14:14 #topic Stories/Tasks
21:14:20 luckily we don't have any multi-tenant clusters
21:15:41 last week was too busy for me, I don't have any updates. I think I missed an issue with the heat-agent in queens or rocky, flwang?
21:16:21 cbrumm_: we all might have some with keystone-auth? it is easier to give access
21:16:46 strigazi: for me, i worked on the keystone auth feature
21:16:52 and it's ready for testing
21:16:52 I haven't looked into that, great question
21:16:55 it works for me
21:17:13 and now i'm working on the client side to see if we can automatically generate the config
21:17:37 flwang: shall we add one more label for authz?
21:18:04 strigazi: can you remind me of the use case for splitting authN and authZ?
21:19:09 flwang: the user might want to manage RBAC only with k8s; with keystone authz you need to add the rules twice, once in the keystone policy and once in k8s
21:20:40 but my point is whether we need two labels here, because if users just want to manage RBAC with k8s, they can simply not update the configmap and leave it as it is
21:20:46 keep the default one
21:21:03 i'm just hesitant to introduce more labels here
21:21:28 I'll check again if the policy is too restrictive, in general lgtm, thanks
21:23:33 strigazi: i'm trying to set a very general policy here, but i'm open to any comments
21:24:37 strigazi: lxkong and I are working on https://review.openstack.org/#/c/497144/
21:24:37 flwang: I'll leave a comment in gerrit if needed. Looks good as a first iteration, we can take it and change something if we need to. I'll just need to test it again since last time.
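A minimal sketch (not from the meeting itself) of how the pieces announced above could be consumed from the CLI: pinning the patched v1.11.5-1 image via kube_tag and switching on the keystone auth feature via a label. The label name keystone_auth_enabled and the image/network names are assumptions for illustration; check the Magnum docs for your release.

```bash
# Hedged sketch: create a cluster template that uses the patched image
# pushed for CVE-2018-1002105 and enables the keystone auth webhook.
# keystone_auth_enabled, fedora-atomic-latest and "public" are assumed
# names, not confirmed by the meeting log.
openstack coe cluster template create k8s-keystone-auth \
  --coe kubernetes \
  --image fedora-atomic-latest \
  --external-network public \
  --labels kube_tag=v1.11.5-1,keystone_auth_enabled=true
```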
21:24:53 strigazi: cool, thanks
21:25:12 the delete resource feature is also an important one
21:25:31 flwang: I'll review it
21:25:36 now we're getting many tickets saying clusters can't be deleted
21:27:14 there is no hook that does something yet, correct?
21:27:33 lxkong will submit a patch for LB soon
21:27:48 the current patch is just the framework
21:28:01 I'm happy to include it in this patch or merge the two together
21:28:16 flwang: strigazi the patch was already there, I'm working on fixing the CI https://review.openstack.org/#/c/620761/
21:28:31 i've already tested it in the devstack environment
21:28:42 but need to figure out the ut
21:28:45 failure
21:28:48 lxkong: what is the issue in the CI?
21:28:58 strigazi: just the unit tests
21:29:11 ok
21:29:39 in the real functionality test, the lbs created by the services can be properly removed before the cluster deletion
21:30:08 ok
21:30:42 I'll test in devstack, we don't have octavia in our cloud so all my input will come from devstack.
21:30:55 considering the different k8s versions and octavia/neutron-lbaas setups other people are using, the hook mechanism is totally optional, it's up to the deployer to configure it or not
21:31:09 strigazi: yeah, that patch is for octavia
21:31:09 got it
21:31:44 strigazi: you also need to patch k8s with https://github.com/kubernetes/cloud-provider-openstack/pull/223
21:32:05 which will add the cluster uuid into the lb's description
21:32:25 lxkong: does this work with the out-of-tree cloud-provider?
21:32:28 we will include that PR in our magnum images
21:32:38 strigazi: yeah, sure
21:32:52 latest CCM already has that fix
21:33:00 cool
21:35:24 lxkong: kind of a related question, when using the CCM do you need the cloud config on the worker nodes too?
21:35:28 strigazi: btw, besides the keystone auth, i'm working on the ccm integration
21:35:38 strigazi: no
21:35:47 kubelet should have --cloud-provider=external
21:35:54 only?
21:36:05 yeah
21:36:09 cool
21:36:20 lxkong: where does the pod read the cloud config?
21:36:21 it doesn't talk to cloud stuff any more
21:36:37 by com
21:36:37 talk to apiserver?
21:36:43 cm
21:36:50 configmap
21:37:02 you need to create a cm with all the cloud config content
21:37:09 and pass that cm to CCM
21:37:17 it only makes sense.
21:37:33 ok
21:38:49 just make sure that if your cm has cloud credentials, you lock down the policies around accessing it.
21:39:32 flwang: lxkong cbrumm_ for better security, if the ccm runs as a DS on the master nodes, it can mount the config from the node
21:39:43 lxkong: cbrumm_: does the cloud config cm need to be created manually? or will it be read by something and created on our behalf?
21:39:51 this way the creds are not accessible via any api
21:40:13 strigazi: i'm going to make the ds run only on the master
21:40:29 flwang: you can mount the config from the host then
21:40:50 *cloud config
21:41:04 yes, but i'm not sure if the ccm can still read the cloud config file or is only happy with a configmap now
21:41:11 strigazi: we do that too. Just saying that if creds are in cms, they must also be protected.
21:41:27 cbrumm_: +1
21:41:46 flwang: it is the same from the ccm's point of view
21:41:56 flwang: the pods will see file
21:42:11 flwang: the pods will see a file that may come from the host or the config map
21:42:34 strigazi: cool, will double check
21:43:01 flwang: it is better to not put passwords in config maps or even secrets without a KMS
21:43:53 strigazi: ack
21:47:26 fyi, in what I tested without the cloud provider, 1.13.0 works without issues with rocky.
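A short sketch (not from the meeting itself) of the two CCM pieces described above, assuming the cloud config already sits at /etc/kubernetes/cloud-config on the nodes; names and paths are illustrative, so check the cloud-provider-openstack docs for your version.

```bash
# Worker kubelets only need the external provider flag, no cloud config
# or credentials on the node (how the flag is set depends on the driver):
#   --cloud-provider=external

# One way to feed the CCM its cloud config is a ConfigMap in kube-system.
# The ConfigMap name "cloud-config" and the source path are assumptions.
kubectl -n kube-system create configmap cloud-config \
  --from-file=cloud-config=/etc/kubernetes/cloud-config

# If that ConfigMap carries credentials, restrict who can read it, as
# noted above. The alternative strigazi mentions is to skip the
# ConfigMap entirely and have the CCM DaemonSet on the masters mount
# /etc/kubernetes/cloud-config as a hostPath volume, keeping the creds
# out of the API server.
```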
21:49:21 strigazi: why don't you test the cloud provider? ;)
21:50:02 I have one last question about the cloud provider, is the external one any better in terms of the number of API calls?
21:50:15 strigazi: technically yes
21:50:35 because instead of sending api calls from each kubelet, there is only one caller, the ccm
21:50:41 lxkong: correct me if i'm wrong
21:50:44 flwang: I tested it in our production cloud, we don't have a use case for it there.
21:50:57 strigazi: fair enough
21:51:08 * lxkong is reading back the log
21:51:50 I'd like to have one, but we don't :( no lbaas, no cinder, only manila, cvmfs and our internal dns lbaas.
21:52:21 strigazi: right, makes sense
21:53:10 flwang: you are right
21:53:21 this picture will help you understand better https://paste.pics/fe51956a0c2605edeaf2d42617fe108e
21:55:27 Anything else for the meeting?
21:55:50 not today
21:56:16 lxkong: thanks for sharing, good diagram
21:56:21 strigazi: all good for me
21:56:30 strigazi: are you going to skip the next meeting?
21:56:43 until the new year?
21:57:04 no, I can do the next two
21:57:23 11 and 18 of Dec
21:57:30 my team and I will miss the 25th and 1st meetings
21:57:50 me too
21:58:06 I'll put it in the wiki
21:59:35 we won't have the 25th and 1st meetings anyway :D
21:59:49 cool, i will work next week too
22:01:13 strigazi: thank you
22:02:29 #link https://wiki.openstack.org/wiki/Meetings/Containers#Weekly_Magnum_Team_Meeting
22:03:37 thanks for joining the meeting everyone
22:03:52 #endmeeting
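As a follow-up reference to the load-balancer cleanup discussed in the meeting: once the image carries the cloud-provider-openstack change that writes the cluster UUID into each Service LB's description, stale LBs can be located before cluster deletion, which is what the pre-delete hook patch aims to automate. A rough sketch with "my-cluster" as a placeholder name, not the actual hook code:

```bash
# Hedged sketch: list Octavia load balancers whose description contains
# a given Magnum cluster's UUID (the behaviour added by the
# cloud-provider-openstack PR referenced in the log).
CLUSTER_UUID=$(openstack coe cluster show my-cluster -f value -c uuid)
for lb in $(openstack loadbalancer list -f value -c id); do
  desc=$(openstack loadbalancer show "$lb" -f value -c description)
  if printf '%s' "$desc" | grep -q "$CLUSTER_UUID"; then
    echo "load balancer $lb belongs to cluster $CLUSTER_UUID"
  fi
done
```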