17:01:13 #startmeeting containers
17:01:14 Meeting started Thu Jul 12 17:01:13 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:17 The meeting name has been set to 'containers'
17:01:19 #topic Roll Call
17:01:27 o/
17:05:17 greetings
17:05:33 hi colin-
17:05:44 imdigitaljim: you are still here?
17:06:01 yeah
17:06:02 flwang: flwang1 ping
17:06:03 o/
17:06:41 #topic Review Action Items
17:06:52 I meant:
17:06:52 #topic Blueprints/Bugs/Ideas
17:07:19 I'm working on:
17:08:30 I'm working on updating magnum heat templates
17:08:34 https://storyboard.openstack.org/#!/story/2002959 https://storyboard.openstack.org/#!/story/2002648 I'll ping you for reviews.
17:08:36 and what considerations are made for scaling
17:08:48 yeah please do
17:09:01 ill also internally be updating to v1.11
17:09:16 so i'll be figuring out what considerations will be necessary to support it
17:09:54 We have 1.11 working internally too, based on gcr.io
17:10:10 oh awesome
17:10:18 i might bug you for some things later on then
17:10:21 I found only that one param is removed, the tls-ca-file
17:10:25 yeah
17:10:27 im hoping that is it
17:10:43 doing testing as/ci for now
17:10:50 we haven't used the cloud provider for some time
17:11:01 So it is not a problem for us
17:11:24 tls-ca-file has been basically unused for some releases
17:11:30 yeah even in 1.9
17:11:34 I think two
17:11:35 yes
17:12:01 I'm evaluating two plans for the kubelet
17:12:39 One is based on gcr.io, making it a system container, which means just wrapping a systemd unit around it
17:13:12 But that is not ideal when using fedora
17:13:19 or centos
17:13:35 at some point we need to have selinux always enabled.
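(The "wrap a systemd unit around it" approach mentioned above can be sketched roughly as below. This is a minimal illustration, not the actual Magnum unit: the image tag, host paths, and flags are assumptions.)

```ini
# /etc/systemd/system/kubelet.service -- illustrative sketch only
[Unit]
Description=Kubernetes kubelet running as a system container
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container from a previous run, then start the
# kubelet from an upstream image with host namespaces and mounts.
ExecStartPre=-/usr/bin/docker rm -f kubelet
ExecStart=/usr/bin/docker run --name kubelet --privileged \
    --net=host --pid=host \
    -v /etc/kubernetes:/etc/kubernetes:ro \
    -v /var/lib/kubelet:/var/lib/kubelet:shared \
    k8s.gcr.io/hyperkube:v1.11.0 \
    /hyperkube kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
ExecStop=/usr/bin/docker stop kubelet
Restart=always

[Install]
WantedBy=multi-user.target
```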
17:13:40 The other plan
17:14:13 Is to build kubernetes rpms from source and install them in a fedora container
17:14:23 thats what im doing right now
17:14:29 This works easily with a multistage build
17:14:39 imdigitaljim: how do you build the rpm?
17:14:43 bazel?
17:14:50 rpmbuild
17:14:57 you can still use the binary format
17:15:02 you dont have to build from source
17:15:08 you can grab a release from kubernetes
17:15:12 bazel build build/rpms
17:15:20 it builds in 10mins
17:15:45 i havent used bazel, we just use something like jenkins
17:16:00 I build everything in a container
17:16:18 the builder is a container that mounts docker.sock
17:16:26 that could work though
17:16:31 sorry i'm late
17:16:37 sounds good and id like to use it
17:16:39 ill check it out
17:16:51 but we should definitely update our docker.io containers :)
17:17:19 I can push the cern containers, the only hack is that the cloud provider is disabled
17:18:08 I'll push them to docker.io/strigazi and we'll see about the project repo.
17:18:16 hi flwang
17:18:38 imdigitaljim: Is anyone of you working on putting the kubelet on the master nodes?
17:18:49 for everyone
17:18:54 not only calico
17:19:22 yeah i still need to work out the generic way to support all
17:19:31 but its basically borrowing a bit from the minion config we have
17:19:52 ok, I have a patch for it, I'll finish it then
17:20:15 ive explored some code cleanup work significantly though
17:20:24 so maybe after we get some of these patches in
17:20:32 which ones?
17:20:33 we'll lint/cleanup with idempotent changes
17:20:44 most of the sh scripts
17:20:54 the big one for sh?
17:21:01 no smaller ones
17:21:20 ill be breaking that large one down into many smaller, easier to digest changes
17:22:03 ok
17:22:52 the idea for kubelet is to have the existing script for master and minion
17:23:19 and make-cert can also be one
17:23:41 makes sense?
17:25:33 folks?
17:26:21 yes for me
17:26:32 do we still want to make it in Rocky?
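(The multistage build discussed above, using an upstream binary release rather than a source build, might look something like this sketch. The version, download URL pattern, and base image are assumptions for illustration.)

```dockerfile
# Stage 1: fetch a released kubelet binary (no source build needed)
FROM fedora:28 AS fetch
ARG KUBE_VERSION=v1.11.0
RUN curl -L -o /kubelet \
      https://storage.googleapis.com/kubernetes-release/release/${KUBE_VERSION}/bin/linux/amd64/kubelet \
    && chmod +x /kubelet

# Stage 2: install the binary into a clean Fedora image; nothing from
# the fetch stage except the binary itself ends up in the final image.
FROM fedora:28
COPY --from=fetch /kubelet /usr/bin/kubelet
ENTRYPOINT ["/usr/bin/kubelet"]
```

The same pattern works with `rpmbuild` or `bazel build build/rpms` output in the first stage, installing the resulting rpms in the second.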
17:27:32 if it is not intrusive, why not? it will even be less code
17:28:04 it is not changing the logic
17:28:19 the workflow will be the same
17:28:28 strigazi: i'm just nervous about having a big change at the end of the release
17:28:42 and you know, we don't have good e2e testing in the gate
17:29:17 it won't be like the last one, we agreed that august will be for testing
17:29:40 ok, fair enough
17:29:43 One month is enough
17:31:07 imdigitaljim: colin- you are still here?
17:32:38 yeah we are
17:33:12 that sounds reasonable as well
17:33:25 do you plan to use the upstream magnum driver? It sounds good for you?
17:33:28 fixing/updating testing in august sounds appropriate too
17:33:42 we still need to catch up quite a bit for us to use upstream
17:33:52 but we're making sure our changes are fully tested
17:34:20 with upstream e2e?
17:34:31 or do you test specific apps?
17:34:35 hopefully :]
17:34:38 a little of both
17:35:03 also at what point do we stop supporting old versions of kubernetes on new release branches
17:35:05 imdigitaljim: pls use sonobuoy to get full testing
17:35:28 we use e2e from heptio and test in-house filesystems manually
17:36:36 I now use sonobuoy locally, thanks for the pointer flwang1
17:37:02 flwang1: to be honest, it doesn't test everything
17:37:34 strigazi: yes, that's why i'm going to contribute to sonobuoy
17:37:45 and it's too limited in functionality
17:38:15 e.g. you can't get the test results via the api, you have to copy the logs manually, and things like that
17:38:16 flwang1: what I found was that we had a misconfiguration in flannel, and when a pod has host networking it cannot resolve cluster services
17:38:40 great to see we can find issues with it
17:38:57 with calico, it can pass all test cases
17:39:14 with flannel too, but this wasn't a test
17:39:23 ok
17:39:59 also, things regarding mounts
17:40:14 This was actually a bug in kubernetes
17:40:31 broken in 1.10.0, fixed in 1.10.3
17:40:47 link?
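(For reference, the sonobuoy workflow discussed above boils down to a few commands against the current kubeconfig context; as noted in the meeting, results are not exposed via an API and have to be copied out as a tarball.)

```shell
# Launch the conformance test run in the cluster
sonobuoy run

# Poll until the aggregator reports the run is complete
sonobuoy status

# Copy the results tarball out of the cluster (the manual step
# complained about above)
sonobuoy retrieve .

# Remove the sonobuoy namespace and resources when done
sonobuoy delete
```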
17:41:05 1 sec
17:41:57 https://github.com/kubernetes/kubernetes/issues/62396
17:42:11 cool, thanks
17:42:11 yeah we saw that
17:42:22 thats why we pushed to 1.11
17:42:26 we were using 1.10.1
17:43:02 we are on 1.10.3 but most clusters are on 1.9.3
17:46:36 flwang1: imdigitaljim Do you have any reviews in gerrit that need love?
17:46:55 soon i will
17:47:03 ive been slow adding this week
17:47:23 strigazi: need your comments on https://review.openstack.org/#/c/578510/
17:47:39 aside from the merge conflict
17:47:40 https://review.openstack.org/#/c/576623/
17:47:44 this needs a push
17:47:55 as well as this
17:47:55 https://review.openstack.org/#/c/577570/
17:48:03 ill fix the merge conflicts but other than that they are g2g
17:48:57 now we can support getting rid of all floating ips, so we need to make sure all addon services can be accessed via kubectl proxy
17:49:55 577570 is a bit scary, I'll have a look
17:50:45 hi guys. I'm new here and I've been trying to fix the coreos driver in my environment. Here's what I have working so far: https://review.openstack.org/#/c/579026/ Any review/input would be appreciated
17:50:46 flwang1: see also https://review.openstack.org/#/c/508172/1
17:51:19 strigazi: i know that one, but i'd like to keep it simple to avoid introducing too many changes
17:51:40 canori|2: thanks for this
17:51:41 in this patch, i'd like to focus on removing the nodeport way
17:53:04 flwang1: with your patch, the datasource will not be added
17:53:08 or the dashboard
17:53:27 grafana will be empty
17:54:29 strigazi: ok, i will give it a try
17:54:38 haven't got time to test it yet
17:54:43 thanks for the comments
17:54:50 that's why I didn't remove only the nodeports
17:56:21 cool, let's wrap then, thanks everyone
17:57:03 See you next week
17:57:22 #endmeeting
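(Accessing addon services through kubectl proxy instead of floating IPs, as raised in the meeting, works roughly like this. The service name and namespace below are illustrative, not the exact Magnum addons.)

```shell
# Open a local proxy to the API server; nodes need no floating IP,
# only the client needs API access
kubectl proxy --port=8001 &

# Reach an addon service through the apiserver's service proxy path
curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
```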