09:59:42 <strigazi> #startmeeting containers
09:59:45 <strigazi> #topic Roll Call
09:59:47 <openstack> Meeting started Tue Jun  5 09:59:42 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:59:48 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:59:50 <openstack> The meeting name has been set to 'containers'
10:00:01 <strigazi> o/
10:00:09 <brtknr> o/
10:00:15 <vabada> hi o/
10:00:26 <rochapor1o> o/
10:01:15 <flwang1> o/
10:01:53 <strigazi> Thanks for joining the meeting folks
10:01:56 <strigazi> #topic Announcements
10:02:46 <strigazi> python-magnumclient 2.9.1 is released https://releases.openstack.org/queens/#queens-python-magnumclient
10:02:58 <flwang1> nice
10:03:41 <strigazi> It includes a fix for the quotas entrypoint and the OSC client
10:04:04 <strigazi> #topic Blueprints/Bugs/Ideas
10:04:51 <strigazi> Last week I pushed a patch for disabling the cloud provider, https://review.openstack.org/#/c/571190/
10:05:04 <strigazi> thanks brtknr for the input
10:05:22 <brtknr> only minor points, no worries
10:05:24 <strigazi> I propose to have it on by default, with an opt-out
10:06:08 <rochapor1o> just left a comment for that there too
10:06:28 <rochapor1o> it matches the *_enabled scheme in the other labels
10:07:07 <strigazi> Also last week, I tested f28, it seems that skopeo is buggy atm and we need a new version of it to run syscontainers.
10:07:26 <brtknr> rochapor1o: makes sense!
10:07:28 <strigazi> rochapor1o: I don't have a strong preference on it, what do others think?
10:07:51 <strigazi> brtknr is on board with _enabled
10:08:12 <strigazi> ok, not a big change I can revise
10:08:22 <brtknr> although it's a label
10:08:38 <strigazi> Is the enabled by default option good with you?
10:08:45 <flwang1> i prefer xxx_enabled to be consistent with other labels
10:10:26 <strigazi> ok, I'll change it and have it on by default.
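For illustration, the opt-out under discussion could look like this from the CLI, assuming the label ends up being named cloud_provider_enabled per the *_enabled scheme (label name, default and the template arguments here are illustrative, not the merged patch):

# Hypothetical opt-out of the cloud provider on a cluster template;
# the label name cloud_provider_enabled is an assumption at this point.
openstack coe cluster template create k8s-no-cloud-provider \
    --image fedora-atomic-latest \
    --coe kubernetes \
    --external-network public \
    --labels cloud_provider_enabled=false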
10:12:09 <strigazi> that is it from me, feilong do you want to go next?
10:12:26 <flwang1> sure
10:13:01 <flwang1> i'm still working on the k8s-keystone-auth integration work because we're improving the code of k8s-keystone-auth to support configmap
10:13:50 <flwang1> with that support, we may be able to get rid of the default policy file or at least allow user easily change the policy via configmap
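As a rough sketch of the ConfigMap-based policy mentioned above, something along these lines could let operators edit the policy with kubectl; the ConfigMap name, namespace, key and policy content are illustrative assumptions, not the final k8s-keystone-auth integration:

# Illustrative only: ship the keystone authorization policy as a ConfigMap
# instead of a baked-in policy file, so it can be changed after deployment.
kubectl -n kube-system create configmap keystone-auth-policy \
    --from-literal=policies='[{"resource": {"verbs": ["get", "list", "watch"], "resources": ["pods"], "version": "*", "namespace": "default"}, "match": [{"type": "role", "values": ["member"]}]}]'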
10:14:18 <strigazi> flwang1 so we will need to run k8s-keystone-auth as a pod right?
10:14:21 <flwang1> and i have proposed the patch to fully deprecate send_cluster_metrics
10:14:33 <flwang1> strigazi: hopefully, still need testing
10:14:50 <strigazi> flwang1: ok
10:15:35 <strigazi> flwang1 the health check will be a new periodic task since you propose to deprecate send_cluster_metrics?
10:15:43 <flwang1> here is the patch for deprecate send_cluster_metrics https://review.openstack.org/572249
10:15:58 <strigazi> flwang1:  I think we can keep one task and disable that functionality instead
10:15:59 <flwang1> yep, i'm going to add a new task
10:17:03 <flwang1> https://github.com/openstack/magnum/blob/master/magnum/service/periodic.py now there are 2 tasks in this file, sync cluster status and send_cluster_metrics
10:17:35 <flwang1> i think we can add the health status check into existing sync_cluster_status
10:17:53 <flwang1> or add a new task given we will deprecate the send cluster metrics, thoughts? folks?
10:18:32 <rochapor1o> i'll try to help with the reviews there starting this week
10:18:33 <flwang1> and btw, the patch adding health_status and health_status_reason is ready for review https://review.openstack.org/570818
10:18:40 <flwang1> rochapor1o: lovely
10:19:01 <strigazi> let's evaluate it, if the new task is not expensive it is cleaner. If we don't stress the conductor a lot, i'm ok with the new task
10:19:54 <flwang1> strigazi: 2 api calls per cluster,  we need one for /componentstatuses  and one for /nodes
10:20:09 <strigazi> also let's be sure that we use the cached certs for authentication
10:20:17 <flwang1> if we don't want to support master health check, then only /nodes
10:20:31 <flwang1> strigazi: definitely
10:20:43 <strigazi> flwang1 master health is good
10:20:52 <strigazi> we need it
10:21:32 <strigazi> to catch a misconfig of the controller manager, for example
10:21:54 <flwang1> currently, the idea is returning a dict for the different nodes and their component/condition status
10:22:18 <strigazi> sounds good
10:22:41 <flwang1> like this {"master1": {"etcd": "OK", "scheduler": "OK"}, "master2": {}, "node1":{}, "node2": {} .......}
10:22:49 <strigazi> and reason of node being down?
10:22:57 <flwang1> absolutely
10:23:10 <strigazi> cool
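For reference, the two calls per cluster described above map to these endpoints, shown here with kubectl for readability (the conductor would query the same API directly, reusing the cached certs):

# Master component health comes from /componentstatuses; node health and the
# reason for a node being down come from the conditions in /nodes.
kubectl get componentstatuses
kubectl get nodes -o json    # .items[].status.conditions carries status and reason per node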
10:23:58 <flwang1> rochapor1o: i will submit a draft later this week
10:24:09 <rochapor1o> perfect, looking forward to review
10:25:04 <flwang1> that's all from my side
10:25:13 <strigazi> flwang1: thanks
10:25:17 <flwang1> strigazi: btw, i'm keen to review the upgrade patch
10:25:29 <flwang1> very keen :D
10:25:44 <flwang1> we only have 2 months for Rocky
10:25:51 <flwang1> @all people
10:26:23 <vabada> I am also happy and willing to test the cluster upgrade, it's a killer feature
10:26:24 <strigazi> I promise to have something in gerrit tmr
10:27:12 <flwang1> strigazi: haha, i'm always the pusher
10:27:19 <strigazi> flwang1: it's good
10:28:00 <strigazi> others, do you want to bring something up?
10:28:30 <vabada> I'm going to start work on https://bugs.launchpad.net/magnum/+bug/1722573 I'll keep you updated
10:28:32 <openstack> Launchpad bug 1722573 in Magnum "Limit the scope of cluster-update for heat drivers" [Undecided,New]
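For context on the bug above: the update operation the heat drivers handle today is the node_count resize, and the bug is about limiting or rejecting updates outside that scope (example only; the exact restriction is what the bug will work out):

# The currently supported cluster-update path, a node_count replace:
openstack coe cluster update mycluster replace node_count=5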
10:29:28 <strigazi> vabada: do you have any questions?
10:29:56 <vabada> Haven't looked deep yet, but I'll raise them in the channel if any
10:30:06 <vabada> I think I know more or less how to proceed
10:30:26 <strigazi> thanks
10:31:15 <rochapor1o> i've been looking at CSI support in Magnum, mostly for CephFS with kubernetes 1.10. is this something other people are interested in? Other drivers maybe?
10:31:34 <rochapor1o> i have the patch for CephFS ready already, soon with Manila integration for PVCs
10:32:10 <flwang1> rochapor1o: sounds interesting, unfortunately we (catalyst cloud) don't have manila yet
10:32:36 <rochapor1o> ok
10:35:13 <strigazi> thanks rochapor1o
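A hedged sketch of how the CephFS CSI work could surface to users once it lands; the storage class name and access mode below are placeholders, not the actual patch:

# Illustrative only: claim a CephFS-backed volume through a CSI storage class.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: csi-cephfs
  resources:
    requests:
      storage: 1Gi
EOF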
10:35:45 <strigazi> brtknr: from your side, do you want to bring something up?
10:36:25 <brtknr> rochapor1o: we will be comparing container infra with gluster vs cephfs so might be relevant at some point
10:36:38 <brtknr> I've filed 2 patches currently undergoing review
10:37:15 <brtknr> first one related to specifying cgroup driver for k8s when using Docker-CE
10:37:33 <brtknr> https://review.openstack.org/#/c/571583/
10:37:51 <strigazi> I think specifying the cgroup driver even when not using docker-ce is useful
10:37:53 <brtknr> second one related to disabling floating ip in swarm mode
10:38:05 <brtknr> https://review.openstack.org/#/c/571200/
10:38:35 <brtknr> strigazi: yes, i guess so
10:38:50 <rochapor1o> brtknr: the deployment for glusterfs-csi should be similar, we can use cephfs as a reference later
10:39:39 <brtknr> currently writing a script to change docker cgroup driver when cgroupdriver label is specified:
10:39:42 <brtknr> #!/bin/bash
10:39:44 <brtknr> cp /usr/lib/systemd/system/docker.service /etc/systemd/system/
10:39:46 <brtknr> if grep -q 'native.cgroupdriver' /etc/systemd/system/docker.service; then
10:39:48 <brtknr>     sed -i "s/native.cgroupdriver=[a-z]*/native.cgroupdriver=${1}/" /etc/systemd/system/docker.service
10:39:50 <brtknr> else
10:39:52 <brtknr>     mkdir -p /etc/systemd/system/docker.service.d
10:39:54 <brtknr>     cat > /etc/systemd/system/docker.service.d/cgroupdriver.conf << EOF
10:39:56 <brtknr> [Service]
10:39:58 <brtknr> ExecStart=
10:40:00 <brtknr> # NOTE: the dockerd path/flags below are placeholders, match them to the unit file in the image
10:40:02 <brtknr> ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=${1}
10:40:04 <brtknr> EOF
10:40:06 <brtknr> fi
10:40:08 <brtknr> systemctl daemon-reload
10:40:10 <brtknr> systemctl restart docker
10:40:16 <brtknr> this should work for both old and new docker versions
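One related note on the kubernetes side of the cgroup driver patch: the kubelet must be configured with the same driver docker ends up using, or pods fail to start; how the label reaches the kubelet is not shown in the excerpt above, so treat this as an assumption:

# Check which driver docker is actually using, then make sure the kubelet's
# --cgroup-driver setting is given the same value (systemd or cgroupfs).
docker info 2>/dev/null | grep -i 'cgroup driver'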
10:40:41 <brtknr> rochapor1o: sounds good, we are currently manually mounting volumes using Ansible
10:42:46 <brtknr> the patches work on my devstack deployment on both queens and master
10:43:00 <strigazi> brtknr: thanks, I'll review them
10:43:26 <brtknr> strigazi: thanks :)
10:43:58 <strigazi> @all Anything else?
10:44:12 <vabada> Not from my side
10:45:22 <brtknr> Any news re reducing the number of things heat is responsible for deploying?
10:45:30 <brtknr> we were talking about this last week
10:46:44 <strigazi> brtknr: From now on, we can avoid adding new software configs or deployments unless absolutely needed.
10:47:37 <strigazi> We can propose patches to consolidate the deployments.
10:47:57 <rochapor1o> i'll need to leave a bit early, thanks everyone
10:48:15 <brtknr> rochapor1o: see you
10:48:52 <brtknr> strigazi: okay
10:49:48 <strigazi> it is not clear if reducing the number of db entries helps a lot with performance. I think it uses the same db connection, but insertions are faster
10:51:27 <strigazi> Thanks everyone, see you in the channel or next week
10:51:50 <strigazi> #endmeeting