10:00:08 <strigazi> #startmeeting containers
10:00:09 <openstack> Meeting started Tue May 29 10:00:08 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:00:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:00:13 <openstack> The meeting name has been set to 'containers'
10:00:22 <strigazi> #topic Roll Call
10:00:31 <flwang1> o/
10:01:20 <ricolin> O/
10:02:25 <strigazi> Hello flwang1 and ricolin
10:02:30 <strigazi> #topic Announcements
10:03:05 <strigazi> I'll do a magnumclient release this week for queens, to include a fix for the quota cmd
10:03:35 <strigazi> #link http://git.openstack.org/cgit/openstack/python-magnumclient/commit/?h=stable/queens&id=40327b75edcab608e9c9d9ac0993855de088926b
10:04:01 <strigazi> #topic Blueprints/Bugs/Ideas
10:04:30 <strigazi> From my side, I'm finishing the patch for the enable_cloud_provider label
10:05:23 <strigazi> I'm also testing the move to fedora 28, seems ok, I'm running the conformance tests
10:06:00 <brtknr> Hi all
10:07:25 <strigazi> Finally, I have a patch for the make-cert files to do fewer API calls, use the same token and fetch the CA once. Here:
10:07:27 <strigazi> https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/make-cert-client.sh#L43
10:07:30 <strigazi> hi brtknr
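A rough Python sketch of the idea behind that patch, assuming the Magnum certificate API (GET /v1/certificates/<cluster> for the CA, POST /v1/certificates to sign a CSR) and placeholder values for the endpoint, token and cluster; the real change is in the shell fragments, so this only illustrates "authenticate once, fetch the CA once, reuse the token":

    # Illustration only: fetch the CA a single time and reuse one token for every
    # CSR signing request, instead of re-authenticating per certificate.
    import requests

    MAGNUM_URL = "http://magnum.example.org:9511/v1"   # placeholder endpoint
    TOKEN = "<trustee-token-obtained-once>"            # placeholder token
    CLUSTER_UUID = "<cluster-uuid>"                    # placeholder cluster id
    HEADERS = {"X-Auth-Token": TOKEN}

    def fetch_ca():
        # One GET for the cluster CA, done once instead of once per component.
        r = requests.get("%s/certificates/%s" % (MAGNUM_URL, CLUSTER_UUID),
                         headers=HEADERS)
        r.raise_for_status()
        return r.json()["pem"]

    def sign_csr(csr_pem):
        # Every component's CSR is signed with the same token.
        r = requests.post("%s/certificates" % MAGNUM_URL, headers=HEADERS,
                          json={"cluster_uuid": CLUSTER_UUID, "csr": csr_pem})
        r.raise_for_status()
        return r.json()["pem"]

    ca_pem = fetch_ca()
    client_cert_pem = sign_csr("<client-csr-pem>")     # repeated per component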
10:08:00 <strigazi> And one more thing on which I need input from ricolin and all
10:08:28 <strigazi> Will we gain anything if we consolidate the software deployment and software configs?
10:08:50 <strigazi> I mean if we gain anything in db size and requests
10:09:24 <strigazi> @all fyi, the templates create a lot of resources in heat, which loads the heat db quite a bit
10:09:53 <flwang1> any performance issue on heat side?
10:11:00 <strigazi> flwang1: yes, in our deployment we create a lot of connections to the db and it requires some tuning of the db config
10:11:02 <flwang1> strigazi: i mean are you aware of any performance issue on heat
10:11:38 <strigazi> the problem goes away if heat is configured properly (I mean the error when talking to the db)
10:11:53 <strigazi> but I was wondering if we can reduce the load
10:12:47 <brtknr> Do you mean by reducing the number of objects that heat is responsible for handling?
10:13:27 <strigazi> brtknr yes
10:13:37 <flwang1> strigazi: we can do that
10:14:34 <strigazi> I just wanted some input from the heat team, to be sure that we gain something in performance
10:14:58 <strigazi> we can always have a look at the code though :)
10:15:20 <strigazi> Anyway, I'll have a look
10:15:26 <flwang1> cool
10:15:29 <strigazi> flwang1: do you want to go next?
10:15:38 <flwang1> strigazi: ok
10:15:54 <flwang1> i have proposed a patch for k8s keystone auth integration
10:16:00 <brtknr> On one of my current deployments with 1 master, 3 nodes, I have 139 objects
10:16:07 <flwang1> it works but need some input from you folks
10:16:10 <brtknr> at nested depth 4
10:16:33 <flwang1> i will talk more about this later
10:16:48 <flwang1> and i also proposed a patch to support cluster-level logging with FEK
10:17:18 <flwang1> logging is the only important addon we're missing for k8s
10:17:22 <strigazi> flwang1 Could we have a parameter to point to an external ES?
10:17:34 <strigazi> flwang1: or both
10:17:35 <flwang1> strigazi: could be
10:17:59 <flwang1> we can add it later? or do you want to do it in the current patch?
10:18:28 <strigazi> flwang1: if the change is not big, could we do it in this patch?
10:18:40 <flwang1> strigazi: but the hard part is how to store the certs/credentials fluentd needs to talk to the ES
10:18:56 <flwang1> strigazi: i don't know, i need to take a look and get back to you
10:19:09 <strigazi> flwang1: can we make the policy a configmap?
10:19:20 <strigazi> no we can't
10:19:27 <flwang1> you mean the policy of k8s keystone auth?
10:19:28 <strigazi> it is not a pod
10:19:38 <flwang1> no, we can't
10:19:57 <flwang1> unless we want to put it into the master's kubelet
10:20:17 <flwang1> but i don't want to go that way, that's one of the problems
10:21:41 <strigazi> Does this policy need to change frequently?
10:22:01 <flwang1> strigazi: you know, like the policy.json in openstack world
10:22:18 <strigazi> I mean, it needs to change if we give access to more projects?
10:22:20 <flwang1> it shouldn't, based on my understanding
10:23:04 <flwang1> strigazi: hmm... for that case, maybe
10:23:28 <strigazi> So if project A has a cluster and owns the cluster, can the admin give read access to project B?
10:24:08 <flwang1> strigazi: TBH, i haven't tested this, but technically yes
10:24:38 <flwang1> however, my current patch doesn't support that
10:24:47 <flwang1> currently, the project id is injected silently
10:25:27 <strigazi> ok, I'll test it
10:26:25 <strigazi> flwang1: I think giving access to more projects in running clusters is a plus, what do you think?
10:27:10 <flwang1> strigazi: it could be useful for private cloud
10:27:15 <flwang1> not public cloud ;)
10:28:11 <flwang1> we can discuss more details offline
10:28:15 <strigazi> ok
10:28:18 <strigazi> thanks
10:28:36 <flwang1> another thing i'm working on is the cluster monitoring
10:28:52 <flwang1> i have proposed a patch to add health_status and health_status_reason
10:29:09 <flwang1> for next step, i need to know current status of the eventlet issue
10:29:27 <strigazi> flwang1: I think it will take too long to fix
10:29:46 <strigazi> flwang1:  we should just use python requests IMO
10:30:00 <flwang1> strigazi: can we just add a very thin wrapper to use requests
10:30:17 <strigazi> maybe it works in python3, but it is not good enough
10:30:17 <flwang1> and switch back when it's fixed
10:30:59 <strigazi> flwang1: sounds good given the situation with eventlet, the problem in magnum is fixed but breaks elsewhere
10:31:29 <flwang1> strigazi: ok, i will try to make a wrapper
10:31:30 <strigazi> so we can't bump o/r/g-r
10:32:12 <strigazi> flwang1: node status should be enough to start with
10:32:12 <flwang1> or just replace k8s client with requests
10:32:20 <flwang1> strigazi: yep
10:33:12 <flwang1> i'm using a json dict for the health_status_reason, so we should be able to express more
10:33:38 <strigazi> flwang1 +1
10:34:02 <strigazi> flwang1: it could be the dict returned by k8s api?
10:34:23 <strigazi> hmm, maybe it will limit us in the future
10:34:42 <flwang1> strigazi: i prefer to do some re-formating
10:34:56 <flwang1> instead of using the resp from k8s api
10:35:10 <flwang1> to make sure we have some flexible space
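A minimal sketch of that thin-wrapper idea, assuming a plain requests call against the Kubernetes /api/v1/nodes endpoint and an illustrative (not final) shape for health_status / health_status_reason:

    # Illustration only: poll node status with requests (sidestepping the k8s
    # client / eventlet issue) and reformat the response rather than storing the
    # raw API payload.
    import requests

    def poll_health_status(api_address, ca_file, cert_file, key_file):
        resp = requests.get(api_address + "/api/v1/nodes",
                            cert=(cert_file, key_file), verify=ca_file,
                            timeout=30)
        resp.raise_for_status()
        reason = {}
        healthy = True
        for node in resp.json().get("items", []):
            name = node["metadata"]["name"]
            ready = next((c["status"] for c in node["status"]["conditions"]
                          if c["type"] == "Ready"), "Unknown")
            reason[name] = {"Ready": ready}   # reformatted, not the raw k8s response
            healthy = healthy and ready == "True"
        # Illustrative status values; the actual enum would come from the patch.
        return ("HEALTHY" if healthy else "UNHEALTHY"), reason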
10:35:11 <strigazi> flwang1: fyi https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/node-problem-detector
10:35:19 <flwang1> that's on my list
10:36:18 <strigazi> cool
10:37:33 <flwang1> that's all from my side
10:37:40 <flwang1> lot of things to do
10:37:49 <strigazi> Cool, I looked into one more thing
10:38:07 <strigazi> multimaster without octavia lbaas
10:38:31 <strigazi> it can be done if we have a reverse proxy on the nodes. kubespray does this
10:39:04 <strigazi> the only caveat is that if the masters change we need to update the nodes
10:39:11 <strigazi> thoughts?
10:39:55 <flwang1> what do you mean by update the nodes? update the master ip address on the nodes?
10:40:06 <strigazi> yes, update the master ip(s)
10:41:15 <strigazi> at the moment, we don't support changing the masters anyway, but if we do in the future it is doable
10:41:17 <flwang1> that's doable
10:42:37 <strigazi> ok then, I'll have a look
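A small sketch of the kubespray-style approach, in Python for illustration: each worker renders a local nginx stream proxy in front of the current master IPs and kubelet only talks to 127.0.0.1, which is why the config has to be re-rendered if the masters ever change (the IPs, port and output handling are placeholders):

    # Illustration only: render a local reverse-proxy config for the apiserver(s).
    MASTER_IPS = ["10.0.0.5", "10.0.0.6", "10.0.0.7"]   # placeholder master IPs

    def render_apiserver_proxy(master_ips, listen_port=6443, apiserver_port=6443):
        lines = ["stream {", "    upstream kube_apiserver {"]
        lines += ["        server %s:%d;" % (ip, apiserver_port) for ip in master_ips]
        lines += ["    }",
                  "    server {",
                  "        listen 127.0.0.1:%d;" % listen_port,
                  "        proxy_pass kube_apiserver;",
                  "    }",
                  "}"]
        return "\n".join(lines)

    # Would be written into the node's nginx config by a heat fragment; if the
    # set of masters changes, every node needs this file re-rendered.
    print(render_apiserver_proxy(MASTER_IPS))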
10:43:09 <strigazi> brtknr: do you want to discuss something?
10:43:23 <strigazi> brtknr: I have something for you actually
10:43:52 <strigazi> brtknr: Do you want to push a patch to be able to choose the cgroup-driver?
10:44:08 <strigazi> systemd/cgroupfs
10:44:37 <brtknr> I wanted to see where this conversation was going, plus maybe ask a question about the blueprint i submitted
10:45:02 <brtknr> I can push that patch :)
10:45:27 <strigazi> brtknr: thanks
10:45:31 <strigazi> brtknr: which bp?
10:45:40 <brtknr> strigazi: Blueprint re attaching multiple networks to magnum cluster
10:45:51 <brtknr> which we are currently doing manually via nova
10:46:20 <brtknr> after cluster creation
10:46:29 <strigazi> brtknr: do you have a mock patch for this, how to do it in heat?
10:46:37 <strigazi> brtknr: multiple private?
10:46:49 <brtknr> yes
10:47:19 <brtknr> I don't have a mock patch yet but am wondering if this is blueprint-worthy / whether other people also have the same problem
10:47:21 <strigazi> brtknr: you want to be able to do it in cluster creation only or change it after as well?
10:47:36 <flwang1> brtknr: does that mean something like 1 network for master  and 1 for nodes?
10:47:48 <strigazi> brtknr: It sounds like a feature to me :)
10:48:02 <strigazi> flwang1: many private networks for all nodes
10:48:10 <flwang1> strigazi: wow
10:48:20 <flwang1> that's not a small change i think
10:48:31 <strigazi> it isn't
10:48:36 <brtknr> No, e.g. attach infiniband, ethernet both to masters and nodes
10:49:00 <strigazi> No to what?
10:49:26 <brtknr> No to 1 network for master, 1 for node
10:49:34 <brtknr> although that is an interesting proposition
10:50:06 <strigazi> brtknr: well flwang1 is more interested in having the masters in one project and the minions in another
10:50:15 <strigazi> we as well, but this is even harder
10:50:45 <strigazi> the above logic can be useful though
10:50:53 <brtknr> hmm yes, similar to the invisible control plane that they have on gcloud?
10:50:59 <strigazi> yes
10:51:04 <flwang1> yes
10:51:17 <strigazi> I like to call it not accountable :)
10:51:44 <flwang1> I would like to call it invisible masters :)
10:52:12 <strigazi> we can have a router that is owned by the ops team
10:52:31 <strigazi> and a router for the client
10:52:47 <brtknr> openstack coe cluster create --invisible-master?
10:54:28 <strigazi> is it like this in GKE?
10:54:35 <flwang1> brtknr: no, it could be a default action configured by ops
10:54:51 <flwang1> strigazi: no, in GKE, it's the default behaviour
10:55:02 <strigazi> we can offer both
10:55:08 <flwang1> strigazi: sure
10:55:21 <brtknr> For the multiple networks, the syntax would look something like `openstack cluster create k8s-cluster --fixed-network network1 network2 network3`
10:55:27 <flwang1> i'd like to pick this up in S cycle
10:55:28 <strigazi> advanced users might want to access the master nodes, it is their cluster anyway
10:55:51 <brtknr> I am not sure how subnets would be handled
10:55:53 <strigazi> Maybe they want to access the etcd db
10:57:14 <strigazi> brtknr: you can describe how you do it manually in the bp
10:57:18 <flwang1> we're running out of time
10:57:28 <brtknr> Ok will do
10:57:36 <strigazi> thanks
10:57:59 <strigazi> let's continue offline
10:58:20 <strigazi> thanks for joining the meeting brtknr and flwang1
10:58:27 <brtknr> Back to the other point, where would systemd/cgroupfs option be defined? At cluster creation?
10:58:43 <flwang1> strigazi: thank you
10:58:44 <strigazi> brtknr: it would be a label
10:59:12 <strigazi> we can continue in the channel
10:59:16 <strigazi> #endmeeting