09:59:46 <strigazi> #startmeeting containers
09:59:48 <openstack> Meeting started Tue Apr 24 09:59:46 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:59:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:59:51 <openstack> The meeting name has been set to 'containers'
09:59:59 <strigazi> #topic Roll Call
10:00:48 <flwang1> o/
10:00:53 <rochapor1o> o/
10:01:27 <flwang1> rochapor1o: waves from Wellington
10:02:09 <rochapor1o> flwang1: waves back from gva :)
10:02:33 <rochapor1o> say hi to everyone
10:02:47 <strigazi> #topic Blueprints/Bugs/Ideas
10:03:12 <strigazi> From my side, I discovered this one, which
10:03:21 <strigazi> I'm also responsible for:
10:03:40 <strigazi> #link https://bugs.launchpad.net/magnum/+bug/1766284
10:03:42 <openstack> Launchpad bug 1766284 in Magnum rocky "k8s_fedora: Kubernetes dashboard service account must not have any permissions" [Critical,In progress] - Assigned to Spyros Trigazis (strigazi)
10:03:55 <flwang1> strigazi: yep, that's a good catch
10:04:01 <strigazi> The fix is easy, just create an admin user
10:04:17 <strigazi> and use its token
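A rough sketch of the fix strigazi describes, assuming a pre-1.24-era cluster where a token secret is auto-created for each service account; the account and binding names are illustrative, not necessarily the ones used in the actual patch:

    # Create a dedicated admin service account and use its token for the
    # dashboard login, instead of granting permissions to the dashboard's
    # own service account.
    import base64
    import subprocess

    def kubectl(*args):
        return subprocess.run(["kubectl", *args], check=True,
                              capture_output=True, text=True).stdout

    # Hypothetical names; the real patch may pick different ones.
    kubectl("-n", "kube-system", "create", "serviceaccount", "admin-user")
    kubectl("create", "clusterrolebinding", "admin-user",
            "--clusterrole=cluster-admin",
            "--serviceaccount=kube-system:admin-user")

    # The token lives in the secret attached to the service account.
    secret = kubectl("-n", "kube-system", "get", "serviceaccount",
                     "admin-user", "-o", "jsonpath={.secrets[0].name}")
    token = base64.b64decode(
        kubectl("-n", "kube-system", "get", "secret", secret,
                "-o", "jsonpath={.data.token}")).decode()
    print(token)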
10:05:57 <strigazi> Apart from that, I don't have an update from last week, eventlet is still stuck and I haven't updated my other patches, will do this week
10:06:59 <flwang1> strigazi: finished?
10:07:02 <strigazi> Just some input for flwang1: 548139 looks good, I'm testing it today
10:07:14 <strigazi> flwang1: yes, not much this week from me
10:07:25 <flwang1> strigazi: ok, from my side
10:07:52 <flwang1> I finally made the calico-on-master patch work, i think it's ready to go
10:08:21 <flwang1> and still working on the keystone integration work, need some help with the atomic install
10:08:35 <strigazi> flwang1: are we sure that we don't need kube-proxy on master?
10:08:45 <flwang1> besides, i've started to work on the health monitoring
10:09:02 <flwang1> strigazi: i tested it, so far i haven't seen any issue
10:09:13 <flwang1> dashboard, dns, heapster, all work
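A quick way to repeat the check flwang1 describes, with kube-proxy removed from the master: confirm the kube-system addons still come up. The substrings below match the default addon pod names, which may differ per deployment; this is a manual sanity check, not part of the patch.

    import subprocess

    out = subprocess.run(
        ["kubectl", "-n", "kube-system", "get", "pods", "--no-headers"],
        check=True, capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        name, status = fields[0], fields[2]
        if any(k in name for k in ("dashboard", "dns", "heapster")):
            print(name, status)  # expect Running for each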
10:10:28 <flwang1> btw, i will take vacation from Wed to next Mon, back next Tue, just FYI
10:10:52 <strigazi> do you have any input for the health monitor? Are you working on the status?
10:11:05 <sfilatov> Hi! I just created a bug: https://bugs.launchpad.net/magnum/+bug/1766546
10:11:05 <openstack> Launchpad bug 1766546 in Magnum "Multi-Master deployments for k8s driver use different service account keys" [Undecided,New]
10:11:16 <flwang1> strigazi: yep, i'm working on adding 2 new fields for the schema
10:11:24 <sfilatov> I would like to discuss the way to resolve this
10:11:39 <flwang1> sfilatov: we're in meeting, can we discuss offline?
10:11:44 <strigazi> sfilatov: give us a few mins; rochapor1o fyi 1766546
10:12:03 <strigazi> flwang1: I pinged him to discuss it in open discussion
10:12:19 <flwang1> strigazi: cool
10:12:29 <flwang1> that's all on my side, thanks
10:12:47 <sfilatov> We can work around it using ca.key so each server has the same artifact
10:12:49 <strigazi> Are the two fields in the spec?
10:12:58 <flwang1> strigazi: yes
10:13:09 <strigazi> flwang1: perfect
10:13:10 <flwang1> as we discussed last week
10:13:15 <strigazi> good
10:13:48 <sfilatov> My concern is that it's not safe to expose it and we could generate this pair separately
10:13:59 <flwang1> strigazi: i'm very excited to see magnum is going to have auto-healing
10:13:59 <sfilatov> And not use ca.key for serviceaccount tokens
10:15:02 <strigazi> flwang1: :)
10:15:13 <strigazi> flwang1: that's all from you, right?
10:15:26 <flwang1> yes
10:15:31 <flwang1> i'm clear
10:15:38 <strigazi> sfilatov:
10:16:24 <rochapor1o> looking
10:16:40 <strigazi> do you have a concern about using the ca.key in general, or just for the serviceaccount key?
10:17:36 <sfilatov> strigazi: serviceaccount key. My concern is we definitely don't need it for the serviceaccount key
10:17:53 <sfilatov> strigazi: So if we can avoid exposing it, we'd better not
10:18:25 <strigazi> sfilatov: you mean, not set it at all?
10:18:56 <strigazi> sfilatov: and let kubernetes generate one? Have you tried it?
10:19:01 <sfilatov> strigazi: Not pass it to the users' VMs
10:19:01 <rochapor1o> flwang1: i've started looking at auto healing
10:19:17 <rochapor1o> you'll be at the summit? we can discuss there
10:20:32 <strigazi> sfilatov: I'm confused, the problem is that we need the same key. And you suggest to use what as a key for service account tokens?
10:20:54 <rochapor1o> sfilatov: the goal is to have a separate cert/key pair for the kube services, isn't it? we can store it in the cert store (barbican or the default db) and load from there
10:21:34 <flwang1> rochapor1o: no, i won't be there because limited budget ;(
10:21:58 <sfilatov> strigazi: Yes, we need the same key. I suggest we generate it. Since ca.key is a security issue - whoever has it could authenticate to our k8s cluster
10:22:50 <sfilatov> rochapor1o: yes, though I'm not sure we need to store it at all. We could generate it and pass it through heat parameters
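A minimal sketch of the "generate a separate key" idea using the cryptography library. How the PEMs are then distributed (e.g. as heat parameters wired to kube-apiserver's --service-account-key-file and kube-controller-manager's --service-account-private-key-file on every master) is an assumption here, not taken from an existing patch.

    # Generate one RSA key pair per cluster for signing/verifying service
    # account tokens, so ca.key never has to leave the control plane.
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=4096,
                                   backend=default_backend())
    private_pem = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption())
    public_pem = key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo)

    # These two strings would then be handed to every master: the private
    # key for kube-controller-manager (--service-account-private-key-file)
    # and the public key for kube-apiserver (--service-account-key-file).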
10:23:30 <rochapor1o> we already use barbican to store the cluster secrets
10:23:40 <rochapor1o> you would need them if you decide to update the number of masters later
10:23:50 <rochapor1o> or for upgrades
10:24:11 <sfilatov> rochapor1o: yes you are right
10:24:14 <strigazi> rochapor1o: the key will be available in heat as a parameter
10:24:40 <strigazi> it will work when scaling as well
10:24:45 <flwang1> any impact if there is no barbican?
10:25:35 <rochapor1o> secrets go to the magnum db if there's no barbican (the default)
10:26:14 <strigazi> sfilatov: What I don't get is why it is a security concern, since the key will only be on master nodes, and the master nodes already have the api server certs.
10:26:58 <sfilatov> strigazi: I suppose the api server certs are not as important as the ca key
10:27:09 <sfilatov> strigazi: Though I might be wrong
10:27:25 <flwang1> rochapor1o: yep, i know. but if we save it in the magnum db, magnum can only save 1 ca file
10:27:50 <sfilatov> strigazi: I see that anyone who has the ca.key can authenticate as any user in magnum
10:27:59 <strigazi> sfilatov: we can double check then whether the current certs on the master node(s) are similar security risks
10:28:51 <strigazi> sfilatov: yes, the ca.key can give access to everything
10:29:30 <rochapor1o> flwang1: i think db uses the same secret naming convention. if we change to have a special cert/keypair for the api server which is shared among all masters, a change for barbican will also work for the db case
10:29:36 <strigazi> sfilatov: let's take this offline in the bug then
10:29:52 <flwang1> strigazi: +1
10:30:04 <flwang1> let's discuss this offline
10:30:31 <strigazi> sfilatov: can you have a look at whether the other certs are a security risk? I hope they are not.
10:31:02 <sfilatov> strigazi: Can I try to implement this fix if we decide to generate a key? :)
10:31:18 <sfilatov> strigazi: Yes, I will check other certs
10:31:23 <strigazi> sfilatov: and if so, we can generate a key just for this purpose
10:31:54 <strigazi> sfilatov: maybe it can also be used for the cert manager?
10:32:17 <strigazi> sfilatov: if we decide to go this way, go for it
10:32:23 <sfilatov> strigazi: do you mean controller manager?
10:32:44 <openstackgerrit> Merged openstack/magnum master: Add and improve tests for certificate manager  https://review.openstack.org/552244
10:32:53 <strigazi> sfilatov: no, this one https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
10:33:44 <flwang1> strigazi: should we move on?
10:33:50 <strigazi> yes
10:35:58 <strigazi> flwang1: Do you have anything else for the meeting?
10:36:44 <flwang1> strigazi: there are only 2 milestones for Rocky
10:37:08 <flwang1> we should review our current pipeline to see where we should pay more attention
10:37:20 <flwang1> to make sure we can meet our goals for Rocky
10:38:42 <strigazi> We can try to aim for rocky-2
10:39:04 <strigazi> flwang1: Do you have any standing reviews apart from calico?
10:39:39 <flwang1> i'm blocked by the atomic command for keystone
10:39:51 <strigazi> flwang1: ok
10:40:01 <flwang1> after finishing keystone, i can focus on the monitoring stuff
10:40:13 <flwang1> which i think is the pre-condition for auto-healing
10:40:44 <flwang1> strigazi: and i'm also keen to review the upgrade PoC patch
10:41:23 <strigazi> ok, let's aim for Thursday to test
10:41:48 <flwang1> strigazi: and i have this tiny patch https://review.openstack.org/562454
10:41:53 <sfilatov> flwang1: can you DM upgrade PoC patch to me? I'm also interested
10:41:54 <flwang1> rename some scripts
10:42:21 <flwang1> sfilatov: haha, i'm waiting for the magic link from strigazi ;)
10:42:46 <strigazi> I'll put you both in the loop
10:42:47 <sfilatov> flwang1: looking forward :)
10:43:17 <strigazi> thanks folks
10:43:56 <flwang1> sfilatov: let's push this guy ;)
10:44:11 <strigazi> push is good
10:45:18 <flwang1> strigazi: and it would be nice if you could update the spec to reflect the api changes (show available versions)
10:45:22 <strigazi> Renaming things is not exciting, but I see the point
10:45:38 <flwang1> strigazi: i know it's not exciting, so no rush
10:46:16 <flwang1> strigazi: btw, another thing i mentioned before is the fluentd support
10:46:57 <flwang1> i think it's low-hanging fruit, but it's not my first priority
10:47:10 <flwang1> in Rocky, i'd like to help get the auto-healing and upgrade done
10:47:15 <strigazi> flwang1: yes, did you create a bp or bug?
10:47:23 <flwang1> strigazi: not yet
10:47:39 <flwang1> i'm happy to create one in case a new contributor is interested in it
10:48:07 <strigazi> Create a bp then
10:48:27 <strigazi> I hope we can find someone
10:48:29 <flwang1> strigazi: no problem
10:50:12 <strigazi> flwang1: do you want to discuss the atomic issue? Is there anything else?
10:50:39 <flwang1> strigazi: yep, i prefer to discuss the atomic issue offline
10:50:54 <flwang1> i don't want to waste others' time
10:51:33 <strigazi> ok
10:52:08 <strigazi> let's wrap this then
10:52:16 <flwang1> strigazi: here you go https://blueprints.launchpad.net/magnum/+spec/fluentd
10:53:52 <strigazi> flwang1: thanks, can you add some more info? Will this be self-contained in the cluster?
10:54:19 <strigazi> Will it accept an endpoint as a parameter to push to different places?
10:54:21 <flwang1> strigazi: sure, i will add more info later
10:55:06 <flwang1> based on my understanding of fluentd, it's basically a logging agent, so we will start it as a daemonset
10:55:17 <flwang1> to make sure it can run on each node
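A minimal sketch of the daemonset flwang1 describes, built as a plain dict and dumped to YAML; the image tag and the mounted host paths are placeholders loosely modelled on the upstream fluentd-elasticsearch addon (linked later in the discussion), not a proposed Magnum template.

    # Run fluentd on every node as a log-collection agent; the addon ships
    # the collected logs to an in-cluster Elasticsearch.
    import yaml

    daemonset = {
        "apiVersion": "apps/v1",
        "kind": "DaemonSet",
        "metadata": {"name": "fluentd-es", "namespace": "kube-system"},
        "spec": {
            "selector": {"matchLabels": {"k8s-app": "fluentd-es"}},
            "template": {
                "metadata": {"labels": {"k8s-app": "fluentd-es"}},
                "spec": {
                    "containers": [{
                        "name": "fluentd-es",
                        # Placeholder image; the upstream addon pins a
                        # specific fluentd-elasticsearch build.
                        "image": "fluentd-elasticsearch:latest",
                        "volumeMounts": [
                            {"name": "varlog", "mountPath": "/var/log"},
                            {"name": "containers",
                             "mountPath": "/var/lib/docker/containers",
                             "readOnly": True},
                        ],
                    }],
                    "volumes": [
                        {"name": "varlog",
                         "hostPath": {"path": "/var/log"}},
                        {"name": "containers",
                         "hostPath": {"path": "/var/lib/docker/containers"}},
                    ],
                },
            },
        },
    }
    print(yaml.safe_dump(daemonset))  # apply with: kubectl apply -f -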
10:56:39 <strigazi> Where will the data be stored?
10:57:03 <flwang1> Elasticsearch
10:57:27 <strigazi> In a provided endpoint? or inside the cluster?
10:58:03 <flwang1> strigazi: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
10:58:14 <flwang1> strigazi: in cluster
10:58:21 <strigazi> ok self-contained then
10:58:50 <flwang1> yes
10:58:55 <strigazi> We can also look into the option of giving an external es endpoint
10:58:58 <flwang1> i'm also referring this https://kubernetes.io/docs/concepts/cluster-administration/logging/
10:59:06 <flwang1> strigazi: absolutely
10:59:35 <flwang1> i can imagine cern has already got an elasticsearch setup
10:59:46 <strigazi> time is up, we can discuss in the channel
10:59:52 <flwang1> yes
10:59:53 <flwang1> sure
10:59:59 <strigazi> ending the meeting then, thanks :)
11:00:04 <strigazi> #endmeeting