10:00:22 <strigazi> #startmeeting containers
10:00:23 <openstack> Meeting started Tue Feb  6 10:00:22 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:00:24 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:00:27 <openstack> The meeting name has been set to 'containers'
10:00:32 <strigazi> #topic Roll Call
10:00:37 <ricolin> o/
10:00:42 <flwang1> o/
10:00:43 <strigazi> o/
10:00:43 <slunkad> hi
10:01:37 <strigazi> Thanks for joining the meeting folks
10:01:41 <strigazi> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-02-06_1000_UTC
10:01:53 <strigazi> #topic Announcements
10:02:34 <strigazi> this Thursday we will cut the branch. Push your changes now; we can still push to the branch afterwards, but it is better to land them this week
10:03:11 <strigazi> we can discuss this later in the meeting if you have questions
10:03:17 <strigazi> #topic Review Action Items
10:03:34 <strigazi> strigazi to look for a new meeting time APAC friendly [DONE]
10:03:48 <flwang1> strigazi: thanks
10:04:21 <ricolin> strigazi, I think it's better to send a mail to the ML to let everyone know we changed the time and location
10:04:23 <strigazi> flwang1: you are welcome, it is great to have you and ricolin here
10:04:38 <strigazi> I sent the email for the time last week
10:04:48 <strigazi> no. yesterday
10:05:07 <strigazi> We'll send one more for the time
10:05:07 <ricolin> strigazi, cool!
10:05:13 <strigazi> We'll send one more for the place
10:05:18 <ricolin> lol
10:05:50 <strigazi> #topic Blueprints/Bugs/Reviews/Ideas
10:06:30 <strigazi> Calico network driver for kubernetes: flwang1, do you want to say a few things about this feature for ricolin, slunkad and the meeting record? :)
10:06:51 <flwang1> strigazi: yes
10:07:04 <flwang1> catalyst cloud is trying to deploy magnum into production
10:07:23 <flwang1> but to get a production-ready k8s, we would like to have network policy support
10:07:34 <flwang1> which Flannel currently doesn't support
10:07:59 <flwang1> so we'd like to upstream the Calico driver to achieve that
10:08:23 <flwang1> https://review.openstack.org/540352 here is the patch i'm working on
10:08:56 <ricolin> #link https://review.openstack.org/540352
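
(For context: the feature flwang1 refers to is Kubernetes NetworkPolicy, which only takes effect with a policy-aware network driver such as Calico; flannel on its own does not enforce it. A minimal deny-all-ingress example through the kubernetes python client might look like the sketch below; class names follow recent client releases, and the kubeconfig-based access is an assumption.)

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (assumed to point at the cluster).
    config.load_kube_config()

    # An empty pod selector matches every pod in the namespace; listing "Ingress"
    # with no ingress rules denies all incoming traffic. Flannel ignores this
    # object, while Calico enforces it.
    deny_all_ingress = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-ingress"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),
            policy_types=["Ingress"],
        ),
    )

    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="default", body=deny_all_ingress)
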
10:09:10 <flwang1> we'd like to get it in Queens if it's possible
10:09:42 <strigazi> flwang1: we will try to get it in
10:09:54 <flwang1> strigazi: thank you for all the support
10:10:00 <strigazi> flwang1: do you need help with the software deployment?
10:10:09 <strigazi> me and ricolin can help
10:10:24 <ricolin> strigazi, yes boss!
10:10:46 <flwang1> strigazi: i will push a patch tomorrow and please review it and feel free to post a patch set
10:11:03 <strigazi> flwang1: cool
10:11:29 <strigazi> btw, here are some slides for best practices on security for kubernetes
10:11:42 <strigazi> #link https://fosdem.org/2018/schedule/event/containers_kubernetes_security/
10:11:50 <strigazi> #link https://speakerdeck.com/ianlewis/kubernetes-security-best-practices
10:12:51 <flwang1> strigazi: thanks for sharing
10:12:53 <strigazi> the two links above are from a talk at a conference in Europe this past weekend on best practices for kubernetes security; it covers RBAC, calico and more
10:13:29 <strigazi> I also found a slide deck from a conference yesterday, again on kubernetes security; I found it useful
10:13:33 <flwang1> strigazi: great
10:13:39 <strigazi> #link https://docs.google.com/presentation/d/e/2PACX-1vQwQkF4MjGebZoWqBaJ1F_Nf3HSYS-tjX13JMND0aJ92dXw1flSwAgTIoekHumUuX7LAgBkv3rQS-qp/pub?start=false&loop=false&delayms=3000&slide=id.SLIDES_API1559816053_0
10:14:58 <strigazi> speaking of the heat-agent, ricolin what about that patch on passing the public_url?
10:16:02 <ricolin> I was stuck on the releases job, but I hope I can get it out in the next couple of days
10:16:32 <strigazi> ricolin the patchset that is up now, is it working?
10:16:58 <ricolin> strigazi, which one you mean?
10:17:37 <strigazi> I thought you pushed a patch already
10:17:50 <ricolin> strigazi, I should have, but not yet
10:17:58 <strigazi> ricolin :) ok
10:18:26 <ricolin> It will help if we can use k8s gate in upstream
10:18:44 <strigazi> ricolin: what do you mean?
10:18:55 <strigazi> I think our k8s job should work now
10:19:07 <ricolin> strigazi, maybe we can plan a PTG session to discuss what we can do with magnum-functional-k8s?
10:19:33 <strigazi> ricolin there is nothing we can do on openstack-infra
10:20:12 <strigazi> they give us only m1.large vms, but most importantly they don't give us nested virtualization
10:21:19 <strigazi> A reasonable environment needs 15 GB RAM, 8 or 16 cores, at least 100 GB of disk and *nested* virtualization
10:21:52 <strigazi> multinode would work as well but we still need nested virtualization
10:22:02 <ricolin> strigazi, does the current infra even have any environment that can do nested virtualization?
10:22:11 <strigazi> ricolin: no
10:22:17 <strigazi> I asked multiple times
10:22:26 <ricolin> okay:(
10:23:02 <strigazi> I tried to do something in centos-ci but I got stuck and didn't have more time for it
10:23:25 <strigazi> let's move on
10:23:33 <strigazi> kubernetes python client and eventlet incompatibility https://github.com/eventlet/eventlet/issues/147
10:23:48 <strigazi> I'm repeating this, and I will add it to the release notes
10:24:13 <strigazi> the periodic task to collect metrics from the cluster is broken
10:24:29 <strigazi> and it also breaks the task that syncs the cluster status with heat
10:24:56 <strigazi> flwang1: this is the task (the metrics one) that we want to use for cluster healing
10:25:14 <flwang1> strigazi: ok
10:25:35 <strigazi> But most importantly, it breaks the sync with heat, so clusters are stuck in CREATE_IN_PROGRESS forever
10:26:09 <strigazi> we will have a parameter to disable that task, so magnum will continue to work normally
10:26:45 <slunkad> is there a patch for this already?
10:27:00 <strigazi> slunkad for the parameter yes
10:27:18 <flwang1> strigazi: is it only impacting master branch? or Pike as well?
10:27:56 <flwang1> can we bump the eventlet version?
10:28:31 <strigazi> flwang1: master only. Eventlet hasn't fixed the problem yet
10:29:11 <strigazi> flwang1: by disabling the task, magnum works fine. At the moment you only lose the part that sends metrics to ceilometer
10:29:16 <flwang1> strigazi: ok. can't we bump(skip) that eventlet version in requirements?
10:29:29 <flwang1> strigazi: i see.
10:29:46 <strigazi> the problem is in the kubernetes python client 4.0.0
10:29:48 <flwang1> can you paste the link of the patch disabling the task?
10:30:16 <strigazi> #link https://review.openstack.org/#/c/529098/
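
(The patch above adds an operator-facing switch; a rough sketch of registering such an option follows. The option name, group and default are assumptions for illustration, not necessarily what the review merges.)

    from oslo_config import cfg

    # Assumed option: turning it off skips the metrics-collection periodic task,
    # which avoids the kubernetes-client/eventlet clash and keeps the
    # heat-status-sync task working.
    drivers_opts = [
        cfg.BoolOpt('send_cluster_metrics',
                    default=True,
                    help='Allow periodic tasks to collect cluster metrics and '
                         'send them to ceilometer. Disable to work around the '
                         'kubernetes python client / eventlet incompatibility.'),
    ]

    cfg.CONF.register_opts(drivers_opts, group='drivers')
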
10:30:38 <flwang1> strigazi: cheers
10:31:37 <strigazi> flwang1: oslo.messaging depends on eventlet
10:31:57 <strigazi> flwang1: kubernetes depends on multiprocessing
10:32:11 <strigazi> eventlet and multiprocessing are incompatible
10:32:29 <flwang1> ok, thanks for the clarification
10:33:00 <strigazi> we can use python requests directly, but that way we'd be rewriting the kubernetes client
10:33:38 <strigazi> or we can use kubectl, the binary. that is even worse
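
(To illustrate the "python requests directly" alternative strigazi mentions: talking to the kube-apiserver over plain HTTPS avoids the official client and its multiprocessing machinery that conflicts with eventlet. The address, CA path and token handling below are assumptions, only to show the shape of such a rewrite.)

    import requests

    API_SERVER = "https://10.0.0.5:6443"                 # assumed apiserver address
    CA_BUNDLE = "/etc/kubernetes/ca.crt"                 # assumed CA bundle path
    with open("/var/run/secrets/kubernetes.io/serviceaccount/token") as f:
        token = f.read().strip()

    # List the cluster nodes without using the kubernetes python client.
    resp = requests.get(
        API_SERVER + "/api/v1/nodes",
        headers={"Authorization": "Bearer " + token},
        verify=CA_BUNDLE,
        timeout=10,
    )
    resp.raise_for_status()
    for node in resp.json()["items"]:
        print(node["metadata"]["name"])
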
10:34:01 <flwang1> strigazi: TBH, i don't think magnum has to send metrics to ceilometer
10:34:11 <strigazi> it doesn't
10:34:37 <flwang1> i mean the function shouldn't even be a part of magnum ;)
10:34:50 <strigazi> history
10:34:56 <flwang1> history
10:35:31 <strigazi> in any case, we will need the client at some point for cluster healing and monitoring from magnum server to the clusters
10:35:56 <strigazi> next,
10:36:31 <strigazi> Cluster Federation https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:bp/federation-api I'm testing this and I'll try to get it in as an experimental API
10:36:35 <flwang1> yep, agree
10:37:09 <strigazi> and I'll try to do the same with cluster upgrades
10:37:09 <ricolin> +1 on that
10:37:25 <strigazi> I'll ping you for testing
10:37:36 <strigazi> for upgrades mostly
10:37:50 <strigazi> I'm trying to clean my patches
10:38:07 <strigazi> Finally, to move to f27
10:38:23 <strigazi> there is a patch to run etcd in a container:
10:38:41 <strigazi> #link https://review.openstack.org/#/c/524116/
10:38:54 <strigazi> I'm also finishing the patch for flanneld
10:39:06 <strigazi> f27 doesn't have flanneld and etcd installed
10:39:49 <strigazi> do you have any questions?
10:41:01 <strigazi> ok, let's move on
10:41:05 <strigazi> #topic Open Discussion
10:41:39 <flwang1> i'd like to get your opinions on this bug https://bugs.launchpad.net/magnum/+bug/1746961
10:41:40 <openstack> Launchpad bug 1746961 in Magnum "Support enabled drivers" [Undecided,New] - Assigned to Feilong Wang (flwang)
10:42:14 <flwang1> as I mentioned above, Catalyst Cloud would like to deploy magnum, but we just want to support k8s for now
10:42:36 <flwang1> so we don't want to receive any support ticket about any other driver
10:42:52 <flwang1> so do we have a way to do that now?
10:43:34 <strigazi> at the moment, you can update the magnum python egg and remove the entrypoints
10:43:41 <ricolin> wondering if it would work if we replace the templates :)
10:43:55 <strigazi> ricolin: which template?
10:44:44 <slunkad> does it make sense to have a config option that disables drivers?
10:44:53 <flwang1> strigazi: that is hard for us because we're building virtualenvs for the different services
10:45:11 <flwang1> slunkad: that's the thing i'm trying to propose
10:45:27 <ricolin> strigazi, I mean remove the templates under each driver
10:45:43 <strigazi> ricolin that is also possible,
10:45:58 <strigazi> flwang1: we can do this as well
10:46:14 <flwang1> ricolin: but that way also need to change source code
10:46:21 <strigazi> have enabled_drivers or use the existing enabled_definitions
10:46:28 <ricolin> flwang1, yes
10:46:31 <flwang1> strigazi: yep, https://github.com/openstack/magnum/blob/master/magnum/conf/cluster.py#L25
10:46:57 <flwang1> should we reuse the existing config option or create a new one?
10:48:06 <strigazi> flwang1: let's do a new one
10:48:12 <ricolin> IMO better a new one if that's for disable drivers
10:48:24 <strigazi> flwang1: enabled_definitions is there but unused anyway
10:48:31 <ricolin> you will need to keep it backward compatible
10:48:34 <strigazi> no, for enabling drivers
10:48:49 <flwang1> so are you all happy to have a config option for this use case?
10:48:56 <strigazi> ricolin it is silently ignored already
10:49:11 <strigazi> ricolin: since ocata
10:49:20 <ricolin> strigazi, okay, then it will be fine
10:49:29 <strigazi> flwang1: yes, we can add enabled_drivers
10:49:35 <slunkad> +1
10:49:41 <flwang1> strigazi: cool, thanks
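
(A minimal sketch of the agreed direction, assuming an enabled_drivers list option plus a check at driver-selection time; the option name, group and default are placeholders pending the actual patch.)

    from oslo_config import cfg

    cluster_opts = [
        cfg.ListOpt('enabled_drivers',
                    default=[],
                    help='Cluster drivers the operator wants to expose, e.g. '
                         '["k8s_fedora_atomic_v1"]. An empty list keeps every '
                         'installed driver enabled.'),
    ]

    cfg.CONF.register_opts(cluster_opts, group='cluster')


    def driver_is_enabled(driver_name):
        """Return True if the named driver may be used for new clusters."""
        enabled = cfg.CONF.cluster.enabled_drivers
        return not enabled or driver_name in enabled
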
10:51:17 <slunkad> strigazi: this is building locally now https://review.openstack.org/#/c/520063/13
10:51:42 <slunkad> but I need some +1s on https://review.openstack.org/#/c/539619/ so it is merged and we can test it in gating
10:52:16 <strigazi> slunkad ok
10:52:29 <slunkad> also if you have some time can you give this https://review.openstack.org/#/c/507522/ a code review?
10:52:50 <strigazi> slunkad: I'll have a look
10:53:01 <slunkad> strigazi: and ofc we still need the kubernetes elements in the image..
10:53:11 <slunkad> thanks!
10:53:19 <strigazi> slunkad I know :)
10:54:05 <strigazi> slunkad: if you have a working image we can take https://review.openstack.org/#/c/507522/ ; you will be the ones mostly using it initially
10:55:14 <slunkad> strigazi: I do have a SLES-based image which I use for testing, but I can't make it public
10:55:23 <strigazi> ok
10:56:09 <slunkad> strigazi: not sure if everyone is comfortable merging it for now without the image..
10:57:16 <strigazi> it's a start; I think if we build an image, even locally, and publish it, we can go ahead
10:57:17 <flwang1> slunkad: i'm sorry, but if that's the case, do we really need to move the driver into the main tree?
10:58:15 <slunkad> strigazi: ok then I will try that, thanks
10:58:40 <slunkad> flwang1: we are working on an openSUSE-based image so we will have an image soonish
10:58:44 <strigazi> flwang1: with a working image published we can start
10:59:06 <strigazi> flwang1: with zero options available publicly we can't
10:59:41 <flwang1> strigazi: thanks, that makes sense for me
10:59:54 <strigazi> the time is up folks, thanks for coming
11:00:00 <strigazi> #endmeeting