10:00:17 #startmeeting containers
10:00:18 Meeting started Tue Feb 13 10:00:17 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:00:19 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:00:21 The meeting name has been set to 'containers'
10:00:24 #topic Roll Call
10:00:48 o/
10:00:59 hi
10:01:57 hello slunkad flwang1
10:02:08 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-02-13_1600_UTC
10:02:23 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-02-13_1000_UTC
10:02:26 new time
10:02:40 #topic Announcements
10:02:52 Branch stable/queens is cut
10:03:08 The final release is in two weeks
10:03:29 strigazi: do we still have a chance to merge the calico driver?
10:03:34 flwang1: yes
10:03:39 strigazi: cool
10:04:25 flwang1: we can add changes that don't change requirements and don't break the client etc
10:04:41 strigazi: got it
10:04:58 #topic Blueprints/Bugs/Ideas
10:06:04 Calico network driver for kubernetes https://review.openstack.org/#/c/540352/ flwang1 is there a verification example somewhere? We can add it in the docs
10:06:32 strigazi: yep
10:06:52 just create a template using 'calico' as the network driver
10:07:19 cool, I'll test after the meeting. I mean test a network policy :)
10:07:30 strigazi: thanks a lot
10:07:52 we can check after
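A minimal verification sketch of what flwang1 describes above: create a cluster template with 'calico' as the network driver, then confirm a NetworkPolicy is actually enforced. The flavor, image, and network names are illustrative, and the OSC coe plugin is assumed to be installed; this is not taken from the meeting itself.

    # Create a template that uses calico instead of the default flannel driver
    openstack coe cluster template create k8s-calico \
      --coe kubernetes \
      --image fedora-atomic-latest \
      --external-network public \
      --master-flavor m1.small \
      --flavor m1.small \
      --network-driver calico

    openstack coe cluster create calico-test \
      --cluster-template k8s-calico \
      --node-count 1

    # On the running cluster, apply a default-deny ingress policy; calico
    # should enforce it, whereas the flannel driver would silently ignore it.
    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny
      namespace: default
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
    EOF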
10:07:59 strigazi: btw, we (catalyst cloud) are also upstreaming this one https://review.openstack.org/#/c/543265/
10:08:11 support using octavia as LB
10:08:36 given that neutron lbaas is being deprecated, i think it makes much sense
10:09:56 I thought that neutron lbaas was by default octavia v2
10:10:10 flwang1: hi, will there be a migration path for moving lbaas v2 LBs to Octavia?
10:10:23 flwang1: Thanks, I'll take a look
10:10:29 armaan: it's the above patch
10:10:37 https://review.openstack.org/#/c/543265/
10:10:42 #link https://review.openstack.org/#/c/543265/
10:10:54 armaan: there is no migration path unfortunately in magnum
10:11:17 there is a new config option to indicate if you have octavia deployed
10:11:59 flwang1: nice! last time the neutron team made a mess with lbaas v1 by providing no upgrade path; their response was that operators delete the v1 lbs and create new lbs, which was a nightmare
10:12:25 That's the closest possible option for a migration. New clusters will have octavia v2
10:12:36 strigazi: thanks for sharing the link!
10:13:22 Next is cluster federation
10:13:25 okie, if we can migrate LBs outside of Magnum, then perhaps we can find a workaround for this.
10:13:52 strigazi: Will it be possible to upgrade K8s in Queens?
10:14:21 I mean live upgrading of the K8s bits to a newer version
10:14:28 armaan: I'll try to add a first implementation
10:15:37 About cluster federation, https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:bp/federation-api
10:15:47 nice!
10:16:02 btw what is your opinion on something like this https://review.openstack.org/#/c/459498/
10:16:03 The first patch is merged and the api layer is ready to take it in
10:16:54 armaan: we have kube_tag to set the tag for the kubernetes containers.
10:17:22 armaan: the api will still let you set the kube_tag to a newer version
10:17:55 Okay, i was not aware of this. Let me google it ...
10:19:08 flwang1: Is catalyst interested in cluster federation?
10:19:22 strigazi: not really at this moment
10:19:32 does GKE support that?
10:20:13 flwang1 it is possible to federate with a cluster in their cloud
10:20:59 if so, we may evaluate it later
10:21:03 flwang1: for marketing purposes I guess they don't offer it as an option
10:21:07 but not on our MVP list
10:22:34 ok, next
10:22:43 enable_drivers option https://review.openstack.org/#/c/541663
10:23:04 flwang1: we just need release notes, I will test it
10:23:28 strigazi: i can add a release note after the meeting
10:24:09 flwang1: thanks
10:24:37 #topic Open Discussion
10:25:40 slunkad: flwang1 armaan do you want to discuss anything?
10:25:55 strigazi: do we know of any issues getting barbican to work as cert manager in magnum clusters?
10:26:16 I recently ran into a problem when trying it out
10:26:40 slunkad check your keystone_auth and keystone_authtoken sections in magnum.conf
10:27:10 slunkad: https://docs.openstack.org/magnum/latest/install/install-obs.html
10:27:18 I did, is there anything specific needed? the error I get is about the domain not being found in the project
10:27:20 strigazi: In my testing setup, i observed that stable/pike is broken because of this https://bugs.launchpad.net/magnum/+bug/1744362
10:27:21 slunkad: keystone_authtoken
10:27:22 Launchpad bug 1744362 in Magnum "heat-container-agent fails to communicate with Keystone." [Medium,Confirmed]
10:28:00 strigazi: is this also valid for newton?
10:28:06 slunkad: yes
10:28:14 strigazi: ok thanks, will check then
10:28:30 slunkad: admin_user admin_password admin_tenant_name
10:28:34 strigazi: is there any migration path from x509 to barbican?
10:28:40 flwang1: no
10:28:47 strigazi: yes
10:29:07 so if we deploy magnum first without barbican, how can we migrate to barbican later?
10:29:37 armaan: the bug is in heat :( AFAIK we can't do anything magnum-side
10:29:56 flwang1: there is no supported path to do that
10:30:06 strigazi: ok, i see
10:30:18 flwang1: you could manually import the certs, one by one
10:30:26 flwang1: in theory it can work
10:30:32 ok, i see.
10:30:54 i could upstream the 'magic script' later if we have to do that
10:31:23 take the ca from the db, use the trust_id and trustee user to import the certs to barbican
10:31:51 got it
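A rough sketch of what such a 'magic script' could do, following flwang1's description: pull the cluster's cert material out of the magnum DB, then store it in barbican with a trust-scoped session. The endpoint, variable names, and the assumption that the PEM blobs have already been extracted (and decrypted) from the x509keypair table are all illustrative; this is not a supported migration path.

    # Authenticate as the cluster's trustee user, scoped to its trust
    export OS_AUTH_TYPE=v3password
    export OS_AUTH_URL=https://keystone.example.com/v3   # hypothetical endpoint
    export OS_USER_DOMAIN_NAME=magnum                    # assumed trustee domain
    export OS_USERNAME=$TRUSTEE_USERNAME
    export OS_PASSWORD=$TRUSTEE_PASSWORD
    export OS_TRUST_ID=$TRUST_ID

    # Store each PEM blob as a barbican secret; the returned secret refs
    # would then have to be wired back into the cluster's records by hand
    for f in ca.pem ca.key; do
      openstack secret store --name "${CLUSTER_UUID}-${f}" \
        --payload "$(cat "$f")" \
        --payload-content-type text/plain
    done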
10:32:07 armaan: exposing the admin or internal endpoint would work
10:32:28 strigazi: thanks! I will take it to the #heat folks then
10:33:21 armaan: I don't know of any security reason not to expose the admin and internal endpoints over ssl if you *already* expose the public one
10:33:57 armaan: most openstack services offer the same functionality on all endpoints
10:34:03 ok, the heat agent is currently disabled in our environment
10:34:39 strigazi: AFAIK, we cannot expose our internal endpoints because of keystone
10:35:04 armaan: why not?
10:35:53 no ssl on the internal endpoints
10:35:53 Ricardo Rocha proposed openstack/magnum master: [kubernetes] add ingress controller https://review.openstack.org/528756
10:36:49 armaan: ok, I get this, but you can do the same config for the internal one I guess
10:37:46 slunkad: Do you need anything else for the config?
10:38:12 slunkad: did you try again?
10:38:30 strigazi: I did but it still seems to fail with the same error
10:38:49 and the trust section?
10:39:00 slunkad: does it have the required fields?
10:39:20 strigazi: Yeah, we could do that. I will have a discussion about this with my team.
10:39:35 strigazi: seems so, let me try again
10:42:12 Ricardo Rocha proposed openstack/magnum master: [k8s] allow enabling kubernetes cert manager api https://review.openstack.org/529818
10:42:33 folks, is there anything else to discuss?
10:42:43 strigazi: yup :)
10:43:30 strigazi: I realize I might be barging in here so apologies for that, but armaan just asked me to join in here about https://review.openstack.org/#/c/459498/ (i.e. --coe-version vs. kube_tag)
10:44:53 If I understand you correctly, then you're suggesting for *operators* to select the kubernetes version they would like to enable users to deploy. That gerrit change, as I understand it, talks about enabling that choice for *users* though — a slightly different question.
10:45:34 So I guess our question is, do you have thoughts on the pros and cons of enabling users to select a kubernetes release to deploy, via an API/CLI call?
10:46:36 fghaas coe_version is meant to be used for the actual version. kube_tag on the other hand is for the tag that the nodes are going to pull.
10:47:12 fghaas: users can select the version they want via the kube_tag label.
10:47:47 fghaas: users can use the cli, e.g. with --labels kube_tag=v1.9.1
10:48:29 OK, perfect, and then `coe_version` (as a read-only attribute) would correctly return that?
10:48:51 strigazi: http://paste.openstack.org/show/670986/ fails with this config
10:48:53 fghaas: at the moment, it doesn't return the correct value
10:50:14 fghaas: now that the heat-agent is added, we can export the version from the node to heat and then to the cluster
10:50:37 fghaas: at an api level coe_version will be read-only. makes sense?
10:50:47 slunkad: [trust] section?
10:50:57 strigazi: I see; so that means we just need to educate users that only kubectl --version is authoritative. No worries, we can do that. And, using the --labels approach requires the heat agent. Did I parse that correctly?
10:51:33 fghaas yes, kubectl --version is the source of truth
10:51:52 fghaas: no, the heat-agent is not required for kube_tag
10:52:09 fghaas: kube_tag is used here:
10:52:37 fghaas: http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-master.sh#n8
10:52:56 fghaas: we're also missing an entry in the docs for it :(
10:53:03 strigazi: ah I think i might be missing the trustee_domain_admin_name, I'm using the trustee_domain_admin_id, checking now
10:53:59 fghaas: it's missing here http://git.openstack.org/cgit/openstack/magnum/tree/doc/source/user/index.rst#n294
10:54:37 strigazi: Well *that* can be remedied. :)
10:55:18 slunkad: you need three entries in the trust section
10:55:28 fghaas: you work with armaan ?
10:55:40 fghaas: same team?
10:56:02 strigazi: no longer same team, but same company (since 2013 :) )
10:56:29 fghaas I meant same company, cool :)
10:57:29 time is almost up, we can wrap the meeting, if you need anything to be logged speak now :)
10:58:06 cool, thanks folks, see you next week
10:58:11 #endmeeting
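For reference, the three [trust] entries strigazi points to at 10:55:18 are the ones configured in the install guide linked at 10:27:10; the magnum.conf snippet below follows that guide, with the password as a placeholder:

    [trust]
    trustee_domain_name = magnum
    trustee_domain_admin_name = magnum_domain_admin
    trustee_domain_admin_password = TRUSTEE_DOMAIN_ADMIN_PASSWORD

Note that, per the 10:53:03 exchange, it is trustee_domain_admin_name (not trustee_domain_admin_id) that magnum expects here.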