16:01:19 #startmeeting containers
16:01:20 Meeting started Tue Sep 13 16:01:19 2016 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:23 The meeting name has been set to 'containers'
16:01:29 #topic Roll Call
16:01:31 o/
16:01:33 murali allada
16:01:33 o/
16:01:35 Ton Ngo
16:01:39 Jaycen Grant
16:01:39 o/
16:01:43 o/
16:01:51 o/
16:01:56 I don't have a stable internet connection today, so strigazi will chair today's meeting
16:02:18 Thanks for joining the meeting Drago1 muralia hongbin tonanhngo jvgrant eghobo dane_leblanc__ rpothier
16:02:23 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-09-13_1600_UTC
16:02:34 #topic Announcements
16:02:44 I have two minor ones
16:03:04 magnum's debian packages have moved to gerrit now
16:03:10 deb-magnum
16:03:17 and deb-python-magnumclient
16:03:35 all contributions can be done there from now on
16:04:04 #link https://review.openstack.org/#/admin/projects/openstack/deb-magnum
16:04:18 #link https://review.openstack.org/#/admin/projects/openstack/deb-python-magnumclient
16:04:25 #topic Review Action Items
16:04:32 hongbin clean up the review queue (WIP: Adrian Otto left comments on the inactive reviews to prompt for actions) DONE
16:04:44 #topic Essential Blueprints Review
16:04:50 1. Support baremetal container clusters (strigazi)
16:04:55 #link https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support
16:04:59 the split between vm and bm k8s-fedora is done
16:05:07 nice
16:05:12 this week I expect to push an update of the drivers spec for the common dir structure and mesos baremetal (which I'm testing)
16:05:20 I'll also update the user-guide for adding a new driver; writing docs is always more difficult than you expect
16:05:33 +1 :)
16:05:47 and mkrai is working on bm for swarm
16:05:58 o/
16:06:02 questions?
16:06:11 hi adrian_otto
16:06:17 hi
16:06:36 next
16:06:40 2. Magnum User Guide for Cloud Operator (tango || tonanhngo)
16:06:44 #link https://blueprints.launchpad.net/magnum/+spec/user-guide
16:07:02 The Scaling section was merged, thanks everyone for the helpful review.
16:07:31 I am still working on the Horizon and native client section, will upload a patch shortly
16:07:36 that's all for now
16:08:00 thanks Ton
16:08:11 3. COE Bay Drivers (muralia)
16:08:15 #link https://blueprints.launchpad.net/magnum/+spec/bay-drivers
16:08:50 I'm still working on tests. unit tests are done. fixing functional tests. this is a lot of work.
16:09:37 Is there something specific that breaks?
16:09:58 lots of tests are broken because we need to add a driver mock
16:10:15 ack
16:10:22 there were close to 100 unit tests failing because of this
16:10:25 i fixed those.
16:10:33 now looking at functional tests
16:10:51 that's all. just making progress slowly.
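[Editor's note: a minimal, self-contained sketch of the driver-mock pattern muralia describes above. The get_driver/create_cluster names are illustrative stand-ins for Magnum's internals, not its actual code; the point is that cluster logic resolves a COE driver at runtime, so unit tests stub that lookup out.]

    import unittest
    from unittest import mock

    # Hypothetical stand-in for the lookup that resolves a COE driver
    # by (server_type, os, coe); in Magnum this lives in the drivers
    # package and consults the installed driver entry points.
    def get_driver(server_type, os, coe):
        raise RuntimeError('real driver registry unavailable in unit tests')

    # Hypothetical code under test: resolve a driver, delegate creation.
    def create_cluster(context, cluster):
        driver = get_driver(cluster.server_type, cluster.os, cluster.coe)
        driver.create_cluster(context, cluster)

    class TestClusterCreate(unittest.TestCase):
        @mock.patch(__name__ + '.get_driver')
        def test_create_delegates_to_driver(self, mock_get_driver):
            fake_driver = mock.Mock()
            mock_get_driver.return_value = fake_driver
            ctx = mock.Mock()
            cluster = mock.Mock(server_type='vm', os='fedora-atomic',
                                coe='kubernetes')
            create_cluster(ctx, cluster)
            fake_driver.create_cluster.assert_called_once_with(ctx, cluster)

    if __name__ == '__main__':
        unittest.main()

Stubbing the lookup once per test class rather than per test is the kind of change that touches many tests at once, which matches the scale of breakage muralia mentions.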
16:11:33 thanks
16:11:43 4. Rename bay to cluster (jvgrant)
16:11:47 #link https://blueprints.launchpad.net/magnum/+spec/rename-bay-to-cluster
16:12:02 All patches have now been merged!! :)
16:12:15 whoot!!
16:12:22 nice!
16:12:24 the only remaining references to bay/baymodel should be for history and backwards compatibility
16:12:24 great!
16:12:50 Thanks to everyone who helped with the giant reviews
16:13:00 I haven't used bay for more than a week
16:13:03 :)
16:13:04 Worth announcing at the next summit
16:13:05 and swatson who helped a ton with the client portion
16:13:48 that is all
16:13:51 well done jvgrant
16:14:09 Do you want to mark the bp as complete?
16:14:17 yeah
16:14:29 I know that was a hefty task, jvgrant and swatson. Thanks for banging that out.
16:14:49 next topic
16:14:54 #topic Kuryr Integration Update (tonanhngo)
16:15:12 I attended the Kuryr meeting yesterday
16:15:37 Release 1 is progressing, and they are working on release 2
16:16:04 However this only supports baremetal. Container-in-VM requires more work
16:16:43 There is a proposal for a different implementation to support containers in VMs, using IPVLAN
16:17:08 Looks like they will proceed with a POC to flesh out the pros/cons
16:17:53 I added a few more patches to round out the integration with the earlier Kuryr, but they are still marked as WIP
16:18:18 Is there anything I could test on fake bm? (soon actual bm)
16:18:27 because of security concerns, and we expect them to change in a few weeks when they have release 2
16:19:17 Not really ATM, because we still need the REST server in release 2
16:19:32 ack
16:19:34 I think for now, we can pause for a little while
16:19:48 and track the development in Kuryr
16:20:22 That's all for now
16:20:40 thanks Ton
16:20:52 #topic Other blueprints/Bugs/Reviews/Ideas
16:21:06 Magnum Newton release
16:21:15 #link http://releases.openstack.org/newton/schedule.html Newton release schedule
16:21:35 The final release is on Sep 26-30
16:22:09 #topic Open Discussion
16:22:22 I have two topics
16:22:40 i have a question, after you
16:22:51 I want to bring up some issues with the k8s load balancer also.
16:23:14 1. I tested fedora atomic 24 with docker 1.10 and uploaded it to fedorapeople
16:23:35 so you can use it or build it with this change:
16:23:48 #link https://review.openstack.org/#/c/344779/
16:23:58 No change required to the current templates/scripts?
16:24:39 strigazi: why not update to docker 1.11?
16:24:43 I haven't noticed anything, and the functional tests pass. also we've been using docker 1.10 with f23 for more than 2 months
16:25:16 f24 ships with only 1.10, but
16:25:41 I did a custom build with docker 1.11 from f25
16:25:56 I'll publish instructions and the image tomorrow
16:26:14 nice.
16:26:29 if we want to use what is upstream we must use 1.10
16:26:35 until november 25
16:26:54 I have also built with 1.12 from f26 :)
16:27:03 I was greedy I guess
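[Editor's note: a rough sketch, not an official procedure, of registering the Fedora Atomic image strigazi mentions with Glance so cluster templates can reference it. The endpoint, credentials, and file name are placeholders; Magnum identifies suitable images by their os_distro property.]

    from glanceclient import Client
    from keystoneauth1 import loading as ks_loading
    from keystoneauth1 import session as ks_session

    # Build an authenticated session (placeholder credentials/endpoint).
    loader = ks_loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_name='Default', project_domain_name='Default')
    glance = Client('2', session=ks_session.Session(auth=auth))

    # Register the image; os_distro tells Magnum which driver applies.
    image = glance.images.create(
        name='fedora-atomic-24-docker-1.10',
        disk_format='qcow2', container_format='bare',
        visibility='public', os_distro='fedora-atomic')
    with open('fedora-atomic-24.qcow2', 'rb') as f:  # placeholder file
        glance.images.upload(image.id, f)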
16:27:22 and two:
16:27:42 What about updating mesos to 1.0 for newton
16:28:43 mesos in magnum is somewhat unstable anyway; if we update we might attract more users
16:28:47 i think we should update as many images as we can for newton
16:28:58 team?
16:29:31 +1
16:29:39 +1
16:29:52 if we have the resources to carry it out
16:30:00 hongbin you have some concerns about updating
16:30:14 before the release
16:30:31 yes, we are at rc1 right now, which is very close to the final release
16:30:42 i do have concerns about upgrading the COE version
16:30:48 which could be a big change
16:31:01 we could do it at the driver level though, after the release
16:31:12 yes, I'm ok with that too
16:31:45 ok, that means we will release the driver in the next cycle
16:32:13 or do you want to backport the driver upgrade?
16:32:18 hopefully not, but with the pace it's taking to fix tests, that might be possible
16:32:27 if we can backport, we should do that
16:32:56 then, keep in mind the backport policy of openstack
16:33:19 hmm, haven't looked at it. anything specific I should be aware of?
16:33:23 the reviewers should review the backport patch against the openstack backport policy
16:33:30 ok
16:33:51 muralia: basically, it says don't backport anything besides bug fixes
16:33:51 ok, so updates of COEs on the next release
16:34:05 thanks
16:34:36 at CERN we'll update the drivers soon anyway
16:35:09 Users can always pull from master if they want the feature
16:35:11 I don't have anything else
16:35:23 i have a question for the team
16:35:45 are we ready to make a final release? any patch that hasn't been merged?
16:36:44 no?
16:36:46 I have some final updates on the install-guide
16:36:52 We might consider a fix for the kubernetes loadbalancer
16:36:59 yes, we might be ready to do so. The driver work seems to be the only one remaining, but I'm concerned that such a big change at the last minute might not be fine.
16:37:34 ok
16:37:59 tonanhngo: do we have a bug for this?
16:38:24 Dane just reported the problem last week
16:38:52 let me open a bug
16:39:09 There was an old bug opened a while ago
16:39:22 the problem is more prevalent, but we can start with a partial fix
16:39:41 I guess you are done Hongbin?
16:39:41 #link https://bugs.launchpad.net/magnum/+bug/1524025
16:39:42 Launchpad bug 1524025 in Magnum "Kubernetes external loadbalancer is not getting created" [Undecided,In progress] - Assigned to Dane LeBlanc (leblancd)
16:39:54 tonanhngo: yes
16:40:21 sorry, final comment.
16:40:33 it looks like we are ready to freeze the repo now?
16:41:09 yup.
16:41:09 Can we do it tomorrow, to update the install-guide?
16:41:26 strigazi: yes, will wait for your patch and ton's patch
16:41:40 Ton you
16:41:45 So the minor problem is that the configuration for the k8s controller changed a bit because it is now a container instead of a process. I have a simple patch for that.
16:41:46 but the rest of the patches should be frozen now
16:42:14 any other concerns?
16:42:20 tonanhngo: I'm covered by your last message
16:43:12 #action hongbin to freeze the magnum service repo
16:43:58 thanks. that is from me
16:44:00 tonanhngo: Is there more needed than this patch that just merged: #link https://review.openstack.org/368996
16:44:02 However the larger problem is that the k8s plugin for OpenStack still uses LBaaS V1 and Keystone V2.
16:44:35 We can't even get LBaaS V1 on devstack anymore, and in Newton
16:45:10 There is support for LBaaS V2 in K8s release 1.3, but all our images still have 1.2
16:45:59 we can consider building a custom image with Fedora 24 and K8s 1.3
16:46:21 and update our scripts to work with this image
16:46:55 Support for Keystone V3 is apparently still being tested
16:48:07 I'll give it a go with k8s 1.3, it's doable but a custom build
16:48:52 I am wondering whether it's OK to let the K8s load balancer be broken for the Newton release, and add a fix later
16:49:20 or should we try to get it working with a custom image
16:49:36 we can fix it but document
16:49:51 that k8s lbaas requires a custom image
16:49:58 ok
16:50:34 sounds good, I will check out Spyros' new image tomorrow
16:51:03 ok
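[Editor's note: relevant to the LBaaS V1 vs V2 discussion above, a small sketch for checking which LBaaS API a cloud actually exposes before pointing k8s at it. Credentials are placeholders; the aliases assume the standard Neutron LBaaS extension names.]

    from keystoneauth1 import loading as ks_loading
    from keystoneauth1 import session as ks_session
    from neutronclient.v2_0 import client as neutron_client

    loader = ks_loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',   # placeholder endpoint
        username='admin', password='secret', project_name='admin',
        user_domain_name='Default', project_domain_name='Default')
    neutron = neutron_client.Client(session=ks_session.Session(auth=auth))

    # Each Neutron extension advertises an alias; LBaaS V1 and V2 use
    # 'lbaas' and 'lbaasv2' respectively.
    aliases = {e['alias'] for e in neutron.list_extensions()['extensions']}
    if 'lbaasv2' in aliases:
        print('LBaaS V2 available (needs the k8s 1.3 plugin support)')
    elif 'lbaas' in aliases:
        print('Only LBaaS V1 available')
    else:
        print('No LBaaS extension found')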
16:51:14 I also have a quick note to share with the team, and an open invitation
16:51:50 Some of us (myself, Spyros, Ricardo, my colleague Winnie) have been working on scalability for Magnum and the COEs
16:52:17 We requested a large cluster to run the Rally benchmarks we have been developing
16:52:45 We should be getting access to a 360-node cluster soon, from the CNCF lab (similar to OSIC)
16:53:08 Since this is a public resource, we want to keep it open to the team
16:53:40 It's also a major undertaking to install OpenStack there, manage it, and run benchmarks
16:54:04 If anyone is interested in joining the effort, you are quite welcome to give a hand
16:54:41 awesome. thanks for that update
16:54:49 The results will be public, and we hope this will help with adoption
16:55:22 tonanhngo: I can allocate some time to help out with it.
16:55:49 That would be awesome. I am worried about installing OpenStack on 360 nodes
16:56:11 can we use the openstack-ansible project?
16:56:20 rajiv__: yes.
16:56:23 I am not sure about the stability of it.
16:56:28 I am thinking about ansible, or Kolla
16:56:37 that will help a lot. We recently did a lot of work to make that work well with Magnum
16:57:22 rdo-manager would be a good option but the nodes run ubuntu trusty
16:58:17 I want to discuss https://review.openstack.org/352358 which adds a more restricted security group for the cluster. I'm not sure about having essentially an "all closed" network security policy on all new clusters. From a security perspective it's a best practice to be secure by default. From a practical perspective, it means that every "real" application will require a custom COE driver.
16:58:17 does anyone have configuration information?
16:59:00 rajiv__: this is your patch
16:59:10 We can move to #openstack-containers
16:59:19 time is up team
16:59:30 ok
16:59:36 #endmeeting
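[Editor's note: to illustrate rajiv__'s practical concern above — with an "all closed" default security group, every port an application exposes would then need an explicit ingress rule, roughly like the one below. The group ID, port, and credentials are placeholders.]

    from keystoneauth1 import loading as ks_loading
    from keystoneauth1 import session as ks_session
    from neutronclient.v2_0 import client as neutron_client

    loader = ks_loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',   # placeholder endpoint
        username='admin', password='secret', project_name='admin',
        user_domain_name='Default', project_domain_name='Default')
    neutron = neutron_client.Client(session=ks_session.Session(auth=auth))

    # Allow inbound TCP 30000 (e.g. a k8s NodePort service) from anywhere.
    neutron.create_security_group_rule({'security_group_rule': {
        'security_group_id': 'SECGROUP-UUID',   # placeholder
        'direction': 'ingress',
        'protocol': 'tcp',
        'port_range_min': 30000,
        'port_range_max': 30000,
        'remote_ip_prefix': '0.0.0.0/0',
    }})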