16:01:24 #startmeeting containers
16:01:25 Meeting started Tue Jun 14 16:01:24 2016 UTC and is due to finish in 60 minutes. The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:28 The meeting name has been set to 'containers'
16:01:31 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-06-14_1600_UTC Today's agenda
16:01:37 #topic Roll Call
16:01:39 Adrian Otto
16:01:40 Madhuri Kumari
16:01:43 Rob Pothier
16:01:48 Spyros Trigazis
16:01:50 Jaycen Grant
16:01:53 Ton Ngo
16:01:59 o/
16:02:28 o/
16:02:36 Thanks for joining the meeting adrian_otto mkrai rpothier strigazi jvgrant_ tango eghobo dane_leblanc
16:02:45 #topic Announcements
16:02:56 I have no announcements
16:03:08 Any announcements from our team members?
16:03:16 o/
16:03:32 #topic Review Action Items
16:03:38 o/
16:03:39 1. hongbin send a ML to ask for a host for Magnum midcycle (DONE)
16:03:45 #link http://lists.openstack.org/pipermail/openstack-dev/2016-June/096803.html
16:03:52 2. hongbin create a doodle poll for collecting midcycle times (DONE)
16:03:57 #link http://doodle.com/poll/5tbcyc37yb7ckiec
16:04:31 We will discuss the midcycle later in the agenda
16:04:46 #topic Essential Blueprints Review
16:04:52 1. Support baremetal container clusters (strigazi)
16:04:57 #link https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support
16:05:07 There is some progress: I'm testing http://lists.openstack.org/pipermail/openstack-dev/2016-June/097235.html
16:05:19 I don't have results yet
16:05:43 I'll finish tomorrow
16:06:05 nothing else
16:06:15 Thanks strigazi
16:06:25 I looked into building the fc23 image; diskimage-builder only supports up to fc22
16:06:25 Questions for strigazi?
16:06:52 It gives an error for fc23; it looks like some packages are missing or got moved
16:07:01 Is it reasonable to ask the dib team for fc23 support?
16:07:02 tango: Yuanying has some notes about it
16:07:14 https://review.openstack.org/#/c/247296/
16:07:29 they are working on it
16:07:51 ok, I will follow up
16:08:15 2. Magnum User Guide for Cloud Operator (tango)
16:08:21 #link https://blueprints.launchpad.net/magnum/+spec/user-guide
16:08:54 There are patches for the Kubernetes and Swarm sections, currently under review. Thanks everyone for the feedback
16:09:05 I am now working on the bay and baymodel sections
16:09:33 That's all for now
16:09:41 Thanks Ton
16:09:51 Comments for this BP?
16:10:14 tango: do you need any help from the team?
16:11:11 As usual, the sections are laid out so anyone can jump in and pick one up to work on
16:11:19 tango: I might help you with the mesos section if I have a chance
16:11:36 ok
16:11:37 I just submitted a bay driver section update. The only thing missing is an example driver.
16:11:40 Some are easy, since they mainly involve integrating the existing docs, like TLS
16:11:56 3. COE Bay Drivers (jamie_h)
16:12:23 The patch which moves the fedora-atomic image scripts is ready for review and merging.
16:12:43 I'm currently working on removing all the hardcoded paths to classes and templates.
16:13:07 Lots of tests are failing after moving classes around. I'm fixing them one at a time and will update the patch soon.
16:13:20 That's all for me.
16:13:29 Thanks muralia
16:13:52 4. Create a magnum installation guide (strigazi)
16:13:58 #link https://blueprints.launchpad.net/magnum/+spec/magnum-installation-guide
16:14:12 New commit based on the template guide: https://review.openstack.org/#/c/315165/
16:14:40 With this format we can merge one distro at a time
16:14:54 The issue is neutron-lbaas
16:15:19 Yes, neutron-lbaas doesn't seem to have an operator-facing installation guide
16:15:21 we have a user asking for help with lbaas
16:15:46 We can't offer help/support for it
16:16:09 I have sent an email to the lbaas team to ask for an operator-facing guide
16:16:18 It looks like they haven't replied yet
16:16:27 I pointed the user to the lbaas IRC channel
16:16:48 that's all
16:17:07 strigazi: For the short term, we can point users to the lbaas channel
16:17:15 strigazi: but we need a long-term solution
16:17:36 I have some notes on installing lbaas in our system
16:18:00 tango: lbaas v1 or v2?
16:18:06 both
16:18:31 based on looking around and trying things out
16:18:37 can we do a temporary lbaas install guide that just lives within ours until they have theirs ready?
16:18:54 Not a bad idea
16:19:02 It's better to focus on decoupling
16:19:12 and make lbaas an optional feature
16:19:23 We still need to provide instructions after decoupling
16:19:33 The brave users can try to install it :)
16:19:43 yes
16:19:43 or desperate :)
16:19:47 it can have the basics for the optional feature, but refer to lbaas for support and details
16:19:54 I am working on decoupling LBaaS
16:20:04 Drago: How is that going?
16:20:33 Fine so far. I am almost getting a kube cluster fully spun up, but keep running into devstack space issues
16:21:17 Drago: I might help you offline to figure out the space issues
16:21:29 hongbin: Thanks!
16:22:20 Drago: do you do that by copying the existing templates or by using Heat conditionals to consolidate the templates?
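[Editor's note: for context on the decoupling approach discussed next, Heat lets a template make a resource optional by mapping its type to OS::Heat::None in an environment file, which turns that resource into a no-op. A minimal sketch; the alias and file names below are hypothetical, not Magnum's actual files:

```yaml
# disable-lbaas.env -- hypothetical Heat environment file.
# Mapping a resource type alias to OS::Heat::None makes Heat treat
# every resource of that type as a no-op, effectively removing it.
resource_registry:
  "Magnum::Optional::Neutron::LBaaS": "OS::Heat::None"
```

An environment that enables the feature would instead map the same alias to a real nested template, e.g. `"Magnum::Optional::Neutron::LBaaS": "lbaas.yaml"`; choosing which environment file to pass at stack-create time toggles the feature.]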
16:22:49 I am using Heat environment files to select between OS::Heat::None and the Neutron resource
16:23:04 It effectively removes the resource
16:23:04 I am not sure if it is a good idea
16:23:27 Drago: It might be worth having a WIP patch up for earlier feedback from the team
16:23:40 I was going to do that once I had a working POC
16:23:54 sounds good
16:24:00 Thanks Drago
16:24:21 Back to the lbaas installation guide: I will push the lbaas team to produce one
16:24:50 In the meantime, tango, feel free to upload your notes if they are helpful
16:25:08 ok, I will fix up some docs and find a place to upload them
16:25:27 #action hongbin follow up with the neutron-lbaas team to produce an operator-facing installation guide
16:25:39 Any other comments for this BP?
16:26:07 #topic Magnum UI Subteam Update (bradjones)
16:26:17 Shu sent out a proposal to re-organize the magnum-ui subteam
16:26:24 #link http://lists.openstack.org/pipermail/openstack-dev/2016-June/097066.html
16:26:43 I think we have enough +1s so far, but your votes are welcome
16:27:06 Thanks everyone who voted on the proposal
16:27:34 Any questions regarding the UI subteam?
16:27:56 #topic Kuryr Integration Update (tango)
16:28:13 tango: any update from the Kuryr team?
16:28:34 I attended the Kuryr meeting yesterday
16:28:59 Shared with the team my experience deploying Kuryr using their Docker image
16:29:29 I suggested improving the logging to help with troubleshooting, so this was taken as an action item
16:30:05 They are proceeding with the new driver for Kubernetes, so I am following that effort
16:30:27 That's all for now
16:30:43 Thanks Ton
16:31:03 Questions for Ton?
16:31:45 #topic Other blueprints/Bugs/Reviews/Ideas
16:31:52 1. Midcycle
16:32:00 #link http://lists.openstack.org/pipermail/openstack-dev/2016-June/096853.html CERN offers to host in Switzerland
16:32:09 #link http://lists.openstack.org/pipermail/openstack-dev/2016-June/097005.html Rackspace offers to host in Austin, San Antonio or San Francisco
16:32:27 CERN would be awesome, although travel might be difficult for some.
16:32:33 For those who will attend the midcycle, which location do you prefer?
16:32:49 Austin.
16:32:57 Austin
16:33:07 Austin
16:33:14 SF or any place in the USA ;)
16:33:29 SF or CERN
16:33:43 CERN
16:33:46 SF or CERN
16:34:22 we could have a doodle
16:34:52 Yes, we could have another doodle for choosing the location
16:35:13 does doodle do that?
16:35:25 choosing a location
16:35:29 it might be best to resolve the location preference first, because not all locations are available on all dates
16:35:57 Right now, we have 3 for Austin
16:36:02 so selecting a location will allow us to select a date, and then we can make our travel plans.
16:36:09 sure
16:36:25 Then, let's resolve the location preference now
16:36:37 Let me ask the question differently
16:36:50 Do you have a location that you want to exclude?
16:37:12 For those who will attend the midcycle, please list the locations you cannot go to
16:37:29 Austin, SF, or CERN
16:38:03 1. Who cannot go to Austin?
16:38:36 2. Who cannot go to Switzerland (CERN)?
16:39:03 do we reply now?
16:39:06 yes
16:39:08 50% chance of getting that approved for me, probably lower for the other Austin residents
16:39:09 please
16:39:22 I can't go to CERN
16:39:34 no CERN for me either
16:39:50 ok
16:40:14 muralia and I are based in Austin
16:40:16 strigazi: are folks from CERN able to travel to the US?
16:40:19 let's do this with a doodle poll. We need more people responding.
16:40:30 ok......
16:40:39 Probably only me, but I have to ask
16:41:16 #action hongbin create a doodle poll to select a location
16:41:23 I suggest using http://civs.cs.cornell.edu/ for selecting a location
16:41:35 and then a Doodle poll for selecting a date
16:41:45 I can't go to CERN; international travel is always a pain :(
16:42:00 I'm pretty sure I need to renew my passport
16:42:05 and that takes like 8 weeks
16:42:24 so it would be tough to finish in time
16:42:52 strigazi: It looks like many folks cannot travel to CERN
16:43:05 ok
16:43:10 I am afraid we need to cross out CERN
16:43:19 np
16:43:34 The remaining options are Austin and SF
16:44:03 We will select between these two locations using http://civs.cs.cornell.edu/
16:44:06 For SF, would it be the same location as last time?
16:44:12 yes
16:44:31 the 2nd Street Rackspace office in downtown San Francisco
16:44:38 Great, we can try the beer tap again :)
16:44:53 heh, yep, that's the one.
16:45:01 http://doodle.com/poll/2x9utspir7vk8ter
16:45:21 strigazi: thanks
16:45:47 OK. Let's advance the topic
16:45:56 2. Support heterogeneous clusters
16:46:04 #link http://lists.openstack.org/pipermail/openstack-dev/2016-June/096380.html ML discussion
16:46:17 #link https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-nodes the BP for manually managing bay nodes
16:46:22 #link https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones the BP for supporting availability zones
16:46:29 #link https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor the BP for supporting multiple flavors
16:46:57 We discussed this in the last meeting
16:47:13 The argument is whether it belongs in Heat or Magnum
16:47:33 For this, I sent an email to ask the Heat team
16:47:35 #link http://lists.openstack.org/pipermail/openstack-dev/2016-June/096812.html Question for Heat
16:48:03 From my understanding, Heat doesn't support multiple resource groups
16:48:40 If we want a heterogeneous feature, nodes with different flavors, AZs, etc. need to go into the same resource group
16:48:41 My understanding is that you can do it with nested templates
16:49:03 how?
16:49:18 you create child templates that each contain a resource group
16:49:34 Drago, can you confirm this is possible?
16:49:55 That would be creating multiple resource groups. I'm not sure how that's the same.
16:50:03 If I want 2, 3, N resource groups, we need to create N resource groups?
16:50:33 For AZs, N will be small
16:50:40 There may be some hacky way to use the index_var to select particular options
16:50:51 if we really wanted it all to be in the same RG
16:51:25 Personally, I don't like putting everything in the same RG
16:52:15 I would rather let Magnum create N Heat stacks, each of which contains a RG
16:53:03 Thoughts?
16:53:04 it does seem simpler that way, actually
16:53:18 OK
16:53:43 We could have contributors explore that direction
16:54:08 Since I'm familiar with Heat, I may be able to explore that
16:54:08 agree?
16:54:10 so each stack contains a single homogeneous RG?
16:54:24 yes
16:54:36 and Magnum has a mapping of the different stacks to know which ones are related to a particular bay?
16:54:46 that may complicate the state machine for bays
16:55:10 we will need some way to surface the state of each of the different bays
16:55:16 s/bays/stacks/
16:55:24 Yes
16:55:37 What is stopping us from putting the different RGs in one template?
16:55:37 Basically, Magnum manages 1 stack or N stacks
16:55:38 so users will know when the bay is actually in a working state
16:56:21 Magnum will tell users the state of the bays by iterating over each stack
16:56:55 #topic Open Discussion
16:57:29 We can think about the idea for now, and re-discuss it in the next meeting
16:58:21 I wanted to point out that a helpful patch landed in python-heatclient. It makes it easy to create a visual diagram of all resources and their dependencies in a stack: https://review.openstack.org/#/c/286913/
16:59:15 cool
16:59:26 (Thanks, stevebaker :) )
16:59:40 nice
16:59:51 Will try that
17:00:04 Time is up
17:00:10 #endmeeting
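[Editor's note: the N-stacks idea discussed above would leave Magnum to derive a single bay status from several Heat stacks. A rough sketch of that aggregation, with all names hypothetical; Magnum's real state machine is more involved than this:

```python
# Hypothetical sketch: fold the statuses of a bay's N Heat stacks
# into one bay status. Heat stacks report statuses such as
# CREATE_IN_PROGRESS, CREATE_COMPLETE, and CREATE_FAILED.

def aggregate_bay_status(stack_statuses):
    """Derive a single bay status from a list of stack statuses.

    Any failed stack fails the bay; any in-progress stack keeps the
    bay in progress; the bay is complete only when every stack is.
    """
    if any(s.endswith("_FAILED") for s in stack_statuses):
        return "CREATE_FAILED"
    if any(s.endswith("_IN_PROGRESS") for s in stack_statuses):
        return "CREATE_IN_PROGRESS"
    if all(s == "CREATE_COMPLETE" for s in stack_statuses):
        return "CREATE_COMPLETE"
    return "UNKNOWN"
```

With this rule, users see the bay as working only once every underlying stack has completed, which matches the concern raised in the discussion above.]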