16:00:14 #startmeeting containers
16:00:16 Meeting started Tue Sep 27 16:00:14 2016 UTC and is due to finish in 60 minutes. The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:19 The meeting name has been set to 'containers'
16:00:21 #topic Roll Call
16:00:27 o/
16:00:29 Spyros Trigazis
16:00:35 Hieu LE o/
16:00:40 Stephen Watson o/
16:00:43 o/
16:00:43 Rob Pothier
16:00:44 Madhuri Kumari
16:00:48 Jaycen Grant
16:00:48 Ton Ngo
16:01:13 o/
16:01:14 o/
16:01:18 Adrian Otto
16:01:37 Thanks for joining the meeting Drago strigazi hieulq_ swatson eghobo rpothier mkrai jvgrant tonanhngo dane_leblanc__ vijendar adrian_otto
16:01:42 #topic Announcements
16:02:02 1. The Magnum PTL election ended. adrian_otto is your PTL for the Ocata cycle
16:02:16 I would like to extend my most sincere thanks to hongbin for serving as our PTL over the Newton release cycle. We deeply appreciate your leadership during this time, and look forward to working together more.
16:02:18 Thanks Hongbin for leading Magnum through the Newton cycle!
16:02:39 +1
16:02:41 +1
16:02:52 Thanks hongbin
16:02:53 thanks hongbin and congrats adrian_otto
16:02:54 +1
16:02:55 Thanks. Glad to have worked with you in Newton
16:02:57 Thanks hongbin :)
16:02:57 Congrats Adrian
16:03:05 Congratulations adrian_otto
16:03:19 Thanks hongbin and congrats adrian_otto
16:03:20 I will work with adrian_otto this week on the transition.
16:03:39 #topic Review Action Items
16:04:01 let me find the list of AIs
16:04:19 1. hongbin talk to the kuryr ptl to set up a joint session at the design summit
16:04:31 The Kuryr PTL agreed to have a joint session
16:04:51 I believe adrian_otto will follow up on the details later
16:04:56 I will, thanks.
16:05:11 let's record an action for me to follow up with him
16:05:12 2. hongbin contact the heat team about a joint session for discussing heat performance
16:05:31 #action adrian_otto follow up with the Kuryr PTL to arrange a joint session
16:05:58 I believe strigazi is working on this?
16:06:05 Yes
16:06:22 strigazi: how is the session going?
16:06:42 I contacted the heat ptl and he told me that they are doing a large-stacks session where we can join
16:06:51 with tripleo
16:07:08 There will be no specific
16:07:13 heat-magnum session
16:07:47 We can start the discussion on that
16:07:52 We should plan to attend that session.
16:07:52 and see how it goes
16:08:07 They should have an etherpad for planning; we should add our topics for discussion
16:08:15 the only concern I have is making sure it's placed on the schedule such that it does not conflict with other priorities of ours.
16:08:45 tonanhngo: can you help us track down a link to that etherpad?
16:08:46 hopefully this session will not conflict with our own sessions
16:08:56 I can continue on that and have more details
16:09:04 Sure, I will follow up
16:09:07 I'm preparing a list of detailed subjects
16:09:13 hongbin: do you happen to know who's putting the schedule together this year?
16:09:32 adrian_otto: i think it is ttx
16:09:34 They also have a PTL change
16:09:39 thanks hongbin
16:09:58 ok, that is all for the AIs
16:10:09 #topic Essential Blueprints Review
16:10:17 1. Support baremetal container clusters (strigazi)
16:10:23 #link https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support
16:10:33 strigazi: ^^
16:11:00 While working on the mesos bm driver, I noticed the only differences between the vm and bm drivers
16:11:16 are the private network and the lack of cinder support
16:11:54 With a colleague here we investigated how we can make the private network optional but enabled by default
16:12:16 and pass the selected network as a parameter (we have that option already)
16:12:30 So, for the mesos driver
16:13:04 I will only modify the current driver to support both flavors with the same heat templates
16:13:23 Eventually this can happen for the other drivers too
16:13:52 we are testing how the optional network goes at the moment
16:13:56 that's it
16:14:12 thanks strigazi
16:14:27 any comments regarding the ironic integration?
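As a sketch of the optional-but-default private network described above, assuming Heat's condition support (heat_template_version 2016-10-14); the parameter and resource names below are illustrative, not the actual Magnum driver's:

    heat_template_version: 2016-10-14

    parameters:
      fixed_network:
        type: string
        description: existing network to use; leave empty to create a private one
        default: ""

    conditions:
      create_private_network:
        equals: [{get_param: fixed_network}, ""]

    resources:
      # created only when no external network was passed in
      private_network:
        type: OS::Neutron::Net
        condition: create_private_network

      private_subnet:
        type: OS::Neutron::Subnet
        condition: create_private_network
        properties:
          network: {get_resource: private_network}
          cidr: 10.0.0.0/24

    outputs:
      node_network:
        # fall back to the freshly created network when none was selected
        value:
          if:
            - create_private_network
            - {get_resource: private_network}
            - {get_param: fixed_network}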
16:14:39 2. Magnum User Guide for Cloud Operator (tango)
16:14:47 #link https://blueprints.launchpad.net/magnum/+spec/user-guide
16:14:51 tonanhngo: ^^
16:14:53 The section on Horizon and native clients merged. Thanks again everyone for the helpful comments.
16:15:08 With this, I think the key sections in the guide are in place
16:15:30 I will pause for a while to focus on the scalability work for the summit
16:15:44 tonanhngo: I have a suggestion
16:15:59 strigazi: Sure
16:16:01 to discuss until the summit and at the summit
16:16:27 The user guide has a lot of good content but has become very big
16:16:47 also there is content for ops and for end users
16:16:59 we could start splitting the guide
16:17:15 into architecture, ops and user guides
16:17:30 what do you think?
16:17:42 that was our original intent as of the Tokyo summit
16:17:51 That's a good idea. I was thinking of starting an operator guide. The role distinction is basically who manages the infrastructure
16:18:12 It's true that some parts of the current user guide are really for operators
16:18:34 the service operator has one set of interests, and the service consumer has another set.
16:18:36 We have talked about tuning, which would belong in an operator guide
16:18:45 so it might be nice to split along that line.
16:18:46 I could yes
16:18:54 (only yes)
16:19:21 What I wanted to say is
16:19:37 I want to add content for the ironic integration and I'm not sure where
16:19:51 Maybe the arch guide?
16:20:15 Who would be the audience for the arch guide?
16:20:18 seems to me that fits in the operator guide
16:20:31 Mostly ops but
16:20:52 it won't have any instructions to follow or setup steps
16:21:11 it will be the description of the service
16:21:40 Without a strict definition, it's an ops guide
16:21:54 So maybe we can start the operator guide by pulling the appropriate sections from the current user guide, and add more.
16:22:08 Then see if a separate arch guide would be needed
16:22:18 seems sensible
16:22:27 makes sense
16:22:32 thanks Ton
16:22:39 Thanks Spyros
16:22:45 I worry that a standalone arch guide may be rather short
16:22:58 leaving the reader with an appetite for more specifics
16:23:11 if the ops guide grows too much we split later
16:23:43 It could start as a section in the operator guide, and we can direct the reader there
16:24:01 new file?
16:24:13 Yes definitely
16:24:23 great
16:24:54 ok, next topic
16:24:58 3. COE Bay Drivers (muralia)
16:25:04 #link https://blueprints.launchpad.net/magnum/+spec/bay-drivers
16:25:10 muralia: ^^
16:25:12 I have muralia's update
16:25:26 "so I updated my driver patch. https://review.openstack.org/#/c/374906 . not sure why some of the functional tests are failing at the gates. they pass for me in devstack."
16:25:37 "I've addressed all comments. just need more reviews while I fix those failing tests"
16:25:44 That is all
16:26:12 Thanks Drago
16:26:25 #topic Kuryr Integration Update (tango)
16:26:32 tonanhngo: ^^
16:26:39 I attended the Kuryr meeting yesterday
16:26:53 The IPVLAN proposal seems to be moving along quickly
16:27:35 A POC was made available and they have already started putting the implementation into the Kuryr lib
16:27:57 some parts will go into libnetwork for Swarm
16:28:35 The nested container support is coming along slowly
16:28:50 similarly for the REST API driver
16:29:07 so nothing much for us to act on yet
16:30:06 As they implement the IPVLAN support, I think issues are starting to emerge
16:30:49 We will have to see how these are addressed. If they cannot be solved, then they will just be limitations to be aware of
16:30:57 That's about all
16:30:59 so the implementation is not working completely yet?
16:31:21 They just have the bare minimum to show the concept
16:31:34 the full implementation is in progress now
16:31:37 ok
16:31:56 thanks tonanhngo
16:32:03 #topic Open Discussion
16:32:28 Anyone have a topic to discuss?
16:32:35 So there is a fix for the Kubernetes load balancer, at least for LBaaS v1
16:32:42 I have an image with k8s 1.3
16:33:06 That was for Ton ^^
16:34:23 it looks like my nick just changed (not sure why)
16:34:45 tonanhngo, with k8s 1.3, lbaas v2 is expected to work, right?
16:37:06 it looks like folks keep disconnecting
16:37:15 Yeah, something wrong with freenode maybe
16:37:17 It looks like IRC is going haywire
16:37:27 I got kicked and had to reconnect
16:37:39 I'm here
16:38:07 same, but I see tons of notifications of people dropping and adding
16:38:22 sweet!!
16:38:34 I have filtered them all, so for me it's just weird
16:38:52 IRC is having a complete fit
16:39:09 Lost my connection
16:39:26 strange that some of us aren't impacted
16:40:06 probably just some servers
16:40:46 has it become stable now?
16:41:00 tonanhngo, about the fix
16:41:26 you set hostname-override for all cases
16:41:47 for clouds without proper DNS that is a problem
16:42:10 Yes, we need that because the user can decide to use the load balancer at any time
16:42:28 The DNS issue also impacts some of the kubectl commands
16:42:35 like exec
16:42:35 yeap
16:42:41 strigazi: I must have missed something because I don't understand the context of your remarks. Is hostname-override related to LBaaS? How?
16:43:01 The context is as follows
16:44:10 I assume Ton is typing
16:44:35 Yes, I show up as both tango and tonanhngo :)
16:45:20 So to make this happen, we need to register the Nova name of the instance for the node with kube-apiserver
16:45:39 We do this using the hostname-override option in kubelet
16:46:32 This is one of the requirements for any feature in Kubernetes that requires interaction with OpenStack
16:46:51 I got disconnected again
16:47:15 So, this holds for the load balancer and cinder volume support, and likely other features in the future.
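To illustrate the registration being discussed: kubelet's --hostname-override flag makes the node register with kube-apiserver under a chosen name, here the Nova instance name. A minimal sketch as a cloud-init fragment, where the file path and the $INSTANCE_NAME substitution are illustrative, not the exact Magnum template:

    #cloud-config
    # illustrative fragment: $INSTANCE_NAME stands for the Nova name of
    # the instance, substituted when Heat renders the template
    write_files:
      - path: /etc/kubernetes/kubelet
        owner: "root:root"
        permissions: "0644"
        content: |
          # register the node with kube-apiserver under its Nova instance
          # name, so OpenStack-backed features (LBaaS, cinder volumes)
          # can match the Kubernetes node to the Nova instance
          KUBELET_ARGS="--hostname-override=$INSTANCE_NAME"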
16:47:50 So back to the DNS question, it might be worth considering adding a DNS server as part of the Kubernetes cluster
16:48:45 We can update /etc/hosts on all nodes; we need the heat-agents for that
16:49:35 That's another option
16:49:40 maybe easier
16:49:58 What do you mean by having a DNS as part of the cluster?
16:50:12 install a DNS service
16:50:20 and register the node names there
16:50:51 like skyDNS? I thought it was only for the services
16:52:31 It would be good to try out
16:52:46 strigazi: tango: you could use /etc/hosts as a DNS; what it needs is to be populated with the IP addresses/hostnames of all the instances
16:53:08 Heat agents will definitely be on the nodes too
16:53:19 They're needed for lifecycle operations
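A sketch of the /etc/hosts idea, assuming a heat-agents script-style SoftwareConfig re-run whenever the member list changes; the resource name and the nodes_json input are illustrative (and note the consistency caveat raised just after this):

    # illustrative: rewrites the cluster block of /etc/hosts on each node
    # from a JSON map of hostname -> IP supplied by the template
    hosts_update:
      type: OS::Heat::SoftwareConfig
      properties:
        group: script
        inputs:
          - name: nodes_json
        config: |
          #!/bin/sh
          # drop the previously written cluster entries, then append the
          # current membership
          sed -i '/# magnum-cluster/d' /etc/hosts
          echo "$nodes_json" | python -c '
          import json, sys
          for name, ip in sorted(json.load(sys.stdin).items()):
              print("%s %s # magnum-cluster" % (ip, name))
          ' >> /etc/hosts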
16:54:18 Docker tried an implementation recently where each node in a swarm had a mirrored copy of a hosts file, and that proved problematic.
16:54:41 probably because they did not have a way to handle inconsistencies in that distributed state
16:54:56 i see
16:55:08 if you have a node that's responding too slowly, it can miss updates, and then the state is inconsistent
16:55:30 packet loss or other split-cluster conditions can lead to that
16:55:55 so what seems like a simple solution starts to become a complex distributed system once you try to address all the edge cases
16:56:07 True
16:56:24 it will be very hard to debug
16:56:46 so using something on top of a quarum data store (for example etcd, zookeeper...) tends to be better
16:58:03 Quarum? Is that quorum or is there something called quarum? I'm unfamiliar
16:58:04 SkyDNS uses etcd as a backing store for the resource records.
16:58:26 sorry, I meant "quorum" as a category
16:58:44 we have only a couple of minutes remaining.
16:58:47 SkyDNS, isn't it only for the services to be registered?
16:59:00 let's go to our channel
16:59:01 This seems like a general problem that needs some investigation.
16:59:16 ok
16:59:20 yeah, we can continue this in #openstack-containers
16:59:50 ok. thanks for joining the meeting
16:59:55 #endmeeting
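On the SkyDNS question left open above: SkyDNS serves whatever records exist under its etcd prefix, so node names could be registered there alongside the service records. A minimal sketch with illustrative names and addresses, assuming a cluster domain of cluster.local:

    # SkyDNS reads records from etcd; for the domain cluster.local, the
    # name node-0.cluster.local lives under the reversed path shown here
    etcdctl set /skydns/local/cluster/node-0 '{"host": "10.0.0.5"}'
    etcdctl set /skydns/local/cluster/node-1 '{"host": "10.0.0.6"}'
    # skydns itself is pointed at the same domain and etcd endpoints, e.g.
    #   skydns -domain=cluster.local. -machines=http://127.0.0.1:2379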