16:07:18 #startmeeting containers
16:07:19 thanks tmorin
16:07:19 Meeting started Tue Jul 5 16:07:18 2016 UTC and is due to finish in 60 minutes. The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:07:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:07:23 The meeting name has been set to 'containers'
16:07:25 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-07-05_1600_UTC Today's agenda
16:07:30 #topic Roll Call
16:07:32 o/
16:07:34 murali allada
16:07:36 o/
16:07:37 Adrian Otto
16:07:39 o/
16:07:42 o/
16:07:54 o/
16:07:54 Ricardo Rocha
16:08:02 Spyros Trigazis
16:08:05 Thanks for joining the meeting Drago muralia tcammann adrian_otto dane_leblanc eghobo rpothier rochapor1o strigazi
16:08:13 #topic Announcements
16:08:19 I will be traveling next week, so I am unable to chair the next meeting. Looking for a temp chair.
16:08:36 Any volunteer?
16:08:46 o/
16:09:00 adrian_otto: can you chair the next meeting?
16:09:02 yes
16:09:07 adrian_otto: thanks
16:09:11 np.
16:09:15 #topic Review Action Items
16:09:22 1. hongbin create a wiki section to report a list of Magnum users (DONE)
16:09:28 #link https://wiki.openstack.org/wiki/Magnum#Users
16:09:49 It looks like CERN is the only user so far
16:10:21 For others, we would like to know if you are using Magnum
16:10:46 Just write your name and organization there if you like.
16:10:53 #topic Essential Blueprints Review
16:11:00 1. Support baremetal container clusters (strigazi)
16:11:06 #link https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support
16:11:48 strigazi: any update?
16:11:48 Not much since last week, we are still working on the gate tests.
16:12:50 Only the patch for fixed networks was merged
16:12:54 that's it
16:13:03 strigazi: thanks
16:13:12 Any questions for strigazi?
16:13:48 2. Magnum User Guide for Cloud Operator (tango)
16:13:54 #link https://blueprints.launchpad.net/magnum/+spec/user-guide
16:13:59 Ton is on vacation. I will update you on his behalf.
16:14:06 (Ton) The sections on bay driver and baymodel were merged.
16:14:11 (Ton) The new section on bay is about 75% written
16:14:17 (Ton) I will finish it after I return.
16:14:31 That is it from him
16:14:49 Questions?
16:15:08 3. COE Bay Drivers (jamie_h, muralia)
16:15:16 #link https://blueprints.launchpad.net/magnum/+spec/bay-drivers
16:15:31 Thanks strigazi for updating the swarm driver patch while I was out yesterday.
16:15:47 we seem to be really close to getting all the tests passing.
16:16:19 there might be an issue with removing the hardcoded path to the swarm templates right now.
16:16:34 looking into that, but overall we are really close to getting it done.
16:16:57 I'll switch focus to the kubernetes driver once everything looks fine with the swarm driver.
16:17:03 that's all for me
16:17:18 muralia: Thanks for the update
16:17:24 I have two patches as WIP on that bp, please ignore them.
16:17:40 can you abandon them strigazi?
16:17:50 the current patch seems to be the right way to go.
16:18:43 Yes, I'll leave them up for a few days in case I need to run something on the gate. They helped me a little
16:19:02 sure
16:19:04 Hi everyone! I am sorry for being late
16:19:17 mkrai: np
16:19:38 Any comments for the bay driver BP?
16:20:03 4. Create a magnum installation guide (strigazi)
16:20:09 #link https://blueprints.launchpad.net/magnum/+spec/magnum-installation-guide
16:20:18 strigazi: ^^
16:20:41 The guide for SUSE is merged, and I'm still trying to package Debian
16:21:00 I'll also add a "launch an instance" section
16:21:21 that's all
16:21:36 What is the "launch an instance" section for?
16:22:01 like so: http://docs.openstack.org/mitaka/install-guide-ubuntu/launch-instance-heat.html
16:22:39 It's part of the current install guide, to actually test that a simple example works after deployment
16:22:42 OK. It looks like that section is for launching a bay
16:22:50 yes
16:22:54 I see
16:23:15 Any other comments for the installation guide BP?
16:23:34 to clarify
16:23:47 Debian packages also means Ubuntu packages
16:23:56 they are the same for magnum
16:24:18 got it
16:24:34 Any problems with packaging the Debian packages?
16:25:01 Many, but I'm getting closer :)
16:25:03 strigazi: It looks like the Debian part is particularly challenging?
16:25:15 OK. Sounds good
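For reference, a "launch an instance" section of the kind discussed above would boil down to launching a bay with a couple of CLI calls. The sketch below assumes the Mitaka-era python-magnumclient; the image, keypair, network, and flavor names are placeholders for whatever the deployment actually provides.

    # Register a baymodel (cluster template); all names and IDs below are examples only
    magnum baymodel-create --name swarmbaymodel \
                           --image-id fedora-atomic-latest \
                           --keypair-id testkey \
                           --external-network-id public \
                           --dns-nameserver 8.8.8.8 \
                           --flavor-id m1.small \
                           --docker-volume-size 5 \
                           --coe swarm
    # Launch a bay from that baymodel with a single node
    magnum bay-create --name swarmbay --baymodel swarmbaymodel --node-count 1
    # Poll until the bay reaches CREATE_COMPLETE
    magnum bay-show swarmbay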
16:25:43 #topic Magnum UI Subteam Update (bradjones)
16:25:49 bradjones: there?
16:26:11 * hongbin didn't receive any update from magnum-ui so far
16:26:54 It looks like Brad is not here. Let's skip this item
16:27:03 #topic Kuryr Integration Update (tango)
16:27:08 (Ton) I did attend the Kuryr meeting on Monday.
16:27:14 (Ton) They have started the code refactoring into 3 repos: kuryr-lib, kuryr-libnetwork, kuryr-kubernetes.
16:27:20 (Ton) I will use the stable fork for our development until the new repos are stable.
16:27:25 That's it
16:27:30 Comments?
16:28:16 #topic Other blueprints/Bugs/Reviews/Ideas
16:28:22 1. Create a Magnum team account in Docker Hub for hosting official images (strigazi)
16:28:44 There is one, but we don't know who owns it
16:28:45 https://hub.docker.com/r/openstackmagnum/
16:29:18 strigazi: maybe it would help to describe the rationale for creating a magnum repo in Docker Hub?
16:29:44 I pushed a change for cinder swarm integration with rexray deploying it
16:29:50 in a container
16:30:28 but we need a docker image for that. I thought a good option would be to have an official magnum repo for it
16:30:41 this is the change https://review.openstack.org/#/c/336594/
16:31:07 I don't see another option since we use atomic
16:31:25 we can't install anything on it
16:31:35 strigazi: How large is the image?
16:32:06 I think 200MB, but I'm not sure, let me check
16:32:23 Looks quite large
16:32:42 954
16:32:53 https://hub.docker.com/r/strigazi/docker-rexray/builds/bh2bmxvfvuqe2mzpdkrzuwf/
16:32:54 954 MB?!
16:33:41 I guess 164.6MB
16:33:51 it's layered on fedora:23, if we have a common base image (be it fedora:23 or not) the actual rexray part would be much smaller
16:33:52 ok
16:34:11 https://hub.docker.com/r/strigazi/docker-rexray/tags/
16:34:14 216 MB
16:34:32 My worry is the gate won't have enough time to pull it down
16:34:52 this size is with fedora:23
16:34:53 Unless there is a way to build the image into the OS
16:35:11 to have it in the image?
16:35:19 maybe
16:35:34 do we bake in any other image right now?
16:35:39 or will this be the first?
16:35:40 if we do that we can add rexray directly
16:35:58 the openstack-infra team has expressed willingness to have a custom image for us
16:36:18 adrian_otto: a custom fedora-atomic?
16:36:24 because if they do that, we end up using less time for our gate tests, and that's good for the other projects.
16:36:37 strigazi: yes
16:36:45 ok
16:36:55 how will we manage updates? everything we put in the image itself will require a new image being pushed on every security update (vs a docker pull on the nodes)
16:37:07 that means we maintain the rexray package in a way
16:37:55 rochaporlo: good point. Let's try to reduce the size of the docker image and see if that works.
16:38:17 the alternative is an ostree rebase, but that requires a node reboot and it's complicated to manage
16:38:21 I'll try to modify the image
16:38:25 I guess it is very hard to reduce the size, unless we switch to fedora 23
16:38:32 alpine?
16:39:14 Even with a smaller image, where would that image live?
16:39:45 we should send an email to the ML to find out who created the docker hub account
16:39:46 strigazi: Could we build the docker image in the post-devstack hook?
16:40:14 It will download the fedora:23 image anyway
16:41:06 OK. Push the discussion to the ML?
16:41:16 what if we had a local mirror (caching reverse proxy) of the docker hub in each of the clouds where openstack-infra runs, and added host entries to the local /etc/hosts files to use that for the hub DNS names
16:41:50 that would speed up the load of the docker image considerably in the general case, and allow them to be updated without our participation
16:43:13 If the infra team agrees, this could be a solution
16:43:42 and in which account will we host the image?
16:44:01 is there one the upstream owns?
16:44:09 or is this magnum specific?
16:44:25 magnum specific
16:44:37 then we can put a tarball of the image on tarballs.openstack.org
16:44:38 I asked about an official one
16:44:46 and just load it from that
16:45:00 leverage the existing mirror infrastructure
16:45:22 put the docker image in a tarball?
16:45:26 yes
16:45:37 you can create one with docker export
16:45:48 and you re-create it with docker import
16:46:04 OK, then we can run docker build as part of the gate, then upload it to tarballs.openstack.org
16:46:10 would we use that for the installation on production sites?
16:46:10 ok, I'll check if I can upload it there
16:46:16 that should work
16:46:37 rochapor1o: we should have a variable for the source of this
16:46:52 that way each cloud operator can select the location that makes the most sense for them.
16:47:07 that sounds good
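A rough sketch of the build-and-tarball workflow discussed above, for a gate job and a consuming node. Only docker build, export, and import are the commands actually mentioned in the meeting; the image tag, tarball path, and REXRAY_IMAGE_SOURCE variable are hypothetical placeholders.

    # In the gate: build the image and flatten it into a tarball (names are examples only)
    docker build -t openstackmagnum/rexray:latest .
    docker export "$(docker create openstackmagnum/rexray:latest)" > rexray.tar
    # (docker save/load is an alternative that also preserves layers and image metadata)
    # ...then publish rexray.tar, e.g. to tarballs.openstack.org

    # On a node or in a test job: pull from an operator-configurable source
    REXRAY_IMAGE_SOURCE=${REXRAY_IMAGE_SOURCE:-http://tarballs.openstack.org/magnum/rexray.tar}
    curl -sSL "$REXRAY_IMAGE_SOURCE" | docker import - openstackmagnum/rexray:latest

Making the source a variable, as suggested above, lets each cloud operator point the nodes at a local mirror instead of tarballs.openstack.org or Docker Hub.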
16:47:23 #topic Open Discussion
16:47:46 So we won't have a magnum account on docker hub
16:48:12 that's a future option
16:48:13 strigazi: I guess it is unnecessary, if we can run docker build instead
16:48:25 ok
16:48:39 bay drivers passed the gate
16:48:48 whoot
16:48:49 yes!
16:49:01 awesome
16:49:21 it's only swarm but it's progress
16:50:35 Drago: I have a question about https://review.openstack.org/#/c/333383/9/magnum/templates/kubernetes/fragments/api_gateway_switcher_kubemaster.yaml
16:51:27 Drago: It looks like we implemented a custom Heat resource for mapping the inputs to outputs
16:51:41 Drago: Would it be possible to contribute this resource to Heat?
16:52:32 That is, OS::Heat::ApiGatewaySwitcher instead of Magnum::ApiGatewaySwitcher
16:53:21 Hm, interesting idea
16:53:35 I will have to discuss it on #heat
16:53:49 Drago: thx
16:54:50 Anything else to discuss?
16:55:26 If not, I will close the meeting a little early
16:56:13 All, thanks for joining the meeting. The next meeting will be chaired by adrian_otto at the same time next week
16:56:13 fine with me
16:56:19 #endmeeting
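For context on the Magnum::ApiGatewaySwitcher question above: the pattern being discussed is a provider template mapped to a custom resource type through the environment's resource_registry, whose only job is to pass selected parameters through as outputs (which the parent stack then reads with get_attr). The following is a minimal sketch of that pattern, not the actual fragment from the patch; the parameter and output names are made up for illustration.

    # A hypothetical provider template registered in the environment as, e.g.:
    #   resource_registry:
    #     "Magnum::ApiGatewaySwitcher": fragments/api_gateway_switcher.yaml
    heat_template_version: 2014-10-16
    parameters:
      pool_address:
        type: string
        default: ""
      master_address:
        type: string
        default: ""
    outputs:
      address:
        # a real switcher would choose between the two inputs; this sketch just passes one through
        value: {get_param: master_address}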