16:00:05 #startmeeting containers
16:00:05 Meeting started Tue Mar 29 16:00:05 2016 UTC and is due to finish in 60 minutes. The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:10 The meeting name has been set to 'containers'
16:00:16 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-03-29_1600_UTC
16:00:24 #topic Roll Call
16:00:31 murali allada
16:00:33 Waldemar Znoinski
16:00:35 Ton Ngo
16:00:35 Corey O'Brien
16:00:41 o/
16:00:41 o/
16:00:44 Spyros Trigazis
16:00:49 o/
16:00:50 o/
16:01:01 Perry Rivera
16:01:06 o/
16:01:27 Adrian Otto
16:01:53 Thanks for joining the meeting muralia wznoinsk Tango coreyob Kennan juggler strigazi jay-lau-513 wangqun rpothier adrian_otto
16:02:07 Let's begin
16:02:10 #topic Announcements
16:02:28 may I?
16:02:31 1) An announcement from Adrian Otto
16:02:35 adrian_otto: pls
16:02:37 I'd like to announce that Magnum will have a new PTL for the Newton development cycle. You elected Hongbin for this role. We are transitioning my duties over now. If you have any concerns about Mitaka, you can ask me; everything else going forward will be Hongbin's responsibility. I plan to remain active as a reviewer and continue contributing to Magnum.
16:02:43 are there any questions about this?
16:02:54 o/
16:03:19 Thanks Adrian for leading Magnum for the last 3 cycles.
16:03:45 You're welcome. It has been my pleasure.
16:03:45 Thanks adrian_otto
16:04:07 Thank you Adrian for bringing us this far. Congrats Hongbin.
16:04:18 Thanks adrian_otto
16:04:34 Thanks adrian_otto
16:04:55 Thanks adrian_otto for your leadership and contribution to the project
16:05:31 #topic Review Action Items
16:05:38 None
16:05:46 #topic Essential Blueprint/Bug Review
16:06:00 I suggest skipping this one, since we are in FF (feature freeze)
16:06:29 #topic Plan for design summit topics
16:06:37 #link https://etherpad.openstack.org/p/magnum-newton-design-summit-topics Etherpad for collecting topics for the Magnum Newton design summit
16:07:13 We are using the etherpad to collect the topics we are going to discuss at the design summit
16:07:28 It would be great if you could put your topics there
16:08:02 We can wait one week for input and revisit the etherpad in the next meeting
16:08:29 Any concerns or questions about the design summit?
16:09:30 OK. Let's proceed to open discussion
16:09:33 #topic Open Discussion
16:09:41 1. Kuryr team proposed to extend its scope to storage
16:09:47 #link https://review.openstack.org/#/c/289993/ The proposed spec
16:09:54 Does it have any impact on the Magnum roadmap? What is our position?
16:10:59 I think it's a good direction, consistent with the networking effort
16:11:13 hongbin: is Kuryr moving away from just containers to COE support?
16:11:18 i did not see much about how Kuryr will manage storage
16:11:22 from the spec
16:11:37 eghobo: I am not sure right now
16:11:39 o/
16:11:39 I was a little surprised by the expanded mission, but if they want to tackle this, it should be fine
16:11:59 shall we wait for some details on this?
BTW: Each COE has its own storage manager, and most of them can integrate with different backends
16:12:13 jay-lau-513: i think they want to support plugins for Docker, Kub, etc
16:12:16 I think that Kuryr + Storage will not impact magnum much
16:12:19 eghobo: It looks like they want to support all the COEs, per my understanding
16:12:54 magnum can focus on the kuryr network part first
16:13:01 It's too early at this point, we should stay in touch to see how they approach the problem
16:13:37 Tango: +1
16:13:46 OK. It sounds like we are OK with their expanded scope
16:14:29 If they take a similar approach to networking, then in the long run it should simplify things for Magnum
16:14:38 can we have a shared session at the summit?
16:14:48 Tango: +1
16:14:52 Honestly, I think expanding the scope of a project like that is a huge mistake.
16:14:59 eghobo: I can talk to the Kuryr PTL about that
16:15:18 #action hongbin discuss with Kuryr PTL about the shared session idea
16:15:37 adrian_otto: could you elaborate?
16:15:49 any input? :-)
16:16:01 adrian_otto: +1, maybe they should join magnum instead ;)
16:16:01 Tango: seems they only use docker now, while other COEs would not just rely on docker
16:16:12 adrian_otto: It's surprising to me as well
16:16:55 Kennan: for networking?
16:17:21 Kennan: Kube will have its own networking implementation, CNI
16:17:28 o/
16:17:52 networking is a huge scope already, with all the complexities of matching the container world with the SDN world
16:17:54 right now, I did not find that kuryr fully supports kubernetes
16:18:13 adrian_otto: true
16:18:18 just to clarify, CNI is not a Kube spec
16:18:31 eghobo: right
16:18:48 What I can do for the Kuryr spec is:
16:19:01 1. Relay our inputs to the TC/Kuryr
16:19:11 2. Vote +1/-1 on their spec
16:19:28 my worry is that we rely on Kuryr as a pillar to succeed at integrating Magnum with Neutron. If their focus becomes fractured too wide, they lose effectiveness, and we both fail.
16:20:10 my advice would be to master one domain before moving into the next.
16:20:33 I heard the Kuryr implementation for libnetwork is complete, and they are looking to move on to CNI
16:20:33 unless the team is so big that it can afford to be split
16:21:00 adrian_otto: ack. I can forward your input to the Kuryr team
16:21:51 OK, any last comments before advancing to the next topic?
16:22:04 Tango: did you have a CNI reference for that? I did not think docker currently leads network design
16:22:25 Did Neutron ever get tiered port support?
16:22:30 That was a flaw with Kuryr
16:22:34 Kennan: I can follow up with you
16:23:07 thanks
16:23:16 2. Enhance Mesos bay to a DCOS bay (Jay Lau)
16:23:24 #link http://lists.openstack.org/pipermail/openstack-dev/2016-March/090437.html Discussion in ML
16:23:29 #link https://blueprints.launchpad.net/magnum/+spec/mesos-dcos The blueprint
16:24:03 jay-lau-513: Could you give a quick summary of your proposal?
16:24:37 as someone said, pretty sure the dcos cli is closed source and not redistributable
16:25:54 mvelten: we can use the cli, it's public (https://github.com/mesosphere/dcos-cli/blob/master/LICENSE), but we cannot use dcos
16:26:25 BTW, I never successfully installed the dcos-cli; it asked me to enter a verification code
16:26:32 which I don't know how to get
16:26:50 I was often testing on ubuntu
16:26:59 it works well
16:27:42 anyway, chronos for batch jobs would be nice
16:28:03 jay-lau-513_ : where do you want to install the cli?
16:28:05 the current mesos bay actually does not help much, as it simply installs a mesos + marathon cluster without any customized configuration
16:28:05 I have no problem with adding chronos to the mesos bay
16:28:17 it cannot leverage mesos' advanced features
16:28:54 eghobo: installing the dcos cli is not a must; perhaps we can update the documentation
16:29:06 mvelten, hongbin: we can ask marathon to run chronos (one json file for us)
16:29:28 eghobo: and point end users to the dcos cli to operate the mesos bay
16:29:38 eghobo: Then, we don't need to install chronos in Magnum?
16:30:17 hongbin: we can add it the same way as kube-ui
16:30:17 eghobo hongbin why not add chronos to the mesos bay?
16:30:38 How about using labels to install additional frameworks?
16:30:55 yep, good idea for labels
16:31:06 Tango +1
16:31:07 +1 for labels
16:31:18 labels seem to make it difficult for users
16:31:18 +1 for labels
16:31:25 I am thinking about adding kube and swarm as well
16:31:31 in mesos
16:31:37 jay-lau-513_ : we can, i just don't want any extra packages on the master
16:31:41 better with configuration files
16:31:42 +1 to tango
16:32:01 like magnum bay-create --coe-config-file
16:32:08 the file contains key/value configuration
16:32:11 jay-lau-513_: I think eghobo's suggestion is a good idea
16:32:13 to be a basic dcos, it should have both long-running and batch job management
16:32:23 and where possible we can leverage marathon to run the additional frameworks
16:32:29 mesos and chronos are basic to the mesos bay
16:32:33 I think mesos+marathon and mesos+chronos should be separate COE drivers, not one.
16:32:46 Kennan: this idea is good. I agree with it.
16:33:15 adrian_otto: If that happens, there will be too many bay types... Not sure if it is a good idea
16:33:19 I fear that putting too much into a single mesos bay type will make it bloated.
16:33:42 maybe marathon should be an option as well, so users can pick and choose?
16:33:42 what's wrong with having more bay types?
16:33:52 mesos is very flexible; I agree to make the mesos bay configurable, but it should have some basic components
16:34:11 adrian_otto: not sure about it, people usually have both (marathon and chronos), but you are right about aurora, for example
16:34:16 then you can't easily deploy one cluster for both batch jobs and services
16:34:29 eghobo: +1
16:34:30 adrian_otto there are some use cases where end users want to use mesos+marathon+chronos in a single cluster
16:34:46 It's probably not a good idea for Magnum to become a configuration management tool for COEs.
16:35:00 the point is to make the COE's bay work well with the IaaS.
16:35:20 in this case we should remove marathon too, I think :)
16:35:21 jay-lau-513: I see.
16:35:42 One of my use cases for Mesos does not include Marathon
16:36:31 mesos is a special bay here, as it can integrate with different frameworks: k8s, swarm, marathon, chronos, etc
16:36:47 what if drivers could be combined?
16:37:04 so you'd have a mesos base driver, and one or more framework drivers?
16:37:35 framework drivers is a good option
16:37:57 adrian_otto can you please explain more about drivers? :-)
16:38:15 note that we don't have to support every combination, but we could allow the most popular ones.
16:38:31 adrian_otto yes
16:38:46 +1
16:38:57 the problem is that if the Bay is too complex, the bay orchestration (heat) templates, the related parameter logic, and the resources needed to express them get much more complex.
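As an aside on the "marathon can run chronos (one json file for us)" point above: a minimal sketch of what that json file could look like, posted to Marathon's app-creation API. The Marathon endpoint, the zookeeper host, the resource sizes, and the Chronos startup flags are illustrative assumptions, not a tested deployment.

    # Sketch: ask a running Marathon to launch Chronos as a long-running app.
    # marathon.example.com, zk.example.com, and the cpu/mem sizes are placeholders.
    curl -X POST http://marathon.example.com:8080/v2/apps \
      -H 'Content-Type: application/json' \
      -d '{
            "id": "/chronos",
            "cpus": 1,
            "mem": 1024,
            "instances": 1,
            "container": {
              "type": "DOCKER",
              "docker": {"image": "mesosphere/chronos", "network": "HOST"}
            },
            "args": ["--zk_hosts", "zk.example.com:2181",
                     "--master", "zk://zk.example.com:2181/mesos"]
          }'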
16:39:00 but it is difficult to address the most popular ones, as the set of combinations is also growing
16:39:49 so if we can find a way to simplify each bay into an individual driver, with a sensible method of sharing code between them, then the resulting system should be simpler to maintain and operate.
16:40:34 Yes, I think that should be addressed by the bay driver proposal
16:41:41 except that you don't pool the cluster resources with this approach; you would have to run a separate Mesos cluster for each framework, and if one doesn't use its resources the other can't take them, as it can in a single mesos cluster
16:42:27 Yes, the point of mesos is the ability to run multiple frameworks in a single bay
16:42:41 you can make your own drivers as a cloud operator
16:42:51 so you could spend a little effort to combine a few.
16:43:22 when you have lots of combinations, supporting every possible one quickly becomes burdensome
16:43:23 mvelten, yes, but it depends; if every framework uses its own mesos cluster, then we cannot use the resource sharing policy in mesos
16:43:48 I will take a look at the driver proposal and see how we can leverage it
16:43:54 thanks adrian_otto
16:44:15 but if 90% of Mesos bay users use both chronos and marathon, then those should both be in the default driver.
16:44:44 my hunch is that adding more frameworks to a bay is something that could be handled by a post-install script, user contributed and guided by community documentation.
16:44:47 eghobo: how many users use both chronos and marathon?
16:45:08 jay-lau-513: that's what I meant. adrian_otto: fine for me, I am still not in the pluggable COE driver mode ;)
16:45:45 hongbin I think this is what mesosphere is doing now; a basic dcos cluster should have long-running service and batch job management
16:46:00 mvelten ok, i c
16:46:23 OK, I think we can discuss this topic further in the next meeting
16:46:35 Let's discuss other topics for now
16:46:40 agreed. batch jobs were missing in Kubernetes; they added a Job API in v1.1 because of demand
16:46:47 3. Enable Mesos Bay to export more slave flags (Jay Lau)
16:46:53 #link https://blueprints.launchpad.net/magnum/+spec/mesos-slave-flags The blueprint
16:46:59 sure, tango also raised a good question about mesos+swarm and mesos+kubernetes
16:47:13 ok, we can take a look at https://blueprints.launchpad.net/magnum/+spec/mesos-slave-flags
16:47:43 the reason I raised this bp is that mesos is now adding some exciting features, such as CNI support, docker volume driver support, etc
16:48:07 those features are managed by mesos agent flags
16:48:35 but the mesos bay does not let end users customize the agent flags, so end users cannot leverage those new features
16:49:03 so I propose that we expose more flags to end users so they can customize the mesos bay
16:49:04 I think adrian_otto raised a concern about supporting too many flags
16:49:31 jay-lau-513: possibly we cannot support all of them
16:49:39 but we need to consider how to enable end users to customize the mesos bay
16:49:48 jay-lau-513: maybe we can add support for some, if they are important
16:50:08 Yes, I listed some key parameters in the bp
16:50:25 about 7 parameters
16:50:31 It should be ok to add more options. We were only discussing how to organize them and present them to the users
16:50:56 not limiting options
16:50:58 Tango: +1
16:51:05 Tango yes
16:51:16 This might be a good application for labels on a BayModel resource.
16:51:34 We can create a file and add the configfile <> to the bay or baymodel.
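For the slave-flags discussion above, a hypothetical example of the labels-on-BayModel approach. The label names follow the direction of the mesos-slave-flags blueprint, but the specific names and values here are assumptions, not a committed interface.

    # Sketch: passing mesos agent flags through BayModel labels.
    # The label names and values below are assumptions based on the blueprint.
    magnum baymodel-create --name mesosmodel \
      --image-id ubuntu-mesos \
      --keypair-id testkey \
      --external-network-id public \
      --coe mesos \
      --labels mesos_slave_image_providers=docker,mesos_slave_work_dir=/var/lib/mesos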
16:51:41 adrian_otto yes, wangqun and kennan did some investigation on this
16:51:45 so you could allow an arbitrary set of known parameters through
16:52:02 why not just add an EXTRA_OPTIONS or something like that, appended at the end of the mesos command line in the systemd unit?
16:52:21 I think there are a lot of label parameters. It needs to be easy for users to use.
16:52:22 mvelten: yes, that's what I just said
16:52:50 so it seems we all agree to use labels
16:52:58 I think that's a sensible place to start
16:53:03 yep, sorry, it is going too fast :)
16:53:21 and if we learn that everyone is using a particular set of labels, then we can pull those into driver params
16:53:22 OK, let's rediscuss the mesos topic later
16:53:23 that's also the bp's target for now, cool
16:53:33 4. Support remote insecure private registry (Eli Qiao)
16:53:41 #link https://blueprints.launchpad.net/magnum/+spec/support-private-registry The blueprint
16:53:47 eliqiao_: you there?
16:53:57 hongbin I think for this bp we reached agreement to use labels
16:54:11 jay-lau-513: ack
16:54:50 jay-lau-513: For me, I need more time to look into each option you listed
16:55:47 jay-lau-513: hongbin: I think it needs more checking on how it impacts the heat template structure and interface
16:55:49 hongbin sure, I also appended some links there, you can take a look; mesos has a special feature: it does not depend on the docker daemon but can still let end users use docker containers
16:56:14 I looked into this. The option Eli is proposing we enable allows a secure docker to use a registry that has an unverifiable SSL cert, or that just uses HTTP transport.
16:56:32 so labels are just one option
16:56:40 that doesn't mean it is the best now
16:56:48 adrian_otto: Not clear to me
16:56:57 considering that "insecure" is in the name of the option, it's really easy to document this as "user beware".
16:57:25 Since Eli is not here, let's table this discussion
16:57:41 I will start a ML thread to discuss that
16:57:58 #link http://wanderingquandaries.blogspot.com/2014/11/setting-up-insecure-docker-registry.html Setting up an Insecure Docker Registry
16:58:04 it seems it is for private registry use
16:58:12 that may help explain the docker feature in question
16:58:30 while not using TLS for the private registry
16:58:34 We have 3 minutes left
16:59:34 5. Progress on Atomic images using diskimage-builder
16:59:38 This is the last one
17:00:07 time is up
17:00:35 All, thanks for joining the meeting. Let's wrap up. Overflow on the containers channel
17:00:39 #endmeeting
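As a footnote to the insecure-registry item (16:53:33): the docker option under discussion is the daemon's --insecure-registry flag, which tells docker to accept plain HTTP or an unverifiable certificate for a named registry. A sketch; the registry host is a placeholder, and the sysconfig layout assumes a Fedora-style node image.

    # Sketch: let the docker daemon talk to a registry over plain HTTP or with
    # a self-signed cert. myregistry.example.com:5000 is a placeholder.
    docker daemon --insecure-registry myregistry.example.com:5000

    # On Fedora-style images the same flag is typically set in /etc/sysconfig/docker:
    INSECURE_REGISTRY="--insecure-registry myregistry.example.com:5000"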