16:00:05 <hongbin> #startmeeting containers
16:00:05 <openstack> Meeting started Tue Mar 29 16:00:05 2016 UTC and is due to finish in 60 minutes.  The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:10 <openstack> The meeting name has been set to 'containers'
16:00:16 <hongbin> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-03-29_1600_UTC
16:00:24 <hongbin> #topic Roll Call
16:00:31 <muralia> murali allada
16:00:33 <wznoinsk> Waldemar Znoinski
16:00:35 <Tango> Ton Ngo
16:00:35 <coreyob> Corey O'Brien
16:00:41 <Kennan> o/
16:00:41 <juggler> o/
16:00:44 <strigazi> Spyros Trigazis
16:00:49 <jay-lau-513> o/
16:00:50 <wangqun> o/
16:01:01 <juggler> Perry Rivera
16:01:06 <rpothier> o/
16:01:27 <adrian_otto> Adrian Otto
16:01:53 <hongbin> Thanks for joining the meeting muralia wznoinsk Tango coreyob Kennan juggler strigazi jay-lau-513 wangqun rpothier adrian_otto
16:02:07 <hongbin> Let's begin
16:02:10 <hongbin> #topic Announcements
16:02:28 <adrian_otto> may I?
16:02:31 <hongbin> 1) An announcement from Adrian Otto
16:02:35 <hongbin> adrian_otto: pls
16:02:37 <adrian_otto> I'd like to announce that Magnum will have a new PTL for the Newton development cycle. You elected Hongbin for this role. We are transitioning over my duties now. If you have any concerns about Mitaka, you can ask me, and everything else going forward Hongbin will be responsible for. I plan to remain active as a reviewer and continue contributing to Magnum.
16:02:43 <adrian_otto> are there any questions about this?
16:02:54 <madhuri> o/
16:03:19 <Tango> Thanks Adrian for leading Magnum in the last 3 cycles.
16:03:45 <adrian_otto> You're welcome. It has been my pleasure.
16:03:45 <Kennan> Thanks adrian_otto
16:04:07 <muralia> Thank you Adrian for bringing us this far. Congrats Hongbin.
16:04:18 <wangqun> Thanks  adrian_otto
16:04:34 <juggler> Thanks adrian_otto
16:04:55 <hongbin> Thanks adrian_otto for your leadership and contribution to the project
16:05:31 <hongbin> #topic Review Action Items
16:05:38 <hongbin> None
16:05:46 <hongbin> #topic Essential Blueprint/Bug Review
16:06:00 <hongbin> I suggest we skip this one, since we are in FF (feature freeze)
16:06:29 <hongbin> #topic Plan for design summit topics
16:06:37 <hongbin> #link https://etherpad.openstack.org/p/magnum-newton-design-summit-topics Etherpad collaborating topics for Magnum Newton design summit
16:07:13 <hongbin> We are using the etherpad to collect the topics we are going to discuss at the design summit
16:07:28 <hongbin> It will be great if you can put your topics there
16:08:02 <hongbin> We can wait for one week for inputs and revisit the etherpad in the next meeting
16:08:29 <hongbin> Any concern or question for the design summit?
16:09:30 <hongbin> OK. Let's proceed to open discussion
16:09:33 <hongbin> #topic Open Discussion
16:09:41 <hongbin> 1. Kuryr team proposed to extend its scope to storage
16:09:47 <hongbin> #link https://review.openstack.org/#/c/289993/ The proposed spec
16:09:54 <hongbin> Does it have any impact on the Magnum roadmap? What is our position?
16:10:59 <Tango> I think it's a good direction, consistent with the networking effort
16:11:13 <eghobo> hongbin: does this mean Kuryr is moving away from just containers to COE support?
16:11:18 <jay-lau-513> i did not see much about how Kuryr will manage storage
16:11:22 <jay-lau-513> from the spec
16:11:37 <hongbin> eghobo: I am not sure right now
16:11:39 <thomasem> o/
16:11:39 <Tango> I was a little surprised about the expanded mission, but if they want to tackle this, it should be fine
16:11:59 <jay-lau-513> shall we wait for more detail on this? BTW: each COE has its own storage manager and most of them can integrate with different backends
16:12:13 <eghobo> jay-lau-513: i think they want to support plugins for Docker, Kube, etc
16:12:16 <jay-lau-513> I think that Kuryr + storage will not impact magnum much
16:12:19 <hongbin> eghobo: It looks like they want to support all the COEs, per my understanding
16:12:54 <jay-lau-513> magnum can focus on kuryr network part first
16:13:01 <Tango> It's too early at this point, we should stay in touch to see how they approach the problem
16:13:37 <eghobo> Tango: +1
16:13:46 <hongbin> OK. It sounds like we are OK with their expanded scope
16:14:29 <Tango> If they take a similar approach to networking, then in the long run, it should simplify things for Magnum
16:14:38 <eghobo> can we have shared session at summit?
16:14:48 <hongbin> Tango: +1
16:14:52 <adrian_otto> Honestly, I think expanding the scope of a project like that is a huge mistake.
16:14:59 <hongbin> eghobo: I can talk to the Kuryr PTL about that
16:15:18 <hongbin> #action hongbin discuss with Kuryr PTL about the shared session idea
16:15:37 <hongbin> adrian_otto: could you elaborate?
16:15:49 <jay-lau-513> any input? :-)
16:16:01 <eghobo> adrian_otto: +1, maybe they should join magnum instead ;)
16:16:01 <Kennan> Tango: seems they only use docker now, while other COEs would not just rely on docker
16:16:12 <Tango> adrian_otto: It's surprising to me as well
16:16:55 <Tango> Kennan: for networking?
16:17:21 <Tango> Kennan: Kube will have its own networking implementation, CNI
16:17:28 <askb> 0/
16:17:52 <adrian_otto> networking is a huge scope already, with all the complexities of matching the container world with the SDN world
16:17:54 <Kennan> right now, I did not find that kuryr fully supports kubernetes
16:18:13 <hongbin> adrian_otto: true
16:18:18 <eghobo> just to clarify, CNI is not a Kube spec
16:18:31 <Tango> eghobo: right
16:18:48 <hongbin> What I can do for the Kuryr spec is:
16:19:01 <hongbin> 1. Relay our inputs to the TC/Kuryr
16:19:11 <hongbin> 2. Vote +1/-1 on their spec
16:19:28 <adrian_otto> my worry is that we rely on Kuryr as a pillar to succeed at integrating Magnum with Neutron. If their focus becomes fractured too wide, they lose effectiveness, and we both fail.
16:20:10 <adrian_otto> my advice would be to master one domain before moving into the next.
16:20:33 <Tango> I heard Kuryr implementation for libnetwork is completed, they are looking to move on to CNI
16:20:33 <adrian_otto> unless the team is so big that it can afford to be split
16:21:00 <hongbin> adrian_otto: ack. I can forward your inputs to the Kuryr team
16:21:51 <hongbin> OK, any last comments before advancing topic?
16:22:04 <Kennan> Tango: do you have a CNI reference for that? I did not think docker leads network design right now
16:22:25 <thomasem> Did Neutron ever get tiered port support?
16:22:30 <thomasem> That was a flaw with Kuryr
16:22:34 <Tango> Kennan: I can follow up with you
16:23:07 <Kennan> thanks
16:23:16 <hongbin> 2. Enhance Mesos bay to a DCOS bay (Jay Lau)
16:23:24 <hongbin> #link http://lists.openstack.org/pipermail/openstack-dev/2016-March/090437.html Discussion in ML
16:23:29 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/mesos-dcos The blueprint
16:24:03 <hongbin> jay-lau-513: Could you give a quick summary of your proposal?
16:24:37 <mvelten> as someone said, I'm pretty sure the dcos cli is closed source and not redistributable
16:25:54 <eghobo> mvelten: we can use the cli, it's public https://github.com/mesosphere/dcos-cli/blob/master/LICENSE, but we cannot use dcos
16:26:25 <hongbin> BTW, I never successfully installed the dcos-cli, it asked me to enter a verification code
16:26:32 <hongbin> which I don't know how to get
16:26:50 <jay-lau-513_> I was often testing on ubuntu
16:26:59 <jay-lau-513_> it works well
16:27:42 <mvelten> anyway chronos for batch jobs would be nice
16:28:03 <eghobo> jay-lau-513_ : where do you want to install the cli?
16:28:05 <jay-lau-513_> the current mesos bay actually does not help much as it simply installs a mesos + marathon cluster without any customized configuration
16:28:05 <hongbin> I have no problem for adding chronos to the mesos bay
16:28:17 <jay-lau-513_> it cannot leverage mesos advanced features
16:28:54 <jay-lau-513_> eghobo installing the dcos cli is not a must, perhaps we can update the documentation
16:29:06 <eghobo> mvelten. hongbin: we can ask marathon to run chronos (one json file for us)
16:29:28 <jay-lau-513_> eghobo and point end users to the dcos cli to operate the mesos bay
16:29:38 <hongbin> eghobo: Then, we don't need to install chronos in Magnum?
16:30:17 <eghobo> hongbin: we can add it the same way as kube-ui
16:30:17 <jay-lau-513_> eghobo  hongbin why not add chronos to mesos bay?
16:30:38 <Tango> How about using labels to install additional frameworks?
16:30:55 <mvelten> yep good idea for labels
16:31:06 <wangqun> Tango +1
16:31:07 <hongbin> +1 for labels
16:31:18 <Kennan> labels seem to make it difficult for users
16:31:18 <askb> +1 for labels
16:31:25 <Tango> I am thinking about adding kube and swarm as well
16:31:31 <Tango> in mesos
16:31:37 <eghobo> jay-lau-513_ : we can, i just don't want any extra packages at master
16:31:41 <Kennan> better with config files
16:31:42 <jay-lau-513_> +1 to tango
16:32:01 <Kennan> like magnum bay-create --coe-config-file <filename>
16:32:08 <Kennan> the filename contains key/value configuration
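For illustration only: the --coe-config-file option Kennan sketches here is a proposal, not an existing magnum CLI flag, and the keys below are made up to show the shape of such a key/value file.

    # hypothetical usage of the proposed option: magnum bay-create --coe-config-file mesos-bay.conf
    # key names and values are illustrative, not part of any agreed interface
    cat > mesos-bay.conf <<'EOF'
    install_marathon=true
    install_chronos=true
    mesos_slave_isolation=cgroups/cpu
    EOF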
16:32:11 <hongbin> jay-lau-513_: I think eghobo suggestion is a good idea
16:32:13 <jay-lau-513_> to be a basic dcos, it should have both long running and batch job management
16:32:23 <mvelten> and when possible we can leverage marathon to run the additional frameworks
16:32:29 <jay-lau-513_> marathon and chronos are basic to the mesos bay
16:32:33 <adrian_otto> I think mesos+marathon and mesos+chronos should be separate COE drivers, not one.
16:32:46 <wangqun> Kennan: this idea is good. I agree with this one more.
16:33:15 <hongbin> adrian_otto: If that happens, there will be too many bay types... Not sure if it is a good idea
16:33:19 <adrian_otto> I fear that putting too much into a single mesos bay type will make it bloated.
16:33:42 <Tango> maybe marathon should be an option as well so users can pick and choose?
16:33:42 <adrian_otto> what's wrong with having more bay types?
16:33:52 <jay-lau-513_> mesos is very flexible, agree to make the mesos bay configurable, but it should have some basic components
16:34:11 <eghobo> adrian_otto: not sure about it, people usually have both (marathon and chronos), but you are right about aurora for example
16:34:16 <mvelten> then you can't easily deploy one cluster for both batch jobs and services
16:34:29 <mvelten> eghobo: +1
16:34:30 <jay-lau-513_> adrian_otto there are some use cases where many end users want to use mesos+marathon+chronos in a single cluster
16:34:46 <adrian_otto> It's probably not a good idea for Magnum to become a configuration management tool for COEs.
16:35:00 <adrian_otto> the point is to make the COE's bay work well with the IaaS.
16:35:20 <mvelten> in this case we should remove marathon too I think :)
16:35:21 <adrian_otto> jay-lau-513: I see.
16:35:42 <Tango> One of my use cases for Mesos does not include Marathon
16:36:31 <jay-lau-513> mesos is a special bay here, as it can integrate with different frameworks: k8s, swarm, marathon, chronos etc
16:36:47 <adrian_otto> what if drivers could be combined?
16:37:04 <adrian_otto> so you'd have a mesos base driver, and one or more framework drivers?
16:37:35 <Kennan> framework drivers is good one option
16:37:57 <jay-lau-513> adrian_otto can you please explain more about the drivers? :-)
16:38:15 <adrian_otto> note that we don't have to support every combination, but we could allow the most popular ones.
16:38:31 <jay-lau-513> adrian_otto yes
16:38:46 <muralia> +1
16:38:57 <adrian_otto> the problem is that if the Bay is too complex, the bay orchestration (heat) templates, the related parameter logic, and the resources needed to express them get much more complex.
16:39:00 <jay-lau-513> but it is difficult to address the most popular ones, as some combinations are also growing
16:39:49 <adrian_otto> so if we can find a way to simplify each bay into an individual driver, with a sensible method of sharing code between them, then the resulting system should be simpler to maintain and operate.
16:40:34 <hongbin> Yes, I think that should be addressed by the bay driver proposal
16:41:41 <mvelten> except that you don't pool the cluster resources with this approach; you will have to run one separate Mesos cluster for each framework: if one doesn't use its resources the other can't take them like in a single mesos
16:42:27 <hongbin> Yes, the point of mesos is the ability to run multiple frameworks in a single bay
16:42:41 <adrian_otto> you can make your own drivers as a cloud operator
16:42:51 <adrian_otto> so you could spend a little effort to combine a few.
16:43:22 <adrian_otto> when you have lots of combinations, supporting every possible one quickly becomes burdensome
16:43:23 <jay-lau-513> mvelten, yes, but it depends; if every framework uses its own mesos cluster, then we cannot use the resource sharing policy in mesos
16:43:48 <jay-lau-513> I will take a look at the driver proposal and to see how we can leverage it
16:43:54 <jay-lau-513> thanks adrian_otto
16:44:15 <adrian_otto> but if 90% of Mesos bay users use both chronos and marathon, then those should both be in the default driver.
16:44:44 <adrian_otto> my hunch is that adding more frameworks to a bay is something that could be added to a post-install script that could be user contributed and guided by community documentation.
16:44:47 <hongbin> eghobo: how many users use both chronos and marathon?
16:45:08 <mvelten> jay-lau-513: that's what I meant. adrian_otto: fine for me, I am still not in the pluggable COE driver mode ;)
16:45:45 <jay-lau-513> hongbin I think this is what mesosphere is doing now, a basic dcos cluster should have long-running service and batch job management
16:46:00 <jay-lau-513> mvelten ok, i c
16:46:23 <hongbin> OK, I think we can discuss this topic further in the next meeting
16:46:35 <hongbin> Let's discuss other topics for now
16:46:40 <mvelten> agreed. batch jobs were missing in Kubernetes, they added a Job API in v1.1 because of demand
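As a rough sketch of eghobo's earlier suggestion (16:29) to have Marathon launch Chronos rather than baking it into the bay: Chronos can be started by POSTing an app definition to Marathon's /v2/apps REST endpoint. The image, resource sizes, endpoint address, and the omitted Chronos flags (ZooKeeper and Mesos master URLs) are illustrative assumptions, not anything decided in the meeting.

    # sketch: run Chronos as a Marathon app ("one json file for us"); values are placeholders
    cat > chronos.json <<'EOF'
    {
      "id": "/chronos",
      "cpus": 0.5,
      "mem": 512,
      "instances": 1,
      "container": {
        "type": "DOCKER",
        "docker": { "image": "mesosphere/chronos", "network": "HOST" }
      }
    }
    EOF
    # <marathon-master> is the bay's Marathon endpoint (placeholder)
    curl -X POST -H "Content-Type: application/json" \
         -d @chronos.json http://<marathon-master>:8080/v2/apps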
16:46:47 <hongbin> 3. Enable Mesos Bay export more slave flags (Jay Lau)
16:46:53 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/mesos-slave-flags The blueprint
16:46:59 <jay-lau-513> sure, tango also raised a good question about mesos+swarm and mesos+kubernetes
16:47:13 <jay-lau-513> ok, we can take a look at https://blueprints.launchpad.net/magnum/+spec/mesos-slave-flags
16:47:43 <jay-lau-513> the reason I raised this bp is that mesos is now adding some exciting features, such as CNI support, docker volume driver support, etc
16:48:07 <jay-lau-513> those features are managed by mesos agent flags
16:48:35 <jay-lau-513> but the mesos bay does not let end users customize the agent flags, so end users cannot leverage those new features
16:49:03 <jay-lau-513> so I propose that we expose more flags to end users so they can customize the mesos bay
16:49:04 <hongbin> I think adrian_otto raised a concern about supporting too many flags
16:49:31 <hongbin> jay-lau-513: possibly we cannot support all of them
16:49:39 <jay-lau-513> but we need to consider how to enable end user to customize the mesos bay
16:49:48 <hongbin> jay-lau-513: maybe we can add support for some if it is important
16:50:08 <jay-lau-513> Yes, I listed some key parameters in bp
16:50:25 <jay-lau-513> about 7 parameters
16:50:31 <Tango> It should be ok to add more options. We were only discussing how to organize and present them to the users
16:50:56 <Tango> not limiting options
16:50:58 <wangqun> Tango: +1
16:51:05 <jay-lau-513> Tango yes
16:51:16 <adrian_otto> This might be a good application for labels on a BayModel resource.
16:51:34 <wangqun> We can create a file and add the config file <> to the bay or baymodel.
16:51:41 <jay-lau-513> adrian_otto yes, wangqun and kennan did some investigation for this
16:51:45 <adrian_otto> so you could allow an arbitrary set of known parameters through
16:52:02 <mvelten> why not just add an EXTRA_OPTIONS or something like that, which would be appended to the end of the mesos command line in the systemd unit?
16:52:21 <wangqun> I think there are so many label parameters. A config file would be easier for users to use.
16:52:22 <adrian_otto> mvelten: yes, that's what I just said
16:52:50 <jay-lau-513> so seems we all agree to use labels
16:52:58 <adrian_otto> I think that's a sensible place to start
16:53:03 <mvelten> yep sorry, it is going too fast :)
16:53:21 <adrian_otto> and if we learn that everyone is using a particular set of labels, then we can pull those into driver params
16:53:22 <hongbin> OK, let's rediscuss the mesos topic later
16:53:23 <jay-lau-513> that's also the bp's target for now, cool
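A sketch of the labels approach agreed on above, using the --labels option on baymodel-create; the specific label names and values are illustrative, since the roughly 7 parameters live in the blueprint and were not spelled out in the meeting.

    # illustrative only: label names below are examples, not the blueprint's final list,
    # and the image/keypair/network IDs are placeholders
    magnum baymodel-create --name mesosbaymodel \
      --image-id ubuntu-mesos \
      --keypair-id testkey \
      --external-network-id public \
      --coe mesos \
      --labels mesos_slave_isolation=cgroups/cpu,mesos_slave_work_dir=/var/lib/mesos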
16:53:33 <hongbin> 4. Support remote insecure private registry (Eli Qiao)
16:53:41 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/support-private-registry The blueprint
16:53:47 <hongbin> eliqiao_: you there?
16:53:57 <jay-lau-513> hongbin I think for this bp we reached agreement to use labels
16:54:11 <hongbin> jay-lau-513: ack
16:54:50 <hongbin> jay-lau-513: For me, I need more time to look into each option you listed
16:55:47 <Kennan> jay-lau-513: hongbin: I think it needs more checking on how it impacts the heat templates' structure and interface
16:55:49 <jay-lau-513> hongbin sure, I also appended some links there, you can take a look; mesos has a special feature: it does not depend on the docker daemon but can still let end users use docker containers
16:56:14 <adrian_otto> I looked into this. The option Eli is proposing we enable allows a secure docker to use a registry that has an unverifiable SSL cert, or just uses HTTP transport.
16:56:32 <Kennan> so labels is just one option
16:56:40 <Kennan> that does not mean it is the best now
16:56:48 <hongbin> adrian_otto: Not clear to me
16:56:57 <adrian_otto> considering that insecure is in the name of the option, it's really easy to document this as "user beware".
16:57:25 <hongbin> Since Eli is not here, let's table this discussion
16:57:41 <hongbin> I will start a ML to discuss that
16:57:58 <adrian_otto> #link http://wanderingquandaries.blogspot.com/2014/11/setting-up-insecure-docker-registry.html Setting up an Insecure Docker Registry
16:58:04 <Kennan> it seems it is for private registry use
16:58:12 <adrian_otto> that may help explain the docker feature in question
16:58:30 <Kennan> while not using TLS for the private registry
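For reference on the docker feature being discussed: the docker daemon's --insecure-registry option is what allows pulls from a registry served over plain HTTP or with an unverifiable certificate. A minimal sketch for a Fedora Atomic bay node follows; the registry address is a placeholder, and if OPTIONS is already set in the file the flag should be merged into it rather than appended.

    # example only: permit an HTTP / self-signed private registry on this node
    # 192.168.1.100:5000 is a placeholder address
    echo 'OPTIONS="--insecure-registry 192.168.1.100:5000"' | sudo tee -a /etc/sysconfig/docker
    sudo systemctl restart docker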
16:58:34 <hongbin> We have 3 minutes left
16:59:34 <hongbin> 5. Progress on Atomic images using diskimage-builder
16:59:38 <hongbin> This is the last one
17:00:07 <adrian_otto> time up
17:00:35 <hongbin> All, thanks for joining the meeting. Let's wrap up. Overflow on the container channel
17:00:39 <hongbin> #endmeeting