16:00:08 <hongbin> #startmeeting containers
16:00:09 <openstack> Meeting started Tue Apr  5 16:00:08 2016 UTC and is due to finish in 60 minutes.  The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:12 <openstack> The meeting name has been set to 'containers'
16:00:16 <hongbin> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-04-05_1600_UTC Today's agenda
16:00:21 <hongbin> #topic Roll Call
16:00:22 <adrian_otto> Adrian Otto
16:00:27 <madhuri> o/
16:00:31 <yolanda> o/
16:00:32 <strigazi> Spyros Trigazis
16:00:36 <Tango> Ton Ngo
16:00:38 <dane_leblanc> o/
16:00:39 <rpothier> o/
16:00:41 <juggler> o/
16:00:47 <rebase> o/
16:01:34 <yamamoto> hi
16:01:49 <hongbin> Thanks for joining the meeting adrian_otto madhuri yolanda strigazi Tango dane_leblanc rpothier juggler rebase yamamoto
16:02:02 <hongbin> Let's begin
16:02:04 <hongbin> #topic Announcements
16:02:11 <hongbin> #link https://github.com/openstack/python-k8sclient A new repo for k8s client
16:02:28 <hongbin> Thanks dims for splitting the k8sclient out of the Magnum tree
16:02:53 <madhuri> Thanks dims
16:02:57 <hongbin> Magnum had the k8sclient in tree before
16:03:17 <hongbin> It will be pulled out of tree in the future
16:03:24 <dims> yay
16:03:27 <hongbin> Any question for that?
16:03:29 <madhuri> Who will be managing it?
16:03:53 <dims> madhuri : hongbin : the entire magnum core can do what it wants in the new repo
16:04:06 <dims> https://review.openstack.org/#/admin/groups/1348,members
16:04:20 <dims> i can help with making releases
16:04:26 <muralia_> o/
16:04:54 <hongbin> Thanks dims
16:05:03 <hongbin> #topic Review Action Items
16:05:10 <hongbin> hongbin discuss with Kuryr PTL about the shared session idea (done)
16:05:18 <hongbin> A shared session is scheduled for Thursday 11:50 - 12:30 for now (using the original Magnum fishbowl slot)
16:05:49 <hongbin> I expect we are going to discuss the networking solution with the Kuryr team at that session
16:06:13 <hongbin> To confirm, is anyone not able to join the session?
16:06:31 <adrian_otto> what timezone?
16:06:38 <adrian_otto> central
16:06:43 <hongbin> adrian_otto: local timezone
16:06:46 <adrian_otto> one sec
16:06:52 <hongbin> adrian_otto: In Austin
16:07:13 <hongbin> I hope the schedule fits everyone. If not, please let me know
16:07:55 <adrian_otto> works for me.
16:08:02 <hongbin> adrian_otto: great
16:08:06 <hongbin> #topic Plan for design summit topics
16:08:12 <hongbin> #link https://etherpad.openstack.org/p/magnum-newton-design-summit-topics Etherpad collaborating topics for Magnum Newton design summit
16:08:30 <hongbin> We have 15 topics proposed
16:09:08 <hongbin> We need to choose 10 out of them
16:09:33 <hongbin> What I am going to do is to ask everyone to vote on their favorite topic
16:09:43 <hongbin> And I will select based on the feedback
16:09:56 <hongbin> Any comment for that?
16:10:07 <Tango> Vote on the etherpad?
16:10:16 <hongbin> Tango: Yes (e.g. +1)
16:10:58 <hongbin> To clarify, you can vote for multiple topics if you like
16:11:12 <hongbin> I will revisit the etherpad next week
16:11:21 <hongbin> #topic Essential Blueprints Review
16:11:27 <hongbin> SKIP until we identify a list of essential blueprints for Newton
16:11:36 <hongbin> #link https://blueprints.launchpad.net/magnum/newton List of blueprints for Newton so far
16:12:03 <hongbin> Any comments on this?
16:12:32 <hongbin> #topic Other blueprints/Bugs/Reviews/Ideas
16:12:39 <hongbin> 1. Enhance Mesos bay to a DCOS bay (Jay Lau)
16:12:46 <hongbin> #link http://lists.openstack.org/pipermail/openstack-dev/2016-March/090437.html Discussion in ML
16:12:52 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/mesos-dcos The blueprint
16:13:10 <hongbin> This topic has been discussed in the last meeting, and we are in disagreement
16:13:46 <hongbin> To summarize, Jay requests to add the Chronos framework to our Mesos bay
16:14:06 <adrian_otto> dcos is not open source software. Has the issue with redistribution been addressed?
16:14:07 <askb> o/
16:14:07 <hongbin> Chronos is a mesos framework for batch processing
16:14:26 <hongbin> adrian_otto: I am not sure about that
16:14:43 <adrian_otto> I don't think we should even consider it unless we are sure there are not licensing constraints for this.
16:15:12 <hongbin> adrian_otto: But Chronos is fully open source (I think)
16:15:30 <Tango> 2 different issues?
16:15:32 <adrian_otto> this is something we can ask for assistance from the OpenStack BoD if we are uncertain about the legal implications of the decision.
16:16:03 <adrian_otto> all software we redistribute must be compatible with the Apache 2 license
16:16:27 <hongbin> adrian_otto: I can double-check that
16:16:27 <adrian_otto> otherwise the user will need to download and license it separately.
16:16:56 <hongbin> #action hongbin confirm with OpenStack BoD about the legal implications of Chronos
16:17:12 <adrian_otto> s/Chronos/dcos/
16:17:18 <Tango> If third parties want to add support for their licensed software, what's the process?
16:18:11 <adrian_otto> #link https://github.com/mesos/chronos/blob/master/LICENSE Chronos Apache 2 License
16:18:31 <adrian_otto> Tango: they contribute an open source driver to OpenStack
16:19:01 <adrian_otto> and they allow the cloud operator to acquire and license the software/equipment that driver interfaces with.
16:19:19 <adrian_otto> but OpenStack does not ship with any proprietary software.
16:19:35 <Tango> adrian_otto: Should DCOS follow this process?
16:20:06 <adrian_otto> If Magnum users want it, then yes. Are we convinced there is a strong enough demand signal for that?
16:20:42 <adrian_otto> Chronos looks fine as long as we use the software posted at https://github.com/mesos/chronos
16:21:04 <hongbin> Yes, I think we are just considering Chronos at the current stage
16:21:14 <Tango> Jay should be able to provide an answer for DCOS support.
16:21:15 <hongbin> DCOS could be considered later if needed
16:21:28 <jaypipes> I can indeed.
16:21:37 <jaypipes> :P
16:21:45 <adrian_otto> hi jaypipes!
16:21:52 <jaypipes> :) hi!
16:22:36 <hongbin> Besides the license issue, we also disagreed on how to add Chronos to the Mesos bay
16:22:55 <hongbin> adrian_otto proposed to have a Chronos bay in parallel with the Marathon bay
16:23:02 <hongbin> adrian_otto: Is that correct?
16:23:56 <hongbin> Jay Lau and I think Chronos and Marathon should be in the same bay
16:24:15 <hongbin> That is, there would be a single Mesos bay that has both Marathon and Chronos
16:24:46 <hongbin> Let's debate this further if you like
16:24:59 <adrian_otto> I think they belong in different drivers
16:25:06 <adrian_otto> and each can share common code
16:25:32 <Tango> We should be able to mix & match
16:25:35 <hongbin> adrian_otto: I think the point is how to let them share the bay
16:25:54 <Tango> +1
16:26:10 <hongbin> adrian_otto: For example, Marathon & Chronos share the computing resources of the same bay
16:26:13 <Tango> Mesos is unique in that it can host multiple frameworks
16:26:20 <hongbin> adrian_otto: That is the key feature of mesos
16:26:26 <adrian_otto> yes, I understand that. We could have a post-setup hook that you could use to initialize a secondary driver.
16:26:53 <adrian_otto> but I feel strongly that we should not try to have an "everything you can possibly do with mesos" driver
16:27:16 <hongbin> Then, we are in disagreement again
16:27:39 <hongbin> How about others. What are your opinions?
16:28:04 <adrian_otto> the trouble with this approach is that it's a slippery slope. Let's say we have Marathon and Chronos… then it follows logically that the next framework and the next should all pile on there as well.
16:28:23 <Tango> I am not sure what the difference is with adding a secondary driver via a post-setup hook. The result is the same
16:28:27 <adrian_otto> Marathon + Chronos + ?? + ?? + ??
16:28:43 <adrian_otto> yes, the freedom for the user is the same
16:28:59 <madhuri> We should have support for both. It depends on the user which framework they want
16:29:03 <adrian_otto> but we don't have an expectation to support every combination of every possible framework, and making them all work across upgrades.
16:29:17 <adrian_otto> that's a recipe for disaster.
16:29:34 <hongbin> adrian_otto: Agreed with you that we should not support every framework
16:29:44 <hongbin> adrian_otto: But the case of marathon + chronos is different
16:29:59 <hongbin> adrian_otto: Most of the time, they are paired together
16:30:01 <Tango> From what we are seeing, Mesos' key benefit is to allow many frameworks to share resources.
16:30:25 <adrian_otto> so because we know there are a variety of these, we should have a way to support the prevailing ones individually, and provide an option to use them in combination that does not bind us to do full integration testing of each combination.
16:31:04 <hongbin> OK. Let's push the discussion into ML
16:31:08 <Tango> I guess the concern is about our responsibility for testing all combinations.
16:31:13 <adrian_otto> I'm trying to make a point that we need to draw a line somewhere.
16:31:36 <hongbin> #action hongbin start an ML thread to discuss adding Chronos to the Mesos bay
16:31:50 <hongbin> 2. Support bays with no floating IPs
16:32:00 <hongbin> #link http://lists.openstack.org/pipermail/openstack-dev/2016-March/091063.html Discussion in ML
16:32:04 <juggler> adrian_otto all valid points, thanks
16:32:31 <adrian_otto> this idea seems fine to me.
16:33:06 <hongbin> Yes. Fine to me as well, aside from some technical details that I am not clear about
16:33:38 <hongbin> If we remove the floating IPs, how does the bay connect to the outside?
16:34:20 <Tango> The nodes access the external internet via the router, not the floating IP
16:34:33 <hongbin> Tango: I see
16:34:44 <Tango> Floating IP is mainly to allow access to the nodes from external network
16:34:46 <adrian_otto> if each bay node has a public ip, then they access external hosts the same way any nova instance would.
16:35:34 <Tango> If there is no need for users to log into the node, then we don't need floating IP
16:36:17 <adrian_otto> we would only need the COE API node to be externally accessible.
16:37:01 <Tango> Currently this is done via the load balancer, so we can even skip the floating IP for the master nodes
16:37:06 <adrian_otto> but depending on what you start on each bay node, you may have an expectation that you can reach those from outside the bay
16:37:06 <hongbin> True, if users don't need port forwarding
16:37:49 <adrian_otto> Carina users, for example can access the individual bay nodes directly from external hosts.
16:38:26 <adrian_otto> This simplifies the use of the native COE API (it shows what port mappings it knows about internally. They do not need to be translated into other addresses.)
16:38:30 <hongbin> Tango: Egor pointed out there is a k8s feature that needs a floating IP on the master node (called NodePort)
16:39:00 <Tango> I think there are use cases for both with and without floating IP, so providing an option to choose is probably the best.
16:39:13 <adrian_otto> +1
16:39:22 <hongbin> Yes, sounds like we agreed on adding an option to remove the need for floating IPs
16:39:32 <hongbin> Any opposing point of view?
16:40:10 <hongbin> #agreed Magnum will support an option to remove the need for floating IPs
16:40:19 <hongbin> I will draft a BP for that
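For illustration only, a minimal Heat (HOT) sketch of the networking layout discussed above: bay nodes on a private subnet reach external networks through the router's gateway (SNAT), and a floating IP resource is only needed when a node must be reachable from outside. Resource names and values here are illustrative assumptions, not Magnum's actual templates:

```yaml
heat_template_version: 2015-10-15

parameters:
  external_network:
    type: string
    description: Name or ID of the external (public) network

resources:
  private_net:
    type: OS::Neutron::Net

  private_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: private_net }
      cidr: 10.0.0.0/24

  # Outbound connectivity: the router's external gateway provides SNAT,
  # so bay nodes can reach external hosts without any floating IP.
  router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: { get_param: external_network }

  router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: router }
      subnet: { get_resource: private_subnet }

  # Inbound connectivity (optional): only needed if a node must be reachable
  # from outside, e.g. the COE API endpoint or NodePort-style services.
  # api_floating_ip:
  #   type: OS::Neutron::FloatingIP
  #   properties:
  #     floating_network: { get_param: external_network }
```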
16:40:57 <hongbin> OK. Let's discuss the topic in backlog since we have time
16:41:02 <hongbin> Discussion of certificate storage
16:41:12 <hongbin> #link https://etherpad.openstack.org/p/magnum-barbican-alternative Etherpad summarizing the concern and a variety of options to address it
16:41:37 <hongbin> We have debated it in the ML heavily
16:41:51 <hongbin> We can discuss it further right now, or leave it to design summit
16:42:50 <hongbin> Here is the problem: Magnum currently relies on Barbican to store TLS certificates
16:43:13 <hongbin> That requires operators to install Barbican in order to get Magnum working
16:43:35 <hongbin> However, operators don't like that
16:44:26 <hongbin> So, it is better to ship Magnum independently (without a hard requirement on Barbican)
16:44:48 <hongbin> We listed several options to remove the hard dependency
16:44:57 <hongbin> The details are in the etherpad
16:45:20 <hongbin> Any comments?
16:46:13 <hongbin> ...........
16:46:40 <adrian_otto> the use of a simple solution (keystone) at first does not preclude us from using a more sophisticated approach later (archer)
16:47:00 <adrian_otto> s/archer/anchor/ I hate autocorrect!
16:47:12 <Tango> I can attest to the fact that arguing for Magnum adoption is already a challenge, asking for additional services just makes it harder.
16:47:46 <hongbin> I prefer the keystone approach as well
16:48:01 <hongbin> Tango: +1
16:48:05 <madhuri> +1 for keystone
16:48:51 <hongbin> Could we all agree that we start with the Keystone approach?
16:49:03 <adrian_otto> +1 because I proposed that to begin with
16:49:08 <madhuri> +1
16:49:10 <muralia_> +1
16:49:17 <juggler> +1
16:49:24 <Tango> It seems operators would have Keystone as one of the base components, so at least using Keystone at the beginning helps adoption.
16:49:36 <Tango> +1 on Keystone
16:49:37 <adrian_otto> assuming we do agree, we do this knowing that the keystone credential store is not secure, and we should treat it as such.
16:49:50 <hongbin> #agreed Magnum will leverage Keystone as an alternative authentication mechanism for k8s
16:49:57 <askb> +1 on keystone
16:50:10 <adrian_otto> so as long as we are diligent about encrypting what we place in there, that's a suitable compromise in my view.
16:50:24 <juggler> adrian_otto: is a warning indicating that offered to the operator?
16:50:37 <adrian_otto> yes, that's in the BP now
16:50:43 <adrian_otto> Barbican will be the default
16:50:50 <juggler> cool
16:50:57 <adrian_otto> and when you change it to keystone, it's to be accompanied by a warning
16:51:01 <hongbin> We should have a clear document about the security implications of each option
16:51:12 <adrian_otto> so you'll see that in the config file upon making that change
16:51:28 <adrian_otto> hongbin: yes, definitely.
16:51:56 <hongbin> OK. Any further comment on this topic?
16:51:59 <adrian_otto> ok, that was productive.
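To make the warning behavior above concrete, here is a hypothetical magnum.conf sketch; the option name and the non-Barbican value are assumptions for illustration, not the agreed implementation:

```ini
[certificates]
# Barbican stays the default and recommended certificate store.
cert_manager_type = barbican

# Hypothetical alternative backend (illustrative value only): switching away
# from Barbican would emit a warning that the chosen store is not a hardened
# secret store and that anything placed there must be encrypted first.
# cert_manager_type = keystone
```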
16:52:19 <hongbin> #topic Open Discussion
16:52:56 <hongbin> Please feel free to bring up topics that you like to discuss
16:54:11 <adrian_otto> maybe we wrap early?
16:54:21 <Tango> One question for the team: we just switched to using the public Atomic 23 image, which is very nice
16:54:47 <Tango> but now there might be some motivation to switch back to a custom-built image. Is there any concern?
16:55:07 <hongbin> No from me
16:55:10 <Tango> Reasons: newer Kubernetes release, caching docker images, ...
16:56:28 <hongbin> FYI, I drafted a BP to cache docker images in the glance image
16:56:30 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/cache-docker-images
16:57:00 <hongbin> We need that to accelerate the bay provisioning speed
16:57:03 <Tango> ok, if there is no concern then we can proceed with building and trying out new images.
16:57:11 <askb> related question to the team: any reason we chose the f23 atomic image earlier over other options?
16:57:19 <askb> like coreos
16:58:01 <adrian_otto> askb, no important reason. We were following the path of least resistance.
16:58:20 <hongbin> askb: The reason is the original Heat templates were authored by a Red Hat folk
16:59:10 <hongbin> We copied his templates into the Magnum tree to speed up development at the very beginning
16:59:54 <hongbin> OK. Let's wrap up
17:00:09 <hongbin> Thanks everyone for joining the meeting
17:00:15 <hongbin> #endmeeting