16:00:08 #startmeeting containers
16:00:09 Meeting started Tue Apr 5 16:00:08 2016 UTC and is due to finish in 60 minutes. The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:12 The meeting name has been set to 'containers'
16:00:16 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-04-05_1600_UTC Today's agenda
16:00:21 #topic Roll Call
16:00:22 Adrian Otto
16:00:27 o/
16:00:31 o/
16:00:32 Spyros Trigazis
16:00:36 Ton Ngo
16:00:38 o/
16:00:39 o/
16:00:41 o/
16:00:47 o/
16:01:34 hi
16:01:49 Thanks for joining the meeting adrian_otto madhuri yolanda strigazi Tango dane_leblanc rpothier juggler rebase yamamoto
16:02:02 Let's begin
16:02:04 #topic Announcements
16:02:11 #link https://github.com/openstack/python-k8sclient A new repo for the k8s client
16:02:28 Thanks dims for splitting the k8sclient out of the Magnum tree
16:02:53 Thanks dims
16:02:57 Magnum had the k8sclient in tree before
16:03:17 It will be pulled out of the tree in the future
16:03:24 yay
16:03:27 Any questions about that?
16:03:29 Who will be managing it?
16:03:53 madhuri : hongbin : the entire magnum core can do what it wants in the new repo
16:04:06 https://review.openstack.org/#/admin/groups/1348,members
16:04:20 i can help with making releases
16:04:26 o/
16:04:54 Thanks dims
16:05:03 #topic Review Action Items
16:05:10 hongbin discuss with Kuryr PTL about the shared session idea (done)
16:05:18 A shared session is scheduled for Thursday 11:50 - 12:30 for now (using the original Magnum fishbowl slot)
16:05:49 I expect we are going to discuss the networking solution with the Kuryr team at that session
16:06:13 To confirm, is anyone not able to join the session?
16:06:31 what timezone?
16:06:38 central
16:06:43 adrian_otto: local timezone
16:06:46 one sec
16:06:52 adrian_otto: In Austin
16:07:13 I hope the schedule works for everyone. If not, please let me know
16:07:55 works for me.
16:08:02 adrian_otto: great
16:08:06 #topic Plan for design summit topics
16:08:12 #link https://etherpad.openstack.org/p/magnum-newton-design-summit-topics Etherpad collecting topics for the Magnum Newton design summit
16:08:30 We have 15 topics proposed
16:09:08 We need to choose 10 of them
16:09:33 What I am going to do is ask everyone to vote on their favorite topics
16:09:43 And I will select based on the feedback
16:09:56 Any comments on that?
16:10:07 Vote on the etherpad?
16:10:16 Tango: Yes (e.g. +1)
16:10:58 To clarify, you can vote for multiple topics if you like
16:11:12 I will revisit the etherpad next week
16:11:21 #topic Essential Blueprints Review
16:11:27 SKIP until we identify a list of essential blueprints for Newton
16:11:36 #link https://blueprints.launchpad.net/magnum/newton List of blueprints for Newton so far
16:12:03 Any comments on this?
16:12:32 #topic Other blueprints/Bugs/Reviews/Ideas
16:12:39 1. Enhance Mesos bay to a DCOS bay (Jay Lau)
16:12:46 #link http://lists.openstack.org/pipermail/openstack-dev/2016-March/090437.html Discussion in ML
16:12:52 #link https://blueprints.launchpad.net/magnum/+spec/mesos-dcos The blueprint
16:13:10 This topic was discussed in the last meeting, and we were in disagreement
16:13:46 To summarize, Jay proposes adding the Chronos framework to our Mesos bay
16:14:06 dcos is not open source software. Has the issue with redistribution been addressed?
16:14:07 o/
16:14:07 Chronos is a Mesos framework for batch processing
16:14:26 adrian_otto: I am not sure about that
16:14:43 I don't think we should even consider it unless we are sure there are no licensing constraints for this.
16:15:12 adrian_otto: But Chronos is totally open source (I think)
16:15:30 2 different issues?
16:15:32 this is something we can ask for assistance from the OpenStack BoD if we are uncertain about the legal implications of the decision.
16:16:03 all software we redistribute must be compatible with the Apache 2 license
16:16:27 adrian_otto: I can double-check that
16:16:27 otherwise the user will need to download and license it separately.
16:16:56 #action hongbin confirm with OpenStack BoD about the legal implications of Chronos
16:17:12 s/Chronos/dcos/
16:17:18 If third parties want to add support for their licensed software, what's the process?
16:18:11 #link https://github.com/mesos/chronos/blob/master/LICENSE Chronos Apache 2 License
16:18:31 Tango: they contribute an open source driver to OpenStack
16:19:01 and they allow the cloud operator to acquire and license the software/equipment that driver interfaces with.
16:19:19 but OpenStack does not ship with any proprietary software.
16:19:35 adrian_otto: Should DCOS follow this process?
16:20:06 If Magnum users want it, then yes. Are we convinced there is a strong enough demand signal for that?
16:20:42 Chronos looks fine as long as we use the software posted at https://github.com/mesos/chronos
16:21:04 Yes, I think we are just considering Chronos at the current stage
16:21:14 Jay should be able to provide an answer for DCOS support.
16:21:15 DCOS could be considered later if needed
16:21:28 I can indeed.
16:21:37 :P
16:21:45 hi jaypipes!
16:21:52 :) hi!
16:22:36 Besides the license issue, we also disagreed on how to add Chronos to the Mesos bay
16:22:55 adrian_otto proposed having a Chronos bay in parallel with the Marathon bay
16:23:02 adrian_otto: Is that correct?
16:23:56 Jay Lau and I think Chronos and Marathon should be in the same bay
16:24:15 That is, there is a single Mesos bay that has both Marathon and Chronos
16:24:46 Let's debate this further if you like
16:24:59 I think they belong in different drivers
16:25:06 and each can share common code
16:25:32 We should be able to mix & match
16:25:35 adrian_otto: I think the point is how to let them share the bay
16:25:54 +1
16:26:10 adrian_otto: For example, Marathon & Chronos share the computing resources of the same bay
16:26:13 Mesos is unique in that it can host multiple frameworks
16:26:20 adrian_otto: That is the key feature of Mesos
16:26:26 yes, I understand that. We could have a post-setup hook that you could use to initialize a secondary driver.
16:26:53 but I feel strongly that we should not try to have an "everything you can possibly do with mesos" driver
16:27:16 Then, we are in disagreement again
16:27:39 How about others? What are your opinions?
16:28:04 the trouble with this approach is that it's a slippery slope. Let's say we have Marathon and Chronos… then it follows logically that the next framework and the next all should pile on there as well.
16:28:23 I am not sure what the difference is with adding a secondary driver via a post-setup hook. The result is the same
16:28:27 Marathon + Chronos + ?? + ?? + ??
16:28:43 yes, the freedom for the user is the same
16:28:59 We should have support for both. It depends on the user which framework they want
16:29:03 but we don't have an expectation to support every combination of every possible framework, and to make them all work across upgrades.
16:29:17 that's a recipe for disaster.
16:29:34 adrian_otto: Agreed with you that we should not support every framework
16:29:44 adrian_otto: But the case of Marathon + Chronos is different
16:29:59 adrian_otto: Most of the time, they are paired together
16:30:01 From what we are seeing, Mesos' key benefit is to allow many frameworks to share resources.
16:30:25 so because we know there are a variety of these, we should have a way to support the prevailing ones individually, and provide an option to use them in combination that does not bind us to do full integration testing of each combination.
16:31:04 OK. Let's push the discussion to the ML
16:31:08 I guess the concern is about our responsibility for testing all combinations.
16:31:13 I'm trying to make a point that we need to draw a line somewhere.
16:31:36 #action hongbin start an ML thread to discuss adding Chronos to the Mesos bay
16:31:50 2. Support bays with no floating IPs
16:32:00 #link http://lists.openstack.org/pipermail/openstack-dev/2016-March/091063.html Discussion in ML
16:32:04 adrian_otto all valid points, thanks
16:32:31 this idea seems fine to me.
16:33:06 Yes. Fine to me as well, besides some technical details that I am not clear about
16:33:38 If we remove the floating IPs, how does the bay connect to the outside?
16:34:20 The nodes access the external internet via the router, not the floating IP
16:34:33 Tango: I see
16:34:44 Floating IP is mainly to allow access to the nodes from the external network
16:34:46 if each bay node has a public ip, then they access external hosts the same way any nova instance would.
16:35:34 If there is no need for users to log into the node, then we don't need a floating IP
16:36:17 we would only need the COE API node to be externally accessible.
16:37:01 Currently this is done via the load balancer, so we can even skip the floating IP for the master nodes
16:37:06 but depending on what you start on each bay node, you may have an expectation that you can reach those from outside the bay
16:37:06 True, if users don't need port forwarding
16:37:49 Carina users, for example, can access the individual bay nodes directly from external hosts.
16:38:26 This simplifies the use of the native COE API (it shows what port mappings it knows about internally. They do not need to be translated into other addresses.)
16:38:30 Tango: Egor pointed out there is a k8s feature that needs a floating IP on the master node (called NodePort)
16:39:00 I think there are use cases both with and without floating IPs, so providing an option to choose is probably the best.
16:39:13 +1
16:39:22 Yes, it sounds like we agreed on adding an option to remove the need for floating IPs
16:39:32 Any opposing point of view?
16:40:10 #agreed Magnum will support an option to remove the need for floating IPs
16:40:19 I will draft a BP for that
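[Note: a minimal sketch of how the floating-IP opt-out agreed above could surface in a bay driver. The attribute name floating_ip_enabled and the Heat parameter names are illustrative assumptions, not the names the blueprint mentioned above would necessarily use.]

```python
# Illustrative only: translate a hypothetical opt-out flag on the baymodel
# into Heat stack parameters, so floating IPs are allocated only on request.
# Attribute and parameter names here are assumptions, not Magnum's real ones.

def build_heat_parameters(baymodel, bay):
    """Derive Heat parameters for a bay from its baymodel."""
    # Hypothetical attribute; default to today's behaviour (floating IPs on).
    floating_ip_enabled = getattr(baymodel, 'floating_ip_enabled', True)

    return {
        'number_of_masters': bay.master_count,
        'number_of_minions': bay.node_count,
        'external_network': baymodel.external_network_id,
        # The template would create floating IP resources only when this is
        # true; otherwise the nodes reach the outside world via the router,
        # and only the COE API endpoint (e.g. the load balancer) is exposed.
        'enable_floating_ip': floating_ip_enabled,
    }


if __name__ == '__main__':
    # Tiny stand-ins for a baymodel/bay, just to show the call shape.
    class _Obj:
        def __init__(self, **kw):
            self.__dict__.update(kw)

    baymodel = _Obj(external_network_id='public', floating_ip_enabled=False)
    bay = _Obj(master_count=1, node_count=3)
    print(build_heat_parameters(baymodel, bay))
```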
16:40:57 OK. Let's discuss a topic from the backlog since we have time
16:41:02 Discussion of certificate storage
16:41:12 #link https://etherpad.openstack.org/p/magnum-barbican-alternative Etherpad summarizing the concern and a variety of options to address it
16:41:37 We have debated this heavily in the ML
16:41:51 We can discuss it further right now, or leave it for the design summit
16:42:50 Here is the problem: Magnum currently relies on Barbican to store TLS certificates
16:43:13 That requires operators to install Barbican in order to get Magnum working
16:43:35 However, operators don't like that
16:44:26 So, it is better to ship Magnum independently (without a hard requirement on Barbican)
16:44:48 We listed several options to remove the hard dependency
16:44:57 The details are in the etherpad
16:45:20 Any comments?
16:46:13 ...........
16:46:40 the use of a simple solution (keystone) at first does not preclude us from using a more sophisticated approach later (archer)
16:47:00 s/archer/anchor/ I hate autocorrect!
16:47:12 I can attest to the fact that arguing for Magnum adoption is already a challenge; asking for additional services just makes it harder.
16:47:46 I prefer the keystone approach as well
16:48:01 Tango: +1
16:48:05 +1 for keystone
16:48:51 Could we all agree that we start with the Keystone approach?
16:49:03 +1 because I proposed that to begin with
16:49:08 +1
16:49:10 +1
16:49:17 +1
16:49:24 It seems operators would have Keystone as one of the base components, so at least using Keystone at the beginning helps adoption.
16:49:36 +1 on Keystone
16:49:37 assuming we do agree, we do this knowing that the keystone credential store is not secure, and we should treat it as such.
16:49:50 #agreed Magnum will leverage Keystone as an alternative authentication mechanism for k8s
16:49:57 +1 on keystone
16:50:10 so as long as we are diligent about encrypting what we place in there, that's a suitable compromise in my view.
16:50:24 adrian_otto: is a warning indicating such offered to the operator?
16:50:37 yes, that's in the BP now
16:50:43 Barbican will be the default
16:50:50 cool
16:50:57 and when you change it to keystone, it's to be accompanied by a warning
16:51:01 We should have clear documentation about the security implications of each option
16:51:12 so you'll see that in the config file upon making that change
16:51:28 hongbin: yes, definitely.
16:51:56 OK. Any further comments on this topic?
16:51:59 ok, that was productive.
16:52:19 #topic Open Discussion
16:52:56 Please feel free to bring up topics that you'd like to discuss
16:54:11 maybe we wrap early?
16:54:16 +1
16:54:21 One question for the team: we just switched to using the public Atomic 23 image, which is very nice
16:54:47 but now there might be some motivation to switch back to a custom-built image. Is there any concern?
16:55:07 None from me
16:55:10 Reasons: a later Kubernetes release, caching docker images, ...
16:56:28 FYI, I drafted a BP to cache docker images in the Glance image
16:56:30 #link https://blueprints.launchpad.net/magnum/+spec/cache-docker-images
16:57:00 We need that to accelerate bay provisioning
16:57:03 ok, if there is no concern then we can proceed with building and trying out new images.
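[Note: a rough sketch of the image-caching idea behind the cache-docker-images blueprint above, assuming the custom image build host has Docker available. The image list is hypothetical; the real set would be whatever containers a bay pulls at provisioning time.]

```python
# Illustrative only: pre-pull container images during the custom image build
# so bay nodes do not have to download them at provisioning time.
import subprocess

# Hypothetical list; the real one would cover the COE infrastructure
# containers (e.g. kube system images, swarm agent images) per bay type.
IMAGES_TO_CACHE = [
    "gcr.io/google_containers/pause:2.0",
    "swarm:1.1.3",
]


def prefetch_images(images):
    """Pull each image into the local Docker storage captured by the build."""
    for image in images:
        subprocess.check_call(["docker", "pull", image])


if __name__ == "__main__":
    prefetch_images(IMAGES_TO_CACHE)
```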
16:57:11 related question to the team: any reason we chose the f23 atomic image earlier over other options?
16:57:19 like coreos
16:58:01 askb, no important reason. We were following the path of least resistance.
16:58:20 askb: The reason is that the original Heat templates were authored by a Red Hat folk
16:59:10 We copied his templates into the Magnum tree to speed up development at the very beginning
16:59:54 OK. Let's wrap up
17:00:09 Thanks everyone for joining the meeting
17:00:15 #endmeeting
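[Post-meeting note: a minimal sketch of the pluggable certificate storage agreed earlier in the meeting, with Barbican remaining the default and a Keystone-style backend that encrypts before storing, since the Keystone credential store is not a hardened secret store. Class names and the in-memory placeholder are illustrative assumptions, not Magnum's actual cert-manager interface.]

```python
# Illustrative only: a pluggable certificate store. A real Barbican backend
# would call python-barbicanclient, and a Keystone backend would write the
# encrypted blob to the Keystone credential API; a dict stands in here so
# the sketch stays runnable. Requires the 'cryptography' package.
import abc

from cryptography.fernet import Fernet


class CertStore(abc.ABC):
    """Storage backend for a bay's TLS certificate and private key."""

    @abc.abstractmethod
    def store(self, bay_uuid, cert_pem, key_pem):
        ...

    @abc.abstractmethod
    def retrieve(self, bay_uuid):
        ...


class EncryptingCredentialStore(CertStore):
    """Keystone-style backend: encrypt with an operator-held key first."""

    SEP = b"\n==CUT==\n"  # separator between cert and key in the blob

    def __init__(self, operator_key):
        self._fernet = Fernet(operator_key)
        self._blobs = {}  # placeholder for the credential store

    def store(self, bay_uuid, cert_pem, key_pem):
        # Encrypt before the payload ever reaches the (insecure) store.
        payload = cert_pem + self.SEP + key_pem
        self._blobs[bay_uuid] = self._fernet.encrypt(payload)

    def retrieve(self, bay_uuid):
        cert_pem, key_pem = self._fernet.decrypt(
            self._blobs[bay_uuid]).split(self.SEP)
        return cert_pem, key_pem


if __name__ == "__main__":
    store = EncryptingCredentialStore(Fernet.generate_key())
    store.store("bay-1", b"CERT PEM", b"KEY PEM")
    print(store.retrieve("bay-1"))
```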