22:01:04 #startmeeting containers
22:01:05 Meeting started Tue Sep 9 22:01:04 2014 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:01:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:01:09 The meeting name has been set to 'containers'
22:01:27 #link https://wiki.openstack.org/wiki/Meetings/Containers Our Agenda
22:01:31 #topic Roll Call
22:01:33 Adrian Otto
22:01:58 Iqbal Mohomed, IBM Research
22:02:15 hi iqbalmohomed
22:02:21 Hello
22:02:40 Chris Alfonso
22:03:27 hi funzo
22:03:42 hi!
22:03:48 long time no see
22:03:58 yeah, I missed last week
22:04:04 will you make it to the Paris Summit event in November?
22:04:10 I will not be there
22:04:17 bummer
22:04:20 although I'd like to :)
22:04:48 Ian Main will make it from RHT though
22:05:01 ok
22:05:27 well, we are pretty light on attendees today
22:05:53 Getting in line for the iWatch perhaps :-p
22:05:58 ha!
22:06:00 heh
22:06:33 ok, well diga sent his regrets, but he did produce a little bit of code
22:07:01 Yup ... very good to see it
22:07:01 so let's quickly step through the agenda items together, and with any luck we will end early today
22:07:20 #topic Announcements
22:07:41 first is that I will be absent next week, as our meeting conflicts with the OpenStack Silicon Valley event
22:07:47 as will I be.
22:07:53 except I'll be in Jamaica
22:08:29 ok, so unless I hear a ground swell between now and then, I will cancel that meeting, and we will continue again on 2014-09-23
22:09:14 if the meeting gets scheduled for 9/16, I will arrange for the chair to email the openstack-dev@lists.openstack.org ML with the [containers] topic tag
22:09:23 so that's how you will learn about it
22:09:26 sounds good
22:09:39 any other announcements?
22:09:54 #topic Review Action Items
22:10:01 I took two:
22:10:08 adrian_otto to coordinate a follow-up about Gantt, to help the containers team understand its readiness plans, and how they may be applied in our work.
22:10:45 this is in progress. I am awaiting a response from bauzas, who I understand is leading the Gantt effort
22:11:14 I will update you again after that discussion happens
22:11:26 we might just rely on an ML thread to resolve that
22:11:36 so I will carry that action item forward
22:11:48 #action adrian_otto to coordinate a follow-up about Gantt, to help the containers team understand its readiness plans, and how they may be applied in our work.
22:12:04 next action item:
22:12:06 adrian_otto to make a backlog of open design topics, and put them on our meeting schedule for discussion
22:12:16 that is now available. Feel free to edit it:
22:12:25 #link https://wiki.openstack.org/wiki/Meetings/Containers Containers Team Meeting Page
22:12:32 there is now a topics backlog section
22:12:51 now on to the API, and progress toward that
22:12:55 #topic Containers Service (Magnum) API
22:13:02 #link https://github.com/digambar15/magnum.git Initial Source code from diga
22:13:15 there's not a lot in there yet
22:13:30 and it's set up as v2 of an API rather than v1, which we will arrange to adjust
22:14:17 but once we get an openstack repo opened and a gerrit review queue for it, we can begin to submit specs against that as code with docstrings
22:14:18 Is pecan used by any other openstack project? Just curious
22:14:34 so we can debate the merits of each aspect of the API implementation before we code against it
22:14:51 iqbalmohomed: yes. It is used by Ceilometer, and by Solum
22:14:59 ah cool
22:15:06 many of the early projects did not follow a design pattern
22:15:33 and that was earmarked as a maintainability problem, so new projects are encouraged to use pecan/wsme for WSGI
22:16:03 before that recommendation, basically every project did something different when it came to WSGI
22:16:32 for a control plane API, the WSGI choice really does not matter much
22:16:55 for data plane APIs, arguments for performance come into consideration
22:17:08 cool ... thx
22:17:24 ok, there is more backstory if you take any deeper interest in that subject.
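For readers who have not used it, the pecan/wsme pattern mentioned above looks roughly like the sketch below. The controller, resource name, and fields are hypothetical illustrations only (this is not the Magnum API, and WSME typing is omitted for brevity):

    # Minimal pecan sketch of the controller style recommended for new
    # OpenStack API services. The "containers" resource and its fields
    # are illustrative placeholders, not Magnum's API.
    import pecan
    from pecan import expose, rest


    class ContainersController(rest.RestController):
        """RestController routing: GET /containers -> get_all,
        GET /containers/<id> -> get_one."""

        @expose(template='json')
        def get_all(self):
            # A real service would query its backend here.
            return {'containers': []}

        @expose(template='json')
        def get_one(self, container_id):
            return {'id': container_id, 'status': 'unknown'}


    class RootController(object):
        containers = ContainersController()

        @expose(template='json')
        def index(self):
            return {'versions': ['v1']}


    # pecan maps URL segments onto controller attributes (object dispatch).
    app = pecan.make_app(RootController())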
22:18:01 funzo: any questions on the approach for the API, or what has been proposed for it?
22:18:40 ok, that brings us to open discussion
22:18:44 #topic Open Discussion
22:18:56 i had a question ...
22:19:06 iqbalmohomed: sure, what's up?
22:19:09 we have talked about kubernetes in the past
22:19:17 yes.
22:19:28 adrian_otto: I haven't reviewed yet - but I will
22:19:39 funzo: tx!
22:19:54 curious about similarities/differences ... i can see some major benefits in the proposed container service ... stuff kubernetes doesn't do at all
22:20:23 but feel there are similarities that we can take advantage of in terms of designs, etc. any thoughts on this
22:20:46 iqbalmohomed: great question. This is something that I have been drawing block diagrams with to help my colleagues rationalize where these different things fit.
22:21:10 my hope is that we will refine something to bring up for discussion at our next meeting
22:21:41 the good news is that in my mind, there is a clear way that both kubernetes and Magnum can be used in harmony along with OpenStack
22:22:08 although there are some areas of overlap, that's only by necessity
22:22:14 that is good to hear
22:22:30 the kubernetes pod concept .. that seems like a good idea
22:22:31 kubernetes has a concept of a pool manager that Rackspace has written code to extend to use Heat
22:22:54 we have talked about the "instance" managed by the container service as well
22:23:05 as it does not make sense to have multiple sources of truth about capacity management of the raw capacity on which containers are placed.
22:23:30 is this open source? I saw a rackspace provider to kubernetes ... is this what u refer to?
22:23:33 yes, so our intent is to leverage a combination of Heat and Nova to deal with that
22:23:53 iqbalmohomed: my understanding is that those exist as pull requests against the kubernetes project
22:23:59 ah ok
22:24:10 I don't have links to share, but I could ask for them
22:24:33 if u could share, would be appreciated
22:25:10 but the big picture is that kubernetes is a higher level thing ... container service aims to be in the middle of that and lower level things like Nova ... is that right?
22:25:26 the key difference between a system like mesos or kubernetes, and Magnum is that Magnum is focused on solving "give me a container"
22:26:07 and kubernetes and mesos are more concerned with "deploy and control my service/application" with the assumption that a microservices architecture is in play
22:26:47 they work by having each host communicate back to the pool manager from an agent
22:26:57 and that's how inventory is tracked
22:27:15 I should say inventory of capacity (hosts that can run containers)
22:27:18 so magnum isn't going to be in the business of restarting failed containers, for eg?
22:27:36 and that "membership" is what kubernetes tracks in etcd
22:27:52 iqbalmohomed: no, that's not a goal
22:27:57 cool
22:28:10 whereas that's in scope for kubernetes, or even potentially Heat.
22:28:37 how many containers should be added to deal with a sudden increase in work
22:28:44 yup .. makes sense
22:28:55 currently kubernetes does not solve for that, but I'm reasonably sure that's an ambition
22:29:13 whereas autoscaling and auto healing are not contemplated for Magnum
22:29:52 right ... in OpenStack, Heat handles those concerns
22:29:53 instead, we want a standardized, modular interface where various container technologies can be attached, and an API that allows a variety of tools to be integrated on the top
22:30:24 so if you really wanted kubernetes + openstack, one option would be: kubernetes + swarmd + magnum + openstack
22:31:10 what about the mapping of ports/ips to containers ..
22:31:24 where kubernetes does the service level management, swarmd provides a docker API, magnum provides a containers centric API to swarmd, and openstack (nova) produces raw capacity on instances of any type (vm, bare metal, container)
22:31:58 ok, so mappings of ports would be passed through magnum to the host level systems
22:32:16 let's say for sake of discussion that I have the above setup
22:32:29 and I use nova-docker as my virt driver in nova
22:32:37 so I end up doing nested containers
22:33:01 yup
22:33:24 my port mappings are requested when my swarmd interacts with Magnum, and are passed through to the nova-docker virt driver, and it accomplishes the map using (probably) a call to neutron
22:33:41 or by invoking docker locally on the host
22:34:11 Magnum would cache the port arguments requested
22:34:28 so it would be able to return them when you do an inspection on a container
22:34:51 sound sensible?
22:34:52 so who decides the port mappings in your example? swarmd?
22:35:23 good question, the port configurations are expressed by the consumer of the Magnum API
22:35:49 so in my example, yes, that's swarmd, and it would be responding to requests from kubernetes, which would need to specify them
22:35:52 got it ... so up to that consumer to make good decisions
22:36:00 yes
22:36:16 cool ... makes sense .. thanks for this!
22:36:29 this approach strikes a balance between simplicity (of the system) and freedom (to make the system do exotic things)
22:36:53 hopefully Magnum is only somewhat opinionated
22:37:06 I like the layering/separation of concerns here
22:37:18 because if it gets too magical… you get the point
22:37:31 rails :-p
22:37:36 now this is a contrast to Solum
22:37:52 where Solum will generate heat templates and will determine the mappings for you
22:38:11 yup ... makes sense
22:38:18 whereas Magnum will allow you to specify an additional level of detail that it happily passes through to the lower layers
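To make the pass-through idea above concrete, here is a rough, purely illustrative sketch of "record the caller's port arguments and hand them unchanged to the host-level engine". None of this is Magnum code (none existed yet); the function names and the cache are hypothetical, and the docker-py Client API of that era is assumed for the lowest layer:

    # Purely illustrative sketch of the pass-through behaviour discussed
    # above: the caller's port mappings are recorded, then handed to the
    # host-level engine untouched. Not Magnum code; names are hypothetical.
    # Assumes the docker-py Client API as it existed around 2014.
    import docker

    # Hypothetical cache so a later "inspect" can return what was requested.
    _requested_ports = {}


    def create_container_with_ports(image, port_bindings):
        """port_bindings, e.g. {80: 8080}, comes straight from the API caller."""
        client = docker.Client(base_url='unix://var/run/docker.sock')

        # Expose the container ports the caller named...
        container = client.create_container(
            image=image, ports=list(port_bindings.keys()))

        # ...and pass the container-to-host mappings through unchanged.
        client.start(container, port_bindings=port_bindings)

        _requested_ports[container['Id']] = port_bindings
        return container['Id']


    def inspect_requested_ports(container_id):
        # Return the cached, caller-supplied mappings on inspection.
        return _requested_ports.get(container_id, {})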
22:38:36 a silly question just to confirm ..
22:38:39 Magnum makes no assumptions about what you are trying to deploy
22:39:07 for the instance on which magnum deploys containers ... all containers belong to this tenant, right?
22:39:14 (of course speaking in the future perspective, since it does nothing yet)
22:39:57 okay, I will be careful about how I answer that
22:40:24 we have contemplated two ways Magnum will interact with instances
22:40:33 1) It creates them as-needed
22:40:50 2) The caller specifies what instance_id to create a container on
22:41:15 in the case of #1, they will be created using the credentials (trust token) provided by the caller
22:41:27 so the tenant will match the user of the Magnum API
22:41:42 unless you have done something clever with the trust tokens to get a different outcome
22:41:50 does this make sense so far?
22:42:07 yup
22:42:18 ok, so in the case of #2, we apply some logic
22:42:50 we check to see if the resource referenced by the instance_id in the request matches the tenant currently acting on the Magnum API
22:42:53 -or-
22:43:05 there is a trust that allows access to another tenant
22:43:13 ic
22:43:37 so one can use trusts to achieve some interesting scenarios
22:44:00 yes, it is possible to make a trust to delegate rights to another tenant
22:44:23 and if you have set that up in keystone before you made a call, then it could be possible for you to act across tenants if you had a reason to do that
22:44:26 cool ... thanks for this discussion ... very useful info
22:44:30 but it would be properly access controlled
22:44:41 we would use python-keystoneclient to access those features
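As a side note on the trust mechanics described above, delegating rights across tenants with python-keystoneclient could look roughly like the sketch below. The endpoint, tokens, IDs, and role name are placeholders assumed for illustration, not anything decided in the meeting:

    # Rough sketch of creating a Keystone v3 trust to delegate rights to
    # another tenant, as mentioned above. All IDs, tokens, and role names
    # are placeholders; the keystoneclient v3 trusts API is assumed.
    from keystoneclient.v3 import client as keystone_client

    keystone = keystone_client.Client(
        token='TRUSTOR_TOKEN',                 # placeholder credentials
        endpoint='http://keystone.example.com:5000/v3')

    # The trustor delegates a role on its project to the trustee user
    # (for example, a service user acting on the trustor's behalf).
    trust = keystone.trusts.create(
        trustor_user='TRUSTOR_USER_ID',
        trustee_user='TRUSTEE_USER_ID',
        project='TRUSTOR_PROJECT_ID',
        role_names=['Member'],
        impersonation=True)

    # The trustee can later obtain a token scoped to this trust and act
    # on the trustor's project, subject to normal access control.
    print(trust.id)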
22:45:10 ok, any other questions?
22:45:19 i'm sated for the day :)
22:45:45 heh, okay cool
22:46:02 any other lurkers who would like to be recorded in attendance before we wrap up for today?
22:46:43 ok, thanks!
22:46:51 I'll see you here again in two weeks time
22:46:57 thanks .. bye
22:46:59 #endmeeting