22:00:06 #startmeeting containers
22:00:06 Meeting started Tue Aug 26 22:00:06 2014 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:09 The meeting name has been set to 'containers'
22:00:11 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2014-08-26_2200_UTC Our Agenda
22:00:17 #topic Roll Call
22:00:20 Adrian Otto
22:00:22 Thomas Maddox
22:00:31 Matt Tesauro
22:01:05 we lost apmelton
22:01:26 o/
22:01:26 not fer long
22:01:28 andrew melton
22:01:33 oh, there you are
22:01:36 hello
22:01:40 y'all looking for me?
22:01:53 I found it amusing that I called roll call and your client left the room
22:02:07 Iqbal Mohomed, IBM Research
22:02:09 in transition to a new bouncer and haven't quite got it set up
22:02:20 so back to the old one for now
22:02:30 makes sense
22:02:45 so before I advance to announcements, I will share trivial stuff
22:02:53 I hurt my shoulder last week
22:03:01 Oh that's no fun
22:03:09 doing...
22:03:14 in California we have a bad drought (lowest water levels in 100 years)
22:03:21 so I have been trying to save water
22:03:28 digging a well?
22:03:36 collecting the shower water in a pail while waiting for it to warm up
22:03:46 rcleere: :-)
22:04:09 then I take that shower water and use it in my washing machine
22:04:15 it's a 5 gallon pail
22:04:27 so when full it weighs 40 pounds
22:04:29 wow california doesn't sound nearly as fun as everyone says it is :P
22:04:36 you don't say
22:04:38 anyway, while I was hurling it into the clothes washer… ouch.
22:04:42 youch
22:04:45 that was last thursday
22:04:54 and I have been in serious pain, until today
22:05:02 today I am almost all better
22:05:17 so I'm in a terrific mood, just thought I would share that
22:05:18 that's good news
22:05:25 lol, that's awesome
22:05:41 ok, so on with the agenda
22:05:46 * jogo walks in late
22:05:46 #topic Announcements
22:05:54 first of all
22:06:01 OpenStack Silicon Valley event.
22:06:06 2014-09-16
22:06:13 it's a Tuesday, and I'm attending
22:06:17 will I see any of you there?
22:06:39 I honestly didn't find out about that until today.
22:07:01 while I'm doing that, I will be missing our team meeting, so we should decide to 1) Cancel our meeting for that day -or- 2) Select a pro-tem chair to run it
22:07:12 #link http://openstacksv.com/ OpenStack Silicon Valley 2014-09-16
22:07:24 what do you all think is best?
22:07:57 adrian_otto: I have a general question when there is an opening in the meeting schedule
22:08:07 jogo, proceed
22:08:10 adrian_otto: Well, we may know more closer to that time.
22:08:23 e.g. who would be available to lead the meeting
22:08:36 so I asked this at the nova midcycle, but I am less sure than before
22:08:42 we can punt this for a week or so
22:08:43 so, some background
22:09:04 I think OpenStack needs a good container story, as they have a lot of value for !VM use cases
22:09:34 but why does openstack need an openstack-native solution instead of adopting another project and making them work better together?
22:09:48 there is no shortage of things that try to manage containers at scale
22:10:01 great question
22:10:32 first of all, my goal is to make containers a first class resource in OpenStack
22:10:43 and I didn't see the spec mention that
22:11:02 so IMHO first class doesn't mean we need a native, built-from-scratch solution
22:11:20 the compute program can just say, use x for the container service
22:11:28 the current proposal suggests specific tools that we can leverage
22:11:38 adrian_otto: hmm I must have missed that line
22:11:39 jogo: I think the important distinction between the openstack containers service compared to others (kubernetes, fleet, etc) is that our service not only builds and manages the containers, but also the infrastructure they are built on
22:11:44 the intent is not to recreate what exists, but to represent them within the fabric of openstack
22:12:07 without a clumsy third party heat resource
22:12:18 something that taps into the same scheduling capability that OpenStack uses
22:12:25 apmelton: so layer something underneath to do that
22:12:42 ok, so let's table this for like 2 minutes
22:12:45 hmm, why does it need to use the same scheduling?
22:13:02 and continue it once we close out announcements
22:13:08 adrian_otto: kk, that's why I asked for when you had a moment, sorry for derailing
22:13:13 any other announcements from team members?
22:13:32 jogo, Oh, I thought you were referring to 9/16, sorry.
22:13:43 I want to have this debate, for sure.
22:14:25 ok, so no other announcements, so advancing to next agenda item
22:14:37 #topic Discuss Specs for OpenStack Containers Service
22:14:52 first the links, and a quick update on each
22:15:02 #link https://review.openstack.org/114044 Spec Proposal
22:15:11 there is a revision as of today for your review
22:15:25 #link https://review.openstack.org/115328 Repo Review
22:15:33 on this topic, something perplexing happened
22:15:46 we were asked to work within the OpenStack compute program using Stackforge
22:15:58 which apparently the current rules do not allow.
22:16:20 If we are going to use the OpenStack trademark, then we need to be in the openstack/ namespace, so I resubmitted this accordingly.
22:16:37 adrian_otto: you were asked to work on stackforge before going to the compute program
22:16:56 the real intent was to allow for rapid iteration on a project, which is possible regardless of Stackforge or compute program work
22:17:10 as they can have separate review teams, and just not tag releases
22:17:41 What about https://etherpad.openstack.org/p/containers-service-api?
22:17:44 and so long as we trust the compute program PTL not to cut a release of containers until it is ready, then the location should not matter
22:18:12 thanks, I will get to that in a moment, dguryanov
22:18:37 so although we have contributors working on implementing an API spec, there is not currently anywhere to land that
22:18:53 until I get through the red tape of 115328
22:19:40 ok, any questions on the current state of the code repo?
22:20:53 ok, so next part is:
22:20:55 #link https://etherpad.openstack.org/p/containers-service-api Previous Containers Service API Draft
22:21:17 this dates back to the 2013 timeframe when a containers API was first discussed
22:21:38 dguryanov: ^^ what are your thoughts on this?
22:21:50 adrian_otto: you asked me to come up with reasons I didn't want to use the docker api as our implementation: https://gist.github.com/ramielrowe/4d162d780977542997a8
22:22:06 I think this API is a better starting point than docker's API, but it has some gaps
22:22:07 I didn't get much time to work on it, but those are the basic reasons for my position
22:22:20 apmelton: excellent! Thank you!
22:22:49 apmelton: notice that I adjusted our proposal to suggest an alternate approach in accordance with our discussion last week
22:23:13 Docker API users can use libswarm and an openstack containers backend to talk to the openstack containers service API
22:23:18 adrian_otto: yup, I think that we can offer equivalent functionality to the docker api
22:24:05 and we can still provide access to other sorts of containers, so if someone prefers openvz they could use that
22:24:16 as a backend module to the containers service
22:24:43 and possibly even use docker CLI to control that, as perverted as that may sound.
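[Editor's note: to make the API discussion above concrete, here is a minimal sketch in Python of what a client call against a containers service of this shape might look like. The endpoint, resource name, and request fields are illustrative assumptions only; the actual API was still an etherpad draft at the time of this meeting.]

    # Hypothetical containers service call; endpoint and fields are assumptions.
    import requests

    TOKEN = "..."  # assumed to come from a normal Keystone auth flow
    ENDPOINT = "http://containers.example.com/v1"  # hypothetical endpoint

    payload = {
        "container": {
            "name": "web-1",
            "image": "ubuntu:14.04",                     # docker-style image reference
            "command": ["/bin/sh", "-c", "echo hello"],
            "environment": {"FOO": "bar"},               # env vars set at container start
        }
    }

    resp = requests.post(ENDPOINT + "/containers",
                         json=payload,
                         headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()
    print(resp.json())  # expected: the new container's representation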
22:25:45 ok, so any more thoughts on this before resuming the discussion stemming from jogo's questions?
22:26:31 ok, jogo, you have the floor
22:26:36 I still suggest creating a new etherpad page for the API
22:26:42 adrian_otto: thanks
22:26:50 And write all thoughts there
22:26:54 dguryanov: Good, I'll take that as an AI
22:26:57 adrian_otto: so my understanding of this effort is twofold
22:27:08 1) have a de facto container answer for OpenStack
22:27:38 #action adrian_otto to create and share a new etherpad for recording current consensus about a containers service api
22:27:38 2) be able to provision compute instances from inside the container service so the user doesn't need to worry about it
22:28:39 jogo, yes. I want owners of OpenStack clouds to have a built-in containers solution that just works, regardless of what instance type they have chosen to use for their nova service
22:28:51 adrian_otto: sure, but why not use an existing solution
22:28:57 and without scurrying around and bolting on third party software to make OpenStack containers-ready.
22:29:01 and just have the compute program 'bless' it
22:29:21 jogo: that approach has been tried and failed.
22:29:23 adrian_otto: so this comes from my view that openstack should not be a big tent, but a small tent with a big ecosystem
22:29:28 jogo, because that conflicts with #1
22:29:29 adrian_otto: do you have examples?
22:29:34 or at least my view of #1
22:29:39 scalr, nova-docker
22:30:00 not if openstack, or the compute program in particular, says use outside things for this
22:30:07 so nova-docker, why is that a failure?
22:30:12 and I have never heard of scalr
22:30:13 to be the de facto container solution for openstack, that container solution should present itself with very similar features to nova
22:30:45 so I cannot comment
22:30:45 apmelton: why?
22:30:52 because you can't do half the things that containers are meant to allow
22:30:52 nova features and containers features are a venn diagram with a limited overlap
22:31:06 that limited overlap is what nova-docker delivers now
22:31:18 and that's not enough to meet customer expectations
22:31:35 jogo, because otherwise it would be a bad user experience
22:32:18 I should be able to switch from using nova to using containers, and bring my cinder volumes and neutron networks with me
22:32:57 apmelton: +1
22:33:22 and I should be able to do that without having to learn an entirely new architecture
22:33:30 apmelton: interesting idea
22:33:31 jogo: I must admit that back in November 2013 I thought *exactly* what you are expressing right now
22:33:48 apmelton: but how do you migrate a full VM to a container?
22:33:49 and my position has evolved considerably as I started using containers every day.
22:33:51 image-wise
22:34:18 adrian_otto: so there is clearly something I am missing in this discussion
22:34:19 Well, that's another step, and not impossible. OpenVZ has a prototype for that, I think.
22:34:20 what you begin to realize is that if you think of a container as just a cheaper VM, you are missing out on all the truly compelling things about containers
22:34:34 adrian_otto: as you have more experience with containers than me and you changed your mind
22:34:44 but I just don't see what I am missing right now
22:34:49 jogo, that's definitely tricky
22:35:09 ok, take for example the ability to set a shell environment variable key/value pair at the time a container starts
22:35:10 adrian_otto: sure, I see the value of OpenStack working well with containers
22:35:13 +1 adrian_otto
22:35:14 adrian_otto: let me ask a different question
22:36:30 capturing stdout and return codes too
22:36:34 shared namespaces, though a bit more niche in my opinion.
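[Editor's note: the container-native features mentioned above — environment variables injected at start, stdout and exit codes captured — are stock docker behavior today. A small Python illustration, independent of any proposed service API:]

    # Run a container with an env var set at start, capturing stdout and the
    # exit code. `docker run` propagates the container command's exit status.
    import subprocess

    proc = subprocess.run(
        ["docker", "run", "--rm",
         "-e", "GREETING=hello",                # env var injected at container start
         "ubuntu:14.04",
         "/bin/sh", "-c", "echo $GREETING; exit 3"],
        stdout=subprocess.PIPE,
    )
    print(proc.stdout.decode())  # captured stdout: "hello"
    print(proc.returncode)       # container's exit code: 3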
22:36:41 jogo, what I'm suggesting is, images aside, our users should be able to provision containers and get almost the same openstack experience as if they used nova
22:36:58 It ought to feel like OpenStack
22:36:59 why don't any public clouds today have native container solutions, and instead defer to separate tools?
22:36:59 adrian_otto: I am not questioning the value of containers
22:37:17 jogo, both GCE and RAX clouds have it
22:37:33 through docker + libswarm, and our work with OnMetal as well.
22:37:42 jogo: because those tools are not multi-tenant at the moment
22:37:45 thomasem: I don't know what 'feel like OpenStack' means
22:38:10 jogo: using similar verbiage, data structures
22:38:23 jogo: async
22:38:31 adrian_otto1: GCE supports a way to say just give me a container
22:38:35 without doing anything else?
22:38:44 with multi-tenancy and async, those are the two major differences.
22:39:01 jogo, they have a docker image that essentially is configured through metadata the user provides
22:39:04 adrian_otto1: async?
22:39:12 apmelton: sure, that is just an image
22:39:19 that is different than a full service though
22:39:27 and still allowing for the actual container code to be pluggable, to allow for implementations for LXC/libct, openvz, docker/libcontainer, whatever.
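[Editor's note: a minimal sketch in Python of what the pluggable-backend idea just mentioned could look like. The interface name and methods are assumptions for illustration, not anything defined in the spec.]

    import abc
    import subprocess

    class ContainerDriver(abc.ABC):
        """Hypothetical backend interface the containers service could define."""

        @abc.abstractmethod
        def create(self, image, command):
            """Create a container and return an opaque container id."""

        @abc.abstractmethod
        def start(self, container_id):
            """Start a previously created container."""

    class DockerDriver(ContainerDriver):
        """One possible backend, shelling out to the docker CLI for brevity."""

        def create(self, image, command):
            # `docker create` prints the new container's id on stdout
            out = subprocess.check_output(["docker", "create", image] + list(command))
            return out.strip().decode()

        def start(self, container_id):
            subprocess.check_call(["docker", "start", container_id])

    # An OpenVZ or LXC/libct driver would implement the same interface, so the
    # service (and docker CLI users coming in via a libswarm backend) need not care.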
22:39:51 adrian_otto1: sorry, I am still missing something, in part because of all the different voices in here saying different things
22:40:39 The point that I am still stuck on is: why should OpenStack have an OpenStack-native solution to do this? I don't want another case of NIH
22:40:40 jogo: have you seen the use cases section of the spec proposal yet?
22:40:45 those are intended to give a perspective for who values this and why
22:41:01 adrian_otto1: looking now
22:41:32 jogo, if a suitable solution already existed, we would be using that now. What we have now is nova-docker and a clumsy heat resource with no scheduler. That's not working well enough.
22:41:45 we also have libvirt/lxc
22:41:50 which solves only a part of the use cases
22:41:51 adrian_otto1: I am really confused about the nova-docker thing
22:41:56 adrian_otto1: I thought that isn't related at all
22:42:30 jogo, do you have an example of another service provider providing containers as a service?
22:43:20 with a non-native service providing that 'containers as a service'
22:43:20 apmelton: there was only one, Tutum, and now they are part of Docker, Inc.
22:43:59 #topic Review Action Items
22:44:01 (none)
22:44:12 #topic Open Discussion
22:44:23 jogo: you are welcome to continue through open discussion
22:44:27 adrian_otto1: as that is a nova driver
22:44:27 so going through the use cases
22:44:27 1,2,3,4,5 don't have anything in them that is OpenStack specific
22:44:27 is that accurate?
22:44:27 adrian_otto1: sorry if I am coming across as contrary
22:44:31 adrian_otto: so use cases ^
22:45:06 adrian_otto: is it safe to say 1,2,3,4,5 have nothing in them that makes them OpenStack specific
22:46:24 jogo, the frame of reference is that the cloud operator is using OpenStack
22:46:33 and they have these use cases to address
22:46:47 adrian_otto: right, but is my take on 1-5 accurate
22:47:00 adrian_otto: I want to make sure I am not missing something before going on to the next two
22:48:05 1-5 could be solved using a variety of approaches, some not including openstack at all. However, I'm after something that addresses each with a consistent user experience that does not require the cloud operator to do R&D to figure out how to address these use cases.
22:48:29 it should just work.
22:48:52 adrian_otto: so yes.
22:49:01 as a cloud operator, I should not need to do circus tricks to solve those cases.
22:49:13 so the "does not require R&D to figure it out" part
22:49:37 well, I will get back to that
22:49:41 ok, the last two use cases
22:49:42 I want at least one configuration to work with OpenStack out of the box
22:50:12 I like the use case in #6
22:50:28 nice way of hiding extra complexity from the user
22:50:50 but to solve 1-6, can't you add a small tool under an existing system?
22:51:00 and I am not sure what you mean in #7
22:51:02 sort of, but not really
22:51:07 the key is multi-tenancy
22:51:21 if I am a single-tenant cloud, then yes, there are options for that
22:51:44 but as a multi-tenant cloud, that's where it all falls apart and becomes rather yucky from an ops perspective.
22:52:06 so as far as I can tell, the model existing public clouds use for this is:
22:52:16 charge for and manage the instances, and let the user deal with all things containers
22:52:34 in that model you don't need to deal with multi-tenancy, as each user would spin up their own copy of the service
22:52:45 using a preseeded image or something
22:53:36 jogo: that user experience is sub-optimal
22:53:43 adrian_otto: why?
22:53:57 requiring a user to spin up a single instance?
22:54:04 jogo, because that means every user would need to manage that single instance
22:54:05 isn't that how everyone does it today?
22:54:16 And that doesn't seem to accumulate to unnecessary overhead once you have a significant customer base? A bunch of wasted resources that could have been solved by some orchestration above it?
22:54:17 because the complexity of dealing with wiring up a container infrastructure and cloud resources is carried by the customer, not by the hosted service
22:54:18 apmelton: but isn't that how everyone does it today?
22:54:32 I'm proposing a place where that complexity can be abstracted from the user
22:54:33 thomasem: who said that instance won't run containers as well
22:54:51 adrian_otto: right, so in the current form I don't really see that fleshed out in the spec
22:54:57 adrian_otto: unless I just missed it
22:55:27 jogo, that's how everyone does it today because it's a quick win
22:55:34 jogo, "win"
22:55:46 apmelton: with minimal overhead to a user
22:55:48 it's a bad experience because as a user all I want to worry about are my containers
22:55:58 jogo: Nobody. I'm not talking about whether or not instances can run containers. I'm talking about providing a cleaner user experience by not making every customer use additional resources just to handle their containers when we could do it better as a cloud (we know the infrastructure).
22:56:12 apmelton: well, users are still charged per instance, not per container, right?
22:56:13 jogo, I'm trying not to slant the proposal too much toward what matters to public cloud operators.
22:56:13 I don't want to have to worry about upgrading Container Management Service X
22:56:29 jogo, correct, only per instance is the idea.
22:56:37 I want a sensible balance for both public and private cloud use cases, even small ones.
22:57:10 and the review is bordering on 400 lines now
22:57:19 adrian_otto: that isn't very big ;)
22:57:19 I don't want it to be impossible to review either
22:57:30 so specs are not code
22:57:33 that's with no API details in it
22:57:44 so the rule of thumb about length is a little different IMHO
22:57:53 anyway, I am concerned that we do something
22:58:06 but the !OpenStack things in this space do it way better
22:58:07 jogo, yes, they are still charged for the instance even if they aren't using it
22:58:09 and everyone just adopts that, and we waste our time
22:58:19 jogo, so you are suggesting that if we add additional rationale from the perspective of a public cloud operator, that it could help explain the desire to address the use cases?
22:58:40 we need to wrap up open discussion in a min
22:58:43 adrian_otto: no, I am not saying that
22:59:03 jogo, I encourage you to continue with us in #openstack-containers
22:59:06 adrian_otto: I am saying I don't get why we cannot use the half dozen projects trying to solve this
22:59:17 we'd like to learn from your perspective on this
22:59:25 adrian_otto: just joined the room
22:59:27 thanks
22:59:40 thanks everyone for your attendance today.
22:59:46 wanna wrap up here and move to openstack-containers?
23:00:01 our next meeting is 2014-09-02 at UTC 1600
23:00:05 #endmeeting