16:01:41 #startmeeting containers
16:01:42 Meeting started Tue Sep 2 16:01:41 2014 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:46 The meeting name has been set to 'containers'
16:01:47 #link https://wiki.openstack.org/wiki/Meetings/Containers Our Agenda
16:01:54 #topic Roll Call
16:01:57 Thomas Maddox
16:02:04 Adrian Otto
16:02:04 Chris Alfonso
16:02:22 Hi
16:02:33 Digambar Patil
16:02:36 Hi
16:02:44 Matt Tesauro
16:02:45 Dmitry Guryanov
16:02:58 Andrew Melton
16:03:42 alright, let's begin
16:03:50 #topic Announcements
16:04:00 (sorry for the repeat from last week)
16:04:06 1) OpenStack Silicon Valley event.
16:04:13 #link http://openstacksv.com/ OpenStack Silicon Valley 2014-09-16
16:04:26 this is two weeks from today
16:04:41 last time we decided to revisit it as the date draws closer.
16:04:42 Should we appoint a pro-tem chair for the 2014-09-16 meeting, or cancel it?
16:04:46 okay
16:05:01 Iqbal Mohomed
16:05:10 I'm happy to bring it back up next week if there are no takers
16:05:40 Yeah, I'll know my schedule better then.
16:05:47 it's been... volatile
16:05:49 to say the least :)
16:05:53 ok, I'll nudge this up for next week's agenda
16:06:00 2) Diga has some initial API work, and will be ready to check that in for review. We need to get the repo opened first.
16:06:19 my review for opening the repo has a layout error, so I need to submit a revision to get that in
16:06:26 more on that in a moment
16:06:28 yes
16:06:42 special thanks to diga for taking the initiative to get that work started
16:06:53 ok
16:07:04 the hope is that we get a lot of nice hooks where we can begin to hang an API
16:07:05 Welcome
16:07:24 and make it self-documenting with docstrings as we iterate
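
[Editor's sketch] To illustrate the "self-documenting with docstrings" idea, here is a minimal sketch assuming a Pecan-style REST controller, a common pattern in OpenStack services at the time. The ContainersController class, its routes, and its payloads are hypothetical illustrations, not actual Magnum code:

    # Hypothetical /containers endpoint; docstrings double as API docs.
    from pecan import expose
    from pecan.rest import RestController


    class ContainersController(RestController):
        """Hypothetical /containers endpoint for the containers service."""

        @expose('json')
        def get_all(self):
            """List all containers visible to the requesting tenant."""
            return {'containers': []}  # placeholder until a backend exists

        @expose('json')
        def get_one(self, container_id):
            """Show details for a single container, by ID."""
            return {'container': {'id': container_id}}
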
16:07:34 Any other announcements from team members?
16:07:35 ok
16:08:11 ok, then let's advance to the next agenda item
16:08:14 #topic Review Action Items
16:08:22 adrian_otto to create and share a new etherpad for recording current consensus about a containers service API
16:08:28 Status: Completed.
16:08:35 #link https://etherpad.openstack.org/p/openstack-containers-service-api New Etherpad for Containers API
16:08:49 feel free to mark this one up and get all our ideas into it
16:08:52 Good
16:08:59 Thanks!
16:09:06 I've written some comments there
16:09:33 dguryanov: excellent! Let's take a moment to look at those in just a minute here...
16:09:51 that was the only action item
16:09:56 #topic Discuss Specs for OpenStack Containers Service
16:10:08 #link https://review.openstack.org/114044 Spec Proposal
16:10:19 so first let's look for any open questions at the planning level
16:10:28 and then dive into implementation concerns next
16:11:09 ok, besides Jenkins testing my spec as python code
16:11:16 lol
16:11:19 it does not look like we have remaining open comments
16:11:49 anyone with concerns to discuss is welcome to raise them here. You don't need to comment on the gerrit review to be heard.
16:12:17 ok, so let's turn our attention to the etherpad again
16:12:23 so, one concern we've had is about gantt
16:12:37 has there been any commitment to actually work on it from anyone in openstack
16:12:38 apmelton: yes?
16:12:48 further than "this is a good idea, let's do this in the future"
16:12:53 yes, in fact the lead reached out to me about it
16:13:26 I guess the problem is that gantt isn't usable at the moment
16:13:31 I owe a follow-up conversation to cover that question
16:13:42 iqbalmohomed: right, it is still early days
16:14:12 the question this surfaces is something along the lines of "what do we do if Gantt does not mature in parallel with Magnum?"
16:14:23 oh, I should have included that as an announcement
16:14:38 magnum?
16:14:38 the OpenStack Containers service will initially be known as Magnum (a big container)
16:14:55 I had some questions on the use of Heat ... is the idea that the container service will use the existing docker resource, or will a new heat resource be created?
16:14:55 ah!
16:15:02 If anyone has strong objections, there is still time to adjust that
16:15:41 iqbalmohomed: that's a great question. Let's come back to that in just a moment.
16:16:19 apmelton: in response to your concern about Gantt
16:16:38 maybe we can invite the Gantt developers to join us next week, or in a special collaboration session
16:16:56 adrian_otto: either of those I think would help quell our concern
16:16:56 and get a sense of what is planned (to a deeper level of understanding)
16:17:12 +1
16:17:14 then we can bring a summary back to the team
16:17:16 adrian_otto: just added a comment on the spec. Should we be referencing future plans for scheduling (i.e. how we'd like to use Gantt if it matures)?
16:17:24 and if needed, come up with a contingency plan
16:17:56 We already lay out the immediate plan of sequential fill
16:17:57 thomasem: good question. I think it's important to signal our intent, even when things are not yet possible.
16:18:17 and state what early iterations will be on the path to that future vision
16:18:25 yep
16:18:30 okay cool
16:18:38 so as thomasem mentions, step one will be a trivial (naive) implementation
16:19:00 and then we make that more sensible over time, using Gantt, or other bits to fold in as needed
16:19:23 anyone else have more concerns about this? I'll take an action now to schedule a follow-up
16:19:34 I remember us discussing things like being able to place guests next to or far away from each other via some scheduling service, so I'd like to document a core feature like that.
16:19:52 As a thing that we're looking for in our scheduling solution
16:19:55 #action adrian_otto to coordinate a follow-up about Gantt, to help the containers team understand its readiness plans, and how they may be applied in our work.
16:20:36 +1
16:20:41 thomasem: yes, affinity and anti-affinity will be very important for container placement
16:20:53 I guess the concern I have is that if Gantt is late, we might be stuck with the sequential-fill scheduler for a while ... so good to keep in sync with the gantt guys
16:21:39 I'm repeatedly surprised by the sudden progress that can be inspired by a clear and present use case
16:21:54 so our work may actually end up being a catalyst for Gantt to take form sooner
16:22:09 =]
16:22:36 I expect we can all agree that being stuck with sequential fill forever would be lame.
16:22:43 Folks need clear requirements to understand the units of work to be done. Otherwise it's often insurmountable or gets stuck in research.
16:22:49 so we should only view that as step one.
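
[Editor's sketch] To make "step one" concrete, here is a deliberately naive sequential-fill sketch. The Host class and its capacity field are invented for illustration; a real implementation would pull inventory from nova, and would later delegate placement, affinity, and anti-affinity decisions to Gantt:

    # A naive sequential-fill scheduler: fill the first host with room.
    class Host:
        def __init__(self, name, capacity):
            self.name = name
            self.capacity = capacity     # max containers this host will take
            self.containers = []

        def has_room(self):
            return len(self.containers) < self.capacity


    def schedule(hosts, container):
        """Place the container on the first host with free capacity."""
        for host in hosts:               # no weighing, no (anti-)affinity yet
            if host.has_room():
                host.containers.append(container)
                return host
        raise RuntimeError('no capacity left; time for a smarter scheduler')
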
16:22:59 yes, quite
16:23:10 I recall the spec also had explicit placement
16:23:20 yep
16:23:24 ok, so I think we will be in good shape, as I will have to review the action item with the team at the next meeting
16:23:34 this can help some people ... basically, do scheduling outside and pass the results to the API
16:23:36 so we will have a chance to review this question
16:24:05 we had to do this in the past ... a proxy in front of heat ... it was a stop-gap measure but it did the trick
16:24:34 iqbalmohomed: you asked about Heat up above
16:24:41 so let's dive into that.
16:25:01 yes ... would be great to get some details if there are any
16:26:33 I imagine there are several ways of doing it ... we could have a heat resource for each container ... these could be nested resources on a top-level "magnum" container
16:27:28 when I said magnum container, I guess it should be top-level instance
16:27:46 because the top-level thing can be anything
16:28:45 I've also heard people compare this to trove ... they also use heat as a way to orchestrate ... we might be able to learn from what they did
16:30:51 the big question I had was whether we are going to use the existing docker resource? Any thoughts?
16:31:37 iqbalmohomed: I think that conflicts with our main goal of using nova's instances, so that it doesn't matter what the underlying technology is
16:32:21 * adrian_otto back
16:32:29 sorry I was pulled away for a moment
16:32:29 cool .. that's reasonable ... but we need additional software on the nova instance, correct?
16:32:45 iqbalmohomed: correct, it'll need the containers service agent
16:34:23 so at least two ways of getting the agent in there ... either there is a blessed image that has the container service agent ... or we can install the agent on the fly
16:34:35 yes
16:34:38 iqbalmohomed: I think we should allow both options
16:34:47 apmelton: +1
16:34:55 cool
16:35:30 either use the blessed image, or we'll provide configuration details via cloud-init/config drive/etc.
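
[Editor's sketch] The "install the agent on the fly" option could look like the following: boot an ordinary nova instance with cloud-init user data that installs a hypothetical magnum-agent package. The package name, image, flavor, and credentials are all placeholders, and the novaclient call is abbreviated for illustration:

    # Boot a plain instance and install the (hypothetical) agent via cloud-init.
    from novaclient import client

    USER_DATA = """#cloud-config
    packages:
      - magnum-agent                    # hypothetical agent package name
    runcmd:
      - [service, magnum-agent, start]  # start the agent once installed
    """

    nova = client.Client('2', 'user', 'password', 'tenant',
                         'http://keystone.example.com:5000/v2.0')
    nova.servers.create(name='container-host-1',
                        image='<image-uuid>',   # or use the "blessed" image
                        flavor='<flavor-id>',
                        userdata=USER_DATA,
                        config_drive=True)
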
16:35:32 So the thing we get from heat is a nova instance
16:35:41 regarding the containers themselves, will they be nested resources to the top-level nova instance, or something different?
16:35:54 and we make calls to a Gantt service (or equivalent) to schedule over those instances
16:36:18 iqbalmohomed: is there any reason you could see for them not to be nested?
16:36:39 I see no reason why the heat provider resource for Docker could not use the same code we use.
16:37:06 adrian_otto: that is an interesting idea
16:37:23 as long as the Docker resource provides a Docker container running the container service agent
16:37:32 well....
16:37:40 apmelton: exactly.
16:37:53 does that docker resource also provide network or block storage?
16:37:54 apmelton: it would require us to change the code for the nova instance plugin ... if that is the top-level resource
16:38:26 iqbalmohomed: how would it need to be changed?
16:39:05 apmelton: I don't think so, but I have not reviewed that code closely, so we would want to ask erw to be sure.
16:39:10 or take a look at it to see
16:39:22 pong
16:39:31 I'd think containers would need to be nested so that empty top-level instances could be deleted after some time
16:39:32 hi there erw
16:39:39 apmelton: instantiate the containers, for one thing ... if you used the vanilla plugin that's in heat today, it'll just ignore any nested entries you have (as far as I understand)
16:40:20 erw: does the docker resource in heat nest inside a nova instance, or is it a top-level thing?
16:40:37 iqbalmohomed: it talks directly to the Docker API
16:40:38 iqbalmohomed: it nests.
16:40:51 you need a nova resource first.
16:41:00 iqbalmohomed: you use Heat to spawn a nova instance and then tell Docker to talk to the docker endpoint inside of the instance
16:41:05 that you set up the docker API inside of
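
[Editor's sketch] The nesting erw describes might look like this: a HOT template that boots a nova instance and then points a Docker container resource at the Docker API running inside it, submitted through python-heatclient. The resource type follows Heat's contrib Docker plugin; the images, port, endpoint, and token below are placeholders:

    # Nest a Docker container inside a nova instance via Heat.
    from heatclient.client import Client

    TEMPLATE = """
    heat_template_version: 2013-05-23
    resources:
      docker_host:
        type: OS::Nova::Server
        properties:
          image: <image-with-docker-api>   # the "blessed" image option
          flavor: m1.small
      app_container:
        type: DockerInc::Docker::Container
        properties:
          image: cirros
          docker_endpoint:
            str_replace:
              template: tcp://host:2375
              params:
                host: { get_attr: [docker_host, first_address] }
    """

    heat = Client('1', '<heat-endpoint>', token='<keystone-token>')
    heat.stacks.create(stack_name='container-demo', template=TEMPLATE)
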
16:41:26 ah ok ... so then, we could probably use the same model for containers
16:41:32 but could it be used without nova at all?
16:41:35 in some ways, the Heat plugin is not all that dissimilar to what the containers service intends to do - it's just that Heat isn't all too friendly to use
16:42:08 adrian_otto: the heat plugin could be used without Nova, but you'd need some scheduler or such to inform you of the location of your docker hosts
16:42:33 or external input to the HOT template
16:42:57 erw: yes, so if Gantt provided an inventory list, that could work for both the heat provider plug-in as well as Magnum
16:43:15 yes - is Magnum the name of the containers service now?
16:43:19 yes
16:45:03 iqbalmohomed: did this provide the clarity you were seeking?
16:45:18 yes... it did ... thx for the discussion :)
16:45:27 excellent
16:45:41 ok, let's check out the etherpad comments next.
16:45:59 https://etherpad.openstack.org/p/openstack-containers-service-api
16:47:20 thomasem: thanks for the comments in there
16:47:49 want to start with the ones pertaining to bind mounts?
16:48:24 who's red?
16:48:28 adrian_otto of course
16:48:50 Ha ... it says adrian is blue
16:49:05 what if we limited bind mounts to paths provided by a manila instance?
16:49:28 that reddish color is un-named still
16:49:51 It's mine
16:49:53 adrian_otto: what if all we want is a temporary bind mount between containers that doesn't rely on networking?
16:50:04 dguryanov: oh, thanks for those!
16:51:18 apmelton: hmm, I'm not sure. Good point.
16:52:48 adrian_otto: what if, instead of allowing arbitrary paths, we allow only "named" local ephemeral storage
16:52:57 the agent decides where that storage is
16:53:13 for instance, it'll create ephemeral storage X and keep track of it
16:53:39 so similar to the way xenstored operates?
16:53:41 the agent could be configured to use just a local file system, or a more advanced LVM setup
16:53:54 adrian_otto: I'm not familiar with xenstored
16:54:03 +1 for ephemeral - if you can avoid absolute file paths, it's always a security plus
16:54:27 Question about the metadata ... is it supposed to be CRUD-able or just specified at creation time?
16:54:37 adrian_otto: yeah, it sounds similar to xenstore
16:54:47 iqbalmohomed: I believe it depends on how you've configured your system
16:55:03 if it's just config-drive with no metadata service, it's at creation time
16:55:57 ok, so it looks like we have some good subjects to continue doing design work around
16:56:22 one option is to open an ML thread on each topic
16:56:27 and iterate there
16:56:29 also .. flavor_ref refers to the flavor of the base image, right?
16:56:35 another option is to schedule design meetings in IRC
16:56:56 thomasem: +1 to design meetings.
16:57:00 or we can continue to use our scheduled team meeting time as the agenda permits
16:57:26 Though, I'd like to keep it to the regular meeting as much as possible
16:57:30 maybe we can knock one out per meeting?
16:57:32 I have a new web-based video conference and screen sharing tool we can try if there is interest in that. Very similar to Google Hangouts but without the participant limit.
16:57:50 adrian_otto: which is that?
16:58:00 thomasem: called Vidyo
16:58:14 apparently an OEM of the same technology that hangouts uses
16:58:21 it's very similar in nature
16:58:28 Oh, I see
16:58:52 IRC and etherpads work nicely too
16:58:57 #topic Open Discussion
16:59:34 #action adrian_otto to make a backlog of open design topics, and put them on our meeting schedule for discussion
16:59:49 looks like open discussion is too short today
16:59:51 sorry about that.
17:00:03 we can be reached in #openstack-containers for one-off topics
17:00:10 thanks everyone for attending today
17:00:15 Take it easy!
17:00:16 #endmeeting
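
[Editor's sketch] To round out the bind-mount discussion above (16:52-16:54), here is how an agent might track "named" ephemeral storage so containers never reference absolute host paths. The EphemeralStore class, its root directory, and the bind-mount format are all invented for illustration; only the agent ever sees the backing paths:

    # Agent-side tracking of "named" ephemeral storage for bind mounts.
    import os
    import shutil

    EPHEMERAL_ROOT = '/var/lib/magnum-agent/ephemeral'   # agent-chosen location


    class EphemeralStore:
        """Track named ephemeral volumes so callers never see host paths."""

        def __init__(self, root=EPHEMERAL_ROOT):
            self.root = root
            self.volumes = {}    # name -> backing path, known only to the agent

        def create(self, name):
            path = os.path.join(self.root, name)
            os.makedirs(path)
            self.volumes[name] = path
            return name          # callers get the name back, never the path

        def bind_args(self, name, mount_point):
            """Docker-style host:container bind argument for a named volume."""
            return '%s:%s' % (self.volumes[name], mount_point)

        def destroy(self, name):
            shutil.rmtree(self.volumes.pop(name))
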