16:01:41 <adrian_otto> #startmeeting containers
16:01:42 <openstack> Meeting started Tue Sep  2 16:01:41 2014 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:43 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:46 <openstack> The meeting name has been set to 'containers'
16:01:47 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers Our Agenda
16:01:54 <adrian_otto> #topic Roll Call
16:01:57 <thomasem> Thomas Maddox
16:02:04 <adrian_otto> Adrian Otto
16:02:04 <funzo> Chris Alfonso
16:02:22 <diga_> Hi
16:02:33 <diga_> Digambar Patil
16:02:36 <dguryanov> Hi
16:02:44 <mtesauro> Matt Tesauro
16:02:45 <dguryanov> Dmitry Guryanov
16:02:58 <apmelton> Andrew Melton
16:03:42 <adrian_otto> alright, let's begin
16:03:50 <adrian_otto> #topic Announcements
16:04:00 <adrian_otto> (sorry for the repeat from last week)
16:04:06 <adrian_otto> 1) OpenStack Silicon Valley event.
16:04:13 <adrian_otto> #link http://openstacksv.com/ OpenStack Silicon Valley 2014-09-16
16:04:26 <adrian_otto> this is two weeks from today
16:04:41 <adrian_otto> last time we decided to revisit it as the date draws closer.
16:04:42 <adrian_otto> Should we appoint a pro-tem chair for the 2014-09-16 meeting, or cancel it?
16:04:46 <diga_> okay
16:05:01 <iqbalmohomed> Iqbal Mohomed
16:05:10 <adrian_otto> I'm happy to bring it back up next week if there are no takers
16:05:40 <thomasem> Yeah, I'll know my schedule better then.
16:05:47 <thomasem> it's been... volatile
16:05:49 <thomasem> to say the least :)
16:05:53 <adrian_otto> ok, I'll nudge this up for next week's agenda
16:06:00 <adrian_otto> 2) Diga has some initial API work, and will be ready to check that in for review. We need to get the repo opened first.
16:06:19 <adrian_otto> my review for opening the repo has a layout error, so I need to submit a revision to get that in
16:06:26 <adrian_otto> more on that in a moment
16:06:28 <diga_> yes
16:06:42 <adrian_otto> special thanks to diga for taking the initiative to get that work started
16:06:53 <diga_> ok
16:07:04 <adrian_otto> the hope is that we get a lot of nice hooks where we can begin to hang an API
16:07:05 <diga_> Welcome
16:07:24 <adrian_otto> and make it self documenting with docstrings as we iterate
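A minimal sketch of the "self documenting with docstrings" idea mentioned above, assuming a pecan-style REST controller (the framework choice, class, and payload here are illustrative assumptions, not a decision from the meeting):

    # Hypothetical controller: the docstring doubles as the method's API
    # documentation as the code iterates.
    from pecan import expose, rest

    class ContainersController(rest.RestController):
        @expose('json')
        def get_all(self):
            """List all containers visible to the requesting project."""
            return {'containers': []}   # placeholder payload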
16:07:34 <adrian_otto> Any other announcements from team members?
16:07:35 <diga_> ok
16:08:11 <adrian_otto> ok, then let's advance to the next agenda item
16:08:14 <adrian_otto> #topic Review Action Items
16:08:22 <adrian_otto> adrian_otto to create and share a new etherpad for recording current consensus about a containers service api
16:08:28 <adrian_otto> Status: Completed.
16:08:35 <adrian_otto> #link https://etherpad.openstack.org/p/openstack-containers-service-api New Etherpad for Containers API
16:08:49 <adrian_otto> feel free to mark this one up and get all our ideas into it
16:08:52 <diga_> Good
16:08:59 <dguryanov> Thanks!
16:09:06 <dguryanov> I've written some comments there
16:09:33 <adrian_otto> dguryanov: excellent! Let's take a moment to look at those in just a minute here...
16:09:51 <adrian_otto> that was the only action item
16:09:56 <adrian_otto> #topic Discuss Specs for OpenStack Containers Service
16:10:08 <adrian_otto> #link https://review.openstack.org/114044 Spec Proposal
16:10:19 <adrian_otto> so first let's look for any open questions at the planning level
16:10:28 <adrian_otto> and then dive into implementation concerns next
16:11:09 <adrian_otto> ok, besides Jenkins testing my spec as python code
16:11:16 <thomasem> lol
16:11:19 <adrian_otto> it does not look like we have remaining open comments
16:11:49 <adrian_otto> anyone with concerns to discuss is welcome to raise them here. You don't need to comment on the gerrit review to be heard.
16:12:17 <adrian_otto> ok, so let's turn our attention to the etherpad again
16:12:23 <apmelton> so, one concern we've had is about gantt
16:12:37 <apmelton> has there been any commitment from anyone in openstack to actually work on it?
16:12:38 <adrian_otto> apmelton: yes?
16:12:48 <apmelton> further than "this is a good idea, let's do this in the future"
16:12:53 <adrian_otto> yes, in fact the lead reached out to me about it
16:13:26 <iqbalmohomed> I guess the problem is that gantt isn't usable at the moment
16:13:31 <adrian_otto> I owe a follow-up conversation to cover that question
16:13:42 <adrian_otto> iqbalmohomed: right, it is still early days
16:14:12 <adrian_otto> the question this surfaces is something along the lines of "what do we do if Gantt does not mature in parallel with Magnum?"
16:14:23 <adrian_otto> oh, I should have included that as an announcement
16:14:38 <apmelton> magnum?
16:14:38 <adrian_otto> the OpenStack Containers service will initially be known as Magnum (a big container)
16:14:55 <iqbalmohomed> I had some questions on the use of Heat ... is the idea that the container service will use the existing docker resource or will a new heat resource be created?
16:14:55 <apmelton> ah!
16:15:02 <adrian_otto> If anyone has strong objections, there is still time to adjust that
16:15:41 <adrian_otto> iqbalmohomed: that's a great question. Let's come back to that in just a moment.
16:16:19 <adrian_otto> apmelton: in response to your concern about Gantt
16:16:38 <adrian_otto> maybe we can invite the Gantt developers to join us next week, or in a special collaboration session
16:16:56 <apmelton> adrian_otto: either of those I think would help quell our concern
16:16:56 <adrian_otto> and get a sense of what is planned (to a deeper level of understanding)
16:17:12 <iqbalmohomed> +1
16:17:14 <adrian_otto> then we can bring a summary back to the team
16:17:16 <thomasem> adrian_otto: just added a comment on the spec. Should we be referencing future plans for scheduling (i.e. how we'd like to use Gantt if it matures)?
16:17:24 <adrian_otto> and if needed, come up with contingency plan
16:17:56 <thomasem> We already lay out the immediate plan of sequential fill
16:17:57 <adrian_otto> thomasem: good question. I think it's important to signal our intent, even when things are not yet possible.
16:18:17 <adrian_otto> and state what early iterations will be on the path to that future vision
16:18:25 <thomasem> yep
16:18:30 <thomasem> okay cool
16:18:38 <adrian_otto> so as thomasem mentions, step one will be a trivial (naive) implementation
16:19:00 <adrian_otto> and then we make that more sensible over time, using Gantt, or other bits to fold in as needed
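A sketch of the naive "sequential fill" step one described here; the Host model and its capacity accounting are hypothetical, not anything specified in the meeting:

    # Place each container on the first host that still has room.
    class Host(object):
        def __init__(self, name, capacity):
            self.name = name
            self.capacity = capacity        # max containers this host accepts
            self.containers = []

        def has_room(self):
            return len(self.containers) < self.capacity

    def sequential_fill(hosts, container_id):
        """Place the container on the first host with free capacity."""
        for host in hosts:
            if host.has_room():
                host.containers.append(container_id)
                return host
        return None     # pool exhausted; the caller must add instances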
16:19:23 <adrian_otto> anyone else have more concerns about this? I'll take an action now to schedule a follow-up
16:19:34 <thomasem> I remember us discussing things like being able to place guests next to or far away from each other via some scheduling service, so I'd like to document a core feature like that.
16:19:52 <thomasem> As a thing that we're looking for in our scheduling solution
16:19:55 <adrian_otto> #action adrian_otto to coordinate a follow-up about Gantt, to help the containers team understand its readiness plans, and how they may be applied in our work.
16:20:36 <diga_> +1
16:20:41 <adrian_otto> thomasem: yes, affinity and anti-affinity will be very important for container placement
16:20:53 <iqbalmohomed> I guess the concern I have is that if Gantt is late, we might be stuck with the sequential filling scheduler for a while ... so good to keep in sync with the gantt guys
16:21:39 <adrian_otto> I'm repeatedly surprised by sudden progress that can be inspired by a clear and present use case
16:21:54 <adrian_otto> so our work may actually end up being a catalyst for Gantt to take form sooner
16:22:09 <thomasem> =]
16:22:36 <adrian_otto> I expect we can all agree that being stuck with sequential fill forever would be lame.
16:22:43 <thomasem> Folks need clear requirements to understand units of work to be done. Otherwise it's often insurmountable or gets stuck in research.
16:22:49 <adrian_otto> so we should only view that as step one.
16:22:59 <thomasem> yes, quite
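The affinity / anti-affinity constraints thomasem and adrian_otto raise above could look something like the filter below; this is only a sketch, and the host/container data model is hypothetical:

    # Drop hosts that would violate a container's placement hints.
    def filter_hosts(hosts, container_spec):
        near = set(container_spec.get('affinity', []))       # co-locate with these
        far = set(container_spec.get('anti_affinity', []))   # keep away from these
        candidates = []
        for host in hosts:
            resident = set(host.containers)                  # container IDs on host
            if far & resident:
                continue        # anti-affinity violated: would share a host
            if near and not (near & resident):
                continue        # affinity requested but peers live elsewhere
            candidates.append(host)
        return candidates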
16:23:10 <iqbalmohomed> i recall the spec also had explicit placement
16:23:20 <diga_> yep
16:23:24 <adrian_otto> ok, so I think we will be in good shape, as I will have to review the action item with the team at the next meeting
16:23:34 <iqbalmohomed> this can help some people ... basically, do scheduling outside and pass the results to the api
16:23:36 <adrian_otto> so we will have a chance to review this question
16:24:05 <iqbalmohomed> we had to do this in the past ... a proxy in front of heat ... it was a stop gap measure but it did the trick
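iqbalmohomed's explicit-placement idea, sketched as a create request: scheduling happens out of band, and the chosen host rides along in the API call. The `placement` field name is hypothetical:

    import json

    request_body = json.dumps({
        'name': 'web-1',
        'image': 'cirros',
        'placement': {'host': 'compute-07'},   # decided by an external scheduler
    })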
16:24:34 <adrian_otto> iqbalmohomed: you asked about Heat up above
16:24:41 <adrian_otto> so let's dive into that.
16:25:01 <iqbalmohomed> yes ... would be great to get some details if there are any
16:26:33 <iqbalmohomed> i imagine there are several ways of doing it ... we could have a heat resource for each container ... these could be nested resources on a top level "magnum" container
16:27:28 <iqbalmohomed> when i said magnum container, i guess it should be top-level instance
16:27:46 <iqbalmohomed> because the top-level thing can be anything
16:28:45 <iqbalmohomed> i've also heard people compare this to trove ... they also use heat as a way to orchestrate ... we might be able to learn from what they did
16:30:51 <iqbalmohomed> the big question i had was whether we are going to use the existing docker resource? Any thoughts?
16:31:37 <apmelton> iqbalmohomed: I think that conflicts with our main goal of using nova's instances so that it doesn't matter what the underlying technology is
16:32:21 * adrian_otto back
16:32:29 <adrian_otto> sorry I was pulled away for a moment
16:32:29 <iqbalmohomed> cool .. that's reasonable ... but we need additional software on the nova instance, correct?
16:32:45 <apmelton> iqbalmohomed: correct, it'll need the containers service agent
16:34:23 <iqbalmohomed> so at least two ways of getting the agent in there ... either there is a blessed image that has the container service agent  ... or we can install the agent on the fly
16:34:35 <adrian_otto> yes
16:34:38 <apmelton> iqbalmohomed: I think we should allow both options
16:34:47 <adrian_otto> apmelton: +1
16:34:55 <iqbalmohomed> cool
16:35:30 <apmelton> either use the blessed image, or we'll provide configuration details via cloudinit/config drive/etc
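A sketch of the second option (installing the agent on the fly), assuming python-novaclient with keystone v2 auth; the install script URL, agent service name, and credentials are placeholders:

    from novaclient import client as nova_client

    # Cloud-init user data that pulls the agent onto a stock image at boot.
    USER_DATA = """#cloud-config
    runcmd:
      - curl -sSL https://example.com/install-containers-agent.sh | sh
      - service containers-agent start
    """

    nova = nova_client.Client('2', 'user', 'password', 'project',
                              'http://keystone:5000/v2.0')
    server = nova.servers.create(
        name='container-host-001',
        image='<image-uuid>',       # or a blessed image with the agent baked in
        flavor='<flavor-id>',
        userdata=USER_DATA,         # cloud-init consumes this on first boot
        config_drive=True,          # expose the same data via config drive
    )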
16:35:32 <adrian_otto> So the thing we get from heat is a nova instance
16:35:41 <iqbalmohomed> regarding the containers themselves, will they be nested resources to the top-level nova instance or something different
16:35:54 <adrian_otto> and we make calls to a Gantt service (or equivalent) to schedule over those instances
16:36:18 <apmelton> iqbalmohomed: is there any reason you could see for them not to be nested?
16:36:39 <adrian_otto> I see no reason why the heat provider resource for Docker could not use the same code we use.
16:37:06 <apmelton> adrian_otto: that is an interesting idea
16:37:23 <apmelton> as long as the Docker resource provides a Docker container running the container service agent
16:37:32 <apmelton> well....
16:37:40 <adrian_otto> apmelton: exactly.
16:37:53 <apmelton> does that docker resource also provide network or block storage?
16:37:54 <iqbalmohomed> apmelton: it would require us to change the code for the nova instance plugin ... if that is the top level resource
16:38:26 <apmelton> iqbalmohomed: how would it need to be changed?
16:39:05 <adrian_otto> apmelton: I don't think so, but I have not reviewed that code closely, so we would want to ask erw to be sure.
16:39:10 <adrian_otto> or take a look at it to see
16:39:22 <erw> pong
16:39:31 <apmelton> I'd think containers would need to be nested so that empty top-level instances could be deleted after some time
16:39:32 <adrian_otto> hi there erw
16:39:39 <iqbalmohomed> apmelton: instantiate the containers for one thing ... if you used the vanilla plugin that's in heat today, it'll just ignore any nested entries you have (as far as i understand)
16:40:20 <iqbalmohomed> erw: does the docker resource in heat nest inside a nova instance or it is a top level thing?
16:40:37 <erw> iqbalmohomed: it talks directly to the Docker API
16:40:38 <adrian_otto> iqbalmohomed: it nests.
16:40:51 <adrian_otto> you need a nova resource first.
16:41:00 <erw> iqbalmohomed: you use Heat to spawn a nova instance and then tell Docker to talk to the docker endpoint inside of the instance
16:41:05 <adrian_otto> that you set up the docker API inside of
16:41:26 <iqbalmohomed> ah ok ... so then, we could probably use the same model for containers
16:41:32 <adrian_otto> but could it be used without nova at all?
16:41:35 <erw> in some ways, the Heat plugin is not all that dissimilar to what the containers service intends to do - it’s just that Heat isn’t all too friendly to use
16:42:08 <erw> adrian_otto: the heat plugin could be used without Nova, but you’d need some scheduler or such to inform you of the location of your docker hosts
16:42:33 <erw> or external input to the HOT template
16:42:57 <adrian_otto> erw, yes, so if Gantt provided an inventory list, that could work for both the heat provider plug-in, as well as Magnum
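A sketch of the nesting model erw describes, assuming the contrib DockerInc::Docker::Container Heat plugin and python-heatclient: Heat boots a Nova instance that exposes a Docker API, and the container resource is pointed at that endpoint. The image names, TCP port, and endpoints are placeholders:

    from heatclient.client import Client

    TEMPLATE = """
    heat_template_version: 2013-05-23
    resources:
      docker_host:
        type: OS::Nova::Server
        properties:
          image: fedora-docker        # placeholder: image exposing the Docker API
          flavor: m1.small
      app_container:
        type: DockerInc::Docker::Container
        properties:
          image: cirros
          cmd: [sleep, "3600"]
          docker_endpoint:            # the daemon inside the instance above
            str_replace:
              template: tcp://host:2375
              params:
                host: { get_attr: [docker_host, first_address] }
    """

    heat = Client('1', endpoint='http://heat:8004/v1/<tenant-id>', token='<token>')
    heat.stacks.create(stack_name='docker-on-nova', template=TEMPLATE)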
16:43:15 <erw> yes - is Magnum the name of the containers service now?
16:43:19 <adrian_otto> yes
16:45:03 <adrian_otto> iqbalmohomed: did this provide the clarity you were seeking?
16:45:18 <iqbalmohomed> yes... it did ... thx for the discussion :)
16:45:27 <adrian_otto> excellent
16:45:41 <adrian_otto> ok, let's check out the etherpad comments next.
16:45:59 <adrian_otto> https://etherpad.openstack.org/p/openstack-containers-service-api
16:47:20 <adrian_otto> thomasem: thanks for the comments in there
16:47:49 <adrian_otto> want to start with the ones pertaining to bindmounts?
16:48:24 <apmelton> who's red?
16:48:28 <thomasem> adrian_otto of course
16:48:50 <iqbalmohomed> Ha ... it says adrian is blue
16:49:05 <adrian_otto> what if we limited bindmounts to paths provided by a Manila instance?
16:49:28 <adrian_otto> that reddish color is un-named still
16:49:51 <dguryanov> It's mine
16:49:53 <apmelton> adrian_otto: what if all we want is a temporary bind mount between containers that doesn't rely on networking
16:50:04 <adrian_otto> dguryanov: oh, thanks for those!
16:51:18 <adrian_otto> apmelton: humm, I'm not sure. Good point.
16:52:48 <apmelton> adrian_otto: what if instead of allowing arbitrary paths, we instead allow only "named" local ephemeral storage
16:52:57 <apmelton> the agent decides where that storage is
16:53:13 <apmelton> for instance, it'll create ephemeral storage X and keep track of it
16:53:39 <adrian_otto> so similar to the way xenstored operates?
16:53:41 <apmelton> the agent could be configured to use just a local file system, or a more advanced LVM set up
16:53:54 <apmelton> adrian_otto: I'm not familiar with xenstored
16:54:03 <mtesauro> +1 for ephemeral - if you can avoid absolute file paths, it's always a security plus
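apmelton's "named ephemeral storage" idea might look like this in the agent: callers refer to storage by name only, and the agent alone decides and remembers where it lives (which also addresses the absolute-path concern). Paths and class names are hypothetical:

    import os
    import re

    class EphemeralStore(object):
        NAME_RE = re.compile(r'^[A-Za-z0-9_-]+$')   # reject path-like names

        def __init__(self, root='/var/lib/containers-agent/volumes'):
            self.root = root
            self.volumes = {}                        # name -> host path

        def ensure(self, name):
            """Create (or look up) the named storage; return its host path."""
            if not self.NAME_RE.match(name):
                raise ValueError('invalid volume name: %r' % name)
            if name not in self.volumes:
                path = os.path.join(self.root, name)
                os.makedirs(path)                    # or carve out an LVM volume
                self.volumes[name] = path
            return self.volumes[name]

        def bind_spec(self, name, guest_path):
            """Build a docker-style host:guest bind-mount argument."""
            return '%s:%s' % (self.ensure(name), guest_path)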
16:54:27 <iqbalmohomed> Question about the metadata .. is it supposed to be crud-able or just specified at creation time?
16:54:37 <mtesauro> adrian_otto: yeah, it sounds similar to xenstore
16:54:47 <apmelton> iqbalmohomed: I believe it depends on how you've configured your system
16:55:03 <apmelton> if it's just config-drive with no metadata service, it's at creation time
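For the config-drive-only case apmelton describes, an in-instance agent could read the creation-time metadata like this; the layout is the standard config-drive one, but the mount point and key name are assumptions:

    import json

    CONFIG_DRIVE = '/mnt/config'    # wherever the config drive is mounted

    # The `meta` dict holds the key/value pairs supplied at boot,
    # e.g. `nova boot --meta agent_token=...`.
    with open(CONFIG_DRIVE + '/openstack/latest/meta_data.json') as f:
        metadata = json.load(f)

    agent_token = metadata.get('meta', {}).get('agent_token')   # hypothetical key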
16:55:57 <adrian_otto> ok, so looks like we have some good subjects to continue doing design work around
16:56:22 <adrian_otto> one option is to open an ML thread on each topic
16:56:27 <adrian_otto> and iterate there
16:56:29 <iqbalmohomed> also .. flavor_ref refers to the flavor of the base image, right?
16:56:35 <adrian_otto> another option is to schedule design meetings in IRC
16:56:56 <thomasem> +1 to design meetings.
16:57:00 <adrian_otto> or we can continue to use our scheduled team meeting time as the agenda permits
16:57:26 <thomasem> Though, I'd like to keep it to the regular meeting as much as possible
16:57:30 <thomasem> maybe we can knock one out per meeting?
16:57:32 <adrian_otto> I have a new web-based video conference and screen sharing tool we can try if there is interest in that. Very similar to Google Hangouts but without the participant limit.
16:57:50 <thomasem> adrian_otto: which is that?
16:58:00 <adrian_otto> thomasem: called vidyo
16:58:14 <adrian_otto> apparently an OEM of the same technology that hangouts uses
16:58:21 <adrian_otto> it's very similar in nature
16:58:28 <thomasem> Oh, I see
16:58:52 <adrian_otto> irc and etherpads works nicely too
16:58:57 <adrian_otto> #topic Open Discussion
16:59:34 <adrian_otto> #action adrian_otto to make a backlog of open design topics, and put them on our meeting schedule for discussion
16:59:49 <adrian_otto> looks like open discussion is too short today
16:59:51 <adrian_otto> sorry about that.
17:00:03 <adrian_otto> we can be reached in #openstack-containers for one-off topics
17:00:10 <adrian_otto> thanks everyone for attending today
17:00:15 <thomasem> Take it easy!
17:00:16 <adrian_otto> #endmeeting