22:00:24 <adrian_otto> #startmeeting containers
22:00:25 <openstack> Meeting started Tue Jun  3 22:00:24 2014 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:28 <openstack> The meeting name has been set to 'containers'
22:00:32 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2014-06-03_2200_UTC Our Agenda
22:00:38 <adrian_otto> #topic Roll Call
22:00:40 <ewindisch> o/
22:00:41 <funzo> Chris Alfonso
22:00:42 <adrian_otto> Adrian Otto
22:00:45 <julienvey> Julien Vey
22:00:45 <Slower> o/  Ian Main
22:01:01 <asalkeld> o/
22:01:02 <dguryanov> Dmitry Guryanov
22:02:53 <adrian_otto> Welcome everyone.
22:02:57 <adrian_otto> #topic Announcements
22:03:03 <thomasem> o/
22:03:09 <adrian_otto> any announcements from team members?
22:03:43 <adrian_otto> #topic Review Action Items
22:03:44 <adrian_otto> adrian_otto to begin an ML thread for input on our Top Themes, and formation of a Wiki page to clearly document them for future reference
22:03:46 <ewindisch> we formed a nova-docker subteam
22:04:00 <ctracey> hola
22:04:00 <adrian_otto> aah, back to Announcements ;-)
22:04:14 <ewindisch> uh, that was it? ;-)
22:04:14 <adrian_otto> ewindisch: you are welcome to link that here if you wish
22:04:39 <adrian_otto> ok
22:04:45 <adrian_otto> adrian_otto to begin an ML thread for input on our Top Themes, and formation of a Wiki page to clearly document them for future reference
22:04:51 <adrian_otto> Status complete
22:05:07 <adrian_otto> #link http://lists.openstack.org/pipermail/openstack-dev/2014-May/035977.html ML Thread for Top Themes
22:05:16 <adrian_otto> adrian_otto to attend upcoming Nova meeting, and report Containers Team position on cinder support for containers in Nova
22:05:21 <adrian_otto> Status complete
22:05:40 <adrian_otto> #topic Questions about Containers+Cinder
22:06:06 <adrian_otto> (PhilD) Does not supporting cinder mean that a system using containers won't pass the DefCore standard?
22:06:07 <Slower> We were just looking into that
22:06:15 <funzo> what Slower said
22:06:46 <Slower> I have no answers though heh
22:06:52 <ewindisch> adrian_otto: that’s been a big concern of mine - and there is no real solid answer
22:07:37 <ewindisch> except that I spoke to Josh McKenty and Rob Hirschfeld, and they both stated that they’d - ideally - like to make it such that Docker can pass DefCore standards
22:07:49 <ewindisch> (and containers)
22:08:26 <ewindisch> that if there is an issue, it may lie in problems in the DefCore definitions, rather than an innate inability of containers to comply...
22:08:37 <Slower> what would be involved to get eg docker to support cinder?
22:09:01 <ctracey> There are a lot of "hypervisor" agnostic OpenStack use cases that have no need for Cinder
22:09:02 <Slower> or is that just a no go?  someone want to give a little background?
22:09:04 <ewindisch> but yes, as-of-right-now, lack of Cinder support would fail a RefStack and thus DefCore check
22:09:36 <ewindisch> Slower: I’ve done the analysis. Implementing it would be no more difficult than the pause/unpause work
22:10:02 <Slower> to me that sounds reasonable then..
22:10:18 <adrian_otto> so is it appropriate to relax the requirement, or is there a way to technically meet the requirement, even if that approach is theoretical?
22:10:19 <Slower> I'd be willing to take that on
22:10:32 <funzo> ewindisch: Slower sounds like a good thing to do this week.
22:10:55 <funzo> Slower: you mean we. WE, man
22:10:57 <Slower> with the amazing funzo's help
22:10:58 <ctracey> I think the requirement needs to be relaxed even if it gets implemented
22:11:07 <Slower> funzo: but of course :)
22:11:27 <adrian_otto> ok, is there a concrete reference to the DefCore requirement we are concerned about?
22:11:38 <adrian_otto> I'd like to record it with a #link if possible
22:11:39 <ewindisch> I’m good until Thursday, but then I go to SF for Dockercon and will be out-of-pocket until Wednesday or Thursday
22:11:51 <Slower> ewindisch: ok we'll hit you up for info tomorrow
22:12:16 <ewindisch> https://wiki.openstack.org/wiki/RefStack/DefCore_Requirements
22:12:23 <adrian_otto> Slower: are you willing to take an action item to identify technical options for better meeting the cinder integration expectations?
22:12:44 <ewindisch> strictly speaking, that forwards you to the implementation of refstack which lists this: https://github.com/stackforge/refstack/blob/master/defcore/havana/coretests.json
22:12:47 <adrian_otto> thanks ewindisch
22:12:57 <adrian_otto> #link https://wiki.openstack.org/wiki/RefStack/DefCore_Requirements
22:13:05 <adrian_otto> #link https://github.com/stackforge/refstack/blob/master/defcore/havana/coretests.json
22:13:25 <ewindisch> Slower: sounds good. I even have a patch started on the nova-docker side
22:13:32 <Slower> adrian_otto: sure
22:13:45 <adrian_otto> what I'd be looking for is proposals for what more sensible requirements might be, and what options exist for implementations that close the gap toward that
22:14:15 <ewindisch> the big gap in defcore compatibility isn’t in supporting Cinder
22:14:16 <adrian_otto> #action Slower to identify implementation options for adding cinder support to nova-docker to more closely meet expectations for DefCore criteria
22:14:21 <ewindisch> but the inability of containers to actually mount filesystems
22:14:30 <Slower> adrian_otto: well basically I'm going to try to implement it for nova-docker
22:14:38 <Slower> yeah
22:14:42 <adrian_otto> a pull request is fine ;-)
22:14:52 <Slower> good then :)
22:15:10 <ewindisch> that is, we can support cinder from the docker driver, but we can’t pass refstack tests as they exist today without mounting filesystems
22:15:16 <adrian_otto> ok, so ctracey does this address your concern?
22:15:26 <ewindisch> and it’s up for debate as to whether that should be a DefCore requirement
22:15:46 <ctracey> Yes. Though I think this is bigger than docker integration itself.
22:16:29 <adrian_otto> ok, so one option to deal with that would be to meet with the DefCore committee, right?
22:16:36 <adrian_otto> and raise this subject for discussion there
22:16:46 <ctracey> Yep
22:17:15 <adrian_otto> does anyone know when they meet, or could peek at the list of meeting schedules to find out?
22:17:18 <ewindisch> adrian_otto: yes. I would preface that with actually getting cinder support in nova-docker and seeing that we have a basic ‘just dd to the disk’ test in Tempest
22:17:44 <adrian_otto> ewindisch: that's fair
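A rough sketch of the kind of "just dd to the disk" check ewindisch describes: attach a Cinder volume to an instance and exercise the raw block device without ever mounting a filesystem. This is not an actual Tempest test; attach_volume and ssh_to_guest below are hypothetical helpers standing in for whatever the test framework provides.

    # Hypothetical sketch only -- attach a Cinder volume and write/read the raw
    # block device, which a container can do even though it cannot mount a
    # filesystem.  attach_volume() and ssh_to_guest() are made-up helpers.
    def check_volume_attach_without_mount(server, volume, attach_volume, ssh_to_guest):
        device = "/dev/vdb"                    # assumed attach point
        attach_volume(server, volume, device)  # e.g. what `nova volume-attach` does
        ssh = ssh_to_guest(server)
        # Write a small pattern directly to the device, then read it back.
        ssh.exec_command(
            "dd if=/dev/urandom of=%s bs=1M count=1 oflag=direct" % device)
        return ssh.exec_command("dd if=%s bs=1M count=1 | md5sum" % device)
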
22:18:18 <adrian_otto> we don't have to act on this immediately, but I'd like to work this from both ends, in case the cinder requirements don't need to be as strict
22:18:20 <ewindisch> adrian_otto: Thursday is the refstack/defcore meeting… I’ve been making sure to attend since the summit
22:18:31 <adrian_otto> ewindisch: ok, thanks
22:19:15 <adrian_otto> so should we wait a week or two, and then add this subject to the agenda for that meeting once we have naive support for cinder in nova-docker?
22:19:31 <adrian_otto> we will have this same concern for other virt drivers as well
22:19:45 <adrian_otto> or other downstream technology that we might access through libvirt, etc.
22:20:34 <adrian_otto> #action adrian_otto to follow up with Slower and ewindisch to determine when we should address cinder requirements with refstack team
22:20:44 <adrian_otto> any other thoughts on this question?
22:20:46 <dguryanov> What about mounts from host? Isn't it simpler to add another API call to nova ?
22:21:20 <ewindisch> dguryanov: there are security concerns with that
22:21:51 <ewindisch> filesystem mounting can easily compromise the host
22:22:34 <adrian_otto> it could be offered as a use-at-your-own-risk feature, right?
22:22:47 <adrian_otto> there are some environments where that might be acceptable
22:23:04 <ewindisch> but yes, there are workarounds. You could launch a qemu instance, mount the filesystem, then use NFS (or something) to serve back to the container.
22:23:06 <Slower> that requires knowledgeable users though
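For context, a very rough sketch of the workaround ewindisch describes, assuming a helper VM has already mounted the Cinder volume's filesystem and exported it over NFS; the export path, mountpoint, and image name are made up.

    # Sketch of the NFS workaround: the host mounts the helper VM's NFS export
    # and bind-mounts it into the container, so the container itself never
    # needs the privileges required to mount a filesystem.
    import subprocess

    def serve_volume_to_container(nfs_export="helper-vm:/export/vol1",
                                  mountpoint="/mnt/vol1",
                                  image="ubuntu", name="app1"):
        subprocess.check_call(["mkdir", "-p", mountpoint])
        subprocess.check_call(["mount", "-t", "nfs", nfs_export, mountpoint])
        subprocess.check_call(["docker", "run", "-d", "--name", name,
                               "-v", "%s:/data" % mountpoint, image])
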
22:23:55 <adrian_otto> ok, let's wrap on this one for this week
22:24:12 <adrian_otto> we will revisit this in the action items review at next week's meeting.
22:24:22 <adrian_otto> with any luck we might have reviews to reference
22:24:57 <adrian_otto> and we can also revisit this in our Open Discussion
22:25:00 <adrian_otto> #topic Containers in OpenStack -- Review Top Themes
22:25:08 <adrian_otto> #link https://wiki.openstack.org/wiki/Teams/Containers#Top_Themes Top Themes from Stakeholders
22:25:29 <adrian_otto> do you all think we are focusing in the right areas? barking up the wrong tree?
22:25:53 <adrian_otto> that wiki is a derivative of:
22:25:54 <adrian_otto> #link https://etherpad.openstack.org/p/containers Containers Etherpad
22:26:27 <ewindisch> I’m happy with it. Do you want to call a vote?
22:26:40 <adrian_otto> only if we feel we need one
22:27:05 <adrian_otto> I'm open to hearing any suggestions to tweak it, and just use this as a tool for guiding our focus
22:27:35 <adrian_otto> if not, I'll advance to a more interesting topic
22:27:45 <adrian_otto> #topic Identify Preferred Implementation Approaches
22:27:54 <adrian_otto> Review the implementation options identified in https://etherpad.openstack.org/p/containers and determine if there is consensus for a primary approach.
22:28:53 <adrian_otto> so over the last couple of weeks we explored some pro/con arguments for each of the implementation options. This consensus will answer the question: "Where do containers fit in OpenStack?"
22:29:40 <adrian_otto> so before we debate the merits of each, I'd like to ask if there are other options that should be on that list?
22:30:40 <ewindisch> I’m still not sure how #3 differs from #1… they both read as, “add extensions to Nova”, how those extensions look is TBD
22:30:47 <adrian_otto> note that option 3 could be implemented using a host agent or a guest agent, or both
22:31:01 <adrian_otto> ewindisch: ^
22:31:27 <adrian_otto> whereas, #1 probably only addresses the functionality set that VMs and containers have in common
22:31:49 <ewindisch> adrian_otto: #1 lists, “- implement containers extensions to sit on top / extend Nova “
22:32:35 <ewindisch> maybe scratch that line from #1 and move it to #3 for clarity?
22:32:41 <adrian_otto> ok
22:33:13 <adrian_otto> thanks for moving that
22:33:28 <adrian_otto> ok, are there more options?
22:34:53 <adrian_otto> ok, so let's take a quick poll to see where we are starting
22:34:55 <ewindisch> #afterstack
22:34:57 <ewindisch> ;-)
22:35:07 <adrian_otto> what's the heading number of the option you currently prefer?
22:35:14 <adrian_otto> 3
22:35:14 <ewindisch> in this case, my guiding principle has been to do #3 with an open door to #5, as I don’t think we can make a solid determination on that without a better plan for what those containers extensions will look like for Nova.
22:35:48 <adrian_otto> yes, 3 does not preclude 5.
22:36:00 <adrian_otto> so I suppose we might narrow the options to what to do first
22:36:10 <adrian_otto> and then expand on that with some future vision for where to head next
22:36:33 <adrian_otto> so we have two indications of #3, do I hear others?
22:37:16 <Slower> I think #3 is the best balance and most attainable
22:37:36 <ewindisch> I should clarify I’m suggesting we do planning for #3, then decide if we should continue with #3, switch to #2, or switch to #5.
22:37:37 <dguryanov> 3, as a virt driver + extend Nova, if I understood correctly
22:37:39 <Slower> and we can basically start with #1 and add #3 features
22:38:08 <adrian_otto> Slower: yes.
22:38:34 <adrian_otto> ok, any more thoughts?
22:38:36 <Slower> it seems like #1 has pretty good political backing
22:39:17 <ewindisch> Slower: agreed, but #3 is an extension of #1, more than a divergent option
22:39:28 <adrian_otto> I have not found any stackers who think that Containers should not fit anywhere
22:39:34 <ewindisch> “do everything we can to the Nova API, then do the rest in extensions”
22:39:54 <ewindisch> adrian_otto: you haven’t spoken to Joe Gordon, then ;-)
22:39:58 <adrian_otto> but we have not yet reached consensus about where they belong short, medium, and long term
22:40:22 <adrian_otto> ewindisch: is it possible for us to express his point of view in a fair way, so we can understand it?
22:40:24 <meghal> so in #3 by host-agent does it refer to something like a docker daemon running on compute hosts ? and nova-api interacting with that agent ?
22:41:40 <adrian_otto> meghal: Yes, that's one way to deal with it. We could have a nova extension that talks to a combination of host and/or guest agents to deal with the "inside the os" functionality
22:42:07 <adrian_otto> another option is to have a separate API endpoint for that, and only use nova for the "outside the os" functionality set
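To make the host-agent option concrete, here is a hypothetical sketch of an agent on the compute host that a Nova extension could call for the "inside the os" functionality; none of these class or method names exist in Nova or nova-docker.

    # Hypothetical host agent: a Nova API extension would send it a request,
    # and it runs the command inside the target container on this host.
    import subprocess

    class ContainerHostAgent(object):
        def run_in_container(self, container_id, command, env=None):
            """Run `command` inside the container, optionally with extra env vars."""
            env_prefix = ["env"] + ["%s=%s" % kv for kv in (env or {}).items()]
            cmd = ["docker", "exec", container_id] + env_prefix + list(command)
            return subprocess.check_output(cmd)

    # A guest agent would expose the same interface but run inside the
    # container image itself, for drivers that cannot reach in from the host.
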
22:42:07 <ewindisch> adrian_otto: sorry, I was kidding - I believe his expression was more of, “containers shouldn’t be everywhere” - rather than, “anywhere”.
22:42:16 <dguryanov> What about application containers? #3 is suitable only for lightweight-VM containers
22:43:15 <adrian_otto> dguryanov: agreed, an app container (such as a JVM, if I understand you) would be better managed by an approach like #2.
22:43:25 <meghal> adrian_otto: thanks, so by guest agents we are also looking into the possibility of inside-the-VM-OS scenarios…coming into the picture after VM instances are already booted
22:44:11 <adrian_otto> meghal: yes. For example, if we want to support running a process within the container, with a particular shell environment set at boot time.
22:44:26 <ewindisch> adrian_otto: I understood his question more directed at docker-style microservices versus lxc/openvz “full OS” containers
22:45:04 <ctracey> isnt that already doable in nova-docker today?
22:45:11 <adrian_otto> ctracey: yes.
22:45:57 <ewindisch> actually #3 does raise some interesting points if we look at implementing these features for VMs
22:46:37 <ewindisch> right now, we can specify the command-line for Docker containers, but that is seen as mapped to the kernel command-line
22:46:56 <meghal> adrian_otto: got it…thanks…yes, ewindisch I actually confused #3 with VMs and thought about interacting with the qemu guest agent inside the VMs
22:47:09 <meghal> qemu guest agent for example
22:47:11 <ewindisch> if we wanted to extend the nova api to run a command “inside the OS”, then the mapping between kernel and OS is mismatched
22:47:37 <adrian_otto> +1
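A hedged illustration of the mismatch ewindisch points out: the same Nova-level "command line" means kernel boot arguments to a VM driver but the container's process to a Docker-style driver, so one field cannot cleanly cover both. The class and method names below are hypothetical.

    # Hypothetical sketch: the same field is interpreted two different ways.
    class LibvirtLikeDriver(object):
        def apply_command_line(self, instance, command_line):
            # For a VM this becomes kernel boot arguments; it does not run a
            # process "inside the OS" after boot.
            instance.kernel_args = command_line

    class DockerLikeDriver(object):
        def apply_command_line(self, instance, command_line):
            # For a container the same field is the process the container
            # runs, i.e. it already *is* the "inside the OS" command.
            instance.container_command = command_line
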
22:48:15 <adrian_otto> ok, I had Host Agent Discussion on the agenda, knowing that we would use more time on the previous discussion
22:48:43 <adrian_otto> I'm planning to keep that there for next week, and have you all think about this, and watch for ML discussion on the topic
22:49:10 <adrian_otto> so I will open us up for Open Discussion now
22:49:14 <adrian_otto> #topic Open Discussion
22:49:24 <ewindisch> first - back to Cinder...
22:49:51 <ewindisch> one big stopper is that attaching block devices to the host is the responsibility of the virt driver
22:50:10 <ewindisch> that is, connecting iSCSI, Fibre Channel, Coraid, etc… is all virt-driver specific
22:50:19 <dguryanov> I think we could move code from libvirt's driver to some common lib
22:50:27 <ewindisch> yes, we can, and I’ve spoken to mikal about it.
22:50:45 <ewindisch> he is okay with us doing that, even with the containers code outside the tree, but we need to do the blueprint
22:51:28 <ewindisch> I promised it, but haven’t delivered on it yet. :)
22:52:03 <ewindisch> I was planning to have that ready for this week’s Nova meeting, though
22:53:15 <dguryanov> So who will actually fix the code?
22:54:23 <adrian_otto> Slower?
22:54:26 <Slower> hehe
22:54:29 <ewindisch> dguryanov: I’m willing/able to do work on it, but I’d appreciate help from anyone willing (Slower?)
22:54:37 <Slower> yeah I can help
22:54:48 <Slower> funzo will too I bet :)
22:55:04 <Slower> I guess just calling to libvirt won't work?
22:55:11 <Slower> seems like splitting it out is not the best idea?
22:55:23 <ewindisch> Slower: it doesn’t belong in virt/libvirt, it can be easily moved out
22:55:49 <ewindisch> I counted maybe 3-4 lines that seemed to really depend on libvirt, but it’s possible I’ve misgauged the effort
22:55:53 <Slower> oh this is just the nova libvirt driver, not libvirt itself?
22:56:05 <Slower> gotcha
22:56:07 <Slower> ok
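A minimal sketch of what pulling the volume-connection code out of the Nova libvirt driver into a common library might look like, so nova-docker (or any virt driver) could reuse it; the interface below is hypothetical.

    # Hypothetical driver-agnostic volume connector interface, so any virt
    # driver can attach Cinder volumes without importing nova.virt.libvirt.
    import abc

    class VolumeConnector(abc.ABC):
        @abc.abstractmethod
        def connect_volume(self, connection_info):
            """Attach the backend device to the host; return the local device path."""

        @abc.abstractmethod
        def disconnect_volume(self, connection_info, device_path):
            """Detach the backend device from the host."""

    class ISCSIConnector(VolumeConnector):
        def connect_volume(self, connection_info):
            # iscsiadm discovery/login would go here; the resulting
            # /dev/disk/by-path/... device is what the driver hands to the
            # container (or mounts, once filesystem support exists).
            raise NotImplementedError("sketch only")

        def disconnect_volume(self, connection_info, device_path):
            raise NotImplementedError("sketch only")
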
22:56:31 <dguryanov> CIFS support should be implemented separately, because qemu accesses it without a block device on the host.
22:56:40 <Slower> ewindisch: ya I can help with that
22:56:45 <ewindisch> Slower: thanks
22:57:01 * Slower is slow sometimes
22:57:09 <ewindisch> dguryanov: interesting.
22:57:42 <ewindisch> dguryanov: that’s something that is an acceptable caveat, though, “Cinder support - doesn’t support CIFS” - etc
22:58:01 <ewindisch> I suspect vmware, xen don’t support all of the cinder backends
22:58:38 <dguryanov> Yes, as I remember they support ISCSI and possibly NFS
22:58:43 <ewindisch> next topic - cloud-init?
22:59:13 <ewindisch> that might be outside the scope for this team? I suppose it’s a per-image issue.
22:59:35 <ewindisch> it might be a matter of creating a document saying how to use it with containers — or not
22:59:53 <harlowja> hmmm, cloud-init
22:59:55 <harlowja> did i hear cloud-init
22:59:55 <ewindisch> since it isn’t something I think we can address in the drivers themselves
22:59:59 <Slower> so the issue is it's only in some containers?
23:00:00 <adrian_otto> time to wrap
23:00:21 <adrian_otto> thanks everyone. I liked getting more technical this week, we will keep this up.
23:00:37 <Slower> cool thx guys
23:00:41 <ewindisch> thanks adrian_otto.
23:00:42 <ctracey> thanks all
23:00:45 <adrian_otto> next meeting is Tue 6/10 at 1600 UTC
23:00:49 <ewindisch> *and everyone else
23:00:50 <adrian_otto> #endmeeting