16:00:51 #startmeeting containers
16:00:52 Meeting started Tue Jul 8 16:00:51 2014 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:53 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:56 The meeting name has been set to 'containers'
16:00:56 o/
16:01:06 Hello
16:01:10 Hi!
16:01:22 #link https://wiki.openstack.org/wiki/Meetings/Containers Our Agenda
16:01:37 #topic Roll Call
16:01:51 Dmitry Guryanov, Parallels
16:01:52 Adrian Otto
16:02:00 o/
16:02:00 Andrew Melton
16:02:04 Thomas Maddox
16:02:05 Iqbal Mohomed, IBM Research
16:02:12 Paul Czarkowski
16:02:19 Eric Windisch, Docker
16:02:23 steve wilson
16:02:26 Pavel Emelyanov, Parallels
16:02:42 good, looks like we have nice strong attendance today
16:04:18 feel free to chime in at any time to be recorded in the attendance
16:04:23 #topic Review Action Items
16:04:25 (none)
16:04:34 #topic Call For Proposals
16:04:56 Proposal (sorry if this was already discussed): how to mount volumes in a container?
16:04:58 ok, so this team has really made a ton of progress discussing our options
16:05:21 xemul: I will come back to your question a bit later in the agenda. I have made a note
16:05:28 any other additions to the agenda?
16:05:31 Thank you
16:05:57 ok, so regarding Call for Proposals
16:06:14 we have discussed a good deal, and driven consensus on a number of topics
16:06:32 one thing that's hard to do is think about the future in terms of abstracts
16:06:48 one way to make considerations easier is to draw a sketch
16:07:32 so I'm asking our team if we have volunteers willing to draw sketches in the form of spec proposals that will outline our future state of OpenStack with containers capability added
16:07:41 I wonder if a discussion on glance integration would be useful?
16:07:44 the idea here is that we would have more than one submission
16:08:13 Slower: for an interactive topic today, or in the scope of a containers proposal for OpenStack?
16:08:40 maybe interactive topic
16:09:04 Slower: okay, I have added that to my list.
16:09:31 so I want two key outcomes from our time here today:
16:09:52 1) Statements of interest from those willing to work on proposals
16:10:01 2) A sense of where those proposals should be submitted
16:10:13 I'll work on #2 first while you each think about #1
16:10:43 I suggest that the proposals follow a prevailing format used in a number of OpenStack projects...
16:10:47 Format to follow prevailing nova-specs template: http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/template.rst
16:10:57 +1
16:11:03 adrian_otto: are proposals different than specs?
16:11:09 ah
16:11:11 +1 on using specs
16:11:16 +1
16:11:16 +1 from me as well
16:11:21 if our proposals are actually *not* Nova centric, then we might want to make a containers-specs repo for this purpose
16:11:31 +1 for spec too
16:11:53 ok, so some support expressed for that format
16:12:02 just a bit
16:12:02 lol
16:12:21 does anyone have an alternate point of view on how to express a concrete plan for adding container support in OpenStack?
16:12:24 I am happy to work on specs.
16:12:38 I'd like to add my 5 cents for differentiating containers from Nova
16:12:39 erw_: Thanks. I will join you to help with that work.
16:12:42 Would the containers-specs repo be a good location for specs that cover multiple OpenStack services?
16:13:08 thomasem: yes, that is a possibility
16:13:33 one advantage to using the nova-specs repo is that it is likely to get a lot of eyeballs from key stakeholders
16:14:06 adrian_otto: I propose we draft in an etherpad and transfer to a spec document, submitted via code-review for wider consideration
16:14:14 if we create a new repo, there is a risk that it may get less consideration
16:14:21 that's very true
16:14:25 erw_: Good idea
16:14:34 Is there an existing location for specs and features involving multiple services?
16:14:46 specs related to*
16:14:52 adrian_otto: we could just discuss on a read-only version of the etherpad amongst the containers-folks and then submit to nova, as well
16:15:06 thomasem: not that I am aware of, each project handles them independently from what I have seen
16:15:08 right now with nova, you add your specs to the juno directory, is that a commitment to have the changes in that proposal in juno?
16:15:27 some of our changes may not be in scope for juno
16:15:35 adrian_otto: it’s a good point that specs aren’t open for K yet.
16:15:39 it might also be wise to invite members of the OpenStack TC to review a containers spec, regardless of where it gets proposed
16:15:42 and won’t be for a long while
16:15:46 and specs are now closed for Juno
16:16:06 We can discuss specs in the openstack-dev mailing list
16:16:21 also, as a sub-team of Nova, I believe the proper place for specs is in Nova and not outside of it
16:16:31 #agreed the Containers Team shall use one or more etherpad(s) for initial drafts of a "Containers for OpenStack" proposal
16:16:32 erw_: I thought specs closed Aug 21?
16:16:56 adrian_otto: Okay... I was wondering about that because, although a change involving multiple services would be broken down into specs for each one, it'd be good to have a higher-level concept to tie them all together, speaking to your earlier comment about difficulty in thinking abstractly.
16:17:01 feature proposal freeze: https://wiki.openstack.org/wiki/Juno_Release_Schedule
16:17:24 Oh yeah, apmelton, I saw that your userns spec was referencing Juno, but yeah...
16:17:32 apmelton: this isn’t something we plan to land in Juno, so I doubt it will be well-received.
16:17:39 thomasem: nothing prevents contributors of project A from commenting on a spec in project B
16:17:42 it’s really a spec for K
16:18:03 and so close to FF, I doubt we’ll get eyes on it
16:18:08 erw_: I totally agree what we're proposing isn't going to make Juno
16:18:18 adrian_otto: true true
16:18:34 ah so you're saying for our specs, the window is for all practical purposes closed
16:18:38 I propose we sync with Mikal to find an agreement on the best path for the spec process
16:19:04 erw_: +1
16:19:18 erw_: are you willing to take an action item to reach out to him?
16:19:19 erw_: +1 that's a good idea
16:19:30 or would you prefer if I do that?
16:20:44 I don’t mind either way
16:21:29 #action Eric Windisch to check with Mikal for guidance on the best approach for submitting specification drafts for community discussion
16:21:46 in the meantime, we can work on etherpad documents
16:22:00 now, if the etherpad doc looks terrible to you, don't fret
16:22:13 we can afford to have a few alternate specs to consider
16:22:26 the cost of this approach is more work in expressing each set of ideas
16:22:45 the benefit is faster convergence on a preferred approach
16:23:01 any objection to proceeding with potentially multiple proposals?
16:23:33 I will ask that if we do end up with more than one, each proposal recognize and reference the others as alternatives
16:23:47 so reviewers see the complete picture of options
16:23:52 fair enough?
16:24:41 Yes, it is.
16:24:52 adrian_otto: I think those that are working on proposals should all be aware of each other and form an informal working group
16:25:18 either by stating their intention today, or through proxy - ML, yourself, etc.
16:25:37 yes, erw. My concern is that those interested in providing external input also know about the multiple choices in the works
16:26:22 adrian_otto: about alternatives, there is a section in the template for discussing alternatives. I also believe that links to reviews are discouraged in specs
16:26:45 might just be links to code reviews though
16:26:58 a spec review is different
16:27:14 apmelton: the template is a guideline. Specs are in RST format, so we can fit it in.
16:27:29 ok
16:27:56 adrian_otto: big question is how many are working on proposals?
16:28:04 on the subject of links to reviews of code contributions, that's probably a grey area, meaning that the code came before the review, which in some projects is discouraged.
16:28:25 that makes sense
16:28:27 s/before the review/before the spec/
16:28:59 erw_: I expect it to be a group of 4 or less, in all honesty
16:29:28 and we will source small bits of input from a dozen on this team, and maybe another dozen from outside the team
16:30:17 as any proposals take form, I suggest we discuss them here each week in terms of what's been added, and what comes next
16:30:42 ok, any further questions or concerns on this topic before I advance to the next?
16:30:51 adrian_otto: a note on the wiki page or an etherpad would be a way to at least link those that are working on this - along with contact details
16:31:07 if anyone wants to ping us, or if we want to ping each other
16:31:20 erw_: good idea. I like that.
16:31:21 erw_: I believe there's a section for that on each spec
16:31:55 the main author, and then other contributors
16:32:26 or are you suggesting just a list of team members who've volunteered to draft specs?
16:33:08 apmelton: it sounds like we’ll have several draft specs
16:33:08 apmelton: A Wiki page for the initiative, like we have for our Team wiki: https://wiki.openstack.org/wiki/Teams/Containers
16:33:34 ok, sounds good to me
16:33:50 we could use a section of that, and link to it, or use a sub-page
16:34:02 any other thoughts on this topic?
16:34:46 #topic Volume mounting in containers
16:34:48 xemul "Proposal (sorry if this was already discussed): how to mount volumes in a container?"
16:34:54 xemul: you have the floor
16:35:13 adrian_otto: do you have a link to the etherpad from two weeks ago?
16:35:22 Yes, one moment
16:35:42 xemul: I’m interested to know if you have some ideas here. I’ve been working on this and have been making progress, but it’s all hypothetical / design, I haven’t written any code yet.
16:35:55 I have found a spec about libvirt lxc containers booting from volumes - https://github.com/openstack/nova-specs/blob/master/specs/juno/libvirt-start-lxc-from-block-devices.rst
16:36:10 Here is a quote from it^
16:36:18 OK
16:36:18 We have one that we used during the host agent discussion on 6/24: https://etherpad.openstack.org/p/containers-plugin-arch
16:36:44 So the thing is, the reason for not allowing this on the host is -- if we provide a corrupted disk image, mounting it can crash the box.
16:36:54 "As LXC will always share the host's kernel between all instances, any vulnerability in the kernel may be used to harm the host. In general, the kernel's filesystem drivers should be trusted to be free of vulnerabilities that the user filesystem image may exploit."
16:36:58 and one from the cinder support discussion: https://etherpad.openstack.org/p/container-block-storage
16:37:14 #link https://etherpad.openstack.org/p/container-block-storage Container Block Storage Options
16:37:33 apmelton: I think that's the one you wanted
16:37:39 adrian_otto: yup, thanks!
16:38:18 The boot from volumes is a slightly different problem
16:38:41 and has a different set of security concerns
16:38:44 Why is it different
16:38:45 and from previous minutes: http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-06-17-22.00.html we agreed:
16:38:48 AGREED: our first step for cinder support with Containers shall be addressed by option 8 in https://etherpad.openstack.org/p/container-block-storage (http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-06-17-22.00.log.html#l-169, 22:38:09)
16:38:48 AGREED: Option #6 from https://etherpad.openstack.org/p/container-block-storage is not our preferred outcome. Secure by default is preferred. (http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-06-17-22.00.log.html#l-213, 22:51:49)
16:38:55 xemul: “crash the box” is a generous statement.
16:39:23 Well yes :)
16:39:40 I used it as a "generic" term
16:39:45 hmm
16:39:46 "As LXC will always share the host's kernel between all instances, any vulnerability in the kernel may be used to harm the host. In general, the kernel's filesystem drivers should be trusted to be free of vulnerabilities that the user filesystem image may exploit."
16:39:53 interesting statement
16:40:40 Our point is that if we don't allow the container to mount arbitrary FS types and don't let it provide the virtual image it mounts, the security impact is not that big
16:41:05 since e.g. ext4 has been there for many years and can be considered as "mostly harmless" in that sense
16:41:50 dguryanov: you have to somehow mount the FS in the mounting-volumes case.. in the namespace of the container
16:41:53 (sorry for being messy, the IRC format is quite unusual to me)
16:42:02 xemul: but they could use any FS..
16:42:08 Containers?
16:42:30 for mounting volumes (user provided FS image)
16:42:32 No, typically containers work on volumes they are provided with by the host
16:42:38 What's the most likely (~80%) case for which filesystem would be used?
16:42:54 I'm sure that would be ext[234]
16:42:59 In our experience the list is ext4 and tmpfs
16:43:14 And some std linux ones like proc, sys and devtmpfs :)
16:43:18 xemul: the problem is that with Cinder, we’re obligated to provide the block device to the container
16:43:19 but that doesn't mean someone couldn't do something malicious
16:43:22 If we limit functionality in the ~80% case due to the other 20, we're probably hurting ourselves.
16:43:32 xemul: as such, we ARE allowing the container to control the raw filesystem
16:43:41 erw_, this is one of the reasons we should consider differentiating from VMs
16:43:42 that's an interesting idea.. we could do a FS type whitelist
16:43:56 hmmm
16:43:57 xemul: but this is where something like Manila or a similar alternative to Cinder is viable
16:44:32 We can forbid raw access to a block device and only allow mounts
16:44:50 raw access is the easier part
16:44:53 erw_, I'll look at Manila, thanks
16:44:59 and it's option #8 that we agreed upon
16:45:00 since it doesn't involve kernel level interaction
16:45:04 Raw access is typically not required
16:45:11 ultimately, what are we trying to get with containers+cinder?
16:45:21 if it's just a remote file system, Manila should cover that, right?
16:45:47 firstly DefCore compliance, supporting volume attachment
16:45:56 +1
16:46:00 imho, containers+cinder using the nova-volumes APIs should expose block devices into containers, and containers should ideally be able to mount those volumes directly.
16:46:07 Containers also want to have some persistent data store
16:46:32 I think there is room for extensions or alternative APIs that do more sane and reasonably safe things
16:46:37 and we should propose those APIs or extensions
16:46:40 What if we expose the raw device to the driver, it mounts it and then launches the container app?
16:47:00 And container processes no longer have raw access to the FS?
16:47:18 Let's spend just a moment longer on this topic so we can touch on the other remaining agenda item. We can revisit this in Open Discussion today as well.
16:47:24 xemul: there are valid use cases for allowing raw access to block devices, though
16:47:35 erw_, what are they?
16:47:36 xemul: cinder is NOT a filesystem service. Cinder is a raw block device service
16:47:43 erw_, I agree
16:47:54 arguably, we don’t need to allow mounting filesystems from Cinder, period.
16:48:00 Should we take the new FileSystem service into consideration?
16:48:23 xemul: that is Manila, right? Or are you referring to something else?
16:48:31 it isn’t in the API contract, and neither VMs nor containers should be obligated to understand the arbitrary data that resides on those block devices
16:48:42 and yes, filesystems are arbitrary data ;-)
16:48:42 erw_: what are the pure block device use cases for containers?
16:48:44 adrian_otto, probably Manila, yes.
16:49:14 apmelton: raw memory heaps. databases. virtual tape drives. um… I’m sure I can think of more
16:49:19 apmelton: see line 128 of https://etherpad.openstack.org/p/container-block-storage
16:49:33 erw_, raw access to the device w/o mounting it could be an option too, by the way.
16:49:49 User namespaces don't allow mounting arbitrary FSes, so this is doable
16:49:58 ok, final thoughts before we advance topics?
16:50:02 And a container may use two services -- Cinder for raw disks and Manila (?) for filesystems
16:50:20 My final thought (probably old, sorry if it is):
16:50:33 I like the idea of a filesystem whitelist. That would let us at least limit the kernel code exposed in volume mounting
16:50:47 Since we're talking about Manila -- probably it makes sense to think of a Containers service which is not a Nova driver
16:50:50 Slower: that is also a good point.. cinder & manila
16:50:57 One of the reasons -- scaling the containers
16:51:00 Slower: please record that in the etherpad
16:51:28 The thing is -- applying larger or smaller memory to a container is much easier (and actually works) than for a VM
16:51:31 The API that's implemented for the nova-driver could easily be a subset of the container service API impl
16:51:42 That said, scaling in terms of applying a new flavor might not be that good
16:51:56 ok, this is a good discussion, so I'm reluctant to cut it short. I suggest we revisit this again next week.
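
To make the whitelist idea concrete, here is a minimal sketch of what a driver-side check might look like before a Cinder volume is mounted on the host and exposed into a container's mount namespace. Everything in it is hypothetical: ALLOWED_FS_TYPES, probe_fs_type, and mount_volume_for_container are invented names, and shelling out to blkid and mount is just one plausible mechanism, not anything the team agreed on.

```python
# Hypothetical sketch of the filesystem whitelist idea discussed above.
import subprocess

# Filesystems whose kernel drivers are comparatively well-exercised;
# roughly the ~80% case mentioned in the discussion.
ALLOWED_FS_TYPES = {'ext2', 'ext3', 'ext4'}


def probe_fs_type(block_device):
    """Ask blkid which filesystem the volume carries."""
    out = subprocess.check_output(
        ['blkid', '-o', 'value', '-s', 'TYPE', block_device])
    return out.decode().strip()


def mount_volume_for_container(block_device, mountpoint):
    """Mount the volume on the host only if its FS type is whitelisted.

    The mounted tree, rather than the raw device, would then be exposed
    into the container's mount namespace (e.g. via a bind mount), so
    container processes never get raw access to the filesystem.
    """
    fs_type = probe_fs_type(block_device)
    if fs_type not in ALLOWED_FS_TYPES:
        raise ValueError('filesystem %r is not whitelisted for '
                         'container volume mounts' % fs_type)
    subprocess.check_call(
        ['mount', '-t', fs_type, block_device, mountpoint])
```
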
16:52:11 #topic Announcements
16:52:14 I don't mind
16:52:30 Reminder: adrian_otto will be on vacation 2014-07-11 to 2014-07-24. Eric Windisch will chair our 2014-07-15 and 2014-07-22 meetings.
16:52:47 adrian_otto: enjoy the vacation!
16:52:54 erw_: you'll own the agenda. I'm happy to help out in any way you want before my departure
16:52:57 adrian_otto, is the new meeting time already selected?
16:53:14 xemul: see:
16:53:24 #link https://wiki.openstack.org/wiki/Meetings/Containers Meeting Schedule and Agenda
16:53:25 adrian_otto: done
16:53:42 adrian_otto: thanks.
16:53:55 #topic Glance Integration
16:53:59 Slower: proceed
16:54:29 I will call time in a couple of minutes for Open Discussion, and then adjournment. This topic should be revisited next week as well.
16:54:32 I'm realizing that there are some differences in how image/glance integration would work in e.g. LXC vs docker
16:54:43 yeah we can punt if you want
16:55:02 let's discuss for just a moment
16:55:14 at least let us start to think on this topic
16:55:27 so erw_ is working on putting the docker containers inside glance
16:55:28 well, how about I start with the work I’ve been doing the past week
16:55:30 haha
16:55:35 erw_: go ahead
16:55:47 I’ve ripped out the docker-registry as a proxy
16:55:58 and have implemented save/load of images into/out-of glance
16:56:08 erw_: yay!!
16:56:19 erw_, and what do the images look like?
16:56:22 adrian_otto: there are some serious security issues though, which I’d like to discuss - but we can take that offline
16:56:30 I mean -- is it the container FS packed into a virtual disk image?
16:56:32 indeed
16:56:47 and in theory if the image is not in glance it will attempt to pull from the docker registry, correct?
16:56:54 xemul: docker ‘save’ will create a tarball containing multiple layers and tags
16:57:01 Ah OK. Thanks.
16:57:02 they’re then imported via ‘docker load'
16:57:10 yeah it's native docker format right?
16:57:11 Slower: no.
16:57:15 er..
16:57:19 no to the first question
16:57:26 yes to the second - it’s a “docker native format"
16:57:33 erw_: ah, how do we get new images into glance?
16:57:38 where “a tarball of tarballs and a manifest file” == native to docker
16:57:40 oh, I thought I saw a pull if the image isn't found
16:57:53 I probably misread
16:57:58 erw_: so each glance image is a tarball of a tarball?
16:58:03 Slower: it would be something like, “docker save cirros | glance image-create"
16:58:26 #topic Open Discussion
16:58:27 erw_: ah so it has to be in the local registry to be available?
16:58:28 feel free to continue this discussion
16:58:48 when nova boots, does docker have to pull the entire tarball down, or are layers somehow preserved?
16:58:52 Slower: You would have a machine running docker to export the tarball from
16:58:57 and from that tarball, you can import it into glance
16:59:22 hrrm
16:59:27 apmelton: it would pull the entire tarball down, which would write layers into the system..
16:59:31 so not very good docker registry integration then.. not like before
16:59:35 I've got to run, unfortunately. Catch y'all next week!
16:59:36 apmelton: so this is where there are security issues...
16:59:50 apmelton: the layers and the tags in these tarballs are arbitrary...
16:59:56 yea, you could have a massive uncompressed image
17:00:07 we timed out, sorry guys
17:00:12 apmelton: worse, you could overwrite the local docker’s idea of “ubuntu"
17:00:14 thanks for attending!
17:00:17 from your “cirrus” image.
17:00:24 erw_: scary
17:00:26 *cirros
17:00:30 #endmeeting
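
On the Glance integration work: the import flow described above ("docker save cirros | glance image-create") could be scripted roughly as below. This is a minimal sketch, assuming the 2014-era docker and glance command-line clients with the usual OS_* credentials set in the environment; the helper name is invented, and the disk/container format values are plausible choices rather than anything stated in the meeting.

```python
# Hypothetical script for the "docker save <image> | glance image-create"
# flow. Assumes the docker and glance command-line clients are installed
# and authenticated; names and format values are illustrative only.
import subprocess


def export_docker_image_to_glance(docker_image, glance_name):
    """Pipe a 'docker save' tarball directly into Glance."""
    save = subprocess.Popen(['docker', 'save', docker_image],
                            stdout=subprocess.PIPE)
    try:
        # glance image-create reads image data from stdin when no
        # --file argument is given; 'raw'/'docker' are plausible
        # format values here, not something settled in the meeting.
        subprocess.check_call(
            ['glance', 'image-create',
             '--name', glance_name,
             '--disk-format', 'raw',
             '--container-format', 'docker'],
            stdin=save.stdout)
    finally:
        save.stdout.close()
        save.wait()
    if save.returncode != 0:
        raise RuntimeError('docker save failed for %s' % docker_image)


# Mirroring the example from the log:
# export_docker_image_to_glance('cirros', 'cirros')
```
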