17:01:11 #startmeeting nova
17:01:12 Meeting started Fri Nov 22 17:01:11 2013 UTC and is due to finish in 60 minutes. The chair is rustlebee. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:15 The meeting name has been set to 'nova'
17:01:23 using the 'nova' topic. minutes will be with the rest of the nova project meetings
17:01:33 #chair samalba
17:01:34 Current chairs: rustlebee samalba
17:01:36 #chair kraman
17:01:37 Current chairs: kraman rustlebee samalba
17:01:48 #link https://etherpad.openstack.org/p/containers-service
17:01:56 #link https://etherpad.openstack.org/p/containers-service-api
17:02:01 #topic Possible new containers service
17:02:27 * danpb present
17:02:49 kraman: would you guys like to discuss what's written down so far?
17:03:04 ye
17:03:05 yes
17:03:06 sure
17:03:08 o/
17:03:12 o/
17:03:16 but before we start would like to see who is on the channel today
17:03:21 <- here
17:03:24 o/
17:03:25 o/
17:03:28 o/
17:03:28 o/
17:03:31 Please indicate if you know one of the container technologies as well
17:03:34 o/
17:03:38 Yes, especially discuss the different plans proposed (for supporting containers): new service, new api, etc...
17:03:48 I know Docker
17:03:49 :-)
17:04:04 o/ <- LXC, Docker
17:04:09 i've heard of linux.
17:04:11 <-- LXC
17:04:12 i know things
17:04:15 <- lxc
17:04:28 <- the things i've seen...
17:04:29 I'm here as well (first time)
17:04:38 During the design session we had gone over a few different options for how to proceed: 1) new service 2) merge into nova 3) some sort of hybrid
17:04:49 here
17:05:00 * danpb libvirt / libvirt-sandbox / docker
17:05:11 There was no consensus, so we decided to try and come up with an API which can support containers and then go back to the Nova community and decide how to proceed
17:05:14 * SpamapS knows nothing
17:05:35 SpamapS: yes we know ;)
17:06:02 Thanks for the introductions :) Has everyone had a chance to go over the 2 etherpads?
17:06:19 yeah I think a new service might be total overkill at this point
17:06:25 I guess everyone followed the thread on the ml?
17:06:31 I have, and I think the API one is extremely helpful in demonstrating why I think this is a massive increase in scope for nova :)
17:07:12 Pad containers-service lines 75-86 contain some of the important use cases we would like to cover, and the api pad goes over the detail
17:07:34 damnsmith: what do you think of the point raised by Tim in the last email (about what should happen for integration for the end user)?
17:08:08 samalba: I think that most users would like the entire API to be a single endpoint :)
17:08:54 and a new API would only be justified if it's not disrupting existing users
17:09:00 because it would be for use cases not currently possible
17:09:13 I think the deployer's impact of a new service is well-understood to be something we should try to avoid when possible, but not at all costs
17:09:32 I definitely understand why people outside of nova want this to be in nova because it's a unit of compute
17:09:43 I guess we have the same problem with adding a new top-level API in Nova itself, right?
17:09:45 to me the api description in etherpad doesn't seem very large
17:09:46 We can decide if we need a new API or not after we complete the draft. At this point, we just need to identify the deltas
17:09:53 danpb: yeah..
17:09:58 and several of the "unique" features listed there could easily apply to existing nova code
17:10:13 eg the ability to get a list of processes is valuable even for full OS virt
17:10:28 Can everyone familiar with one of the container techs please update the capabilities section on containers-service-api
17:10:38 danpb: but it's squarely outside the scope of nova right now
17:10:54 damnsmith: but if it's useful for full OS virt, where would it go in openstack?
17:11:15 One of the interesting aspects is even if the API is included in Nova, it seems a fairly common case will involve users that *only* access resources related to containers, while other users *only* access resources related to VMs
17:11:17 * rustlebee is undecided on this issue right now to be clear
17:11:22 none of the container tuning features listed are at all specific to containers either
17:11:24 rustlebee: I've said a couple times that I think it'd be nice if this new service was not containers specific, but a thing that provides "OS services"
17:11:29 rustlebee: same here...
17:11:52 damnsmith: yes, that would be a more useful distinction for some of the apis listed
17:11:54 there are multiple kinds of containers and I think that needs to be discussed - if we're placing something on hardware (any kind of container) then there is a common pattern of usage which Nova already covers
17:11:57 damnsmith: whats an OS service?
17:12:01 I think that a well-defined "openstack agent" could live inside guests and provide this capability to all things, not just containers
17:12:10 but that is a HUGE increase in nova's scope to contain that agent as well
17:12:19 kraman: apis which get information about stuff inside an instance
17:12:31 kraman: as opposed to what we currently do which is mostly about stuff outside an instance
17:12:36 kraman: Operating System service is what I meant
17:12:37 containers can and should act like OS instances and I believe nova should handle this - scheduling the creation of containers within these hosted instances could be the realm of a different api/service
17:12:41 that's interesting ...
17:12:48 so it actually wouldn't be involved at all with placement or anything?
17:12:51 danpb: thats a pretty big leap from just managing containers
17:13:11 openstack is going to need in-guest management at some point
17:13:22 xen already has it in a totally incompatible-with-everything way
17:13:32 jhopper: containers are not the same as OS instances. they can be as simple as a single process with an ENV and kernel namespaces
17:13:50 jhopper: we lose a lot of the power of containers when forcing them to behave like a VM
17:13:50 kraman: how are they not the same as OS instances?
17:13:59 damnsmith: ack, the two way metadata service means its portable cross hypervisor, but that feels wrong
17:14:09 jhopper: containers are not the same as VMs, sometimes people conflate them, but they are much more than just a thin VM
17:14:11 kraman: containers operate on process trees - containers within containers or VMs loses nothing
17:14:14 kraman: not really, it is about not unnecessarily constraining apis to one specific technology
17:14:23 damnsmith: I understand they're not the same as VMs
17:14:40 damnsmith: however they share striking similarities with the OS instance that lives on a VM
17:14:48 damnsmith: taken further
17:14:58 jhopper: oh you mean Operating System, not OpenStack
17:15:00 damnsmith: you could argue a container holding a rootfs on bare metal is an instance
17:15:03 we need to stop using OS :)
17:15:06 damnsmith: sorry lol
17:15:06 kraman: don't equate VMs == full OS installs
17:15:17 an API that talks to an in-guest instance seems like it would have to be pretty tightly integrated with Nova
17:15:23 it's hard for me to think about how that could be out of nova
17:15:28 rustlebee: agreed
17:15:32 there are containers & VMs, and either of them can run a full OS or individual services/processes
17:15:34 a separate API, sure
17:15:45 rustlebee: dont think you have to yet. lets go over the API first
17:15:53 an openstack agent isnt that what cloud-init is..kind of?
17:15:59 rustlebee: I don't know why you say that
17:16:05 perhaps for the purpose of this discussion we can say OS == OpenStack, OpSys = Operating System ?
17:16:23 zul: cloud-init is not an agent
17:16:25 damnsmith: could be lots of reasons, like i'm just dumb, or it's friday, or
17:16:31 it is a bootstrap tool
17:16:41 rustlebee: ah, the inarguable friday excuse.. okay :)
17:16:59 right, so things that people want to do that would be agenty:
17:17:05 - two way console communication
17:17:08 - live process lists
17:17:11 - live process controls
17:17:17 - filesystem manipulation at runtime, etc
17:17:30 those don't need nova's involvement and definitely don't fit with cloud-init I think
17:17:31 - password reset without reboot
17:17:37 johnthetubaguy1: +1000
17:17:38 damnsmith: sounds like mcollective
17:17:39 i don't think we should specifically talk about agents - they're a hypervisor specific implementation detail
17:17:51 I don't know - it seems like nova would benefit greatly from that feature set
17:17:52 danpb: they don't need to be
17:17:55 if it isn't already in there
17:18:02 OpenStack would benefit from it, yes
17:18:16 to benefit, it doesn't have to be Nova
17:18:25 sounds like "none of openstack's business" IMO. Inside a container is like inside a VM... we're like vampires, the user has to invite us in..
17:18:26 hence discussion :)
17:18:35 and its probably in the compute program, just maybe not Nova...
17:18:47 agreed that this is all in scope for the compute program
17:19:03 well sure but realm of responsibility should be taken into account - maybe not nova in particular but I could see the agent working in the scope of many compute components
17:19:38 the question is probably how deep are the hooks in nova, keep the API separate to start with, integrate if it makes sense later, else we get nova-volume all over again?
17:19:45 also, unfscking network without reboot is another agenty thing
17:20:08 online filesystem resize is another
17:20:12 johnthetubaguy1: ah
17:20:14 damnsmith: its like you're listing what the xenapi agent does, but yes
17:20:20 heh
17:20:39 would it be useful to have some process management capabilities for VMs as well at some point? Hence extending the existing API?
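The "live process lists" item in the list above is one that container hosts can serve without any in-guest agent, since a container's processes are visible from the host side. A minimal sketch, assuming a Linux host with procfs mounted; a real driver would scope the listing to one container (e.g. by cgroup), which is omitted here:

```python
import os

def list_host_pids():
    """List PIDs visible on the host by scanning /proc (Linux only).

    Illustrates how a host-side driver could enumerate container
    processes with no agent inside the guest. Scoping to a single
    container is deliberately left out of this sketch.
    """
    return sorted(int(name) for name in os.listdir("/proc") if name.isdigit())

if __name__ == "__main__":
    print(len(list_host_pids()), "processes visible")
```

For full-OS VMs, by contrast, the same API would need an in-guest agent or hypervisor support, which is exactly the scope question debated here.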
17:20:40 damnsmith: with container based virt you can do a lot of this without needing an agent, so it is desirable not to force an agent into our architecture unnecessarily - that's why i said it was a virt driver specific detail
17:20:43 I know only what xen's agent did in 2006 :D
17:21:10 danpb: well, I'm saying that for containers, the agent kinda deflates to nothing, and for VMs we provide the agent to do these things
17:21:16 damnsmith: ah, this is a new one, but anyways, lets leave that aside
17:21:19 danpb: but the api to do them from the outside becomes the same for a vm and a container
17:21:33 so it seems we're morphing from a container proposal to a guest management proposal
17:21:36 damnsmith: yep, that's what i'd expect
17:21:43 is that right?
17:22:06 rustlebee: i'd say a bit of both really
17:22:26 isnt guest management too broad a mandate for nova?
17:22:31 wow this all sounds.. a million miles outside of nova's scope. reaching into containers and vms? really?
17:22:33 rustlebee: i think there's clearly container specific stuff we'll need to do wrt booting instances, but a lot of the ongoing mgmt apis are general guest management
17:22:42 a guest management proposal that also supports the features we need w/ containers seems like biting off a bit more than we can probably chew
17:22:48 going back to containers, can we just agree about creating them, and whether that fits the nova "server" abstraction? I think the answer is yes, but I keep changing my mind
17:22:50 guest management sounds like something better implemented with heat and other tools
17:23:04 and guest management would also not give the benefit of having a smaller service which just manages containers
17:23:06 it does...
17:23:14 johnthetubaguy1: the thing is, containers aren't super useful if treated like VMs
17:23:22 johnthetubaguy1: I think OpSys container instances absolutely fit the Nova model and should be integrated
17:23:33 It seems like an additional plugin service _might_ be useful/prudent for some of the finer-grained control aspects of containers and/or OpSys service control and guest management. But I think that the primary aspects of container instance mgmt should be included within the existing Nova set.
17:23:38 +1 to jhopper
17:23:42 Am I correct in assuming that the reason the management items are necessary for containers is there isn't a way to do it within the container itself, e.g. with instances you can SSH and do all these things directly within the VM, but for a container that isn't possible?
17:23:47 damnsmith: maybe, but if you have containers, plus fork, is it just orchestration bits for everything else?
17:23:50 damnsmith: I disagree. Containers on bare metal would let you do a great deal and continue to use containers within that container without incurring the cost of the VM
17:23:56 sharwell: no, that's not accurate
17:24:10 sharwell: it entirely depends on whether you configure your container to run ssh or not
17:24:16 jhopper: that is interesting. At that point, what is different about a container that starts by executing /bin/init vs. one that executes /usr/bin/apache2 ?
17:24:24 jhopper: again, you're thinking of containers as VMs
17:24:26 SpamapS: there really isn't
17:24:35 jhopper: the PaaS people want process level control to make them useful
17:24:40 damnsmith: I know but containers have to go somewhere and that's what nova does - manage instances very well
17:24:47 damnsmith: I'm proposing that that be nova's realm
17:25:01 damnsmith: and let the other orchestrations fall where they may - other service or orchestration tools
17:25:09 danpb: so there are *some* containers for which what I said is true, but not for all containers? and since you want to support containers in general it makes the management API commands necessary?
17:25:32 sharwell: and similarly the same is true for OpSys
17:25:41 jhopper: yeah, and that makes nova suddenly concerned with what is inside the instance, which is crossing a defined line we have today
17:25:44 jhopper: if we can settle on the deltas between nova and container APIs we will have more information to decide if it should or should not be part of nova
17:26:00 kraman: want to go through each API feature you have so far?
17:26:09 jhopper: because if we do it for containers, we might as well do it for vms as well, and then..boom
17:26:10 yes please :)
17:26:23 can we start with the interesting ones?
17:26:37 damnsmith: sure
17:27:05 Start with "setting environment"
17:27:18 thats like metadata injection right?
17:27:21 in a VM you can use cloud-init after the VM starts to pull this
17:27:22 that seems logically similar to metadata, yes
17:27:31 but in a container this needs to be set before the container starts
17:27:43 that seems doable in Nova today without any fundamental changes IMO
17:27:49 so if you can't do this before it starts, then you need an init process in the container,
17:27:51 rustlebee: how so?
17:27:54 which is what container folks don't want to do
17:28:04 damnsmith: there isnt an init process
17:28:05 eh what?
17:28:08 i think the key point here is that you have a different boot setup for the container
17:28:16 there is only one process which is what is running in the container
17:28:17 you just ... do it? in your driver
17:28:22 kraman: right
17:28:35 rustlebee: tried that with docker driver. changeset was rejected
17:28:35 rustlebee: we told sam no to that, remember?
17:28:35 I don't understand the assertion that metadata must be set before a container starts.
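The distinction just raised between VMs and containers comes down to when the environment can be set: a VM can pull settings after boot via cloud-init, while a container's single process receives its environment exactly once, when it is exec'd. A minimal sketch of that constraint (the namespace/cgroup setup a real driver would perform is omitted, and `APP_MODE` is just an illustrative variable name):

```python
import subprocess

def launch_with_env(argv, extra_env):
    """Start a single-process container-style workload with its
    environment fixed at creation time.

    The one process gets its environment exactly once, at exec -- so
    the env has to be part of the create/start request, unlike a VM
    where cloud-init can fetch settings after boot.
    """
    env = {"PATH": "/usr/bin:/bin"}  # minimal base environment
    env.update(extra_env)
    return subprocess.run(argv, env=env, capture_output=True, text=True)

result = launch_with_env(["printenv", "APP_MODE"], {"APP_MODE": "worker"})
print(result.stdout.strip())
```

There is no later hook to change the environment without stopping and restarting the process, which is why the discussion below focuses on carrying it in the create/start API.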
17:28:39 yep, this is a driver thing, assuming LXC+libvirt gets pulled into its own driver, etc
17:28:41 there is a process declared as the "init" (it can be any binary) and it has args + environ variables + console connections
17:28:43 i do not remember that, heh
17:28:52 even if you're talking about an application container, there is an 'init' in that it is the reaper.
17:28:54 anyways, whats the next one after metadata?
17:28:54 did i say no, too?
17:29:14 SpamapS: if we're talking about contained processes then there can't be an agent that spins up to set the meta-data
17:29:22 hallyn_: we're talking about something that would need to be like cloud-init in a container, which means your application would be pid N not pid 1, which is the problem
17:29:25 SpamapS: a single container would have a single process and nothing else
17:29:26 danpb: as I said, containers dont have an init process like systemd or anything where i can pull that env variable metadata
17:29:32 ^ that
17:29:39 kraman: you're talking about the user_data field, not the metadata I guess
17:29:45 ok, metadata is overloaded
17:29:47 damnsmith: the cloud-init thing could go on to exec your init
17:29:48 rustlebee: it involved sending arbitrary docker config in a metadata string
17:29:50 would still be pid 1
17:29:53 kraman: when I say "init" i mean pid==1
17:30:02 damnsmith: OK, yeah, not what i had in mind
17:30:04 please specify the actual interface being discussed. I suspect you mean the process environment?
17:30:04 hallyn_: right, the need for that thing is undesirable
17:30:05 i don't specifically mean sysvinit/systemd
17:30:09 danpb: no requirement to use pid namespace in container
17:30:15 danpb: may not be pid 1
17:30:18 issue with metadata is that in the scope of docker for example, some metadata would be mandatory to start an instance, is that a problem?
17:30:21 danpb: that's doable
17:30:27 i meant more like .... same sort of data we expose in config drive or metadata server, but passed in through the env that the container supports
17:30:29 kraman: well ok, "the first pid"
17:30:30 damnsmith: ok, no opinion on that
17:30:38 having a process that initializes the program with metadata like cloud-init works
17:30:48 rustlebee: +1
17:30:56 rustlebee: we need a structured api for that though right?
17:31:02 danpb: also, containers may be built/provided by other communities and not specifically built for openstack. want to allow that usecase
17:31:10 kraman: absolutely
17:31:23 docker and other container frameworks allow setting ENV outside the image or init process
17:31:26 do we? we have config drive / metadata server today ... i'm only talking about exposing that same info
17:31:32 kraman: that's why we shouldn't try to force in openstack specific agents/processes
17:31:39 would like not to lose that functionality
17:31:51 we don't have to lose it - we just have to make sure it's in the right place
17:31:59 jhopper: +1
17:32:03 but where is that
17:32:09 rustlebee: then nova doesn't support environment variables, and it's a contract between the user and the docker driver, bypassing all of nova in between
17:32:18 in the api I suggested, i add it as part of the create and start apis
17:32:19 the xenapi guest agent updates metadata changes into xenstore, so the callback is already there for metadata changes
17:32:45 damnsmith: yeah, it sounds bad like that
17:33:00 so what about having a kickstart process that is integrated with certain parts of OS such that it can reach and set metadata?
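The "kickstart process" idea just floated can be sketched as follows: a launcher runs as the container's first pid, fetches the environment, then replaces itself with the user's real workload via exec. This is shown only to illustrate the mechanics of the proposal, which is contested later in the meeting; `fetch_env_from_metadata` is a hypothetical stand-in for a real metadata-service lookup, and the values it returns are invented.

```python
import os

def fetch_env_from_metadata():
    """Hypothetical stand-in for a metadata-service query; a real
    launcher would hit something like the EC2-style metadata endpoint.
    The returned values here are purely illustrative."""
    return {"APP_MODE": "worker", "LISTEN_PORT": "8080"}

def kickstart(user_argv):
    """Pid1-style launcher: pull env, inject it, then exec the user's
    workload so the user process replaces the launcher rather than
    running under it. Never returns on success."""
    env = dict(os.environ)
    env.update(fetch_env_from_metadata())
    os.execvpe(user_argv[0], user_argv, env)

# example (hypothetical) invocation:
# kickstart(["/usr/sbin/apache2", "-DFOREGROUND"])
```

Because `exec` replaces the launcher in place, the user's process still ends up as the container's first pid, which addresses the "your application would be pid N" objection, though not the objection to injecting any openstack-specific binary at all.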
17:33:02 environment is really not the interesting one I would have chosen
17:33:06 you could just pass env variables as standard named metadata properties against the image, or pass them in with the boot api call
17:33:07 I wanted to choose process listing :)
17:33:15 pid1 kickstart -> container config -> launch pid2
17:33:33 damnsmith: process listing support is not critical IMHO
17:33:37 jhopper: pid1 may be apache which cant pull that config
17:33:38 it's not really an agent so much as a launcher
17:33:38 danpb: so we expose all metadata as envars? sounds dangerous. if not all, then we've got a structured API
17:33:43 nono
17:33:45 we control pid1
17:33:50 how?
17:33:52 samalba: okay
17:34:02 so what's important? :)
17:34:11 samalba: what about stdio?
17:34:13 because I can with containers by simply launching pid1 as my init and then having pid1 launch a process tree under new cgroups (i.e. container)
17:34:21 env is important, volumes are important
17:34:29 both can be passed in metadata
17:34:30 define volumes?
17:34:33 rustlebee: the ability to specify a binary to launch, and provide it cli args + env + stdio IMHO is the key first step
17:34:44 rustlebee: it's a mount-bind in docker
17:34:59 expose a dir from the host to the container
17:35:36 that worries me in the security sense - which is something we haven't touched on yet
17:35:50 jhopper: but it's a common thing people want to do with containers
17:36:02 I well understand but then what is the 'host' in this case?
17:36:05 If the create container operation does not actually start the container, then you could allow setting items like environment variables after creation or stopping it prior to a start command that operates consistently on any not-currently-started container
17:36:10 do we manage the host like a HV?
17:36:16 seems like that could be done using a cinder abstraction
17:36:39 rustlebee: cinder to manage environment?
17:36:40 * johnthetubaguy1 is told he has to run away, needs to be a taxi
17:36:41 yes, containers (at least for docker) are stateless and changes on local fs can be trashed at every run, for databases the good workflow is to use volumes (to store data outside the container), so it's super important
17:36:41 jhopper: well i don't think its any worse than exposing block devices from a host to the guest which we already do with nova + cinder
17:36:44 no, the volumes part
17:36:48 ah
17:36:55 danpb: fair point
17:37:00 jhopper: its just something you have to deal with carefully as you would any other item
17:37:05 can we finish off the environment discussion first please :)
17:37:48 jhopper: are you thinking that there is a container service which sets env and then kicks off the driver?
17:37:58 jhopper: so pid1 would be the container service?
17:38:13 honestly using metadata for the docker driver would improve a lot of things. It would just make metadata super important (possibly mandatory) for the user. If it's not a problem, maybe we have a first starting point to improve the support...
17:38:32 true
17:38:42 kraman: really the data could be stored anywhere - what matters is that we control the entire process of launching processes. This means that if I have a process acting as pid1 that reaches out to a service for meta-data and env variables, it can set them and then launch the 'container' i.e. the process I want to launch as pid2
17:38:43 but we would need to increase the metadata size
17:38:46 255 is not enough
17:38:51 i think jhopper's idea was a cloud-init-like thing that runs as pid1 that talks to the metadata server as needed before moving on to what the user really wanted to run
17:38:52 and define a protocol
17:39:15 kraman: trivial enough to do
17:39:16 rustlebee: jhopper ah, but that does only 1 container
17:39:32 right, it would be the way every container launches
17:39:32 kraman: yes but each container would start with cloud-init
17:39:33 jhopper: having openstack inject its own process/agent into the container startup is a non-starter IMHO
17:39:33 containers can be scheduled/started/stopped/deleted just like VMs
17:39:44 and we can pack 1000s of them on one host
17:39:46 danpb: it's not an agent
17:39:48 danpb: agree
17:39:55 you need to be able to work with whatever architecture / setup the container technology already has
17:39:56 danpb: think of it more like open-rc or systemd
17:40:02 OK
17:40:02 danpb: it boots, configs and gets out of the way
17:40:04 you can't force them to change the way their container architecture works
17:40:11 jhopper: that's still not going to fly
17:40:29 I agree with danpb. Define an interface, not a program.
17:40:42 jhopper: its like saying all VMs have to use an openstack specific bootloader which in turn loads grub
17:40:50 heh
17:40:50 Before there was cloud-init, there were bash scripts and curl.
17:41:00 danpb: ah, that's fair
17:41:08 OK, so back to how we can feed metadata in via the defined interfaces ..
17:41:16 openstack has to integrate with whatever architecture already exists for the container/virt technology, not the other way around
17:41:50 rustlebee: right, so for network namespaced containers: ec2 metadata service, done. For not network namespaced containers: are we doing that?
17:41:57 * rustlebee tries to think of how an API extension specifically for the environment would look ... and if that's OK or terrible
17:42:36 SpamapS: network-less containers? is it useful?
17:42:45 it would be doable of course, and it'd likely require some changes to metadata, quotas, etc
17:42:54 i think we support network-less VMs fwiw
17:42:55 and it would only apply to container servers, not any others
17:43:11 samalba: if you have access to stdin/out/err then sure, it could be used for processing
17:43:17 samalba: I would have plenty of use for networkless containers ^
17:43:23 samalba: it is but only for a few corner cases.
17:43:25 but if it's generic enough to apply to *all* container services, that could be acceptable
17:43:33 s/services/technologies/
17:43:34 to be clear
17:43:35 SpamapS: cant force user container to query ec2 metadata service. containers like docker set these env before starting the user container
17:43:37 I see, not me but ok it makes sense :-)
17:43:57 I'm curious
17:44:00 kraman: force is a strong term. Offering it to the container users is enough isn't it?
17:44:01 for network-less containers you could do a config-drive like approach adding a mount or block device to the image
17:44:03 docker can skip networking for containers anyway, it's supported
17:44:08 danpb: indeed
17:44:21 SpamapS: not really. lots of images out there already which we would like to use
17:44:28 that expect env to be set before container start
17:44:28 so use them
17:44:30 what if we set the meta-data in the image before the container is booted. is that an option? they're just files on a fs which is far easier to modify than going out and editing a glance image
17:44:45 well, I say image but I really mean fs
17:44:53 i agree that assuming the metadata service solves this is a non-starter
17:44:57 that's just not how people use containers
17:45:12 It seems clear that environment variables need to be part of the Container resource prior to the container starting. If that is the case, then whatever harness actually starts the container can access those environment variables at whatever time is appropriate for any specific container technology.
17:45:35 question is, how would we expose that through nova in a way we find acceptable
17:45:45 so that it's not just a driver-user contract, but a nova feature
17:45:54 O-k, so set a subset of these as process environment variables, with the understanding that those are not updatable?
17:45:54 that we can have some consistency on between different container drivers
17:46:36 If you implicitly start the container in the POST operation that creates it, then you have to specify the environment variables in the body of the post operation. Does the create operation implicitly start the container?
17:46:41 SpamapS: not updatable after start. but can be updated on next stop/start cycle
17:47:01 ok, so that seems reasonable
17:47:11 should not require destroy/re-create
17:47:40 if you see the API doc, the only place to update env, bind-mount etc is at start and create times
17:49:02 are metadata out of scope for supporting env after all? (did I miss anything?)
17:49:24 sharwell: to your point, yes the POST starts the container, but you can set the metadata in the request body
17:49:44 samalba: it sounds like a special extension to standardize the format and abuse metadata as the transport mechanism is the most popular option
17:50:04 so maybe we should switch to bind mounts ... only have 10 minutes left
17:50:10 The most flexible way to do it is allow an Update operation on the container to set just the environment variables, and in the documentation state that if the container is currently running, the changes might not take effect until it is stopped and restarted. That provides the option in the future for a user to specify a specific container technology that supports setting the environment variables without restarting the container and without requiring changes to the API.
17:50:47 damnsmith: alright, it's just this approach will have to be taken for everything else (not just env), and it's going to be messy at some point :-)
17:50:58 samalba: what is everything else
17:51:06 samalba: I know, I don't like it, I'm saying "most popular" :)
17:51:11 let me show you
17:51:22 rustlebee: http://docs.docker.io/en/latest/api/docker_remote_api_v1.6/#create-a-container
17:51:29 I would need all of them
17:51:37 env is just one property
17:51:45 so we need to address each one individually
17:51:53 hmm
17:52:02 we were only talking about env IMO
17:52:07 perhaps an alternate approach: somebody point at actual concrete use cases for the API as defined in the etherpads?
17:52:12 yeah, a bunch of those are already applicable to full OpSys images too
17:52:31 what about some kind of property dict field where you leave the driver to implement it in the way it wants. Honestly I guess this can differ a lot from one container runtime to another
17:52:46 we can't do stuff like that
17:52:49 samalba: that's a contract between the user and the driver, which I think is uncool
17:52:52 sort of defeats the purpose of having a nova abstraction
17:52:54 I don't think we can streamline everything
17:52:56 samalba imho it is important to standardize stuff wherever possible
17:53:02 +1
17:53:08 (to danpb)
17:53:09 what are metadata used for then?
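samalba's point that "env is just one property" refers to the many fields the linked create-container call accepts. A sketch of what such a request body looks like, using field names in the style of that era's Docker remote API; treat the exact field set and values as illustrative rather than a faithful reproduction of the spec:

```python
import json

def make_create_container_body(image, cmd, env, volumes):
    """Build a create-container request body in the style of the Docker
    remote API linked above (v1.6-era field names; illustrative only).

    env is a list of "KEY=value" strings, and volumes maps container
    paths to empty dicts; in that API, host bind mounts were supplied
    separately at start time.
    """
    return {
        "Image": image,
        "Cmd": cmd,
        "Env": env,
        "Volumes": {path: {} for path in volumes},
    }

body = make_create_container_body(
    image="ubuntu:12.04",
    cmd=["/usr/sbin/apache2", "-DFOREGROUND"],
    env=["APP_MODE=worker"],
    volumes=["/var/lib/data"],
)
print(json.dumps(body, indent=2))
```

Each of these properties would need an equivalent in whatever abstraction Nova exposed, which is exactly the "address each one individually" concern raised above.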
17:53:18 we already have a mess with some existing virt driver apis eg get_diagnostics
17:53:24 metadata is for the user
17:53:36 user -> user's instance
17:53:39 not user -> driver
17:53:48 env is an end-user thing as well
17:53:50 right
17:53:58 so we're saying, define some env abstraction in the nova API
17:54:00 hence the use of metadata
17:54:00 i view these kind of things as similar to the way we can tag glance images with things like hw_disk_bus=scsi to say how the vm should boot
17:54:06 that lets the user get info down to their container env
17:54:25 i can certainly start on some usecases on the etherpad explaining each of my api calls. would that be helpful?
17:54:28 there's a bunch of customizations we deal with using glance image properties, which are also desirable to set per-instance via the create API
17:55:05 eg we want to add the ability to pass kernel command line args per instance at create time
17:55:27 that would be nifty
17:55:40 (five minutes)
17:55:53 lets take action items for next meeting
17:56:00 I will work on use cases
17:56:01 a lot of the items in samalba's link above would need similar kinds of handling
17:56:15 can someone help me with those?
17:56:32 kraman: I think it would help to find the minimum actual required API if the use cases were specified
17:56:33 i need a nova person i can bounce them off so i know it makes sense
17:57:10 well, first action item would be to discard this new service discussion I guess?
17:57:11 like, define 5 things you want to do well. Pick the simplest one and use that as first implementation goal, and the most complex one and use that to guide the design so it doesn't close any doors.
17:57:15 kraman: in #openstack-nova, lots of folks can help you, myself included
17:57:27 k
17:57:58 would also like a person from each container virt technology to be involved
17:58:13 hang out in the channel :)
17:58:16 There's so much "people want" and "users might" ... need some more "users absolutely will..."
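danpb's suggestion above is a defaults-plus-override pattern: glance image properties (like hw_disk_bus=scsi) supply per-image defaults, and values passed with the instance create call override them. A minimal sketch of that precedence; the property names used are illustrative examples, not a definitive schema:

```python
def effective_properties(image_properties, create_overrides):
    """Merge driver-facing customizations the way the discussion
    suggests: image properties supply defaults, per-instance values
    from the create API win on conflict. Keys are illustrative.
    """
    merged = dict(image_properties)
    merged.update(create_overrides)
    return merged

props = effective_properties(
    {"hw_disk_bus": "scsi", "os_command_line": "quiet"},
    {"os_command_line": "console=ttyS0"},  # per-instance kernel args
)
print(props)
```

Under this pattern, container env variables could ride the same mechanism as any other image-tagged, per-instance-overridable customization, which is the consistency danpb is arguing for.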
17:58:17 I'll be involved for Docker obviously
17:58:19 i only know docker and libvirt-lxc … and those only from a user point of view
17:58:22 and we can meet in here weekly if you want
17:58:32 kraman: ill help out for lxc (if i dont know, i can find someone who does)
17:58:33 also consider #heat for any agent discussions. :)
17:58:50 rustlebee: fine with me
17:58:54 kraman: i can do libvirt-lxc as well
17:59:07 inside the instance is a thing we can play with a little more (not much tho. :)
17:59:12 can you update https://etherpad.openstack.org/p/containers-service-api please
17:59:56 ok 1 minute
18:00:17 any final comments?
18:00:33 next Friday is a US holiday, so if we meet again, i'd say in 2 weeks
18:00:35 just want to thank everyone, it was helpful
18:00:46 we can follow up on the list about that
18:00:50 ok
18:00:58 ok
18:01:00 and otherwise, #openstack-nova is the place to be :)
18:01:03 thanks everyone!
18:01:05 #endmeeting