17:01:11 <rustlebee> #startmeeting nova
17:01:12 <openstack> Meeting started Fri Nov 22 17:01:11 2013 UTC and is due to finish in 60 minutes.  The chair is rustlebee. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:13 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:15 <openstack> The meeting name has been set to 'nova'
17:01:23 <rustlebee> using the 'nova' topic.  minutes will be with the rest of the nova project meetings
17:01:33 <rustlebee> #chair samalba
17:01:34 <openstack> Current chairs: rustlebee samalba
17:01:36 <rustlebee> #chair kraman
17:01:37 <openstack> Current chairs: kraman rustlebee samalba
17:01:48 <kraman> #link https://etherpad.openstack.org/p/containers-service
17:01:56 <kraman> #link https://etherpad.openstack.org/p/containers-service-api
17:02:01 <rustlebee> #topic Possible new containers service
17:02:27 * danpb present
17:02:49 <rustlebee> kraman: would you guys like to discuss what's written down so far?
17:03:05 <kraman> yes
17:03:06 <zul> sure
17:03:08 <SpamapS> o/
17:03:12 <danspraggins> o/
17:03:16 <kraman> but before we start would like to see who is on the channel today
17:03:21 <damnsmith> <- here
17:03:24 <jk0> o/
17:03:25 <jhopper> o/
17:03:28 <hallyn_> o/
17:03:28 <s1rp> o/
17:03:31 <kraman> Please indicate if you know one of the container technologies as well
17:03:34 <zul> o/
17:03:38 <samalba_> Yes, especially discuss the different plans proposed (for supporting containers): new service, new api, etc...
17:03:48 <samalba_> I know Docker
17:03:49 <samalba_> :-)
17:04:04 <jhopper> o/ <- LXC, Docker
17:04:09 <danspraggins> i've heard of linux.
17:04:11 <zul> <-- LXC
17:04:12 <rustlebee> i know things
17:04:15 <hallyn_> <- lxc
17:04:28 <hallyn_> <- the things i've seen...
17:04:29 <sharwell> I'm here as well (first time)
17:04:38 <kraman> During the design session, we went over a few different options for how to proceed: 1) a new service 2) merge into nova 3) some sort of hybrid
17:04:49 <coolsvap> here
17:05:00 * danpb libvirt / libvirt-sandbox / docker
17:05:11 <kraman> There was no consensus, so we decided to try to come up with an API which can support containers and then go back to the Nova community and decide how to proceed
17:05:14 * SpamapS knows nothing
17:05:35 <zul> SpamapS:  yes we know ;)
17:06:02 <kraman> Thanks for the introductions :) Has everyone had a chance to go over the 2 etherpads?
17:06:19 <zul> yeah I think a new service might be total overkill at this point
17:06:25 <samalba> I guess everyone followed the thread on the ml?
17:06:31 <damnsmith> I have, and I think the API one is extremely helpful in demonstrating why I think this is a massive increase in scope for nova :)
17:07:12 <kraman> Pad containers-service lines 75-86 contain some of the important use cases we would like to cover, and the api pad goes over the detail
17:07:34 <samalba> damnsmith: what do you think of the point raised by Tim in the last email (about what should happen for integration for the end user)?
17:08:08 <damnsmith> samalba: I think that most users would like the entire API to be a single endpoint :)
17:08:54 <rustlebee> and a new API would only be justified if it's not disrupting existing users
17:09:00 <rustlebee> because it would be for use cases not currently possible
17:09:13 <damnsmith> I think the deployer's impact of a new service is well-understood to be something we should try to avoid when possible, but not at all costs
17:09:32 <damnsmith> I definitely understand why people outside of nova want this to be in nova because it's a unit of compute
17:09:43 <samalba> I guess we have the same problem with adding a new top-level API in Nova itself, right?
17:09:45 <danpb> to me the api description in etherpad doesn't seem very large
17:09:46 <kraman> We can decide if we need a new API or not after we complete the draft. At this point, we just need to identify the deltas
17:09:53 <rustlebee> danpb: yeah..
17:09:58 <danpb> and several of the "unique" features listed there could easily apply to existing nova code
17:10:13 <danpb> eg the ability to get a list of processes is valuable even for full OS virt
17:10:28 <kraman> Can everyone familiar with one of the container techs please update the capabilities section on containers-service-api
17:10:38 <damnsmith> danpb: but it's squarely outside the scope of nova right now
17:10:54 <rustlebee> damnsmith: but if it's useful for full OS virt, where would it go in openstack?
17:11:15 <sharwell> One of the interesting aspects is even if the API is included in Nova, it seems a fairly common case will involve users that *only* access resources related to containers, while other users *only* access resources related to VMs
17:11:17 * rustlebee is undecided on this issue right now to be clear
17:11:22 <danpb> none of the tuning container features listed are at all specific to containers either
17:11:24 <damnsmith> rustlebee: I've said a couple times that I think it'd be nice if this new service was not containers specific, but a thing that provides "OS services"
17:11:29 <samalba> rustlebee: same here...
17:11:52 <danpb> damnsmith: yes, that would be a more useful distinction for some of the apis listed
17:11:54 <jhopper> there are multiple kinds of containers and I think that needs to be discussed - if we're placing something on hardware (any kind of container) then there is a common pattern of usage which Nova already covers
17:11:57 <kraman> damnsmith: what's an OS service?
17:12:01 <damnsmith> I think that a well-defined "openstack agent" could live inside guests and provide this capability to all things, not just containers
17:12:10 <damnsmith> but that is a HUGE increase in nova's scope to contain that agent as well
17:12:19 <danpb> kraman: apis which get information about stuff inside an instance
17:12:31 <danpb> kraman: as opposed to what we currently do which is mostly about stuff outside an instance
17:12:36 <damnsmith> kraman: Operating System service is what I meant
17:12:37 <jhopper> containers can and should act like OS instances and I believe nova should handle this - scheduling the creation of containers within these hosted instances could be the realm of a different api/service
17:12:41 <rustlebee> that's interesting ...
17:12:48 <rustlebee> so it actually wouldn't be involved at all with placement or anything?
17:12:51 <kraman> danpb: that's a pretty big leap from just managing containers
17:13:11 <damnsmith> openstack is going to need in-guest management at some point
17:13:22 <damnsmith> xen already has it in a totally incompatible-with-everything way
17:13:32 <kraman> jhopper: containers are not the same as OS instances. they can be as simple as a single process with an ENV and kernel namespaces
17:13:50 <kraman> jhopper: we lose a lot of the power of containers when forcing them to behave like a VM
17:13:50 <jhopper> kraman: how are they not the same as OS instances?
17:13:59 <johnthetubaguy1> damnsmith: ack, the two-way metadata service means it's portable cross-hypervisor, but that feels wrong
17:14:09 <damnsmith> jhopper: containers are not the same as VMs, sometimes people conflate them, but they are much more than just a thin VM
17:14:11 <jhopper> kraman: containers operate on process trees - containers within containers or VMs lose nothing
17:14:14 <danpb> kraman: not really, it is about not uneccessarily constraining apis to one specific technology
17:14:23 <jhopper> damnsmith: I understand they're not the same as VMs
17:14:40 <jhopper> damnsmith: however they share striking similarities with the OS instance that lives on a VM
17:14:48 <jhopper> damnsmith: taken further
17:14:58 <damnsmith> jhopper: oh you mean Operating System, not OpenStack
17:15:00 <jhopper> damnsmith: you could argue a container holding a rootfs on bare metal is an instance
17:15:03 <damnsmith> we need to stop using OS :)
17:15:06 <jhopper> damnsmith: sorry lol
17:15:06 <danpb> kraman: don't equate VMs == full OS installs
17:15:17 <rustlebee> an API that talks to an in-guest instance seems like it would have to be pretty tightly integrated with Nova
17:15:23 <rustlebee> it's hard for me to think about how that could be out of nova
17:15:28 <jhopper> rustlebee: agreed
17:15:32 <danpb> there are containers & VMs, and either of them can run full OS or  individual services/ processes
17:15:34 <rustlebee> a separate API, sure
17:15:45 <kraman> rustlebee: don't think you have to yet. let's go over the API first
17:15:53 <zul> an openstack agent... isn't that what cloud-init is, kind of?
17:15:59 <damnsmith> rustlebee: I don't know why you say that
17:16:05 <SpamapS> perhaps for the purpose of this discussion we can say OS == OpenStack, OpSys = Operating System ?
17:16:23 <SpamapS> zul: cloud-init is not an agent
17:16:25 <rustlebee> damnsmith: could be lots of reasons, like i'm just dumb, or it's friday, or
17:16:31 <SpamapS> it is a bootstrap tool
17:16:41 <damnsmith> rustlebee: ah, the inarguable friday excuse.. okay :)
17:16:59 <damnsmith> right, so things that people want to do that would be agenty:
17:17:05 <damnsmith> - two way console communication
17:17:08 <damnsmith> - live process lists
17:17:11 <damnsmith> - live process controls
17:17:17 <damnsmith> - Filesystem manipulation at runtime, etc
17:17:30 <damnsmith> those don't need nova's involvement and definitely don't fit with cloud-init I think
17:17:31 <johnthetubaguy1> - password reset without reboot
17:17:37 <damnsmith> johnthetubaguy1: +1000
17:17:38 <kraman> damnsmith: sounds like mcollective
17:17:39 <danpb> i don't think we should specifically talk about agents - they're a hypervisor specific implementation detail
17:17:51 <jhopper> I don't know - it seems like nova would benefit greatly from that feature set
17:17:52 <damnsmith> danpb: they don't need to be
17:17:55 <jhopper> if it isn't already in there
17:18:02 <rustlebee> OpenStack would benefit from it, yes
17:18:16 <rustlebee> to benefit, it doesn't have to be Nova
17:18:25 <SpamapS> sounds like "none of openstack's business" IMO. Inside a container is like inside a VM... we're like vampires, the user has to invite us in..
17:18:26 <rustlebee> hence discussion :)
17:18:35 <johnthetubaguy1> and its probably in the compute program, just may not Nova...
17:18:47 <rustlebee> agreed that this is all in scope for the compute program
17:19:03 <jhopper> well sure but realm of responsibility should be taken into account - maybe not nova in particular but I could see the agent working in the scope of many compute components
17:19:38 <johnthetubaguy1> the question is probably how deep are the hooks in nova, keep the API separate to start with, integrate if it makes sense later, else we get nova-volume all over again?
17:19:45 <damnsmith> also, unfscking network without reboot is another agenty thing
17:20:08 <damnsmith> online filesystem resize is another
17:20:12 <jhopper> johnthetubaguy1: ah
17:20:14 <johnthetubaguy1> damnsmith: it's like you're listing what the xenapi agent does, but yes
17:20:20 <damnsmith> heh
17:20:39 <samalba> would it be useful to have some process managements capabilities for VMs as well as some point? Hence extending the existing API?
17:20:40 <danpb> damnsmith: with container-based virt you can do a lot of this without needing an agent, so it is desirable not to force an agent into our architecture unnecessarily - that's why i said it was a virt-driver-specific detail
17:20:43 <damnsmith> I know only what xen's agent did in 2006 :D
17:21:10 <damnsmith> danpb: well, I'm saying that for containers, the agent kinda deflates to nothing, and for VMs we provide the agent to do these things
17:21:16 <johnthetubaguy1> damnsmith: ah, this is a new one, but anyways, let's leave that aside
17:21:19 <damnsmith> danpb: but the api to do them from the outside becomes the same for a vm and a container
17:21:33 <rustlebee> so it seems we're morphing from a container proposal to a guest management proposal
17:21:36 <danpb> damnsmith: yep, that's what i'd expect
17:21:43 <rustlebee> is that right?
17:22:06 <danpb> rustlebee: i'd say a bit of both really
17:22:26 <kraman> isn't guest management too broad a mandate for nova?
17:22:31 <SpamapS> wow this all sounds.. a million miles outside of nova's scope. reaching into containers and vms? really?
17:22:33 <danpb> rustlebee: i think there's clearly container-specific stuff we'll need to do wrt booting instances, but a lot of the ongoing mgmt apis are general guest management
17:22:42 <s1rp> a guest management proposal that also supports the features we need w/ containers seems like biting off a bit more than we can probably chew
17:22:48 <johnthetubaguy1> going back to containers, can we just agree about creating them, and if that fits the nova "server" abstraction, I think the answer is yes, but I keep changing my mind?
17:22:50 <jhopper> guest management sounds like something better implemented with heat and other tools
17:23:04 <kraman> and guest management would also not give the benefit of having a smaller service which just manages containers
17:23:06 <zul> it does...
17:23:14 <damnsmith> johnthetubaguy1: the thing is, containers aren't super useful if treated like VMs
17:23:22 <jhopper> johnthetubaguy1: I think OpSys container instances absolutely fit the Nova model and should be integrated
17:23:33 <nelsnelson> It seems like an additional plugin service _might_ be useful/prudent for some of the finer-grained control aspects of a containers and/or OS service control and guest management.  But I think that the primary aspects of container instance mgmt should be included within the existing Nova set.
17:23:38 <danspraggins> +1 to jhopper
17:23:42 <sharwell> Am I correct in assuming that the reason the management items are necessary for containers is there isn't a way to do it within the container itself, e.g. with instances you can SSH and do all these things directly within the VM, but for a container that isn't possible?
17:23:47 <johnthetubaguy1> damnsmith: maybe, but if you have containers, plus fork, is it just orchestration bits for everything else?
17:23:50 <jhopper> damnsmith: I disagree. Containers on bare metal would let you do a great deal and continue to use containers within that container without incurring the cost of the VM
17:23:56 <danpb> sharwell: no, that's not accurate
17:24:10 <danpb> sharwell: it entirely depends on whether you configure your container to run ssh or not
17:24:16 <SpamapS> jhopper: that is interesting. At that point, what is different about a container that starts by executing /bin/init vs. one that executes /usr/bin/apache2 ?
17:24:24 <damnsmith> jhopper: again, you're thinking of containers as VMs
17:24:26 <jhopper> SpamapS: there really isn't
17:24:35 <damnsmith> jhopper: the PaaS people want process level control to make them useful
17:24:40 <jhopper> damnsmith: I know but containers have to go somewhere and that's what nova does - manage instances very well
17:24:47 <jhopper> damnsmith: I'm proposing that that be nova's realm
17:25:01 <jhopper> damnsmith: and let the other orchestrations fall where they may - other service or orchestration tools
17:25:09 <sharwell> danpb: so there are *some* containers for which what I said is true, but not for all containers? and since you want to support containers in general it makes the management API commands necessary?
17:25:32 <danpb> sharwell: and similarly the same is true for OpSys
17:25:41 <damnsmith> jhopper: yeah, and that makes nova suddenly concerned with what is inside the instance, which is crossing a defined line we have today
17:25:44 <kraman> jhopper: if we can settle on the deltas between nova and container APIs we will have more information to decide if it should or should not be part of nova
17:26:00 <rustlebee> kraman: want to go through each API feature you have so far?
17:26:09 <damnsmith> jhopper: because if we do it for containers, we might as well do it for vms as well, and then..boom
17:26:10 <kraman> yes please :)
17:26:23 <damnsmith> can we start with the interesting ones?
17:26:37 <kraman> damnsmith: sure
17:27:05 <kraman> Start with "setting environment"
17:27:18 <johnthetubaguy1> that's like metadata injection right?
17:27:21 <kraman> in a VM you can use cloud-init after the VM starts to pull this
17:27:22 <rustlebee> that seems logically similar to metadata, yes
17:27:31 <kraman> but in a container this needs to be set before the container starts
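The constraint kraman describes here is the ordinary parent/child process rule: whoever spawns the process fixes its environment, and there is no way to hand env vars in afterwards. A minimal Python sketch of that rule (`DB_HOST` is a made-up variable):

```python
import os
import subprocess

# The launcher (here: us; in a container, the runtime) must decide the
# environment *before* the child starts; it cannot be injected later.
env = dict(os.environ, DB_HOST="10.0.0.5")  # hypothetical variable

result = subprocess.run(
    ["sh", "-c", "echo $DB_HOST"],
    env=env,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # -> 10.0.0.5
```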
17:27:43 <rustlebee> that seems doable in Nova today without any fundamental changes IMO
17:27:49 <damnsmith> so if you can't do this before it starts, then you need an init process in the container,
17:27:51 <kraman> rustlebee: how so?
17:27:54 <damnsmith> which is what container folks don't want to do
17:28:04 <kraman> damnsmith: there isn't an init process
17:28:05 <hallyn_> eh what?
17:28:08 <danpb> i think the key point here is that you have a different boot setup for the container
17:28:16 <kraman> there is only one process which is what is running in the container
17:28:17 <rustlebee> you just ... do it?  in your driver
17:28:22 <damnsmith> kraman: right
17:28:35 <kraman> rustlebee: tried that with docker driver. changeset was rejected
17:28:35 <damnsmith> rustlebee: we told sam no to that, remember?
17:28:35 <SpamapS> I don't understand the assertion that metadata must be set before a container starts.
17:28:39 <johnthetubaguy1> yep, this is a driver thing, assuming LXC+libvirt gets pulled into its own driver, etc
17:28:41 <danpb> there is a process declared as the "init"  (it can be any binary) and it has args + environ variables + console connections
17:28:43 <rustlebee> i do not remember that, heh
17:28:52 <hallyn_> evenif you're talking about an application container, there is an 'init' in that it is the reaper.
17:28:54 <johnthetubaguy1> anyways, whats the next one after metadata?
17:28:54 <rustlebee> did i say no, too?
17:29:14 <jhopper> SpamapS: if we're talking about contained processes then there can't be an agent that spins up to set the meta-data
17:29:22 <damnsmith> hallyn_: we're talking about something that would need to be like cloud-init in a container, which means your application would be pid N not pid 1, which is the problem
17:29:25 <jhopper> SpamapS: a single container would have a single process and nothing else
17:29:26 <kraman> danpb: as I said, containers don't have an init process like systemd or anything where I can pull that env-variable metadata
17:29:32 <jhopper> ^ that
17:29:39 <samalba> kraman: you're talking about user_data field, not the metadata I guess
17:29:45 <SpamapS> ok, metadata is overloaded
17:29:47 <hallyn_> damnsmith: the cloud-init thing could go on to exec your init
17:29:48 <damnsmith> rustlebee: it involved sending arbitrary docker config in a metadata string
17:29:50 <hallyn_> would still be pid 1
17:29:53 <danpb> kraman: when I say  "init" i mean pid==1
17:30:02 <rustlebee> damnsmith: OK, yeah, not what i had in mind
17:30:04 <SpamapS> please specify the actual interface being discussed. I suspect you mean the process environment?
17:30:04 <damnsmith> hallyn_: right, the need for that thing is undesirable
17:30:05 <danpb> i don't specifically mean sysvinit/systemd
17:30:09 <kraman> danpb: no requirement to use pid namespace in container
17:30:15 <kraman> danpb: may not be pid 1
17:30:18 <samalba> the issue with metadata is that, in the scope of docker for example, some metadata would be mandatory to start an instance; is that a problem?
17:30:21 <jhopper> danpb: that's doable
17:30:27 <rustlebee> i meant more like .... same sort of data we expose in config drive or metadata server, but passed in through the env that the container supports
17:30:29 <danpb> kraman: well ok,  "the first pid"
17:30:30 <hallyn_> damnsmith: ok, no opinion on that
17:30:38 <jhopper> having a process that initializes the program with metadata like cloud-init works
17:30:48 <johnthetubaguy1> rustlebee: +1
17:30:56 <damnsmith> rustlebee: we need a structured api for that though right?
17:31:02 <kraman> danpb: also, containers may be built/provided by other communities and not specifically built for openstack. want to allow that use case
17:31:10 <danpb> kraman: absolutely
17:31:23 <kraman> docker and other container frameworks allow setting ENV outside the image or init process
17:31:26 <rustlebee> do we?  we have config drive / metadata server today ... i'm only talking about exposing that same info
17:31:32 <danpb> kraman: that's why we shouldn't try to force in openstack specific agents/processes
17:31:39 <kraman> would like not to lose that functionality
17:31:51 <jhopper> we don't have to lose it - we just have to make sure it's in the right place
17:31:59 <kraman> jhopper: +1
17:32:03 <kraman> but where is that
17:32:09 <damnsmith> rustlebee: then nova doesn't support environment variables, and it's a contract between the user and the docker driver, bypassing all of nova in between
17:32:18 <kraman> in the api I suggested, i add it as part of the create and start apis
17:32:19 <johnthetubaguy1> the xenapi guest agent updates metadata changes into xenstore, so the callback is already there for metadata changes
17:32:45 <johnthetubaguy1> damnsmith: yeah, it sounds bad like that
17:33:00 <jhopper> so what about having a kickstart process that is integrated with certain parts of OS such that it can reach and set metadata?
17:33:02 <damnsmith> environment is really not the interesting one I would have chosen
17:33:06 <danpb> you could just pass env variables as standard named metadata properties against the image, or pass them in with the boot api call
17:33:07 <damnsmith> I wanted to choose process listing :)
17:33:15 <jhopper> pid1 kickstart -> container config -> launch pid2
17:33:33 <samalba> damnsmith: process listing support is not critical IMHO
17:33:37 <kraman> jhopper: pid1 may be apache which can't pull that config
17:33:38 <jhopper> it's not really an agent so much as a launcher
17:33:38 <damnsmith> danpb: so we expose all metadata as envars? sounds dangerous. if not all, then we've got a structured API
17:33:43 <jhopper> nono
17:33:45 <jhopper> we control pid1
17:33:50 <kraman> how?
17:33:52 <damnsmith> samalba: okay
17:34:02 <rustlebee> so what's important?  :)
17:34:11 <damnsmith> samalba: what about stdio?
17:34:13 <jhopper> because I can with containers by simply launching pid1 as my init and then having pid1 launch a process tree under new cgroups (i.e. container)
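jhopper's launcher idea amounts to a tiny pid 1 that applies the environment and then execs the user's real process in place, so the workload still ends up as the first pid and no agent lingers. A hedged sketch of that proposal (all names hypothetical; note danpb argues a few lines later that injecting any openstack-owned process is a non-starter):

```python
import os

def kickstart(env_from_metadata, argv):
    """Hypothetical pid-1 launcher: merge in env obtained from the
    metadata service, then replace ourselves with the real workload."""
    env = dict(os.environ)
    env.update(env_from_metadata)
    os.execvpe(argv[0], argv, env)  # does not return on success
```

Because exec replaces the calling process, the workload inherits the launcher's pid; the launcher "boots, configs and gets out of the way", as jhopper puts it.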
17:34:21 <samalba> env is important, volumes are important
17:34:29 <samalba> both can be passed in metadata
17:34:30 <rustlebee> define volumes?
17:34:33 <danpb> rustlebee: the ability to specify a binary to launch, and provide it cli args + env  + stdio   IMHO is the key first step
17:34:44 <samalba> rustlebee: it's a mount-bind in docker
17:34:59 <samalba> expose a dir from the host to the container
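As a concrete reference for what samalba is describing: a docker-style bind mount is declared in two parts, a mount point at create time and a host binding at start time. The dicts below are modeled loosely on the Docker remote API of this era; the field names are illustrative, not authoritative:

```python
# Create time: the container declares a mount point it expects.
create_body = {
    "Image": "postgres",                     # example image
    "Volumes": {"/var/lib/postgresql": {}},  # mount point inside the container
}

# Start time: a host directory is bound onto that mount point, so the
# data lives outside the (otherwise disposable) container filesystem.
start_body = {
    "Binds": ["/srv/pgdata:/var/lib/postgresql"],  # host_dir:container_dir
}
```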
17:35:36 <jhopper> that worries me in the security sense - which is something we haven't touched on yet
17:35:50 <damnsmith> jhopper: but it's a common thing people want to do with containers
17:36:02 <jhopper> I well understand but then what is the 'host' in this case?
17:36:05 <sharwell> If the create container operation does not actually start the container, then you could allow setting items like environment variables after creation or stopping it prior to a start command that operates consistently on any not-currently-started container
17:36:10 <jhopper> do we manage the host like a HV?
17:36:16 <rustlebee> seems like that could be done using a cinder abstraction
17:36:39 <kraman> rustlebee: cinder to manage environment?
17:36:40 * johnthetubaguy1 is told he has to run away, needs to be a taxi
17:36:41 <samalba> yes, containers (at least for docker) are stateless and changes on the local fs can be trashed at every run; for databases the good workflow is to use volumes (to store data outside the container), so it's super important
17:36:41 <danpb> jhopper: well i don't think its any worse than exposing block devices from a host to the guest which we already do with nova + cinder
17:36:44 <rustlebee> no, the volumes part
17:36:48 <kraman> ah
17:36:55 <jhopper> danpb: fair point
17:37:00 <danpb> jhopper: its just something you have to deal with carefully as you would any other item
17:37:05 <kraman> can we finish off the environment discussion first please :)
17:37:48 <kraman> jhopper: are you thinking that there is a container service which sets env and then kicks off the driver?
17:37:58 <kraman> jhopper: so pid1 would be the container service?
17:38:13 <samalba> honestly using metadata for the docker driver would improve a lot of things. It would just make metadata super important (possibly mandatory) for the user. If it's not a problem, maybe we have a first starting point to improve the support...
17:38:32 <kraman> true
17:38:42 <jhopper> kraman: really the data could be stored anywhere - what matters is that we control the entire process of launching processes. This means that if I have a process acting as pid1 that reaches out to a service for meta-data and env variables, it can set them and then launch the 'container' i.e. the process I want to launch as pid2
17:38:43 <kraman> but we would need to increase the metadata size
17:38:46 <kraman> 255 is not enough
17:38:51 <rustlebee> i think jhopper's idea was a cloud-init-like thing that runs as pid1 that talks to the metadata server as needed before moving on to what the user really wanted to run
17:38:52 <damnsmith> and define a protocol
17:39:15 <rustlebee> kraman: trivial enough to do
17:39:16 <kraman> rustlebee: jhopper ah, but that only does 1 container
17:39:32 <rustlebee> right, it would be the way every container launches
17:39:32 <jhopper> kraman: yes but each container would start with cloud-init
17:39:33 <danpb> jhopper: having openstack inject its own process/agent into the container startup is a non-starter IMHO
17:39:33 <kraman> containers can be scheduled/started/stopped/deleted just like VMs
17:39:44 <kraman> and we can pack 1000s of them on one host
17:39:46 <jhopper> danpb: it's not an agent
17:39:48 <damnsmith> danpb: agree
17:39:55 <danpb> you need to be able to work with whatever architecture / setup the container technology already has
17:39:56 <jhopper> danpb: think of it more like open-rc or systemd
17:40:02 <rustlebee> OK
17:40:02 <jhopper> danpb: it boots, configs and gets out of the way
17:40:04 <danpb> you can't force them to change the way their container architecture works
17:40:11 <danpb> jhopper: that's still not going to fly
17:40:29 <SpamapS> I agree with danpb. Define an interface, not a program.
17:40:42 <danpb> jhopper: its like saying all VMs have to use an openstack  specific bootloader which in turns loads grub
17:40:50 <rustlebee> heh
17:40:50 <SpamapS> Before there was cloud-init, there were bash scripts and curl.
17:41:00 <jhopper> danpb: ah, that's fair
17:41:08 <rustlebee> OK, so back to how we can feed metadata in via the defined interfaces ..
17:41:16 <danpb> openstack has to integrate with whatever architecture already exists for the container/virt technology not the other way around
17:41:50 <SpamapS> rustlebee: right, so for network namespaced containers: ec2 metadata service, done. For not network namespaced containers: are we doing that?
17:41:57 * rustlebee tries to think of how an API extension specifically for the environment would look ... and if that's OK or terrible
17:42:36 <samalba> SpamapS: network-less containers? is it useful?
17:42:45 <damnsmith> it would be doable of course, and it'd likely require some changes to metadata, quotas, etc
17:42:54 <rustlebee> i think we support network-less VMs fwiw
17:42:55 <damnsmith> and it would only apply to container servers, not any others
17:43:11 <sharwell> samalba: if you have access to stdin/out/err then sure, it could be used for processing
17:43:17 <jhopper> samalba: I would have plenty of use for networkless containers ^
17:43:23 <SpamapS> samalba: it is but only for a few corner cases.
17:43:25 <rustlebee> but if it's generic enough to apply to *all* container services, that could be acceptable
17:43:33 <rustlebee> s/services/technologies/
17:43:34 <rustlebee> to be clear
17:43:35 <kraman> SpamapS: can't force the user container to query the ec2 metadata service. container runtimes like docker set these env vars before starting the user container
17:43:37 <samalba> I see, not me but ok it makes sense :-)
17:43:57 <jhopper> I'm curious
17:44:00 <SpamapS> kraman: force is a strong term. Offering it to the container users is enough isn't it?
17:44:01 <danpb> for network-less containers you could do a config-drive-like approach, adding a mount or block device to the image
17:44:03 <samalba> docker can skip networking for containers anyway, it's supported
17:44:08 <rustlebee> danpb: indeed
17:44:21 <kraman> SpamapS: not really. lots of images out there already which we would like to use
17:44:28 <kraman> that expect env to be set before container start
17:44:28 <SpamapS> so use them
17:44:30 <jhopper> what if we set the meta-data in the image before the container is booted. is that an option? they're just files on a fs which is far easier to modify than going out and editing a glance image
17:44:45 <jhopper> well, I say image but I really mean fs
17:44:53 <rustlebee> i agree that assuming metadata service solves this is a non-starter
17:44:57 <rustlebee> that's just not how people use containers
17:45:12 <sharwell> It seems clear that environment variables need to be part of the Container resource prior to the container starting. If that is the case, then whatever harness actually starts the container can access those environment variables at whatever time is appropriate for any specific container technology.
17:45:35 <rustlebee> question is, how would we expose that through nova in a way we find acceptable
17:45:45 <rustlebee> so that it's not just a driver-user contract, but a nova feature
17:45:54 <SpamapS> O-k, so set a subset of these as process environment variables, which the understanding that those are not updatable?
17:45:54 <rustlebee> that we can have some consistency on between different container drivers
17:46:11 <SpamapS> w/which/with/
17:46:36 <sharwell> If you implicitly start the container in the POST operation that creates it, then you have to specify the environment variables in the body of the post operation. Does the create operation implicitly start the container?
17:46:41 <kraman> SpamapS: not updatable after start, but can be updated on the next stop/start cycle
17:47:01 <SpamapS> ok so see that seems reasonable
17:47:11 <kraman> should not require a destroy/re-create
17:47:40 <kraman> if you see the API doc, the only place to update env, bind-mounts etc is at start and create times
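The lifecycle rule being converged on here - env and bind mounts settable at create and start, frozen while running, changeable again on the next stop/start without destroying the container - can be sketched as a small resource model (all names hypothetical):

```python
class Container:
    """Hypothetical container resource: config mutable only while stopped."""

    def __init__(self, image, command, env=None):
        self.image = image
        self.command = list(command)
        self.env = dict(env or {})
        self.running = False

    def start(self, env=None):
        if env:
            self.env.update(env)  # last chance to adjust the environment
        self.running = True

    def set_env(self, env):
        if self.running:
            raise RuntimeError("stop the container before changing env")
        self.env.update(env)

    def stop(self):
        self.running = False
```

This also matches sharwell's suggestion later in the meeting: an update operation can always be accepted, with the documented caveat that it may only take effect on the next start.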
17:49:02 <samalba> are metadata out of the scope for supporting env after all? (did I miss anything?)
17:49:24 <s1rp> sharwell: to your point, yes the POST starts the container, but you can set the metadata in the request body
17:49:44 <damnsmith> samalba: it sounds like a special extension to standardize the format and abuse metadata as the transport mechanism is the most popular option
17:50:04 <rustlebee> so maybe we should switch to bind mounts ... only have 10 minutes left
17:50:10 <sharwell> The most flexible way to do it is allow an Update operation on the container to set just the environment variables, and in the documentation state that if the container is currently running, the changes might not take effect until it is stopped and restarted. That provides the option in the future for a user to specify a specific container technology that supports setting the environment variables without restarting the container and without req
17:50:35 <jhopper> cutoff at: and without re*
17:50:47 <samalba> damnsmith: alright, it's just this approach will have to be taken for everything else (not just env), and it's going to be messy at some point :-)
17:50:47 <sharwell> requiring changes to the API.
17:50:58 <rustlebee> samalba: what is everything else
17:51:06 <damnsmith> samalba: I know, I don't like it, I'm saying "most popular" :)
17:51:11 <samalba> let me show you
17:51:22 <samalba> rustlebee: http://docs.docker.io/en/latest/api/docker_remote_api_v1.6/#create-a-container
17:51:29 <samalba> I would need all of them
17:51:37 <samalba> env is just one property
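[To illustrate samalba's point, a few of the per-container properties the Docker remote API accepts in a create-container request, per the v1.6 docs linked above, are shown below: env is just one of many fields that would each need a Nova-level abstraction. The field list is abbreviated and the values are examples only.]

```python
# Abbreviated example of a Docker remote API (v1.6) create-container
# body -- Env is only one of the properties samalba says he'd need.
import json

create_body = {
    "Image": "ubuntu",
    "Cmd": ["/bin/sh", "-c", "echo hello"],
    "Env": ["FOO=bar"],
    "Hostname": "web01",
    "Memory": 256 * 1024 * 1024,   # bytes
    "Volumes": {"/data": {}},
    "WorkingDir": "/srv",
}
payload = json.dumps(create_body)
```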
17:51:45 <rustlebee> so we need to address each one individually
17:51:53 <samalba> hmm
17:52:02 <rustlebee> we were only talking about env IMO
17:52:07 <SpamapS> perhaps an alternate approach: somebody point at actual concrete use cases for the API as defined in the etherpads?
17:52:12 <danpb> yeah, a bunch of those are already applicable to full OpSys images too
17:52:31 <samalba> what about some kind of property dict field where you let the driver implement it the way it wants? Honestly I guess this can differ a lot from one container runtime to another
17:52:46 <rustlebee> we can't do stuff like that
17:52:49 <damnsmith> samalba: that's a contract between the user and the driver, which I think is uncool
17:52:52 <rustlebee> sort of defeats the purpose of having a nova abstraction
17:52:54 <samalba> I don't think we can streamline everything
17:52:56 <danpb> samalba: imho it is important to standardize stuff wherever possible
17:53:02 <rustlebee> +1
17:53:08 <rustlebee> (to danpb)
17:53:09 <samalba> what are metadata used for then?
17:53:18 <danpb> we already have a mess with some existing virt driver apis, eg get_diagnostics
17:53:24 <damnsmith> metadata is for the user
17:53:36 <damnsmith> user -> user's instance
17:53:39 <damnsmith> not user -> driver
17:53:48 <samalba> env is an end-user thing as well
17:53:50 <rustlebee> right
17:53:58 <rustlebee> so we're saying, define some env abstraction in the nova API
17:54:00 <jhopper> hence the use of metadata
17:54:00 <danpb> i view these kind of things as similar to the way we can tag glance images with things like hw_disk_bus=scsi to say how the vm should boot
17:54:06 <rustlebee> that lets the user get info down to their container env
17:54:25 <kraman> i can certainly start on some use cases on the etherpad explaining each of my api calls. would that be helpful?
17:54:28 <danpb> there's a bunch of customizations we deal with using glance image properties, which are also desirable to set per-instance via the create API
17:55:05 <danpb> eg we want to add the ability to pass  kernel command line args  per instance at create time
17:55:27 <jhopper> that would be nifty
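[danpb's suggestion of carrying customizations as glance image properties that can also be overridden per instance at create time can be sketched as a simple merge. The merge order (per-instance values win over image properties) is an assumption here, not settled design; `os_command_line` is used as an example key for the kernel command line case.]

```python
# Sketch of glance image properties (e.g. hw_disk_bus=scsi) combined
# with per-instance overrides supplied at create time. Assumption:
# per-instance values take precedence over image-level defaults.

def effective_properties(image_props, instance_overrides):
    merged = dict(image_props)
    merged.update(instance_overrides)   # per-instance values win
    return merged

image_props = {"hw_disk_bus": "scsi"}
overrides = {"os_command_line": "console=ttyS0"}  # example key
props = effective_properties(image_props, overrides)
```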
17:55:40 <damnsmith> (five minutes)
17:55:53 <kraman> lets take action items for next meeting
17:56:00 <kraman> I will work on use cases
17:56:01 <danpb> a lot of the items in samalba's link above would need similar kinds of handling
17:56:15 <kraman> can someone help me with those?
17:56:32 <SpamapS> kraman: I think it would help to find the minimum actual required API if the use cases were specified
17:56:33 <kraman> i need a nova person i can bounce them off so i know it makes sense
17:57:10 <samalba> well, first action item would be to discard this new service discussion I guess?
17:57:11 <SpamapS> like, define 5 things you want to do well. Pick the simplest one and use that as first implementation goal, and the most complex one and use that to guide the design so it doesn't close any doors.
17:57:15 <damnsmith> kraman: in #openstack-nova, lots of folks can help you, myself included
17:57:27 <kraman> k
17:57:58 <kraman> would also like a person from each container virt technology to be involved
17:58:13 <rustlebee> hang out in the channel :)
17:58:16 <SpamapS> There's so much "people want" and "users might" ... need some more "users absolutely will..."
17:58:17 <samalba> I'll be involved for Docker obviously
17:58:19 <kraman> i only know docker and libvirt-lxc … and those only from a user point of view
17:58:22 <rustlebee> and we can meet in here weekly if you want
17:58:32 <zul> kraman: i'll help out for lxc (if i don't know, i can find someone who does)
17:58:33 <SpamapS> also consider #heat for any agent discussions. :)
17:58:50 <samalba> rustlebee: fine with me
17:58:54 <zul> kraman:  i can do libvirt-lxc as well
17:59:07 <SpamapS> inside the instance is a thing we can play with a little more (not much tho. :)
17:59:12 <kraman> can you update https://etherpad.openstack.org/p/containers-service-api please
17:59:56 <rustlebee> ok 1 minute
18:00:17 <rustlebee> any final comments?
18:00:33 <rustlebee> next Friday is a US holiday, so if we meet again, i'd say in 2 weeks
18:00:35 <samalba> just want to thank everyone, it was helpful
18:00:46 <rustlebee> we can follow up on the list about that
18:00:50 <kraman> ok
18:00:58 <samalba> ok
18:01:00 <rustlebee> and otherwise, #openstack-nova is the place to be :)
18:01:03 <rustlebee> thanks everyone!
18:01:05 <rustlebee> #endmeeting