*** PaulCzar has quit IRC | 00:30 | |
*** unicell has joined #openstack-containers | 00:33 | |
*** phiche has joined #openstack-containers | 00:50 | |
*** phiche has quit IRC | 00:55 | |
*** phiche has joined #openstack-containers | 01:50 | |
*** phiche has quit IRC | 01:55 | |
*** marcoemorais has quit IRC | 02:20 | |
*** harlowja is now known as harlowja_away | 02:25 | |
*** phiche has joined #openstack-containers | 02:50 | |
*** phiche has quit IRC | 02:56 | |
*** phiche has joined #openstack-containers | 03:50 | |
*** phiche has quit IRC | 03:54 | |
*** phiche has joined #openstack-containers | 04:50 | |
*** phiche has quit IRC | 04:56 | |
*** julim has quit IRC | 05:17 | |
*** phiche has joined #openstack-containers | 05:34 | |
*** phiche has quit IRC | 06:08 | |
*** stannie has quit IRC | 06:38 | |
*** stannie has joined #openstack-containers | 06:50 | |
*** phiche has joined #openstack-containers | 06:56 | |
*** stannie has quit IRC | 06:56 | |
*** julienvey has joined #openstack-containers | 08:02 | |
*** phiche has quit IRC | 08:16 | |
*** phiche has joined #openstack-containers | 08:24 | |
*** phiche1 has joined #openstack-containers | 08:25 | |
*** phiche has quit IRC | 08:25 | |
*** julienvey has quit IRC | 08:29 | |
*** julienvey has joined #openstack-containers | 08:30 | |
*** phiche has joined #openstack-containers | 08:30 | |
*** phiche1 has quit IRC | 08:31 | |
*** phiche has quit IRC | 08:56 | |
*** phiche has joined #openstack-containers | 09:01 | |
*** unicell1 has joined #openstack-containers | 09:54 | |
*** unicell has quit IRC | 09:56 | |
*** thomasem has joined #openstack-containers | 13:15 | |
*** julim has joined #openstack-containers | 13:28 | |
*** julienvey has quit IRC | 13:34 | |
*** stannie has joined #openstack-containers | 13:34 | |
*** julienvey has joined #openstack-containers | 13:52 | |
*** PaulCzar has joined #openstack-containers | 14:31 | |
*** EricGonczer_ has joined #openstack-containers | 14:33 | |
*** EricGonczer_ has quit IRC | 14:58 | |
*** EricGonczer_ has joined #openstack-containers | 14:59 | |
*** EricGonczer_ has quit IRC | 15:01 | |
*** EricGonczer_ has joined #openstack-containers | 15:01 | |
*** EricGonc_ has joined #openstack-containers | 15:02 | |
*** EricGonczer_ has quit IRC | 15:06 | |
*** diga has joined #openstack-containers | 15:42 | |
*** phiche has quit IRC | 15:55 | |
*** marcoemorais has joined #openstack-containers | 16:25 | |
*** julienve_ has joined #openstack-containers | 16:30 | |
*** julienvey has quit IRC | 16:34 | |
*** marcoemorais has quit IRC | 16:35 | |
*** marcoemorais has joined #openstack-containers | 16:36 | |
*** adrian_otto has joined #openstack-containers | 16:36 | |
*** marcoemorais has quit IRC | 16:36 | |
*** marcoemorais has joined #openstack-containers | 16:36 | |
bauzas_ | adrian_otto: hi adrian | 16:37 |
bauzas_ | adrian_otto: just got a ping yesterday :) | 16:38 |
bauzas_ | adrian_otto: by reading the meeting logs, it seems you want to know the Gantt status, right ? | 16:39 |
adrian_otto | when is a good time to touch base about Gantt? | 16:39 |
bauzas_ | adrian_otto: I have ~15 mins for discussing now, or later in the evening | 16:39 |
adrian_otto | now works. | 16:39 |
bauzas_ | cool | 16:39 |
bauzas_ | so what do you want to know about the scheduler split ? | 16:39 |
adrian_otto | well, the hardest question to answer is timing of readiness | 16:41 |
adrian_otto | our intent is not to overlap our work with anything in Nova that can be accessed by an external service | 16:41 |
*** stannie has quit IRC | 16:42 | |
adrian_otto | we think that the Gantt scheduler is a very important component, and one we especially do not want to duplicate | 16:42 |
bauzas_ | adrian_otto: makes sense | 16:42 |
bauzas_ | adrian_otto: the main problem is the delivery timing | 16:42 |
adrian_otto | so I think what the team wants is a sense of the project, what stage it is in, and if it's in progress, when we anticipate it might be ready to try | 16:42 |
bauzas_ | adrian_otto: what do you exactly require to schedule ? | 16:42 |
adrian_otto | good question, so let's first explain what we plan to do as a first step | 16:43 |
adrian_otto | and then I can expand on that | 16:43 |
bauzas_ | adrian_otto: as per my understanding, you want to schedule containers coming from an end-user API | 16:43 |
bauzas_ | adrian_otto: sure | 16:43 |
adrian_otto | so we know there will be some period of time while Gantt is not yet ready | 16:43 |
adrian_otto | and our stated intent is to allow two container placement schemes during that initial period: | 16:44 |
adrian_otto | 1) Specify the instance_id of the Nova instance you want to place a container on. | 16:44 |
adrian_otto | 2) Automatically create a nova instance, and sequentially fill it with containers until no more fit | 16:44 |
adrian_otto | then create another, etc. | 16:44 |
bauzas_ | let me rephrase this | 16:45 |
adrian_otto | deleting instances when all containers are deleted. | 16:45 |
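A minimal sketch of placement scheme 2 above: sequentially fill a Nova instance with containers, spill over to a new instance when one is full, and delete an instance once its last container is gone. All names and the fixed memory budget are illustrative assumptions, not Magnum or Nova APIs.

    # Sketch of scheme 2: fill-then-spill placement (hypothetical names).
    class Instance:
        def __init__(self, instance_id, capacity_mb):
            self.instance_id = instance_id
            self.capacity_mb = capacity_mb  # assumed per-instance memory budget
            self.containers = {}            # container_id -> memory_mb

        def fits(self, mb):
            return sum(self.containers.values()) + mb <= self.capacity_mb

    def place_container(instances, create_instance, container_id, memory_mb):
        """Place on the first instance with room, else create a new one."""
        for inst in instances:
            if inst.fits(memory_mb):
                break
        else:
            inst = create_instance()  # a Nova boot call, in practice
            instances.append(inst)
        inst.containers[container_id] = memory_mb
        return inst.instance_id

    def remove_container(instances, delete_instance, container_id):
        """Drop a container; delete the instance when it empties out."""
        for inst in instances:
            if container_id in inst.containers:
                del inst.containers[container_id]
                if not inst.containers:
                    instances.remove(inst)
                    delete_instance(inst.instance_id)
                return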
bauzas_ | containers would be subinstances ? | 16:45 |
adrian_otto | yes. | 16:45 |
bauzas_ | ie. you would spawn an instance and then some containers upon this ? | 16:45 |
bauzas_ | ok, I guess for security reasons ? | 16:45 |
adrian_otto | yes, but consider that an instance may be of any type. Currently VM, Bare Metal, and Container are contemplated. | 16:46 |
bauzas_ | I was seeing the docker driver as a direct driver over the host | 16:46 |
bauzas_ | adrian_otto: gotcha | 16:46 |
bauzas_ | so I understand the need of an upper API | 16:46 |
adrian_otto | so in the case of nova-docker as the nova virt driver, you would end up with nested containers | 16:46 |
bauzas_ | adrian_otto: got it | 16:46 |
bauzas_ | adrian_otto: the containers service is just for requesting high-level containers whatever the driver is | 16:47 |
bauzas_ | k | 16:47 |
adrian_otto | yes | 16:47 |
adrian_otto | so there are two unique scheduling problems | 16:47 |
bauzas_ | so, that's a new dependency | 16:47 |
adrian_otto | one is what nova does with placing instances on compute hosts | 16:47 |
adrian_otto | and another is what Magnum (OpenStack Containers Service) does with containers on instances | 16:48 |
bauzas_ | and I see the latter | 16:48 |
bauzas_ | got it | 16:48 |
adrian_otto | fundamentally similar | 16:48 |
bauzas_ | nova-scheduler is addressing the instances | 16:48 |
adrian_otto | yes | 16:48 |
bauzas_ | so you need something for addressing the containers | 16:48 |
bauzas_ | k | 16:48 |
adrian_otto | yes | 16:48 |
bauzas_ | lemme braindump this | 16:48 |
adrian_otto | so we like the idea of using filters | 16:48 |
adrian_otto | just like we can do with instances | 16:49 |
bauzas_ | yup | 16:49 |
adrian_otto | so we can do things like affinity, anti-affinity, etc. | 16:49 |
bauzas_ | you need to care about a list of containers per host | 16:49 |
bauzas_ | oops | 16:49 |
bauzas_ | s/host/instance | 16:49 |
adrian_otto | exactly. | 16:49 |
bauzas_ | are the containers tied to the instance ? | 16:50 |
adrian_otto | our initial discussions about that led us to accept that Magnum would own an index mapping containers to instances | 16:50 |
adrian_otto | yes, a container belongs to one instance at a time. | 16:50 |
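A sketch of the index Magnum would own, mapping each container to exactly one instance at a time, with nova-scheduler-style filters for affinity and anti-affinity layered on top. Every name here is a hypothetical illustration, not actual Magnum or Nova code.

    # Hypothetical container -> instance index plus filters over it.
    container_to_instance = {}  # container_id -> instance_id (one at a time)

    def instances_of(container_ids):
        return {container_to_instance[c] for c in container_ids
                if c in container_to_instance}

    class AffinityFilter:
        """Pass only instances already hosting the named peer containers."""
        def instance_passes(self, instance_id, request):
            peers = request.get("same_instance_as", [])
            return not peers or instance_id in instances_of(peers)

    class AntiAffinityFilter:
        """Reject instances hosting any of the named peer containers."""
        def instance_passes(self, instance_id, request):
            peers = request.get("different_instance_from", [])
            return instance_id not in instances_of(peers)

    def filter_instances(candidates, filters, request):
        return [i for i in candidates
                if all(f.instance_passes(i, request) for f in filters)]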
bauzas_ | k, do you have some instance requirements or flavor reqs ? | 16:50 |
adrian_otto | but like instances can move from one host to another, containers should also be allowed to move from one instance to another. | 16:51 |
bauzas_ | ie. how do you choose the instance on which the container will be started ? | 16:51 |
bauzas_ | k | 16:51 |
bauzas_ | so you potentially want to migrate containers | 16:51 |
bauzas_ | got it | 16:51 |
adrian_otto | yes, I anticipate that we will use flavors, but that's not how prevailing container tools work today | 16:51 |
adrian_otto | for example, Docker allows a -m flag to specify an arbitrary memory ceiling | 16:52 |
bauzas_ | I guess the boot request will come to the Magnum API and Magnum will call the Nova API ? | 16:52 |
adrian_otto | and a similar flag for cpushares and cpuset | 16:52 |
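The flags mentioned here, shown in a small illustrative launch. "-m" is Docker's memory-ceiling flag as stated above; the exact spellings of the CPU-shares and cpuset flags have varied across Docker releases, so treat those as assumptions.

    import subprocess

    # Illustrative only: start a container with an arbitrary memory ceiling
    # and a CPU-shares weight. The CPU flag spellings differ by Docker
    # version (e.g. -c / --cpuset early on, --cpu-shares / --cpuset-cpus later).
    subprocess.check_call([
        "docker", "run", "-d",
        "-m", "256m",           # arbitrary memory ceiling
        "--cpu-shares", "512",  # relative CPU weight (name varies by version)
        "busybox", "sleep", "3600",
    ])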
bauzas_ | k | 16:52 |
adrian_otto | for creating instances, absolutely | 16:52 |
adrian_otto | we will have an in-guest agent. | 16:52 |
bauzas_ | ok, any plans to use Heat ? | 16:52 |
adrian_otto | yes | 16:53 |
adrian_otto | Heat will be used to do multi-step operations, to the extent possible | 16:53 |
bauzas_ | k | 16:53 |
bauzas_ | I see your issue then | 16:53 |
adrian_otto | so for example, if we have some "flavor" definition that has a Manila volume along with a container, we will use a HOT to order that. | 16:53 |
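A hypothetical illustration of that ordering, expressed as a HOT-style template in a Python dict: Heat creates the share first, then the container that depends on it. Both resource type names are assumptions made up for the example, not real Heat resource types of the time.

    # Hypothetical HOT-style template ordering a Manila share before the
    # container that uses it. Type names are illustrative assumptions.
    template = {
        "heat_template_version": "2013-05-23",
        "resources": {
            "data_share": {
                "type": "OS::Manila::Share",  # assumed type name
                "properties": {"size": 1},
            },
            "app_container": {
                "type": "Magnum::Container",  # hypothetical type
                "depends_on": ["data_share"],  # Heat orders the two steps
                "properties": {"image": "busybox"},
            },
        },
    }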
bauzas_ | and I see why you need Gantt | 16:53 |
bauzas_ | because you need to have your own scheduler | 16:54 |
bauzas_ | for placing containers and migrating them | 16:54 |
adrian_otto | so either we can use Gantt to help us decide where to place containers, or a Heat resource provider will. | 16:54 |
*** marcoemorais has quit IRC | 16:54 | |
adrian_otto | or possibly both. I don't care if there is only a single implementation of scheduling | 16:54 |
*** marcoemorais has joined #openstack-containers | 16:55 | |
bauzas_ | adrian_otto: yeah, that sounds like the right way | 16:55 |
adrian_otto | clarified; I don't care if we call it from multiple places. | 16:55 |
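A sketch of that design point: one scheduling implementation behind a small interface, callable from multiple places (Magnum directly, or a Heat resource provider). The names and the first-fit policy are hypothetical.

    import abc

    class ContainerScheduler(abc.ABC):
        """Single scheduling implementation, called from multiple places."""
        @abc.abstractmethod
        def select_instance(self, request):
            """Return the instance_id to place the container on."""

    class FirstFitScheduler(ContainerScheduler):
        def __init__(self, instances):
            self.instances = instances  # iterable of (instance_id, free_mb)

        def select_instance(self, request):
            need = request.get("memory_mb", 0)
            for instance_id, free_mb in self.instances:
                if free_mb >= need:
                    return instance_id
            raise LookupError("no instance fits")

    # Both call sites share the same object, so there is still only a
    # single implementation of scheduling.
    scheduler = FirstFitScheduler([("inst-1", 512), ("inst-2", 2048)])
    print(scheduler.select_instance({"memory_mb": 1024}))  # -> inst-2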
bauzas_ | ideally, the nova-scheduler could manage these resources | 16:55 |
bauzas_ | I'm just thinking how to update the scheduler with the metrics | 16:56 |
bauzas_ | as I said, pure braindump | 16:56 |
adrian_otto | ok, so back to the idea of the guest agent | 16:56 |
adrian_otto | that is much like nova-compute | 16:56 |
bauzas_ | indeed | 16:56 |
adrian_otto | in the way that nova-compute has access to list KVM guests | 16:57 |
adrian_otto | it will be able to introspect a single container | 16:57 |
bauzas_ | yeah, it sounds to me that's just a subset of the compute_node | 16:57 |
bauzas_ | ie. compute_nodes refers to instances | 16:57 |
adrian_otto | yes. | 16:58 |
bauzas_ | and provides the list of instances | 16:58 |
*** harlowja_away is now known as harlowja | 16:58 | |
adrian_otto | now like VM guests, containers do not run forever | 16:58 |
bauzas_ | I'm just thinking that hypervisor could just introspect the list of the containers within the instances | 16:58 |
adrian_otto | most are expected to run continuously, but some are expected to run for a time and then stop | 16:58 |
bauzas_ | and report it to the scheduler | 16:58 |
adrian_otto | so we will have a concept of a stopped container, where the cgroups and namespaces still exist on the host, but no processes exist in the cgroup | 16:59 |
adrian_otto | think of that maybe as a "sleeping" container that could be restarted with all of its disk state | 16:59 |
bauzas_ | nova provides shelved instances, that's basically the same workflow | 17:00 |
adrian_otto | so information about the running status of each container will need to be relayed back up | 17:00 |
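A sketch of the status payload an in-guest agent might relay upward, assuming invented names. "STOPPED" models the state described above: cgroups and namespaces still exist on the host, but no processes run in the cgroup, so the container can restart with its disk state intact.

    import json
    import time

    RUNNING, STOPPED = "RUNNING", "STOPPED"

    def status_report(instance_id, containers):
        """containers: iterable of (container_id, has_processes) pairs."""
        return json.dumps({
            "instance_id": instance_id,
            "timestamp": time.time(),
            "containers": [
                {"id": cid, "state": RUNNING if alive else STOPPED}
                for cid, alive in containers
            ],
        })

    # Example: one running container, one "sleeping" one.
    print(status_report("inst-1", [("c-1", True), ("c-2", False)]))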
adrian_otto | did I exhaust your time? | 17:01 |
bauzas_ | adrian_otto: yeah I see, I'm just thinking that there is no need to map the list of containers per instance, you just need to map the list of containers per host and add info on which instance each is located | 17:01 |
adrian_otto | as long as we have an event to update that if an instance migrates to another host | 17:01 |
adrian_otto | I'd agree with that. | 17:01 |
bauzas_ | adrian_otto: yeah I will have to step off for a bit | 17:02 |
adrian_otto | ok, catch up again later. | 17:02 |
bauzas_ | adrian_otto: the key thing is that if you can report on the Compute ResourceTracker, then you can easily plug in to the existing scheduler | 17:03 |
adrian_otto | ok | 17:04 |
bauzas_ | adrian_otto: ttyl | 17:09 |
adrian_otto | yes, thanks bauzas_ | 17:09 |
*** marcoemorais has quit IRC | 17:30 | |
*** marcoemorais has joined #openstack-containers | 17:31 | |
*** marcoemorais has quit IRC | 17:32 | |
*** marcoemorais has joined #openstack-containers | 17:33 | |
*** marcoemorais has quit IRC | 17:33 | |
*** marcoemorais has joined #openstack-containers | 17:33 | |
*** julienve_ has quit IRC | 17:35 | |
*** julienvey has joined #openstack-containers | 17:35 | |
*** julienvey has quit IRC | 17:40 | |
*** diga has quit IRC | 18:01 | |
*** adrian_otto has quit IRC | 18:08 | |
*** adrian_otto has joined #openstack-containers | 18:25 | |
*** stannie has joined #openstack-containers | 18:35 | |
*** julienvey has joined #openstack-containers | 18:36 | |
*** julienvey has quit IRC | 18:40 | |
*** julim has quit IRC | 18:51 | |
*** julim has joined #openstack-containers | 18:53 | |
*** julienvey has joined #openstack-containers | 19:36 | |
*** julienvey has quit IRC | 19:41 | |
*** unicell1 has quit IRC | 19:56 | |
*** julienvey has joined #openstack-containers | 20:02 | |
bauzas_ | adrian_otto: I'm back | 20:04 |
*** julienvey has quit IRC | 20:07 | |
adrian_otto | hey there | 20:14 |
adrian_otto | picking up where we left off. You wrote "adrian_otto: the key thing is that if you can report on the Compute ResourceTracker, then you can easily plug in to the existing scheduler" | 20:14 |
adrian_otto | I think I understand that, but want to confirm that I really have a firm grasp | 20:15 |
adrian_otto | there is some mechanism that allows hosts to send usage metrics to the nova-scheduler that are recorded in something called the Compute ResourceTracker. Instances running containers could do the same thing and leverage the existing facilities for tracking utilization against available capacity. Did I understand it right? | 20:16 |
*** marcoemorais has quit IRC | 20:25 | |
*** marcoemorais has joined #openstack-containers | 20:26 | |
*** marcoemorais has quit IRC | 20:27 | |
*** marcoemorais has joined #openstack-containers | 20:28 | |
*** marcoemorais has quit IRC | 20:30 | |
*** marcoemorais has joined #openstack-containers | 20:30 | |
bauzas_ | adrian_otto: mmm, sorry, I forgot to look at the chan | 20:38 |
bauzas_ | adrian_otto: ping me when you see I'm not replying back directly :) | 20:38 |
bauzas_ | adrian_otto: so, do you know a little bit about the Nova architecture and how scheduling is done ? | 20:39 |
bauzas_ | adrian_otto: you have to know 3 services: nova-conductor, nova-scheduler and nova-compute | 20:40 |
bauzas_ | nova-conductor is issuing boot requests | 20:40 |
bauzas_ | nova-conductor issues boot requests by calling scheduler.select_dest(), then picking the first host and calling the compute mgr | 20:41 |
bauzas_ | the scheduler select_dest() filters based on what we call HostState, an in-memory repr of all the computes' state | 20:42 |
bauzas_ | and the Compute mgr updates a DB table called compute_nodes every 60 secs, which is used by the Scheduler for generating HostState | 20:43 |
bauzas_ | the Compute mgr section is called ResourceTracker | 20:43 |
bauzas_ | every host is executing this portion of code every 60 secs by getting the virt state, adding extra info and updating the DB | 20:44 |
bauzas_ | so when I say it's far easier to plug in the containers code if it's host-related, it's because the whole logic is already there | 20:45 |
bauzas_ | there is even a bp for 3rd-party plugins, called Extensible Resource Tracker, that could help you | 20:45 |
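A sketch of the loop just described, with stand-in names: every host runs this roughly every 60 secs, getting the virt state, adding extra info, and updating a compute_nodes-like record that the scheduler reads to build HostState. The containers field shows where an Extensible-Resource-Tracker-style plugin might hook in; none of this is real Nova code.

    import time

    # Stand-in for the compute_nodes DB table the scheduler reads.
    compute_nodes = {}

    def get_virt_state():
        """Stand-in for querying the hypervisor for capacity and usage."""
        return {"memory_mb": 16384, "memory_mb_used": 4096}

    def resource_tracker_tick(host):
        """One iteration of the per-host update (runs every 60 secs)."""
        state = get_virt_state()
        state["host"] = host
        state["updated_at"] = time.time()
        # A containers plugin could add per-instance container data here.
        state["containers"] = {"total": 0}
        compute_nodes[host] = state  # in Nova this is a DB write

    def run(host, period=60):
        while True:
            resource_tracker_tick(host)
            time.sleep(period)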
*** julienvey has joined #openstack-containers | 21:03 | |
*** marcoemorais has quit IRC | 21:04 | |
*** marcoemorais has joined #openstack-containers | 21:04 | |
*** marcoemorais has quit IRC | 21:05 | |
*** marcoemorais has joined #openstack-containers | 21:06 | |
*** marcoemorais has quit IRC | 21:07 | |
*** julienvey has quit IRC | 21:07 | |
*** marcoemorais has joined #openstack-containers | 21:08 | |
*** EricGonc_ has quit IRC | 21:23 | |
*** julienve_ has joined #openstack-containers | 21:43 | |
*** julienve_ has quit IRC | 21:44 | |
*** stannie has quit IRC | 21:48 | |
*** thomasem has quit IRC | 22:06 | |
*** marcoemorais has quit IRC | 22:20 | |
*** marcoemorais has joined #openstack-containers | 22:20 | |
*** marcoemorais has quit IRC | 22:20 | |
*** marcoemorais has joined #openstack-containers | 22:21 | |
*** unicell has joined #openstack-containers | 22:43 | |
*** EricGonczer_ has joined #openstack-containers | 23:20 | |
*** harlowja has quit IRC | 23:48 | |
*** harlowja_ has joined #openstack-containers | 23:48 | |
*** adrian_otto has quit IRC | 23:59 |