*** marcoemorais has quit IRC | 00:28 | |
*** EricGonczer_ has joined #openstack-containers | 01:07 | |
*** EricGonczer_ has quit IRC | 01:10 | |
*** unicell has joined #openstack-containers | 01:13 | |
*** jeckersb is now known as jeckersb_gone | 01:14 | |
*** EricGonczer_ has joined #openstack-containers | 01:33 | |
*** PaulCzar has quit IRC | 01:35 | |
*** EricGonczer_ has quit IRC | 01:42 | |
*** kebray has joined #openstack-containers | 01:48 | |
*** bitblt has joined #openstack-containers | 02:39 | |
*** bitblt has quit IRC | 02:39 | |
*** bitblt has joined #openstack-containers | 02:39 | |
*** bitblt has left #openstack-containers | 02:39 | |
*** harlowja is now known as harlowja_away | 02:42 | |
*** kebray has quit IRC | 02:48 | |
*** EricGonczer_ has joined #openstack-containers | 03:05 | |
*** EricGonczer_ has quit IRC | 03:11 | |
*** nshaikh has joined #openstack-containers | 03:45 | |
*** unicell has quit IRC | 04:00 | |
*** nshaikh has quit IRC | 04:13 | |
*** unicell has joined #openstack-containers | 04:38 | |
*** unicell has quit IRC | 05:01 | |
*** marcoemorais has joined #openstack-containers | 05:14 | |
*** marcoemorais1 has joined #openstack-containers | 05:15 | |
*** marcoemorais has quit IRC | 05:19 | |
*** adrian_otto has joined #openstack-containers | 05:23 | |
*** nshaikh has joined #openstack-containers | 05:32 | |
*** diga has joined #openstack-containers | 06:18 | |
diga | Hi | 06:19 |
*** zul has quit IRC | 06:31 | |
*** diga has quit IRC | 06:35 | |
*** diga has joined #openstack-containers | 06:35 | |
*** diga has quit IRC | 06:41 | |
*** diga has joined #openstack-containers | 06:42 | |
*** diga1 has joined #openstack-containers | 06:47 | |
*** diga has quit IRC | 06:49 | |
*** diga1 has quit IRC | 06:52 | |
*** zul has joined #openstack-containers | 06:52 | |
*** marcoemorais1 has quit IRC | 07:01 | |
*** diga1 has joined #openstack-containers | 07:18 | |
*** diga1 has quit IRC | 07:25 | |
*** diga1 has joined #openstack-containers | 07:42 | |
*** diga1 has quit IRC | 07:47 | |
*** coolsvap|afk is now known as coolsvap | 07:48 | |
*** stannie has joined #openstack-containers | 07:57 | |
*** julienvey has joined #openstack-containers | 08:15 | |
*** diga1 has joined #openstack-containers | 08:16 | |
*** diga1 has quit IRC | 08:24 | |
*** coolsvap is now known as coolsvap|afk | 08:24 | |
*** diga1 has joined #openstack-containers | 08:50 | |
*** diga1 has quit IRC | 08:54 | |
*** unicell has joined #openstack-containers | 09:00 | |
*** diga1 has joined #openstack-containers | 09:18 | |
*** julienve_ has joined #openstack-containers | 09:27 | |
*** diga1 has quit IRC | 09:28 | |
*** julienvey has quit IRC | 09:30 | |
*** diga1 has joined #openstack-containers | 09:49 | |
*** diga1 has quit IRC | 09:52 | |
*** diga1 has joined #openstack-containers | 09:52 | |
*** diga1 has quit IRC | 10:02 | |
*** unicell has quit IRC | 10:38 | |
*** unicell has joined #openstack-containers | 10:46 | |
*** EricGonczer_ has joined #openstack-containers | 11:08 | |
*** EricGonczer_ has quit IRC | 11:23 | |
*** unicell has quit IRC | 11:34 | |
nshaikh | hey diga | 11:34 |
*** unicell has joined #openstack-containers | 11:34 | |
*** nshaikh has quit IRC | 11:37 | |
*** EricGonczer_ has joined #openstack-containers | 12:08 | |
*** EricGonczer_ has quit IRC | 12:11 | |
*** nshaikh has joined #openstack-containers | 12:11 | |
*** nimissa1 has left #openstack-containers | 12:13 | |
*** adrian_otto has quit IRC | 12:17 | |
*** jeckersb_gone is now known as jeckersb | 12:29 | |
*** julim has joined #openstack-containers | 12:36 | |
*** nshaikh has quit IRC | 12:51 | |
*** nshaikh has joined #openstack-containers | 12:53 | |
*** jeckersb is now known as jeckersb_gone | 12:55 | |
*** jeckersb_gone is now known as jeckersb | 12:56 | |
*** thomasem has joined #openstack-containers | 12:56 | |
*** nshaikh has quit IRC | 13:17 | |
*** julienve_ has quit IRC | 13:38 | |
*** julienvey has joined #openstack-containers | 13:39 | |
*** adrian_otto has joined #openstack-containers | 14:28 | |
*** adrian_otto has quit IRC | 15:03 | |
*** n0ano has joined #openstack-containers | 15:35 | |
*** mikedillion has joined #openstack-containers | 15:42 | |
*** adrian_otto has joined #openstack-containers | 16:00 | |
*** bauzas has joined #openstack-containers | 16:01 | |
adrian_otto | hi everyone | 16:01 |
bauzas | adrian_otto: hi | 16:01 |
adrian_otto | bauzas is joining us to follow up on our initial Scheduling/Gantt talk from the Magnum perspective | 16:01 |
n0ano | o/ | 16:01 |
adrian_otto | I am going to start a logged meeting | 16:01 |
adrian_otto | #startmeeting containers | 16:01 |
openstack | Meeting started Wed Oct 1 16:01:58 2014 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot. | 16:01 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 16:02 |
openstack | The meeting name has been set to 'containers' | 16:02 |
adrian_otto | #topic Gantt and Magnum - Scheduling for Containers in OpenStack | 16:02 |
adrian_otto | welcome bauzas | 16:02 |
bauzas | adrian_otto: thanks | 16:02 |
bauzas | n0ano just joined us too, he's also working on Gantt | 16:02 |
adrian_otto | thanks for agreeing to meet with us. The purpose of today's discussion is to add some detail about how Magnum will approach the subject of scheduling | 16:03 |
bauzas | for sure, I would love to know your requirements | 16:03 |
adrian_otto | resource scheduling is something that Nova already does, and has a modular interface to adapt how it's done. | 16:03 |
bauzas | right | 16:03 |
adrian_otto | we understand that you plan to pull out the scheduling piece as a standalone project named Gantt | 16:03 |
bauzas | right too | 16:04 |
adrian_otto | once that happens, other projects could tap into that capability, which is what we would like to do | 16:04 |
bauzas | that's the idea | 16:04 |
adrian_otto | also, last time we met, we covered our initial intent for Magnum | 16:04 |
n0ano | +1 | 16:04 |
bauzas | because we're convinced that scheduling needs to be holistic | 16:04 |
adrian_otto | to support two placement strategies for containers within Nova instances (with the help of an in-guest agent) | 16:04 |
adrian_otto | 1) To place a container on a specific instance ID | 16:05 |
bauzas | right, could you please just restate here for n0ano very briefly ? | 16:05 |
adrian_otto | 2) To repeatedly fill an instance with containers until no more will fit, and then we will create another instance, fill that, etc. We have been referring to this mode as "simple sequential fill". | 16:05 |
bauzas | right | 16:06 |
adrian_otto | as containers are removed, and Nova instances are left completely vacant, the instances may be automatically deleted by Magnum | 16:06 |
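As a sketch of the "simple sequential fill" mode adrian_otto describes above, the loop below packs containers onto existing instances and grows or reaps the instance pool at the edges. Everything here is illustrative, assuming hypothetical Instance/create/delete helpers rather than real Magnum code:

    from dataclasses import dataclass, field

    @dataclass
    class Instance:
        """A stand-in for a Nova instance hosting containers."""
        free_memory_mb: int
        containers: list = field(default_factory=list)

    def place_container(container_mb, instances, create_instance):
        """Fill existing instances first; create a new one only when full."""
        for inst in instances:
            if inst.free_memory_mb >= container_mb:
                inst.free_memory_mb -= container_mb
                inst.containers.append(container_mb)
                return inst
        inst = create_instance()          # no room anywhere: grow the pool
        instances.append(inst)
        inst.free_memory_mb -= container_mb
        inst.containers.append(container_mb)
        return inst

    def reap_vacant(instances, delete_instance):
        """Delete Nova instances left completely vacant, per the line above."""
        for inst in [i for i in instances if not i.containers]:
            instances.remove(inst)
            delete_instance(inst)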
adrian_otto | so clearly those two modes are a first step, and we can get there independently from Gantt, giving us some runway in terms of time | 16:06 |
adrian_otto | we would like to be able to do more sophisticated placement in the future | 16:06 |
adrian_otto | example use cases are instance affinity (think host affinity) | 16:07 |
bauzas | that makes sense | 16:07 |
adrian_otto | and instance anti-affinity | 16:07 |
bauzas | right | 16:07 |
adrian_otto | we may also want other concepts such as zone filters (to represent network performance boundaries, etc.) | 16:07 |
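The affinity/anti-affinity use cases just mentioned map naturally onto Nova's scheduler-filter pattern: a class exposing a pass/fail predicate per candidate. The sketch below is a standalone illustration of that pattern, not actual Nova or Gantt code; a real Nova filter would subclass nova.scheduler.filters.BaseHostFilter, and the candidate/request attributes here are hypothetical:

    class InstanceAntiAffinityFilter:
        """Reject candidates already hosting containers from the same
        anti-affinity group as the incoming request."""

        def candidate_passes(self, candidate, request):
            group = request.get('anti_affinity_group')
            if group is None:
                return True  # no constraint expressed; any candidate is fine
            return group not in candidate.hosted_groups

    class InstanceAffinityFilter:
        """Only accept candidates already hosting the named affinity group."""

        def candidate_passes(self, candidate, request):
            group = request.get('affinity_group')
            if group is None:
                return True
            return group in candidate.hosted_groups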
n0ano | I think `everybody` wants to do more sophisticated scheduling, hence the desire to split out gantt to be able to satisfy more projects than just nova | 16:07 |
adrian_otto | indeed! | 16:07 |
adrian_otto | so that's the starting point. | 16:07 |
n0ano | violent agreement so far | 16:08 |
adrian_otto | we hope to leverage as much of what's already there as possible, and have a sensible way of extending it | 16:08 |
bauzas | n0ano: here there is no need for a cross-project scheduler, just for something that Nova doesn't provide yet | 16:08 |
adrian_otto | yes, we don't care if the scheduler is provided by nova or some other service | 16:09 |
n0ano | which should make the containers usage a closer fit, fewer changes needed | 16:09 |
adrian_otto | we are after the functionality | 16:09 |
bauzas | here, the idea is that you have N sets as capacity, and you want to spawn Y children based on a best effort occupancy of X | 16:09 |
adrian_otto | so a great outcome of today's chat will be a shared vision of where we are headed, and a rough tactical outline for how to begin advancing toward that vision together | 16:09 |
bauzas | X can be hosts/instances/routers/cinder-volumes and Y can be instances/containers/networks/volumes | 16:10 |
adrian_otto | yes, that's right | 16:10 |
bauzas | the placement logic and the efficiency of that logic still has to be the same | 16:10 |
adrian_otto | agreed. | 16:10 |
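To make bauzas's generalization concrete: the same filter loop works whether X is a host receiving instances or an instance receiving containers. A purely illustrative sketch, reusing the filter shape from the earlier example; Gantt's real interfaces were not yet defined at this point:

    def schedule(consumer, capacities, filters):
        """Return the capacities (hosts, instances, volumes, ...) able to
        take `consumer` (an instance, container, network, volume, ...)."""
        # A weighing/sorting step would normally follow filtering, as in
        # Nova's filter-then-weigh scheduler; omitted here for brevity.
        return [c for c in capacities
                if all(f.candidate_passes(c, consumer) for f in filters)]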
n0ano | I think there's general consensus on the desired outcome, we've been bogged down by the mechanics of getting there so far | 16:11 |
bauzas | n0ano: right, and that's why we're moving to using objects | 16:11 |
*** thomasem has quit IRC | 16:11 | |
adrian_otto | I do have a specific question about fault recovery, and whether we conceptualize that as scheduling, or orchestration | 16:11 |
bauzas | adrian_otto: the current nova scheduler model accepts non-validated, non-typed dictionaries of values | 16:11 |
adrian_otto | say I have provisioned resource Y on instance X | 16:12 |
bauzas | adrian_otto: sure go ahead | 16:12 |
adrian_otto | and Y dies. | 16:12 |
bauzas | yep | 16:12 |
adrian_otto | I need a way to have that restarted. | 16:12 |
adrian_otto | does Nova already have a concept of this? | 16:12 |
bauzas | that's not automatic | 16:12 |
n0ano | adrian_otto, I've always interpreted that as a heat issue, not a nova scheduling issue | 16:12 |
bauzas | but you can evacuate | 16:12 |
bauzas | n0ano: strong +1 | 16:13 |
adrian_otto | ok, so the user would use a delete command in the API and then another POST to create a new one? | 16:13 |
bauzas | IIRC, there is a heat effort about that | 16:13 |
bauzas | adrian_otto: nope | 16:13 |
bauzas | I heard about Heat 'convergence' | 16:13 |
n0ano | nova (at least currently) is kind of fire and forget, once you've launched the instance nova scheduling is done (this is an area that might need to change in future) | 16:13 |
adrian_otto | yes, I'm pretty sure this is within the scope of Heat, but I wanted your perspective on that | 16:13 |
bauzas | still need to figure out what that is exactly | 16:13 |
bauzas | adrian_otto: Gantt will be scheduling things and providing SLAs | 16:14 |
bauzas | adrian_otto: if something goes weird, Gantt has to be updated to keep making good decisions | 16:14 |
bauzas | adrian_otto: by design, Gantt can be racy | 16:14 |
n0ano | I think fault recover is a heat issue, gantt will have to be involved but the core logic comes from heat | 16:15 |
bauzas | adrian_otto: ie. if the updates are stale, Gantt will provide wrong decisions | 16:15 |
adrian_otto | well, you do need to know the current utilization against capacity, so if a sub-resource is deleted, the scheduler may need to learn about that. Is this assumption correct? | 16:15 |
bauzas | adrian_otto: right, Gantt will have a statistics API | 16:15 |
adrian_otto | or will it poll each instance at decision time? | 16:15 |
adrian_otto | ok, I see | 16:15 |
bauzas | adrian_otto: so projects using it will update its views | 16:15 |
bauzas | adrian_otto: nope, no polling from Gantt | 16:15 |
n0ano | adrian_otto, right, one of the changes we're actively making is to put all resource tracking inside the scheduler | 16:16 |
bauzas | adrian_otto: projects provide a view to Gantt; it's each project's responsibility to keep that view consistent | 16:16 |
adrian_otto | so there will need to be some way to inform the stats through that statistics API | 16:16 |
bauzas | adrian_otto: exactly | 16:16 |
adrian_otto | we will expect Heat to do that upon triggering a resource restart (heal) event | 16:16 |
bauzas | n0ano: to be precise, that will be the claiming process | 16:16 |
adrian_otto | got it, ok | 16:16 |
n0ano | what bauzas said | 16:16 |
adrian_otto | excellent. | 16:17 |
bauzas | n0ano: for making sure that concurrent schedulers can do optimistic scheduling | 16:17 |
bauzas | without a locking mechanism | 16:17 |
bauzas | but that's internal to Gantt | 16:17 |
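A sketch of the lock-free claiming idea bauzas alludes to: each scheduler decides against its own (possibly stale) view, then claims with a compare-and-swap, retrying if another scheduler won the race. The store interface here is entirely hypothetical; the real design was internal to Gantt and not yet settled:

    def claim(store, host_id, amount, retries=3):
        """Optimistically claim `amount` of capacity on `host_id`."""
        for _ in range(retries):
            view = store.get(host_id)      # snapshot plus a generation tag
            if view.free < amount:
                return False               # genuinely no room left
            # Compare-and-swap: succeeds only if no other scheduler has
            # updated this host since we read view.generation.
            if store.cas(host_id, view.generation, free=view.free - amount):
                return True
        return False                       # too much contention; try another host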
*** zul has quit IRC | 16:17 | |
*** harlowja_away has quit IRC | 16:17 | |
adrian_otto | ok | 16:18 |
bauzas | projects will have to put stats to Gantt using an API endpoint, and will consume decisions from another API endpoint | 16:18 |
adrian_otto | ok, understood. | 16:18 |
bauzas | so that requires a formal datatype for updating the stats | 16:18 |
adrian_otto | this will be some form of a feed interface like an RPC queue? | 16:18 |
bauzas | and here is what we're currently working on | 16:18 |
*** zul has joined #openstack-containers | 16:19 | |
n0ano | note that, currently, all stats are kind of nova instance related, for cross project (cinder, neutron) usage we'll have to expand those stats | 16:19 |
bauzas | adrian_otto: the internals are not yet agreed | 16:19 |
adrian_otto | kk | 16:19 |
bauzas | adrian_otto: I was envisioning something like a WSME datatype | 16:19 |
bauzas | adrian_otto: or a Nova object currently | 16:19 |
bauzas | adrian_otto: my personal opinion is that Nova objects are good candidates for becoming WSME datatypes | 16:20 |
adrian_otto | :-) | 16:20 |
bauzas | adrian_otto: but that's my personal opinion :) | 16:20 |
bauzas | so the API will have precise types | 16:20 |
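For a flavor of what a "precise type" for stats updates might look like, here is a hypothetical WSME datatype in the declarative style the WSME library actually uses (a wtypes.Base subclass with typed attributes). The field names are guesses, since Gantt's schema was still under discussion at this point:

    from wsme import types as wtypes

    class CapacityStats(wtypes.Base):
        """One capacity record pushed to the scheduler's statistics API."""
        resource_id = wtypes.text     # e.g. a host, instance, or volume id
        resource_type = wtypes.text   # lets stats cover more than compute nodes
        total_memory_mb = int
        used_memory_mb = int
        generation = int              # lets the scheduler detect stale updates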
adrian_otto | ok, so I think I distracted you from your description of your current focus of work | 16:20 |
bauzas | adrian_otto: not exactly | 16:20 |
bauzas | adrian_otto: I just wanted to emphasize that, since the current Nova scheduler doesn't validate what we send to it, we need to do that work | 16:21 |
n0ano | focus = clean up claims, split out into gantt | 16:21 |
adrian_otto | ok | 16:21 |
bauzas | so the basic plan for Kilo is what we call "fix the tech debt", i.e. provide clean interfaces for updating stats and consuming requests | 16:22 |
bauzas | and eventually move the claiming process to Gantt to allow concurrent, non-locking decisions | 16:22 |
bauzas | but that's not impacting your work | 16:22 |
n0ano | there is so much that people want to do that is predicated on splitting out gantt that I `really` want to focus on that | 16:22 |
*** thomasem has joined #openstack-containers | 16:23 | |
adrian_otto | our favorite question from product managers is "when will XYZ be ready". We have been pretty clear about the sequencing of new features. What we could tighten up is an expectation of when we might clear various milestones. What do you think the best way is to address these questions? | 16:24 |
bauzas | adrian_otto: what we can promise is to deliver a Nova library by Kilo | 16:24 |
bauzas | adrian_otto: i.e. something detached from the Nova repo, residing in its own library but still pointing to the same DB | 16:25 |
adrian_otto | what is a reasonable expectation for what features will be present in that library? Is that feature parity with what is in Nova today, with the tech debt subtracted out? | 16:25 |
n0ano | adrian_otto, when you get a good answer to that let me know, I get the same questions internally and `when it's done' is never sufficient | 16:25 |
adrian_otto | n0ano: hear, hear! | 16:25 |
bauzas | adrian_otto: feature parity is a must have | 16:25 |
bauzas | adrian_otto: no split without feature parity | 16:25 |
adrian_otto | ok, that makes sense | 16:26 |
n0ano | bauzas, +1 | 16:26 |
bauzas | adrian_otto: so, based on our progress, that requires providing some way to update the scheduler with more than just "compute node" stats | 16:26 |
adrian_otto | maybe if we thought about it like "in what release of OpenStack could Magnum and Gantt be offering affinity and anti-affinity scheduling capability"? | 16:26 |
bauzas | and the target is for Kilo | 16:26 |
adrian_otto | that might be a more concrete question that we could chew on together | 16:26 |
bauzas | adrian_otto: ask the Nova cores what priority they will put on the scheduler effort | 16:27 |
adrian_otto | !! :-) | 16:27 |
openstack | adrian_otto: Error: "!" is not a valid command. | 16:27 |
bauzas | adrian_otto: raise the criticality if you want | 16:27 |
adrian_otto | ok so assuming this is in a new repo, it could have its own review queue | 16:27 |
bauzas | adrian_otto: tbh, we're not suffering from a contributor bandwidth problem | 16:27 |
adrian_otto | and a focused reviewer team | 16:28 |
*** marcoemorais has joined #openstack-containers | 16:28 | |
n0ano | adrian_otto, +1 | 16:28 |
adrian_otto | anyone who is a true stakeholder could opt into that | 16:28 |
bauzas | adrian_otto: right, but before doing that, we need to get the necessary bits merged for fixing the tech debt and providing stats with other Nova objects | 16:28 |
adrian_otto | ok, that's sensible too | 16:28 |
bauzas | and here comes the velocity problem | 16:28 |
bauzas | as we need feature parity *before* the split, we need to carry our changes using the Nova workflow | 16:29 |
bauzas | adrian_otto: hope you understand what I mean | 16:29 |
adrian_otto | I do. | 16:29 |
adrian_otto | I'm thinking on it | 16:29 |
bauzas | adrian_otto: in particular wrt Nova runways/slots proposal | 16:30 |
bauzas | adrian_otto: that's envisaged for kilo-3, not before but still | 16:30 |
n0ano | bauzas, I'm not `too` concerned about the runways proposal, I hope we'll be in before we have to worry about that (I don't like it but that's a different issue) | 16:30 |
*** thomasem has quit IRC | 16:30 | |
bauzas | n0ano: think about it, and how we can merge changes before kilo-3 | 16:31 |
bauzas | n0ano: and you'll see that isolate-scheduler-db will probably be handled by kilo-3 | 16:31 |
bauzas | and winter is coming (no jokes) | 16:31 |
n0ano | yeah but we get more accomplished in winter (ignoring the holidays) | 16:32 |
adrian_otto | the challenge here is that we can't just produce more Nova core reviewers by mustering them up. | 16:32 |
adrian_otto | that is a scarce resource that grows organically, and at a slow rate. | 16:32 |
n0ano | adrian_otto, nope, but this is a generic problem that everyone is facing, we just have to deal with it | 16:33 |
adrian_otto | so as you mentioned, we would need to cause the existing reviewers to perceive the Gantt related work as a priority. | 16:33 |
bauzas | adrian_otto: that's the thing | 16:33 |
*** EricGonczer_ has joined #openstack-containers | 16:34 | |
bauzas | in particular, Design Summits are a good opportunity for raising concerns about priorities | 16:34 |
adrian_otto | it probably only takes 3 reviewers with a commitment of 5 hours a week or less to succeed at this, right? | 16:34 |
bauzas | even less | 16:34 |
adrian_otto | probably half that time | 16:34 |
bauzas | patches are quite small, except a big one I just proposed | 16:35 |
adrian_otto | let's just call it 2 hours a week for sake of discussion | 16:35 |
n0ano | we can get reviewers, it's the 2 core reviewers that are the problem. | 16:35 |
adrian_otto | that seems to be something that we could influence using a 1:1 campaign | 16:35 |
bauzas | n0ano: +1000 | 16:35 |
bauzas | adrian_otto: as usual, pings and f2f during Summit | 16:36 |
adrian_otto | and I have a feeling that not just any cores will do, it's the subset of cores who can reason about and critique contributions to this part of the system architecture. | 16:36 |
bauzas | adrian_otto: tbh, we already have support from a set of them | 16:37 |
adrian_otto | so what we need is an earmarked time commitment from them | 16:37 |
n0ano | the reality is we've thrashed the issues enough that I think we know what needs to be done, now it's just a matter of doing it and getting it reviewed | 16:37 |
bauzas | adrian_otto: we just need to emphasize the priority | 16:37 |
n0ano | bauzas, +1 | 16:37 |
adrian_otto | and if we approached each of them with a "join me for two 1-hour meetings each week to review patches for this subject matter" pitch. | 16:38 |
adrian_otto | that's a commitment that's likely to succeed, and may be just enough lubrication to help you get that work through | 16:38 |
adrian_otto | or even three 30 minute meetings, something along those lines | 16:38 |
adrian_otto | ideally you'd have them review together interactively. | 16:39 |
adrian_otto | so they can settle disputes on the spot to keep cycle times between patch revisions shorter. | 16:39 |
n0ano | that'd be nice, whether we can get that kind of commitment is the issue | 16:39 |
adrian_otto | how would you feel about working in this way for a limited time, until a particular milestone is reached (Gantt in its own code base with its own reviewer team)? | 16:40 |
adrian_otto | so the committer, and ideally three reviewers, show up like that on a regular schedule as a tiger team | 16:40 |
n0ano | I'll try anything, if you think that can help I'd do it | 16:41 |
adrian_otto | and make revisions on the fly to the extent possible | 16:41 |
bauzas | adrian_otto: well, that's quite close to what Nova calls 'slot' | 16:41 |
adrian_otto | ok, I'd like to help pitch that approach, or any variation on this theme that you think might resonate with the stakeholders and get the throughput up (velocity++). | 16:42 |
bauzas | adrian_otto: I mean, asking specific people to do specific reviews at a specific time is FWIW what we call a runway or a slot | 16:42 |
adrian_otto | tell me more about 'slot' please. | 16:42 |
adrian_otto | ok, mikal proposed that about a month back | 16:42 |
bauzas | adrian_otto: the proposal is about having a certain number of blueprints covered at the same time | 16:42 |
adrian_otto | has that approach been adopted, and is it effective? | 16:42 |
bauzas | the current threshold is set to 10 | 16:42 |
bauzas | adrian_otto: not yet, planned for k3 | 16:43 |
adrian_otto | oh, so like an "on approach" strategy | 16:43 |
n0ano | adrian_otto, only a proposal, hasn't been done yet | 16:43 |
adrian_otto | where you cherry pick feature topics that must land by a deadline? | 16:43 |
n0ano | basically, it's a way to prioritize important BPs | 16:43 |
adrian_otto | I see | 16:43 |
bauzas | adrian_otto: that's just giving reviewer attention until it gets merged | 16:43 |
n0ano | or, another way of saying it, the current review system doesn't work :-) | 16:44 |
adrian_otto | I refer to this as the "bird dog" approach | 16:44 |
bauzas | adrian_otto: if a blueprint gets stalled in the middle of a vote, it loses its slot | 16:44 |
adrian_otto | you have a team of reviewers bird dog a given patch topic | 16:44 |
adrian_otto | ok, that's good food for thought. Please let me know how I can help with this. | 16:45 |
adrian_otto | I'll plan to help with the persuasive pitch delivery. | 16:45 |
bauzas | adrian_otto: well, the best way is to emphasize your needs during the Summit | 16:45 |
n0ano | adrian_otto, tnx, much appreciated | 16:46 |
adrian_otto | I'll be there. :-) | 16:46 |
bauzas | we have some proposals for talks at the Summit | 16:46 |
adrian_otto | the Design summit schedule is still unreleased, correct? | 16:46 |
n0ano | a couple of proposed launchpads so far | 16:47 |
n0ano | https://etherpad.openstack.org/p/kilo-nova-summit-topics | 16:47 |
n0ano | https://etherpad.openstack.org/p/kilo-crossproject-summit-topics | 16:47 |
bauzas | n0ano: s/launchpad/etherpad | 16:48 |
n0ano | at least I got the pad part right :-) | 16:48 |
adrian_otto | :-) | 16:48 |
adrian_otto | ok, I will review those and make remarks on them | 16:49 |
*** thomasem has joined #openstack-containers | 16:49 | |
adrian_otto | thanks for your time today bauzas and n0ano. | 16:49 |
n0ano | NP | 16:49 |
bauzas | np | 16:49 |
adrian_otto | anything more before we wrap up? | 16:50 |
bauzas | good to chat with you | 16:50 |
bauzas | nope | 16:50 |
bauzas | we got your requirements | 16:50 |
n0ano | looking forward to Paris | 16:50 |
adrian_otto | ok, cool. I'll see you in Paris! | 16:50 |
adrian_otto | #endmeeting | 16:50 |
openstack | Meeting ended Wed Oct 1 16:50:38 2014 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:50 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-10-01-16.01.html | 16:50 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-10-01-16.01.txt | 16:50 |
openstack | Log: http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-10-01-16.01.log.html | 16:50 |
bauzas | AIUI, that's basically the same filters/logic but applied to distinct components | 16:51 |
bauzas | adrian_otto: still there ? | 16:55 |
*** thomasem_ has joined #openstack-containers | 16:57 | |
*** thomasem has quit IRC | 16:57 | |
*** thomasem_ has quit IRC | 16:58 | |
adrian_otto | yes, but I am dialing into a conference call atm | 17:00 |
*** thomasem has joined #openstack-containers | 17:02 | |
*** harlowja has joined #openstack-containers | 17:15 | |
*** diga has joined #openstack-containers | 17:47 | |
diga | Hi | 17:47 |
*** diga has quit IRC | 18:00 | |
*** EricGonczer_ has quit IRC | 18:11 | |
*** EricGonczer_ has joined #openstack-containers | 18:11 | |
*** unicell has quit IRC | 19:01 | |
*** diga has joined #openstack-containers | 19:02 | |
*** julienvey has quit IRC | 19:05 | |
*** diga has quit IRC | 19:28 | |
*** diga has joined #openstack-containers | 19:37 | |
*** julienvey has joined #openstack-containers | 19:38 | |
*** julienvey has quit IRC | 19:49 | |
*** thomasem has quit IRC | 19:53 | |
*** thomasem has joined #openstack-containers | 19:55 | |
*** thomasem has quit IRC | 19:59 | |
*** diga has quit IRC | 20:03 | |
*** EricGonczer_ has quit IRC | 20:28 | |
*** EricGonczer_ has joined #openstack-containers | 20:28 | |
*** EricGonczer_ has quit IRC | 20:32 | |
*** EricGonczer_ has joined #openstack-containers | 20:33 | |
*** adrian_otto has quit IRC | 20:42 | |
*** PaulCzar has joined #openstack-containers | 20:47 | |
*** mikedillion has quit IRC | 20:51 | |
*** n0ano has left #openstack-containers | 20:54 | |
*** adrian_otto has joined #openstack-containers | 21:00 | |
*** marcoemorais has quit IRC | 21:02 | |
*** marcoemorais has joined #openstack-containers | 21:02 | |
*** marcoemorais has quit IRC | 21:03 | |
*** marcoemorais has joined #openstack-containers | 21:05 | |
*** marcoemorais has quit IRC | 21:05 | |
*** marcoemorais has joined #openstack-containers | 21:05 | |
*** marcoemorais has quit IRC | 21:06 | |
*** marcoemorais has joined #openstack-containers | 21:06 | |
*** diga has joined #openstack-containers | 21:25 | |
*** diga has quit IRC | 21:30 | |
*** jeckersb is now known as jeckersb_gone | 21:32 | |
*** stannie has quit IRC | 21:36 | |
*** EricGonczer_ has quit IRC | 21:39 | |
*** marcoemorais has quit IRC | 22:05 | |
*** marcoemorais has joined #openstack-containers | 22:06 | |
*** marcoemorais has quit IRC | 22:06 | |
*** marcoemorais has joined #openstack-containers | 22:07 | |
*** marcoemorais has quit IRC | 22:07 | |
*** marcoemorais has joined #openstack-containers | 22:07 | |
*** mikedillion has joined #openstack-containers | 22:10 | |
*** marcoemorais has quit IRC | 22:13 | |
*** marcoemorais has joined #openstack-containers | 22:13 | |
*** marcoemorais has quit IRC | 22:20 | |
*** marcoemorais has joined #openstack-containers | 22:31 | |
*** marcoemorais has quit IRC | 22:33 | |
*** mikedillion has quit IRC | 23:09 | |
*** mikedillion has joined #openstack-containers | 23:55 |