Thursday, 2011-10-20

*** vladimir3p has quit IRC00:01
*** jdg_ has quit IRC00:04
*** ohnoimdead_ has joined #openstack-meeting00:15
*** Ravikumar_hp has quit IRC00:16
*** dragondm has quit IRC00:17
*** ohnoimdead has quit IRC00:18
*** ohnoimdead_ is now known as ohnoimdead00:18
*** HowardRoark has joined #openstack-meeting00:24
*** nati2 has quit IRC00:33
*** nati2 has joined #openstack-meeting00:33
*** tsuzuki_ has joined #openstack-meeting00:58
*** ohnoimdead has quit IRC01:07
*** ohnoimdead has joined #openstack-meeting01:07
*** nati2 has quit IRC01:25
*** nati2_ has joined #openstack-meeting01:25
*** nati2_ has quit IRC01:32
*** bencherian has quit IRC01:39
*** bencherian has joined #openstack-meeting01:42
*** bencherian has quit IRC02:01
*** jdg_ has joined #openstack-meeting02:11
*** ohnoimdead has quit IRC02:18
*** jdg_ has quit IRC02:58
*** jdg_ has joined #openstack-meeting03:41
*** HowardRoark has quit IRC03:56
*** corrigac_ has joined #openstack-meeting04:03
*** _cerberu` has joined #openstack-meeting04:04
*** vish1 has joined #openstack-meeting04:05
*** annegentle has quit IRC04:05
*** vishy has quit IRC04:05
*** _cerberus_ has quit IRC04:05
*** ogelbukh has quit IRC04:05
*** annegentle has joined #openstack-meeting04:05
*** jaypipes has quit IRC04:05
*** corrigac has quit IRC04:05
*** tr3buchet has quit IRC04:05
*** tr3buchet has joined #openstack-meeting04:05
*** deshantm has quit IRC04:05
*** ogelbukh has joined #openstack-meeting04:06
*** deshantm has joined #openstack-meeting04:06
*** jaypipes has joined #openstack-meeting04:18
*** jdg_ has quit IRC04:20
*** reed has quit IRC04:45
*** bengrue has joined #openstack-meeting04:57
*** royh has quit IRC05:01
*** royh has joined #openstack-meeting05:02
*** _cerberu` is now known as _cerberus_05:28
*** bengrue has quit IRC05:54
*** novas0x2a|laptop has quit IRC06:02
*** bencherian has joined #openstack-meeting07:22
*** bencherian has quit IRC07:44
*** tsuzuki_ has quit IRC11:10
*** markvoelker has joined #openstack-meeting11:45
*** cmagina has quit IRC12:54
*** cmagina has joined #openstack-meeting12:54
*** scottsanchez has quit IRC13:22
*** mdomsch has joined #openstack-meeting13:46
*** dprince has joined #openstack-meeting14:06
*** scottsanchez has joined #openstack-meeting14:19
*** deshantm has quit IRC14:21
*** deshantm has joined #openstack-meeting14:33
*** dolphm has joined #openstack-meeting14:45
*** reed__ has joined #openstack-meeting14:56
*** rnirmal has joined #openstack-meeting14:56
*** dragondm has joined #openstack-meeting15:02
*** vladimir3p has joined #openstack-meeting15:12
*** bencherian has joined #openstack-meeting15:15
*** HowardRoark has joined #openstack-meeting15:23
*** michaelkre has joined #openstack-meeting15:23
*** reed__ is now known as reed15:27
*** cmagina has quit IRC15:33
*** cmagina has joined #openstack-meeting15:40
*** dolphm has quit IRC15:48
*** dolphm has joined #openstack-meeting15:50
*** nati2 has joined #openstack-meeting15:51
*** dolphm has quit IRC15:59
*** dolphm has joined #openstack-meeting15:59
*** dendrobates is now known as dendro-afk16:14
*** adjohn has joined #openstack-meeting16:16
*** dolphm has joined #openstack-meeting16:25
*** dendro-afk is now known as dendrobates16:29
*** clayg has joined #openstack-meeting16:43
*** DuncanT has joined #openstack-meeting16:46
*** dolphm has quit IRC16:47
*** darraghb has joined #openstack-meeting16:48
*** timr1 has joined #openstack-meeting16:49
*** dolphm has joined #openstack-meeting16:54
*** mdomsch has quit IRC16:59
*** vish1 is now known as vishy17:00
vladimir3pok, so probably waiting for Renuka to start the meeting17:01
*** bencherian has quit IRC17:01
claygvladimir3p: blueprint looks great nice work17:01
vladimir3pclayg: thanks17:02
*** renuka has joined #openstack-meeting17:02
renukaShould we start the nova-volume meeting?17:03
vladimir3phey, yes. go ahead17:03
openstackMeeting started Thu Oct 20 17:03:29 2011 UTC.  The chair is renuka. Information about MeetBot at
openstackUseful Commands: #action #agreed #help #info #idea #link #topic.17:03
renukavladimir3p: would you like to begin with the scheduler discussion17:04
renuka#topic volume scheduler17:04
*** openstack changes topic to "volume scheduler"17:04
vladimir3pno prob. So, there is a blueprint spec you've probably seen17:04
vladimir3pwe have couple options on implementing it17:04
vladimir3p1. to create a single "generic" scheduler that will perform some exemplary scheduling17:05
*** dolphm has quit IRC17:05
vladimir3p2. or do almost nothing there and just find an appropriate sub-scheduler17:05
*** hggdh has quit IRC17:05
vladimir3panother change that we may need - providing some extra-data back to Default Scheduler (for sending it to manager host)17:06
vladimir3plet's first of all discuss how we would like the scheduler to look:17:06
vladimir3peither we will put there any basic logic17:06
*** dolphm has joined #openstack-meeting17:06
vladimir3por just rely on every vendor to supply their own17:06
vladimir3pI prefer the 1st option17:07
vladimir3pany ideas?17:07
rnirmalthe only problem I see with the sub scheduler is, it's going to become too much too soon17:07
claygI think a generic type scheduler could get you pretty far17:07
rnirmalso starting with a generic scheduler would be nice17:07
DuncanTAssuming it is generic enough, I can't see a problem with a good generic scheduler17:07
claygI'm not sure how zones and host aggregates play in at the top level scheduler17:07
timr1vladimir3p: am I right in understanding that the generic scheduler will be able to schedule on opaque keys17:08
renukaI think the vendors can plug in their logic in report capabilities, correct? So why not let the scheduler decide which backend is appropriate, and based on a mapping which shows which volume workers can reach which backend, have the scheduler select a node17:08
vladimir3ptimr1: yes, at the beginning the generic scheduler could just match volume_type keys with reported keys17:09
vladimir3pI was thinking that it must have some logic for quantities as well17:09
DuncanTSince the HP backend can support many volume types in a single instance, it is also important that required capabilities get passed through to the create. Is that covered in the design already?17:09
vladimir3p (instead of going to the DB for all available vols on that host)17:09
vladimir3pDuncanT: can you pls clarify?17:09
vladimir3pDuncanT: I was thinking that this is the essential requirement for it17:10
vladimir3prenuka: on volume-driver level, yes, they will report whatever keys they want17:10
renukaDuncanT: isn't the volume type already passed in the create call (with the extensions added)17:10
*** hggdh has joined #openstack-meeting17:11
DuncanTvladimir3p: A single HP backend could support say (spindle speed == 4800, spindle speed == 7200, spindle speed == 15000) so if the user asked for spindle speed > 7200 we'd need that detail passed through to create17:11
vladimir3pDuncanT: yes, volume type is used during volume creation and might be retrieved by the scheduler17:11
renukavladimir3p: correct, so those keys should be sufficient at this point to plug in vendor logic, correct?17:11
timr1that sound ok so17:12
renukaDuncanT: I think the user need only specify the volume type. The admin can decide if a certain type means spindle speed = x17:12
vladimir3prenuka: yes, but without special scheduler driver they will be useless (generic will only match volume type vs this reported capabilities)17:12
vladimir3pDuncanT, renuka: yes, I was thinking that admin will create separate volume types. Every node will report what exactly it sees and scheduler will perform a matching between them17:13
renukavladimir3p: what is the problem with that?17:13
renukavladimir3p: every node should report for every backend (array) that it can reach17:13
renukathe duplicates can be filtered later or appropriately at some point17:14
DuncanTvladimir3p: That makes sense, thanks17:14
vladimir3prenuka: I mean drivers will report whatever data they think is appropriate (some opaque key/value pairs), but the generic scheduler will only look at the ones it recognizes (and those are the ones supplied in volume types)17:14
vladimir3pif vendor would like to perform some special logic based on this opaque data - special schedulers will be required17:15
timr1I agree that we want to deal with quantities as well17:15
rnirmalcan we do something like match volume_type: if type supports sub_criteria match those too (gleaned from the opaque key/value pairs)17:15
DuncanTvladimir3p: It should be possible to do ==, !=, <, > generically on any key shouldn't it?17:16
renukarnirmal: I agree17:16
timr1don't think we need special schedulers to support additional vendor logic in backend17:16
vladimir3pDuncanT: == and != yes, not sure about >, <17:16
renukayea, we should be able to add at least some basic logic via report capabilities17:17
vladimir3prnirmal: it would be great to have sub_criteria, but how will the scheduler recognize that a particular key/value pair is not a single criterion, but a complex one17:17
renuka#idea what if we add a "specify rules" at driver level and keep "report capabilities" at node/backend level17:17
rnirmalvladimir3p: it wouldn't know that, without some rules17:18
vladimir3pfolks, how about starting with something basic and improving it over time (we could discuss these improvements right now, but let's agree on the basics)17:18
vladimir3pso, meanwhile we agreed that:17:18
vladimir3p#agreed drivers will report capabilities in key/value pairs17:19
renukavladimir3p: an opaque rules should be simple enough to add, while keeping it generic17:19
renukacorrection, nodes will report capabilities in key/value pairs per backend17:19
vladimir3p#agreed basic scheduler will have a logic to match volume_type's key/value pairs vs reported ones17:19
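[Editor's sketch of the matching just agreed on: the generic scheduler compares a volume_type's key/value pairs against the capabilities each node reported. Function and field names below are illustrative assumptions, not the actual nova code.]

```python
def host_matches_type(reported_capabilities, type_extra_specs):
    """True if every key/value pair required by the volume type is
    present, with an equal value, in the node's reported capabilities."""
    for key, required in type_extra_specs.items():
        if reported_capabilities.get(key) != required:
            return False
    return True

def filter_hosts(hosts, type_extra_specs):
    # hosts: {hostname: opaque key/value capabilities the node reported}
    return [h for h, caps in hosts.items()
            if host_matches_type(caps, type_extra_specs)]

hosts = {
    "vol-node-1": {"drive_type": "SATA", "raid": "RAID5"},
    "vol-node-2": {"drive_type": "SAS", "raid": "RAID10"},
}
print(filter_hosts(hosts, {"drive_type": "SAS"}))  # ['vol-node-2']
```

[Per DuncanT's question above, the equality check could later be generalized to ==, !=, <, > comparators without changing this overall shape.]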
vladimir3prenuka: driver will decide how many types it would like to report17:20
claygI think the abstract_scheduler for compute would be a good base for the generic type/quantity scheduler17:20
vladimir3peach type has key/value pairs17:20
vladimir3pclayg: yes, I was thinking exactly the same17:20
vladimir3pclayg: there is already some logic for matching this stuff17:21
claygleast_cost allows you to register cost functions17:21
claygoh right...17:21
claygin the vsa scheduler17:21
vladimir3pwe will need to rework vsa scheduler, but it has some basic logic for that17:22
vladimir3phow about quantities: I suppose this is important. do we all agree to add reserved keywords?17:22
renukaclayg: so do you think it will be straightforward enough to go with the register cost functions at this point?17:23
DuncanTvladimir3p: I think the set needs to be small, since novel backends may not have many of the same concepts as other designs17:24
timr1vladimir3p: yes - IOPS, bandwidths, capacity etc?17:24
claygrenuka: I'm not sure... something like that may work, but the cost idea is more about selecting a best match from a group of basically identically capable hosts - but with different loads17:24
vladimir3pthe cost in our case might be total capacity or number of drives, etc...17:25
*** darraghb has quit IRC17:25
renukaI think there needs to be a way to call some method for a driver specific function. It will keep things generic and simplify a lot of stuff17:25
vladimir3pthat's why I suppose reporting something that scheduler will look at is important17:26
claygperhaps better would be to just subclass and over-ride "filter_capable_nodes" to look at type... and also some vendor specific opaque k/v pairs.17:26
claygthe quantities stuff would more closely match the costs concepts17:26
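[Editor's sketch of the cost-function idea clayg refers to: once capability matching has narrowed the candidates, a least_cost-style weigher picks among otherwise-equal hosts using quantities such as free capacity. All names here are assumptions, not the nova abstract_scheduler API.]

```python
def free_capacity_cost(host_state):
    # Lower cost is better, so invert free space: emptier hosts win.
    return -host_state["free_capacity_gb"]

def pick_host(candidates, cost_fns):
    """Return the candidate host with the lowest summed cost."""
    def total_cost(item):
        _, state = item
        return sum(fn(state) for fn in cost_fns)
    return min(candidates.items(), key=total_cost)[0]

candidates = {
    "vol-node-1": {"free_capacity_gb": 200},
    "vol-node-2": {"free_capacity_gb": 800},
}
print(pick_host(candidates, [free_capacity_cost]))  # vol-node-2
```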
vladimir3pclayg: yes, subclass will work for single-vendor scenario17:26
*** novas0x2a|laptop has joined #openstack-meeting17:26
claygvladimir3p: I mean you could call the super method and ignore stuff that's pass on "other-vendor" stuff17:27
renukavladimir3p: how about letting the scheduler do the basic scheduling as you said, and call a driver method passing the backends it has filtered?17:28
claygmeh, I think we start with types and quantities - knowing that we'll have to add something more later17:28
rnirmalregarding filtering, is anyone considering locating volume near vm, for scenarios with top-of-rack storage17:28
claygs/we/get out of the way and let Vlad17:28
renukarnirmal: yes17:28
rnirmalrenuka: ok17:29
vladimir3prenuka: not sure if generic scheduler should call driver... probably as clayg mentioned every provider could declare a subclass that will do whatever required17:29
timr1timr1: we are including it17:29
vladimir3pin this case we will not need to implement volume_type-2-driver translation tables17:29
vladimir3phowever, there is a disadvantage with this - only a single vendor per scheduler ...17:30
timr1we (HP) are agnostic at this point as our driver is generic and we re-schedule in the back end17:30
vladimir3ptimr1: do you really want to reschedule at the volume driver level?17:30
vladimir3ptimr1: will it be easier if the scheduler has all the logic17:31
renukavladimir3p: you mean single driver... SM for example supports multiple vendors backend types17:31
timr1vladimir3p: we already do it this way - legacy17:31
vladimir3ptimr1: and just use volume node as a gateway17:31
timr1that is correct - we just use it as a gateway17:31
vladimir3prenuka: yes, single driver17:31
vladimir3pso, can we claim that we agreed that there will be no volume_type-2-driver translation meanwhile?17:32
renukavladimir3p: i still think matching simply on type is a bit too little. If a driver does report capabilities per backend, there should be a way to distinguish between two backends17:33
renukafor example, 2 netapp servers do not make 2 different volume types. They map to the same one, but the scheduler should be able to choose one17:33
claygtwo backends of the same type?17:33
vladimir3phow about to go over particular renuka's scenario and see if we could do it with a basic scheduler17:34
renukasimilarly, as rnirmal pointed out, top of rack type of restrictions cannot be applied17:34
claygrenuka: how do you suggest the scheduler choose between the two if they both have equal "quantities"17:34
renukathat is where i think there should be a volume driver specific call17:35
renukaor a similar way17:35
claygi don't really know how to support top of rack restrictions... unless you know where the volume is going to be attached when you create it17:35
*** adjohn has quit IRC17:36
renukaclayg: that will have to be filtered based on some reachability which is reported17:36
rnirmalclayg: yeah, it's a little more complicated case, where it would require combining both vm and volume placement, so it might be out of scope for the basic scheduler17:36
renukaclayg: we can look at user/project info and make a decision. I expect that is how it is done anyway17:37
vladimir3prenuka: I suppose the generic scheduler could not really determine which one should be used, but you could do it in the sub-class. So you will over-ride some methods and pick based on opaque data17:37
renukaso by deciding to subclass, we have essentially made it so that at least for a while, we can have only 1 volume driver in the zone17:38
vladimir3pin this case the scheduler will call the driver "indirectly", because the driver will over-ride it17:38
*** bengrue has joined #openstack-meeting17:38
renukacan everyone here live with that for now?17:38
claygisn't there like an "agree" or "vote" option?17:38
DuncanTOne issue I've just thought off with volume-types:17:38
claygI'm totally +1'ing the simple thing first17:39
vladimir3p#agreed simple things first :-) there will be no volume_type-2-driver translation meanwhile (sub-class will override whatever necessary)17:39
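[Editor's sketch of the subclassing approach just agreed on, loosely after clayg's filter_capable_nodes suggestion: the generic scheduler does type matching, and a vendor subclass overrides one hook to apply its own rules to the opaque key/value pairs. Class, method, and key names are hypothetical.]

```python
class GenericVolumeScheduler:
    def filter_capable_hosts(self, hosts, type_extra_specs):
        # Generic behavior: keep hosts whose reported capabilities
        # match every key/value pair the volume type requires.
        return {h: caps for h, caps in hosts.items()
                if all(caps.get(k) == v for k, v in type_extra_specs.items())}

class VendorScheduler(GenericVolumeScheduler):
    def filter_capable_hosts(self, hosts, type_extra_specs):
        # Start from the generic match, then apply vendor-specific
        # logic to opaque keys the generic scheduler ignores.
        hosts = super().filter_capable_hosts(hosts, type_extra_specs)
        return {h: caps for h, caps in hosts.items()
                if caps.get("vendor:tier") != "archive"}

hosts = {
    "a": {"drive_type": "SAS", "vendor:tier": "fast"},
    "b": {"drive_type": "SAS", "vendor:tier": "archive"},
}
print(list(VendorScheduler().filter_capable_hosts(hosts, {"drive_type": "SAS"})))  # ['a']
```

[The single-vendor-per-scheduler limitation vladimir3p notes follows directly from this shape: only one subclass can be the active scheduler.]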
*** jdg_ has joined #openstack-meeting17:40
vladimir3pso, renuka, back to your scenario. I suppose it will be possible to register volume types like: SATA volumes, SAS volumes, etc... probably with some RPM, QoS, etc extra data17:41
DuncanTIs it possible to specify volume affinity or anti-affinity with volume types?17:41
vladimir3pon volume driver side, each driver will go to all its underlying arrays and will collect things like what type of storage each one of them support.17:41
DuncanTI don't think there is a way to express that with volume types?17:42
vladimir3pI think after that it could rearrange it in the form like... [{type: "SATA", arrays: [{array1, accesspath1}, {array2, accesspath2}, ...]}]17:42
renukavladimir3p: as long as we can subclass appropriately, the SM case should be fine, since we use one driver (multiple instances) in a zone17:42
vladimir3pyeah, the sub-class on scheduler level will be able to perform all the filtering and will be able to recognize such data17:43
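[Editor's sketch of the report shape vladimir3p describes, written out as a Python literal: each node groups the arrays it can reach under the volume type they support, plus the access path. Array names and paths are made up for illustration.]

```python
report = [
    {"type": "SATA",
     "arrays": [{"name": "array1", "access_path": "path1"},
                {"name": "array2", "access_path": "path2"}]},
    {"type": "SAS",
     "arrays": [{"name": "array3", "access_path": "path3"}]},
]

def arrays_for_type(report, volume_type):
    """Return the arrays a node offers for the requested volume type."""
    for entry in report:
        if entry["type"] == volume_type:
            return entry["arrays"]
    return []

print([a["name"] for a in arrays_for_type(report, "SATA")])  # ['array1', 'array2']
```

[A scheduler subclass that recognizes this structure could then return the chosen host together with the array/access path, as discussed below around returning more than just a host name.]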
claygDuncanT: no I don't think so, not yet17:43
timr1DuncanT: I don't think volume types can do this - but it is something customers want17:43
vladimir3pthe only thing - it will need to report back what exactly should be done17:43
renukaDuncanT: yes, that is why I am stressing on reachability17:43
DuncanTrenuka: I see, yes, it is a reachability question too -we were looking at it for performance reasons but the API is the same17:45
vladimir3pactually, on volume driver level, we could at least understand the preferred path to the array17:45
renukaDuncanT: yes, we could give it a better name ;)17:45
vladimir3prenuka: do you think that what I described above will work for you?17:46
DuncanTrenuka: Generically it is affinity - we don't care about reachability from a scheduler point of view but obviously other technologies might, but we need a way for it to be passed to the driver...17:47
renukavladimir3p: yes17:47
vladimir3prenuka: great17:47
vladimir3pDuncanT: yes, today scheduler returns back only host name17:47
vladimir3pit will be required to return probably a tuple of host and some capabilities that were used for this decision17:48
renukavladimir3p: could we ensure that when a request is passed to the host, it has enough information about which backend array to use17:48
DuncanTvladimir3p: I'm more worried about the user facing API - we need a user to be able to specify both a volume_type and a volume (or list of volumes) to have affinity or anti-affinity to17:48
renuka#topic admin APIs17:49
*** openstack changes topic to "admin APIs"17:49
vladimir3pDuncanT: affinity of user to type?17:49
renukaDuncanT: that was in fact the next thing on the agenda, what are the admin APIs that everyone requires17:49
DuncanTvladimir3p: No, affinity of the new volume to an existing volume17:50
DuncanTvladimir3p: On a per-create basis17:50
vladimir3pDuncanT: yes, we have the same requirement17:50
timr1i don't think DuncanT is talking about an admin API, it is the ability of a user to say create my volume near (or far from) volume-abs17:50
renukaDuncanT: could that be restated as ensuring user/project data is kept together17:50
timr1renuka: also want anti-affinity for availability17:51
renukatimr1: isn't that too much power in the hands of the user. Will they even know such details?17:51
*** adjohn has joined #openstack-meeting17:51
timr1renuka: we have users who want to do it, I don't see it as power. They say make volume X anywhere. Make volume Y in a different place from volume X17:52
renukatimr1: yes, in general some rules about placing user/project data. Am I correct in assuming that anti-affinity is used while creating redundant volumes?17:52
rnirmalrenuka: I think it's something similar to the hostId, which is an opaque id to the user, on a per user/project basis and maps on the backend to a host location17:52
vladimir3pDuncanT: I guess affinity could be solved at the sub-scheduler level. Actually a specialized scheduler could first retrieve what was already created for the user/project and based on that perform filtering17:52
renukavladimir3p: yes but we still have to have a way of saying that17:53
rnirmalvolumes may need something like the hostId for vms, to be able to tackle affinity first17:53
timr1yes it is for users who want to create their own tier of availability. agree it could be solved on sub-sched - but need the user API expanded17:53
claygvladimir3p: that would probably work, when they create the volume they can send in meta info, a custom scheduler could look for affinity and anti-affinity keys17:53
DuncanTvladimir3p: What we'd like to make sure of if possible is that the user can specify which specific volume that already exists they want to be affine to (or anti-) - e.g. if they are creating several mirrored pairs, they might want to say that each pair is 'far' from the others to enhance the survivability of that data17:54
*** bencherian has joined #openstack-meeting17:54
DuncanTclayg: That should work fine, yes17:54
rnirmalDuncanT: is this for inclusion in the basic scheduler ?17:55
vladimir3pDuncanT: I see ... in our case we create volumes and build RAID on top of them. So, it is quite important for us to be able to schedule volumes on different nodes and to know about that17:55
*** renuka_ has joined #openstack-meeting17:55
*** jdg_ has quit IRC17:56
*** mattray has joined #openstack-meeting17:56
DuncanTrnirmal: The API field, yes. I'm happy for the scheduler to ignore it, just pass it to the driver17:56
renuka_vladimir3p: I was disconnected, has that disrupted anything?17:56
vladimir3prenuka: not really, still discussing options for affinity17:56
DuncanTTim and I can write up a more detailed explanation off-line and email it round if that helps with progress?17:57
claygthe scheduler will send the volume_ref, any meta data hanging off of that will be available to the driver17:57
renuka_#action DuncanT will write up detailed explanation for affinity17:57
vladimir3pDuncanT: thanks it will be very helpful. seems like we have some common areas there17:58
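[Editor's sketch of the affinity approach clayg and DuncanT converge on: the create call carries opaque metadata keys, and a custom scheduler (or the driver) uses them to place the new volume near or far from named existing volumes. The key names and host-lookup table are assumptions.]

```python
def filter_by_affinity(hosts, metadata, volume_locations):
    """Filter candidate hosts using affinity/anti-affinity metadata.

    volume_locations: {volume_id: host} for already-created volumes.
    """
    near = metadata.get("affinity")       # e.g. "vol-123"
    far = metadata.get("anti-affinity")   # e.g. "vol-456"
    result = list(hosts)
    if near in volume_locations:
        result = [h for h in result if h == volume_locations[near]]
    if far in volume_locations:
        result = [h for h in result if h != volume_locations[far]]
    return result

locations = {"vol-123": "node-a", "vol-456": "node-b"}
print(filter_by_affinity(["node-a", "node-b", "node-c"],
                         {"anti-affinity": "vol-456"}, locations))
# ['node-a', 'node-c']
```

[Since these keys are opaque, the basic scheduler can ignore them and simply pass them through, matching DuncanT's "just pass it to the driver" position below.]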
*** renuka has quit IRC17:58
renuka_vladimir3p: what is the next step? We have 2 minutes17:58
claygOH: someone on ozone is working on adding os-volumes support to novaclient17:58
vladimir3pclayg: seems like we will need to change how message is sent from scheduler to volumemanager17:58
rnirmalI suppose admin api topic is for the next meeting17:59
vladimir3prenuka: I guess we will need to find a volunteer17:59
renuka_rnirmal: yes17:59
claygvladimir3p: I don't see that, manager can get whatever it needs from volume_id17:59
DuncanTOT: I've put in a blueprint for snap/backup API for some discussion18:00
claygyou're so mysterious...18:00
vladimir3pclayg: the scheduler will need to pass the volume not only to the host, but "for a particular backend array"18:00
renuka_any volunteers for the simple scheduler work in the room? we can have people taking this up in the mailing list, since we are out of time18:00
clayghrmmm.... if the manager/driver is responsible for multiple backends that support the same type... I don't see why it would let the scheduler make that decision18:01
renuka_alright i am going to end the meeting at this point18:01
renuka_is that ok?18:01
vladimir3pclayg: do you have some free time to continue ?18:01
claygit's going to aggregate that info in capabilities and send it to the scheduler, but by the time create volume comes down - it'll know better than the scheduler node which array is the best place for the volume18:01
vladimir3prenuka: fine with me18:01
renuka_oh wow, that didn't work because I got disconnected before18:02
vladimir3pclayg: it is not really relevant for my company, but I suppose folks managing multiple arrays would prefer to have the scheduling in one place18:03
*** renuka_ is now known as renuka18:03
*** openstack changes topic to "Openstack Meetings: | Minutes:"18:03
openstackMeeting ended Thu Oct 20 18:03:54 2011 UTC.  Information about MeetBot at . (v 0.1.4)18:03
openstackMinutes (text):
claygthanks renuka!18:04
DuncanTYup, thanks Renuka18:04
vladimir3pclayg: do you have couple of min to continue?18:04
vladimir3pthanks Renuka & all18:04
claygvladimir3p: I'm pming you ;)18:05
renukasure, I do have time to continue, do we know if there is any other meeting on this channel lined up?18:05
claygI thought there was...18:06
clayg#openstack-volumes is open18:06
*** timr1 has quit IRC18:06
renukaclayg: i have joined that18:07
*** renuka has quit IRC18:08
claygrenuka volumeS?  we don't see you?18:08
vladimir3pseems like renuka quit from this one18:08
*** df1 has joined #openstack-meeting18:16
*** adjohn has quit IRC18:18
*** zul has quit IRC18:22
*** zul has joined #openstack-meeting18:26
*** HowardRoark has quit IRC18:31
*** dolphm has quit IRC18:33
*** HowardRoark has joined #openstack-meeting18:36
*** rohitk has joined #openstack-meeting18:36
*** clayg has left #openstack-meeting18:47
*** timr has joined #openstack-meeting18:53
*** dendrobates is now known as dendro-afk18:54
*** dolphm has joined #openstack-meeting18:56
*** dendro-afk is now known as dendrobates18:58
*** vladimir3p has quit IRC19:08
*** dolphm has quit IRC19:23
*** dolphm has joined #openstack-meeting19:25
*** dendrobates is now known as dendro-afk19:26
*** adjohn has joined #openstack-meeting19:39
*** df1 has left #openstack-meeting19:40
*** HowardRoark has quit IRC19:45
*** timr has quit IRC19:52
*** HowardRoark has joined #openstack-meeting20:04
*** sandywalsh has quit IRC20:13
*** adjohn has quit IRC20:15
*** bencherian has quit IRC20:16
*** jdag has quit IRC20:19
*** sandywalsh has joined #openstack-meeting20:19
*** dprince has quit IRC20:23
*** dendro-afk is now known as dendrobates20:23
*** dolphm has quit IRC20:25
*** dolphm has joined #openstack-meeting20:28
*** donaldngo_hp has joined #openstack-meeting20:48
*** rnirmal has quit IRC20:58
*** bencherian has joined #openstack-meeting21:01
*** nati2 has quit IRC21:08
*** dolphm has quit IRC21:09
*** dendrobates is now known as dendro-afk21:11
*** bencherian has quit IRC21:12
*** markvoelker has quit IRC21:14
*** bencherian has joined #openstack-meeting21:15
*** nati2 has joined #openstack-meeting21:30
*** reed has quit IRC21:37
*** jdg_ has joined #openstack-meeting21:37
*** reed has joined #openstack-meeting21:38
*** mattray has quit IRC21:46
*** mattray has joined #openstack-meeting21:50
*** bencherian has quit IRC21:53
*** bencherian has joined #openstack-meeting21:56
*** jdg_ has quit IRC22:04
*** jdag has joined #openstack-meeting22:06
*** mattray has quit IRC22:09
*** dolphm has joined #openstack-meeting22:24
*** dolphm has quit IRC22:26
*** bencherian has quit IRC22:28
*** nati2 has quit IRC22:31
*** dragondm has quit IRC22:32
*** HowardRoark has quit IRC22:32
*** rohitk has quit IRC22:36
*** nati2 has joined #openstack-meeting22:44
*** mattray has joined #openstack-meeting23:07

Generated by 2.14.0 by Marius Gedminas - find it at!