Thursday, 2014-10-02

*** reed has quit IRC00:30
*** reed has joined #openstack-zaqar01:03
*** flwang has joined #openstack-zaqar01:05
*** alcabrera|afk is now known as alcabrera01:17
*** vkmc has joined #openstack-zaqar01:31
*** jeffrey4l has joined #openstack-zaqar01:41
*** reed has quit IRC01:55
*** echevemaster has quit IRC02:17
*** vkmc has quit IRC02:59
*** openstackgerrit has quit IRC03:08
*** openstackgerrit has joined #openstack-zaqar03:10
*** openstackgerrit has quit IRC03:18
*** openstackgerrit has joined #openstack-zaqar03:18
*** alcabrera is now known as alcabrera|afk04:03
*** sgotliv has joined #openstack-zaqar04:27
*** flaper87|afk is now known as flaper8706:14
*** sgotliv has quit IRC06:33
openstackgerritFlavio Percoco proposed a change to openstack/zaqar: Add a pool_group to pools in v1.1  https://review.openstack.org/12407906:37
openstackgerritFlavio Percoco proposed a change to openstack/zaqar: Add first reliability enforcement  https://review.openstack.org/12375006:37
*** sgotliv has joined #openstack-zaqar07:59
*** flaper87 is now known as flaper87|afk08:25
*** flaper87|afk is now known as flaper8708:59
*** jeffrey4l has quit IRC09:39
*** jeffrey4l has joined #openstack-zaqar09:51
*** prashanthr_ has joined #openstack-zaqar11:13
prashanthr_Hi flaper87: Sorry for the delay11:41
prashanthr_Will add sentinel support very soon11:41
flaper87prashanthr_: hey hey, no need to be sorry, really. I'm just doing PTL duties and unfortunately that means I gotta push some things back11:42
flaper87but it's not your fault. Thanks for working on that11:42
prashanthr_I have not been active at all. From today I will surely be more active and also write code faster :)11:42
openstackgerritFlavio Percoco proposed a change to openstack/zaqar: Add first reliability enforcement  https://review.openstack.org/12375011:47
flaper87prashanthr_: did you get everything sorted for the summit?11:53
flaper87see you in 1 month from today?11:53
flaper87OMFG, I just realized the summit is 1 month away11:53
*** sgotliv has quit IRC12:02
prashanthr_flaper87: Yes everything has been sorted :)12:16
prashanthr_yeah just one month left12:16
prashanthr_my visa processing is also underway12:16
flaper87great12:16
flaper87looking forward to meeting you there12:16
prashanthr_Same here :)12:17
*** sgotliv has joined #openstack-zaqar12:17
*** vkmc has joined #openstack-zaqar12:18
*** vkmc has quit IRC12:18
*** vkmc has joined #openstack-zaqar12:18
*** prashanthr_ has left #openstack-zaqar12:23
flaper87vkmc: helllloooooooo12:23
vkmcflaper87, heeeeeeeey12:23
flaper87vkmc: how are you doing?12:35
flaper87Can I bother you with 2 review requests?12:35
flaper87I'd like to get those patches merged before my 1x1 with ttx today12:35
flaper87vkmc: https://review.openstack.org/#/c/123230/7 and https://review.openstack.org/#/c/124079/12:36
vkmcflaper87, not a problem, I should have reviewed those yesterday12:36
flaper87Kurt already +2'd the latter, I just updated the docs so feel free to ninja-approve and say that the last PS just updated docs12:36
vkmcflaper87, I'm doing ok! a bit tired, dunno what is going on with me this week12:37
vkmcflaper87, and you?12:37
* flaper87 gives vkmc some gummy bears and coffee \_/?12:37
vkmcnom nom nom12:37
*** fifieldt has joined #openstack-zaqar12:37
* flaper87 hates pep812:37
openstackgerritFlavio Percoco proposed a change to openstack/zaqar: Add first reliability enforcement  https://review.openstack.org/12375012:38
*** vkmc has quit IRC12:51
*** vkmc has joined #openstack-zaqar12:52
*** sriram has joined #openstack-zaqar13:04
*** sgotliv has quit IRC13:05
*** mpanetta has joined #openstack-zaqar13:11
*** jchai has joined #openstack-zaqar13:14
vkmcflaper87, are we cutting rc1 today?13:16
flaper87vkmc: I'm hoping so, we need to get those 2 bugs fixed13:16
flaper87meaning, land those patches I mentioned above13:16
*** sgotliv has joined #openstack-zaqar13:18
vkmc+113:20
flaper87vkmc: lemme know when you're done with the reviews13:21
vkmcI'm a bit worried about the master-slave execution13:23
vkmcwhat happens if it fails on the slave?13:23
vkmcoh haha https://review.openstack.org/#/c/123230/3/zaqar/queues/storage/redis/claims.py13:23
vkmcforgot the link13:23
flaper87well, in theory, it shouldn't fail because things are executed in the same order as in the master node13:24
flaper87however, the slave can crash13:24
flaper87in that case, when it comes up, it'll start from where it left off13:24
flaper87in a master-slave scenario, this shouldn't happen13:25
flaper87BUT (wait for it):13:25
vkmcit could happen, if there is some issue we cannot control13:25
flaper87"In theory, theory and practice are the same. In practice, they are not." (Albert Einstein)13:26
*** mpanetta has quit IRC13:26
vkmchaha wise words13:26
flaper87;)13:26
*** mpanetta has joined #openstack-zaqar13:26
vkmcwell I think that is the best effort approach13:26
flaper87I don't think best-effort would happen here:13:27
flaper87think of this13:27
flaper87Master gets these tasks:13:27
flaper871. write A13:27
flaper872. run script X13:27
flaper873. delete C13:27
* vkmc nods13:27
flaper87then it tells the slave: Hey, do these things in the exact same order:13:27
flaper871.13:27
flaper872.13:27
flaper873.13:28
flaper87Assuming they both have a consistent starting point - which is fair to assume/expect here - nothing bad should happen13:28
flaper87unless it happens *outside* redis13:28
flaper87which, as you said, we cannot control13:28
vkmcyeah my concern is13:28
vkmcwhat happens if in 2. the slave dies for some reason13:28
vkmcbut you told me that it continues from the last point, I guess, 1.13:29
flaper87yeah, all those operations are atomic13:30
flaper87they either succeed or fail13:30
flaper87if they fail they don't alter the state of the db13:30
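The all-or-nothing behavior flaper87 describes can be sketched in a few lines. This is a pure-Python analogy only, not Zaqar code: the actual Redis driver gets this property from running its claim transaction as a single Lua script (Redis executes a script atomically, on the master and again on the slave), while the sketch below fakes it with a draft copy that is committed only if every operation succeeds.

```python
# Illustration of atomic batch semantics: a batch of operations either
# fully applies or leaves the store untouched, mimicking how a failed
# Redis Lua script does not alter database state. Names are invented.

def apply_atomically(store, ops):
    """Apply (key, value) ops to `store`; a value of None means delete."""
    draft = dict(store)              # work on a copy
    for key, value in ops:
        if value is None:
            del draft[key]           # raises KeyError if key is missing
        else:
            draft[key] = value
    store.clear()
    store.update(draft)              # commit only after every op succeeded

store = {"A": 1, "C": 3}

# The "1. write A, 2. write B, 3. delete C" sequence from the chat:
apply_atomically(store, [("A", 10), ("B", 2), ("C", None)])
print(store)                         # {'A': 10, 'B': 2}

# A batch that fails partway through leaves the store unchanged:
try:
    apply_atomically(store, [("B", 20), ("missing", None)])
except KeyError:
    pass
print(store)                         # still {'A': 10, 'B': 2}
```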
vkmcthat is what I wanted to hear13:30
vkmc:D13:30
flaper87why didn't you ask that?13:30
flaper87:P13:30
vkmcbecause I13:30
* flaper87 is messing with vkmc's head13:30
vkmcI'm slow sometimes13:30
vkmcahahaha13:30
vkmcread sometimes as most of the times13:31
mpanettaHey guys :)13:31
mpanettaI have a quick question.13:31
vkmchey mpanetta :)13:31
mpanettaWhat storage backends does queues support these days?13:31
vkmcRedis and MongoDB13:31
mpanettahow ya doin vkmc?13:32
mpanettaOk sweet13:32
vkmcmpanetta, good thx, and you?13:32
mpanettaWas rabbitmq on the table or no?13:32
mpanettaBusy!  But good :)13:32
flaper87mpanetta: it was more like *under* the table13:32
vkmcI hear ya >.>13:32
flaper87:P13:32
mpanettaflaper87: haha13:32
vkmcyeah unfortunately that won't happen13:33
*** malini1 has joined #openstack-zaqar13:33
mpanettaUnfortunately?13:33
vkmcyeah, it was a cool idea13:33
mpanettaWhy can't it happen?  Not technically feasible?13:33
vkmcthe good sir here present wrote a blog post about it http://blog.flaper87.com/post/marconi-amqp-see-you-later/13:33
mpanettaOr just nobody to do it.13:34
mpanettaAh!  Thanks!13:34
vkmcnot technically feasible13:34
flaper87https://twitter.com/flaper87/status/51766883079005798513:34
mpanettalol13:34
vkmclol13:34
flaper87this channel is the best13:35
mpanettaI agree :)13:35
flaper87we should tweet more often the great cool things we say here13:35
flaper87and cool != smart13:35
vkmcit is13:35
vkmcwe should set up a repository with some of the quotes in here13:35
mpanettawhat, like bash.org?13:35
vkmcexactly13:35
mpanettahahaha that would be awsome13:36
* vkmc tried to make that look as if it was her idea13:36
vkmcclose enough13:36
vkmchahaha13:36
mpanettaIt wasn't?13:36
mpanetta:P13:36
vkmcmaybe it was13:36
vkmc:p13:36
vkmcI was not stealing bash.org's idea, not at all13:36
*** mpanetta has quit IRC13:41
*** mpanetta has joined #openstack-zaqar13:41
mpanettaGAH13:41
mpanettastupid network13:41
vkmcflaper87, ninja approved your patch13:42
flaper87vkmc: AWESOME, thanks!13:42
vkmcflaper87, thanks13:42
vkmcbbl13:43
*** alcabrera|afk is now known as alcabrera13:55
openstackgerritA change was merged to openstack/zaqar: Move Redis driver's claim transaction to Lua  https://review.openstack.org/12323013:58
openstackgerritA change was merged to openstack/zaqar: Improve efficiency of the redis driver when posting messages  https://review.openstack.org/12517513:58
flaper87great, 1 patch left to merge13:58
*** sgotliv has quit IRC14:10
openstackgerritA change was merged to openstack/python-zaqarclient: Add a read-only property for Queues  https://review.openstack.org/12147814:12
openstackgerritA change was merged to openstack/zaqar: Add a pool_group to pools in v1.1  https://review.openstack.org/12407914:15
*** jeffrey4l has quit IRC14:15
flaper87ok, we're rc1-ready14:16
flaper87vkmc: one more review request: https://review.openstack.org/#/c/123750/14:17
flaper87we should totally get that into juno14:17
flaper87but not mandatory for rc114:17
*** cpallares has joined #openstack-zaqar14:21
*** amitgandhinz has joined #openstack-zaqar14:21
*** sgotliv has joined #openstack-zaqar14:22
*** reed has joined #openstack-zaqar14:56
openstackgerritFlavio Percoco proposed a change to openstack/zaqar-specs: Template for new storage driver's addition  https://review.openstack.org/12565814:58
flaper87vkmc: kgriffs|afk ^ let me know what you think14:59
*** kgriffs|afk is now known as kgriffs15:25
*** ametts has quit IRC15:27
* flaper87 lurks15:31
* kgriffs looking15:34
openstackgerritFlavio Percoco proposed a change to openstack/zaqar: Open Kilo development  https://review.openstack.org/12568115:37
flaper87kgriffs: not urgent, whenever you have time15:39
flaper87kgriffs: thanks and good morning, my friend.15:39
kgriffssure thing15:39
flaper87kgriffs: btw, redis patch landed... w00000t15:41
kgriffsoh good15:42
kgriffsshoot15:42
kgriffsI just noticed this one seems to have stalled15:42
kgriffshttps://review.openstack.org/#/c/121474/15:42
kgriffsit's essential for HA15:43
kgriffsanyone seen Prashanth around lately?15:43
kgriffsflaper87: perhaps I should just update the patch myself and we can merge it real quick for RC?15:43
kgriffsflaper87: or has RC1 already been cut?15:45
flaper87kgriffs: RC1 is closed: https://review.openstack.org/#/c/125681/15:48
flaper87but we can have rc2 if needed15:48
flaper87we can still get it in before the release15:48
kgriffsok. so make a branch and backport after this merges?15:49
flaper87I talked to prashanth earlier today, he said he's getting back to this patch this week15:49
flaper87kgriffs: correct15:49
kgriffsoic15:49
flaper87kgriffs: lets give him today and tomorrow15:49
flaper87if he can't get back to it, we can restore it15:49
flaper87I'd like to get this one in too: https://review.openstack.org/#/c/123750/15:49
flaper87hint hint, review ;)15:49
flaper87:D15:50
kgriffsyep, I've started reviewing that15:50
kgriffsok, so let's try to get those two into rc2 plus if we find anything critical during smoke testing that needs to be fixed.15:50
*** jchai is now known as jchai_afk15:51
flaper87+115:51
flaper87kgriffs: in addition to that, we need to document the "recommended" deployment for both redis and mongodb15:51
flaper87we can do that later15:51
kgriffsoh, that is a good point15:51
flaper87but I think it's important15:51
kgriffsi'm still tempted to add support for simple queue-level sharding across multiple redis instances without having to use a pool (as I've brought up before) but I am trying to resist the urge to add anything else to RC215:52
flaper87kgriffs: did you see this? https://www.dropbox.com/s/w2ufh784co1m5pf/insert.jpg?dl=015:52
flaper87kgriffs: I'd probably do that in Kilo, we need to discuss that better15:53
flaper87I think15:53
flaper87kgriffs: that link is a sharded mongodb deployment15:53
kgriffsregarding redis, yeah, another reason to resist the urge15:53
flaper87it's the graph of how writes are distributed across shards15:53
kgriffsinteresting15:54
kgriffswhat's the shard key?15:54
flaper87kgriffs: http://paste.openstack.org/show/117806/15:55
flaper87that one15:55
flaper87the marker index inverted15:55
flaper87which, btw, I think we're not actually using15:55
kgriffsi want to say that we are using it to enforce the unique constraint so we know when there was a collision15:56
flaper87kgriffs: ah yeah, good point15:58
flaper87I had forgotten about that15:58
flaper87then yeah, we're using it15:58
flaper87:D15:59
flaper87this gives that index another use15:59
kgriffsare you using hash-based sharding vs. range-based?15:59
flaper87range based16:00
flaper87hashed indexes can't be used in compound indexes16:00
kgriffsoic16:01
flaper87#sadpanda16:01
kgriffsI hadn't learned much about those yet since they are kinda sorta newish16:01
* kgriffs is a just-in-time learner16:01
* flaper87 is a JIT learner too16:01
flaper87or should I say: WINI16:01
flaper87whenever I need it16:01
flaper87:P16:02
flaper87which is basically JIT16:02
kgriffsif it is range-based, wouldn't that cause the distribution to be less even than your graphs show? I think I'm missing something.16:02
flaper87so, the k gives enough cardinality to distribute data across shards and the p_j field keeps them all together16:02
flaper87the reason I moved k as the first field is because otherwise all writes would go to the last shard16:03
flaper87(since k is sequentially incremented)16:03
flaper87with this, it hits the last shard of the group of shards used for p_j16:03
flaper87but inserts are basically ordered by k first16:03
flaper87does that make sense?16:04
flaper87that's what I've been telling myself :P16:04
flaper87kgriffs: http://paste.openstack.org/show/117814/16:05
flaper87that's the status16:05
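The shard-key reasoning above can be shown with a toy model of range-based sharding. This is an illustration only, not MongoDB or Zaqar code: the chunk boundaries are invented, and a real sharded cluster splits and balances chunks dynamically. It shows why a monotonically increasing leading field concentrates every insert on the last chunk, whereas a leading field with spread-out values (the cardinality flaper87 mentions `k` provides) distributes writes.

```python
# Toy model of range-based sharding: the value space of the leading
# shard-key field is split into contiguous chunks, and each insert lands
# in the chunk whose range contains its value.

def shard_for(value, boundaries):
    """Return the index of the range chunk that `value` falls into."""
    for i, upper in enumerate(boundaries):
        if value < upper:
            return i
    return len(boundaries)

boundaries = [250, 500, 750]   # 4 chunks over the leading field's values

# A monotonically increasing leading field that has grown past every
# split point sends all new writes to the last chunk:
hot = [shard_for(k, boundaries) for k in range(1000, 2000)]
print(set(hot))                # {3}

# A leading field whose values are spread across the keyspace hits
# every chunk, distributing the write load:
spread = [shard_for((q * 37) % 1000, boundaries) for q in range(1000)]
print(sorted(set(spread)))     # [0, 1, 2, 3]
```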
kgriffsoic16:06
kgriffsso you will get ranges of messages for each queue colocated16:06
flaper87right16:07
kgriffsseems like there is a small chance that a queue might bleed over to another shard?16:07
flaper87yeah, I think there's a chance, we'll have to play with chunk sizes, I guess16:08
kgriffsand in that case a very small chance that someone listing messages (not claiming them) would miss one since that unique index constraint is not enforced across shards afaik16:08
flaper87oh wait, isn't it?16:08
flaper87I.... didn't know that16:08
kgriffsidk16:08
flaper87ok, let me check that16:08
kgriffssomething to check16:08
flaper87I think it is but I'll check right away16:09
kgriffskk16:09
kgriffsbtw, here is a thought I just had that may or may not turn into something useful16:09
kgriffsI was thinking about how swift has decoupled containers from the data in the containers16:10
kgriffslike, the "list of things" in a container is held in sqlite iirc16:10
kgriffsbut that is replicated separately from the actual "things"16:10
kgriffs I wonder if a similar notion might be useful for the Redis driver. You could track the message IDs that are in a queue in one place16:11
kgriffsbut then the messages could be flung to the wind / replicated / etc. in a different way16:11
flaper87"in one place" => "in redis but separate set/list"16:12
flaper87?16:12
kgriffs(like I said, this is a nascent thought, so not sure if it makes sense yet)16:12
kgriffsflaper87: yeah, well it is already sort of like that16:12
kgriffsthere is a set that is used to index the message IDs16:13
kgriffsbut why does it have to live on the same redis node?16:13
kgriffsalso, consider that the index is actually what determines the ordering/rank of the messages16:13
flaper87that reminds me our discussion of considering queues part of the "control" plane instead of the "data" plane16:14
kgriffsif it gives you enough info to go find those messages that are perhaps sharded across a hashring of nodes, then you can still bring them together and preserve the stateless marker semantics that are necessary to implement an RESTful message pub/sub feed16:15
kgriffsflaper87: good thought16:15
*** sgotliv has quit IRC16:15
kgriffsin any case, you still end up with a bottleneck but it is much lighter and should be easy to scale that to a level such that other things become bigger bottlenecks anyway16:16
flaper87I don't think your idea is crazy. What if we have a "register_message" method or something like that in the queue_controller16:16
flaper87that may/may not be implemented16:16
flaper87depending on the driver16:16
kgriffshmmm16:16
kgriffsso for each message post operation the driver would get two calls?16:17
kgriffsfirst, to "register_messages"16:17
kgriffsthen, to "post_messages"?16:17
kgriffsthe message IDs would have to be returned from the former and passed to the latter16:17
kgriffsconsider it for mongodb16:19
kgriffshmm16:19
flaper87probably the other way around16:19
flaper87what if post_messages calls register_messages16:19
kgriffsactually, it may be less performant16:20
* kgriffs is thinking16:20
flaper87oh no wait, that sounds like a bad idea. I'd like to keep control plane and data planes separated16:20
flaper87as much as possible16:20
kgriffsyeah16:20
* flaper87 is making all this up16:20
kgriffsLOL16:20
kgriffswell, what I was going to say about mongo was if you could use a hashed shard key to distribute the message payloads and then use range-based to lookup what messages belong in the queue from a different collection, you could evenly distribute messages around the cluster16:22
* vkmc lurks and reads the backlog16:22
kgriffsand you would be in much less danger of a "queue" bleeding over, since it would basically just be a list of some IDs and maybe a little bit of metadata16:23
kgriffsdownside is that now I have to make two calls to get messages16:23
kgriffsthe redis driver already has to do that, but it is offset a little bit since Redis is so fast16:24
kgriffs(it could be done in a single lua call, but then you just tightly coupled the queue IDs and the messages into the same DB)16:24
flaper87not sure I understand why you need 2 calls to get messages16:24
flaper87mongodb knows in which shard each record is16:24
kgriffsyou would have a call to list the object IDs of the messages that belong to a queue16:25
flaper87mmh, why?16:25
kgriffsthe "queue_messages" collection would be sharded like you are doing now with the messages collections16:25
flaper87I mean, all that information is already in mongodb16:26
flaper87the config server contains all that metadata16:26
flaper87it knows how chunks were split across shards16:26
kgriffsflaper87: what that does is create a sort of app-level index, like was done (out of necessity due to its lack of higher-order operations) for the redis driver.16:26
kgriffsfor each queue you have a list of message IDs (object IDs)16:27
kgriffsyou list those in one call16:27
kgriffsthen you go get those messages from a different collection16:27
kgriffsthe advantage is you can choose to shard the two collections in different ways16:28
kgriffsthe disadvantage is you have to jump through more hoops to get those messages16:28
flaper87ah wait, you're explaining how your idea would be implemented in mongodb16:28
kgriffsyeah16:28
*** alcabrera is now known as alcabrera|afk16:28
flaper87ah ok16:28
flaper87sorry16:28
kgriffsjust a thought experiment16:28
flaper87now I get why you're saying the whole 2-step things16:29
kgriffsidk if the tradeoff is worth it, but it may be worth a POC16:29
kgriffsflaper87: heh, sorry. It makes more sense once you work with the Redis driver a bit, because it forces you to do things a different way.16:30
kgriffsso, really the idea is a result of cross-referencing swift's architecture with some concepts I noticed while reviewing the Redis driver16:31
flaper87yeah, I don't think it's a terrible idea, it would be useful to have some sort of POC or at least to write it down.16:31
flaper87so we can reason about it16:31
flaper87design session ?16:31
flaper87:D16:31
kgriffsdefinitely be easier to discuss in person16:31
vkmcjust one doubt16:40
vkmcI remember that flaper87 suggested the idea of removing queues as they are right now16:40
vkmcare we still considering that?16:41
kgriffswell, even if we do that there will still be some notion of "tags" or "topics"16:41
vkmcof course16:41
kgriffsand that would be where the ID indexes would live16:41
kgriffs(i guess)16:41
flaper87I think we can move one step closer to removing queues16:41
flaper87for example, it'd be great if we would create queues *just* for those that are explicitly created16:42
flaper87that basically means we need another way to list queues16:42
vkmcidk if it makes sense to do a queue level sharding if we are going to remove queues as they are16:42
vkmcthat is why I'm asking16:43
flaper87what we would remove is the concept of "queue" as a container16:43
flaper87but we still need something (topic?) to group messages together16:43
vkmcyeah, we are going to have a queue attribute instead of a data structure16:43
flaper87vkmc: we kinda already have that (p_q)16:44
flaper87queues imply order, IMHO, which is why I think FIFO makes sense16:44
vkmccool16:44
flaper87but if we end up making FIFO optional, we probably could also change the term16:45
kgriffswrt FIFO16:45
vkmchmm yeah16:45
vkmcqueues == FIFO16:46
kgriffspersonally, I care less about the need for guaranteeing some sort of platonic ideal re FIFO and more about the practical concern of just not missing a message16:46
kgriffsthat being said, a long-standing complaint of SQS was its lack of such guarantees16:47
kgriffsnot saying we should try to do it better just to prove we can, but something to consider16:47
kgriffsbut let me address the first point16:48
kgriffsif I am listing messages in a stateless, REST architecture, the client needs a marker16:48
flaper87my tendency to prefer FIFO is that there are cases that require it and I believe it makes sense to support it if we can16:48
kgriffsif you don't keep messages in some kind of order, markers don't work16:48
flaper87making it optional removes the burden of having to support it for every queue16:48
kgriffsthere are three questions I think we need to answer16:49
kgriffsactually more than that16:49
kgriffs1. what are the performance consequences of relaxing the FIFO constraint?16:50
kgriffs2. what the scaling consequences?16:50
kgriffs3. is it possible to list messages (not claim them) in a stateless manner (client keeps track of marker) without the FIFO constraint?16:51
kgriffs(and guarantee a message will never be missed)16:51
kgriffs4. If not, should we also relax the guaranteed delivery constraint?16:51
flaper873. is exactly what I think we shouldn't do. I mean, asking the client to do that is cheating, IMHO.16:51
kgriffs5. And/Or should we make listing messages optional16:52
kgriffs6. And/or should we split Zaqar into two different APIs and services16:52
kgriffs</end>16:52
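Question 3 hinges on the point kgriffs made earlier: markers only work over ordered messages. A toy illustration, with invented message IDs standing in for markers, shows why a client resuming from its last marker never misses a message under a stable ordering, but can silently skip one if messages may appear "behind" the marker.

```python
# Marker-based, stateless listing over a stable, monotonically
# increasing ordering vs. an ordering with no such guarantee.

def page(items, marker):
    """Return everything the client has not seen, i.e. beyond `marker`."""
    return [i for i in items if i > marker]

# Stable ordering: resume from the last marker, nothing is missed.
stable = [1, 2, 3, 4, 5]
seen = page(stable, 0)[:3]           # first page: [1, 2, 3]
seen += page(stable, seen[-1])       # resume at marker=3: [4, 5]
print(seen)                          # [1, 2, 3, 4, 5]

# No ordering guarantee: a late message lands behind the marker.
unordered = [1, 2, 4, 5]
seen = page(unordered, 0)[:3]        # first page: [1, 2, 4]
unordered.append(3)                  # message 3 arrives "behind" marker 4
seen += page(unordered, seen[-1])    # resume at marker=4: only [5]
print(sorted(seen))                  # [1, 2, 4, 5] -- message 3 was missed
```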
kgriffsI don't think some people realize how much we have agonized over questions like these, carefully weighing the tradeoffs to arrive where we are today.16:53
flaper87mmh, I think resolving the FIFO thing is a matter of answering 1 and 216:53
flaper87hehe, I was reading the etherpad where we discussed FIFO16:53
vkmc1. relaxing the FIFO constraint would make retrievals very expensive16:53
flaper87man, we put lots of thought into that16:53
flaper87really16:53
kgriffsI'm not saying we are in a perfect place today, but I think everyone needs to realize this is very complex and there are lots of things we need to keep in mind as we continue to iterate on the project.16:54
vkmcand it would also have a negative impact in several use cases16:54
kgriffsnot least of which, is what end users want/need16:54
vkmc+1 to that16:54
flaper87I'm not sure it'd make retrievals expensive, if anything it'll make writes cheaper16:54
*** jchai_afk is now known as jchai16:54
kgriffsI think these questions could form the basis of a very good discussion at the summit.16:55
flaper87I think we also need to ask ourselves how doing optional things will affect the drivers16:55
kgriffsI'm sure there are other questions you all would add as well16:55
* kgriffs lives two seconds in the future16:55
flaper87:P16:56
vkmcflaper87, yeah, but it will be really expensive if we want to guarantee ordering16:56
vkmcit's a tradeoff16:56
vkmcas everything in computing >.>16:56
flaper87something I'm trying to figure out now is how can we make the storage driver aware of the features enabled for a queue16:58
flaper87we have flavors, great....16:58
flaper87but we don't have a way to tell the driver: "Hey, this queue is supposed to be ordered"16:58
vkmchm17:01
vkmcgood point17:01
*** cpallares has quit IRC17:11
kgriffsfwiw17:15
kgriffshttps://etherpad.openstack.org/p/zaqar-design-constraints17:15
kgriffsThere is a line in there that may be inflammatory so I will remove it17:15
kgriffsbbl (lunch)17:25
*** kgriffs is now known as kgriffs|afk17:26
*** alcabrera|afk is now known as alcabrera17:40
*** kgriffs|afk is now known as kgriffs17:53
vkmcalcabrera, hey :) happy birthdaaaaaaaaaaaaaaaaaay \o/17:55
alcabreravkmc: thanks! <317:56
vkmc:D17:56
vkmchope you are having a great day17:56
alcabrerait's... not so great, but the weekend is near. :)17:57
vkmcohh boo17:57
vkmcyeah that's good17:57
*** alcabrera is now known as alcabrera|afk18:02
*** alcabrera|afk is now known as alcabrera18:16
*** jchai has quit IRC18:18
*** ametts has joined #openstack-zaqar18:22
vkmcflaper87, the spec for storage drivers looks really good18:37
*** sgotliv has joined #openstack-zaqar18:41
*** jchai has joined #openstack-zaqar19:12
*** mpanetta_ has joined #openstack-zaqar19:19
*** mpanetta has quit IRC19:21
*** jchai is now known as jchai_afk19:30
*** jchai_afk is now known as jchai19:34
*** alcabrera is now known as alcabrera|afk19:52
*** malini1 has quit IRC20:00
*** sgotliv has quit IRC20:02
*** jchai is now known as jchai_afk20:11
*** mpanetta_ is now known as mpanetta20:21
*** jchai_afk is now known as jchai20:37
*** flaper87 is now known as flaper87|afk21:11
*** sriram has quit IRC21:20
*** malini1 has joined #openstack-zaqar21:26
*** malini1 has quit IRC21:51
*** jchai has quit IRC21:52
*** mpanetta has quit IRC21:52
*** kgriffs is now known as kgriffs|afk22:06
*** amitgandhinz has quit IRC22:25
*** kgriffs|afk is now known as kgriffs22:35
*** cpallares has joined #openstack-zaqar22:39
*** kgriffs is now known as kgriffs|afk22:47
vkmchey guys, any ideas on why this is happening? http://paste.openstack.org/show/117903/23:03
*** echevemaster has joined #openstack-zaqar23:34
*** kgriffs|afk is now known as kgriffs23:38

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!