*** openstack has joined #openstack-marconi | 18:46 | |
kgriffs | in that case, you would have to either have atomic id/marker generation (ala autoinc key) or you would need to pipe all inserts for a given queue through a serializing proxy thingy | 18:47 |
kgriffs | if you wanted to guarantee FIFO | 18:47 |
kgriffs | currently, the spec does not require FIFO for multiple producers | 18:47 |
kgriffs | if you want to spread out a single queue across multiple storage nodes, you unfortunately can't just use a timestamp-based id/field to order by, since it is easy to get race conditions under heavy load | 18:48 |
kgriffs | BTW, SQS does not guarantee ordering at all | 18:49 |
kgriffs | makes their lives a lot easier. :p | 18:49 |
torgomatic | (just so I'm clear here) so, even though the insert order for one producer is A1, A2, A3, the spec doesn't guarantee getting A1, then A2, then A3 out? | 18:49 |
torgomatic | (not complaining, just asking) | 18:49 |
torgomatic | and that's because there's a second producer at the same time? | 18:49 |
kgriffs | well, it depends on how close both producers are | 18:50 |
kgriffs | if they interleave with gaps of, say, 1 second or so, they likely won't get out of order | 18:50 |
torgomatic | okay, I think I see | 18:51 |
torgomatic | thanks | 18:51 |
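The "atomic id/marker generation" option kgriffs mentions can be sketched in a few lines. This is a single-process illustration with hypothetical names; a real multi-node deployment would need the counter to live in the storage layer (e.g. an auto-increment key) or behind the serializing proxy he describes:

```python
import itertools
import threading

class MarkerGenerator:
    """Hand out strictly increasing markers for a queue.

    Single-process sketch of atomic marker generation; markers
    give a total order that FIFO reads can sort by, avoiding the
    race conditions of timestamp-based ordering under load.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._counter = itertools.count(1)

    def next(self):
        # The lock makes marker handout atomic across threads.
        with self._lock:
            return next(self._counter)

gen = MarkerGenerator()
print([gen.next() for _ in range(3)])  # [1, 2, 3]
```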
*** meganw has quit IRC | 19:01 | |
*** meganw has joined #openstack-marconi | 19:01 | |
*** meganw has quit IRC | 19:06 | |
kgriffs | torgomatic: rock on. | 19:13 |
openstackgerrit | Kurt Griffiths proposed a change to stackforge/marconi: chore: Add the up-and-coming oslo.cache module https://review.openstack.org/44131 | 19:13 |
kgriffs | guys, need some reviews on these | 19:14 |
kgriffs | https://review.openstack.org/#/c/44105/ | 19:14 |
kgriffs | https://review.openstack.org/44131 | 19:14 |
*** JRow has joined #openstack-marconi | 19:20 | |
*** JRow has quit IRC | 19:34 | |
kgriffs | Alex_Gaynor: thought you'd like to know that pylru is 47x faster for a lookup under pypy vs. py27 | 19:54 |
* kgriffs loves pypy | 19:54 | |
Alex_Gaynor | kgriffs: awesome | 19:54 |
flaper87 | kgriffs: pong | 20:00 |
* flaper87 is back | 20:00 | |
acabrera | I'm back from the tech talk. :) | 20:02 |
*** acabrera is now known as cppcabrera | 20:02 | |
cppcabrera | Got my name back, too. :P | 20:02 |
cppcabrera | kgriffs: The *only* downside to pylru is that it doesn't support Python 3. Its extension-building approach is incompatible with py3k. | 20:03 |
cppcabrera | I do love how speedy it is compared to available lru cache options. | 20:03 |
cppcabrera | I tried porting it over the weekend, but kept running into some exceptions at import time when I built the source modified with ifdefs to handle py3k. | 20:04 |
*** malini is now known as malini_afk | 20:04 | |
* cppcabrera has little to no extension building experience in python, fyi | 20:04 | |
*** meganw has joined #openstack-marconi | 20:07 | |
*** meganw has quit IRC | 20:07 | |
*** meganw has joined #openstack-marconi | 20:08 | |
kgriffs | flaper87: were you already working on the change to remove the range queries from _list ? | 20:09 |
kgriffs | (and remove threshold "feature" from gc) | 20:10 |
flaper87 | kgriffs: nope, today I focused on the mapping and then got pulled into thousands of meetings T_T | 20:11 |
flaper87 | kgriffs: I can do it tomorrow | 20:11 |
kgriffs | ok, I couldn't remember who was going to work on it | 20:12 |
flaper87 | and then focus on the test change | 20:12 |
flaper87 | kgriffs: we never said it :D | 20:12 |
flaper87 | so, either would be great | 20:12 |
kgriffs | no wonder! | 20:12 |
kgriffs | I can take it | 20:12 |
flaper87 | cool | 20:12 |
flaper87 | thanks | 20:12 |
kgriffs | in other news, see my patches to pull in cache from oslo | 20:12 |
kgriffs | from future import oslo.cache | 20:13 |
kgriffs | ;) | 20:13 |
kgriffs | flaper87: do you have a favorite Python LRU library? | 20:13 |
kgriffs | I would like to implement an LRU backend and then create a hierarchical meta-backend | 20:14 |
kgriffs | (with write-through support) | 20:14 |
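The hierarchical meta-backend with write-through that kgriffs describes could look roughly like this. All names here (`LRUCache`, `TieredCache`) are hypothetical sketches, not Marconi or pylru APIs; the LRU is a tiny stand-in for a library like pylru, and the sketch cannot cache `None` values:

```python
import collections

class LRUCache:
    """Tiny LRU cache (stand-in for a library like pylru)."""

    def __init__(self, maxsize=128):
        self.maxsize = maxsize
        self._data = collections.OrderedDict()

    def get(self, key, default=None):
        try:
            self._data.move_to_end(key)  # mark as recently used
            return self._data[key]
        except KeyError:
            return default

    def set(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict least recently used

class TieredCache:
    """Hierarchical meta-backend: reads check tiers in order,
    writes go through to every tier (write-through)."""

    def __init__(self, *tiers):
        self.tiers = tiers

    def get(self, key):
        for tier in self.tiers:
            value = tier.get(key)
            if value is not None:
                return value
        return None

    def set(self, key, value):
        for tier in self.tiers:
            tier.set(key, value)

local = LRUCache(maxsize=2)      # fast, tiny tier (e.g. in-process)
remote = LRUCache(maxsize=1024)  # bigger tier (e.g. memcached-like)
cache = TieredCache(local, remote)
cache.set('a', 1)
```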
kgriffs | cppcabrera: can you +2 those oslo update patches? | 20:20 |
kgriffs | cppcabrera: can you review those? | 20:21 |
flaper87 | kgriffs: mmh, TBH, I don't have a preferred one. Last time I needed one, I wrote it myself | 20:21 |
kgriffs | kk | 20:21 |
kgriffs | cppcabrera: i may have missed a namespace change, so need some eyes on it | 20:21 |
cppcabrera | lessee... | 20:22 |
* cppcabrera goes to check on the growing review queue | 20:23 | |
cppcabrera | It's been a meetings kind of day. I'm lucky I found a good hour this morning to modularize marconi-proxy. :P | 20:23 |
flaper87 | quick english question | 20:24 |
flaper87 | why do spell checkers tell me I should put a hyphen in backend ? | 20:24 |
meganw | they do that to me too! but it's wrong. one word. | 20:25 |
amitgandhi | https://www.google.com/search?q=define%3A+backend&oq=define%3A+backend&aqs=chrome.0.69i57j69i58j69i62l3.2711j0&sourceid=chrome&ie=UTF-8 | 20:26 |
kgriffs | I suppose you can make it two words | 20:26 |
kgriffs | "back end" | 20:26 |
amitgandhi | interesting, they use it with a space, without, and with a hyphen | 20:26 |
cppcabrera | the use of hyphens is one of those debated things in English, IIRC. Most of the time, when a word could take a hyphen, it can also be used with a space. | 20:27 |
kgriffs | but seems like the colloquial spelling is to remove the space | 20:27 |
flaper87 | I used to write it as backend 'til those stupid spell checkers confused me | 20:27 |
meganw | gotta stand up to the man on these things | 20:27 |
kgriffs | heh | 20:28 |
kgriffs | "common misspelling" | 20:28 |
kgriffs | http://how-do-you-spell.net/back-end | 20:28 |
flaper87 | uuuu | 20:28 |
cppcabrera | lol | 20:28 |
cppcabrera | +1 meganw | 20:28 |
cppcabrera | I'm all for theatre and catalogue which sets off english (US) spell checkers. ;) | 20:29 |
cppcabrera | Also, any time spanish sneaks into my english - yup. :P | 20:29 |
kgriffs | flaper87: what do you think about cache backends normalizing exceptions, so that the caller doesn't have to have except blocks based on backend-specific types? | 20:32 |
flaper87 | kgriffs: I think that's a good idea, however, IIRC, the api doesn't raise any exception yet | 20:57 |
kgriffs | I was just looking at the memcached backend | 20:59 |
kgriffs | looks like it just propagates any exceptions that are raised from python-memcached | 21:00 |
flaper87 | kgriffs: mmh, if it crashes really badly, yup, that's the current case. The methods we're using from python-memcached shouldn't raise any exceptions | 21:01 |
flaper87 | AFAIC :/ | 21:01 |
flaper87 | AFAIK* | 21:01 |
flaper87 | damn, I can't type today | 21:01 |
kgriffs | kk | 21:02 |
flaper87 | except for decr and incr | 21:02 |
flaper87 | I agree on normalizing exceptions, though | 21:02 |
flaper87 | like catching bad things, printing a nice log and re-raising a cache-op exception | 21:03 |
flaper87 | or something like that | 21:03 |
flaper87 | kgriffs: did you hit any case where an exception should be raised? | 21:03 |
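flaper87's idea of "catching bad things, printing a nice log and re-raising a cache-op exception" could be sketched as a decorator. Everything here is hypothetical (the `CacheError` name and the fake driver error are illustrations, not Marconi or redis-py APIs):

```python
import functools
import logging

LOG = logging.getLogger(__name__)

class CacheError(Exception):
    """Backend-agnostic cache failure (hypothetical name)."""

def normalized(backend_errors):
    """Log driver-specific errors and re-raise one normalized type,
    so callers never need except blocks keyed to a particular
    backend library."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except backend_errors as ex:
                LOG.error('cache operation failed: %s', ex)
                raise CacheError(str(ex))
        return wrapper
    return decorator

# A fake driver-specific error standing in for, e.g., a redis-py
# connection failure:
class FakeBackendError(Exception):
    pass

@normalized((FakeBackendError,))
def get(key):
    raise FakeBackendError('connection refused')
```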
*** meganw has quit IRC | 21:04 | |
*** meganw has joined #openstack-marconi | 21:05 | |
*** malini_afk is now known as malini | 21:05 | |
kgriffs | it was with ultramemcache I hit that | 21:06 |
kgriffs | but judging by the python-memcached code it handles exceptions for you | 21:06 |
*** meganw has quit IRC | 21:09 | |
*** meganw has joined #openstack-marconi | 21:09 | |
kgriffs | looks like redis-py does raise exceptions, tho | 21:09 |
kgriffs | flaper87: ^^^ | 21:10 |
kgriffs | so, the cache abstraction should define behavior there. | 21:10 |
kgriffs | as in, should it fail silently and mark the node as "dead" or what? | 21:10 |
flaper87 | kgriffs: yeah, I guess it should | 21:14 |
flaper87 | it should try to reconnect | 21:14 |
flaper87 | or find a new valid node | 21:14 |
flaper87 | IMHO, what do you think? | 21:14 |
torgomatic | I'd reconnect, but that's just me | 21:15 |
flaper87 | torgomatic: me too :) | 21:15 |
torgomatic | if you try to find a new valid node, then you can get inconsistent cache | 21:15 |
torgomatic | either from different ideas of the "next valid node", or because node A can connect to cachebox1 but node B can't | 21:16 |
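The reconnect-rather-than-fail-over behavior torgomatic argues for could be sketched like this (a hypothetical helper, not an existing Marconi API; `op` and `reconnect` are caller-supplied callables):

```python
import time

def call_with_reconnect(op, reconnect, attempts=3, delay=0.05):
    """Retry a cache operation against the *same* node, reconnecting
    between attempts, rather than failing over to a different node
    (which risks inconsistent cache views, per the discussion)."""
    last_error = None
    for attempt in range(attempts):
        try:
            return op()
        except ConnectionError as ex:
            last_error = ex
            reconnect()                         # re-dial the same node
            time.sleep(delay * (2 ** attempt))  # simple backoff
    raise last_error
```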
flaper87 | torgomatic: btw, did you take a look at swift's column here? | 21:16 |
flaper87 | https://docs.google.com/spreadsheet/ccc?key=0ApBVEzsnQlGrdDRSMW5EVUVKbUdPOE1aNmFGczU2cUE#gid=0 | 21:16 |
flaper87 | do my and cppcabrera's comments make sense? | 21:16 |
torgomatic | flaper87: I did; I've been thinking over ways to do that and take advantage of multiple containers for performance, though | 21:17 |
torgomatic | but what you've said is correct, and I believe that 1-1 mapping of queue to container will give you the semantics you want, subject to the usual eventual consistency of Swift | 21:17 |
flaper87 | torgomatic: cool | 21:18 |
flaper87 | thanks | 21:18 |
torgomatic | np | 21:18 |
cppcabrera | torgomatic: The idea of using multiple containers to handle very large queues has come up in general marconi discussions as well. Even in mongo, transparently breaking down a large queue into sub-queues would benefit latency in some cases. If you come up with any ways to tackle that, let us know. :) | 21:19 |
torgomatic | cppcabrera: will do :) | 21:19 |
kgriffs | guys, a quick approval: https://review.openstack.org/#/c/44105/ | 21:19 |
kgriffs | (updates oslo) | 21:19 |
flaper87 | kgriffs: I was thinking of giving a bit of extra priority to that backend, what do you think? (once we figure out the API thing) | 21:19 |
torgomatic | if I don't have to order the results of a list-message request in any particular way, it's easy ;) | 21:19 |
* flaper87 would love to take care of swift's backend | 21:19 | |
flaper87 | it would be an amazing opportunity to get to know it better | 21:19 |
kgriffs | flaper87: not sure on the priority yet. I'd like to find out what people are most interested in for backends | 21:21 |
*** ayoung has quit IRC | 21:21 | |
flaper87 | kgriffs: I think it is important for us to support at least 1 other backend != mongodb | 21:21 |
flaper87 | kgriffs: it could be that or a rel db | 21:21 |
flaper87 | I'd love it to be a MB | 21:22 |
flaper87 | but we need to figure out the API issue | 21:22 |
kgriffs | yep | 21:22 |
flaper87 | and what's actually possible, valid and reasonable | 21:22 |
flaper87 | cool, what about putting it on next week's meeting agenda ? | 21:22 |
cppcabrera | I'm thinking of working on one of two things this Friday: Redis (finish it) or zmq transport (start it). There's a hackday at the office this friday. :) | 21:22 |
flaper87 | kgriffs: -1 some files missing | 21:23 |
flaper87 | cppcabrera: redis :) | 21:23 |
flaper87 | you could start having it in a separate repo | 21:23 |
flaper87 | which will be a proof of marconi's support for third-party modules | 21:23 |
kgriffs | flaper87: added to agenda | 21:23 |
flaper87 | once we figure out what modules should live in Marconi's tree, we could pull it in | 21:24 |
cppcabrera | I see, to show how someone else might add a third-party module, hmm... | 21:24 |
flaper87 | cppcabrera: yup | 21:24 |
cppcabrera | How would that plug into stevedore? | 21:24 |
flaper87 | pip install marconi-redis-backend | 21:24 |
flaper87 | cppcabrera: entry_points: https://github.com/stackforge/marconi/blob/master/setup.cfg#L35 | 21:24 |
flaper87 | cppcabrera: you'd have something like: redis = marconi.storage.redis.driver:Driver | 21:25 |
cppcabrera | Ahhh, I get it. | 21:25 |
flaper87 | (the packages can be whatever you want) | 21:25 |
zyuan_ | I'd love it if there were a HyperDex backend :) | 21:25 |
cppcabrera | entry points use a module import syntax | 21:25 |
cppcabrera | So as long as the entry point entry is installed on the python module path... got it! | 21:26 |
flaper87 | cppcabrera: exactly | 21:26 |
cppcabrera | myproj.awesome.redis.driver:Driver | 21:26 |
cppcabrera | Alright, I'll give it a spin. | 21:26 |
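The `package.module:Object` entry-point format flaper87 points at (e.g. `redis = marconi.storage.redis.driver:Driver` in setup.cfg) can be resolved by hand with a few lines. This is a simplified stand-in for what setuptools/stevedore do when loading a driver by name; the function name is hypothetical, and `collections:OrderedDict` is used only as a loadable example spec:

```python
import importlib

def load_entry_point(spec):
    """Resolve a 'package.module:Object' entry-point string by
    importing the module and fetching the named attribute."""
    module_name, _, attr = spec.partition(':')
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# A real plugin line would map a short name to such a spec, e.g.
# 'redis' -> 'marconi.storage.redis.driver:Driver'. Here we resolve
# a stdlib class just to show the mechanism:
DriverClass = load_entry_point('collections:OrderedDict')
print(DriverClass.__name__)  # OrderedDict
```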
flaper87 | cppcabrera: another cool project for your hackday could be the client :P | 21:26 |
cppcabrera | Hahha, that's #3 on my list. :P | 21:27 |
flaper87 | LOL | 21:27 |
cppcabrera | I like using hack days to explore the unknown - it's the only reason the client is lower on my list for that day. :) | 21:27 |
flaper87 | cppcabrera: I was joking anyway, redis backend should be fun | 21:27 |
flaper87 | cppcabrera: +1 | 21:27 |
cppcabrera | ;) | 21:27 |
*** malini is now known as malini_afk | 21:30 | |
*** oz_akan_ has quit IRC | 21:31 | |
*** oz_akan_ has joined #openstack-marconi | 21:31 | |
*** oz_akan_ has quit IRC | 21:36 | |
kgriffs | flaper87: re fileutils and excutils, setup.py pulled those in automagically, so I didn't think they were needed in openstack-common.conf | 21:37 |
flaper87 | kgriffs: yeah, basically update.py can use that file to know which modules to update | 21:38 |
flaper87 | that script is still stupid, I guess | 21:38 |
kgriffs | oic | 21:38 |
flaper87 | the whole process is really dumb | 21:38 |
*** meganw has quit IRC | 21:50 | |
*** meganw has joined #openstack-marconi | 21:51 | |
*** meganw has quit IRC | 21:55 | |
cppcabrera | I'm out for the night. Take care, guys. :) | 21:56 |
*** cppcabrera has quit IRC | 21:56 | |
*** kgriffs is now known as kgriffs_afk | 21:59 | |
*** fifieldt_ has joined #openstack-marconi | 22:00 | |
*** meganw has joined #openstack-marconi | 22:03 | |
*** meganw_ has joined #openstack-marconi | 22:03 | |
*** meganw_ has quit IRC | 22:05 | |
*** meganw_ has joined #openstack-marconi | 22:05 | |
*** meganw_ has quit IRC | 22:07 | |
*** meganw_ has joined #openstack-marconi | 22:07 | |
*** meganw has quit IRC | 22:07 | |
*** meganw_ has quit IRC | 22:12 | |
*** flaper87 is now known as flaper87|afk | 22:20 | |
*** amitgandhi has quit IRC | 22:23 | |
*** oz_akan_ has joined #openstack-marconi | 22:42 | |
*** key4 has quit IRC | 22:44 | |
*** key4 has joined #openstack-marconi | 22:45 | |
*** oz_akan_ has quit IRC | 22:47 | |
*** kgriffs_afk is now known as kgriffs | 22:59 | |
*** fifieldt_ is now known as fifieldt | 23:03 | |
*** jergerber has quit IRC | 23:24 | |
*** kgriffs is now known as kgriffs_afk | 23:34 | |
*** amitgandhi has joined #openstack-marconi | 23:45 |