openstackgerrit | Changbin Liu proposed a change to openstack/swift: Use new style class for ringbuilder's Commands https://review.openstack.org/77463 | 00:12 |
---|---|---|
*** miurahr has joined #openstack-swift | 00:19 | |
*** miurahr has left #openstack-swift | 00:19 | |
*** Midnightmyth_ has quit IRC | 00:23 | |
*** miurahr has joined #openstack-swift | 00:38 | |
*** Dharmit has joined #openstack-swift | 00:48 | |
*** miurahr has quit IRC | 00:56 | |
*** nosnos has joined #openstack-swift | 01:28 | |
*** haomaiwang has quit IRC | 02:22 | |
*** haomaiwa_ has joined #openstack-swift | 02:23 | |
Anju | notmyname: does swift provide Notification configuration on container ? | 02:57 |
openstackgerrit | Jenkins proposed a change to openstack/swift: Updated from global requirements https://review.openstack.org/75596 | 03:35 |
*** fifieldt has joined #openstack-swift | 04:18 | |
*** bvandenh has joined #openstack-swift | 04:26 | |
*** nosnos has quit IRC | 04:28 | |
openstackgerrit | Zhang Jinnan proposed a change to openstack/python-swiftclient: Use six.StringIO instead of StringIO.StringIO https://review.openstack.org/74705 | 04:33 |
*** nosnos has joined #openstack-swift | 04:34 | |
*** ppai has joined #openstack-swift | 04:35 | |
*** fifieldt has quit IRC | 04:43 | |
*** nshaikh has joined #openstack-swift | 04:53 | |
*** bvandenh has quit IRC | 04:58 | |
*** vu has quit IRC | 05:08 | |
*** haomaiwa_ has quit IRC | 05:48 | |
*** haomaiwa_ has joined #openstack-swift | 05:49 | |
*** nosnos has quit IRC | 06:01 | |
*** nosnos has joined #openstack-swift | 06:06 | |
*** sungju has quit IRC | 06:30 | |
*** saju_m has joined #openstack-swift | 06:31 | |
*** Dharmit has quit IRC | 06:38 | |
*** Dharmit has joined #openstack-swift | 06:38 | |
*** Dharmit has quit IRC | 06:40 | |
*** nosnos has quit IRC | 06:40 | |
*** chandankumar_ has quit IRC | 06:45 | |
*** chandan_kumar has joined #openstack-swift | 06:51 | |
*** chandan_kumar has quit IRC | 06:55 | |
*** Dharmit has joined #openstack-swift | 06:57 | |
*** nshaikh has quit IRC | 06:59 | |
*** dharmit_ has joined #openstack-swift | 06:59 | |
*** nshaikh has joined #openstack-swift | 06:59 | |
*** Dharmit has quit IRC | 07:02 | |
*** nosnos has joined #openstack-swift | 07:04 | |
*** dharmit_ has quit IRC | 07:12 | |
*** haomaiwang has joined #openstack-swift | 07:12 | |
*** haomaiwa_ has quit IRC | 07:15 | |
*** csd has joined #openstack-swift | 07:22 | |
*** davidhadas has quit IRC | 07:35 | |
*** sungju has joined #openstack-swift | 07:36 | |
*** sungju_ has joined #openstack-swift | 07:38 | |
*** sungju_ has quit IRC | 07:39 | |
*** sungju_ has joined #openstack-swift | 07:40 | |
*** sungju has quit IRC | 07:41 | |
*** sungju_ has quit IRC | 07:47 | |
*** nshaikh has quit IRC | 07:51 | |
*** nshaikh has joined #openstack-swift | 08:08 | |
*** nshaikh has quit IRC | 08:15 | |
openstackgerrit | Zhang Jinnan proposed a change to openstack/python-swiftclient: Use six.StringIO instead of StringIO.StringIO https://review.openstack.org/74705 | 08:15 |
*** mlipchuk has joined #openstack-swift | 08:21 | |
*** nshaikh has joined #openstack-swift | 08:24 | |
*** nshaikh has quit IRC | 08:25 | |
*** nshaikh has joined #openstack-swift | 08:26 | |
*** davidhadas has joined #openstack-swift | 08:27 | |
*** csd has quit IRC | 08:29 | |
*** nshaikh has quit IRC | 08:32 | |
*** therve_ has joined #openstack-swift | 08:33 | |
*** haomaiwang has quit IRC | 08:35 | |
*** haomaiwang has joined #openstack-swift | 08:35 | |
*** davidhadas_ has joined #openstack-swift | 08:36 | |
*** nshaikh has joined #openstack-swift | 08:37 | |
*** davidhadas has quit IRC | 08:39 | |
*** saju_m has quit IRC | 08:40 | |
*** saju_m has joined #openstack-swift | 08:42 | |
*** therve_ has quit IRC | 08:45 | |
*** Trixboxer has joined #openstack-swift | 08:45 | |
*** sungju has joined #openstack-swift | 08:45 | |
*** Dharmit has joined #openstack-swift | 08:48 | |
openstackgerrit | ChangBo Guo(gcb) proposed a change to openstack/swift: Add common units constants https://review.openstack.org/76490 | 08:49 |
openstackgerrit | ChangBo Guo(gcb) proposed a change to openstack/swift: Replace mumeric expression with units constants https://review.openstack.org/76523 | 08:50 |
*** saju_m has quit IRC | 09:02 | |
*** chandan_kumar has joined #openstack-swift | 09:03 | |
*** mlipchuk has quit IRC | 09:03 | |
*** sungju has quit IRC | 09:04 | |
*** mkollaro has joined #openstack-swift | 09:20 | |
*** tanee-away is now known as tanee | 09:21 | |
*** Dharmit has quit IRC | 09:33 | |
*** saju_m has joined #openstack-swift | 09:34 | |
*** mrsnivvel has joined #openstack-swift | 09:41 | |
*** nacim has joined #openstack-swift | 09:43 | |
*** mlipchuk has joined #openstack-swift | 10:03 | |
*** Midnightmyth has joined #openstack-swift | 10:06 | |
*** mkollaro has quit IRC | 10:14 | |
*** mkollaro has joined #openstack-swift | 10:21 | |
*** saju_m has quit IRC | 10:25 | |
*** saju_m has joined #openstack-swift | 10:41 | |
*** vills_ has joined #openstack-swift | 10:54 | |
vills_ | hi, guys. Is there any way to stop synchronization between containers, without deleting the container itself? | 10:55 |
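vills_'s question goes unanswered in this log. For reference: container-to-container sync in Swift is driven by the X-Container-Sync-To and X-Container-Sync-Key metadata on the source container, so blanking the sync-to value stops synchronization without deleting anything. A sketch — the container name, token, and storage URL below are placeholders:

```shell
# Stop container sync by clearing X-Container-Sync-To on the source
# container; the container and its objects are untouched.
swift post -t "" mycontainer

# Equivalent raw request ($TOKEN and $STORAGE_URL are placeholders):
curl -X POST \
     -H "X-Auth-Token: $TOKEN" \
     -H "X-Container-Sync-To: " \
     "$STORAGE_URL/mycontainer"
```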
*** saju_m has quit IRC | 11:11 | |
*** nosnos has quit IRC | 11:21 | |
*** saju_m has joined #openstack-swift | 11:25 | |
*** nshaikh has quit IRC | 11:43 | |
*** erlon has joined #openstack-swift | 11:45 | |
*** foexle has joined #openstack-swift | 12:29 | |
*** ppai has quit IRC | 12:37 | |
*** mkollaro has quit IRC | 12:45 | |
*** chandan_kumar has quit IRC | 12:48 | |
*** chandan_kumar has joined #openstack-swift | 12:49 | |
*** saju_m has quit IRC | 12:52 | |
*** chandankumar_ has joined #openstack-swift | 12:54 | |
*** vills_ has quit IRC | 12:55 | |
*** chandan_kumar has quit IRC | 12:57 | |
*** mkollaro has joined #openstack-swift | 13:08 | |
*** acoles has joined #openstack-swift | 13:17 | |
*** tristanC has quit IRC | 13:30 | |
*** tristanC has joined #openstack-swift | 13:30 | |
*** swifterdarrell has quit IRC | 13:32 | |
*** swifterdarrell has joined #openstack-swift | 13:32 | |
*** ChanServ sets mode: +v swifterdarrell | 13:32 | |
tristanC | Hello folks! | 13:40 |
tristanC | I'm not sure what to answer to http://lists.openstack.org/pipermail/openstack/2014-February/005662.html (swiftclient compatibility for grizzly) | 13:40 |
tristanC | I will test swiftclient 2.0 against a grizzly devstack, but what is the official status on that ? | 13:41 |
*** creiht has quit IRC | 13:43 | |
*** miurahr has joined #openstack-swift | 13:43 | |
*** creiht has joined #openstack-swift | 13:43 | |
*** ChanServ sets mode: +v creiht | 13:43 | |
*** clarkb has quit IRC | 13:44 | |
*** mrsnivvel has quit IRC | 13:44 | |
*** miurahr has quit IRC | 13:49 | |
*** tdasilva has quit IRC | 13:49 | |
*** vills_ has joined #openstack-swift | 13:50 | |
*** fifieldt has joined #openstack-swift | 13:51 | |
*** rustlebee is now known as russellb | 13:56 | |
*** koolhead17 has quit IRC | 13:57 | |
*** mrsnivvel has joined #openstack-swift | 13:58 | |
*** tongli has joined #openstack-swift | 14:01 | |
*** koolhead17 has joined #openstack-swift | 14:03 | |
*** acoles has quit IRC | 14:09 | |
*** thomaschaaf has joined #openstack-swift | 14:12 | |
thomaschaaf | Hey is there a way to get proxy-server to return caching headers? I am running it behind pound. | 14:12 |
thomaschaaf | I simply want to specify that the client should cache as the headers currently don't specify that. Thus chrome fetches it every time | 14:13 |
thomaschaaf | I have WWW -> pound for ssl -> proxy-server set up. | 14:14 |
notmyname | thomaschaaf: sort of | 14:15 |
notmyname | thomaschaaf: first (and just a quick sidestep), I'd suggest you look into HAProxy or stud or stunnel for SSL termination. pound, while it works, also prevents use of expect 100 continue. not the end of the world, but also not great | 14:16 |
notmyname | thomaschaaf: here's what you could do to get the headers: add the caching headers you want to save to the object server config file: https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample#L88 | 14:17 |
notmyname | thomaschaaf: then you can save those headers on a per-object basis with a PUT or POST | 14:18 |
thomaschaaf | Na I'd rather just replace pound then | 14:18 |
notmyname | thomaschaaf: and while that's not always an ideal situation (eg you may want to dynamically set them) it would work | 14:18 |
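A sketch of what notmyname describes above: the object server only persists headers named in its allowed_headers whitelist, so the caching headers have to be whitelisted before a PUT or POST can save them. The config entries and names below are illustrative — check the linked conf sample for the actual defaults:

```shell
# object-server.conf — extend allowed_headers so Cache-Control and
# Expires get persisted with each object (the first four entries are
# the usual defaults):
#
#   [app:object-server]
#   allowed_headers = Content-Disposition, Content-Encoding,
#       X-Delete-At, X-Object-Manifest, Cache-Control, Expires
#
# Then set the header per object with a POST ($TOKEN, $STORAGE_URL,
# and the container/object names are placeholders):
curl -X POST \
     -H "X-Auth-Token: $TOKEN" \
     -H "Cache-Control: public, max-age=2592000" \
     "$STORAGE_URL/mycontainer/myobject"
```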
notmyname | thomaschaaf: pound vs stud is orthogonal to caching headers | 14:18 |
thomaschaaf | Okay thank you notmyname which of the three would you use? | 14:19 |
thomaschaaf | okay so stunnel or haproxy? | 14:19 |
notmyname | what are you using for load balancing? | 14:19 |
thomaschaaf | there is no loadbalancing going on for now | 14:19 |
*** fifieldt has quit IRC | 14:19 | |
thomaschaaf | simply have 3 servers with dns round robin | 14:20 |
notmyname | ok | 14:20 |
*** koolhead17 has left #openstack-swift | 14:20 | |
notmyname | then stud is really simple to set up | 14:20 |
notmyname | that's what I'd look at first | 14:20 |
thomaschaaf | but with stud I can't set headers or am I wrong? | 14:20 |
notmyname | correct. stud only does ssl termination. setting headers is something else entirely | 14:21 |
thomaschaaf | I'd rather have one application do both | 14:21 |
*** mkollaro has quit IRC | 14:21 | |
notmyname | I'm not sure if you can. what are the rules you have for setting the headers? | 14:22 |
*** peluse has quit IRC | 14:22 | |
thomaschaaf | add_header Pragma "public"; add_header Cache-Control "public"; expires 30d; | 14:22 |
thomaschaaf | that is all I really want :) | 14:22 |
thomaschaaf | any file in our cdn will have a unique file name | 14:23 |
*** peluse has joined #openstack-swift | 14:23 | |
notmyname | thomaschaaf: ah, cool. so HAProxy then :-) | 14:23 |
ahale | I've never done it, but you should be able to modify headers with haproxy, and the recent dev releases have the stud ssl termination code which is fine | 14:23 |
notmyname | thomaschaaf: with HAProxy, I'm told that you should be using the release from last December | 14:24 |
thomaschaaf | thank you :) | 14:24 |
notmyname | thomaschaaf: ya, because of what ahale said | 14:24 |
thomaschaaf | how do you guys set the http headers? | 14:25 |
* notmyname punts to ahale | 14:25 | |
ahale | hehe thats the bit i've never played with | 14:25 |
notmyname | me neither | 14:26 |
ahale | the rspadd and rspdel commands in a backend stanza look like they will help | 14:26 |
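An untested sketch of ahale's rspadd suggestion — the backend name and address are placeholders, and rspadd/rspdel are the pre-1.6 HAProxy directives (newer releases spell this `http-response set-header`):

```
backend swift_proxy
    server proxy1 127.0.0.1:8080 check
    # Append caching headers to every response from the Swift proxy
    rspadd Cache-Control:\ public
    rspadd Pragma:\ public
```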
thomaschaaf | No I am talking about in general? Not with haproxy | 14:26 |
thomaschaaf | because it seems awkward not caching at all | 14:26 |
ahale | we (rax) put a CDN in front of the stuff we want to cache | 14:27 |
thomaschaaf | cdn being something like varnish? | 14:27 |
ahale | yeah pretty much | 14:27 |
notmyname | ahale: akamai is now laughing at you ;-) | 14:28 |
*** jamieh has joined #openstack-swift | 14:29 | |
ahale | :) | 14:29 |
jamieh | how do you upload a directory with the cli python-swiftclient? | 14:30 |
jamieh | swift post container ./test is not working | 14:30 |
*** ChanServ changes topic to "Current Swift Release: 1.13.0 | Priority Reviews: https://wiki.openstack.org/wiki/Swift/PriorityReviews | Channel Logs: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/" | 14:31 | |
notmyname | Swift 1.13 released http://lists.openstack.org/pipermail/openstack-dev/2014-March/028691.html | 14:31 |
jamieh | it's `upload` not `post` :) | 14:31 |
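For the record, jamieh's own fix: `post` only creates containers or sets metadata, while `upload` walks a local path and PUTs each file as an object:

```shell
# Upload the contents of ./test into "container"
# (each file becomes an object named after its relative path):
swift upload container ./test
```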
thomaschaaf | I'm pleased to announce :D | 14:32 |
notmyname | yay :( | 14:33 |
*** mlipchuk has quit IRC | 14:33 | |
*** mlipchuk has joined #openstack-swift | 14:34 | |
*** mkollaro has joined #openstack-swift | 14:44 | |
tristanC | notmyname: Could you please help me on this question: http://lists.openstack.org/pipermail/openstack/2014-February/005662.html (swiftclient compatibility for grizzly) ? | 14:45 |
tristanC | I did some basic tests with swiftclient 2.0 on grizzly, and it's working... | 14:46 |
notmyname | tristanC: the bump to 2.x included some API changes (in method parameters and CLI options), so the best I can do is "maybe" | 14:46 |
notmyname | I'm going to be at the OpenStack operators gathering today (https://www.eventbrite.co.uk/e/openstack-march-2014-operators-gathering-tickets-10318369521) | 14:48 |
notmyname | I'm not sure how much I'll be online today | 14:48 |
*** jergerber has joined #openstack-swift | 14:49 | |
*** ccorrigan has quit IRC | 14:58 | |
*** bada has joined #openstack-swift | 15:04 | |
*** lpabon has joined #openstack-swift | 15:04 | |
*** tdasilva has joined #openstack-swift | 15:05 | |
*** dmsimard has joined #openstack-swift | 15:07 | |
*** mrsnivvel has quit IRC | 15:14 | |
*** acoles has joined #openstack-swift | 15:36 | |
*** mlipchuk has quit IRC | 15:36 | |
*** mlipchuk has joined #openstack-swift | 15:37 | |
*** clarkb has joined #openstack-swift | 15:41 | |
*** chandan_kumar has joined #openstack-swift | 15:41 | |
*** mjseger has joined #openstack-swift | 15:49 | |
mjseger | notmyname: portante: clayg: I'm seeing some weird timing numbers across the proxy/object servers I'm hoping someone can explain. specifically... | 15:52 |
mjseger | I'm doing a bunch of 1K PUTs and recording the latency numbers for anything > 2 seconds. then I grep for the transaction IDs in the logs to see where the time is being spent | 15:52 |
mjseger | what I'm seeing is instances where 2 and sometimes all 3 object servers have almost identical times | 15:53 |
mjseger | does this mean the proxy is delaying between first connecting and then sending the data? is this a known problem? | 15:53 |
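The log cross-referencing mjseger describes can be sketched like this: group per-server PUT durations by transaction ID, then flag transactions where two or more servers report nearly identical slow times. The one-line log format assumed here is a simplified stand-in, not Swift's real log format:

```python
from collections import defaultdict

def group_by_txn(lines):
    """Group (server, duration) pairs by transaction ID.

    Each line is assumed to look like:
        <server> PUT <txn_id> <duration_seconds>
    a simplified stand-in for real Swift proxy/object log lines.
    """
    txns = defaultdict(list)
    for line in lines:
        server, verb, txn, dur = line.split()
        if verb == "PUT":
            txns[txn].append((server, float(dur)))
    return txns

def suspicious(txns, threshold=2.0, tolerance=0.05):
    """Return txn IDs where 2+ servers were slow with near-identical times."""
    flagged = []
    for txn, timings in txns.items():
        slow = sorted(d for _, d in timings if d > threshold)
        if len(slow) >= 2 and max(slow) - min(slow) < tolerance:
            flagged.append(txn)
    return flagged
```

With per-txn timings in hand, the interesting cases are exactly the ones mjseger calls out: multiple object servers stalling for the same duration, which points at the proxy rather than any one disk.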
luisbg | mjseger, 1K PUTs as in 1kilobyte? | 15:58 |
luisbg | or 1 thousand PUTs | 15:58 |
mjseger | 1024 bytes/put | 15:58 |
mjseger | sorry for the confusion | 15:59 |
luisbg | mjseger, no problem! don't mind asking and clarifying | 15:59 |
mjseger | just wondering if anyone has done any detailed investigation of individual timings before | 16:00 |
luisbg | mjseger, it would make sense for the objects to be written at the same time, the proxy needs to decide which 3 partitions to write to (which is the bulk of its job) and once that's done, it can write to the object servers quickly in parallel | 16:01 |
luisbg | mjseger, I'm too new around here to know about historical performance tests/benchmarks | 16:01 |
luisbg | mjseger, that said, > 2 seconds sounds very high | 16:02 |
mjseger | let me back up a few... | 16:02 |
luisbg | mjseger, but how fast the proxy can be depends on the infrastructure as well | 16:03 |
mjseger | in general swift takes < 0.1 seconds to do a 1KB PUT, though occasionally they take a little longer. if you look at the server.log for the object servers involved, you can see how long each server spent on the operation | 16:03 |
luisbg | mjseger, ooooh! I see | 16:03 |
luisbg | mjseger, I'm sure core developers would be interested to look at the logs | 16:03 |
*** chandan_kumar has quit IRC | 16:03 | |
mjseger | since each server is independent of the others you typically see different times for each | 16:03 |
*** bada has quit IRC | 16:03 | |
luisbg | mjseger, why don't you paste them somewhere (pastie.org/pastebin.com) | 16:03 |
mjseger | in this case I'm seeing almost identical times for each and am trying to understand what's wrong | 16:04 |
mjseger | there is too much data in the logs to paste and they're difficult to read. that's why I wrote my own analysis script that summarizes the data I'm seeing, explicitly ONLY the exceptions. hang on and I'll paste it. | 16:05 |
mjseger | so during this one set of tests, there were 8 PUTs that all took over 2 seconds: see http://paste.openstack.org/show/71791/ | 16:07 |
*** dmsimard1 has joined #openstack-swift | 16:07 | |
mjseger | I've identified the times for the proxy, the 3 object servers and the 3 container servers | 16:07 |
mjseger | I'm also including the names of the disks, because I can also look at the second-by-second collectl data to see if the disk latencies are too long but haven't done so in these cases because clearly the problem must be elsewhere | 16:08 |
*** dmsimard has quit IRC | 16:08 | |
*** dmsimard has joined #openstack-swift | 16:11 | |
*** dmsimard1 has quit IRC | 16:13 | |
*** chandan_kumar has joined #openstack-swift | 16:16 | |
*** tanee is now known as tanee-away | 16:25 | |
*** tanee-away is now known as tanee | 16:26 | |
*** dtalton has joined #openstack-swift | 16:29 | |
*** dtalton has left #openstack-swift | 16:29 | |
*** mlipchuk has quit IRC | 16:45 | |
*** davidhadas_ has quit IRC | 16:52 | |
*** thomaschaaf has quit IRC | 16:53 | |
*** mlipchuk has joined #openstack-swift | 16:59 | |
*** tanee is now known as tanee-away | 16:59 | |
*** tanee-away is now known as tanee | 16:59 | |
*** tanee is now known as tanee-away | 17:00 | |
*** gyee has joined #openstack-swift | 17:05 | |
*** tanee-away is now known as tanee | 17:13 | |
*** chandan_kumar has quit IRC | 17:25 | |
*** davidhadas has joined #openstack-swift | 17:29 | |
*** vills_ has quit IRC | 17:31 | |
*** nacim has quit IRC | 17:42 | |
*** tanee is now known as tanee-away | 17:47 | |
*** shri has joined #openstack-swift | 17:55 | |
*** piyush1 has joined #openstack-swift | 17:55 | |
*** mlipchuk has quit IRC | 18:02 | |
*** gyee has quit IRC | 18:03 | |
*** odyssey4me has joined #openstack-swift | 18:08 | |
*** dmsimard1 has joined #openstack-swift | 18:08 | |
*** dmsimard has quit IRC | 18:09 | |
*** sfineberg has quit IRC | 18:17 | |
*** zul has quit IRC | 18:18 | |
*** zaitcev has joined #openstack-swift | 18:19 | |
*** ChanServ sets mode: +v zaitcev | 18:19 | |
*** sfineberg has joined #openstack-swift | 18:20 | |
*** zul has joined #openstack-swift | 18:21 | |
*** davidhadas has quit IRC | 18:21 | |
*** davidhadas has joined #openstack-swift | 18:22 | |
notmyname | acoles: around? | 18:23 |
*** dmsimard has joined #openstack-swift | 18:27 | |
*** dmsimard1 has quit IRC | 18:29 | |
*** davidhadas has quit IRC | 18:31 | |
*** dmsimard1 has joined #openstack-swift | 18:35 | |
*** dmsimard has quit IRC | 18:36 | |
*** gyee has joined #openstack-swift | 18:38 | |
*** dmsimard1 has quit IRC | 18:44 | |
*** dmsimard has joined #openstack-swift | 18:45 | |
*** dmsimard1 has joined #openstack-swift | 18:47 | |
*** dmsimard has quit IRC | 18:49 | |
*** piyush1 has quit IRC | 18:55 | |
acoles | notmyname: just back, seen email | 18:56 |
notmyname | acoles: kk | 18:56 |
*** piyush1 has joined #openstack-swift | 19:08 | |
*** piyush1 has quit IRC | 19:12 | |
*** piyush1 has joined #openstack-swift | 19:13 | |
*** piyush2 has joined #openstack-swift | 19:16 | |
*** piyush1 has quit IRC | 19:17 | |
*** piyush1 has joined #openstack-swift | 19:18 | |
mjseger | notmyname: are you still around? | 19:20 |
notmyname | mjseger: yup | 19:20 |
mjseger | just wondering if you saw my posting about what I've been seeing with latencies. I can restate if need be | 19:21 |
*** piyush2 has quit IRC | 19:21 | |
notmyname | mjseger: here in IRC earlier today? ya, and I'm rereading it now | 19:22 |
mjseger | notmyname: yes, that's what I'm referring to. just trying to understand the numbers and whether they indicate problem or not. my guess is a problem | 19:22 |
notmyname | mjseger: I think you're potentially finding some stuff that hasn't been as specifically examined before. and that's great. looks like you're finding some things, and I'd love to see further data (and patches) | 19:25 |
mjseger | notmyname: I have lots of data, did you see what I posted in http://paste.openstack.org/show/71791/? I actually have data spread out over a month! the real question is how best to move forward | 19:27 |
mjseger | notmyname: one thought I'd had is to add more debugging to both the proxy and the object servers to show finer grained timings so we can tell exactly where the delays are | 19:27 |
mjseger | another thought is maybe some people would be willing to get together and go over this in more detail at the summit? | 19:28 |
mjseger | or maybe do some independent testing | 19:28 |
mjseger | the whole idea there is one can run getput, instructing it to save long latencies (you pick the number) to a file with transaction IDs, then grep through the server/proxy logs to see what the numbers say | 19:29 |
mjseger | the typical situation is one object server is slow as expected and the other 2 are fast, also as expected | 19:29 |
mjseger | but sometimes 2 object servers are slow and even other times all 3 are slow | 19:30 |
mjseger | sometimes an object server reports a >> time than getput sees. for example, I'll record an 8 second latency and one server may report something like 10 or 20 seconds. | 19:30 |
mjseger | I've even seen at least one case where an object server reported a latency of over 500 seconds?!?!? what's with that? | 19:31 |
notmyname | you've got the data, but I wonder if you're seeing something about the way swift is using eventlet | 19:32 |
notmyname | it sounds like you've identified an issue, and so you need to next isolate it. then figure out if configs and/or patches can solve it | 19:32 |
*** vu has joined #openstack-swift | 19:37 | |
mjseger | notmyname: sorry, I had to step away. the first thing I wanted to do was at least get some confirmation from you that this sounds like a problem before I jump in with both feet. sounds like you're saying it is. | 19:47 |
notmyname | mjseger: yes, it sounds like something that may in fact be an issue. and honestly doesn't surprise me too much. very few (if any) prod deployments I know of have focused on timings of small objects | 19:48 |
mjseger | notmyname: so as I said my thought is to more heavily instrument the proxy/swift server code to print out more detailed records along the way to show where time is being spent. sound reasonable? | 19:48 |
notmyname | mjseger: yes, that sounds reasonable. to be explicit though, I think you should first do this locally, and only maybe should it be upstream | 19:49 |
notmyname | depends on what the instrumentation is :-) | 19:50 |
mjseger | notmyname: absolutely, I'm not even suggesting doing anything upstream [yet]. the interesting thing about small object timings is these delays really smack you in the face. when a PUT of a larger object takes multiple seconds you don't even notice them. | 19:50 |
notmyname | mjseger: ya, exactly | 19:53 |
notmyname | mjseger: and I'm glad you're looking at it. thanks | 19:54 |
mjseger | stay tuned ;) | 19:54 |
*** dmsimard has joined #openstack-swift | 19:54 | |
*** dmsimard1 has quit IRC | 19:57 | |
portante | mjseger: does slide 35 match some of your findings: http://www.openstack.org/assets/presentation-media/openstackperformance-v4.pdf | 20:03 |
luisbg | portante, interesting presentation :) | 20:04 |
mjseger | portante: isn't that about large objects? | 20:05 |
mjseger | ok wait, I see it also talks about 3K? | 20:05 |
portante | it is a simple example of 5 clients, each churning on 1 size, and how they affect the large object | 20:06 |
mjseger | portante: notmyname: also to be clear, the numbers I'm reporting came from an analysis of a month's worth of data during which I wrote small objects for 2 mins every 1/2 hour. during that time, the vast majority took <2 seconds with a relatively small handful taking longer. it was those longer ones I compared the times against | 20:08 |
mjseger | I suppose one could do this for all operations or even those with latencies in the 1/2sec or even 1sec range, but there'd clearly be a lot more | 20:08 |
notmyname | mjseger: you might be interested in https://review.openstack.org/#/c/53270/ | 20:09 |
notmyname | mjseger: to test out and see if it can give you any info without needing to do a ton of your own instrumentation | 20:10 |
mjseger | notmyname: thx for the pointer | 20:10 |
*** acoles has left #openstack-swift | 20:44 | |
*** mkollaro has quit IRC | 20:51 | |
*** mkollaro has joined #openstack-swift | 20:52 | |
*** jamieh has quit IRC | 21:07 | |
openstackgerrit | David Goetz proposed a change to openstack/swift: Make cors work better. https://review.openstack.org/69419 | 21:15 |
notmyname | dfg: again? ;-) | 21:16 |
dfg | chmouel: ok- can you check out that change again? i don't know what the problems you had before are. I was testing with Chrome | 21:16 |
dfg | notmyname: ya. i've been dealing with other crap | 21:17 |
luisbg | hahahaa | 21:17 |
notmyname | dfg: ah, I first thought it was a new patch. this is another changeset on the same patch | 21:17 |
luisbg | topic: "cors_strikes_again_2" | 21:17 |
luisbg | dfg, hahaha ^ | 21:17 |
dfg | notmyname: ya- the only diff between this patch and the last is that its not just a change for static web anymore. its everything | 21:17 |
notmyname | cool | 21:18 |
dfg | luisbg: :) i really hate CORS | 21:18 |
dfg | well- its just so much annoyance for such little gain | 21:18 |
luisbg | dfg, stay strong | 21:19 |
dfg | haha | 21:19 |
*** foexle has quit IRC | 21:20 | |
notmyname | I honestly think I'd like CORS better if the spec was all in one color and not a rainbow of cross-referenced terms | 21:20 |
notmyname | it's just so hard to read | 21:20 |
*** tongli has quit IRC | 21:20 | |
luisbg | notmyname, ? going to google that | 21:20 |
dfg | ya. its not really all that bad. but its history with cloudfiles has been a little irritating | 21:21 |
notmyname | maybe it's gotten better http://www.w3.org/TR/cors/ | 21:21 |
luisbg | notmyname, I am sure you are busy catching up after the workshop, ping me whenever you have time to look at some review that needs the final push (tomorrow or whenever) | 21:21 |
luisbg | don't want to be annoying or distracting | 21:21 |
notmyname | dfg: looks like http://www.w3.org/TR/cors/ was updated in january this year | 21:21 |
luisbg | very colorful indeed! | 21:22 |
notmyname | dfg: thanks for working on it | 21:22 |
dfg | np- hopefully it works | 21:23 |
gholt | Best thing for the CORS spec: Just adjust your monitor to greyscale. | 21:23 |
notmyname | there's a lot of good comments at the ops workshop today. fortunately, most don't apply to swift ;-) | 21:23 |
notmyname | gholt: heh | 21:23 |
luisbg | notmyname, is that because swift works too well for people to comment with improvements? | 21:26 |
*** piyush1 has quit IRC | 21:32 | |
notmyname | here's an idea that was proposed today at the ops workshop: instead of blueprints, use text files in a git repo for design docs, and use gerrit to propose and discuss changes | 21:41 |
luisbg | idea is good, except gerrit :S | 21:43 |
creiht | lol | 21:44 |
creiht | that's when the gating on changes starts | 21:44 |
*** tdasilva has left #openstack-swift | 21:44 | |
creiht | sorry, your change proposal failed because your title ends with a period | 21:44 |
notmyname | creiht: the TC has started using such a system for their motions, and they use the no-op gate | 21:44 |
torgomatic | "instead of blueprints, use X" gets an automatic +1 from me for just about all values of X | 21:44 |
creiht | haha | 21:45 |
clarkb | notmyname: have you been following the storyboard stuff at all? might be worth discussing with them too | 21:45 |
luisbg | torgomatic, hehehe | 21:45 |
luisbg | torgomatic, X = telegrams sent to your house | 21:45 |
notmyname | clarkb: storyboard is something that always comes up as a "ya there is this thing, but it's not ready yet" | 21:45 |
clarkb | notmyname: right, but if you have specific needs that need addressing bettter to bring them up now than later | 21:46 |
torgomatic | luisbg: at least with telegrams I get notifications :) | 21:46 |
clarkb | notmyname: even if you can't use it today | 21:46 |
notmyname | clarkb: yup. there is a quite a list of things that have been generated today. fifieldt taking notes | 21:46 |
luisbg | torgomatic, +1 from me | 21:46 |
*** piyush1 has joined #openstack-swift | 22:01 | |
*** piyush2 has joined #openstack-swift | 22:03 | |
*** piyush1 has quit IRC | 22:05 | |
*** erlon has quit IRC | 22:12 | |
notmyname | high praise for swift in the openstack operators summit today. the question was asked about any operator questions with swift (the whole day is for operator feedback on all of openstack), and the answer was just, "no issues. swift just works" | 22:22 |
creiht | nice | 22:23 |
zaitcev | sounds a little surprising, I expected them to have seen at least some stuck replicators and hotspots | 22:26 |
notmyname | ya. it's not like we are perfect. but swift scales. and you can upgrade it. etc | 22:29 |
notmyname | here's some notes from it: | 22:30 |
notmyname | swift | 22:30 |
notmyname | it just works | 22:30 |
notmyname | it's a simple model | 22:30 |
notmyname | there's no message queue | 22:30 |
notmyname | there's no database | 22:30 |
notmyname | how did you instill this mentality into swift? the people | 22:30 |
notmyname | the followup question (after the "no it just works" answer) was "how do you get here? how do you get a team in openstack to do so well at this?" which gave me a great opportunity for me to brag on all of you :-) | 22:31 |
*** Trixboxer has quit IRC | 22:32 | |
shri | Hey Swifters… I asked a question on the mailing list about optimizing a single node swift instance. http://www.gossamer-threads.com/lists/openstack/dev/36267 | 22:44 |
shri | Any suggestions? | 22:44 |
*** miurahr has joined #openstack-swift | 22:45 | |
*** sungju has joined #openstack-swift | 22:45 | |
creiht | shri: I was thinking about that actually | 22:47 |
creiht | shri: how many replicas are you using in your ring? | 22:47 |
creiht | and how many disks on the node | 22:48 |
shri | for now, I'm fine not using any replicas. We have one VM and that has 1 500G disk backed by SSD. | 22:49 |
creiht | for your tests that you ran earlier how many replicas did it have? | 22:49 |
shri | There was no redundancy. No replication. There were no replicas of each object. | 22:50 |
creiht | k | 22:51 |
creiht | are you benchmarking on the same machine or on another client machine? | 22:52 |
shri | On another client machine on the same network. | 22:52 |
creiht | how fast is the network? | 22:52 |
*** mkollaro has quit IRC | 22:53 | |
creiht | shri: and with your fastest result, how many containers were you using? | 22:54 |
creiht | btw, your dd is going to be a bit misleading | 22:54 |
creiht | because that is straight throughput to the disk | 22:54 |
creiht | whereas benchmarking swift is all going to be random io in your case | 22:54 |
shri | I have a 1Gig connection to the Swift instance. About sharding across containers, yes, I was using that. Data was being spread across 32 containers. | 22:57 |
creiht | k | 22:57 |
shri | Agree that dd will be different than Swift. But wanted to check just how much can the disk support. And thereby how much is the Swift overhead. | 22:58 |
creiht | in the client that is benchmarking, at what concurrency are you sending the requests? | 22:58 |
creiht | shri: yes, but sequential io can be orders of magnitude faster than random io | 22:59 |
physcx | notmyname, luisbg: I mentioned during the NYC workshop that object_gets with resp_chunk_size set was returning a generator that finished iterating the data of an object without exception but still missing a large portion of some objects -- well we still have the issue with swiftclient 2.0.3 and looking thru the logs we find this error within the proxy logs "proxy-server ERROR with Object server 172.17.2.40:6000/disk3p1 re: Trying to read during GET: ChunkReadTimeout (10s) (txn: tx99c8b2a0f8304a0ea9554-0053150156)" followed by "proxy-server Trying to send to client: #012Traceback (most recent call last):#012 File "/usr/lib/python2.7/dist-packages/swift/proxy/controllers/base.py", line 971, in _make_app_iter#012 raise Exception(_('Failed to read all data'#012Exception: Failed to read all data from the source (txn: tx99c8b2a0f8304a0ea9554-0053150156)" -- we checked the object servers and they are showing success for the problem GETs and within our python app no exceptions are being raised and it is proceeding as if the generator iterating over the objects data finished normally despite only receiving a fraction of the total object | 22:59 |
physcx | sorry for the ultra long msg | 22:59 |
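The symptom physcx describes -- the chunked GET generator ending early without raising when the proxy hits a ChunkReadTimeout -- can be detected on the client side by comparing bytes received against the Content-Length header. `read_checked` below is a hypothetical helper, not part of python-swiftclient; the `get_object(..., resp_chunk_size=...)` call shape in the commented usage follows the python-swiftclient Connection API mentioned in the log:

```python
def read_checked(headers, chunks):
    """Yield chunks from a swiftclient-style GET, raising if the stream
    ends before Content-Length bytes have arrived (hypothetical helper)."""
    expected = int(headers['content-length'])
    received = 0
    for chunk in chunks:
        received += len(chunk)
        yield chunk
    if received != expected:
        raise IOError('short read: got %d of %d bytes' % (received, expected))

# Usage sketch, assuming a python-swiftclient Connection named conn:
# headers, body = conn.get_object('container', 'obj', resp_chunk_size=65536)
# for chunk in read_checked(headers, body):
#     process(chunk)
```

This does not fix the proxy-side timeout, but it turns a silent truncation into an exception the application can retry on.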
shri | creiht: 64 parallel threads, each writing an object of 64K. 10K objects in total. | 22:59 |
creiht | physcx: paste.openstack.org next time please :) | 22:59 |
luisbg | creiht, +1 | 22:59 |
physcx | ok | 23:00 |
creiht | shri: ok so a couple of thoughts | 23:00 |
creiht | I think your workers are set way too high for a single disk (even if ssd) | 23:00 |
*** piyush2 has left #openstack-swift | 23:01 | |
creiht | I would back those down quite a bit, and run iterative tests increasing each time to see where things start to bottom out | 23:01 |
shri | Should I reduce the # of workers for all servers (proxy, account, containers, object)? | 23:02 |
creiht | yes | 23:02 |
creiht | how many cores do you have? | 23:02 |
*** jergerber has quit IRC | 23:02 | |
creiht | and did you watch cpu utilization while the test was happening? | 23:02 |
shri | 8 cores.. I briefly looked at whether anything was hogging the CPU, but it didn't look like it. | 23:04 |
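creiht's advice to back the workers down applies to all four server configs. A hypothetical single-node tuning, assuming the 8-core box from the log (the exact count to settle on should come from the iterative tests creiht suggests):

```ini
# In each of proxy-server.conf, account-server.conf,
# container-server.conf and object-server.conf:
[DEFAULT]
workers = 2
```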
creiht | k | 23:04 |
shri | There was a suggestion of changing the hash so that the number of directories per object can be reduced. Can that work? | 23:06 |
creiht | also if you are doing 20MB/s and 64k objects, that averages out to 320 requests/s | 23:07 |
shri | I meant, changing the hash so that more objects get placed in the same directory. Fewer directories get created. | 23:07 |
creiht | which isn't too bad for everything going to one node on one vm | 23:07 |
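creiht's 320 requests/s figure follows directly from the numbers in the log (20 MB/s of 64 KiB objects, treating 1 MB as 1024 KiB):

```python
throughput_mb_s = 20        # observed aggregate write throughput
object_kib = 64             # object size used in the benchmark
requests_per_sec = throughput_mb_s * 1024 // object_kib
print(requests_per_sec)     # 320 requests/s, matching creiht's estimate
```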
creiht | shri: what partition power are you using in your ring? | 23:08 |
creiht | if you put all the files in the same dir, then you run into filesystem issues with having too many files in a single directory | 23:08 |
physcx | a bit cleaner: http://paste.openstack.org/show/71934/ | 23:08 |
creiht | shri: if you have a large number of partitions, you could make that smaller (assuming you never want to grow beyond one node) | 23:10 |
shri | creiht: Oh.. so the partition power decides how many files go into each directory? | 23:10 |
creiht | which would cause fewer partition dirs to get created | 23:10 |
creiht | shri: kind of | 23:10 |
shri | cool.. so there is already a knob I can play with.. let me look into this. | 23:11 |
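The knob creiht is describing works because the ring has 2 ** part_power partitions, and each partition becomes a top-level directory on the disk; a smaller partition power therefore means fewer directories. A rough sketch of the mapping -- Swift's real implementation also mixes a hash_path prefix/suffix from swift.conf into the md5, which is omitted here for brevity:

```python
import struct
from hashlib import md5

def partition_for(path, part_power):
    """Map an object path to a ring partition, roughly the way Swift does:
    the top part_power bits of the md5 of the path."""
    digest = md5(path.encode('utf-8')).digest()
    return struct.unpack_from('>I', digest)[0] >> (32 - part_power)

for p in (8, 12, 18):
    print('part_power=%d -> %d partitions' % (p, 2 ** p))
```

So dropping from, say, part_power 18 to 8 shrinks the number of possible partition directories from 262144 to 256 -- which is why creiht qualifies it with "assuming you never want to grow beyond one node": the partition power cannot be changed after the ring is built without rehashing everything.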
creiht | another knob you could try is playing with object_chunk_size and client_chunk_size | 23:12 |
creiht | if you know all requests are going to be 64k, you could try setting each of those to 64k | 23:12 |
creiht | in etc/swift/proxy-server.conf | 23:13 |
shri | objects are guaranteed to have a max size of 64k. | 23:13 |
shri | isn't 64K the default value of object_chunk_size and client_chunk_size? | 23:14 |
*** Midnightmyth has quit IRC | 23:14 | |
creiht | oh nm, that defaults to 64k, the default is wrong in the sample config | 23:14 |
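For reference, the settings creiht names live in the proxy app section of proxy-server.conf. A hypothetical fragment for shri's workload of objects capped at 64 KiB -- per creiht above, 65536 is already the code default for both, even though the sample config states it incorrectly:

```ini
# /etc/swift/proxy-server.conf
[app:proxy-server]
use = egg:swift#proxy
object_chunk_size = 65536
client_chunk_size = 65536
```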
creiht | so, other outside thoughts: you are just seeing the limitations of what can currently be done with swift on a single vm with a single disk | 23:16 |
creiht | it would be interesting to see what would happen if you added another drive | 23:16 |
creiht | just for testing | 23:17 |
creiht | that use case hasn't really been optimized for | 23:17 |
creiht | as swift usually scales horizontally out across many disks and nodes | 23:17 |
shri | yeah.. I did realize that while object stores are trying to scale out, I'm trying to do things unnaturally by using a single node :-) | 23:18 |
creiht | yeah I know | 23:18 |
bsdkurt | hello. the deployment guide is quiet about memcache and the object-expirer considerations. should the object-expirer use the same memcache servers as the proxy? | 23:18 |
creiht | I'm just saying that there is likely opportunity for improvement in that area, it just hasn't been the focus for most of us | 23:18 |
shri | sure. The partition power option looks promising. I'll play with that a bit. | 23:19 |
creiht | k | 23:19 |
*** dmsimard has quit IRC | 23:19 | |
shri | thanks a lot creiht!! appreciate this help! | 23:20 |
creiht | np | 23:20 |
creiht | shri: what are you using to test clientside? | 23:22 |
shri | We have a java tool developed using jclouds called BlobStoreBench. | 23:22 |
creiht | k | 23:24 |
creiht | you might also just double check that jclouds isn't doing something like a head before each put or other things like that | 23:24 |
creiht | as some libs do things like that | 23:24 |
creiht | it might be interesting to test with something like swift-bench | 23:25 |
creiht | to see if you get similar results | 23:25 |
shri | I am reasonably sure that jclouds is doing the right thing. I tested with swift-bench too. I got 302 object writes/s => ~19MB/s | 23:26 |
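A swift-bench configuration approximating shri's run (concurrency 64, 64 KiB objects, 10k objects) would look something like the fragment below; the auth URL and credentials are placeholders, and the key names should be checked against the sample config shipped with your swift-bench version:

```ini
[bench]
auth = http://127.0.0.1:8080/auth/v1.0
user = test:tester
key = testing
concurrency = 64
object_size = 65536
num_objects = 10000
num_gets = 0
delete = yes
```

That swift-bench reports 302 writes/s (~19 MB/s), close to the jclouds tool's 320 req/s, supports shri's belief that the client library isn't the bottleneck.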
*** judd7 has joined #openstack-swift | 23:29 | |
creiht | k | 23:34 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!