h6w | Bugger. Doesn't look like it: https://github.com/openstack/swift/blob/fc18d0856dd8677521b8f940cb7fc2c6ffbc2e26/swift/common/wsgi.py#L162 | 00:00 |
*** keving1 has quit IRC | 00:01 | |
*** matsuhashi has joined #openstack-swift | 00:12 | |
*** Midnightmyth has quit IRC | 00:20 | |
*** matsuhashi has quit IRC | 00:31 | |
*** mlipchuk has quit IRC | 00:46 | |
*** matsuhashi has joined #openstack-swift | 00:52 | |
*** keving1 has joined #openstack-swift | 00:58 | |
*** keving1 has quit IRC | 01:07 | |
*** nosnos has joined #openstack-swift | 01:30 | |
*** shakamunyi has joined #openstack-swift | 01:37 | |
notmyname | h6w: no, but you wouldn't want it anyway | 01:48 |
notmyname | h6w: because reasons https://github.com/notmyname/ssl_eventlet_slowloris | 01:48 |
notmyname | h6w: basically python + eventlet is completely unusable in a production server. so use an external service to terminate ssl (e.g. stud, stunnel, HAProxy, etc) | 01:49 |
h6w | I don't understand. I have a RapidSSL cert I'd like to use with my proxy server, but it requires an intermediate CA. | 01:49 |
h6w | Oh, so you're saying someone could DOS my server? | 01:50 |
notmyname | h6w: all I'm saying is that you shouldn't terminate the ssl connection at the swift proxy server. eg do it with stud on the box and then redirect the cleartext to the proxy on the same box. or use a separate ssl-terminating load balancer | 01:50 |
notmyname | h6w: ya, exactly | 01:50 |
notmyname | client --- (Internet)--> stud on proxy box ---(localhost)--->proxy on proxy box -->etc | 01:51 |
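For reference, a minimal stunnel sketch of the topology notmyname describes above; the service name, cert path, and ports are assumptions, and the intermediate CA question is answered by concatenating the chain into the cert file:

```
; hypothetical /etc/stunnel/swift-proxy.conf
[swift-proxy]
accept  = 443                     ; clients connect here over TLS
connect = 127.0.0.1:8080          ; cleartext to the swift proxy on the same box
cert    = /etc/stunnel/proxy.pem  ; server cert + intermediate CA, concatenated
```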
notmyname | h6w: gotta go help with the kids. kids are going crazy and wife is asking for help :-) | 01:52 |
h6w | I guess I don't *really* need SSL at all. So far it's all on a private net. But the longer-term aim is to use it for co-lo, like I saw at LCA. | 01:53 |
h6w | Ok. Thanks for your help! | 01:53 |
*** keving1 has joined #openstack-swift | 02:03 | |
*** keving1 has quit IRC | 02:11 | |
*** haomaiw__ has quit IRC | 02:17 | |
*** haomaiwa_ has joined #openstack-swift | 02:18 | |
*** haomai___ has joined #openstack-swift | 02:31 | |
*** haomaiwa_ has quit IRC | 02:35 | |
*** matsuhashi has quit IRC | 02:48 | |
*** matsuhas_ has joined #openstack-swift | 02:52 | |
*** keving1 has joined #openstack-swift | 03:08 | |
*** tsg has joined #openstack-swift | 03:08 | |
*** lpabon has joined #openstack-swift | 03:10 | |
*** keving1 has quit IRC | 03:15 | |
*** mrsnivvel has joined #openstack-swift | 03:16 | |
*** tsg is now known as tgohad | 03:16 | |
*** tgohad is now known as tusharsg | 03:16 | |
*** tusharsg has left #openstack-swift | 03:17 | |
*** tusharsg has joined #openstack-swift | 03:20 | |
*** mmcardle has joined #openstack-swift | 03:22 | |
*** tusharsg is now known as tgohad | 03:22 | |
*** matsuhas_ has quit IRC | 03:22 | |
*** mmcardle has quit IRC | 03:26 | |
h6w | I have some conflicting information I don't understand. | 03:44 |
h6w | From the (old) documentation: "Assuming there are 5 zones with 1 node per zone, ZONE should start at 1 and increment by one for each additional node." | 03:44 |
h6w | However, in notmyname's talk (http://www.youtube.com/watch?v=LpmBRqevuVU) he says "You have two regions in a co-lo, but note all the nodes have the same zone." | 03:45 |
*** tgohad is now known as tusharsg | 03:46 | |
h6w | Hmmm, so why would I want 1 node per zone? | 03:48 |
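For context, a hedged sketch of the two ring layouts being contrasted, using swift-ring-builder syntax of this era; IPs, ports, device names, and weights are made up:

```
# old advice: one node per zone, so each node is its own failure domain
swift-ring-builder object.builder create 10 3 1
swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 100
swift-ring-builder object.builder add z2-10.0.0.2:6000/sdb1 100
# ... z3 through z5 likewise, then:
swift-ring-builder object.builder rebalance

# layout from the talk: two regions, every node in the same zone
swift-ring-builder object.builder add r1z1-10.0.0.1:6000/sdb1 100
swift-ring-builder object.builder add r2z1-10.1.0.1:6000/sdb1 100
```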
*** Edward-Zhang has joined #openstack-swift | 04:04 | |
*** keving1 has joined #openstack-swift | 04:13 | |
*** Edward-Zhang has quit IRC | 04:13 | |
*** keving1 has quit IRC | 04:19 | |
*** lpabon has quit IRC | 04:23 | |
*** mmcardle has joined #openstack-swift | 04:24 | |
*** mmcardle has quit IRC | 04:29 | |
*** ppai has joined #openstack-swift | 04:29 | |
*** matsuhashi has joined #openstack-swift | 04:43 | |
*** keving1 has joined #openstack-swift | 04:48 | |
*** Edward-Zhang has joined #openstack-swift | 04:53 | |
*** keving1 has quit IRC | 04:56 | |
*** Diddi has joined #openstack-swift | 05:05 | |
*** yuanz has joined #openstack-swift | 05:11 | |
*** yuan has quit IRC | 05:11 | |
*** Edward-Zhang has quit IRC | 05:35 | |
*** keving1 has joined #openstack-swift | 05:53 | |
*** nosnos has quit IRC | 05:54 | |
*** nosnos has joined #openstack-swift | 05:54 | |
*** matsuhashi has quit IRC | 05:56 | |
*** matsuhashi has joined #openstack-swift | 05:57 | |
*** matsuhashi has quit IRC | 05:57 | |
*** keving1 has quit IRC | 06:00 | |
*** matsuhashi has joined #openstack-swift | 06:00 | |
*** matsuhashi has quit IRC | 06:09 | |
*** matsuhashi has joined #openstack-swift | 06:10 | |
*** matsuhas_ has joined #openstack-swift | 06:12 | |
*** matsuhashi has quit IRC | 06:14 | |
*** nosnos has quit IRC | 06:24 | |
*** nosnos has joined #openstack-swift | 06:25 | |
*** matsuhas_ has quit IRC | 06:25 | |
*** matsuhashi has joined #openstack-swift | 06:25 | |
*** mmcardle has joined #openstack-swift | 06:26 | |
*** mmcardle has quit IRC | 06:30 | |
*** matsuhashi has quit IRC | 06:39 | |
*** matsuhashi has joined #openstack-swift | 06:40 | |
*** saju_m has joined #openstack-swift | 06:47 | |
*** gabia has joined #openstack-swift | 07:15 | |
gabia | hi all. | 07:15 |
gabia | I have an error. | 07:15 |
gabia | swift is showing this error log: | 07:16 |
gabia | object-server ERROR __call__ error with DELETE | 07:16 |
gabia | [Errno 2] No such file or directory | 07:16 |
gabia | can somebody help? | 07:17 |
*** rongze has joined #openstack-swift | 07:21 | |
*** rongze has quit IRC | 07:23 | |
gabia | object-server ERROR __call__ error with DELETE /sdb3/40392/.expiring_objects | 07:27 |
gabia | line 2150, in _run_in_eventlet_tpool#012OSError: [Errno 2] No such file or directory: '/srv/3/node/sdb3/objects/40392/dda/27723df0bced61f4670c3c0564b2cdda/1395040284.12976.ts' | 07:27 |
gabia | it is error log message | 07:27 |
*** keving1 has joined #openstack-swift | 08:03 | |
*** mmcardle has joined #openstack-swift | 08:07 | |
*** keving1 has quit IRC | 08:10 | |
*** nosnos has quit IRC | 08:13 | |
*** nosnos has joined #openstack-swift | 08:14 | |
*** matsuhashi has quit IRC | 08:17 | |
*** matsuhashi has joined #openstack-swift | 08:17 | |
*** openstack has quit IRC | 08:21 | |
*** openstack has joined #openstack-swift | 08:30 | |
*** openstackstatus has joined #openstack-swift | 08:30 | |
*** nosnos has quit IRC | 08:34 | |
*** bada_ has quit IRC | 08:40 | |
*** jamieh has joined #openstack-swift | 08:44 | |
*** jamieh is now known as Guest89169 | 08:44 | |
*** mlipchuk has joined #openstack-swift | 08:49 | |
*** nacim has joined #openstack-swift | 08:49 | |
*** matsuhas_ has quit IRC | 08:56 | |
*** matsuhashi has joined #openstack-swift | 09:06 | |
*** gabia has quit IRC | 09:07 | |
*** keving1 has joined #openstack-swift | 09:08 | |
*** nosnos has joined #openstack-swift | 09:14 | |
*** keving1 has quit IRC | 09:17 | |
*** krtaylor has quit IRC | 09:38 | |
*** Trixboxer has joined #openstack-swift | 09:40 | |
*** saschpe has quit IRC | 09:42 | |
*** chandan_kumar has joined #openstack-swift | 09:48 | |
*** saschpe has joined #openstack-swift | 09:52 | |
*** nosnos_ has joined #openstack-swift | 09:55 | |
*** nosnos has quit IRC | 09:58 | |
*** saschpe has quit IRC | 10:02 | |
*** saschpe has joined #openstack-swift | 10:08 | |
*** nosnos_ has quit IRC | 10:10 | |
*** nosnos has joined #openstack-swift | 10:12 | |
*** keving1 has joined #openstack-swift | 10:13 | |
*** saschpe has quit IRC | 10:13 | |
*** saschpe has joined #openstack-swift | 10:15 | |
*** Midnightmyth has joined #openstack-swift | 10:15 | |
*** saschpe has quit IRC | 10:16 | |
*** saju_m has quit IRC | 10:17 | |
*** saschpe has joined #openstack-swift | 10:21 | |
*** keving1 has quit IRC | 10:21 | |
*** mkollaro has joined #openstack-swift | 10:22 | |
*** saschpe has quit IRC | 10:26 | |
*** saschpe has joined #openstack-swift | 10:28 | |
*** saschpe has quit IRC | 10:28 | |
*** saju_m has joined #openstack-swift | 10:32 | |
*** Midnightmyth has quit IRC | 10:35 | |
*** matsuhashi has quit IRC | 10:38 | |
*** nosnos has quit IRC | 10:46 | |
*** nosnos has joined #openstack-swift | 10:47 | |
*** matsuhashi has joined #openstack-swift | 10:54 | |
*** saschpe has joined #openstack-swift | 10:57 | |
*** matsuhashi has quit IRC | 11:05 | |
*** openstackgerrit has quit IRC | 11:10 | |
*** openstackgerrit has joined #openstack-swift | 11:10 | |
*** yuanz has quit IRC | 11:11 | |
*** yuanz has joined #openstack-swift | 11:11 | |
*** matsuhashi has joined #openstack-swift | 11:13 | |
*** saju_m has quit IRC | 11:15 | |
*** keving1 has joined #openstack-swift | 11:18 | |
*** keving1 has quit IRC | 11:27 | |
*** matsuhashi has quit IRC | 11:31 | |
*** chuck_ has joined #openstack-swift | 11:31 | |
*** chuck_ is now known as zul | 11:34 | |
*** zul has joined #openstack-swift | 11:34 | |
*** saju_m has joined #openstack-swift | 11:36 | |
*** ppai has quit IRC | 12:00 | |
*** Edward-Zhang has joined #openstack-swift | 12:18 | |
*** chandan_kumar has quit IRC | 12:21 | |
*** keving1 has joined #openstack-swift | 12:23 | |
*** matsuhashi has joined #openstack-swift | 12:27 | |
*** keving1 has quit IRC | 12:30 | |
*** matsuhashi has quit IRC | 12:37 | |
*** nosnos_ has joined #openstack-swift | 12:37 | |
*** nosnos has quit IRC | 12:37 | |
*** matsuhashi has joined #openstack-swift | 12:37 | |
*** mmcardle has quit IRC | 12:39 | |
*** matsuhashi has quit IRC | 12:42 | |
*** mmcardle has joined #openstack-swift | 12:50 | |
*** saschpe has quit IRC | 12:54 | |
*** tdasilva has joined #openstack-swift | 12:56 | |
*** matsuhashi has joined #openstack-swift | 13:03 | |
*** saschpe has joined #openstack-swift | 13:05 | |
*** matsuhashi has quit IRC | 13:05 | |
*** matsuhashi has joined #openstack-swift | 13:05 | |
*** matsuhashi has quit IRC | 13:10 | |
*** krtaylor has joined #openstack-swift | 13:12 | |
*** nosnos_ has quit IRC | 13:12 | |
*** saschpe has quit IRC | 13:12 | |
*** mrsnivvel has quit IRC | 13:14 | |
*** fifieldt has quit IRC | 13:20 | |
*** keving1 has joined #openstack-swift | 13:28 | |
*** tanee-away is now known as tanee | 13:29 | |
*** keving1 has quit IRC | 13:36 | |
*** dmsimard has joined #openstack-swift | 13:37 | |
*** cschwede has joined #openstack-swift | 13:44 | |
*** JuanManuelOlle has joined #openstack-swift | 13:47 | |
*** saschpe has joined #openstack-swift | 13:56 | |
*** saschpe has quit IRC | 14:00 | |
*** jasondotstar has joined #openstack-swift | 14:10 | |
*** zul has quit IRC | 14:11 | |
*** zul has joined #openstack-swift | 14:13 | |
*** judd7 has joined #openstack-swift | 14:17 | |
*** byeager has joined #openstack-swift | 14:25 | |
*** keving1 has joined #openstack-swift | 14:33 | |
*** j_king_ is now known as j_king | 14:40 | |
*** keving1 has quit IRC | 14:42 | |
*** Edward-Zhang has quit IRC | 14:47 | |
notmyname | from my company's twitter feed this morning: "Hi @SwiftStack ! Do you like Taylor Swift ? Follow me on twitter and become Swifty ! <3" | 14:54 |
notmyname | not the kind of "Swift" I'm going for ;-) | 14:54 |
*** mlipchuk has quit IRC | 14:58 | |
*** saschpe_ has joined #openstack-swift | 15:00 | |
luisbg | morning | 15:03 |
luisbg | notmyname, hahahah Taylor Swift is the spirit animal of Swift, right? | 15:03 |
*** saju_m has quit IRC | 15:03 | |
*** tburnes has joined #openstack-swift | 15:07 | |
*** saschpe_ has quit IRC | 15:11 | |
*** saju_m has joined #openstack-swift | 15:20 | |
notmyname | luisbg: don't be shy about adding a -1 to a review if you think it needs something (https://review.openstack.org/#/c/78043/) | 15:21 |
notmyname | luisbg: and thanks for the reviews! :-) | 15:21 |
luisbg | notmyname, a core reviewer liked the change, it is small and useful | 15:22 |
luisbg | but I think it can be better with a unit test | 15:22 |
luisbg | notmyname, my pleasure :) | 15:22 |
luisbg | notmyname, and... wow! so fast to read the review | 15:23 |
notmyname | (email alerts) | 15:23 |
*** byeager has quit IRC | 15:23 | |
*** byeager has joined #openstack-swift | 15:24 | |
notmyname | I get an email for every gerrit comment on every swift-related project (yes, that's a lot of email) | 15:24 |
luisbg | notmyname, hopefully you have it well filtered | 15:24 |
luisbg | notmyname, https://review.openstack.org/#/c/78043/ why is Jenkins starting a test build when I commented? | 15:25 |
notmyname | luisbg: because if it hasn't passed a test run in the last 48(?) hours when there is activity on the review, it automatically reruns the tests | 15:26 |
luisbg | makes sense, thanks | 15:26 |
notmyname | but the real reason that was put into place was to try to frontload some of the checking work the gate queue was doing so that the (literally) week-long gate queues of a few months ago don't happen again | 15:27 |
luisbg | notmyname, sorry I haven't been very active in Swift lately, I have been focusing on the Swift music player example | 15:27 |
notmyname | IMO, that counts (as long as you tell people about it) ;-) | 15:27 |
luisbg | notmyname, I can imagine that close to a freeze the gates get swamped with jobs | 15:27 |
* notmyname goes to get ready for the day | 15:28 | |
luisbg | notmyname, it doesn't count in my "presence" as a contributor, but yeah, it is fun. Halfway to sharing it with you guys and the world | 15:28 |
notmyname | I'll be back in an hour or so | 15:28 |
*** byeager has quit IRC | 15:28 | |
luisbg | notmyname, have a good commute | 15:29 |
*** shakamunyi has quit IRC | 15:31 | |
*** byeager has joined #openstack-swift | 15:35 | |
*** keving1 has joined #openstack-swift | 15:38 | |
*** annegentle has joined #openstack-swift | 15:46 | |
*** keving1 has quit IRC | 15:47 | |
*** saschpe_ has joined #openstack-swift | 15:49 | |
*** saschpe_ has quit IRC | 16:02 | |
*** saschpe has joined #openstack-swift | 16:04 | |
*** saschpe has quit IRC | 16:06 | |
*** gyee has joined #openstack-swift | 16:13 | |
*** saju_m has quit IRC | 16:15 | |
*** jasondotstar has quit IRC | 16:19 | |
notmyname | acoles-: is eamonn on IRC? | 16:24 |
notmyname | also, I'm looking for mjseger | 16:24 |
*** jasondotstar has joined #openstack-swift | 16:30 | |
*** keving1 has joined #openstack-swift | 16:43 | |
*** keving1 has quit IRC | 16:52 | |
anticw | seems if you have an SLO with missing segments you can't HEAD, DELETE, or GET it ... | 16:55 |
anticw | anyone know when this was fixed (i'm sure it must have been) | 16:55 |
*** saschpe has joined #openstack-swift | 17:01 | |
notmyname | anticw: on current master, I created a static manifest, and deleted segment 4 (100MB total, 10MB chunks) | 17:08 |
*** saschpe has quit IRC | 17:08 | |
notmyname | GET on the object fails part-way through. HEAD works: https://gist.github.com/notmyname/ecfd64acf4fc3ca2f4d5 | 17:08 |
notmyname | anticw: and you may be interested in looking at this patch: https://review.openstack.org/#/c/80383/ (/cc torgomatic) | 17:09 |
anticw | notmyname: if you use the swift cli and upload something --use-slo then delete the segments ... can you delete the manifest? | 17:10 |
notmyname | anticw: via curl or the CLI? | 17:10 |
anticw | either | 17:10 |
*** yuanz has quit IRC | 17:11 | |
notmyname | john@europa:~$ curl -H "X-Auth-Token: AUTH_tk0262b8e8389149958c968d10d0ccf980" http://saio:8080/v1/AUTH_test/c/100MB -XDELETE | 17:11 |
notmyname | john@europa:~$ echo $? | 17:11 |
notmyname | 0 | 17:11 |
notmyname | (forgot to add -i. let me redo) | 17:11 |
anticw | -v is useful to know what's going on | 17:11 |
anticw | (a bit too much sometimes with curl) | 17:12 |
*** yuan has joined #openstack-swift | 17:12 | |
notmyname | ya. that's why I like -i. (when I don't care what is sent and just want to see the response headers) | 17:12 |
notmyname | anticw: here's my entire flow: https://gist.github.com/notmyname/00ebf5fdc65b949d3a56 | 17:15 |
anticw | thanks | 17:15 |
notmyname | seems to do everything I'd expect | 17:16 |
anticw | ok, 500 on a bad SLO seems fair ... DELETE does work but not with -I, which implies HEAD | 17:16 |
anticw | pebkac :) | 17:16 |
*** keving1 has joined #openstack-swift | 17:21 | |
*** Guest89169 has quit IRC | 17:22 | |
*** jamieh has joined #openstack-swift | 17:22 | |
*** jamieh is now known as Guest45863 | 17:23 | |
*** annegentle has quit IRC | 17:31 | |
*** annegentle has joined #openstack-swift | 17:32 | |
openstackgerrit | Samuel Merritt proposed a change to openstack/python-swiftclient: Copy Swift's .mailmap to swiftclient repo https://review.openstack.org/81027 | 17:34 |
notmyname | torgomatic: ah, thanks | 17:34 |
torgomatic | notmyname: np | 17:34 |
*** mmcardle has quit IRC | 17:41 | |
*** nacim has quit IRC | 17:54 | |
*** fbo is now known as fbo_away | 17:59 | |
peluse | torgomatic: So I've updated the 409 patch based on clayg's input; when you get a chance, feel free to take a look. As we're getting down to the wire here, is there something else I can help with on the reconciler? | 18:01 |
peluse | thanks for approving the ssync patch BTW :) | 18:01 |
*** tburnes has quit IRC | 18:06 | |
torgomatic | peluse: nothing comes to mind; I think the whole thing is just about all on that branch | 18:11 |
torgomatic | today I'm working on rephrasing the feature/ec branch into a few logically-coherent pieces for proposal to master | 18:12 |
openstackgerrit | gro qez proposed a change to openstack/python-swiftclient: Decode HTTP responses, fixes bug #1282861 https://review.openstack.org/78043 | 18:16 |
peluse | torgomatic: OK, cool. You have several patches outstanding for the EC branch. Are they all intended to go in, or are some in need of being abandoned (sorry, lost track)? Also, on 'Fix changing of storage policy index' you show an outdated dependency. | 18:16 |
torgomatic | peluse: interesting; I'll have to go look at that one | 18:17 |
bsdkurt | I've started to use the feature/ec branch with a storage policy. How do you set the storage policy for a container? | 18:17 |
torgomatic | I think they're all supposed to go in, at least for this version of the reconciler | 18:17 |
peluse | torgomatic: that might be due to my rebasing off of another in your chain where it rebased yours w/no changes for some reason... dropped you a note over the weekend about that | 18:17 |
notmyname | peluse: my plan today is to talk with torgomatic and clayg and have a plan around getting stuff onto master. | 18:17 |
peluse | notmyname: cool, please include me if possible | 18:17 |
notmyname | absolutely | 18:18 |
peluse | bsdkurt: to associate a policy with a container, create the container with the policy tag. No docs yet, one sec I'll paste it in | 18:18 |
bsdkurt | peluse: thanks | 18:18 |
peluse | bsdkurt: Here's a curl sample of creating a container for a policy called "one" that would be defined in swift.conf and already have a ring created called object-1.ring.gz: curl -v -X PUT -H 'X-Auth-Token: AUTH_tkd1c41f44366945d8a1cb9998e2297039' -H "X-Storage-Policy: one" http://127.0.0.1:8080/v1/AUTH_test/myCont1 | 18:19 |
bsdkurt | peluse: got it. thanks again | 18:19 |
peluse | bsdkurt: and then all object operations are done as usual.... | 18:19 |
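The same policy-tagged container PUT via python-swiftclient, as a sketch; the storage URL and token are just the example values from peluse's curl line:

```python
from swiftclient import client

# create myCont1 under the policy named "one" (equivalent to the curl above)
client.put_container(
    'http://127.0.0.1:8080/v1/AUTH_test',       # storage URL from auth
    'AUTH_tkd1c41f44366945d8a1cb9998e2297039',  # auth token (example value)
    'myCont1',
    headers={'X-Storage-Policy': 'one'})
```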
bsdkurt | notmyname: I spoke with you at the NYC workshop about cluster performance. eventually I tracked down what the issue was.... | 18:21 |
notmyname | bsdkurt: ah, cool. what'd you find? | 18:22 |
bsdkurt | notmyname: I was testing random access using O_DIRECT and getting ~100MB/sec with 256K buffers, but swift was getting ~22MB/sec. | 18:23 |
bsdkurt | notmyname: so I have a patch to make swift use O_DIRECT for reading and writing | 18:23 |
bsdkurt | notmyname: and can see ~5x improvement in reading | 18:23 |
bsdkurt | notmyname: is this something people would be interested in seeing upstream? | 18:24 |
notmyname | bsdkurt: I think you'd get people interested in looking at a patch :-) | 18:25 |
notmyname | but I don't know enough about O_DIRECT to know why it's good or bad (or when it's good and bad) | 18:26 |
bsdkurt | notmyname: ok, I'll figure out how to submit it | 18:26 |
notmyname | but others are much smarter than me and would comment, i'm sure :-) | 18:26 |
bsdkurt | notmyname: :-) It seems like a good fit as swift invalidates the buffer cache after reading anyway. | 18:28 |
*** Guest45863 has quit IRC | 18:28 | |
bsdkurt | is there a development page describing where and how to submit patches? | 18:30 |
notmyname | bsdkurt: ya. let me find a link | 18:31 |
notmyname | bsdkurt: actually, it's in the CONTRIBUTING doc in the source tree: https://github.com/openstack/swift/blob/master/CONTRIBUTING.md | 18:31 |
bsdkurt | notmyname: thanks | 18:32 |
notmyname | bsdkurt: basically, you gotta submit it to openstack's gerrit system, and that requires you to sing a CLA | 18:32 |
notmyname | and sign it too | 18:32 |
bsdkurt | :-) | 18:32 |
bsdkurt | lalalala | 18:32 |
*** tanee is now known as tanee-away | 18:33 | |
clayg | notmyname: torgomatic: my bad, he said it "is totally braindamaged" - I misquoted -> https://lkml.org/lkml/2007/1/10/233 | 18:34 |
clayg | ... again, that's wrt O_DIRECT to a filesystem, not a block device. | 18:35 |
clayg | bsdkurt: I'm curious how you approached buffer alignment? | 18:35 |
bsdkurt | mmap | 18:35 |
clayg | awesome | 18:36 |
bsdkurt | clayg: I'm also familiar with that quote, however it *does* make a huge difference. | 18:36 |
bsdkurt | clayg: I'm getting setup to submit it, however if you want I can email the diff to you for a preview | 18:37 |
clayg | no... i can wait :) | 18:39 |
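A minimal Python 3 sketch of the mmap alignment trick bsdkurt mentions - not his patch, just the general shape on Linux; the block size is an assumption, and O_DIRECT also wants block-aligned lengths and offsets:

```python
import mmap
import os

BLOCK = 4096  # assumed logical block size

def read_direct(path, nbytes):
    # round the read up to a block multiple, as O_DIRECT requires
    nbytes = (nbytes + BLOCK - 1) // BLOCK * BLOCK
    # anonymous mmap memory is page-aligned, which satisfies
    # O_DIRECT's buffer-alignment requirement
    buf = mmap.mmap(-1, nbytes)
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    try:
        got = os.readv(fd, [buf])  # DMA straight into buf, bypassing the page cache
    finally:
        os.close(fd)
    return buf[:got]
```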
*** changbl has quit IRC | 18:39 | |
notmyname | a .mailmap change busted pypy? I'll blame Alex_Gaynor for this one ;-) | 18:51 |
notmyname | https://review.openstack.org/#/c/81027/ | 18:51 |
*** tburnes has joined #openstack-swift | 18:51 | |
clayg | "sudo: unable to resolve host py3k-precise-1394676888" ? | 18:56 |
clayg | peluse: what do you think about doing the .pending_file in a @property? | 19:16 |
peluse | clayg: what would be the advantage? | 19:19 |
*** Trixboxer has quit IRC | 19:23 | |
*** changbl has joined #openstack-swift | 19:29 | |
*** jasondotstar has quit IRC | 19:30 | |
*** zaitcev has joined #openstack-swift | 19:33 | |
*** ChanServ sets mode: +v zaitcev | 19:33 | |
luisbg | clayg, reading Linus is always fun | 19:35 |
clayg | peluse: I might be wrong, it may not be better to have it as a property | 19:35 |
luisbg | clayg, in his defense he sometimes gets persistent people trying to push wrong things repeatedly and the only way for him to stop it is getting aggressive | 19:35 |
clayg | luisbg: I don't really follow him that closely, but I've looked at "the right" way to do O_DIRECT on a filesystem enough times that I'm basically convinced there is no good way to do it (on purpose). | 19:37 |
luisbg | clayg, hahaha that is true | 19:38 |
*** jasondotstar has joined #openstack-swift | 19:45 | |
*** piyush1 has joined #openstack-swift | 19:51 | |
*** changbl has quit IRC | 20:00 | |
wer | is there an easy way to prevent a non-empty container from being deleted? | 20:01 |
wer | at least the swift python client will gladly delete them without the --all flag :/ | 20:02 |
Alex_Gaynor | notmyname: seems legit ;-P | 20:04 |
luisbg | wer, you could write your own python middleware to add that feature | 20:11 |
luisbg | wouldn't be too complicated | 20:11 |
wer | luisbg: yeah :/ I think I need to luisbg. It scares me :) | 20:12 |
luisbg | wer, only fear fear itself, this channel is full of people willing to help you achieve this | 20:12 |
luisbg | :) | 20:12 |
clayg | luisbg: I'm not sure you can work around the way swiftclient does it - it just does the container listing and deletes all the objects | 20:12 |
luisbg | clayg, I meant, copying the code from python-swiftclient and writing something that does what he needs | 20:13 |
luisbg | on top | 20:13 |
clayg | oh.. so not middleware, just a different client with a different commandline interface - yeah seems legit | 20:13 |
*** jasondotstar has quit IRC | 20:14 | |
luisbg | clayg, sorry yes, wrong terminology | 20:14 |
clayg | you may even be able to patch the commandline client to look at a "SAFE" or -i var or something | 20:14 |
luisbg | clayg, I sometimes consider python-swiftclient the example middleware | 20:14 |
luisbg | clayg, +1 | 20:14 |
clayg | some people have asked to have the delete account semantics of the swiftclient cli be more aggressive - so... like the opposite problem | 20:15 |
wer | heh, they have the --all flag :/ Just seems like lies. | 20:15 |
luisbg | wer, because it is wrongly documented? or it just does something wrong? | 20:16 |
clayg | wer: I think it may be too late to change the default behavior - but there's probably been enough confusion around the point that having either behavior configurable via environ would certainly be a good start | 20:16 |
luisbg | clayg, if you want you can open a bug in launchpad and assign it to me :) | 20:16 |
clayg | it's not a problem for me - i'm used to and expect and script against the default behavior | 20:17 |
wer | the way I read it, the --all flag would be needed to delete everything in an account or container. But I definitely can delete a non-empty container without the --all flag. | 20:18 |
luisbg | ok | 20:18 |
wer | did that answer questions? | 20:18 |
luisbg | unrelated question, when new imports are added in a python project. do these have to be listed anywhere for pypi? | 20:18 |
clayg | luisbg: sort of? I think we managed to get netifaces grandfathered in | 20:19 |
luisbg | clayg, I see. thanks | 20:19 |
clayg | luisbg: there's a openstack-requirements project that keeps tabs on everthing | 20:20 |
luisbg | clayg, cool | 20:20 |
clayg | I think --all's help not saying "(only effects delete for accounts)" or something like that might be a doc bug - I can see how it could be confusing. | 20:22 |
notmyname | wer: clayg: the "--all" parameter is supposed to be the "yes I mean it do it anyway and do it all right now" flag | 20:22 |
notmyname | clayg: well the bug there is "affects" ;-) | 20:22 |
clayg | notmyname: yeah but it's not required for delete container - the tool assumes you meant what you said | 20:23 |
wer | heh, so I'm not worried about the actual swift client... but if a web request could delete a container that would be bad. | 20:23 |
notmyname | wer: it can't | 20:23 |
notmyname | wer: swift will respond with a 409 to a DELETE to a non-empty container | 20:24 |
wer | I thought that would be the case.... cause the client is doing things. | 20:24 |
wer | ossm. | 20:24 |
*** keving1 has quit IRC | 20:25 | |
* clayg tried to google ossm but it was still a miss | 20:25 | |
wer | ha! awesome. sorry clayg :) | 20:25 |
notmyname | http://www.urbandictionary.com/define.php?term=ossm | 20:26 |
wer | lol. I guess I let my local dialect spew into this channel :) But yeah ossm = awesome :) | 20:27 |
clayg | heh | 20:27 |
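A hedged client-side guard along the lines wer is after - refuse to delete a non-empty container unless explicitly forced; server-side, a bare DELETE already gets the 409 notmyname describes:

```python
from swiftclient import client
from swiftclient.exceptions import ClientException

def safe_delete_container(url, token, container, force=False):
    # HEAD the container and inspect its object count before deleting
    headers = client.head_container(url, token, container)
    count = int(headers.get('x-container-object-count', 0))
    if count and not force:
        raise ClientException(
            '%s still holds %d objects; pass force=True to delete anyway'
            % (container, count))
    client.delete_container(url, token, container)
```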
*** tdasilva has left #openstack-swift | 20:29 | |
wer | so I've written two middlewares.... The first fixes some incoming req.path_info values... so I put it before tempauth. The second is before the proxy-server... but somehow I manage to get the return value of the proxy-server and suggest a 302. And I basically have no idea how I did this. wsgi hurts me. | 20:30 |
torgomatic | oh, it's not just you ;) | 20:31 |
wer | lol | 20:31 |
clayg | i'm slowly becoming immune to the pain | 20:31 |
* clayg feels nothing | 20:31 | |
wer | my hope is that I have not ruined the pipeline with inefficient code :/ I am about to do the "b" in a/b testing.... but.... secretly I just have a lot of hope :) | 20:32 |
wer | and the thing I could not do... would have been so much cooler. I really wanted to populate swift with the missing thing instead of 302'ing. But failed. | 20:33 |
wer | I thought I could return a fetched response from somewhere else to the client.... and then also spawn off another to populate swift with the missing asset. But I gave up. | 20:34 |
torgomatic | wer: that's possible, but certainly tricky... probably you'd want 3 greenthreads: one fetcher, one return-to-client-...er, and one swift-populater | 20:38 |
wer | torgomatic: I was unable to pull it off :( | 20:40 |
torgomatic | wer: yeah, the combination of concurrency, WSGI, and Swift idiosyncrasies is not especially welcoming :| | 20:45 |
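A rough eventlet sketch of torgomatic's three-greenthread idea - fetch once, tee the body to the client and to a background swift PUT; upload_to_swift is a hypothetical helper, not a real API:

```python
import eventlet
from eventlet.queue import Queue

def fetch_and_tee(origin_iter, upload_to_swift):
    # one fetcher feeds two consumers: the app_iter handed back to the
    # client, and a populater that backfills the missing object into swift
    client_q, swift_q = Queue(), Queue()

    def fetcher():
        for chunk in origin_iter:
            client_q.put(chunk)
            swift_q.put(chunk)
        client_q.put(None)  # sentinels mark end-of-stream
        swift_q.put(None)

    def populater():
        upload_to_swift(iter(swift_q.get, None))  # PUT the teed body

    eventlet.spawn_n(fetcher)
    eventlet.spawn_n(populater)
    return iter(client_q.get, None)  # WSGI app_iter for the client
```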
notmyname | wer: it sounds like you might be interested in reading https://review.openstack.org/#/c/64430/ | 20:46 |
portante | bsdkurt: currently swift uses the linux buffer cache as a poor-man's asyncio engine | 20:49 |
portante | using fsync and fadvise64 to get the memory footprint needed (writes are not cached, only reads are cached, writes are always forced to disk before close()) | 20:50 |
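A minimal Python 3 sketch of the write-side discipline portante describes (fsync before close, then evict the cached pages); swift's real code paths differ:

```python
import os

def write_then_drop(path, data):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # force the write to disk before close
        # advise the kernel these pages won't be re-read, so drop them
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)
```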
portante | bsdkurt: does your code with O_DIRECT still allow for the cached reads behavior? | 20:51 |
* portante might be too late to the party here ... | 20:51 | |
bsdkurt | portante: no O_DIRECT skips the buffer cache for reading and writing. However, for large objects it gets much better random IO throughput. | 20:52 |
portante | random IO throughput? | 20:53 |
bsdkurt | I'm rerunning the tests again before I submit to gerrit for review... should be a few more minutes | 20:53 |
portante | okay | 20:53 |
portante | are you testing ranged reads or something? | 20:54 |
portante | and what is your test environment? | 20:54 |
bsdkurt | portante: yes. on my sata drives random IO (1000 open files all reading sequentially) gets ~22MB/sec throughput, but with O_DIRECT gets ~100MB/sec throughput | 20:54 |
portante | bsdkurt, thanks, no rush to answer | 20:54 |
portante | what disk elevator are you using? | 20:54 |
portante | cfq? | 20:54 |
bsdkurt | is cfq default? I didn't change that value | 20:55 |
portante | is that iozone random IO testing? | 20:55 |
portante | or from swift random IO? | 20:55 |
portante | I think cfq is the default for SATA drives. | 20:55 |
bsdkurt | standalone testing with a custom program I have for work, but iostat watching swift gets similar results as well | 20:56 |
portante | have you measured swift response times improving for large files? | 20:56 |
bsdkurt | I have a 5-node storage cluster with 5 2TB drives per node, all 10GbE networking | 20:56 |
bsdkurt | I have been measuring throughput using iostat on bond0 (client facing) and bond1 (storage facing) | 20:57 |
bsdkurt | proxy has 4 bonded 10 gigabit nics for bond0 and the same for bond1 | 20:59 |
portante | I think you have to confirm with measuring a swift workload | 20:59 |
portante | so each storage node also have 4 bonded 10ge nics? | 20:59 |
bsdkurt | each storage node has 1 10gig nic | 20:59 |
portante | which way is the 4 bonded 10ge nic facing? storage or client? | 21:00 |
portante | ah, sorry I see it | 21:00 |
bsdkurt | both.. yes :-) | 21:00 |
portante | so the proxy only talks to the storage nodes via bond1 interface? | 21:01 |
portante | and do you have arp_filter enabled? | 21:01 |
bsdkurt | correct | 21:01 |
bsdkurt | no | 21:01 |
bsdkurt | not sure what that is | 21:01 |
portante | oh, you might want to do that so that linux doesn't try to "ensure a packet gets to a host any which way it can" | 21:01 |
portante | meaning, it might send packets over bond1 to get to a client if the route allows it | 21:02 |
bsdkurt | ok. I'll research it | 21:02 |
portante | and vice versa | 21:02 |
portante | we had an issue where our lab network was getting flooded by backend traffic, so we have been setting arp_filter to help prevent that from happening | 21:03 |
bsdkurt | ah, the storage is on its own subnet with a dedicated 10GbE switch | 21:03 |
notmyname | wer: I just saw someone in a different IRC channel say "ossum" | 21:03 |
notmyname | (cue http://www.damninteresting.com/the-baader-meinhof-phenomenon/) | 21:03 |
portante | hmm, if that nodes routes, it might use it, just be careful | 21:04 |
wer | they are ossm. | 21:04 |
bsdkurt | portante: okay. I'll check it out. thanks | 21:04 |
portante | also, you might want to consider playing around with the io elevators to see if they make a difference here | 21:05 |
portante | bsdkurt | 21:05 |
*** shri has joined #openstack-swift | 21:08 | |
*** JuanManuelOlle has quit IRC | 21:15 | |
*** dmsimard has quit IRC | 21:20 | |
*** dmsimard has joined #openstack-swift | 21:20 | |
*** dmsimard has quit IRC | 21:22 | |
bsdkurt | portante: sry, I was called away... I'll try the io elevators as well. thanks | 21:27 |
*** keving1 has joined #openstack-swift | 21:30 | |
portante | bsdkurt: thanks | 21:30 |
wer | notmyname: That data migrate thing is close. yeah I essentially tried to implement something like that.... but probably very poorly. I would love to see how something like that is implemented. I think it would give me a lot of insight into how I should have done certain things. Even using container headers would have been better than what I did in proxy-server.conf :/ | 21:35 |
openstackgerrit | gro qez proposed a change to openstack/python-swiftclient: Decode HTTP responses, fixes bug #1282861 https://review.openstack.org/78043 | 21:36 |
*** changbl has joined #openstack-swift | 21:38 | |
*** mlipchuk has joined #openstack-swift | 21:41 | |
bsdkurt | is there some cleanup needed between .unittests? | 21:44 |
notmyname | bsdkurt: shouldn't be | 21:44 |
* notmyname runs `resetswift` about 30 times a day | 21:44 | |
bsdkurt | hmm... | 21:45 |
bsdkurt | at the end of .unittest I get the following error: http://paste.openstack.org/show/73698/ even if I remove my o_direct patch | 21:48 |
wer | notmyname: there is good stuff in that code you sent. good comments. On a public bucket it seems you could expose Amazon S3 keys :/ | 21:58 |
wer | but it's super close to what I need I think. | 21:58 |
*** piyush1 has quit IRC | 22:03 | |
*** Dieterbe has joined #openstack-swift | 22:12 | |
peluse | clayg: FYI have to run out for a while, have all your comments incorporated except one (the database broker initialize comment) that I need to look at a bit closer either tonight or 1st thing in the morn. Thanks! | 22:17 |
clayg | peluse: thank YOU! no worries. | 22:18 |
*** keving1 has left #openstack-swift | 22:20 | |
openstackgerrit | Samuel Merritt proposed a change to openstack/swift: Store policy index in container_stat table https://review.openstack.org/71704 | 22:21 |
*** fifieldt has joined #openstack-swift | 22:29 | |
*** byeager has quit IRC | 22:36 | |
*** byeager has joined #openstack-swift | 22:36 | |
*** openstackgerrit has quit IRC | 22:39 | |
*** openstackgerrit has joined #openstack-swift | 22:39 | |
*** byeager has quit IRC | 22:41 | |
openstackgerrit | Samuel Merritt proposed a change to openstack/swift: Fix changing of storage policy index https://review.openstack.org/72536 | 22:44 |
h6w | Morning all! | 22:45 |
h6w | Very interesting answer to: https://ask.openstack.org/en/question/25373/how-do-i-push-or-repush-a-swift-ring/ | 22:45 |
h6w | It seems strange that something done automagically by the swift-ring-builder command can't be redone later (with perhaps some additional flag). | 22:46 |
h6w | Since Swift knows where the mounts are, doesn't it introduce a potential for human error to use a completely different command and find the correct mount points, etc, to sync again? | 22:48 |
torgomatic | h6w: you copy the rings from $adminbox to each server; that doesn't involve finding mount points or anything | 22:51 |
h6w | torgomatic: oic! I thought it referred to the ring data, not the ring files in /etc/swift | 22:54 |
torgomatic | h6w: yeah, the .ring.gz files in /etc/swift | 22:55 |
torgomatic | not the .builder files | 22:55 |
*** bvandenh has quit IRC | 22:56 | |
h6w | Do I have to do that to all the nodes, or just the ones that have been modified? | 22:57 |
clayg | all the nodes | 22:59 |
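In practice the "push" is just copying the rebalanced .ring.gz files out; a sketch with made-up hostnames:

```
# on the admin box, after swift-ring-builder ... rebalance:
for node in proxy1 storage1 storage2 storage3 storage4 storage5; do
    scp /etc/swift/*.ring.gz $node:/etc/swift/
done
```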
*** mlipchuk has quit IRC | 23:03 | |
clayg | would it surprise anyone if I said that the proxy doesn't use the same x-timestamp for all backend requests on a container PUT? | 23:05 |
clayg | like if your choices were 1) no that's dumb - your setup is broken 2) yeah sounds right, container replication is merge based so it's w/e or 3) hrmm.. i'm not sure, if that's true probably worth fixing | 23:06 |
clayg | ^ where would you fall? | 23:06 |
openstackgerrit | Clay Gerrard proposed a change to openstack/swift: Fix race on container recreate https://review.openstack.org/81104 | 23:07 |
*** yuanz has joined #openstack-swift | 23:11 | |
*** yuan has quit IRC | 23:11 | |
openstackgerrit | Clay Gerrard proposed a change to openstack/swift: Fix race on container recreate https://review.openstack.org/81104 | 23:13 |
openstackgerrit | Kurt J Miller proposed a change to openstack/swift: O_DIRECT: Use O_DIRECT for reading and writing object data. https://review.openstack.org/81111 | 23:20 |