*** jrichli has quit IRC | 00:01 | |
*** dmsimard is now known as dmsimard_away | 00:01 | |
*** InAnimaTe has joined #openstack-swift | 00:05 | |
InAnimaTe | hey all, devops/sysadmin here who just inherited a swift cluster that was lifted, and then never touched... | 00:06 |
InAnimaTe | this chan and myself will most likely become very good friends over the next few months | 00:06 |
notmyname | :-) | 00:06 |
mattoliverau | InAnimaTe: lol, welcome to channel :) | 00:08 |
InAnimaTe | thx | 00:08 |
notmyname | InAnimaTe: glad to have you. anything you can share as you ask questions will be helpful | 00:09 |
notmyname | InAnimaTe: anything bugging you today? | 00:09 |
InAnimaTe | actually yes, compiling the necessary stuff and creating a pastie to share...give me a few minutes | 00:09 |
* InAnimaTe is already liking this community :) | 00:10 | |
clayg | torgomatic: man unit.proxy.controller.test_obj got sorta jacked by the earlier call to container_info :/ | 00:16 |
InAnimaTe | http://tty0.in/view/5497adce | 00:17 |
InAnimaTe | ^well thats it | 00:17 |
InAnimaTe | tl;dr been getting 404's on some GET's, but completely random. Weird Expect 100 error in proxy logs, everything else seems fine | 00:17 |
InAnimaTe | ohh, and swift 1.8.0 | 00:18 |
InAnimaTe | notmyname: yeah i have a blog ill be posting my experiences as best i can | 00:18 |
clayg | one dot *eight* | 00:19 |
* clayg rembers the good ol' days | 00:19 | |
*** ho has joined #openstack-swift | 00:20 | |
ho | good morning guys! | 00:20 |
notmyname | ho: good morning | 00:20 |
mattoliverau | ho: morning | 00:21 |
ho | notmyname: mattoliverau: morning! | 00:21 |
notmyname | InAnimaTe: just to get the question out of the way, I'm curious about your plans if any to upgrade | 00:21 |
InAnimaTe | yeah | 00:21 |
InAnimaTe | so | 00:21 |
notmyname | InAnimaTe: I certainly want to help make you successful with what you have. but I like newer versions of swift better than older ones ;-) | 00:22 |
InAnimaTe | the guy who stood this up said that as long as there are no ring changes in the new versions, it should be as easy as upgrading the package and restarting services | 00:22 |
notmyname | looking at the paste | 00:22 |
notmyname | InAnimaTe: we're pretty serious about always allowing version upgrades without any end-user downtime | 00:22 |
notmyname | but yes there have been changes to the rings since 1.8 (most notably storage policies) | 00:23 |
InAnimaTe | hmm ok. ill have to dig through the docs and figure out how to handle those changes | 00:24 |
notmyname | we can help with that later. let's get to your issue today | 00:24 |
notmyname | InAnimaTe: for future reference, here's a blog post on upgrading swift https://swiftstack.com/blog/2013/12/20/upgrade-openstack-swift-no-downtime/ | 00:25 |
*** bhakta has joined #openstack-swift | 00:25 | |
clayg | InAnimaTe: topology would be good to know as well - i'm seeing some logs from "storage-proxy-01" *host* from both "proxy-server" and "container-server" - but no log lines from "object-server" | 00:25 |
bhakta | I have a question about the operator role | 00:25 |
clayg | bhakta: sounds very keystone-y | 00:26 |
bhakta | yes | 00:26 |
bhakta | We are using keystone middleware for authentication | 00:26 |
clayg | oic, you must be doing some syslog forwarding... I wonder if the object-servers are misconfigured or just dropping udp messages or something... | 00:27 |
clayg | InAnimaTe: ^ | 00:27 |
notmyname | bhakta: what's your question? | 00:27 |
InAnimaTe | hmm good question. | 00:27 |
InAnimaTe | also, any chance the chan admin could add this chan to https://botbot.me/ ? | 00:27 |
InAnimaTe | think it would be pretty worthwhile | 00:27 |
bhakta | trying to figure out who should belong to the SwiftOperator role | 00:27 |
notmyname | InAnimaTe: what is that? | 00:28 |
bhakta | From the documentation I read it says SwiftOperator is allowed to create containers | 00:29 |
notmyname | this is a logged channel http://eavesdrop.openstack.org/irclogs/%23openstack-swift/ | 00:29 |
bhakta | So I created a SwiftOperator role in keystone and assign a user ..everything works from Horizon | 00:30 |
notmyname | bhakta: that role is keystone's way of knowing if a given identity is the "owner" of a swift account. the owner has full read/write access to a given swift account. other users (non-owners) must be explicitly granted permissions | 00:30 |
InAnimaTe | notmyname: ahh ok thx. botbot is a really nice log keeper with a great gui and search capabilities | 00:30 |
bhakta | But my users who don't belong to the SwiftOperator role get unauthorized errors even when I explicitly gave permission to a container (using swift post -r) | 00:32 |
*** ChanServ changes topic to "Review Dashboard: http://goo.gl/uRzLBX | Overview Dashboard: http://goo.gl/9EI0Sz | Priority Reviews: https://wiki.openstack.org/wiki/Swift/PriorityReviews | Ideas: https://wiki.openstack.org/wiki/Swift/ideas | Logs: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/" | 00:32 | |
notmyname | InAnimaTe: now in the topic :-) | 00:33 |
bhakta | it seems that the user should be able to list the container he has read access to | 00:33 |
InAnimaTe | yay | 00:33 |
bhakta | am I missing something? | 00:33 |
InAnimaTe | clayg: im putting together a tldr ver of our layout | 00:34 |
notmyname | bhakta: that would be up to keystone to track (and AFAIK, they don't) | 00:34 |
mattoliverau | bhakta: the user needs the rlistings acl to list the container | 00:35 |
notmyname | bhakta: ah, my mistake. I misread your question. what mattoliverau said is correct | 00:35 |
bhakta | I even tried setting .rlistings | 00:35 |
bhakta | but same ..could not get it to list the container | 00:35 |
*** dmorita has joined #openstack-swift | 00:36 | |
bhakta | I can list the object within the container but not the container itself | 00:36 |
*** km has joined #openstack-swift | 00:36 | |
notmyname | bhakta: what's the request you're making that's getting the error? | 00:37 |
bhakta | let me get that | 00:37 |
bhakta | swift post -r "example_role,example_account:Test1,.rlistings" test_container | 00:41 |
notmyname | is that what's getting the 401 response? | 00:42 |
bhakta | yes | 00:42 |
notmyname | so your user identity doesn't have access to set ACLs on that container | 00:43 |
clayg | if you're giving read access to a user the "referrer" permissions don't matter - if a user has read access to a container they can do listings or get objects | 00:43 |
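In other words, an authenticated group in the container's read ACL is enough for both listings and object GETs; the `.r:` / `.rlistings` entries only govern anonymous referrer access. A toy sketch of that check (loosely modeled on `swift.common.middleware.acl`; function names are illustrative, not Swift's actual code):

```python
def parse_read_acl(acl):
    # split "example_role,example_account:Test1,.rlistings" into referrer-style
    # entries (".r:..." / ".rlistings") and authenticated groups/roles
    groups = [g.strip() for g in (acl or '').split(',') if g.strip()]
    referrer = [g for g in groups if g.startswith('.r')]
    authed = [g for g in groups if not g.startswith('.r')]
    return referrer, authed

def may_read(user_groups, acl):
    # an authenticated match grants read *and* listings; the .r entries
    # only matter for anonymous/referrer requests
    _, authed = parse_read_acl(acl)
    return any(g in authed for g in user_groups)

acl = 'example_role,example_account:Test1,.rlistings'
```

With this ACL, a user carrying the `example_role` group (or the `example_account:Test1` identity) can read, while a bare `_member_` user cannot.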
clayg | yeah can you swift list test_container? | 00:43 |
bhakta | listings of containers or objects? | 00:43 |
bhakta | yes.. i was able to "swift list test_container" | 00:44 |
ho | bhakta: Does the user have the operator role (default: swiftoperator)? To execute it, the user needs to have the operator role. | 00:44 |
bhakta | no..that's the question..do I need to assign every user the "SwiftOperator" role | 00:45 |
bhakta | ? | 00:45 |
clayg | what role do you have that's giving you access if not admin access - very strange | 00:45 |
bhakta | then what's the advantage of container level access. If I do that every user with that role can access everything | 00:45 |
bhakta | _member_ | 00:46 |
ho | bhakta: if you put/post to containers, you always have to have operator role | 00:46 |
bhakta | but I am trying to do " | 00:46 |
bhakta | Swift list | 00:46 |
bhakta | as non SwiftOperator | 00:46 |
clayg | bhakta: yes - but to achieve that you need to set ACLs - and it sounds like *that's* the part giving you the 401 - so we have to fix that first | 00:47 |
openstackgerrit | Tim Burke proposed openstack/python-swiftclient: Mention --segment-size option after 413 response https://review.openstack.org/161966 | 00:47 |
bhakta | I thought doing this would set the ACL "swift post -r "example_role,example_account:Test1,.rlistings" test_container" ? is that not right? | 00:48 |
bhakta | Do I need to set the "ACLs" on Keystone side? | 00:49 |
*** rmcall has quit IRC | 00:52 | |
bhakta | ls | 00:53 |
ho | bhakta: your command line looks good and you don't need to set ACL in keystone (I think Container-ACL is a function of swift) | 00:54 |
ho | bhakta: the user needs to have operator roles (swiftoperator) when you execute the command line. | 00:57 |
bhakta | so you are saying if I try to get the container listing via the REST API, it should return me the list? | 00:58 |
bhakta | is this swift client issue then? | 00:58 |
bhakta | I am not able to get the container to show up in Horizon which is bad | 00:59 |
openstackgerrit | Tim Burke proposed openstack/python-swiftclient: Remove all DLO segments on upload of replacement https://review.openstack.org/161972 | 00:59 |
notmyname | bhakta: can you stat the container (HEAD) and give us the value of the read acl header? | 00:59 |
bhakta | I need to leave now to pick up my daughter but I will be logging back in a few hours | 01:00 |
clayg | notmyname: I know a GET to the container won't *return* X-Container-Read if you're not a swift_owner = True - but I don't know if the cli will just display the empty value regardless or if it will only display the thing if it's returned in the resp headers | 01:00 |
bhakta | hopefully i will be able to continue the discussion | 01:01 |
bhakta | thanks | 01:01 |
InAnimaTe | clayg: http://tty0.in/view/f6028be6 | 01:01 |
notmyname | clayg: right, good point. I was hoping for doing that with an account that has permissions | 01:01 |
InAnimaTe | fyi, all i know about is pretty much whats in /etc/swift | 01:02 |
notmyname | InAnimaTe: good info, thanks | 01:02 |
*** bhakta has quit IRC | 01:05 | |
notmyname | InAnimaTe: ok, so the problem is the connection errors? the 404s? or just understanding what's going on? | 01:05 |
InAnimaTe | mainly the 404's | 01:05 |
InAnimaTe | i guess im wondering why exactly they are happening | 01:06 |
InAnimaTe | and at what layer, proxy or storage | 01:06 |
InAnimaTe | seems to be proxy to me | 01:06 |
notmyname | InAnimaTe: yes, it looks like the proxy is returning a 404 to the client | 01:07 |
notmyname | it only does that after it gets 404s (or errors) from all of the storage nodes it tried | 01:08 |
InAnimaTe | ahh ok | 01:08 |
notmyname | it will try all of the primaries, and then several handoff nodes (the default is 2*<replica count>. I think it was that in 1.8.0 too) | 01:08 |
InAnimaTe | so the "ERROR with Object server 10.1.24.69:6000/sdf re: Expect: 100-continue" means it got a crappy reply and the node that should have it doesn't? | 01:08 |
torgomatic | notmyname: I wouldn't count on that | 01:08 |
notmyname | InAnimaTe: ok, torgomatic says I'm wrong. that means I'm probably wrong | 01:09 |
InAnimaTe | lol | 01:09 |
notmyname | no, look at that error line. it says ConnectionTimeout(0.5s) | 01:09 |
InAnimaTe | ahh yeah | 01:09 |
notmyname | that's the timeout on establishing the tcp connection with that box | 01:10 |
notmyname | (technically, creating an HTTPConnection object) | 01:10 |
InAnimaTe | yeah but does that directly mean it couldn't get the object? or just that that particular node didn't answer in time? | 01:10 |
notmyname | that log line means that the particular server didn't answer in time. but that's hidden from the end-user | 01:11 |
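That 0.5s is the proxy's backend connection timeout, and it is tunable in `proxy-server.conf`. A sketch with the defaults of this era (option names worth verifying against the installed sample config):

```ini
[app:proxy-server]
use = egg:swift#proxy
# max time to establish a connection to a backend node (the 0.5s in the log)
conn_timeout = 0.5
# max time to wait for a backend response once connected
node_timeout = 3
```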
torgomatic | notmyname: request_node_count was in d79a67eb, which is in version (tag) 1.8.1.1, but not 1.8.0 | 01:11 |
notmyname | ah | 01:11 |
notmyname | InAnimaTe: swift has some cli tools for figuring out where an object /should/ be. `swift-get-nodes` | 01:11 |
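`swift-get-nodes /etc/swift/object.ring.gz <account> <container> <object>` prints the primary and handoff devices for an object. Under the hood, the partition is just the top bits of an md5 over the object path plus a per-cluster secret suffix from `swift.conf`. A rough illustrative sketch (not Swift's exact code, which lives in `swift.common.utils.hash_path` and the ring's part shift, and on newer versions also mixes in a hash prefix):

```python
import struct
from hashlib import md5

def get_partition(account, container, obj, part_power, hash_suffix=b'changeme'):
    # hash the full object path plus the cluster's secret suffix
    path = ('/%s/%s/%s' % (account, container, obj)).encode('utf-8')
    digest = md5(path + hash_suffix).digest()
    # keep only the top part_power bits of the first 32 bits of the digest
    return struct.unpack_from('>I', digest)[0] >> (32 - part_power)

part = get_partition('AUTH_test', 'photos', 'cat.jpg', part_power=18)
```

The ring then maps that partition number to its primary devices, which is exactly the list `swift-get-nodes` prints.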
InAnimaTe | ahh good to know | 01:16 |
InAnimaTe | gonna derp around with that command a bit | 01:16 |
*** fandi has joined #openstack-swift | 01:16 | |
*** aix has joined #openstack-swift | 01:16 | |
*** Canaimero-e64b has joined #openstack-swift | 01:18 | |
*** nexusz99 has joined #openstack-swift | 01:19 | |
*** Canaimero-e64b7 has joined #openstack-swift | 01:20 | |
InAnimaTe | so i guess im not understanding why the first PUT of an object (L37) returns a 201 (which from what i know, means the object has been written to at least one node) but then subsequent GET's fail | 01:21 |
InAnimaTe | and im guessing those connection timeouts are in relation to the HEAD which is checking if the object exists at all | 01:21 |
InAnimaTe | http://tty0.in/view/5497adce#L37 <- L37 | 01:22 |
InAnimaTe | and even after two minutes, GET's are still failing | 01:22 |
*** Canaimero-e64b has quit IRC | 01:24 | |
notmyname | InAnimaTe: ah interesting. look at the transaction id tx02ee2f8d581549f393483d52c49b463b | 01:25 |
notmyname | so the data got written | 01:26 |
notmyname | specirically to storage 3, 5,and 7 | 01:26 |
InAnimaTe | right | 01:26 |
* notmyname can't English | 01:26 | |
InAnimaTe | rofl | 01:26 |
*** tsg has quit IRC | 01:26 | |
notmyname | however, you've got the connection timeouts | 01:26 |
*** tsg has joined #openstack-swift | 01:26 | |
notmyname | on .69 .71 and .63 | 01:27 |
notmyname | so I'm guessing swift-get-nodes would show that it should be on those 3 servers | 01:27 |
*** Canaimero-e64b7 has left #openstack-swift | 01:28 | |
InAnimaTe | ohhhh. so it does actually exist on .69 .71 and .63 but the proxy doesn't get a response from them, so it returns a "nope, this object dont exist" | 01:28 |
notmyname | so (and based on torgomatic's comment) I'm guessing that 1.8.0 doesn't check handoffs deep enough (or at all) on the GET | 01:29 |
InAnimaTe | hmm | 01:29 |
notmyname | I mean, I could actually look at the code. or I could just hypothesize | 01:30 |
notmyname | InAnimaTe: but the issue you have, then, is that the data is on handoff nodes and not primary nodes, so you've got to figure out why (ie check processes on those boxes or just check that they are turned on and have network plugged in) | 01:32 |
InAnimaTe | well, https://github.com/openstack/swift/blob/master/swift/proxy/controllers/obj.py#L351 | 01:32 |
*** fandi has quit IRC | 01:33 | |
*** fandi has joined #openstack-swift | 01:33 | |
InAnimaTe | lol have network plugged in? | 01:34 |
*** tsg has quit IRC | 01:37 | |
notmyname | InAnimaTe: it's actually in the except below there. catching the timeout | 01:37 |
notmyname | right? the ConnectionTimeout | 01:37 |
InAnimaTe | ahh yeah | 01:38 |
InAnimaTe | that's super fucking weird then. in no way do any of these boxes *seem* to have connection limits | 01:39 |
InAnimaTe | might have to fiddle with timewait sysctl values | 01:40 |
notmyname | is this in the midst of other concurrent requests? | 01:40 |
InAnimaTe | i would assume | 01:40 |
InAnimaTe | (yes its continuously used) | 01:40 |
notmyname | ok | 01:40 |
notmyname | InAnimaTe: can you paste one of the container server configs? | 01:41 |
notmyname | container-server.conf | 01:41 |
InAnimaTe | yep hold on | 01:41 |
InAnimaTe | notmyname: http://tty0.in/view/c7231d8e | 01:43 |
notmyname | hmm..I was looking for the backlog setting. not there, so it's the default | 01:45 |
* notmyname checks out 1.8 version of swift | 01:45 | |
notmyname | Thu Apr 4 04:15:54 2013 +0000 | 01:45 |
notmyname | 2 years old ;-) | 01:45 |
notmyname | backlog default is 4096 | 01:45 |
notmyname | ie it can queue 4096 connections per worker | 01:47 |
notmyname | so I'm guessing that with 8 workers, you aren't seeing 8k connections to the storage servers. | 01:48 |
notmyname | well, you could, if you're seeing about 3k concurrent requests to the proxy | 01:48 |
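For reference, a sketch of where that setting would live in the storage server config (the value shown is the default that applies when the option is absent):

```ini
[DEFAULT]
workers = 8
# per-worker listen(2) queue depth; 4096 is the default when the option is unset
backlog = 4096
```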
InAnimaTe | well | 01:48 |
InAnimaTe | so it used to be two workers | 01:48 |
InAnimaTe | (probably should have mentioned this) | 01:48 |
InAnimaTe | I upped it last night to 8 | 01:48 |
*** zhill has quit IRC | 01:53 | |
notmyname | oh | 01:53 |
notmyname | still, probably ok | 01:53 |
InAnimaTe | yeah we run the Account, Container, and Object servers with 8 workers each | 01:55 |
notmyname | did you confirm that those servers are online and accepting connections? | 01:56 |
InAnimaTe | yeah but let me double check | 01:57 |
InAnimaTe | i mean if they weren't id have a lot more problems than intermittent 404's | 01:57 |
notmyname | while you're there, check IO wait and CPU usage | 01:58 |
*** kei_yama has joined #openstack-swift | 02:04 | |
InAnimaTe | btw, http://tty0.in/view/35a02bfc | 02:04 |
InAnimaTe | ^some transactions that derped today | 02:04 |
InAnimaTe | so | 02:10 |
InAnimaTe | it looks like everything is pretty much running | 02:10 |
InAnimaTe | one thing that isnt running on any of them is the object-auditor | 02:10 |
InAnimaTe | the original admin disabled it for whatever reason | 02:10 |
zaitcev | crazy | 02:12 |
*** haomaiwang has joined #openstack-swift | 02:13 | |
InAnimaTe | is it actually super important? | 02:13 |
*** tsg has joined #openstack-swift | 02:14 | |
torgomatic | depends; do you like data integrity? /s | 02:17 |
torgomatic | it's the only thing that checks offline for object-level bitrot, so it's sort of important | 02:18 |
torgomatic | when you GET an object, the downloaded copy will also get checked, so if your objects are *all* accessed fairly often you might be able to do without, but why run the risk? | 02:18 |
torgomatic | it is hard on disks though, so it should be tuned to run quite slowly | 02:19 |
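If the auditor gets re-enabled, it can be throttled in the `[object-auditor]` section of `object-server.conf`. A sketch with illustrative values (option names from the sample config of this era, worth double-checking against the installed version):

```ini
[object-auditor]
# throttle the background integrity crawl so it doesn't starve client I/O
files_per_second = 20
bytes_per_second = 10000000
zero_byte_files_per_second = 50
```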
zaitcev | I suspect that's the part that confuses people | 02:21 |
zaitcev | Also, we used to have a bug where ZBF just goes crazy regardless | 02:21 |
InAnimaTe | hmm | 02:23 |
InAnimaTe | i wonder if it was turned off because its hard on the disks | 02:23 |
*** dmsimard_away is now known as dmsimard | 02:25 | |
*** dmsimard is now known as dmsimard_away | 02:35 | |
*** tsg has quit IRC | 02:39 | |
*** bandarji has quit IRC | 02:40 | |
*** rmcall has joined #openstack-swift | 02:58 | |
*** amrith has left #openstack-swift | 02:58 | |
*** rmcall has quit IRC | 03:20 | |
*** david-lyle is now known as david-lyle_afk | 03:44 | |
openstackgerrit | Merged openstack/swift: Use Container override header to update etag and size https://review.openstack.org/161030 | 03:47 |
openstackgerrit | Merged openstack/python-swiftclient: Mention --segment-size option after 413 response https://review.openstack.org/161966 | 03:48 |
*** fandi has quit IRC | 03:49 | |
openstackgerrit | Pete Zaitcev proposed openstack/swift: Pluggable Back-ends for account and container servers https://review.openstack.org/47713 | 03:52 |
*** Fin1te has joined #openstack-swift | 04:15 | |
*** gyee has quit IRC | 04:23 | |
*** Fin1te has quit IRC | 04:50 | |
*** ppai has joined #openstack-swift | 04:58 | |
*** DCWilliams_VA has joined #openstack-swift | 05:09 | |
*** zaitcev has quit IRC | 05:10 | |
*** DCWilliams_VA has quit IRC | 05:13 | |
*** dmorita has quit IRC | 05:20 | |
*** SkyRocknRoll has joined #openstack-swift | 05:47 | |
*** SkyRocknRoll has joined #openstack-swift | 05:47 | |
openstackgerrit | Clay Gerrard proposed openstack/swift: Fix EC download when data nodes is not divisiable by two. https://review.openstack.org/162026 | 06:05 |
*** Bsony has joined #openstack-swift | 06:13 | |
*** Bsony has quit IRC | 06:17 | |
ho | bhakta: notmyname: clayg: mattoliverau: I just reproduced bhakta's problem. I think step (2) in the following url is our target. http://paste.openstack.org/show/190144/ . I will wait for bhakta's response. | 06:18 |
*** kei_yama has quit IRC | 06:19 | |
mattoliverau | ho: thanks for the research. Yeah (2) will always fail because test1 has no rights to set the container ACLs, they have no rights at all. Only the owner or reseller admin can. | 06:29 |
mattoliverau | ho: so yeah hopefully that's what he's doing wrong. | 06:29 |
mattoliverau | nice way to write it in an understandable way. | 06:30 |
mattoliverau | and on that note I'm calling it a day. Night all. | 06:30 |
ho | mattoliverau: good night! | 06:31 |
*** wshao has joined #openstack-swift | 06:39 | |
wshao | hello, | 06:39 |
wshao | I have an older version of swift 1.8.4, with swauth. I try to upgrade, but swauth does not work. Is that abandoned? | 06:40 |
ho | wshao: hello, I know it worked with havana (1.10.x) so it should work with that version. | 06:44 |
ho | wshao: sorry I miss understand. you upgrade from 1.8.4 to latest | 06:46 |
ho | s/miss understand/mis-understand/ | 06:47 |
*** nexusz99 has quit IRC | 06:47 | |
*** Bsony has joined #openstack-swift | 06:57 | |
*** wshao has quit IRC | 07:10 | |
*** jamielennox is now known as jamielennox|away | 07:16 | |
*** torgomatic has quit IRC | 07:21 | |
*** Bsony has quit IRC | 07:36 | |
*** mmcardle has joined #openstack-swift | 07:57 | |
*** zul has quit IRC | 08:01 | |
openstackgerrit | Yuan Zhou proposed openstack/swift: Fix EC PUT on HTTP_CONFLICT or HTTP_PRECONDITION_FAILED https://review.openstack.org/162047 | 08:04 |
*** rledisez has joined #openstack-swift | 08:08 | |
*** chlong has quit IRC | 08:11 | |
*** acoles_away is now known as acoles | 08:12 | |
acoles | clayg: yes i have shamelessly copied https://gist.github.com/clayg/d349be91bfad19b3cd85 ;) | 08:13 |
*** zul has joined #openstack-swift | 08:14 | |
acoles | clayg: i thought 'rv' was something you guys drove on vacation | 08:15 |
*** geaaru has joined #openstack-swift | 08:17 | |
*** notmyname has quit IRC | 08:30 | |
*** zhill has joined #openstack-swift | 08:34 | |
mattoliverau | acoles: lol, love me some good dad jokes especially those aimed at the guys in the US ;p | 08:34 |
*** zhill has quit IRC | 08:36 | |
acoles | mattoliverau: so you have started your weekend, right? | 08:36 |
*** torgomatic has joined #openstack-swift | 08:38 | |
*** ChanServ sets mode: +v torgomatic | 08:38 | |
mattoliverau | acoles: yup, and its a long weekend too! But probably still do some work.. Cause I'm a geek :p | 08:40 |
acoles | mattoliverau: he, have a good one | 08:41 |
mattoliverau | acoles: You too, happy Friday! | 08:43 |
*** jordanP has joined #openstack-swift | 08:44 | |
*** jistr has joined #openstack-swift | 09:02 | |
*** jistr is now known as jistr|biab | 09:36 | |
*** jistr|biab is now known as jistr | 10:39 | |
*** haomaiwang has quit IRC | 11:09 | |
*** chlong has joined #openstack-swift | 11:22 | |
*** nellysmitt has joined #openstack-swift | 11:25 | |
*** Trixboxer has joined #openstack-swift | 11:50 | |
*** bill_az has joined #openstack-swift | 11:54 | |
*** ho has quit IRC | 12:10 | |
*** aix has quit IRC | 12:38 | |
*** km has quit IRC | 12:44 | |
*** DCWilliams_VA has joined #openstack-swift | 13:04 | |
*** mmcardle has quit IRC | 13:26 | |
*** ppai has quit IRC | 13:27 | |
*** mahatic has joined #openstack-swift | 13:28 | |
*** aix has joined #openstack-swift | 13:30 | |
*** mahatic has quit IRC | 13:37 | |
*** ppai has joined #openstack-swift | 13:39 | |
*** mmcardle has joined #openstack-swift | 13:48 | |
*** DCWilliams_VA has quit IRC | 13:57 | |
*** lpabon has joined #openstack-swift | 13:57 | |
*** mahatic has joined #openstack-swift | 13:57 | |
*** annegentle has joined #openstack-swift | 13:59 | |
*** annegentle has quit IRC | 14:00 | |
*** annegentle has joined #openstack-swift | 14:01 | |
*** ppai has quit IRC | 14:01 | |
*** fifieldt_ has quit IRC | 14:05 | |
*** rdaly2 has joined #openstack-swift | 14:13 | |
*** annegentle is now known as superanne | 14:31 | |
*** chlong has quit IRC | 14:34 | |
*** jrichli has joined #openstack-swift | 14:36 | |
*** jordanP has quit IRC | 14:47 | |
*** rmcall has joined #openstack-swift | 14:52 | |
*** jrichli has quit IRC | 15:04 | |
*** aarondelp has joined #openstack-swift | 15:12 | |
*** jmacs has joined #openstack-swift | 15:12 | |
jmacs | Hi all, question from a beginner - when object-replicator is copying partitions around with rsync, is there any signal to object-server that the replication has finished? I haven't seen anything in the code yet | 15:14 |
*** jordanP has joined #openstack-swift | 15:14 | |
glange | no signal | 15:22 |
glange | out of place objects are deleted after replication is successful | 15:23 |
glange | and the proxy looks on multiple object servers when looking for an object | 15:24 |
jmacs | Sure - I was just wondering how the receiving node knew the replication was successful and ready to serve up objects | 15:24 |
glange | so, it's ok if one (or sometimes more copies) is out of place | 15:24 |
glange | well, on the receiving node, the object is either there or not | 15:25 |
glange | if it's there, it will serve the object | 15:25 |
*** jrichli has joined #openstack-swift | 15:26 | |
glange | does that help? | 15:26 |
jmacs | Is the underlying data file moved to its destination atomically? | 15:26 |
*** dmsimard_away is now known as dmsimard | 15:28 | |
jmacs | Yes, I think it is (just reading rsync's man page now) | 15:29 |
jmacs | That makes sense. Thanks. | 15:29 |
glange | cool, I wasn't sure about that, I didn't see anything in the python code that makes that happen | 15:30 |
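rsync delivers each file to a temporary dot-file in the destination directory and then rename()s it into place; since POSIX rename is atomic, the object-server only ever sees whole files. The same write-temp-then-rename pattern, sketched in Python:

```python
import os
import tempfile

def atomic_write(path, data):
    # write into a temp file in the same directory, fsync, then rename() it
    # into place; POSIX rename is atomic within a filesystem, so readers see
    # either the old file or the complete new one, never a partial copy
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'wb') as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.rename(tmp, path)
    except Exception:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise
```

Creating the temp file in the destination directory (not /tmp) matters: rename is only atomic within one filesystem.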
*** superanne has quit IRC | 15:59 | |
*** superanne has joined #openstack-swift | 16:00 | |
*** superanne has quit IRC | 16:07 | |
*** superanne has joined #openstack-swift | 16:08 | |
*** chlong has joined #openstack-swift | 16:33 | |
*** Nadeem_ has joined #openstack-swift | 16:47 | |
mahatic | hi, this could be very basic: given a URL like this http://127.0.0.1:6000/, how does it know which one to call - AccountController, ContainerController, ObjectController? | 16:48 |
*** rmcall has quit IRC | 16:48 | |
*** rmcall has joined #openstack-swift | 16:49 | |
mahatic | I have followed the SAIO installation. And my configs are from there. How does it identify devices from here: /etc/swift/container-server/1.conf and call the appropriate controllers? | 16:49 |
*** gyee has joined #openstack-swift | 16:49 | |
tdasilva | mahatic: Hi, the proxy server has a method that returns the correct controller based on the request url. Take a look here: https://github.com/openstack/swift/blob/master/swift/proxy/server.py#L238,L265 | 16:52 |
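The gist of that method: the depth of the `/v1/<account>[/<container>[/<object>]]` request path picks the controller. A toy sketch (illustrative only; the real proxy code uses `split_path` and returns controller classes rather than names):

```python
def get_controller(path):
    # /v1/<account>[/<container>[/<object>]] -- path depth picks the
    # controller, the same decision Application.get_controller makes
    segs = [s for s in path.split('/') if s]
    if not segs or segs[0] != 'v1':
        raise ValueError('invalid path: %r' % path)
    depth = len(segs) - 1
    if depth >= 3:          # object names may themselves contain slashes
        return 'ObjectController'
    if depth == 2:
        return 'ContainerController'
    if depth == 1:
        return 'AccountController'
    raise ValueError('invalid path: %r' % path)
```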
mahatic | tdasilva, looking. thanks so much for the response! | 16:53 |
*** jordanP has quit IRC | 16:53 | |
tdasilva | mahatic: I hope it helps :-) | 16:53 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/python-swiftclient: Updated from global requirements https://review.openstack.org/89250 | 16:53 |
mahatic | :) | 16:54 |
*** notmyname has joined #openstack-swift | 16:54 | |
*** ChanServ sets mode: +v notmyname | 16:54 | |
notmyname | good morning | 16:55 |
notmyname | looks like rackspace restarted my znc box last night, so I missed the playback | 16:55 |
mahatic | tdasilva, get_controller is a default call on any URL? In a context where recon.py is handling "hosts", does every call to a URL like this "http://127.0.0.1:6000/" first go to get_controller? | 16:56 |
mahatic | good morning, notmyname | 16:58 |
jmacs | mahatic: I think 6000 is always the object server | 16:58 |
jmacs | By default, it's configurable in /etc/swift/object-server.conf | 16:58 |
mahatic | jmacs, true, the config is done that way. But, i'm asking how does it know to call the appropriate controller, does it always call get_controller | 17:01 |
jmacs | That I don't know | 17:02 |
tdasilva | mahatic: I think I understood your question incorrectly. The get_controller is specifically in the proxy server. If a request is being set to the object-server, then there will be some other request handling function in the object server | 17:02 |
tdasilva | *being sent | 17:03 |
tdasilva | mahatic: Once I know which server the request is going to, I always start at the __call__ method of the server.py. So in the case of the object-server, I'd start here: https://github.com/openstack/swift/blob/master/swift/obj/server.py#L706 | 17:05 |
mahatic | tdasilva, actually, that also is not my question I guess :). So here in the config, /etc/swift/container-server/1.conf, [app:container-server] use = egg:swift#container | 17:06 |
mahatic | that sets the url to object/container/account server, correct? | 17:07 |
tdasilva | mahatic: sorry...i'm probably confusing you more than helping | 17:08 |
*** superanne has quit IRC | 17:08 | |
mahatic | tdasilva, nope. I'm aware how the code works (__call__ or other methods inside a server.py file). My question was how is the config set? I don't understand the words "app" and "egg" in the config or how they work. | 17:09 |
mahatic | tdasilva, this should help me I believe http://manpages.ubuntu.com/manpages/raring/man5/container-server.conf.5.html | 17:11 |
mahatic | tdasilva, thanks for the effort! :) | 17:11 |
tdasilva | mahatic: in the case of this line use = egg:swift#container specifically | 17:12 |
tdasilva | take a look at setup.cfg | 17:12 |
tdasilva | and take a look at paste deploy | 17:12 |
mahatic | yeah | 17:12 |
tdasilva | mahatic: sorry, i'm back, but yeah, take a look here: http://pythonpaste.org/deploy/ that should help a little bit.... | 17:17 |
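In short, `use = egg:swift#container` is a paste.deploy URI: it loads the `container` entry point that swift declares under `paste.app_factory` in its setup.cfg, which points at the container server's app factory. A sketch of the two pieces side by side (the exact factory path is from swift's source and worth verifying against the checked-out version):

```ini
# /etc/swift/container-server/1.conf (SAIO)
[app:container-server]
use = egg:swift#container

# swift's setup.cfg (excerpt): the entry point that URI resolves to
# [entry_points]
# paste.app_factory =
#     container = swift.container.server:app_factory
```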
tdasilva | mahatic: sorry for the confusion | 17:17 |
jrichli | I think she was asking how the port is assigned to a server in the config. The "bind_port" only has to be specified if you want to use something | 17:18 |
jrichli | besides the default. | 17:18 |
mahatic | tdasilva, sure. And you didn't confuse me at all. Thanks for the info! :) | 17:18 |
notmyname | jmacs: not true any more | 17:19 |
notmyname | jrichli: ^ | 17:19 |
notmyname | jmacs: mistype | 17:19 |
notmyname | jrichli: the bind port setting is required. it does not have a default value any more | 17:19 |
notmyname | that was the first step in changing the recommended ports to swift (which hasn't been done yet) | 17:19 |
mahatic | yeah, that's true. I noticed the patch | 17:19 |
jrichli | oh, ok. I just figured she mentioned the egg b/c it was the only thing there in her config | 17:20 |
mahatic | jrichli, I was not aware of the terms in config. But i'm now looking at links that help me understand. Thanks! | 17:20 |
*** rledisez has quit IRC | 17:22 | |
openstackgerrit | Prashanth Pai proposed openstack/swift: Make object creation more atomic in Linux https://review.openstack.org/162243 | 17:30 |
openstackgerrit | Alistair Coles proposed openstack/swift: Multiple fragment Archive Index Support https://review.openstack.org/159637 | 17:32 |
openstackgerrit | Alistair Coles proposed openstack/swift: DiskFile refactoring towards per-policy classes https://review.openstack.org/162249 | 17:32 |
acoles | ^^ peluse clayg i hope i didn't break anything | 17:32 |
*** thebloggu has joined #openstack-swift | 17:37 | |
*** jistr has quit IRC | 17:37 | |
acoles | peluse: i'm not convinced yield_hashes is right, left a TODO in there to come back to | 17:42 |
*** david-lyle_afk is now known as david-lyle | 17:43 | |
*** acoles is now known as acoles_away | 17:43 | |
*** jordanP has joined #openstack-swift | 17:43 | |
thebloggu | I have a swift test cluster with 1 Proxy Node and 5 Storage Nodes and 3 zones. Of those 5 Storage Nodes 4 went down and I was able to recover 3. They're online but the other one has a filesystem corruption and is currently down. this happened before and the cluster recovered (after a long time). Why does it take so long to recover (please note I only have a few KB stored) and can I speed it up? | 17:45 |
openstackgerrit | Prashanth Pai proposed openstack/swift: Make object creation more atomic in Linux https://review.openstack.org/162243 | 17:46 |
*** jordanP has quit IRC | 17:53 | |
*** zaitcev has joined #openstack-swift | 18:01 | |
*** ChanServ sets mode: +v zaitcev | 18:01 | |
*** superanne has joined #openstack-swift | 18:04 | |
*** Nadeem_ has quit IRC | 18:13 | |
*** hurricanerix_ has quit IRC | 18:15 | |
*** hurricanerix has joined #openstack-swift | 18:16 | |
*** morganfainberg is now known as needscoffeebadly | 18:19 | |
*** needscoffeebadly is now known as CaptainMorgan | 18:22 | |
mahatic | notmyname, I can't figure out a scenario where the rings could be different from the actual deployment. Won't the rings pick up from the devices and their config anyway? Could you help on that? | 18:23 |
notmyname | mahatic: policy 1 could be all spinning drives. policy 2 could be all flash. policy 3 could be all in asia. policy 4 all in europe | 18:25 |
mahatic | notmyname, right. But before that, I was thinking about validate servers scenario in the recon | 18:26 |
*** superanne has quit IRC | 18:27 | |
mahatic | The rings would have the info from the config (I'm still talking about policy 0, default) | 18:28 |
mahatic | When could it be wrong? | 18:29 |
*** geaaru has quit IRC | 18:29 | |
*** SkyRocknRoll has quit IRC | 18:34 | |
*** mmcardle has quit IRC | 18:39 | |
notmyname | I don't understand | 18:40 |
openstackgerrit | Clay Gerrard proposed openstack/swift: Fix EC download when data nodes is not divisible by two. https://review.openstack.org/162026 | 18:40 |
*** bill_az has quit IRC | 18:40 | |
clayg | torgomatic: so are you going to use the pyeclib get_segment_info to get the last fragment size when you're doing the ranged GET stuff? I only ask because calculating it by hand seems to be rife with danger -> https://review.openstack.org/162026 | 18:43 |
torgomatic | clayg: dunno, haven't gotten that far yet | 18:43 |
clayg | maybe you don't need the last fragment size... I mean if your ranged GET to the storage node is fragment_size aligned you can just start reading fragment payloads and getting out real segments - maybe shave a bit off the first segment if the client request isn't quite aligned... but if they want all the way to the end of the file you'll just hand pyeclib whatever < fragment_size comes out of the object server... | 18:47 |
*** rmcall has quit IRC | 18:58 | |
*** superanne has joined #openstack-swift | 19:04 | |
torgomatic | clayg: yeah, for bytes=M-N kind of requests, that's all I do. for bytes=-N suffix requests, I'll need some math to shave off any partial fragments before decode, but I haven't written that yet | 19:05 |
clayg | torgomatic: oh right... well I'd still suggest letting pyeclib do the math :P | 19:07 |
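The segment/fragment arithmetic being discussed can be sketched in plain Python. This is an illustration of the math only, not Swift's real code: actual on-disk fragments also carry per-fragment EC metadata, which is exactly why hand-rolling this is "rife with danger" and pyeclib's get_segment_info should do it in production.

```python
import math

def segment_info(data_len, segment_size, k):
    """Rough sketch of the values pyeclib's get_segment_info encapsulates.

    k is the number of data fragments per segment. Fragment sizes here
    ignore per-fragment EC metadata, so real values differ.
    """
    # how many segment_size chunks cover the object
    num_segments = int(math.ceil(data_len / float(segment_size)))
    # the final segment is usually short
    last_segment_size = data_len - (num_segments - 1) * segment_size
    # each segment is striped across k data fragments
    fragment_size = int(math.ceil(segment_size / float(k)))
    last_fragment_size = int(math.ceil(last_segment_size / float(k)))
    return {
        'num_segments': num_segments,
        'last_segment_size': last_segment_size,
        'fragment_size': fragment_size,
        'last_fragment_size': last_fragment_size,
    }
```

A suffix range request (bytes=-N) would use last_fragment_size to shave off any partial fragment before decode, which is the case torgomatic hadn't written yet.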
clayg | ok, back to reviewing tdasilva's patches | 19:14 |
clayg | tdasilva: I'm going to do the ec put/container headers first - then right into version middleware - but I feel like the open question you had about deletes to a versioned container where the version-location doesn't have write ACLs needs to be addressed before you can finish it? | 19:15 |
clayg | tdasilva: there was also the comment you had about being half-way done with the "remove the pre-flight HEAD request" | 19:16 |
clayg | tdasilva: I'm fine with *either* finishing that out or leaving it for future work - but not merging with the unused method (that's probably stating the obvious) | 19:16 |
clayg | tdasilva: but anyway - i'd like to see that extraction wrapped up on master so we can start dealing with it (and the *other* proxy cleanup) making their way onto feature/ec ASAP | 19:17 |
clayg | tdasilva: so... what can I do to help versioned-writes-middleware get landed :D | 19:19 |
tdasilva | clayg: yeah, I think we need to address the acl question. | 19:19 |
tdasilva | clayg: and I'm leaning towards putting the "remove the pre-flight HEAD request" as a separate patch | 19:19 |
tdasilva | clayg: would you be ok with that? | 19:19 |
clayg | tdasilva: that all sounds so wonderful I could just cry | 19:20 |
tdasilva | clayg: lol..if we can deal with the acl behavior, I can write the correct tests for it | 19:20 |
clayg | tdasilva: I think we already identified a use case for "versioned container to protect users from themselves" - so if we have to pick one or the other to "fix" - I'd say fix delete | 19:20 |
clayg | I mean - what's the behavior when it breaks anyway, it just tries to COPY and fails - the "real" object is not over-written by the last version | 19:21 |
tdasilva | clayg: yeah, it won't delete anything | 19:22 |
clayg | zaitcev: whoot whoot -> https://review.openstack.org/#/c/47713/18 is back! | 19:22 |
tdasilva | I think it actually breaks because it can't get a container listing | 19:22 |
zaitcev | clayg: it never went anywhere, but I am trying to get lpabon and tdasilva to review it, so I updated it to the latest. Unfortunately, it's one of those forever-WIP things. | 19:23 |
*** thebloggu has quit IRC | 19:24 | |
*** superanne has quit IRC | 19:24 | |
*** aix has quit IRC | 19:24 | |
clayg | tdasilva: oh, interesting - that makes sense | 19:24 |
lpabon | zaitcev: clayg tdasilva, yeah, I am going to spend some time on it after Vault conference | 19:25 |
tdasilva | clayg, zaitcev: yes, I need to start looking into how to hook that up to swiftonfile | 19:25 |
tdasilva | or maybe lpabon is :-) | 19:25 |
lpabon | is *also* you mean ;-) | 19:25 |
tdasilva | sure | 19:26 |
tdasilva | clayg: one more question on obj. versioning | 19:26 |
tdasilva | clayg: that unit test that you found was missing: test_cross_policy_DELETE_versioning. I was looking at that test and don't think it is applicable now | 19:28 |
tdasilva | clayg: I think a functional test would make more sense | 19:28 |
clayg | zaitcev: I need to think more about Backend-has-a-Broker, maybe it could get you what you want and still be a stepping stone to cleaning up our current dbbroker mess | 19:29 |
clayg | tdasilva: I'm cool with "this unit test was moved to functests" - i was just trying to audit that we didn't miss anything | 19:30 |
tdasilva | clayg: ok, thanks | 19:30 |
clayg | tdasilva: OTOH sometimes the unittests can give you quicker feedback when you're refactoring... so I never mind seeing them duplicated either | 19:30 |
zaitcev | if it's not skipped for in-process functests and actually tests what it needs to test, I'm fine with such move... not that anyone asks my opinion | 19:31 |
clayg | lol - zaitcev tell us how you REALLY feel | 19:31 |
zaitcev | like an imposter | 19:31 |
*** lcurtis has joined #openstack-swift | 19:33 | |
tdasilva | zaitcev: not sure if we have cross-policy tests in in-process functests :\ | 19:34 |
openstackgerrit | Thiago da Silva proposed openstack/swift: fixing small typos in associated projects doc https://review.openstack.org/162279 | 19:36 |
clayg | tdasilva: lol - i saw that the other day :P | 19:37 |
tdasilva | clayg: hehe...just saw it today | 19:37 |
tdasilva | clayg: btw: got a new version of the saio-ansible project going up in a bit | 19:38 |
tdasilva | organized the scripts a bit more | 19:38 |
clayg | notmyname: I'm on a tear -> https://review.openstack.org/#/c/162279/ | 19:38 |
clayg | tdasilva: you know honestly I haven't tried it yet - maybe sometime before Vancouver :\ | 19:38 |
notmyname | :-) | 19:39 |
clayg | notmyname: please to be making the wrist slapping as needed - I think one time I remember a bunch of core in the room being like "yeah if it's a simple doc fix just merge it and stop wasting my time" - but I can't remember if that was mostly *me* saying that? or if we were drinking at the time? | 19:39 |
notmyname | ya, that's totally fine and good IMO | 19:39 |
clayg | tdasilva: oh, the override headers were already merged | 19:40 |
tdasilva | yep | 19:40 |
clayg | mattoliverau: peluse: thanks! | 19:41 |
tdasilva | clayg: did you have a problem with it? I remember you mentioning something about GETs failing for you | 19:41 |
tdasilva | don't remember if it was related to that patch specifically or something else you found | 19:42 |
clayg | tdasilva: no it was the other thing - i had a couple of draft comments that I didn't post - but looking at them now I don't care about them | 19:42 |
*** CaptainMorgan is now known as morganfainberg | 19:50 | |
*** superanne has joined #openstack-swift | 19:50 | |
*** superanne has quit IRC | 19:51 | |
*** superanne has joined #openstack-swift | 19:52 | |
*** superanne has quit IRC | 19:56 | |
*** superanne has joined #openstack-swift | 19:57 | |
clayg | does https://review.openstack.org/#/c/162249/ need a rebase or something - i thought we fixed all the py26 issues on master and feature/ec | 19:58 |
clayg | functests on feature/ec -> FAILED (SKIP=12, errors=26, failures=13) | 19:58 |
zaitcev | Ouch. A class called "helper" and it's rather big | 20:07 |
clayg | it's *really* helpful! | 20:08 |
* clayg kids | 20:08 | |
*** nellysmitt has quit IRC | 20:15 | |
notmyname | mahatic: still around? (I know it's crazy late for you) | 20:16 |
notmyname | portante: what's the awesome bash prompt? must be great if you're talking about it publicly :-) | 20:20 |
zaitcev | wait, what | 20:21 |
zaitcev | Peter is alive? | 20:21 |
*** rdaly2 has quit IRC | 20:31 | |
*** rdaly2 has joined #openstack-swift | 20:35 | |
*** dmsimard has quit IRC | 20:36 | |
tdasilva | clayg: hi, i was looking at your patch 159739, and to fix the doc issue you just added that line back in index.rst. Do I also need to un-delete the overview_object_versioning file? I was looking to add a reference like this: :ref:`versioned_writes` but that didn't work | 20:39 |
portante | aaron griffis, a former co-worker has a really cool .bashrc.prompt file I have been using for years, and I just copied it over to my mac and it worked flawlessly | 20:41 |
portante | notmyname: hopefully, he'll comment on g+ so that I can reference where he keeps the source | 20:41 |
clayg | peluse: acoles_away: I keep trying to review the suffixes hashes changes and it's killing me - I *really* need to pitch in and help clean that up - but we've already got three cooks in that kitchen and at least TWO refactorings we're thinking about | 20:41 |
* portante I'm not dead (you're only foolin' yourself) | 20:42 | |
clayg | peluse: acoles_away: maybe I can *just* work on the suffix classes part over acoles' change - I know it'll mess up everyone's dependent changes - but I think I've got some good ideas for the suffix classes that will pay off in the end | 20:42 |
* portante I feel happy, I feel happy ... | 20:43 | |
clayg | once we're all happy with 159637 we can merge it - peluse can go back to the reconstructor, acoles can keep chipping away at per policy diskfiles, and I'll... I'll probably just find something new to bitch about :\ | 20:43 |
clayg | what's the easiest way to turn patch 159739 into a link? I always end up just changing the url of some other patch I'm done looking at but didn't close the tab for :\ | 20:44 |
clayg | tdasilva: oh shit, maybe something got messed up there - i needed to say something about including it inline or somesuch :\ | 20:45 |
notmyname | clayg: that's why I used to have the patch bot | 20:45 |
clayg | but yeah - i don't see it in the diff | 20:46 |
clayg | patch bot i miss you! | 20:46 |
clayg | tdasilva: oh I remember - i just copied the autodoc stanza that you added to middlewares over the content of the page referenced in the source/index | 20:47 |
*** patchbot has joined #openstack-swift | 20:48 | |
notmyname | patchbot: hi!! | 20:48 |
tdasilva | lol | 20:49 |
notmyname | patchbot: p 159637 | 20:49 |
patchbot | notmyname: https://review.openstack.org/#/c/159637/ | 20:49 |
notmyname | clayg: ^^ | 20:49 |
tdasilva | nice | 20:49 |
notmyname | it was actually portante who made me write that | 20:50 |
notmyname | he always talked about patch numbers and I had no way to linkify them :-) | 20:50 |
clayg | that portante | 20:51 |
* portante is a pain in the arse | 20:51 | |
lcurtis | hello all...is there a way to see the queue to figure out what is causing so many 'container-updater: ERROR account update failed' errors? | 21:06 |
clayg | lcurtis: does it say the path to the db file or the name of the container or anything? | 21:13 |
clayg | i mean it could just be one of the account servers is offline | 21:13 |
lcurtis | both are online | 21:18 |
lcurtis | getting connection timeout | 21:18 |
lcurtis | 18000 connections at the moment.and listening on port 6001 | 21:18 |
lcurtis | but trying to figure out why the timeout | 21:18 |
clayg | poke at it with curl - backend api is /<device>/<part>/<account> - you should be able to grab any ol' request from the log | 21:19 |
clayg | i guess I'd try to correlate whether it's just some subset of dbs that are doing the timeouts - or some subset of nodes | 21:20 |
clayg | if the account server seems to be doing other work ok... zoom in on a single request that failed (that's why i was asking if the container error log line said the db file or name of the container/account) | 21:20 |
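The "grab a request from the log and replay it with curl" idea can be sketched like this. The log line below is illustrative (the exact wording of Swift's account-update error varies by version), and the partition/account values (123 / AUTH_test) are placeholders you'd take from your own logs or the ring; the backend account API path shape clayg mentions is /<device>/<part>/<account>.

```python
import re

# Hypothetical container-updater error line, roughly the shape Swift logs:
line = ('object-server ERROR account update failed with '
        '10.0.0.5:6002/sdb1 (will retry later): Timeout (3.0s)')

# Pull out the ip, port, and device of the failing account server.
m = re.search(r'failed with (\S+?):(\d+)/(\S+)', line)
ip, port, device = m.groups()

# Build a curl command you can paste into a shell to see whether that one
# account server times out on a direct backend request.
print('curl -v -X HEAD http://%s:%s/%s/123/AUTH_test' % (ip, port, device))
```

If the direct request also times out, the problem is on that account server (or that db); if it answers quickly, look instead at the updater side or the network path.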
clayg | lcurtis: might do a launchpad search, there's a few open bugs with container updates that pop up now and again - maybe we managed to ignore one and need to fix it. | 21:23 |
openstackgerrit | Ricardo Ferreira proposed openstack/swift: Removing the ".data" makes it check *any* file for metadata, so it works with .meta and .ts filetypes. https://review.openstack.org/162306 | 21:25 |
clayg | rsFF: ^ lol! | 21:25 |
rsFF | :( | 21:29 |
clayg | or sorry - no no no | 21:30 |
clayg | I meant like i just thought it was funny that was all it took - NICE WORK - i like the easy ones :P | 21:30 |
rsFF | haaa ok, i thought i had made a bad branching or something | 21:31 |
clayg | where's fifield when you need him :\ | 21:31 |
clayg | yes, i should have realized you could have misunderstood me - it was insensitive - my apologies | 21:32 |
rsFF | :) | 21:32 |
clayg | :) | 21:32 |
rsFF | I am planning on taking this one: https://bugs.launchpad.net/swift/+bug/1428866 | 21:34 |
openstack | Launchpad bug 1428866 in OpenStack Object Storage (swift) "swift-object-info display for sysmeta" [Wishlist,New] | 21:34 |
rsFF | any objection/tips? | 21:34 |
notmyname | did we say anything about canceling next week's team meeting? | 21:36 |
*** mahatic has quit IRC | 21:39 | |
*** superanne has quit IRC | 21:44 | |
lcurtis | clayg delayed thank you | 21:46 |
*** rdaly2 has quit IRC | 21:46 | |
clayg | lcurtis: i did what now? | 21:46 |
lcurtis | about the container update error advice | 21:46 |
clayg | well... but... i mean - what was the problem :) | 21:47 |
lcurtis | im not sure...but gives me a place to start | 21:47 |
lcurtis | im not sure how i would get a request from the log with curl though | 21:48 |
clayg | no i mean peek in the log for an example of the path for a request - then try to make that request with curl - see if it times out | 21:50 |
*** torgomatic has quit IRC | 21:50 | |
lcurtis | ah | 21:50 |
lcurtis | gotcha | 21:50 |
lcurtis | got it | 21:50 |
*** torgomatic has joined #openstack-swift | 21:52 | |
*** ChanServ sets mode: +v torgomatic | 21:52 | |
*** superanne has joined #openstack-swift | 21:52 | |
lcurtis | i have asked before but any ideas on current largest swift implementations worldwide? | 21:56 |
*** panbalag has quit IRC | 21:58 | |
clayg | lcurtis: single cluster? object # or petabytes? | 21:59 |
lcurtis | all of the above ;) | 21:59 |
lcurtis | literally | 22:00 |
clayg | i'm sure rax gives most a good run for their money - i'd bet they have the largest total "bytes managed by swift" with clusters in texas, chicago, london, australia (and others?) | 22:01 |
lcurtis | interesting | 22:03 |
lcurtis | would make sense tho | 22:03 |
clayg | they've been at it the longest! that data is sticky and it takes a while to build up | 22:09 |
*** lpabon has quit IRC | 22:09 | |
lcurtis | ugh...now seeing ERROR rsync failed with 10 on replication | 22:13 |
lcurtis | node is up, rsync rsync://ip works | 22:14 |
clayg | hrmm.. wonder what error code 10 is | 22:15 |
lcurtis | socket io ? | 22:16 |
lcurtis | maybe something akin to connection refused | 22:17 |
lcurtis | but things are syncing | 22:17 |
clayg | hrmmm... max connection limit? | 22:20 |
lcurtis | yes potentially | 22:21 |
lcurtis | would that be on OS side or swift side | 22:21 |
lcurtis | rsyncd.conf | 22:22 |
lcurtis | ? | 22:22 |
lcurtis | sounds like it | 22:23 |
clayg | yeah rsyncd.conf i think | 22:23 |
lcurtis | great call | 22:23 |
lcurtis | ill bet that is it | 22:23 |
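For reference, rsync(1) documents exit code 10 as "Error in socket I/O", which is how a refused or dropped daemon connection tends to surface. The connection cap lives in rsyncd.conf; a sketch in the shape commonly used for Swift (module names and paths here follow the usual SAIO-style layout, adjust to your cluster):

```ini
# /etc/rsyncd.conf (illustrative)
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid

[object]
path = /srv/node
read only = false
lock file = /var/lock/object.lock
# raise this if replication is hitting the daemon's connection cap
max connections = 8
```

After editing, restart or HUP the rsync daemon and watch /var/log/rsyncd.log for "max connections reached" messages to confirm whether the limit was the culprit.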
*** chlong has quit IRC | 22:24 | |
*** jrichli has quit IRC | 22:29 | |
*** aarondelp has quit IRC | 22:42 | |
clayg | peluse: i'm mulling over if growing a X-Backend-Node-Index for the object server and diskfile interface (node_index) is more palatable than DiskFile growing a frag_index | 22:52 |
clayg | peluse: I think that X-Object-Sysmeta-EC-Frag-Index is a good thing, but X-Backend-Node-Index I think makes more sense for the DiskFile interface | 22:52 |
clayg | peluse: basically the way it's coded is to translate X-Backend-Node-Index to the DiskFile kwarg frag_index right there in the ObjectController methods - I think going one level down inside DiskFile would be better... | 22:54 |
clayg | I'll keep mulling | 22:54 |
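The two placements clayg is weighing can be sketched abstractly. Both shapes below are hypothetical illustrations, not Swift's real ObjectController or DiskFile code:

```python
# Option A: the controller translates the backend header into a DiskFile
# kwarg, so DiskFile never sees HTTP headers at all (the way it's coded).
def get_diskfile_via_controller(headers, diskfile_cls):
    frag_index = headers.get('X-Backend-Node-Index')
    return diskfile_cls(frag_index=frag_index)

# Option B: push the translation one level down - DiskFile itself knows
# which backend header carries its index.
class FakeDiskFile:
    def __init__(self, frag_index=None, headers=None):
        if frag_index is None and headers:
            frag_index = headers.get('X-Backend-Node-Index')
        self.frag_index = frag_index
```

Option A keeps DiskFile transport-agnostic; Option B avoids every ObjectController method repeating the same header-to-kwarg plumbing, which is the "going one level down would be better" argument.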
*** superanne has quit IRC | 22:59 | |
torgomatic | so this looks interesting for Swift: https://lwn.net/Articles/612483/ (not sure where I saw it; sorry if it was in here already) | 23:00 |
torgomatic | the threadpool reader dealie in the object server could first try a readv2 in the main thread, then if it got nothing, kick a blocking read out into a worker thread | 23:00 |
torgomatic | so if the kernel is doing some readahead stuff, then that could save us a whole ton of overhead when the data is in buffer cache, but if it's not there, *then* we kick over to the big expensive reader threadpool | 23:03 |
torgomatic | or we could rewrite the whole thing in Erlang. either one. /s | 23:04 |
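The two-tier read torgomatic describes (cheap fast path in the main thread, expensive threadpool only on a miss) can be sketched with stand-ins. Here an in-memory dict plays the buffer cache and a real file read plays the blocking path; in the object server the fast path would be a readv2()-style non-blocking syscall, not a dict lookup.

```python
import concurrent.futures

page_cache = {}  # stand-in for data already in the kernel's buffer cache
pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def read_blocking(path, offset, length):
    # the slow path: a plain blocking read
    with open(path, 'rb') as f:
        f.seek(offset)
        return f.read(length)

def read_chunk(path, offset, length):
    key = (path, offset, length)
    # fast path: serve from "cache" in the calling thread, no handoff cost
    if key in page_cache:
        return page_cache[key]
    # slow path: pay the threadpool overhead only when the data isn't cached
    data = pool.submit(read_blocking, path, offset, length).result()
    page_cache[key] = data
    return data
```

The payoff is the same as in the readv2 proposal: when readahead has already populated the cache, the request never touches the worker pool at all.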
openstackgerrit | Samuel Merritt proposed openstack/swift: Small optimization to ring builder. https://review.openstack.org/162335 | 23:07 |
clayg | hell yeah make that thing go fast | 23:09 |
*** sandywalsh has quit IRC | 23:12 | |
*** sandywalsh has joined #openstack-swift | 23:13 | |
*** sneezewort has joined #openstack-swift | 23:18 | |
sneezewort | Hello all. It appears only admins can add stuff to swift on my install. Is this normal, or am I doing something wrong? | 23:19 |
clayg | what's an "admin" | 23:25 |
clayg | torgomatic: golang first | 23:26 |
clayg | torgomatic: speaking of doing object servers in other languages the sorta off the rails expect 100 continue stuff is going to suck :'( | 23:27 |
clayg | torgomatic: I still sorta feel like a pipelined POST would have been the way to go | 23:27 |
torgomatic | clayg: don't think I haven't been tempted | 23:27 |
torgomatic | clayg: yeah, depends on how good the http libs are in $otherlang | 23:27 |
clayg | torgomatic: didn't you somehow convince me that a PUT/POST on the same connection wouldn't work for Encryption? | 23:27 |
torgomatic | clayg: yeah, there's a race condition in there | 23:28 |
clayg | torgomatic: I'd guess they'd be doing good to send anything that looks like a 100: Continue (we had to add it to eventlet in the first place) and *no* one would support this crazy "call a method on the file like readable that is the input from the web server to send extra headers" *crazy*ness | 23:28 |
clayg | torgomatic: for Encryption or EC - I'm quite sure a POST that said write a durable for this timestamp isn't racy - and certainly no worse than what we've got - but there was something about encryption... you made me feel like the trailing metadata was the way to go - oh maybe that was it - not the expect 100, just the trailers | 23:30 |
torgomatic | clayg: true; depends on how low-level your http libs are. if you've got a send_response_headers(socket, status, headers) function or something, you can pretty easily (ab)use that to get 100-continue responses going | 23:30 |
clayg | so MIME encoding good, waiting on the request to "finish" by looking for a second set of information headers - crazy town | 23:30 |
* torgomatic can't type for carp today | 23:31 | |
clayg | torgomatic: fair enough - maybe it's wsgi that's the craziness | 23:31 |
torgomatic | ruby's rack isn't much better for this | 23:31 |
torgomatic | clojure, perhaps? I wonder how ring handles things like this | 23:31 |
clayg | torgomatic is a total clojure hipster | 23:32 |
torgomatic | heh, if I'd written anything serious in clojure I'd probably know the answer to that question already | 23:32 |
clayg | redbo: dfg: glange: I want to add a policy param to hash_cleanup_listdir and it looks silly after the reclaim_age kwarg, but like all of the places we call that function with only positional arguments... | 23:43 |
clayg | redbo: dfg: glange: we have a bunch of node/disk/health checky type code but we never call hash_cleanup_listdir - you got any scripts that ever call it? Maybe something for looking for rotten tombstones? | 23:44 |
*** dmsimard has joined #openstack-swift | 23:45 | |
clayg | i guess I can just leave it hash_cleanup_listdir(hsh_path, reclaim_age=X, policy=None) and either default to the 0-policy or raise a TypeError - but I figure if a script says hash_cleanup_listdir(path, 2_DAY) they'll be somewhat disappointed with hash_cleanup_listdir(path, policy, reclaim_age=X) | 23:47 |
clayg | i guess the int would attribute error pretty quickly... maybe it's fine | 23:47 |
*** Fin1te has joined #openstack-swift | 23:56 | |
sneezewort | I figured it out. I forgot to create the SwiftOperator role. | 23:56 |
clayg | ^ notmyname weren't you trying to tell me that the keystone default setups were like that? | 23:57 |
clayg | notmyname: and I was like - nah that'd be stupid | 23:57 |
clayg | notmyname: maybe some docs somewhere are lacking | 23:57 |
clayg | sneezewort: were you following any specific instructions? Maybe the SwiftOperator role should be in bold red or something? | 23:57 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!