*** dmorita has joined #openstack-swift | 00:30 | |
*** oomichi has joined #openstack-swift | 00:32 | |
*** occupant has quit IRC | 00:44 | |
*** occupant has joined #openstack-swift | 00:45 | |
*** bill_az_ has quit IRC | 00:55 | |
*** addnull has joined #openstack-swift | 01:03 | |
*** nellysmitt has joined #openstack-swift | 01:08 | |
*** nellysmitt has quit IRC | 01:12 | |
*** nexusz99 has joined #openstack-swift | 01:17 | |
*** madhuri has joined #openstack-swift | 01:42 | |
madhuri | mattoliverau: Hi Matthew, can you please review it https://review.openstack.org/#/c/91753/ ? | 01:44 |
mattoliverau | madhuri: sure thing :) | 01:45 |
madhuri | mattoliverau: Thank you :) | 01:53 |
*** haomaiwang has joined #openstack-swift | 02:08 | |
openstackgerrit | Daisuke Morita proposed openstack/swift: Execute object daemons' tasks according to dir information https://review.openstack.org/141252 | 02:24 |
*** nellysmitt has joined #openstack-swift | 03:08 | |
*** nellysmitt has quit IRC | 03:13 | |
*** addnull has quit IRC | 03:14 | |
*** tellesnobrega has joined #openstack-swift | 04:05 | |
egon | Is there anyone maybe interested in a swiftsync feature add? https://review.openstack.org/#/c/144701/ | 04:18 |
*** addnull has joined #openstack-swift | 04:25 | |
*** addnull has quit IRC | 04:59 | |
*** nellysmitt has joined #openstack-swift | 05:09 | |
*** nellysmitt has quit IRC | 05:14 | |
*** nshaikh has joined #openstack-swift | 05:20 | |
*** ppai has joined #openstack-swift | 05:26 | |
openstackgerrit | Matthew Oliver proposed openstack/swift-specs: Container sharding spec https://review.openstack.org/139921 | 05:42 |
*** oomichi has quit IRC | 05:47 | |
openstackgerrit | Matthew Oliver proposed openstack/swift-specs: Container sharding spec https://review.openstack.org/139921 | 05:49 |
notmyname | mattoliverau: thanks! | 06:15 |
notmyname | also, cool ascii art diagram. | 06:15 |
mattoliverau | notmyname: lol, it's the only way to go :P | 06:15 |
notmyname | you do it by hand or with a tool? | 06:15 |
mattoliverau | notmyname: also, I may need to re-read it tomorrow to make sure it makes sense.. it's a bit brain dumpy :P | 06:16 |
notmyname | heh. me too. I thought it looked ok, for my own late-night look at it | 06:16 |
mattoliverau | notmyname: http://asciiflow.com/ | 06:16 |
notmyname | nice | 06:17 |
mattoliverau | notmyname: just wanted to get something down early in the week and keep it updated leading to mid-cycle in case it comes up in discussion :) | 06:17 |
notmyname | I'm sure it will :-) | 06:17 |
notmyname | thanks again | 06:17 |
notmyname | and speaking of late night here.... | 06:17 |
mattoliverau | notmyname: yeah you should be sleeping! I'm about to call it a day myself! | 06:18 |
notmyname | I'm out. talk to you (my) tomorrow | 06:18 |
mattoliverau | night | 06:18 |
*** addnull has joined #openstack-swift | 06:23 | |
*** SkyRocknRoll has joined #openstack-swift | 06:26 | |
*** tellesnobrega_ has joined #openstack-swift | 06:30 | |
*** jyoti-ranjan has joined #openstack-swift | 06:31 | |
*** tellesnobrega has quit IRC | 06:32 | |
*** silor has joined #openstack-swift | 06:40 | |
*** nellysmitt has joined #openstack-swift | 07:10 | |
*** nellysmitt has quit IRC | 07:15 | |
*** nexusz99_ has joined #openstack-swift | 07:25 | |
*** nexusz9__ has joined #openstack-swift | 07:27 | |
*** nexusz99 has quit IRC | 07:28 | |
*** nexusz99_ has quit IRC | 07:30 | |
openstackgerrit | Madhuri Kumari proposed openstack/swift: Check for existence of swift servers binaries. https://review.openstack.org/91753 | 07:38 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/swift: Merge "Drop redundant check in SLO segment-size validation" https://review.openstack.org/152011 | 07:44 |
*** madhuri has quit IRC | 07:58 | |
*** jyoti-ranjan has quit IRC | 07:59 | |
*** chlong has quit IRC | 08:00 | |
*** mkerrin has quit IRC | 08:00 | |
*** MooingLemur has quit IRC | 08:08 | |
*** rledisez has joined #openstack-swift | 08:09 | |
openstackgerrit | Merged openstack/swift: Make more memcache options configurable https://review.openstack.org/146011 | 08:14 |
*** addnull has quit IRC | 08:21 | |
*** geaaru has joined #openstack-swift | 08:26 | |
openstackgerrit | Merged openstack/python-swiftclient: Fix cross account upload using --os-storage-url https://review.openstack.org/125759 | 08:30 |
*** nellysmitt has joined #openstack-swift | 08:36 | |
*** tdasilva has joined #openstack-swift | 08:47 | |
*** joeljwright has joined #openstack-swift | 08:48 | |
*** jistr has joined #openstack-swift | 08:58 | |
*** jyoti-ranjan has joined #openstack-swift | 09:02 | |
*** jordanP has joined #openstack-swift | 09:12 | |
*** ho has joined #openstack-swift | 09:19 | |
*** mkerrin has joined #openstack-swift | 09:54 | |
*** jordanP has quit IRC | 10:28 | |
*** jordanP has joined #openstack-swift | 10:40 | |
*** tellesnobrega_ has quit IRC | 10:42 | |
*** addnull has joined #openstack-swift | 10:43 | |
*** SkyRocknRoll has quit IRC | 11:00 | |
*** SkyRocknRoll has joined #openstack-swift | 11:07 | |
*** haomaiwang has quit IRC | 11:07 | |
*** jyoti-ranjan has quit IRC | 11:22 | |
*** ho has quit IRC | 11:27 | |
*** NM has joined #openstack-swift | 11:33 | |
*** tellesnobrega has joined #openstack-swift | 11:34 | |
*** erlon has joined #openstack-swift | 11:36 | |
*** nexusz9__ has quit IRC | 12:14 | |
*** miqui_away has quit IRC | 12:21 | |
*** dmorita has quit IRC | 12:25 | |
*** aix has joined #openstack-swift | 12:31 | |
*** addnull has quit IRC | 12:33 | |
*** nellysmitt has quit IRC | 12:38 | |
*** nellysmi_ has joined #openstack-swift | 12:38 | |
*** nellysmi_ has quit IRC | 12:39 | |
*** delattec has joined #openstack-swift | 12:40 | |
*** cdelatte has quit IRC | 12:42 | |
*** nellysmitt has joined #openstack-swift | 12:58 | |
openstackgerrit | Takashi Kajinami proposed openstack/swift: Prevent redundant commenting by drive-audit https://review.openstack.org/149317 | 13:01 |
*** nshaikh has quit IRC | 13:13 | |
*** panbalag has joined #openstack-swift | 13:17 | |
*** ppai has quit IRC | 13:21 | |
*** joeljwright has quit IRC | 13:24 | |
*** joeljwright has joined #openstack-swift | 13:28 | |
*** bill_az_ has joined #openstack-swift | 13:28 | |
*** lpabon has joined #openstack-swift | 13:30 | |
*** pdion891 has joined #openstack-swift | 13:55 | |
*** jkugel has joined #openstack-swift | 13:56 | |
*** mahatic has joined #openstack-swift | 14:15 | |
*** acoles_away is now known as acoles | 14:23 | |
*** zigo has quit IRC | 14:31 | |
*** zigo has joined #openstack-swift | 14:35 | |
*** jasondot_ has joined #openstack-swift | 14:45 | |
*** openstackgerrit has quit IRC | 14:52 | |
*** openstackgerrit has joined #openstack-swift | 14:52 | |
*** jwalcik has joined #openstack-swift | 14:55 | |
*** SkyRocknRoll has quit IRC | 15:05 | |
*** rdaly2 has joined #openstack-swift | 15:29 | |
ekarlso | can I configure swift to just use a flat directory on the local host? | 15:36 |
ekarlso | like a single server install kinda | 15:36 |
*** jyoti-ranjan has joined #openstack-swift | 15:36 | |
tdasilva | ekarlso: do you mean something like a saio deployment? | 15:39 |
*** annegent_ has joined #openstack-swift | 15:51 | |
*** jrichli has joined #openstack-swift | 15:54 | |
ekarlso | tdasilva: kinda but just a backend to store stuff in a directory or so | 16:03 |
*** abhirc has joined #openstack-swift | 16:16 | |
ekarlso | tdasilva: is that possible ? | 16:17 |
*** lpabon has quit IRC | 16:18 | |
*** david-lyle_afk is now known as david-lyle | 16:19 | |
tdasilva | ekarlso: sorry, it is not exactly clear to me what you are trying to do, can you describe more? My first thought was a SAIO deployment? you also mentioned "like a single server", do you mean you just want one replica? | 16:22 |
ekarlso | tdasilva: as in I want to have a swift api just with one node ;p | 16:23 |
ekarlso | I just want a swift api that puts everything on one disk | 16:23 |
eikke | tdasilva: for your no-http review => would you be interested if we spend some time on doing some changes/cleanups, then publish the branch so you can cherry-pick whatever you like? (mostly things I mentioned on Gerrit) | 16:26 |
tdasilva | ekarlso: right, so you could set up your object rings with one device and just run all your swift servers on one node. but I assume you know this is not "safe" as in your data is not being correctly replicated | 16:29 |
tdasilva | eikke: hey, yeah...I've been looking for a chance to upload a new patchset and am hoping I can do that this week | 16:29 |
tdasilva | eikke: are you going to the hackathon next week? | 16:30 |
rledisez | tdasilva: hello | 16:31 |
rledisez | tdasilva: you asked me to enable reuseport in SAIO | 16:31 |
rledisez | tdasilva: i'm not sure it's a good idea because the official doc says to use ubuntu 12.04, but ubuntu 12.04 comes with a kernel 3.2 (not compatible with reuseport) | 16:31 |
rledisez | tdasilva: and reuseport is only for linux (>=3.9), so it would make saio linux only | 16:32 |
rledisez | tdasilva: what do you think? | 16:32 |
tdasilva | rledisez: you could have it "reuseport = no" by default, just like you did in etc | 16:33 |
rledisez | tdasilva: oh, ok, i misunderstood you :) | 16:33 |
tdasilva | rledisez: it's just that to test your patch, I had to go and change all the configs there | 16:33 |
notmyname | good morning | 16:38 |
notmyname | tdasilva: ekarlso: an easy way to set up a one-server, one-replica "cluster" (ie effectively just an API test endpoint) is to use the vagrant-swift-all-in-one and set the config to be one replica and one drive | 16:39 |
notmyname | https://github.com/swiftstack/vagrant-swift-all-in-one | 16:39 |
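For reference, the same one-node, one-replica setup can also be built by hand; a minimal sketch using swift-ring-builder, where the device name, ports, and weight are assumptions from a typical SAIO-style layout:

```shell
# one-replica object ring pointing at a single local device (placeholders throughout)
swift-ring-builder object.builder create 10 1 1            # part_power=10, replicas=1, min_part_hours=1
swift-ring-builder object.builder add r1z1-127.0.0.1:6000/sdb1 100
swift-ring-builder object.builder rebalance                # writes object.ring.gz
# repeat for container.builder (port 6001) and account.builder (port 6002)
```

With one replica on one device there is no redundancy, so this is only suitable as an API test endpoint, as noted above.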
tdasilva | rledisez: I wonder if it would make sense to go one step further and provide instructions in the SAIO doc on how to enable all the configs with a simple sed script, that way your tests would "always" run on people's dev. environments | 16:41 |
tdasilva | notmyname: perfect | 16:41 |
*** annegent_ has quit IRC | 16:46 | |
*** annegent_ has joined #openstack-swift | 16:49 | |
*** gyee has joined #openstack-swift | 16:50 | |
*** MooingLemur has joined #openstack-swift | 16:50 | |
*** jkugel has left #openstack-swift | 16:55 | |
*** tdasilva has quit IRC | 16:59 | |
*** abhirc_ has joined #openstack-swift | 17:01 | |
*** abhirc has quit IRC | 17:03 | |
*** annegent_ has quit IRC | 17:04 | |
notmyname | dfg: thanks for the research on https://review.openstack.org/#/c/150149/ | 17:09 |
dfg | notmyname: ya np. you don't know why anybody would ever want a valid content-type on 304s, do you? | 17:12 |
dfg | i'm hoping those tests were just there for completeness's (is that a word?) sake | 17:13 |
notmyname | dfg: makes sense - 304 means "you already have this", so not sending the headers seems reasonable. seems you looked at the rfc and it said to leave them out? | 17:15 |
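For illustration, a hypothetical conditional GET against swift that should produce a 304; the token, storage URL, and object path are placeholders:

```shell
# ask for the object only if it changed; $ETAG is the md5 returned by a previous HEAD
curl -i -H "X-Auth-Token: $TOKEN" -H "If-None-Match: $ETAG" \
     "$STORAGE_URL/images/snapshot.raw"
# expected: HTTP/1.1 304 Not Modified with no body (and, per the change under review, no Content-Type)
```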
*** imkarrer has joined #openstack-swift | 17:15 | |
*** panbalag has quit IRC | 17:17 | |
imkarrer | Good almost noon to you guys! I have a question about rate limiting. Is this a cluster-wide parameter, or is it enforced per proxy? I have set a rate limit of 0.1 at the account level on a deployment with two proxy servers. I expect 50 requests for new containers to take 8.33 minutes. The requests take exactly half that time, which happens to be the number of proxies I am running. I am thinking I have a problem in my me | 17:18 |
*** EmilienM is now known as EmilienM|afk | 17:18 | |
notmyname | imkarrer: rate limiting is set up globally for the cluster. that is, while each proxy does enforce it individually, they share the same ratelimiting state info in memcache. so all the configs should be the same or you're going to get some really hard-to-debug behavior | 17:20 |
imkarrer | notmyname: I should have been more specific. The configs are the same, but I have haproxy round-robining the requests between proxies. The load is split between the proxies, which is part of why I am seeing this discrepancy. | 17:22 |
notmyname | imkarrer: if the configs are the same, and if the memcache configs reference all of the proxy servers (ie every proxy server config references the same pool of memcache servers), then the ratelimiting should be constant for any request | 17:23 |
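A sketch of the relevant proxy-server.conf pieces that have to match on every proxy; the 0.1 value comes from the discussion, the hostnames are placeholders:

```ini
# proxy-server.conf -- identical on both proxies
[filter:cache]
use = egg:swift#memcache
memcache_servers = proxy1:11211,proxy2:11211    # the same full list on every proxy

[filter:ratelimit]
use = egg:swift#ratelimit
account_ratelimit = 0.1
```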
notmyname | imkarrer: but note that ratelimiting is designed to slow down the request rate (not throughput) on a per container basis based on how many objects are in the container | 17:24 |
notmyname | it's not designed to do eg "make sure each request takes at least 200 ms" | 17:24 |
imkarrer | notmyname: I see. I am using account rate limiting. Setting the account rate limit to 0.1 should mean that I am limiting updates to the account db to 6 per minute. I am seeing 12 per minute; each proxy seems to only know about its own rate. | 17:27 |
notmyname | dfg: any chance you can help here? | 17:32 |
*** jyoti-ranjan has quit IRC | 17:37 | |
*** jistr has quit IRC | 17:39 | |
notmyname | swift 2.2.2 release email has been sent to the -dev and -announce mailing lists | 17:41 |
*** jordanP has quit IRC | 17:46 | |
*** rledisez has quit IRC | 17:48 | |
*** wer_ has quit IRC | 17:51 | |
*** wer_ has joined #openstack-swift | 17:52 | |
dfg | notmyname: about 304- ya the spec says to not include them | 17:52 |
*** zaitcev has joined #openstack-swift | 17:52 | |
*** ChanServ sets mode: +v zaitcev | 17:52 | |
notmyname | dfg: ok. thanks | 17:53 |
dfg | about the ratelimiting thing - is the memcache set up properly? if it's not set, each proxy would use only itself. if they don't share then the doubling makes sense | 17:56 |
dfg | notmyname: imkarrer ^^ | 17:56 |
*** EmilienM|afk is now known as EmilienM | 18:00 | |
*** annegent_ has joined #openstack-swift | 18:04 | |
*** jasondot_ has quit IRC | 18:20 | |
notmyname | I got this email today from someone using swift: | 18:23 |
notmyname | "I am very thankful this technology exists as it's enabling people like me to easily have a ridiculously scalable distribution method." | 18:23 |
notmyname | dfg: glange: peluse_: acoles: cschwede: torgomatic: swifterdarrell: clayg_: ^^ | 18:24 |
notmyname | the person who wrote that is storing and distributing game content with Swift | 18:24 |
imkarrer | dfg: notmyname: thanks for confirming. I have my memcache servers listed in proxy-server.conf, I can telnet and get the stats of my memcached servers. Where else can I check for configuration errors aside from proxy-server.conf? | 18:24 |
*** geaaru has quit IRC | 18:25 | |
acoles | notmyname: cool, my credibility with my kids will go up enormously if i can say my work is connected to the gaming industry :) | 18:25 |
notmyname | acoles: :-) | 18:26 |
notmyname | there's quite a few game companies using Swift | 18:26 |
acoles | notmyname: totally off-topic, but i was reading that sfo had zero rainfall in january - wow! | 18:28 |
notmyname | ya. it's pretty bad. december was very wet. but the entire year last year was pretty bad. | 18:28 |
acoles | yea. where 'bad' in rainfall context has opposite sense for you than us | 18:29 |
notmyname | it's supposed to rain this coming weekend. | 18:31 |
dfg | imkarrer: thats the only place for this. can you make a paste of the proxy-server.conf? | 18:34 |
acoles | notmyname: ok, so i should pack my raincoat (like brits always do) | 18:35 |
imkarrer | dfg: making a gist | 18:41 |
imkarrer | dfg: https://gist.github.com/imkarrer/253cf81066a7b72a5361 | 18:42 |
*** clayg_ is now known as clayg | 18:53 | |
*** acoles is now known as acoles_away | 18:53 | |
imkarrer | dfg: If that's the only place for this, the only other possible thing I can think of is that the necessary ports are not open, but they are. My memcached servers can contact each other. | 18:56 |
imkarrer | dfg: notmyname: I restarted my memcached servers and that seems to have fixed the problem. The set of requests I am making are now taking the expected amount of time. | 18:59 |
*** aix has quit IRC | 18:59 | |
imkarrer | dfg: notmyname: Thanks for your help today! I really appreciate it. | 19:00 |
*** reed has joined #openstack-swift | 19:01 | |
zaitcev | weird story | 19:01 |
zaitcev | I suspect something stuck in the connection caching in our client, but I haven't heard of such things happening before. | 19:02 |
*** jasondot_ has joined #openstack-swift | 19:02 | |
dfg | imkarrer: oh cool. glad it worked | 19:04 |
imkarrer | If anyone is curious about how to test rate limiting: expected time = (1/rate) * #requests | 19:06 |
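Plugging in the numbers from the test above (rate 0.1, 50 container creations) as a quick sanity check:

```python
# expected wall-clock time for rate-limited requests: (1 / rate) * number_of_requests
rate = 0.1       # account_ratelimit from proxy-server.conf
requests = 50    # container PUTs in the test
seconds = (1.0 / rate) * requests
print(seconds / 60.0)   # 8.33... minutes, the time imkarrer expected
```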
*** echevemaster has joined #openstack-swift | 19:11 | |
*** mahatic has quit IRC | 19:11 | |
*** mahatic has joined #openstack-swift | 19:16 | |
mahatic | notmyname, hello! what you said is right - with any modification to the code, the test would still pass. Should I compare the message of the NotImplementedError? (I'm not sure how to test it explicitly) | 19:16 |
*** silor1 has joined #openstack-swift | 19:24 | |
*** EmilienM is now known as EmilienM|afk | 19:25 | |
*** silor has quit IRC | 19:25 | |
*** nellysmitt has quit IRC | 19:34 | |
*** pdion891 has quit IRC | 19:38 | |
*** EmilienM|afk is now known as EmilienM | 19:50 | |
*** jasondot_ has quit IRC | 20:02 | |
*** bsdkurt has quit IRC | 20:05 | |
*** bsdkurt has joined #openstack-swift | 20:06 | |
*** jrichli has quit IRC | 20:10 | |
*** thumpba has joined #openstack-swift | 20:15 | |
thumpba | can i override the 5GB limit on images | 20:16 |
*** mahatic has quit IRC | 20:24 | |
glange | thumpba: do you mean on objects? | 20:28 |
thumpba | yes | 20:29 |
thumpba | glange: I'm getting an error uploading a snapshot of a raw image into glance, which is using swift as its backend. the snapshot.raw is over 5gb | 20:30 |
thumpba | glange: glance is giving me this error "during chunked upload to backend, deleting stale chunks" | 20:31 |
thumpba | glange: and "Failed to add object to Swift" | 20:31 |
*** jasondot_ has joined #openstack-swift | 20:33 | |
*** jrichli has joined #openstack-swift | 20:33 | |
glange | https://github.com/openstack/swift/blob/master/etc/swift.conf-sample <-- it looks like you can, see max_file_size | 20:34 |
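The constraint lives in the [swift-constraints] section of swift.conf; a sketch, where the default shown is 5 GiB plus 2 bytes and the larger value is only an example:

```ini
# swift.conf
[swift-constraints]
# max_file_size = 5368709122    # default: 5 GiB + 2 bytes
max_file_size = 10737418240     # example only -- see the caveat below about large objects
```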
glange | it might not be a good idea though -- the bigger the objects, the bigger the potential problem with uneven data distribution in your swift cluster | 20:34 |
thumpba | hmmmm...i am just trying to migrate an instance from one openstack env to another, i took a snapshot and downloaded it and now I'm trying to upload the snap to create an instance from it | 20:36 |
glange | if you have control over what you are doing, you could use either a static large object or a dynamic large object instead, there are "no limits" on the size of those files | 20:37 |
ahale | doesn't glance call it a chunked upload when it is creating a swift large object? that sounds like an error in one of the <5GB chunks, which causes it to roll back everything else as it can't retry a chunk? | 20:38 |
glange | and once they are created, they seem to be one object to the downloader | 20:38 |
thumpba | https://github.com/openstack/glance/blob/master/etc/glance-api.conf <-- under e_large_object_size | 20:38 |
thumpba | swift_store_large_object_size | 20:38 |
glange | thumpba: so there is a mismatch between swift and glance about what is too large an object? | 20:39 |
*** rdaly2 has quit IRC | 20:39 | |
glange | or, how are you getting that error if glance starts "chunking" ? | 20:39 |
thumpba | glange: glance image-create --name image-name --disk-format raw --container-format bare --is-public True --file snapshot.raw, i found the error on the glance-api log | 20:40 |
glange | sorry, I don't know much about glance, is there a glance irc channel? | 20:41 |
thumpba | glange: me neither, figured since i was using swift, this would be more relevant | 20:42 |
*** tongli has joined #openstack-swift | 20:42 | |
thumpba | should i modify the swift.conf to get the upload completed? | 20:43 |
thumpba | glange: let me ask you: how do i use a static large object or dynamic large object instead? | 20:43 |
*** mahatic_ has joined #openstack-swift | 20:44 | |
*** jasondot_ has quit IRC | 20:45 | |
*** nellysmitt has joined #openstack-swift | 20:47 | |
glange | http://docs.openstack.org/developer/swift/overview_large_objects.html <-- read that | 20:52 |
glange | http://docs.openstack.org/api/openstack-object-storage/1.0/content/create_static_large_objects.html | 20:52 |
glange | that seems better for SLO, which might be what you want to use | 20:53 |
*** nellysmitt has quit IRC | 20:54 | |
torgomatic | SLO > DLO in nearly every use case | 20:56 |
torgomatic | (IMO) | 20:56 |
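Since SLO is the preferred flavor, a sketch of the final manifest PUT once the segments are uploaded; the container names, etags, and sizes are placeholders (each etag must be the md5 of its segment and size_bytes its byte count):

```shell
cat > manifest.json <<'EOF'
[{"path": "/images_segments/snapshot.raw/00000001", "etag": "<md5-of-segment-1>", "size_bytes": 1073741824},
 {"path": "/images_segments/snapshot.raw/00000002", "etag": "<md5-of-segment-2>", "size_bytes": 1073741824}]
EOF
curl -X PUT -H "X-Auth-Token: $TOKEN" --data-binary @manifest.json \
     "$STORAGE_URL/images/snapshot.raw?multipart-manifest=put"
```

A GET on /images/snapshot.raw then streams the concatenated segments back as a single object, which is the "one object to the downloader" behavior described above.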
*** silor1 has quit IRC | 21:06 | |
*** jrichli has quit IRC | 21:07 | |
mattoliverau | Morning | 21:09 |
*** IRTermite has quit IRC | 21:16 | |
*** NM has quit IRC | 21:19 | |
*** IRTermite has joined #openstack-swift | 21:24 | |
thumpba | "Got error from Swift: Object PUT failed: 413 Request Entity Too Large" trying to upload an image over 5gb | 21:26 |
notmyname | whew. 2.5 hour meeting with a customer | 21:28 |
*** chipmanc has joined #openstack-swift | 21:30 | |
*** jrichli has joined #openstack-swift | 21:32 | |
notmyname | long meeting, but good to hear user stories | 21:33 |
*** imkarrer has quit IRC | 21:37 | |
*** mahatic_ has quit IRC | 21:49 | |
*** mahatic_ has joined #openstack-swift | 21:51 | |
*** chipmanc has quit IRC | 21:56 | |
thumpba | how do i restart swift | 21:56 |
notmyname | thumpba: `swift-init all restart`. also `swift-init all reload` will do a graceful reload | 22:04 |
notmyname | thumpba: why are you restarting? | 22:05 |
notmyname | thumpba: https://swiftstack.com/blog/2013/12/20/upgrade-openstack-swift-no-downtime/ might be interesting to you | 22:05 |
thumpba | notmyname: changed the max_file_size in swift.conf, | 22:06 |
thumpba | notmyname: "Got error from Swift: Object PUT failed: 413 Request Entity Too Large" trying to upload an image over 5gb | 22:06 |
notmyname | ah. then reload is fine to get that new value | 22:06 |
*** mariusleu has quit IRC | 22:06 | |
notmyname | thumpba: however, note that raising the max_file_size is generally not recommended. that's not how to store bigger objects in swift, and changing it can make life more difficult for your ops and users | 22:07 |
notmyname | thumpba: obviously, there are times you want to change it (otherwise it wouldn't be configurable). just note that "I need to store larger objects" is almost never best solved by raising the max object size | 22:08 |
thumpba | notmyname: i want to migrate an instance from another openstack env, when uploading into glance its giving me the error from Swift. | 22:08 |
thumpba | notmyname: once i can make the instance from the image i will delete it and reset the value to normal | 22:09 |
notmyname | how big is the image? | 22:10 |
thumpba | notmyname: whats the correct way? the image is 5.63GB | 22:10 |
*** jwalcik has quit IRC | 22:10 | |
notmyname | the correct way is to use large object manifests | 22:11 |
thumpba | notmyname: don't i have to split the image first in order to do that | 22:11 |
notmyname | yes | 22:11 |
*** wer_ has quit IRC | 22:11 | |
*** wer has quit IRC | 22:11 | |
thumpba | notmyname: is there an openstack tool to split the image? | 22:12 |
*** wer has joined #openstack-swift | 22:12 | |
*** wer_ has joined #openstack-swift | 22:12 | |
notmyname | thumpba: the glance client should. also, the swift CLI can. otherwise, the unix tool `split` is sufficient | 22:14 |
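A sketch of both routes, assuming python-swiftclient is installed and the usual OS_* auth environment variables are set; the container name and segment size are arbitrary choices:

```shell
# let the swift CLI split the file and write the manifest for you
swift upload images snapshot.raw --segment-size 1073741824   # add --use-slo for a static large object

# or split by hand and upload the pieces yourself
split -b 1G snapshot.raw snapshot.raw.part-
```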
*** tongli has quit IRC | 22:15 | |
*** rdaly2 has joined #openstack-swift | 22:16 | |
*** jrichli has quit IRC | 22:17 | |
*** mariusleu has joined #openstack-swift | 22:18 | |
thumpba | notmyname: thanks. gotta figure out how to split and use the large object manifests | 22:19 |
notmyname | thumpba: http://docs.openstack.org/developer/swift/overview_large_objects.html is a great starting place | 22:19 |
thumpba | notmyname: once they're uploaded into the container how will glance recognize that | 22:20 |
notmyname | thumpba: don't know. and from listening in the -infra channel, it would seem that different openstack deployments do it in different ways (eg HP and Rackspace). | 22:22 |
notmyname | thumpba: that's why using the glance client is probably the best bet. but I'm not sure of the root cause of the error you're seeing | 22:23 |
notmyname | thumpba: I mean "object too big". but the question is why glance isn't chunking that | 22:23 |
gyee | is the swift team aware of this? https://review.openstack.org/#/c/131515/ | 22:38 |
notmyname | gyee: nope. thanks | 22:41 |
gyee | it makes me nervous because Swift is eventually consistent | 22:41 |
gyee | I really worry about performance as well | 22:41 |
*** annegent_ has quit IRC | 22:42 | |
notmyname | gyee: why? isn't keystone performance really slow today? | 22:44 |
notmyname | ;-) | 22:44 |
gyee | heh | 22:44 |
notmyname | gyee: in seriousness, ya, perf could be an issue. net request + disk IO for every token fetch can be slow. works fine for well-behaved clients. probably not good for clients who re-auth every request | 22:46 |
notmyname | gyee: but this has been done before. swauth is an auth system that uses swift as a k/v store for the tokens | 22:47 |
gyee | notmyname, good, that's what I am looking for | 22:48 |
gyee | the ratio for token lookup versus token creation is pretty high | 22:49 |
notmyname | ya, it should be | 22:49 |
egon | Actually, what would be really nice would be for *swift* to reuse tokens. | 22:49 |
notmyname | egon: the swift cli? yeah, that'd be nice | 22:50 |
egon | Which is what I thought that patch was for at first. | 22:50 |
notmyname | egon: well, it does. just not across different command-line invocations | 22:50 |
notmyname | gyee: and I think you could build a pretty high-performance cluster to handle that | 22:51 |
gyee | notmyname, does Swift cache objects that have a high lookup rate? | 22:51 |
egon | is that fixed in the latest clients? last I was looking, the tokens aren't reused. | 22:51 |
notmyname | egon: s/does/should/ | 22:52 |
gyee | egon, keystoneclient session does token management | 22:52 |
gyee | it won't reauth | 22:52 |
notmyname | gyee: yes. it can. but it relies on the system page cache for that. so other memory pressure on a box can evict items from the page cache. but yes, swift can specifically keep certain things in memory | 22:52 |
gyee | k, we may be OK with Swift as a token backend then | 22:55 |
*** nellysmitt has joined #openstack-swift | 22:55 | |
gyee | for multiregion, we may use federation instead of relying on container sync | 22:55 |
egon | keystone token-get returns a different token each time. | 22:56 |
gyee | egon, yes, it's supposed to | 22:56 |
ahale | the swauth proxy auth middleware uses memcache tons anyway so page cache eviction is less of an issue for hot tokens | 22:57 |
gyee | ahale, this is for using swift as token backend | 22:58 |
gyee | keystone is calling swift to fetch token objects | 22:59 |
*** reed has quit IRC | 23:00 | |
*** nellysmitt has quit IRC | 23:00 | |
ahale | ah like that ignore me :) | 23:00 |
notmyname | gyee: I do have one concern on that spec: using tempauth | 23:02 |
egon | gyee: so, what makes you assert that the keystone client reuses tokens? | 23:03 |
notmyname | gyee: instead, using tempurls might be a really great idea | 23:03 |
gyee | notmyname, all bootstrap evil :) | 23:03 |
notmyname | gyee: doesn't solve all the auth bootstrap problems, but is interesting | 23:03 |
gyee | that's a shared secret key so security folks may get too curious | 23:04 |
notmyname | gyee: as opposed to the shared secret stored in plain text in a config file that is tempauth? | 23:05 |
gyee | egon, https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/auth/identity/base.py#L143 | 23:06 |
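A sketch of that library-side reuse with the 2015-era python-keystoneclient API; the endpoint and credentials are placeholders:

```python
from keystoneclient.auth.identity import v2
from keystoneclient import session

auth = v2.Password(auth_url='http://keystone.example.com:5000/v2.0',
                   username='demo', password='secret', tenant_name='demo')
sess = session.Session(auth=auth)

token = sess.get_token()        # first call authenticates against keystone
token_again = sess.get_token()  # returns the cached token until it is close to expiry
```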
notmyname | and tempurls are a lot more supportable than tempauth IMO (ie what we recommend people to use in prod) | 23:06 |
gyee | notmyname, we obfuscated passwords in conf files | 23:06 |
gyee | I suppose we can encrypt the shared keys | 23:07 |
notmyname | gyee: not unless you rewrote tempauth, you didn't | 23:07 |
gyee | but I am with you, tempurl is a lot better | 23:07 |
gyee | we have a requirement to rotate passwords every 3 months or so | 23:07 |
gyee | so managing service passwords is a PITA | 23:07 |
notmyname | tempurls allow multiple keys so you can rotate them without expiring existing urls | 23:08 |
gyee | nice! | 23:08 |
notmyname | gyee: I actually just wrote all this down last week :-) https://swiftstack.com/blog/2015/01/29/swift-feature-highlight-tempurls/ | 23:08 |
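The post covers the details; for reference, a sketch of generating a temp URL signature in the form the tempurl middleware expects (Python 2 style of the time; the key, host, and path are placeholders):

```python
import hmac
import time
from hashlib import sha1

key = 'account-secret-key'   # stored as X-Account-Meta-Temp-URL-Key (or -Key-2 during rotation)
method = 'GET'
expires = int(time.time() + 3600)
path = '/v1/AUTH_demo/images/snapshot.raw'

sig = hmac.new(key, '%s\n%s\n%s' % (method, expires, path), sha1).hexdigest()
url = 'https://swift.example.com%s?temp_url_sig=%s&temp_url_expires=%d' % (path, sig, expires)
```

Because both -Key and -Key-2 are checked, a new key can be rolled in without breaking URLs signed with the old one, which is the rotation point made above.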
*** dmsimard is now known as dmsimard_away | 23:09 | |
gyee | awesome, ++ for tempurl | 23:10 |
gyee | actually, ++ for signature-based access | 23:10 |
*** chlong has joined #openstack-swift | 23:11 | |
notmyname | gyee: yes! | 23:14 |
notmyname | I'd love to see better support for that overall. give me an authenticated service API to get the shared secret(s) for a given user, and I'll do signature-based auth all day long | 23:14 |
notmyname | right now, since that API doesn't exist, we're using the account metadata to store the shared secret. that way swift can access it to validate the signatures | 23:15 |
gyee | notmyname, I wrote a bp awhile back, got shot down | 23:15 |
gyee | sorry | 23:15 |
notmyname | gyee: I left a comment in gerrit on the spec | 23:16 |
gyee | notmyname, great, thanks for help! | 23:16 |
notmyname | gyee: thanks for letting us know | 23:17 |
egon | gyee: the cli's do not reuse tokens automatically. I think I see that you're saying that if you're using the python client library, it does. | 23:27 |
notmyname | oh yeah. the swift client lib will too | 23:30 |
gyee | egon, "keystone token-get" explicitly ask for new token, that's by design | 23:34 |
gyee | lib will cache the token | 23:34 |
gyee | not sure if swift cli supports keyring like nova cli does | 23:35 |
notmyname | gyee: what are you seeing as the problems with storing tokens in swift and eventual consistency? | 23:36 |
gyee | notmyname, for multi-region, I don't know how long the replication takes | 23:36 |
gyee | also, I am not sure about searching capability, like searching for all the tokens owned by a user so we can revoke them | 23:37 |
egon | gyee: If you do keystone user-list twice in a row, it uses a different token for each request. | 23:37 |
gyee | I suppose we'll have to store metadata on swift objects to enhance searching capability? | 23:37 |
notmyname | gyee: ya, I think there would need to be some thought put into how to organize accounts/containers/objects so that it remains performant and searchable | 23:38 |
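For context, custom metadata in swift is stored per object and returned on HEAD/GET, but it is not queryable; listings can only be filtered by name prefix/delimiter, which is why the naming layout matters. A sketch with placeholder token, storage URL, container, and naming scheme:

```shell
# attach metadata to a stored token object
curl -X POST -H "X-Auth-Token: $TOKEN" -H "X-Object-Meta-Owner: user-1234" \
     "$STORAGE_URL/tokens/user-1234-<token-id>"

# "searching" by user only works if the user is encoded in the object name
curl -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/tokens?prefix=user-1234-"
```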
notmyname | gyee: maybe. maybe not | 23:38 |
gyee | notmyname, for now I put a -1 there for more details | 23:38 |
notmyname | gyee: ya, I saw | 23:38 |
gyee | egon, yes, because we don't use keyring | 23:39 |
gyee | for keystone CLI, every call is a new session | 23:39 |
notmyname | gyee: now can I ask for some keystone help? :-) | 23:41 |
gyee | sure :) | 23:41 |
notmyname | https://review.openstack.org/#/c/152283/ was proposed today by torgomatic. We're not at all experts in keystone, so we'd appreciate help (even with getting tests passing) | 23:42 |
*** annegent_ has joined #openstack-swift | 23:42 | |
gyee | looks like a trivial change | 23:43 |
gyee | let me take a look | 23:43 |
notmyname | gyee: thanks | 23:43 |
notmyname | torgomatic is on a train home right now, but he'll probably be back online later today | 23:44 |
gyee | k | 23:44 |
*** annegent_ has quit IRC | 23:49 | |
*** annegent_ has joined #openstack-swift | 23:57 |