ho | good morning | 00:04 |
*** vjujjuri has quit IRC | 00:30 | |
*** dmorita has joined #openstack-swift | 00:31 | |
mattoliverau | ho: morning | 00:41 |
*** cdelatte has quit IRC | 00:51 | |
*** harlowja has quit IRC | 00:58 | |
*** asettle is now known as asettle-afk | 00:59 | |
*** gyee has quit IRC | 01:14 | |
*** haomaiwa_ has quit IRC | 01:26 | |
*** kota_ has joined #openstack-swift | 01:27 | |
*** haomaiwa_ has joined #openstack-swift | 01:28 | |
*** panbalag has joined #openstack-swift | 01:31 | |
*** panbalag has left #openstack-swift | 01:32 | |
ho | mattoliverau: morning! | 01:36 |
*** mitz has quit IRC | 02:52 | |
minwoob | peluse notmyname: Quick q -- In the Trello discussion for small file optimizations, https://trello.com/c/QWaYXWNf/120-small-file-optimizations -- could you explain the purpose of the padded zeros in Paul's first suggestion as a solution to the problem? | 03:10 |
minwoob | Thanks. | 03:10 |
*** occupant has quit IRC | 03:21 | |
*** zhill has quit IRC | 03:24 | |
*** occupant has joined #openstack-swift | 03:38 | |
kota_ | minwoob: hi | 03:39 |
*** vjujjuri has joined #openstack-swift | 03:39 | |
kota_ | minwoob: In my memory, current swift slices the original data into segment-sized pieces and encodes each one. | 03:42 |
kota_ | minwoob: i.e. an object smaller than the segment size would be adjusted with zero padding. | 03:42 |
kota_ | minwoob: that would cause inefficient disk space usage because a smaller object would still be maintained as 1MB (the default segment size is 1MB) | 03:44 |
kota_ | minwoob: therefore Paul is saying that we want to minimize the redundant (and unnecessary) padding, I think. | 03:45 |
kota_ | I'm looking at the current code to check whether my description is correct, tho. | 03:46 |
*** tobe4333 has joined #openstack-swift | 03:48 | |
minwoob | kota_: I see. So it seems like it wouldn't be as straightforward (for a fix), to just not store the padding, right? | 03:49 |
minwoob | kota_: (I assume they are there for a reason). | 03:49 |
*** HenryG has quit IRC | 03:51 | |
*** asettle-afk is now known as asettle | 03:53 | |
kota_ | minwoob: would not be easy, i think. | 03:53 |
kota_ | minwoob: but possible, maybe. | 03:54 |
kota_ | minwoob: current swift decides the encoding/decoding unit from the segment size and the numbers k and m. | 03:55 |
kota_ | minwoob: maybe we have to specialize the last (or small) segment, whose size is different from the other fragments in the fragment archive. | 03:56 |
kota_ | minwoob: wow, sorry, I might have a misunderstanding. | 03:58 |
kota_ | minwoob: current *Swift* doesn't pad additional data onto the segments. for now, I'm going to dig into PyECLib, which seems to behave as we are assuming. | 03:59 |
minwoob | kota_: Ah, I see. | 04:00 |
minwoob | kota_: Thanks for explaining, btw. | 04:02 |
kota_ | minwoob: no worries, I'll ping you if I get more information for that. | 04:03 |
minwoob | kota_: Okay, great! | 04:04 |
*** tobe4333 has quit IRC | 04:20 | |
*** vjujjuri_ has joined #openstack-swift | 04:31 | |
*** vjujjuri has quit IRC | 04:33 | |
*** vjujjuri_ is now known as vjujjuri | 04:33 | |
openstackgerrit | pradeep kumar singh proposed openstack/swift: Swift account auditor fails to quarantine corrupt db due to Disk IO error. This patch fix that by handling Disk IO Error Exception. https://review.openstack.org/182734 | 04:34 |
kota_ | minwoob: not sure but no padding to the *segment* | 04:36 |
kota_ | minwoob: However please note that EC splits the source data into n pieces. | 04:37 |
kota_ | minwoob: i.e. if the incoming object consists of 1 byte of data, the fragments consist of more than 1 byte each (for reasons like making the source divisible by k, 16-byte alignment for performance, and adding the EC info as headers) | 04:39 |
kota_ | minwoob: the additional bytes for the fragments might be larger than the benefit of decreasing disk space by EC in the small object case. | 04:41 |
kota_ | minwoob: but it would be better to ask Paul, again. the info looks a bit stale. | 04:43 |
minwoob | kota_: All right. | 04:44 |
kota_ | s/the info/the info at Trello/ | 04:45 |
minwoob | kota_: We'll see if Paul wants to chime in on this. | 04:46 |
kota_ | minwoob: exactly :) | 04:47 |
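A rough sketch of the overhead kota_ describes for small objects under EC. The numbers here (k=6, 16-byte alignment, an 80-byte per-fragment header) are illustrative assumptions, not PyECLib/liberasurecode's actual values, which vary by backend; the point is only why a tiny object balloons once it is split into k+m fragments.

```python
import math

def fragment_size(object_size, k=6, alignment=16, header=80):
    """Toy model of per-fragment size for a small object.

    All parameters are hypothetical; real PyECLib/liberasurecode
    values differ by backend. This only illustrates the shape of
    the overhead, not exact numbers.
    """
    # split the source into k data pieces, rounding up
    piece = math.ceil(object_size / k)
    # pad each piece out to the alignment boundary
    piece = math.ceil(piece / alignment) * alignment
    # each stored fragment also carries its own metadata header
    return piece + header

# a 1-byte object stored as 6+4 fragments: every fragment is far
# larger than the whole source object
total = 10 * fragment_size(1)
```

Under these assumptions a 1-byte object occupies 960 bytes on disk, which is the "additional bytes might be larger than the benefit" case kota_ mentions.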
minwoob | Sounds good. | 04:48 |
*** minwoob has quit IRC | 04:52 | |
*** SkyRocknRoll has joined #openstack-swift | 04:55 | |
torgomatic | http://cube-drone.com/2015_05_12-145_Progress.html | 05:05 |
*** mwheckmann has joined #openstack-swift | 05:07 | |
*** mwheckmann has quit IRC | 05:12 | |
*** ppai has joined #openstack-swift | 05:18 | |
*** bkopilov has quit IRC | 05:20 | |
*** bkopilov has joined #openstack-swift | 05:22 | |
swifterdarrell | torgomatic: lol | 05:32 |
swifterdarrell | torgomatic: so true | 05:33 |
*** HenryG has joined #openstack-swift | 05:38 | |
openstackgerrit | Darrell Bishop proposed openstack/swift: Allow SAIO to answer is_local_device() better https://review.openstack.org/183395 | 05:40 |
swifterdarrell | ^^^^^^^^ that is in prep for some vagrant-swift-all-in-one changes and another patch that provides support for one-object-server-per-device-port-in-all-rings | 05:42 |
*** vjujjuri has quit IRC | 05:47 | |
*** zaitcev has quit IRC | 05:57 | |
*** zhill has joined #openstack-swift | 06:37 | |
*** zhill has quit IRC | 07:16 | |
*** aluria has joined #openstack-swift | 07:28 | |
*** acoles_away is now known as acoles | 07:30 | |
*** ho_ has joined #openstack-swift | 07:31 | |
*** ho_ has quit IRC | 07:31 | |
*** chlong has quit IRC | 07:32 | |
*** jistr has joined #openstack-swift | 07:33 | |
*** silor has joined #openstack-swift | 07:34 | |
*** SkyRocknRoll has quit IRC | 07:36 | |
*** geaaru has joined #openstack-swift | 07:37 | |
*** tobe4333 has joined #openstack-swift | 07:44 | |
*** kei_yama has quit IRC | 07:54 | |
*** kei_yama has joined #openstack-swift | 07:57 | |
*** tobe4333 has quit IRC | 08:07 | |
*** jordanP has joined #openstack-swift | 08:07 | |
*** km has quit IRC | 08:15 | |
*** kei_yama has quit IRC | 08:26 | |
*** Trozz has joined #openstack-swift | 08:53 | |
*** early has quit IRC | 08:57 | |
*** early has joined #openstack-swift | 09:09 | |
openstackgerrit | pradeep kumar singh proposed openstack/swift: Handle Disk IO error Exception in swift account auditor. https://review.openstack.org/182734 | 09:15 |
*** kota_ has quit IRC | 10:02 | |
*** wbhuber has quit IRC | 10:03 | |
*** aix has joined #openstack-swift | 10:21 | |
*** tobe4333 has joined #openstack-swift | 10:27 | |
*** kota_ has joined #openstack-swift | 10:28 | |
*** ho has quit IRC | 10:46 | |
*** tobe4333 has quit IRC | 11:07 | |
*** cdelatte has joined #openstack-swift | 11:30 | |
*** delattec has joined #openstack-swift | 11:30 | |
*** dencaval has joined #openstack-swift | 11:51 | |
*** dmorita has quit IRC | 11:55 | |
*** ppai has quit IRC | 12:04 | |
*** dencaval has quit IRC | 12:10 | |
*** links has joined #openstack-swift | 12:13 | |
*** dencaval has joined #openstack-swift | 12:21 | |
*** annegentle has joined #openstack-swift | 12:48 | |
*** kota_ has quit IRC | 12:50 | |
*** links has quit IRC | 12:57 | |
*** jkugel has joined #openstack-swift | 13:17 | |
*** erlon has joined #openstack-swift | 13:22 | |
*** CaioBrentano has joined #openstack-swift | 13:23 | |
*** lastops has joined #openstack-swift | 13:25 | |
*** wbhuber has joined #openstack-swift | 13:25 | |
openstackgerrit | Christian Cachin proposed openstack/swift-specs: Updates to encryption spec https://review.openstack.org/154318 | 13:31 |
*** CaioBrentano has quit IRC | 13:43 | |
*** aix has quit IRC | 13:44 | |
*** fthiagogv has joined #openstack-swift | 13:54 | |
*** jrichli has joined #openstack-swift | 13:57 | |
*** annegentle has quit IRC | 13:59 | |
*** mwheckmann has joined #openstack-swift | 14:00 | |
*** lcurtis has joined #openstack-swift | 14:02 | |
lcurtis | hello all...I added a 3rd container node to my swift cluster but getting low throughput | 14:03 |
lcurtis | it looks like only a single process is running even though i set concurrency to 50000 | 14:04 |
lcurtis | /usr/bin/python /usr/bin/swift-container-replicator /etc/swift/container-server.conf | 14:04 |
lcurtis | or 5000 rather | 14:04 |
lcurtis | is this expected behavior? | 14:07 |
*** archers has joined #openstack-swift | 14:09 | |
*** aix has joined #openstack-swift | 14:22 | |
*** breitz has quit IRC | 14:24 | |
*** breitz has joined #openstack-swift | 14:25 | |
openstackgerrit | Thiago Gomes proposed openstack/python-swiftclient: Fix the Upload an object to a pseudo-folder https://review.openstack.org/165112 | 14:28 |
openstackgerrit | Thiago Gomes proposed openstack/python-swiftclient: Fix the Upload an object to a pseudo-folder https://review.openstack.org/165112 | 14:30 |
*** annegentle has joined #openstack-swift | 14:31 | |
glange | for the container replicator it's only one process, and the concurrency is an eventlet GreenPool | 14:33 |
glange | all the replicators are single processes | 14:33 |
*** archers has quit IRC | 14:38 | |
lcurtis | Thank you glange | 14:43 |
*** zynisch_o7 has joined #openstack-swift | 14:45 | |
*** mragupat has joined #openstack-swift | 14:45 | |
*** mragupat has quit IRC | 14:46 | |
openstackgerrit | Thiago da Silva proposed openstack/swift: WIP: new attempt at single-process https://review.openstack.org/159285 | 14:57 |
*** nadeem has joined #openstack-swift | 15:10 | |
*** mwheckmann has quit IRC | 15:10 | |
*** mwheckmann has joined #openstack-swift | 15:14 | |
*** jistr is now known as jistr|mtgh | 15:15 | |
*** jistr|mtgh is now known as jistr|mtg | 15:15 | |
*** mwheckmann has quit IRC | 15:19 | |
*** mwheckmann has joined #openstack-swift | 15:19 | |
*** minwoob has joined #openstack-swift | 15:20 | |
*** csmart has quit IRC | 15:20 | |
*** csmart has joined #openstack-swift | 15:25 | |
*** jistr|mtg is now known as jistr | 15:28 | |
*** csmart has quit IRC | 15:30 | |
*** geaaru has quit IRC | 15:30 | |
*** acoles is now known as acoles_away | 15:31 | |
*** csmart has joined #openstack-swift | 15:31 | |
*** gyee has joined #openstack-swift | 15:37 | |
openstackgerrit | Michael Barton proposed openstack/swift: go: log 499 on client early disconnect https://review.openstack.org/183577 | 15:43 |
*** mahatic has joined #openstack-swift | 15:46 | |
*** shakamunyi has quit IRC | 16:00 | |
*** barra204 has quit IRC | 16:00 | |
*** harlowja has joined #openstack-swift | 16:01 | |
*** vjujjuri has joined #openstack-swift | 16:02 | |
*** harlowja has quit IRC | 16:03 | |
*** annegentle has quit IRC | 16:04 | |
ctennis | lcurtis: are you on a recent version of swift? | 16:06 |
lcurtis | ctennis: 1.13.1-0ubuntu1.1 | 16:08 |
lcurtis | any good stuff I am missing? | 16:08 |
*** jordanP has quit IRC | 16:12 | |
*** jordanP has joined #openstack-swift | 16:13 | |
ctennis | lcurtis: one thing that comes to mind is that in a more recent version a bug was fixed that cleaned up empty container and account partitions which didn't have anything in them...in your version you may have a lot of empty partition directories, which impedes replication time | 16:14 |
ctennis | you might look and see if you have empty partition directories | 16:14 |
lcurtis | okay! will do..thanks ctennis | 16:18 |
notmyname | good morning | 16:21 |
*** jordanP has quit IRC | 16:21 | |
notmyname | less than 48 hours until I'm on a plane to Vancouver. I'm starting to feel a little rushed to get stuff done ;-) | 16:29 |
egon | notmyname: I hear ya. I still have finishing stuff to do on my deck, let alone pack, or figure out simple travel logistics. | 16:41 |
egon | what flight are you on? you're in SF, right? | 16:42 |
notmyname | egon: i'm leaving early sunday morning. will be in vancouver by lunch | 16:42 |
egon | I get in at 4-something | 16:42 |
egon | pm | 16:43 |
egon | ctennis: for cleaning up empty containers, is that a job that runs, or a new feature? | 16:44 |
*** acoles_away is now known as acoles | 16:51 | |
*** mahatic has quit IRC | 16:51 | |
openstackgerrit | Michael Barton proposed openstack/swift: go: check error returns part 1 https://review.openstack.org/183605 | 16:51 |
ctennis | egon: it's part of the replicator, it's something it should have been doing all along but was not | 16:51 |
*** acoles is now known as acoles_away | 16:52 | |
egon | ctennis: we have an application team using swift who pre-creates a lot of containers, because they saw a performance improvement. So they have tons of empty ones. Is that considered a valid use case anymore? | 16:56 |
ctennis | egon: not empty containers, those are fine. this is empty partitions of containers | 17:02 |
ctennis | essentially container data that's moved elsewhere in the system and the enclosing directory was not cleaned up | 17:03 |
jodah | Q about storage policies. it was suggested that i could use storage policies to represent racks of hosts, and thereby effectively place object replicas across racks. how do i associate groups of hosts, such as racks, with a storage policy, so that the replicas are placed across racks? | 17:03 |
egon | ctennis: oh! gotcha. that makes sense | 17:03 |
jodah | my question stems from a discussion with cschwede a while back on the mailing list http://lists.openstack.org/pipermail/openstack-dev/2015-February/057326.html | 17:03 |
*** annegentle has joined #openstack-swift | 17:05 | |
*** zhill has joined #openstack-swift | 17:05 | |
*** annegentle has quit IRC | 17:10 | |
*** jistr has quit IRC | 17:12 | |
*** aix has quit IRC | 17:15 | |
*** RobOakes has joined #openstack-swift | 17:19 | |
RobOakes | I've been playing with a development cluster we use for OpenStack Swift. It's configured with a single node and a single object copy (no replication). I'd like to add a second node and up the replication count to two. What is the best way to do this? | 17:22 |
RobOakes | Can I create a new ring with both machines, modify the object count, and still maintain the data on the current storage node? | 17:22 |
notmyname | yes. or rather, you should add the 2nd node to the existing ring | 17:23 |
notmyname | then when you rebalance and deploy the updated ring, swift will rearrange the data and move it to the right place | 17:23 |
RobOakes | Okay. Once the second node is added, is there a way to up the number of replication copies? | 17:24 |
RobOakes | My understanding was that once the number of replication copies is set, that you can't change it. | 17:25 |
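The replica count is actually adjustable per ring builder, not fixed forever. A sketch of the workflow notmyname describes, with placeholder device parameters (region/zone/IP/device/weight are illustrative; run these against your real builder files):

```shell
# add the second node's device(s) to the existing object ring
swift-ring-builder object.builder add r1z2-192.168.0.2:6000/sdb1 100

# raise the replica count from 1 to 2
swift-ring-builder object.builder set_replicas 2

# recompute partition placement, then push object.ring.gz to all nodes
swift-ring-builder object.builder rebalance
```

After the updated ring is deployed everywhere, the replicators move and copy data into the right places in the background.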
*** askname has joined #openstack-swift | 17:27 | |
askname | Hi guys, question regarding the md5/ETag of an object. When a new object is uploaded into Swift, who is calculating the checksum of the object ? the swift-proxy, or the object-server daemon ? | 17:28 |
notmyname | the proxy | 17:29 |
notmyname | err...no, sorry | 17:29 |
notmyname | the object | 17:29 |
tdasilva | in EC, it's the proxy, right? | 17:30 |
*** annegentle has joined #openstack-swift | 17:35 | |
*** annegentle has quit IRC | 17:40 | |
*** NM has joined #openstack-swift | 17:42 | |
*** jrichli has quit IRC | 17:54 | |
notmyname | http://d.not.mn/ec-v-repl.png <--- graph of EC vs Replication | 18:11 |
notmyname | tdasilva: yea | 18:11 |
notmyname | askname: tdasilva: it's actually done in a few places | 18:11 |
*** annegentle has joined #openstack-swift | 18:11 | |
askname | what is EC ? | 18:12 |
egon | notmyname: is that a performance graph, or rps? | 18:12 |
egon | askname: erasure codes | 18:12 |
openstackgerrit | Michael Barton proposed openstack/swift: go: replace ghetto getpwnam with os/user https://review.openstack.org/183635 | 18:12 |
notmyname | egon: PUTs/sec | 18:12 |
notmyname | egon: from http://d.not.mn/20150511_run1_15fullness.csv | 18:13 |
egon | notmyname: so that's performance of pps, or pps required to finish replication? | 18:14 |
notmyname | egon: that's from a client perspective. so performance of puts per second | 18:14 |
egon | notmyname: gotcha | 18:15 |
notmyname | egon: taller bars are better | 18:15 |
notmyname | so eg you can see that there is a point where EC becomes faster than replication | 18:15 |
openstackgerrit | Michael Barton proposed openstack/swift: go: replace ghetto getpwnam with os/user https://review.openstack.org/183635 | 18:15 |
egon | notmyname: what are the object sizes, and what happens if you have mixed-workloads? | 18:19 |
tdasilva | notmyname: do you have any info on the cluster used for those tests? | 18:19 |
notmyname | egon: the scenario files used are at https://github.com/swiftstack/ssbench/pull/107/files | 18:20 |
notmyname | tdasilva: it's the community QA cluster. 5 servers, Intel Avoton chips, 8GB memory, 4 drives per server in the policy (6 and 8 TB helium drives) | 18:21 |
glange | notmyname: what does that graph show? requests per second to the cluster or requests per second for replication (or something) | 18:21 |
notmyname | glange: a benchmark run of replication and EC policies. everything else the same | 18:22 |
glange | run of puts to the cluster? | 18:22 |
notmyname | yes | 18:22 |
glange | ok | 18:22 |
*** RobOakes has left #openstack-swift | 18:22 | |
notmyname | glange: so it shows the puts/sec in each policy. 3x replica and 6+4 EC | 18:22 |
*** openstackgerrit has quit IRC | 18:22 | |
glange | ok | 18:22 |
*** openstackgerrit has joined #openstack-swift | 18:22 | |
notmyname | the "medium" category is objects between 5MB and 25MB | 18:23 |
notmyname | small = 1-5MB | 18:23 |
notmyname | note that the EC segment size is 1MB | 18:23 |
notmyname | miniscule = 10-2048 bytes | 18:24 |
notmyname | tiny = 4k - 8k | 18:24 |
tdasilva | notmyname: what about ssbench configuration? number of workers, connections, etc... | 18:24 |
notmyname | 4 workers | 18:28 |
notmyname | to the 5 servers | 18:28 |
notmyname | concurrency in that run was 30 | 18:28 |
*** rdaly2 has joined #openstack-swift | 18:30 | |
*** rdaly2 has quit IRC | 18:31 | |
notmyname | if you really want to see all the data, I'm uploading it now | 18:34 |
notmyname | :-) | 18:34 |
notmyname | 191MB compressed, 2G uncompressed | 18:35 |
*** shakamunyi has joined #openstack-swift | 18:36 | |
*** barra204 has joined #openstack-swift | 18:36 | |
notmyname | http://d.not.mn/20150512_run2_15fullness.tgz | 18:38 |
openstackgerrit | Stuart McLaren proposed openstack/python-swiftclient: Add minimal working service token support. https://review.openstack.org/182640 | 18:41 |
*** nadeem has quit IRC | 18:44 | |
openstackgerrit | Stuart McLaren proposed openstack/python-swiftclient: Add minimal working service token support. https://review.openstack.org/182640 | 18:44 |
*** NM1 has joined #openstack-swift | 18:49 | |
*** nadeem has joined #openstack-swift | 18:50 | |
*** nadeem has quit IRC | 18:51 | |
*** NM has quit IRC | 18:52 | |
*** silor1 has joined #openstack-swift | 18:57 | |
*** silor has quit IRC | 18:59 | |
mwheckmann | clayg: Tested the patch for bug #1413619. Seems to solve the problem. | 19:00 |
openstack | bug 1413619 in OpenStack Object Storage (swift) "container sync gets stuck after deleting all objects" [Undecided,New] https://launchpad.net/bugs/1413619 - Assigned to Gil Vernik (gilv) | 19:00 |
*** wbhuber_ has joined #openstack-swift | 19:01 | |
*** silor1 has quit IRC | 19:01 | |
*** ahale_ has joined #openstack-swift | 19:03 | |
*** ahale has quit IRC | 19:03 | |
*** NM1 has quit IRC | 19:04 | |
*** wbhuber has quit IRC | 19:04 | |
*** NM has joined #openstack-swift | 19:22 | |
*** NM has quit IRC | 19:23 | |
*** tdasilva has quit IRC | 19:30 | |
*** annegentle has quit IRC | 19:33 | |
*** tdasilva has joined #openstack-swift | 19:37 | |
*** azure23 has joined #openstack-swift | 19:42 | |
*** azure23 has quit IRC | 19:42 | |
*** vinsh has joined #openstack-swift | 19:46 | |
*** mahatic has joined #openstack-swift | 19:47 | |
*** mahatic has quit IRC | 19:47 | |
*** barra204 has quit IRC | 19:48 | |
*** shakamunyi has quit IRC | 19:48 | |
*** jrichli has joined #openstack-swift | 19:49 | |
openstackgerrit | Thiago da Silva proposed openstack/swift: move replication code to ReplicatedObjectController https://review.openstack.org/182826 | 19:50 |
*** vinsh has quit IRC | 19:51 | |
openstackgerrit | Thiago da Silva proposed openstack/swift: WIP: new attempt at single-process https://review.openstack.org/159285 | 19:53 |
*** thumpba has joined #openstack-swift | 19:54 | |
ekarlso | heya, for devstack swift shouldn't it be enable_service swift s-object s-account s-proxy s-container | 20:00 |
ekarlso | ? | 20:00 |
*** dencaval has quit IRC | 20:01 | |
*** lpabon has joined #openstack-swift | 20:01 | |
clayg | mwheckmann: oh that's great! can you put that on the bug report | 20:02 |
mwheckmann | clayg: already did :) | 20:03 |
clayg | notmyname: that graph is nice - with like gradients and stuff - you're stepping up your game | 20:04 |
glange | haha | 20:04 |
clayg | notmyname: still it would be nice for "huge write only" to say what the object size is | 20:05 |
*** tab____ has joined #openstack-swift | 20:05 | |
*** fthiagogv has quit IRC | 20:07 | |
ekarlso | no one uses devstack ? | 20:10 |
glange | not where I work | 20:11 |
*** wbhuber__ has joined #openstack-swift | 20:11 | |
*** wbhuber_ has quit IRC | 20:15 | |
tdasilva | ekarlso: i think most swift devs use their own SAIO dev. environment instead of devstack | 20:17 |
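For reference on ekarlso's devstack question, the Swift services in devstack are enabled roughly like this (a `local.conf` fragment; the variable values are illustrative and release-dependent):

```ini
[[local|localrc]]
# devstack's Swift service names: proxy, object, container, account
enable_service s-proxy s-object s-container s-account
# required: random suffix used in ring hashing
SWIFT_HASH=change-me-to-a-random-string
# a single-node devstack usually runs with one replica
SWIFT_REPLICAS=1
```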
*** zaitcev has joined #openstack-swift | 20:18 | |
*** ChanServ sets mode: +v zaitcev | 20:18 | |
glange | and we don't run that in production either | 20:20 |
*** lastops has quit IRC | 20:21 | |
ekarlso | what is it that runs in the gates then ? | 20:25 |
notmyname | clayg: the huge one is 1-5GB | 20:29 |
notmyname | all of them are a range | 20:29 |
*** askname has quit IRC | 20:32 | |
*** nadeem has joined #openstack-swift | 20:38 | |
*** annegentle has joined #openstack-swift | 20:44 | |
tdasilva | taking off for today...hope you guys have fun at the conference...looking forward to watching the presentations and hearing back from the discussions | 20:46 |
jrichli | tdasilva: enjoy being with the little one! | 20:47 |
ekarlso | is there an easy way to deploy an SAIO box and have it configured to use keystone ? | 20:50 |
*** lpabon has quit IRC | 20:58 | |
*** nadeem has quit IRC | 21:11 | |
jrichli | ekarlso: would you like links to SAIO and keystone setup instructions, or are you asking for something "easier" than that? | 21:16 |
morganfainberg | jrichli: i kind of have a snarky answer to your rhetorical question but i don't want to be too snarky today... | 21:19 |
morganfainberg | jrichli: also *waves* | 21:19 |
morganfainberg | :) | 21:19 |
jrichli | morganfainberg: I am sorry. I didn't mean for it to come across that way. It is difficult to read intention with text only. | 21:20 |
morganfainberg | jrichli: no i was commenting that i have a snarky response :) | 21:21 |
morganfainberg | jrichli: and that i didn't want to be the snarky one today :) | 21:21 |
ekarlso | jrichli: eh, I was wondering if there was an easy way to stand up an SAIO instance that would interact with an existing keystone.. | 21:21 |
morganfainberg | jrichli: you're just fine :) | 21:21 |
jrichli | ekarlso: I believe you just have to add keystone middleware and the config section to your proxy-server.conf | 21:25 |
jrichli | morganfainberg: good to know, just wanted to make sure. :-) | 21:27 |
morganfainberg | jrichli: i'm a little punchy / snarky because summit time - means i have a lot to think about. | 21:28 |
morganfainberg | and a presentation or two to finish writing | 21:28 |
morganfainberg | :P | 21:28 |
zaitcev | the question, I suspect, is what to put into that proxy-server.conf | 21:28 |
zaitcev | like, e.g. swiftoperator=???? | 21:28 |
zaitcev | you probably want to find some kind of group like "users" or create a new one | 21:28 |
zaitcev | create one in Keystone | 21:28 |
jrichli | morganfainberg: I hope to see you there! Good luck on getting things done. | 21:29 |
zaitcev | all the rest should be trivial... just uncomment from samples in etc/proxy-server.conf-sample | 21:30 |
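Putting jrichli's and zaitcev's pointers together, the proxy-server.conf changes look roughly like this. Names and values here are illustrative assumptions (middleware module paths in particular vary by release); the authoritative template is etc/proxy-server.conf-sample in your Swift tree:

```ini
[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystoneauth proxy-server

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
# point these at the existing Keystone (placeholder values)
identity_uri = http://keystone.example.com:35357/
admin_user = swift
admin_password = secret
admin_tenant_name = service

[filter:keystoneauth]
use = egg:swift#keystoneauth
# users holding one of these roles get swiftoperator-style access,
# per zaitcev's note about picking or creating a group in Keystone
operator_roles = admin, swiftoperator
```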
*** jrichli has quit IRC | 21:34 | |
*** jkugel has quit IRC | 21:35 | |
*** erlon has quit IRC | 21:41 | |
*** annegentle has quit IRC | 21:49 | |
ekarlso | does swiftclient support keystone sessions ? | 21:51 |
*** annegentle has joined #openstack-swift | 21:51 | |
zaitcev | Not sure what Keystone sessions are. Swift client simply does the same thing that e.g. "keystone token-get" would. | 21:53 |
zaitcev | Oh and it also pulls the endpoint from the attached catalog, although I think Keystone server does the interpolation. | 21:54 |
morganfainberg | zaitcev: keystone sessions are an object that handles re-auth, plugins for different forms of authentication, catalog parsing, etc | 21:54 |
morganfainberg | zaitcev: not sure if swiftclient uses it or not. | 21:54 |
morganfainberg | but for keystone authenticated cases, longterm it should. | 21:55 |
*** zhill_ has joined #openstack-swift | 21:59 | |
* notmyname is happy to see morganfainberg in the -swift channel! | 21:59 | |
morganfainberg | notmyname: i've been lurking here for about 6-8 months | 21:59 |
morganfainberg | i just usually stay quiet | 21:59 |
notmyname | lurking != "here" ;-) | 22:00 |
morganfainberg | dude, i have been reading the channel | 22:00 |
morganfainberg | thats enough of being here for IRC... ;) | 22:00 |
notmyname | :-) | 22:00 |
notmyname | but I'm definitely excited that you jump in to help out with keystone stuff in here | 22:00 |
morganfainberg | also, once we get keystoneauth split out, it should be easier to get swiftclient in a state that it can use it w/o all the other icky deps [for the cases you need keystone authentication-y-stuff] | 22:01 |
morganfainberg | if you don't already have session fun in the client | 22:01 |
*** tdasilva has quit IRC | 22:01 | |
* morganfainberg hasn't looked at swiftclient tbh | 22:01 | |
morganfainberg | notmyname: i try and jump in on all the major channels when i can actually speak to what is going on. | 22:02 |
minwoob | Regarding the GIL/multithreading issue. | 22:04 |
minwoob | If it does turn out that the GIL is posing significant barriers to performance | 22:05 |
minwoob | Where do we go from there? | 22:05 |
minwoob | Can't just replace CPython, right? | 22:05 |
minwoob | Possibly some extensions that need to be worked with. | 22:05 |
minwoob | Also, it seems that Kevin was suggesting that GIL shouldn't be a problem where there are only a few I/O bound operations. | 22:07 |
minwoob | From what I've read, it seems that I/O bound operations are fine, but rather the CPU bound operations are where we can really take a performance hit from the lock. | 22:07 |
notmyname | minwoob: the GIL in python will limit compute-bound workloads (in python code). are you seeing something else? | 22:07 |
notmyname | or more specifically, it will limit multi-threaded, single-process compute-bound workloads | 22:08 |
minwoob | It seems that Kevin has a different understanding of this problem, based on his description of it. | 22:09 |
minwoob | seems to be suggesting that we need to watch out for GIL for I/O operations, but from what I've read that should be fine. | 22:10 |
notmyname | no, the EC stuff is in a C library, and the GIL is released when a C library is called. therefore it's not an issue there | 22:10 |
redbo | Threading and the GIL only affects threaded code, which the proxy isn't. Also from a cursory glance, PyECLib never seems to release the GIL. | 22:14 |
*** wbhuber__ has quit IRC | 22:16 | |
notmyname | hmmm | 22:18 |
notmyname | I'll bug tsg about that next week | 22:18 |
redbo | But not releasing the GIL is fine if you're not planning on using multiple threads | 22:22 |
notmyname | ya | 22:23 |
*** tdasilva has joined #openstack-swift | 22:23 | |
minwoob | So, ideally all the threading should be done in liberasurecode and the pluggable backends, right? | 22:27 |
*** proteusguy has joined #openstack-swift | 22:28 | |
notmyname | well except that there is no threading. it's all in the same process | 22:34 |
*** tab____ has quit IRC | 22:34 | |
minwoob | Hmm | 22:35 |
minwoob | I'll think about this more when I come back. Thanks. | 22:38 |
*** jamielennox|away is now known as jamielennox | 22:48 | |
portante | redbo, notmyname: pyeclib is mostly compute bound, right? Does it perform any non-blocking IO? | 22:50 |
notmyname | portante: it's just the EC computations. no IO | 22:50 |
*** annegentle has quit IRC | 22:51 | |
*** vinsh has joined #openstack-swift | 22:52 | |
*** vinsh has quit IRC | 22:59 | |
*** vinsh has joined #openstack-swift | 22:59 | |
mattoliverau | Morning all, well I'm off to the airport, cya in Vancouver.. In about 30 hours or so :p | 23:02 |
*** proteusguy has quit IRC | 23:15 | |
*** lcurtis has quit IRC | 23:17 | |
torgomatic | if anything, it'd be better for us if pyeclib did not release the GIL | 23:22 |
torgomatic | since the proxy is single-threaded, and it's faster to not do something than to do it | 23:22 |
redbo | I kind of think threads wouldn't help any. Unless the EC operations take a really long time, which isn't the impression I get. | 23:46 |
torgomatic | I completely agree there. If we were erasure-coding giant wads of data at once, maybe, but we're only doing dinky amounts per call | 23:47 |
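The distinction running through this discussion can be sketched in a few lines: CPython releases the GIL inside many C routines (hashlib does so for large buffers, for example), so threads can overlap in that C code, while pure-Python compute serializes on the GIL; whether an extension like PyECLib releases it is up to that extension, as redbo noted. This demo uses hashlib purely as a stand-in for a GIL-releasing C library, and asserts only correctness, not timing:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

# four ~1MB buffers to hash
data = [bytes([i]) * 1_000_000 for i in range(4)]

def md5_hex(buf):
    # the digest loop runs in C; CPython drops the GIL for large
    # buffers, so threads can genuinely overlap here
    return hashlib.md5(buf).hexdigest()

with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(md5_hex, data))

# the answers are identical either way; threading (and the GIL)
# only affects wall-clock time, never results
assert threaded == [md5_hex(buf) for buf in data]
```

None of this changes torgomatic's conclusion: since the proxy is single-threaded eventlet, holding the GIL during small per-call EC operations costs nothing.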
*** annegentle has joined #openstack-swift | 23:51 | |
*** annegentle has quit IRC | 23:57 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!