*** baojg has quit IRC | 00:13 | |
*** haomaiwang has joined #openstack-swift | 00:27 | |
*** haomaiwang has quit IRC | 00:32 | |
*** mingdang1 has joined #openstack-swift | 00:42 | |
*** chlong has quit IRC | 00:51 | |
*** takashi has joined #openstack-swift | 00:51 | |
takashi | good morning | 00:53 |
*** david-lyle has quit IRC | 00:54 | |
*** david-lyle has joined #openstack-swift | 00:54 | |
kota_ | Good morning! | 00:56 |
kota_ | ho, takashi: \o/ | 00:57 |
*** km has quit IRC | 00:58 | |
*** km has joined #openstack-swift | 00:58 | |
mingdang1 | oh, morning | 01:00 |
*** StraubTW has joined #openstack-swift | 01:03 | |
*** chlong has joined #openstack-swift | 01:05 | |
takashi | kota_, ho_, mingdang1: morning | 01:06 |
kota_ | mingdang1: o/ | 01:06 |
*** StraubTW has quit IRC | 01:07 | |
*** NM has joined #openstack-swift | 01:29 | |
*** StraubTW has joined #openstack-swift | 01:40 | |
*** panda has quit IRC | 01:40 | |
*** panda has joined #openstack-swift | 01:41 | |
*** NM has quit IRC | 01:46 | |
*** tsg has joined #openstack-swift | 01:49 | |
tsg | kota_: ping | 01:50 |
kota_ | tsg: pong | 01:50 |
kota_ | tsg: alright, we are talking about patch 282578 | 01:51 |
patchbot | kota_: https://review.openstack.org/#/c/282578/ - swift - Set backend content length for fallocate - EC Policy | 01:51 |
tsg | comments on https://review.openstack.org/#/c/282578/3/swift/proxy/controllers/obj.py - in particular the "TODO" comment at #2116 | 01:51 |
patchbot | tsg: patch 282578 - swift - Set backend content length for fallocate - EC Policy | 01:51 |
kota_ | :) | 01:51 |
kota_ | directly speaking, in my eyes, the current PyECLib.get_segment_info seems to round the data_len into the previous segment if the overflow beyond the segment_size is small enough. | 01:52 |
kota_ | e.g. | 01:52 |
kota_ | (i'm not sure i'm making a correct example) | 01:53 |
kota_ | segment_size = 1MB, data_len = 1MB + 1B -> num_segment = 1, last_segment_size = 1MB + 1B... | 01:54 |
kota_ | like this? | 01:54 |
kota_ | actually, Swift will make 2 segments, like num_segment = 2, last_segment_size = 1B | 01:54 |
kota_ | tsg: is that correct? | 01:55 |
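A rough sketch of the contrast kota_ is describing (the function names and the rounding rule are a paraphrase of this discussion, not PyECLib's actual implementation):

    MB = 1024 * 1024

    def segments_swift_style(data_len, segment_size=MB):
        # Swift-style: any overflow starts a new, short last segment,
        # e.g. 1MB + 1B -> num_segment = 2, last_segment_size = 1B
        num_segments = max(1, -(-data_len // segment_size))  # ceil division
        return num_segments, data_len - (num_segments - 1) * segment_size

    def segments_pyeclib_style(data_len, segment_size=MB):
        # the rounding kota_ describes: a small overflow is folded into
        # the previous segment, e.g. 1MB + 1B -> num_segment = 1,
        # last_segment_size = 1MB + 1B
        num_segments = max(1, data_len // segment_size)
        return num_segments, data_len - (num_segments - 1) * segment_size

    print(segments_swift_style(MB + 1))    # (2, 1)
    print(segments_pyeclib_style(MB + 1))  # (1, 1048577)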
*** baojg has joined #openstack-swift | 01:55 | |
tsg | kota_: that's how it was planned early on - I assume that's what you see happen? | 01:55 |
tsg | kota_: is there a bug? or is this not what's desired? | 01:56 |
kota_ | tsg: I don't know why PyECLib does so for now, i.e. I don't know whether it's desired or not. | 01:56 |
kota_ | but to be simple, I prefer the way current Swift does it even if it wastes the device space a bit. | 01:57 |
tsg | kota_: it was a design choice :) | 01:58 |
kota_ | anyways, we found the difference: what PyECLib expects and what Swift does differ. | 01:59 |
kota_ | tsg: exactly | 01:59 |
tsg | kota_: padding a smaller segment was also a part of design discussion at the time but we chose to go the other route | 01:59 |
kota_ | tsg: (honestly that's the reason I didn't raise a bug report for it, because it's actually a design choice :)) | 01:59 |
tsg | kota_: I see - did you discover this when playing with the 1MB + 1b* like case? | 02:01 |
tsg | kota_: (do you have a test case for Swift where this becomes an issue, is the q) | 02:01 |
kota_ | tsg: yup, at first, the first author Janie was confused about why the values would differ. | 02:02 |
kota_ | wait | 02:02 |
*** haomaiwang has joined #openstack-swift | 02:04 | |
kota_ | tsg: not found, I might not have saved the test, sorry. | 02:04 |
jrichli | kota_: the functests that had failed for me were test_slo_copy and test_slo_copy_account | 02:04 |
tsg | jrichli: o/ | 02:05 |
kota_ | jrichli: \o/ | 02:05 |
jrichli | tsg kota_: o/ | 02:05 |
tsg | jrichli: did these tests fail after you added code to handle the "non-chunked" case (ie when CL is in the PUT headers) | 02:06 |
jrichli | I would get a 499 on a PUT because fsize != upload_size. actual size was +80 | 02:06 |
jrichli | these tests do not fail with the latest code in the patch | 02:07 |
*** chlong has quit IRC | 02:07 | |
tsg | jrichli: ok .. I am curious what made these tests fail (they were passing earlier) | 02:07 |
kota_ | because PyECLib rounds the 1 byte into the previous segment but Swift transfers the fragment header + 1 byte as the last fragment. | 02:07 |
kota_ | fragment header is now 80 bytes, right? | 02:07 |
kota_ | tsg:^^ | 02:08 |
tsg | kota_: correct | 02:08 |
tsg | kota_: it has always been :) | 02:08 |
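A toy calculation tying the numbers together (assuming only the 80-byte fragment header tsg confirms): each stored fragment archive is the header plus the encoded payload, so the short trailing segment Swift emits still costs 80 extra bytes, which is where jrichli's "actual size was +80" mismatch comes from.

    FRAGMENT_HEADER = 80  # bytes, per tsg above

    def fragment_archive_size(payload_len):
        # each fragment a node stores = fragment header + encoded payload
        return FRAGMENT_HEADER + payload_len

    print(fragment_archive_size(1))  # 81: a 1-byte trailing segment costs 81B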
kota_ | tsg: does that make sense to you? | 02:09 |
tsg | kota_: I am cross-checking everything from libec, pyeclib to the original ec put code .. but wondering why these tests didn't fail all this time. :-) | 02:10 |
tsg | jrichli, kota_: nm .. maybe we were not running functests after all .. sorry | 02:10 |
kota_ | tsg: and currently we don't use get_segment_info for the fallocate, just chunked transfer. | 02:11 |
kota_ | tsg: it was found when we tried to use it in Swift itself :) | 02:11 |
tsg | kota_: there is a long history to the "chunked-only" decision :) | 02:12 |
kota_ | tsg: I know the tons of effort :) | 02:12 |
tsg | kota_: but I do see the value in making sure there is enough disk space before streaming the fragments | 02:12 |
tsg | kota_: given get_segment_info() is not part of the common(ly used) API and Swift is the only caller at the moment, it should be possible to make a change - let's also chat with Kevin - I will start a thread. thank you for bringing this up! | 02:17 |
kota_ | tsg: thanks a lot! | 02:17 |
tsg | thank you jrichli! for the patch | 02:18 |
*** tsg has quit IRC | 02:18 | |
jrichli | tsg: np! thank you | 02:19 |
*** chlong has joined #openstack-swift | 02:20 | |
*** km has quit IRC | 02:23 | |
*** km has joined #openstack-swift | 02:24 | |
*** StraubTW has quit IRC | 02:34 | |
*** StraubTW has joined #openstack-swift | 02:35 | |
*** StraubTW has quit IRC | 02:39 | |
*** StraubTW has joined #openstack-swift | 02:40 | |
*** nadeem has joined #openstack-swift | 02:50 | |
*** StraubTW has quit IRC | 02:55 | |
*** haomaiwang has quit IRC | 03:01 | |
*** haomaiwang has joined #openstack-swift | 03:01 | |
*** asettle has quit IRC | 03:20 | |
*** asettle has joined #openstack-swift | 03:20 | |
*** chlong has quit IRC | 03:22 | |
*** chlong has joined #openstack-swift | 03:41 | |
*** asettle has quit IRC | 03:44 | |
*** sanchitmalhotra has joined #openstack-swift | 03:45 | |
*** sanchitmalhotra has quit IRC | 03:47 | |
*** links has joined #openstack-swift | 03:50 | |
*** ekarlso- has quit IRC | 04:00 | |
*** haomaiwang has quit IRC | 04:01 | |
*** haomaiwang has joined #openstack-swift | 04:01 | |
*** ekarlso- has joined #openstack-swift | 04:13 | |
*** silor has joined #openstack-swift | 04:26 | |
*** klrmn has quit IRC | 04:31 | |
*** treaki__ has joined #openstack-swift | 04:32 | |
*** ppai has joined #openstack-swift | 04:33 | |
*** chlong has quit IRC | 04:35 | |
*** treaki_ has quit IRC | 04:36 | |
*** chlong has joined #openstack-swift | 04:47 | |
*** baojg has quit IRC | 04:52 | |
*** baojg has joined #openstack-swift | 04:53 | |
*** haomaiwang has quit IRC | 05:01 | |
*** haomaiwang has joined #openstack-swift | 05:01 | |
*** haomaiwang has quit IRC | 05:02 | |
*** chlong has quit IRC | 05:03 | |
*** chlong has joined #openstack-swift | 05:16 | |
*** haomaiwa_ has joined #openstack-swift | 05:18 | |
openstackgerrit | Brian Cline proposed openstack/swift: Don't report recon mount/usage status on files https://review.openstack.org/292206 | 05:18 |
*** chlong has quit IRC | 05:22 | |
*** haomaiwa_ has quit IRC | 05:22 | |
*** haomaiwang has joined #openstack-swift | 05:33 | |
*** chlong has joined #openstack-swift | 05:34 | |
*** haomaiwang has quit IRC | 05:38 | |
*** haomaiwang has joined #openstack-swift | 05:49 | |
*** haomaiwang has quit IRC | 05:52 | |
*** haomaiwang has joined #openstack-swift | 05:52 | |
*** haomaiwang has quit IRC | 06:01 | |
*** haomaiwa_ has joined #openstack-swift | 06:01 | |
*** trifon has joined #openstack-swift | 06:01 | |
*** nadeem has quit IRC | 06:10 | |
*** nadeem has joined #openstack-swift | 06:10 | |
*** andreaponza has joined #openstack-swift | 06:13 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/swift: Imported Translations from Zanata https://review.openstack.org/292217 | 06:13 |
*** asettle has joined #openstack-swift | 06:18 | |
*** ChubYann has quit IRC | 06:19 | |
*** andreaponza has quit IRC | 06:34 | |
*** daemontool has quit IRC | 06:34 | |
*** pcaruana has quit IRC | 06:39 | |
openstackgerrit | Nadeem Syed proposed openstack/swift: go: add ability to dump goroutines stacktrace with SIGABRT https://review.openstack.org/292229 | 06:46 |
*** chlong has quit IRC | 06:48 | |
*** siva_krishnan has joined #openstack-swift | 06:51 | |
*** ChanServ sets mode: +v cschwede | 06:53 | |
*** haomaiwa_ has quit IRC | 07:01 | |
*** haomaiwa_ has joined #openstack-swift | 07:01 | |
*** bhakta_ has quit IRC | 07:02 | |
*** Lickitysplitted_ has joined #openstack-swift | 07:04 | |
*** patchbot has quit IRC | 07:04 | |
*** patchbot` has joined #openstack-swift | 07:04 | |
*** andymccr_ has joined #openstack-swift | 07:05 | |
*** jlhinson_ has joined #openstack-swift | 07:05 | |
*** patchbot` is now known as patchbot | 07:05 | |
*** timburke_ has joined #openstack-swift | 07:06 | |
*** ChanServ sets mode: +v timburke_ | 07:06 | |
*** chrisnelson_ has joined #openstack-swift | 07:06 | |
*** acorwin_ has joined #openstack-swift | 07:07 | |
*** asettle has quit IRC | 07:07 | |
*** bhakta has joined #openstack-swift | 07:08 | |
*** redbo_ has joined #openstack-swift | 07:10 | |
*** csmart_ has joined #openstack-swift | 07:10 | |
*** km has quit IRC | 07:12 | |
*** cschwede_ has joined #openstack-swift | 07:14 | |
*** ChanServ sets mode: +v cschwede_ | 07:14 | |
*** Lickitysplitted has quit IRC | 07:15 | |
*** cschwede has quit IRC | 07:15 | |
*** redbo has quit IRC | 07:15 | |
*** jlhinson has quit IRC | 07:15 | |
*** mathiasb has quit IRC | 07:15 | |
*** chrisnelson has quit IRC | 07:15 | |
*** sileht has quit IRC | 07:15 | |
*** timburke has quit IRC | 07:15 | |
*** ajiang has quit IRC | 07:15 | |
*** csmart has quit IRC | 07:15 | |
*** clyps__ has quit IRC | 07:15 | |
*** wbhuber has quit IRC | 07:15 | |
*** dabukalam has quit IRC | 07:15 | |
*** acorwin has quit IRC | 07:15 | |
*** andymccr has quit IRC | 07:15 | |
*** km has joined #openstack-swift | 07:15 | |
*** cschwede_ is now known as cschwede | 07:15 | |
*** ajiang has joined #openstack-swift | 07:16 | |
*** mathiasb has joined #openstack-swift | 07:16 | |
*** clyps__ has joined #openstack-swift | 07:16 | |
*** wbhuber has joined #openstack-swift | 07:16 | |
*** dabukalam has joined #openstack-swift | 07:16 | |
*** sileht has joined #openstack-swift | 07:16 | |
*** treyd_ has quit IRC | 07:18 | |
*** treyd has joined #openstack-swift | 07:21 | |
*** timur has joined #openstack-swift | 07:32 | |
*** timur has left #openstack-swift | 07:32 | |
*** csmart_ is now known as csmart | 07:34 | |
*** mmcardle has joined #openstack-swift | 07:37 | |
*** andymccr_ is now known as andymccr | 07:39 | |
*** timur has joined #openstack-swift | 07:43 | |
*** mmcardle1 has joined #openstack-swift | 07:49 | |
*** mmcardle has quit IRC | 07:51 | |
*** tesseract has joined #openstack-swift | 07:52 | |
*** tesseract is now known as Guest59187 | 07:52 | |
*** baojg has quit IRC | 07:58 | |
*** haomaiwa_ has quit IRC | 08:01 | |
*** haomaiwang has joined #openstack-swift | 08:01 | |
*** jmccarthy has quit IRC | 08:02 | |
*** jmccarthy has joined #openstack-swift | 08:03 | |
*** baojg has joined #openstack-swift | 08:05 | |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift: Fix reclaimable PUT racing .durable/.data cleanup https://review.openstack.org/289756 | 08:13 |
*** rledisez has joined #openstack-swift | 08:13 | |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift: Fix reclaimable PUT racing .durable/.data cleanup https://review.openstack.org/289756 | 08:22 |
*** rcernin has joined #openstack-swift | 08:25 | |
*** pcaruana has joined #openstack-swift | 08:43 | |
*** permalac has joined #openstack-swift | 08:50 | |
permalac | Guys, how do you manage large swift environments? I'm running a small swift (3 nodes and 3 proxies on the openstack controllers), and this is difficult to follow. | 08:51 |
permalac | I can't see where my glance images go, and it makes me nervous. | 08:51 |
permalac | What am I missing? | 08:52 |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift: Fix ssync related object-server config docs https://review.openstack.org/292257 | 08:52 |
*** asettle has joined #openstack-swift | 08:55 | |
*** haomaiwang has quit IRC | 09:01 | |
*** jordanP has joined #openstack-swift | 09:02 | |
*** stantonnet has quit IRC | 09:02 | |
*** stantonnet has joined #openstack-swift | 09:05 | |
*** haomaiwang has joined #openstack-swift | 09:06 | |
*** asettle has quit IRC | 09:09 | |
mingdang1 | @kota_ excuse me, i have a question. When I run a data migration in a swift cluster, how do I ensure the service stays normal? | 09:09 |
*** asettle has joined #openstack-swift | 09:11 | |
*** asettle has quit IRC | 09:15 | |
*** McMurlock has quit IRC | 09:19 | |
*** McMurlock has joined #openstack-swift | 09:23 | |
*** McMurlock has left #openstack-swift | 09:24 | |
*** jistr has joined #openstack-swift | 09:28 | |
*** nadeem has quit IRC | 09:29 | |
*** acoles_ is now known as acoles | 09:55 | |
*** haomaiwang has quit IRC | 10:01 | |
*** haomaiwang has joined #openstack-swift | 10:01 | |
*** kei_yama has quit IRC | 10:02 | |
*** ho_ has quit IRC | 10:07 | |
*** mingdang1 has quit IRC | 10:19 | |
*** haomaiwang has quit IRC | 10:23 | |
*** mvk has joined #openstack-swift | 10:24 | |
*** haomaiwang has joined #openstack-swift | 10:26 | |
*** haomaiwang has quit IRC | 10:32 | |
*** haomaiwang has joined #openstack-swift | 10:33 | |
*** mingdang1 has joined #openstack-swift | 10:55 | |
*** mingdang1 has joined #openstack-swift | 10:55 | |
kota_ | mingdang1: I'm back | 10:58 |
kota_ | mingdang1: ^^ | 10:58 |
kota_ | mingdang1: what does "migrate" mean for you? | 10:59 |
*** haomaiwang has quit IRC | 11:01 | |
*** haomaiwang has joined #openstack-swift | 11:01 | |
mingdang1 | when i rebalance a ring, and swift-replicator is working on "update_deleted" | 11:02 |
mingdang1 | @kota_ :) | 11:02 |
*** baojg has quit IRC | 11:03 | |
kota_ | mingdang1: I think the swift service always stays normal in that case. | 11:05 |
mingdang1 | yeah? | 11:06 |
kota_ | mingdang1: if you want to know the status for the replication, you can see the log in the object-replicator | 11:06 |
*** silor has quit IRC | 11:07 | |
kota_ | it might depend on what you mean by 'normal', though. | 11:07 |
mingdang1 | maybe i request an object, but one replica of the object is still replicating... | 11:07 |
mingdang1 | after I rebalance, the partition that an object belongs to is moved from one node to another node | 11:10 |
kota_ | yes | 11:10 |
mingdang1 | now i get it,the ring recond the node is old,but it moving to new node | 11:11 |
mingdang1 | %s/recond/record | 11:11 |
mingdang1 | oh,I am wrong.now i get it,the ring record the node is new,and it is moving to new node | 11:13 |
kota_ | yup | 11:13 |
kota_ | the primary who should have the replica was changed. | 11:14 |
mingdang1 | maybe the object is not completely moved to the new node, and i request to this node | 11:15 |
mingdang1 | it will return a 404 not found? | 11:16 |
kota_ | mingdang1: it may return 404 *but* | 11:17 |
kota_ | mingdang1: basically Swift will move only one replica of each partition at once | 11:17 |
kota_ | mingdang1: when you do "rebalance" command for swift-ring-builder | 11:18 |
kota_ | mingdang1: so 2 replicas will still remain on the primary nodes. | 11:18 |
mingdang1 | but those 2 replicas' nodes are the old ones | 11:19 |
mingdang1 | when i get nodes from the ring i find the new one | 11:19 |
kota_ | mingdang1: and then, if proxy got 404 not found from the first primary, proxy will attempt to the next primary (who should have the one of the replicas) | 11:19 |
kota_ | mingdang1: you mean all primary nodes are replaced with completely fresh new nodes? | 11:20 |
kota_ | mingdang1: can i make sure i understand your migration scenario? | 11:22 |
kota_ | mingdang1: I thought... | 11:22 |
mingdang1 | when i run the rebalance, aren't all replicas changed in the ring? | 11:22 |
kota_ | mingdang1: 1. add new devices to ring, 2. do rebalance, 3. deploy the ring, 4. wait, 5. remove old devices, 6. do rebalance, 7. deploy the ring, 8. wait | 11:23 |
kota_ | in my scenario. | 11:23 |
kota_ | mingdang1: exactly | 11:24 |
kota_ | mingdang1: except removing all the devices from the ring, maybe. | 11:24 |
kota_ | except? unless? | 11:24 |
kota_ | lack of my English skill :/ | 11:25 |
kota_ | just one replica will be moved at once per rebalance. | 11:25 |
mingdang1 | now I add a new device, and rebalance, and the partition where object A is located is moving to another node | 11:25 |
mingdang1 | now if i get object A, will it return a 404? | 11:26 |
kota_ | mingdang1: did you get 404? | 11:27 |
mingdang1 | no, no, i guess :) | 11:27 |
mingdang1 | on the basis of code i read :) | 11:28 |
*** km has quit IRC | 11:28 | |
mingdang1 | maybe i am wrong | 11:28 |
kota_ | mingdang1: you can almost always get 200 unless the 2 primaries (which are not moving) are down. | 11:29 |
mingdang1 | swift reads one replica, but what if that one is the one not completely moved? | 11:29 |
mingdang1 | primaries means? | 11:29 |
kota_ | alright, primary means the nodes that should have the replica in the current ring. | 11:30 |
kota_ | the first 3 devices (if you set 3 replicas) you can see when you run 'swift-get-nodes <ring> account container object' | 11:32 |
kota_ | mingdang1: note that swift can redirect the get request to another node if an object-server is down, not found, or whatever returns 4xx or 5xx. | 11:34 |
kota_ | mingdang1: exactly, the proxy tries to read one replica, and that replica might be mid-move for the rebalance, but the proxy can get the object from another node. | 11:35 |
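A toy model of the fallback kota_ describes (purely illustrative; the real logic lives in the proxy's GET path and is considerably more involved):

    def get_object(primaries, fetch):
        # try each primary in turn; a replica that is mid-move for a
        # rebalance may 404, but another primary still holds a good copy
        for node in primaries:
            status, body = fetch(node)
            if status == 200:
                return status, body
        return 404, None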
mingdang1 | when I run rebalance and don't copy the ring to the storage nodes, the storage nodes have the old ring and the proxy has the new ring? | 11:37 |
kota_ | mingdang1: wow, scary | 11:37 |
kota_ | mingdang1: I think you shouldn't do so. | 11:38 |
kota_ | mingdang1: that would mean the Swift cluster has 2 different rings, right? | 11:38 |
mingdang1 | yes? | 11:39 |
mingdang1 | yes | 11:39 |
*** ujjain- is now known as ujjain | 11:39 | |
mingdang1 | if i don't copy the ring to the storage nodes manually, how do i ensure the ring is the same? | 11:40 |
kota_ | Swift is designed so that all nodes have the same ring. | 11:40 |
kota_ | depends on operation, e.g. md5sum? | 11:41 |
mingdang1 | where do you run rebalance ? | 11:41 |
kota_ | outside of the Swift cluster | 11:42 |
kota_ | maybe we'd call it something like a management node | 11:42 |
mingdang1 | yes | 11:42 |
mingdang1 | then? | 11:42 |
kota_ | deploy the ring to nodes in various way | 11:43 |
kota_ | ways | 11:43 |
mingdang1 | is there any process that ensures they're the same? | 11:43 |
kota_ | e.g. ansible? scp? git and agent pulling? | 11:43 |
kota_ | depends on your oeration model. | 11:43 |
kota_ | s/oeration/operation/ | 11:43 |
kota_ | no process in Swift itself. | 11:44 |
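A minimal sketch of the md5 comparison kota_ suggests (hostnames are placeholders; swift-recon --md5 automates the same check from a management node):

    import hashlib
    import subprocess

    def local_md5(path='/etc/swift/object.ring.gz'):
        with open(path, 'rb') as f:
            return hashlib.md5(f.read()).hexdigest()

    reference = local_md5()
    for host in ['storage1', 'storage2', 'storage3']:  # placeholder hosts
        remote = subprocess.check_output(
            ['ssh', host, 'md5sum', '/etc/swift/object.ring.gz'],
            text=True).split()[0]
        print(host, 'OK' if remote == reference else 'MISMATCH')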
mingdang1 | oh | 11:44 |
mingdang1 | maybe i left out a storage node.....:( | 11:45 |
kota_ | hmm | 11:47 |
kota_ | mingdang1: you can see how HPE cloud managed the ring, here https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/maintaining-and-operating-swift-at-public-cloud-scale | 11:49 |
kota_ | mingdang1: since about 26:00-ish | 11:50 |
*** cdelatte has joined #openstack-swift | 11:52 | |
*** cdelatte has quit IRC | 11:53 | |
*** cdelatte has joined #openstack-swift | 11:53 | |
mingdang1 | @kota_ ok, thanks very much :) | 11:56 |
*** haomaiwang has quit IRC | 12:01 | |
*** haomaiwang has joined #openstack-swift | 12:01 | |
*** chlong has joined #openstack-swift | 12:01 | |
*** ppai_ has joined #openstack-swift | 12:01 | |
*** ppai has quit IRC | 12:03 | |
*** delattec has joined #openstack-swift | 12:25 | |
*** cdelatte has quit IRC | 12:28 | |
*** silor has joined #openstack-swift | 12:32 | |
*** haomaiwang has quit IRC | 12:32 | |
*** NM has joined #openstack-swift | 12:35 | |
*** silor has quit IRC | 12:39 | |
*** silor has joined #openstack-swift | 12:39 | |
*** MVenesio has joined #openstack-swift | 12:40 | |
*** silor1 has joined #openstack-swift | 12:42 | |
*** silor has quit IRC | 12:44 | |
*** silor1 is now known as silor | 12:44 | |
*** links has quit IRC | 12:46 | |
*** StraubTW has joined #openstack-swift | 12:46 | |
*** links has joined #openstack-swift | 12:46 | |
*** ppai_ has quit IRC | 12:50 | |
*** _JZ_ has joined #openstack-swift | 12:59 | |
*** StraubTW has quit IRC | 13:01 | |
openstackgerrit | Gleb Samsonov proposed openstack/swift: go: proxyserver's version with mongodb backend This is our implemetation for some swift proxy functions. Not product-ready yet. https://review.openstack.org/287157 | 13:01 |
*** delatte has joined #openstack-swift | 13:03 | |
*** haomaiwang has joined #openstack-swift | 13:05 | |
*** delattec has quit IRC | 13:05 | |
*** esker has quit IRC | 13:13 | |
*** esker has joined #openstack-swift | 13:14 | |
*** yarkot_ has joined #openstack-swift | 13:19 | |
*** StraubTW has joined #openstack-swift | 13:24 | |
*** BigWillie has joined #openstack-swift | 13:24 | |
openstackgerrit | Gleb Samsonov proposed openstack/swift: go: proxyserver's version with mongodb backend This is our implemetation for some swift proxy functions. Not product-ready yet. https://review.openstack.org/287157 | 13:25 |
*** yarkot_ has quit IRC | 13:27 | |
*** cbartz has joined #openstack-swift | 13:34 | |
*** panda has quit IRC | 13:40 | |
*** panda has joined #openstack-swift | 13:40 | |
openstackgerrit | Gleb Samsonov proposed openstack/swift: go: proxyserver's version with mongodb backend This is our implemetation for some swift proxy functions. Not product-ready yet. https://review.openstack.org/287157 | 13:41 |
*** mvk has quit IRC | 13:41 | |
*** mingdang1 has quit IRC | 13:47 | |
*** haomaiwang has quit IRC | 14:01 | |
*** haomaiwang has joined #openstack-swift | 14:01 | |
*** ig0r_ has joined #openstack-swift | 14:02 | |
*** ametts has joined #openstack-swift | 14:04 | |
*** tongli has joined #openstack-swift | 14:10 | |
*** daemontool has joined #openstack-swift | 14:10 | |
*** mvk has joined #openstack-swift | 14:12 | |
*** asettle has joined #openstack-swift | 14:12 | |
*** david-lyle has quit IRC | 14:16 | |
*** david-lyle has joined #openstack-swift | 14:19 | |
*** asettle has quit IRC | 14:19 | |
*** pcaruana has quit IRC | 14:28 | |
*** CaioBrentano has joined #openstack-swift | 14:35 | |
*** vinsh has joined #openstack-swift | 14:38 | |
gmmaha | good morning | 14:40 |
*** gmmaha has left #openstack-swift | 14:42 | |
*** gmmaha has joined #openstack-swift | 14:43 | |
*** zaitcev has joined #openstack-swift | 14:45 | |
*** ChanServ sets mode: +v zaitcev | 14:45 | |
*** cbartz has left #openstack-swift | 14:49 | |
*** twm2016 has joined #openstack-swift | 14:56 | |
*** haomaiwang has quit IRC | 15:01 | |
*** tmoreira has quit IRC | 15:01 | |
*** haomaiwa_ has joined #openstack-swift | 15:01 | |
pdardeau | good morning gmmaha | 15:04 |
gmmaha | pdardeau: o/ | 15:04 |
*** nchristia has joined #openstack-swift | 15:04 | |
*** tmoreira has joined #openstack-swift | 15:05 | |
twm2016 | Hello everyone, I am working on this bug https://bugs.launchpad.net/swift/+bug/1537811 and am trying to write a functional test. I have never done this before and am looking for some guidance on this. I think I want to add an if statement before this one here: https://github.com/openstack/swift/blob/master/test/functional/swift_test_client.py#L292 | 15:08 |
openstack | Launchpad bug 1537811 in OpenStack Object Storage (swift) "204 No Content responses have Content-Length specified" [Low,In progress] - Assigned to Trevor McCasland (twm2016) | 15:08 |
*** gmmaha has quit IRC | 15:08 | |
*** gmmaha has joined #openstack-swift | 15:08 | |
twm2016 | My change checks for response status 204 and removes the "Content-Length" header if it exists. | 15:09 |
twm2016 | I have the unit test written but the functional test is a bit different. | 15:09 |
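For context, the end-to-end behavior under discussion can be checked with a snippet like this (host, path, and token are placeholders, not the patch's actual test code):

    import http.client

    conn = http.client.HTTPConnection('proxy.example.com', 8080)
    conn.request('DELETE', '/v1/AUTH_test/c/o',
                 headers={'X-Auth-Token': 'AUTH_tk_placeholder'})
    resp = conn.getresponse()
    resp.read()
    if resp.status == 204:
        # RFC 7230 3.3.2: a 204 response must not carry Content-Length
        assert resp.getheader('Content-Length') is None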
*** arch-nemesis has joined #openstack-swift | 15:10 | |
*** klrmn has joined #openstack-swift | 15:13 | |
*** esker has quit IRC | 15:15 | |
*** corvus is now known as jeblair | 15:21 | |
*** daemontool has quit IRC | 15:22 | |
*** daemontool has joined #openstack-swift | 15:22 | |
*** links has quit IRC | 15:23 | |
*** ig0r__ has joined #openstack-swift | 15:23 | |
*** StraubTW has quit IRC | 15:23 | |
*** ig0r_ has quit IRC | 15:26 | |
*** StraubTW has joined #openstack-swift | 15:26 | |
*** tsg has joined #openstack-swift | 15:29 | |
siva_krishnan | good morning! | 15:40 |
*** fthiagogv has joined #openstack-swift | 15:44 | |
*** zul has joined #openstack-swift | 15:50 | |
*** garthb has joined #openstack-swift | 15:53 | |
notmyname | good morning, everyone | 15:53 |
*** proteusguy_ has quit IRC | 15:54 | |
notmyname | I was kinda absent late last week so i could recover from whatever sickness I had. I hope to catch up today | 15:54 |
notmyname | jrichli: I hope you're feeling better, too | 15:54 |
jrichli | notmyname: thanks, my plan to sleep it off worked! It never really fully developed :-) | 15:55 |
jrichli | glad you are feeling better! | 15:56 |
notmyname | great :-) | 15:57 |
*** haomaiwa_ has quit IRC | 16:01 | |
*** haomaiwang has joined #openstack-swift | 16:01 | |
*** StraubTW has quit IRC | 16:01 | |
*** daemontool has quit IRC | 16:01 | |
*** StraubTW has joined #openstack-swift | 16:04 | |
*** pcaruana has joined #openstack-swift | 16:09 | |
*** proteusguy_ has joined #openstack-swift | 16:11 | |
jidar | is there no documented process for recovering objects from quarantine? | 16:13 |
jidar | I've searched through a few books and some of the docs and admin guide, and while it's mentioned a few times I don't actually see the process outlined anywhere | 16:13 |
notmyname | jidar: quarantined objects are put into a "quarantine" directory (sibling of "objects"). you can examine them there. however, note that they are only put there if something is wrong with them. so you almost certainly don't want to put them back | 16:14 |
notmyname | after a replica is quarantined, replication will replace that replica with a good one (ie copy over another replica) | 16:15 |
jidar | notmyname: I can explain why they are there, but the solution has already been run (path_hash settings gone awry) | 16:15 |
notmyname | ah. yikes | 16:15 |
jidar | for a few hours the hash was wrong | 16:15 |
jidar | now I've got a bunch of glance images sitting there unable to be used | 16:15 |
notmyname | ok, so you got a lot of good stuff into the quarantine directory and want to put it back. is that for every drive or just for one drive? | 16:16 |
jidar | three servers, all controllers and all under /srv/node/d1 | 16:16 |
openstackgerrit | Trevor McCasland proposed openstack/swift: Remove Content-Length from 204 No Content Response https://review.openstack.org/291461 | 16:17 |
*** gyee has joined #openstack-swift | 16:17 | |
*** dmorita has joined #openstack-swift | 16:19 | |
notmyname | so under quarantine, you have objects/<hash>/<ts>.data | 16:19 |
jidar | something along these lines: /srv/node/d1/quarantined/objects/04b27334c0de225af769837593324876/1452023491.12492.data | 16:19 |
notmyname | and under objects (the good one), you have the pattern <part>/<suffix>/<hash>/<ts>.data | 16:20 |
notmyname | so here's how that works | 16:20 |
notmyname | note that they both have <hash>. that's the same thing | 16:20 |
jidar | similar: /srv/node/d1/objects/532/f1e/852369f73b2efe65167c43af382b0f1e/1452030564.69473.ts | 16:21 |
acoles | notmyname: glad you're feeling better. i'm confused by the status of this patch 289890 which seems to have stalled - I can only wonder that maybe you needed to add your +2 *followed by* your +A | 16:21 |
patchbot | acoles: https://review.openstack.org/#/c/289890/ - python-swiftclient (stable/liberty) - Do not reveal auth token in swiftclient log messag... | 16:21 |
*** twm2016 has quit IRC | 16:21 | |
notmyname | it's the hash of the object name according to the ring (including the hash_suffix and hash_prefix values in swift.conf) | 16:21 |
acoles | notmyname: along with patch 284644 | 16:21 |
patchbot | acoles: https://review.openstack.org/#/c/284644/ - python-swiftclient (stable/liberty) - Fix the http request headers being overwritten in ... | 16:21 |
zaitcev | do the renames while daemons are stopped | 16:22 |
notmyname | jidar: yes, what zaitcev said | 16:22 |
jidar | oh | 16:22 |
notmyname | jidar: so the other parts of the path | 16:22 |
jidar | just take down one of my swift servers and move the directories, restart services? | 16:22 |
notmyname | jidar: the <suffix> is the last 3 characters of the hex representation of the hash | 16:22 |
*** klrmn has quit IRC | 16:23 | |
notmyname | jidar: yeah. "just" ;-) | 16:23 |
jidar | notmyname: hahaha | 16:23 |
notmyname | jidar: so for your hash 852369f73b2efe65167c43af382b0f1e see that the suffix is f1e | 16:23 |
notmyname | ok, so the remaining piece is the partition. that's the decimal representation of the ring partition the object hashes to | 16:24 |
zaitcev | jidar: most likely, if objects are recovered properly, as soon as one node is up it'll replicate to the rest, meaning 2x the space everywhere else while quarantine is still full of them; make sure there's enough space | 16:24 |
notmyname | jidar: so if your part power is 12, then the partition is "int(hash, 16) >> (32-12)" | 16:24 |
jidar | I suppose this is why mirantis made this post : https://www.mirantis.com/blog/openstack-swift-hash_path_suffix-can-go-wrong/ | 16:25 |
notmyname | wait, that snippet was wrong. trying to figure out what it should be | 16:26 |
zaitcev | I'd try to get the object name, including the account after auth and container, then run swift-get-nodes instead of calculating | 16:27 |
notmyname | zaitcev: yeah, that's probably best | 16:28 |
jidar | so I've done a few swift-object-info commands | 16:28 |
notmyname | jidar: what is your part power? | 16:28 |
jidar | I don't see it defined in the config | 16:28 |
*** dmorita has quit IRC | 16:29 | |
notmyname | jidar: swift-ring-builder will tell you the number of partitions. what's that? | 16:29 |
jidar | 1024 partitions | 16:30 |
jidar | on all hosts | 16:30 |
notmyname | ok, so that's a part power of 10 | 16:30 |
notmyname | 2**10 == 1024 | 16:30 |
*** dmorita has joined #openstack-swift | 16:30 | |
notmyname | so the actual math is "int(x, 16) >> (128-10)". but zaitcev is right that using swift-object-info would probably be safer | 16:31 |
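Putting notmyname's formula together as a quick sketch (part power 10, matching the 1024-partition ring above; the printed path matches the example jidar pasted earlier):

    def object_location(hash_hex, part_power=10):
        partition = int(hash_hex, 16) >> (128 - part_power)
        suffix = hash_hex[-3:]  # last 3 hex chars of the hash
        return partition, suffix

    h = '852369f73b2efe65167c43af382b0f1e'
    part, suffix = object_location(h)
    print('/srv/node/d1/objects/%d/%s/%s/' % (part, suffix, h))
    # -> /srv/node/d1/objects/532/f1e/852369f73b2efe65167c43af382b0f1e/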
*** Guest59187 has quit IRC | 16:31 | |
jidar | https://gist.github.com/0b5bbe6a16ee58b7cc9b | 16:31 |
jidar | an example of swift-object-info output | 16:32 |
notmyname | sorry, I meant to say swift-get-nodes | 16:33 |
notmyname | so you could take one object you know of. suppose it's "AUTH_foo/bar_container/my_awesome_image.quuz". and you'd run `swift-get-nodes /etc/swift/object.ring.gz AUTH_foo/bar_container/my_awesome_image.quuz` | 16:33 |
notmyname | jidar: so I get somethign like https://gist.github.com/notmyname/529fe493d21b3360bd14 on my dev box | 16:33 |
zaitcev | and hopefully it matches lines 31-33 in the gist, assuming the hash prefix is correct now | 16:34 |
zaitcev | and suffix | 16:34 |
notmyname | which gives you a path name on the right servers to use (note this is based on the current values in swift.conf, so make sure that is right) | 16:34 |
notmyname | yeah | 16:34 |
zaitcev | Sorry, I keep talking across you... | 16:34 |
notmyname | and then move one into the right place and run replication (eg `swift-init object replication once`) and you should get it into the other places | 16:34 |
notmyname | zaitcev: no, you're saying all the right things :-) | 16:35 |
jidar | let me try with the AUTH_ part filled out and a correct glance image ID | 16:35 |
acoles | notmyname: ohh! so they just needed a simple 'recheck'! thanks | 16:37 |
jidar | https://gist.github.com/b12394f7071c8d7dfcd4 | 16:38 |
jidar | what I'm having trouble figuring out is the portion listed in quarantine, with the .data bits on it | 16:38 |
notmyname | acoles: I hope :-) | 16:39 |
acoles | notmyname: they went to zuul this time, so far so good | 16:39 |
notmyname | jidar: keep those the same. that's the timestamp of when the object was created | 16:40 |
jidar | from this: /srv/node/d1/quarantined/objects/04b27334c0de225af769837593324876/1452023491.12492.data, removing quarantined, and replacing the last few bits, what .... hurm | 16:40 |
notmyname | jidar: in swift's on-disk format, an object is actually a directory. so just keep the contents of that directory the same | 16:40 |
jidar | so am I just renaming the object-id there? | 16:41 |
jidar | the 04b27334c0de225af769837593324876 bit? | 16:41 |
jidar | sorry to be a bit daft at this, I haven't really gotten to work with this very much prior to having a issue :( | 16:42 |
*** lyrrad has joined #openstack-swift | 16:43 | |
notmyname | jidar: right. that's the hash. you are keeping the 1452023491.12492.data file and moving it to a different directory. that's basically it. (the trick is putting it into the *right* directory) | 16:43 |
notmyname | jidar: also, you need to go to the root of your data drives and run `find . -name \*.pkl` and delete anything you find | 16:45 |
notmyname | jidar: note that all of this work is going to (1) result in a *lot* (like 100%) of data movement in your cluster (2) totally should only be a last resort (3) pretty much be an unsupported use case (4) definitely involve downtime in your cluster | 16:46 |
notmyname | jidar: basically, if it's at all possible to reupload the data, that will be easier and safer | 16:46 |
*** cdelatte has joined #openstack-swift | 16:48 | |
*** delatte has quit IRC | 16:51 | |
*** dmorita has quit IRC | 16:52 | |
*** delattec has joined #openstack-swift | 16:53 | |
jidar | heh | 16:54 |
jidar | that's the conclusion I've come to | 16:54 |
jidar | I don't mind moving the data, it's only 20 gigs or so | 16:54 |
jidar | even if that's 2 or 3x over, it's all on 10gigE | 16:55 |
notmyname | lesson 0: don't change the hash path suffix or prefix. take those notes in the sample config file seriously ;-) | 16:55 |
jidar | notmyname: full disclosure, running the tripleo overcloud deploy command from the wrong directory results in a new hash_suffix being created | 16:55 |
zaitcev | I suspect someone has run TripleO or Director twice | 16:55 |
jidar | hahahahahaha | 16:55 |
notmyname | yikes | 16:55 |
zaitcev | or that | 16:56 |
notmyname | actually, that's what I was about to ask | 16:56 |
jidar | zaitcev: hit the nail on the head | 16:56 |
*** rcernin has quit IRC | 16:56 | |
*** cdelatte has quit IRC | 16:56 | |
notmyname | what is it we can do on the swift side to prevent this from happening? | 16:56 |
jidar | you can run it twice, but it has to be from the same directory | 16:56 |
*** chlong has quit IRC | 16:56 | |
notmyname | what is it about those things that causes this to happen? | 16:56 |
*** dmorita has joined #openstack-swift | 16:56 | |
jidar | so the undercloud doesn't know what the overcloud's hash_suffix is at run time, so it's going to create a new one | 16:57 |
jidar | is hash_suffix there for security reasons? | 16:57 |
jidar | I'd seen some people advocating not using it | 16:57 |
notmyname | you should use both hash_suffix and hash_prefix. they are mixed into the hashing so that an end user can't target a particular partition and attack the cluster (or, in general, know the hash of an object) | 16:58 |
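Roughly how those values enter the placement hash (a paraphrase of swift.common.utils.hash_path; exact byte handling may differ between versions):

    from hashlib import md5

    def hash_path(account, container, obj, prefix=b'', suffix=b'changeme'):
        # changing prefix or suffix changes every object's hash, which is
        # why a surprise rewrite of swift.conf quarantines everything
        name = ('/' + '/'.join([account, container, obj])).encode()
        return md5(prefix + name + suffix).hexdigest()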
zaitcev | I saw clusters with hash_suffix=%CHANGEME% (straight from RPMs) | 16:58 |
notmyname | :-( | 16:59 |
*** haomaiwang has quit IRC | 17:01 | |
*** haomaiwang has joined #openstack-swift | 17:01 | |
jidar | man this sucks to go back to my customers and tell them to re-upload :/ | 17:01 |
zaitcev | You can probably re-upload for them... You know their credentials, right? You have the object's body in the quarantine directory. It's the same actual data, the etag is going to be the same. | 17:02 |
jidar | oh I see | 17:02 |
zaitcev | slip in curl in the night | 17:02 |
zaitcev | nobody will ever know | 17:03 |
jidar | I'm still not 100% sure I understand how to do that though | 17:03 |
jidar | let me poke around with it a bit | 17:03 |
*** nadeem has joined #openstack-swift | 17:03 | |
jidar | like how do I find all of the right .data objects to throw into a directory? | 17:05 |
notmyname | it's all of the ones in the quarantine directory, right? | 17:06 |
jidar | right, but they belong to different objects, no? | 17:08 |
jidar | [root@overcloud-controller-0 objects]# find . -iname \*.data -exec swift-object-info '{}' \; | grep ETag | 17:08 |
jidar | ETag: 50bdc35edb03a38d91b1b071afb20a3c (valid) | 17:08 |
jidar | for instance, do I find everything with the same ETag and throw them together? | 17:09 |
notmyname | no, I wouldn't do that | 17:10 |
notmyname | etag is the md5 of the contents. I can upload the same object to different names and get the same etag for both | 17:11 |
jidar | it looks like I have about 20 ETags | 17:11 |
jidar | but about 100 objects | 17:11 |
jidar | er, .data files | 17:11 |
notmyname | you shouldn't worry about the etag at all for anything you're doing here (I don't think) | 17:11 |
notmyname | you only care about the object names and the .data files | 17:11 |
notmyname | do you have anything other than the .data files? do you have any .meta files or .ts files? | 17:12 |
jidar | no, not in quarantine | 17:12 |
notmyname | ok, that's good | 17:12 |
notmyname | and just to check, do you have more than one storage policy? | 17:12 |
jidar | let me double check | 17:12 |
jidar | don't think so | 17:13 |
notmyname | good | 17:13 |
notmyname | and it's a replicated policy? | 17:13 |
jidar | yea | 17:13 |
notmyname | good | 17:13 |
jidar | so these two files belong to the same etag, ./a426aa3c2de4acf1809691ef515dbb7f/1450768165.06062.data ./98ed8dcf30af8dadb0deadf46e08a203/1450773487.96097.data, but they have no data or object id that looks similar to them | 17:16 |
jidar | nothing based on the file name is what I mean | 17:16 |
jidar | how do I know to upload them together? | 17:16 |
notmyname | what do you mean? | 17:16 |
jidar | is this not multi-part data? | 17:16 |
notmyname | if you run swift-object-info on them, you should see that they have different metadata | 17:16 |
notmyname | doesn't matter if it's part of a large object or not. you don't care (just like you shouldn't care what the etag is) | 17:17 |
jidar | https://gist.github.com/cdcc5f15cc2d157f50da | 17:18 |
notmyname | oh, when you do the copy to the other directory, be sure you're using the cp option that preserves extended attributes | 17:18 |
jidar | so these files even though they have the same ETag, are not part of the same object? | 17:18 |
notmyname | doesn't matter. stop worrying about the etag | 17:18 |
jidar | will do! | 17:18 |
*** klrmn has joined #openstack-swift | 17:18 | |
notmyname | yes. you could worry about the etag and probably get some network efficiency. but that is an optimization that will only add complexity | 17:19 |
jidar | it looks like all objects are limited to 200mb? | 17:19 |
notmyname | treat the etag like any other piece of metadata: an opaque blob of bytes. you do not care about any of it | 17:19 |
notmyname | you only want to make sure that the object is in the right on-disk place based on its name | 17:19 |
notmyname | so in this case, you can take the account, container, and object name reported by swift-object-info, then copy it to the right place (preserving xattrs). that's it | 17:20 |
notmyname | so eg with that one you just pasted... | 17:20 |
notmyname | if you're on the .11 machine, then copy the .data file to the directory <mount point location>/d1/objects/158/63e/27b89b0e67aff4385678b1c4bc19b63e | 17:21 |
notmyname | after you make sure that directory exists | 17:21 |
notmyname | (that's for the .06062.data file) | 17:22 |
jidar | so the objects/158 exists, but not objects/158/63e | 17:23 |
jidar | and objects/158/hashs.pkl is in there | 17:23 |
notmyname | do all the file moving first, then delete the hashes.pkl. then start replication | 17:23 |
notmyname | hmm... | 17:24 |
notmyname | you said you have 3 servers, right? | 17:24 |
notmyname | only one drive on each? | 17:24 |
jidar | yea, so just for clarity sake I'm thinking, unless you want to correct me, that it might just be easier to try and upload these behind the scenes | 17:24 |
jidar | yes | 17:24 |
jidar | as new images | 17:24 |
*** jordanP has quit IRC | 17:25 | |
notmyname | do you have the same set of quarantined objects on each? | 17:25 |
jidar | let me double check | 17:25 |
notmyname | I think you'll be able to recover this without having to reupload | 17:25 |
jidar | yea, all 15gigs | 17:25 |
*** twm2016 has joined #openstack-swift | 17:25 | |
*** rledisez has quit IRC | 17:25 | |
jidar | on all 3 servers, same directories and everything | 17:25 |
notmyname | cool. so you should be able to do this on just one machine and then it will move it out. that's more inefficient from a network sense, but simpler from your recovery script perspective | 17:26 |
notmyname | ie you can do it on one machine, run replication, and things should be good | 17:26 |
jidar | yea, and because the data set is small, it wouldn't take long | 17:27 |
notmyname | right | 17:27 |
notmyname | so for every .data file in quarantine, run swift-object-info, find the correct place it should be, and move it back to that place. that's it | 17:27 |
notmyname | (assuming you've already corrected your hash suffix/prefix) | 17:27 |
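A sketch of that recovery loop (untested; assumes the single-policy, part-power-10 cluster above, with daemons stopped per zaitcev's earlier advice — Swift keeps object metadata in xattrs, so those must come along):

    import os
    import shutil

    def restore(data_file, objects_root='/srv/node/d1/objects', part_power=10):
        # quarantine layout: .../quarantined/objects/<hash>/<ts>.data
        hash_hex = os.path.basename(os.path.dirname(data_file))
        partition = int(hash_hex, 16) >> (128 - part_power)
        dest_dir = os.path.join(objects_root, str(partition),
                                hash_hex[-3:], hash_hex)
        os.makedirs(dest_dir, exist_ok=True)
        dest = os.path.join(dest_dir, os.path.basename(data_file))
        shutil.copy2(data_file, dest)
        for attr in os.listxattr(data_file):  # carry over swift's metadata
            os.setxattr(dest, attr, os.getxattr(data_file, attr))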
jidar | while the swift services are down, and while they are down and I've done run the replication once | 17:28 |
jidar | and after I'm done, run the replication once* | 17:28 |
*** jistr has quit IRC | 17:28 | |
notmyname | yeah, it's probably best to do it with swift replication turned off. if you can afford the downtime, you could turn everything off | 17:28 |
jidar | let me try this once and see what happens | 17:28 |
notmyname | then after moving it, start up the main services, and run replication once. check that it's ok, then start up everything normally | 17:29 |
notmyname | try it once == try with one file? | 17:29 |
jidar | I'm thinking yes | 17:29 |
openstackgerrit | Trevor McCasland proposed openstack/swift: Remove Content-Length from 204 No Content Response https://review.openstack.org/291461 | 17:30 |
*** nadeem has quit IRC | 17:34 | |
*** nadeem has joined #openstack-swift | 17:35 | |
*** panda has quit IRC | 17:40 | |
*** panda has joined #openstack-swift | 17:40 | |
*** dmorita_ has joined #openstack-swift | 17:43 | |
*** dmorita has quit IRC | 17:45 | |
openstackgerrit | Merged openstack/swift: go: add ability to dump goroutines stacktrace with SIGABRT https://review.openstack.org/292229 | 17:47 |
*** chsc has joined #openstack-swift | 17:48 | |
*** alejandrito has joined #openstack-swift | 17:51 | |
clayg | heyoh! | 17:55 |
*** haomaiwang has quit IRC | 18:01 | |
*** haomaiwang has joined #openstack-swift | 18:01 | |
*** nadeem has quit IRC | 18:05 | |
*** mmcardle1 has quit IRC | 18:05 | |
*** mvk has quit IRC | 18:09 | |
*** fthiagogv has quit IRC | 18:12 | |
*** fthiagogv has joined #openstack-swift | 18:12 | |
*** fthiagogv has quit IRC | 18:13 | |
*** fthiagogv has joined #openstack-swift | 18:13 | |
*** ejat has quit IRC | 18:13 | |
*** ejat has joined #openstack-swift | 18:14 | |
*** ejat has quit IRC | 18:14 | |
*** ejat has joined #openstack-swift | 18:14 | |
openstackgerrit | Merged openstack/swift: Imported Translations from Zanata https://review.openstack.org/292217 | 18:15 |
*** nadeem has joined #openstack-swift | 18:15 | |
*** permalac has quit IRC | 18:19 | |
openstackgerrit | Trevor McCasland proposed openstack/swift: Remove Content-Length from 204 No Content Response https://review.openstack.org/291461 | 18:21 |
*** ChubYann has joined #openstack-swift | 18:34 | |
*** twm2016 has quit IRC | 18:34 | |
*** gyee has quit IRC | 18:46 | |
*** haomaiwang has quit IRC | 19:01 | |
*** haomaiwang has joined #openstack-swift | 19:01 | |
*** zul has quit IRC | 19:04 | |
*** mvk has joined #openstack-swift | 19:12 | |
*** esker has joined #openstack-swift | 19:20 | |
acoles | notmyname: i'm holding my breath... | 19:33 |
notmyname | acoles: have I ever mentioned that I don't like gerrit and I think the new gerrit is worse than the old? ;-) | 19:33 |
acoles | notmyname: can you imagine how long i stared at gerrit looking for some clue before summoning the courage to raise my hand in -infra ? ;) | 19:34 |
notmyname | I can now! | 19:34 |
acoles | +1 for -infra though, immediate helpful response | 19:35 |
acoles | notmyname: ...and they're in the gate queue :) thanks for clicking the right places the right number of times! | 19:36 |
notmyname | when all else fails, I'm happy to do it all over again in a different order | 19:36 |
acoles | good night | 19:37 |
notmyname | acoles: thanks for tracking it down. good night | 19:37 |
*** acoles is now known as acoles_ | 19:38 | |
briancline | so I know nobody likes to talk about tempest... but why wouldn't it have failed while testing patch #291461? | 19:46 |
patchbot | briancline: https://review.openstack.org/#/c/291461/ - swift - Remove Content-Length from 204 No Content Response | 19:46 |
briancline | seeing as it's explicitly checking for it at least here: https://github.com/openstack/tempest/blob/master/tempest/api/object_storage/test_account_services.py#L68 | 19:47 |
*** insanidade has joined #openstack-swift | 19:56 | |
insanidade | hi all. quick question: is it possible to server | 19:56 |
insanidade | ops. sorry. | 19:56 |
insanidade | hi all. quick question: is it possible to copy data from one swift cluster to another swift cluster without downloading and uploading all data? | 19:57 |
*** macgyver_ has joined #openstack-swift | 20:00 | |
*** haomaiwang has quit IRC | 20:01 | |
*** haomaiwang has joined #openstack-swift | 20:01 | |
insanidade | anyone ? | 20:01 |
MooingLemur | if both clusters have it enabled and available, you can do per container sync. It's not necessarily fast | 20:02 |
insanidade | MooingLemur: but that would be a better solution than downloading and then uploading every file, right ? | 20:03 |
*** ig0r_ has joined #openstack-swift | 20:03 | |
MooingLemur | it's the same amount of data transfer happening either way | 20:04 |
MooingLemur | I have some 10Gb-connected servers that I could use to download/reupload, so that'd probably be how I'd do it. | 20:04 |
insanidade | MooingLemur: it takes me a day and a half just to download all data. | 20:05 |
MooingLemur | container-sync might take longer, depends on how many objects and how fast the pipes are | 20:05 |
insanidade | MooingLemur: my understanding from your first sentence is that I could make both clusters sync if that feature was enabled in both sides. am I wrong ? | 20:06 |
MooingLemur | container-sync will (somewhat lazily) migrate all objects and subsequent changes from one container to another (on a different cluster, or the same cluster) | 20:06 |
*** ig0r__ has quit IRC | 20:07 | |
MooingLemur | and by migrate, I mean it'll make a copy of all uploads and propagate deletes | 20:08 |
insanidade | MooingLemur: hmmm. I've managed to sync containers in the same cluster. Would it be a much different task to sync containers in different clusters ? | 20:08 |
MooingLemur | do you control both clusters? | 20:10 |
insanidade | yes | 20:11 |
insanidade | I mean: I have an account in both clusters but I do not configure them. | 20:11 |
insanidade | so I don't control. I'm a user. | 20:11 |
*** MVenesio has quit IRC | 20:12 | |
MooingLemur | so, the target cluster might have to be configured by the administrator to allow container-sync from the source cluster. | 20:14 |
MooingLemur | http://docs.openstack.org/developer/swift/overview_container_sync.html explains the feature both in terms of cluster configuration and as a user | 20:14 |
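Per that doc, the user-side setup looks roughly like this once operators have allowed both clusters in container-sync-realms.conf (realm, cluster, account names, and the token are placeholders):

    import requests

    src = 'https://src-proxy.example.com/v1/AUTH_src/images'
    requests.post(src, headers={
        'X-Auth-Token': 'AUTH_tk_placeholder',
        # destination: //<realm>/<cluster>/<account>/<container>
        'X-Container-Sync-To': '//realm/dest/AUTH_dst/images',
        # must match the key set on the destination container
        'X-Container-Sync-Key': 'shared-secret',
    })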
*** StraubTW has quit IRC | 20:17 | |
*** fthiagogv has quit IRC | 20:18 | |
jidar | I'm having trouble understanding something: I don't see any files larger than 200004 in my quarantine directory, but I see several 1-gig or so files in my regular objects directory. is it possible that objects are split up as part of quarantine? I could just be wrong here but it's confusing me | 20:32 |
jidar | notmyname: zaitcev, I didn't get to thank you guys earlier for all of the help, thanks :) | 20:33 |
notmyname | jidar: these are glance images, right? doesn't glance split the data into smaller chunks like that? | 20:38 |
*** silor has quit IRC | 20:39 | |
jidar | only in quarantine? | 20:39 |
jidar | that's what's throwing me for a loop | 20:39 |
*** asettle has joined #openstack-swift | 20:46 | |
*** BigWillie has quit IRC | 20:46 | |
*** bapalm has quit IRC | 20:55 | |
*** bapalm has joined #openstack-swift | 20:58 | |
*** haomaiwang has quit IRC | 21:01 | |
*** haomaiwang has joined #openstack-swift | 21:01 | |
jidar | ./glance/glance-cache.conf:117:#swift_store_large_object_chunk_size=200 | 21:09 |
jidar | appears to be commented out, but I wonder if that's the default somehow | 21:09 |
*** chlong has joined #openstack-swift | 21:14 | |
*** dmorita_ has quit IRC | 21:24 | |
*** dmorita has joined #openstack-swift | 21:24 | |
torgomatic | quarantining doesn't modify the objects | 21:25 |
mattoliverau | morning all. yesterday was a holiday here in Oz. Still no baby. Feels like the kid will be middle-aged before she's born :P | 21:28 |
notmyname | :-) | 21:28 |
timburke_ | mattoliverau: my wife and i are convinced our daughter was born a month old :) | 21:29 |
mattoliverau | timburke_: lol, I'm beginning to understand that feeling :P | 21:30 |
*** mathiasb has quit IRC | 21:31 | |
*** timburke_ is now known as timburke | 21:31 | |
*** NM has quit IRC | 21:34 | |
jidar | torgomatic: damn | 21:35 |
*** mathiasb has joined #openstack-swift | 21:37 | |
*** panda has quit IRC | 21:40 | |
*** panda has joined #openstack-swift | 21:41 | |
*** dmorita has quit IRC | 21:48 | |
clayg | jrichli: looks like you and acoles_ managed to get patch 158401 merged! | 21:48 |
patchbot | clayg: https://review.openstack.org/#/c/158401/ - swift (feature/crypto) - Enable middleware to set metadata on object POST (MERGED) | 21:48 |
clayg | jrichli: I had started to review an earlier patch set on Thursday but lost track of my comments - i was mainly still loading it into my head | 21:50 |
*** dmorita has joined #openstack-swift | 21:54 | |
*** nadeem has quit IRC | 21:55 | |
clayg | oh, nm - i found it - submitted (patch set 10) | 21:55 |
clayg | jrichli: so what's the next one? probably patch 291458 ??? | 21:55 |
patchbot | clayg: https://review.openstack.org/#/c/291458/ - swift (feature/crypto) - Changes crypto to use transient-sysmeta for crypto... | 21:55 |
notmyname | clayg: yes, that one is next | 21:56 |
notmyname | clayg: I just forwarded you an email about it | 21:56 |
clayg | notmyname: just saw that - thanks! | 21:57 |
*** haomaiwang has quit IRC | 22:01 | |
*** haomaiwang has joined #openstack-swift | 22:01 | |
*** vinsh has quit IRC | 22:02 | |
*** vinsh has joined #openstack-swift | 22:02 | |
*** garthb_ has joined #openstack-swift | 22:03 | |
*** vinsh_ has joined #openstack-swift | 22:03 | |
*** vinsh_ has joined #openstack-swift | 22:04 | |
*** garthb has quit IRC | 22:06 | |
*** vinsh has quit IRC | 22:07 | |
*** vinsh has joined #openstack-swift | 22:09 | |
*** vinsh_ has quit IRC | 22:12 | |
*** MVenesio has joined #openstack-swift | 22:13 | |
timur | I'm observing a peculiar behavior with Swift that I'm trying to figure out whether it's intended or not. When submitting a HEAD or a GET request to retrieve the account metadata, container list, container metadata, or object metadata, the HTTP connection to the proxy server remains open. However, after submitting a GET request for an object, the connection is closed and the "Connection: close" | 22:15 |
timur | header is set. Is this intended or is it a bug? I couldn't find any documentation about this behavior | 22:15 |
*** timur has quit IRC | 22:18 | |
*** MVenesio has quit IRC | 22:18 | |
*** timur has joined #openstack-swift | 22:19 | |
timur | I'm observing a peculiar behavior with Swift that I'm trying to figure out whether it's intended or not. When submitting a HEAD or a GET request to retrieve the account metadata, container list, container metadata, or object metadata, the HTTP connection to the proxy server remains open. However, after submitting a GET request for an object, the connection is closed and the header is set. Is this | 22:19 |
timur | intended or is it a bug? I couldn't find any documentation about this behavior | 22:19 |
* notmyname gives timur a look from across the room | 22:20 | |
notmyname | (for double-posting) | 22:20 |
timur | right, configuring irssi with sasl on ec2 is to blame for that. sorry :( | 22:20 |
notmyname | heh, no worries | 22:20 |
notmyname | timur: for your question... | 22:20 |
notmyname | timur: yes, being undocumented is either unintended or a bug | 22:21 |
notmyname | that's what you were asking about, right? ;-) | 22:21 |
timur | notmyname: it's not clear to me why the client's proxy server connection would need to be closed after fulfilling a GET request | 22:21 |
timur | I'm happy to dig in to fix it, unless there is a reason it's done this way (which I guess I may or may not find out during the digging) | 22:22 |
notmyname | yeah, swift should support multiple requests pipelined on a single connection | 22:23 |
notmyname | I'd guess that the object connection close may have snuck in at some point. but no, I don't know the reason it's one way or another | 22:23 |
timur | notmyname: thanks! I'll try to figure out why that's happening and submit a patch! | 22:24 |
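A quick probe of the behavior timur reports (sketch; URL and token are placeholders) is to compare the Connection header across request types against the same proxy:

    import requests

    s = requests.Session()
    base = 'https://proxy.example.com/v1/AUTH_test'
    hdrs = {'X-Auth-Token': 'AUTH_tk_placeholder'}

    for method, path in [('HEAD', ''), ('GET', '/c'), ('GET', '/c/o')]:
        r = s.request(method, base + path, headers=hdrs)
        # account/container requests reportedly leave the connection open,
        # while the object GET comes back with Connection: close
        print(method, path or '/', '->', r.headers.get('Connection'))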
*** gyee has joined #openstack-swift | 22:27 | |
notmyname | mattoliverau: will you have a chance to handle the merge conflict on concurrent gets? | 22:37 |
*** alejandrito has quit IRC | 22:42 | |
*** mvk_ has joined #openstack-swift | 22:48 | |
*** mvk has quit IRC | 22:52 | |
*** trifon has quit IRC | 22:55 | |
*** haomaiwang has quit IRC | 23:01 | |
*** haomaiwang has joined #openstack-swift | 23:01 | |
briancline | anyone mind taking a quick peek at patch #292206? | 23:02 |
patchbot | briancline: https://review.openstack.org/#/c/292206/ - swift - Don't report recon mount/usage status on files | 23:02 |
*** _JZ_ has quit IRC | 23:05 | |
*** km has joined #openstack-swift | 23:05 | |
*** ametts has quit IRC | 23:20 | |
*** chsc has quit IRC | 23:27 | |
*** arch-nemesis has quit IRC | 23:32 | |
*** kei_yama has joined #openstack-swift | 23:32 | |
notmyname | zaitcev: on patch 248867 you left a +1. it's got another +2 from cschwede on it now. I'm curious about your +1 instead of +2. do you want to leave it as-is? your comment said you're ok but the patch violates the RFC. what do you want to do? | 23:32 |
patchbot | notmyname: https://review.openstack.org/#/c/248867/ - swift - Stop staticweb revealing container existence to un... | 23:32 |
*** gyee has quit IRC | 23:39 | |
*** macgyver_ has left #openstack-swift | 23:43 | |
mattoliverau | notmyname: I will get a new version up today :) | 23:45 |
notmyname | mattoliverau: thanks | 23:45 |
* mattoliverau has been in a meeting but back now | 23:45 | |
*** mkrcmari__ has joined #openstack-swift | 23:48 | |
notmyname | ptl candidacy submitted | 23:50 |
torgomatic | I wonder what would happen if you didn't run for PTL | 23:50 |
timburke | torgomatic: i'd guess there'd be a write-in campaign and he'd still get elected | 23:51 |
torgomatic | whether he likes it or not, eh? | 23:51 |
notmyname | actually, the TC would appoint someone | 23:51 |
*** mvk_ has quit IRC | 23:52 | |
timburke | notmyname: that's not as much fun. although you may *still* find yourself stuck with the position | 23:52 |
*** ho_ has joined #openstack-swift | 23:58 |