mattoliverau | notmyname: a new pull request, added some colours so large graphs should be easier to read. | 00:00 |
*** dmorita has joined #openstack-swift | 00:09 | |
*** shri has quit IRC | 00:27 | |
*** tsg has joined #openstack-swift | 00:40 | |
*** kevinc_ has quit IRC | 00:44 | |
*** tdasilva has quit IRC | 01:00 | |
*** wer has joined #openstack-swift | 01:14 | |
*** Tyger has quit IRC | 01:25 | |
*** tsg has quit IRC | 01:28 | |
*** kota_ has joined #openstack-swift | 01:43 | |
*** nosnos has joined #openstack-swift | 01:44 | |
*** zhiyan_ is now known as zhiyan | 01:47 | |
*** zhiyan is now known as zhiyan_ | 01:56 | |
*** zhiyan_ is now known as zhiyan | 01:56 | |
*** tsg has joined #openstack-swift | 02:01 | |
*** Edward-Zhang has joined #openstack-swift | 02:02 | |
*** haomaiw__ has quit IRC | 02:10 | |
*** haomaiwang has joined #openstack-swift | 02:11 | |
openstackgerrit | A change was merged to openstack/swift: Merge tag '2.0.0' https://review.openstack.org/105222 | 02:22 |
*** haomai___ has joined #openstack-swift | 02:26 | |
*** haomaiwang has quit IRC | 02:29 | |
*** blazesurfer has joined #openstack-swift | 02:29 | |
blazesurfer | hi all | 02:30 |
blazesurfer | question around a full disk. i have been trying for a while to balance out or remove a disk from my swift cluster. is there a way to ensure all data has evacuated the disk? | 02:31 |
blazesurfer | i've gradually dropped the weight of the disk, though it doesn't seem to remove data off the disk. anyone able to clear up my understanding? | 02:31 |
zaitcev | Just remove it from the ring. | 02:33 |
zaitcev | Rings. | 02:33 |
blazesurfer | only issue with that is i want to make sure not to lose any data. this is one of 2 drives that were originally in a single-replica single-node cluster. i have expanded it now to multi-node and have got it replicating, though i'm not 100% sure how to confirm i have two replicas of all the data in the cluster. | 02:35 |
blazesurfer | does that make sense? | 02:35 |
blazesurfer | thank you for your response +zaitcev | 02:35 |
zaitcev | Are you running with just 2? Most people build rings with swift-ring-builder 18 3 1 or something. Not 2. | 02:37 |
zaitcev | Oh, wait. It's even worse - single replica. | 02:37 |
zaitcev | I don't know what to do in such a case. Normally one just bumps one drive off and the replicators restore redundancy. If you only have 2, there's a probability that something, somewhere, got 2 replicas assigned to this drive. | 02:39 |
torgomatic | If the ring only had two drives in it, there will be one replica per disk. You can just remove it from the ring, then cross your fingers and hope the one remaining disk doesn't die. | 02:49 |
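For reference, removing a drive outright looks roughly like the following; the builder filename and device id are placeholders, not taken from this conversation:

```
swift-ring-builder object.builder remove d12   # drop the device from the ring
swift-ring-builder object.builder rebalance    # reassign its partitions
# repeat for account.builder and container.builder, then push the
# updated *.ring.gz files to every node and let replication run
```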
blazesurfer | ok | 02:52 |
blazesurfer | i started with one server, 4 disks, 1 replica (running on an enterprise san array). a typical "don't do it this way" design, but it's what i had to start with. | 02:53 |
blazesurfer | now i have 2 servers running, 4 drives per server, 2 zones (each server is a zone), and have replicas set to 2 | 02:54 |
blazesurfer | will be adding a 3rd zone/server/replica once i have it setup. | 02:54 |
blazesurfer | does that read right? | 02:55 |
torgomatic | Sounds reasonable so far. | 02:56 |
*** madhuri has joined #openstack-swift | 02:56 | |
blazesurfer | so from what i understand the disks can be added gradually and removed gradually, though i'm just not seeing it happen. | 02:57 |
blazesurfer | also, do you know of a way to confirm there is a replica in both zones? (note i own all the accounts on the cluster so can authenticate as any of them, if that helps) | 02:57 |
torgomatic | swift-get-nodes will tell you where the primary nodes are for any particular object. | 02:59 |
torgomatic | Or account or container | 02:59 |
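swift-get-nodes takes a ring file plus the account/container/object names; a sketch with placeholder names:

```
# ring path, account, container, and object names are placeholders
swift-get-nodes /etc/swift/object.ring.gz AUTH_myaccount mycontainer myobject
# prints the primary nodes (and handoffs) that should hold the object,
# including ready-made curl and ssh commands to check each copy
```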
torgomatic | If you are trying to remove an entire server, gradual removal won't work. Durability trumps even distribution. You'll just have to remove the disks and let replication go a while | 03:01 |
blazesurfer | ok, my issue is 2 disks are full and that is why i am trying to remove them gradually. the other disks have 800gb free on them, so i'm trying to balance the data | 03:06 |
blazesurfer | as i'm getting a lot of disk-full errors in the logs | 03:06 |
blazesurfer | not trying to remove the entire server, just the two full drives. | 03:06 |
torgomatic | So you reduce the weights, rebalance, and distribute the rings to both servers? | 03:07 |
blazesurfer | i have yes | 03:08 |
*** ho has joined #openstack-swift | 03:09 | |
blazesurfer | so: all 2tb drives. reduced the two drives that are full to 1500 weight, the rest are at 2000. rebalanced a few times, still 100% full | 03:09 |
torgomatic | I think that swift-ring-builder will tell you how many partitions are on each drive; do the 1500 weight drives have fewer? | 03:12 |
blazesurfer | http://paste.openstack.org/show/HpI3amYYG5QcE2jOKp0i/ | 03:13 |
blazesurfer | yes, it reports that fewer are meant to be there | 03:13 |
blazesurfer | that paste is of the swift-get-nodes command | 03:14 |
blazesurfer | node 10.42.123.13 drives sdb1 and sdc1 are the two that are full | 03:15 |
blazesurfer | will triple-check partitions again | 03:15 |
blazesurfer | ok sorry, yes they do report fewer partitions when i check the rings using the swift-ring-builder command, though they don't appear to be shrinking in size on disk. | 03:17 |
torgomatic | If fewer partitions are allocated to the 1500 weight drives, things are working as intended. You may just have some partitions with lots of big objects in them, and you got unlucky. | 03:17 |
torgomatic | Try dropping the weights to 1000 and see what happens, maybe? | 03:18 |
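The gradual-drain sequence under discussion would look roughly like this; device ids and weights are placeholders:

```
swift-ring-builder object.builder set_weight d3 1000
swift-ring-builder object.builder set_weight d4 1000
swift-ring-builder object.builder rebalance
swift-ring-builder object.builder     # lists partitions per device; the
                                      # drained drives should show fewer
# push object.ring.gz to all nodes and give the replicators time to move data
```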
blazesurfer | ok, so the balance in the ring: should that be at anything in particular to be healthy? it changes each time | 03:19 |
*** nosnos has quit IRC | 03:20 | |
zaitcev | Isn't it the case that nobody actually deletes objects that do not need to be there? | 03:21 |
torgomatic | The replicator will delete things. | 03:22 |
*** kota_ has quit IRC | 04:01 | |
*** Edward-Zhang has quit IRC | 04:05 | |
*** ho has quit IRC | 04:10 | |
*** nosnos has joined #openstack-swift | 04:13 | |
*** nosnos has quit IRC | 04:24 | |
*** Edward-Zhang has joined #openstack-swift | 04:36 | |
*** elambert has joined #openstack-swift | 04:45 | |
*** kopparam has joined #openstack-swift | 04:45 | |
*** nshaikh has joined #openstack-swift | 04:49 | |
*** zaitcev has quit IRC | 04:50 | |
*** kopparam has quit IRC | 04:56 | |
*** kopparam has joined #openstack-swift | 04:57 | |
*** ppai has joined #openstack-swift | 05:18 | |
*** chandankumar has joined #openstack-swift | 05:19 | |
*** Kbee has joined #openstack-swift | 05:52 | |
*** nacim has quit IRC | 06:01 | |
*** elambert has quit IRC | 06:11 | |
openstackgerrit | Matthew Oliver proposed a change to openstack/swift: Swift configuration parameter audit https://review.openstack.org/104760 | 06:12 |
openstackgerrit | Zhang Hua proposed a change to openstack/swift: Add distributed tracing capablities in logging. https://review.openstack.org/93677 | 06:32 |
mattoliverau | I'm off to the in-laws for dinner, so need to head off a little early, have a great night all. | 06:35 |
openstackgerrit | Zhang Hua proposed a change to openstack/swift: Add distributed tracing capablities in logging. https://review.openstack.org/93677 | 06:36 |
*** mkerrin has joined #openstack-swift | 06:49 | |
*** wer has quit IRC | 06:54 | |
*** kopparam has quit IRC | 07:02 | |
*** kopparam has joined #openstack-swift | 07:02 | |
*** kopparam has quit IRC | 07:07 | |
*** wer has joined #openstack-swift | 07:09 | |
*** ppai has quit IRC | 07:14 | |
*** ppai has joined #openstack-swift | 07:26 | |
charz | notmyname: I found a bug in swift3 (current repo: stackforge/swift3) and created a bug report at https://launchpad.net/swift3. I also have a patch for it. | 07:27 |
*** sungju has quit IRC | 07:28 | |
charz | notmyname: Should I post this bug to the swift project, or keep it in https://launchpad.net/swift3? | 07:28 |
*** madhuri has quit IRC | 07:33 | |
*** kopparam has joined #openstack-swift | 07:45 | |
*** ppai has quit IRC | 07:49 | |
openstackgerrit | Christian Schwede proposed a change to openstack/swift: Limit changed partitions when rebalancing https://review.openstack.org/105666 | 07:49 |
*** kopparam has quit IRC | 07:50 | |
*** nacim has joined #openstack-swift | 08:04 | |
*** kopparam has joined #openstack-swift | 08:11 | |
openstackgerrit | Christian Schwede proposed a change to openstack/swift: Limit changed partitions when rebalancing https://review.openstack.org/105666 | 08:20 |
*** mmcardle has joined #openstack-swift | 08:23 | |
*** andyandy has joined #openstack-swift | 08:39 | |
*** tusharsg has joined #openstack-swift | 09:00 | |
*** tsg has quit IRC | 09:02 | |
*** ppai has joined #openstack-swift | 09:04 | |
*** Edward-Zhang has quit IRC | 09:11 | |
*** mmcardle has quit IRC | 09:20 | |
*** mmcardle has joined #openstack-swift | 09:22 | |
*** jaz has joined #openstack-swift | 09:25 | |
*** jaz is now known as Guest24271 | 09:25 | |
Guest24271 | Hi, can anyone please answer? How is the storage URL generated? | 09:26 |
Guest24271 | and where in code? | 09:26 |
*** TaiSHi has quit IRC | 09:28 | |
*** nshaikh has quit IRC | 09:33 | |
*** aswadr has joined #openstack-swift | 09:43 | |
Guest24271 | Where is the setting for the IP of the proxy service? | 09:46 |
*** haomai___ has quit IRC | 09:47 | |
*** haomaiwa_ has joined #openstack-swift | 09:48 | |
*** Shivani has joined #openstack-swift | 09:50 | |
*** haomaiwa_ has quit IRC | 10:00 | |
*** haomaiwang has joined #openstack-swift | 10:01 | |
*** mmcardle has quit IRC | 10:07 | |
*** mmcardle has joined #openstack-swift | 10:08 | |
*** tusharsg has quit IRC | 10:11 | |
Shivani | Hi anyone there? | 10:14 |
*** kopparam has quit IRC | 10:14 | |
*** ppai has quit IRC | 10:28 | |
Shivani | Can someone tell me where to get the data and metadata for put, upload, etc.? | 10:31 |
Shivani | ? | 10:31 |
omame | Shivani: what do you mean exactly? what is that you're trying to do? | 10:34 |
*** kopparam has joined #openstack-swift | 10:46 | |
*** Kbee has quit IRC | 10:50 | |
*** kopparam has quit IRC | 10:50 | |
Shivani | I have two issues. I have done the installation using SAIO. Now I want to change the server from 127.0.0.1 to my local IP to serve get/put requests there. I am unable to do this. I am adding *memcache_servers = 10.0.9.18:8080* in /etc/swift/proxy-server.conf. Still the connection is not being established. What should I do? | 10:53 |
ctennis | Shivani: you need to change the bind_ip line in the proxy-server.conf file | 11:00 |
ctennis | memcache_servers points to where memcached is listening | 11:00 |
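The relevant part of /etc/swift/proxy-server.conf would look roughly like this; bind_ip is the address from the question, and memcached is assumed to be local on its default port:

```
[DEFAULT]
bind_ip = 10.0.9.18
bind_port = 8080

[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211
```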
*** kopparam has joined #openstack-swift | 11:04 | |
*** ppai has joined #openstack-swift | 11:14 | |
*** ujjain has quit IRC | 11:37 | |
*** ppai has quit IRC | 11:55 | |
*** mali_ has joined #openstack-swift | 12:02 | |
*** dmorita has quit IRC | 12:03 | |
*** mmcardle has quit IRC | 12:08 | |
*** mali_ has quit IRC | 12:11 | |
*** mmcardle has joined #openstack-swift | 12:11 | |
*** kopparam has quit IRC | 12:21 | |
*** kopparam has joined #openstack-swift | 12:22 | |
*** Edward-Zhang has joined #openstack-swift | 12:24 | |
*** kopparam has quit IRC | 12:26 | |
*** tdasilva has joined #openstack-swift | 12:27 | |
*** zul has quit IRC | 13:11 | |
*** mmcardle has quit IRC | 13:13 | |
*** mmcardle has joined #openstack-swift | 13:16 | |
*** otoolee has joined #openstack-swift | 13:18 | |
*** zul has joined #openstack-swift | 13:18 | |
*** mbeegala has joined #openstack-swift | 13:19 | |
*** foexle has joined #openstack-swift | 13:36 | |
*** anticw_ has joined #openstack-swift | 13:43 | |
*** Anticime1 has joined #openstack-swift | 13:43 | |
*** jokke__ has joined #openstack-swift | 13:44 | |
*** MooingLe1ur has joined #openstack-swift | 13:44 | |
*** swat30_ has joined #openstack-swift | 13:45 | |
*** ondergetekende_ has joined #openstack-swift | 13:45 | |
*** tanee2 has joined #openstack-swift | 13:46 | |
*** mlanner_ has joined #openstack-swift | 13:46 | |
*** mitz- has joined #openstack-swift | 13:47 | |
*** mordred_ has joined #openstack-swift | 13:47 | |
*** tsg has joined #openstack-swift | 13:47 | |
*** foexle has quit IRC | 13:48 | |
*** tdasilva has quit IRC | 13:48 | |
*** Edward-Zhang has quit IRC | 13:48 | |
*** swat30 has quit IRC | 13:48 | |
*** mitz has quit IRC | 13:48 | |
*** mordred has quit IRC | 13:48 | |
*** mlanner has quit IRC | 13:48 | |
*** JelleB has quit IRC | 13:48 | |
*** MooingLemur has quit IRC | 13:48 | |
*** ryao has quit IRC | 13:48 | |
*** mordred_ is now known as mordred | 13:48 | |
*** mlanner_ is now known as mlanner | 13:48 | |
*** Anticimex has quit IRC | 13:48 | |
*** redbo has quit IRC | 13:48 | |
*** glange has quit IRC | 13:48 | |
*** tanee has quit IRC | 13:48 | |
*** anticw has quit IRC | 13:48 | |
*** jokke_ has quit IRC | 13:48 | |
*** ondergetekende has quit IRC | 13:48 | |
*** swat30_ is now known as swat30 | 13:48 | |
*** tsg has joined #openstack-swift | 13:49 | |
*** foexle has joined #openstack-swift | 13:50 | |
*** tdasilva has joined #openstack-swift | 13:56 | |
*** redbo has joined #openstack-swift | 13:58 | |
*** ChanServ sets mode: +v redbo | 13:58 | |
*** jokke__ is now known as jokke_ | 13:58 | |
*** glange has joined #openstack-swift | 13:58 | |
*** ChanServ sets mode: +v glange | 13:58 | |
*** ryao has joined #openstack-swift | 13:58 | |
*** kopparam has joined #openstack-swift | 13:58 | |
*** zaitcev has joined #openstack-swift | 14:02 | |
*** ChanServ sets mode: +v zaitcev | 14:02 | |
*** mordred has quit IRC | 14:02 | |
*** mordred has joined #openstack-swift | 14:02 | |
*** mmcardle has quit IRC | 14:09 | |
*** JelleB has joined #openstack-swift | 14:09 | |
*** mwstorer has joined #openstack-swift | 14:10 | |
*** blazesurfer has quit IRC | 14:17 | |
*** blazesurfer has joined #openstack-swift | 14:17 | |
*** mmcardle has joined #openstack-swift | 14:21 | |
*** otoolee has quit IRC | 14:29 | |
*** tsg has quit IRC | 14:33 | |
*** kopparam has quit IRC | 14:40 | |
*** kopparam has joined #openstack-swift | 14:41 | |
*** kopparam has quit IRC | 14:45 | |
*** nacim has quit IRC | 14:46 | |
*** otoolee has joined #openstack-swift | 14:56 | |
*** MooingLe1ur is now known as MooingLemur | 15:00 | |
*** kevinc_ has joined #openstack-swift | 15:02 | |
*** zhiyan is now known as zhiyan_ | 15:10 | |
*** kevinc_ has quit IRC | 15:18 | |
*** kevinc_ has joined #openstack-swift | 15:21 | |
*** tsg has joined #openstack-swift | 15:25 | |
*** elambert has joined #openstack-swift | 15:44 | |
*** zz_wasmum is now known as wasmum | 15:49 | |
*** otoolee has quit IRC | 15:54 | |
*** haomaiw__ has joined #openstack-swift | 15:56 | |
*** haomaiwang has quit IRC | 15:59 | |
*** gyee has quit IRC | 16:08 | |
notmyname | good morning | 16:11 |
notmyname | charz: if it's a bug in swift3, then it needs to be tracked and patched there | 16:11 |
charz | notmyname: ok, got it. thanks. | 16:13 |
*** chandankumar has quit IRC | 16:18 | |
*** gyee has joined #openstack-swift | 16:22 | |
*** kevinc_ has quit IRC | 16:24 | |
notmyname | glange: I like reading your sports analysis | 16:25 |
*** kevinc_ has joined #openstack-swift | 16:26 | |
notmyname | (http://greglange.blogspot.com/2014/07/what-lebron-james-should-do.html) | 16:26 |
*** miqui has quit IRC | 16:27 | |
*** miqui has joined #openstack-swift | 16:28 | |
zaitcev | charz: and while you're at it, make Tomo cut a 2.0 release, for crissakes | 16:39 |
*** aswadr has quit IRC | 16:42 | |
charz | zaitcev: ok, I don't see any tag or branch for 2.0 release. I will ask Tomo. | 16:46 |
*** miqui_ has joined #openstack-swift | 16:54 | |
*** miqui has quit IRC | 16:55 | |
*** mbeegala has quit IRC | 16:56 | |
*** blazesurfer has quit IRC | 17:08 | |
*** blazesurfer has joined #openstack-swift | 17:08 | |
notmyname | reminder that the swift team meeting is in a little less than 2 hours | 17:12 |
*** mmcardle has quit IRC | 17:20 | |
*** andyandy has quit IRC | 17:30 | |
*** blazesurfer has quit IRC | 17:32 | |
*** elambert has quit IRC | 17:33 | |
*** dfg_ is now known as dfg | 17:34 | |
*** ChanServ sets mode: +v dfg | 17:34 | |
*** Midnightmyth has joined #openstack-swift | 17:36 | |
*** elambert has joined #openstack-swift | 17:50 | |
*** Guest24271 has quit IRC | 17:59 | |
*** Shivani has quit IRC | 18:00 | |
*** pberis has quit IRC | 18:03 | |
*** pberis has joined #openstack-swift | 18:05 | |
*** tusharsg has joined #openstack-swift | 18:09 | |
*** tsg has quit IRC | 18:12 | |
*** Tyger has joined #openstack-swift | 18:19 | |
*** wasmum is now known as zz_wasmum | 18:21 | |
*** gvernik has joined #openstack-swift | 18:24 | |
*** shri has joined #openstack-swift | 18:35 | |
*** zz_wasmum is now known as wasmum | 18:36 | |
*** pberis has quit IRC | 18:38 | |
glange | notmyname: thanks, that blog post is 100% true :) | 18:41 |
*** pberis has joined #openstack-swift | 18:42 | |
*** elmiko has joined #openstack-swift | 18:52 | |
elmiko | hi, i'm having some difficulty with swiftclient.client.Connection using the preauthtoken. do i need to specify more than preauthtoken when creating the Connection? | 18:53 |
torgomatic | elmiko: probably the storage URL too | 18:53 |
elmiko | torgomatic: thanks, i'll give that a try | 18:53 |
elmiko | torgomatic: one more thing, will preauthurl understand 'swift://container/object' or do i need the full http form? | 18:54 |
torgomatic | elmiko: the full HTTP URL, otherwise it doesn't know which host to talk to or what the account name is (or which protocol, or...) | 18:55 |
elmiko | torgomatic: cool, thanks again :) | 18:55 |
torgomatic | np :) | 18:56 |
elmiko | that did it | 18:57 |
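A minimal sketch of the combination that worked here; the URL, token, and names are placeholders:

```python
from swiftclient import client

# preauthurl is the full storage URL (scheme, host, port, /v1/<account>);
# preauthtoken is a token obtained out of band, so no auth call is made
conn = client.Connection(
    preauthurl='http://127.0.0.1:8080/v1/AUTH_test',
    preauthtoken='AUTH_tk0123456789abcdef',
)
headers, body = conn.get_object('mycontainer', 'myobject')
```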
*** pberis has quit IRC | 18:58 | |
notmyname | meeting time in #openstack-meeting in a couple of minutes | 18:58 |
*** tusharsg has quit IRC | 18:59 | |
elmiko | i've been using `$ swift stat -v` to get the StorageURL, is there a better way to do this? | 19:07 |
torgomatic | elmiko: not that I know of; you have to auth to get the storage URL, and that's as good a way to auth as any | 19:11 |
elmiko | ok | 19:12 |
elmiko | is the storageURL guaranteed to be ip:port/AUTH_projectid ? | 19:12 |
openstackgerrit | Samuel Merritt proposed a change to openstack/swift: added process pid to the end of storage node log lines https://review.openstack.org/105309 | 19:12 |
elmiko | torgomatic: i'm working on an issue where i want to use keystone delegated trust tokens to access swift objects, so the final user of the container/object might not have sufficient privilege to get the storageURL. i'm trying to make sure i have all the details of what i need to bundle with the trust token for the final consumer. | 19:14 |
torgomatic | elmiko: storage URL (includes protocol, hostname, and account) + token + container + object should do it | 19:15 |
elmiko | torgomatic: that's what it's looking like. i was just curious about discovering the storageURL. | 19:16 |
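With that bundle, the final consumer needs nothing but plain HTTP; a sketch with placeholder values:

```
curl -H 'X-Auth-Token: AUTH_tk0123456789abcdef' \
    http://127.0.0.1:8080/v1/AUTH_test/mycontainer/myobject
```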
elmiko | also, sorry for interrupting the meeting :) | 19:17 |
torgomatic | heh, IRC meetings are slow anyway :) | 19:18 |
elmiko | yea | 19:18 |
* notmyname makes a note to talk to torgomatic more in the meeting | 19:21 | |
elmiko | lol, sorry torgomatic | 19:22 |
*** pberis has joined #openstack-swift | 19:23 | |
Tyger | Whoever talks in here gets the most responsibility in the meeting? | 19:28 |
*** miqui_ is now known as miqui | 19:37 | |
*** tdasilva has quit IRC | 19:43 | |
*** tdasilva has joined #openstack-swift | 19:43 | |
*** tsg has joined #openstack-swift | 19:44 | |
*** kevinc_ has quit IRC | 19:45 | |
zaitcev | elmiko: You're supposed to receive the storage URL alongside the token. It should not be "discovered". | 19:47 |
cschwede | clayg: thanks for the review! am i missing something when adding new regions or zones? maybe i don’t need https://review.openstack.org/#/c/105666/ at all? | 19:56 |
mattoliverau | OK, I'm going back to bed, be back in a few hours :) | 19:58 |
*** gvernik has quit IRC | 19:59 | |
elmiko | zaitcev: as part of the endpoint catalog? | 20:00 |
zaitcev | elmiko: You can call it that, but normally it's like {"access": {"token": {"expires": "2011-11-30T14:52:29.768403", "id": "fbd7f5d0-8896-482a-babc-c2718b2e18e2", "tenant": {"id": "1", "name": "adminTenant"}}, "serviceCatalog": [{"endpoints": [{"....... "publicURL": "http://localhost:8774/v1.1/"}], | 20:03 |
zaitcev | Okay, that's an example from 3 years ago :-) | 20:04 |
zaitcev | And uses Nova | 20:04 |
elmiko | lol | 20:04 |
elmiko | zaitcev: yea, it looks a little different now but i get your meaning | 20:04 |
zaitcev | So you can look it up in the endpoint catalog, then interpolate as needed, but why bother | 20:05 |
*** pberis has quit IRC | 20:05 | |
elmiko | well yea, when i get the token i also get the endpoints available | 20:06 |
elmiko | i'm trying to work this from the angle that i will need to discover these things using the keystone client | 20:06 |
zaitcev | so the library does an equiv of | 20:07 |
zaitcev | curl -X POST http://127.0.0.1:5000/v2.0/tokens -d '{"auth": {"tenantName":"tsa17", "passwordCredentials":{"username":"spare17", "password":"secrete"}}}' -H 'Content-Type: application/json' | 20:07 |
elmiko | that makes sense | 20:07 |
zaitcev | The returned serviceCatalog entries are already interpolated | 20:08 |
zaitcev | Well, I didn't do it by hand in years. | 20:08 |
zaitcev | And holy moley, these new-fangled PKI tokens are huge | 20:09 |
elmiko | lol yes | 20:09 |
elmiko | thanks for the help zaitcev | 20:10 |
*** tdasilva has quit IRC | 20:10 | |
*** kevinc_ has joined #openstack-swift | 20:11 | |
*** jergerber has joined #openstack-swift | 20:14 | |
clayg | cschwede: i need to look at those gists and figure out what you're seeing - i guess if you go from 3-replica 2-zone to 3-replica 3-zone one replica of every partition is going to move regardless of weight? that doesn't sound right... | 20:15 |
cschwede | clayg: exactly! in my case it was from 3-replica 1-zone 1-region to 3-replica 2-regions. This would put a lot of stress to the replication network | 20:17 |
cschwede | clayg: but if you think it is a bug i will open a ticket and fix this instead | 20:17 |
clayg | cschwede: yeah i think that seems like a bug? torgomatic: can you grok on 105666 when you get a chance (gholt?) | 20:18 |
torgomatic | notmyname: can you poke at https://review.openstack.org/#/c/103783/ and try it again? (it's real fast) | 20:33 |
clayg | cschwede: so you can also see when adding a second device to a 2-replica 1-device ring - rebalance thinks it *has* to move the parts that are duped on the device now that it has a second one - weights be damned! | 20:35 |
notmyname | torgomatic: when I go to http://wiki.openstack.org/HowToContribute#If_you.27re_a_developer.2C_start_here I get redirected to the top of the page (no anchor) | 20:36 |
torgomatic | http://wiki.openstack.org/HowToContribute#If_you.27re_a_developer | 20:38 |
torgomatic | notmyname: try ^^ | 20:38 |
cschwede | clayg: yep, you’re right. i’m currently trying to find the bug in swift-ring-builder | 20:39 |
notmyname | torgomatic: actually a browser issue, not the link | 20:39 |
torgomatic | cschwede: clayg: it's a feature? | 20:39 |
ahale | when I saw 105666 I was thinking it would be nice to be able to specify a specific partition you want to move in a rebalance sometimes | 20:39 |
torgomatic | it is currently expected that going from 2 regions to 3, in a 3-replica ring, will move one replica of every partition into the new region | 20:39 |
clayg | torgomatic: ok, well can I make that not get me fired by net-eng | 20:40 |
ahale | like in the situation we were in with the db preallocation issue, there was a particularly large one it would have been nice to ensure a copy of moved | 20:40 |
torgomatic | durability trumps balance, at least how it is now | 20:40 |
torgomatic | clayg: feel free... I'm not saying what's good or bad, just how it's expected to work now | 20:40 |
cschwede | torgomatic: yes, but this will create a lot of replication traffic and load, and i'd like to be able to control this a little bit more | 20:41 |
torgomatic | ahale: that also sounds like a fine idea, though orthogonal to what's being discussed | 20:41 |
torgomatic | cschwede: sounds reasonable to me | 20:41 |
ahale | oh indeed, just mentioning it as an ops PoV thought I had this morning reading that gerrit | 20:41 |
torgomatic | there was the question of whether slowly turning up the weights would do any good, and I was just clarifying that no, it doesn't help jack | 20:42 |
torgomatic | now, when going 3-region to 4-region with 3 replicas, the weights do help | 20:42 |
cschwede | torgomatic: ah, yes that makes sense, after thinking about how it works | 20:45 |
cschwede | clayg: torgomatic: so you think that it makes sense to continue working on https://review.openstack.org/#/c/105666/ instead of changing the current behavior? | 20:48 |
torgomatic | cschwede: sure, seems good to me. I think the current behavior is pretty good; it just has some rough edges when you have fewer regions/zones than you do replicas | 20:52 |
cschwede | torgomatic: ok great, thanks for your feedback! | 20:54 |
*** wasmum is now known as zz_wasmum | 21:06 | |
*** tsg has quit IRC | 21:19 | |
*** tsg has joined #openstack-swift | 21:29 | |
openstackgerrit | Samuel Merritt proposed a change to openstack/swift: account to account copy implementation https://review.openstack.org/72157 | 21:30 |
clayg | cschwede: torgomatic: so we'll have two ways of slowly rebalancing - one is per device, and the other is per rebalance? | 21:40 |
torgomatic | clayg: looks that way | 21:40 |
clayg | cschwede: torgomatic: what happens if you have a cluster with 100 devices in region0 and you add 10 devices in region1 if rebalance doesn't respect weight? | 21:40 |
*** elmiko has left #openstack-swift | 21:40 | |
torgomatic | clayg: region1 fills up in a darn hurry | 21:40 |
cschwede | torgomatic: clayg: exactly, this is my problem ;) | 21:41 |
clayg | cschwede: I'm not sure having two crappy ways to do the same thing is as good as one way that does what we want | 21:41 |
cschwede | clayg: so, from my tests it looks like the weight is considered inside a region or zone | 21:42 |
cschwede | as long as there are fewer regions or zones than replicas | 21:42 |
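The effect under discussion is easy to reproduce on a scratch builder; all IPs, ports, and weights below are placeholders:

```
# 3-replica ring, one region, three zones
swift-ring-builder test.builder create 10 3 1
swift-ring-builder test.builder add r1z1-10.0.0.1:6000/sdb 100
swift-ring-builder test.builder add r1z2-10.0.0.2:6000/sdb 100
swift-ring-builder test.builder add r1z3-10.0.0.3:6000/sdb 100
swift-ring-builder test.builder rebalance
# add a tiny second region; despite the low weight, one replica of
# every partition is pulled into it on the next rebalance
swift-ring-builder test.builder add r2z1-10.0.1.1:6000/sdb 1
swift-ring-builder test.builder pretend_min_part_hours_passed   # testing only
swift-ring-builder test.builder rebalance
```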
*** CaioBrentano has joined #openstack-swift | 21:46 | |
openstackgerrit | A change was merged to openstack/python-swiftclient: Add context sensitive help https://review.openstack.org/102510 | 21:48 |
zaitcev | Oh great, Swift in Docker. | 21:59 |
zaitcev | I seem to recall someone did it before... | 21:59 |
notmyname | zaitcev: http://serverascode.com/2014/06/12/run-swift-in-docker.html | 22:07 |
notmyname | ? | 22:07 |
zaitcev | notmyname: thanks a lot | 22:09 |
*** miqui_ has joined #openstack-swift | 22:13 | |
*** miqui has quit IRC | 22:13 | |
*** nacim has joined #openstack-swift | 22:15 | |
mattoliverau | Morning all... Again :) | 22:18 |
clayg | oh goody, are we doing that thing where the gate fails everything | 22:21 |
*** nacim has quit IRC | 22:28 | |
*** Midnightmyth has quit IRC | 22:46 | |
*** CaioBrentano has quit IRC | 22:48 | |
*** foexle has quit IRC | 22:57 | |
*** kevinc_ has quit IRC | 22:59 | |
*** jergerber has quit IRC | 23:08 | |
*** gyee has quit IRC | 23:14 | |
*** kevinc_ has joined #openstack-swift | 23:18 | |
openstackgerrit | Clay Gerrard proposed a change to openstack/swift: Let eventlet.wsgi.server log tracebacks when eventlet_debug is enabled https://review.openstack.org/105918 | 23:31 |
openstackgerrit | A change was merged to openstack/swift: Fix the section name in CONTRIBUTING.rst https://review.openstack.org/103783 | 23:41 |
openstackgerrit | paul luse proposed a change to openstack/swift: Allow Object Auditor to Specify a Different disk_chunk_size https://review.openstack.org/105920 | 23:51 |