*** tab___ has quit IRC | 00:08 | |
*** miurahr has joined #openstack-swift | 00:11 | |
*** annegent_ has quit IRC | 00:13 | |
*** miurahr has quit IRC | 00:26 | |
*** miurahr has joined #openstack-swift | 00:29 | |
*** tellesnobrega has joined #openstack-swift | 00:32 | |
*** dmorita has joined #openstack-swift | 00:32 | |
*** Masahiro has joined #openstack-swift | 00:38 | |
*** tellesnobrega has quit IRC | 00:39 | |
*** nellysmitt has joined #openstack-swift | 00:39 | |
*** Masahiro has quit IRC | 00:43 | |
*** nellysmitt has quit IRC | 00:43 | |
*** annegent_ has joined #openstack-swift | 00:43 | |
*** miurahr has quit IRC | 00:45 | |
*** shri has quit IRC | 00:45 | |
*** aix has quit IRC | 00:49 | |
*** annegent_ has quit IRC | 00:52 | |
*** addnull has joined #openstack-swift | 01:13 | |
*** gyee has quit IRC | 01:19 | |
*** tdasilva has joined #openstack-swift | 01:35 | |
*** nexusz99 has joined #openstack-swift | 01:42 | |
*** lpabon has joined #openstack-swift | 01:45 | |
*** lpabon has quit IRC | 02:00 | |
*** bill_az has quit IRC | 02:01 | |
*** addnull has quit IRC | 02:07 | |
*** haomaiwang has joined #openstack-swift | 02:08 | |
*** Masahiro has joined #openstack-swift | 02:26 | |
*** Masahiro has quit IRC | 02:31 | |
openstackgerrit | Thiago da Silva proposed openstack/swift: fix dlo manifest file getting versioned https://review.openstack.org/140206 | 02:32 |
*** addnull has joined #openstack-swift | 02:33 | |
*** nellysmitt has joined #openstack-swift | 02:40 | |
openstackgerrit | Thiago da Silva proposed openstack/swift: fix dlo manifest file getting versioned https://review.openstack.org/140206 | 02:43 |
*** imkarrer has joined #openstack-swift | 02:44 | |
*** nellysmitt has quit IRC | 02:44 | |
*** tdasilva has quit IRC | 02:54 | |
imkarrer | Good evening everyone! I have a question about handoff partitions. Reading the source, it appears that handoff nodes are determined by the partition and the consistent hash ring when get_more_nodes is called. Is there a way to specify a handoff node? | 02:54 |
imkarrer | If for example you want to designate a certain device as a handoff device | 02:57 |
*** addnull has quit IRC | 03:13 | |
*** david-lyle is now known as david-lyle_afk | 03:24 | |
notmyname | imkarrer: no, that's not possible | 03:32 |
*** addnull has joined #openstack-swift | 03:38 | |
imkarrer | Thanks! I did not think so, wanted to check. | 03:42 |
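For context on the exchange above: handoff candidates come straight out of the ring, so the only "configuration" is the ring itself. A minimal sketch of listing primaries and handoffs with the ring API (assumed usage of swift.common.ring.Ring; the path and names are illustrative):

```python
# A minimal sketch of how handoff nodes are enumerated from the ring
# (paths, account/container/object names are illustrative).
from swift.common.ring import Ring

# Load a pre-built object ring; /etc/swift/object.ring.gz is the conventional path.
ring = Ring('/etc/swift', ring_name='object')

# Primary nodes come from the consistent-hash lookup...
part, primaries = ring.get_nodes('AUTH_test', 'mycontainer', 'myobject')
print('partition:', part)
for node in primaries:
    print('primary:', node['ip'], node['port'], node['device'])

# ...and handoffs are simply the next candidates the ring yields for that
# partition; there is no knob to pin a specific device as "the" handoff.
for node in ring.get_more_nodes(part):
    print('handoff:', node['ip'], node['port'], node['device'])
    break  # the generator can walk most of the cluster, so stop early
```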
*** abhirc_ has quit IRC | 03:42 | |
*** Masahiro has joined #openstack-swift | 03:42 | |
*** annegent_ has joined #openstack-swift | 03:46 | |
*** Masahiro has quit IRC | 03:47 | |
*** annegent_ has quit IRC | 03:48 | |
*** annegent_ has joined #openstack-swift | 03:48 | |
*** erlon has quit IRC | 03:50 | |
*** erlon has joined #openstack-swift | 03:51 | |
*** imkarrer has quit IRC | 04:24 | |
*** SkyRocknRoll has joined #openstack-swift | 04:27 | |
*** tkay has quit IRC | 04:33 | |
*** nellysmitt has joined #openstack-swift | 04:40 | |
*** Masahiro has joined #openstack-swift | 04:43 | |
*** nellysmitt has quit IRC | 04:45 | |
*** addnull has quit IRC | 04:47 | |
*** Masahiro has quit IRC | 04:47 | |
*** ppai has joined #openstack-swift | 04:57 | |
*** rebelshrug has quit IRC | 05:05 | |
*** TaiSHi has quit IRC | 05:06 | |
*** kopparam has joined #openstack-swift | 05:18 | |
*** addnull has joined #openstack-swift | 05:22 | |
*** kopparam has quit IRC | 05:31 | |
*** kopparam_ has joined #openstack-swift | 05:31 | |
*** Masahiro has joined #openstack-swift | 05:44 | |
*** Masahiro has quit IRC | 05:48 | |
*** zaitcev has quit IRC | 06:08 | |
*** CrackerJackMack has quit IRC | 06:12 | |
*** CrackerJackMack has joined #openstack-swift | 06:13 | |
*** kopparam has joined #openstack-swift | 06:24 | |
*** kopparam_ has quit IRC | 06:26 | |
*** annegent_ has quit IRC | 06:28 | |
*** bkopilov has quit IRC | 06:39 | |
*** jyoti-ranjan has joined #openstack-swift | 06:41 | |
*** addnull has quit IRC | 06:41 | |
*** nellysmitt has joined #openstack-swift | 06:42 | |
*** kopparam has quit IRC | 06:44 | |
*** kopparam has joined #openstack-swift | 06:45 | |
*** nellysmitt has quit IRC | 06:46 | |
*** addnull has joined #openstack-swift | 06:47 | |
*** xianghui has quit IRC | 06:57 | |
*** nshaikh has joined #openstack-swift | 07:03 | |
*** kopparam has quit IRC | 07:03 | |
*** kopparam has joined #openstack-swift | 07:03 | |
*** CybergeekDK has quit IRC | 07:05 | |
*** CybergeekDK has joined #openstack-swift | 07:06 | |
*** pberis has quit IRC | 07:14 | |
*** pberis has joined #openstack-swift | 07:14 | |
*** k4n0 has joined #openstack-swift | 07:15 | |
*** sungju has quit IRC | 07:18 | |
*** nellysmitt has joined #openstack-swift | 07:23 | |
*** nellysmitt has quit IRC | 07:28 | |
*** annegent_ has joined #openstack-swift | 07:28 | |
*** NellyK has joined #openstack-swift | 07:30 | |
*** Masahiro has joined #openstack-swift | 07:33 | |
*** annegent_ has quit IRC | 07:33 | |
*** NellyK is now known as nellysmitt | 07:36 | |
*** Masahiro has quit IRC | 07:37 | |
openstackgerrit | Hisashi Osanai proposed openstack/swift: Fix the GET's response code when there is a missing segment in LO https://review.openstack.org/136258 | 07:40 |
*** nellysmitt has quit IRC | 07:46 | |
*** xianghui has joined #openstack-swift | 08:02 | |
*** rledisez has joined #openstack-swift | 08:04 | |
*** NellyK has joined #openstack-swift | 08:14 | |
*** jistr has joined #openstack-swift | 08:20 | |
*** NellyK has quit IRC | 08:23 | |
*** bkopilov has joined #openstack-swift | 08:24 | |
*** nellysmitt has joined #openstack-swift | 08:24 | |
*** Masahiro has joined #openstack-swift | 08:33 | |
*** Masahiro has quit IRC | 08:38 | |
*** jordanP has joined #openstack-swift | 08:51 | |
*** jwang has quit IRC | 09:06 | |
*** kopparam has quit IRC | 09:31 | |
*** kopparam_ has joined #openstack-swift | 09:31 | |
*** addnull has quit IRC | 09:34 | |
*** aix has joined #openstack-swift | 09:35 | |
*** ppai has quit IRC | 09:43 | |
*** ppai has joined #openstack-swift | 09:56 | |
*** nellysmitt has left #openstack-swift | 09:58 | |
*** addnull has joined #openstack-swift | 10:04 | |
*** addnull has quit IRC | 10:14 | |
*** haomaiwang has quit IRC | 10:18 | |
*** jistr has quit IRC | 10:21 | |
*** Masahiro has joined #openstack-swift | 10:22 | |
*** nshaikh has quit IRC | 10:24 | |
*** Masahiro has quit IRC | 10:27 | |
*** nshaikh has joined #openstack-swift | 10:29 | |
*** addnull has joined #openstack-swift | 10:42 | |
*** jistr has joined #openstack-swift | 10:49 | |
*** nshaikh has quit IRC | 10:55 | |
*** kopparam_ has quit IRC | 10:58 | |
*** kopparam has joined #openstack-swift | 10:59 | |
*** aix has quit IRC | 11:19 | |
*** addnull has quit IRC | 11:24 | |
*** SkyRocknRoll has quit IRC | 11:28 | |
*** aix has joined #openstack-swift | 11:32 | |
*** mahatic has joined #openstack-swift | 11:34 | |
*** dmsimard_away is now known as dmsimard | 11:37 | |
*** nshaikh has joined #openstack-swift | 11:42 | |
*** tellesnobrega has joined #openstack-swift | 11:42 | |
openstackgerrit | Xiang Hui proposed openstack/swift: Fix getaddrinfo if dnspython is installed. https://review.openstack.org/116618 | 11:43 |
*** addnull has joined #openstack-swift | 11:46 | |
*** tellesnobrega has quit IRC | 11:54 | |
*** lpabon has joined #openstack-swift | 11:57 | |
*** addnull has quit IRC | 12:10 | |
*** Masahiro has joined #openstack-swift | 12:12 | |
*** tdasilva has joined #openstack-swift | 12:14 | |
*** addnull has joined #openstack-swift | 12:14 | |
*** Masahiro has quit IRC | 12:15 | |
*** nshaikh has quit IRC | 12:21 | |
*** nshaikh has joined #openstack-swift | 12:21 | |
*** dmorita has quit IRC | 12:26 | |
*** delatte has quit IRC | 12:29 | |
*** oomichi has quit IRC | 12:36 | |
*** xianghui has quit IRC | 12:41 | |
*** kopparam has quit IRC | 12:45 | |
*** cdelatte has joined #openstack-swift | 12:47 | |
*** pberis has quit IRC | 13:00 | |
*** xianghui has joined #openstack-swift | 13:08 | |
*** xianghui has quit IRC | 13:14 | |
*** bill_az has joined #openstack-swift | 13:19 | |
*** foexle has joined #openstack-swift | 13:19 | |
*** ppai has quit IRC | 13:34 | |
*** miqui has joined #openstack-swift | 13:42 | |
*** annegent_ has joined #openstack-swift | 13:58 | |
*** Masahiro has joined #openstack-swift | 14:00 | |
*** Masahiro has quit IRC | 14:04 | |
*** annegent_ has quit IRC | 14:06 | |
*** tellesnobrega has joined #openstack-swift | 14:09 | |
*** k4n0 has quit IRC | 14:21 | |
*** nshaikh has quit IRC | 14:36 | |
openstackgerrit | Daniel Wakefield proposed openstack/python-swiftclient: Verify MD5 of uploaded objects. https://review.openstack.org/129254 | 14:45 |
*** rebelshrug has joined #openstack-swift | 14:50 | |
*** tdasilva has quit IRC | 14:52 | |
*** k4n0 has joined #openstack-swift | 15:01 | |
*** neoteo has joined #openstack-swift | 15:08 | |
*** neoteo has left #openstack-swift | 15:08 | |
*** pberis has joined #openstack-swift | 15:08 | |
*** rdaly2 has joined #openstack-swift | 15:18 | |
*** imkarrer has joined #openstack-swift | 15:21 | |
imkarrer | Another question about handoff devices. Say there is a deployment maintaining 3 replicas with 3 devices shared between the account and container rings. If one of the devices goes down, where does the replica go since there are no handoffs? Is the replica copied to one of the two working devices? | 15:23 |
ctennis | imkarrer: no, because they will already have one of the replicas anyway. In this case, there won't be a handoff. | 15:25 |
ahale | yeah, they will, won't they | 15:26 |
*** jyoti-ranjan has quit IRC | 15:26 | |
ahale | since in a swift-get-nodes -a you see every disk is a possible handoff | 15:26 |
ahale | well, i guess unless each node is just one drive | 15:27 |
imkarrer | So there would only be 2 copies of the data until the drive is restored? Doesn't the unique-as-possible placement algorithm maintain 3 copies? | 15:27 |
imkarrer | With only 3 devices in a ring with 3 replicas, there are no handoffs enumerated | 15:27 |
ctennis | yes, but the drive is the lowest level in that scheme | 15:27 |
ctennis | there would only be 2 copies until the drive is restored | 15:28 |
imkarrer | Thank you ctennis. | 15:28 |
*** bkopilov has quit IRC | 15:32 | |
imkarrer | ctennis, is it possible that rsync copies an object over to one of the drives to maintain three replicas? I don't think so but I figure I should ask. I think rsync places replicas based on the ring. If the ring has no 4th device to place a copy on during a failure, then there will be no third copy. | 15:38 |
imkarrer | And the 'Unique as Possible' placement is decided when the rings are created, correct? | 15:39 |
ctennis | imkarrer: yes, when the rings are "rebalanced" actually. but again, the uniqueness of the data is at the drive at the lowest level. There won't be multiple copies of the same replica on the same drive. | 15:41 |
imkarrer | Thanks for the clarification ctennis. | 15:42 |
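A small sketch of ctennis's point, under the assumption that the RingBuilder/Ring APIs behave as described: with 3 devices and 3 replicas, every device is a primary for every partition, so get_more_nodes has nothing left to offer.

```python
# Illustrative sketch (assumed API usage): build a tiny 3-device ring with
# 3 replicas and confirm there are no handoff candidates left over.
from swift.common.ring import Ring, RingBuilder

builder = RingBuilder(8, 3, 1)  # part_power=8, replicas=3, min_part_hours=1
for i in range(3):
    builder.add_dev({'id': i, 'region': 1, 'zone': i + 1,
                     'ip': '10.0.0.%d' % (i + 1), 'port': 6200,
                     'device': 'sdb1', 'weight': 100})
builder.rebalance()

# Serialize and reload so we get a Ring object we can query.
builder.get_ring().save('/tmp/object.ring.gz')
ring = Ring('/tmp/object.ring.gz')

part, primaries = ring.get_nodes('AUTH_test', 'c', 'o')
handoffs = list(ring.get_more_nodes(part))
print(len(primaries))  # 3 -- every device already holds a replica
print(len(handoffs))   # 0 -- no spare device left to act as a handoff
```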
*** bkopilov has joined #openstack-swift | 15:47 | |
*** Masahiro has joined #openstack-swift | 15:48 | |
*** lpabon has quit IRC | 15:50 | |
*** Masahiro has quit IRC | 15:53 | |
*** tdasilva has joined #openstack-swift | 16:00 | |
*** addnull has quit IRC | 16:02 | |
*** jamieh_ has joined #openstack-swift | 16:25 | |
*** abhirc has joined #openstack-swift | 16:27 | |
*** k4n0 has quit IRC | 16:34 | |
*** bkopilov has quit IRC | 16:35 | |
notmyname | good morning | 16:46 |
mahatic | good morning! | 16:47 |
*** gyee has joined #openstack-swift | 16:47 | |
peluse | mornin' | 16:54 |
*** rdaly2 has quit IRC | 17:20 | |
*** rledisez has quit IRC | 17:22 | |
*** anderstj has quit IRC | 17:23 | |
*** otherjon has quit IRC | 17:23 | |
*** alpha_ori has quit IRC | 17:23 | |
*** chrisnelson has quit IRC | 17:23 | |
*** zackmdavis has quit IRC | 17:23 | |
*** bobby2 has quit IRC | 17:23 | |
*** ctennis has quit IRC | 17:23 | |
*** acorwin has quit IRC | 17:23 | |
*** swifterdarrell has quit IRC | 17:23 | |
*** hugokuo has quit IRC | 17:23 | |
*** amandap has quit IRC | 17:23 | |
*** joearnold has quit IRC | 17:23 | |
*** mlanner has quit IRC | 17:23 | |
*** charz has quit IRC | 17:23 | |
*** acorwin has joined #openstack-swift | 17:24 | |
*** alpha_ori has joined #openstack-swift | 17:25 | |
*** otherjon has joined #openstack-swift | 17:26 | |
*** swifterdarrell has joined #openstack-swift | 17:27 | |
*** ChanServ sets mode: +v swifterdarrell | 17:27 | |
*** zackmdavis has joined #openstack-swift | 17:29 | |
*** amandap has joined #openstack-swift | 17:29 | |
*** tkay has joined #openstack-swift | 17:30 | |
*** anderstj has joined #openstack-swift | 17:30 | |
*** bobby2 has joined #openstack-swift | 17:30 | |
*** charz has joined #openstack-swift | 17:31 | |
*** chrisnelson has joined #openstack-swift | 17:31 | |
*** ctennis has joined #openstack-swift | 17:32 | |
*** hugokuo has joined #openstack-swift | 17:34 | |
*** joearnold has joined #openstack-swift | 17:34 | |
*** ctennis has quit IRC | 17:35 | |
*** ctennis has joined #openstack-swift | 17:35 | |
*** mlanner has joined #openstack-swift | 17:35 | |
*** Masahiro has joined #openstack-swift | 17:37 | |
*** Masahiro has quit IRC | 17:42 | |
cschwede | notmyname: torgomatic: i’m currently looking at this bug report: https://bugs.launchpad.net/swift/+bug/1400497 - it’s related to https://review.openstack.org/#/c/121422/ | 17:53 |
cschwede | notmyname: torgomatic: so i think the behaviour is correct, because the total device weight in zone 2 in the example is twice as big as zone 1 | 17:53 |
cschwede | notmyname: torgomatic: so i’m wondering if this is a bug or more like a missing warning in the documentation. wdyt? | 17:54 |
*** abhirc has quit IRC | 17:55 | |
notmyname | cschwede: just got out of a meeting. give me a moment and I'll look | 17:55 |
*** david-lyle_afk is now known as david-lyle | 17:58 | |
swifterdarrell | cschwede: notmyname: (cc torgomatic): that's definitely a fall-out of better taking the weight into consideration during ring rebalancing | 17:58 |
swifterdarrell | cschwede: notmyname: (cc torgomatic): we're planning on being able to generate a metric relating to "amount of reduced availability" that falls out of this | 17:59 |
cschwede | swifterdarrell: is this something you’re already working on (ie a patch for swift-ring-builder)? | 17:59 |
swifterdarrell | cschwede: notmyname: (cc torgomatic): if you have one failure-domain with less than 1/N*100% of the total weight (where N == replica count), then you're likely to get some amount of reduced availability | 17:59 |
swifterdarrell | cschwede: not atm, no | 17:59 |
cschwede | swifterdarrell: yes, the 1/N*100% makes sense. i can work on a patch for this | 18:03 |
cschwede | swifterdarrell: so, it’s mostly two patches: 1. raise a warning to the user + update docs 2. add an option to swift-ring-builder to show some general info+statistics | 18:04 |
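A hedged sketch of the kind of warning check being proposed here (the function name, message text and threshold handling are illustrative, not taken from the actual patch): compare each failure domain's share of the total weight against the fair 1/N share.

```python
# Hedged sketch: flag failure domains whose share of the total weight is big
# enough that some partitions are likely to get multiple replicas there.
def check_failure_domain_weights(devs, replicas):
    """devs: iterable of dicts with 'region', 'zone' and 'weight' keys."""
    totals = {}
    for dev in devs:
        key = (dev['region'], dev['zone'])
        totals[key] = totals.get(key, 0.0) + dev['weight']
    total_weight = sum(totals.values())
    fair_share = 1.0 / replicas
    warnings = []
    for key, weight in sorted(totals.items()):
        share = weight / total_weight
        if share > fair_share:
            warnings.append(
                'region %s zone %s holds %.0f%% of the weight; with %d '
                'replicas some partitions will have more than one replica '
                'here' % (key[0], key[1], share * 100, replicas))
    return warnings

# Example matching the bug report's shape: zone 2 has twice the weight of zone 1.
devs = [{'region': 1, 'zone': 1, 'weight': 100},
        {'region': 1, 'zone': 2, 'weight': 200}]
for warning in check_failure_domain_weights(devs, replicas=3):
    print(warning)
```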
*** jwang has joined #openstack-swift | 18:12 | |
*** bkopilov has joined #openstack-swift | 18:14 | |
cschwede | swifterdarrell: hmm, i think this is also only a problem if there are less than N failure domains with a weight of (1/N*totalweight). look at this: http://paste.openstack.org/show/148054/ | 18:19 |
notmyname | cschwede: I just talked to swifterdarrell about it and got a little whiteboard drawing | 18:19 |
cschwede | notmyname: now i’m curious about your discussions :) | 18:20 |
notmyname | cschwede: nah, he was explaining the problem to me. and talking about building the exact same thing you were just talking about building | 18:21 |
swifterdarrell | cschwede: what I'm planning on doing is writing a piece of code that takes a builder file as input and constructs a forest of nodes (with regions at top, then zones, then nodes; but stopping before drives) with each node containing the count of total partitions underneath the node and the count of partitions for which all N replicas are underneath the node. | 18:22 |
cschwede | swifterdarrell: ah, that sounds nice! | 18:22 |
swifterdarrell | cschwede: the percentage: parts_with_all_replicas_under / all_parts_under * 100% is proportional to the probability of not having data available if the failure domain in question fails. | 18:23 |
swifterdarrell | cschwede: it's some kind of badness metric, IOW | 18:23 |
swifterdarrell | cschwede: that would fit well as an instance method on a builder object | 18:23 |
swifterdarrell | cschwede: simple function that generates a graph... then something else can decide how to display/interpret/act on that data | 18:24 |
swifterdarrell | torgomatic: notmyname clayg ^^^^^^^^^^^^^^ | 18:24 |
notmyname | +1 | 18:25 |
cschwede | swifterdarrell: notmyname: i think a separate patch with a simple warning and doc update makes sense. wdyt? | 18:27 |
notmyname | cschwede: yup | 18:27 |
swifterdarrell | cschwede: +1 | 18:29 |
cschwede | notmyname: swifterdarrell: ok, i start working on this. thx for your time! | 18:29 |
swifterdarrell | cschwede: please point me at the review when you post it... I'm extremely interested in this :) | 18:30 |
cschwede | swifterdarrell: sure, will do | 18:30 |
swifterdarrell | cschwede: and thanks for picking it up! | 18:30 |
cschwede | swifterdarrell: well it’s my patch that changed the behaviour, so… ;) | 18:31 |
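A simplified, hedged sketch of the metric swifterdarrell describes: the fraction of partitions whose replicas all sit inside a single region/zone. A real implementation would walk the builder's replica-to-device table and build the full forest of counts; this version assumes a loaded Ring and the get_part_nodes/partition_count API.

```python
# Simplified sketch of the "badness metric": how many partitions would be
# completely unavailable if one region/zone failed. Assumed API usage.
from collections import Counter
from swift.common.ring import Ring

ring = Ring('/etc/swift', ring_name='object')  # illustrative path

parts_all_in_one_zone = 0
for part in range(ring.partition_count):
    zones = Counter((n['region'], n['zone']) for n in ring.get_part_nodes(part))
    if len(zones) == 1:  # every replica lives in the same region/zone
        parts_all_in_one_zone += 1

pct = 100.0 * parts_all_in_one_zone / ring.partition_count
print('%.2f%% of partitions would be unavailable if one zone failed' % pct)
```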
*** geaaru has joined #openstack-swift | 18:31 | |
*** IRTermite has quit IRC | 18:32 | |
*** jamieh_ has quit IRC | 18:32 | |
*** tkay has left #openstack-swift | 18:36 | |
*** ToMfromTO has joined #openstack-swift | 18:38 | |
*** ToMfromTO has left #openstack-swift | 18:39 | |
notmyname | cschwede: are you working with Tim Leak, the reporter of that bug? | 18:40 |
*** ToMfromTO has joined #openstack-swift | 18:40 | |
*** ToMfromTO has left #openstack-swift | 18:41 | |
cschwede | notmyname: no, not yet. do you think i should ask him for a patch first? | 18:42 |
notmyname | cschwede: no. I was wondering if he was a coworker or customer and you had already talked to him | 18:43 |
cschwede | notmyname: no, just saw this bug and immediately thought that this needs a warning and doc update | 18:44 |
notmyname | ok, thanks. if possible, it would be nice to have these things land by the end of the weekend so we can include them in the 2.2.1 release | 18:45 |
*** jistr has quit IRC | 18:49 | |
*** mahatic has quit IRC | 18:52 | |
*** jamieh_ has joined #openstack-swift | 18:53 | |
*** jamieh_ has quit IRC | 18:55 | |
*** jordanP has quit IRC | 18:57 | |
*** IRTermite has joined #openstack-swift | 19:09 | |
*** mahatic has joined #openstack-swift | 19:11 | |
*** remix_tj has joined #openstack-swift | 19:12 | |
*** aix has quit IRC | 19:20 | |
*** Masahiro has joined #openstack-swift | 19:26 | |
remix_tj | hi, is this the right channel for asking about architectural requirements and how to properly configure swift for a multi-zone/multi-region cluster? | 19:30 |
*** Masahiro has quit IRC | 19:30 | |
*** zul has quit IRC | 19:31 | |
remix_tj | i have some trouble understanding the requirements for the various dedicated networks | 19:31 |
remix_tj | i also asked on ask openstack, but it seems no one is answering there | 19:34 |
*** zul has joined #openstack-swift | 19:34 | |
notmyname | remix_tj: ya, you can get some answers here. I'm in a meeting, but feel free to ask | 19:34 |
remix_tj | notmyname: thank you | 19:35 |
*** zul has quit IRC | 19:35 | |
remix_tj | i'm planning a deployment of a swift cluster involving multiple zones and two regions (starting with one, but then i'll add the second one) | 19:36 |
*** zul has joined #openstack-swift | 19:37 | |
remix_tj | i want to use dedicated networks for cluster-facing network and for replication network | 19:38 |
notmyname | ok | 19:39 |
remix_tj | first doubt: does the proxy server of one region (supposing that i deploy only one, for simplicity) need access to all the storage nodes in all zones of that region? | 19:40 |
remix_tj | and does it also need access to the storage nodes of the other regions? | 19:40 |
*** exploreshaifali has joined #openstack-swift | 19:47 | |
remix_tj | anyway, where can i find documentation about an implementation of a cluster like the one i want to build, with dedicated networks? it seems that no one has done this kind of setup before | 19:48 |
peluse | looking to save myself a little googling here... was asked internally what current options there are for customers looking to migrate from file-based storage to Swift, and I assume they meant tools/etc to help both on the app side as well as with migration of data... anyone have any pre-canned info on this? | 19:49 |
notmyname | peluse: mostly that's done with some gateway. there's a few things out there that can present swift with some set of posix semantics. the other way is looking at explorer or dashboard tools that give you (pseudo) directories and drag-drop functionality | 19:52 |
notmyname | peluse: clayg just gave torgomatic and me a run-down of your current work on https://review.openstack.org/#/c/131872/ | 19:53 |
peluse | notmyname, cool, I'm about to push another update that fixes a few things, cleans shit up and makes existing test code work. Will start adding new test code after that, but I think it's the right direction this time | 19:54 |
notmyname | remix_tj: yup. totally possible. the nodes in the swift cluster need to be able to talk to one another (all the storage nodes need to talk to one another, and the proxies should be able to talk to the storage nodes) | 19:54 |
notmyname | peluse: everything I heard sounded good. still a bunch of "fun" problems to solve. seems a good direction though | 19:55 |
peluse | notmyname, patch includes all the plumbing for the new hashes.pkl and functional update and update_delete for both reconstructor and repl (well, the reconstructor side won't actually reconstruct until GET is done, but it will reuse all of that) | 19:55 |
peluse | notmyname, yeah, I'll update the design doc too after I push the cleaned up version.... | 19:56 |
peluse | one more existing unit test failure to fix... | 19:56 |
notmyname | peluse: oh, and my "pre-canned" info for file->swift is basically spelled "swiftstack sales pitch" :-) | 19:56 |
peluse | heh | 19:58 |
peluse | i have no problem giving them that answer | 19:58 |
notmyname | remix_tj: https://swiftstack.com/blog/2012/09/16/globally-distributed-openstack-swift-cluster/ https://www.swiftstack.com/docs/admin/cluster_management/regions.html https://www.youtube.com/watch?v=mcaTwhP_rPE https://www.youtube.com/watch?v=LpmBRqevuVU | 19:58 |
peluse | notmyname, so does the SS Filesystem Gateway require SS Controller or is it a standalone thing? | 20:04 |
peluse | heh, watching Joe's video on it now :) | 20:05 |
notmyname | peluse: (I'm told we aren't supposed to call it "SS" anything). it's not stand-alone. only part of the product | 20:05 |
peluse | well, you didn't do that I did :) | 20:05 |
*** gyee has quit IRC | 20:06 | |
*** lpabon has joined #openstack-swift | 20:15 | |
remix_tj | notmyname: so, summing up, every network needs to have global visibility of all nodes of the cluster | 20:41 |
*** rdaly2 has joined #openstack-swift | 20:41 | |
openstackgerrit | paul luse proposed openstack/swift: New hashes.pkl format and several other reconstructor related changes https://review.openstack.org/131872 | 20:43 |
openstackgerrit | paul luse proposed openstack/swift: Build up reconstructor with correct node selection https://review.openstack.org/129361 | 20:43 |
openstackgerrit | paul luse proposed openstack/swift: Add node/pair index patch back into feature/EC https://review.openstack.org/134065 | 20:43 |
peluse | clayg, getting close... https://review.openstack.org/131872 has all the stuff we talked about (pretty sure) and existing test code updated to work with it. Still a little stitching to do on the reconstructor side but will start putting in test code for new functions here shortly.... | 20:46 |
peluse | well, have an eye Dr appt and will be dilated so maybe not so shortly :) | 20:47 |
*** prontotest has joined #openstack-swift | 20:58 | |
*** prontotest has left #openstack-swift | 20:58 | |
*** rdaly2 has quit IRC | 20:59 | |
*** cdelatte has quit IRC | 21:00 | |
notmyname | remix_tj: yes | 21:05 |
remix_tj | ok, this doesn't emerge from the docs and books. Maybe it's implicit, but when i ask our netadmins for new vlans/networks it's the first question they'll ask :-\ | 21:07 |
*** dmsimard is now known as dmsimard_away | 21:09 | |
mattoliverau | Morning | 21:10 |
swifterdarrell | remix_tj: it's not simple from an implementation standpoint, but conceptually it's simple: every proxy-server needs to be able to route to every IP/port defined in all rings (with the possible exception of the replication IP/ports if they differ?); every storage node needs to be able to route to every IP/port defined in all rings. notmyname, that sound about right? | 21:10 |
notmyname | yup | 21:10 |
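A hedged sketch of what that requirement means in practice: every IP/port (and replication IP/port) listed in the ring files must be reachable from every proxy and storage node. This assumes the standard ring locations and the Ring.devs attribute; the actual firewall/VLAN verification is left to the operator.

```python
# Hedged sketch: enumerate every endpoint a proxy or storage node must be
# able to reach, straight from the ring files, and probe each one.
import socket
from swift.common.ring import Ring

endpoints = set()
for name in ('account', 'container', 'object'):
    ring = Ring('/etc/swift', ring_name=name)
    for dev in ring.devs:
        if not dev:
            continue  # removed devices leave holes in the dev list
        endpoints.add((dev['ip'], dev['port']))
        # replication network, if it differs from the cluster-facing one
        endpoints.add((dev.get('replication_ip', dev['ip']),
                       dev.get('replication_port', dev['port'])))

for ip, port in sorted(endpoints):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    result = sock.connect_ex((ip, port))
    print('%s:%s %s' % (ip, port, 'reachable' if result == 0 else 'unreachable'))
    sock.close()
```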
swifterdarrell | remix_tj: notmyname: there's also necessity for all proxy-servers to be able to route to every memcached IP/port defined in the configs (these are often co-deployed on the proxy-servers themselves) | 21:11 |
remix_tj | very good | 21:11 |
remix_tj | swifterdarrell: even remote proxy server? | 21:11 |
swifterdarrell | remix_tj: for distributed proxies (say, 2 regions, 2 proxies each for simplicity), you have 2 choices: one memcached pool per region (defined by the set of memcached server IPs in each proxy's configs) with the potential issue that each proxy will not have access to cache members in the other region; OR one large memcached pool that's coherent but introduces WAN latency into some proportion of your requests. | 21:13 |
openstackgerrit | Christian Schwede proposed openstack/swift: [WIP] Warn if multiple replicas are stored within same region/zone https://review.openstack.org/140478 | 21:14 |
*** Masahiro has joined #openstack-swift | 21:15 | |
remix_tj | swifterdarrell: the idea is that the proxy in the second region is only contacted in case of disaster recovery | 21:19 |
remix_tj | (more or less) | 21:19 |
*** Masahiro has quit IRC | 21:19 | |
swifterdarrell | remix_tj: then you'd still need full routability, but you could deploy disjoint memcached pools | 21:19 |
swifterdarrell | remix_tj: and after a fail-over, you'll have a cold cache (auth tokens will become invalid and/or need to be re-injected into memcached, etc.) | 21:20 |
remix_tj | ok, auth won't be a problem since servers and applications will be restarted with a specific order | 21:21 |
remix_tj | so they'll reauth | 21:21 |
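An illustrative proxy-server.conf fragment for the "one memcached pool per region" option swifterdarrell describes (IPs are examples only; this is not a prescribed layout):

```ini
# Proxies in region 1 point only at region-1 memcached servers:
[filter:cache]
use = egg:swift#memcache
memcache_servers = 10.1.0.10:11211,10.1.0.11:11211

# Proxies in region 2 (the DR site) point only at region-2 servers, so a
# fail-over starts with a cold cache but never pays WAN latency per request:
# [filter:cache]
# use = egg:swift#memcache
# memcache_servers = 10.2.0.10:11211,10.2.0.11:11211
```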
swifterdarrell | remix_tj: note also that you'll need >= 33% of the raw storage capacity (as given to the ring as the sum of all devices's weight) in the DR region or you run the risk of > 0 partitions having all 3 replicas in the non-DR region (that's assuming 3 replicas, where you want 2 replicas in primary region and 1 in DR region) | 21:23 |
*** tab___ has joined #openstack-swift | 21:24 | |
remix_tj | yeah, i know. At the moment, for testing purposes, we planned a local region composed of 6 storage nodes in two datacenters at campus distance, and 6 storage nodes in the remote region, so raw capacity will theoretically be 50/50 | 21:27 |
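An illustrative swift-ring-builder session for a 3-replica, two-region layout along the lines remix_tj sketches; the part power, IPs and weights are examples, not the actual plan:

```console
# Create a 3-replica object ring and add devices in both regions.
$ swift-ring-builder object.builder create 16 3 1

# Region 1 (primary): 6 devices at weight 100 each = 600 total
$ swift-ring-builder object.builder add r1z1-10.1.0.1:6200/sdb1 100
# ... repeat for the other five region-1 devices ...

# Region 2 (DR): 6 devices at weight 100 each = 600 total
$ swift-ring-builder object.builder add r2z1-10.2.0.1:6200/sdb1 100
# ... repeat for the other five region-2 devices ...

$ swift-ring-builder object.builder rebalance

# With 3 replicas, each region holds >= 1/3 of the total weight (here 50%),
# so no partition should end up with all 3 replicas in a single region.
```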
*** pberis has quit IRC | 21:33 | |
*** lpabon has quit IRC | 21:35 | |
*** tdasilva has quit IRC | 21:53 | |
openstackgerrit | Christian Schwede proposed openstack/swift: [WIP] Warn if multiple replicas are stored within same region/zone https://review.openstack.org/140478 | 21:57 |
*** pberis has joined #openstack-swift | 21:59 | |
*** exploreshaifali has quit IRC | 22:06 | |
*** foexle has quit IRC | 22:07 | |
*** dmsimard_away is now known as dmsimard | 22:14 | |
*** dmsimard is now known as dmsimard_away | 22:15 | |
*** miurahr has joined #openstack-swift | 22:22 | |
*** StevenK has quit IRC | 23:00 | |
*** Masahiro has joined #openstack-swift | 23:04 | |
*** tacticus_v1 is now known as tacticus | 23:05 | |
*** Masahiro has quit IRC | 23:08 | |
*** rmcall has joined #openstack-swift | 23:09 | |
*** tab___ has quit IRC | 23:19 | |
*** tab____ has joined #openstack-swift | 23:19 | |
*** gyee has joined #openstack-swift | 23:33 | |
*** miurahr has quit IRC | 23:49 |