clayg | so bknudson says he doesn't think i'm doing anything wrong on my devstack - I'm not so sure - is acoles_away the only guy that's gotten keystone/swift up and running any time recently? | 00:03 |
clayg | hurricanerix_: aren't you neck deep in a mess of keystone currently? Maybe we can cry on each other's shoulders? | 00:04 |
*** rdaly2 has quit IRC | 00:24 | |
*** rdaly2 has joined #openstack-swift | 00:26 | |
*** rdaly2 has quit IRC | 00:27 | |
*** gyee has joined #openstack-swift | 00:28 | |
openstackgerrit | Clay Gerrard proposed openstack/swift-specs: Add containeralias spec https://review.openstack.org/155524 | 00:32 |
klrmn | does the built-in pre-commit hook in the swift repo not do the same pep8 checks as jenkins? | 00:35 |
torgomatic | klrmn: no, I think the pre-commit hook there is just whatever git-review puts in, and that doesn't enforce much of anything | 00:45 |
torgomatic | $ tox -e pep8 # just run pep8 and other linters | 00:45 |
klrmn | torgomatic: the pre-commit hook runs pep8 and tox runs flake8 | 00:46 |
*** bill_az has quit IRC | 00:46 | |
klrmn | torgomatic: i'm endeavoring to change that (locally) but i get "./test/probe/test_container_merge_policy_index.py:375:34: F812 list comprehension redefines 'metadata' from line 371" which looks like perfectly valid code to me | 00:47 |
torgomatic | klrmn: huh; my pre-commit hook doesn't seem to do that... in fact, the only hook I have is commit-msg, and that does the Change-Id stuff | 00:49 |
torgomatic | so whatever you've got, it's not Swift's, although it probably is a good idea | 00:49 |
klrmn | torgomatic: given this is the third time jenkins has told me my flake8 sucks... | 00:50 |
torgomatic | klrmn: the gate is just complaining about an unused Manager import, not whatever F812 is... maybe your local system has a newer pep8 | 00:52 |
torgomatic | that's one of the nice things about tox; it uses the versions in test-requirements.txt, and I think we have those locked down pretty well | 00:53 |
klrmn | torgomatic: yes, i know. but when i changed my pre-commit hook to use flake8, i got the other error | 00:54 |
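In case it helps anyone following along: a minimal sketch of a local pre-commit hook that runs the same linters as the gate, assuming a standard Swift checkout and the `tox -e pep8` env torgomatic mentions above. The hook itself is not shipped with Swift; this is only one way to wire it up.

```
#!/bin/sh
# Hypothetical .git/hooks/pre-commit for a Swift checkout (not part of the
# repo). It runs flake8/pep8 via tox, so the locally used linter versions
# are the ones pinned in test-requirements.txt and match what Jenkins runs.
tox -e pep8 || {
    echo "pep8/flake8 checks failed; fix them or commit with --no-verify" >&2
    exit 1
}
```

Remember to make the hook executable (`chmod +x .git/hooks/pre-commit`) for git to run it.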
openstackgerrit | Leah Klearman proposed openstack/swift: more probe test refactoring https://review.openstack.org/155895 | 00:56 |
*** lnxnut has joined #openstack-swift | 01:10 | |
*** abhirc has joined #openstack-swift | 01:15 | |
*** rdaly2 has joined #openstack-swift | 01:28 | |
*** david-lyle has joined #openstack-swift | 01:32 | |
*** rdaly2 has quit IRC | 01:32 | |
*** david-lyle is now known as david-lyle_afk | 01:33 | |
*** gyee has quit IRC | 01:38 | |
mattoliverau | At SFO airport now, thanks for a great week all! | 01:40 |
notmyname | mattoliverau: have a safe trip | 01:40 |
*** zhill_ has quit IRC | 01:44 | |
mattoliverau | Looks like I'm going to miss valentines day completely, as it won't exist as a day. I go from Friday straight to Sunday.. #badhusband | 01:52 |
clayg | welp, that's it for me today I think | 01:54 |
clayg | torgomatic: can you help me remember next week that we need to update https://review.openstack.org/#/c/155421/ with the proposal for the "new" versioned object ideas that otherjon had been kicking around | 01:55 |
torgomatic | clayg: sure, I'll make a note | 01:55 |
clayg | torgomatic: I really do think you're the most familiar with his ideas, so I'd appreciate your help tricking cschwede into writing^W^W^W^W writing up the spec | 01:56 |
torgomatic | clayg: sounds good; we'll get that on Tuesday | 01:57 |
*** lnxnut has quit IRC | 02:08 | |
mattoliverau | Have a great long weekend guys | 02:09 |
*** lnxnut has joined #openstack-swift | 02:30 | |
*** lnxnut has quit IRC | 02:34 | |
*** lnxnut has joined #openstack-swift | 02:38 | |
klrmn | clayg: ok, jenkins has +1-ed it, and i do not plan to make any more changes | 02:53 |
*** lnxnut has quit IRC | 04:03 | |
*** dmsimard_away is now known as dmsimard | 04:04 | |
*** dmsimard is now known as dmsimard_away | 04:06 | |
*** dmsimard_away is now known as dmsimard | 04:07 | |
*** lnxnut has joined #openstack-swift | 04:21 | |
*** dmsimard is now known as dmsimard_away | 04:34 | |
*** lnxnut has quit IRC | 04:37 | |
*** IRTermite has joined #openstack-swift | 04:53 | |
*** abhirc has quit IRC | 05:37 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/swift: Imported Translations from Transifex https://review.openstack.org/155967 | 06:09 |
*** panbalag has quit IRC | 07:13 | |
*** panbalag has joined #openstack-swift | 07:27 | |
*** glange has quit IRC | 07:28 | |
*** jd__ has quit IRC | 07:28 | |
*** jd__ has joined #openstack-swift | 07:32 | |
*** glange has joined #openstack-swift | 07:35 | |
*** ChanServ sets mode: +v glange | 07:35 | |
*** joeljwright has joined #openstack-swift | 07:43 | |
*** sileht has quit IRC | 09:49 | |
*** silor has joined #openstack-swift | 11:00 | |
*** joeljwright has quit IRC | 11:55 | |
*** silor has quit IRC | 12:16 | |
*** silor has joined #openstack-swift | 12:16 | |
*** bkopilov has quit IRC | 12:19 | |
*** mahatic has joined #openstack-swift | 13:19 | |
*** dmsimard_away is now known as dmsimard | 13:48 | |
*** sileht has joined #openstack-swift | 14:35 | |
*** tsg has joined #openstack-swift | 14:38 | |
*** tsg has quit IRC | 14:52 | |
*** dmsimard is now known as dmsimard_away | 15:16 | |
openstackgerrit | Richard Hawkins proposed openstack/swift: Add functional tests for container TempURLs https://review.openstack.org/155513 | 15:49 |
openstackgerrit | Richard Hawkins proposed openstack/swift: Add functional tests for container TempURLs https://review.openstack.org/155513 | 16:05 |
openstackgerrit | Richard Hawkins proposed openstack/swift: Add additional func tests for TempURLs https://review.openstack.org/155985 | 16:20 |
openstackgerrit | Richard Hawkins proposed openstack/swift: Add additional func tests for TempURLs https://review.openstack.org/155985 | 16:21 |
*** otoolee has quit IRC | 16:43 | |
*** otoolee has joined #openstack-swift | 16:48 | |
*** geaaru has joined #openstack-swift | 17:06 | |
*** Anayag has joined #openstack-swift | 17:33 | |
Anayag | Hi, I am installing SAIO on a server and everything worked fine. Now I just changed the proxy IP to the local IP instead of 127.0.0.1, added the same local IP to the memcache config, and restarted the proxy. Now the TempAuth token changes on every request | 17:35 |
Anayag | do you have any idea to resolve this? | 17:35 |
ctennis | did you restart memcache too? | 17:40 |
Anayag | yes | 17:42 |
Anayag | but now I revert the memcache to 127.0.0.1 | 17:42 |
Anayag | then it works fine | 17:42 |
Anayag | was it wrong to set the memcache ip? | 17:43 |
ctennis | did you update the proxy server configuration to tell it memcache is listening on a different ip? | 17:43 |
Anayag | ahh that I did not | 17:44 |
Anayag | How do I set it in the proxy config? | 17:45 |
ctennis | https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L338 | 17:46 |
Anayag | Thanks a lot | 17:47 |
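For reference, a sketch of the change ctennis is pointing at. The IP below is illustrative; the `memcache_servers` option in the `[filter:cache]` section of the proxy config is the setting the linked sample shows.

```
# In /etc/swift/proxy-server.conf (illustrative IP):
#
#   [filter:cache]
#   use = egg:swift#memcache
#   memcache_servers = 192.168.1.10:11211
#
# then restart the proxy so it picks up the new memcache address:
swift-init proxy restart
```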
*** jrichli has joined #openstack-swift | 18:52 | |
mattoliverau | jrichli is here! | 18:55 |
jrichli | mattoliverau: hey Matt! I guess you made it home safely? | 18:56 |
mattoliverau | I'm about to take off on my last leg of the trip home, only 4-5 hours to go! | 18:56 |
mattoliverau | So no not yet :( | 18:56 |
jrichli | have you fully recovered yet? | 18:56 |
jrichli | or are you still a bit under the weather? | 18:57 |
mattoliverau | Feeling much better, but just came off a 13.5 hour flight and didn't really sleep.. So more like a dead man walking now :p | 18:58 |
notmyname | mattoliverau: did you get the fancy AirNZ seats on the long flight back? | 18:58 |
mattoliverau | I take it you got home safe :) | 18:58 |
jrichli | I am back in town having some great coffee in a cafe near UT. About to learn me some more swift :-) | 18:59 |
mattoliverau | notmyname: yeah, and I did sleep better.. Got 3-4 hours, which is the best so far ;p | 18:59 |
notmyname | nice | 18:59 |
notmyname | jrichli: sounds like a great plan!! | 18:59 |
mattoliverau | Nice, valentines day is the best day to learn about swift :p | 19:00 |
jrichli | lol, well ... I am sure we will eat somewhere exciting tonight. Hubby is here shopping to buy me more laptop stickers! | 19:02 |
mattoliverau | Haha, we've created a momster... And love it ;) OK phone going off again, have a great rest of you day jrichli and notmyname (and anyone else reading) :) | 19:03 |
mattoliverau | *monster | 19:03 |
notmyname | mattoliverau: hope you have a nice flight and get a couple more hours of sleep | 19:03 |
mattoliverau | Thanks, I'm tired enough too so should be fine :) | 19:04 |
jrichli | +1 | 19:04 |
*** doxavore has joined #openstack-swift | 19:05 | |
notmyname | mattoliverau: jrichli: I wrote up something short about the hackathon (and have both your pictures) https://swiftstack.com/blog/2015/02/13/openstack-swift-hackathon/ | 19:05 |
mattoliverau | notmyname: oh and I talked to jogo yest(?) And there is a new zuul feature that will be awesome for swift check and gate tests, the ability to skip tests, eg don't run full integration tests if only docs have changed.. I'll make a patch on Monday :) | 19:07 |
doxavore | Are there any detailed docs on how placement actually works with regard to multiple regions and weights? I'd like to make sure there is 1 replica in a different region, but I haven't been able to find any operational or admin docs that describe how weights come into play. | 19:07 |
notmyname | mattoliverau: cool | 19:07 |
notmyname | doxavore: yes | 19:08 |
notmyname | doxavore: I'll give you some links, but there have been some subtle changes recently that we're currently in the process of writing up more completely that might affect things in some cases | 19:08 |
*** jrichli has quit IRC | 19:08 | |
*** jrichli has joined #openstack-swift | 19:09 | |
doxavore | notmyname: that would be great, thank you! | 19:09 |
notmyname | doxavore: http://docs.openstack.org/developer/swift/overview_ring.html and https://swiftstack.com/blog/2012/11/21/how-the-ring-works-in-openstack-swift/ | 19:09 |
mattoliverau | OK phone really going off now! I feel the glare from the airplane staff :p | 19:09 |
jrichli | notmyname: Great summary! | 19:10 |
notmyname | doxavore: but the short answer is "yes, if you have 2 regions and 3 replicas (and the regions are the same size), then you'll have at least 1 replica in each region" | 19:10 |
notmyname | jrichli: thanks | 19:10 |
notmyname | doxavore: also, https://swiftstack.com/openstack-swift/architecture/ is a high-level summary of all of swift | 19:12 |
doxavore | notmyname: by the same size, you mean that to ensure i have 2 copies in region 1 and 1 copy in region 2, i need to make sure the total weight of region 2 is _at least_ 1/3 of my total cluster weight? | 19:12 |
notmyname | doxavore: first (because things have recently changed slightly) what version of swift are you using? | 19:13 |
doxavore | such that the weight is given priority, then it uses its "most unique" placements? | 19:13 |
notmyname | right | 19:13 |
notmyname | that's the current strategy | 19:13 |
notmyname | doxavore: or to slightly rephrase, things are placed as widely as possible across failure domains until you have a failure domain that is full with respect to weight. then you can get more than one replica in a failure domain tier | 19:14 |
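To make the placement discussion concrete, a hypothetical two-region, three-replica ring built with swift-ring-builder; all IPs, ports, device names, and weights are invented for illustration.

```
# part_power=10, replicas=3, min_part_hours=1
swift-ring-builder object.builder create 10 3 1

# region 1: roughly 2/3 of the total weight (3000 ~= a 3TB drive)
swift-ring-builder object.builder add r1z1-10.0.1.1:6000/sdb 3000
swift-ring-builder object.builder add r1z2-10.0.1.2:6000/sdb 3000

# region 2: roughly 1/3 of the total weight
swift-ring-builder object.builder add r2z1-10.0.2.1:6000/sdb 3000

swift-ring-builder object.builder rebalance
```

Running `swift-ring-builder object.builder` with no subcommand prints the resulting region/zone/weight layout and per-device balance.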
doxavore | iirc, swift-2.2.0 from Ubuntu Cloud Archive (Juno on 14.04) - apologies, VPN issues are preventing me from grabbing the exact version :-/ | 19:15 |
notmyname | doxavore: latest version is 2.2.2 | 19:16 |
*** bkopilov has joined #openstack-swift | 19:16 | |
doxavore | okay that makes sense. you said that could be changing soon though? | 19:17 |
notmyname | what could? | 19:17 |
notmyname | the placement? | 19:17 |
notmyname | that did change in 2.2.2 | 19:17 |
notmyname | doxavore: https://github.com/openstack/swift/blob/master/CHANGELOG | 19:18 |
doxavore | oh okay. you mentioned some "subtle changes" coming :> | 19:18 |
notmyname | doxavore: ya, those are what are in 2.2.2 | 19:18 |
notmyname | doxavore: and mostly they affect deployments that are unbalanceable | 19:18 |
notmyname | doxavore: one cool thing is that the `swift-ring-builder` tool in 2.2.2 includes a command to print out the "durability". It will report if there are partitions in the ring that are too heavily concentrated in one failure domain | 19:20 |
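This presumably refers to the `dispersion` subcommand (hedging on the exact name and the version it landed in). On builders that support it, it reports partitions whose replicas are concentrated in too few failure domains.

```
# Illustrative: report replica dispersion for an existing builder file
swift-ring-builder object.builder dispersion
```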
doxavore | interesting - looks like we are on 2.2.0 still. is there a different recommended install from the juno docs (which seem to suggest using the ubuntu-cloud-archive repo)? | 19:21 |
*** silor1 has joined #openstack-swift | 19:21 | |
notmyname | doxavore: I'd always recommend people use the latest tagged version of swift :-) | 19:21 |
*** silor has quit IRC | 19:22 | |
notmyname | doxavore: but I don't know what distros have packaged right now | 19:22 |
doxavore | notmyname: i'll have to see if we can get up to 2.2.2 then. in the meantime, is my understanding of how the placement works different in 2.2.0, or are there just some gotchas I should watch out for? :> | 19:23 |
notmyname | doxavore: how many regions do you have? | 19:24 |
doxavore | 2 regions, 3 replicas. just trying to use one of the regions for DR insurance more than anything (but there are some application services that will use it periodically) | 19:25 |
notmyname | ok. so an active-active DR thing. makes sense | 19:25 |
doxavore | but that insurance is only valid if we can actually make sure there is 1 copy of everything in that 2nd region :) | 19:25 |
notmyname | right | 19:25 |
notmyname | are the regions similarly sized? ie capacity (and specifically, weights) | 19:26 |
doxavore | we're still setting things up, so I can play with the weights however I need to. generally though, it should be 2/3 vs 1/3 of total cluster weight, yes | 19:27 |
doxavore | if that's all we need to do to make sure we're getting the DR we want, then I can make sure we stay configured that way. With some drives dropping out, etc, it can be difficult to make sure it's always exactly 2/3 - 1/3 split | 19:28 |
notmyname | doxavore: weights should always be related to the available capacity. the best rule of thumb is "number of GB". so 3000 = 3 TB drive, 6000 = 6TB drive. etc | 19:29 |
notmyname | ok, you aren't going to get what you expect there | 19:29 |
notmyname | you need to have 1/2 and 1/2 | 19:29 |
notmyname | because you'll get 2 replicas in one region and 1 in the other (for each partition. so you'd have at least 1 replica in each region) | 19:30 |
notmyname | the way you have it now... | 19:30 |
notmyname | you'll have that for 1/3 of your cluster, then the smaller region gets "full" (wrt weight) and then you'll start getting all 3 replicas in the larger region | 19:31 |
doxavore | hmm... is there not a way i could make sure 2 of my 3 replicas are always in 1 region? e.g. our truly active data center vs our higher-latency offsite | 19:31 |
notmyname | how far apart are your regions? in latency | 19:31 |
doxavore | we will be running with 2/3 of disk storage in region 1 and 1/3 in region 2, so i had planned on using the rule to make weight = GB | 19:32 |
doxavore | 20ms, give or take | 19:32 |
notmyname | so you might be able to get away with calling it one region and then splitting it into 3 evenly sized zones in that one region | 19:33 |
notmyname | then each zone will fill up evenly | 19:33 |
notmyname | and give you what you're currently expecting/looking for | 19:33 |
doxavore | hrm. that's unfortunate. so 3 evenly-sized zones or (presumably) 3 evenly-sized regions where 2 of them are in that primary facility and 1 in the offsite is the only way to get that.. | 19:38 |
doxavore | as even saying weights in region 2 are 2*GB would result in 2 copies being placed there (half the time) | 19:39 |
notmyname | doxavore: right. so think of it as tiers (drive->server->zone->region). and swift fills up evenly across the available failure domains at a given tier. so if you have 2 regions, you'll get 2 replicas in one and 1 in the other. before it goes to the lower tiers (zone, etc) | 19:40 |
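A sketch of the single-region alternative notmyname suggests above: three evenly weighted zones in one region, so each partition gets one replica per zone, with two zones in the primary facility and one offsite. Again, IPs, ports, devices, and weights are made up.

```
swift-ring-builder object.builder create 10 3 1
swift-ring-builder object.builder add r1z1-10.0.1.1:6000/sdb 3000   # primary facility
swift-ring-builder object.builder add r1z2-10.0.1.2:6000/sdb 3000   # primary facility
swift-ring-builder object.builder add r1z3-10.0.2.1:6000/sdb 3000   # offsite
swift-ring-builder object.builder rebalance
```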
notmyname | doxavore: I'm being called away, but I'll be back online later. also, there are several other people who will be able to answer similar questions (mostly during the workweek though) | 19:42 |
doxavore | beautifully stated. that's exactly what I was trying to wrap my head around. I was having trouble finding that in any of the docs. :) thank you very much for your help. | 19:42 |
notmyname | doxavore: glad to have helped | 19:42 |
notmyname | doxavore: good luck with your deployment. please let us know if you have other questions | 19:42 |
notmyname | doxavore: and I always love hearing about new clusters and their use cases. I'd be happy to hear anything you can share about it | 19:43 |
* notmyname out | 19:43 | |
doxavore | will do. I'm sure i'll be around in the coming days/weeks. :> | 19:43 |
*** MasterPiece has joined #openstack-swift | 19:50 | |
*** silor1 has quit IRC | 20:18 | |
*** MasterPiece has quit IRC | 20:21 | |
*** zul has joined #openstack-swift | 20:47 | |
*** doxavore has quit IRC | 21:03 | |
*** jrichli_ has joined #openstack-swift | 21:18 | |
*** jrichli has quit IRC | 21:18 | |
*** mahatic has quit IRC | 21:23 | |
*** mahatic has joined #openstack-swift | 21:40 | |
*** mahatic has quit IRC | 21:52 | |
*** mahatic has joined #openstack-swift | 21:53 | |
*** cppforlife_ has quit IRC | 22:24 | |
*** cppforlife_ has joined #openstack-swift | 22:45 | |
*** jrichli_ has quit IRC | 22:52 | |
*** tsg has joined #openstack-swift | 23:06 | |
*** gsilvis has quit IRC | 23:38 | |
*** gsilvis has joined #openstack-swift | 23:40 |