*** occupant has quit IRC | 00:01 | |
notmyname | nitika2: mahati (not here): I didn't get to OPW project scheduling today. should be towards the top of my list for tomorrow | 00:09 |
*** dmsimard_away is now known as dmsimard | 00:15 | |
*** dmsimard is now known as dmsimard_away | 00:18 | |
nitika2 | notmyname: No problem. | 00:19 |
nitika2 | thanks | 00:19 |
*** dmorita has joined #openstack-swift | 00:26 | |
*** gyee has quit IRC | 00:27 | |
*** kopparam has joined #openstack-swift | 00:40 | |
*** addnull has joined #openstack-swift | 00:42 | |
*** kopparam has quit IRC | 00:45 | |
*** shri has left #openstack-swift | 00:46 | |
*** cpen has quit IRC | 00:52 | |
*** nosnos has joined #openstack-swift | 01:01 | |
*** oomichi has joined #openstack-swift | 01:23 | |
*** annegentle has joined #openstack-swift | 01:34 | |
*** annegentle has quit IRC | 01:40 | |
*** kopparam has joined #openstack-swift | 01:41 | |
*** kopparam has quit IRC | 01:46 | |
*** kyles_ne has quit IRC | 01:53 | |
*** kyles_ne has joined #openstack-swift | 01:54 | |
*** kyles_ne has quit IRC | 01:58 | |
redbo | does that mean I get 2 votes in everything? | 02:10 |
redbo | fungi: I have no idea how I'd remove that account. | 02:12 |
fungi | redbo: not remove it from gerrit, just check the checkbox next to it in the group membership list and hit the delete button (take the redundant account out of the swift-core member list) | 02:14 |
fungi | redbo: i wasn't 100% sure which one you were using, or i'd have done it for you | 02:14 |
*** annegentle has joined #openstack-swift | 02:18 | |
redbo | fungi: I can't check those boxes and don't have a "remove" button, but feel free to remove mike-launchpad@. It's some zombie account that gerrit and launchpad conspired to create and cause me headaches. | 02:23 |
fungi | redbo: oh, sorry, just noticed swift-core was set owned by swift-release instead of self-managed. no wonder i had you confused | 02:24 |
fungi | redbo: cleaned up--thanks! | 02:25 |
redbo | for some reason my launchpad account had 2 oauth identities associated with it when gerrit started, and it could never decide which one I was. | 02:25 |
*** zigo has quit IRC | 02:27 | |
*** zigo has joined #openstack-swift | 02:29 | |
*** kopparam has joined #openstack-swift | 02:42 | |
*** kopparam has quit IRC | 02:46 | |
*** annegentle has quit IRC | 02:47 | |
*** notsogentle is now known as annegentle | 02:47 | |
*** nitika2 has quit IRC | 02:54 | |
*** kbee has joined #openstack-swift | 03:22 | |
*** nosnos has quit IRC | 03:24 | |
*** kbee has quit IRC | 03:35 | |
*** nosnos has joined #openstack-swift | 04:10 | |
*** kota_ has joined #openstack-swift | 04:15 | |
*** fifieldt_ has quit IRC | 04:42 | |
*** kota_ has quit IRC | 04:44 | |
*** kota_ has joined #openstack-swift | 04:45 | |
*** kota_ has quit IRC | 04:56 | |
*** kyles_ne has joined #openstack-swift | 04:59 | |
*** cschwede has joined #openstack-swift | 05:21 | |
*** cschwede has quit IRC | 05:24 | |
*** zaitcev has quit IRC | 05:31 | |
*** nosnos has quit IRC | 05:37 | |
*** nosnos has joined #openstack-swift | 05:38 | |
*** nosnos has quit IRC | 05:42 | |
*** nosnos has joined #openstack-swift | 05:43 | |
*** kopparam has joined #openstack-swift | 05:45 | |
*** bkopilov has quit IRC | 05:46 | |
*** cschwede has joined #openstack-swift | 05:58 | |
*** oomichi has quit IRC | 06:02 | |
*** jokke_ has quit IRC | 06:11 | |
*** ttrumm_ has joined #openstack-swift | 06:12 | |
*** hhuang has quit IRC | 06:13 | |
mattoliverau | Well I'm calling it a night! Night all. | 06:15 |
*** kyles_ne has quit IRC | 06:21 | |
*** kyles_ne has joined #openstack-swift | 06:22 | |
*** kyles_ne has quit IRC | 06:26 | |
*** hhuang has joined #openstack-swift | 06:27 | |
*** delattec has quit IRC | 07:09 | |
*** delattec has joined #openstack-swift | 07:12 | |
*** kopparam has quit IRC | 07:18 | |
*** fifieldt has joined #openstack-swift | 07:22 | |
*** joeljwright has joined #openstack-swift | 07:30 | |
*** hhuang has quit IRC | 07:33 | |
*** hhuang has joined #openstack-swift | 07:37 | |
*** geaaru has joined #openstack-swift | 07:39 | |
*** hhuang has quit IRC | 07:42 | |
*** kopparam has joined #openstack-swift | 07:46 | |
*** jistr has joined #openstack-swift | 07:46 | |
*** foexle has joined #openstack-swift | 08:00 | |
*** acoles_away is now known as acoles | 08:04 | |
*** openstackgerrit has quit IRC | 08:11 | |
*** nosnos has quit IRC | 08:23 | |
*** nellysmitt has joined #openstack-swift | 08:25 | |
*** nosnos has joined #openstack-swift | 08:40 | |
*** aix has joined #openstack-swift | 08:52 | |
*** mkollaro has joined #openstack-swift | 09:00 | |
*** aix has quit IRC | 09:00 | |
*** aix has joined #openstack-swift | 09:01 | |
*** geaaru has quit IRC | 09:11 | |
*** geaaru has joined #openstack-swift | 09:15 | |
*** oomichi_ has joined #openstack-swift | 09:16 | |
*** hhuang has joined #openstack-swift | 09:18 | |
*** kbee has joined #openstack-swift | 09:18 | |
*** ChanServ sets mode: +v cschwede | 09:20 | |
*** aix has quit IRC | 09:38 | |
*** Dafna has joined #openstack-swift | 09:40 | |
*** kbee has quit IRC | 09:42 | |
*** btorch has joined #openstack-swift | 09:43 | |
*** dosaboy_ has joined #openstack-swift | 09:43 | |
*** btorch_ has quit IRC | 09:47 | |
*** kopparam has quit IRC | 09:48 | |
acoles | cschwede: hi, you around? | 09:48 |
*** kopparam has joined #openstack-swift | 09:48 | |
*** dosaboy has quit IRC | 09:48 | |
cschwede | acoles: Hi Alistair! | 09:49 |
*** kopparam has quit IRC | 09:49 | |
acoles | cschwede: i just hit bug 1376878, assertion error in test_upload | 09:49 |
acoles | did you make any progress on a fix, i think i can see the cause | 09:50 |
cschwede | acoles: no, no fix from my side yet. i'm curious, what's the cause? | 09:51 |
*** aix has joined #openstack-swift | 09:52 | |
acoles | cschwede: service.py, line 1189 onwards, jobs to create container and segment container are now submitted to thread pools | 09:52 |
cschwede | acoles: ahh, yes, now it makes a lot of sense to me. good catch! | 09:53 |
acoles | so can occur 'out of order', but test assumes they are ordered (assert_called_with checks the last call to a method) | 09:53 |
cschwede | acoles: yes, and i was trying to use something like "has_calls" and just check if the calls are there (in any order). but didn't submit a patch yet | 09:53 |
*** kopparam has joined #openstack-swift | 09:53 | |
acoles | also the segment container job is put in a object_uu_pool ?? | 09:54 |
acoles | cschwede: my first thought was to fix the test too, but i'm not sure the behavior is correct. | 09:54 |
*** mahatic has joined #openstack-swift | 09:54 | |
cschwede | acoles: the other idea might be to ensure the segment container is created first, otherwise uploads might fail. wdyt? | 09:55 |
acoles | cschwede: the segment container create will attempt to HEAD the first container, but that HEAD could occur before the first container is PUT?? So ordering should be enforced - would you agree? | 09:55 |
cschwede | acoles: yes, agreed, that should be in the correct order | 09:56 |
acoles | the 'old' behavior was segment container second | 09:56 |
* acoles just looks to double check that | 09:56 | |
*** ttrumm has joined #openstack-swift | 09:57 | |
*** ttrumm_ has quit IRC | 09:57 | |
*** kopparam has quit IRC | 09:58 | |
*** ttrumm_ has joined #openstack-swift | 09:59 | |
acoles | cschwede: ok, the old order was container PUT, then segment container PUT, on a single thread | 10:01 |
acoles | https://github.com/openstack/python-swiftclient/blob/eedb0d4ab5f2fc6ac8b49a80cd2128edcbc5aceb/swiftclient/shell.py#L1182 | 10:01 |
cschwede | acoles: which makes sense to me | 10:01 |
*** ttrumm has quit IRC | 10:01 | |
acoles | if the first container PUT fails, then the segment container PUT is still attempted, but thats another issue! | 10:02 |
acoles | cschwede: shall I put up a patch? | 10:02 |
*** kopparam has joined #openstack-swift | 10:03 | |
cschwede | acoles: sure, add me as a reviewer and i'll have a look at it | 10:03 |
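The assert_called_with vs. has_calls distinction acoles and cschwede discuss above can be sketched with a minimal unittest.mock example (container names here are hypothetical, not the ones from the real test):

```python
from unittest import mock

conn = mock.Mock()

# Simulate the two container PUTs landing in thread-pool completion order,
# which may not match submission order:
conn.put_container("test_segments")
conn.put_container("test")

# assert_called_with only inspects the *last* call, so a test written this
# way fails whenever the segment container PUT happens to land last:
conn.put_container.assert_called_with("test")

# assert_has_calls with any_order=True accepts both calls in either order:
conn.put_container.assert_has_calls(
    [mock.call("test"), mock.call("test_segments")], any_order=True)
```

As discussed above, relaxing the test only papers over the real issue if the PUTs genuinely need to be ordered; the fix acoles proposes enforces ordering instead.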
mahatic | hi cschwede, I see that you're a core reviewer. Can you take a look at this and suggest any changes? https://review.openstack.org/#/c/125275/ | 10:03 |
acoles | cschwede: ok. btw how was your journey home? | 10:03 |
cschwede | mahatic: sure, i'll have a look at your patch! | 10:04 |
cschwede | acoles: thanks, was quite relaxed - i was a little bit worried about traffic, but security took even more time than driving to Boston ;) | 10:04 |
*** jokke_ has joined #openstack-swift | 10:04 | |
joeljwright | acoles cschwede: just spotted this discussion | 10:05 |
mahatic | cschwede, alright, thank you! will i have to add you as a reviewer there or not necessary? | 10:05 |
joeljwright | if the segment container creation job is being put in object_uu_pool then it's in the wrong place | 10:05 |
joeljwright | I'm still not happy with the object_uu and object_dd thread pools | 10:06 |
acoles | joeljwright: ok i'll move it to the container pool. i'll add you as a reviewer ok? | 10:06 |
joeljwright | but it was the simplest way to avoid deadlocks | 10:06 |
cschwede | mahatic: no need for this patch, but feel free to add me whenever you need a review | 10:06 |
joeljwright | yes that's fine, I'll review too | 10:06 |
mahatic | cschwede, okay, great. thank you. | 10:07 |
*** addnull has quit IRC | 10:12 | |
*** ttrumm has joined #openstack-swift | 10:17 | |
*** ttrumm_ has quit IRC | 10:20 | |
*** addnull has joined #openstack-swift | 10:21 | |
*** dmorita has quit IRC | 10:28 | |
*** addnull has quit IRC | 10:28 | |
*** mkollaro has quit IRC | 10:30 | |
*** joeljwright has quit IRC | 10:32 | |
cschwede | mahatic: i would like to see a test too, i could add an example to the review if you like. or want to work on your own on this? | 10:33 |
*** ttrumm_ has joined #openstack-swift | 10:33 | |
mahatic | cschwede, sure, an example would be great. I'm not quite sure on how/where to add a test. | 10:34 |
cschwede | mahatic: ok, review is online, i added a sample test and where to put it. let me know if you have questions on this | 10:35 |
*** ttrumm has quit IRC | 10:36 | |
*** kopparam has quit IRC | 10:36 | |
*** kopparam has joined #openstack-swift | 10:39 | |
*** jd__ has quit IRC | 10:41 | |
*** jd__ has joined #openstack-swift | 10:41 | |
mahatic | cschwede, great, thank you! looking at it. | 10:43 |
*** jokke__ has joined #openstack-swift | 10:47 | |
*** jokke_ has quit IRC | 10:54 | |
*** hhuang has quit IRC | 10:54 | |
*** delattec has quit IRC | 10:54 | |
*** mordred has quit IRC | 10:54 | |
cschwede | that might become quite helpful for Swift: http://permalink.gmane.org/gmane.comp.db.sqlite.general/90549 | 10:58 |
*** cdelatte has quit IRC | 10:59 | |
*** dmsimard_away is now known as dmsimard | 10:59 | |
*** bkopilov has joined #openstack-swift | 11:04 | |
*** ttrumm_ has quit IRC | 11:06 | |
*** ttrumm has joined #openstack-swift | 11:06 | |
*** hhuang has joined #openstack-swift | 11:08 | |
*** mordred has joined #openstack-swift | 11:08 | |
*** foexle has quit IRC | 11:09 | |
*** foexle has joined #openstack-swift | 11:11 | |
*** cdelatte has joined #openstack-swift | 11:31 | |
*** delattec has joined #openstack-swift | 11:31 | |
*** jistr has quit IRC | 11:32 | |
*** kopparam has quit IRC | 11:37 | |
*** joeljwright has joined #openstack-swift | 11:45 | |
*** mkollaro has joined #openstack-swift | 11:46 | |
*** jistr has joined #openstack-swift | 11:52 | |
*** cschwede has left #openstack-swift | 11:54 | |
*** jistr is now known as jistr|english | 11:54 | |
*** cschwede has joined #openstack-swift | 11:54 | |
dmsimard | What does python-swiftclient do exactly when you try to upload a file larger than 5GB ? It seems to do multiple uploads so I would guess it uploads chunks.. | 11:56 |
*** cschwede has quit IRC | 11:57 | |
*** cschwede has joined #openstack-swift | 11:57 | |
*** delattec has quit IRC | 11:58 | |
*** cdelatte has quit IRC | 11:58 | |
*** cschwede has quit IRC | 11:59 | |
*** cdelatte has joined #openstack-swift | 12:00 | |
*** cschwede has joined #openstack-swift | 12:00 | |
*** cschwede has quit IRC | 12:07 | |
*** cschwede has joined #openstack-swift | 12:08 | |
*** cschwede has quit IRC | 12:09 | |
*** kopparam has joined #openstack-swift | 12:09 | |
*** cschwede has joined #openstack-swift | 12:09 | |
*** ttrumm has quit IRC | 12:13 | |
*** ttrumm__ has joined #openstack-swift | 12:15 | |
*** cschwede has quit IRC | 12:18 | |
*** AnjuT has joined #openstack-swift | 12:18 | |
*** cschwede has joined #openstack-swift | 12:18 | |
*** cschwede has quit IRC | 12:19 | |
*** ttrumm has joined #openstack-swift | 12:21 | |
*** cschwede has joined #openstack-swift | 12:22 | |
*** ttrumm__ has quit IRC | 12:24 | |
*** cschwede has quit IRC | 12:26 | |
*** cschwede has joined #openstack-swift | 12:27 | |
*** cschwede has quit IRC | 12:27 | |
*** cschwede has joined #openstack-swift | 12:28 | |
*** ChanServ sets mode: +v cschwede | 12:29 | |
*** davdunc has quit IRC | 12:35 | |
*** fungi has left #openstack-swift | 12:39 | |
*** bsdkurt1 has quit IRC | 12:39 | |
*** NM has joined #openstack-swift | 12:40 | |
*** kopparam has quit IRC | 12:45 | |
*** kopparam has joined #openstack-swift | 12:45 | |
*** AnjuT has quit IRC | 12:47 | |
*** openstackgerrit has joined #openstack-swift | 12:49 | |
*** kopparam has quit IRC | 12:50 | |
cschwede | dmsimard: by default it does not split the object - you need to use „--segment-size“ or „-S“ to specify the segment size | 12:50 |
dmsimard | cschwede: Thanks. | 12:51 |
cschwede | dmsimard: but if you upload multiple objects, swiftclient uses multiple threads | 12:51 |
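The 5 GB single-object limit dmsimard is hitting is why `--segment-size` matters; a quick sketch of the arithmetic (the 5 GiB figure is Swift's default `max_file_size` constraint, which a given cluster may have changed):

```python
import math

# Default max single-object size in Swift (a cluster's max_file_size
# constraint may differ): 5 GiB
MAX_SINGLE_OBJECT = 5 * 1024 ** 3

def segments_needed(file_size, segment_size):
    """How many segment objects `swift upload -S <segment_size>` would create."""
    return math.ceil(file_size / segment_size)

big = 8 * 1024 ** 3                  # an 8 GiB file: too big for one PUT
assert big > MAX_SINGLE_OBJECT
# With 1 GiB segments the file becomes 8 segment objects plus a manifest:
print(segments_needed(big, 1024 ** 3))  # -> 8
```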
* cschwede is happy to have a working bouncer again | 12:52 | |
*** geaaru has quit IRC | 12:53 | |
*** geaaru has joined #openstack-swift | 13:00 | |
*** jistr|english is now known as jistr | 13:07 | |
*** miqui has joined #openstack-swift | 13:09 | |
*** kopparam has joined #openstack-swift | 13:16 | |
*** ppai has joined #openstack-swift | 13:20 | |
*** kopparam has quit IRC | 13:22 | |
*** nosnos has quit IRC | 13:24 | |
dmsimard | cschwede: How would I upload multiple files simultaneously to make use of those multiple threads ? (Feel silly asking that..) | 13:31 |
*** CaioBrentano has joined #openstack-swift | 13:35 | |
dmsimard | (I'm running into bottlenecks and trying to put the finger on it) | 13:36 |
*** mrsnivvel has quit IRC | 13:45 | |
*** oomichi_ has quit IRC | 13:46 | |
cschwede | dmsimard: for example, let’s assume you have files named obj_1, obj_2, …, obj_10. if you do a „swift upload test obj_*“ you will most likely see that these are uploaded not in the „correct“ order, because of the different upload threads. so basically you just add multiple filenames for uploading | 13:51
dmsimard | cschwede: Yeah, I kind of just started multiple "while true" uploads on different filenames to see what would happen | 13:52 |
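The behavior cschwede describes — per-object worker threads, so completion order differs from submission order — can be sketched generically with concurrent.futures (the real client uses its own threading manager; the `upload` function here is an illustrative stand-in, not swiftclient code):

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def upload(name):
    # Stand-in for a PUT request; variable latency is what makes
    # completion order differ from submission order.
    time.sleep(random.uniform(0, 0.05))
    return name

names = ["obj_%d" % i for i in range(1, 11)]
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(upload, n) for n in names]
    done_order = [f.result() for f in as_completed(futures)]

# Every object is uploaded, but likely not in obj_1..obj_10 order:
print(sorted(done_order) == sorted(names))  # prints True
```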
*** tab____ has joined #openstack-swift | 13:58 | |
*** foexle has quit IRC | 14:05 | |
*** hhuang has quit IRC | 14:12 | |
*** bkopilov has quit IRC | 14:15 | |
NM | Good morning guys. | 14:15 |
NM | Does anyone know if the memcached used by the object-expirer must be the same as the one used by the proxy servers? Or if there is any recommendation for that? | 14:16
*** kopparam has joined #openstack-swift | 14:18 | |
*** nexusz99 has joined #openstack-swift | 14:23 | |
*** kopparam has quit IRC | 14:23 | |
*** bkopilov has joined #openstack-swift | 14:29 | |
openstackgerrit | Alistair Coles proposed a change to openstack/python-swiftclient: Fix cross account upload using --os-storage-url https://review.openstack.org/125759 | 14:29 |
*** kyles_ne has joined #openstack-swift | 14:57 | |
*** bkopilov has quit IRC | 14:59 | |
*** elambert has joined #openstack-swift | 14:59 | |
*** kyles_ne has quit IRC | 15:03 | |
*** lpabon has joined #openstack-swift | 15:05 | |
mahatic | hi, when i try to do any swift operation i get this at the end of the stack trace: "pkg_resources.DistributionNotFound: python-swiftclient==2.3.2.dev1.gc93df57" | 15:06
mahatic | does it mean i will have to get a latest of python-swiftclient? | 15:06 |
*** nitika_ has joined #openstack-swift | 15:10 | |
*** bkopilov has joined #openstack-swift | 15:11 | |
cschwede | mahatic: this worked for me in the past: „easy_install --upgrade pip“ | 15:11 |
*** mrsnivvel has joined #openstack-swift | 15:13 | |
mahatic | cschwede, "easy_install --upgrade pip" gives me "pip 1.5.6 is already the active version in easy-install.pth" | 15:15 |
*** ttrumm has quit IRC | 15:19 | |
*** kopparam has joined #openstack-swift | 15:19 | |
*** ttrumm has joined #openstack-swift | 15:20 | |
*** kenhui has joined #openstack-swift | 15:24 | |
*** kopparam has quit IRC | 15:24 | |
*** mrsnivvel has quit IRC | 15:24 | |
*** bkopilov has quit IRC | 15:29 | |
openstackgerrit | paul luse proposed a change to openstack/swift: Merge master to feature/ec https://review.openstack.org/126595 | 15:29 |
notmyname | good morning | 15:31 |
*** hhuang has joined #openstack-swift | 15:32 | |
peluse | morning | 15:32 |
*** mrsnivvel has joined #openstack-swift | 15:32 | |
*** k4n0 has quit IRC | 15:33 | |
peluse | notmyname, for some reason the topics for my gerrit patches (at least for merging master to ec) seem to randomly select bugs despite my commit message indicating the swift-ec blueprint... any idea what might be happening? | 15:37
mahatic | cschwede, "python setup.py develop" this installed required dependencies | 15:37 |
mahatic | cschwede, it's working alright now | 15:37 |
*** ttrumm has quit IRC | 15:37 | |
notmyname | peluse: on https://review.openstack.org/#/c/126595/ ? | 15:43 |
*** lcurtis has joined #openstack-swift | 15:45 | |
notmyname | peluse: or are you referring to the emails that say a bug fix was proposed to feature/ec? | 15:50 |
notmyname | cschwede: ya, I think the new sqlite would be cool to check out | 15:54 |
*** hhuang has quit IRC | 15:58 | |
notmyname | torgomatic: openstack requirements are unfrozen, so it can be updated now (like new eventlet) | 15:58 |
*** kyles_ne has joined #openstack-swift | 15:58 | |
*** kyles_ne has quit IRC | 16:02 | |
*** mkollaro has quit IRC | 16:03 | |
*** gyee has joined #openstack-swift | 16:08 | |
peluse | notmyname, yes, to the link above. the 'topic' there isn't something I added, and it ends up updating the bug and thus sending out emails saying the merge to feature/ec is a proposed fix for that bug | 16:12
notmyname | peluse: I do not know by what magic the "topic" string is chosen. I thought it came from your local branch name | 16:13 |
torgomatic | I think git-review fishes out the topic string by examining your commit comment | 16:13 |
notmyname | peluse: the emails are fine (and expected), though. since you're merging one tree into another (master->ec), the bug fixes that were on master and not yet on feature/ec are now also proposed to feature/ec | 16:13 |
torgomatic | if you manually push, it's $ git push gerrit $mybranch:refs/publish/master/$topic | 16:13 |
peluse | yeah, but I specifically added a commit message to keep that from happening | 16:13 |
torgomatic | or refs/publish/feature/ec/$topic | 16:14 |
peluse | well, I meant to specify the topic. I just changed it manually on gerrit | 16:14
openstackgerrit | John Dickinson proposed a change to openstack/swift: Refer multi node install to docs.openstack.org https://review.openstack.org/93788 | 16:15 |
peluse | man, I can't find where it's coming from... just annoying is all | 16:17
peluse | notmyname, yeah, maybe the one that ends up as the "topic" is like the last one in the auto-merge process or something.... | 16:18 |
peluse | it just started with these last 2 merges though, before then the topic would be whatever I selected in the commit message.... | 16:19 |
*** zaitcev has joined #openstack-swift | 16:19 | |
*** ChanServ sets mode: +v zaitcev | 16:19 | |
notmyname | there is no coffee in my house. I must go fix that now... | 16:21 |
peluse | torgomatic, were you in on the 'approval pointer' discussion and free for 5-10 min to talk a bit about it? | 16:28 |
torgomatic | peluse: yeah, I've got some time | 16:28 |
*** jistr has quit IRC | 16:29 | |
peluse | torgomatic, cool wanna buzz me at 480 554 3688? would be faster than typing :) | 16:29 |
peluse | or I can call too, either way | 16:29 |
torgomatic | peluse: okay, give me just a minute here to go get situated | 16:29 |
peluse | np | 16:29 |
notmyname | torgomatic: peluse: if you want to 3-way dial me in too, I'm available. or not. whatever :-) | 16:30 |
peluse | ahh, thought you were out for coffee. Can do, I'll just send you guys a bridge to make it easy | 16:30 |
torgomatic | k | 16:31 |
elambert | peluse: mind if I lurk on that call? | 16:31 |
peluse | 916-356-2663, Bridge: 1, Passcode: 4650256 | 16:31 |
peluse | all are welcome | 16:31 |
peluse | well, I guess the default is 5 people, so if someone is unable to join let me know and I'll figure out how to change it | 16:33
peluse | notmyname, coming? | 16:35 |
*** marcusvrn_ has joined #openstack-swift | 16:38 | |
*** zigo has quit IRC | 16:43 | |
*** zigo has joined #openstack-swift | 16:46 | |
notmyname | peluse: ah, sorry. just walked to get coffee | 16:46 |
*** mahatic has quit IRC | 16:51 | |
*** NM has quit IRC | 16:55 | |
*** lcurtis has quit IRC | 16:58 | |
peluse | call over -- thanks guys!! | 17:05 |
notmyname | peluse: sorry, I had stepped away before you said something about a bridge. thanks for doing that | 17:06 |
*** NM has joined #openstack-swift | 17:06 | |
notmyname | reminder to those interested, the openstack TC nominations are open. if you want to run, you need to send an email to the mailing list. http://lists.openstack.org/pipermail/openstack-dev/2014-October/047749.html | 17:11 |
zaitcev | too much work... | 17:12 |
*** mahatic has joined #openstack-swift | 17:13 | |
*** kyles_ne has joined #openstack-swift | 17:16 | |
*** alexpec has joined #openstack-swift | 17:20 | |
*** alexpec has left #openstack-swift | 17:20 | |
*** kopparam has joined #openstack-swift | 17:21 | |
NM | Does anyone know if the memcached used by the object-expirer must be the same as the one used by the proxy servers? Or if there is any recommendation for that? | 17:21
*** kopparam has quit IRC | 17:25 | |
notmyname | NM: the reason it has a separate config file is because it instantiates an InternalClient (which is basically a simplified in-memory proxy server). therefore, if you share the memcache pool, then the internal client will be able to take advantage of the stuff the proxy has cached (account and container info, basically). but there is no hard requirement on that | 17:28 |
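notmyname's point — the expirer's InternalClient benefits from sharing the proxy's memcache pool — comes down to the cache filter in the expirer's own pipeline. A minimal object-expirer.conf sketch (server addresses and the interval value are illustrative):

```
[DEFAULT]

[object-expirer]
# how often to scan for expired objects
interval = 300

[pipeline:main]
pipeline = catch_errors cache proxy-server

[app:proxy-server]
use = egg:swift#proxy

[filter:cache]
use = egg:swift#memcache
# Point at the same memcached pool the real proxies use, so the
# InternalClient can reuse cached account/container info. Not required,
# but per the discussion above it avoids redundant lookups.
memcache_servers = 10.0.0.1:11211,10.0.0.2:11211

[filter:catch_errors]
use = egg:swift#catch_errors
```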
*** kopparam has joined #openstack-swift | 17:28 | |
NM | notmyname: Do you think this sentence on the docs should be reviewd? "Only the proxy server uses memcache." | 17:31 |
notmyname | NM: depends if we care about it being correct or not ;-) | 17:32 |
*** kopparam has quit IRC | 17:33 | |
notmyname | NM: heh, as of about an hour ago, that entire file that had that sentence in it gone removed ;-) | 17:38 |
notmyname | s/gone/got/ | 17:38 |
notmyname | NM: https://review.openstack.org/#/c/93788/ | 17:39 |
*** kyles_ne has quit IRC | 17:39 | |
*** kyles_ne has joined #openstack-swift | 17:40 | |
NM | notmyname: Thanks! https://review.openstack.org/#/q/owner:tom%2540openstack.org+status:open,n,z seems to be pretty straightforward: "was so horribly outdated | 17:42 |
NM | " :D | 17:42 |
NM | s/big_wrong_string/Tom Fified/ | 17:42 |
notmyname | eh | 17:42 |
notmyname | heh | 17:42 |
notmyname | yeah, fifieldt has got it right there :-) | 17:43 |
notmyname | hurricanerix: ping | 17:43 |
hurricanerix | notmyname: hey | 17:43 |
notmyname | hurricanerix: re your metadata patch. in functional/test_account.py what is test_bad_metadata3() testing? | 17:44 |
*** kyles_ne has quit IRC | 17:44 | |
*** geaaru has quit IRC | 17:44 | |
*** kyles_ne has joined #openstack-swift | 17:44 | |
hurricanerix | notmyname: basically, once the fix was in place, the func tests failed because part of the test would fill the account metadata to its max constraints. | 17:45
openstackgerrit | OpenStack Proposal Bot proposed a change to openstack/python-swiftclient: Updated from global requirements https://review.openstack.org/89250 | 17:45 |
hurricanerix | so the next test would try to add more, and fail when it asserted it would work. | 17:45 |
openstackgerrit | OpenStack Proposal Bot proposed a change to openstack/swift: Updated from global requirements https://review.openstack.org/88736 | 17:45 |
notmyname | hurricanerix: yeah, I get that. just looking at that specific function | 17:45 |
zaitcev | I mentioned that in comments but my -1 got lost in reuploads | 17:46
zaitcev | Just run SWIFT_TEST_IN_PROCESS=1 ./.functests, it's enough to trigger | 17:46 |
hurricanerix | oh, let me look, i didn't change the tests, i just broke them up so that the tearDown would clean up and allow the tests to pass | 17:46 |
notmyname | hurricanerix: the first check for 204 is to ensure that exactly the metadata max size works? | 17:47 |
zaitcev | Thanks to Portante you don't need to set up the whole functests thing anymore. | 17:47 |
*** echevemaster has joined #openstack-swift | 17:47 | |
notmyname | zaitcev: what are you triggering? | 17:47 |
zaitcev | notmyname: metadata overflow | 17:47 |
notmyname | zaitcev: yeah, hurricanerix got that taken care of in the patch that landed. split the tests up so it cleans up properly | 17:48 |
hurricanerix | notmyname: it looks like it is adding metadata to MAX_META_OVERALL_SIZE (should be 4096), and verifies that it is successful, then tries to add one more key, which would make it exceed that value. | 17:48 |
notmyname | I'm investigating backporting it to icehouse. | 17:48 |
hurricanerix | *up to | 17:48 |
zaitcev | oh | 17:48 |
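The overflow scenario hurricanerix describes (fill the account metadata exactly to the limit, then one more key fails) can be modeled with simple arithmetic; the constant matches Swift's default, but the size accounting here (`len(name) + len(value)` per item) is a simplification of the real `swift.common.constraints.check_metadata` logic:

```python
# Default total-metadata-size constraint in Swift
MAX_META_OVERALL_SIZE = 4096

def overall_size(metadata):
    # Simplified model: each item contributes its name length plus its
    # value length toward the overall limit.
    return sum(len(k) + len(v) for k, v in metadata.items())

# Fill exactly to the limit: the POST succeeds (the 204 the test asserts)...
meta = {"a": "v" * 4095}
assert overall_size(meta) == MAX_META_OVERALL_SIZE

# ...then one more key pushes it over, and the proxy rejects the POST
meta["b"] = "x"
assert overall_size(meta) > MAX_META_OVERALL_SIZE
```

This is also why the tests leak state into each other: metadata left at the limit by one test makes the next test's "this POST should succeed" assertion fail unless a tearDown cleans it up.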
notmyname | hurricanerix: ah, thanks. just confirming I was reading it right. I'm getting 2 errors on my backport. one there and one in _bad_metadata2() where it's checking for just the max number of keys | 17:50
hurricanerix | notmyname: ahh. you could check if your account you are running it on has tempurl keys set | 17:50 |
hurricanerix | because the test assumes they are not set. | 17:51 |
hurricanerix | need to look at fixing it so those get cleaned up, or removed before the tests run to prevent that from causing problems. | 17:51 |
notmyname | hurricanerix: looks like test_bad_metadata2 removes them for that test | 17:51 |
hurricanerix | notmyname: yeah, because they were causing that test to fail. it's not a very good solution though because it assumes that test_bad_metadata2 runs before it. | 17:53 |
*** kenhui has quit IRC | 17:53 | |
hurricanerix | notmyname: i think unittest runs them in order, but one test probably should not be dependent on another one i would think. | 17:54 |
notmyname | hurricanerix: yeah. I'm getting fun* results | 17:54 |
notmyname | *not fun | 17:55 |
hurricanerix | notmyname: also, wasn't sure if you were just running test_bad_metadata3 | 17:55 |
notmyname | running the whole TestAccount class and both 2 and 3 fail. reset and ran them individually and whichever I run first passes and the other fails | 17:55
hurricanerix | notmyname: if you reset and run one of them, then head the account, do you have any metadata still set? | 17:56 |
hurricanerix | notmyname: maybe it is not all getting cleaned up like i thought it was. | 17:56 |
swift_fan | Hi -- does anyone have experience using the "balance = source" configuration in HAProxy ? | 17:56 |
swift_fan | I am wondering if, | 17:57 |
swift_fan | I were to upload lots of big files from one machine, to my Swift cluster | 17:57 |
notmyname | hurricanerix: yup. that was my next check. and it's not getting cleaned up | 17:57 |
swift_fan | Even if HAProxy is inclined towards sending all of the files to one particular Swift proxy server (when the HAProxy configuration is set to "source") | 17:58 |
swift_fan | will it start to spread out some of the load to maybe some of the other Swift proxies ? | 17:58 |
torgomatic | swift_fan: you're asking if, should your load balancer fail to balance the load, will Swift then balance the load itself? | 18:00 |
swift_fan | (when it sees that a lot of big files are being uploaded by just one machine outside the cluster) | 18:00 |
torgomatic | it will not; Swift proxies talk to storage nodes, not to other proxies. | 18:00 |
swift_fan | torgomatic : Possibly, but also whether HAProxy does so as well. | 18:00 |
notmyname | swift_fan: yeah, haproxy will spread it out based on the source IP when it's set to balance=source. see https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts and https://swiftstack.com/blog/2013/06/10/configuring-haproxy-as-a-swiftStack-load-balancer/ | 18:00 |
torgomatic | swift_fan: I can't speak to haproxy's behavior, but Swift does not include any method for moving load between proxy servers. | 18:01 |
hurricanerix | notmyname: strange, was the cleanup stuff added to the teardown after icehouse? | 18:01 |
swift_fan | notmyname : I know that HAProxy will choose a destination server and try to stick with it, but what happens if there is a particular server that's sitting outside the Swift cluster, that wants to back up a LOT of data at once? | 18:01 |
notmyname | hurricanerix: not sure | 18:01 |
mahatic | notmyname, hello, did you happen to finish your todo list yesterday? :) | 18:02 |
notmyname | swift_fan: then you'll probably not want to use balance=source | 18:02 |
swift_fan | notmyname : Will HAProxy (which is set to "balance = source") still try to send all of that data to just one Swift node ? | 18:02 |
swift_fan | notmyname : In such an "extreme" case ? | 18:02 |
notmyname | mahatic: almost. I'm working on a backport now and then I should be able to look at OPW stuff | 18:02 |
mahatic | notmyname, okay | 18:03 |
swift_fan | notmyname : Usually, I would prefer HAProxy "balance = source", but I'm concerned about what HAProxy can do in the lots-of-large-objects coming from just one server, scenario .......... | 18:03 |
notmyname | swift_fan: the haproxy docs say that if you set balance=source then each client will go to the same backend (swift proxy in this case). so I'm not sure what the question is | 18:04 |
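The trade-off being discussed — `balance source` pins each client IP to one Swift proxy, which is exactly wrong for a single heavy uploader — looks like this in a minimal HAProxy sketch (names and addresses illustrative):

```
# Minimal sketch of HAProxy in front of three Swift proxies
frontend swift_in
    bind *:8080
    default_backend swift_proxies

backend swift_proxies
    # balance source: hash of the client IP picks the backend, so one
    # busy client is pinned to a single proxy server
    balance source
    # balance roundrobin would instead spread that client's requests
    # across all three proxies
    server proxy1 192.168.1.11:8080 check
    server proxy2 192.168.1.12:8080 check
    server proxy3 192.168.1.13:8080 check
```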
swift_fan | torgomatic : You said that Swift proxies don't talk to other Swift proxies, but is that always the case ? For instance, from the tcpdump utility, I think I was able to see the servers talking to each other for the account,container,object replication services ........... | 18:04 |
hurricanerix | notmyname: i don't see any tearDown in the icehouse code, and i am pretty sure that is where it is getting cleaned up. https://github.com/openstack/swift/blob/stable/icehouse/test/functional/test_account.py | 18:04 |
notmyname | swift_fan: use haproxy to balance client requests going to swift proxy servers. the swift proxy will choose the appropriate storage nodes in the cluster, but that's completely orthogonal to client load balancing | 18:05 |
swift_fan | notmyname : I have 3 Swift nodes | 18:05 |
swift_fan | notmyname : Each has the proxy,account,container,object services. | 18:05 |
notmyname | hurricanerix: yeah, looks like I should add the tearDown to the icehouse backport | 18:06 |
torgomatic | swift_fan: you are conflating the idea of "server" (an OS image with a bunch of processes in it) with "Swift proxy server" (/usr/bin/swift-proxy-server) | 18:07 |
torgomatic | Swift proxy servers do not talk to one another. | 18:08 |
* portante they are a funny lot | 18:08 | |
*** Trixboxer has joined #openstack-swift | 18:09 | |
swift_fan | torgomatic notmyname : So, can an object be downloaded from outside the cluster, even if the Swift cluster hasn't made 3 replicas of an uploaded object yet ? | 18:09 |
torgomatic | swift_fan: yes | 18:09 |
swift_fan | torgomatic notmyname : Or, however many replicas it aims to create. | 18:09 |
notmyname | hurricanerix: yeah, thanks. looks like adding the setup and teardown makes it work. running full tests now and will propose if it works | 18:09 |
swift_fan | torgomatic notmyname : How ? | 18:10 |
swift_fan | torgomatic notmyname : What if the Swift proxy server that receives the download request is not able to find the requested object in one of the locations it needs to be replicated in ? | 18:10 |
hurricanerix | notmyname: nice, glad i could help. | 18:11 |
torgomatic | swift_fan: it looks in another location | 18:11 |
notmyname | important-to-know summary of current happenings in the openstack TC: http://www.openstack.org/blog/2014/10/openstack-technical-committee-update-2/ | 18:12 |
swift_fan | torgomatic notmyname : Ah, I see. So what you're saying is that when a new object is first uploaded, and immediately downloaded by a client, the Swift proxy will first look at a particular node for the requested object, | 18:14 |
swift_fan | torgomatic notmyname : Then will try another node, | 18:14 |
*** aix has quit IRC | 18:14 | |
notmyname | hurricanerix: torgomatic: acoles: https://review.openstack.org/#/c/126645/ <-- metadata backport to icehouse | 18:14 |
swift_fan | torgomatic notmyname : Until it has either found the object or exhausted all the possible nodes, | 18:14 |
swift_fan | torgomatic notmyname : Right ? | 18:14 |
notmyname | swift_fan: ya, basically. but instead of just one location for the first lookup, there are <replica count> possibilities. if it isn't found in any of those locations, then it looks in the rest of the cluster (up to a limit of nodes--you don't want to look on 10000 drives for every 404) | 18:16 |
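The lookup order notmyname describes (the <replica count> primary locations first, then a bounded number of extra nodes rather than the whole cluster) can be sketched roughly as follows. The function names and node structure here are illustrative only, not Swift's actual internals:

```python
# Illustrative sketch of the GET node ordering described above:
# try the <replica count> primary nodes first, then fall back to a
# bounded number of handoff nodes instead of scanning every drive.
# Names and structures are hypothetical, not Swift's real code.

def nodes_to_try(primaries, handoffs, max_extra=3):
    """Yield primary nodes first, then at most max_extra handoffs."""
    for node in primaries:
        yield node
    for node in handoffs[:max_extra]:
        yield node

def get_object(primaries, handoffs, fetch):
    """Return the first successful response, or None (i.e. a 404)."""
    for node in nodes_to_try(primaries, handoffs):
        obj = fetch(node)
        if obj is not None:
            return obj
    return None
```

The `max_extra` cap is the "up to a limit of nodes" part: without it, every miss would mean probing the entire cluster.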
swift_fan | notmyname : Okay. But if it's not a newly uploaded object, just an update to an existing one -- if a client requests to download that updated object, the Swift proxy that receives the download request will just return the first instance of the requested object that it can find ... ? | 18:17 |
swift_fan | notmyname torgomatic : Even if the particular object/file replica that the Swift proxy retrieves hasn't been updated to the new version, yet ? | 18:18 |
notmyname | swift_fan: correct. and that might happen when you have hardware failures in your cluster (eg servers or drives being down). swift doesn't require all the servers to be up in order to respond to a request. this is called eventual consistency | 18:20 |
swift_fan | notmyname : Okay, thanks. Where in the code can I find how the Swift proxy server communicates with the Swift storage services ? (so that I can also verify what torgomatic said about the Swift proxies never being able to communicate with each other.) | 18:23 |
swift_fan | as in, with other Swift proxies. | 18:23 |
swift_fan | (No communications via Swift proxy <-----> Swift proxy) ? | 18:23 |
notmyname | swift_fan: to make an analogy to biology, I feel like you've jumped from "what color are your eyes" straight into "show me the specific gene that controls my eye color" | 18:24 |
notmyname | swift_fan: point is, the answer to "where in the code" is https://github.com/openstack/swift/tree/master/swift/proxy but that's not a small set of code | 18:25 |
swifterdarrell | swift_fan: if you're going to read the code anyway, why didn't you just start there? ;) | 18:26 |
*** andreia_ has joined #openstack-swift | 18:28 | |
*** kopparam has joined #openstack-swift | 18:29 | |
notmyname | this looks interesting, for those of you who don't mind conferences http://events.linuxfoundation.org/events/vault/program/cfp | 18:29 |
notmyname | peluse: ^^ | 18:29 |
lpabon | notmyname: that's the conference i was supposed to send you :-) | 18:31 |
notmyname | lpabon: :-) | 18:31 |
swift_fan | notmyname swifterdarrell : Sorry, what I meant to ask was where I could find the logic in the code that shows how the cluster retrieves+delivers a download request ? | 18:31 |
swift_fan | notmyname swifterdarrell : Not necessarily to modify it, but hopefully to gather some more insights about this. Thanks | 18:32 |
notmyname | swift_fan: same place. https://swiftstack.com/blog/2013/02/12/swift-for-new-contributors/ has some general pointers as to the high-level data flow in the code. that's where you should start | 18:32 |
*** kopparam has quit IRC | 18:33 | |
swift_fan | notmyname torgomatic swifterdarrell : If the Swift proxies never communicate with the other Swift proxies -- then how does each Swift proxy receive updates on the rings (so that they know where certain objects and containers are stored) ? | 18:37 |
swift_fan | newest updates* on the rings. | 18:37 |
notmyname | swift_fan: that's the kind of thing talked about in http://docs.openstack.org/developer/swift/admin_guide.html (tl;dr you do it just like you do config management on your servers) | 18:38 |
swift_fan | notmyname : config management ? | 18:40 |
*** mrsnivvel has quit IRC | 18:41 | |
swift_fan | notmyname : Well I mean besides the part where the Swift administrator copies the rings to each Swift proxy-server upon initial installation/configuration of the Swift cluster, | 18:42 |
swift_fan | notmyname : how do the Swift proxy servers then continue to keep the most updated set of rings, when there isn't any communication between the Swift proxies ? | 18:42 |
swifterdarrell | swift_fan: chef, puppet, home-grown system, etc... | 18:44 |
swift_fan | swifterdarrell : You mean that Swift itself doesn't have a way for managing the rings ? | 18:45 |
notmyname | managing != deploying | 18:46 |
swift_fan | swifterdarrell : How do the Swift proxies know where newly uploaded objects are located (within the cluster), then ? | 18:46 |
swifterdarrell | swift_fan: well, it has a way for managing the rings (swift-ring-builder) but not for distributing them | 18:46 |
swifterdarrell | swift_fan: by using the ring | 18:46 |
swift_fan | referring to a cluster that's been up and running for a while, not one that's just been deployed. | 18:46 |
swifterdarrell | swift_fan: to jump a question or two ahead: say you have 3 replicas; you set a "min part hours" >= the longest swift-object-replicator cycle time in your cluster; the swift-ring-builder won't move more than one of a partition's replicas per "min part hours" | 18:48 |
*** andreia_ has quit IRC | 18:48 | |
swifterdarrell | swift_fan: and that maintains availability during the course of data shuffling (via the swift-object-replicator) as capacity gets added and partitions moved around | 18:49 |
*** shri has joined #openstack-swift | 18:49 | |
peluse | notmyname, cool - yeah I think I saw something about that one. Didn't notice the 'suggested' topics including Swift and SDS though.... thanks | 18:49 |
swifterdarrell | swift_fan: if 1 of 3 replicas is in the wrong place, that's no big deal for GET requests; they'll get served from the other 2; new PUTs will go to the new 3 correct locations and the replicator will sort out that that orphaned 3rd location can be reaped | 18:50 |
*** brnelson has joined #openstack-swift | 18:50 | |
zackmdavis | swift_fan, re "how do proxies know where newly uploaded objects are located", if it wasn't already clear, you don't need a whole new ring to place and retrieve new objects | 18:51 |
swift_fan | zackmdavis swifterdarrell : I know that the cluster doesn't need a whole new ring, but it needs to update the ring, though, right? | 18:52 |
swifterdarrell | swift_fan: this might help https://swiftstack.com/blog/2012/04/09/swift-capacity-management/ | 18:53 |
swifterdarrell | swift_fan: I'm not 100% sure what you're asking | 18:53 |
swifterdarrell | swift_fan: swift-proxy-server notices new rings on disk automatically | 18:54 |
*** andreia_ has joined #openstack-swift | 18:55 | |
swift_fan | zackmdavis swifterdarrell : So is it on the Swift proxy server where the rings are stored ? | 18:57 |
notmyname | swift_fan: the ring is a file on disk | 18:58 |
notmyname | and yes, it's on the proxy server machines | 18:58 |
swift_fan | notmyname : a disk on the Swift proxy servers? | 18:58 |
openstackgerrit | A change was merged to openstack/python-swiftclient: Add tests for account listing using --lh switch https://review.openstack.org/125402 | 18:58 |
notmyname | yes. to make it simple, yes it's a file on disk on all of the boxes in your swift cluster | 18:58 |
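swifterdarrell's earlier point that "swift-proxy-server notices new rings on disk automatically" comes down to re-checking the ring file's modification time. A minimal sketch of that mtime-based reload pattern (the class and method names are illustrative, not Swift's actual implementation):

```python
import os

class RingLoader:
    """Reload a serialized ring file only when its mtime changes.

    A sketch of the 'notices new rings on disk' behavior: config
    management drops a new ring file in place, and the daemon picks
    it up on the next check. Hypothetical names, not Swift's code.
    """

    def __init__(self, path):
        self.path = path
        self.mtime = None
        self.ring = None

    def get_ring(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self.mtime:          # file was replaced: reload it
            with open(self.path, 'rb') as f:
                self.ring = f.read()     # real code would deserialize here
            self.mtime = mtime
        return self.ring
```

This is why no proxy-to-proxy channel is needed: distribution is just copying a file, and each daemon notices the change independently.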
zackmdavis | swift_fan, no, the ring specifies how to compute where objects should live; it doesn't explicitly store a record of, "foo.jpg lives on partitions bar and quux", but rather specifies a mathematical relationship so that given an object like "foo.jpg", you can derive that it's supposed to be on the "bar" and "quux" partitions, so the ring itself only needs to be updated when the cluster changes (new nodes, drives decomissioned, &c.) not when | 18:59 |
zackmdavis | the data stored in the cluster changes | 18:59 |
*** occup4nt is now known as occupant | 18:59 | |
zackmdavis | I wrote a little bit about this earlier this year http://zackmdavis.net/blog/2014/01/consistent-hashing/ | 19:00 |
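zackmdavis's point, that placement is computed from the object's name rather than stored per-object, can be shown with a tiny consistent-hashing sketch. This is simplified: real Swift also mixes per-cluster hash prefix/suffix salts into the path before hashing, and `PART_POWER` is fixed when the ring is built.

```python
import hashlib
import struct

PART_POWER = 10  # 2**10 partitions; real clusters pick this at ring-build time

def partition_for(account, container, obj):
    """Derive an object's partition from a hash of its path.

    Given "foo.jpg", anyone holding the same ring parameters computes
    the same partition -- no lookup table of object locations exists.
    Simplified sketch; not Swift's exact hashing code.
    """
    path = '/%s/%s/%s' % (account, container, obj)
    digest = hashlib.md5(path.encode('utf-8')).digest()
    # use the top bits of the hash as the partition number
    return struct.unpack('>I', digest[:4])[0] >> (32 - PART_POWER)
```

Because the mapping is pure computation, the ring only changes when the cluster changes (drives added or removed), never when data is written.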
swifterdarrell | swift_fan: this is pretty sweet, you should totally read it! http://docs.openstack.org/developer/swift/overview_ring.html | 19:01 |
swift_fan | notmyname zackmdavis swifterdarrell : I think zackmdavis's response hit the spot! :) | 19:01 |
swift_fan | Sorry if it wasn't clear what exactly I was trying to ask ..... | 19:02 |
swift_fan | But anyways, yeah, that's exactly what I needed. | 19:02 |
*** nexusz99 has quit IRC | 19:03 | |
swift_fan | notmyname zackmdavis swifterdarrell : Thanks for all the helpful resources along the way :) | 19:03 |
*** acoles is now known as acoles_away | 19:09 | |
*** tdasilva has joined #openstack-swift | 19:09 | |
openstackgerrit | OpenStack Proposal Bot proposed a change to openstack/python-swiftclient: Updated from global requirements https://review.openstack.org/89250 | 19:09 |
swift_fan | This is somewhat of a loadbalancing question, but | 19:10 |
swift_fan | in the /etc/haproxy/haproxy.cfg file here : http://paste.ubuntu.com/8516575/ | 19:10 |
swift_fan | listen swift_proxy_cluster | 19:11 |
swift_fan | bind <Virtual IP>:8080 | 19:11 |
swift_fan | balance source | 19:11 |
swift_fan | option tcplog | 19:11 |
swift_fan | option tcpka | 19:11 |
swift_fan | server controller1 10.0.0.1:8080 check inter 2000 rise 2 fall 5 | 19:11 |
swift_fan | server controller2 10.0.0.2:8080 check inter 2000 rise 2 fall 5 | 19:11 |
swift_fan | what is | 19:11 |
*** Nadeem has joined #openstack-swift | 19:11 | |
swift_fan | "option tcplog" | 19:11 |
swift_fan | "option tcpka" | 19:12 |
swift_fan | and | 19:12 |
swift_fan | each "check inter 2000 rise 2 fall 5" | 19:12 |
swift_fan | ? | 19:12 |
notmyname | swift_fan: here's the first one: http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-option%20tcpka | 19:13 |
notmyname | swift_fan: I'm pretty sure you could find the others there too | 19:13 |
ppai | swift_fan, http://www.haproxy.org/download/1.3/doc/configuration.txt | 19:13 |
ppai | swift_fan, you'll find all haproxy config there | 19:14 |
zackmdavis | swift_fan, I usually find search engines, such as the ever-popular Google, useful for answering these questions. In this case, I found the same documentation that notmyname just linked by searching for _tcplog openstack_, which led me to this page http://docs.openstack.org/high-availability-guide/content/ha-aa-haproxy.html which itself links to the HAProxy docs | 19:15 |
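For reference, here is the pasted listen block again with the three directives asked about annotated inline (meanings per the HAProxy configuration manual linked above; the `<Virtual IP>` placeholder is from the original paste):

```
listen swift_proxy_cluster
    bind <Virtual IP>:8080
    balance source            # pin each client IP to the same backend proxy
    option tcplog             # log connections in HAProxy's detailed TCP format
    option tcpka              # enable TCP keepalives on both client and server sides
    # "check inter 2000 rise 2 fall 5" on each server line means:
    #   health-check the server every 2000 ms; 2 consecutive passing
    #   checks mark it up, 5 consecutive failures mark it down
    server controller1 10.0.0.1:8080 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.2:8080 check inter 2000 rise 2 fall 5
```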
openstackgerrit | OpenStack Proposal Bot proposed a change to openstack/python-swiftclient: Updated from global requirements https://review.openstack.org/89250 | 19:17 |
notmyname | I propose we push back tomorrow's Swift meeting by 30 minutes. I've got a conflict and won't be available until 1930 UTC | 19:19 |
notmyname | mattoliverau: acoles_away: cschwede: ^^ (since you are all asleep now) | 19:21 |
cschwede | notmyname: ok, fine with me. not completely asleep yet ;) | 19:22 |
notmyname | cschwede: and by asleep I mean you probably shouldn't be lurking in IRC right now ;-) | 19:22 |
notmyname | except for mattoliverau. he better be asleep now ;-) | 19:22 |
cschwede | notmyname: yeah, i probably should do other things at this time. but then, swift keeps me up atm ;) | 19:23 |
notmyname | cschwede: yeah, I can sympathize. | 19:24 |
openstackgerrit | A change was merged to openstack/swift: Refer multi node install to docs.openstack.org https://review.openstack.org/93788 | 19:27 |
*** HenryG has quit IRC | 19:28 | |
swift_fan | cschwede notmyname : What do you mean swift keeps you up ? | 19:28 |
swift_fan | Because it's fun, or because there are lots of tasks to do? | 19:28 |
notmyname | yes | 19:28 |
swift_fan | (or both) | 19:28 |
swift_fan | What made you choose to work on Swift ? | 19:29 |
swift_fan | as opposed to say, | 19:29 |
swift_fan | another cluster file system ? | 19:29 |
swift_fan | such as Ceph | 19:29 |
*** kopparam has joined #openstack-swift | 19:30 | |
notmyname | I work on swift because I'm paid to. but I enjoy working on swift because I get to work with talented devs, solve hard problems, and build a storage engine that IMO has a real possibility of storing a large part of the world's data | 19:33 |
*** kyles_ne has quit IRC | 19:34 | |
*** kopparam has quit IRC | 19:34 | |
notmyname | (that's the best 2-line summary of the last 5 years of my life I could come up with off the top of my head) | 19:34 |
swift_fan | notmyname : You are employed by Rackspace, then? | 19:34 |
*** kyles_ne has joined #openstack-swift | 19:34 | |
*** kyles_ne has quit IRC | 19:34 | |
*** kyles_ne has joined #openstack-swift | 19:35 | |
notmyname | no. I used to work for rackspace. for the last 2+ years I've worked at swiftstack | 19:35 |
zackmdavis | woo SwiftStack | 19:35 |
swift_fan | notmyname : Just curious, but what does the term "data management" mean to you ? | 19:35 |
notmyname | swift_fan: not much. ie taken alone it's sorta meaningless | 19:35 |
swift_fan | notmyname : How so? | 19:37 |
swift_fan | (as in, why meaningless) | 19:37 |
notmyname | "data management" doesn't tell you anything about any actual use case or problem being solved. therefore the phrase is general enough to apply to everything and nothing. | 19:39 |
swift_fan | notmyname : How often in your everyday-work do you have to consider the actual use case / application that your storage systems are supporting ? | 19:41 |
notmyname | I can't imagine not considering that | 19:42 |
swift_fan | notmyname : For instance, when setting up and maintaining Swift clusters, do you consider the end application, often? | 19:42 |
notmyname | otherwise what in the world are we even doing here? | 19:42 |
swift_fan | notmyname : Well, anything you do can work for a very general case, right ? | 19:42 |
swift_fan | notmyname : For instance, setting up and maintaining a Swift cluster. | 19:42 |
notmyname | there are two sets of users: the deployers running a cluster and the applications/users talking to it. both are always considered | 19:42 |
swift_fan | notmyname : Pretty much any application that I can think of, that requires storing data, can utilize it. | 19:43 |
swift_fan | notmyname : Why, aside from object storage, is the cloud *block* storage as well ? | 19:46 |
openstackgerrit | A change was merged to openstack/swift: Merge master to feature/ec https://review.openstack.org/126595 | 19:46 |
swift_fan | notmyname : I've heard that it's faster, but is it really worth it for application designers to try to manage things on the *block* level ? | 19:46 |
notmyname | swift_fan: are you asking out of curiosity or are you looking for something specific to an app you're writing? | 19:47 |
swift_fan | notmyname : I've been wanting to ask this for a while. | 19:49 |
swift_fan | notmyname : Mostly out of curiosity, but I imagine it could be very useful in the future. | 19:49 |
openstackgerrit | Christian Schwede proposed a change to openstack/swift: Add a reference to the OpenStack security guide https://review.openstack.org/126709 | 19:54 |
notmyname | swift_fan: I'd suggest the following: https://dl.dropboxusercontent.com/u/21194/distributed-object-store-principles-of-operation.pdf and https://www.youtube.com/watch?v=Og0BHTMH66M and basically anything at https://www.google.com/#q=object+storage+vs+block+storage | 19:56 |
swift_fan | notmyname : Ok, thanks. | 19:56 |
notmyname | and I apologize for giving you a link to a google search, but there's a ton of info out there already that you'll be able to peruse at your own speed | 19:56 |
swift_fan | ok, no worries!! | 19:57 |
*** HenryG has joined #openstack-swift | 20:06 | |
swift_fan | notmyname : Does SwiftStack support analytics applications on top of SwiftStack's stored data ? | 20:14 |
swift_fan | notmyname : Or is cloud *block* storage more suited for that ? | 20:14 |
notmyname | swift_fan: what do you mean by "analytics applications"? | 20:15 |
notmyname | so "could you store analytics data in swift?" yes, definitely. "is swift good for every use case of storing analytics data?" probably not | 20:16 |
swift_fan | notmyname : I was trying to ask whether SwiftStack does analytics (e.g., data mining), or supports analytics, on the data that it stores ? | 20:17 |
zaitcev | search for ZeroVM, that's probably the best bet today | 20:18 |
notmyname | swift_fan: ah. a different question | 20:18 |
notmyname | swift_fan: yes, swiftstack's product does include both time-series metrics about what's going on in the cluster and utilization info on a per-account basis. | 20:19 |
notmyname | swift_fan: but note that swiftstack != swift | 20:19 |
swift_fan | notmyname : Ok. But do SwiftStack customers ever store the data that they use for data analytics, on SwiftStack ? | 20:21 |
swift_fan | notmyname : Or is SwiftStack mainly used for backup+archive purposes ? | 20:21 |
swift_fan | notmyname : Or do I have my facts wrong? (e.g., backup+archive doesn't necessarily mean it's not used for data analytics?) | 20:21 |
notmyname | swift_fan: if you're interested in swiftstack, this isn't really the place for it. you can go to swiftstack.com and get a free trial. but this channel is for swift dev work, and we try to leave product-pitches out :-) | 20:23 |
zaitcev | Do Red Hat's customers ever store data on Red Hat. No, they store it in Swift. SwiftStack is a company and they offer a product that manages OpenStack Swift, but do not store the data themselves. That is the mental model I have anyway. | 20:24 |
notmyname | zaitcev: correct | 20:24 |
swift_fan | notmyname zaitcev : So, SwiftStack doesn't have data centers that store customer data ?? | 20:25 |
swifterdarrell | swift_fan: didn't notmyname just say this wasn't the place to talk about that? | 20:26 |
*** cdelatte has quit IRC | 20:26 | |
swifterdarrell | swift_fan: specifically, "if you're interested in swiftstack, this isn't really the place for it." | 20:26 |
swift_fan | swifterdarrell : This was the last question I had about that, since it was brought up. | 20:27 |
swifterdarrell | swift_fan: cloud block storage will have different performance characteristics vs. cloud object storage, different consistency/availability guarantees, as well as different scaling characteristics | 20:29 |
swift_fan | swifterdarrell : I'm still a little uneasy about this concept of "eventual consistency". | 20:30 |
*** kopparam has joined #openstack-swift | 20:30 | |
swift_fan | swifterdarrell : Just the POSIX concept of "strong" consistency seems to make a lot more sense. | 20:31 |
swift_fan | swifterdarrell : It seems kind of like, once you have eventual consistency, in a way you're losing the fidelity of the data. | 20:31 |
swift_fan | swifterdarrell : Sort of like, (in a sense), losing the "true nature" of it. | 20:31 |
swifterdarrell | swift_fan: Sometimes. But sometimes it makes a lot of sense to be able to keep using your system when a switch dies or you otherwise get a network partition | 20:32 |
swift_fan | swifterdarrell : Do you have any scenarios off the top of your head where it makes more sense to "keep using your system"? | 20:35 |
swift_fan | in the case of a dead switch, or network partition, etc | 20:35 |
*** kopparam has quit IRC | 20:35 | |
swift_fan | swifterdarrell : It still seems dangerous to try to do anything, when you're not observing the most updated copy of your data. | 20:36 |
*** byeager_away has quit IRC | 20:38 | |
*** aerwin has joined #openstack-swift | 20:39 | |
*** thurloat has quit IRC | 20:40 | |
*** thurloat has joined #openstack-swift | 20:41 | |
*** byeager_away has joined #openstack-swift | 20:41 | |
glange | there are other data stores that aren't eventually consistent if you need that | 20:41 |
glange | you can't imagine a single use case where it would be better to retrieve an old version of an object than no version of the object ? | 20:43 |
swift_fan | glange -- like what? | 20:43 |
swift_fan | (what data store) | 20:43 |
swift_fan | glange swifterdarrell -- Ok, I see what you are saying. Basically, it's very situational | 20:43 |
swift_fan | (is what you're saying) | 20:43 |
swift_fan | glange swifterdarrell -- I can think of some cases, but they seem somewhat contrived ..... | 20:44 |
glange | there http://ceph.com/ <-- that is one example of a data store that has different properties from swift | 20:44 |
acorwin | swift_fan: for one simple example, there are many cases where eventual consistency is irrelevant because data is stored and retrieved and never (or extremely rarely) updated | 20:44 |
glange | swift_fan: http://en.wikipedia.org/wiki/Doghouse <-- look at the images on that page, would it be ok to serve an old version of those pictures if you can't serve the new version? | 20:45 |
zackmdavis | acorwin, you mean like backups, or archives? | 20:45 |
glange | swift_fan: would it be better to serve an old version than nothing? | 20:45 |
swifterdarrell | glange: NEW DOGS OR NO DOGS, that's my motto | 20:45 |
swift_fan | glange -- I see what you're saying | 20:45 |
acorwin | swift_fan: you know what they say - you can't teach an old dog new tricks | 20:46 |
glange | swift_fan: and that is not a contrived situation :) | 20:46 |
clayg | glange: how can we know if we're even seeing the same dogs when we look at that page | 20:47 |
glange | clayg: we don't, and that is ok :) | 20:47 |
acorwin | clayg: do any two people ever truly see the same dog? | 20:47 |
*** mkollaro has joined #openstack-swift | 20:48 | |
elambert | torgomatic: got a sec to discuss range retrieval for EC? Would it be a problem if PyECLib, under the covers, always retrieved at least an entire segment (but still only returned just the requested range)? | 20:48 |
clayg | also it's not just serving old data - it's also being able to eat new data even if you can't *guarentee* that you won't be able to serve that version | 20:49 |
clayg | I think it's the write characteristics in the face of failure that really differentiate AP from CP | 20:50 |
swift_fan | clayg : What is AP and CP ? | 20:53 |
acorwin | swift_fan: http://en.wikipedia.org/wiki/CAP_theorem | 20:54 |
swifterdarrell | swift_fan: see also http://en.wikipedia.org/wiki/Consistency_model | 20:56 |
torgomatic | elambert: if that's how it has to work, that's okay, I suppose | 20:57 |
torgomatic | as long as the segment size isn't too large | 20:57 |
elambert | user definable iirc | 20:58 |
elambert | but standard seems to be 1MB | 20:58 |
torgomatic | Swift can certainly throw away data the client didn't want, and now we've got some sanity checks on range requests, so that's probably alright | 20:58 |
elambert | ok, thanks | 20:59 |
torgomatic | there'll be some nasty edge cases around multiple ranges that don't overlap, but do require bytes from the same segment, and that'll need good tests and careful coding, but it should be okay | 20:59 |
swift_fan | What kind of data is usually stored in the "Archive Tier" in the tiers of cloud data storage ? | 20:59 |
swift_fan | (as opposed to the non-archive storage tiers right above it). | 21:00 |
elambert | torgomatic... and depending on the encoding scheme it may not be possible to decode w/o reading the entire segment | 21:00 |
torgomatic | elambert: yep, exactly... so really, fetching the whole segment is probably fine; it's just some extra overhead | 21:00 |
* elambert nods | 21:01 | |
NM | Guys, sometimes I see this in my logs (account-replicator is one of them): "ERROR syncing" ending with "#012error: [Errno 1] Operation not permitted". | 21:09 |
NM | Has anyone seen this? | 21:09 |
*** CaioBrentano1 has joined #openstack-swift | 21:11 | |
clayg | NM: i only see that when I have some permissions messed up in /srv/node | 21:11 |
clayg | NM: sometimes it's cause I replaced a motherboard os on a node with old disks and the uid/gid's got messed up | 21:12 |
*** nellysmitt has quit IRC | 21:13 | |
clayg | NM: sometimes permissions are just wrong (probably an operator error/misconfig or chef/puppet gone wild) and I just have to fix them | 21:13 |
*** CaioBrentano has quit IRC | 21:13 | |
clayg | NM: the rest of the line leading up to the Operation not permitted might be helpful | 21:13 |
*** nitika__ has joined #openstack-swift | 21:14 | |
NM | clayg: That was my first thought. But $ find . ! -user swift didn't return anything | 21:14 |
*** chrisnelson has joined #openstack-swift | 21:14 | |
clayg | you sure the effective permissions of the daemon are running as swift? | 21:15 |
clayg | again the traceback might even have the db file name in it | 21:15 |
*** nitika_ has quit IRC | 21:17 | |
*** Trixboxer has quit IRC | 21:18 | |
NM | Yes. I checked twice. | 21:19 |
NM | I was looking at the exception but didn't find anything useful | 21:19 |
NM | Sometimes it works, sometimes it doesn't | 21:21 |
NM | Most of the time it replicates with 0 failures | 21:22 |
*** mahatic has quit IRC | 21:23 | |
mattoliverau | Morning all! | 21:24 |
peluse | morning | 21:24 |
mattoliverau | notmyname: yay a 30 extra sleep in, I'm happy with that! | 21:24 |
NM | Morning :) | 21:25 |
mattoliverau | *minute (/me types well today) | 21:25 |
NM | clayg: grep "Oct 7" aco.log|grep "account"|grep -c "0 failure" = 2152 | 21:25 |
NM | grep "Oct 7" aco.log|grep "account"|grep -c "1 failure" = 35 | 21:25 |
notmyname | mattoliverau: :-) | 21:25 |
NM | clayg: May be I'm too perfectionist? | 21:26 |
*** CaioBrentano1 is now known as CaioBrentano | 21:27 | |
clayg | NM: well does the traceback at least reference a line number that's getting the EPERM? | 21:29 |
swift_fan | Does anyone know which lines/methods of the code specify where the Swift proxy tries to retrieve an object, which may or may not have been replicated yet ?? | 21:30 |
swift_fan | :) | 21:30 |
*** kopparam has joined #openstack-swift | 21:31 | |
notmyname | swift_fan: no. I suggest you look through the proxy source. doing so will help you find those answers, and also give you a good understanding of how things are put together. but know that it's something you won't fully understand with one 30 minute scan of the source | 21:32 |
portante | notmyname: wait, we don't have self-documenting code? | 21:33 |
portante | ;) | 21:33 |
notmyname | portante: it's python, so it's executable pseudo-code right? | 21:33 |
portante | =) | 21:33 |
notmyname | portante: especially that best_response() method. that one is _so_ easy and straightforward | 21:33 |
* notmyname loves the mutable data type passed to a method executed by greenlets where the successful handling of the request requires the side-effects from each running instance of the method | 21:34 | |
portante | yeah, what he said | 21:35 |
notmyname | oh, make_requests() not best_response() | 21:36 |
*** kopparam has quit IRC | 21:36 | |
NM | clayg: Yes. It didn't help me :( http://paste.openstack.org/show/119504/ | 21:37 |
notmyname | NM: interesting that it's a statsd message that is getting the error. ie sending a UDP packet | 21:40 |
notmyname | NM: also it's interesting that you're using py26 ;-) | 21:40 |
clayg | notmyname: oh i bet the socket got closed? | 21:41 |
clayg | can't send datagram to a closed socket | 21:41 |
clayg | how the hell do you close a datagram socket | 21:41 |
zaitcev | but ends with EPERM? | 21:41 |
clayg | tcp has spoiled me | 21:41 |
NM | notmyname: Don't mention that :(( When I started working with swift the doc pages said python 2.6. At the last summit I got surprised when the guys were running it with 2.7. | 21:41 |
glange | maybe those who want to understand swift should take a time machine to a simpler time :) https://github.com/openstack/swift/tree/2ee9b837b5a1e13681ca9359138451719f8641dd | 21:42 |
clayg | zaitcev: sure why not? don't you get EPERM when you try to send to a closed tcp socket? | 21:42 |
zaitcev | no wai | 21:42 |
zaitcev | should be EPIPE | 21:42 |
clayg | glange: hehe that's awesome | 21:42 |
dfg | glange: thanks for making me want to cry... | 21:43 |
NM | notmyname: and clayg for me it looks like I had a sync error and a exception while sending this sync error to statd. | 21:43 |
notmyname | NM: http://stackoverflow.com/questions/23859164/linux-udp-socket-sendto-operation-not-permitted | 21:44 |
notmyname | NM: that says conntrack | 21:44 |
*** tab____ has quit IRC | 21:44 | |
clayg | zaitcev: connect can raise EPERM, do you have to connect first to send to udp? | 21:46 |
clayg | notmyname: oh good one! | 21:47 |
notmyname | NM: the swift docs originally said py26 because it originally targeted lucid. depending on what OS you're running on, I'd suggest moving to py27 since py26 doesn't even get security updates any more | 21:49 |
* notmyname hopes the docs don't still say py26 | 21:49 | |
NM | notmyname: Well, that was at the beginning of the year. | 21:50 |
NM | The stackoverflow post is good. | 21:51 |
NM | And I got the same error stracing the process | 21:51 |
NM | But I'm still not understanding why the syncing is failing. | 21:52 |
clayg | NM: well... it's not exactly - some code before return sync success is failing :) | 21:53 |
*** ppai has quit IRC | 21:54 | |
NM | But reading the source code, that code runs only in case of exception, right? | 21:55 |
*** mkollaro has quit IRC | 21:57 | |
clayg | self.logger.increment('no_changes') looks like success to me! | 21:58 |
clayg | just check dmesg for dropped packets - if this is the only problem you're having you're getting wicked lucky | 21:59 |
clayg | NM: or maybe it's something else... but that'd be useful to know | 21:59 |
NM | Ok. I'll fix the nf_conntrack problem | 22:01 |
NM | Would it be a good idea if the log said "Sync Ok but error sending increment log…" ? | 22:06 |
notmyname | NM: yes, that would be good | 22:07 |
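The failure mode in NM's traceback, where an EPERM from a statsd UDP send (e.g. a full conntrack table) masks an otherwise-successful sync, can be guarded against by catching socket errors around the metrics call. A hedged sketch of that idea, not Swift's actual logger code:

```python
import logging
import socket

def increment(metric, host='127.0.0.1', port=8125):
    """Fire-and-forget a statsd-style counter over UDP.

    A send failure is logged but never propagated, so a successful
    sync can't be misreported as "ERROR syncing". Illustrative only;
    the host/port defaults here are assumptions.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(('%s:1|c' % metric).encode('ascii'), (host, port))
        return True
    except OSError as err:
        logging.warning('sync OK, but error sending increment: %s', err)
        return False
    finally:
        sock.close()
```

The key design choice is that metrics emission is best-effort: losing one counter datagram is harmless, while a misleading "ERROR syncing" log line costs operator time.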
NM | Should I open an issue or let it up to you guys? | 22:08 |
notmyname | NM: at least open a bug on it, please | 22:09 |
notmyname | NM: https://bugs.launchpad.net/swift/+filebug | 22:09 |
*** nitika__ has quit IRC | 22:09 | |
NM | Thank you clayg, thank you notmyname :) | 22:12 |
*** NM has quit IRC | 22:16 | |
*** andreia_ has quit IRC | 22:23 | |
*** aerwin has quit IRC | 22:31 | |
*** kopparam has joined #openstack-swift | 22:32 | |
*** kopparam has quit IRC | 22:36 | |
*** marcusvrn_ has quit IRC | 22:43 | |
*** Nadeem has quit IRC | 22:50 | |
*** nitika2__ has joined #openstack-swift | 22:59 | |
*** kyles_ne has quit IRC | 23:04 | |
*** kyles_ne has joined #openstack-swift | 23:05 | |
*** kyles_ne has quit IRC | 23:09 | |
*** NM has joined #openstack-swift | 23:15 | |
*** joeljwright has quit IRC | 23:29 | |
*** joeljwright has joined #openstack-swift | 23:29 | |
*** elambert has quit IRC | 23:32 | |
*** kopparam has joined #openstack-swift | 23:33 | |
*** joeljwright has quit IRC | 23:34 | |
*** kopparam has quit IRC | 23:37 | |
*** bsdkurt has joined #openstack-swift | 23:49 | |
*** NM has quit IRC | 23:50 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!