*** thumpba_ has quit IRC | 00:04 | |
*** km has joined #openstack-swift | 00:15 | |
*** tgohad has quit IRC | 00:15 | |
*** zhill has quit IRC | 00:25 | |
*** dmorita has joined #openstack-swift | 00:29 | |
ho | good morning! | 00:36 |
*** erlon has quit IRC | 00:41 | |
mattoliverau | ho: morning | 01:03 |
*** mjfork has joined #openstack-swift | 01:07 | |
ho | mattoliverau: morning! | 01:13 |
*** jrichli has joined #openstack-swift | 01:27 | |
*** kota_ has joined #openstack-swift | 01:44 | |
*** tdasilva has quit IRC | 01:49 | |
*** nottrobin has quit IRC | 02:00 | |
*** tdasilva has joined #openstack-swift | 02:00 | |
*** jdaggett has quit IRC | 02:00 | |
*** donagh has quit IRC | 02:00 | |
*** fanyaohong has quit IRC | 02:01 | |
*** matt__ has joined #openstack-swift | 02:01 | |
*** donagh has joined #openstack-swift | 02:01 | |
*** Trozz_ has joined #openstack-swift | 02:01 | |
*** goodes has quit IRC | 02:02 | |
*** zacksh has quit IRC | 02:02 | |
*** mattoliverau has quit IRC | 02:02 | |
*** sudorandom has quit IRC | 02:02 | |
*** Trozz has quit IRC | 02:02 | |
*** CrackerJackMack has quit IRC | 02:02 | |
*** jroll has quit IRC | 02:02 | |
*** matt__ is now known as mattoliverau | 02:02 | |
*** ChanServ sets mode: +v mattoliverau | 02:03 | |
*** zacksh has joined #openstack-swift | 02:03 | |
*** goodes has joined #openstack-swift | 02:04 | |
*** jdaggett has joined #openstack-swift | 02:04 | |
*** sudorandom has joined #openstack-swift | 02:07 | |
*** CrackerJackMack has joined #openstack-swift | 02:07 | |
*** jroll has joined #openstack-swift | 02:07 | |
*** jroll has quit IRC | 02:08 | |
*** jroll has joined #openstack-swift | 02:08 | |
*** fanyaohong has joined #openstack-swift | 02:14 | |
*** nottrobin has joined #openstack-swift | 02:15 | |
*** thumpba has joined #openstack-swift | 02:52 | |
*** jrichli has quit IRC | 02:54 | |
*** mjfork has quit IRC | 03:26 | |
*** thumpba has quit IRC | 03:56 | |
*** km_ has joined #openstack-swift | 04:01 | |
*** km has quit IRC | 04:02 | |
*** kota_ has quit IRC | 04:05 | |
*** ppai has joined #openstack-swift | 04:25 | |
*** annegentle has joined #openstack-swift | 04:49 | |
*** torgomatic has quit IRC | 04:51 | |
*** torgomatic has joined #openstack-swift | 04:52 | |
*** ChanServ sets mode: +v torgomatic | 04:52 | |
*** annegentle has quit IRC | 04:54 | |
*** km_ has quit IRC | 05:00 | |
*** cdelatte has quit IRC | 05:10 | |
*** km has joined #openstack-swift | 05:11 | |
*** nshaikh has joined #openstack-swift | 05:13 | |
*** tsg has joined #openstack-swift | 05:20 | |
*** gyee has quit IRC | 05:23 | |
*** SkyRocknRoll has joined #openstack-swift | 05:34 | |
*** SkyRocknRoll has joined #openstack-swift | 05:34 | |
*** thumpba has joined #openstack-swift | 05:39 | |
*** thumpba has quit IRC | 05:41 | |
*** cdelatte has joined #openstack-swift | 05:47 | |
*** delattec has joined #openstack-swift | 05:47 | |
*** silor has joined #openstack-swift | 05:52 | |
*** thomaschaaf has joined #openstack-swift | 06:27 | |
thomaschaaf | Hello is there any way to decrease the partition size? Or is there a good tutorial on how to do it? | 06:29 |
thomaschaaf | I think my part power of 18 is too large for our project and causing the system to do way too much IO for idle usage | 06:30 |
*** krykowski has joined #openstack-swift | 06:37 | |
*** silor has quit IRC | 06:41 | |
*** welldannit has quit IRC | 06:47 | |
ho | thomaschaaf: i think we don't have a way to do it without downtime so far, so you need to re-create the builder files with a new part_power and then over-write the old ones. | 06:47 |
thomaschaaf | would I have to move over all files or would they just stay in their current folders? | 06:48 |
ho | thomaschaaf: but I'm not sure whether the replicators will copy the replicas to the appropriate place or whether you need to copy them yourself now. | 06:48 |
thomaschaaf | as in export via web interface and then reimport via web interface? | 06:48 |
thomaschaaf | by web interface i mean the api.. | 06:49 |
thomaschaaf | decreasing the part count would reduce the IO workload, correct? | 06:49 |
ho | thomaschaaf: as for the workload, I think so, and there is some info at https://swiftstack.com/blog/2012/05/14/part-power-performance/. | 06:51 |
ho | thomaschaaf: as for the export/import apis, I think swift doesn't have them. | 06:52 |
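There is no in-place way to change a ring's part power, so ho's suggestion amounts to building a replacement ring and redistributing it. A minimal sketch with swift-ring-builder, assuming a 3-replica object ring; the device address and weight below are placeholders:

    # create a new builder with a smaller part power (e.g. 14 instead of 18)
    swift-ring-builder object.builder create 14 3 1
    # re-add every device from the old ring, then rebalance
    swift-ring-builder object.builder add r1z1-10.0.0.1:6000/sdb1 100
    swift-ring-builder object.builder rebalance
    # push the resulting object.ring.gz to every node

Since objects hash to different partition numbers under the new part power, existing data ends up in the "wrong" partition directories, which is why ho is unsure whether the replicators will move it or whether it has to be copied manually.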
*** tsg has quit IRC | 06:52 | |
thomaschaaf | okay hmm, maybe that's not my problem then.. | 06:52 |
thomaschaaf | shouldn't replication die down and basically end up doing nothing if no new files are added to the system? | 06:53 |
*** kota_ has joined #openstack-swift | 06:56 | |
ho | thomaschaaf: a partition is created when an object is uploaded. replication works per partition. i think the replicator checks all partitions but only copies new ones. | 06:59 |
thomaschaaf | okay, so if I decrease the partition count it would actually lower the replication time for objects.. but if it's done quicker it doesn't save IO, it just does it more often, correct? | 07:00 |
thomaschaaf | does this: https://dl.dropboxusercontent.com/u/5910/Jing/2015-04-08_0901.png look normal to you for a system that is not getting files added? | 07:01 |
thomaschaaf | https://dl.dropboxusercontent.com/u/5910/Jing/2015-04-08_0902.png | 07:02 |
ho | thomaschaaf: wait a minute. | 07:06 |
*** foexle has joined #openstack-swift | 07:07 | |
ho | thomaschaaf: your understanding is the same as mine. we can configure the interval time between replications. | 07:10 |
ho | thomaschaaf: do you have a big server (around 18 cores)? | 07:10 |
thomaschaaf | 8 cores | 07:11 |
*** joeljwright has joined #openstack-swift | 07:15 | |
ho | thomaschaaf: you configured the number of object-server workers, right? uuun... each IO (read) doesn't look high | 07:17 |
thomaschaaf | it's actually the container server eating all the cpu | 07:18 |
thomaschaaf | which I think is weird | 07:18 |
thomaschaaf | https://dl.dropboxusercontent.com/u/5910/Jing/2015-04-08_0918.png | 07:19 |
ho | thomaschaaf: is your concern the cpu usage of the container-replicator? yeah, it looks high. | 07:19 |
thomaschaaf | just not happy with the performance and looking for any possible bottlenecks.. | 07:20 |
ho | thomaschaaf: the first bottleneck should be the cpu usage of the container-server/replicator. | 07:21 |
thomaschaaf | https://dl.dropboxusercontent.com/u/5910/Jing/2015-04-08_0923.png | 07:23 |
thomaschaaf | the syslog just doesn't look like it should use that much cpu | 07:23 |
*** geaaru has joined #openstack-swift | 07:27 | |
*** nshaikh has quit IRC | 07:28 | |
ho | thomaschaaf: the interval seems to be 30s and each replication takes 16s. | 07:30 |
thomaschaaf | I guess I can increase that.. still don't quite understand why it would be so "expensive" cpu-wise. Is it doing computation? I thought it was pretty much just network | 07:31 |
ho | thomaschaaf: it looks like there is no replication happening here (the no_change value), so it's not the network. maybe sql-related cpu usage??? | 07:33 |
*** mmcardle has joined #openstack-swift | 07:35 | |
thomaschaaf | might have to do with: https://bugs.launchpad.net/swift/+bug/1260460 | 07:38 |
openstack | Launchpad bug 1260460 in OpenStack Object Storage (swift) "Container replication does not scale for very large container databases" [Wishlist,New] | 07:38 |
ho | thomaschaaf: I will check it. btw what is your value of concurrency in [container-replicator] in container-server.conf? | 07:39 |
ho | thomaschaaf: Does the size of your container db look that big (as in the bug report)? | 07:53 |
thomaschaaf | 92 M | 07:54 |
thomaschaaf | and 159 M | 07:54 |
thomaschaaf | https://dl.dropboxusercontent.com/u/5910/Jing/2015-04-08_0955.png | 07:55 |
*** jistr has joined #openstack-swift | 08:02 | |
ho | thomaschaaf: thanks! I don't have access to our cloud so i'm not sure whether it's big | 08:03 |
ho | thomaschaaf: if this matches the bug, i think changing the interval will be effective. | 08:07 |
thomaschaaf | but that would cause the files to take longer to arrive at other computers | 08:08 |
thomaschaaf | as in replication would take longer | 08:09 |
thomaschaaf | it seems like what I really should do is create more containers so that it's balanced better | 08:09 |
ho | thomaschaaf: yeah, it's a trade-off problem. | 08:09 |
thomaschaaf | what I could do is create a container for each file name prefix, so for abcdef.jpg it would be containername_ab | 08:10 |
thomaschaaf | and then I have way smaller containers | 08:10 |
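A minimal sketch of the prefix scheme thomaschaaf describes, as a hypothetical helper (not a Swift API):

    def container_for(filename, base="containername"):
        # spread objects across containers by filename prefix,
        # e.g. abcdef.jpg -> containername_ab
        return "%s_%s" % (base, filename[:2].lower())

With a two-character prefix the objects fan out over up to a few thousand containers (depending on the character set), so each container db stays small and a single replication pass has far less to scan.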
ho | thomaschaaf: could be. I think it's better to find out first whether your problem is the same as the bug (or maybe just the current expected behavior). | 08:13 |
*** jordanP has joined #openstack-swift | 08:14 | |
*** cppforlife_ has quit IRC | 08:15 | |
*** cppforlife_ has joined #openstack-swift | 08:16 | |
thomaschaaf | how can I reduce the number of containers? there should only be 8 but there seem to be 5048.. | 08:23 |
ho | thomaschaaf: i read notmyname's response and he said "use many containers". sorry, I misunderstood something. | 08:26 |
*** acoles_away is now known as acoles | 08:29 | |
ho | thomaschaaf: You only have 3 containers which are over M bytes. I'm not sure your situation matches the bug report. So it might be a good idea to tune your configuration to reduce the container-replicator load. My idea for this is to change the replication interval, reduce the concurrency (container-replicator), and check the stats. | 08:33 |
ho | thomaschaaf: i have to leave office now. see you! | 08:33 |
*** ho has quit IRC | 08:34 | |
thomaschaaf | :) | 08:34 |
thomaschaaf | thanks | 08:34 |
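ho's closing advice maps to the [container-replicator] section of container-server.conf. A sketch showing the stock defaults; the values are illustrative starting points, not recommendations:

    [container-replicator]
    # seconds between replication passes; raising this trades slower
    # convergence for less idle CPU and IO (default 30)
    interval = 30
    # number of concurrent replication workers (default 8)
    concurrency = 8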
*** km has quit IRC | 08:43 | |
*** thomaschaaf has quit IRC | 08:57 | |
*** theanalyst has quit IRC | 09:04 | |
*** theanalyst has joined #openstack-swift | 09:07 | |
*** maniOS has joined #openstack-swift | 09:11 | |
*** maniOS has quit IRC | 09:12 | |
*** bkopilov has quit IRC | 09:52 | |
*** jordanP has quit IRC | 09:55 | |
*** jordanP has joined #openstack-swift | 10:20 | |
*** jordanP has quit IRC | 10:35 | |
*** jordanP has joined #openstack-swift | 10:38 | |
*** annegentle has joined #openstack-swift | 10:53 | |
*** annegentle has quit IRC | 10:58 | |
*** mmcardle has quit IRC | 11:03 | |
*** mahatic has joined #openstack-swift | 11:30 | |
*** mahatic has quit IRC | 11:35 | |
*** delatte has joined #openstack-swift | 11:40 | |
*** aix has joined #openstack-swift | 11:42 | |
*** cdelatte has quit IRC | 11:43 | |
*** delattec has quit IRC | 11:43 | |
*** bkopilov has joined #openstack-swift | 11:50 | |
*** bkopilov has quit IRC | 11:52 | |
*** bkopilov has joined #openstack-swift | 11:53 | |
*** annegentle has joined #openstack-swift | 11:54 | |
*** bkopilov has quit IRC | 11:57 | |
*** bkopilov has joined #openstack-swift | 11:57 | |
*** mahatic has joined #openstack-swift | 11:57 | |
*** otoolee has quit IRC | 11:58 | |
*** annegentle has quit IRC | 11:59 | |
*** bkopilov has quit IRC | 12:01 | |
*** bkopilov has joined #openstack-swift | 12:01 | |
*** otoolee has joined #openstack-swift | 12:02 | |
*** bkopilov has quit IRC | 12:05 | |
*** nshaikh has joined #openstack-swift | 12:08 | |
*** nshaikh has left #openstack-swift | 12:12 | |
*** bkopilov has joined #openstack-swift | 12:12 | |
openstackgerrit | Mike Fedosin proposed openstack/swift: Fix broken test_policy_IO_override test https://review.openstack.org/171593 | 12:13 |
*** dmorita has quit IRC | 12:15 | |
*** bkopilov has quit IRC | 12:15 | |
*** bkopilov has joined #openstack-swift | 12:17 | |
*** mmcardle has joined #openstack-swift | 12:19 | |
*** delattec has joined #openstack-swift | 12:22 | |
*** bkopilov has quit IRC | 12:22 | |
*** delatte has quit IRC | 12:22 | |
*** bkopilov has joined #openstack-swift | 12:24 | |
*** bkopilov has quit IRC | 12:28 | |
*** winggundamth has joined #openstack-swift | 12:34 | |
winggundamth | hi. I have a problem with the object expirer. the files have been found by the object expirer and the log says the objects expired, but the dashboard and the swift list command still randomly show the file | 12:39 |
winggundamth | has anyone done object expirer successfully before? did it work correctly and completely delete the file, or not? | 12:40 |
ppai | expired objects get cleaned up by object-expirer daemon | 12:47 |
ppai | make sure the daemon is running | 12:47 |
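For reference, objects are scheduled for expiry with the X-Delete-At or X-Delete-After headers, and the daemon ppai mentions runs separately from the object server. A sketch with the standard tools; the container and object names are hypothetical:

    # schedule an object to expire in 24 hours
    swift post mycontainer myobject -H "X-Delete-After: 86400"
    # the daemon that actually reaps expired entries
    swift-init object-expirer start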
*** kota_ has quit IRC | 12:52 | |
*** ppai has quit IRC | 12:53 | |
*** annegentle has joined #openstack-swift | 12:55 | |
*** jkugel has joined #openstack-swift | 12:59 | |
winggundamth | yes. I'm sure the daemon is already running, and I already checked in the log file that it's doing its job | 13:02 |
winggundamth | I'm trying to download the expired object too but it always shows 404 not found | 13:03 |
mandarine | winggundamth: do you have several expirer running? | 13:03 |
mandarine | also, did you set the "processes" and "process" variables ? | 13:04 |
cschwede | winggundamth: the 404 for an expired object is expected? | 13:08 |
*** bkopilov has joined #openstack-swift | 13:19 | |
*** bkopilov has quit IRC | 13:19 | |
*** zul has quit IRC | 13:22 | |
*** zul has joined #openstack-swift | 13:22 | |
winggundamth | mandarine: only 1 object expirer, running on the proxy node. I have 1 proxy node and 3 storage nodes | 13:24 |
winggundamth | mandarine: I did not configure processes and process variables. | 13:24 |
*** bkopilov has joined #openstack-swift | 13:24 | |
winggundamth | cschwede: yes, the 404 is expected, but I just wonder why swift list still randomly shows an object that has already expired. even after waiting overnight it's still there | 13:25 |
winggundamth | when I list with both swift list and the horizon dashboard, sometimes it shows up and sometimes not | 13:26 |
cschwede | winggundamth: granted, a night is a bit long. your updaters and replicators are all running? sounds like one of the container servers has an old DB, and because of that the listing sometimes includes this object and sometimes not | 13:26 |
winggundamth | yes, I suspected that too, so I checked the storage node logs while doing swift list to look for clues | 13:27 |
*** bkopilov has quit IRC | 13:28 | |
winggundamth | but I found that when I list, it randomly gets the listing from a different node each time, but the expired object can show up from every node that serves the listing | 13:28 |
winggundamth | for example, I list many times and check the logs on storage node A. it gets the requests, but it still randomly shows and doesn't show the expired object | 13:29 |
*** bkopilov has joined #openstack-swift | 13:30 | |
winggundamth | even when the request goes to the same storage node A | 13:30 |
winggundamth | that's really making me hit the wall now | 13:30 |
*** bkopilov has quit IRC | 13:32 | |
*** bkopilov has joined #openstack-swift | 13:33 | |
*** bkopilov has quit IRC | 13:36 | |
*** SkyRocknRoll has quit IRC | 13:38 | |
winggundamth | anyone have any thoughts? | 13:44 |
ctennis | winggundamth: I think you need to show what you're seeing in a gist | 13:49 |
ctennis | winggundamth: if you only have one proxy, then any "list" you are doing is hitting that proxy. if you're getting inconsistent results back from doing a "swift list" then you have inconsistent object or container data, which means your replicators and updaters aren't talking between your storage nodes. | 13:50 |
*** joeljwright has quit IRC | 13:50 | |
*** joeljwright has joined #openstack-swift | 13:53 | |
winggundamth | I posted detail in here http://lists.openstack.org/pipermail/openstack/2015-April/012195.html | 13:54 |
*** jistr has quit IRC | 13:54 | |
*** jistr has joined #openstack-swift | 13:54 | |
winggundamth | let me know if the data to solve the problem is not enough and I'll get it for you | 13:55 |
*** bkopilov has joined #openstack-swift | 13:56 | |
*** aix has quit IRC | 13:56 | |
*** chlong has quit IRC | 13:58 | |
winggundamth | ctennis: please see the output from swift-recon here: http://pastebin.com/0P7QUZ2J. Do you think it's inconsistent between nodes? | 13:59 |
ctennis | winggundamth: look at the process list on your nodes. are there replicator and updater processes running? | 14:00 |
winggundamth | ctennis: here process list http://pastebin.com/yFk8mYRp | 14:03 |
ctennis | winggundamth: "swift list" ultimately gets information back from a container server. It sounds like 1 of your 3 container servers has inconsistent data and can't reconcile with the others. | 14:03 |
ctennis | is that list the same on all three of your storage nodes? | 14:04 |
winggundamth | checking right now | 14:04 |
winggundamth | yes. every node is the same | 14:07 |
winggundamth | anyway, how can I troubleshoot the inconsistency between nodes? | 14:08 |
ctennis | yeah.. | 14:08 |
ctennis | you can run "swift-get-nodes /etc/swift/container.ring.gz ACCOUNTNAME CONTAINERNAME" | 14:09 |
ctennis | and it will give you back a list of info about where that container is stored...more interestingly it will give you back a list of curl commands | 14:09 |
ctennis | you can run those curl commands within the cluster to get info about the container db on each machine | 14:09 |
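The curl lines swift-get-nodes prints look roughly like the following; the address, device, partition and account here are placeholders:

    curl -I -XHEAD "http://10.0.0.1:6001/sdb1/123456/AUTH_abc/mycontainer"

Each HEAD hits one container server directly and returns headers such as X-Container-Object-Count and X-Put-Timestamp, so comparing the replicas side by side shows which db is stale.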
*** jistr has quit IRC | 14:10 | |
ctennis | I'm changing locations.. but I would recommend seeing if you can find out if/why one of your three container databases is different. | 14:11 |
*** jistr has joined #openstack-swift | 14:12 | |
winggundamth | ok thank you so much. I'll try it | 14:14 |
openstackgerrit | Mike Fedosin proposed openstack/swift: Fix broken test_policy_IO_override test https://review.openstack.org/171593 | 14:16 |
winggundamth | ctennis: very nice command. never found this command in any document before... | 14:18 |
*** jrichli has joined #openstack-swift | 14:23 | |
*** khivin has quit IRC | 14:24 | |
*** aix has joined #openstack-swift | 14:24 | |
*** vinsh has quit IRC | 14:27 | |
tdasilva | good morning! just noticed some comments on the ec docs...I'm planning to push fixes unless somebody else is already working on it... | 14:38 |
tdasilva | I'll add a comment on the patch too... | 14:38 |
acoles | tdasilva: just adding some comments | 14:39 |
tdasilva | acoles: hey! welcome back, hope you had a good time | 14:39 |
acoles | tdasilva: yes thanks! | 14:40 |
*** vinsh has joined #openstack-swift | 14:44 | |
openstackgerrit | Kamil Rykowski proposed openstack/swift: More user-friendly output for object metadata https://review.openstack.org/164019 | 14:44 |
*** jistr has quit IRC | 14:45 | |
*** lpabon has joined #openstack-swift | 14:47 | |
*** jistr has joined #openstack-swift | 14:51 | |
acoles | tdasilva: i didn't get far but posted a few doc comments on gerrit | 14:54 |
*** erlon has joined #openstack-swift | 14:55 | |
tdasilva | acoles: cool, thanks! just noticed your +1 on the ec_m, ec_k discussion | 14:55 |
tdasilva | sounds good to me, I'll go with that | 14:56 |
cschwede | tdasilva: don’t take my comments too seriously, i’m not a native speaker and might be wrong… | 14:57 |
tdasilva | cschwede: hehe...english is not first language either, so hopefully I'm not writing anything too broken...counting on native speakers to pick up any mistakes | 14:58 |
tdasilva | *not my first* | 14:59 |
*** Akshat has joined #openstack-swift | 15:10 | |
Akshat | Hi clayg | 15:10 |
Akshat | Hi ctennis | 15:10 |
Akshat | Hi acoles | 15:11 |
Akshat | I am tuning my swift cluster for better tps | 15:11 |
Akshat | can you please provide me some pointers/best practices to get good numbers | 15:11 |
*** jistr is now known as jistr|mtg | 15:16 | |
openstackgerrit | Joel Wright proposed openstack/python-swiftclient: Log and report trace on service operation fails https://review.openstack.org/171692 | 15:16 |
openstackgerrit | Joel Wright proposed openstack/python-swiftclient: Log and report trace on service operation fails https://review.openstack.org/171692 | 15:21 |
*** takoTuesday has joined #openstack-swift | 15:26 | |
ctennis | Akshat: http://shop.oreilly.com/product/0636920033288.do | 15:28 |
takoTuesday | Hey guys, I'm trying to modify a django project that currently uses django.views.static.serve to serve static html files, and I want to serve documents from swift | 15:29 |
*** jistr|mtg is now known as jistr | 15:30 | |
takoTuesday | does anyone know of any open source projects I could look at that do this? | 15:30 |
ctennis | takoTuesday: can you explain in more detail? | 15:31 |
ctennis | takoTuesday: you can easily just serve static content from swift, I don't think you need an open source project to do anything | 15:32 |
takoTuesday | ctennis: oh I just meant as an example to see how they serve static content from swift | 15:32 |
takoTuesday | ctennis: that's all I wanted to look at an open source project for. | 15:32 |
ctennis | takoTuesday: everything in swift is already a URL, as long as it's publicly readable you can just directly point to the swift object | 15:33 |
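"Publicly readable" here means a world-read ACL on the container. A sketch with the standard client; the container name is hypothetical:

    # allow anonymous reads (.rlistings additionally allows listings)
    swift post -r '.r:*,.rlistings' mycontainer
    # objects are then reachable as plain URLs, roughly:
    #   http://<proxy>/v1/AUTH_<account>/mycontainer/page.html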
*** setmason has joined #openstack-swift | 15:33 | |
cschwede | takoTuesday: have a look at https://github.com/blacktorn/django-storage-swift | 15:36 |
takoTuesday | thanks guys, I'm going to check it out | 15:36 |
*** tsg has joined #openstack-swift | 15:37 | |
*** gvernik has joined #openstack-swift | 15:39 | |
*** peluse_ has joined #openstack-swift | 15:42 | |
*** ozialien has joined #openstack-swift | 15:44 | |
*** peluse has quit IRC | 15:46 | |
*** ozialien has quit IRC | 15:47 | |
*** pberis has joined #openstack-swift | 15:53 | |
gvernik | notmyname: you here? | 15:53 |
notmyname | gvernik: yup. I was just checking emails/IRC buffers | 15:54 |
notmyname | gvernik: what's up? | 15:54 |
gvernik | notmyname: in your PTL email you wrote "There's a company that has written a tape library connector for Swift and is open-sourcing it". Can you give some info on it? | 15:54 |
egon | Wow, that's kind of awesome. | 15:54 |
notmyname | gvernik: ya. they're working on open-sourcing it, but it's going slow (since it's a new thing for them to do). but I did get permission from them to talk about it. just haven't said too much since they haven't actually opened it yet | 15:55 |
notmyname | but the company is http://www.bdt.de | 15:55 |
notmyname | gvernik: and I know your company is talking about it at the summit, too ;-) | 15:55 |
notmyname | (IBM) | 15:55 |
*** jistr has quit IRC | 15:56 | |
takoTuesday | cschwede: "Once installed and configured, use of django-storage-swift should be automatic and seamless." Does this mean that use of django.views.static.serve will now serve files from swift seamlessly? | 15:57 |
gvernik | notmyname,: my company is huge, i wasn't aware someone was actually speaking about it... thanks for the info :) and what is the other drive vendor making sure that Swift supports their media? | 15:57 |
notmyname | gvernik: but the overall point is that all the different storage media people getting involved in swift is really exciting to me :-) | 15:57 |
notmyname | gvernik: drive vendor? are you talking about the SMR drives? | 15:58 |
mtreinish | notmyname: could you use ltfs to use swift on tape today? | 15:58 |
gvernik | notmyname,: you wrote "many different media vendors are coming to the Swift community asking how they can ensure that Swift natively supports their media." and then mentioned Kinetic, SMR... | 15:59 |
*** ozialien has joined #openstack-swift | 15:59 | |
notmyname | mtreinish: sort of, maybe. the hard part of swift+tape is the auditing and replication where swift is churning and walking all the data | 15:59 |
eikke | wouldn't LTFS be more for a single-copy scenario? | 16:00 |
notmyname | gvernik: ya, so the seagate kinetic stuff has been talked about for a while. and seagate is working with other drive vendors to ensure that it's not a seagate-only tech. and swift can talk to kinetic devices today | 16:00 |
eikke | or even mixed: 2 copies on HD, 1 on tape, prefer the HD copies for retrieval if available | 16:00 |
cschwede | wohoo, that is some cool news (that swift-on-tape stuff) | 16:01 |
gvernik | notmyname,: right...i now recall we even spoke about it in Paris | 16:01 |
notmyname | gvernik: smr is shingled magnetic recording. it's a trick that spinning drives are using to get more density, but it comes with some "interesting" performance considerations | 16:01 |
*** ozialien has quit IRC | 16:01 | |
mtreinish | notmyname: sure, I could see that. I was just thinking about that; it might not be a good idea | 16:01 |
takoTuesday | cschwede: Im confused as to how to actually use django-storage swift | 16:01 |
notmyname | gvernik: and every drive vendor (both of them!) is working on that tech. | 16:01 |
notmyname | mtreinish: my thoughts exactly, and unfortunately I'm still waiting to see code rather than just hear about "there is a thing..." | 16:01 |
gvernik | notmyname,: thanks | 16:02 |
notmyname | gvernik: with SMR there are some considerations about how to write data, and eventually SMR drives will require some knowledge outside of the drive (ie the kernel, filesystem, or even the app). and that's the kind of stuff I'm looking at playing with this year in swift | 16:03 |
openstackgerrit | Mike Fedosin proposed openstack/swift: Fix broken test_policy_IO_override test https://review.openstack.org/171593 | 16:03 |
notmyname | gvernik: not to mention all the cool stuff around flash | 16:03 |
*** ultgeek has joined #openstack-swift | 16:03 | |
notmyname | gvernik: the overall summary is "make swift work better for the specific media it's running on" | 16:03 |
gvernik | notmyname,: is there some session or talk in Vancouver about Swift and specific media? | 16:04 |
*** theanalyst has quit IRC | 16:04 | |
notmyname | and I'm not sure exactly what that looks like in all cases. but it is important as we look at improvements in speed and density in swift clusters | 16:04 |
notmyname | gvernik: conference talk or swift tech sessions? | 16:04 |
gvernik | notmyname,: swift tech sessions | 16:05 |
notmyname | gvernik: you can search the conference sessions for the IMB tape ones. and the swift tech sessions haven't been set yet. in fact that's an agenda item for the meeting today :-) | 16:05 |
*** ozialien has joined #openstack-swift | 16:05 | |
ultgeek | Greetings all, for the upcoming Kilo release, what version of Swift will be at release? (2.2.2, 2.3.0, other?) | 16:06 |
gvernik | notmyname,: great... thanks | 16:06 |
*** ozialien has quit IRC | 16:06 | |
notmyname | ultgeek: 2.3.0 | 16:06 |
ultgeek | ty | 16:06 |
*** Gu_______ has joined #openstack-swift | 16:06 | |
notmyname | ultgeek: we're working on getting an RC for it very soon (friday or this weekend or early next week) | 16:06 |
*** theanalyst has joined #openstack-swift | 16:07 | |
*** peluse_ is now known as peluse | 16:08 | |
*** ChanServ sets mode: +v peluse | 16:08 | |
Akshat | ctennis,clayg: What is the significance of the backlog and max_clients params in the config files? | 16:09 |
ultgeek | notmyname: I saw on launchpad that the 2.3.0-rc1 only has 1 bug fix targeted and no blueprints listed, is there another source for finding what the new features will be? | 16:09 |
Akshat | how can they impact perf | 16:09 |
notmyname | ultgeek: ya, I haven't done my LP duties yet. | 16:10 |
peluse | acoles, welcome back from your very short time away.... | 16:10 |
notmyname | ultgeek: the biggest thing that will be in 2.3.0 is a beta of erasure codes. but there are several other things. there will be a full change log by the time of the RC | 16:10 |
notmyname | ultgeek: I'm currently working on that (or rather, it's in progress) | 16:11 |
ultgeek | notmyname: excellent! thanks. Look forward to seeing it. | 16:11 |
notmyname | ultgeek: me too ;-) | 16:11 |
peluse | me three :) | 16:12 |
notmyname | ultgeek: I'll likely have it up on gerrit later this week | 16:12 |
*** dencaval has joined #openstack-swift | 16:12 | |
*** foexle has quit IRC | 16:15 | |
notmyname | ok, I'm going to finish getting ready and go to the office | 16:16 |
*** tsg has quit IRC | 16:17 | |
cschwede | takoTuesday: i’ll have a look later after dinner, will get back to you | 16:17 |
*** ozialien has joined #openstack-swift | 16:19 | |
*** ultgeek has quit IRC | 16:20 | |
*** gyee has joined #openstack-swift | 16:20 | |
*** welldannit has joined #openstack-swift | 16:21 | |
*** ozialien has quit IRC | 16:23 | |
*** krykowski has quit IRC | 16:23 | |
*** aerwin has joined #openstack-swift | 16:24 | |
gvernik | I am writing some middleware that runs in the proxy and object servers. Is there a way inside the middleware's code to know if it runs on the proxy or the storage nodes? i guess i can check the path for device and partition, but i wonder if there is something else | 16:25 |
peluse | notmyname, you there? | 16:27 |
*** krykowski has joined #openstack-swift | 16:30 | |
*** setmason has quit IRC | 16:30 | |
tdasilva | gvernik: are you writing a middleware that could be run on either the proxy or the object servers? or one for each? | 16:30 |
*** setmason has joined #openstack-swift | 16:31 | |
gvernik | tdasilva,: same middleware, i activate it on all servers, but i want to have some "if" inside the code...like "if running in proxy" then... else "in object" .... | 16:31 |
openstackgerrit | Mike Fedosin proposed openstack/swift: Fix broken test_policy_IO_override test https://review.openstack.org/171593 | 16:33 |
gvernik | tdasilva,: my question is, is this enough: "device, partition, account, container, obj = split_path(req.path_info, 5, 5, True)" to figure out where the middleware runs (on the proxy or other nodes)? i wonder if there is something better to use | 16:36 |
tdasilva | gvernik: got it...haven't run into this before...trying to look around in the code | 16:37 |
gvernik | tdasilva,: thanks | 16:37 |
tdasilva | gvernik: would looking at the instance of the app help? | 16:38 |
gvernik | tdasilva,: you mean it's a different instance on each server... right? | 16:39 |
acoles | gvernik: idk, maybe your middleware could pass a /info request down the pipeline, only a proxy will return 2xx? | 16:40 |
acoles | peluse: hi! yeah, it was too short | 16:40 |
tdasilva | gvernik: like isinstance(self.app, BaseStorageServer) ?? | 16:41 |
gvernik | tdasilva,acoles: thanks... i will check if this will work for me... | 16:42 |
*** zhill has joined #openstack-swift | 16:47 | |
notmyname | peluse: about to get on my bike. what's up? | 16:52 |
peluse | cschwede, had asked for clarification the other day and I might have missed it in the scrollback: on the review branch should we go ahead and +A something on 2nd +2 right away? | 16:53 |
notmyname | peluse: cschwede: yes. if you're the 2nd +2 on an ec_review patch, go ahead and +A it | 16:53 |
peluse | cool | 16:53 |
cschwede | peluse: notmyname: thx - i did this already, and nothing exploded :) | 16:53 |
notmyname | :-_ | 16:54 |
notmyname | :-) | 16:54 |
notmyname | clayg has put a -2 on the first one, so that prevents the whole chain from landing until it gets removed | 16:54 |
winggundamth | I tried swift-get-nodes but got this: "Attention! mismatch between ring and item detected!". what does it mean? | 16:54 |
notmyname | we'll get everything landed to ec_review, then one merge commit patch to master at the end | 16:55 |
winggundamth | is Account = Tenant in Swift? | 16:55 |
notmyname | ok, I'm getting on my bike now | 16:55 |
ctennis | winggundamth: can you post the error? | 16:57 |
Akshat | ctennis : What is significance of backlog and max_clients param in config files | 17:00 |
Akshat | ctennis : can they impact performance | 17:00 |
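Akshat's question doesn't get answered in-channel; for reference, both knobs live in the [DEFAULT] section of each server's config. A sketch with this era's stock defaults (illustrative, and worth verifying against the sample configs):

    [DEFAULT]
    # depth of the kernel listen(2) queue for the server socket;
    # connections beyond this are dropped under bursts (default 4096)
    backlog = 4096
    # concurrent requests a single worker will service via eventlet;
    # too low refuses load, too high can inflate per-request latency
    # (default 1024)
    max_clients = 1024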
winggundamth | ctennis: http://pastebin.com/yQpUWrM3 here you go | 17:01 |
winggundamth | I used my Tenant name as the Account name | 17:01 |
ctennis | winggundamth: I don't see the "Attention!" piece you were noting | 17:01 |
winggundamth | whoops let me recheck again | 17:03 |
winggundamth | ctennis: but do I understand correctly about Account = Tenant? | 17:05 |
*** Gu_______ has quit IRC | 17:05 | |
ctennis | winggundamth: Tenant is a Keystone concept. if you aren't using keystone then it's not a swift term. | 17:05 |
winggundamth | I'm using Keystone | 17:06 |
ctennis | winggundamth: then there is usually a correlation between your tenants and swift accounts yes | 17:06 |
winggundamth | :) | 17:06 |
winggundamth | ctennis: I got it now. if I specify container.ring.gz then I have to combine the Account and Container together, or else it will show that error | 17:07 |
timburke | klrmn: feature-icr-ui-improvements (pr is https://github.com/swiftstack/SwiftStack/pull/673) | 17:08 |
winggundamth | ctennis: just wondering, even if I put a Container name that doesn't really exist, why does it still show information without any error? | 17:08 |
ctennis | winggundamth: it's just a lookup table, it doesn't verify existence | 17:09 |
*** mmcardle has quit IRC | 17:10 | |
*** annegentle has quit IRC | 17:10 | |
winggundamth | ctennis: I checked with the Account and Backup that really exist. But when I go to the directory that it showed, I couldn't find that directory. What do you think? | 17:11 |
winggundamth | ctennis: If I use curl, it will show 404 not found | 17:11 |
*** Gu_______ has joined #openstack-swift | 17:12 | |
*** cutforth has joined #openstack-swift | 17:13 | |
winggundamth | *check with Account and Container that really exist | 17:16 |
*** acoles is now known as acoles_away | 17:16 | |
winggundamth | ctennis: so it turns out that Account = AUTH_somerandomhash that I get from swift stat -v | 17:18 |
*** lcurtis has joined #openstack-swift | 17:20 | |
*** ozialien has joined #openstack-swift | 17:22 | |
winggundamth | ctennis: yes, you are right. http://pastebin.com/j3YqDMJ9 there is inconsistency between the storage containers. How can I fix this? | 17:28 |
*** pberis has quit IRC | 17:34 | |
notmyname | hello | 17:35 |
notmyname | so this looks interesting https://www.stellar.org/blog/stellar-consensus-protocol-proof-code/ | 17:40 |
*** jordanP has quit IRC | 17:44 | |
cschwede | takoTuesday: here you go, a small quickstart guide: https://github.com/cschwede/django-storage-swift#quickstart | 17:44 |
*** Gu_______ has quit IRC | 17:47 | |
*** setmason has quit IRC | 17:48 | |
torgomatic | gvernik: probably the easiest way for a middleware to discover where it's running is to tell it via configuration, i.e. put a line in its config like "running_in = object(/container/account/proxy/whatever)" | 17:49 |
*** setmason has joined #openstack-swift | 17:49 | |
torgomatic | failing that, I believe you could make an OPTIONS request for /, and account/container/object servers will tell you what they are via a header | 17:49 |
torgomatic | (the Server header, maybe? dunno) | 17:50 |
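A minimal sketch of torgomatic's config-based suggestion as a paste.deploy filter; the middleware class is hypothetical, and only the running_in idea comes from the chat:

    class WhereAmI(object):
        def __init__(self, app, running_in):
            self.app = app
            self.running_in = running_in  # 'proxy', 'object', ...

        def __call__(self, env, start_response):
            if self.running_in == 'proxy':
                pass  # proxy-side behavior goes here
            else:
                pass  # storage-side behavior goes here
            return self.app(env, start_response)

    def filter_factory(global_conf, **local_conf):
        conf = dict(global_conf, **local_conf)
        running_in = conf.get('running_in', 'proxy')

        def factory(app):
            return WhereAmI(app, running_in)
        return factory

Each server's conf would then carry the matching line in the filter section, e.g. running_in = object in object-server.conf.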
*** foexle has joined #openstack-swift | 17:50 | |
*** Gu_______ has joined #openstack-swift | 17:51 | |
gvernik | torgomatic,: thanks | 17:55 |
*** tsg_ has joined #openstack-swift | 17:59 | |
*** delatte has joined #openstack-swift | 18:00 | |
*** annegentle has joined #openstack-swift | 18:01 | |
*** geaaru has quit IRC | 18:01 | |
*** krykowski has quit IRC | 18:01 | |
*** Akshat has quit IRC | 18:02 | |
*** delattec has quit IRC | 18:03 | |
*** delattec has joined #openstack-swift | 18:12 | |
*** delatte has quit IRC | 18:15 | |
peluse | wow, some really good comments on the EC review chain! | 18:17 |
clayg | morning! | 18:17 |
clayg | what'd I miss? | 18:17 |
peluse | hopefully nothing :) | 18:18 |
clayg | i'm going to start addressing all the great comments on ec_review so you guys can get a clean slate in the am | 18:19 |
peluse | and good morning! | 18:19 |
clayg | but please keep looking at them - and assume everything currently marked today will be fixed tomorrow | 18:19 |
peluse | that'd be awesome, most at this point are nits (most not all) which is way cool | 18:19 |
clayg | I want to say someone saw something in the async_update/container update stuff about policy vs. int(policy) - but I also think it was one of those things where the policy instance was *more* correct - but the int works because the only thing we do with it is convert it to an int for the headers or pass it to get_async_dir (which can take either an int or policy instance) | 18:20 |
*** Akshat has joined #openstack-swift | 18:20 | |
clayg | ... maybe there's a real brokenness - but I bet it was just one of those works-anyway kind of bugs | 18:21 |
peluse | cschwede made that comment | 18:21 |
clayg | torgomatic's fixup on the proxy PUT commit message was great too | 18:21 |
peluse | in a delete at update call or something like that | 18:21 |
clayg | acoles_away: are you back? I'm still leaning towards making purge leave .durables and having hash_cleanup_listdir clean them out on the next suffix rehash (vs. trying to sneak fragment indexes into the .durable file names) | 18:22 |
peluse | clayg, he was here earlier but I'm cool with that. I like the benefit of the hard linked .durable though | 18:22 |
peluse | but I think I'm more in favor at this point in time with going "as is" for beta in that area.... | 18:23 |
cschwede | clayg: peluse: i think it works because somewhere down the path in async_update it falls back to policy index 0 | 18:23 |
*** aix has quit IRC | 18:24 | |
clayg | cschwede: oh, well that sounds less good :P | 18:25 |
cschwede | clayg: yeah, that’s what i thought. and it’s not caught in the test because of this fallback (the code itself is covered) | 18:26 |
*** delatte has joined #openstack-swift | 18:26 | |
*** Gu_______ has quit IRC | 18:26 | |
peluse | ahhh | 18:26 |
clayg | cschwede: well it must be a fairly weak assertion - unless it's only calling it with policy-0 | 18:27 |
clayg | i *thought* i had some memory of writing a test to hit the async_update code with a non-zero policy that checked the outgoing headers | 18:27 |
peluse | not looking at the code now but yeah seems like we can tighten that up | 18:27 |
*** annegentle has quit IRC | 18:28 | |
*** Gu_______ has joined #openstack-swift | 18:29 | |
*** Akshat has quit IRC | 18:29 | |
*** Akshat has joined #openstack-swift | 18:30 | |
*** annegentle has joined #openstack-swift | 18:30 | |
*** delattec has quit IRC | 18:30 | |
cschwede | clayg: peluse: i might be wrong though, but either way line 445 or 449 in https://review.openstack.org/#/c/169988/4/swift/obj/server.py is not consistent | 18:32 |
clayg | cschwede: yeah i think it should use policy vs. index - i was just questioning if we're dealing with a lack of testing - or if the tests are doing the right thing and it just works either way | 18:32 |
clayg | cschwede: maybe try to force it to a literal 0 and see if some test called "make sure async knows how to policy" fails with a 1 != 0 kind of assertion in the headers or the async dir or something? | 18:33 |
clayg | cschwede: anyway - i'll look at it - if we need another test we'll write it ;) | 18:33 |
*** lpabon has quit IRC | 18:34 | |
*** lcurtis has quit IRC | 18:34 | |
*** delattec has joined #openstack-swift | 18:35 | |
*** delatte has quit IRC | 18:38 | |
*** ho has joined #openstack-swift | 18:50 | |
*** mahatic has quit IRC | 18:53 | |
*** annegentle has quit IRC | 18:53 | |
notmyname | you know what time it is? | 18:55 |
notmyname | 5 minutes until the best time of your week: the swift team meeting! | 18:56 |
peluse | oh yeah | 18:56 |
tdasilva | notmyname: the lack of water in your state may be making you guys drink too much tequila | 18:57 |
notmyname | lol | 18:57 |
*** kota_ has joined #openstack-swift | 18:58 | |
*** acoles_away is now known as acoles | 18:59 | |
mattoliverau | morning | 18:59 |
kota_ | morning | 18:59 |
ho | good morning | 18:59 |
cschwede | you guys rock! | 19:00 |
notmyname | yes they do! | 19:00 |
acoles | clayg: i'm back | 19:01 |
*** Akshat has quit IRC | 19:02 | |
acoles | clayg: i didn't yet think much more about the FI.durable other than that it might be possible to save the .durable inode using *sym*links, which negates the 2nd of my reasons to do #FI.durable | 19:06 |
*** Gu_______ has quit IRC | 19:07 | |
acoles | clayg: may have been my comment about policy being a controller attribute, but then i think i could see how that could go wrong when handling COPY etc | 19:09 |
*** takoTuesday has quit IRC | 19:09 | |
*** Gu_______ has joined #openstack-swift | 19:10 | |
*** petertr7 has joined #openstack-swift | 19:12 | |
*** annegentle has joined #openstack-swift | 19:15 | |
clayg | acoles: well my current plan is to have purge just clean up the .data and make hash_cleanup_listdir remove a hanging .durable if it's the only file in the dir - same as it does with a single expired .ts | 19:16 |
acoles | clayg: so yeah if that's my comment ignore it, i think i figured out the reasons not to; the d[policy] was just tantalising | 19:16 |
clayg | acoles: yes it's very sexy! we'll do it once the middleware extractions get done | 19:17 |
acoles | clayg: yup. i could buy that. | 19:17 |
clayg | acoles: I kept thinking that we don't want per-fi fragments for the same reason we don't want per-fi tombstones | 19:18 |
clayg | there's no such thing as a tombstone for a single fragment - either the object is deleted or not; there's no such thing as a durable for a single fragment - either the object is durable or it's not. | 19:18 |
clayg | in the hash suffix syncing we'd have to parse the fragment index out of the durable when we hash to make sure that a durable fi#X is in sync with a durable fi#X+1 - but on the other hand if the fi#X+1 only has a durable for "the wrong" fi# it's not really "stable" - so I worry about either having to create a .durable for the fi's you're leaving behind - or well i'm just worried :P | 19:20 |
acoles | clayg: hmm, yeah would have to compare the timestamps of what is going into the suffix hash. yuk. | 19:22 |
*** Gu_______ has quit IRC | 19:23 | |
acoles | clayg: you want me to work up a diff for that i.e. purge only .data, HCL takes out stray .durables? | 19:24 |
clayg | acoles: I want to move the part cleanup "if not suffixes" into the start of the "next" run anyway - if that plus "have purge just remove the .data" works out I'll go with that for tomorrow and we can think more about if we have the ondisk file format right - or if we need to make another run at it | 19:25 |
clayg | acoles: maybe? i've got a good chunk of the reconstructor pulled apart to clean up some stuff in process job and build_job_info | 19:25 |
clayg | but I technically haven't gotten to ripping the revert job cleanup out of ssync and moving the part cleanup to the top of "the next run" | 19:26 |
acoles | clayg: moving the revert cleanup out of ssync was also on my list but i'd shelved it in favor of reviewing other stuff | 19:27 |
clayg | I think I can get it all tonight - but any code you write is better than any code I write - so I'm sure anything you throw out I can make use of - I'm just not sure when I'll need it - or if any tests you write would still apply | 19:27 |
acoles | ^^ that ain't true ! | 19:27 |
clayg | acoles: yeah np, that's for the better! | 19:27 |
clayg | I'll publish my wip branch - if you're itching to write some reconstructor code today you can play with it | 19:27 |
clayg | since I'm currently addressing comments (and will be for a couple hours) - anything you do would be fresh and clean to me when I go to pull it later | 19:28 |
acoles | clayg: no it will be my tomorrow before i can do anything (parents in law visiting!), i can check status of the review tomorrow and start on anything you haven't got round to, or just leave me a note | 19:29 |
clayg | acoles: perfect! | 19:30 |
clayg | have a good one - thanks | 19:30 |
acoles | clayg: so were you thinking to wait for reclaim age before clearing out stray .durables, or clear them out immediately they are found? | 19:32 |
clayg | if there's only a .durable - I think we can rip it out? | 19:33 |
clayg | we only need one for it to work in the end | 19:34 |
clayg | and they don't do much for us if they're not with a .data file | 19:34 |
clayg | reconstruction will always "commit" the fragment on the remote end before it calls purge on the local - so any .durable that doesn't have a .data with it should have been moved to another node and the .durable should be there. | 19:35 |
clayg | acoles: do you think anyone tested that ECDiskFileManager's yield_hashes doesn't yield files unless they're durable? | 19:35 |
acoles | clayg: whoa! does it? | 19:37 |
acoles | :/ | 19:37 |
clayg | acoles: anyway - here's the wip-fix-ec-recon branch - currently has failing tests -> https://github.com/clayg/swift/tree/fix-ec-recon | 19:38 |
mattoliverau | k, I'm going back to bed, see y'all in a few hours. | 19:38 |
kota_ | mattoliverau: me, too. | 19:39 |
*** kota_ has quit IRC | 19:39 | |
clayg | acoles: I *think* it skips files unless they have a .durable - I think that's the correct behavior - i just don't know if it got an explicit test for it - there's a bunch of reconstructor tests that were on my todo | 19:39 |
clayg | w/e it's just beta :P | 19:39 |
* clayg is moving to the next patch! | 19:40 | |
acoles | clayg: it *should* only yield objects that have a valid fileset, idk if there is an explicit test tho | 19:42 |
clayg | cschwede: thanks for http://paste.openstack.org/show/199247/ - acoles says he liked it so i'm going to apply it now | 19:43 |
clayg | cschwede: just running tests... | 19:43 |
acoles | cschwede: see clayg shifting blame around there :P | 19:43 |
cschwede | clayg: you’re welcome, hope it helps a little bit! | 19:44 |
clayg | acoles: i'm just a figurehead - i take credit for none of this | 19:46 |
clayg | git briancline | 19:47 |
clayg | ^ that is what git bra<tab> get's you in irc | 19:47 |
clayg | i was on fix-ec-recon btw if anyone was wondering :P | 19:47 |
acoles | clayg: TestEcDiskFileManager.test_yield_hashes_ignores_bad_ondisk_filesets kind of covers it | 19:50 |
*** gvernik has quit IRC | 19:51 | |
*** thumpba has joined #openstack-swift | 19:52 | |
*** acoles is now known as acoles_away | 19:55 | |
*** ozialien has quit IRC | 19:59 | |
*** joeljwright has quit IRC | 20:01 | |
*** joeljwright1 has joined #openstack-swift | 20:04 | |
*** Gu_______ has joined #openstack-swift | 20:06 | |
*** bkopilov has quit IRC | 20:14 | |
*** annegentle has quit IRC | 20:17 | |
*** annegentle has joined #openstack-swift | 20:23 | |
*** bkopilov has joined #openstack-swift | 20:24 | |
*** tsg_ has quit IRC | 20:26 | |
*** bkopilov has quit IRC | 20:32 | |
*** dencaval has quit IRC | 20:35 | |
*** annegentle has quit IRC | 20:38 | |
*** annegentle has joined #openstack-swift | 20:39 | |
*** winggundamth has quit IRC | 20:42 | |
*** annegentle has quit IRC | 20:45 | |
*** Gu_______ has quit IRC | 20:58 | |
*** annegentle has joined #openstack-swift | 21:00 | |
*** cutforth has quit IRC | 21:01 | |
*** annegentle has quit IRC | 21:09 | |
*** Gu_______ has joined #openstack-swift | 21:09 | |
*** annegentle has joined #openstack-swift | 21:10 | |
*** Gu_______ has quit IRC | 21:10 | |
*** setmason has quit IRC | 21:26 | |
ho | tdasilva: around? | 21:26 |
*** Gues_____ has joined #openstack-swift | 21:29 | |
*** jrichli has quit IRC | 21:33 | |
ho | I tried to upload a patch removing a space for "Erasure Code Docs". when I executed git review, I got the message "Do you really want to submit the above commits?" and it listed all the patches on ec-review. | 21:41 |
ho | first time seeing it. so i think it's better that i don't do this myself. (i'm really nervous about it) | 21:42 |
*** Gues_____ has quit IRC | 21:44 | |
*** Gues_____ has joined #openstack-swift | 21:47 | |
*** Gues_____ has quit IRC | 21:48 | |
*** Gues_____ has joined #openstack-swift | 21:51 | |
mattoliverau | Morning.. Again | 21:56 |
ho | mattoliverau: morning again :-) | 21:57 |
*** jkugel has quit IRC | 22:01 | |
*** chlong has joined #openstack-swift | 22:01 | |
*** sandywalsh has quit IRC | 22:04 | |
clayg | ho: was it just doc changes? | 22:05 |
*** sandywalsh has joined #openstack-swift | 22:05 | |
clayg | ho: you want it to not change any of the dependent patches' SHAs - but as long as it's the last patch in the chain everything will be automatically rebased - but nothing *should* have changed that would have caused that | 22:05 |
clayg | ho: so it's probably fine | 22:05 |
clayg | ho: but if you were really unsure you could paste the output where it's asking you to type yes - and anyone in channel can probably confirm it's safe to push | 22:06 |
mattoliverau | ho: if you're going to push a new docs patchset, have you addressed all the comments on the patchset? | 22:07 |
*** annegentle has quit IRC | 22:09 | |
*** jamielennox is now known as jamielennox|away | 22:11 | |
*** annegentle has joined #openstack-swift | 22:20 | |
clayg | mattoliverau: even if a fixup doesn't address everyone's comments I'll try to audit and make sure any lingering comments since the last time I pushed are marked done. | 22:20 |
clayg | mattoliverau: I'd like to avoid discouraging folks from pushing a fix because they don't feel like they can address *all* the comments | 22:21 |
mattoliverau | clayg: cool, good point :) | 22:21 |
clayg | also why is anyone making comments to the doc patch - just push - if someone thinks they can word it better - then *they* can push ;) | 22:21 |
clayg | mattoliverau: that being said - I thought I saw a comment in email from peluse where I'd apparently dropped an earlier comment in a subsequent revision - the system is not without risk of human failure | 22:22 |
mattoliverau | clayg: although your a machine, your also still human.. Wrote that sentence makes no sense :p BTW you've been doing an amazing job! | 22:25 |
mattoliverau | s/wrote/wow | 22:26 |
clayg | lol - i had to read it twice to parse it as nonsensical - i was like "yeah, robot humans, sounds awesome" | 22:27 |
*** bkopilov has joined #openstack-swift | 22:30 | |
vinsh | Small question here... in swift.. will a value of "r01z01" for read/write affinity be interpreted the same as "r1z1" ? | 22:31 |
clayg | vinsh: I'd have to go read the parser - it's probably in proxy.server or proxy.controller.base | 22:32 |
clayg | torgomatic: ^ do you know off hand? | 22:32 |
ho | clayg: i'm back. really sorry, I'm taking up your time (I don't want to). http://paste.openstack.org/show/200756/ | 22:32 |
vinsh | I'm wondering if I can be a bit lazy in some puppet code here :) | 22:32 |
torgomatic | clayg: vinsh: I'd guess it would; it all gets run through int() before getting jammed into the ring | 22:32 |
vinsh | Right on. I'll test it out then. | 22:33 |
vinsh | Thanks. | 22:33 |
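For context, the values under discussion are the affinity options in proxy-server.conf; a sketch (weights and regions illustrative):

    [app:proxy-server]
    # prefer region 1 zone 1 for reads, then anything else in region 1;
    # "r01z01=100" parses identically because region/zone pass through int()
    read_affinity = r1z1=100, r1=200
    write_affinity = r1, r2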
clayg | ho: looks good - just type yes - should only effect the docs change (that's the only one where the sha changed) | 22:35 |
ho | clayg: thanks :-) | 22:35 |
*** aerwin has quit IRC | 22:36 | |
*** annegentle has quit IRC | 22:36 | |
*** annegentle has joined #openstack-swift | 22:47 | |
*** Gues_____ has quit IRC | 22:53 | |
*** annegentle has quit IRC | 23:03 | |
*** zhill has quit IRC | 23:04 | |
*** annegentle has joined #openstack-swift | 23:05 | |
*** chlong has quit IRC | 23:05 | |
*** bkopilov has quit IRC | 23:06 | |
*** bkopilov has joined #openstack-swift | 23:06 | |
ho | mattoliverau: thanks for the advice. i was really nervous about it | 23:10 |
*** jamielennox|away is now known as jamielennox | 23:12 | |
*** annegentle has quit IRC | 23:16 | |
*** kei_yama has joined #openstack-swift | 23:32 | |
clayg | peluse: that get_object_ring/load_object_ring is a rabbit hole | 23:39 |
clayg | peluse: but in fairness I think you're right - account.reaper, container.sync and obj.reconstructor should probably all be updated to use policy instead of policy_index, and load_object_ring instead of get_object_ring | 23:39 |
clayg | peluse: proxy.server is a special case I think because we don't want to go with policy everywhere until we're ready to use a controller policy attribute - which we should wait to do until after versioned writes and COPY middleware | 23:40 |
*** km has joined #openstack-swift | 23:50 | |
vinsh | torgomatic: clayg: Testing has confirmed it. r01z01... works the same as r1z1 for read/write affinity in the conf files. | 23:52 |
*** vinsh has quit IRC | 23:55 | |
*** jrichli has joined #openstack-swift | 23:58 |