taras_ | i sent email titled 'Observations re swift-container usage of SQLite', not subscribed to list, we'll see if it makes it | 00:03 |
---|---|---|
notmyname | taras_: hi | 00:05 |
* notmyname just read all the scrollback | 00:06 | |
notmyname | taras_: so, you are wondering about db replication | 00:06 |
notmyname | it's more than just "rsync the DBs". db replication only falls back to moving the entire file if the replicas are really different | 00:08 |
taras_ | notmyname: yes | 00:08 |
taras_ | i'm gonna read the code later | 00:08 |
notmyname | otherwise it only moves rows. and torgomatic pointed out the recent improvement for bulk inserts | 00:08 |
taras_ | oh, so it doesn't use rsync at all? | 00:08 |
notmyname | no it does | 00:08 |
notmyname | just not as the default :-) | 00:09 |
taras_ | :) | 00:09 |
notmyname | well, not as the normal case | 00:09 |
taras_ | seems like an interesting problem | 00:09 |
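The row-level replication notmyname describes can be sketched roughly like this (the table shape and function names are illustrative, not swift's actual code): each replica remembers a per-peer sync point and ships only rows past it, falling back to moving the whole file only when the DBs have diverged too far.

```python
import sqlite3

def rows_since(conn, sync_point):
    """Rows a peer hasn't seen yet: everything past its recorded sync point."""
    return conn.execute(
        "SELECT ROWID, name, created_at FROM objects WHERE ROWID > ?",
        (sync_point,)).fetchall()

def merge_rows(conn, rows):
    """Apply rows shipped from a peer; INSERT OR REPLACE keeps it idempotent."""
    conn.executemany(
        "INSERT OR REPLACE INTO objects (ROWID, name, created_at) "
        "VALUES (?, ?, ?)", rows)
    conn.commit()

# two in-memory stand-ins for db replicas
a = sqlite3.connect(":memory:")
b = sqlite3.connect(":memory:")
for c in (a, b):
    c.execute("CREATE TABLE objects (name TEXT, created_at TEXT)")
a.executemany("INSERT INTO objects (name, created_at) VALUES (?, ?)",
              [("obj1", "0000000001.00000"), ("obj2", "0000000002.00000")])
a.commit()
merge_rows(b, rows_since(a, 0))   # b's sync point for a is 0: ship both rows
```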
taras_ | so stupid question | 00:09 |
taras_ | why this design | 00:09 |
taras_ | vs something like a clustered mysql? | 00:09 |
notmyname | and IIRC swift writes everything to a local .pending file. then on db operations it flushes those to disk. like our own WAL | 00:09 |
taras_ | eg 1 master + 2 hot standbys | 00:09 |
taras_ | notmyname: yup, i'd advocate replacing sqlite with that .pending file altogether | 00:10 |
taras_ | in the long term | 00:10 |
notmyname | interesting | 00:10 |
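The .pending pattern notmyname describes looks roughly like the following sketch. Swift's real .pending format is different (delimited, pickled records), so this JSON-lines version only illustrates the append-cheaply-now, fold-into-sqlite-later idea.

```python
import json, os, sqlite3, tempfile

PENDING = os.path.join(tempfile.mkdtemp(), "container.db.pending")

def put_record(record):
    # the write path just appends; no sqlite lock is taken here
    with open(PENDING, "a") as f:
        f.write(json.dumps(record) + "\n")

def flush_pending(conn):
    # a later db operation folds everything pending in at once
    if not os.path.exists(PENDING):
        return
    with open(PENDING) as f:
        rows = [json.loads(line) for line in f]
    conn.executemany(
        "INSERT INTO objects (name, created_at) VALUES (:name, :created_at)",
        rows)
    conn.commit()
    os.unlink(PENDING)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (name TEXT, created_at TEXT)")
put_record({"name": "obj1", "created_at": "0000000001.00000"})
put_record({"name": "obj2", "created_at": "0000000002.00000"})
flush_pending(conn)
```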
*** tkay has quit IRC | 00:10 | |
notmyname | one thing we definitely want to do (and have talked about for years) is manage some container sharding to better support containers with high cardinality | 00:11 |
notmyname | originally swift replaced a system that had a large, sharded postgres DB for storing the data placement (similar design to mogileFS) | 00:11 |
taras_ | infinite amounts of metadata with varying degrees of warmness is a very interesting problem :) | 00:11 |
taras_ | notmyname: was postgres storing data too? | 00:11 |
taras_ | or just metadata | 00:12 |
notmyname | no, just the metadata | 00:12 |
taras_ | interesting | 00:12 |
notmyname | swift's design with sqlite was chosen because it's simple and good enough. it's well-proven code that allows for the functionality needed (listings) and also has the advantage of being a db library so the data can be managed like files if needed (see rsync or other replication primitives) | 00:13
taras_ | sqlite is very good at being simple and robust | 00:13 |
notmyname | so when there are failures in the cluster, it's easy to recreate the db replicas | 00:13 |
notmyname | and we don't have to manage with different ha/durability patterns. basically everything is treated pretty much the same way | 00:14 |
notmyname | that is, eventually consistent replicas | 00:14 |
taras_ | i definitely appreciate your approach | 00:14 |
notmyname | gives a lot of flexibility and robustness to the overall system | 00:14 |
taras_ | in that one can skim the code without going crazy | 00:14 |
notmyname | :-) | 00:14 |
notmyname | (i just saw your email to the ML) | 00:15 |
notmyname | ...reading | 00:15 |
notmyname | taras_: " lack of index for LIST" I don't think that's true | 00:16 |
notmyname | there's an index on (deleted, name) so we can select the name and easily filter out the deleted ones | 00:17 |
taras_ | my sqlite failed me there | 00:18 |
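The (deleted, name) index notmyname mentions supports a LIST query along these lines; a minimal sqlite3 sketch, not swift's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (name TEXT, deleted INTEGER DEFAULT 0)")
# same shape as swift's ix_object_deleted_name
conn.execute("CREATE INDEX ix_object_deleted_name ON objects (deleted, name)")
conn.executemany("INSERT INTO objects (name, deleted) VALUES (?, ?)",
                 [("a/1", 0), ("a/2", 1), ("b/1", 0)])

# a LIST request: live names only, sorted, resumable via a marker
rows = conn.execute(
    "SELECT name FROM objects WHERE deleted = 0 AND name > ? "
    "ORDER BY name LIMIT 2", ("",)).fetchall()

# EXPLAIN QUERY PLAN shows the index serving the query instead of a scan
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM objects "
    "WHERE deleted = 0 AND name > '' ORDER BY name").fetchone()
```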
notmyname | taras_: also, note that almost every production swift cluster ends up using flash for the account and container storage. I'm not sure if that would change any of your views, but it's something important to know | 00:18 |
taras_ | notmyname: i noticed that | 00:18 |
taras_ | notmyname: i don't have any particular views, i certainly appreciate good-enough engineering | 00:19 |
notmyname | reason being, buying a few SSDs is vastly cheaper than spending the engineering time to figure out how to effectively shard containers | 00:19 |
taras_ | i just have experience dealing with perf issues caused by similar patterns, was wondering if it's of any use to other projects | 00:19 |
taras_ | eg your index + timestamp format is likely to effectively double your db size | 00:20 |
*** erlon has quit IRC | 00:20 | |
notmyname | taras_: https://github.com/openstack/swift/blob/master/swift/container/backend.py#L187 | 00:20 |
taras_ | which causes more cold io, etc | 00:20 |
notmyname | what do you mean by index + timestamp format | 00:20 |
taras_ | yeah i mentioned that, just didnt think it through :( | 00:20 |
taras_ | notmyname: so as i understand, main piece of data that you store is key + some metadata | 00:21 |
taras_ | eg timestamp, etag, etc | 00:21 |
taras_ | you are likely to have paths in the keys | 00:21 |
taras_ | eg long-ish keys | 00:21 |
taras_ | so with that index not using a hash function, you double your db size on disk | 00:22
notmyname | let's be specific so we know what's going on :-) | 00:22 |
notmyname | container DBs | 00:22 |
notmyname | 2 tables: stat and objects (I'm ignoring the policy one for now) | 00:22 |
notmyname | stat contains stuff about the container itself. generally just one row | 00:22 |
notmyname | but there is one row per object in the objects table | 00:23
notmyname | and that has a few columns (5? /me goes to check) | 00:23
taras_ | right | 00:23 |
notmyname | name, content type, etag, and size are the ones set by the user | 00:24 |
notmyname | name can be long, as you mentioned. up to the name limit in the cluster (default is 1024) | 00:24 |
notmyname | content type could be long, I guess, since that's ultimately just user-set data. so up to whatever the header max is. 8k IIRC | 00:25 |
taras_ | so INSERT into objects -> append rowid,name,created_at,size,content_type,etag,deleted,storage_policy_index; append deleted,ix_object_deleted_name | 00:25
*** dmorita has joined #openstack-swift | 00:25 | |
notmyname | and ya, you're right that the timestamp is stored as TEXT | 00:25 |
notmyname | and the timestamp value will always come from https://github.com/openstack/swift/blob/master/swift/common/utils.py#L684 | 00:26 |
notmyname | i.e. a normalized 10.5 string | 00:26
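The normalized form notmyname links to is a fixed-width "10.5" string, something close to the following: 10 integer digits, a dot, 5 fractional digits, so timestamps sort the same lexicographically as numerically.

```python
def normalize_timestamp(timestamp):
    # zero-padded to 16 characters total: "0000000001.00000" ... so that
    # string comparison of two timestamps matches numeric comparison
    return "%016.05f" % float(timestamp)

ts = normalize_timestamp(1409995200.5)
```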
taras_ | so you end up with something like <page for objects><page for objects><page for ix_object_deleted_name> | 00:26 |
taras_ | so scanning the index is very likely to scan the whole db | 00:27 |
taras_ | sorry, not very likely | 00:27 |
taras_ | i'm actually not sure how likely it would be | 00:28 |
notmyname | ah, ok. had me confused for a second | 00:28 |
notmyname | ok. so we don't have any idea now ;-) | 00:28 |
peluse | somewhat likely? :) | 00:28 |
taras_ | but you are very likely | 00:28 |
notmyname | it's going to read something into memory. | 00:28 |
taras_ | to have doubled your storage | 00:28 |
notmyname | doubled? from what? | 00:29 |
taras_ | from not having that index | 00:29 |
notmyname | on | 00:29 |
notmyname | an index on the timestamp? | 00:29
taras_ | eg if you have a db handy, can do vacuum..drop index ix_object_deleted_name and see how much overhead it took up | 00:30 |
taras_ | notmyname: sorry, 2 issues: big timestamp and big index | 00:30 |
notmyname | I don't have any prod DBs handy right now | 00:31 |
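taras_'s suggested measurement can be scripted. This synthetic version (made-up key pattern and row count, not a prod db) builds a container-like table with long path-style names, then compares the vacuumed file size before and after dropping the index:

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "container.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit so VACUUM can run
conn.execute("CREATE TABLE objects (name TEXT, deleted INTEGER DEFAULT 0)")
conn.execute("CREATE INDEX ix_object_deleted_name ON objects (deleted, name)")
# long, path-like keys: the index stores a full copy of every name
conn.executemany("INSERT INTO objects (name) VALUES (?)",
                 [("some/long/object/path/prefix/%08d" % i,)
                  for i in range(10000)])
conn.execute("VACUUM")
with_index = os.path.getsize(path)

conn.execute("DROP INDEX ix_object_deleted_name")
conn.execute("VACUUM")
without_index = os.path.getsize(path)
overhead = with_index - without_index   # bytes the index alone occupied
```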
taras_ | i'm gonna bike home, back in a few hours | 00:33 |
notmyname | taras_: ok. I'll be in and out for the rest of the evening myself | 00:33 |
notmyname | taras_: I definitely want to talk more about all this | 00:34 |
notmyname | and maybe if we're lucky redbo can join in too | 00:34 |
taras_ | i appreciate it :) | 00:34 |
redbo | says everyone all the time | 00:35 |
notmyname | taras_: also, if you visit the mothership in SF (I'm not sure how mozilla is structured) I'd be happy to chat in person too | 00:35 |
notmyname | redbo: all the things? | 00:35 |
taras_ | i'm leaving moz | 00:35 |
taras_ | moving to MV | 00:35 |
notmyname | well ok then | 00:35
taras_ | would be great to chat | 00:36 |
taras_ | err, to clarify: not looking for a job. gotta run now | 00:36
redbo | I didn't read through all that, but looked at the doc. I've always meant to see how much caching db connections in an LRU would help, would love to see someone benchmark it. | 00:40 |
*** shri1 has quit IRC | 00:41 | |
openstackgerrit | Samuel Merritt proposed a change to openstack/swift: Zero-copy object-server GET responses with splice() https://review.openstack.org/102609 | 00:56 |
openstackgerrit | Yuan Zhou proposed a change to openstack/swift: Fix delete versioning objects when previous is expired https://review.openstack.org/88204 | 01:10 |
config | Hi | 02:23 |
config | Is there a way to customize the auditor service for each service: account, container, and object? | 02:23 |
config | Basically, to configure it in different ways? | 02:23 |
redbo | data point: I created a 3M object container db (496MB) and did a 10,000 object GET from the middle (cache flushed). It read 89.6MB to do that GET and took 1:30 (on my crappy VM). After vacuuming, the same GET read 93.6MB and took 1:14. | 02:32 |
portante | redbo: read more data but went faster? | 02:37 |
portante | am I missing something? | 02:37 |
redbo | yeah, after a vacuum the data is probably more contiguous. At least the index. | 02:38 |
*** addnull has joined #openstack-swift | 02:38 | |
redbo | it wasn't using much CPU, so obviously it did a lot of seeks in there if it took a minute and a half to read 90MB. It can dd 44 MB/s. | 02:39 |
torgomatic | redbo: how are you measuring io? | 02:40 |
redbo | grep read_bytes /proc/<PID>/io | 02:40 |
torgomatic | thanks | 02:42 |
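redbo's measurement can be wrapped in a few lines (Linux-only; `read_bytes` counts bytes actually fetched from the storage layer, so page-cache hits don't show up):

```python
def read_bytes(pid="self"):
    # /proc/<pid>/io has lines like "read_bytes: 12288"; we want that counter,
    # not rchar, which would also count reads satisfied from the page cache
    with open("/proc/%s/io" % pid) as f:
        for line in f:
            key, _, value = line.partition(":")
            if key == "read_bytes":
                return int(value)
    raise KeyError("no read_bytes in /proc/%s/io" % pid)

before = read_bytes()
# ... run the sqlite query being measured here ...
after = read_bytes()
delta = after - before   # bytes the query pulled from disk
```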
redbo | ha.. doing the fadvise(WILLNEED) drops the whole thing to 10 seconds. Wouldn't work on really big containers, though. | 02:48 |
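The fadvise trick redbo mentions is available from Python directly (POSIX-only; the file here is a throwaway stand-in for a container db):

```python
import os, tempfile

def prefetch(path):
    # POSIX_FADV_WILLNEED asks the kernel to start pulling the file into
    # the page cache; offset=0, length=0 means "the whole file"
    fd = os.open(path, os.O_RDONLY)
    try:
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_WILLNEED)
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "container.db")
with open(path, "wb") as f:
    f.write(b"x" * 4096)
prefetch(path)   # subsequent random reads should mostly hit cache
```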
config | Does anyone know whether the Auditor service is configurable for the account, container, objects? | 02:50 |
taras_ | yeah fadvise is awesome for smaller stuff | 02:53 |
taras_ | the thing to do is to trace number of bytes read | 02:53 |
taras_ | when using index or not using it | 02:53 |
taras_ | bytes read from disk..not read() calls | 02:53 |
taras_ | i'm gonna try a bunch of stuff on wed | 02:54 |
taras_ | redbo: if you use fadvise, you can also tell sqlite to use the mmap backend | 02:56
redbo | interesting. without the index, that query took 27 seconds and read 353.7 MB. | 02:56 |
taras_ | then it's really fast | 02:56 |
taras_ | mmap is a perf hit otherwise | 02:56 |
taras_ | redbo: 27 vs 10s orvs 1:30? | 02:58 |
redbo | 10s with precache (including caching), 27 with no index and no cache, 1:30 with index and no cache. | 02:59 |
redbo | 18s with no index and mmap and no precache. | 02:59 |
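Enabling sqlite's memory-mapped I/O, as taras_ suggests, is just a pragma (the 1 GiB cap here is arbitrary; builds compiled without mmap support will report 0):

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(path)
# map up to 1 GiB of the db file instead of read()ing pages into a private
# cache; cheap when the pages are already resident (e.g. after WILLNEED)
conn.execute("PRAGMA mmap_size = %d" % (1 << 30))
mmap_size = conn.execute("PRAGMA mmap_size").fetchone()[0]
```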
taras_ | sounds like that index hurts | 03:00 |
taras_ | redbo: mind sharing that db? | 03:00 |
redbo | oh but this is after vacuuming. I should have saved it and done that on a copy. | 03:02 |
redbo | yeah, let me recreate it | 03:02 |
taras_ | heh, if you are outrunning an index post-vacuum, it's not gonna be better pre-vacuum :) | 03:05
redbo | yeah. I just want a better simulation of real life. | 03:05 |
config | Hey, I noticed that nobody answered my question about the Auditor, which is totally all right, but was just wondering if that's because it's just not an interesting topic to talk about, or if it's because the answer is no, you can't configure it? | 03:06 |
config | :) | 03:06 |
portante | peluse: I'll look to see what I have around for using the in-memory object server for storage policy based functional tests | 03:07 |
redbo | config: I didn't know what you meant, but I'm pretty sure the answer is no. | 03:07 |
config | https://swiftstack.com/openstack-swift/architecture/ | 03:08 |
config | subsection: Auditors | 03:08 |
config | under Consistency Services | 03:08 |
redbo | how would you want to configure it differently? | 03:09 |
config | just as an example, say, maybe you want it to run less frequently, or more frequently? | 03:09 |
*** erlon has quit IRC | 03:10 | |
config | that's just an example | 03:10 |
redbo | oh. that you can probably do. But it's a little bit scattershot. The object auditor supports rate limiting, the others I think you can only tune with concurrency and how long they wait between passes. | 03:11 |
config | Do you know how this is done? | 03:11 |
redbo | if you look for example at the [object-replicator] section of https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample | 03:13
zaitcev | daaaarn I was just going to paste that | 03:14 |
redbo | config: you can see there's a concurrency setting, and "run_pause" which is how long it waits between passes | 03:14 |
zaitcev | I think it may have a delay between passes, I know I set it | 03:14 |
redbo | well you want it set to something, or it'll just sit and spin when your cluster is empty :) | 03:14 |
zaitcev | For containers it's called "interval". | 03:15 |
redbo | also if you look at [object-auditor] in that same file, you'll see it has bytes_per_second and files_per_second | 03:16 |
redbo | oh yeah, interval. we should call it one thing everywhere. | 03:16 |
mattoliverau | Actually if I remember correctly, I think you can use run_pause or interval, they set the same variable in the code. | 03:21 |
config | Wow, thanks!! | 03:21 |
mattoliverau | for the container | 03:21 |
config | I didn't even know that these samples were available :) | 03:22 |
config | I'll probably be looking over them, which will probably lead to more questions later on. | 03:22 |
config | I will* be looking over them | 03:22 |
config | I think this Auditor issue might just be the beginning. | 03:23 |
config | :) | 03:23 |
redbo | ha.. yes, db and container replicator will support run_pause or interval. but object replicator will only support run_pause and container updater will only support interval. | 03:24 |
redbo | er account and container replicator | 03:25 |
config | So it seems that there is a lot that you can do with not just the Auditor, but with the other services as well. | 03:25 |
redbo | someone should make it support interval everywhere, and both where it used to be run_pause. sounds like it could be one of those low-hanging fruit thingies for new people. | 03:29 |
mattoliverau | actually I lie, it is the account and container replicators where interval and run_pause is loaded into the same variable. | 03:29 |
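The double spelling mattoliverau describes boils down to a conf lookup along these lines (a simplified sketch of the daemons' option handling, with the usual 30 second default):

```python
def get_interval(conf):
    # 'interval' wins if both are set; 'run_pause' is the older spelling
    # still honored by the account/container replicators
    return int(conf.get('interval', conf.get('run_pause', 30)))
```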
portante | redbo, notmyname, torgomatic, clayg, _others_: did you see the article I posted earlier? | 03:48 |
torgomatic | Nope | 03:56 |
portante | it talks about the need to fsync the directory to persist the temp file name and the rename to the target file | 03:59 |
portante | object server PUT operation | 03:59 |
portante | torgomatic: shall I repost the article? | 03:59 |
portante | or is it not interesting? | 04:00 |
torgomatic | If you don't mind, I'd appreciate it. | 04:00 |
*** bkopilov has joined #openstack-swift | 04:00 | |
portante | http://lwn.net/Articles/457667/ | 04:01 |
portante | the code example which matches the PUT operation in swift is: http://lwn.net/Articles/457672/ | 04:01 |
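The pattern the article recommends, sketched generically in Python (not swift's DiskFile code): fsync the temp file, rename it into place, then fsync the containing directory so the rename itself survives a crash.

```python
import os, tempfile

def durable_put(dirpath, name, data):
    # 1) write and fsync a temp file in the same directory
    fd, tmp = tempfile.mkstemp(dir=dirpath)
    try:
        os.write(fd, data)
        os.fsync(fd)                    # file contents are on disk
    finally:
        os.close(fd)
    # 2) atomically rename into place
    os.rename(tmp, os.path.join(dirpath, name))
    # 3) fsync the directory so the new directory entry is durable too;
    #    without this step a crash can "lose" the rename
    dirfd = os.open(dirpath, os.O_DIRECTORY)
    try:
        os.fsync(dirfd)
    finally:
        os.close(dirfd)

d = tempfile.mkdtemp()
durable_put(d, "obj.data", b"hello")
```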
torgomatic | Huh. I wonder if XFS needs the directory to be fsynced or not. | 04:43 |
redbo | it should probably be in there. but from what I've seen, xfs is pretty protective of its metadata. file data not so much. | 04:44 |
*** kopparam has joined #openstack-swift | 04:44 | |
torgomatic | In other words, is Swift behavior 100% safe on the most common deployment and could be enhanced, or is it busted in a rare case? | 04:45 |
portante | from what I heard today from the file system guys, a system failure might cause the newly created object to disappear, where the old directory contents are still on disk | 04:49 |
portante | so the old object is in place | 04:49 |
portante | so i think it would make the reconciler work harder, but still might be busted in a rare case | 04:50 |
*** nosnos has joined #openstack-swift | 04:50 | |
portante | torgomatic: ^ | 04:50 |
portante | rare case being all replicas fail in the same way such that the object was reported as persisted, but old directory contents are in play when replicas come back online | 04:51
portante | I might not be thinking about this correctly this late at night, so just going to bed ... | 04:52 |
torgomatic | portante: interesting... sounds like it'll take quite the coincidence for this to do any damage. | 04:52 |
torgomatic | I wouldn't lose sleep over it. :) | 04:52 |
portante | my threads background causes me to lose sleep over these kinds of "rare" conditions, which all too often showed up with folks pointing fingers at the library that did not handle the condition properly | 04:53
redbo | What I really want is filesystem write ordering. So if I update the hashes.pkl after the object is moved into place, I'd lose either the hashes.pkl update or both. Either way the replicator can fix it from a good copy. | 04:54 |
redbo | and I wouldn't worry about fsyncing in my fantasy world | 04:54 |
portante | maybe just nightmares before I actually go to sleep ... | 04:54 |
config | +portante: what "file system guys" are you referring to? | 04:55 |
config | +portante: in your comment at xx:49 | 04:55 |
config | +portante: [xx:49] | 04:55 |
config | Is anyone able to answer that? | 04:57 |
config | Thanks | 04:57 |
torgomatic | I know he works for red hat and they employ lots of kernel developers, but that's as much as I know | 04:58 |
portante | ric wheeler | 04:59 |
portante | jeff moyer is a co-worker across the hall from me | 04:59 |
*** ppai has joined #openstack-swift | 04:59 | |
config | +portante: Which file system are they working on? | 04:59 |
portante | they work on many | 05:00 |
config | +portante: and are they primarily working with the C language? | 05:00 |
portante | I would think so, given their kernel background | 05:01 |
redbo | it looks like with this database, doing fadvise to preload the database and keeping the index wins. Though everything is faster than what we do now. http://paste.openstack.org/show/105041/ | 05:01 |
redbo | I'll try to grab a bigger database tomorrow and try it out. | 05:01 |
config | +portante: Do they work with distributed file systems as well, say NFS, and maybe Swift and Ceph? | 05:02 |
redbo | And maybe try it with concurrency to multiple databases.. It is reading more data when prefetching or not using an index. If it's forced to do so from multiple files, it might degrade to worse than the current method. | 05:07 |
*** chandankumar has joined #openstack-swift | 05:13 | |
redbo | and none of that may hold on SSDs where the speed tradeoff for seeks/reading contiguous data is different. | 05:30 |
kopparam | Hello! How can I use glance to use S3 to create/store/delete images? | 06:08 |
openstackgerrit | A change was merged to openstack/swift: Merge master to feature/ec https://review.openstack.org/118331 | 07:26 |
mattoliverau | time to call it a night, night all! (or good day to some) | 08:00
homegrown | I have a long running upload script to populate swift, but the auth-token expires. Can i give some service accounts longer auth-tokens? | 09:03 |
joeljwright | homegrown: rather than extending the auth-tokens can you not re-authenticate in a similar way to the python-swiftclient? | 09:12 |
*** aix has joined #openstack-swift | 09:13 | |
joeljwright | homegrown: using keystone auth the swiftclient attempts to reauthenticate when a token expires | 09:18 |
homegrown | joeljwright: will look into it, thanks | 09:19 |
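The reauthentication joeljwright describes amounts to catching the auth failure and retrying with a fresh token. A toy sketch — `get_token`/`put_object` are hypothetical stand-ins for your auth and upload calls, and `PermissionError` stands in for an HTTP 401:

```python
def upload_with_reauth(get_token, put_object, name, attempts=2):
    # roughly what python-swiftclient does internally with keystone auth:
    # on an unauthorized response, fetch a fresh token and retry once
    token = get_token()
    for attempt in range(attempts):
        try:
            return put_object(token, name)
        except PermissionError:              # stand-in for a 401 response
            if attempt == attempts - 1:
                raise
            token = get_token()              # token expired: re-authenticate

# simulate a token that expires after the first use
calls = {"n": 0}
def get_token():
    calls["n"] += 1
    return "token%d" % calls["n"]
def put_object(token, name):
    if token == "token1":
        raise PermissionError("token expired")
    return (token, name)

result = upload_with_reauth(get_token, put_object, "obj")
```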
openstackgerrit | Lorcan Browne proposed a change to openstack/swift: Add "--no-overlap" option to swift-dispersion populate https://review.openstack.org/118411 | 09:46 |
peluse | portante, cool thanks. I started one too but have no idea what I did with the code, so I can start from scratch again if needed, but yeah if you already did some work I would be happy to carry it forward :) Let me know... | 12:14
openstackgerrit | A change was merged to openstack/swift: Only bind SAIO daemons to localhost https://review.openstack.org/118197 | 14:08 |
notmyname | good morning | 16:05 |
notmyname | seems that I came down with a cold yesterday :-( | 16:10 |
tdasilva | notmyname: hope you get better soon...sorta related..how's that collarbone btw? | 16:16 |
tdasilva | back to biking yet? | 16:17 |
notmyname | not yet | 16:18 |
notmyname | shoulder is doing well, but still pretty weak | 16:19 |
tdasilva | nasty little injury | 16:21 |
*** annegent_ has joined #openstack-swift | 16:25 | |
*** vr1 has joined #openstack-swift | 16:26 | |
vr1 | hello | 16:26 |
notmyname | vr1: hi | 16:26 |
vr1 | is it possible to add an entry point in a non-dev package of swift? | 16:26
notmyname | vr1: what do you mean? | 16:26 |
vr1 | because in devstack or SAIO there is a setup.cfg, if you install from dpkg there is no setup.cfg | 16:26
vr1 | and no setup.py | 16:26 |
vr1 | but we need to register a new backend | 16:27 |
vr1 | I don't know if I am clear | 16:28 |
notmyname | vr1: ya, I think I understand | 16:29 |
vr1 | in short, is it possible to quickly patch the swift redhat package by adding a new backend (without writing our own package) | 16:29 |
notmyname | vr1: I think the answer is no. you'll need to make your own package to be deployed alongside the red hat packages | 16:30 |
*** annegent_ has quit IRC | 16:30 | |
vr1 | that's OK thanks | 16:30 |
notmyname | vr1: or alternatively build your own package with everything in it | 16:30 |
vr1 | ok we'll do | 16:31 |
vr1 | thanks a lot | 16:31 |
zaitcev | Why not write your own package though? It's trivial. | 16:33 |
zaitcev | git clone from fedora | 16:34 |
zaitcev | add your own patches | 16:34 |
zaitcev | PROFIT | 16:34 |
zaitcev | It's all automated just for your convenience already for crissakes. Mind boggles that people still find an excuse not to roll their clone RPMs. Even workers at Oracle learned that trick, surely you can do it too. | 16:35 |
zaitcev | Or better yet post a patch to Gerrit to support your backend in upstream Swift | 16:36 |
*** annegent_ has joined #openstack-swift | 16:38 | |
vr1 | need to go, see you later | 16:38 |
*** annegent_ has quit IRC | 16:43 | |
mahatic | hi notmyname , mattoliverau | 16:45 |
notmyname | zaitcev: I'd like to see the backend stuff better selectable by a config (eg select a diskfile implementation with a use line in the config file) | 16:46 |
notmyname | mahatic: hello | 16:46 |
*** btorch has joined #openstack-swift | 16:47 | |
mahatic | notmyname, I realized that my processor needed an upgrade and I had to buy a new personal laptop. So i did that, installed fedora 20, setup Swift SAIO using virt-manager. And poking around it. | 16:47 |
notmyname | cool | 16:47 |
mahatic | That's what i've been doing for the past two weeks. | 16:47 |
zaitcev | How does virt-manager relate to SAIO? Is that SAIO within a VM running under said F20? | 16:48 |
mahatic | zaitcev, yup | 16:48 |
mahatic | notmyname, like you describe here, https://swiftstack.com/blog/2013/02/12/swift-for-new-contributors/ : I don't have a scratch to itch :) or rather an immediate pain point | 16:49 |
mahatic | notmyname, Can you please suggest a bug that i can get started with? I have been looking into the launchpad, but was wondering if you or anyone else would have any suggestions for a newbie | 16:50 |
notmyname | mahatic: check the topic :-) | 16:50 |
notmyname | mahatic: yesterday I added a link with some simple ideas there | 16:50 |
zaitcev | notmyname: I edited the /ideas a bit | 16:52 |
notmyname | zaitcev: great! | 16:52 |
notmyname | zaitcev: ah, the ring validator. yup | 16:55 |
zaitcev | notmyname: maybe too wordy, sorry, feel free to make laconic | 16:55 |
notmyname | zaitcev: oh, and you don't want auto-reload config files | 16:55 |
zaitcev | notmyname: yeah... I'll be willing to be outvoted by people with actual production clusters on this, but as it is I do not. | 16:56 |
zaitcev | that's why I asked if you ran these ideas past ops in San Antonio | 16:56 |
zaitcev | actually, the explicit bind_port too | 16:57 |
notmyname | zaitcev: no, that wasn't the kind of thing that we were able to discuss in SA | 16:57 |
mahatic | notmyname, in multinode install docs, show mounting with a label? I don't quite get that. Sorry to sound naive! | 16:58 |
notmyname | zaitcev: the feedback I have on the explicit bind_port is that rax and swiftstack both already set it. | 16:58 |
zaitcev | notmyname: most of our deployment tools do too, I think. | 16:58 |
*** bkopilov has joined #openstack-swift | 16:59 | |
notmyname | mahatic: devices can get re-ordered on reboot if you aren't using an explicit static reference to the device when mounting. either uuid or label. so updating the docs to use a label is something pretty simple to do and can keep people from shooting themselves in the foot | 17:00 |
*** tsg has quit IRC | 17:00 | |
zaitcev | mahatic: labels are there so you could pull drives from a node or add drives onto existing controllers. In your VM you can edit the .xml and make /dev/vda /dev/vdb stable | 17:00 |
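The mount-by-label setup being described boils down to stamping a label on the filesystem at mkfs time and referring to that label in /etc/fstab, so device-name reshuffles across reboots are harmless. The label, device, and mount point below are illustrative values, not taken from the log:

```
# mkfs.xfs -L d1 /dev/vdb       <- give the filesystem a label when creating it
# /etc/fstab entry: mounts by label, so it survives /dev/sdX renumbering
LABEL=d1  /srv/node/d1  xfs  noatime  0 0
```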
zaitcev | Oh | 17:00 |
*** tsg has joined #openstack-swift | 17:02 | |
mahatic | notmyname, Can you suggest any videos/demos that could help me understand more? | 17:02 |
mahatic | notmyname, ah okay. | 17:02 |
notmyname | mahatic: for that one, start with a google search of "mount by label" | 17:02 |
openstackgerrit | Dolph Mathews proposed a change to openstack/swift: warn against sorting requirements https://review.openstack.org/118694 | 17:03 |
*** tkay has joined #openstack-swift | 17:04 | |
*** tkay has left #openstack-swift | 17:04 | |
zaitcev | wow | 17:04 |
*** tgohad has joined #openstack-swift | 17:04 | |
*** tsg has quit IRC | 17:07 | |
notmyname | zaitcev: ? | 17:08 |
zaitcev | notmyname: sorting of requirements.txt | 17:08 |
zaitcev | notmyname: I think it's super weird that order would matter. | 17:08 |
notmyname | ya, I saw some email this morning about it. seem that it's a Big Deal (tm) to people | 17:08 |
mahatic | notmyname, sure. I am. And i thought the SAIO set up doc does that? In here: http://docs.openstack.org/developer/swift/development_saio.html#using-a-partition-for-storage | 17:10 |
notmyname | zaitcev: but I have no objections to adding comments to a requirements file (in fact I had some evil idea about it recently...) | 17:10 |
notmyname | mahatic: no, doesn't look like it | 17:14 |
notmyname | but I think torgomatic doesn't like comments because those are supposed to be machine-readable docs (and machines don't read comments) | 17:15 |
mahatic | notmyname, ah yes. It should be in this doc http://docs.openstack.org/developer/swift/howto_installmultinode.html | 17:18 |
mahatic | i believe | 17:18 |
notmyname | mahatic: yes. start there. but in both is good :-) | 17:19 |
*** bill_az_ has quit IRC | 17:22 | |
*** bill_az_ has joined #openstack-swift | 17:23 | |
mahatic | notmyname, sure :) | 17:25 |
*** geaaru has quit IRC | 17:29 | |
*** annegent_ has joined #openstack-swift | 17:38 | |
torgomatic | notmyname: depends on what's in them... a comment like "the real versions that work are X..Y" is something that operators will want to know, but if they don't read the requirements.txt, they won't find out | 17:39 |
notmyname | torgomatic: right | 17:40 |
torgomatic | whereas a comment like "don't mess with the file ordering, dopes" is entirely reasonable, as it will be seen by those who want to go poking at stuff for no reason | 17:40 |
*** occupant has joined #openstack-swift | 17:40 | |
torgomatic | or for no *good* reason, at least :) | 17:40 |
ahale | computers don't read comments but computers also don't reorder lists alphabetically for ease of reading | 17:41 |
openstackgerrit | Sarvesh Ranjan proposed a change to openstack/swift: Spelling mistakes corrected in comments. https://review.openstack.org/118701 | 17:41 |
*** annegent_ has quit IRC | 17:43 | |
*** occupant has quit IRC | 17:45 | |
notmyname | wow. the gate is 18 hours deep | 17:46 |
torgomatic | that seems not good | 17:48 |
openstackgerrit | John Dickinson proposed a change to openstack/swift: make the bind_port config setting required https://review.openstack.org/118200 | 17:49 |
notmyname | whoops. left some debug print statements ^^ | 17:49 |
notmyname | all gone now | 17:49 |
notmyname | acoles: if you can ask donagh or others about an explicit port setting, that would be nice. ie do they already set it explicitly (and thus the above patch doesn't affect anything), or are they not setting it and will need to update configs? | 17:52 |
*** morganfainberg is now known as morganfainberg_Z | 17:53 | |
*** mkollaro has quit IRC | 17:54 | |
*** mahatic has quit IRC | 17:57 | |
*** aix has quit IRC | 18:00 | |
*** cutforth has joined #openstack-swift | 18:03 | |
*** tsg has joined #openstack-swift | 18:03 | |
*** tgohad has quit IRC | 18:04 | |
*** occupant has joined #openstack-swift | 18:05 | |
*** angelastreeter has joined #openstack-swift | 18:07 | |
notmyname | https://wiki.openstack.org/wiki/Swift/PriorityReviews updated with an eye toward the openstack integrated release | 18:09 |
torgomatic | is it a baleful eye? | 18:10 |
zaitcev | 1. Full of deadly or pernicious influence; destructive. 2. Full of grief or sorrow; woeful; sad. [Archaic] | 18:14 |
notmyname | torgomatic: http://www.theshiznit.co.uk/feature/cant-unsee-christian-bales-eye-wart.php ?? | 18:15 |
notmyname | zaitcev: interesting. I only knew the 2nd definition | 18:15 |
*** annegent_ has joined #openstack-swift | 18:18 | |
*** gvernik has joined #openstack-swift | 18:31 | |
*** IRTermite has quit IRC | 18:32 | |
*** mahatic has joined #openstack-swift | 18:34 | |
*** zul has quit IRC | 18:43 | |
*** annegent_ has quit IRC | 18:44 | |
notmyname | swift team meeting in 15 minutes | 18:45 |
*** zul has joined #openstack-swift | 18:48 | |
*** angelastreeter has quit IRC | 18:51 | |
taras_ | how does one create a container in swift after setting up SAIO | 18:57 |
taras_ | i was using devstack, but that was slow and eventually broke | 18:57 |
*** elambert has joined #openstack-swift | 18:57 | |
notmyname | taras_: PUT /v1/<your account>/container | 18:57 |
notmyname | taras_: also, you may be interested in https://github.com/swiftstack/vagrant-swift-all-in-one | 18:57 |
notmyname | taras_: the swift CLI can do it too: `swift post new_container_name` | 18:58 |
* notmyname doesn't like the "post" there, but *sigh* | 18:58 | |
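The two steps notmyname describes (a raw `PUT /v1/<account>/<container>`, or the `swift` CLI) can be sketched in Python. This is a hedged illustration: the storage-URL shape and SAIO endpoint are the stock tempauth defaults seen elsewhere in this log, and `container_url`/`create_container` are made-up helper names, not swiftclient API:

```python
# Hypothetical helpers for creating a container over Swift's HTTP API.
import urllib.request

def container_url(storage_url, container):
    # The container lives directly under the account's storage URL:
    # <storage_url>/<container>
    return storage_url.rstrip('/') + '/' + container

def create_container(storage_url, token, container):
    # PUT /v1/<account>/<container>; 201 Created or 202 Accepted both
    # mean the container exists afterwards.
    req = urllib.request.Request(
        container_url(storage_url, container),
        method='PUT',
        headers={'X-Auth-Token': token})
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage against a running SAIO (storage_url and token come back in the
# X-Storage-Url and X-Auth-Token headers of a GET to /auth/v1.0):
# create_container('http://127.0.0.1:8080/v1/AUTH_test', token, 'nc')
```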
taras_ | nice | 18:58 |
taras_ | swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing post nc gives me "not found" | 18:59 |
*** mkollaro has joined #openstack-swift | 18:59 | |
notmyname | taras_: against devstack? | 19:00 |
taras_ | Container 'nc' not found | 19:00 |
taras_ | against SAIO | 19:00 |
*** peluse has joined #openstack-swift | 19:00 | |
torgomatic | try just "swift -A ... post" to make sure the account exists | 19:00 |
peluse | /msg NickServ identify intel123 | 19:00 |
taras_ | ah not found | 19:01 |
taras_ | thanks | 19:01 |
notmyname | peluse: oops | 19:01 |
taras_ | fixed it | 19:01 |
taras_ | thanks | 19:01 |
notmyname | taras_: oh, I think you need to set account_autocreate to true | 19:01 |
notmyname | well, "need" | 19:01 |
peluse | heh | 19:01 |
peluse | big secret now out in the open! | 19:01 |
notmyname | peluse: (1) I hope that's not your password (2) change it | 19:01 |
notmyname | ah! meeting time | 19:02 |
*** ChanServ sets mode: +v peluse | 19:06 | |
*** angelastreeter has joined #openstack-swift | 19:06 | |
acoles | notmyname: will do (re port setting) | 19:08 |
taras_ | hmm | 19:10 |
taras_ | i do have account_autocreate, but still struggling with auth :( | 19:10 |
notmyname | taras_: ok. I'm in the swift team meeting now in #openstack-meeting. I can help after | 19:10 |
taras_ | ok | 19:11 |
taras_ | curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0 | 19:11 |
taras_ | seems to give me the token, but same user/url stuff won't work with swift cmd | 19:11 |
mahatic | notmyname, where does the meeting happen? not here? | 19:12 |
mahatic | got it | 19:13 |
Anju_ | taras : list is working ? | 19:13 |
taras_ | swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat seems to work | 19:13 |
taras_ | and list seems to work | 19:13 |
taras_ | (no containers) | 19:13 |
Anju_ | hmmm ..then try swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload container_name object (you want to store) | 19:14 |
tdasilva | mahatic: in the #openstack-meeting channel | 19:15 |
mahatic | yup got it | 19:15 |
mahatic | tdasilva, i'm there. Thanks! | 19:15 |
*** IRTermite has joined #openstack-swift | 19:15 | |
taras_ | Anju_: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload nc .bashrc | 19:16 |
taras_ | Error trying to create container 'nc': 404 Not Found: <html><h1>Not Found</h1><p>The resource could not be found.< | 19:16 |
Anju_ | hmm | 19:16 |
taras_ | Object PUT failed: http://127.0.0.1:8080/v1/AUTH_test/nc/.bashrc 404 Not Found [first 60 chars of response] <html><h1>Not Found</h1><p>The resource could not be found.< | 19:16 |
taras_ | wonder if i messed up in saio setup | 19:16 |
Anju_ | check | 19:17 |
Anju_ | logs | 19:17 |
tdasilva | taras_: can you create a container with curl? | 19:17 |
taras_ | can't | 19:17 |
taras_ | that's something like curl -X PUT -i -H "X-Auth-Token: AUTH_tka516e3754bab43129a71f17aa57cf5b2" http://127.0.0.1:8080/v1/AUTH_test/nc right | 19:17 |
taras_ | ? | 19:17 |
Anju_ | this is the path | 19:18 |
tdasilva | taras_: are you getting the same 404 error? | 19:19 |
taras_ | tdasilva: yes | 19:20 |
*** Anju_ has quit IRC | 19:21 | |
btorch | taras_: are you able to just do a HEAD request against the account and get 2XX back ? | 19:21 |
taras_ | yes | 19:21 |
taras_ | i think i messed up the backend | 19:21 |
taras_ | in saio | 19:21 |
*** Anju has joined #openstack-swift | 19:21 | |
taras_ | lemme try to fix that | 19:21 |
Anju | taras: check the log /var/log/swift/all.log | 19:23 |
Anju | you can find the reason here | 19:24 |
Anju | and check your all services are running | 19:24 |
taras_ | ok, yeah, fixed up stuff in /srv | 19:26 |
taras_ | now things work | 19:26 |
taras_ | sorry for trouble guys | 19:26 |
taras_ | will do the vagrant thing next time | 19:28 |
*** jergerber has joined #openstack-swift | 19:35 | |
*** elambert has quit IRC | 19:49 | |
*** mkollaro has quit IRC | 19:54 | |
notmyname | taras_: you got it all straightened out? | 19:56 |
notmyname | Anju: btorch: thanks for helping taras_ out | 19:57 |
clayg | yay another great meeting | 19:57 |
portante | clayg, torgomatic: does that article make sense to you? | 19:57 |
portante | either of you, might be better english | 19:57 |
* mattoliverau has the flu, so is surprised I survived being awake for the entire meeting! | 19:57 | |
clayg | portante: i stopped when I got to the pretty picture with all the nice colors | 19:57 |
mattoliverau | back to bed, see ya all in a few hours | 19:57 |
portante | the code from the article is better: http://lwn.net/Articles/457672/ | 19:58 |
portante | clayg | 19:58 |
*** tgohad has joined #openstack-swift | 19:59 | |
*** tsg has quit IRC | 19:59 | |
clayg | portante: cool, like notmyname said we need to do some homework - i'll probably pull it up on the train home tonight | 20:00 |
*** gvernik has quit IRC | 20:00 | |
portante | great | 20:00 |
portante | hopefully, it won't cause you to fall asleep and miss your stop! | 20:01 |
clayg | heheheh | 20:03 |
*** tsg has joined #openstack-swift | 20:03 | |
*** annegent_ has joined #openstack-swift | 20:05 | |
*** tgohad has quit IRC | 20:07 | |
notmyname | barrier v. nobarrier were always confusing to me | 20:07 |
portante | yes | 20:08 |
notmyname | barrier == on == writeback cache is flushed to disk? | 20:08 |
portante | I believe so | 20:08 |
notmyname | ie making it a pass-through operation | 20:08 |
notmyname | ok. and that's the default | 20:08 |
notmyname | nobarrier means it could be written to writeback cache and not persisted | 20:08 |
portante | barrier meaning do not initiate another i/o to that volume until the cache is written to disk | 20:09 |
portante | yes, I believe that is correct | 20:09 |
notmyname | and nobarrier is ok if you've got something like a battery-backed cache that can persist the data | 20:09 |
portante | yes | 20:09 |
portante | but | 20:09 |
notmyname | there's always a "but" | 20:09 |
portante | you have to have your disks properly configured to not have a cache in play behind that controller | 20:09 |
notmyname | right | 20:10 |
notmyname | it's caches all the way down | 20:10 |
portante | I have to remember this right, but I believe if a controller with a write-back cache detects a disk with a cache enabled, then the controller's cache becomes write-through and all the I/Os go to the disks | 20:12 |
portante | with nobarrier enable or not, IIRC | 20:12 |
portante | I could be wrong about that | 20:12 |
portante | I'll have to check | 20:12 |
*** Anju has quit IRC | 20:13 | |
clayg | notmyname & portante are digging in to the *real* reason we store three copies | 20:16 |
*** Anju has joined #openstack-swift | 20:16 | |
*** fifieldt_ has joined #openstack-swift | 20:17 | |
*** fifieldt has quit IRC | 20:21 | |
btorch | portante: I think that depends a lot on the controller as well, some will disable the drive cache when a bbm is used | 20:30 |
Anju | notmyname: did you think about my changes (limit check) | 20:35 |
*** tdasilva has quit IRC | 20:36 | |
notmyname | clayg: because it's all magic and nobody knows anything!! | 20:38 |
notmyname | Anju: got a link? | 20:39 |
Anju | notmyname: https://review.openstack.org/#/c/118186/ | 20:39 |
*** angelastreeter has quit IRC | 20:40 | |
*** mwstorer has quit IRC | 20:43 | |
notmyname | Anju: looks like clayg had the same concerns I had. also, you'll need to get the unit tests passing | 20:45 |
Anju | notmyname: so you will look at it once the tests pass ?? :P | 20:48 |
portante | btorch: yes, and some controllers don't | 20:48 |
portante | :) | 20:48 |
notmyname | Anju: well....I still think there's the concern of the subtle api change you're introducing | 20:51 |
Anju | awww | 20:51 |
notmyname | all I'm saying is that I want more than just one or two people to weigh in, and we normally are pretty conservative | 20:52 |
Anju | notmyname: last time you asked about the client handling https://github.com/openstack/python-swiftclient/blob/master/swiftclient/client.py#L467 | 20:54 |
notmyname | Anju: ah ok. so limit could be passed in as 'abc' | 20:55 |
Anju | noo..It is giving an error | 20:56 |
notmyname | portante: fsync'ing the containing directory isn't FS-specific is it? | 20:56 |
Anju | notmyname:- it is giving some ValueError: invalid literal for int() with base 10: 'abc' error | 20:57 |
notmyname | Anju: oh! that's good then | 20:57 |
notmyname | Anju: where is that error raised? what line? | 20:58 |
portante | I don't think it is fs-specific | 20:58 |
Anju | client.py 441 | 20:58 |
clayg | notmyname: it's '%d' % limit | 20:58 |
clayg | so you know -1 is fine, but abc is ValueError | 20:59 |
clayg | good on original author there | 20:59 |
notmyname | clayg: of course. I'll blame the headcold | 20:59 |
portante | it may be that one fs does not require the fsync, e.g. an os.rename() always ends up persisted to disk for some reason, but I would expect the vfs layer to hide that | 20:59 |
*** angelastreeter has joined #openstack-swift | 21:01 | |
clayg | acoles: hey - thanks a ton for all of the comments on https://review.openstack.org/#/c/103777/ - i'd sorta not been looking at it too much giving folks time to digest | 21:02 |
clayg | acoles: it all came out of work on the multi-reconciler patch, I was going out of my way to understand the behaviors when multiple actors were both uploading the same objects with the same timestamp and i ran into those very same 503's that you were seeing | 21:02 |
portante | well, that paper we are talking about says that the need to persist the rename of the directory is file system and/or mount option specific | 21:03 |
clayg | which lead me to my current understanding of how the proxy handles 409's on PUT and the opinion that the result of that handling is sorta crappy/wasteful. | 21:03 |
*** annegent_ has quit IRC | 21:04 | |
Anju | clayg: notmyname : The only change I did in swiftclient is in shell.py, so I can use the limit option, which is here: http://pastebin.com/pg92A3xi | 21:05 |
*** jasondotstar has joined #openstack-swift | 21:05 | |
*** angelastreeter has quit IRC | 21:06 | |
Anju | when I use limit=abc error is not handled gracefully but the error is : TypeError: %d format: a number is required, not str | 21:06 |
*** tsg has quit IRC | 21:06 | |
Anju | notmyname: and when I gave limit = 1 still the same error ..so I changed limit to int(limit) | 21:11 |
Anju | :) | 21:11 |
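The failure modes in this exchange can be reproduced in isolation: `'%d' % limit` raises TypeError when limit is a string, so the value has to be coerced with `int()` first — which raises ValueError for junk like `'abc'` but happily accepts `'-1'`, as clayg notes. `parse_limit` below is a hypothetical helper for illustration, not the actual swiftclient change:

```python
def parse_limit(raw):
    # int() accepts '1' and '-1' but raises ValueError for 'abc';
    # without this coercion, '%d' % raw fails later with TypeError
    # ("%d format: a number is required, not str").
    return int(raw)

print(parse_limit('1'))   # numeric strings parse fine
print(parse_limit('-1'))  # -1 parses too; range validation is a separate question
```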
*** miqui has quit IRC | 21:11 | |
*** dencaval has quit IRC | 21:16 | |
*** tsg has joined #openstack-swift | 21:19 | |
*** annegent_ has joined #openstack-swift | 21:35 | |
*** annegent_ has quit IRC | 21:37 | |
*** annegent_ has joined #openstack-swift | 21:38 | |
portante | notmyname: from the XFS FAQ: http://xfs.org/index.php/XFS_FAQ#Q._Which_settings_does_my_RAID_controller_need_.3F | 21:42 |
portante | see the last sentence of second paragraph of that section | 21:43 |
notmyname | portante: yup | 21:44 |
*** mahatic has quit IRC | 21:44 | |
*** angelastreeter has joined #openstack-swift | 21:53 | |
*** angelastreeter has quit IRC | 21:57 | |
*** mwstorer has joined #openstack-swift | 22:03 | |
*** openstack has joined #openstack-swift | 22:11 | |
*** tab_ has joined #openstack-swift | 22:14 | |
*** joeljwright has quit IRC | 22:15 | |
portante | notmyname: does anybody deploy swift on local FSes besides XFS regularly in production? | 22:17 |
notmyname | portante: not that I know of. I've talked to some people who deployed it on an ext variant but they always go back to xfs after seeing what happens with ext when you get a lot of inodes | 22:19 |
notmyname | portante: I suppose we aren't counting glusterfs here ;-) | 22:20 |
*** erlon has quit IRC | 22:20 | |
portante | notmyname: yes | 22:21 |
portante | :) | 22:21 |
taras_ | so i tested getting rid of index + using ints for created_at in sqlite | 22:26 |
taras_ | filesize goes from 1.7mb to 1mb for my 10K entry db | 22:26 |
taras_ | redbo: how did you get so many entries into the db? | 22:27 |
taras_ | it takes me about 17ms per insert | 22:27 |
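A figure like 17ms per insert is what you get when each INSERT runs in its own transaction, paying a journal commit (and, on a real file, an fsync) every time; wrapping the whole load in one transaction is the usual fix, in the same spirit as the bulk-insert improvement mentioned earlier in the log. The schema below is made up for illustration, not the real container schema:

```python
import sqlite3
import time

# In-memory DB keeps this self-contained; point connect() at a file
# path to see the per-commit fsync cost that dominates
# one-row-per-transaction inserts.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE object (name TEXT PRIMARY KEY, created_at INTEGER)')

rows = [('obj-%06d' % i, i) for i in range(10000)]

start = time.monotonic()
with conn:  # one transaction, one commit, for all 10K rows
    conn.executemany('INSERT INTO object VALUES (?, ?)', rows)
elapsed = time.monotonic() - start

count = conn.execute('SELECT COUNT(*) FROM object').fetchone()[0]
print('inserted %d rows in %.3fs' % (count, elapsed))
```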
*** annegent_ has quit IRC | 22:28 | |
tab_ | Let's say that I have 10 machines. Does the proxy service have to live on all the machines? | 22:30 |
portante | notmyname, redbo, clayg, torgomatic: can anybody create the conditions for zero-byte files at will with XFS? | 22:33 |
*** angelastreeter has joined #openstack-swift | 22:34 | |
torgomatic | portante: not I | 22:46 |
openstackgerrit | Tushar Gohad proposed a change to openstack/swift: EC: Make quorum_size() specific to storage policy https://review.openstack.org/111067 | 22:47 |
redbo | the main source of zero byte files I think is improper shutdown after rsyncs, since rsync doesn't fsync at all. | 22:48 |
*** gyee has quit IRC | 22:59 | |
*** angelastreeter has quit IRC | 23:01 | |
* clayg has this visual image of torgomatic waving his hand across an object server to exert his will | 23:02 | |
torgomatic | maybe if there's a strong enough magnet in that hand... ;) | 23:03 |
jokke_ | LOL | 23:03 |
clayg | tab_: no, you can put proxies just on the machines that you want to load balance across - as long as they can talk to the storage nodes listed in the rings | 23:04 |
*** jergerber has quit IRC | 23:06 | |
mattoliverau | Morning | 23:15 |
*** openstackstatus has quit IRC | 23:19 | |
*** openstackstatus has joined #openstack-swift | 23:21 | |
*** ChanServ sets mode: +v openstackstatus | 23:21 | |
*** sungju has joined #openstack-swift | 23:21 | |
*** sungju has quit IRC | 23:22 | |
*** dmsimard is now known as dmsimard_away | 23:26 | |
*** astellwag has quit IRC | 23:32 | |
*** astellwag has joined #openstack-swift | 23:33 | |
*** Anju has quit IRC | 23:38 | |
*** mwstorer has quit IRC | 23:39 | |
tab_ | clayg: thx. So in case I have 10 machines, I can have the proxy service active on only one machine and still be able to read and write data to all 10 nodes. | 23:57 |
clayg | tab_: ayup | 23:57 |
clayg | unless that one proxy goes kaboom ;) | 23:57 |
tab_ | that's better than Ceph's radosgw consensus algorithm with Paxos, which in production wants a quorum of machines (I guess at least 2) with the monitor daemon up for someone to be able to read/write .... | 23:59 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!