*** d34dh0r53 has quit IRC | 00:15 | |
*** d34dh0r53 has joined #openstack-swift | 00:18 | |
*** mvkr has joined #openstack-swift | 01:23 | |
*** ianychoi has joined #openstack-swift | 03:04 | |
*** ianychoi has quit IRC | 03:11 | |
*** evrardjp has quit IRC | 05:33 | |
*** evrardjp has joined #openstack-swift | 05:33 | |
*** hoonetorg has quit IRC | 05:44 | |
*** hoonetorg has joined #openstack-swift | 05:56 | |
*** diablo_rojo has joined #openstack-swift | 06:20 | |
*** pcaruana has joined #openstack-swift | 07:34 | |
openstackgerrit | Charles Hsu proposed openstack/python-swiftclient master: Support v3 application credentials auth. https://review.opendev.org/699457 | 08:19 |
*** jistr_ is now known as jistr | 08:29 | |
*** rdejoux has joined #openstack-swift | 08:35 | |
*** rpittau|afk is now known as rpittau | 08:41 | |
*** diablo_rojo has quit IRC | 09:49 | |
*** Fengli1 has joined #openstack-swift | 10:00 | |
*** Fengli has quit IRC | 10:01 | |
*** Fengli1 is now known as Fengli | 10:01 | |
mahatic | Happy new year swift team! | 10:04 |
mahatic | mattoliverau, glad to know you're safe. Take care | 10:04 |
*** spsurya has joined #openstack-swift | 12:48 | |
*** ianychoi has joined #openstack-swift | 13:15 | |
*** theintern_ has joined #openstack-swift | 15:16 | |
*** Fengli has quit IRC | 15:21 | |
*** theintern_ has quit IRC | 15:43 | |
*** theintern_ has joined #openstack-swift | 15:44 | |
*** pcaruana has quit IRC | 16:04 | |
*** theintern_ has quit IRC | 16:08 | |
*** theintern_ has joined #openstack-swift | 16:23 | |
*** rdejoux has quit IRC | 16:32 | |
*** theintern_ has quit IRC | 16:36 | |
*** pcaruana has joined #openstack-swift | 16:43 | |
*** rpittau is now known as rpittau|afk | 16:51 | |
clayg | happy new years! | 17:07 |
*** zaitcev has joined #openstack-swift | 17:26 | |
*** ChanServ sets mode: +v zaitcev | 17:26 | |
*** evrardjp has quit IRC | 17:33 | |
*** evrardjp has joined #openstack-swift | 17:33 | |
*** gyee has joined #openstack-swift | 17:53 | |
ormandj | happy new years :) | 18:41 |
clayg | timburke: i'm going to fix that pep8 error on p 700818 real quick | 19:11 |
patchbot | https://review.opendev.org/#/c/700818/ - swift - Deprecate per-service auto_create_account_prefix - 1 patch set | 19:11 |
openstackgerrit | Clay Gerrard proposed openstack/swift master: Deprecate per-service auto_create_account_prefix https://review.opendev.org/700818 | 19:12 |
*** pcaruana has quit IRC | 19:15 | |
openstackgerrit | Clay Gerrard proposed openstack/swift master: Use less responses from handoffs https://review.opendev.org/700239 | 19:20 |
timburke | clayg, i'm loving auto_create_account_prefix in constraints! seems like a good fit, particularly given its pervasiveness | 19:53 |
clayg | good to hear! When you suggested it I thought it was a great idea | 19:54 |
ormandj | quick question (as if anything i mention is quick): right now, when dealing with a web display for containers, we check whether there are fewer than X objects; if so, we get the list, do pagination, and display forward/backward/go to end/go to beginning buttons - works great. for containers over 1000 objects, we just provide forward/backward/go to beginning - but no go to end, since we display folders and a container | 20:13 |
ormandj | HEAD shows total object count, so if we were to try to use that to paginate, it would cause issues due to some of the objects being within a folder abstraction | 20:13 |
ormandj | how are people working around this, aside from just getting a full object list every time? | 20:14 |
*** camelCaser has quit IRC | 20:54 | |
clayg | ormandj: "working around this" ... for "containers over 1000 objects"? like... the "go to end" option? maybe `?reverse=true` | 20:55 |
*** camelCaser has joined #openstack-swift | 20:56 | |
clayg | "it seems strange that it's configurable at all" 👍 | 20:57 |
clayg | "this is only configurable for legacy compatibility, please pretend it's not here and don't change it" | 20:57 |
clayg | or just... "don't change it" | 20:57 |
ormandj | clayg: right - the ability to seek to the end of the object listing when the display includes folders, since you can't count on the total object count being correct | 21:03 |
ormandj | well, correct in terms of objects to display, that is - it's correct as a count of objects in the container | 21:04 |
clayg | yeah if I wanted to display the "-1" page I'd use the reverse query param (as opposed to if I just wanted the "next" page in which case I'd use a marker query) | 21:19 |
clayg | but then for "backward" from the last page you'd have to do a reverse+marker query | 21:19 |
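(A minimal sketch of the reverse+marker navigation clayg describes, using python-swiftclient; the connection details and container name are placeholders, and since `reverse` isn't a named parameter on `get_container`, it's passed through `query_string`:)

```python
from swiftclient.client import Connection

# Placeholder SAIO-style credentials -- adjust for a real cluster.
conn = Connection(authurl='http://127.0.0.1:8080/auth/v1.0',
                  user='test:tester', key='testing')

PAGE = 1000

# "go to end": list in reverse order to get the last PAGE names,
# then flip them back to ascending order for display.
_, last_page = conn.get_container('web-assets', limit=PAGE,
                                  query_string='reverse=true')
last_page.reverse()

# "backward" from the last page: a reverse listing with marker set to
# the lowest name currently shown returns the PAGE names before it.
_, prev_page = conn.get_container('web-assets', limit=PAGE,
                                  marker=last_page[0]['name'],
                                  query_string='reverse=true')
prev_page.reverse()
```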
clayg | I'm curious if this kind of navigation is really all that useful? | 21:20 |
clayg | If you're doing "prefix+delim" queries to display "subfolders" that should get you the first 10K top level folders | 21:21 |
clayg | beyond that, it's not obvious they're going to want to look through them 1K at a time? 🤷‍♂️ | 21:21 |
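(And a companion sketch of the prefix+delim query clayg mentions, reusing the placeholder connection above; with a delimiter, everything under a "folder" rolls up into a single `subdir` entry:)

```python
# Top-level "folders" and loose objects in one request.
_, listing = conn.get_container('web-assets', delimiter='/')
for entry in listing:
    if 'subdir' in entry:
        print('folder:', entry['subdir'])   # e.g. 'photos/'
    else:
        print('object:', entry['name'])

# Drill into one "folder": constrain the listing with a prefix.
_, listing = conn.get_container('web-assets', prefix='photos/',
                                delimiter='/')
```

This is also why a container HEAD's object count can't drive pagination here: the count includes every object under every prefix, while a delimiter listing collapses each folder to one row.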
clayg | I have auto_shard = true and shard_container_threshold = 50 | 21:27 |
clayg | why is my container with 1K objects not sharding? | 21:27 |
clayg | I'm running -sharder and -replicator in the background on my saio | 21:27 |
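(For later readers: the settings clayg quotes live in the `[container-sharder]` section of the container-server config. A sketch with his values; the file path is SAIO-style and illustrative:)

```ini
# /etc/swift/container-server/1.conf (illustrative SAIO path)
[container-sharder]
auto_shard = true
# default is 1000000; lowered so a 1K-object test container qualifies
shard_container_threshold = 50
```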
clayg | it's saying: container-sharder-6041: Since Thu Jan 2 21:27:10 2020 visited - attempted:0 success:0 failure:0 skipped:1 completed | 21:27 |
clayg | the container shows up in recon drops | 21:28 |
clayg | even, the recon drop for node_index: 1 | 21:30 |
clayg | maybe with auto turned on ... it still wants me to tell the database to enable sharding? and *then* it will automatically find and replicate shard ranges? | 21:31 |
clayg | post as an admin I guess 🤔 | 21:37 |
clayg | I HAVE SO MANY DBs NOW! | 21:38 |
clayg | it created all the shard range dbs, but got hung after cleaving only two of them 😠 | 21:44 |
clayg | Audit failed for root /srv/node2/sdb2/containers/42/<...>.db (AUTH_test/100%25beef): missing range(s): obj0249- | 21:50 |
clayg | 👎 | 21:50 |
ormandj | clayg: thanks, will sit and think on that for a while. i'm not the ui guy, for good reason ;) | 21:51 |
clayg | it's true tho, none of the replicas have any shard ranges after Name: .shards_AUTH_test/100%beef-a849a64896e6e27dfd4ed768ffa7bd2f-1578001061.04390-9 | 21:53 |
clayg | lower: 'obj0224', upper: 'obj0249' | 21:53 |
clayg | which is weird because there's 1K objXXX in the container | 21:53 |
clayg | maybe i should have used the manage-shard-ranges enable instead of a POST as an admin (but that's how probetests do it!) | 21:56 |
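(The "POST as an admin" route clayg refers to works because the proxy translates an `X-Container-Sharding` header on a reseller-admin request into the sharding sysmeta; a hedged sketch with python-swiftclient, where the auth URL and credentials are placeholders and the container name is taken from the log above:)

```python
from swiftclient import client

# Reseller-admin credentials are required; these are placeholders.
url, token = client.get_auth('http://127.0.0.1:8080/auth/v1.0',
                             'admin:admin', 'admin')
client.post_container(url, token, '100%beef',
                      headers={'X-Container-Sharding': 'on'})
```

The alternative he mentions, `swift-manage-shard-ranges <container_db> enable`, sets the same sysmeta directly on a replica of the container DB.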
clayg | gah, how does auto sharding work! 😠 | 22:06 |
clayg | @timburke I almost always end up messing up sharding until I turn off auto and do it like we do in the controller, do you run with auto_shard = true !? | 22:07 |
clayg | wtf, with auto still turned on it looks like some of my shard dbs started sharding on their own!? | 22:23 |
clayg | timburke: I think https://gist.github.com/tipabu/c28fb327c0bf0c74ac418b08e0986395 looks GREAT | 22:29 |
clayg | if it's in gerrit I'd approve it - or I can respin with that added later | 22:29 |
mattoliverau | morning | 22:33 |
mattoliverau | clayg: yeah, I think in the current version of auto sharding, we wanted to still opt in on the container sharding. So you need to activate it on the root container. Then that container (including shards) will start auto sharding. | 22:39 |
mattoliverau | ideally, when we're ready, we want to just not have to enable sharding on anything. | 22:39 |
clayg | mattoliverau: thanks! | 22:40 |
mattoliverau | they might also shrink | 22:41 |
clayg | how long does a sharded shard stick around? | 22:58 |
clayg | I mean the old db, that was the shard, that cleaves itself into a sub-root - its children update the parent root directly, no? | 22:59 |
clayg | hrm... yeah the old shard doesn't receive any count updates - they go directly to the root | 23:05 |
clayg | I assumed it kind of had to stick around in case any async pendings try to send it updates (and then it could redirect same as the root) | 23:05 |
clayg | but at some point that probably starts to do more harm than good if its ranges get out of date | 23:06 |
*** Fengli has joined #openstack-swift | 23:08 | |
timburke | clayg, it's part of _audit_shard_container and based on reclaim_age: https://github.com/openstack/swift/blob/master/swift/container/sharder.py#L748-L759 | 23:37 |
clayg | hell yeah it is, that's awesome | 23:37 |
timburke | er, https://github.com/openstack/swift/blob/2.23.0/swift/container/sharder.py#L747-L758 (in case we come back to these logs in a few months or whatever) | 23:38 |
timburke | but yeah, the sharded shard should still be getting some updates from the root DB; i'm pretty sure its redirects ought to stay pretty up-to-date | 23:40 |
timburke | oh... maybe not, actually... i wonder if https://github.com/openstack/swift/blob/2.23.0/swift/container/sharder.py#L749 should be `broker.merge_shard_ranges(shard_ranges)`, with an "s"... | 23:41 |
timburke | meh. this is why we have the updater notice if it gets a bunch of redirects and just go back to the root | 23:42 |