mattoliverau | OK, so looking at the ethercalc (booking schedule), there are more 9pm - 11pm UTC slots (our meeting time plus the hour after) available than in the 1pm - 5pm (UTC) blocks. Seems most people want to meet then, which is also where I was looking. | 03:55 |
---|---|---|
mattoliverau | There are still some on Monday, because that was supposed to be for SIGs and horizontal projects. But maybe we could take a few hours then, and then 1 or more 2-hour blocks around our meeting time. | 03:57 |
mattoliverau | Though I know the latter isn't that great for Europe (and it'll be early morning APAC), at least it's something we're more used to | 03:57 |
mattoliverau | Maybe I could pick a mix, while trying to keep them all in the same room over the week: | 04:12 |
mattoliverau | ROOM: Liberty | 04:12 |
mattoliverau | Mon 14 - 16 UTC (2 hours) | 04:12 |
mattoliverau | Tue 13 - 14 UTC (1 hour) | 04:12 |
mattoliverau | Wed 21 - 23 UTC (2 hours) (normal meeting time anyway + an hour) | 04:12 |
mattoliverau | Thu 21 - 23 UTC (2 hours) | 04:12 |
mattoliverau | There is room to extend Monday to more than 2 hours, say if we want time for a planning session. But maybe we can do that in the 2-hour allotment anyway. Not sure we really need an hour for each topic. | 04:13 |
mattoliverau | There was only 1 hour free on Tuesday; maybe we could make that an 'ops feedback session' as it's a better time. And if no one turns up we can just continue our discussions? | 04:14 |
mattoliverau | I might put ^ in the ethercalc (we just put the name Swift; we can decide what to talk about when, ourselves). | 04:15 |
mattoliverau | And I think they mostly match what people said they could do (or could begrudgingly do) in the poll. But unfortunately we're coming to this very late, so we don't really have too many options. | 04:16 |
mattoliverau | OK, I've put those in | 04:29 |
*** evrardjp has quit IRC | 04:36 | |
*** evrardjp has joined #openstack-swift | 04:36 | |
*** ccamacho has joined #openstack-swift | 07:11 | |
*** ccamacho has quit IRC | 07:12 | |
openstackgerrit | Tim Burke proposed openstack/swift master: object-updater: Ignore ENOENT when trying to unlink stale pending files https://review.opendev.org/726738 | 07:23 |
timburke | thanks mattoliverau! you're doing a great job, sorry to hand it off so late in the game | 07:26 |
*** ccamacho has joined #openstack-swift | 07:29 | |
*** rpittau|afk is now known as rpittau | 07:36 | |
*** mikecmpbll has joined #openstack-swift | 08:02 | |
*** dtantsur|afk is now known as dtantsur | 08:04 | |
*** kukacz_ has joined #openstack-swift | 08:11 | |
*** mattoliverau_ has joined #openstack-swift | 08:15 | |
*** ChanServ sets mode: +v mattoliverau_ | 08:15 | |
*** kukacz has quit IRC | 08:15 | |
*** mattoliverau has quit IRC | 08:15 | |
*** mahatic has quit IRC | 08:19 | |
*** mikecmpbll has quit IRC | 09:36 | |
*** mikecmpbll has joined #openstack-swift | 09:37 | |
*** rpittau is now known as rpittau|bbl | 10:15 | |
*** rpittau|bbl is now known as rpittau | 11:58 | |
*** mahatic has joined #openstack-swift | 12:09 | |
*** ChanServ sets mode: +v mahatic | 12:09 | |
rledisez | I've been deploying with sharding recently. One way to find the containers that need to be sharded is to look at the container.recon file, of course. The other is to search for the biggest partitions. I found many partitions that were tens of GB, but with only a few objects (like 1500). I ran a VACUUM on them and it saved, well, tens of GB of space. Did we ever consider doing some kind of auto-VACUUM? I don't know, the auditor maybe? | 12:16 |
rledisez | s/many partitions/many databases/ | 12:17 |
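A minimal sketch of what an auditor-style bloat check might look like, assuming nothing beyond stock SQLite: it uses the page_size and freelist_count pragmas to estimate how much space a VACUUM would reclaim. The function name and the command-line wrapper are illustrative, not part of Swift.

```python
import sqlite3
import sys

def reclaimable_bytes(db_path):
    """Estimate how many bytes a VACUUM of this db would free."""
    conn = sqlite3.connect(db_path)
    try:
        page_size = conn.execute("PRAGMA page_size").fetchone()[0]
        free_pages = conn.execute("PRAGMA freelist_count").fetchone()[0]
        return page_size * free_pages
    finally:
        conn.close()

if __name__ == "__main__":
    # e.g. python bloat_check.py /srv/node/*/containers/*/*/*/*.db
    for path in sys.argv[1:]:
        print(path, reclaimable_bytes(path), "bytes reclaimable")
```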
*** tkajinam has quit IRC | 12:31 | |
DHE | speaking as a regular user, doing so would lock the database pretty hard and the replica in question would be effectively offline for the procedure. so if nothing else you need to make sure you stagger those out properly | 12:48 |
rledisez | Sure. Actually, I lock the file, vacuum into a temporary file (it's way faster) and then move it back to the original place and unlock. It's pretty fast (a few seconds max). I didn't have to try it on db files with a lot of objects (I prefer to shard those first) | 13:18 |
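A rough sketch of the copy-and-swap approach described above, assuming SQLite >= 3.27 for VACUUM INTO; the on-disk lock rledisez takes around the whole operation is left out here, and the function name is made up.

```python
import os
import sqlite3

def compact_db(db_path):
    """Write a compacted copy next to db_path, then swap it into place."""
    tmp_path = db_path + '.vacuum.tmp'
    # isolation_level=None keeps the connection in autocommit mode so the
    # VACUUM statement isn't wrapped in an implicit transaction.
    conn = sqlite3.connect(db_path, isolation_level=None)
    try:
        # VACUUM INTO writes the compacted copy without rewriting the
        # original file in place, which is why it is noticeably faster.
        conn.execute("VACUUM INTO ?", (tmp_path,))
    finally:
        conn.close()
    # Atomically replace the original with the compacted copy.
    os.rename(tmp_path, db_path)
```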
*** dtantsur is now known as dtantsur|brb | 14:17 | |
*** dtantsur|brb is now known as dtantsur | 14:59 | |
-openstackstatus- NOTICE: Our CI mirrors in OVH BHS1 and GRA1 regions were offline between 12:55 and 14:35 UTC, any failures there due to unreachable mirrors can safely be rechecked | 15:09 | |
*** mikecmpbll has quit IRC | 15:46 | |
*** mikecmpbll has joined #openstack-swift | 15:48 | |
*** gyee has joined #openstack-swift | 16:04 | |
*** rpittau is now known as rpittau|afk | 16:09 | |
*** ianychoi_ is now known as ianychoi | 16:09 | |
*** dtantsur is now known as dtantsur|afk | 16:19 | |
DHE | rledisez: I meant lock it in a way that only 1 host at a time could possibly be vacuuming its databases. Thus only 1 server would appear to be down/hung at a time, which largely preserves swift's expectations for quorum, etc | 16:29 |
timburke | good morning | 16:35 |
*** evrardjp has quit IRC | 16:36 | |
*** evrardjp has joined #openstack-swift | 16:36 | |
timburke | rledisez, i seem to remember some experiments involving vacuuming... i don't quite remember what findings fell out of them, though | 16:37 |
*** zaitcev has joined #openstack-swift | 16:56 | |
*** ChanServ sets mode: +v zaitcev | 16:56 | |
DHE | I can see the benefits, I'm just worried that an extended lock on a database could be unhealthy for swift if it happens on multiple hosts at once | 16:59 |
openstackgerrit | Clay Gerrard proposed openstack/swift master: updater: Shuffle suffixes so we don't keep hitting the same failures https://review.opendev.org/726570 | 17:23 |
openstackgerrit | Tim Burke proposed openstack/swift master: updater: Shuffle suffixes so we don't keep hitting the same failures https://review.opendev.org/726570 | 17:33 |
*** viks____ has quit IRC | 18:55 | |
*** ccamacho has quit IRC | 19:25 | |
*** mikecmpbll has quit IRC | 20:17 | |
*** mikecmpbll has joined #openstack-swift | 20:21 | |
mattoliverau_ | I always wondered if we should attempt a vacuum before or during an rsync_then_merge. It'll mean less data to send if we do it before rsyncing to the node... or after sending and before the merge, as the rsynced copy lives safely in tmp. I thought I wrote some code for this at some point but it needed testing. I can search for it; it wasn't too much code. I think it was the former version, to save bandwidth. | 21:09 |
mattoliverau_ | *or just in the plain rsync case, when the remote db doesn't exist (i.e. after rebalances) | 21:10 |
mattoliverau_ | Oh and this is in the container replicator (if that wasn't apparent), I'm not quite awake yet so not sure any of that made sense :p | 21:11 |
mattoliverau_ | Or rather the db_replicator so both accounts and containers get it | 21:14 |
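A hypothetical sketch of the "vacuum before rsyncing" variant: compact the local db into a temporary copy and ship that instead, so fewer bytes cross the wire. The function name, rsync invocation, and temp path are invented for illustration; this is not the actual db_replicator code.

```python
import os
import sqlite3
import subprocess

def rsync_compacted(db_path, remote_target):
    """Rsync a compacted copy of db_path instead of the original file."""
    tmp_path = db_path + '.compact'
    conn = sqlite3.connect(db_path, isolation_level=None)
    try:
        conn.execute("VACUUM INTO ?", (tmp_path,))  # needs SQLite >= 3.27
    finally:
        conn.close()
    try:
        # The receiving side would still land the file in its tmp dir and
        # merge it as usual, so nothing changes remotely.
        subprocess.check_call(['rsync', tmp_path, remote_target])
    finally:
        os.unlink(tmp_path)
```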
timburke | 600M pendings cleared in 4 days! not too shabby! | 22:52 |
*** tkajinam has joined #openstack-swift | 22:55 | |
mattoliverau_ | nice | 23:18 |
*** mattoliverau_ is now known as mattoliverau | 23:18 | |
DHE | I was thinking that if there were some way to do centralized locking for the cluster, a host could take the lock, spot check that its replicas were up, perform a full vacuum of all dbs on all devices, then release the lock | 23:31 |
DHE | just to try to ensure availability, as a host doing a vacuum would effectively be down | 23:31 |
DHE | something like that | 23:32 |
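One way to picture that idea (purely illustrative, not a Swift feature): use memcached's add() as a best-effort cluster-wide advisory lock so only one host runs its vacuum pass at a time. The key name, TTL, and client choice are all assumptions.

```python
import socket
import memcache  # python-memcached

LOCK_KEY = 'swift-vacuum-lock'
LOCK_TTL = 3600  # seconds; long enough for one host's full pass

def run_exclusive_vacuum(mc_servers, vacuum_all_local_dbs):
    """Run vacuum_all_local_dbs() only if no other host holds the lock."""
    mc = memcache.Client(mc_servers)
    # add() succeeds only if the key doesn't already exist, giving a
    # simple (best-effort) mutual exclusion across hosts.
    if not mc.add(LOCK_KEY, socket.gethostname(), time=LOCK_TTL):
        return False  # another host is vacuuming; try again later
    try:
        vacuum_all_local_dbs()
    finally:
        mc.delete(LOCK_KEY)
    return True
```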