21:00:31 #startmeeting swift
21:00:31 Meeting started Wed Aug 4 21:00:31 2021 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:31 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:31 The meeting name has been set to 'swift'
21:00:40 who's here for the swift meeting?
21:00:47 o/
21:00:54 o/
21:01:02 o/
21:01:31 as usual, the agenda's at
21:01:33 #link https://wiki.openstack.org/wiki/Meetings/Swift
21:01:45 sorry, i only *just* got around to updating it
21:01:56 #topic PTG
21:02:26 just a reminder to start populating the etherpad with topics
21:02:28 #link https://etherpad.opendev.org/p/swift-ptg-yoga
21:03:03 i did get around to booking rooms, but i still need to add them to the etherpad, too
21:03:49 i decided to split times again, to try to make sure everyone has some time where they're likely to be well-rested ;-)
21:04:16 any questions on the PTG?
21:05:18 Not yet, let's fill out the etherpad and have some good discussions :)
21:05:32 agreed :-)
21:06:09 #topic expirer can delete data with inconsistent x-delete-at values
21:07:02 so i've got some users that are using the expirer pretty heavily, and i *think* i've seen an old bug
21:07:10 #link https://bugs.launchpad.net/swift/+bug/1182628
21:09:19 basically, there's a POST at t1 to mark an object to expire at t5, then another POST at t2 to have it expire at t10. if replication doesn't get rid of all the t1 .metas, and the t5 expirer queue entry is still hanging around, we'll delete data despite getting back 412s
21:10:26 on top of that, since the data got deleted, the t10 expiration fails with 412s and hangs around until a reclaim_age passes
21:10:30 timburke: do you have any evidence we've hit expiry times getting [412, 412, 204] because it expired before the updated POST was fully replicated?
21:11:13 Ugh
21:11:17 any chance we could reap the t10 delete row based on the x-timestamp coming off the 404/412 response?
21:11:46 I can see telling them not to count on POST doing the job, but the internal inconsistency is just bad no matter what.
21:11:53 also I think there's an inline attempt (maybe even going to async) to clean up the t5 if the POST at t2 happens to notice it
21:12:48 i've seen the .ts as of somewhere around the start of July, and the expirer kicking back 412s starting around mid-July. i haven't dug into the logs enough to see *exactly* what happened when, but knowing my users it seems likely that they wanted the later expiration time
21:14:09 clayg, there is, but only if the t1 .meta is present on whichever servers get the t2 POST
21:14:42 👍
21:16:35 i *think* it'd be reasonable to reap the t10 queue entry based on the t5 tombstone being newer than the t2 enqueue-time. but it also seems preferable to avoid the delete until we actually know that we want to delete it
21:17:39 'cause i *also* think it'd be reasonable to reap the t5 queue entry based on a 412 that indicates the presence of a t2 .meta (since it's greater than the t1 enqueue-time)
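[Editor's aside: a minimal sketch of the two reaping rules proposed above. This is illustrative only, not the actual expirer code; the function name and arguments are made up, though swift.common.utils.Timestamp and the X-Backend-Timestamp response header are real.]

```python
# A sketch (assumed names, not swift's expirer code) of the reaping
# rules from the discussion above.
from swift.common.utils import Timestamp


def should_reap_queue_entry(enqueue_ts, backend_ts):
    """Decide whether a failed expirer DELETE row is safe to remove.

    enqueue_ts: when the x-delete-at row was written to the queue
    backend_ts: the X-Backend-Timestamp on the 404/412 response

    A 404 whose tombstone is newer than the enqueue time (the t5
    tombstone vs. the t10 row's t2 enqueue time) means the object is
    already gone; a 412 whose .meta is newer than the enqueue time
    (the t2 .meta vs. the t5 row's t1 enqueue time) means a later
    POST superseded this row. Either way, reap the row rather than
    retry it until a reclaim_age passes.
    """
    return Timestamp(backend_ts) > Timestamp(enqueue_ts)
```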
21:18:05 anyway, i've got a failing probe test at
21:18:08 #link https://review.opendev.org/c/openstack/swift/+/803406
21:19:17 Great start
21:20:06 it gets a pretty-good split brain going on, with 4 replicas of an object, two with one delete-at time, two with another, and queue entries for both
21:21:18 noice
21:21:40 love the idea of getting those queue entries cleaned up if we can do it in a way that makes sense 👍
21:21:41 i'm also starting to work on a fix for it that makes DELETE with X-If-Delete-At look a lot like a PUT with If-None-Match -- but it seems like it may get hairy. will keep y'all updated
21:22:02 timburke: just do the HEAD and DELETE to start - then make it fancy
21:22:08 "someday"
21:22:17 (also consider maybe not making it fancy if we can avoid it)
21:23:29 it'd have to be a HEAD with X-Newest, though -- which seems like a pretty sizable request amplification for the expirer :-(
21:24:14 it doesn't *have* to be x-newest - you could just use direct client and get all the primaries in concert
21:24:32 the idea is you can't make the delete unless everyone already has a matching x-delete-at
21:26:07 i'll think about it -- seems like i'd have to reinvent a decent bit of best_response, though
21:26:30 next up
21:26:49 #topic last primary table
21:27:21 this came up at the last PTG, and i already see it's a topic for the next one (thanks mattoliver!)
21:27:38 mattoliver even already put a patch together
21:27:41 #link https://review.opendev.org/c/openstack/swift/+/790550
21:28:47 Thanks for the reviews lately timburke
21:28:57 i just wanted to say that i'm excited about this idea -- it seems like the sort of thing that can improve both client-observable behaviors and replication
21:29:16 Very interesting. That for_read, is it something the proxy can use too?
21:29:29 Right, I see. Both.
21:30:08 So, where's the catch? How big is that array for an 18-bit ring with 260,000 partitions?
21:30:30 In my limited testing, and as you can see in its follow-ups, it makes post-rebalance reconstruction faster and less CPU-bound.
21:30:32 basically, it's an extra replica's worth of storage
21:30:45 /ram
21:30:46 Yeah, so your ring grows an extra replica, basically.
21:31:45 interesting
21:32:16 and with proxy plumbing, you can rebalance 2-replica (or even 1-replica!) policies without so much risk of unavailability
21:34:32 It also means that post-rebalance we can take last primaries into account and get built-in handoffs first (or at least for last primaries). Which is why I'm playing with the reconstructor as a follow-up.
21:35:02 👍
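[Editor's aside: zaitcev's sizing question works out to roughly half a megabyte. Swift rings store one array('H') of partition-to-device ids per replica, so a last-primary table costs about one more such row; the back-of-the-envelope sketch below uses illustrative names, not code from the patch.]

```python
# Rough cost of a last-primary table (assumed layout: one extra
# partition -> device id row, like the ring's replica2part2dev rows).
from array import array

part_power = 18
last_primary = array('H', [0] * 2 ** part_power)  # 2 bytes per partition
size_kib = last_primary.itemsize * len(last_primary) // 1024
print('%d partitions -> ~%d KiB of extra ring RAM'
      % (len(last_primary), size_kib))
# 262144 partitions -> ~512 KiB of extra ring RAM
```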
21:35:12 #topic open discussion
21:35:24 those were the main things i wanted to bring up; what else should we talk about this week?
21:36:51 There was that ML thread about the x-delete-at in the past breaking EC rebalance because of old swift bugs.
21:37:28 Moral: upgrade!
21:41:17 zaitcev, looks like you took the DNM off https://review.opendev.org/c/openstack/swift/+/802138 -- want me to find some time to review it?
21:42:47 Just an update: I had a meeting with our SREs re the request-tracing patches; they suggested some improvements they'd find useful. next I plan to do some benchmarks to see if it affects anything before I move forward on it.
21:42:48 mattoliver and acoles, how are we feeling about the shard/storage-policy-index patch? https://review.opendev.org/c/openstack/swift/+/800748
21:43:09 nice!
21:44:14 timburke: I think it should be okay.
21:45:00 timburke: I need to look at the policy index patch again since mattoliver last updated it
21:45:03 timburke: Some of the "zero" tests belong in system reader. I was thinking about factoring them out, but haven't done that. It's not hurting anything except my sense of symmetry.
21:45:11 I moved the migration into the replicator, so there is a bit of new code there, but it means we can migrate shards' SPI before enqueuing to the reconciler (though it still happens in the sharder too). So take a look and we can decide where to do it... or maybe both.
21:46:08 I wish people looked at this though... it's the first step of the mitigation for stuck updates: https://review.opendev.org/c/openstack/swift/+/743797
21:48:16 I'll take a look at 790550 and 803406.
21:49:18 There is then a follow-up to the shard SPI migration, and that's to get shard containers to respond to GETs with the policy supplied to them (if one is supplied), so a longer-tail SPI migration doesn't affect root container GETs. A shard is an extension of its root, and happily takes objects with a different SPI (supplied by the root), so it makes sense it should return them on GET too.
21:50:01 mattoliver: does that need to be a follow-up? does it depend on the other change?
21:50:33 No it doesn't... but I wanted to test it with the probe test clayg wrote :)
21:51:16 oic
21:51:29 So I can move it off there :) the follow-up still needs tests, so I'll do that today. I could always steal clayg's probe test and change it for this case :)
21:51:40 I'll try to catch up on those patches tomorrow
21:51:43 It was just useful while writing :)
21:52:04 clayg has a habit of being useful :)
21:53:10 😊
21:53:11 zaitcev, i'll take a look at https://review.opendev.org/c/openstack/swift/+/743797 -- you're probably right that we're in no worse a situation than we already were. i might push up a follow-up to quarantine dbs with hash mismatches
21:55:41 all right, we're about at time
21:55:52 thank you all for coming, and thank you for working on swift!
21:55:57 #endmeeting
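[Editor's aside: as a footnote to the shard/SPI follow-up discussed at 21:49:18, the idea can be sketched roughly as below. X-Backend-Storage-Policy-Index is the real backend header, but the function and its arguments are hypothetical, not code from the patch.]

```python
# A sketch (assumed names) of a shard container GET honoring a
# policy index supplied by the root container.
def shard_listing_policy_index(req_headers, own_policy_index):
    """Pick which storage policy index a shard's GET should list under.

    If the root supplied an index when fanning a listing out to its
    shards, honor it even mid-migration, so a long tail of
    un-migrated shard rows doesn't disappear from root container
    listings.
    """
    supplied = req_headers.get('X-Backend-Storage-Policy-Index')
    if supplied is not None:
        return int(supplied)
    return own_policy_index
```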