21:01:17 #startmeeting swift
21:01:17 Meeting started Wed May 4 21:01:17 2022 UTC and is due to finish in 60 minutes. The chair is timburke_. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:17 The meeting name has been set to 'swift'
21:01:23 o/
21:01:24 who's here for the swift meeting?
21:01:53 o/
21:03:07 as usual, the agenda's at
21:03:09 #link https://wiki.openstack.org/wiki/Meetings/Swift
21:04:23 but the overall thing i've noticed is: we've got three or four major threads we've been pulling on, and i feel like we need to actually commit to getting some landed or otherwise resolved
21:05:46 i think i'll do things a little out of order
21:06:04 #topic eventlet and locking
21:06:40 a quick overview of why we're even looking at this, though i think both acoles and mattoliver are in the loop already
21:07:17 it all started with wanting to remove some import side-effects with https://review.opendev.org/c/openstack/swift/+/457110 -- specifically, to stop monkey-patching as part of import
21:08:10 the more we dug into it, the more we started to freak out at how broken eventlet monkey-patching of locks is on py3 (see https://github.com/eventlet/eventlet/issues/546)
21:09:32 which led to much hand-wringing and staring at code for a while, and a realization that all object replicators (for example) serialize on logging since the PipeMutex works across processes
21:09:50 Hah!
21:10:47 until we decided that maybe we *don't* need those logging locks *at all*! so we're going to see how https://review.opendev.org/c/openstack/swift/+/840232 behaves and whether it seems to garble our logs
21:11:48 🤞
21:12:58 note that the fact that the PipeMutex works across processes *also* presents some risk of deadlocks -- if a worker dies or is killed while holding the ThreadSafeSysLogHandler's lock, everybody in the process group deadlocks
21:16:01 so, hopefully that all works out, we drop the locking *entirely* in ThreadSafeSysLogHandler, and hope that there aren't any other CPython RLocks that might actually matter. sounds like mattoliver is planning on doing some load testing to verify our assumptions about how UDP logging (whether via the network or UDS) works
21:16:51 Yup, will load test it some today. See if I can break it
21:18:20 nice - try your hardest @mattoliver :)
21:18:26 next up
21:18:32 #topic ring v2
21:18:34 I assume one log message is one large UDP datagram. The protocol does the segmentation into packets for you, and it will not deliver a partially assembled datagram. I think.
21:18:57 Alistair Coles proposed openstack/swift master: backend ratelimit: support per-method rate limits  https://review.opendev.org/c/openstack/swift/+/840542
21:19:00 zaitcev, yeah, that was my understanding too
21:19:16 and if it's via a domain socket, even better
21:20:38 we've talked about ring v2 for a couple PTGs now, we've tried implementing it in a couple different patch chains -- but we haven't actually landed much (any?) code for it
21:22:29 so my two questions are: does the current implementation seem like the right direction? and if so, can we land some of it before *all* of the work's ready?
21:23:43 I think the current index approach is correct. And I've been able to extend it to include the builder in my WIP chain. And it's what we discussed at PTGs. So I think it's a win
21:24:37 The new RingReader/RingWriter approach encapsulates it really well. Kudos timburke_
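(For readers skimming the log: the "index approach" above refers to the ring v2 format under review at https://review.opendev.org/c/openstack/swift/+/834261. As a rough illustration only -- not the format in that patch -- a section-indexed ring file could look something like the sketch below: named sections written sequentially, plus a small trailing index so a reader can seek straight to the section it needs instead of parsing the whole file. The SketchRingWriter/SketchRingReader names, the JSON index, and the 8-byte footer are all assumptions invented for this example.)

```python
# Illustrative sketch only: named sections plus a trailing index, the general
# shape of an "index approach" ring file. Details here are made up.
import json
import struct


class SketchRingWriter:
    """Write named sections, then finish with a JSON index and a footer."""

    def __init__(self, fp):
        self.fp = fp
        self.index = {}  # section name -> (offset, length)

    def write_section(self, name, payload):
        offset = self.fp.tell()
        self.fp.write(payload)
        self.index[name] = (offset, len(payload))

    def close(self):
        index_blob = json.dumps(self.index).encode('ascii')
        index_offset = self.fp.tell()
        self.fp.write(index_blob)
        # footer: offset of the index, so a reader can find it by seeking
        # relative to the end of the file
        self.fp.write(struct.pack('!Q', index_offset))


class SketchRingReader:
    """Locate the index via the footer, then load sections on demand."""

    def __init__(self, fp):
        self.fp = fp
        self.fp.seek(-8, 2)  # footer is the last 8 bytes
        (index_offset,) = struct.unpack('!Q', self.fp.read(8))
        self.fp.seek(index_offset)
        self.index = json.loads(self.fp.read()[:-8])

    def read_section(self, name):
        offset, length = self.index[name]
        self.fp.seek(offset)
        return self.fp.read(length)


if __name__ == '__main__':
    with open('/tmp/sketch.ring', 'wb') as fp:
        writer = SketchRingWriter(fp)
        writer.write_section('devs', json.dumps([{'id': 0}]).encode('ascii'))
        writer.write_section('assignments', b'\x00\x01\x02')
        writer.close()
    with open('/tmp/sketch.ring', 'rb') as fp:
        reader = SketchRingReader(fp)
        print(reader.read_section('devs'))
```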
21:26:49 So for me I'm +1 on thinking it's the right direction, but I might be biased because I've been looking at it and been more closely involved with it than most.
21:27:07 my feeling, too :-)
21:28:54 so should we commit to reviewing and landing https://review.opendev.org/c/openstack/swift/+/834261? mattoliver, do you have any concerns that some of the follow-ups should get squashed in before landing?
21:32:24 Nah I think it's a great start. And so long as we double check before the next release, I'm fine with it.
21:32:44 i.e. if any follow-ups are required.
21:33:14 There was an issue in the RingWriter I think but you fixed that, so cool.
21:34:24 all right! sounds good
21:34:28 next up
21:34:31 #topic backend rate-limiting
21:35:12 acoles, i feel like your chain's getting a little long :-)
21:35:32 yeah I need to rate limit myself
21:36:10 some may get squashed but I am working little by little because we want to get *something* into prod soon
21:36:29 lol
21:36:32 the first couple of patches are refactoring and minor fixups
21:37:59 the goal is to provide a 'backstop' ratelimiter that will allow backend servers to shed load rather than have huge queues build up
21:39:45 anything we should be aware of as we review it? it looks like there may be some subtle interactions with proxy-server error limiting
21:43:05 yes, so this patch could use some careful review https://review.opendev.org/c/openstack/swift/+/839088 - on master any 5xx response will cause the proxy to error limit (or increment the error limit counter), but we don't want that to happen when the backend is rate limiting (that would be an unfortunate amplification of the rate limiting)
21:43:51 so the patch adds special handling for 529 response codes (at the PTG IIRC we decided 529 could be used as 'too many backend requests')
21:44:31 oh yeah, that could be bad, don't want to rate limit into being error limited
21:44:34 I chose to NOT log these in the proxy since there could be a lot and they won't stop because the proxy won't error limit the node
21:46:06 seems reasonable -- the 529s will still show up in log lines like `Object returning 503 for [...]`, though, yeah?
21:47:43 yes I checked that
21:48:53 https://review.opendev.org/c/openstack/swift/+/840531 seems like a great opportunity to allow for config reloading similar to what we do with rings
21:50:02 haha, yes that's exactly where I am heading with that! actually, clayg's idea
21:50:06 oh yeah, that'll be cool.
21:51:06 but, one thing at a time, I think the finer-grained per-method ratelimiting may be higher priority?? we'll see
21:51:19 sounds good
21:51:45 ok, we're getting toward the end of our time -- maybe i'll skip the memcache stuff for now, though i'd encourage people to take a look
21:51:51 #topic open discussion
21:52:01 what else should we bring up this week?
21:55:27 not really, but for those who use it, I've been updating VSAIO to use jammy because I got sick of deprecation warnings and older versions of things: https://github.com/NVIDIA/vagrant-swift-all-in-one/pull/126
21:55:28 *nothing really
21:56:12 timburke_: you were interested in getting people to look at signal handling, so https://review.opendev.org/c/openstack/swift/+/840154 but I'll take a better look today.
21:56:13 🎉 edge of technology, man!
21:56:15 we've got jammy ci nodes coming up now. There have been a few minor bumps, but they should mostly be working
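(A minimal sketch of the 'backstop' backend ratelimiting idea discussed above, not the code in the patches under review: a token bucket that sheds load by answering 529, plus the proxy-side rule that 529 should not feed the error-limiting counter the way other 5xx responses do. The TokenBucket class, the per-method limits, and the helper names are illustrative assumptions.)

```python
# Sketch of the concept only -- not the backend ratelimit middleware itself.
import time


class TokenBucket:
    """Allow roughly `rate` requests per second with a small burst allowance."""

    def __init__(self, rate, burst=1.0):
        self.rate = float(rate)
        self.capacity = self.rate * burst
        self.tokens = self.capacity
        self.last = time.time()

    def allow(self):
        now = time.time()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# hypothetical per-method limits, echoing the per-method ratelimit patch
buckets = {'PUT': TokenBucket(rate=50), 'DELETE': TokenBucket(rate=25)}


def backend_check(method):
    """Return a status code for an incoming backend request."""
    bucket = buckets.get(method)
    if bucket is not None and not bucket.allow():
        return 529   # shed load instead of letting the request queue grow
    return 200       # pretend the real handler ran


def proxy_should_error_limit(status):
    """529 means the backend is rate limiting, not failing -- don't count it."""
    return status >= 500 and status != 529
```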
21:56:16 Personally I don't believe in that limiting thing unless it's global.
21:56:39 An election, a master, and an algorithm that considers a snapshot of load.
21:56:49 Buuuut
21:57:16 But it's Alistair so it's obviously good, so I dunno.
21:57:55 cool thanks clarkb !
21:58:09 Maybe it will be some kind of thing that can dampen itself and never amplify the congestion.
21:58:51 zaitcev: it is admittedly a fairly blunt tool
22:00:08 mostly depends on the client response to being told to back off, i suppose -- i have a hard time believing it'd be worse than what we've got now though
22:00:47 (which is to say, backend servers backing up so much that proxies have already timed out by the time they've read the request that was sent)
22:01:28 all right -- i've kept you all long enough
22:01:37 thank you all for coming, and thank you for working on swift!
22:01:42 #endmeeting
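(A closing illustration of the UDP-logging point from the eventlet-and-locking discussion earlier in the meeting: with syslog over UDP or a datagram domain socket, each formatted record goes out as a single datagram that the kernel delivers whole or not at all, so concurrent writers should not interleave partial lines even without a handler-level mutex. The handlers below are stock Python logging; whether Swift can safely drop its extra lock on top of this is exactly what the load testing is meant to confirm.)

```python
# One log record -> one datagram; no explicit handler lock in this sketch.
import logging
import logging.handlers
import socket

logger = logging.getLogger('swift-sketch')
logger.setLevel(logging.INFO)

# Option 1: syslog over the network -- each record is a single UDP datagram
udp_handler = logging.handlers.SysLogHandler(
    address=('localhost', 514), socktype=socket.SOCK_DGRAM)

# Option 2: the local syslog socket (a UNIX domain socket; path varies by distro)
uds_handler = logging.handlers.SysLogHandler(address='/dev/log')

# pick whichever transport applies in a given deployment
logger.addHandler(udp_handler)
logger.info('object-replicator: one record, one datagram')
```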