21:00:22 <timburke> #startmeeting swift
21:00:22 <opendevmeet> Meeting started Wed Aug 18 21:00:22 2021 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:22 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:22 <opendevmeet> The meeting name has been set to 'swift'
21:00:29 <timburke> who's here for the swift meeting?
21:00:52 <kota> o/
21:01:40 <mattoliver> o/
21:03:20 <acoles> o/
21:04:07 <timburke> i forgot to update the agenda, sorry -- but mostly i think we just want to follow-up on the patches from last week
21:04:32 <timburke> #topic container sharding and storage policy reconciliation
21:04:54 <timburke> so we merged the first patch in the chain: https://review.opendev.org/c/openstack/swift/+/803423
21:05:50 <timburke> how's the patch to propagate the correct policy to the shards looking? still need review?
21:05:57 <timburke> https://review.opendev.org/c/openstack/swift/+/800748
21:07:11 <mattoliver> Yeah, I've rebased it. just needs some review time
21:07:23 <mattoliver> unless that happened while I slept :)
21:10:03 <timburke> i know the first patch also spun off a handful of other patches, most of which either got squashed in or already merged
21:10:42 <timburke> i think the one exception is https://review.opendev.org/c/openstack/swift/+/804696 -- acoles, is that mostly a belt & braces sort of a patch?
21:13:59 <acoles> mostly belt and braces; it seemed like future us could be surprised having to deal with an object listing body in a response with a record type of shard.
21:14:27 <timburke> 👍
21:14:45 <mattoliver> I'll check it out and review it today
21:14:47 <acoles> I was a little worried about that catching us out given the recursive nature of shard listings
21:15:01 <timburke> yeah, seems totally reasonable
21:15:05 <acoles> but so far I'm not aware of any bug as such
21:15:26 <timburke> #topic internal client bottlenecks
21:15:57 <acoles> I guess if any middleware ever started to pay attention to the record type then there could be issues
21:16:25 <timburke> i finally got around to doing some benchmarking with my MultiprocessInternalClient
21:17:59 <timburke> the context, again, was that i had a single-process, greenthreaded data mover that would get CPU bound. on a little test cluster, switching InternalClient for MultiprocessInternalClient got me a 4-5x speedup!
21:18:25 <mattoliver> nice!
21:18:51 <timburke> i even managed to do it without any swift changes, though it'll be a lot cleaner once we've got https://review.opendev.org/c/openstack/swift/+/804544 to add some allow_modify_pipeline plumbing through run_wsgi and run_server
21:19:25 <acoles> sounds promising!
21:19:54 <timburke> thanks for the reviews on that clayg! definitely helped me clean things up a bit and minimize the likelihood that anyone accidentally runs like that for a client-facing proxy
21:20:36 <clayg> @timburke thanks for flailing along with me
21:22:23 <timburke> the main process is still CPU-bound, so next up, i'm going to try fanning out that guy as well
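[editor's note: a minimal sketch of the fan-out pattern discussed above, assuming a CPU-bound loop split across worker processes; the names `process_object` and `fan_out` are illustrative and are not Swift's actual MultiprocessInternalClient API]

```python
# Hypothetical sketch of fanning a CPU-bound, single-process loop out
# across worker processes so each one gets its own interpreter and GIL.
# process_object / fan_out are made-up names for illustration only.
import multiprocessing


def process_object(name):
    # Stand-in for the real per-object work (e.g. an internal-client GET/PUT).
    return name.upper()


def fan_out(names, workers=2):
    # Each worker process takes a share of the objects; pool.map returns
    # results in the same order a single-process loop would have produced.
    with multiprocessing.Pool(processes=workers) as pool:
        return pool.map(process_object, names)
```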
21:22:43 <timburke> that's all i've got
21:22:48 <timburke> #topic open discussion
21:22:58 <timburke> what else should we bring up this week?
21:24:18 <mattoliver> I'm making progress on my request tracing POC. It now mostly has opentracing support; I hope to have that working soon.
21:24:33 <timburke> nice!
21:24:53 <mattoliver> I've also played with adding statsd account interpolation support https://review.opendev.org/c/openstack/swift/+/804948
21:25:55 <mattoliver> Which will allow someone to add accounts to statsd prefixes when it makes sense, in case larger clusters want to track account usage/datapoints more.
21:26:23 <mattoliver> wrote code and tests yesterday... hopefully test it today.
21:27:37 <mattoliver> Oh and I saw clayg has pushed and commented on the ring deserialization stuff. So thanks! I'll look at that today too!
21:29:46 <mattoliver> #link https://review.opendev.org/c/openstack/swift/+/803665
21:31:16 <timburke> the version number bump is definitely something on my mind, since i still want to allow for wider dev_ids in https://review.opendev.org/c/openstack/swift/+/761794 ;-)
21:31:34 <timburke> (and it runs into similar upgrade problems)
21:31:48 <mattoliver> oh yeah
21:31:56 <clayg> @mattoliver do you have any reason to believe someone would want to have that many metric files?  we already have trouble aggregating cluster stats across our number of servers
21:32:33 <clayg> @timburke awesome!!! ring format v2 here we come!
21:32:57 <timburke> hopefully while still being able to write out v1 rings for a while ;-)
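[editor's note: background sketch for the dev_id width discussion above, assuming the v1 ring serializes its replica-to-part-to-device table as arrays of unsigned 16-bit ints; the on-disk details here are an assumption for illustration, not a spec]

```python
# Why wider dev_ids need a format bump: a 16-bit ('H') table caps
# device ids at 65535, while a v2-style table could use 32 bits ('I').
import array

table = array.array('H', [0, 1, 2])   # v1-style: 2 bytes per dev id
try:
    table.append(70000)               # wider than 16 bits
except OverflowError:
    pass                              # v1 simply can't represent this id

wide = array.array('I', [70000])      # a 32-bit table has no such limit
```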
21:33:52 <mattoliver> @clayg good question... but something that someone like john is interested in... not sure it's exactly what we want yet... wanted to see how it came out.
21:35:10 <timburke> surely we can find a way to shard metrics across multiple collectors -- there's definitely an interest in being able to see at a glance which tenant is responsible for a sudden uptick in traffic, for example
21:35:25 <mattoliver> But currently if you don't add an {account} nothing changes to what we have today.
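[editor's note: a minimal sketch of the {account} interpolation idea described above, not the actual patch; `metric_name` and its behaviour are assumptions for illustration]

```python
# If the configured prefix template contains {account}, substitute the
# request's account into the emitted metric name; if it doesn't, the
# metric name is unchanged from today's behaviour.
def metric_name(prefix_template, metric, account=None):
    prefix = prefix_template.replace('{account}', account or 'unknown')
    return '%s.%s' % (prefix, metric)


print(metric_name('proxy-server', 'object.GET.200'))
# -> proxy-server.object.GET.200
print(metric_name('proxy-server.{account}', 'object.GET.200', 'AUTH_test'))
# -> proxy-server.AUTH_test.object.GET.200
```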
21:36:10 <timburke> seems about like our usual policy: give ops enough rope to hang themselves :P
21:36:43 <mattoliver> lol, that's what I do... I'm a software engineer/hangman :P
21:37:24 <mattoliver> I'll mark it as WIP in any case because no doubt it needs more discussion and testing :)
21:39:02 <timburke> all right, seems like we're about done
21:39:13 <timburke> thank you all for coming, and thank you for working on swift!
21:39:23 <clayg> it's possible the way the statsd exporter does metric collection (or how prometheus stores it) makes this exactly what they need/want - but I don't think i'd want that if i was dumping into graphite
21:39:29 <timburke> don't forget to put topics on the PTG etherpad
21:39:48 <timburke> #link https://etherpad.opendev.org/p/swift-ptg-yoga
21:39:55 <timburke> #endmeeting