21:00:22 #startmeeting swift
21:00:22 Meeting started Wed Aug 18 21:00:22 2021 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:22 The meeting name has been set to 'swift'
21:00:29 who's here for the swift meeting?
21:00:52 o/
21:01:40 o/
21:03:20 o/
21:04:07 i forgot to update the agenda, sorry -- but mostly i think we just want to follow up on the patches from last week
21:04:32 #topic container sharding and storage policy reconciliation
21:04:54 so we merged the first patch in the chain: https://review.opendev.org/c/openstack/swift/+/803423
21:05:50 how's the patch to propagate the correct policy to the shards looking? still need review?
21:05:57 https://review.opendev.org/c/openstack/swift/+/800748
21:07:11 Yeah, I've rebased it. Just needs some review time
21:07:23 unless that happened while I slept :)
21:10:03 i know the first patch also spun off a handful of other patches, most of which either got squashed in or already merged
21:10:42 i think the one exception is https://review.opendev.org/c/openstack/swift/+/804696 -- acoles, is that mostly a belt & braces sort of a patch?
21:13:59 mostly belt and braces; it seemed like future us could be caught out having to deal with an object listing body in a response with a record type of shard.
21:14:27 👍
21:14:45 I'll check it out and review it today
21:14:47 I was a little worried about that catching us out given the recursive nature of shard listings
21:15:01 yeah, seems totally reasonable
21:15:05 but so far I'm not aware of any bug as such
21:15:26 #topic internal client bottlenecks
21:15:57 I guess if any middleware ever started to pay attention to the record type then there could be issues
21:16:25 i finally got around to doing some benchmarking with my MultiprocessInternalClient
21:17:59 the context, again, was that i had a single-process, greenthreaded data mover that would get CPU-bound. on a little test cluster, switching InternalClient for MultiprocessInternalClient got me a 4-5x speedup!
21:18:25 nice!
21:18:51 i even managed to do it without any swift changes, though it'll be a lot cleaner once we've got https://review.opendev.org/c/openstack/swift/+/804544 to add some allow_modify_pipeline plumbing through run_wsgi and run_server
21:19:25 sounds promising!
21:19:54 thanks for the reviews on that clayg! definitely helped me clean things up a bit and minimize the likelihood that anyone accidentally runs like that for a client-facing proxy
21:20:36 @timburke thanks for flailing along with me
21:22:23 the main process is still CPU-bound, so next up, i'm going to try fanning that guy out as well
21:22:43 that's all i've got
21:22:48 #topic open discussion
21:22:58 what else should we bring up this week?
21:24:18 I'm making progress on my request tracing POC. It now mostly has opentracing support; I hope to have that working soon.
21:24:33 nice!
21:24:53 I've also played with adding statsd account interpolation support: https://review.opendev.org/c/openstack/swift/+/804948
21:25:55 Which will allow someone to add accounts to statsd prefixes where it makes sense, in case larger clusters want to track account usage/datapoints more.
21:26:23 wrote code and tests yesterday.. hopefully I'll test it today.
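(Editor's note: a minimal sketch of the fan-out idea from the internal client discussion above. The actual MultiprocessInternalClient is not shown in the log, so the names here, `move_object` and `run_mover`, are hypothetical; the sketch only illustrates why separate worker processes let a CPU-bound data mover scale past the single-core ceiling of one greenthreaded process.)

```python
# Hypothetical sketch, not Swift's actual MultiprocessInternalClient:
# fan CPU-bound per-object work out across worker processes.
import multiprocessing


def move_object(name):
    # Placeholder for per-object work (fetch, transform, re-PUT).
    # A real data mover would drive an internal client here.
    return name.upper()


def run_mover(names, workers=4):
    # Each worker is a separate Python process with its own interpreter,
    # so CPU-bound work is no longer serialized onto one core the way it
    # is under a single greenthreaded (eventlet-style) process.
    with multiprocessing.Pool(processes=workers) as pool:
        return pool.map(move_object, names)


if __name__ == "__main__":
    print(run_mover(["obj-a", "obj-b", "obj-c"]))
```

The "main process is still CPU-bound" follow-up in the log maps onto the same pattern: the dispatcher feeding the pool can itself be replicated if it becomes the bottleneck.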
21:27:37 Oh and I saw clayg has pushed and commented on the ring deserialization stuff. So thanks! I'll look at that today too!
21:29:46 #link https://review.opendev.org/c/openstack/swift/+/803665
21:31:16 the version number bump is definitely something on my mind, since i still want to allow for wider dev_ids in https://review.opendev.org/c/openstack/swift/+/761794 ;-)
21:31:34 (and it runs into similar upgrade problems)
21:31:48 oh yeah
21:31:56 @mattoliver do you have any reason to believe someone would want to have that many metric files? we already have trouble aggregating cluster stats across our number of servers
21:32:33 @timburke awesome!!! ring format v2 here we come!
21:32:57 hopefully while still being able to write out v1 rings for a while ;-)
21:33:10 @clay
21:33:52 @clay good question... it's something someone like john is interested in.. but now sure it's exactly what we want yet.. wanted to see how it came out.
21:34:02 *not sure
21:34:16 (sorry seems I can't type this early) :P
21:35:10 surely we can find a way to shard metrics across multiple collectors -- there's definitely an interest in being able to see at a glance which tenant is responsible for a sudden uptick in traffic, for example
21:35:25 But currently, if you don't add an {account}, nothing changes from what we have today.
21:36:10 seems about like our usual policy: give ops enough rope to hang themselves :P
21:36:43 lol, that's what I do.. I'm a software engineer/hangman :P
21:37:24 I'll mark it as WIP in any case because no doubt it needs more discussion and testing :)
21:39:02 all right, seems like we're about done
21:39:13 thank you all for coming, and thank you for working on swift!
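(Editor's note: a hedged sketch of the `{account}` statsd prefix interpolation idea discussed above. The function name and template handling are illustrative assumptions, not the patch's actual code; the point it demonstrates is the opt-in behavior from the log, where a prefix without `{account}` behaves exactly as today.)

```python
# Hypothetical sketch of statsd account interpolation; not Swift's
# actual statsd code from the patch under review.
def build_metric_name(prefix_template, metric, account=None):
    # Opt-in: only a prefix that explicitly contains {account} gets the
    # account interpolated in, so existing configs emit unchanged names.
    if "{account}" in prefix_template:
        prefix = prefix_template.format(account=account or "unknown")
    else:
        prefix = prefix_template
    return "%s.%s" % (prefix, metric) if prefix else metric


# Larger clusters could break out per-account datapoints:
print(build_metric_name("proxy.{account}", "GET.timing", account="AUTH_test"))
# -> proxy.AUTH_test.GET.timing
```

The cardinality concern raised in the log applies directly: every distinct account becomes a distinct metric name, which is cheap for a statsd-exporter/Prometheus setup but expensive if each name becomes its own file in graphite.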
21:39:23 it's possible the way the statsd exporter does metric collection (or how prometheus stores it) makes this exactly what they need/want -- but I don't think i'd want that if i was dumping into graphite
21:39:29 don't forget to put topics on the PTG etherpad
21:39:48 #link https://etherpad.opendev.org/p/swift-ptg-yoga
21:39:55 #endmeeting