21:00:03 <timburke> #startmeeting swift
21:00:03 <opendevmeet> Meeting started Wed Apr 13 21:00:03 2022 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:03 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:03 <opendevmeet> The meeting name has been set to 'swift'
21:00:11 <timburke> who's here for the swift meeting?
21:00:24 <kota> hi
21:02:27 <acoles> o/
21:02:42 <timburke> you're back! i thought you'd left for the night ;-)
21:03:19 <timburke> i know clayg has a sick kid at home, and it's towards the end of his day anyway, so he might not come
21:03:33 <timburke> dunno about mattoliver
21:03:45 <timburke> but we can go ahead and start
21:04:01 <timburke> as usual, the agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:04:04 <timburke> first up
21:04:10 <timburke> #topic ptg recap
21:04:31 <timburke> thanks to everybody for coming last week! it was great seeing everyone again
21:04:40 <mattoliver> Oh o/
21:05:01 <mattoliver> (Forgot this is earlier after daylight savings ended)
21:05:33 <timburke> i feel like we had a lot of good discussions about what we're working on, including things like ring v2, sharding, and tsync
21:05:56 <acoles> ptg is always good
21:06:07 <kota> +1
21:06:42 <timburke> at the previous ptg, we came away with a bunch of action items -- hopefully the fact that we only bothered to write down three this time means we'll be more likely to do them ;-)
21:07:06 <timburke> the action items this time were:
21:07:24 <timburke> investigate the current state of the art for erasure coding
21:07:39 <timburke> get the test/s3api suite running in the gate
21:08:13 <timburke> and figure out how to tag releases in docker hub as part of the release process
21:08:54 <timburke> who would like to champion any of these?
21:09:36 <mattoliver> I can play with the docker one as I watched tdasilva do the last docker stuff at an in-person ptg.
21:10:03 <mattoliver> Not that that means I'll be much help :p
21:10:43 <timburke> 👍 let me know if/when you'd like me to put a release together to test zuul pipeline machinery
21:11:16 <timburke> i can take on the test/s3api suite; i've done similar things before for the CORS tests
21:12:03 <timburke> on the last -- *shrug* we can revisit it later
21:12:30 <timburke> next up
21:12:37 <timburke> #topic liberasurecode release
21:13:11 <timburke> i tagged 1.6.3 a few days ago!
21:13:26 <kota> nice
21:13:41 <timburke> it includes the library name suffixing that pyeclib will require to be able to build useful wheels
21:14:51 <timburke> so i guess my next question is, do we want to review and merge https://review.opendev.org/c/openstack/pyeclib/+/817498 ahead of a pyeclib 1.6.1 release?
21:15:27 <timburke> i already started compiling release notes at https://review.opendev.org/c/openstack/pyeclib/+/833471 fwiw
21:17:34 <timburke> eh, i think i'll aim to release sooner rather than later. can always do another ;-)
21:17:53 <mattoliver> Well, if you've added the other things needed to build wheels, we could add it too to make it easier to build. But either way.
21:19:42 <timburke> well, i went ahead and broke out the stable python abi changes from the rest of the wheel-building stuff, and merged the stable abi patch. i think anyone *could* make useful wheels now, so the patch is only really about having the tooling for it in-tree
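As an aside, here is a minimal, hypothetical setup.py sketch of what opting a C extension into the stable CPython ABI generally looks like; this is not the actual pyeclib change, and all names are made up:

    # hypothetical example only -- not pyeclib's setup.py
    from setuptools import setup, Extension

    setup(
        name='example_ec_ext',
        version='0.0.1',
        ext_modules=[
            Extension(
                'example_ec_ext._ext',
                sources=['src/ext.c'],
                libraries=['erasurecode'],
                # compile against the stable CPython ABI (3.6+)...
                define_macros=[('Py_LIMITED_API', '0x03060000')],
                # ...and tag the built wheel as abi3
                py_limited_api=True,
            ),
        ],
    )

Built with something like `python setup.py bdist_wheel --py-limited-api cp36`, the resulting wheel is tagged cp36-abi3 and can be installed on later CPython versions as well.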
21:20:46 <timburke> #topic improving sharded listing performance
21:21:18 <timburke> so we got a complaint about listing performance for one of our larger containers
21:22:09 <timburke> it was a prefix/delimiter query that covered a fairly wide part of the namespace, so the one client request needed to contact several hundred backend shards
21:23:24 <timburke> part of the problem was that a lot of objects had been deleted, so ~100 shards were actually *empty*
21:24:20 <timburke> once we did some manual shrinking, the situation improved a bit, but we're still talking to hundreds of shards to get a listing with <50 subdir entries
21:25:06 <timburke> (which means there's a whole bunch of shards that have *no* new information *at all*)
21:25:28 <mattoliver> So a bunch were answering the same question
21:25:36 <timburke> i took a stab at fixing that in https://review.opendev.org/c/openstack/swift/+/837397 - Skip shards that can't include any new subdir entries
21:26:44 <mattoliver> It's a good optimisation, even if it took me longer to grok than I'd like.
21:27:27 <mattoliver> But that's due to subdirs and pseudo folders and how they work.
21:27:28 <timburke> it's probably still under-tested, but if you get a chance, please take a look! there are definitely some subtle fence-post-error sorts of things to consider
21:28:16 <timburke> i should also think about whether it's got any implications for https://review.opendev.org/c/openstack/swift/+/829605 🤔
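Roughly, the idea in the patch (sketched here with made-up names, not the real code) is that once a delimiter listing has already emitted a subdir entry, any shard whose whole namespace falls under that subdir can only repeat it, so it can be skipped:

    # illustrative sketch only; shard ranges are assumed to expose
    # string-like lower/upper namespace bounds
    def shards_worth_querying(shard_ranges, last_subdir):
        """Yield only shards that could add entries beyond ``last_subdir``,
        the most recent pseudo-directory already emitted, e.g. 'photos/2021/'.
        """
        for sr in shard_ranges:
            if (last_subdir and
                    str(sr.lower).startswith(last_subdir) and
                    str(sr.upper).startswith(last_subdir)):
                # the entire shard collapses into a subdir we've already
                # reported, so contacting it can't add anything new
                continue
            yield sr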
21:29:32 <timburke> next up
21:29:38 <timburke> #topic ring v2
21:29:47 <mattoliver> I'll play with it today.
21:29:55 <timburke> mattoliver, thanks!
21:30:19 <timburke> i haven't forgotten about it! though i never did get around to pushing up that fresh patchset last week :-(
21:30:40 <timburke> just wanted to let people know that it's still on my radar
21:30:54 <timburke> #topic backend ratelimiting
21:31:15 <mattoliver> Feel free to squash any of my follow up in, ie the serialize/deserialize v2 stuff.
21:31:43 <timburke> 👍
21:31:52 <timburke> it seems like we've seen a decent bit of activity on ratelimiting; how's it going acoles?
21:33:01 <acoles> we discussed this at the ptg and questioned why it would be built into the storage server rather than a middleware, so I changed my patch to be a middleware
21:33:08 <acoles> https://review.opendev.org/c/openstack/swift/+/836046
21:33:32 <mattoliver> Nice
21:33:35 <acoles> we also discussed what an appropriate response code would be
21:34:07 <acoles> 503 could be confusing, but was assumed to be handled by all existing 'clients'
21:34:29 <mattoliver> What did you end up doing, has 529 been born?
21:34:36 <acoles> however, I noticed some unfortunate proxy log errors when i played with this in an object server
21:35:26 <acoles> so, we may need to make some proxy changes whatever response code we choose, so 529 might be ok - still wondering what to do about that
21:36:16 <mattoliver> Kk
21:36:21 <acoles> BTW the error I saw logged in proxy was "Apr 13 11:37:02 saio proxy-server: ERROR 503 Expect: 100-continue From Object Server 127.0.0.4:6040/sdb4 (txn: txe9b36cc23552497697801-006256b5de) (client_ip: 127.0.0.1)"
21:37:12 <timburke> that makes some sense -- i bet we get similar log lines on 507
21:37:28 <acoles> no, 507 is handled separately
21:38:23 <timburke> ah, yeah -- https://github.com/openstack/swift/blob/master/swift/proxy/controllers/obj.py#L481-L492
21:38:38 <acoles> something like "ERROR Insufficient Storage"
21:38:53 <timburke> all right, that's all i've got
21:39:01 <timburke> #topic open discussion
21:39:04 <acoles> so I wonder if we ought to have "ERROR ratelimited" ???
21:39:16 <timburke> i like that
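For reference, a bare-bones sketch of the sort of per-device backend ratelimiting middleware under discussion; the config option names and the 529 status are illustrative assumptions, not what the patch actually uses:

    import time


    class BackendRateLimitMiddleware(object):
        def __init__(self, app, conf):
            self.app = app
            self.max_rate = float(conf.get('requests_per_device_per_second', 0))
            self.status = conf.get('ratelimit_status',
                                   '529 Too Many Backend Requests')
            self.next_allowed = {}  # device name -> earliest next request time

        def __call__(self, env, start_response):
            # backend request paths look like /<device>/<partition>/...
            device = env.get('PATH_INFO', '/').split('/', 2)[1]
            now = time.time()
            if self.max_rate and now < self.next_allowed.get(device, 0):
                start_response(self.status, [('Content-Type', 'text/plain')])
                return [b'Backend request ratelimited\n']
            if self.max_rate:
                self.next_allowed[device] = now + 1.0 / self.max_rate
            return self.app(env, start_response)


    def filter_factory(global_conf, **local_conf):
        conf = dict(global_conf, **local_conf)

        def factory(app):
            return BackendRateLimitMiddleware(app, conf)
        return factory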
21:39:29 <timburke> what else should we talk about this week?
21:41:35 <timburke> i started playing around with systemd notify sockets more yesterday
21:41:51 <timburke> including sending STOPPING and RELOADING messages in https://review.opendev.org/c/openstack/swift/+/837633
21:42:19 <acoles> sorry, I need to drop -👋
21:42:37 <timburke> and mostly reimplementing the thing in https://review.opendev.org/c/openstack/swift/+/837641 so the swift-reload guy doesn't have to poll child processes
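The notify protocol itself is just datagrams on a unix socket; a generic example of sending those state messages (standard systemd usage, not the code in either patch):

    import os
    import socket


    def sd_notify(state):
        """Send a state string such as 'READY=1', 'RELOADING=1' or
        'STOPPING=1' to the socket systemd passed in NOTIFY_SOCKET."""
        addr = os.environ.get('NOTIFY_SOCKET')
        if not addr:
            return  # not running under a Type=notify systemd unit
        if addr.startswith('@'):
            addr = '\0' + addr[1:]  # abstract-namespace socket
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
        try:
            sock.sendto(state.encode('utf-8'), addr)
        finally:
            sock.close()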
21:42:46 <timburke> g'night, acoles
21:42:50 <mattoliver> It's Easter long weekend here, so not too much will happen this next week as Friday and Monday are off. But hopefully I'll get a chance to go over the patches I have that we discussed at PTG and will follow up on them. Will add them to the agenda if there is anything to talk about.
21:42:59 <mattoliver> Nice work Tim.
21:43:42 <timburke> all right then
21:43:53 <timburke> thank you all for coming, and thank you for working on swift!
21:43:57 <timburke> #endmeeting