21:02:45 <timburke_> #startmeeting swift
21:02:45 <opendevmeet> Meeting started Wed Jun 16 21:02:45 2021 UTC and is due to finish in 60 minutes.  The chair is timburke_. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:02:45 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:02:45 <opendevmeet> The meeting name has been set to 'swift'
21:02:53 <timburke_> who's here for the swift meeting?
21:03:01 <seongsoocho> o/
21:03:11 <mattoliver> o/
21:03:15 <kota> o/
21:03:45 <timburke_> as usual, the agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:03:53 <timburke_> #topic ARM testing
21:03:54 <mattoliver> Go team APAC 😀
21:04:33 <kota> lol
21:04:37 <timburke_> just a quick status update -- we've got patches landed for libec, pyeclib, and unit tests for swift
21:04:48 <kota> excellent
21:04:49 <mattoliver> Nice
21:05:45 <timburke_> mattoliver did a great job getting a patch together for func and probe tests, we just need to make sure the job name accurately reflects the python version we're testing against
21:06:03 <timburke_> https://review.opendev.org/c/openstack/swift/+/793280
21:06:13 <timburke_> ...which brings me to...
21:06:19 <timburke_> #topic py3 func tests
21:06:25 <mattoliver> LOL yeah, that wasn't confusing at all :P
21:06:58 <timburke_> the ARM func test jobs seemed to be testing against py38 despite being based on our py37 func test jobs
21:07:41 <timburke_> i think it came down to the version of ubuntu that was being used; the func test job definition itself basically says "test with whatever version of py3 is available"
21:08:03 <mattoliver> Was it the nodeset we picked or just bad naming of job
21:08:19 <mattoliver> Oh, I seem to be lagging a bit.
21:08:47 <timburke_> i proposed https://review.opendev.org/c/openstack/swift/+/795425 to switch our py37 func test jobs to use focal instead of bionic, and in that way switch from testing py37 to py38
21:09:15 <mattoliver> Makes sense
21:11:25 <timburke_> and i wanted to get other people's opinions on whether that's what we'd like to do. officially, openstack wants to target py36 and py38 (https://governance.openstack.org/tc/reference/runtimes/xena.html#python-runtimes-for-xena); it *does* seem a little weird that our func test jobs try to split the difference
21:12:42 <timburke_> at the same time, i don't really want to run all the func tests on py27, py36, *and* py38... and then *again* for py38 on ARM...
21:12:46 <mattoliver> I think we should just be basing it off the last LTS and py2
21:14:43 <timburke_> it's worth pointing out that we *do* have experimental jobs for py36 on centos8, too. they're just run on demand, though -- i always make a point of checking them when putting together a release
21:16:09 <timburke_> kota, seongsoocho what are your thoughts on it? does it make much difference in your opinions?
21:16:44 <mattoliver> I can imagine there are people out there who'd run centos8/rhel8, so it makes sense to double-check that.
21:17:05 <kota> hmm...
21:18:22 <kota> imo, py38 is nice to test but i know py36 is still the default on bionic, which is long term support...
21:18:55 <kota> so dropping 37 would be ok
21:20:26 <kota> ah, are we going to drop py36 test too?
21:20:30 <timburke_> fwiw, i can't think of any time i've seen a legit py36 or py37 test failure that still had passing tests for py27 and py38
21:21:02 <timburke_> well, we don't have a py36 job that runs on every patchset currently
21:21:13 <kota> yeah
21:21:24 <timburke_> it's always done on demand, by leaving a comment that's just "check experimental"
21:21:55 <timburke_> and of course, i plan on keeping all the unit test jobs
21:22:02 <mattoliver> Wow bionic will be around until 2028..
21:23:21 <mattoliver> Maybe we need to run the latest LTS on every patch and old but potentially current OSes (bionic, centos8) on experimental
21:24:06 <mattoliver> *Current but old LTS
21:24:18 <timburke_> seems reasonable -- i can look at getting bionic on the experimental queue
21:24:40 <kota> IMHO, when the NTT group adopts OSS they always test deeply on their specific OS version, so i suppose upstream doesn't have to keep tracking every OS version if it costs too much.
21:25:32 <kota> the downstream can experiment by themselves, maybe?
21:26:14 <timburke_> and of course, we'd continue to support and accept patches for whatever versions people are actually running
21:26:32 <mattoliver> +1
21:27:29 <kota> Focal has already been out for a year, so downstream can reasonably choose it as the system when they're planning...
21:27:40 <kota> timburke_: +1
21:28:21 <timburke_> reminds me of https://github.com/openstack/swift/commit/dca658103 -- keeping trusty support in 2019 :-)
21:28:36 <timburke_> all right, https://review.opendev.org/c/openstack/swift/+/795425 seems good to go. thanks for the input!
21:29:03 <timburke_> #topic sharding and shrinking
21:29:11 <kota> python 3.6 has a shorter life than ubuntu bionic's eol :P https://endoflife.date/python
21:29:31 <mattoliver> Lol
21:29:55 <timburke_> ubuntu's getting to feel some of redhat's pain :P
21:30:37 <kota> sorry for interruption, please go ahead
21:30:57 <timburke_> i know acoles and mattoliver have been looking at dealing with tail shards
21:31:28 <timburke_> https://review.opendev.org/c/openstack/swift/+/793543
21:31:29 <timburke_> https://review.opendev.org/c/openstack/swift/+/794582
21:31:58 <timburke_> https://review.opendev.org/c/openstack/swift/+/794869
21:32:13 <mattoliver> Lol yeah, so many different approaches. Found a way to solve it "simply" but it doesn't solve well for the auto sharding case.
21:32:56 <mattoliver> So going with that to bandaid current deployments.
21:33:02 <timburke_> do we have a main path forward yet, or are we still mostly experimenting with various ideas?
21:33:26 <mattoliver> Giving us more time to decide on which is the best total (autosharder included) followup.
21:36:26 <mattoliver> https://review.opendev.org/c/openstack/swift/+/794582 is the way forward.. giving us time to decide on the best auto-shard follow up
21:37:18 <timburke_> starred. i'll try to take a look today
21:37:42 <mattoliver> which might be using Contexts to track progress or a smarter scanning backend. But we'd have more time to contemplate the pros and cons of those, and not at the cost of tail shards
21:37:58 <timburke_> sounds like a plan
21:38:01 <timburke_> #topic relinker
21:38:49 <timburke_> i don't think i've got any patches to call out here, but wanted to mention that we've started increasing part power for our main erasure coded policy. going well so far!
21:39:42 <seongsoocho> cool !
21:40:28 <timburke_> i still need to write up a doc to consolidate some of what we've learned :-/
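A minimal sketch of the partition math behind the relink step, for context: during a partition power increase each object is hard-linked from its old partition directory into the one computed with the new part power. The helper below assumes the usual md5 path hash and illustrative part power values; it is not the actual relinker code.

    import hashlib
    import struct

    def object_partition(account, container, obj, part_power, suffix=b''):
        """Partition for an object at a given part power (sketch).

        Swift takes the top part_power bits of the 32-bit prefix of the
        path hash, so doubling the part power maps old partition p to
        either 2*p or 2*p + 1 -- which is what lets the relinker link
        files into their new homes ahead of the ring change.
        """
        path = ('/%s/%s/%s' % (account, container, obj)).encode('utf-8')
        digest = hashlib.md5(path + suffix).digest()
        return struct.unpack_from('>I', digest)[0] >> (32 - part_power)

    # illustrative part powers only
    old = object_partition('AUTH_test', 'c', 'o', part_power=17)
    new = object_partition('AUTH_test', 'c', 'o', part_power=18)
    assert new in (2 * old, 2 * old + 1)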
21:40:57 <timburke_> #topic dark data watcher
21:41:15 <timburke_> i still haven't reviewed the fixes yet. sorry zaitcev :-(
21:41:36 <timburke_> #topic open discussion
21:41:44 <timburke_> anything else we ought to bring up this week?
21:42:32 <mattoliver> I've been playing with memcache logging and improvements
21:43:53 <mattoliver> Now that shard listings are cached they "could" grow up to the max item size... so I've been playing with adding a warning when you start getting to a bigger size:
21:44:30 <mattoliver> https://review.opendev.org/c/openstack/swift/+/794582
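A rough sketch of the kind of check being described, assuming a generic cache client, a JSON-serialized listing, and a made-up warning threshold; the actual patch may look quite different.

    import json
    import logging

    logger = logging.getLogger(__name__)

    MAX_ITEM_SIZE = 1024 * 1024                  # memcached's default max item size
    WARN_THRESHOLD = int(MAX_ITEM_SIZE * 0.8)    # hypothetical: warn at 80% of the limit

    def cache_shard_listing(cache, key, listing):
        """Serialize and cache a shard listing, warning as it nears the limit."""
        serialized = json.dumps(listing).encode('utf-8')
        if len(serialized) >= WARN_THRESHOLD:
            logger.warning(
                'Cached value for %s is %d bytes, approaching the %d byte '
                'memcached item size limit', key, len(serialized), MAX_ITEM_SIZE)
        cache.set(key, serialized)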
21:44:35 <timburke_> nice
21:45:06 <timburke_> i've been running down some tracebacks we've been seeing. which caused me to dust off something pretty old :-)
21:45:16 <mattoliver> And one to implement a basic metadata get (mg) to our memcacheRing client: https://review.opendev.org/c/openstack/swift/+/795484
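For context on "mg": memcached's meta text protocol includes a meta get command. Below is a bare-bones sketch of issuing one over a raw socket against a local memcached, with hand-picked flags; it is only an illustration, not the MemcacheRing change in the patch.

    import socket

    def meta_get(key, host='127.0.0.1', port=11211):
        """Meta get: returns (header_line, value_or_None).

        'v' asks for the value, 't' for the remaining TTL, 's' for the
        size; a miss comes back as a bare 'EN' line.
        """
        with socket.create_connection((host, port)) as sock:
            sock.sendall(b'mg %s v t s\r\n' % key.encode('utf-8'))
            reader = sock.makefile('rb')
            header = reader.readline().strip()   # e.g. b'VA 5 s5 t42' or b'EN'
            if header.startswith(b'VA'):
                size = int(header.split()[1])
                value = reader.read(size)
                reader.readline()                # consume trailing CRLF
                return header, value
            return header, None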
21:45:31 <timburke_> https://review.opendev.org/c/openstack/swift/+/103779 - Add support for multiple container-reconciler
21:46:01 <zaitcev> So many patches.
21:46:12 <zaitcev> I resorted to reading Priority reviews wiki.
21:46:14 <mattoliver> It's back!
21:46:44 <timburke_> i'm going to try to do a better job keeping that up to date :-)
21:47:18 <mattoliver> multiple reconciler is, what, 6 years old :)
21:47:58 <zaitcev> Puts PUT+POST, Guru Meditation, and Pluggable Backends to shame.
21:48:27 <timburke_> it's been a very long while since i remember seeing a 1 leading the patch number
21:48:36 <mattoliver> I had a go at rebasing the general task queue to better shard expirer container usage... but not quite done. And there might be a bit of a conflict with SLO async deletes I need to think through.
21:49:52 <mattoliver> the general task queue is great, but enqueue is a little more interesting as we do it by policy... at least as it currently stands in a patch from 2-3 years ago :P
21:51:15 <timburke_> fwiw, the story on the old patch is that in the absence of a good scale-out plan, you either have a single box handling all reconciler work, or you have a bunch of boxes all pulling from the same queue and writing at approximately the same time. we went with the latter (no SPOF!) but it can cause EEXIST errors during a part power increase
21:53:00 <timburke_> all right, i think that's about all this week
21:53:10 <zaitcev> ok
21:53:14 <timburke_> thank you all for coming, and thank you for working on swift!
21:53:20 <timburke_> #endmeeting