21:00:02 <timburke__> #startmeeting swift
21:00:03 <openstack> Meeting started Wed Mar  3 21:00:02 2021 UTC and is due to finish in 60 minutes.  The chair is timburke__. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:07 <openstack> The meeting name has been set to 'swift'
21:00:10 <timburke__> who's here for the swift meeting?
21:00:15 <acoles> o/
21:00:16 <mattoliverau> o/
21:00:19 <kota_> hi
21:00:31 <seongsoocho> o/
21:00:42 <rledisez> o/
21:02:18 <timburke__> as usual, the agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:02:21 <timburke__> first up
21:02:30 <timburke__> #topic sharding backports
21:02:36 <timburke__> they're all merged!
21:02:52 <mattoliverau> \o/ nice
21:03:27 <kota_> great
21:03:46 <timburke__> unfortunately, zaitcev doesn't seem to be here right now, or i'd ask him if there was anything else he needed (either more patches backported, since we've had some more sharding-related patches on master, or stable releases/tags requested)
21:04:11 <timburke__> so i don't know that there's much to discuss, really. just keeping people posted
21:04:16 <timburke__> but speaking of...
21:04:20 <timburke__> #topic releases
21:04:30 <zaitcev> timburke__: I think it's enough, because it's going to be auto-sharding and stuff from there on.
21:05:16 <timburke__> ok, sounds good. fwiw, we still haven't touched the autosharding algorithm or switched it to default to on or anything
21:05:47 <timburke__> mainly just putting more pieces in place to make it easier to recover if sharding goes sideways and you end up with overlaps
21:06:06 <timburke__> we're getting to the end of the cycle! we should do some releases (both client and server)
21:06:07 <mattoliverau> which we need for autosharding :)
21:06:17 <mattoliverau> +1
21:06:33 <timburke__> i've got a draft of some 2.27.0 release notes at https://review.opendev.org/c/openstack/swift/+/777708
21:06:44 <timburke__> there's a lot of great stuff!
21:07:19 <timburke__> please take a look and let me know if anything needs more details or better wording
21:08:05 <timburke__> on the client side, the gate's fixed! i still need to write up some proper release notes, though (and it's been a while since i have...)
21:09:28 <timburke__> i don't think we've got any outstanding issues that we'd want to hold a release on; be sure to let me know if i'm wrong there :-)
21:09:58 <timburke__> #topic CORS tests
21:11:27 <timburke__> so i wanted to bring this to the community one more time -- acoles, clayg, and mattoliverau have all at least taken a look at the new tests in https://review.opendev.org/c/openstack/swift/+/533028 and found them more or less useful/readable
21:12:24 <timburke__> ...but the four of us are all at nvidia, and i want to give people a chance to raise any lingering concerns before we go pulling a bunch of javascript tests into our tree
21:12:56 <mattoliverau> yeah, I know there's some js in there, but once you take a look at the framework it's not too complicated and it really tests CORS functionality. So I think it's a big win.
21:13:34 <kota_> README.rst may be the entry point?
21:13:39 <timburke__> so: kota_, rledisez, zaitcev, seongsoocho -- any worries or doubts that make you think we *shouldn't* add this test suite?
21:13:48 <timburke__> yeah, that's a good starting point
21:14:19 <timburke__> fwiw, it's got a (voting!) gate job
21:14:23 <rledisez> I'm all for it. Better to have JS tests than no test at all. And the JS part doesn't look that terrible
21:14:44 <kota_> Cool stuff. I know selenium a little; it sounds nice for testing real browser-based CORS
21:15:21 <timburke__> acoles put up a dnm patch to see what failures look like: https://12f1c315a5e3b29b1269-6e0919d6274382461a92af7573f772ff.ssl.cf5.rackcdn.com/777405/1/check/swift-func-cors/7cfac1e/cors-test-results.txt
21:15:22 <acoles> FWIW even though I'm at nvidia I did initially push back on timburke__ bringing in the js/browser based tests, but it didn't take me too long to get up to speed on the framework, the README is pretty good, and it does give us robust test capability, so I'm now +2
21:16:22 <acoles> oh, and timburke__ even has the test failures enumerated in an artifact published by zuul :)
21:17:27 <timburke__> the biggest reason (in my mind) to go ahead and do the in-browser tests is that there's no chance the tests aren't representative of what a client would do
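For anyone skimming the log later: a browser-driven CORS check of this kind might look roughly like the sketch below. This is not the framework from the patch above; the origin page URL, Swift endpoint, and container name are made up, and the point is only that the browser itself issues the cross-origin request, so the preflight and allow-origin handling is exactly what a real client would see.

    # Minimal sketch only -- not the test framework in the patch under review.
    # The origin page URL, Swift endpoint, and container name are hypothetical.
    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        # Load a page served from one origin...
        driver.get("http://127.0.0.1:8000/cors-test-page.html")
        # ...then let the browser itself make the cross-origin request to Swift,
        # so CORS preflight/allow-origin behaviour is exactly what a client sees.
        allowed = driver.execute_async_script("""
            var done = arguments[arguments.length - 1];
            fetch('http://127.0.0.1:8080/v1/AUTH_test/cors-container/obj')
                .then(function (resp) { done(resp.ok); })
                .catch(function () { done(false); });
        """)
        print("cross-origin GET allowed:", allowed)
    finally:
        driver.quit()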
21:19:14 <timburke__> i'm not hearing any immediate opposition -- is there anyone that would like to do a review in the next couple days and would rather we not merge it until you have?
21:21:18 <acoles> tick
21:21:24 <acoles> tock
21:21:32 <timburke__> all right. let's do it then!
21:21:37 <acoles> :)
21:21:42 <timburke__> #topic relinker
21:22:16 <timburke__> so we (nvidia) started a part power increase yesterday!
21:22:25 <timburke__> and we hit some bugs :-(
21:22:55 <timburke__> acoles has been great about getting patches up to address some of the races we observed
21:22:58 <kota_> :(
21:23:00 <acoles> :'(
21:23:46 <seongsoocho> :( I'm still doing a relink job... (almost 2 weeks per node)
21:23:54 <timburke__> the relinker itself seemed to do well, but the object-server would occasionally fail to link into the new partition, due to one of three errors
21:24:04 <acoles> patch chain starts here https://review.opendev.org/c/openstack/swift/+/778474
21:24:50 <timburke__> seongsoocho, ouch! remind me: how full are the nodes? you might want to adjust the ionice to let it go faster -- i feel like 2 weeks is getting to be a long time
21:25:42 <seongsoocho> each node has 12 disks (3TB used)
21:26:29 <seongsoocho> 80% full per node
21:27:33 <timburke__> do you have an estimate of how many diskfiles are on each disk? iirc we had just under 1,000,000 per disk
21:28:29 <seongsoocho> And the log says 1 million diskfiles were relinked
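As a rough way to get that kind of per-disk estimate, something like the sketch below works; the mount point is hypothetical, so adjust it to your devices path and disk name.

    # Rough sketch for estimating diskfiles on a single disk; the path below
    # is hypothetical -- point it at the objects dir of the disk you care about.
    import os

    count = 0
    for dirpath, dirnames, filenames in os.walk('/srv/node/d1/objects'):
        count += sum(1 for name in filenames if name.endswith('.data'))
    print('approx diskfiles on this disk: %d' % count)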
21:29:32 <timburke__> ok. and you're relinking on all nodes at once, yeah?
21:29:38 <timburke__> or one at a time?
21:30:33 <seongsoocho> yes, I did it on all nodes at once
21:30:42 <mattoliverau> running it concurrently on each device (i.e. multiple relinkers, at least until Tim's other patch lands), or one at a time?
21:31:35 <timburke__> one disk at a time, iirc -- limitation of the stable version seongsoocho is on
21:31:59 <timburke__> so actually -- i guess 2 weeks doesn't sound so crazy :-(
21:32:26 <seongsoocho> Yes.... too long..
21:32:52 <mattoliverau> well, good news: all that will get quicker, so long as we fix the object-server race conditions
21:32:59 <timburke__> ok, i don't know how much else i really want or need to say on this -- i just wanted to give a little update, and a heads-up that there are more part-power-increase patches coming
21:33:13 <timburke__> #topic shrinking
21:33:27 <timburke__> acoles, mattoliverau how's it going?
21:33:59 <timburke__> (i'm assuming not much progress in the last day or two, but i figure there may have been some work toward the end of last week ;-)
21:34:32 <acoles> I can't remember what we reported in last week's meeting, so...
21:34:35 <mattoliverau> Good, though I got a little distracted. I'm working on not detecting new ACTIVE shards as shrinkable when there is a small shard container at a primary -- an edge case that could be problematic
21:34:48 <acoles> mattoliverau has added shrinking candidates to the recon dump
21:34:48 <mattoliverau> The recon shrinking_candidates patch landed.
21:35:33 <acoles> I fixed a couple of issues with the tool to find shrinkable shards
21:35:50 <mattoliverau> Also working on a patch for the s-m-s-r tool to use the sharder config, so the tool is easier to use as it'll default to your configuration.
21:36:19 <acoles> ^^ that's almost done IIRC
21:36:44 <timburke__> cool, sounds good
21:36:55 <acoles> I'm working on being able to include an estimate of tombstone row count when deciding to shrink - I'm worried about shrinking an apparently empty shard but actually causing a large number of tombstone rows to be moved :(
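To illustrate the concern (this is not the approach in acoles' patch): tombstones live in the shard container DB's object table as rows marked deleted, so a quick check against a copy of the DB could look something like this, with a hypothetical path.

    # Quick-and-dirty illustration of the tombstone concern, not the patch itself.
    # Count deleted (tombstone) rows in a *copy* of a shard container DB;
    # the DB path is hypothetical.
    import sqlite3

    conn = sqlite3.connect('/tmp/shard-db-copy.db')
    total, tombstones = conn.execute(
        'SELECT COUNT(*), COALESCE(SUM(deleted), 0) FROM object').fetchone()
    print('object rows: %d, tombstone rows: %d' % (total, tombstones))
    conn.close()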
21:37:23 <mattoliverau> nice
21:37:36 <mattoliverau> config patch: https://review.opendev.org/c/openstack/swift/+/774584
21:38:31 <mattoliverau> also been playing with mean and std dev in relation to shards to see if it'll give us a good glance at how distributed the shard ranges are: https://review.opendev.org/c/openstack/swift/+/777922
21:39:10 <mattoliverau> But that's just a wip while I play around with displaying some kind of variance number to see distribution at a glance.
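As a toy version of that idea: the spread of objects across a container's shard ranges can be boiled down to a mean and standard deviation, where a large deviation relative to the mean suggests an uneven split. The per-shard counts below are made up.

    # Toy illustration of the idea in the WIP patch above: summarize how evenly
    # objects are spread across a container's shard ranges. Counts are made up.
    import statistics

    objects_per_shard = [510000, 495000, 503000, 120000, 980000]
    mean = statistics.mean(objects_per_shard)
    stdev = statistics.pstdev(objects_per_shard)
    print('mean objects/shard: %.0f, std dev: %.0f (%.0f%% of mean)'
          % (mean, stdev, 100 * stdev / mean))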
21:39:37 <timburke__> all those sound super useful. keep up the good work!
21:39:41 <timburke__> #topic open discussion
21:39:51 <timburke__> anything else we should discuss this week?
21:42:08 <timburke__> all right
21:42:11 <timburke__> thank you all for coming, and thank you for working on swift!
21:42:15 <timburke__> #endmeeting