21:00:30 #startmeeting swift
21:01:42 meeting bot out again?
21:02:11 maybe it's already moved networks
21:02:14 really, *sigh*
21:02:20 hmmm....
21:02:27 well, while it's coming back up, who's here for the swift meeting?
21:02:31 well I woke up to being disconnected, so I suspect so did the bot etc?
21:02:45 o/
21:03:00 o/
21:03:01 agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:03:21 first up, irc
21:03:28 #topic irc network
21:03:58 so i was originally going to ask for feedback per http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022675.html
21:04:29 but in the last day or two, things have started moving more quickly. the wait-and-see approach seems untenable
21:04:51 and so openstack's moving
21:04:54 #link http://lists.opendev.org/pipermail/service-announce/2021-May/000021.html
21:05:54 everybody probably ought to register nicks on OFTC. i did it yesterday, it's not too bad
21:06:05 A Solomonic decision.
21:06:18 Of course it's not bad, duh.
21:07:31 i'll plan on hanging around freenode (and libera, for that matter, just in case) to help redirect
21:08:54 i'm thinking we could also use the disruption to change meeting location; may as well hold meetings in #openstack-swift itself. any objections?
21:09:57 no objection from me
21:10:07 No objection, although I hope you realize that it means a meeting without bot support.
21:10:13 looks good
21:10:32 oh, I was about to ask about the bot, that is useful
21:10:38 when it works
21:10:47 zaitcev, actually, i think it should work. well, provided the bot's able to connect ;-)
21:11:10 we'll confirm next week, i suppose
21:11:19 I referred to logs in the past, in particular when I was unable to make the meeting, or plain forgot honestly.
21:11:43 me too. it'd be a shame to lose them
21:11:49 +1
21:13:02 we'll make sure the meeting's logged; if it's not working, we can change rooms again.
i'm fairly certain there are already teams meeting in their normal channels *today*, though
21:13:14 any other comments or questions?
21:13:52 the openstack meeting bot can work in other rooms too, we use it for first contact sig in the #openstack-upstream-institute room
21:14:15 when should i change the network? now? or can we have a term to move to the new network?
21:14:47 kota_, from the service-announce email, it looks like the transition will be over the weekend
21:15:11 i see, probably prepare this week, then switch next week.
21:15:16 thanks for managing the transition, fungi!
21:15:44 all right, next up
21:15:51 #topic testing on ARM
21:16:05 Et tu, Brute?
21:16:29 ricolin recently proposed some new arm64-based unit test jobs
21:16:32 https://review.opendev.org/c/openstack/swift/+/792867
21:17:38 i love this plan! i recently got access to a fairly beefy ARM box through work to do some preliminary testing
21:18:17 Imagine a cluster of GPUs at NVIDIA, backed by Swift on ARM. By NVIDIA, of course (like Tegra++).
21:18:17 currently, the patch just includes unit tests on py38 and py39 -- my question is, how much do we want to flesh out the arm64 test matrix?
21:20:19 Dunno. I'd love to have all of them, including a mix where the client is on x86. Probably not a mixed cluster though.
21:20:29 Because rsync sqlite
21:20:40 probe tests could be interesting. They work in my testing, but they'd stress the functionality more.
21:20:58 zaitcev, now you're making me really want to do it with a mixed cluster :-)
21:21:14 lol
21:21:50 i'm inclined to say there ought to be at least one func test job, and agree that a probe test job would be great to stress more of the daemons. do we have strong opinions about python versions?
21:23:01 Meh, the newer the better.
21:23:42 perhaps liberasurecode/PyECLib may also want the ARM test job...
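For context, the arm64 jobs under discussion would look roughly like the following in a project's Zuul configuration. This is a sketch only: the job names, parents, nodeset label, and pipeline name are assumptions, not necessarily what patch 792867 actually uses.

```yaml
# Hypothetical .zuul.yaml fragment adding arm64 unit test jobs.
# All names here are illustrative assumptions.
- job:
    name: swift-tox-py38-arm64
    parent: openstack-tox-py38
    description: Run swift unit tests with python 3.8 on arm64.
    nodeset: ubuntu-focal-arm64

- job:
    name: swift-tox-py39-arm64
    parent: openstack-tox-py39
    description: Run swift unit tests with python 3.9 on arm64.
    nodeset: ubuntu-focal-arm64

- project:
    check-arm64:
      jobs:
        - swift-tox-py38-arm64
        - swift-tox-py39-arm64
```

Fleshing out the matrix as discussed would mean adding similar jobs parented to the func and probe test jobs, pointed at the same arm64 nodeset.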
21:24:05 +100, i'll look at doing that after the meeting
21:24:17 thx
21:24:20 (unless someone else would like to volunteer)
21:24:59 * ChanServ gives channel operator status to openstack
21:25:09 the bot's back
21:25:15 \o/
21:25:23 I'm happy to give it a go timburke
21:25:42 cool, thanks! i'll plan on reviewing, then :-)
21:26:35 all right, next up
21:26:45 #topic train extended maintenance
21:27:05 Oh yeah, I have it on my TODO list but I forgot what it entailed.
21:27:13 there's a patch up to transition stable/train to EM
21:27:19 #link https://review.opendev.org/c/openstack/releases/+/790774
21:28:19 i wanted to check whether anyone has strong opinions about whether there ought to be one more tag before doing that. it looks like (by default?) the train-em tag will be at the last-released version
21:28:49 * njohnston is now known as njohnston|away
21:29:28 there are a couple py3 fixes for train that are still open; i should have them merged later this week though
21:31:03 ok
21:32:34 well, i'll try to get a tag together, too -- we'll see whether i get to it
21:32:45 speaking of stable...
21:32:55 #topic failing docs jobs
21:33:43 just a heads-up: our stable/victoria and stable/ussuri gates (as well as swiftclient master!) were broken for a while, but should be better now
21:34:24 Well, as long as we didn't commit anything to docs that cannot be processed by the newest Sphinx, we ought to be fine, right?
21:34:39 it came down to the docs jobs not using upper-constraints, so they pulled in Sphinx 4.0.0, which had a new bindep that we weren't installing
21:35:14 the fix for now is to use upper-constraints, but i'm debating updating bindep so newer Sphinx actually works
21:36:23 moving on
21:36:34 #topic sharding/shrinking
21:36:43 mattoliverau, acoles how's it going?
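Circling back to the docs-job fix mentioned above: pinning the docs requirements to upper-constraints typically amounts to a tox.ini change along these lines. This is a sketch; swift's actual docs env and the branch-specific constraints URL may differ.

```ini
# Hypothetical tox.ini fragment; the real docs env may differ.
# Constraining deps keeps the docs job from pulling in Sphinx 4.0.0
# on stable branches where it isn't expected to work.
[testenv:docs]
deps =
  -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/victoria}
  -r{toxinidir}/doc/requirements.txt
commands = sphinx-build -W -b html doc/source doc/build/html
```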
21:38:06 not much to report on that topic from me this week
21:38:26 👍 no worries
21:38:51 We do have a patchset where we try to adjust tail shards that tend to be small
21:39:02 but things are in flux on that atm.
21:39:07 I have a patch to deprecate the percentage-based conf options https://review.opendev.org/c/openstack/swift/+/792182, which mattoliverau has +2 on, so heads-up for that. But the percent options remain in place for backwards compat
21:39:55 oh yes, thanks @mattoliverau, forgot that - we noticed that there was a systemic tendency to create small shards when sharding :(
21:39:56 https://review.opendev.org/c/openstack/swift/+/791885 - sharding: Add auto mode to rows_per_shard
21:40:09 mattoliverau is going to fix it :)
21:40:19 https://review.opendev.org/c/openstack/swift/+/792182 - Add absolute values for shard shrinking config options
21:40:24 yeah?
21:40:34 yeah, we'll come up with something
21:40:47 @timburke yep, thanks
21:40:49 👍
21:41:06 791885 is where there is some discussion; we're still not 100% on a solution, though we know what we want to do, just discussing the approach :)
21:41:52 i'll try to take a look, since you're freeing up my afternoon a bit mattoliverau ;-)
21:42:05 #topic relinker
21:42:37 i updated the tests on https://review.opendev.org/c/openstack/swift/+/790305 - relinker: Remove replication locks for empty parts
21:43:11 it'll no longer hold the partition lock for the entire relinker run
21:43:36 timburke: ok, I'll try to review that again tomorrow
21:43:43 thanks!
21:44:07 #topic dark data watcher
21:44:26 oh yeah, I should loop back around to review that and test the new pgroup signal stuff in my other patch to see how it behaves when we've got multiple workers.
21:44:27 zaitcev, acoles i saw there was some back and forth -- how's it going?
21:44:39 timburke: "it'll no longer hold the partition lock for the entire relinker run" ? - did it?
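As an aside on the shrinking config change discussed above (absolute values alongside the deprecated percentage options): keeping the percent options working for backwards compat could resolve along these lines. The option names mirror those in the discussion, but the defaults and exact semantics here are assumptions, not Swift's actual implementation.

```python
def resolve_shrink_threshold(conf, shard_container_threshold=1000000):
    """Sketch of backwards-compatible option resolution: prefer the
    new absolute option if the operator set it, otherwise fall back
    to the legacy percentage-based option.

    `conf` is a plain dict of option name -> value. The 10% default
    and option names are illustrative assumptions.
    """
    absolute = conf.get('shrink_threshold')
    if absolute is not None:
        return int(absolute)
    # Legacy path: a percentage of the shard container threshold.
    percent = float(conf.get('shard_shrink_point', 10))
    return int(shard_container_threshold * percent / 100.0)
```

With this shape, existing configs that only set the percentage option keep working unchanged, while new configs can state the threshold directly in rows.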
21:45:25 acoles, the *test* -- previously, it'd take the lock, run the relinker, and finally release it
21:45:47 OIC, of course, makes sense, phew!
21:45:51 LOL
21:46:06 now it'll patch out some stuff so the test only grabs the lock right before a couple of interesting points
21:46:18 :thu
21:46:19 timburke: So, we have 3 patches now. I don't remember numbers but it's the one I wrote (for ignoring too-new objects), the one you wrote (for listing a sharded container), and the one I split off the latter, asking all servers for absolute quorum.
21:46:19 and there are now two such tests :-)
21:46:32 First goes okay, just needs one more turn
21:46:42 Second you probably know better than I do.
21:46:54 Third is easy as pie, also needs some slight polish though
21:47:00 Or maybe not even that
21:47:20 the grace_age seems like a good addition
21:47:38 https://review.opendev.org/c/openstack/swift/+/788398 - Make dark data watcher ignore the newly updated objects
21:48:06 https://review.opendev.org/c/openstack/swift/+/792713 - needs a quick review and I think it's good to go despite the lack of testing coverage.
21:48:51 Dark Data Watcher: switch to agreement across the whole ring
21:48:52 But the new-object thing, that one does need honest tests. I tried to skate by by adding more objects, so some are new, some are whatever. It's good but I guess not good enough.
21:50:00 I thought having 4 would be enough, because the matrix is 2 by 2: dark/not, new/not
21:50:15 But then there's a dimension where some servers do not reply, so
21:50:40 distributed systems are hard
21:51:40 all right
21:51:45 #topic open discussion
21:51:54 anything else we ought to discuss this week?
21:52:23 I think I have nothing. At least this week.
21:54:37 anybody have time to review https://review.opendev.org/c/openstack/swift/+/792490 - s3api: Allow CORS preflights for pre-signed URLs?
wouldn't mind fixing that for fozboz
21:55:45 and https://review.opendev.org/c/openstack/swift/+/793053 - Get TestDarkDataQuarantining passing when policy-0 is erasure-coded would be nice to have -- should be a pretty quick review
21:56:16 oops, forgot about that
21:56:32 add it to the priority reviews so I have somewhere to look, and I'll try to loop around
21:56:57 *that's* what i should do this afternoon! i haven't updated that in quite some time
21:57:53 all right. thank you all for coming, and thanks for working on swift!
21:58:06 next week on OFTC! :-)
21:58:10 #endmeeting