21:00:15 #startmeeting swift
21:00:15 Meeting started Wed Jul 10 21:00:15 2024 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:15 The meeting name has been set to 'swift'
21:00:22 who's here for the swift meeting?
21:00:30 o/
21:00:32 I'm here
21:01:09 o/
21:01:40 as usual, the agenda's at
21:01:43 #link https://wiki.openstack.org/wiki/Meetings/Swift
21:01:58 first up, just a few status updates
21:02:10 #topic account-reaper and sharded containers
21:02:19 #link https://bugs.launchpad.net/swift/+bug/2070397
21:02:20 Bug #2070397 - Account reaper do not reap sharded containers (New)
21:02:35 i still haven't even started on a probe test for it :-(
21:03:22 if anyone else would like to take a stab at it, you'd be more than welcome -- otherwise, i'll try to use this meeting to keep it on my radar
21:03:44 #topic cooperative tokens
21:03:49 Oh, because it won't shrink the empty shards
21:04:18 mattoliver, i think it goes deeper than that -- i think it can't even list the objects to delete
21:04:23 Oh, it's talking to the broker, so not getting objects
21:04:42 Damn, yeah, of course! Good find
21:04:56 yeah, direct_client to get around the 410...
21:05:00 #link https://review.opendev.org/c/openstack/swift/+/890174
21:05:01 patch 890174 - swift - common: add memcached based cooperative token mech... - 33 patch sets
21:05:40 looks like jianjian has gotten some reviews and things keep moving forward
21:06:12 Yup, I've been living at it
21:06:22 *looking
21:07:05 thanks mattoliver! i also seem to remember hearing good things about it as we've been cautiously trying it out in prod, but i don't have any numbers at my fingertips right now
21:07:24 (Sorry, visitors are sleeping in my office, so I'm on my phone for the meeting, so autocorrect is going to be a pain)
21:07:42 no worries :-)
21:07:46 #topic pkg_resources warnings in editable installs
21:07:56 first patch merged!
21:07:59 #link https://review.opendev.org/c/openstack/swift/+/918365
21:07:59 patch 918365 - swift - Use entry_points for server executables (MERGED) - 4 patch sets
21:08:08 thanks mattoliver and zaitcev!
21:08:29 Well we've rolled it out to 1/2 of prod and initial numbers look awesome and promising (re coop)
21:08:37 next in the chain is still fairly mechanical
21:08:41 #link https://review.opendev.org/c/openstack/swift/+/922151
21:08:41 patch 922151 - swift - Use entry_points for a bunch more executables - 4 patch sets
21:08:48 Nps, will work my way up the chain
21:09:16 That next one is a big one 😀
21:09:24 as we get further out, more code ends up moving, so i expect them to trigger more discussion
21:10:11 #topic ISO timestamps and swift-drive-audit
21:10:28 DeHackEd brought this to our attention
21:10:30 #link https://bugs.launchpad.net/swift/+bug/2072609
21:10:31 Bug #2072609 - swift-drive-audit does not handle ISO timestamps in logs (New)
21:10:49 and even provided a fix in the bug report
21:11:21 i'll aim to get a patch up later today, if anyone can spare some review cycles
21:11:33 Oh nice
21:12:00 Yup, just ping me to remind me
21:12:14 👍
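Since the drive-audit topic above is about timestamp parsing, here is a minimal sketch of the idea, assuming Python 3.7+. This is not the actual swift-drive-audit code nor the fix attached to the bug report, and the function name is made up for illustration:

```python
# Minimal sketch only; not the actual swift-drive-audit code or the fix
# from the bug report. Parse the leading timestamp of a kernel log line,
# accepting both the traditional syslog form ("Jul 10 21:00:15") and an
# ISO 8601 / RFC 3339 prefix ("2024-07-10T21:00:15.123456+00:00").
from datetime import datetime


def parse_log_timestamp(line, now=None):
    """Return a datetime for the line's leading timestamp, or None."""
    parts = line.split(None, 1)
    if not parts:
        return None
    # ISO 8601 / RFC 3339, as emitted by newer rsyslog configurations
    try:
        return datetime.fromisoformat(parts[0])
    except ValueError:
        pass
    # Traditional syslog: "Jul 10 21:00:15" carries no year, so borrow
    # the current one (good enough for logs at most a few days old)
    try:
        stamp = datetime.strptime(line[:15], '%b %d %H:%M:%S')
        return stamp.replace(year=(now or datetime.now()).year)
    except ValueError:
        return None
```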
21:12:18 next up
21:12:26 #topic multi-policy containers
21:12:48 fuleco, how's it going? anything else you need from us in the short term?
21:13:18 So for this we are still running some testing and it's been working well so far
21:13:49 I think we're in good shape as it is, just gonna take us some time to wrap it up and send upstream
21:13:50 Cool
21:13:52 🎉
21:14:55 all right then, last item i've got
21:14:58 #topic operator-driven node exclusion
21:15:35 zigo brought my attention to this message on the ML (thanks, i probably would have missed it otherwise!)
21:15:43 #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/IZUXNXUQFKBZIP6YIOQJMLXPK5RI6M45/
21:16:41 core idea is to give operators the ability to tell proxies to ignore specific backend nodes, and dynamically manipulate that ignore list
21:16:56 Oh interesting, I haven't read it yet, but are they auto-adding things like error-limited nodes or something?
21:17:05 Lol, so yes
21:17:46 Maybe... I should look
21:18:10 I guess I'd like to know the use case
21:18:25 kind of, but not exactly. it's a sticky flag and requires some process to clear it when the node's back up/should receive traffic
21:19:29 my read has been that there are at least a couple different use cases
21:19:42 Ok, so a "better" or more global error-limited list maybe.
21:20:21 there's flagging known-failed devices that you're in the process of replacing -- and even once the device is replaced and remounted, you might want to avoid proxies talking to it so it can focus on incoming rsyncs
21:20:49 We have discussed in the past the ability to disable nodes in the ring, but this sounds more dynamic, which is good.
21:22:13 Kk, interesting. I guess it's best if I give it a read rather than speculating, because I wonder how the sticky flag will work etc.
21:22:14 yeah, i was thinking that it might work well in the ring file, too -- there seemed to be some concern about the time required to propagate configs though
21:23:08 i think the cardinality is expected to be pretty small, so everything's written in a flat file, one ip:port to exclude per line
21:24:00 Ok, but only so long as we're sure it'll be small; in swift things tend to grow 😜
21:24:35 I did have that global error limiter patch that uses memcache... so we could do something similar.
21:24:36 but yeah, it definitely felt related (but decidedly different!) to some other ideas we've had over the years, like ring v2 and clayg's overseer
21:25:14 Yeah
21:26:33 the concern about config propagation time kind of makes me want to pick up https://review.opendev.org/c/openstack/swift/+/670674 again and give it some gossip protocol...
21:26:34 patch 670674 - swift - WIP: Allow ring lookups via a service - 3 patch sets
21:27:15 anyway, i wanted to make sure that other people saw the conversation
21:27:22 that's all i've got for this week
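To make the flat-file idea above a little more concrete, here is a rough sketch of how a proxy-side helper might consume such a list. The file path, reload interval, and class/method names are all invented here, and the actual proposal on the ML may work quite differently:

```python
# Illustration of the "flat file, one ip:port per line" idea discussed
# above; not the proposal from the mailing list. Path, reload interval,
# and names are invented for the example.
import time


class NodeExclusionList(object):
    def __init__(self, path='/etc/swift/excluded_nodes', reload_interval=30):
        self.path = path
        self.reload_interval = reload_interval
        self._excluded = set()
        self._loaded_at = 0.0

    def _maybe_reload(self):
        if time.time() - self._loaded_at < self.reload_interval:
            return
        excluded = set()
        try:
            with open(self.path) as fp:
                for line in fp:
                    line = line.strip()
                    if line and not line.startswith('#'):
                        excluded.add(line)
        except OSError:
            # a missing or unreadable file excludes nothing; the flag is
            # "sticky" only while the entry remains in the file
            pass
        self._excluded = excluded
        self._loaded_at = time.time()

    def is_excluded(self, node):
        """node: a ring device dict with 'ip' and 'port' keys."""
        self._maybe_reload()
        return '%s:%s' % (node['ip'], node['port']) in self._excluded
```

A proxy could then filter candidate nodes much like error-limited nodes are skipped today, e.g. `[n for n in nodes if not exclusions.is_excluded(n)]`.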
21:27:28 #topic open discussion
21:27:37 anything else we should bring up this week?
21:28:57 Well I'm still reorienting after vacation. Am playing with not hashing memcache key prefixes (so I can play with prefix routing) in the memcache layer... but that's still early work.
21:29:34 And just want to have a patch in our back pocket in case it's needed.
21:30:06 Will push something up to gerrit soon but it'll be wip
21:31:12 Well I should say, optionally not hashing prefixes. I wouldn't just change behaviour. We use mcrouter, so could be really interesting for us.
21:32:43 sounds good -- would that be an update to https://review.opendev.org/c/openstack/swift/+/890139 or a new patch?
21:32:44 patch 890139 - swift - proxy-server: use a prefixed shard range cache key - 1 patch set
21:34:28 Yeah, update that, and modernise
21:34:42 👍
21:35:08 Seems swift has moved on a little in the way memcache keys work :)
21:35:16 And we (swift) have some summer school students working on some stuff. They've pushed up some patches and also plan to tackle some sharper and storage policy index ops tooling for us
21:35:39 *sigh* sharder
21:35:52 oh yeah! should we go ahead and merge https://review.opendev.org/c/openstack/swift/+/921664 ? i'm kind of inclined to
21:35:53 patch 921664 - swift - add object-count quota for accounts in middleware - 3 patch sets
21:36:12 Thanks timburke
21:36:27 Yeah, I might go merge that 😀
21:36:38 There is also a reaper patch
21:36:42 https://review.opendev.org/c/openstack/swift/+/922833
21:36:42 patch 922833 - swift - Periodic reaper recon dump showing current progress - 5 patch sets
21:36:52 (which i should be sure to take a look at)
21:36:58 Yup thanks!
21:37:39 There is a follow-up to the quota one if anyone is interested in writing it 😀
21:38:40 Basically moving the API to the x-account-quota namespace and storing it in sysmeta
21:39:03 That's all I can think of atm
21:39:41 all right, i think i'll call it early then
21:39:54 thank you all for coming, and thank you for working on swift!
21:39:59 #endmeeting
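Returning to the memcache key prefix discussion in open discussion above: a toy illustration of why keeping the key prefix un-hashed matters for something like mcrouter prefix routing. This is not patch 890139 nor the WIP change mattoliver mentioned; the example key and helper names are invented, and the "what happens today" comment is only a rough characterization of a client that hashes whole keys:

```python
# Toy illustration only (not patch 890139 or the WIP change discussed
# above): once a key is hashed whole, a router such as mcrouter has no
# readable prefix left to route on; keeping the prefix and hashing only
# the remainder preserves a routable namespace. Key and helper names are
# invented for the example.
from hashlib import md5


def hashed_key(key):
    # roughly what happens today: the whole key becomes an opaque digest
    return md5(key.encode('utf8')).hexdigest()


def prefix_preserving_key(key, separator='/'):
    # hypothetical alternative: keep the leading component readable and
    # hash only the variable remainder
    prefix, _, rest = key.partition(separator)
    return '%s%s%s' % (prefix, separator, md5(rest.encode('utf8')).hexdigest())


key = 'shard-listing-v2/AUTH_test/some-container'
print(hashed_key(key))             # opaque digest; nothing for mcrouter to match on
print(prefix_preserving_key(key))  # 'shard-listing-v2/<digest>'; prefix is routable
```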