21:00:01 #startmeeting swift
21:00:01 Meeting started Wed Jun 26 21:00:01 2024 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:01 The meeting name has been set to 'swift'
21:00:08 who's here for the swift meeting?
21:00:25 I am o/
21:00:50 hi
21:01:43 o/
21:02:17 sorry, i feel like it's been a while since i held the regular meeting
21:02:31 but i did actually update the agenda this time!
21:02:37 #link https://wiki.openstack.org/wiki/Meetings/Swift
21:02:50 o/
21:03:04 first up, a couple recently-reported bugs
21:03:15 #topic account-reaper and sharded containers
21:03:28 it looks like they don't work!
21:03:36 #link https://bugs.launchpad.net/swift/+bug/2070397
21:03:37 Bug #2070397 - Account reaper do not reap sharded containers (New)
21:04:39 since the reaper is relying on direct_client instead of internal_client, it gets no objects to delete, tries to delete the container, and gets back a 409 -- next cycle, same deal
21:05:58 this was also brought up on the mailing list, as zaitcev helpfully pointed out to me yesterday
21:06:00 #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/KOJS5G5W24PARPLNQS2D5A6M5F3JRM7V/
21:06:49 That's because I went through exactly this thing with the dark data watcher.
21:07:06 So a simple thing is to copy the code from the watcher and be done.
21:07:29 Maybe some useful refactoring could be done... Or switch both of them to another API... I dunno.
21:08:15 It took so long to engage Alistair's attention on this that I would prefer to just clone the code and forget about it.
21:08:30 yeah, i was wondering if it might make sense to switch to using internal_client... i'm trying to remember why we didn't go that route for the watcher
21:09:12 i think there'd be some complication around avoiding the 410, but surely we could get around that by adding some x-backend-... header flag
21:09:15 Because the watcher runs in the object auditor, and internal_client is a part of the proxy. It needs a pipeline set up, for one. IIRC.
21:10:07 hmm. true enough -- and having a pipeline leaves more room for misconfiguration
21:10:53 zaitcev, had you taken a look at the linked patch yet? i haven't, i'll admit
21:11:21 so is the root cause that the reaper uses direct_client to list objects?
21:11:21 Not really, sorry. It looked faithfully cargo-culted from the watcher.
21:12:08 acoles, yeah, afaik
21:12:53 I'm wondering, even if all the objects are deleted from all the shards, will the container delete succeed before all the shards have been shrunk away?
21:13:36 uh-oh
21:13:39 isn't there this weird property that a container with shards isn't "empty"? although IIRC we have several definitions of "empty"
21:14:00 hrm... i thought we had some special handling when all shards report zero objects, but i don't remember for certain...
21:14:24 a good first step might be for one of us to write a failing probe test, then try the linked patch, and iterate from there
21:14:52 I just remember that we have recurring warnings in prod for containers that cannot be reclaimed because they have shards... but maybe there is a workaround for the reaper 🤔
21:15:09 yeah a probe test is definitely a good idea
21:15:34 that does sound familiar...
21:15:38 can anyone volunteer to get a probe test together?
21:16:47 The v-word scares me. I'm up to my chest in Cinder nowadays.
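To make the root cause above concrete: once a container has sharded, its root database no longer holds object rows, so a plain direct_client listing of the root comes back empty; the objects can only be found by fetching the root's shard ranges and listing each shard. The sketch below is a hypothetical illustration of that, not the reaper's or the dark data watcher's actual code -- it assumes a container ring is already at hand and skips error handling, paging and shard-range state filtering.

    # Hypothetical sketch: list a possibly-sharded container using direct_client.
    from swift.common import direct_client

    def iter_objects(container_ring, account, container):
        part, nodes = container_ring.get_nodes(account, container)
        # Ask the root for shard-range rows instead of object rows, using the
        # same backend header the sharder relies on.
        _headers, shard_ranges = direct_client.direct_get_container(
            nodes[0], part, account, container,
            headers={'X-Backend-Record-Type': 'shard'})
        if not shard_ranges:
            # Unsharded: the root database still holds the object rows.
            _headers, objs = direct_client.direct_get_container(
                nodes[0], part, account, container)
            for obj in objs:
                yield obj
            return
        for sr in shard_ranges:
            # Each shard range names a container in a hidden .shards_* account.
            shard_acct, shard_cont = sr['name'].split('/', 1)
            s_part, s_nodes = container_ring.get_nodes(shard_acct, shard_cont)
            _headers, objs = direct_client.direct_get_container(
                s_nodes[0], s_part, shard_acct, shard_cont)
            for obj in objs:
                yield obj

A probe test along these lines would first shard a container, write objects so they land in the shards, then run the reaper and assert the account actually empties out.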
21:17:10 hehe -- all right, i'll put it on my to-do list and hopefully i can get to it this week
21:17:18 zaitcev: that still leaves your brain available ;-)
21:17:28 next up
21:17:35 #topic busted docker image
21:17:41 #link https://bugs.launchpad.net/swift/+bug/2070029
21:17:42 Bug #2070029 - swift-docker: saio docker alpine uncorrect (New)
21:18:20 sounds like our published docker image isn't actually working correctly
21:19:27 i guess my first question, though, is how much we really want to commit to maintaining it? it seems like a nice-to-have, but we surely ought to have more validation of the image if we're properly maintaining it
21:20:54 does any core dev use it? AFAIK vSAIO is the recommended dev environment
21:21:49 I don't, but that's because we ship our own images in RHOSP. I don't know how to build them, honestly.
21:21:49 I just learned there is a "saio docker" env
21:21:57 I personally use it when developing here. I noticed the docker image was failing, but I thought it was some mistake in my config here
21:22:07 i've got a little single-disk cluster i use it for, but i don't even remember the last time i updated it
21:23:32 I did try to rebuild it by hand in an lxc environment and it still didn't work
21:24:14 thanks for the data point fulecorafa -- fwiw, it seems like swift must not have gotten installed correctly, judging by the bug report
21:25:13 i'll try to dig into it some more -- looks like i was the last one to touch it, in p 853362
21:25:13 https://review.opendev.org/c/openstack/swift/+/853362 - swift - Fix docker image building (MERGED) - 1 patch set
21:25:15 fulecorafa: you may want to try https://github.com/NVIDIA/vagrant-swift-all-in-one which many of us use daily for dev
21:25:48 next up
21:25:53 FWIW I did have a go at adding a docker target for vSAIO, almost got it working
21:26:07 #topic log_ prefix for statsd client configs
21:26:15 #link https://review.opendev.org/c/openstack/swift/+/922518
21:26:15 patch 922518 - swift - statsd: deprecate log_ prefix for options - 3 patch sets
21:26:59 this was split out of https://review.opendev.org/c/openstack/swift/+/919444 and the general desire to separate logging and stats
21:26:59 patch 919444 - swift - Add get_statsd_client function (MERGED) - 13 patch sets
21:28:23 but when i got into proxy-logging, i was reminded of how we already have (at least?) two ways of spelling these configs, and i wanted to make sure that we had some consensus to make a recommendation for how the configs *should* be specified going forward
21:29:59 we already have things like log_statsd_host and access_log_statsd_host, and while i like the idea of moving toward just statsd_host or access_statsd_host, i don't really want to add support for *both*
21:32:55 i want to say that the access_ prefix got added because of confusion over how to do config overrides with PasteDeploy
21:34:34 does anybody have input on what should be our preferred option name? or maybe this is all a giant yak-shave that doesn't really have any hope of getting off the ground until we can get rid of PasteDeploy and make config overrides sensible?
21:35:23 can we think of the differentiating prefix being 'access_log_' rather than just 'access_' and then support 'access_log_statsd_host', 'statsd_host', 'log_statsd_host' in that order of preference?
21:35:52 XKCD_14_standards.jpg
21:35:53 meaning, if you want proxy logging to have a different statsd host you should configure access_log_statsd_host
21:38:47 that could work... it feels a little funny to me, though, like it's "most-specific > most-generic > middle" when i'd expect middle to be in the middle... *shrug*
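For illustration only -- this is not the code under review in patch 922518 -- the "order of preference" suggested at 21:35:23 amounts to a most-specific-wins lookup, where the access_log_ spelling beats the new bare spelling, which in turn beats the deprecated log_ spelling. A minimal sketch with a hypothetical helper name:

    # Illustrative precedence resolver for statsd options read from a conf dict.
    def resolve_statsd_option(conf, name, default=None):
        """Fetch e.g. 'statsd_host', honoring old and proxy-logging spellings."""
        for prefix in ('access_log_', '', 'log_'):
            if prefix + name in conf:
                return conf[prefix + name]
        return default

    conf = {'log_statsd_host': 'old.example.com',
            'statsd_host': 'graphite.example.com'}
    assert resolve_statsd_option(conf, 'statsd_host') == 'graphite.example.com'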
21:40:07 well, we don't have to sort it out right now. please leave thoughts/comments/ideas on the review!
21:40:34 speaking of statsd/logging separation...
21:40:42 #topic LoggerStatsdFacade
21:40:47 #link https://review.opendev.org/c/openstack/swift/+/915483
21:40:47 patch 915483 - swift - class LoggerStatsdFacade - 25 patch sets
21:41:52 I thought of it as "most-specific > generic > deprecated-generic" but, yeah, it's not ideal
21:42:11 nothing too much to report, just wanted to highlight that work continues and there might be some light at the end of that particular tunnel
21:42:49 i think it's getting close
21:44:23 the last two topics, you can catch up on from the agenda -- i don't think there's anything too much to do for them, but they've been hogging a decent bit of my time the last week or two
21:45:00 the entry-points patch chain starting at https://review.opendev.org/c/openstack/swift/+/918365 could use some review if anyone has bandwidth
21:45:00 patch 918365 - swift - Use entry_points for server executables - 4 patch sets
21:45:28 but especially since we've got a new face here, i wanted to leave time for
21:45:33 #topic open discussion
21:45:42 anything else we should bring up this week?
21:45:53 I would like to bring up a topic, if I may
21:46:01 by all means!
21:47:38 I've been studying how to implement multi-storage-policy containers in swift. It's a requirement from work. I was hoping you guys could share your ideas and/or views on a solution for this
21:48:23 multi-storage-policy as in I can have 2 objects in the same bucket but one would be in the normal policy and another in a "glacier" one
21:49:43 symlinks would enable this
21:49:47 fulecorafa, there's definitely been various interest in it over the years -- how are you thinking about objects moving between policies? would you want it to happen automatically after some period of time, or based on user action?
21:50:43 ^^^ I was about to continue that the interesting/challenging part is 'automating' policy placement and migration between policies
21:51:02 From what I've gathered, I think the solution for this would be to use the same basic logic as SLOs: to have a normal bucket and another marked bucket (i.e. bucket+glacier). When uploading a file, given the policy, it would store the actual object in cold
21:51:18 And keep a symlink/manifest of sorts in the original bucket
21:52:19 makes sense enough -- similar to x-amz-storage-class for s3
21:52:37 one thing we have learnt from SLOs and S3 MPUs is that it's not a great idea to have 'extra' 'shadow' +segments buckets visible in the user's namespace.
21:53:13 but we have the capability to maintain 'hidden' buckets (which is the direction native MPU is going on feature/mpu)
21:53:32 yeah, we'd almost certainly want to use the null-namespace stuff introduced with the most-recent iteration on versioning
21:53:37 As for objects moving between policies, I know AWS's API has something more on moving objects, but I haven't quite gotten into it. In my specific use-case, we could just use a client to re-put the object? But I'll have to think more on that
21:54:17 acoles, I remember you commenting on this on the orphan patch I submitted last month.
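For context, the symlink/manifest idea sketched at 21:51-21:53 can already be approximated with today's public APIs: put the bytes into a sibling container created in the cold policy, then leave a zero-byte symlink in the user-visible container. Below is a rough sketch using python-swiftclient; the endpoint, credentials, container names and 'cold-ec' policy name are invented for illustration, it assumes the symlink middleware is in the proxy pipeline, and unlike the proposal it does not hide the cold container from the user's namespace.

    # Rough illustration of "real object in a cold container, symlink in the
    # user-visible container" with python-swiftclient.
    from swiftclient import client as swift

    url, token = swift.get_auth(
        'http://saio:8080/auth/v1.0', 'test:tester', 'testing')

    # Sibling container created in the (hypothetical) cold storage policy.
    swift.put_container(url, token, 'photos+cold',
                        headers={'X-Storage-Policy': 'cold-ec'})

    # Store the real bytes in the cold container...
    with open('holiday.jpg', 'rb') as f:
        swift.put_object(url, token, 'photos+cold', 'holiday.jpg', contents=f)

    # ...and leave a zero-byte symlink in the user-visible container, so a GET
    # of photos/holiday.jpg follows it to the cold copy.
    swift.put_object(url, token, 'photos', 'holiday.jpg', contents=b'',
                     headers={'X-Symlink-Target': 'photos+cold/holiday.jpg'})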
21:54:30 right
21:54:33 fulecorafa, yeah -- i'd expect to be able to do a server-side copy with a new destination policy
21:54:53 I reckon that if we evolve on feature/mpu, we could keep these other marked buckets hidden too, right?
21:55:21 yup, i think that's acoles's plan too
21:55:26 I think there might be a good deal of commonality with native MPU
21:55:57 Yan Xiao proposed openstack/swift master: stats: API for native labeled metrics https://review.opendev.org/c/openstack/swift/+/909882
21:56:30 Should be good to go forward with this plan!
21:56:51 possibly even a degenerate use case "Single Part Upload" :) that uses the manifest/symlink/hidden container logic
21:57:44 i'd only note that things may start to get pretty confusing when you've got, say, a versioned MPU uploaded to one of these mixed-policy containers...
21:57:53 Just one last question: for my specific use-case, we're just working with the S3 API in an older version. Even if we do this, the bucket should not be visible to s3 clients, correct?
21:57:56 fulecorafa, will it be enough that the application moves objects between a normal container and a cold-policy container?
21:58:25 there are probably similar cleanup concerns - what happens to the target object in "cold" when the user object is overwritten? it needs to be cleaned up like orphan segments do
21:59:27 timburke: yeah, whilst I see commonality I'm also wary of overloading the goals of native-MPU. One step at a time...
21:59:40 jianjian, I don't think so, but it would be a first step. Taking AWS S3 as a counterpart, there should be other different behaviours we would need to take into account. As of now, I think a simple 2-policy solution would be a step in the right direction
21:59:45 MPUs + versioning is complex enough for my small brain!
22:00:29 fulecorafa: just curious, if you are able to share the info, do you use object versioning?
22:01:14 acoles: As soon as I started coding I noticed that :). I very much liked the way you did the orphan collector on feature/mpu. I think it would make it very simple to then collect the cold parts
22:02:20 we still have a way to go on finishing the orphan auditor but we're excited about that approach
22:02:24 acoles: yeah, so we do allow for versioning but just for simple, normal-policy files. In the future it may be needed to get support for versioning in other types, but that's not allowed yet
22:02:53 and do your clients use the s3 API mostly?
22:03:20 mostly, yes. We're trying to make it easy to migrate
22:04:41 timburke: Yeah, I think it can become complicated quite fast, but I'm not sure there would be another way. Maybe tinkering with the underlying hashing system? (I mean, the one that actually sets where the file can be found on disk)
22:07:32 do you expect to need separate disk pools for each policy, or would there likely be a high level of overlap? another similar-but-different idea we've had was to have something like an EC policy, but sufficiently small uploads would be stored replicated on the first three assignments
22:08:00 sorry I need to drop. fulecorafa: thanks for coming along and sharing your ideas, hope we can make progress together. Ask for help here and we'll do our best :)
22:08:31 so it'd all use the same policy index and the same disks, but have different storage overheads
22:08:36 acoles: thank you very much for your help and receptivity, alongside the community's!
have a good one
22:09:03 oh yeah, i suppose we're a decent bit over time :-)
22:09:17 timburke: I think one of the main uses would be to have different disk pools
22:09:22 ack
22:10:14 all right, i should wrap this up -- but i'll add mixed-policy containers to the agenda for next week, fulecorafa
22:10:27 thank you all for coming, and thank you for working on swift!
22:10:34 #endmeeting