21:01:32 <timburke> #startmeeting swift
21:01:33 <openstack> Meeting started Wed Dec 18 21:01:32 2019 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:34 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:36 <openstack> The meeting name has been set to 'swift'
21:01:54 <timburke> who's here for the swift meeting?
21:02:07 <rledisez> o/
21:03:45 <timburke> quiet. maybe this'll be a fast one
21:04:02 <timburke> as always, agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:04:19 <timburke> #topic next meeting
21:04:23 <tdasilva> o/
21:04:40 <timburke> just a reminder: no meeting next week (Dec 25) or the week after (Jan 1)
21:04:53 <timburke> so next meeting will be Jan 8
21:05:19 <tdasilva> ack
21:05:55 <timburke> that's all i've got, really -- on to updates!
21:06:03 <timburke> #topic versioning
21:06:09 <mattoliverau> o/ (sorry I'm late)
21:06:14 <timburke> tdasilva, how's it going?
21:06:19 <timburke> mattoliverau, no worries :-)
21:07:10 <timburke> i can also talk about what *i've* found as a reviewer ;-)
21:07:35 <timburke> i've been poking at getting a probe test for sharding a new-style versioned container
21:07:41 <tdasilva> we are continuing to make good progress I think. timburke left comments on the swift versioning work, so we are working through some of those
21:07:50 <tdasilva> sorry, i type slow
21:08:06 <timburke> it's still early for you ;-)
21:08:30 <timburke> (in particular, i'm looking to shard the backing, reserved-name container)
21:08:34 <tdasilva> just wanted to add that we added some todo items to the bottom of the etherpad: https://etherpad.openstack.org/p/swift-object-versioning
21:09:06 <tdasilva> kind of summarizes the last few items we are currently aware of and working on
21:09:51 <timburke> sounds good
21:10:32 <timburke> any other input we need from people, or is it mostly a matter of cranking through the remaining issues?
21:10:34 <tdasilva> so yeah, we are pushing it hard to get it over the finish line soon
21:10:42 <clayg> I’m working on a version of the brain splitter that can use internal client so I can have reconciler probe tests for the null namespace.
21:10:58 <timburke> clayg, *i* was thinking i need that, too!
21:11:38 <clayg> Oh. Well today’s been crazy. Maybe tomorrow.
21:12:12 <timburke> i'll see how far i can get; throw up whatever i've got at EOD, anyway
21:13:10 <timburke> am i right in thinking that we know what needs to happen, and there aren't really any decisions we need to make regarding it?
21:13:26 <tdasilva> oh
21:13:46 <tdasilva> i think we said last week we would discuss again about `enabled by default` ?
21:13:52 <tdasilva> or am i confused?
21:15:26 <timburke> sorry, i'd forgotten. i think i'm of the opinion that i *wouldn't* recommend enabling it by default yet (for upstream, anyway)
21:15:45 <tdasilva> actually, yeah, nevermind: "anyway, something to think about. i don't think we're likely to enable-by-default before the next time we all see each other face-to-face, anyway ;-)"
21:16:10 <timburke> maybe i'll feel differently once i've got probe tests for sharding/reconciler/expirer ;-)
21:16:19 <tdasilva> especially given the probe tests ...right!
21:16:53 <timburke> sounds like we've got what we need to keep moving
21:17:02 <timburke> #topic profiling/md5 optimization
21:17:21 <timburke> rledisez, how's it going?
21:18:13 <rledisez> well, I didn't do much on that. I've mostly been thinking about how I would implement it. I don't want to redo the whole world, so i'll try to be smart about integrating it. but no real code for now
21:18:46 <timburke> 👍
21:19:17 <rledisez> the different combinations (crypto/replica, crypto/EC, EC, replica) make it a bit hard to do something really optimized (in the way it's designed). i'm thinking about that too
21:20:34 <rledisez> that's all for me about that
21:21:42 <timburke> i wonder how crazy it'd be to always use footers and have someone pretty far left (like, gatekeeper or something) be responsible for validating the client data and storing the etag... then have everything else punt to a crc or something lighter-weight...
21:22:20 <rledisez> it's exactly what i'm building. I wrote it that way in the etherpad.
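
To make the idea above concrete, here is a toy sketch of the split being discussed: only the leftmost hop pays for MD5 so the client-facing ETag can be validated and recorded, while later hops fall back to a cheaper check such as CRC32. This is purely illustrative; the function names are made up and it is not the design written up in the etherpad.

    import hashlib
    import zlib

    def leftmost_validation(chunks):
        # Only the first hop (something gatekeeper-adjacent) computes the
        # full MD5 so the client-facing ETag can be validated and stored.
        md5 = hashlib.md5()
        for chunk in chunks:
            md5.update(chunk)
        return md5.hexdigest()

    def downstream_check(chunks):
        # Everything further right only needs a cheap integrity check,
        # e.g. CRC32, which costs much less per byte than MD5.
        crc = 0
        for chunk in chunks:
            crc = zlib.crc32(chunk, crc)
        return format(crc & 0xffffffff, '08x')
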
21:22:48 <timburke> we need more hours in the day! at least, that's usually *my* problem ;-)
21:23:10 <timburke> #topic RFC-compliant etags
21:23:56 <timburke> so i've got a patch to enable it cluster-wide, but it sounds like rledisez might have a separate middleware that would allow finer-grained control...
21:24:30 <timburke> i haven't seen that patch yet, though ;-)
21:24:30 <rledisez> timburke: I totally forgot to push it. i'll set a reminder for tomorrow
21:25:12 <timburke> thanks! it doesn't have to be perfect -- i'd be happy to help polish docs or tests or what-have-you :D
21:25:33 <rledisez> it's "run-in-prod" polished, no more ;)
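
For readers unfamiliar with the issue: Swift historically returns the bare MD5 hex digest as the ETag, while RFC 7232 expects a quoted string. The sketch below is not rledisez's middleware, just a minimal illustration of the response-side half of the idea; a real version would also need to handle If-Match/If-None-Match on incoming requests plus whatever finer-grained controls are wanted.

    class QuotedEtagMiddleware(object):
        # Illustrative only: wrap bare ETag values in double quotes on the
        # way out, per RFC 7232.
        def __init__(self, app):
            self.app = app

        def __call__(self, env, start_response):
            def quoting_start_response(status, headers, exc_info=None):
                fixed = []
                for name, value in headers:
                    if name.lower() == 'etag' and not value.startswith('"'):
                        value = '"%s"' % value
                    fixed.append((name, value))
                return start_response(status, fixed, exc_info)
            return self.app(env, quoting_start_response)
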
21:25:57 <timburke> i just had one other thing i wanted to draw attention to, this time with a little more time for discussion :-)
21:26:05 <timburke> #topic ranged SLO reads
21:26:28 <timburke> so we had a customer complaining about latency variations
21:27:01 <timburke> as i was helping look into it, i noticed that ranged SLO reads have this funny request pattern:
21:27:25 <timburke> proxy sends the range request down to the object server
21:27:49 <timburke> object server just has a manifest, so it responds 416 (that range is way off the end of the thing!)
21:27:52 <zaitcev> I presume this is replicated, but go on.
21:28:30 <timburke> (good point; yes, though the range manipulations we do for EC mean you get much the same behavior)
21:28:45 <timburke> proxy says, well, no, actually i want that *whole manifest*!
21:28:54 <timburke> object server sends it back
21:29:08 <timburke> proxy finds the appropriate segment, requests *that*
21:29:22 <timburke> object server sends it back, and we can actually start serving users
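
Sketched as standalone code, the round trips just described look roughly like this. `backend_get` is a hypothetical stand-in for the proxy's GET to the object server; the real logic lives in the SLO middleware, not in anything shaped like this.

    import json

    def ranged_slo_read(backend_get, path, start, end):
        # 1. pass the client's Range straight through to the object server
        resp = backend_get(path, headers={'Range': 'bytes=%d-%d' % (start, end)})
        if resp.status == 416:
            # 2. the range overshot the (tiny) manifest, so refetch it whole
            resp = backend_get(path, headers={})
        segments = json.loads(resp.body)
        # 3. work out which segment(s) cover [start, end] and fetch those
        #    with still more backend GETs before any bytes reach the client
        return segments
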
21:30:02 <timburke> i set out to get rid of that 416-refetch behavior and just have the object-server send the whole manifest from the get-go
21:30:48 <zaitcev> Wait, how does the proxy know that it's an SLO object to begin with?
21:30:59 <zaitcev> Of course it asks for a range first.
21:31:16 <timburke> i think i designed my solution fairly smartly; it adds a new, not-slo-specific header to tell the object server "hey, if you see any of these pieces of metadata, ignore the range header i sent"
21:31:28 <zaitcev> oh, nice
21:31:44 <timburke> then has slo add X-Static-Large-Object to that list
21:32:01 <zaitcev> or.... you could _always_ ignore range for manifests
21:32:58 <timburke> i was thinking that i don't really want object-server to need to know *more* about large objects -- we've already seen how much trouble it is that they have to know about swift_bytes, for example
21:33:29 <timburke> i was mostly inspired by the x-backend-etag-is-at behavior we already have
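
A dict-based sketch of that mechanism (the header name and helper names here are chosen only for illustration; the actual patch may spell them differently): SLO lists the metadata it cares about in a backend header, and the object server drops the Range header whenever the stored object carries any of that metadata.

    IGNORE_RANGE_HEADER = 'X-Backend-Ignore-Range-If-Metadata-Present'

    def slo_mark_get_request(req_headers):
        # Middleware side: register the metadata that should disable Range.
        names = [v for v in [req_headers.get(IGNORE_RANGE_HEADER)] if v]
        names.append('X-Static-Large-Object')
        req_headers[IGNORE_RANGE_HEADER] = ', '.join(names)

    def maybe_drop_range(req_headers, obj_metadata):
        # Object-server side: if the on-disk object has any of the listed
        # metadata (e.g. it's really an SLO manifest), ignore Range entirely.
        # (Header-name case normalization is glossed over in this sketch.)
        listed = {n.strip() for n in
                  req_headers.get(IGNORE_RANGE_HEADER, '').split(',')
                  if n.strip()}
        if listed & set(obj_metadata):
            req_headers.pop('Range', None)
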
21:34:47 <timburke> there is a wart, though, that i'm not sure i like: the EC obj controller (currently) pops off the Range header so that swob behaves itself when it makes the response
21:36:28 <timburke> so i guess i'm wondering: has anyone else seen this behavior or had users complain about it?
21:36:37 <timburke> does this design seem reasonable?
21:37:03 <timburke> and are we ok with that wart? (even in this patch, it means that SLO has to grab the range itself before it calls down to the app, just so it can replace it later...)
21:37:23 <zaitcev> I did not notice, but that's because I barely have any SLOs at all. It sounds completely plausible, for the a-priori-information reasons I just mentioned.
21:38:32 <mattoliverau> I like the approach: a generic header that middleware can add to, smart. I guess I need to take a look to understand the wart some more
21:39:23 <timburke> i tried to capture it in the commit message; you can also see the consequence of it in https://review.opendev.org/#/c/697739/3/swift/common/middleware/slo.py
21:40:12 <mattoliverau> Cool, I'll check it out
21:41:18 <timburke> thanks! i'll see about getting that save/restore logic out to the EC controller, too... i feel like that should be doable, and it'd remove my only real concern
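
For context, the save/restore dance being referred to is essentially this, sketched with plain dicts and made-up names: the middleware has to stash the client's Range before calling down to the app, because the EC object controller may have popped it off the request by the time the manifest response comes back.

    def get_manifest_preserving_range(req_headers, fetch_manifest):
        # Stash the client's Range before calling down the pipeline...
        saved_range = req_headers.get('Range')
        resp = fetch_manifest()   # ...during which Range may get popped off
        if saved_range is not None:
            # ...then put it back so it can be applied to the segments later.
            req_headers['Range'] = saved_range
        return resp
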
21:41:27 <timburke> that's all i've got
21:41:32 <timburke> #topic open discussion
21:41:58 <timburke> what else would people like to bring up?
21:43:50 <timburke> all right, looks like we can let mattoliverau, tdasilva, and seongsoocho grab some breakfast ;-)
21:44:02 <timburke> thank you all for coming, and thank you for working on swift!
21:44:12 <timburke> talk to you all next year :-)
21:44:19 <timburke> #endmeeting