21:01:02 <notmyname> #startmeeting swift
21:01:03 <openstack> Meeting started Wed Nov  7 21:01:02 2018 UTC and is due to finish in 60 minutes.  The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:06 <openstack> The meeting name has been set to 'swift'
21:01:09 <notmyname> who's here for the swift team meeting?
21:01:11 <timburke> o/
21:01:14 <kota_> hello
21:01:21 <rledisez> hi o/
21:01:38 <mattoliverau> O/
21:02:15 <notmyname> welcome
21:02:32 <notmyname> not sure if clayg or tdasliva or zaitcev are around...
21:02:45 <clayg> thanks for the ping
21:02:47 <tdasilva> hi
21:02:55 <notmyname> welcome :-)
21:03:15 <notmyname> the openstack summit in berlin is next week. I'll be there. mattoliverau and kota_ will be there too, right?
21:03:26 <kota_> right
21:03:55 <kota_> I'll be there from Monday, then fly out on Friday afternoon
21:04:05 <notmyname> ok
21:04:05 <mattoliverau> nope, I wont be there :(
21:04:10 <notmyname> ah, ok
21:04:25 <notmyname> I'll be there on tuesday and i fly out on friday
21:04:26 <mattoliverau> too soon for the baby.
21:04:33 <notmyname> i understand :-)
21:04:44 <notmyname> I believe cschwede will be there :-)
21:04:44 <kota_> got it
21:04:49 <notmyname> oh, and rledisez, right?
21:04:56 <rledisez> notmyname: yes, i'll be there
21:05:08 <notmyname> rledisez: are you bringing anyone with you?
21:05:37 <rledisez> yes, after me, the oldest member of the swift team at OVH
21:05:52 <rledisez> i mean, he's not old, you got me i think ;)
21:06:04 <notmyname> :-)
21:06:28 <notmyname> ok, logistics wise, for this meeting...
21:06:45 <notmyname> next week, we should cancel because of the summit
21:07:03 <notmyname> week after that is the US thanksgiving week. I'll leave it open whether we have that meeting or not
21:07:18 <notmyname> week after that, i'll be on a sales trip and won't be able to host the meeting
21:07:28 <notmyname> and that takes us to december
21:07:35 <kota_> wow
21:07:42 <mattoliverau> wow, the end of year is coming up quick
21:08:08 <notmyname> I know we haven't had a lot of major stuff to discuss during this meeting, but I like having this meeting as a place where we all are online at the same time and can at least say hi and hear what's going on from others
21:08:29 <mattoliverau> +1
21:08:40 <kota_> +1
21:09:16 <notmyname> I don't know what that means, given the logistics of the next few weeks, but that's my opinion on it :-)
21:09:58 <notmyname> but for today, let's talk about some of the ongoing work :-)
21:10:17 <mattoliverau> well let's skip next week, then take it as it comes. I'm happy to run a thanksgiving one if need be, if we just want a catchup/progress meeting.
21:10:29 <notmyname> mattoliverau: awesome. that sounds great. thanks
21:10:30 <tdasilva> +1
21:10:58 <notmyname> s3api patches look nearly done!
21:11:07 <kota_> thanks mattoliverau
21:11:22 <notmyname> https://review.openstack.org/#/c/592231/ <-- tdasilva wants kota_ to look at this for a +A
21:11:23 <patchbot> patch 592231 - swift - s3api: Include '-' in S3 ETags of normal SLOs - 4 patch sets
21:11:47 <notmyname> https://review.openstack.org/#/c/575838/ <-- kota and tdasilva have both +1'd it, but there's a request for timburke to squash another patch
21:11:47 <patchbot> patch 575838 - swift - Listing of versioned objects when versioning is no... - 3 patch sets
21:12:05 <notmyname> and https://review.openstack.org/#/c/575818/ is the only one without any reviews
21:12:05 <patchbot> patch 575818 - swift - Support long-running multipart uploads - 6 patch sets
21:12:12 <kota_> It's on my radar, I'll try to get to it this week (or in Berlin)
21:12:23 <kota_> speaking about p 592231
21:12:23 <patchbot> https://review.openstack.org/#/c/592231/ - swift - s3api: Include '-' in S3 ETags of normal SLOs - 4 patch sets
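[context: the '-' in that patch title refers to S3's multipart-style ETag format, which is conventionally the MD5 of the concatenated binary part digests followed by '-<part count>'. A minimal Python sketch of that computation; the helper name is ours, not anything from the patch:

    import binascii
    import hashlib

    def s3_multipart_etag(part_md5_hexdigests):
        # MD5 the concatenation of the parts' raw (binary) digests,
        # then append '-<number of parts>', e.g. '...-2' for two parts.
        combined = b''.join(binascii.unhexlify(h)
                            for h in part_md5_hexdigests)
        return '%s-%d' % (hashlib.md5(combined).hexdigest(),
                          len(part_md5_hexdigests))
]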
21:12:25 <tdasilva> if timburke is ok with it i'd be happy to send a new patchset for 575838
21:12:37 <timburke> yeah, i've been meaning to squash that in and fix up the commit message...
21:12:38 <tdasilva> kota_: thanks
21:13:10 <notmyname> when these s3api patches land, I want to cut a swift release (2.20)
21:13:12 <tdasilva> timburke: ah ok, you got it then!
21:13:51 <clayg> aww yeah, best swift evar
21:13:58 <tdasilva> notmyname: are we also waiting on p 575818?
21:13:59 <patchbot> https://review.openstack.org/#/c/575818/ - swift - Support long-running multipart uploads - 6 patch sets
21:14:00 <notmyname> clayg: you know it!
21:14:23 <timburke> kota_: you might also be a good person to review https://review.openstack.org/#/c/613452/ -- i want to try to pull the auth bits out of S3Request...
21:14:23 <patchbot> patch 613452 - swift - s3api: Move authenticator logic to separate module - 4 patch sets
21:14:25 <notmyname> tdasilva: yeah, I'd prefer all of the listed s3api patches to be in for a release
21:14:39 <tdasilva> ack
21:15:16 <kota_> timburke: whew, it looks like a pretty large refactoring...
21:16:39 <timburke> yeah... there's a bit of prep work for trying to get an STS-like endpoint, which has its own, separate way of doing signatures :-(
21:17:00 <timburke> i think the v4 stuff mostly Just Works, but v2 is ... different ...
21:17:01 <kota_> what's STS?
21:17:22 <notmyname> security token service?
21:17:31 <timburke> https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html -- basically, a way of issuing temporary credentials
21:17:33 <kota_> oic
21:18:38 <kota_> sounds like keystone :P
21:19:14 <timburke> yeah, the request made me think of https://review.openstack.org/#/c/603529/ a bit, too -- a little different, though
21:19:14 <patchbot> patch 603529 - swift - s3 secret caching - 10 patch sets
21:19:42 <notmyname> rledisez: how's losf? anything to update us on? any new patches or learning?
21:20:38 <rledisez> notmyname: not much, alecuyer has been thinking/working on extents optimization in XFS. I just made a small update on an SSYNC patch to add concurrency
21:21:07 <rledisez> https://review.openstack.org/#/c/613987/
21:21:08 <patchbot> patch 613987 - swift - SSYNC: enable multiple SSYNC connections per job - 2 patch sets
21:21:35 <rledisez> (it's still missing some tests)
21:21:46 <notmyname> cool. sounds like something that could be useful beyond losf too
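[context: a rough illustration of the "multiple SSYNC connections per job" idea, not the actual patch. Swift daemons use eventlet green threads, so one natural shape is fanning a job's work out over a small GreenPool; run_ssync and work_chunks below are placeholders, not Swift APIs:

    import eventlet

    def sync_job_concurrently(job, work_chunks, run_ssync, connections=2):
        # Fan one sync job out over several concurrent SSYNC connections.
        # `work_chunks` is the job's work split into pieces, and
        # `run_ssync(job, chunk)` drives one SSYNC connection for a piece.
        pool = eventlet.GreenPool(connections)
        threads = [pool.spawn(run_ssync, job, chunk) for chunk in work_chunks]
        pool.waitall()
        # Only count the job as synced if every connection succeeded.
        return all(t.wait() for t in threads)
]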
21:22:31 <notmyname> oh... speaking of ssync, timburke's been looking at something interesting
21:22:48 <notmyname> so we've got this "bug" https://bugs.launchpad.net/swift/+bug/1510342
21:22:48 <openstack> Launchpad bug 1510342 in OpenStack Object Storage (swift) "Reconstructor does not restore a fragment to a handoff node" [Low,Confirmed] - Assigned to Bill Huber (wbhuber)
21:23:06 <notmyname> "bug" because we intentionally wrote it that way to start with, but it should probably be changed
21:23:14 <notmyname> and timburke's been thinking about how to fix it
21:23:19 <timburke> i really want to track last-sync time, and be able to reconstruct to a handoff if one of the neighbors has been responding 507 for a while
21:23:54 <notmyname> timburke: did you decide on the best place to store it? or should we discuss that here to get gut reactions from others?
21:24:07 <kota_> that's an interesting idea to avoid races with handoff writes, maybe?
21:25:18 <mattoliverau> yeah, where would you store that? on every frag... and then do you need a quorum, or go with the latest timestamp you find?
21:25:19 <timburke> still not sure. either at the suffix or all the way down at the diskfile, but maybe both? *shrug*
21:25:31 <rledisez> kota_: side note related to SSYNC and race conditions -> https://review.openstack.org/#/c/611614/
21:25:31 <patchbot> patch 611614 - swift - Fix SSYNC concurrency on partition - 4 patch sets
21:26:19 <notmyname> rledisez: I think you or alex should probably pay attention to the bug I linked and chat with timburke as he works on a patch. it probably affects ovh a bit
21:26:28 <kota_> rledisez: thx
21:27:09 <rledisez> notmyname: yeah, i'll read the bug report carefully
21:27:12 <timburke> mattoliverau: idea would be that each frag would track its last-sync with its neighbors independently
21:27:56 <timburke> we could probably safely include the source when pushing a new frag to some remote, and new writes from the proxy ought to set it appropriately, but i think those could be later optimizations
21:28:18 <rledisez> timburke: is it something that would travel with a partition (during a rebalance)? or do you assume the tracking restarts from zero after a rebalance?
21:28:20 <mattoliverau> ahh ok, interesting
21:28:58 <timburke> big thing is, i want a disk that gets unmounted but not removed from the ring to not (hugely) negatively affect the durability of objects that might have a frag on that disk
21:30:23 <timburke> rledisez: seems appropriate for it to travel with the part -- might even get us part of the way toward some of the ideas for tsync? like, we might be able to prioritize reverts following a rebalance with this extra info? not sure yet; still need to play with it
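[context: a toy sketch of the per-frag bookkeeping being discussed, purely illustrative. The function names, the JSON file, and the one-week threshold are made up, and where the data would really live (suffix vs. diskfile) is exactly what's still undecided:

    import json
    import time

    SYNC_GRACE = 7 * 24 * 3600  # hypothetical: a week without a sync

    def record_sync(state_path, neighbor_index, timestamp=None):
        # Remember when this frag last successfully synced with a neighbor.
        try:
            with open(state_path) as f:
                state = json.load(f)
        except (IOError, ValueError):
            state = {}
        state[str(neighbor_index)] = timestamp or time.time()
        with open(state_path, 'w') as f:
            json.dump(state, f)

    def should_rebuild_to_handoff(state_path, neighbor_index, now=None):
        # True once a neighbor has gone unsynced too long, e.g. because
        # its disk keeps answering 507 Insufficient Storage.
        now = now or time.time()
        try:
            with open(state_path) as f:
                state = json.load(f)
        except (IOError, ValueError):
            return False  # no history yet; don't start rebuilding blindly
        return now - state.get(str(neighbor_index), now) > SYNC_GRACE
]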
21:30:46 <rledisez> timburke: it sounds reasonable, but are we trying to address the negligence of the operator?
21:31:20 <notmyname> rledisez: lol @ "but can this patch fix operators that aren't doing smart things?"
21:31:30 <timburke> rledisez: yeah -- i've got some customers that really don't like to have to think about their clusters
21:31:45 <kota_> lol
21:31:57 <rledisez> ok, i guess i can stop worrying about my production now, you got it all ;)
21:32:25 <timburke> "i can just check in like once a month and replace drives, right?" while not thinking much about how many failures can pile up when you've got 2k, 3k drives in your cluster...
21:33:11 <tdasilva> or simply can't afford a team the size of a public provider's; smaller players need systems that are a bit smarter, IMHO
21:33:14 <mattoliverau> rledisez: lol
21:33:50 <notmyname> tdasilva: oh, ok, I suppose we can make it better for people other than rledisez :-)
21:34:38 <notmyname> ok, what else do we need to bring up this week?
21:34:45 <notmyname> does anyone else have a topic to mention?
21:36:18 <notmyname> ok, I guess not then. :-)
21:36:21 <notmyname> thanks for coming today
21:36:28 <notmyname> and thank you for your work on swift
21:36:32 <notmyname> #endmeeting