21:00:09 #startmeeting swift
21:00:10 Meeting started Wed Dec 11 21:00:09 2019 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:14 The meeting name has been set to 'swift'
21:00:19 who's here for the swift meeting?
21:00:20 o/
21:00:26 o/
21:00:34 o/
21:00:40 o/
21:00:59 o/
21:01:23 hi everybody!
21:01:27 agenda's at
21:01:30 #link https://wiki.openstack.org/wiki/Meetings/Swift
21:01:43 #topic V release naming
21:01:55 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011477.html
21:02:01 polling's open!
21:02:28 help us name our next release :-)
21:02:36 #link https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_13ccd49b66cfd1b4&akey=39d9234da798e6a3
21:02:47 note that "Vidette" was accidentally included twice -- you might want to ensure that both entries have the same ranking when submitting to make life easier for the election officials
21:03:01 lol
21:03:44 not much more than that; mainly just wanted to make sure everyone was aware and could let their voice be heard :-)
21:03:59 #topic Keystone credentials API
21:04:43 in light of the interest i heard in Shanghai about getting credential API support into swiftclient, i figured i ought to mention a recent security issue...
21:04:48 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011529.html
21:05:53 FWIW, we *do* use the credentials API for secret caching in s3token -- i haven't yet checked to see what sorts of recommendations we may need to make to ensure that continues working...
21:06:03 see also...
21:06:05 #link https://github.com/openstack/swift/commit/b0aea9360
21:07:09 more broadly, this very much still seems like a thing swiftclient should support, though i don't know of anyone actively trying to implement it
21:07:54 but i've also heard from some of *my* customers that they'd be interested, so... maybe i'll end up with some bandwidth for it?
21:09:13 again, wanted to raise awareness as much as anything -- if you *do* have customers using keystone application credentials, you might want to get them patched sooner rather than later
21:09:43 #topic end of year meetings
21:10:20 with the holidays coming up, i figured we ought to look at which scheduled meetings it makes sense to keep
21:10:46 as things are, we'd hit both christmas (Dec 25) and new years (Jan 1)
21:10:54 makes sense, and yeah the 25th and 1st make sense
21:11:02 but i don't think it makes any sense to actually hold meetings on those days
21:11:20 hit both the major holidays, wtg :P
21:11:38 anyone feel otherwise? *i* certainly won't be able to host, so any dissenters would be on the hook to chair the meeting ;-)
21:11:40 * rledisez agrees to skip both of them
21:11:57 +1
21:12:41 i *think* we've still got enough going on that it makes sense to keep our meeting for next week, though
21:13:06 plus, it'll give me another opportunity to mention the skipped meetings ;-)
21:13:39 (sorry to kota_ (and maybe others?) about the late notice for skipping the meeting a couple weeks ago)
21:15:08 all right, seems like we're all on the same page :-)
21:15:42 i'll also make sure i update the "next meeting" date on the wiki right after the meeting next week
21:16:01 on to updates!
21:16:11 #topic versioning
21:16:21 clayg, tdasilva -- how's it going?
21:17:22 I applied some changes to the swift versioning patch early in the week based on reviews, now currently working on a container-sync patch
21:17:51 so that container-sync will skip versioned containers (until we have a better solution for it)
21:19:15 the s3 versioning is also in good shape, i noticed timburke put up a change recently to make sure s3api behaves correctly if versioning is not enabled i think, still need to look at that
21:19:38 my hope is to see these patches merged by next week (fingers crossed)
21:20:09 (sounds like an action item for all of us: think about what you'd like container-sync to do with new-style versioned containers)
21:20:48 i did! pretty sure most everything works like it did before 🤞
21:21:18 i think that's all i got, any questions?
21:22:23 container-sync patch without tests: https://review.opendev.org/#/c/698139/
21:22:24 patch 698139 - swift - WIP: prevent cont-sync when versioning enabled - 2 patch sets
21:22:35 in case you want to see the direction it is going
21:23:04 if i remember right, there were also some questions about whether we'd want the new versioning enabled by default, or maybe whether we would want to *switch it* to be on-by-default in the near-ish future -- i'd certainly be interested in other people's thoughts on *that* question
21:24:08 need to think about that... but it is a different api, and we want people to adopt it. Besides, who wouldn't want to use versioning ;)
21:25:01 yeah, i think there are two sides to consider. one is what mattoliverau said + s3api versioning depends on it. OTOH, it does make use of the new reserved name API which is very very new
21:25:40 test it with fire :P (but yeah that is the other side)
21:26:15 what could go wrong
21:27:01 it sure *feels like* something we'd want to have on-by-default eventually -- seems (to me, anyway) in all ways better than the versioning scheme we had before. so i guess, what sort of proof-points would people like to see before we move to on-by-default?
21:28:14 make sure the api works as intended (ie, we don't go creating some objects in the null namespace that can't be deleted through some client-facing api), make sure we play well with at least container-sync...
21:28:19 anything else?
21:28:53 probably look at expirer/reconciler interactions...
21:29:30 make sure sharding can shard null containers. I assume it can communicate with nulls, ie making null shards
21:29:45 I assume this may have been already tested
21:30:03 but also assume versioned containers will potentially grow large over time
21:30:52 i think we looked at some probe tests for that... let me look
21:30:58 * timburke is not sure that's a good assumption...
21:32:01 heh, maybe not
21:32:37 I'll give it a go then :)
21:32:46 i suppose something like AWS's lifecycle management api would certainly be nice to help limit the growth of old versions... but since users would have to opt in to enabling versioning, i'm not sure it should be a hard requirement that we have that before making it available by default...
21:33:39 anyway, something to think about. i don't think we're likely to enable-by-default before the next time we all see each other face-to-face, anyway ;-)
21:33:47 #topic lots of files
21:34:30 rledisez, is there much new going on with this investigation?
21:35:24 (i was debating about whether to keep it on the agenda or drop it for now and let you add it back when there's new information to share)
21:35:25 so alecuyer is not here today. he's currently working on some other ovh related things, for the next 3 or 4 weeks. next step for him will be the battle zfs vs leveldb. so nothing really new for now
21:35:42 👍
21:36:00 ic
21:36:03 i think you can drop it for now, we'll see in january when things start moving again
21:36:13 sounds good
21:36:26 #topic profiling
21:36:41 i saw a watchdog patch recently...
21:36:47 #link https://review.opendev.org/#/c/697653/
21:36:48 patch 697653 - swift - Replace all "with Chunk*Timeout" by a watchdog - 5 patch sets
21:36:55 that's the one :-)
21:37:15 there are some numbers in a comment that give an idea of the improvement
21:38:17 after that, i'll have another patch that brings +30% download speed on EC (by removing greenthread/queue in reading fragments from object-servers), but I have weird issues that, I'm now convinced, are related to greenthread scheduling & co… i don't know how i'll handle that
21:38:28 (weird issues in unit tests only)
21:39:08 wow, nice!
21:39:34 Once I finish reviewing the versions patch, I look forward to taking these for a spin :)
21:39:47 after that i'll have some small patches that bring small improvements (caching some stuff, etc…)
21:40:03 but the bug win is in the patch linked here and the next one
21:40:07 *big
21:40:27 sounds great :D
21:40:52 does anyone have any questions about the watchdog patch? i must admit, i haven't looked at it yet
21:41:35 tdasilva: put some in the reviews. thx for looking at it, I hope i answered them :)
21:42:24 well, with three more items, maybe i'll keep it moving then ;-)
21:42:32 #topic MD5 sums
21:42:36 rledisez: I did a quick scan of your comments, but TBH I need to look closer to make sure I understand it well
21:42:45 #link https://etherpad.openstack.org/p/swift-md5-optimisation
21:42:48 but thanks for responding quickly
21:43:43 that's a first draft of my thoughts on MD5. it's very preliminary work, but i'm interested in any feedback
21:45:31 interesting... i don't think i've heard of XXH before...
21:46:44 rledisez: fwiw, i think azure is using crc64...
21:47:24 tdasilva: good to know. i was wondering if crc32 is enough given the default max object size is 5g
21:47:56 i think i rather like the idea of breaking up the client-proxy validation (which is an API that will be hard to break) from proxy-object validation (where we have a lot more freedom)
21:48:02 tdasilva: i did some research and it seems it could be good enough (but i don't have a solid argument for that)
21:48:40 rledisez: yeah, i'm not sure either, their object sizes are even smaller IIUC
21:48:53 rledisez: what's 'MD5 + CRC32c * 3'?
21:49:37 tdasilva: it's a simulation where we would do MD5 once (for the etag) then 3 times crc32c for crypto/ec (like it's currently done with MD5)
21:49:48 timburke: can you reformulate a bit, i'm not sure i totally got you
21:50:11 you prefer to break client-proxy than proxy-object?
21:50:14 rledisez: but we then ended up with something worse than what we have now?
21:50:55 tdasilva: no, right now we have MD5*4 (1.64Gbps). If we had MD5+CRC32c*3, we would have 5.65Gbps => so better :)
21:51:12 so we care about multiple hops: client to proxy server, proxy server to object server. part of the problem for EC is that the data sent to each object server has to be hashed separately, since we can't just use a client-provided etag
21:51:35 similar arguments with encryption on triple-replica, though the overhead isn't quite as bad
21:54:01 I guess it's just a shame we have the md5 contract with the client (the etag). other than that we can separate things.
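As a rough illustration of the 'MD5 + CRC32c * 3' idea under discussion: the client-facing MD5 etag is computed once, while each backend copy gets a cheaper running checksum. This is a hypothetical sketch, not Swift's actual code; `zlib.crc32` stands in for crc32c, which would need a third-party library, and all names are made up for illustration.

```python
# Sketch of the split being discussed: MD5 once for the client-facing
# etag contract, plus a cheap CRC per backend copy instead of MD5 x 3.
# zlib.crc32 stands in for crc32c here; names are illustrative only.
import hashlib
import zlib

def checksum_upload(chunks, n_copies=3):
    """Return (md5_etag, [per-copy crc hex digests]) for an upload."""
    etag = hashlib.md5()
    copy_crcs = [0] * n_copies         # one running CRC per backend copy
    for chunk in chunks:
        etag.update(chunk)             # MD5 once, for the client contract
        for i in range(n_copies):      # cheap integrity check per hop
            copy_crcs[i] = zlib.crc32(chunk, copy_crcs[i])
    return etag.hexdigest(), ['%08x' % crc for crc in copy_crcs]

etag, crcs = checksum_upload([b'hello ', b'world'])
```

For plain replication the three CRCs come out identical (same bytes to each copy); the EC case is where the per-hop hashing really differs, since each fragment archive is distinct data.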
which is what timburke is saying in a nutshell I think :)
21:54:21 having something in the proxy be responsible for ensuring data integrity with the client while some sort of lighter-weight integrity check for the rest of the pipeline and backend requests seems like a solid win
21:54:35 +1
21:54:48 +1
21:54:59 +1, we are on the same page :)
21:55:49 the upgrade may get interesting -- but presumably we could just have an option that you don't switch on until you've upgraded all the backend servers
21:56:32 if an op ends up "providing" an alg or not, I wonder if we should start returning etag (md5) and etag-alg in the hopes of one day deprecating the first ;)
21:56:35 yeah, that's probably the right solution. but if you're using ssync, you might degrade nicely to moving data with MD5 I guess? but rsync would be an issue
21:56:55 mattoliverau: totally!
21:56:58 probably we need to add some sort of Accept: headers for the change between proxy and object
21:57:18 all right, running low on time. there were just a couple patches i wanted to call attention to
21:57:38 #topic ranged requests against SLOs
21:57:42 #link https://review.opendev.org/#/c/697739/
21:57:42 patch 697739 - swift - Have slo tell the object-server that it wants whol... - 3 patch sets
21:58:30 introduces a new header, X-Backend-Ignore-Range-If-Metadata-Present, that lets SLO tell object servers that it wants the whole object if there's an X-Static-Large-Object header
21:58:54 otherwise, ranged SLO reads tend to look like:
21:59:20 proxy issues GET with Range which 416s (because manifest is way smaller than large object)
21:59:40 proxy issues GET, dropping Range, which gets manifest
21:59:47 proxy issues GET to segment data
22:00:01 i want to get rid of that 416 back-and-forth
22:00:21 oh i see
22:00:27 #topic RFC-compliant etags
22:00:31 #link https://review.opendev.org/#/c/695131/
22:00:31 patch 695131 - swift - Add proxy-server option to quote-wrap all ETags - 5 patch sets
22:00:42 i dusted off a 6-year-old Won't Fix bug :{
22:00:46 er, :P
22:00:52 lol
22:00:55 but i'm also out of time ;-)
22:01:10 thank you all for coming today, and thank you for working on swift!
22:01:15 #endmeeting
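The ranged-SLO flow discussed at 21:58 above can be sketched as follows. This is a simplified, hypothetical model of the object-server decision (not Swift's actual implementation): when the request carries X-Backend-Ignore-Range-If-Metadata-Present naming metadata the stored object has (here, the X-Static-Large-Object marker on a manifest), the Range header is dropped and the whole small manifest comes back, avoiding the 416 round trip.

```python
# Simplified model of a backend GET under the proposed header; a sketch,
# not Swift's object-server code. Only single-range "bytes=N-" handled.
def object_server_get(stored_metadata, stored_size, headers):
    """Return an HTTP-ish status code for a backend GET."""
    ignore_if = headers.get('X-Backend-Ignore-Range-If-Metadata-Present')
    rng = headers.get('Range')
    if rng and ignore_if and ignore_if in stored_metadata:
        rng = None                   # whole-object read: it's a manifest
    if rng:
        start = int(rng.split('=')[1].split('-')[0])
        if start >= stored_size:     # e.g. bytes=1000000- vs a 2 KB manifest
            return 416               # the round trip we want to avoid
        return 206
    return 200

# Old flow: proxy forwards the client's Range blind, gets a 416, retries.
assert object_server_get({'X-Static-Large-Object': 'True'}, 2048,
                         {'Range': 'bytes=1000000-'}) == 416
# New flow: the hint turns the same request into a 200 with the manifest.
assert object_server_get({'X-Static-Large-Object': 'True'}, 2048,
                         {'Range': 'bytes=1000000-',
                          'X-Backend-Ignore-Range-If-Metadata-Present':
                              'X-Static-Large-Object'}) == 200
```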