21:00:06 #startmeeting swift
21:00:07 Meeting started Wed Jul 1 21:00:06 2020 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:10 The meeting name has been set to 'swift'
21:00:15 who's here for the swift meeting?
21:00:28 o/
21:00:29 o/
21:00:40 o/
21:00:59 howdy!
21:02:02 agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:02:07 first up
21:02:11 #topic Berlin
21:02:25 the Request for Presentations recently opened up
21:02:27 #link http://lists.openstack.org/pipermail/openstack-discuss/2020-July/015730.html
21:02:52 deadline's Aug 4
21:03:19 summit will be Oct 19-23
21:03:48 (assuming the pandemic is more or less under control, so... who knows?)
21:04:03 :/
21:04:20 but i wanted to at least bring the call for presentations to everyone's attention
21:05:04 that's all i've got in terms of announcements
21:05:22 #topic replication servers
21:05:54 so clayg made a very important observation on p 735751
21:05:55 https://review.opendev.org/#/c/735751/ - swift - Allow direct and internal clients to use the repli... - 4 patch sets
21:06:09 go team!
21:06:21 the replication servers currently only respond to SSYNC and REPLICATE requests!
21:06:33 #link https://bugs.launchpad.net/swift/+bug/1446873
21:06:33 Launchpad bug 1446873 in OpenStack Object Storage (swift) "ssync doesn't work with replication_server = true" [Medium,Confirmed]
21:07:13 it won't even respond to OPTIONS, which is supposed to tell you what request methods *are* allowed
21:07:43 ^ that's like @timburke's favorite joke 🤣
21:08:28 Sorry I'm late, woke up this morning and had IRC troubles, but finally connected.
21:08:40 it looks like it won't be too hard to allow replication servers to service all requests, and make `replication_server = False` basically mean "don't do SSYNC or REPLICATE"
21:08:59 does that seem like a reasonable thing to everyone, though?
21:10:22 i feel like the work involved in making it so i can properly *test* with separate replication servers is going to be way bigger than making it work :-/
21:12:24 I think the conclusion on the lp bug was to drop the semantic of `replication_server = None` - which I think is what you HAVE to run on your replication servers now if you want to do a replication network and have EC rebuild still work
21:13:05 seems related to https://review.opendev.org/#/c/337861/ 🤔
21:13:05 patch 337861 - swift - Permit to bind object-server on replication_port - 7 patch sets
21:13:32 yeah probably 😬
21:13:50 i feel like if this was easy we wouldn't have put it off so long
21:14:08 it would be helpful to have sanity checks as we try to merge shit
21:15:24 it's probably worth me putting in some work to make probe tests runnable with separate replication servers regardless. seems like a lot of people run that way (we certainly do!) and anything i can do to make my dev environment more representative of prod seems like a good idea
21:15:51 but, it'll probably be a bit
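For context on the `replication_server` semantics under discussion, here's a rough sketch of how a cluster with a dedicated replication network typically splits the object-server into two instances. Ports, file names, and option placement are illustrative assumptions based on the sample configs, not anything taken from patch 735751:

    # /etc/swift/object-server/public.conf -- serves client-facing traffic
    [DEFAULT]
    bind_port = 6200
    [pipeline:main]
    pipeline = object-server
    [app:object-server]
    use = egg:swift#object
    # today: handle everything *except* SSYNC and REPLICATE
    replication_server = false

    # /etc/swift/object-server/replication.conf -- bound to the replication network
    [DEFAULT]
    bind_port = 6210
    [pipeline:main]
    pipeline = object-server
    [app:object-server]
    use = egg:swift#object
    # today: *only* SSYNC and REPLICATE are allowed, which is why the EC
    # reconstructor's fragment GETs (and even OPTIONS) get rejected here;
    # the proposal above would have this server answer all methods, and
    # make "false" mean only "don't do SSYNC or REPLICATE"
    replication_server = true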
21:15:58 #topic waterfall EC
21:16:05 clayg, how's it going?
21:16:06 timburke: it's a non-zero cost to have to run a second set of backend *-servers, but could be nice
21:16:18 timburke: you're the only one who's commented 😠
21:16:35 hehe
21:16:43 fair enough
21:17:09 i'm really proud of my little feeder system tho - I've started to conceptualize breaking them up as like a polyrhythm sort of situation
21:17:46 where we have one flow of code that's popping ticks predictably (start ndata, then remaining nparity at a predictable pattern)
21:17:56 then there's this *other* beat that's all random, just based on when stuff responds
21:18:06 doing that all in one loop was madness - two loops is better
21:18:35 and then I'm also proud of the logic simplifications that I'd managed to get so far - but it was super helpful to have another brain load it up and WTF at it
21:18:59 some bits are still confusing - but I was desensitized
21:19:26 so, anyone else have some bandwidth to take a look in the next week or two?
21:19:43 you were talking about good buckets and bad buckets and 416 as success ... there might be more cleanup, but I'd need some sort of "bad behavior" to really motivate me to "get back in there"
21:20:00 otherwise it's just pushing symbols around for subjective qualities
21:20:40 I think I can at least try the patch next week, I should have more time than these past weeks
21:20:44 I'm happy with the tradeoffs for "the non-durable problem" - I know we never did that zoom or w/e - but it's fine (i'm still happy to answer questions as needed)
21:20:51 alecuyer, thank you
21:21:03 the final patch - the "make timeouts configurable per replica"
21:21:28 it's... a little "much" by some estimations. I like the expressiveness; but worry it's the same as the pipeline problem 😞
21:22:04 the more people who would be willing to put on their operator hat and try to grok what those options even *mean* THE BETTER
21:22:32 if I can't explain it to YOU guys trivially there's no hope I'll ever be able to write clear docs
21:22:47 ok
21:22:58 would it be worth us documenting them as experimental/subject to change (or even removal)? just as soon as we get a chance to have a better option
21:23:54 Then the last gotcha is the final fate of ECFragGetter - do we trim it down lean and mean and try to pull all remains of EC out of GETorHEADHandler - or is there still some hope we can unify them some way that's sane?
21:24:19 It's really not clear to me; and I guess I'm avoiding trying to write code that I'm not confident is a good idea 👎
21:24:54 timburke: I think that'd be reasonable if after everyone looks at them we have some different ideas about directions we might go; or the best we can come up with seems like a lot of work
21:25:23 if we look at them and say "yeah, per policy; per replica - makes sense" then we just write that down with some common examples/patterns and move on
21:26:02 so I think I really could use some feedback right now - from anyone who has cycles - you don't have to grok all the code; or even check out the change
21:26:27 reading the example config in the final patch (based on the conversations we've been having) and providing feedback there is a great start
21:26:44 one thing i'm definitely digging is the fact that it's available as a per-policy option immediately -- i think it'll be really handy for a lab cluster to be able to define, say, five EC policies that are identical except for those tunings
21:27:18 I'll make sure I have a look this week, especially big picture so I can comment on config options. If I have time I'll try and go a little deeper.
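To make the "per policy; per replica" idea concrete, here's one purely illustrative way such tuning might look in proxy-server.conf. The existing concurrent_gets/concurrency_timeout options and the per-policy override section are real Swift config mechanisms; whether concurrency_timeout can live in that section, and the comma-separated per-replica form, are only guesses at the kind of expressiveness being debated, not the actual syntax from patch 737096:

    [app:proxy-server]
    use = egg:swift#proxy
    # existing options: fire off extra backend GET requests concurrently,
    # waiting this long before releasing the next one
    concurrent_gets = true
    concurrency_timeout = 0.5

    # hypothetical per-policy override: an EC policy that starts the first
    # ndata requests immediately, then staggers the remaining parity requests
    [proxy-server:policy:1]
    concurrency_timeout = 0.0, 0.0, 0.0, 0.0, 0.1, 0.3, 0.5, 1.0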
21:27:20 if you can glance through some of the weird corners of how ECFragGetter is starting to diverge from GETorHEADHandler and give me a gut check - "yeah probably different enough" or "these smell like minor differences" - that would also be helpful
21:27:44 #link https://review.opendev.org/#/c/737096
21:27:44 patch 737096 - swift - Make concurrency timeout per policy and replica - 3 patch sets
21:28:06 if you can grok the main ec-waterfall loops - even setting aside weird bucket stuff like shortfall and durable - that'd be like above and beyond
21:28:24 (just realized i had a few comments i forgot to publish)
21:28:33 at that point you may as well check it out and try to configure it - it probably wouldn't take much work to see that it DOES do what it says on the tin
21:29:02 THANK YOU ALL SO MUCH!!! feedback is what I need - couldn't do this without y'all - GO TEAM!
21:29:52 all right
21:29:57 #topic open discussion
21:30:06 what else do we need to bring up today?
21:30:25 libec !!!
21:30:40 so we definitely have the old style checksum frags...
21:30:59 everyone run this on your EC .data files -> https://gist.github.com/clayg/df7c276a43c3618d7897ba50ae87ea9d
21:31:21 if your hash matches zlib you're golden! upgrade to >1.16 and rejoice!
21:31:57 if you've got those stupid stupid stupid libec inline checksums - you're probably gunna be stuck with them for a while (and also don't upgrade - you'll die)
21:32:15 thanks for that snippet clay
21:32:24 well... you might die anyway because 1.15 was like... somehow indeterminate?
21:32:41 well, until everything upgrades ;-)
21:32:45 so we have to do better; but 1.16 isn't quite good enough
21:32:52 alecuyer: timburke wrote it
21:33:31 so, it seems like we need a way to tell libec to continue writing old frags
21:33:58 wow, nice useful bit of coding there!
21:34:00 we also almost certainly need to write a detailed bug ;-)
21:34:18 right, for us to upgrade we need new code to not start writing in the new format yet, because old nodes won't know what to do with that shit until the upgrade
21:34:33 I think we could probably just like... "upgrade libec real fast and restart everything!"
21:34:44 but... we'll probably be like "be careful" or whatever 🙄
21:35:33 I'm gunna try to talk timburke into a *build* option that's just like "always write the old stupid inline crc value forever"
21:35:53 that way I can just make it WOMM and then ship it and never think about how much i hate C again
21:36:01 hehe
21:36:42 but... it might not work - timburke is pretty sure we should actually use the zlib version as soon as we can - so an env var with a portable build might be "better"
21:37:21 if by "better" you mean - you'd rather pay some operational cost to roll out with the latch; then after upgrade remove the latch; and then be on the path of righteousness forever, fighting back the forces of kludge and evil
21:38:10 yeah, i'm still fairly nervous that our funky crc doesn't have the same kinds of guarantees that we're expecting from zlib...
21:39:04 timburke: well i'm not sure I can write controller code that can do the upgrade properly! like... we'd have to do a checkpoint release or something 🤮
21:39:25 I'm sure I could build a package with an option that's like CLAYG_IS_TOO_LAZY_TO_DO_THIS_RIGHT=True
21:39:37 side note on all of this: thank you alecuyer for noticing this and bringing it up last week! much better to be arguing about how best to navigate this now than mid-upgrade ;-)
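Regarding the crc check a little further up ("if your hash matches zlib you're golden"): a minimal sketch of the comparison being made, assuming the fragment payload and the checksum stored in the liberasurecode fragment header have already been extracted. Parsing the header is what the linked gist handles; crc_flavor below is a hypothetical helper, not code from the gist:

    import zlib

    def crc_flavor(frag_payload, stored_chksum):
        """Guess which crc32 flavor wrote this fragment.

        frag_payload: the fragment data bytes the checksum covers
        stored_chksum: the uint32 checksum recorded in the frag header
        """
        if zlib.crc32(frag_payload) & 0xffffffff == stored_chksum:
            # written with the standard zlib crc32 -- safe to read after
            # upgrading liberasurecode
            return 'zlib'
        # most likely libec's legacy inline crc, which is why upgrading
        # needs the "keep writing old frags" latch being discussed here
        return 'legacy'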
21:39:43 and then if at some point you're sure all legacy swiftstack customers have upgraded you turn that off ;)
21:40:02 yeah FOR SURE - alecuyer is a godsend ❤️
21:40:49 thanks but I wish I could do more - and be faster.. :) but thanks
21:40:50 clayg, we could totally have a controller that always says "write legacy crcs" -- we've done controller checkpoint releases before; after the next one, we tell it to switch over
21:41:50 yeah, again - if we can couple it to an upstream swift change that makes it just an ini/config var I'm totally down
21:42:17 if I have to update our systemd units in the package to turn it on/off i'm pretty sure I'm too dumb to get it right
21:42:52 alecuyer: kota_: mattoliverau: if you have access to any EC data in any cluster anywhere please see if you can get the crc thing to run on it and report back w/i a couple of weeks
21:43:10 we'll probably kick this can down the road for a while (1.15 is working fine for me!)
21:43:25 that's all I got on libec
21:43:31 yep, will do
21:43:59 anybody think they'll have a chance to look at https://review.opendev.org/#/c/737856/ ? it seems to be working well for ormandj
21:44:00 patch 737856 - swift - py3: Stop munging RAW_PATH_INFO - 2 patch sets
21:44:28 i'd like to backport it to ussuri and train, then ideally do a round of releases
21:45:09 timburke: oh neat - is the test new?
21:45:14 (probably should have done that earlier, but now that there's an additional known issue...)
21:45:16 yeah
21:45:42 b"GET /oh\xffboy%what$now%E2%80%bd HTTP/1.0\r\n" 🤣
21:46:41 oh, and as a heads-up: it looks like there might be some movement on an eventlet fix for that py37/ssl bug
21:46:43 #link https://github.com/eventlet/eventlet/pull/621
21:47:26 sounds good!
21:49:27 all right, last call
21:50:27 thank you all for coming, and thank you for working on swift!
21:50:33 #endmeeting