21:01:18 #startmeeting swift
21:01:19 Meeting started Wed Sep 18 21:01:18 2019 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:22 The meeting name has been set to 'swift'
21:01:27 who's here for the swift meeting?
21:01:31 o/
21:02:13 o/
21:02:31 hi o/
21:03:25 sounds like tdasilva and clayg are going to be a little late, but i think we could get started
21:03:32 agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:04:11 quick reminder that we want to start collecting topics at https://etherpad.openstack.org/p/swift-ptg-shanghai
21:04:52 sure
21:05:23 i'm as guilty as anyone of needing to put things on there. i feel like i've been running around without a chance to stop, think, and organize. i'll make sure i get to that this week
21:05:37 timburke: I can help with Running Swift Cluster as you suggested. I'm just not sure what's expected (how long? slides needed?)
21:06:03 rledisez, i don't know either! i've never done this before ;-)
21:06:17 lol, let's improvise then
21:06:24 :-)
21:06:31 need help?
21:06:36 i'm also not sure how much interest there will be. yeah, improvisation will probably work best
21:07:15 Hello!
21:07:53 kota_, maybe? i'd had this thought that i might encourage newcomers to express what they're curious about with regard to swift and then have something of a lecture prepared, but... idk
21:08:02 tdasilva: o/
21:08:43 i think that would still be valuable, but given the lack of newcomers expressing an interest... i'm not sure that it's the best use of time
21:09:01 we'll see what happens
21:09:17 anyway, on to updates!
21:09:24 #topic py3
21:09:50 i didn't get around to making a probe test job running under py3, sorry
21:10:02 (part of it was being out of town the last couple days)
21:10:24 no need to say sorry ;-)
21:11:10 but we've got a new py3 bug! https://bugs.launchpad.net/swift/+bug/1844368
21:11:10 Launchpad bug 1844368 in OpenStack Object Storage (swift) "fallocate_reserve cannot be specified as a percentage running under python3" [High,Confirmed]
21:11:55 looks to be due to some changes in config parser -- sounds like tdasilva did a great job confirming the issue
21:12:32 i'd like to have it fixed for our train release (which is fast approaching!) but if needed, we can backport the fix
21:12:40 hah, ConfigParser
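A minimal reproduction of the bug 1844368 behaviour discussed above: python3's ConfigParser applies interpolation by default, so a bare '%' in a value like "fallocate_reserve = 1%" is rejected, while reading with interpolation disabled accepts it. This is only a sketch of the breakage, not the fix that eventually landed in Swift.

    # Sketch of Launchpad bug 1844368: py3 ConfigParser vs "fallocate_reserve = 1%"
    import configparser
    import io

    CONF = "[DEFAULT]\nfallocate_reserve = 1%\n"

    parser = configparser.ConfigParser()
    parser.read_file(io.StringIO(CONF))
    try:
        parser.get('DEFAULT', 'fallocate_reserve')
    except configparser.InterpolationSyntaxError as err:
        # default interpolation treats '%' specially and rejects the value
        print('interpolation error: %s' % err)

    # with interpolation disabled, the value comes back verbatim
    raw = configparser.ConfigParser(interpolation=None)
    raw.read_file(io.StringIO(CONF))
    print(raw.get('DEFAULT', 'fallocate_reserve'))  # -> '1%'

Under python2 the equivalent parse succeeds, which is why the option only broke when running under python3.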
21:13:42 #topic lots of small files
21:14:21 as alecuyer is not here i'll try to summarize what happened recently
21:14:40 rpc as http is merged in the feature branch
21:14:57 good
21:15:08 we tried to deploy yesterday on production 2.22 + losf (with rpc http) + hashes.pkl support
21:15:27 hashes.pkl to save some CPU
21:15:39 we had to roll back; it seems introducing hashes.pkl introduced some random hangs in REPLICATE
21:15:44 alecuyer is digging into it
21:16:17 good to know
21:16:29 next step for performance would probably be to cache the list of partitions, because it seems it's very CPU intensive to list the partitions
21:16:58 lol, you guys are cowboys.. just deploy to production.. nope doesn't work :P
21:17:24 mattoliverau: i didn't say it all. just on one server at first. we go to prod by 1/10/100/1000
21:17:32 not so crazy cowboys :D
21:17:42 ahh, yeah makes much more sense :)
21:17:51 that means it would be compatible with the grpc version??
21:18:14 kota_: what do you mean?
21:18:19 i think that part's internal to a particular object server, no?
21:18:50 from the proxy's perspective, the RPC mechanism shouldn't really matter, right?
21:18:52 change one server from grpc to http, right?
21:18:55 yes, the protocol between object-server (python) and the RPC server (golang). it was gRPC but due to eventlet, it's now an HTTP protocol
21:19:19 oic
21:19:24 so it's totally transparent to anything other than the losf diskfile
21:19:26 (and similarly between two different object servers)
21:19:36 yeah, and the index-server runs on each node.
21:19:44 kota_: exactly
21:19:54 makes sense.
21:21:29 i know we here at swiftstack did some performance testing, and darrell talked with alecuyer a bit about what we'd found... unfortunately i didn't keep a close eye on that conversation, but i think we weren't seeing *huge* improvements...
21:22:53 rledisez, i think darrell was talking with you; do you remember what the outcome on that was? if there was any idea on what our bottlenecks or bad configuration might be?
21:24:15 hum, I can't exactly remember. maybe darrell could start an etherpad with what you found and we'll check that and confirm or explain or answer
21:24:26 sounds good
21:24:40 did i miss anything exciting?
21:24:45 i also remember alecuyer mentioned he had some patches that needed proposing... do you think any of that might be because of the code delta between what's on the feature branch vs what you're actually running?
21:24:57 clayg, don't worry, i moved the versioning stuff down ;-)
21:25:28 timburke: yes, we are (sadly) some patches ahead because we patch our prod first, but I think we are really close to the feature branch
21:25:52 and the goal is to run the last stable + feature branch on our cluster
21:26:00 rledisez, that makes perfect sense -- take care of your herd ;-)
21:26:37 but i really *do* want to be better about getting patches landed on the feature branch -- if you're *running it in prod*, that kinda sounds like a +2 to me ;-)
21:28:17 #topic sharding
21:28:29 mattoliverau, i'm so sorry, i really need to look at your patches!
21:28:43 no stress, we have lots going on :)
21:29:03 I've pushed up https://review.opendev.org/#/c/681970/
21:29:25 which I hope addresses the cleaning up cleave context bug
21:29:30 remind me, do you have anyone running with sharded containers? or are you doing this just because sharding's kinda your baby?
21:30:35 i love that sense of ownership, but i also don't want you getting burnt out on it ;-)
21:30:44 our swift deployment tools (with my SUSE hat on) don't support sharding; in fact they're a little behind.
21:31:16 It's something I want to fix. But SUSE doesn't make swift a priority other than giving me some time on it upstream
21:31:54 are there other things that SUSE would like to see in swift? bugs fixed, features added?
21:32:10 So one day. In essence, I just know the code really well and feel responsible for bugs ;)
21:32:32 mattoliverau: that's super interesting! I didn't realize the tons of headers bug was about not reaping old contexts?!
21:32:41 what can i do to get you something you can show your boss to say, "hey, swift's valuable for us"? :-)
21:33:31 yeah, I wish I knew. SUSE just "supports" it as a part of OpenStack. They really go out and sell SES to customers with large storage issues.
21:33:57 might also be outside the scope of this meeting, but i wanted to mention it ;)
21:34:14 I've been vocal about certain use cases as they come up internally, and always put in a "how cool Swift would be for that".. so I'm working on the inside :)
21:35:00 👍 thanks for that
21:35:03 If I can push the internal deployment tools forward. Just waiting for the customer who wants something big that Ceph can't handle ;P
21:36:11 anyway. i'll try to get those patches reviewed. thanks for all your hard work!
21:36:23 #topic versioning
21:36:33 Anyway. Got some sharding patches up for bugs. Haven't worked on autosharding much. I should play with that some more. comments on the current implementation welcome.
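A rough sketch of the cleave-context cleanup idea behind https://review.opendev.org/#/c/681970/ discussed above: the sharder persists its cleaving progress in the container DB's sysmeta, keyed by the id of the database being cleaved, and contexts belonging to databases that no longer exist just accumulate (the "tons of headers" symptom). The sysmeta key prefix and the reaping rule below are assumptions for illustration, not the code in the patch.

    # Illustrative sketch only -- key prefix and reaping rule are assumptions,
    # not the actual implementation in swift/container/sharder.py.
    import time

    from swift.common.utils import Timestamp

    CONTEXT_KEY_PREFIX = 'X-Container-Sysmeta-Shard-Context-'  # assumed prefix

    def reap_stale_cleave_contexts(broker):
        """Drop cleaving contexts left behind by container DBs that are gone."""
        live_db_id = broker.get_info()['id']
        now = Timestamp(time.time()).internal
        stale = {}
        for key in broker.metadata:
            if not key.startswith(CONTEXT_KEY_PREFIX):
                continue
            db_ref = key[len(CONTEXT_KEY_PREFIX):]
            if db_ref != live_db_id:
                # sysmeta is removed by writing an empty value with a newer timestamp
                stale[key] = ('', now)
        if stale:
            broker.update_metadata(stale)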
21:36:57 onto versioning!
21:37:02 clayg, tdasilva, take it away ;-)
21:37:58 so after a false start trying to expand versioned_writes to work with the s3api aws object versioning feature...
21:38:11 we decided to expand versioned_writes to work with the s3api aws object versioning feature!
21:38:44 only this time with a for-realzy new swift object versioning api that works a lot more like aws s3 object versioning
21:39:16 tdasilva is working on that - and I'm sure lots of docs and func tests will be coming to flesh that out and describe how we think it should work so we can start to get some feedback
21:39:48 so this is the WIP patch?
21:39:54 #link https://review.opendev.org/#/c/682382
21:40:11 https://review.opendev.org/#/c/682382
21:40:12 patch 682382 - swift - WIP: New Object Versioning mode - 2 patch sets
21:40:14 there's a couple sticking points on the implementation side where a lack of some system/storage-level features might make for less-than-ideal strategies; so we're experimenting with some different things
21:40:16 better :-)
21:40:53 mattoliverau: correct
21:41:22 so there's a lot to talk about there... and looking down the road maybe some discussion about different things we might do with s3api while that work is ongoing
21:43:16 probably worth noting that symlink-based versioning is still a good idea - but there's some other things in the legacy versioned writes mode(s) that were problematic and it seems better for clients/consumers to just offer a new shiny
21:44:32 we've been throwing around the analogy of DLOs vs. SLOs - but with the idea that maybe if the alternative implementation is really significantly better (and easier for s3 consumers to adopt) we might eventually "sunset" legacy mode
21:44:59 but really we're mostly focused on getting an object versioning implementation in swift that is amazing and we can build on for the next decade
21:45:14 fwiw, i'm starting to collect some of the design discussion currently happening internally at https://etherpad.openstack.org/p/swift-object-versioning
21:45:24 oh nice
21:45:50 because it'll be so much better if mattoliverau and kota_ and rledisez can see what we're thinking and offer feedback on it :-)
21:45:59 timburke: thanks for getting that up - we can definitely help flesh that out as we go
21:45:59 clayg: +1 re: amazing versioning is a good focus
21:46:38 oh, what, sorry, I was looking at another thing.
21:46:52 I'll take a look at the patch today too to get an idea of where things are at.
21:47:06 kota_, no worries :-) we just always appreciate your insights
21:47:13 good to know for the versioning
21:47:26 #link https://etherpad.openstack.org/p/swift-object-versioning
21:47:37 just linking it so it's easy to find in the minutes later
21:47:38 mattoliverau: we already got at least one new primitive (static links) out of the deal - I'm hoping we get at least one more before we're through
21:47:39 mattoliverau: sorry for the lack of docs for the moment
21:47:40 * kota_ is in U.S. timezone actually to join another meeting
21:47:44 i.e. when I lose the link :P
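To make the shape of the new versioning mode a bit more concrete: WIP patch 682382 above models the API on S3 object versioning, roughly along the lines sketched here. The header name, query parameters, and listing keys are assumptions about where the design was heading at the time, not the finalized Swift API, so treat this as a discussion aid rather than documentation.

    # Hedged sketch of the client-facing API the new versioning mode is aiming
    # for; header/query names below are assumptions, not the final API.
    import requests

    STORAGE_URL = 'http://saio:8080/v1/AUTH_test'      # hypothetical cluster
    HEADERS = {'X-Auth-Token': 'tk-example'}           # hypothetical token

    # enable versioning on a container (assumed header name)
    requests.post(STORAGE_URL + '/pics',
                  headers=dict(HEADERS, **{'X-Versions-Enabled': 'true'}))

    # overwrites now create new versions instead of clobbering the old data
    requests.put(STORAGE_URL + '/pics/cat.jpg', data=b'v1', headers=HEADERS)
    requests.put(STORAGE_URL + '/pics/cat.jpg', data=b'v2', headers=HEADERS)

    # list every version in the container (assumed query param and listing keys)
    resp = requests.get(STORAGE_URL + '/pics?versions&format=json',
                        headers=HEADERS)
    for entry in resp.json():
        print(entry['name'], entry.get('version_id'), entry.get('is_latest'))

    # address a specific version directly (assumed query param)
    requests.get(STORAGE_URL + '/pics/cat.jpg?version-id=SOME_VERSION_ID',
                 headers=HEADERS)

The DLO-vs-SLO analogy in the discussion is about exactly this kind of clean break: a new, explicit API that can coexist with legacy versioned_writes until it proves itself.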
21:48:22 I'm sure we could sketch out some stuff on the etherpad that eventually ends up looking like some documentation
21:48:22 no stress guys. y'all doing awesome :)
21:49:20 there are good items for improvement consideration in the etherpad
21:49:22 timburke: I think that's all we got this week - but I'll commit to helping work on that etherpad
21:49:37 sounds good
21:50:10 oh, one more last-minute topic!
21:50:15 #topic train release
21:50:23 CHOO CHOO!
21:50:36 in two weeks or so, i want to be tagging a 2.23.0 for train
21:50:49 we should call it the train departure :P
21:50:56 ship it, ship it, ship it, shipit, shipitshipitshipit CHOOO CHOOOO
21:51:09 rofl
21:51:29 timburke: so maybe a new priority review section
21:51:39 please add items to the priority reviews page as you see fit -- i'll make sure i add a release section at the top (and in general clean it up)
21:51:48 mattoliverau, yeah, that :-)
21:51:57 #topic open discussion
21:52:11 anything else we ought to talk about?
21:53:29 shanghai?
21:53:46 can't wait!
21:53:55 is there a who's coming list already?
21:54:13 <- whole week
21:54:24 still need to get my travel sorted, though... and make a state of swift talk
21:54:25 kota_: AWESOME!!!
21:54:37 i'm just trying to learn how to visa
21:54:40 for the record, alex and I have our PTG tickets and plane tickets. we are now trying to get visas (which is a bit complex for French people). we hope to get them in time
21:55:08 hehe, I don't need a visa for Shanghai
21:55:22 clayg, according to https://etherpad.openstack.org/p/swift-ptg-shanghai, it's the five of us: you, me, kota_, rledisez, and alecuyer
21:55:48 #dreamteam
21:55:57 mattoliverau: ??? 😢
21:56:13 I never got the OK, but others here at SUSE are applying for visas, so I assume it's not happening. I should go confirm with my manager (who's been away and then away at training)
21:56:16 :(
21:56:20 :-(
21:56:44 annoying because it's _very_ close to my timezone.
21:56:48 :(
21:57:00 ahahha! true! maybe one of the easier plane rides for you for a change!
21:57:01 so I would have thought I'd be cheaper to send too.
21:57:04 hahahah
21:57:09 lol
21:57:12 if nothing else, it'll be nice to have more overlap with you mattoliverau :-)
21:57:43 yeah, let's see what can get through the great firewall of china.. but I hope to be there virtually :)
21:57:45 we'll have to think about how to dial you in; if you can't make it you'll be missed - until the next one!
21:58:02 yeah i'm *nervous* about the tech/equipment/network stuff 😬
21:58:07 i wonder where it'll be...
21:58:16 clayg, yeah, me too.
21:58:19 clayg: we got a special recommendation from our SOC team…
21:58:23 probably denver :P
21:58:34 mattoliverau, sounds right :P
21:58:58 all right, i think i'm gonna call it
21:59:00 denver :/
21:59:05 "soc" - is that a french acronym?
21:59:13 thank you all for coming, and thank you for working on swift!
21:59:14 security operations center i think
21:59:16 kota_: probably not. Just joking
21:59:27 #endmeeting