21:00:34 #startmeeting swift
21:00:34 Meeting started Wed Jan 22 21:00:34 2025 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:34 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:34 The meeting name has been set to 'swift'
21:00:40 who's here for the swift meeting?
21:00:47 I am!
21:01:58 o/
21:02:24 * zaitcev peeks in
21:02:51 so i've had a crazy weekend due to moving houses and haven't really had a chance to update the agenda (or think much about what i'd want to call out during this meeting in general)
21:03:14 nps, we'll wing it :)
21:03:36 but i *do* know that there are a couple items of work that are seeing renewed interest: ring v2, and the aws-chunked chain
21:04:29 Is there documentation about what ring v2 is? (sorry, new here)
21:04:33 oh, and p 853697 merged, so a whole bunch of stuff has needed rebasing!
21:05:13 rcmgleite, yes! it's included in the patch: https://review.opendev.org/c/openstack/swift/+/834261
21:05:16 yup, both are important for us at least. the latter is important for any s3api users with the latest boto sdk updates
21:05:49 oh, right, patch-bot is still packed up... i've gotta bring some hardware back online so i've got a bouncer and whatnot...
21:06:09 ringv2 is a new serialization format for rings. The old one had some issues, but there are also cool new improvements, like being able to store more devices in the ring (which is what we really need).
21:06:41 oh, you're running patchbot yourselve timburke lol
21:06:48 *yourself
21:07:22 the check job also published a preview site of the new docs; i think https://df7f3296a8ac461f2ab3-3dd5e628e5b815e2df15fc764c3bb478.ssl.cf2.rackcdn.com/834261/22/check/openstack-tox-docs/91194ff/docs/overview_ring_format.html#ring-v2 is the interesting bit
21:09:16 but yeah, mattoliver's right -- the big thing driving the renewed interest is going beyond the 64k device limit rings currently have
21:09:56 rcmgleite, welcome! what brings you here? any particular part of swift you're interested in?
21:10:47 Hey! I'm an ex-S3 engineer working for a public cloud in a Brazilian company. Our object storage offer runs on swift (still yoga for now)
21:11:02 I'm here to learn and if possible ask lots of questions!
21:11:16 oh nice!! welcome!
21:11:23 nice! o/
21:11:24 Merged openstack/swift master: docs: Fix version call-out for stale_worker_timeout https://review.opendev.org/c/openstack/swift/+/939501
21:11:25 welcome
21:11:26 Thank you!
21:12:42 Currently very interested in the 64k limit change of the ring (also a problem for us), and we've been working on some patches of our own that we would like to push upstream if it makes sense to you
21:13:01 Things like object locking, middlewares that help decouple keystone from swift, etc.
21:13:10 oh, cool! welcome! i seem to remember notmyname talking about at least one brazilian company with a pretty large swift install...
21:13:42 that's us for sure! Rafael (also from my company) has been joining this meeting for a while now
21:14:47 cool, the more people working upstream the better! then we can all benefit from each other's work
21:15:01 @mattoliver exactly!
21:15:39 Do you folks mind if I ask a couple of questions? Not sure how these meetings usually go
21:16:11 not at all, go for it!
you're also welcome to ask questions any time, not just during meetings :-)
21:16:25 rcmgleite: https://gist.github.com/tipabu/435fc63f1aa76d6b346956571a393665
21:16:53 We are working on upgrading from yoga forward.. our first step is Zed. Anything we should be worried about with these upgrades?
21:17:11 By reading the commits and changelog it doesn't look like so.. but wanted to be sure
21:18:20 no, i wouldn't expect so. and fwiw, i wouldn't hesitate to upgrade swift straight from yoga to 2.34.0 (or even master) -- though you might have your own reasons for moving more slowly
21:19:12 I think I tend to be overly paranoid... And we have a bunch of changes to swift that we never pushed upstream that get very annoying to merge when pulling new changes..
21:19:13 if possible, i'd upgrade object, then container, then account, then proxy
21:19:34 yeah, fair enough. long-lived forks are no fun
21:20:00 Yeah we are trying to catch up and send proper proposals so that we don't keep our fork anymore..
21:20:07 that's why we try to push as much upstream as possible.. less maintenance burden
21:20:14 \o/
21:20:15 is there a rationale for having to choose a specific order to upgrade?
21:23:22 I remember I was working on a few expirer-related changes recently, and one of the goals is to be order-agnostic for the upgrade process. I think probably Tim wants to be cautious in general.
21:24:03 servers are kept backward compatible with older clients (at least, as best we can, and to the extent that we don't, we treat it as a bug and fix it)
21:24:19 so an older proxy should always be able to talk to a newer backend
21:24:39 but a newer proxy might want to use features only available with a newer backend
21:24:57 a lot of the smarts and features are in the proxy; they can talk to older storage nodes. But it's just best practice as Jian says. And it means you can use new features as soon as the proxies are upgraded :)
21:25:15 oh interesting. that makes sense! Thanks for the context
21:26:05 the exact order of the backend upgrades is less important, but we usually try to think about them in that order (iirc)
21:27:01 i think for the most part it's less of a concern these days; it's been a while since we added a big new protocol change like sysmeta or listing overrides...
21:27:13 Second question: we want to update a bit how ranged GETs work -> i.e. we want storage nodes to also receive a range and be able to fetch only the exact amount of data needed from disk. I wanted to share the design with you and have proper discussions before we go full on implementing it. What's the best way to do this? Do you discuss in PRs directly? Or have a design session?
21:27:34 Great! Thanks guys!
21:28:05 rcmgleite, that should already be happening -- what've you been seeing?
21:28:24 (it's more of a general question about how designs are done within this group/project)
21:28:59 We tend to have a design doc in any form you like, we put it on the ideas page (normally), and can discuss it at a meeting or PTG. Then break out into more meetings or something as required. It's pretty flexible.
21:29:23 We just started discussing this - but since we don't have internal checksums per range, my assumption is that you have to read the entire content from disk to be able to apply integrity checks on whatever the storage nodes are sending back
21:29:36 #link https://wiki.openstack.org/wiki/Swift/ideas
21:29:46 Thanks @mattoliver!
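To illustrate the ranged-GET behaviour discussed above: a client only sends a standard HTTP Range header on the GET, the proxy forwards it, and the object server serves back just those bytes with a 206 Partial Content response. A minimal sketch; the endpoint, account, container, object, and token below are all illustrative placeholders:

    # Placeholder values -- substitute your own auth endpoint, account, and token.
    TOKEN="AUTH_tk_example"

    # Swift honours the Range header on object GETs and replies
    # "206 Partial Content" with only the requested byte range.
    curl -i \
      -H "X-Auth-Token: $TOKEN" \
      -H "Range: bytes=0-1023" \
      "https://swift.example.com/v1/AUTH_test/mycontainer/myobject"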
21:30:19 I might have missed how you do it though
21:30:25 basically we want the idea (or a more fleshed-out idea/design doc) that people can read / comment on. and then we can take it from there.
21:30:41 Cool! That makes sense!
21:30:50 if it's a huge feature we sometimes use a feature branch in git upstream to speed up development
21:31:36 storage_policies, sharding, and currently a better MPU (than SLO) have all been worked on in this manner
21:32:08 That's interesting! I can find those by looking at branches on the github repo?
21:32:27 yeah, but your best place to see progress is on gerrit
21:32:37 hm, ok!
21:33:47 I use this gerrit dashboard (sorry about the big link):
21:33:57 https://review.opendev.org/dashboard/?foreach=%28project%3Aopenstack%2Fswift+OR+project%3Aopenstack%2Fpython%2Dswiftclient+OR+project%3Aopenstack%2Fswift%2Dbench+OR+project%3Aopenstack%2Fpyeclib+OR+project%3Aopenstack%2Fliberasurecode%29+status%3Aopen+NOT+label%3AWorkflow%3C%3D%2D1+NOT+label%3ACode%2DReview%3C%3D%2D2+is%3Amergeable&title=Swift+Review+Dashboard&Starred+%28by+myself%29=%28is%3Astarred%29+AND+status%3Aopen&Small+things=delta%3A%3C%3D25+limit%3A10+%28NOT+label%3ACode%2DReview%2D1%2Cswift%2Dcore+OR+%28label%3ACode%2DReview%2D1%2Cswiftclient%2Dcore+AND+project%3Aopenstack%2Fpython%2Dswiftclient%29%29&Needs+Final+Approval+%28to+land+on+master%29=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+NOT+owner%3Aself+label%3ACode%2DReview%3E%3D2+%28NOT+label%3ACode%2DReview%2D1%2Cswift%2Dcore+OR+%28label%3ACode%2DReview%2D1%2Cswiftclient%2Dcore+AND+project%3Aopenstack%2Fpython%2Dswiftclient%29%29+branch%3Amaster&Open+Backport+Proposals=branch%3A%5Estable%2F.%2A+status%3Aopen&Feature+Branches=branch%3A%5Efeature%2F.%2A+status%3Aopen+NOT+branch%3A%5Efeature%2Fhummingbird&Open+patches=NOT+label%3AWorkflow%3E%3D1+NOT+label%3ACode%2DReview%2D1%2Cswift%2Dcore+NOT+label%3ACode%2DReview%3E%3D2+branch%3Amaster
21:35:18 that's an old one. I think tim has newer, probably better ones :P
21:35:26 note that most of the feature branches on the github repo are basically historical artifacts -- the only active feature branch today is feature/mpu
21:36:06 I save those URLs in something like this: https://slasti.zaitcev.us/zaitcev/swift/
21:36:11 Wait
21:36:19 http://slasti.zaitcev.us/zaitcev/swift/
21:36:27 Private CA cert, sorry.
21:37:00 great, thx! Anyone else want to jump in before I ask a last question?
21:37:29 nah go ahead
21:38:18 Are there any forks/initiatives that you know of that are related to strong consistency?
21:38:36 I know swift is eventually consistent by design... so not too hopeful
21:38:47 but it's something we've been seeing the need for more and more...
21:40:39 yeah, we're seeing the need more and more, too... i don't recall any forks pursuing it, though
21:41:30 not that I'm aware of. At nvidia we've been thinking about what it might look like to get more strongly consistent in container listings, for example. But the cost of strong consistency at scale is latency. Thought maybe we need to have container policies where you might be able to opt in to a different consistency model. but more just spitballing still
21:41:53 could be a great topic at a future PTG, when we can all brainstorm together.
21:42:03 Ceph RGW is consistent. But I don't like its code. I worked on it on and off. It's in ad-hoc C++, everything is super random. Classes, indexes...
21:42:27 Minio is maybe okay but it has licensing issues.
21:42:54 yeah, I worked on Ceph RGW and yeah, had latency issues because of metadata strong consistency (when I worked on it)
21:43:05 It's great to hear that the need for that is also there for you.. we might bring up some ideas soon!
21:43:55 acoles: has some bucket inventory code we run downstream to get a more consistent, although historical, view of bucket listings.
21:44:27 but yeah, the more we can all work together on the problems we face, the better we can solve them and make swift better :)
21:45:25 @rcmgleite, what kind of consistency are you interested in, listing or obj overwrite?
21:47:25 both. listing can be annoying for analytics workloads but overwrites are also bad for things like bucket policy updates, locking updates etc..
21:47:51 it's something customers don't expect - they say "lock my file for deletions", they get a 200, and the next call they do is a delete and their object is gone
21:48:34 it can totally be something we could improve in our implementation of locking (which we very much want to push upstream)
21:48:47 but it's one of the many cases where eventual consistency is biting us
21:49:23 yup, makes sense
21:50:46 rcmgleite: I look forward to digging into some of these issues and seeing what you all have done!
21:51:11 Yup! let's do that! thanks for all the information folks! really helpful!
21:52:01 all right, anything else we want to discuss?
21:52:30 look forward to seeing patches from you, @rcmgleite
21:52:35 As for the ringv2 work, I've been pushing it down the road some more, got 2 more small patches. One to save a 2-byte minimum for devs in builders
21:53:06 i saw, thanks mattoliver!
21:53:31 and one to add a `swift-ring-builder version` command, so we have an easy way to confirm ring versions, seeing as we haven't changed the file name semantics.
21:54:06 yeah, that might be better than what i was doing over in https://review.opendev.org/c/openstack/swift/+/857234
21:54:12 just playing with the code in my vSAIO and trying to get used to using it day 2 day.
21:54:49 i saw https://github.com/NVIDIA/vagrant-swift-all-in-one/pull/170 too :-)
21:54:55 oh yeah, ring-info, so a new utility, could also work.
21:55:39 oh yeah, if you actually want to build v2 rings, it would be nice for vSAIO to support it.
21:56:33 although that only works if you're running the branch, because before that --format-version isn't a valid option.
21:57:25 ring_version=1 could just not write the --format-version option I guess.. that'll work until we change the default version in the future :hmm:
21:57:36 i suppose i could try to pull that out as another patch in front -- and justify it as "well, you want to be able to test loading v0 rings, don't you?" :P
21:58:11 lol, true.. but surely we're just going to land ringv2 soon right ;)
21:59:22 i was so sad when i noticed the "Created 4 years ago" when zaitcev linked to my gist...
21:59:38 all right, we're about at time
21:59:47 Our velocity is a bit low.
21:59:52 thank you all for coming, and thank you for working on swift!
21:59:56 #endmeeting
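For anyone wanting to experiment with the v2 ring work discussed above, a rough sketch of what the builder workflow might look like with the patch (https://review.opendev.org/c/openstack/swift/+/834261) applied. The create/add/rebalance/write_ring commands are the standard, long-standing swift-ring-builder workflow; the --format-version flag and the version command come from the in-progress patches mentioned in the meeting, so their exact spelling and placement here are assumptions to be checked against the patch:

    # Standard builder workflow (unchanged by ring v2); values are illustrative.
    swift-ring-builder object.builder create 10 3 1
    swift-ring-builder object.builder add r1z1-127.0.0.1:6200/sdb1 100
    swift-ring-builder object.builder rebalance

    # Assumed, per the in-progress patches: write the ring in the new v2
    # format, then confirm which format a ring/builder file is in.
    swift-ring-builder object.builder write_ring --format-version 2
    swift-ring-builder object.builder version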