21:00:23 <timburke> #startmeeting swift
21:00:24 <openstack> Meeting started Wed Feb  5 21:00:23 2020 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:25 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:27 <openstack> The meeting name has been set to 'swift'
21:00:32 <timburke> who's here for the swift meeting?
21:00:39 <kota_> hi
21:00:42 <seongsoocho> o/
21:00:52 <rledisez> hi!
21:01:06 <mattoliverau> o/
21:02:05 <tdasilva> o/
21:02:05 <clayg> oh yeah sorry :D
21:02:23 <timburke> i didn't update the agenda since last week, but i didn't really have much to add ;-)
21:02:30 <timburke> #topic swift 2.24.0
21:02:36 <timburke> we had a release!
21:02:48 <timburke> best swift yet :D
21:02:54 <kota_> awesome!
21:03:00 <rledisez> until the next one ;)
21:03:26 <timburke> thanks for all the bug reports, patches, and reviews everyone!
21:04:41 <timburke> #topic open discussion
21:05:22 <zaitcev> Not much going on with dark data, dsariel was working on adding a probe test.
21:05:40 <zaitcev> Oh, and
21:05:42 <timburke> i saw that go by! still need to take a look
21:06:14 <zaitcev> I have a customer with a medium sized cluster, they have object replicators seriously underperforming.
21:06:33 <zaitcev> Maybe the inter-region links are overloaded, I don't know. Complete mystery.
21:07:25 <zaitcev> Anyway, nothing to amaze RAX and Swiftstack people, but very curious for me. What I meant to say though, there aren't many hooks for analyzing replicator performance that I see.
21:07:40 <timburke> interesting. how many regions?
21:07:46 <zaitcev> I am currently adding logger.debug(), it's at that level
21:07:49 <zaitcev> 2
21:07:58 <zaitcev> Replication factor 4
21:08:02 <zaitcev> fairly standard
21:08:27 <timburke> no write affinity, yeah?
21:08:34 <zaitcev> No
21:08:44 <zaitcev> But well, it's the replicator. What does it matter.
21:08:51 <zaitcev> I'm not talking about PUT
21:09:06 <timburke> with affinity on, you're making a lot more work for the replicator to do
21:09:38 <zaitcev> okay, but still.... It's crawling at about 40 seconds per partition
21:09:47 <zaitcev> anyway, probably not something for the meeting
21:10:16 <timburke> it does remind me how we really want to get ourselves more into the data path for replication, though... tsync! (someday)
21:10:17 <zaitcev> I saw a whole bunch of interesting reviews and cannot keep up.
21:10:32 <timburke> i've been a little head-down working on some latency issues that (i think) are related to container listings and general container load (though i'm not entirely clear on *why*)
21:10:36 <timburke> kicked out p 706010 (and probably increases my prioritization of p 675014)
21:10:36 <patchbot> https://review.opendev.org/#/c/706010/ - swift - sharding: filter shards based on prefix param when... - 1 patch set
21:10:38 <patchbot> https://review.opendev.org/#/c/675014/ - swift - Latch shard-stat reporting - 6 patch sets
21:10:39 <zaitcev> something about symlinks
21:11:35 <zaitcev> Also. Since I'm tracing replicator, I noticed it makes 2 REPLICATE requests for every partition, even empty ones.
21:12:11 <zaitcev> I was on core for how many years now? And I still do not know how it works :-)
21:12:39 <clayg> I'm still trying to grok the last reported shard latch patch
21:12:47 <timburke> fwiw, that still (rather often!) happens to me :-)
21:13:07 <mattoliverau> I'll try and get to reviewing those sharding patches this week. Sorry I've been distracted
21:14:38 <timburke> worrying about whether your house is actively on fire (or recently so) seems like a good reason to be distracted, but a poor reason to be apologizing ;-)
21:15:10 <mattoliverau> lol
21:15:14 <clayg> zaitcev: should be REPLICATE (get hashes) rsync (sync data) REPLICATE (tell remote to recalculate data)
21:15:28 <zaitcev> clayg, thanks, I see
21:15:36 <clayg> zaitcev: I feel like empty parts should see local hashes match the first value returned from get hashes and not do another one 🤔
21:16:09 <kota_> clayg: that's what I think too
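[editor's note: a minimal sketch of the per-partition flow clayg describes above — REPLICATE to get remote hashes, rsync any changed suffixes, then a second REPLICATE so the remote recalculates. This is hypothetical illustration, not Swift's actual replicator code; the function and parameter names are invented. It also shows clayg's suggested optimization: when local and remote hashes already match (e.g. an empty partition), the rsync and second REPLICATE round trip could be skipped.]

```python
def replicate_partition(local_hashes, fetch_remote_hashes, rsync, recalc):
    """Return the list of requests made while syncing one partition.

    local_hashes: dict mapping suffix -> hash on the local node
    fetch_remote_hashes: callable simulating the first REPLICATE request
    rsync: callable simulating syncing the changed suffixes
    recalc: callable simulating the second REPLICATE (recalculate) request
    """
    steps = ["REPLICATE (get hashes)"]
    remote_hashes = fetch_remote_hashes()
    # Only suffixes whose hashes differ need to be pushed.
    changed = [suffix for suffix, h in local_hashes.items()
               if remote_hashes.get(suffix) != h]
    if not changed:
        # Hashes already match (e.g. empty partition): skip the
        # rsync and the second REPLICATE entirely.
        return steps
    rsync(changed)
    steps.append("rsync (sync data)")
    recalc(changed)
    steps.append("REPLICATE (recalculate)")
    return steps
```

Under this sketch, an empty partition would make only one REPLICATE request instead of two, which is the behavior kota_ and clayg are agreeing would be preferable.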
21:17:20 <timburke> oh, zaitcev! you might have ideas -- i still want to get probe tests running on py3 in the gate (p 690717) but i'm running into trouble getting pyeclib/liberasurecode installed
21:18:12 <timburke> on centos7 we just yum install from... somewhere... but that won't do for centos8, apparently
21:18:37 <zaitcev> timburke: sorry, I only know about our own context, where we pull those libraries from our repositories. Gate probably tries to pull from upstream, unless it's in bindep
21:18:44 <zaitcev> what
21:19:12 <timburke> i guess i could get libec installed from source... but surely there are packages built *somewhere* that we could use...
21:20:23 <timburke> i guess: does anyone know which repo the centos7 image is using to install python-pyeclib (and thereby pulling in liberasurecode)?
21:20:28 <zaitcev> Huuuh. I get liberasurecode installed, but not python-pyeclib.
21:20:40 <zaitcev> I'll look into where RDO gets it.
21:20:57 <tdasilva> https://cbs.centos.org/koji/buildinfo?buildID=17582
21:21:22 <zaitcev> no, that's easy. Find PyECLib
21:21:22 <tdasilva> looks like 1.6 is going to be in ussuri, cloud-8 i'm assuming refers to centos8???
21:21:38 <tdasilva> https://cbs.centos.org/koji/buildinfo?buildID=27938
21:23:20 <zaitcev> "Package python3-pyeclib-1.5.0-10.fc31.x86_64 is already installed."
21:23:24 <tdasilva> https://cbs.centos.org/koji/buildinfo?buildID=28302
21:23:29 <zaitcev> ok, it's called "python3-pyeclib" in Fedora
21:23:40 <tdasilva> pyeclib is still only in candidate
21:24:25 <timburke> mmm... ok
21:24:58 <timburke> i was also catching up on the mailing list this morning, noticed http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012347.html that sounds like something i need to read more closely
21:25:12 <tdasilva> i guess centos8 package would have to wait until red hat has a ussuri release
21:25:43 <timburke> so it's kinda chicken-and-egg, then :-/
21:26:24 <timburke> maybe i can get the liberasurecode playbooks to install from source -- something for me to look into, anyway
21:26:52 <timburke> oh yeah, speaking of the mailing list
21:27:03 <timburke> http://lists.openstack.org/pipermail/openstack-discuss/2020-January/012167.html says tickets for vancouver are on sale!
21:27:37 <timburke> http://lists.openstack.org/pipermail/openstack-discuss/2020-January/012239.html tells you about how *you* can help shape what content we see there
21:28:14 <kota_> oh yeah,
21:28:21 <tdasilva> at one point I was looking at https://wiki.centos.org/SpecialInterestGroup/Storage
21:28:23 <kota_> the page is #link https://www.openstack.org/events/opendev-ptg-2020/
21:28:49 <timburke> if you're hoping to go and your employer is unlikely to sponsor it, look into the travel support program: https://wiki.openstack.org/wiki/Travel_Support_Program
21:29:42 <seongsoocho> yay ~
21:29:46 <timburke> looks like the application link there is still for shanghai, but i'd expect an announcement on the mailing list when vancouver applications open up
21:30:01 <kota_> it looks like `opendev` is from `forum` in the paste
21:30:05 <kota_> past events?
21:31:38 <rledisez> do some people already know if they're going to vancouver?
21:33:09 <timburke> looks like my wife may have her own conference that same week, so i might have to have someone give the project update in my place
21:34:48 <mattoliverau> timburke: or you just bring your swiftlets ;P
21:35:03 <kota_> I'm planning to go there but need to get approval that is not so hard, maybe.
21:35:07 <zaitcev> what an unorthodox idea
21:35:51 <rledisez> me and alecuyer are currently waiting for approval
21:36:09 <timburke> 👍
21:36:48 <timburke> we've got time; i'm sure plans will firm up more over the next month or two
21:36:53 <timburke> any other topics to bring up?
21:38:12 <timburke> all right then
21:38:24 <timburke> thank you all for coming, and thank you for working on swift!
21:38:27 <timburke> #endmeeting