21:01:47 <timburke> #startmeeting swift
21:01:47 <opendevmeet> Meeting started Wed Apr 20 21:01:47 2022 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:47 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:47 <opendevmeet> The meeting name has been set to 'swift'
21:01:57 <timburke> who's here for the swift meeting?
21:02:15 <mattoliver> o/
21:02:22 <kota> o/
21:03:52 <timburke> as usual, the agenda's at
21:03:54 <timburke> #link https://wiki.openstack.org/wiki/Meetings/Swift
21:03:59 <timburke> first up
21:04:05 <timburke> #topic pyeclib release
21:04:34 <timburke> we have https://github.com/openstack/pyeclib/releases/tag/1.6.1 now!
21:04:42 <mattoliver> \o/
21:04:49 <kota> great
21:05:51 <timburke> it includes fixes for py310, as well as enabling people to build abi3 wheels (so you can build once for py35, for example, and still be able to install on later minor releases)
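(For context, a generic sketch of how an abi3 / limited-API build can be requested with setuptools and wheel; the names below are made up and this is not pyeclib's actual setup.py.)

    # hypothetical setup.py sketch: build one cp35-abi3 wheel that installs on
    # any later CPython 3.x by restricting the extension to the limited C API
    from setuptools import Extension, setup

    setup(
        name='example_ec',
        ext_modules=[
            Extension(
                'example_ec._ext',
                sources=['src/ext.c'],
                define_macros=[('Py_LIMITED_API', '0x03050000')],  # target 3.5+
                py_limited_api=True,
            ),
        ],
        # tag the resulting wheel cp35-abi3 instead of a single CPython version
        options={'bdist_wheel': {'py_limited_api': 'cp35'}},
    )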
21:06:35 <timburke> next up
21:06:46 <timburke> #topic s3api security bug
21:07:54 <timburke> i was reviewing some code and noticed that when s3api validates hmacs, it does it with a simple ==, which can reveal information about how much of the signature is valid
21:08:41 <timburke> fix was to use the same streq_const_time() function we've been using in tempurl and formpost
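(To illustrate the technique rather than the exact in-tree helper: a constant-time comparison accumulates the differences across every character instead of returning at the first mismatch, so the run time doesn't leak how long the matching prefix is. The stdlib's hmac.compare_digest does the same thing in C.)

    # illustrative sketch of a constant-time string comparison, in the spirit of
    # swift's streq_const_time(); not the actual implementation
    def constant_time_eq(s1, s2):
        if len(s1) != len(s2):
            return False
        result = 0
        for c1, c2 in zip(s1, s2):
            result |= ord(c1) ^ ord(c2)  # never short-circuit on a mismatch
        return result == 0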
21:08:53 <timburke> thanks for reviewing https://review.opendev.org/c/openstack/swift/+/837773 mattoliver and acoles!
21:09:13 <mattoliver> nps, I learnt something :)
21:09:39 <timburke> the bug has been present as long as we've had s3api in-tree, so i plan on backporting all the way to rocky
21:09:51 <mattoliver> kk
21:11:01 <kota> ok
21:11:18 <timburke> fixes are already merged through victoria; i still need to propose patches for rocky, stein, train, and ussuri
21:11:31 <timburke> but it leads to my next topic...
21:11:38 <timburke> #topic state of the gate
21:12:13 <timburke> i'm pretty sure *every* stable gate was broken the last couple days
21:12:41 <timburke> there were a variety of problems -- the first also affected master
21:13:33 <timburke> basically, a fix for a recent git CVE didn't play well with how devstack installs things
21:13:36 <timburke> #link http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028160.html
21:14:19 <timburke> that's been fixed back through ussuri at least, but it's still working its way through some branches
21:14:47 <timburke> see https://review.opendev.org/c/openstack/devstack/+/838556 and https://review.opendev.org/c/openstack/devstack/+/838679 for example
21:15:49 <timburke> even before that, though, stein through victoria have been broken since the start of the month due to some fixes for https://bugs.python.org/issue43882
21:16:34 <mattoliver> wow, thanks for getting on top of this.
21:16:35 <timburke> and for even longer, pike, queens, and rocky have been broken because they're inexplicably trying to build docs under py3
21:16:45 <mattoliver> I really need to pay more attention to the mailing list again.
21:17:02 <timburke> idk if i've been "on top of this" given how long they've been broken ;-)
21:17:44 <mattoliver> lol, true, when compared to me, you are. But fair point.
21:17:48 <timburke> the good news is, i've got patches to fix things, and we should have a functional gate again by the end of the week (i think)
21:18:04 <mattoliver> nice
21:18:45 <mattoliver> I'll make an active effort to better watch things on the mailing list and check the gerrit dashboard from time to time.
21:19:35 <timburke> thanks! if anyone's interested in keeping up with stable gate failures, i recommend subscribing to http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint and filtering for swift
21:19:57 <mattoliver> kk
21:20:13 <timburke> the main reason i've been prioritizing it is that i've been getting emails every day about it ;-)
21:20:40 <timburke> all right, that's what i wanted to cover this week
21:20:44 <timburke> #topic open discussion
21:20:51 <timburke> anything else we should bring up?
21:22:39 <timburke> i've been getting more nervous about us dropping some of our extra monkey patching in https://review.opendev.org/c/openstack/swift/+/457110
21:23:16 <timburke> in particular, i rediscovered https://github.com/eventlet/eventlet/issues/546 -- apparently eventlet doesn't/can't green existing locks on py3
21:23:54 <mattoliver> ouch
21:25:56 <kota> :(
21:26:26 <mattoliver> I guess the question is whether it's "doesn't" or "can't". Ie if it's the former, is there something we can do about it, like provide a patch (or bug the project)?
21:26:45 <timburke> it's significant to point out that the failure mode is *not* greenthreads blocking more than needed, but rather that existing RLocks (including logging._lock) become basically worthless since the owning pthread is the same between different greenthreads
21:27:20 <timburke> i think i've got the start of an eventlet fix in https://github.com/eventlet/eventlet/pull/754 -- but it's currently only for py310
21:27:59 <mattoliver> ok, well a fix is a good direction. even if we pin to a minimum version of py3
21:28:14 <timburke> i'm going to look at at least greening logging._lock on older pythons, though
21:28:27 <timburke> not sure if there are other rlocks we need to worry about
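(A rough sketch of the kind of workaround being discussed, assuming eventlet has already monkey-patched threading; it rebuilds logging's module-level lock and every handler's lock so they're backed by green primitives. Purely hypothetical, not a committed approach.)

    # hypothetical sketch: recreate logging's pre-existing RLocks after
    # eventlet.monkey_patch(), since eventlet can't green locks created earlier
    import logging
    import threading  # green after monkey-patching

    def regreen_logging_locks():
        logging._lock = threading.RLock()  # module-level lock guarding handler bookkeeping
        for ref in logging._handlerList:   # weakrefs to every handler ever created
            handler = ref()
            if handler is not None:
                handler.createLock()       # replaces handler.lock with a fresh (green) RLock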
21:30:03 <mattoliver> yeah, but logging is a great place to start
21:31:17 <mattoliver> wow, what an issue to find. thanks for looking into it and starting to come up with a solution
21:31:22 <mattoliver> need any help?
21:32:22 <timburke> i think i know what i need to do next for it, thanks
21:33:28 <mattoliver> Well I've been playing with a better way to deal with v2 serialization/deserialization. More of a stepping stone for if/when we want to start including builder data in the ring, but only load what we want when we only want the ring. Mostly in https://review.opendev.org/c/openstack/swift/+/834423, part of the chain
21:34:50 <mattoliver> still playing with it, but I think it's better than just having it all in one function that might need to be repeated in the builder too (when we put builder data structures in the ring too).
21:35:50 <timburke> nice!
21:36:24 <mattoliver> It does bring up some interesting thoughts though: the ring deserialization code needs the dev_bytes length from the "metadata", which is in a different RingSerialization concrete class.
21:37:25 <mattoliver> So metadata needs to come first. I wonder if in this version, the dev_bytes length should actually live with the ring datastructure (but would have to be replicated in the history rings).
21:39:03 <mattoliver> not a major issue, but could make it more self contained.
21:41:35 <mattoliver> the following patch in the chain adds the ring history stuff to the serialization, and seems to fit in well with this approach. No change on the ringdata side, just new ring indexes and adding the concrete classes. No extra code on the serialization side :)
21:41:58 <timburke> maybe we could do something with the 64-bit length we put at the start of each section? use the top 8 bits to indicate whether the contents should be taken 1, 2, or 4 bytes at a time, then the rest as the number of bytes/words/dwords...
21:42:04 <timburke> nice
21:42:56 <mattoliver> yeah, something like that. or just add an extra 1 byte at the start. Dunno, wanted to feel people out about it.
21:43:22 <mattoliver> 64-bit is rather long, though
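(A sketch just to make the floated encoding concrete; the field split and function names here are made up.)

    # hypothetical packing of the per-section length prefix: keep it 64 bits, but
    # spend the top 8 bits on the item width (1, 2 or 4 bytes) and the low
    # 56 bits on the item count
    import struct

    def pack_section_header(item_width, item_count):
        assert item_width in (1, 2, 4)
        assert item_count < (1 << 56)
        return struct.pack('!Q', (item_width << 56) | item_count)

    def unpack_section_header(header):
        value, = struct.unpack('!Q', header)
        return value >> 56, value & ((1 << 56) - 1)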
21:44:51 <mattoliver> anyway, wanted to play with this because I'm interested in taking a look at what it'll look like with some builder components in the ring (so we wouldn't need builders anymore). Ie just give a ring to the ring builder tool, and it checks that it's a v2 ring and acts accordingly :)
21:46:19 <timburke> all right, i think i'll call it
21:46:35 <timburke> thank you all for coming, and thank you for working on swift!
21:46:40 <timburke> #endmeeting