opendevreview | OpenStack Proposal Bot proposed openstack/swift master: Imported Translations from Zanata https://review.opendev.org/c/openstack/swift/+/861022 | 03:15 |
---|---|---|
opendevreview | Alistair Coles proposed openstack/swift master: proxy: refactor error limiter to a class https://review.opendev.org/c/openstack/swift/+/858790 | 09:52 |
opendevreview | Alistair Coles proposed openstack/swift master: proxy: refactor error limiter to a class https://review.opendev.org/c/openstack/swift/+/858790 | 11:31 |
opendevreview | Alistair Coles proposed openstack/swift master: Refactor memcache config and MemcacheRing loading https://review.opendev.org/c/openstack/swift/+/820648 | 12:46 |
opendevreview | Alistair Coles proposed openstack/swift master: Global error limiter using memcache https://review.opendev.org/c/openstack/swift/+/820313 | 12:46 |
DHE | timburke_: for context, I have a 2.23.1 based cluster with a 10+10 EC policy. randomly GETs of big files just won't start. I'm running curl with tempurl authentication and it just sits there.... it's not too common - I'd say less than 1% of requests - but it does happen | 18:10 |
DHE | figured I'd apply some updates to the cluster since it's getting old at this point | 18:10 |
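For context on the request shape DHE describes, a Swift temporary URL is normally built by signing the method, expiry, and object path with the account's tempurl key. Below is a minimal sketch assuming the classic HMAC-SHA1 variant; the key, host, and object path are made up:

```python
import hmac
import time
from hashlib import sha1

# Hypothetical values -- substitute a real tempurl key and object path.
key = b"MYSECRETKEY"
method = "GET"
expires = int(time.time()) + 3600
path = "/v1/AUTH_account/container/big-object"

# Classic Swift tempurl signature: HMAC-SHA1 over "METHOD\nEXPIRES\nPATH".
sig = hmac.new(key, f"{method}\n{expires}\n{path}".encode(), sha1).hexdigest()
url = f"https://swift.example.com{path}?temp_url_sig={sig}&temp_url_expires={expires}"
print(url)  # this is the URL handed straight to curl
```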
timburke_ | DHE, kinda sounds like something we fixed in 2.27.0: https://github.com/openstack/swift/blob/master/CHANGELOG#L709-L711 | 18:19 |
timburke_ | you could try manually applying https://github.com/openstack/swift/commit/86b966d950000978e2438f1bd5d9e2bf2e238cd1 -- it's (fortunately) a pretty small change | 18:20 |
timburke_ | does it hang just the one request, or the whole process? | 18:20 |
timburke_ | either way, you might be able to use https://github.com/tipabu/python-stack-xray/blob/master/python-stack-xray to get a sense of where it's hanging, though it can be tricky if it's a busy server | 18:26 |
DHE | just the one request. I terminate curl and try again. it works. swift itself has been largely fine. | 18:29 |
DHE | yeah I tampered with python itself to fix this. I think I'm the one who helped push that through in the first place | 18:30 |
timburke_ | ah, right! :-) | 18:30 |
DHE | for non-EC jobs (which is 99.9% of the workload) it's been humming along nicely | 18:30 |
timburke_ | logs have much of anything to say? i know it'd be a little tricky since there's no transaction id sent back to the client | 18:31 |
timburke_ | if not, getting a stack is probably the best way forward -- might be able to spin up a separate proxy-server instance and hit that directly until it hangs, then run the xray script | 18:33 |
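If running the xray script on a busy server proves tricky, one alternative sketch (not the approach suggested above, and assuming you can touch the proxy's startup code) is to have the process dump its own thread stacks on a signal via the standard library's faulthandler module:

```python
import faulthandler
import signal

# Arrange for the process to write every thread's stack trace to a file
# when it receives SIGUSR1 -- e.g. add this near the proxy-server startup.
trace_file = open("/tmp/proxy-stacks.log", "a")  # hypothetical path
faulthandler.register(signal.SIGUSR1, file=trace_file, all_threads=True)

# Then, once a request hangs:
#   kill -USR1 <proxy-server pid>
# and inspect /tmp/proxy-stacks.log to see where each thread is blocked.
```

Note that with eventlet this shows OS threads rather than individual greenthreads, so the xray script may still give a clearer picture of a hung request.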
kota | good morning | 20:57 |
timburke_ | o/ | 20:59 |
*** timburke_ is now known as timburke | 20:59 |
timburke | #startmeeting swift | 21:00 |
opendevmeet | Meeting started Wed Oct 12 21:00:27 2022 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot. | 21:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 21:00 |
opendevmeet | The meeting name has been set to 'swift' | 21:00 |
timburke | who's here for the swift team meeting? | 21:00 |
kota | o/ | 21:00 |
cschwede | o/ | 21:01 |
timburke | whoa! a cschwede! | 21:01 |
cschwede | :) | 21:01 |
zaitcev | A rare meeting indeed. | 21:02 |
mattoliver | o/ | 21:02 |
mattoliver | cschwede is here too! | 21:02 |
timburke | not sure if acoles or clayg are around -- i think clay's kind of busy trying to get an nvidia release lined up | 21:03 |
timburke | first up | 21:03 |
timburke | #topic PTG | 21:03 |
timburke | i've booked time slots and updated the etherpad list to point to the right place! | 21:04 |
mattoliver | oh nice! | 21:04 |
kota | good | 21:04 |
timburke | i went with 2100-2300 M-Th (though i wonder if we could get away with starting a little earlier to get more of acoles's (and cschwede's?) time) | 21:05 |
cschwede | great, happy to meet all of you at least virtually :) | 21:05 |
timburke | though it wouldn't help kota -- timezones are hard :-( | 21:06 |
kota | in UTC? | 21:06 |
timburke | yes -- so i know it'd be a bit of an early start | 21:06 |
kota | seems like it's just morning for me, so it doesn't matter much | 21:06 |
mattoliver | if earlier makes it easier for Al and cschwede then I'm ok with it. | 21:08 |
timburke | if we try for an hour earlier, it's 5am -- might be "just morning" for you, but that doesn't match my experience ;-) | 21:08 |
mattoliver | lol | 21:08 |
kota | it's still ok, i think | 21:09 |
kota | :) | 21:09 |
timburke | i went with just M-Th to make sure we don't run into kota and mattoliver's weekend, and kept shorter slots to try to keep everyone well rested | 21:09 |
timburke | if you've got topics for the PTG, please add them to the etherpad! | 21:10 |
timburke | #link https://etherpad.opendev.org/p/swift-ptg-antelope | 21:10 |
timburke | and if we feel like we need more time, we can always book more slots | 21:11 |
timburke | that's all i've got for ptg logistics -- any questions or comments? | 21:12 |
mattoliver | cool, I'll go through it. And add stuff I can think of. | 21:12 |
timburke | 👍 | 21:12 |
mattoliver | Do we want to book a "normal" time for a ops feedback? | 21:13 |
mattoliver | happy to do it in my timezone though (makes it much easier for me) ;) | 21:13 |
timburke | i don't know what "normal" means :P | 21:13 |
timburke | but yeah, an ops feedback session is probably a good idea | 21:14 |
mattoliver | a time that has more overlap with the other attendees of the PTG. Although it'll probably be mostly just us. | 21:14 |
timburke | ah, yeah -- looks like a lot of the rest of the ptg is in the 1300-1700 UTC slot | 21:16 |
timburke | i think i'll let mattoliver schedule that one then :-) | 21:17 |
mattoliver | lol, shouldn't have opened my big mouth :P kk :) | 21:17 |
timburke | all right -- i don't have much else to bring up | 21:19 |
timburke | #topic open discussion | 21:19 |
timburke | what else should we talk about this week? | 21:19 |
timburke | cschwede, i figured you'd have something, after making the effort to attend ;-) | 21:21 |
clarkb | oh I have something | 21:21 |
clarkb | we're (opendev) looking to bump our default base job node to ubuntu jammy on the 25th | 21:22 |
cschwede | timburke: not really, i just wanted to ensure I get all the important PTG infos :) | 21:22 |
timburke | :-) | 21:22 |
clarkb | an email has been sent about it, but I know y'all's bindep file doesn't work with jammy currently, so I wanted to call it out (I suspect your jobs are probably fine since they probably specify a specific node type already) | 21:22 |
timburke | clarkb, oh! i bet that's part of the recent interest in https://review.opendev.org/c/openstack/swift/+/850947 | 21:22 |
clarkb | timburke: I left a comment on the bindep file in your py310 change with a note about correcting that | 21:23 |
clarkb | ya that change | 21:23 |
mattoliver | oh good thing I moved the vagrant swift all in one environment over to jammy then :P | 21:23 |
timburke | thanks -- the bigger issue right now is the parent change -- our continued reliance on nosetests is definitely a problem now :-( | 21:24 |
clarkb | I have no idea if this will be the case for swift due to all the IO, but in Zuul there is a distinct difference in test runtimes between 3.8 and 3.10, with 3.10 being noticeably quicker, which is nice | 21:24 |
mattoliver | I've heard there is a performance boost in later py3's. nice | 21:25 |
timburke | i've noticed similar sorts of performance benefits when benchmarking some ring v2 work -- going py27 -> py37 -> py39 -> py310 kept making things better and better :-) | 21:25 |
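As an illustration only, a micro-benchmark along these lines can be run under each interpreter to compare; the workload below is made up and is not the ring v2 benchmark itself:

```python
import sys
import timeit

# A throwaway CPU-bound workload (hypothetical -- not the ring v2 code),
# building and sorting tuples, roughly the flavor of rebalance bookkeeping.
stmt = "sorted((i % 7, i * 31 % 1013) for i in range(100_000))"

# Take the best of five repeats to reduce noise from other processes.
best = min(timeit.repeat(stmt, repeat=5, number=20))
print(f"python {sys.version_info.major}.{sys.version_info.minor}: {best:.3f}s")
```

Running the same script under python3.8, python3.9, python3.10, and so on gives a rough like-for-like comparison.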
mattoliver | down with nose, time to move everything to pytest. | 21:25 |
timburke | now if only i could get that first jump done in the clusters i run | 21:26 |
clarkb | re pytest, one thing I've encouraged others to do is use ostestr for CI because it does parallelized testing without hacks (though maybe pytest has made this better over time) and it is a standard test runner, which means you can still run pytest locally to get the more interactive tracebacks and code context output | 21:28 |
clarkb | but if you use pytest in CI you very quickly end up being pytest specific and running with standard runners is much more difficult | 21:28 |
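To illustrate clarkb's point, a test that sticks to plain unittest APIs stays runnable under stestr, pytest, or python -m unittest alike, whereas pytest-specific fixtures tie you to pytest. The test class and module name below are hypothetical:

```python
import unittest


class TestErrorLimiter(unittest.TestCase):
    """Hypothetical test -- runner-agnostic because it only uses unittest APIs."""

    def test_error_count_starts_at_zero(self):
        limiter = {"errors": 0}  # stand-in for a real error-limiter object
        self.assertEqual(limiter["errors"], 0)

# Any of these should discover and run it:
#   stestr run TestErrorLimiter
#   pytest test_error_limiter.py
#   python -m unittest test_error_limiter
```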
mattoliver | great tip thanks clarkb | 21:29 |
clarkb | looks like your test jobs don't take a ton of time. But for a lot of projects not running in parallel would make them run for hours | 21:30 |
timburke | our tests (unfortunately) generally can't take advantage of parallelized tests -- unit *might* be able to (though i've got concerns about some of the more func-test-like tests), but func and probe are right out | 21:32 |
timburke | all right, i think i'll call it | 21:35 |
timburke | thank you all for coming, and thank you for working on swift! | 21:35 |
timburke | see you next week for the PTG! | 21:36 |
timburke | #endmeeting | 21:36 |
opendevmeet | Meeting ended Wed Oct 12 21:36:05 2022 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 21:36 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/swift/2022/swift.2022-10-12-21.00.html | 21:36 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/swift/2022/swift.2022-10-12-21.00.txt | 21:36 |
opendevmeet | Log: https://meetings.opendev.org/meetings/swift/2022/swift.2022-10-12-21.00.log.html | 21:36 |
timburke | hmm... speaking of CI, we should figure out what's going on with the fips func test jobs -- they started failing a couple weeks ago (9/29): https://zuul.opendev.org/t/openstack/builds?job_name=swift-tox-func-py39-centos-9-stream-fips&project=openstack/swift | 21:54 |
timburke | i don't immediately see any difference between the packages installed (or setup writ large) for the most recent pass vs that next failure, either... | 21:55 |
opendevreview | Merged openstack/swift stable/yoga: CI: Add nslookup_target to FIPS jobs https://review.opendev.org/c/openstack/swift/+/858277 | 22:30 |