fungi | we're back to full available quota in all rackspace regions now | 12:43 |
fungi | deleted nodes were cleaned up some time during my low-power standby hours | 12:44 |
frickler | fungi: nice work, do we also want to re-enable image uploads, then? maybe start with a partial set first? seems the nodepool containers have been redeployed, so the timeout should be back to 1h currently | 15:11 |
fungi | i haven't opened a ticket about image uploads in rackspace iad yet | 15:14 |
fungi | i think they're still having trouble | 15:15 |
fungi | based on my manual upload attempt yesterday, with no other uploads going on there it still took over 30 minutes after the upload completed before the image appeared in their image list | 15:15 |
fungi | also i haven't cleaned up the hundreds of leaked images we noticed in the dfw or ord regions either | 15:16 |
frickler | but do we know what the expected upload time is? maybe 30 mins is normal and we just never noticed? can you try your upload against another region for comparison? or did you do that already and I missed that result? | 15:21 |
frickler | I'll test an upload via nodepool for iad in the meantime | 15:22 |
frickler | meh, with the new build id format, the image-list output is too wide for my standard terminal size :( | 15:24 |
fungi | i did not test uploading to another region, so i'm trying that now, though this will at best provide an upper bound since we don't have uploads to other regions paused | 15:27
fungi | i've started an identical test to their ord region anyway just to see what it might tell us | 15:27 |
fungi | frickler: looks like in ord it takes around 18 minutes from upload completion to image appearing in the list | 15:56 |
fungi | i'll test dfw too | 15:56 |
frickler | ok, that's faster but also not too much faster. how long does the upload take? I think the time we see in nodepool will be the sum of both | 16:01 |
fungi | keep in mind this is uploading to a cloud where we have not paused uploads. when we unpaused uploads to iad we saw processing times for manual upload tests skyrocket by >10x | 16:05 |
fungi | i don't think the time to perform the upload is relevant, though i have been recording it anyway; what i'm mainly measuring is the time between when the upload completes and when the image appears in the list. upload times will vary because the cloud's regions are in different geographical locations, separated by substantially different distances and reached over different parts of the internet with different bandwidth constraints, et cetera | 16:07
fungi | i'm intentionally factoring that part out | 16:08 |
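A minimal sketch of the kind of manual measurement fungi describes, assuming openstacksdk with a clouds.yaml entry; the cloud name, image name, and filename below are placeholders, not the actual values used. The idea is to upload without waiting, then poll glance until the image goes active, so only the post-upload processing time is counted.

```python
# Sketch of the measurement described above: time only the interval between
# the end of the upload and the image becoming active in glance.
# Assumes openstacksdk and a clouds.yaml entry named "rax-iad" (placeholder).
import time

import openstack

conn = openstack.connect(cloud='rax-iad')

upload_start = time.monotonic()
# wait=False returns once the image data has been handed to the cloud;
# exact behavior can differ on clouds that use task-based imports.
image = conn.create_image(
    'upload-timing-test',           # placeholder image name
    filename='test-image.vhd',      # placeholder local file
    disk_format='vhd',
    container_format='bare',
    wait=False,
)
upload_done = time.monotonic()

# Poll until glance finishes its post-upload processing.
while conn.get_image(image.id).status != 'active':
    time.sleep(30)
processing_done = time.monotonic()

print(f'upload:     {upload_done - upload_start:.0f}s')
print(f'processing: {processing_done - upload_done:.0f}s')
```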
fungi | yesterday's iad upload took 4m30.807s, today's ord upload took 8m37.690s and today's dfw upload took 4m44.768s | 16:10 |
fungi | all tested from bridge01 which is (i think) in rackspace's dfw region | 16:11 |
fungi | which is another reason i'm not particularly interested in upload times: i'm not uploading from one of our nodepool builders. glance-side processing times will be entirely independent of where we're uploading from anyway | 16:12 |
fungi | post-upload image processing time in dfw was around 15-16 minutes | 16:20 |
fungi | again, in a region where we don't have uploads paused | 16:20 |
fungi | so with uploads paused to iad we see approximately 2x the post-upload processing time that dfw and ord show when uploads from our builders are not paused | 16:21
frickler | 2023-08-18 16:15:44,365 DEBUG nodepool.OpenStackAdapter.rax-iad: API call create_image in 3180.6606140406802 | 16:40 |
frickler | so that worked, but pretty close to 1h, so likely we don't want to enable too many uploads there for now | 16:41 |
frickler | maybe only ubuntu-jammy and rocky-9, which would seem to cover well over 50% of our node requests? | 16:42
frickler | and then delete the other images or keep running with outdated ones? | 16:42
fungi | yeah. we could probably slowly work through them one by one | 16:43 |
clarkb | did we pause via config change or the command line tool? would be easier to go one by one if we use the command line tool | 16:47 |
frickler | clarkb: via config change. the manual uploads I did were done by removing the "pause: true" from the config on the builder; the upload will continue even if the hourly run restores the config | 16:50
frickler | I'm not sure we can do a pause per region via cli? | 16:50 |
fungi | clarkb: i think the cli is for pausing specific images not providers | 16:50 |
clarkb | aha | 16:50 |
fungi | in this case we wanted to pause all uploads to one provider, while still building and uploading all our images to other providers | 16:51 |
fungi | and the only way i saw to do that was in the config | 16:51 |
frickler | yes, image-pause has only the image name as parameter | 16:51 |
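Since image-pause only takes an image name, pausing everything headed to one provider has to go through that provider's diskimages entries in the config, as described above. A hypothetical helper along those lines (the path, provider name, and exact schema are assumptions, and in practice the file is managed by configuration management, so an edit like this gets overwritten on the next run, as frickler notes):

```python
# Hypothetical helper: set pause on every diskimage under one provider in a
# nodepool builder config, mirroring the config-change approach discussed
# above. Path, provider name, and schema details are assumptions.
import yaml

CONFIG_PATH = '/etc/nodepool/nodepool.yaml'  # assumed location
PROVIDER = 'rax-iad'                         # provider whose uploads to pause

with open(CONFIG_PATH) as fh:
    config = yaml.safe_load(fh)

for provider in config.get('providers', []):
    if provider.get('name') == PROVIDER:
        for diskimage in provider.get('diskimages', []):
            diskimage['pause'] = True        # skip uploads of this image here

with open(CONFIG_PATH, 'w') as fh:
    yaml.safe_dump(config, fh, default_flow_style=False)
```

Going back to per-image uploads (e.g. only ubuntu-jammy and rocky-9) would then just mean dropping the pause flag from those individual entries.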
clarkb | fungi: today is turning into a weird one for me. errands on top of bike ride. But I wanted to check in if you've had a chance to review the gitea change | 20:13 |
clarkb | I'm thinking we should be able to land that early next week as well as the etherpad 1.9.2 upgrade if we can do some reviews. I'm happy to answer questions etc too | 20:13 |
clarkb | man is getting updates on my computer. I wonder how often man updates | 20:27 |
clarkb | fungi: I just found https://discuss.python.org/t/pep-725-specifying-external-dependencies-in-pyproject-toml/31888 discussions like that make me feel like we're so far ahead of the curve that it's detrimental ... | 20:32
clarkb | It is good that people are trying to work on these things, but it also feels like we're discarding all of the historical work here | 20:33
JayF | We don't do a very good job of evangelizing tech that we've built to support openstack to the rest of the ecosystem | 20:34 |
JayF | (part of that is probably due to being so far ahead of the curve in a lot of ways) | 20:34 |
clarkb | in this particular case we were basically told to go away to be fair | 20:38 |
clarkb | we tried really hard for a while to work with pypa but they were not interested, and this is where we've ended up. almost a decade later they've now reinvented multiple things we'd done and had tried to work with them on | 20:38
fungi | yeah, we've been working a decade into the future, so nobody wants to hear our ideas, and by the time they become relevant we've forgotten they're even novel | 21:12 |
fungi | which is why i try to chime in on discussions like that one | 21:13 |
fungi | since we're already 10 years in the future, maybe we should weigh in on https://discuss.python.org/t/the-10-year-view-on-python-packaging-whats-yours/31834/ | 21:17
clarkb | fwiw I think the "make a package and install it without too much fuss" problem is largely solved at this point. Which is excellent. The largest missing piece is package verification of some sort imo | 21:19
clarkb | I know they've long been working on that, but as far as I know there is no cheeseshop-supported method for that today | 21:19
fungi | the current complaint is that there was a packaging user survey that *seemed* to indicate there was too much "choice" in packaging frameworks and tools, so some in the python packaging community are basically using that to say nobody can develop anything new because it would create "yet another way" to do something | 23:03
fungi | it's been used over and over in the pep 722 discussion to try to push the proposal from a simple list of package names in comment lines to an embedded toml file inside comments, prematurely engineering for "whatever people might want to add" (even though there's no indication tools would do anything with it) | 23:05 |