Friday, 2023-08-18

05:06 *** dmellado8191813 is now known as dmellado819181
12:43 <fungi> we're back to full available quota in all rackspace regions now
12:44 <fungi> deleted nodes were cleaned up some time during my low-power standby hours
15:11 <frickler> fungi: nice work, do we also want to re-enable image uploads, then? maybe start with a partial set first? seems the nodepool containers have been redeployed, so the timeout should be back to 1h currently
15:14 <fungi> i haven't opened a ticket about image uploads in rackspace iad yet
15:15 <fungi> i think they're still having trouble
15:15 <fungi> based on my manual upload attempt yesterday, with no other uploads going on there, it still took over 30 minutes after the upload completed before the image appeared in their image list
15:16 <fungi> also i haven't cleaned up the hundreds of leaked images we noticed in the dfw or ord regions either
15:21 <frickler> but do we know what the expected upload time is? maybe 30 mins is normal and we just never noticed? can you try your upload against another region for comparison? or did you do that already and I missed that result?
15:22 <frickler> I'll test an upload via nodepool for iad in the meantime
15:24 <frickler> meh, with the new build id format, the image-list output is too wide for my standard terminal size :(
15:27 <fungi> i did not test uploading to another region, so trying that now, though this will at best provide an upper bound since we don't have uploads to other regions paused
15:27 <fungi> i've started an identical test to their ord region anyway, just to see what it might tell us
15:56 <fungi> frickler: looks like in ord it takes around 18 minutes from upload completion to the image appearing in the list
15:56 <fungi> i'll test dfw too
16:01 <frickler> ok, that's faster, but not by much. how long does the upload itself take? I think the time we see in nodepool will be the sum of both
16:05 <fungi> keep in mind this is uploading to a cloud where we have not paused uploads. when we unpaused uploads to iad we saw processing times for manual upload tests skyrocket by >10x
16:07 <fungi> i don't think the time to perform the upload is relevant, though i have been recording it anyway. what i'm mainly measuring is the time between when the upload completes and when the image appears in the list. upload times are going to vary because the cloud regions are in different geographical locations, separated by substantially different distances, and traffic traverses much different parts of the internet with different bandwidth constraints, et cetera
16:08 <fungi> i'm intentionally factoring that part out
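The measurement fungi describes (time from upload completion until the image shows up as active) amounts to a polling loop. Here is a minimal sketch of such a loop, assuming a `check_status` callable that would wrap something like `openstack image show -f value -c status <image>`; the function name, intervals, and injectable clock/sleep are illustrative, not what was actually run:

```python
import time

def wait_for_active(check_status, poll_interval=30.0, timeout=3600.0,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll check_status() until it reports 'active'; return elapsed seconds.

    clock and sleep are injectable so the loop can be tested (or simulated)
    without actually waiting.
    """
    start = clock()
    while True:
        status = check_status()
        if status == "active":
            return clock() - start
        if status == "error":
            raise RuntimeError("image processing failed")
        if clock() - start >= timeout:
            raise TimeoutError("image never became active")
        sleep(poll_interval)
```

The post-upload processing times quoted in this discussion (roughly 15-30+ minutes) correspond to this loop's return value when started the moment the upload call returns.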
16:10 <fungi> yesterday's iad upload took 4m30.807s, today's ord upload took 8m37.690s, and today's dfw upload took 4m44.768s
16:11 <fungi> all tested from bridge01, which is (i think) in rackspace's dfw region
16:12 <fungi> which is another reason i'm not particularly interested in upload times: i'm not uploading from one of our nodepool builders, and glance-side processing times will be entirely independent of where we're uploading from anyway
16:20 <fungi> post-upload image processing time in dfw was around 15-16 minutes
16:20 <fungi> again, in a region where we don't have uploads paused
16:21 <fungi> so without uploads paused to iad we see approximately 2x the post-upload processing time as we do in dfw and ord when uploads from our builders are not paused
16:21 <fungi> er, with uploads paused to iad
16:40 <frickler> 2023-08-18 16:15:44,365 DEBUG nodepool.OpenStackAdapter.rax-iad: API call create_image in 3180.6606140406802
16:41 <frickler> so that worked, but it's pretty close to 1h, so we likely don't want to enable too many uploads there for now
16:42 <frickler> maybe only ubuntu-jammy and rocky-9, which would seem to cover well over 50% of our node requests?
16:42 <frickler> and then delete the other images, or keep running with outdated ones?
16:43 <fungi> yeah. we could probably slowly work through them one by one
16:47 <clarkb> did we pause via config change or the command line tool? it would be easier to go one by one if we use the command line tool
16:50 <frickler> clarkb: via config change. the manual uploads I did were by removing the "pause: true" from the config on the builder; the upload will continue even if the hourly run restores the config
16:50 <frickler> I'm not sure we can do a pause per region via the cli?
16:50 <fungi> clarkb: i think the cli is for pausing specific images, not providers
16:51 <fungi> in this case we wanted to pause all uploads to one provider, while still building and uploading all our images to other providers
16:51 <fungi> and the only way i saw to do that was in the config
16:51 <frickler> yes, image-pause has only the image name as a parameter
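For reference, the config-side pause being discussed lives on the provider's diskimage entries in the builder's nodepool configuration, which is what makes a per-provider pause possible; the `nodepool image-pause` CLI only takes an image name, so it pauses that image across all providers. A rough sketch of a partial re-enable like the one frickler suggests (provider and image names are examples from this discussion, not the actual production config):

```yaml
# nodepool builder config (sketch): resume only two images for one provider
providers:
  - name: rax-iad
    diskimages:
      - name: ubuntu-jammy      # pause removed: uploads resume
      - name: rocky-9           # pause removed: uploads resume
      - name: centos-9-stream
        pause: true             # still skipping uploads of this image to rax-iad only
```

Setting `pause: true` on a top-level `diskimages` entry instead would stop the image from being built at all, which is a different lever from the per-provider upload pause shown here.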
20:13 <clarkb> fungi: today is turning into a weird one for me. errands on top of a bike ride. But I wanted to check in on whether you've had a chance to review the gitea change
20:13 <clarkb> I'm thinking we should be able to land that early next week, as well as the etherpad 1.9.2 upgrade, if we can do some reviews. I'm happy to answer questions etc too
20:27 <clarkb> man is getting updates on my computer. I wonder how often man updates
20:32 <clarkb> fungi: I just find that discussions like that make me feel like we're so far ahead of the curve that it's detrimental ...
20:33 <clarkb> It is good people are trying to work on these things, but it also feels like we're discarding all of the historical work here
20:34 <JayF> We don't do a very good job of evangelizing tech that we've built to support openstack to the rest of the ecosystem
20:34 <JayF> (part of that is probably due to being so far ahead of the curve in a lot of ways)
20:38 <clarkb> in this particular case we were basically told to go away, to be fair
20:38 <clarkb> we tried really hard for a while to work with pypa, but they were not interested, and this is where we've ended up. They've now reinvented, almost a decade later, multiple things we'd done that we tried to work with them on
21:12 <fungi> yeah, we've been working a decade into the future, so nobody wants to hear our ideas, and by the time they become relevant we've forgotten they're even novel
21:13 <fungi> which is why i try to chime in on discussions like that one
<fungi> since we're already 10 years in the future, maybe we should weigh in on
<fungi> er, i mean
21:19 <clarkb> fwiw I think the "make a package and install it without too much fuss" problem is largely solved at this point. Which is excellent. The largest missing piece is package verification of some sort imo
21:19 <clarkb> I know they've long been working on that, but as far as I know there is no cheeseshop-supported method for that today
23:03 <fungi> the current complaint is that there was a packaging user survey that *seemed* to indicate there was too much "choice" in packaging frameworks and tools, so some in the python packaging community are basically using that to say nobody can develop anything new, because that would create "yet another way" to do something
23:05 <fungi> it's been used over and over in the pep 722 discussion to try to push the proposal from a simple list of package names in comment lines to an embedded toml file inside comments, prematurely engineering for "whatever people might want to add" (even though there's no indication tools would do anything with it)
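For context, the two shapes under debate looked roughly like this: PEP 722 proposed a plain list of requirement names in comment lines, while the competing approach (which was later standardized as PEP 723) embeds a small TOML document inside comments. Both fragments below are illustrative sketches of the formats, with example dependencies:

```python
# PEP 722 style: a simple comment block of dependency names
# Script Dependencies:
#     requests
#     rich

# PEP 723 style: TOML embedded in specially delimited comments
# /// script
# dependencies = ["requests", "rich"]
# ///
```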

Generated by 2.17.3 by Marius Gedminas - find it at!