*** dviroel_ is now known as dviroel | 11:27 | |
fungi | just a heads up, i got asked in #openstack-kolla to reenqueue failed periodic jobs. seems you can actually do it, you just need to include --newrev (i passed the results of `git show-ref origin/master`) | 12:28 |
fungi | zuul-client enqueue-ref --tenant=openstack --pipeline=periodic --project=openstack/kolla --ref=refs/heads/master --newrev=4c31f7a3f2002d77dd715dfbb5c2eb74192149d4 | 12:28 |
fungi | that appears to be working as expected | 12:28 |
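For reference, the whole flow is just two commands; a minimal sketch, assuming a checkout of the project to resolve the branch tip and a zuul-client already configured against the opendev Zuul API:

    # resolve the current tip of the branch; --hash prints only the SHA
    NEWREV=$(git show-ref --hash origin/master)
    # re-enqueue that ref into the timer-triggered pipeline
    zuul-client enqueue-ref \
        --tenant=openstack \
        --pipeline=periodic \
        --project=openstack/kolla \
        --ref=refs/heads/master \
        --newrev="$NEWREV"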
Clark[m] | Any reason they didn't trigger jobs themselves with a change? | 12:35 |
Clark[m] | Neat that we can do a reenqueue like that, but shouldn't be necessary in this case | 12:35 |
fungi | changes don't get enqueued into timer trigger pipelines | 12:36 |
Clark[m] | Correct, but you should be able to trigger an equivalent job via a change | 12:37 |
fungi | and yes this is suboptimal when a periodic build uploads a broken artifact that causes all their other jobs to fail until the next periodic run | 12:37 |
fungi | i think it's their compromise to avoid uploading new docker images for every change that merges | 12:37 |
fungi | but they could certainly rework it so that the upload is only triggered in gate or promote when a specific file (dockerfile or whatever) gets changed | 12:39 |
fungi | like we do for our container images | 12:39 |
Clark[m] | Aha, it is artifact publishing that broke. That makes more sense. | 12:39 |
fungi | yep | 12:39 |
SvenKieske | mhm, don't really know how our artifact build pipeline gets triggered, seems worth investigating, just need to find the time for that. | 12:45
fungi | possibly a big part of why they don't want to run this in gate or promote is that it takes around 2.5 hours to complete the jobs | 12:45 |
fungi | in part because they build every image for a particular package and then upload them all in one job, rather than having separate jobs per image/component | 12:46 |
fungi | er, they build every image for a particular platform i mean | 12:47 |
*** d34dh0r5- is now known as d34dh0r53 | 13:52 | |
clarkb | doesn't look like https://review.opendev.org/c/opendev/system-config/+/892057 got approved yet | 14:17 |
clarkb | should I approve it now and then if we run out of time to restart/test while frickler is around today just plan for a restart later? | 14:18 |
fungi | yes please | 14:19 |
fungi | sorry been distracted by painters | 14:20 |
clarkb | done | 14:20 |
opendevreview | Harry Kominos proposed openstack/diskimage-builder master: feat: Add new fail2ban element https://review.opendev.org/c/openstack/diskimage-builder/+/892541 | 14:21
fungi | i've done fresh test imports of mailing lists for about half the production domains, saving openstack for last and may not get to it until after lunch | 14:24 |
frickler | sorry I was also distracted by other issues | 14:25 |
frickler | but I'll be around for testing assuming the patch won't go around in circles getting merged | 14:26
clarkb | here's hoping it goes through quickly :) | 14:26 |
opendevreview | Harry Kominos proposed openstack/diskimage-builder master: feat: Add new fail2ban element https://review.opendev.org/c/openstack/diskimage-builder/+/892541 | 14:27
fungi | clarkb: https://review.opendev.org/892387 is the starlingx matrix channel logging patch you asked about yesterday, btw | 14:27 |
clarkb | fungi: +2'd but can you double check the note I made to ensure that isn't a problem | 14:38 |
fungi | clarkb: yep, that's safe | 14:41 |
fungi | they're starting to have discussions in those new channels too, so probably the sooner we can get them logging the better | 14:41 |
clarkb | ok lets approve that then. Done | 14:42 |
fungi | thanks! | 14:42 |
clarkb | infra-root how does https://etherpad.opendev.org/p/4xnhgK1TFnLsD8WuYMME look for email about zuul version stuff cc corvus frickler gmann JayF | 14:55 |
JayF | My only comment would be timing; if it does cause pain for anyone that's going to be a surprise task right in the middle of the hot time of the release | 14:59 |
fungi | okay, test migration of all lists other than lists.openstack.org has completed successfully on latest mm3, and that one is in progress now (fingers crossed this held node has a big enough rootfs, it's going to be *very tight*) | 15:00 |
clarkb | JayF: yes, the problem there is that waiting means waiting until october, and we're already falling behind on the ansible upgrade path (they do releases every 6 months or so); we're hoping that since two tenants have moved smoothly we can get away with a quicker transition and start keeping up with ansible | 15:01
clarkb | october is 2/3 of the way through the ansible 8 lifetime | 15:02
JayF | yeah, I understand, just pointing that out | 15:02 |
clarkb | I think what we would do in the case of problems is revert the change but then also work to fix the problem (using the test method to confirm) and then reset the default to 8 fairly quickly | 15:03 |
fungi | text of the announcement lgtm, and the plan is reasonable | 15:03 |
clarkb | reality is maybe one or two projects will test and give us the all clear (or find a problem and fix it and tell everyone else to fix it, but they won't), then we'll switch and that's when we'll actually find any problems if they exist | 15:03
clarkb | and if we wait until october thats a month and a half of extra time we aren't collecting that data | 15:04 |
clarkb | which leads to us not getting ansible 9 out in time... | 15:04 |
frickler | though supporting ansible 9 should not be strictly tied to dropping 6? | 15:07 |
clarkb | I don't think it's super strict, but each ansible install is like 800MB or something silly so we'll end up with 3GB container images if we support 6, 8, and 9. | 15:08
clarkb | I personally would like to avoid that. It makes development painful/slow | 15:08 |
fungi | okay, i realized i could blow away the git cache on this held node to free up plenty of additional space for the lists.openstack.org migration test, so should work out okay | 15:10 |
fungi | looks like i'm going to need to get to a lunch appointment before 892057 merges, just a heads up | 15:11 |
clarkb | on gitea99 I logged in as root in my browser to start a session, I stopped the service, appended a string to the jwt secret string, started the service, checked my session was still valid in the browser (seems to be) and checked that the private.pem key did not change (it did not) | 15:16 |
clarkb | so I think this oauth2 stuff is safe on the upgrade path | 15:16
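Roughly what that check amounts to, as a sketch; the compose directory, the JWT_SECRET key in app.ini, and the jwt/private.pem location are assumptions based on a stock gitea layout rather than exactly what system-config deploys:

    cd /path/to/gitea/compose           # hypothetical compose directory
    docker-compose down
    # record the oauth2 signing key fingerprint before the change
    sha256sum data/jwt/private.pem
    # append a string to the JWT secret to simulate it changing
    sudo sed -i '/^JWT_SECRET/ s/$/extra/' conf/app.ini
    docker-compose up -d
    # the existing browser session should still be valid, and the signing
    # key fingerprint should be unchanged afterwards
    sha256sum data/jwt/private.pem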
opendevreview | Merged opendev/system-config master: gerrit: bump index.maxTerms https://review.opendev.org/c/opendev/system-config/+/892057 | 15:22 |
opendevreview | Merged opendev/system-config master: Add StarlingX Matrix channels to the logbot https://review.opendev.org/c/opendev/system-config/+/892387 | 15:22 |
fungi | looks like it merged, but i need to head out so won't be around for restart testing when it deploys, sorry | 15:22 |
fungi | i should be back in an hour-ish | 15:23 |
clarkb | looks like it is deploying right now (timing worked out for that) | 15:23 |
corvus | clarkb: msg lgtm. i put in an extra sentence at the end clarifying (i hope?) that speculative execution is sufficient; feel free to adjust/remove of course. just thought that might be useful for some folks. | 15:25 |
corvus | i seem to be a slightly lighter blue than your light blue :) | 15:26 |
clarkb | wfm | 15:26 |
clarkb | frickler: are you still around and able to test if we restart gerrit? | 15:26 |
clarkb | The config file did update | 15:26 |
clarkb | I'd like to `docker-compose down` then move the waiting dir for the replication plugin aside then `docker-compose up -d` if you are still here | 15:27 |
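In other words, roughly the following; the replication plugin's on-disk queue location is an assumption here (newer plugin versions persist pending events under data/replication/ref-updates in the site dir):

    cd /path/to/gerrit/compose          # hypothetical compose directory
    docker-compose down
    # set the pending replication events aside instead of deleting them
    mv review_site/data/replication/ref-updates/waiting \
       review_site/data/replication/ref-updates/waiting.old
    docker-compose up -d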
frickler | clarkb: ack | 15:28 |
clarkb | ok I'm going to warn the openstack release team then proceed with that plan | 15:29
clarkb | also how about a #status notice Gerrit is going to be restarted to pick up a small config update. You will notice a short outage of the service. | 15:31 |
frickler | +1 | 15:31 |
clarkb | #status notice Gerrit is going to be restarted to pick up a small config update. You will notice a short outage of the service. | 15:32 |
opendevstatus | clarkb: sending notice | 15:32 |
-opendevstatus- NOTICE: Gerrit is going to be restarted to pick up a small config update. You will notice a short outage of the service. | 15:32 | |
clarkb | once that reports it is done I'll do what I described above | 15:32
opendevstatus | clarkb: finished sending notice | 15:34 |
clarkb | frickler: it is restarted and I can get the web ui again | 15:37 |
clarkb | frickler: I think we are ready for you to try and list starred changes | 15:37 |
frickler | clarkb: yay, that works. though I really wonder why I would have starred a change like https://review.opendev.org/c/openstack/oslo.messaging/+/76686 | 15:40 |
clarkb | I think I've starred at least one or two changes due to misclicks | 15:41 |
clarkb | I seem to recall the layout of one of the older web UIs made that easy | 15:41
clarkb | that is excellent news. I think we can leave it on the current limit and monitor it. We probably don't need to rollback as long as this seems happy | 15:41 |
frickler | I'll still look into writing a script that unstars all merged or abandoned changes. maybe with an age limit | 15:42 |
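A rough sketch of such a script against the Gerrit REST API (an HTTP password and jq are assumed; an age limit could be added to the query with something like age:1y):

    GERRIT=https://review.opendev.org
    # list the caller's starred changes that are already closed; Gerrit
    # prefixes JSON responses with )]}' so drop the first line before parsing
    curl -s -u "$GERRIT_USER:$GERRIT_HTTP_PASS" \
        "$GERRIT/a/changes/?q=is:starred+(status:merged+OR+status:abandoned)&n=500" \
      | sed 1d | jq -r '.[]._number' \
      | while read -r change; do
          # DELETE /a/accounts/self/starred.changes/{id} removes the star
          curl -s -u "$GERRIT_USER:$GERRIT_HTTP_PASS" -X DELETE \
              "$GERRIT/a/accounts/self/starred.changes/$change"
        done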
clarkb | I need to take a small break, but then I'll try to send the ansible 8 email afterwards and work on some code reviews I'm behind on | 15:45 |
clarkb | email sent | 16:47 |
fungi | okay, back, looks like i missed the restart | 16:59 |
clarkb | yup, was quick and seems to have addressed the problem | 16:59
fungi | lists.openstack.org test migration is still in progress thanks to the immense message count for the openstack-stable-maint archive | 17:00 |
fungi | no obvious errors yet though | 17:01 |
clarkb | remember when we ran tumbleweed images because we thought maybe people would like to test the latest and greatest packages... turns out no one really cares to do the work | 17:35 |
JayF | clarkb: I think re: that ml thread, a lot of people are missing the point that our CI system *is a production environment* and having random broken system stuff in there (including python beta or dot-oh bugs) stops the work of a ton of developers | 17:36 |
JayF | it's a slider of "stability" and "testing enough stuff" and if anything we've already got an insanely large matrix | 17:37 |
fungi | not only did nobody run opensuse tumbleed jobs, but nobody even had interest in keeping the images buildable | 17:38 |
clarkb | JayF: sort of. It has the flexibility to do what they want through periodic jobs or experimental jobs etc. The problem is literally anytime we have invested any effort into helping people with it, everyone else ignores us and it's a giant waste of time and effort | 17:38
fungi | we removed them not because they were unused, but because they were unbuildable | 17:38 |
fungi | we need to revisit gentoo as well. we've been unable to build new gentoo images for a full year now | 17:39 |
JayF | fungi: can you link and assign me that bug? | 17:39 |
JayF | I have a personal project that a DIB gentoo image would do wonders for | 17:39 |
clarkb | fwiw sean's suggestion would probably be trivial to attempt and is probably an hour or two of someone's time to poc | 17:39
JayF | and I can use this as an excuse to fix it in dib | 17:39 |
JayF | clarkb: there's not enough of us to do all of the things and care about all of the things :( Some small % of not wanting a larger matrix is "the list of things I can care about simultaneously is full", not just in terms of hours in the day, but in terms of mental capacity | 17:40 |
fungi | JayF: this is the closest thing to a bug report because there's also been nobody around who has had the time or interest to look into it and file one: https://nb01.opendev.org/gentoo-17-0-systemd-0000228578.log | 17:40 |
fungi | emerge: there are no ebuilds built with USE flags to satisfy "dev-python/typing-extensions[python_targets_pypy3(-)?,python_targets_python3_8(-)?,python_targets_python3_9(-)?,python_targets_python3_10(-)?,python_targets_python3_11(-)?]". | 17:41 |
JayF | > 2022-08-04 09:14:00.316 | + echo 'PYTHON_TARGETS="python3_9"' | 17:41 |
JayF | that python target isn't supported anymore for general use | 17:41 |
JayF | this is just required maintenance, maybe even just bumping it to 3_10 might be enough | 17:41 |
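If it really is just the target bump, the change would presumably look something like this in the gentoo element's environment; the variable names follow dib's gentoo element documentation, but treat the exact values as assumptions:

    # select python targets that current gentoo ebuilds still provide
    export GENTOO_PYTHON_TARGETS="python3_11"
    export GENTOO_PYTHON_ACTIVE_VERSION="python3.11"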
fungi | yes, proof that these sorts of things require constant attention because, unlike lts distro versions, they change continually | 17:41 |
clarkb | JayF: yes, I understand that. A healthy response to that should be "we will do what is reasonable" and not "we need to do all these things regardless!". I feel like OpenDev vs OpenStack is a good illustration of the difference in those two approaches. OpenDev has pretty clearly and loudly said we'll stop doing things that don't have bodies behind them and we've shut stuff down and | 17:42 |
clarkb | it's been great for us. OpenStack meanwhile seems far less willing to trim dead weight and wants to hang onto as much as possible at the expense of those who have the time to help | 17:42
JayF | fungi: One of those things where likely, if we want to keep it working, we'll have to thin the DIB layer (e.g. specifying PYTHON_TARGETS specifically is not something that is recommended for general gentoo use, even though we should expose it for end-users who wanna set it) | 17:42 |
clarkb | note it still feels like opendev has more than it can handle. But the scope of that is far smaller today than before and it helps our sanity I think. At least it helps mine | 17:42 |
fungi | yes, someone focused on tuning dib to require less maintenance and attention would be one approach | 17:43 |
JayF | let me take a swing at this at some point (probably weekend?) I suspect it's low hanging for someone with gentoo experience | 17:43 |
JayF | I've wanted an excuse to get more involved with dib, I know enough about gentoo to fix this (and might even use it!) so I think you have a winner | 17:44
JayF | just probably not something I can charge the 9-5 with lol :D No gentoo in production at G-Research, believe it or not :P | 17:44 |
fungi | infra-root: https://review.opendev.org/869210 for upgrading the mailman 3 server should be ready to review. See my comment with the held node info if you want to check out the completed test imports of copies of production mailing lists | 17:55 |
fungi | once that merges, assuming no new and unforeseen problems arise, we can work on scheduling out the remaining migration windows | 17:55 |
clarkb | fungi: nice. I'll put it on the list to review. | 17:57 |
clarkb | For now though I'm going to take the secrets lock and add the new unused jwt secret for gitea so that we can upgrade gitea when ready | 17:57 |
fungi | sounds good | 17:59 |
clarkb | ok thats done | 18:00 |
clarkb | I'm going to try and pop out for a bike ride midday today though so unsure how around I'll be to upgrade gitea and/or mm3 today | 18:01 |
fungi | there's no rush. i should take today's favorable weather as an opportunity to catch up on overdue yardwork | 18:01 |
fungi | see diablo_rojo sing our praises (starting around 4 minutes in): https://www.youtube.com/watch?v=OlcIDv4iyy0 | 18:31 |
opendevreview | Harry Kominos proposed openstack/diskimage-builder master: feat: Add new fail2ban element https://review.opendev.org/c/openstack/diskimage-builder/+/892541 | 18:34
fungi | i'm in the process of cleaning up 72 new leaked images in rackspace's iad region, as well as 380 in dfw and 393 in ord | 19:08
fungi | 845 total leaked images to delete | 19:08 |
fungi | i'm injecting the requests with a 10-second delay between each in order to not raise their ire | 19:11 |
fungi | should require ~2.5 hours to complete depending on how slowly each call returns | 19:12 |
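The cleanup loop is essentially this shape; a sketch with the plain openstack CLI, where the cloud/region names are placeholders and the list of leaked UUIDs (normally derived by comparing glance against what nodepool knows about) is assumed to already be in a file:

    # leaked-iad.txt: one leaked image UUID per line for the region
    while read -r uuid; do
        openstack --os-cloud rax --os-region-name IAD image delete "$uuid"
        sleep 10   # pace the calls so as not to trip rate limits
    done < leaked-iad.txt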
frickler | hmm, I wonder where these 72 images came from. pretty sure I didn't have that many failed upload attempts | 20:23 |
fungi | likely from the brief period where we tried to reenable it before we reverted that again | 20:43 |
fungi | since i hadn't done any more cleanup at that point | 20:43 |
clarkb | fungi: in the mm3 change https://review.opendev.org/c/opendev/system-config/+/869210/8/docker/mailman/web/mailman-web/settings.py#56 might allow us to unfork that file in our ansible role. I can't remember if that was the only thing we had to fork for (git diff should clarify I guess). Note we should probably do that in a followup to the upgrade not as part of the upgrade | 21:30
clarkb | fungi: do you know where/what sets the new SMTP_HOST_* vars in https://review.opendev.org/c/opendev/system-config/+/869210/8/docker/mailman/core/docker-entrypoint.sh I'm wondering if we need to set that in our docker compose environment | 21:30 |
clarkb | I don't see it in the rest of the change | 21:30
fungi | i don't see that we set it anywhere | 21:31 |
clarkb | we may want to grep it in the upstream repo to see how they use those vars and decide if we need to set them. I think we may override the exim config anyway so it may not be important | 21:32 |
clarkb | but those were the only two things I saw as worth followup on. All the versions of software seem to match the upstream release announcement | 21:32 |
fungi | though it does optionally get consumed in settings.py | 21:32 |
fungi | it's referred to in the readmes but not set by anything | 21:33 |
fungi | i think it's there for cases where you want to set up outbound smtp auth | 21:34 |
fungi | then you can define those values in the dockerfile | 21:34 |
fungi | since we send out directly from the server's own mta it's not needed, we don't allow anyone besides localhost to relay through it to remote addresses | 21:35 |
clarkb | ya reading the readme that became more clear. I wonder if that empty string will be put in places and break outbound smtp though | 21:35 |
clarkb | fungi: did you test outbound smtp through mailman on your held node? if that works I think we can proceed as is | 21:35 |
fungi | i can try. should be able to send something through an ml on it and then check what's stuck in exim's deferral queue | 21:36 |
clarkb | ++ | 21:37 |
fungi | as long as exim is attempting remote delivery (it won't succeed because of the custom iptables block) then that's sufficient to confirm it is allowing mailman to send outbound | 21:37 |
clarkb | yup, since it's the mailman -> exim not exim -> world connection we're worried about here | 21:38
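Concretely, checking that on the held node comes down to roughly the following (log and queue paths assume the Debian/Ubuntu exim4 packaging):

    # send a message through one of the test lists first, then:
    mailq                                    # same as exim4 -bp; shows the deferred queue
    sudo tail -n 50 /var/log/exim4/mainlog   # confirms exim accepted the message from
                                             # mailman and is retrying the blocked remote delivery
    # the message-id there should match the one in mailman's smtp log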
fungi | clarkb: i think this adequately captures it: https://paste.opendev.org/show/bM2NnW6NZGDPpQ8swDZJ/ | 21:51 |
fungi | i trimmed out the similar delivery failures for other recipients subscribed to that ml | 21:52 |
clarkb | yup that message id seems to match in all three logs. I'll +2 the change in that case | 21:52 |
TheJulia | o/ Hey guys, you can reclaim that hold I have | 22:27 |
fungi | thanks TheJulia! did it help at all? | 22:27 |
TheJulia | Yeah, it helped me understand it wasn't the logging and actually helped me figure out what was wrong with the overall job config | 22:28
fungi | heh, that's so computers | 22:28 |
fungi | anyway, autohold has been cleaned up, thanks! | 22:28 |
JayF | fungi: TheJulia was running facefirst into the OVN-not-respecting-MTU issue too :( | 22:33 |
JayF | OpenStackers are of one mind even when we aren't working directly together, it seems ;) | 22:33 |
TheJulia | indeed.... I've not flipped the table yet today, but the urge is super strong | 22:34 |
* TheJulia thinks rocket motors | 22:34 | |
clarkb | I self medicate with bike rides outside | 22:34 |
fungi | yeah, thankfully it's not one of those table-flipping days for me, just trying to wrangle my jungle of a yard into some semblance of not-getting-fined-by-the-town | 22:35 |
JayF | I've had like, 4 days in a row where I've had an item I don't wanna do on my todo list, towards the end. Finally think I've run myself out of things ahead of it :) | 22:35 |
fungi | sorry to hear that. can i help by giving you more things to do instead? ;) | 22:36 |
clarkb | I rode past https://www.digitalrealty.com/data-centers/americas/portland/pdx12 and its sibling PDX11 though and suddenly I was reminded computers exist. Those datacenters are absolutely massive too | 22:36
JayF | fungi: that's what you did yesterday :) https://lists.openstack.org/pipermail/openstack-discuss/2023-August/034854.html | 22:37 |
TheJulia | clarkb: I may just go chill on the recumbent bike for a while and play Inside Job on the tv | 22:37
clarkb | I really enjoy it particularly this time of year. Though it's probably far too hot in your area to be outside for long right now | 22:38
clarkb | so ++ to inside job | 22:38 |
JayF | Yeah I'm mostly done for the day too, my chill time is usually my porch swing outside. Sadly about to be in the part of the year where "outside = monsoon" | 22:38 |
JayF | oh TheJulia speaking of, you all avoid any damage? | 22:38 |
TheJulia | Yeah, house is mostly untouched.... We have no road/path to get into the airport | 22:38 |
TheJulia | or the preferred supermarket | 22:39 |
JayF | I saw photos where it didn't even look like a flood, it just looked like feet of mud in some places | 22:39 |
TheJulia | the roads are like... gone... and a train buried in mud and everything that derailed | 22:39
JayF | That's probably going to take a long time to get fixed, too :( | 22:40 |
TheJulia | yeah | 22:40 |
TheJulia | we don't know if the roads up to our mountain hideaway are washed out yet either, I'll find out friday | 22:41 |
clarkb | can you still escape to LA or is that all messed up too? | 22:41 |
TheJulia | dunno about LA, from what I've gathered the peaks nearby sheltered and captured a lot of the rain | 22:41 |
TheJulia | well, sheltered areas west of us | 22:41 |
TheJulia | we could likely get to LA, but that is like... 2-4 hours of driving depending on the day | 22:41 |
TheJulia | otherwise I'd go see Robert Picardo sing on Saturday | 22:42 |
JayF | I suspect you all are well supplied | 22:42 |
* TheJulia wonders if there is a mobile emitter.... | 22:42 | |
clarkb | ya we don't venture to seattle super often and it's a similar distance time wise | 22:43
clarkb | but if I really had too I've always thought I could fly out of seatac if necessary | 22:43 |
JayF | clarkb: I feel like we might have had this conversation before, but I didn't know you were up here | 22:43 |
TheJulia | heh | 22:44 |
TheJulia | deja vu | 22:44 |
clarkb | JayF: I'm in the portland area | 22:45 |
JayF | clarkb: I'm just north of JBLM in Lakewood. | 22:45 |
JayF | clarkb: so if you ever do come thru to Seattle and want to say hello, I have a smoker and I know how to use it to make tasty bbq lunches (assuming you eat meat) | 22:46 |
clarkb | I do and that would be awesome. Don't currently have a seattle trip on the calendar but we tend to make it there a couple times a year to visit friends and family | 22:46 |
clarkb | the last time we went we decided to do it all in one day because seattle hotel prices are absurd now | 22:47 |
JayF | Just give me a bit of heads up; i'm about an hour south of Seattle | 22:47 |
clarkb | even in southcenter it was like $350/night + parking | 22:47 |
JayF | that is pricey; I can't believe it'd still be that expensive if you went a little further south to Kent though | 22:47 |
fungi | feet of mud (and sand, and seagrass) is what post-flood cleanup looks like out here, fwiw | 22:48 |
fungi | so doesn't sound that odd | 22:48 |
clarkb | I always enjoy seeing the grass stuck in the chainlink fence when the creek near me floods | 22:48 |
clarkb | "the water got this high" | 22:48 |
TheJulia | fungi: We lack boats here... that whole "it is normally just sand as far as the eye can see!" thing | 22:48 |
JayF | fungi: in .nc.us, I'm used to seeing it more as sand than like MUD-MUD, if that makes sense? I think I'm drawing a distinction (where none may exist?) between "wet sand" and "mud from soil/ground" | 22:48 |
fungi | whatever dies and sinks to the bottom of the marsh gets picked up by the tide and dumped in the street (and in our house) | 22:48 |
fungi | you'd think "how can the marsh floor get dumped *inside* your house?" but that's just it. the first thing the tide does is blow out all the doors and windows on the bottom floor so it has better access | 22:50 |
TheJulia | fluids dynamics! | 22:50 |
TheJulia | and well, water doesn't pressurize | 22:50 |
* fungi still has some doors that need re-hanging for the past 5 years | 22:50 | |
TheJulia | fun | 22:50 |
JayF | I actually have a GC coming tomorrow to quote water damage repair. That all came from the sky down into the house though -- if my house floods half of western washington will be underwater | 22:52 |
TheJulia | ... sigh | 22:52 |
TheJulia | I need to get someone out for my house, but it would almost be easier to hire a male friend to make that phone call with the way some folks are behaving these days | 22:53 |
fungi | yeah, that seems like an unnecessary amount of added challenge | 22:53 |
TheJulia | only took ~4 months to find someone to trim a tree | 22:54 |
JayF | I found a local place a few years ago, GC owned/operated by a woman, all the office staff are women. It's such a nice change. Good communication without it being corporatey. | 22:54 |
fungi | ugh | 22:54 |
fungi | wish we had contractors like that out here, yeah | 22:54 |
JayF | Being able to email back and forth with the GC instead of it just being some dude named Ted who stops by in a beat up truck every now and then to hit something with a hammer | 22:54 |
JayF | so nice | 22:54 |
TheJulia | omg that sounds heavenly | 22:55 |
clarkb | JayF: not sure if you've been but hood canal up to port townsend/port angeles is one of my favorite places to explore | 22:55
fungi | ted was just here this morning scraping down the awful popcorn ceiling in our guestroom. he'll be back tomorrow to hit it with hammers though | 22:55 |
JayF | clarkb: I've not, I'll add that to the list. We (my wife and I) can't travel much together right now because of our pet situation. There's a lot of places in WA we haven't seen yet. | 22:55 |
JayF | We did make it to ocean shores | 22:55 |
clarkb | quinault is also amazing on the sound end of the olympics | 22:55
clarkb | *south end | 22:56 |
clarkb | I've never made it to vampire country though | 22:56 |
JayF | vampire country? | 22:57 |
JayF | really the only things I've seen here local are Seattle-things and a day trip to Aberdeen then Ocean Shores. We then got a doggo who is scared of approximately everything so we can't leave him with anyone :( | 22:57 |
* TheJulia raises an eyebrow and wonders if this is where she should be living | 22:57 | |
clarkb | Forks where twilight is set | 22:58 |
TheJulia | oh | 22:58 |
TheJulia | nvmd | 22:58 |
fungi | next to sasquatch country? | 22:58 |
JayF | TheJulia: I've told you multiple times that .wa.us is the place to be :D | 22:58 |
JayF | TheJulia: no hurricanes, guaranteed[1] | 22:58 |
JayF | 1: offer not valid in climate change situations | 22:58 |
TheJulia | heh | 22:58 |
opendevreview | Jay Faulkner proposed openstack/diskimage-builder master: DNM: Testing Gentoo CI job against merged-usr profile https://review.opendev.org/c/openstack/diskimage-builder/+/892627 | 23:10 |
JayF | I found a (potentially) easy/obvious break in the gentoo build, maybe I'll get lucky and pluck a low hanging fruit :D | 23:10 |
clarkb | the testing of the very foundation of the distro images is pretty good | 23:25 |
clarkb | so hopefully you get workable results that you can refine off of | 23:25 |
JayF | it's one of those things where gentoo is a rolling release distro, but as long as you are careful not to version-lock anything it should be "stable" in terms of interface for image building, with the small exception of news items | 23:26 |
JayF | if I can get this working, I'll just make it a point to check these builds anytime I get a news item, that'll give plenty of warning (I run ~arch, basically the equivalent of "testing" vs arch which is "stable") | 23:27 |
JayF | in this case: new systemd released, and I think is stable, that requires merged-usr, which means you gotta use that profile (they are about to release 23.0 profiles soon which will fix that awkwardness) | 23:27 |
JayF | in fact... profiles are probably the closest analogue in gentoo to release | 23:28 |
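For context, switching a gentoo build (or system) onto the merged-usr profile is just a profile selection; a sketch, with the exact profile name being an assumption since it varies by architecture and profile version:

    eselect profile list
    # pick the systemd + merged-usr entry for the build's architecture
    sudo eselect profile set default/linux/amd64/17.1/systemd/merged-usr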
opendevreview | Jay Faulkner proposed openstack/diskimage-builder master: DNM: Testing Gentoo CI job against merged-usr profile https://review.opendev.org/c/openstack/diskimage-builder/+/892627 | 23:28 |
fungi | yeah, my main concern with debian is that testing isn't really a rolling release (it pauses for freezes and such) while unstable isn't always guaranteed to be installable due to dependency transitions so may result in extended periods of unbuildable images | 23:29 |
fungi | i personally use unstable on most of my systems, but i also have intimate familiarity with how to un-break it. it doesn't seem suitable for ci jobs | 23:30 |
JayF | Basically the rule with gentoo is, if you run ~arch you're going to end up with weird breakages from time to time that usually are resolved by ignoring your package manager for 48 hours and retrying an upgrade (or fixing/reporting the bug)... for arch, it's extremely rare for it to be meaningfully broken | 23:35 |
JayF | so I hope I can get it working and monitor it | 23:36 |
opendevreview | Jay Faulkner proposed openstack/diskimage-builder master: DNM: Testing Gentoo CI job against merged-usr profile https://review.opendev.org/c/openstack/diskimage-builder/+/892627 | 23:44 |
fungi | infra-root: rackspace leaked images have been cleaned up in all three regions now | 23:46 |
corvus | fungi: thanks! if it happens again, ping me and i'll take a look with you | 23:55 |
*** dtantsur_ is now known as dtantsur | 23:58 |