corvus | launcher behavior with the periodic jobs looks ok | 02:35 |
corvus | looks like flex is operating again | 13:32 |
hashar | fungi: thank you for the release of git-review 2.5.0 :] | 13:40 |
fungi | hashar: you're welcome! | 13:51 |
fungi | corvus: i got a ticket in my personal rackspace account yesterday about network maintenance for flex sjc3, but it's not until wednesday next week | 13:54 |
fungi | i didn't see any notices for yesterday | 13:54 |
fungi | it does look like they've started listing flex on https://rackspace.service-now.com/system_status but the history there is all green back through the weekend | 13:57 |
fungi | looks like they're listing a new (iad3 maybe?) pop too | 13:58 |
fungi | clarkb: corvus: yeah, i think it's fine to do the at-rest checksum for now, odds are it won't be that much extra time and depending on whether i/o or cpu are the bottleneck we could maybe look at something parallelizable e.g. simd acceleration or something | 14:05 |
fungi | but first let's see how the simple implementation works out in practice | 14:05 |
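(As a rough illustration of the at-rest checksum idea under discussion: a minimal sketch assuming a plain single-threaded hashlib pass over the image file; the function name and chunk size are illustrative, not the actual zuul-launcher implementation.)

```python
import hashlib

def image_checksum(path, chunk_size=1024 * 1024):
    """Compute a SHA-256 checksum of an on-disk image in fixed-size chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Reading in 1 MiB chunks keeps memory flat; whether this is fast
        # enough depends on whether disk I/O or the hash is the bottleneck.
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```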
fungi | also the change failed in the gate | 14:05 |
fungi | nox-py311 and 312 both hitting timeouts in different tests (test_launcher_image_expire and test_launcher_crashed_upload) | 14:08 |
corvus | yeah, i expect more tests need updating for the new fixtures; it's #2 on my list | 14:10 |
fungi | i'm going to disappear in a bit to run an errand and grab lunch while i'm out, probably mia 14:30-16:00 utc or thereabouts | 14:12 |
corvus | nodepool has seen a request for 'openEuler-22-03-LTS' which is an image not handled by zuul-launcher. how should we proceed? | 14:17 |
fungi | what was the requesting project? | 14:22 |
fungi | something cfn/.* presumably? | 14:22 |
corvus | 2025-07-10 13:33:08,601 DEBUG zuul.Pipeline.openstack.check: [e: a528ca3243c040c499eb48fae80de896] Build <Build 919aec1210f54070a577c8e40ce12b95 of devstack-platform-openEuler-22.03-ovn-source voting:False> of <QueueItem d40c654da20f4076a285ed78ee23ced7 live for [<Change 0x7f8520131290 openstack/devstack 954606,1>] in check> completed | 14:24 |
corvus | i'm guessing that one | 14:24 |
corvus | https://zuul.opendev.org/t/openstack/job/devstack-platform-openEuler-22.03-ovn-source | 14:25 |
corvus | yep | 14:25 |
fungi | aha, okay i'll let the openstack qa folks know, but since it's non-voting i bet it's been broken for a long time | 14:25 |
corvus | that has definitions on stable/2024.1 and unmaintained/2023.1 | 14:25 |
corvus | it actually has some recent successes | 14:26 |
fungi | ah, so already removed in later branches anyway | 14:26 |
corvus | yep that's what it looks like | 14:27 |
corvus | so i think the message is something like "this is running on stable/2024.1 and unmaintained/2023.1 but it's about to stop working because no one is maintaining the openeuler image in opendev, so if it's important to keep running, talk to #opendev about volunteering to maintain that image or if it's not important, you may want to go ahead and remove the job" | 14:29 |
corvus | then my next question is, how long do we leave nodepool running for that? | 14:30 |
fungi | yeah, i actually asked them in #openstack-qa if there would be objections to me pushing a couple of patches to just remove it from the pipelines | 14:30 |
fungi | i'll do that after i get back from lunch | 14:30 |
fungi | anyway, headed out now | 14:30 |
corvus | cool. also, in that message we sent out a couple months ago, we told everyone that openeuler was going to be removed anyway | 14:31 |
corvus | maybe we can shutdown nodepool later today then. also, in our email, we said that people would start getting node_errors for unsupported images "tentatively july 1" | 14:32 |
Clark[m] | I'm surprised the devstack job can pass but maybe they updated it to not use our mirror | 14:32 |
corvus | looking at the current behavior, i think there's still a tweak we need to do to avoid accepting requests too early: https://review.opendev.org/954618 | 14:39 |
clarkb | corvus: +2 from me | 14:50 |
corvus | clarkb: what was the blocker for upgrading grafana? https://review.opendev.org/954000 | 14:58 |
clarkb | corvus: oh I meant to followup on that. The screenshots have a lot of empty graphs. I wanted to hold a node and see if that is a selenium timing thing or an actual problem with the upgrade | 14:59 |
clarkb | https://e36203e60051d918bd96-b4b1a7d89013756684de846d3b70c9e9.ssl.cf2.rackcdn.com/openstack/946744e8dd314260a4b13c16f57f71b6/bridge99.opendev.org/screenshots/zuul-status.png for example | 14:59 |
clarkb | I'll work on that now | 15:00 |
corvus | ah. i'm looking at the graphs, and i think it's a selenium thing | 15:00 |
corvus | like, grafana deciding that the other graphs aren't in view, so it doesn't render them | 15:00 |
clarkb | ya I think that is likely as I seem to recall grafana puts an annotation on the graphs that are broken but it should be quick to double check | 15:01 |
opendevreview | Clark Boylan proposed opendev/system-config master: DNM Force failure in grafana testing to hold a node https://review.opendev.org/c/opendev/system-config/+/954624 | 15:04 |
clarkb | ok a hold is in place for that now, so as soon as grafana is running we should be able to load that service locally and check the graphs directly | 15:07 |
clarkb | corvus: https://158.69.72.204/d/21a6e53ea4/zuul-status?orgId=1&from=now-6h&to=now&timezone=utc the graphs look good to me. I think you're right and we need to scroll to the bottom of the page to get the screenshots to work properly | 15:27 |
clarkb | I've +2'd the upgrade change. Maybe see if fungi or frickler want to review it in the next little bit and approve otherwise? | 15:29 |
opendevreview | Clark Boylan proposed opendev/system-config master: Scroll grafana pages to force all graphs to load https://review.opendev.org/c/opendev/system-config/+/954624 | 15:36 |
clarkb | lets see if that fixes the screenshots in the job | 15:36 |
fungi | looks like our root alias did receive network maintenance ticket notifications from rackspace too | 16:22 |
fungi | presumably similar to the one i got at my personal account, though i haven't confirmed yet | 16:23 |
clarkb | ha I think the scroll script worked but not completely. https://efb4199e6af2b2032769-ac03866ed76044a5727f648521330347.ssl.cf5.rackcdn.com/openstack/17978b012dfd4dedb0b858a89ba9118e/bridge99.opendev.org/screenshots/zuul-status.png | 16:24 |
clarkb | it's changing the window location, not scrolling past each graph as it goes. So the top and bottom render and the middle does not. I'll have to see about doing a proper 100-pixel-at-a-time scroll or something | 16:25 |
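(A rough sketch of the kind of stepwise scroll being described, assuming a Selenium webdriver handle named driver; the names and step size are illustrative, not the actual job code.)

```python
import time

def scroll_in_steps(driver, step_px=100, pause=0.1):
    """Scroll the viewport down a little at a time so lazily rendered
    Grafana panels come into view before the screenshot is taken."""
    height = driver.execute_script("return document.body.scrollHeight")
    position = 0
    while position < height:
        driver.execute_script("window.scrollBy(0, arguments[0]);", step_px)
        position += step_px
        time.sleep(pause)  # give each panel a moment to render
        # The page may grow as panels load, so refresh the total height.
        height = driver.execute_script("return document.body.scrollHeight")
```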
fungi | makes sense, but also i wish browser-based applications like grafana had a get var you could set to just render everything immediately | 16:26 |
clarkb | ya that would be nice | 16:26 |
fungi | ?imnotreallyabrowser=true | 16:26 |
corvus | https://github.com/grafana/grafana/issues/17257#issuecomment-2736070206 | 16:28 |
corvus | apparently we can set that on the dashboard, but that would affect real users too | 16:28 |
corvus | (maybe we want that? i dunno) | 16:29 |
fungi | implemented as of a few months ago i guess | 16:29 |
fungi | https://github.com/grafana/grafana/issues/105656 seems to be a followup | 16:30 |
stephenfin | clarkb: sorry, I forgot to open HexChat today. The pbr series should be green now. I had (a) included mock in five.py despite it not being a runtime dependency and (b) not included __future__.absolute_import in the _compat modules, causing them to pick up the wrong packaging. Both addressed | 16:31 |
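(For context on (b): under Python 2's implicit relative imports, `import packaging` inside a package binds to a sibling module named packaging if one exists, rather than the third-party library; `from __future__ import absolute_import` forces the absolute lookup. A hypothetical _compat module illustrating the semantics, not pbr's actual code:)

```python
# Hypothetical layout:
#   pbr/_compat/__init__.py
#   pbr/_compat/packaging.py   <- sibling module that would shadow the library
#   pbr/_compat/versions.py    <- this file

# Without this, Python 2 resolves "packaging" to the sibling module above.
from __future__ import absolute_import

from packaging import version  # now always the PyPI "packaging" distribution

def parse(version_string):
    return version.parse(version_string)
```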
stephenfin | fungi: clarkb: Also, I know you use nox for zuul and other opendev projects now, but https://github.com/tox-dev/tox/pull/3556 just merged. We probably want to migrate to that everywhere once it's released | 16:32 |
fungi | looks like the site-wide grafana toggle was added as far back as 11.2.0 | 16:32 |
fungi | stephenfin: very cool! | 16:33 |
stephenfin | in fact, it's probably something worth bot proposing, assuming the constraints update scripts are flexible enough to do that. I'll see if elodilles or tonyb have any ideas tomorrow | 16:34 |
fungi | though in zuul jobs with tox-siblings we separately preinstall the dependencies i think so that we can also install some deps from git sources when used in cross-repository testing scenarios | 16:34 |
fungi | but for local testing it sounds like a real win | 16:35 |
fungi | https://opendev.org/zuul/zuul-jobs/src/commit/8110acc/roles/tox/library/tox_install_sibling_packages.py does the heavy lifting there | 16:37 |
* stephenfin looks | 16:37 | |
fungi | it's possible it just magically works the way we're invoking things there too | 16:38 |
fungi | but the takeaway is that it's not necessarily *just* the project being tested that needs to be excluded from constraints | 16:39 |
Clark[m] | stephenfin: I see you rechecked the compat change. It failed building a wheel for PBR but that was as far as I got in understanding the failure | 16:39 |
Clark[m] | If it fails again on recheck I guess we need to dig deeper. But if tests for child changes were happy it probably was just a fluke | 16:40 |
stephenfin | that's my assumption, based on the stack above it | 16:40 |
opendevreview | Merged opendev/system-config master: Upgrade grafana to 12.0.2 https://review.opendev.org/c/opendev/system-config/+/954000 | 16:43 |
Clark[m] | fungi: stephenfin: I think sibling installs work by first running tox no tests to install everything according to the tox rules. Then it does a package listing and any that match the siblings list get installed from source. Then it runs tox without the install step to run the tests | 16:45 |
stephenfin | fungi: I think those things are orthogonal? If understand that correctly, we let tox create the venv, then overwrite what's installed? | 16:45 |
stephenfin | jinx | 16:45 |
Clark[m] | So that should work regardless of constraints in tox. In fact most Openstack tox configs already apply constraints just manually in an overridden install command | 16:45 |
stephenfin | This constraints feature lets us drop the 'deps' section entirely in most cases, and instead rely on tox pull and installing package dependencies for us, with '[testenv] extras' to ensure we install test or doc dependencies | 16:46 |
stephenfin | Eventually, I would like to see us move away from requirements.txt files entirely and put everything in pyproject.toml, but there's a lot more tooling to be reworked before we can do that | 16:47 |
stephenfin | s/tox pull/tox pulling/ | 16:47 |
fungi | yeah, so it should be compatible with that new mechanism as well | 16:50 |
fungi | stephenfin: i don't know of anything in particular stopping us from moving off requirements.txt files, we've already done that for several opendev tools | 16:50 |
fungi | stephenfin: for example https://opendev.org/opendev/bindep/src/branch/master/pyproject.toml#L31-L36 | 16:51 |
stephenfin | bindep doesn't use constraints I assume? | 16:51 |
fungi | we also replaced test-requirements.txt and similar lists but that's more controversial for now, see the comments further down about pep 735 support | 16:52 |
stephenfin | that new tox feature was the main thing holding up https://review.opendev.org/c/openstack/openstacksdk/+/953484/ | 16:52 |
fungi | yeah, good point, constraints throws a wrench into that | 16:52 |
fungi | or did anyway | 16:52 |
stephenfin | I was also under the assumption that the check-requirements job was checking requirements.txt files, but maybe it uses packaging | 16:52 |
stephenfin | (the lib) | 16:52 |
fungi | well, opendev also doesn't rely on central requirements coordination like openstack dows | 16:53 |
fungi | s/dows/does/ | 16:53 |
stephenfin | right | 16:53 |
fungi | so yes the conventions that openstack/requirements automation relies on may need to get adjusted | 16:54 |
stephenfin | gtema said the same thing about codegenerator, but that also doesn't use constraints or the broader requirements tooling | 16:54 |
fungi | but purely from a pbr-using project perspective, it's working well | 16:56 |
opendevreview | Clark Boylan proposed opendev/system-config master: Scroll grafana pages to force all graphs to load https://review.opendev.org/c/opendev/system-config/+/954624 | 16:58 |
clarkb | setting behavior: "smooth" might be sufficient for getting it to actually scroll past each graph? If that works then I don't think we need to bother with changing dashboard or server settings | 16:58 |
opendevreview | Tristan Cacqueray proposed zuul/zuul-jobs master: Ignore .htaccess files in the zuul-manifest https://review.opendev.org/c/zuul/zuul-jobs/+/954644 | 17:10 |
fungi | anyway, 2025-07-16 18:00-22:00 utc looks like a planned network outage for rackspace flex sjc3 in case we want to do anything in preparation for that | 17:19 |
fungi | though hard to be sure those are the times since they just list them in "pst" so i'm not sure if they really mean pdt | 17:19 |
fungi | it's during a time of year when most/all locales observing pacific time use a daylight savings offset | 17:20 |
fungi | really wish providers would just use utc instead, it's way less ambiguous in situations like this | 17:21 |
fungi | dan_with: do you happen to know if "pst" maintenance times for rackspace are really pacific daylight time this part of the year? | 17:21 |
fungi | cardoe: ^ you might know too | 17:22 |
corvus | https://grafana.opendev.org/d/0172e0bb72/zuul-launcher3a-rackspace-flex?orgId=1&from=now-6h&to=now&timezone=utc&var-region=$__all | 17:22 |
corvus | do a shift-reload on that... grafana has upgraded and the gauges are correct now! | 17:23 |
clarkb | looks great | 17:23 |
fungi | very nice! | 17:23 |
fungi | superimposing rested over the quota really tells a far better story | 17:23 |
fungi | er, requested | 17:24 |
corvus | i notice that our ram limit (which is, unfortunately, not easily viewable on that dashboard, unless you do math) is lower in sjc3 than dfw3. that's why we're hitting a limit of like 35 instances in sjc3, under our instance limit | 17:24 |
clarkb | hrm I want to say that the memory limit is what determined the original 50 max nodes limit in nodepool | 17:25 |
clarkb | I wonder if they changed that on the cloud side later? | 17:25 |
corvus | it's something on the order of 256gb in sjc3 and 512gb in dfw3 | 17:26 |
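(For rough context, assuming the standard 8 GB flavor used for most jobs: 256 GB / 8 GB ≈ 32 concurrent nodes in sjc3, which lines up with the ~35 instance ceiling being observed, while 512 GB in dfw3 gives roughly twice that headroom.)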
opendevreview | Merged opendev/git-review master: Clean up all references to branchauthor after removal of usage https://review.opendev.org/c/opendev/git-review/+/951467 | 17:31 |
fungi | i've proposed devstack changes removing openeuler jobs and nodeset from old branches | 17:34 |
corvus | fungi: thanks! | 17:34 |
fungi | response in their irc channel was generally favorable | 17:35 |
corvus | i have restarted both zuul launchers and both zuul web servers | 17:37 |
corvus | https://zuul.opendev.org/t/openstack/nodeset-requests and https://zuul.opendev.org/t/openstack/nodes look a little different now if you want to take a look | 17:37 |
corvus | hrm, someone is using an ubuntu-jammy-32GB label... | 17:37 |
corvus | also the images page is updated https://zuul.opendev.org/t/openstack/image/ubuntu-noble | 17:44 |
fungi | oh, that's all awesome. i hadn't even been aware of the images page until now | 17:45 |
clarkb | corvus: maybe openstack helm using the large node? | 17:52 |
clarkb | I seem to recall they complained in the past when they would get node failures | 17:52 |
clarkb | and were resistant to migrating jobs to more, smaller nodes as a workaround | 17:53 |
clarkb | I think the grafana scroll hack is working now. Except the zuul-status screenshot wants a login. Other dashboards appear to have rendered all the graphs. I guess I recheck that to see if the login thing is consistent | 17:58 |
clarkb | stephenfin: looks like the pbr-installation-openstack-jammy job is going to fail again | 17:59 |
clarkb | we can set up a hold for that too if we think it will help | 18:01 |
clarkb | I'll clean up my hold for grafana now that the upgrade is done | 18:02 |
corvus | there are a couple of ubuntu-jammy-arm64 ready nodes (likely orphaned from dequeued items) that aren't being used. i think the most recent update to delay accepting requests may help with that in the future (and these nodes predate that), so i'm going to disregard it for now. | 18:18 |
corvus | actually, i will manually set them to "used" so they get deleted and the quota recovered | 18:18 |
clarkb | corvus: the node timeout would eventually clear them out after ~8 hours if left alone? | 18:20 |
corvus | yeah, or they could get picked up by a future request once we burn through the backlog | 18:20 |
corvus | this is a really good graph of the pressure that the arm provider is under: https://grafana.opendev.org/d/2c6f499090/zuul-launcher3a-osuosl?orgId=1&from=2025-07-09T18:24:08.083Z&to=2025-07-10T18:19:20.302Z&timezone=utc&var-region=$__all | 18:21 |
corvus | unfortunately, once my fix for delaying requests takes effect, we won't see graphs like that anymore, because the backlog will be held in the request stage, not the node stage | 18:22 |
corvus | (we should probably try to make a requests-per-label graph to see that in the future) | 18:22 |
corvus | but at least right now, you can see the magnitude of the backlog for arm nodes pretty easily | 18:23 |
clarkb | looks like a fairly diverse crowd asking for the arm resources too | 18:27 |
clarkb | kolla, nova, swift, cinder, neutron | 18:27 |
clarkb | even system-config | 18:27 |
clarkb | https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_55c/openstack/55c1f535872345af985a260631fe8e62/bridge99.opendev.org/screenshots/zuul-status.png this screenshot looks a lot better if anyone wants to review https://review.opendev.org/c/opendev/system-config/+/954624 | 18:28 |
clarkb | fungi: looking at https://review.opendev.org/c/zuul/zuul-jobs/+/954644 which stops adding .htaccess files to the zuul manifest (so those files won't show up in the zuul dashboard) I think this is ok but I seem to recall some openstack doc jobs do want to upload .htaccess files. I don't think this affects the upload of the files, just what the zuul dashboard tries to expose | 18:35 |
clarkb | ya looking at our base job the manifest generation and upload steps are distinct so I don't think the openstack docs use cases are affected. That said maybe we should be generating manifests with a complete listing for accuracy. I'll leave a review | 18:38 |
fungi | thanks, commented | 18:39 |
clarkb | ya I think this boils down to "is it better to be accurate in listing things for completeness (eg to avoid confusion about whether or not the file was present and uploaded) or to avoid potential subsequent errors through user interaction" | 18:41 |
corvus | yes i don't think we should lose fidelity there | 18:57 |
frickler | did I miss something on my trixie change? https://zuul.opendev.org/t/openstack/image/debian-trixie still looks empty. I can check myself tomorrow if needed | 19:42 |
fungi | just saw a node_failure result on https://zuul.opendev.org/t/openstack/build/4393463ace0a4c42b452bc50174c3915 | 19:45 |
fungi | i guess we're not doing fedora images, no? | 19:45 |
Clark[m] | We deleted fedora a long time ago | 19:47 |
keekz | @fungi i looked up that rackspace change on 2025-07-16 and 2025-07-16 18:00-22:00 utc is correct. i don't know how / where they got pacific time from... my teams have always used UTC. | 19:50 |
fungi | thanks keekz. it's phrased in pst in the tickets | 19:51 |
fungi | nothing you control i'm sure | 19:51 |
fungi | i guess they figure people using the san jose region probably all live in california and would appreciate a personal touch with tz conversions ;) | 19:51 |
keekz | yeah, i see that. internally the change also has central US time (where headquarters is at). no clue why they're using pacific time ever for anything :) | 19:52 |
keekz | service-now also doesn't distinguish any time zone for the maintenance. it just gives a time with no zone 🤷 i think it depends on who created the change because i've always used UTC myself | 19:53 |
fungi | but thanks for confirming my assumption that they meant pacific daylight time for the offset | 19:53 |
fungi | i guess "our users might also appreciate utc times because they're not necessarily local to the data centers" would be my main feedback | 19:54 |
fungi | next time i close out one of those notification tickets i'll try to remember to stick that in feedback comment | 19:55 |
fungi | personally, i've got (legacy, not flex) servers in their sydney region too, and that's not a tz i can convert in my head and track annual daylight time shifts for | 19:57 |
clarkb | frickler: I think some of the other images were missing necessary metadata on the jobs to indicate which images they build? Not sure if that is the case for trixie | 19:57 |
corvus | the most recent periodic image buildset failed, so there were no uploads from that (it's all-or-none...we may want to revisit that choice, but that's what it is now) | 20:00 |
corvus | but this gate build should have at least caused an upload: https://zuul.opendev.org/t/opendev/buildset/4d5004904115442db010b4f6f58a21df | 20:01 |
corvus | unfortunately, that was 2 days ago, and i deleted the servers that would have handled that, so we don't have those logs | 20:01 |
corvus | i'll trigger a manual build of trixie to get more logs | 20:02 |
corvus | or... maybe not.... because the build trigger returned a 404 | 20:02 |
opendevreview | James E. Blair proposed opendev/zuul-providers master: Add debian-trixie image to providers https://review.opendev.org/c/opendev/zuul-providers/+/954658 | 20:05 |
corvus | frickler: ^ i think that's what's missing. you will also need to add labels for it to the providers. | 20:05 |
corvus | i suspect if we had the logs from the launcher they would have said that there were no uploads so it deleted the artifacts. | 20:06 |
opendevreview | Merged opendev/zuul-providers master: Add debian-trixie image to providers https://review.opendev.org/c/opendev/zuul-providers/+/954658 | 20:17 |
corvus | btw, a clue to that was visiting this page: https://zuul.opendev.org/t/opendev/provider/rax-dfw-main | 20:28 |
corvus | (debian-trixie did not appear before, but does now) | 20:28 |
opendevreview | Clark Boylan proposed opendev/infra-specs master: Update existing specs to match the current reality https://review.opendev.org/c/opendev/infra-specs/+/954662 | 20:31 |
clarkb | infra-root ^ I applied some best judgement to the lists there in order to update the state of things | 20:31 |
clarkb | figured I should do that before writing a new spec for opendev comms in matrix | 20:31 |
clarkb | I did leave the storyboard specs as is as I'm not sure if we want to consider them all abandoned or worthy of effort or what | 20:31 |
fungi | looks like the specs repo has some docs building bitrot too | 20:39 |
clarkb | I should've expected that. I'll look into fixing that too | 20:41 |
opendevreview | Clark Boylan proposed opendev/infra-specs master: Update existing specs to match the current reality https://review.opendev.org/c/opendev/infra-specs/+/954662 | 21:30 |
opendevreview | Clark Boylan proposed opendev/infra-specs master: Make infra specs buildable again https://review.opendev.org/c/opendev/infra-specs/+/954670 | 21:30 |
clarkb | something like that maybe | 21:30 |
clarkb | getting it to look not terrible was more difficult than I expected | 21:33 |
opendevreview | Merged opendev/system-config master: Update sync-project-config to delete https://review.opendev.org/c/opendev/system-config/+/953999 | 21:34 |
clarkb | I wonder if we should push a trivial grafana and/or gerrit acl config update that simply renames a file to see that ^ is happy | 21:35 |
clarkb | the deploy jobs for that are going to run against grafana and review and zuul fwiw | 21:35 |
fungi | i'll find out if the foundation folks think the openinfra acls need dco enforcement, in which case i can push a quick acl patch for that | 21:35 |
clarkb | fungi: the key bit is that the acl file also be renamed so that the rsync delete behavior gets exercised | 21:36 |
clarkb | but ya should be able to put those two things into one change and not add a bunch of noise to the git log | 21:36 |
fungi | oh | 21:37 |
fungi | sure | 21:37 |
clarkb | I don't expect it to be an issue, but rather than discover a problem years(?) later like corvus did with the grafyaml files, it might be good to double check upfront | 21:38 |
corvus | i figured now is an okay time to merge that... getting later in the week but not too late :) | 21:40 |
clarkb | yup. Though looking closer the infra-prod-service-review job doesn't exercise it at all. It's the manage-projects job that does. The infra-prod-service-grafana job should've exercised it though | 21:40 |
clarkb | just in a noop manner at the moment | 21:41 |
clarkb | oh the gerrit role also syncs project-config but I'm not sure if it strictly needs to. I think it is manage-projects that uses the data | 21:51 |
clarkb | clarkb@review03:/opt/project-config/gerrit/acls$ find ./ -name '*.config' | wc -l returns 737 which matches the count I have locally | 21:52 |
clarkb | so ya short of actually creating a file rename/delete situation I'm not sure what else I can check. This seems to be correct until we introduce that situation | 21:53 |
opendevreview | Jeremy Stanley proposed openstack/project-config master: DCO enforcement for all OpenInfra Foundation repos https://review.opendev.org/c/openstack/project-config/+/954672 | 22:14 |
clarkb | fungi: one thing to note is that manage projects runs have been very slow due to overloaded gitea :/ | 22:28 |
clarkb | things look okish right now though. Or at least less bad than when I checked yesterday | 22:28 |
fungi | clarkb: one comment on the specs build fixup change, otherwise lgtm | 22:35 |
opendevreview | Clark Boylan proposed opendev/infra-specs master: Make infra specs buildable again https://review.opendev.org/c/opendev/infra-specs/+/954670 | 22:37 |
opendevreview | Clark Boylan proposed opendev/infra-specs master: Update existing specs to match the current reality https://review.opendev.org/c/opendev/infra-specs/+/954662 | 22:37 |
clarkb | fungi: good point ^ that should handle it | 22:37 |
fungi | oh, also in the preview i noticed at the bottom it says "OpenStack Foundation" for the copyright | 22:38 |
fungi | not sure if that's worth adjusting now or save for later | 22:38 |
clarkb | fungi: yup I didn't want to change that since it's a copyright line. I figured that deserved a commit of its own with justification if we want to change it | 22:38 |
fungi | agreed | 22:38 |
clarkb | I don't like "hiding" copyright/attribution changes amongst other updates | 22:38 |
fungi | makes total sense, yep | 22:39 |