opendevreview | Ian Wienand proposed opendev/system-config master: install-launch-node: upgrade launch env periodically https://review.opendev.org/c/opendev/system-config/+/879387 | 00:01 |
opendevreview | Clark Boylan proposed opendev/system-config master: Fix rax reverse DNS setup in launch https://review.opendev.org/c/opendev/system-config/+/879388 | 00:03 |
clarkb | ianw: ^ like that maybe? | 00:03 |
clarkb | I've got to start on dinner now. Feel free to push updates to that. | 00:04 |
ianw | thanks, that feels like things are in more logical places | 00:07 |
ianw | i think we have something wrong with our plugin generation | 02:27 |
ianw | for gerrit | 02:27 |
ianw | sudo docker-compose run shell java -jar /var/gerrit/bin/gerrit.war init -d /var/gerrit --list-plugins | 02:27 |
ianw | * download-commands version v3.7.2 | 02:28 |
ianw | maybe this one is clearer | 02:31 |
ianw | - name: gerrit.googlesource.com/plugins/commit-message-length-validator | 02:31 |
ianw | override-checkout: v3.6.4 | 02:31 |
ianw | https://opendev.org/opendev/system-config/src/commit/a24509a7d70f0be7aa7e2e4de2555d0551802dc2/zuul.d/docker-images/gerrit.yaml#L63 | 02:31 |
ianw | * commit-message-length-validator version v3.7.2 | 02:31 |
ianw | list plugins shows it at v3.7.2 on review02 | 02:31 |
Clark[m] | ianw: they are very likely the same thing because they retag old versions. Maybe the build process takes the latest tag version? | 02:32 |
Clark[m] | Yes 3.6.4 and 3.7.2 are the same thing on https://gerrit.googlesource.com/plugins/commit-message-length-validator | 02:32 |
ianw | yeah, i wonder if that's it, that bazel is basically putting the wrong stamp in the .jar | 02:33 |
ianw | i am interested in this because i was just validating the downgrade path | 02:33 |
ianw | which does work to go 3.7 -> 3.6 ... but then i listed the 3.6 plugins and thought "that's weird ..." | 02:33 |
Clark[m] | They differ on review notes. Do we install that one and does it list as 3.6.4 on prod? | 02:33 |
Clark[m] | https://gerrit.googlesource.com/plugins/reviewnotes/ | 02:34 |
ianw | full list is https://paste.opendev.org/show/bwWcqd0agVQuXnXemgV4/ | 02:34 |
ianw | reviewnotes show 3.6.4 | 02:34 |
Clark[m] | Ya I think that is what is happening. Delete-project also differs | 02:35 |
ianw | yeah, i'd agree all the ones showing up as 3.7 appear to be multiple tags in gerrit | 02:36 |
ianw | i guess crisis averted, just something to be aware of, especially in a downgrade pressure situation | 02:36 |
ianw | i'll add it to the downgrade notes | 02:36 |
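As a quick way to confirm the observation above (several upstream plugin tags pointing at the same commit), the following sketch groups a plugin repository's tags by the commit they resolve to. It assumes only git and the Python standard library; the remote URL is the plugin already linked in this discussion.

    import subprocess
    from collections import defaultdict

    remote = "https://gerrit.googlesource.com/plugins/commit-message-length-validator"
    out = subprocess.run(["git", "ls-remote", "--tags", remote],
                         capture_output=True, text=True, check=True).stdout

    commits = {}  # tag name -> commit sha; peeled "^{}" entries win over tag objects
    for line in out.splitlines():
        sha, ref = line.split("\t")
        name = ref[len("refs/tags/"):]
        if name.endswith("^{}"):
            commits[name[:-3]] = sha       # annotated tag, peeled to its target commit
        else:
            commits.setdefault(name, sha)  # lightweight tag, or fallback if never peeled

    by_commit = defaultdict(list)
    for name, sha in commits.items():
        by_commit[sha].append(name)

    for sha, names in by_commit.items():
        if len(names) > 1:                 # e.g. v3.6.4 and v3.7.2 sharing one commit
            print(sha[:10], sorted(names))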
ianw | for reference https://23.253.56.187/c/x/test-project/+/3 is a downgrade. i've now completed everything i can think of for the 3.7 upgrade checklist. i'm going to work on adding in notes for renames | 02:37 |
opendevreview | Ian Wienand proposed opendev/system-config master: Upgrade Gerrit to version 3.7 https://review.opendev.org/c/opendev/system-config/+/879412 | 03:44 |
opendevreview | Ian Wienand proposed opendev/project-config master: Add renames for April 6th outage https://review.opendev.org/c/opendev/project-config/+/879414 | 04:18 |
Clark[m] | ianw: I think that needs the empty groups list in the yaml too | 04:28 |
ianw | ok | 04:55 |
opendevreview | Ian Wienand proposed opendev/project-config master: Add renames for April 6th outage https://review.opendev.org/c/opendev/project-config/+/879414 | 05:10 |
*** amoralej|off is now known as amoralej | 06:10 | |
opendevreview | Takashi Kajinami proposed openstack/project-config master: Retire puppet-rally - Step 1: End project Gating https://review.opendev.org/c/openstack/project-config/+/879419 | 06:22 |
*** elodilles_pto is now known as elodilles | 08:04 | |
opendevreview | gnuoy proposed openstack/project-config master: Add OpenStack hypervisor charm https://review.opendev.org/c/openstack/project-config/+/879436 | 10:18 |
opendevreview | gnuoy proposed openstack/project-config master: Add OpenStack hypervisor charm https://review.opendev.org/c/openstack/project-config/+/879436 | 10:57 |
lecris[m] | Could we bring back tarball releases for https://opendev.org/openstack/ansible-collections-openstack? According to Fedora packaging guidelines https://docs.fedoraproject.org/en-US/packaging-guidelines/Ansible_collections/#_collection_source, it is preferred to use the upstream source | 11:57 |
*** amoralej is now known as amoralej|lunch | 12:13 | |
lecris[m] | Why are most packages dependent on pbr build? | 12:32 |
fungi | lecris[m]: these seem like questions for the openstack community, not the opendev collaboratory. specifically you'd need to ask the openstackansible folks about their choice to run (or not run) tarball publishing ci jobs. you can find them in the #openstack-ansible irc channel here on oftc | 12:53 |
lecris[m] | Same for the pbr? | 12:54 |
fungi | as for why most projects use pbr, it's so that they don't need to duplicate version information, change history and other git metadata into the worktree in order to include them in source distributions | 12:54 |
fungi | pbr infers versioning from git tags when building source distributions, generates changelogs, and a number of other similar sorts of tasks | 12:55 |
lecris[m] | Hmm, there already is setuptools_scm for versioning, but what about the other things that are needed? | 12:56 |
fungi | setuptools_scm is very new. we created pbr about 12 years ago to allow us to have declarative packaging rules and template them consistently across hundreds of projects | 12:57 |
lecris[m] | One thing is that it doesn't support PEP517/PEP621, which would help migrate the spec recipes | 12:57 |
fungi | pbr added the idea of a setup.cfg file so we could strip pretty much all arbitrary code out of setup.py | 12:57 |
fungi | which setuptools eventually copied years later | 12:58 |
fungi | and yes, pbr does support pep 517, but openstack projects aren't using it that way yet | 12:58 |
fungi | you can configure pbr as a build backend in pyproject.toml | 12:58 |
lecris[m] | Is it in master or a recent release? | 12:58 |
fungi | it's been supported in a number of recent releases | 12:59 |
fungi | i forget how far back now | 12:59 |
lecris[m] | So the documentation is outdated? https://docs.openstack.org/pbr/latest/user/using.html#pyproject-toml | 12:59 |
fungi | that documentation says it supports being used as a pep 517 build backend. why are you thinking it's out of date? | 13:00 |
lecris[m] | > Eventually PBR may grow its own direct support for PEP517 build hooks, but until then it will continue to need setuptools and setup.py. | 13:00 |
fungi | right now its pep 517 support is as a setuptools plugin | 13:01 |
fungi | that's saying it may eventually be made usable for pep 517 cases without setuptools at all | 13:02 |
lecris[m] | But the "setup.py" bit? Is it still necessary? | 13:02 |
fungi | sure, that doesn't make it non-pep-517-compliant though | 13:02 |
jrosser | fungi: #openstack-ansible-sig is for the ansible collection | 13:03 |
fungi | jrosser: oh, good to know. i assumed from the name it was an openstackansible deliverable | 13:03 |
lecris[m] | Hmm, so currently can it be migrated to a pyproject.toml+setup.py layout? | 13:04 |
jrosser | fungi: OSA predates the notion of ansible collections by a long long time so there is an unfortunate name collision there | 13:04 |
fungi | lecris[m]: yes, at the moment the only reason a setup.py is required is to set a runtime flag in setuptools. its contents in a pep 517 context can be as minimal as "import setuptools; setuptools.setup(pbr=True)" | 13:05 |
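For reference, a minimal sketch of the layout fungi describes: setup.py collapses to the one flag quoted above, with the real metadata kept in setup.cfg (or, increasingly, pyproject.toml) and pbr plus setuptools declared as build requirements there. The exact build-system table isn't shown here and should be taken from the pbr documentation linked earlier.

    # setup.py -- the entire file, per fungi's example above; everything else
    # (name, dependencies, entry points, ...) lives in setup.cfg / pyproject.toml
    import setuptools

    setuptools.setup(pbr=True)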
lecris[m] | Hmm, ok, might be worth preparing a few reviews to move those to a PEP621 | 13:06 |
fungi | lecris[m]: also technically, the channel for the pbr maintainers is #openstack-oslo, i just happen to be familiar with the answers to your questions since i help with it | 13:06 |
*** amoralej|lunch is now known as amoralejh | 13:07 | |
*** amoralejh is now known as amoralej | 13:07 | |
lecris[m] | Although something opendev related, can we configure webhooks or equivalents? I am discussing with packit folks if we can get some support there | 13:08 |
fungi | sure, zuul jobs can call webhooks to trigger external systems. a good example is probably the one we have for readthedocs | 13:09 |
lecris[m] | So it is on the repo's zuul job rather than in the opendev/gitea config? | 13:09 |
fungi | lecris[m]: yes, a project could define a job in its own .zuul.yaml for calling an external webhook | 13:11 |
fungi | though the one for readthedocs is here for reference: https://opendev.org/openstack/project-config/src/commit/db4de26/zuul.d/jobs.yaml#L1131-L1150 | 13:11 |
lecris[m] | Oh this one is propagating to all jobs? | 13:13 |
fungi | that one happens to be defined centrally so multiple projects can reuse it (since it happens to rely on a common set of credentials) | 13:14 |
fungi | the playbook referenced there simply calls this role from the zuul stdlib: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/trigger-readthedocs/tasks/main.yaml | 13:14 |
lecris[m] | The secrets are stored in zuul? | 13:15 |
fungi | the credentials for that one are encoded as a zuul secret, yes: https://opendev.org/openstack/project-config/src/commit/db4de26766a0acf0db9068482b209cec16b1cc57/zuul.d/secrets.yaml#L578-L592 | 13:16 |
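Stripped of the ansible wrapping, what such a job boils down to is roughly the following; the endpoint, token handling and payload fields are hypothetical stand-ins, not the actual readthedocs API.

    # rough python equivalent of a zuul job calling an external webhook: POST a
    # small JSON payload to an endpoint, authenticated with a token that zuul
    # decrypts from the project's secret at run time. all names here are
    # illustrative only.
    import json
    import urllib.request

    WEBHOOK_URL = "https://webhook.example.org/notify"  # hypothetical endpoint
    TOKEN = "decrypted-from-a-zuul-secret"              # provided to the job by zuul

    payload = {
        "project": "openstack/example",                 # hypothetical event fields
        "ref": "refs/tags/1.2.3",
        "event": "tag-push",
    }
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": "Bearer " + TOKEN,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(response.status)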
lecris[m] | Ok, but there are no built-in zuul webhooks that can later be used for other non-opendev projects? | 13:18 |
fungi | zuul has per-project asymmetric encryption keys, and a client tool you can use to access the public key for each project from its rest api and locally encrypt whatever you need in a secret: https://zuul-ci.org/docs/zuul/latest/project-config.html#encryption | 13:18 |
lecris[m] | Or gerrit actually | 13:18 |
fungi | not sure what you mean by "built-in zuul webhooks that can later be used for other non-opendev projects" | 13:18 |
fungi | maybe you can describe what you're trying to automate? | 13:19 |
fungi | are you trying to use opendev's zuul to automate an external process based on development events in a project hosted in opendev, or automate something for a project in opendev triggered by some external event? | 13:21 |
lecris[m] | Basically hooking packit so that it pushes builds to copr, downstream fedora (or equivalent) etc. | 13:21 |
fungi | i'm not familiar with packit and copr, and only vaguely familiar with fedora's build systems... but trying to restate to make sure i understand, you want a tag event for a project in opendev to trigger package builds in fedora's infrastructure? | 13:23 |
lecris[m] | So for packit we need a few ways of communicating:... (full message at <https://matrix.org/_matrix/media/v3/download/matrix.org/VUZNpmmirCTzRYqLRMAxcrQE>) | 13:23 |
fungi | i know fedora's using zuul already, so could just point their zuul at our gerrit's event stream as a source connection and run jobs (for packit or something else) based on tag push events | 13:24 |
lecris[m] | Minimally, yes tags/releases, but preferably both commits to branches and PRs | 13:24 |
lecris[m] | The main goal is to make sure the packages are being built and a required dependency is being met in each distro | 13:26 |
fungi | bookwar doesn't seem to hang out in here (unless lurking via the matrix bridge), but might be good to reach out to about fedora's packaging automation. i know we've had some very productive discussions with her about that stuff in the recent past | 13:26 |
lecris[m] | Can we get a full git repo from gerrit or only patches? | 13:27 |
fungi | we strongly recommend you clone from opendev.org and only pull patches from review.opendev.org in order to minimize unnecessary load on the code review system | 13:28 |
fungi | it sounds like what you want to do could be accomplished by connecting some sort of listener to our gerrit's event stream. doesn't have to be a zuul, there's a jenkins plugin for connecting to gerrit, but also it's just a basic ssh connection with json data that can be decoded by just about anything | 13:29 |
fungi | for a while we had an mqtt broker we also published all that into, but nobody ever used it for anything non-experimental so we eventually turned it off | 13:30 |
lecris[m] | Like a time-based listener? | 13:30 |
lecris[m] | Wouldn't that be too consuming compared to a webhook? | 13:30 |
fungi | well, i mean, the webhook that doesn't exist isn't time consuming at all but also doesn't do anyone much good ;) | 13:31 |
fungi | gerrit has a stream-events function in its ssh-based api, so we make that available to anyone who wants to connect to and consume it | 13:32 |
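A minimal sketch of consuming that stream from the outside, along the lines fungi describes: one long-lived ssh connection emitting one JSON object per line. The account name is a placeholder; 29418 is the usual gerrit ssh port and "ref-updated" is a standard gerrit event type, but verify the details against the gerrit documentation.

    import json
    import subprocess

    # long-lived ssh connection to gerrit's event stream; "myuser" is a placeholder
    cmd = ["ssh", "-p", "29418", "myuser@review.opendev.org", "gerrit", "stream-events"]

    with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
        for line in proc.stdout:
            event = json.loads(line)
            ref = event.get("refUpdate", {}).get("refName", "")
            # react only to tag pushes, as an example trigger for downstream builds
            if event.get("type") == "ref-updated" and ref.startswith("refs/tags/"):
                print(event["refUpdate"]["project"], ref)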
lecris[m] | Hmm, but the connection has to be maintained to get the latest updates? | 13:33 |
fungi | yes | 13:33 |
fungi | we're a volunteer open source collaborative run by a skeleton crew of sysadmins with some development background and almost no available time, so building things just because they might be nice unfortunately falls below the priority threshold | 13:33 |
fungi | most of the time we strive for "good enough" and are happy when we can achieve that with our available resources | 13:34 |
lecris[m] | Yeah, that's why I was considering if there are built-in webhooks in either gerrit or zuul | 13:34 |
fungi | i'll admit, the concept of a "web hook" is still a bit vague to me, but when i think of a web hook being provided by a service, it's to allow external processes to trigger something in that system. it sounds like you're not asking for web hooks built into our systems, but some ability for our systems to call web hooks in some other systems? | 13:36 |
fungi | but also, maybe i have a fundamental misunderstanding of what a web hook is | 13:37 |
lecris[m] | More like if the systems already have webhook capability coded that can be reused | 13:37 |
fungi | yeah, we have zuul, which runs arbitrary ansible, and that can make web requests to whatever we want | 13:38 |
fungi | we see that as built in support for calling web hooks, but i guess that's not what you're talking about | 13:38 |
lecris[m] | Indeed, but if there is a built-in, there is a standard form of the information being pushed (commit, PR, etc.) which can be re-used both for opendev and for other projects that use other such hosted services | 13:39 |
lecris[m] | For example, the gitea webhooks mimic github's information, so we can reuse the code for github with minimal changes | 13:40 |
fungi | where is that standard defined? | 13:40 |
fungi | ahh, github is the standard? | 13:40 |
opendevreview | Lucas Alvares Gomes proposed openstack/project-config master: Rename x/ovn-bgp-agent to openstack/ovn-bgp-agent https://review.opendev.org/c/openstack/project-config/+/879456 | 13:40 |
fungi | for gitea, who gets to configure what web hooks it calls in which systems? is that something the project owners specify or is there a callback registration you use? | 13:41 |
lecris[m] | Primarily the upstream service. Gitea has its webhook standard that is adapted from Github, so if we change it for gitea, it supports other gitea instances as well | 13:41 |
lecris[m] | For gitea it's baked in the gitea executable | 13:42 |
fungi | but i mean, taking your example where gitea is signalling some event to your packit service, does the project admin in gitea tell it to send those events to packit, or does the operator of the packit system register a callback into gitea to say it wants to receive certain events? mainly curious what the control flow and coordination protocol is like | 13:43 |
lecris[m] | Ok, had to double-check that I don't have the order mixed up. It's the former, the project registers a webhook URL, e.g. https://webhook.packit.dev/gerrit/openstack/project. Then whenever there is a new event (git commit, new branch, etc.) gitea prepares a payload with the relevant information and passes it to webhook.packit.dev | 13:49 |
fungi | this is one of the reasons i dislike the vague "web hook" terminology, it elides much of the complexities of inter-service rpc | 13:49 |
fungi | so based on your description, the maintainers of the project in gitea are the ones to decide that events should cause a signal to be sent to the packit service when an event occurs which meets the relevant conditions | 13:51 |
fungi | that's not too dissimilar from the maintainers of the project setting up a ci job which sends a signal to the packit service under the same circumstances | 13:52 |
lecris[m] | Yes, but by default webhooks send all possible information | 13:54 |
fungi | i don't understand why that would be any different from what a ci job could supply | 13:55 |
opendevreview | Lucas Alvares Gomes proposed openstack/project-config master: Rename x/ovn-bgp-agent to openstack/ovn-bgp-agent https://review.opendev.org/c/openstack/project-config/+/879456 | 13:55 |
fungi | in our case, zuul has a significant amount of state information for events which can be used by jobs and so could be included in whatever http requests were made by the job to an external system | 13:56 |
lecris[m] | In the ci job, you have to manually create the payload, while if there is built-in webhook, it will send the maximum information so the other end simply filters the relevant info | 13:56 |
fungi | someone at github or devs for gitea had to create that payload too. the up side to making it a role in zuul's standard library is that it's then standardized and reusable by any project | 13:57 |
lecris[m] | And if it is in the service, there are standardized names, e.g. repo_name: my_repo vs repository: my_repo, etc. | 13:57 |
lecris[m] | Indeed that, but one step further if that could be in zuul itself, then there is less to be configured | 13:58 |
fungi | if there's a published specification for that standardized payload, then making an ansible role to transmit that data doesn't sound like a substantial development task | 13:59 |
fungi | zuul tries to not embed things which can be externalized into ansible, in order to provide increased flexibility | 13:59 |
fungi | but "in zuul" could mean multiple things. it's like saying "this should be included in python. no not the python stdlib, in the interpreter itself" (something in python's stdlib can be considered sufficiently "in python" for most purposes) | 14:01 |
lecris[m] | Well the payload is dependent on the service. github payload vs rtd payload are different. The former has fields of commit the latter may have documentation_url. But within the families of services it can be standardized or adapted from one to another. Gitea and gitlab borrow from github, zuul would borrow from droneCI, etc. | 14:02 |
fungi | right, i meant a role that implemented whatever payload specification you're talking about. it would be named for the standard it's implementing of course | 14:03 |
fungi | not "an anything webhook" that somehow tried to support all standards at once, because that would be madness | 14:03 |
lecris[m] | Indeed, but also the triggers must be relevant, i.e. new commit, new branch, new tags, sure, but there is also new comment (e.g. /packit redo build), new user ownership | 14:05 |
fungi | or alternatively, someone writes a specification particular to zuul rather than trying to copy the behavior of another system, but if there's a public standard that could be emulated closely i expect that would be preferable. i know we talked about some of this in the cdf interop sig a while back | 14:05 |
fungi | the roadblocks we ran up against is that different ci systems have fundamentally different designs and support different sorts of workflows, so creating a standard that covers all of them seemed insurmountable | 14:08 |
lecris[m] | I am thinking of enabling these features in the repository that defines these, e.g. webhooks.gitea.URL=webhook.packit.dev, similarly for a gerrit and zuul one. Then internally, we just use the built-in/configured equivalent to give the relevant info for each service | 14:08 |
lecris[m] | For packit the main information required is in the gitea+gerrit+launchpad | 14:09 |
lecris[m] | Primarily gitea+gerrit. There is already built-in gitea support that only needs to configure URLs, while gerrit I am not sure | 14:10 |
fungi | yes, from zuul's perspective it gets all that information from the gerrit event stream, and supplies it as ansible vars which can be embedded in a url task as parameters, it's mostly a matter of templating that out in the desired format | 14:11 |
fungi | well, in our case mostly from gerrit event steams anyway, but zuul also has github, gitlab and other source connection drivers so it can supply information from them as well depending on the context of the pipeline | 14:12 |
lecris[m] | But what about the difference in events? Commits, tags, comments, etc.? | 14:13 |
lecris[m] | The zuul would then have to be called for each small event, which would be excessive | 14:14 |
fungi | i'm not sure what you mean by "called" | 14:29 |
fungi | zuul consumes the event streams/feeds from all of its configured source connections, and takes whatever actions are defined that match the parameters in the received events | 14:29 |
fungi | so zuul is already being "called" by every event in every stream it's multiplexing | 14:30 |
fungi | aside from a small amount of up-front noise filtering, it looks at each event and decides if it matches a trigger, and if so enqueues it to run whatever jobs are defined for the pipelines triggered from such an event | 14:32 |
lecris[m] | But you would have N repos configured to filter through for each event, compared to if it's triggered directly by the service | 14:32 |
fungi | many jobs are run simply from pattern matches in review comment events | 14:32 |
fungi | again, not sure what you mean by "directly triggered by the service." which service? it seems like you're making a lot of architectural assumptions | 14:33 |
lecris[m] | Gitea/gerrit | 14:33 |
fungi | well, again, that's why i originally pointed out gerrit's event stream. that's what zuul consumes directly too | 14:34 |
lecris[m] | Yes but it goes gerrit -> zuul -> playbook -> packit, instead of gerrit -> packit | 14:35 |
fungi | it may not meet the modern expectation that "all protocols are taco bell^W^Whttp" but it addresses the basic use case | 14:35 |
fungi | and exists today available for use already | 14:36 |
lecris[m] | Zuul has to do a lot of heavy-lifting there, both in filtering the events on gerrit -> zuul, which, if it's handled by ansible, would be another overhead, and then in preparing payloads zuul -> playbook | 14:37 |
lecris[m] | With gitea at least it has webhooks baked in, so we don't have to go gitea -> zuul -> playbook -> packit for new tags, and new commits | 14:39 |
fungi | yes, i mean the gerrit event stream is already available if you want to consume that data and bypass zuul to do so | 14:41 |
lecris[m] | Well, yes, but the whole point of the webhooks is to limit the requests from the application (packit) and to lessen the need of the application to keep in sync | 14:44 |
fungi | sure, i get that. we don't have that though, we have the gerrit stream-events api or zuul jobs calling ansible to send data somewhere | 14:45 |
lecris[m] | Hmm, I don't suppose gerrit folks would be interested in webhook development? | 14:50 |
lecris[m] | Oh wait, it already has https://gerrit-documentation.storage.googleapis.com/Documentation/3.6.0/config-plugins.html#webhooks | 14:51 |
fungi | there may be a gerrit plugin, but we would need to do development work on our side to integrate whatever configuration mechanism that uses to our infrastructure-as-code continuous deployment automation | 14:51 |
lecris[m] | Other than enabling a webhook based on the configuration of the projects repo, I don't think there is anything else needed | 14:52 |
fungi | "other than [trivial sounding development work that really isn't necessarily trivial] ..." ;) | 14:54 |
lecris[m] | I mean: https://gerrit.googlesource.com/plugins/webhooks/+doc/master/src/main/resources/Documentation/config.md#plugin_configuration. It doesn't look like there is a per-project configuration, but it doesn't look that bad | 14:55 |
fungi | think back to the last time someone approached you about adding something in one of your complex projects, didn't actually have a patch and hadn't even looked at your source code, but insisted "surely it's simple" | 14:58 |
fungi | i'll try to take a look at the plugin's docs, but likely won't have time until after my next several meetings | 15:00 |
clarkb | in general I would say we already have a generic way of accomplishing this task using tools we understand well. It feels weird to add another system to do it that we have to understand and debug | 15:01 |
clarkb | re pep517 setuptools itself is pep517 capable and we piggy back off of that in pbr to avoid needing to rewrite major parts of pbr to remove setuptools | 15:01 |
lecris[m] | Indeed, but installing the plugin and enabling services sounds simpler than creating a custom zuul job. And I was surprised, looking at the gerrit configuration, that it's rather small. Gitea on the other hand, I admit, would be more complicated if there is no api access to it. | 15:03 |
clarkb | lecris[m]: I expect the zuul job would be straightforward and likely completely reusable. So write once and then anyone can reuse it | 15:03 |
lecris[m] | clarkb: re: adding new system. I think we are overlooking that the plugin is already maintained externally. We don't need to understand and debug it. Like we don't need to understand and debug gitea right? | 15:05 |
clarkb | Again the upside is we already have that tool and it is much easier to debug than the gerrit server | 15:05 |
clarkb | lecris[m]: we absolutely do need to understand and debug gitea... | 15:05 |
clarkb | running services like this is non zero effort. Figuring out upgrades, pushing bugfixes we do all of that | 15:05 |
clarkb | we already need to do that for zuul and zuul exposes its logs to the end user in an easy to consume manner | 15:06 |
clarkb | gerrit does not | 15:06 |
clarkb | there is also no mechanism for managing a secret (password/token/etc) in the gerrit project config today | 15:07 |
clarkb | but zuul supports this making a potential zuul hook system more flexible | 15:07 |
lecris[m] | <clarkb> "lecris: I expect the zuul job..." <- I think here we are swinging the pendulum too much to the other side. We would need to understand how to filter events, handle webhooks etc., which the webhook plugin authors have better expertise in. We are already maintaining the gerrit configuration, so the work to add the plugin would be less than the zuul playbook | 15:10 |
lecris[m] | As for debugging on zuul vs the plugin, we would have to do the same debugging on either implementation (i.e. for the initial setup), but for the plugin, we know that someone has done 90% of the work | 15:11 |
clarkb | in general we have been trying to take functionality out of gerrit | 15:12 |
lecris[m] | clarkb: That is not an issue, because most webhooks are not encrypted by a configured secret | 15:12 |
clarkb | we were stuck on version 2.13 for years and it took a herculean effort to upgrade to 3.2 because of all the stuff glommed on | 15:12 |
clarkb | I think we need to start with that as a general given. Gerrit should be kept as simple as possible. | 15:12 |
clarkb | With that as a given we have another tool that integrates with gerrit called zuul that enables extremely flexible event driven actions. | 15:13 |
clarkb | We use this for replicating to github for example even though gerrit could do it | 15:13 |
clarkb | it's not like these decisions are happening in a vacuum; there is history and long-term effort behind it. I personally do not want to add generic webhooks to gerrit | 15:14 |
clarkb | the upside is that simplifying gerrit allows us to keep it upgraded. We are also able to run zuul effectively and keep it up to date. The end result is that we can focus our efforts in places that make us more efficient | 15:15 |
lecris[m] | One issue is that webhooks are more intense in the number of events. Syncing with github would be suitable for a CI system, but I don't know about a webhook system, especially if it has to be maintained by a minimal team | 15:16 |
lecris[m] | Note that we also need to understand the gerrit webhook plugin to mimic the payload there, since upstream would want to develop once for any gerrit system | 15:18 |
lecris[m] | * Note that we also need to understand the gerrit webhook plugin to mimic the payload there, since the downstream application (packit) would want to develop once for any gerrit system | 15:19 |
lecris[m] | Btw, going back to pbr and/or the current packaging approach. I see that not even the optional dependencies are included in setup.cfg. Makes packaging rather difficult to maintain | 15:32 |
opendevreview | Andy Wu proposed openstack/project-config master: Add cinder-nfs charm to OpenStack charms https://review.opendev.org/c/openstack/project-config/+/879469 | 15:33 |
lecris[m] | I.e. what is BuildRequires, what is Requires, what is for testing, etc. | 15:33 |
lecris[m] | Normally we would use one-line: %pyproject_buildrequires -x test | 15:34 |
clarkb | pbr loads requirements.txt as requires. test-requirements are installed separately in test envs | 15:34 |
clarkb | the only pbr requires should be pbr + setuptools unless expressed in the setup.py or setup.cfg or pyproject.toml | 15:34 |
fungi | lecris[m]: they can be put in pyproject.toml now that it's a thing, they just haven't been. remember these are 13 year old projects you're talking about | 15:35 |
clarkb | *the only pbr build requires | 15:35 |
fungi | updating to newer packaging standards when the old ones still work is viewed as low-priority makework by most projects | 15:35 |
fungi | they'd rather spend what limited time they have on improving their software, and like to ignore packaging topics as much as they possibly can as long as nothing breaks | 15:36 |
lecris[m] | Yeah, let the packagers deal with that >< | 15:36 |
JayF | Also until recently, pyproject.toml support in setuptools was lacking -- you couldn't even install a package in develop mode until a few weeks ago. | 15:36 |
fungi | alternatively, upstream developers can spend their time doing packaging and stop making software ;) | 15:37 |
lecris[m] | At least it helps to know some basics like "pbr" is the only `BuildRequires`. I see in the spec file that they were not taking that into consideration | 15:38 |
fungi | i'm a major supporter of distro package maintainers and the tireless work they put in, but there's a significant (and growing) slice of open source software development that views traditional packaged distributions as "legacy" process to be magically replaced by publishing random container images | 15:39 |
lecris[m] | But otherwise, there are no EPEL9 compatibility issues for implementing PEP621? | 15:39 |
fungi | so while openstack tries to help the various downstream distributors, there is a sentiment leaking slowly into projects that it would be "easier" to just forget all the things we do to try to make stuff packagable and assume everyone will use random container images | 15:40 |
fungi | which makes it that much harder to get them to care about packaging | 15:40 |
lecris[m] | I don't know, doesn't pypi also encourage PEP621? | 15:41 |
clarkb | to be fair before pyproject.toml pbr was probably one of the easier ways to figure this out for external packaging since it was well defined | 15:41 |
clarkb | a random setup.py could do anything. pbr constrained that | 15:41 |
lecris[m] | That would not be legacy packager issue, simply adapting a common standard that pypi supports helps with Fedora packaging as well | 15:42 |
clarkb | I don't know that pbr supports 621 since it would need to grow support for reading that info out of pyproject.toml instead of setup.cfg | 15:43 |
clarkb | but that can probably be added. I also don't know how that relates to epel9 (this is the wrong group for that I suspect) | 15:44 |
clarkb | I guess it might half work if setuptools supports it | 15:44 |
fungi | lecris[m]: the same people who think distros are "legacy" also don't see a lot of point in packaging things for pypi either (though humorously, still rely heavily on things others publish to it when making images) | 15:46 |
lecris[m] | Yeah, that's the issue I recently encountered in that setuptools in EPEL9 and Fedora 36 does not read pyproject.toml, while F37 onwards have that baked in. I was wondering if that was a limiting factor to that migration, if it was discussed | 15:46 |
JayF | the primary limiting factor in any openstack migration of packaging tooling is the number of repos we have :) | 15:46 |
fungi | clarkb: actually pbr doesn't need to do 621 for anything other than the project name. i have a pbr-using project which moved everything else from setup.cfg to pyproject.toml | 15:47 |
lecris[m] | fungi: *goes to cry in a corner* | 15:47 |
clarkb | fungi: cool so half works is actually 99% works :) | 15:47 |
fungi | setuptools directly handles the rest | 15:47 |
JayF | that level of inertia requires quite a bit of customer/business-case-making to overcome | 15:47 |
fungi | clarkb: the only metadata i didn't have any luck moving out of setup.cfg was the name, pbr looks directly for it there | 15:48 |
fungi | i'll eventually push a patch to fix that if i can work out where it should be patched | 15:48 |
clarkb | fungi: is toml parsing in stdlib? if not we probably have to vendor that | 15:48 |
lecris[m] | fungi: How about dependencies? It does not have to be from requirements.txt? | 15:48 |
clarkb | lecris[m]: no, pbr reads them from requirements.txt, but if you define them elsewhere in standard setuptools locations that should be respected | 15:49 |
fungi | clarkb: it is and isn't. there's a toml lib in newer python versions' stdlib but packaging tools vendor in a lib for cases where it's not available | 15:49 |
lecris[m] | clarkb: 3.11 iirc, otherwise they use toml lib | 15:49 |
lecris[m] | Oh it's the opposite? 🤔 | 15:50 |
fungi | lecris[m]: clarkb: yes, i can use pbr with all of project.dependencies in pyproject.toml, no requirements.txt files at all | 15:50 |
fungi | you need fairly recent setuptools, but it just relies on setuptools' support for it | 15:52 |
clarkb | basically pbr was born out of distutils2 ideas that got killed, abandoning a lot of what pbr had done. The idea was to make standard configuration practices for packaging. This included how you define dependencies but also versioning etc. In the many years since, setuptools has swung around and added these features bit by bit | 15:52 |
clarkb | because pbr acts as a setuptools add on essentially you can do all of the normal setuptools stuff too | 15:52 |
JayF | clarkb: if openstack were better at OSS-marketing, we might could've saved the entire python packaging ecosystem from this chaos :D (I'm joking but there might be a nugget of truth to this) | 15:53 |
clarkb | JayF: well pypa was very opposed to pbr despite it being based on their ideas, because at some point they decided their ideas were bad | 15:53 |
clarkb | they just failed to communicate that externally until it was too late. | 15:53 |
fungi | primarily things like setuptools adopting a (slightly modified) version of pbr's setup.cfg declarative config files, and the emergence of plugins like setuptools-scm (and others whose names escape me for some of its other features), though they don't have quite the level of pep 440 + semver implementation it does | 15:53 |
clarkb | And since then seem to have acknowledged it wasn't all a terrible idea since a lot of new setuptools features are things pbr has done since day one | 15:53 |
lecris[m] | Hmm, so is there something specific to pbr that is still needed that couldn't be taken from recent setuptools? | 15:54 |
fungi | the main one openstack relies heavily on is semver signalling | 15:54 |
JayF | a secondary one is you need a *darn good reason* to change things across hundreds of repos/deliverables | 15:55 |
fungi | being able to infer likely future version numbers based on information in git metadata, in order to provide sensible development versioning for non-tagged commit states | 15:55 |
fungi | like knowing when there will be a minor or major version increase coming in the next release is inferred from specially formatted commit footers | 15:55 |
lecris[m] | <JayF> "a secondary one is you need a *..." <- Yes, but if we ever migrate to PEP621, would be free work to change to setuptools | 16:00 |
fungi | until the next packaging standards change comes along | 16:01 |
lecris[m] | Oh pbr doesn't just use git describe information? | 16:01 |
clarkb | no it creates pep440 compliant versions based on what it knows about the repo state | 16:01 |
fungi | no, not even close. that's not nearly robust enough to predict and create pep 440 compliant prerelease and development versions that sort correctly | 16:02 |
clarkb | it also records the shas in package metadata since pep440 doesn't allow you to embed them | 16:02 |
fungi | though there's work in progress to try to annotate additional git information in python package metadata, that's something pbr has been doing for over a decade | 16:02 |
lecris[m] | But does it also use .git_archival.txt? Is the information there not enough? | 16:03 |
JayF | I think we've been trying to say that many of those things likely didn't exist when PBR was created and started supporting these things | 16:04 |
fungi | basically, we solved most of these problems 10+ years ago, and now that the rest of the python packaging system who rejected our implementations is coming back around to realizing people had similar needs, there's not a lot of urgency from projects to switch to something that provides no additional benefit just because it's suddenly been standardized | 16:04 |
*** amoralej is now known as amoralej|off | 16:04 | |
lecris[m] | For reference it does have the hash and describe... (full message at <https://matrix.org/_matrix/media/v3/download/matrix.org/oXeOSxiQSddxTGhCdWolkSBB>) | 16:05 |
fungi | we've had an implementation of all of this all along. people had prejudices against what we produced for a variety of reasons, and while they've since proven that the things we did were actually needed, there's a lot of amnesia and nih inherent to the whole python packaging scene | 16:05 |
lecris[m] | I understand the PBR did it first, but are we keeping up with setuptools on the other fronts? | 16:05 |
fungi | lecris[m]: who is "we" in this case? if you don't know whether pbr is keeping up with setuptools, then it's an odd way to phrase the question | 16:06 |
fungi | pbr works with the latest releases of setuptools, as evidenced by pbr-using projects being buildable with latest setuptools | 16:08 |
fungi | i try to keep on top of deprecation warnings by testing my projects with PYTHONWARNINGS=error and running down any new ones that appear so that it stays that way | 16:09 |
clarkb | JayF: not only did they not exist the rest of the python world was strongly against what we were doing. Then maybe 6ish years ago there was a shift where they seemed to realize these features were useful and they started showing up in standard tools | 16:10 |
fungi | lecris[m]: there have been some stumbling blocks, for example the vendoring of distutils into setuptools after it was removed from the stdlib suddenly shadowing distro-supplied distutils, but stuff like that tends to get worked out upstream in setuptools, i and others have helped to patch things there over time | 16:11 |
clarkb | and ya when you've had all these features that are just now showing up for 12ish years it makes motivation to change everything low | 16:11 |
JayF | I mean, I hold zero of those ... emotional desire to not change it for philosophical/historical reasons | 16:11 |
JayF | but like, we're talking about literally hundreds of changes across hundreds of repos, some of which are maintained by nobody or by like 5% of a full time person | 16:12 |
JayF | the bar for making that kind of change is ENORMOUSLY high, as proven by us staying with tox even after the abuse it put us through | 16:12 |
clarkb | JayF: I think the risk is that they will change it all under you again and again and again, which is what history says is likely to happen. PBR on the other hand is quite stable | 16:12 |
fungi | by the time they came around to wanting the features we had in pbr, there was an entrenched memory handed down of "pbr bad, avoid" so they basically redid everything in a vacuum rather than just reusing our solutions and code | 16:12 |
clarkb | some new file format will show up and we'll have to rewrite all our toml | 16:14 |
JayF | "Why TOML?" "Everyone got tired of writing yaml for kubernetes and wanted a break" /s | 16:14 |
clarkb | in this case I think they evaluated json and python's variant of ini | 16:15 |
clarkb | json isn't really human readable (fair) and ini can't do complex data structures iirc | 16:15 |
fungi | yeah, i don't have high hopes that the "ooh squirrel!" folks who decided toml was the shiny new thing everyone should use (because rust did it) won't get distracted by something else in two years and write all new standards | 16:15 |
fungi | i also love that one of the reasons they discounted using yaml was because "indentation confuses people" | 16:16 |
lecris[m] | Hmm, now I understand more about the issue. I just hope that the standards don't grow too dissimilar, such that you need to package a different way for each build system. As long as it can be at feature parity or feature-equivalent, at least with respect to the PEPs | 16:16 |
JayF | fungi: so they thought that .... python developers .... don't understand indentation? | 16:16 |
JayF | lecris[m]: I think that's the intention of the new peps in place. I suspect we'll wait to take action until it's evident they succeeded at that :D | 16:17 |
fungi | lecris[m]: yes, we're trying to keep pbr capable of working with the new packaging standards as they solidify, but just because pbr can support them doesn't mean projects using pbr will use them | 16:17 |
lecris[m] | Ok, then the only issue is getting the upstream projects to consolidate the metadata | 16:18 |
fungi | and we have a (much stronger than usual for the python packaging ecosystem) sense of backward-compatibility for pbr. we still test that it works with python 2.7 for example | 16:19 |
lecris[m] | But is it confirmed that pbr can read the metadata on EPEL9? I saw issues with setuptools reading pyproject.toml metadata | 16:20 |
clarkb | no we don't do anything with epel | 16:20 |
fungi | how new is the setuptools on epel9? | 16:20 |
lecris[m] | It's before pyproject.toml support | 16:20 |
fungi | you need fairly recent setuptools for full pyproject.toml support, and if you're installing in editable/usedevelop mode there are still some rough edges with the setuptools pep517 build backend | 16:21 |
lecris[m] | https://pkgs.org/search/?q=python3-setuptools 53 in some cases | 16:21 |
fungi | i can't speak to epel since i don't use proprietary distros, but we do run a bunch of pbr-using projects on centos stream 9 and rocky linux 9 | 16:22 |
clarkb | re supporting python2 you have to do that because setup_requires are pulled by easy_install and can't filter by python version support metadata :( | 16:22 |
fungi | and i think the openeuler we have may be close to rhel9 | 16:22 |
clarkb | fungi: ya openeuler is a different kernel but a lot of the other packages are aligned with rhel9 iirc | 16:23 |
fungi | so anyway, maybe whatever issues you saw with setuptools were specific to something on proprietary rhel | 16:23 |
lecris[m] | But if pbr cannot bridge the gap there, it seems that moving completely to PEP621 would be on hold until EPEL10 | 16:23 |
fungi | i have no idea what epel10 is. some red hat thing i guess? | 16:23 |
lecris[m] | fungi: Same issue with F36 and centos-stream | 16:24 |
fungi | to be honest, i know next to nothing about red hat derivatives. long time debian user/contributor here | 16:24 |
*** iurygregory_ is now known as iurygregory | 16:24 | |
fungi | but i gather red hat, as a for-profit company, has the funds to solve problems like that if they see them as a priority | 16:25 |
lecris[m] | fungi: I mean the next release of RHEL, because rebasing setuptools on all the packages will probably not pass | 16:25 |
fungi | no idea what rebasing setuptools means, but presumably something specific to rh/rpm packaging workflow? | 16:25 |
lecris[m] | An update to setuptools will need an update to dependent packages as well since it is a build system, similar to a gcc update | 16:27 |
jrosser | i wonder what actual issue there is, as openstack is already packaged for RH-ish systems via the RDO repos | 16:28 |
jrosser | that would be a great place to start to see how it is done | 16:28 |
fungi | yes, you would presumably trigger new builds of all reverse-build-depends (in debian lingo) | 16:28 |
fungi | there's also a #rdo channel where the red hat openstack package maintainers congregate | 16:30 |
lecris[m] | I mean using PEP621 only. Moving all metadata to pyproject.toml | 16:30 |
lecris[m] | rdo repos and everything work. I was asking about if pbr can bridge the gap on EPEL9 such that centralizing the metadata would be possible | 16:32 |
fungi | i doubt that will happen any time soon in most openstack projects for a variety of reasons, none of which have to do with pbr related challenges. also setuptools isn't likely to stop supporting its old configuration formats without a very lengthy deprecation period. the setuptools maintainers as a whole are risk-averse which is part of why it took so long to get support for pyproject.toml | 16:32 |
fungi | metadata there in the first place | 16:32 |
lecris[m] | But would it be feasible to start that, ideally migrating the metadata to pyproject.toml, but at the very least moving the requirements.txt into setup.cfg? Like a multi-repo review where we can steadily add on top of it? It would make downstream packaging a trivial task, and we can just reuse the patches there until then | 16:38 |
fungi | it's not all that straightforward. a lot of openstack projects have at least three requirements files used in different contexts, so those would need to be translated to extras which don't work quite the same way. i'm not saying it's insurmountable, but it's definitely work nobody's likely to prioritize over normal project development efforts unless there's some critical external change that | 16:41 |
fungi | forces their hand | 16:41 |
lecris[m] | Well yes, but at least to get the ball rolling | 16:43 |
lecris[m] | I currently suspect that there is an issue with requirements.txt being ambiguous about whether it's BuildRequires or Requires, and that makes, for example, ansible-collections-openstack not buildable because of a dependency that should only be a runtime dependency, not a build dependency | 16:45 |
clarkb | I don't think that is ambiguous in a pbr context. | 16:46 |
jrosser | for that, it's an ansible collection, not really a python project though? | 16:46 |
clarkb | requirements.txt are always runtime deps with pbr iirc. You have to explicitly set setup_requires in your setup.py for build deps or use pyproject.toml | 16:46 |
fungi | in openstack projects, typically "build dependencies" are tracked as setup-requires and not kept in the requirements.txt (which is for install-requires essentially) | 16:47 |
lecris[m] | Hmm, then I'll have to look closer on what is going on there | 16:48 |
fungi | if you find pbr being included as a runtime dependency, there are a couple of possible reasons. one is that it's just included erroneously and not actually needed, the other is that it's actually also being used as a runtime lib (usually for its ability to fetch versions from metadata, though there are ways to do the same with just stdlib modules these days so could be replaced with some small | 16:49 |
fungi | amount of development effort in those projects) | 16:49 |
fungi | my preference is to avoid using pbr as a runtime dependency in projects unless they're projects directly interfacing with package data in ways that pbr simplifies, which is to say a very small number of packaging-oriented tools not the vast majority of projects | 16:50 |
JayF | it's extremely common in openstack to take pbr as a runtime dep just to get the version out | 16:52 |
JayF | e.g. https://github.com/openstack/ironic/blob/af0e5ee096fa237290776969a37f3bced96b7456/ironic/version.py#L18 | 16:52 |
lecris[m] | It's not pbr itself being a runtime dependency, I will check that again later. It's that a runtime dependency like openstacksdk is treated like a build requirement, preventing the build | 16:53 |
fungi | which can also be done in a few lines of python calling the pkg_resources or importlib, but there's also been enough churn there recently that i can understand projects wanting to have a library to abstract away all the guesswork of trying to figure out what solution works for that on an arbitrary python version | 16:53 |
lecris[m] | Imo they should change to generated version files there | 16:53 |
fungi | they do. the generated version files are part of the package metadata, but how you access those files from python differs depending on what version of python you're running | 16:54 |
fungi | more recent python versions can use importlib, though very new versions of python have changed importlib in ways that you still have to try a couple of different possibilities | 16:55 |
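The "few lines of python" fungi refers to look roughly like this, as an alternative to importing pbr.version the way the ironic file linked above does; "ironic" is only an example distribution name.

    try:
        # python 3.8+ standard library
        from importlib.metadata import version, PackageNotFoundError
    except ImportError:
        # older interpreters: fall back to the setuptools-provided API
        from pkg_resources import DistributionNotFound as PackageNotFoundError
        from pkg_resources import get_distribution

        def version(name):
            return get_distribution(name).version

    try:
        __version__ = version("ironic")  # example distribution name
    except PackageNotFoundError:
        __version__ = "unknown"          # metadata absent, e.g. a bare checkout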
lecris[m] | Shouldn't that be handled by the generator and simply from __version__.py import __version__? | 16:55 |
fungi | oh, you mean pbr should add or edit a script which embeds a version string? that can quite easily get out of sync with the package metadata, especially in downstream repackaging contexts | 16:57 |
lecris[m] | Where __version__.py is a file generated on each install, editable or otherwise | 16:57 |
fungi | downstream distributors often alter python package metadata | 16:57 |
lecris[m] | Why would it get out of sync when downstream is calling pip install to generate those? | 16:58 |
fungi | and the goal is to have the project know what version the distributor has set | 16:58 |
fungi | again, you're making many, many architectural assumptions based on your own experience which is not universal | 16:58 |
lecris[m] | Assumption on pip install or on how pbr generates the version? | 16:59 |
fungi | assumption on how redistributors in general repackage python libraries. there are many ways used by different distros, we're trying to solve for a variety of use cases not just yours | 17:00 |
lecris[m] | With projects using setuptools_scm we rely on the project being a git archive and the build system taking all of the information from a .git_archival.txt. That is not the case with pbr? | 17:00 |
fungi | pbr can take version information from git metadata, from envvars, from configuration, from existing package metadata files... | 17:01 |
lecris[m] | Sure, but couldn't it take the information once, write it in generated files for import __version__ so that there is no more runtime requirement? | 17:04 |
clarkb | it does, it writes it to the package metadata which can be read back out again | 17:04 |
lecris[m] | The only case then that would be affected would be editable builds, which could use a pbr runtime dependency | 17:04 |
lecris[m] | So then why do projects use pbr as a runtime requirement to get the version? | 17:06 |
fungi | in order to not have to try several different ways of accessing python package metadata | 17:06 |
fungi | because the python community keeps changing its mind as to how packaging metadata should be accessed, and has made several non-backwards-compatible changes in that regard | 17:07 |
JayF | It's literally a single line in Ironic, and I don't have to worry about putting more than that single line in any of the nearly 2 dozen projects that are in the Ironic program | 17:07 |
fungi | it's not hard, but it's also not trivial and requires domain knowledge that developers would rather not have to learn | 17:08 |
JayF | anytime I can farm out something like that, which isn't in Ironic's core competency of provisioning bare metal, I'll do it every single time | 17:08 |
JayF | just like I'm glad that e.g. metal3 developers outsources hardware provisioning to us -- we're good at it, they can focus on k8s integration | 17:08 |
lecris[m] | But it's also a single line in __init__.py from __version__.py import __version__ | 17:08 |
JayF | You're saying that works in every single case, including all python versions and platforms we support? | 17:09 |
lecris[m] | Yes, it's up to pbr to generate __version__.py accordingly | 17:09 |
JayF | So that won't work in cases where there's not an install step for the package? | 17:10 |
fungi | and to assume downstream package maintainers don't alter package metadata to amend versions after pbr generates that script | 17:10 |
lecris[m] | Adding whatever guards are necessary for python version and platforms, but writing the metadata statically | 17:10 |
opendevreview | Andy Wu proposed openstack/project-config master: Add cinder-nfs charm to OpenStack charms https://review.opendev.org/c/openstack/project-config/+/879469 | 17:11 |
fungi | basically, one of the design principles of pbr is that information should never be duplicated because that leads to divergence, ordering problems, sometimes race conditions in processes | 17:11 |
lecris[m] | JayF: In those cases you would do editable install where a dynamic version look-aside is used like the one shown in ironic | 17:11 |
fungi | putting the version in two separate places is a risk that pbr is designed to avoid | 17:11 |
* JayF already knows more about version identification in installed python packages than he wanted to have to know lol | 17:12 | |
lecris[m] | fungi: Yes, downstream maintainers have to edit metadata in %prep stage (although it's strongly discouraged to do so) | 17:12 |
fungi | but if they do, then pbr's runtime version reporting will reflect that automatically | 17:13 |
JayF | In RPM/Fedora, they are. | 17:13 |
JayF | In other repositories, such as Gentoo, it's required in some cases (to normalize the version in cases of some specific strings in versions, like '-') | 17:13 |
lecris[m] | Yes, but pbr will be a runtime dependency which in principle it shouldn't be needed | 17:14 |
fungi | sure, it can be replaced as a runtime dependency with half a dozen lines of python depending on how many python interpreter versions you want to support | 17:14 |
lecris[m] | Yes, but I would say that that's a reasonable approach for the downstream users of that package | 17:17 |
lecris[m] | The same dozen lines are run internally in pbr, while separating the dependency makes the package more portable | 17:18 |
JayF | https://github.com/openstack/requirements/blob/master/global-requirements.txt one down, almost 500 to go? /s | 17:18 |
JayF | At a certain point it stops seeming valuable to spend time reducing dependencies when you already have *literally an entire project* dedicated to managing project dependencies because you have so many lol | 17:19 |
JayF | There probably is value in reducing it, but that's so far down everyones' list that it's not going to be a focus | 17:19 |
lecris[m] | Where is the global-requirements.txt used? I was referring to individual projects which would be managing that information internally in their setup.cfg | 17:20 |
JayF | Sometimes you have to give up something when you scale up... you're seeing some of those tradeoffs actively :) | 17:20 |
fungi | it's a reasonable idea to split out a pbr-runtime package (or pbr-version or something) that can do the version lookup tasks as a separate very small dependency, though i don't know who has motivation or time to work on making that | 17:22 |
* JayF nudges lecris[m] ^ | 17:22 | |
JayF | that would be the sort of incremental change that is possible at our scale | 17:22 |
clarkb | global-requirements is a level above the projects, it maintains lists of libs that can be used to help ensure reuse and avoid unnecessary addition of libs that perform the same/similar tasks. Also does basic license checking | 17:22 |
jrosser | lecris[m]: i'm going to say it again :) but you've been talking about ansible-collections-openstack a lot here, so thats an ansible collection not really a python project, and have you seen this? https://opendev.org/openstack/ansible-collections-openstack/src/branch/master/docs/releasing.md#publishing-to-fedora | 17:30 |
fungi | also, to reiterate, the collections, and pbr for that matter, are maintained by teams that have their own irc channels which aren't this one ;) | 17:31 |
lecris[m] | Indeed, I am trying to modernize the rdo spec files there to be more in line with the Fedora packaging guidelines. Currently those are tailor made for rdo only and can't be added downstream in that state | 17:32 |
fungi | this is the channel where we discuss maintenance of the collaboration services that make up opendev | 17:32 |
lecris[m] | Indeed, but aren't project guidelines relevant here, e.g. moving away from requirements.txt? | 17:33 |
fungi | possibly relevant to openstack, not to everything in opendev | 17:33 |
fungi | openstack has its own irc channels (many of them) | 17:33 |
fungi | also rdo has #rdo | 17:34 |
lecris[m] | Well at least the webhook discussion was not out of scope | 17:35 |
fungi | yep | 17:35 |
ianw | do we install a theme for our etherpad or something? i just hit another etherpad and was surprised how different it looked | 20:51 |
ianw | https://etherpad.wikimedia.org/p/hello looks more like that | 20:52 |
Clark[m] | We use the old theme which is more dense. The new one looks like google docs | 20:55 |
ianw | huh i guess i never realised there was a new one :) | 20:59 |
TheJulia | o/ Any chance I can get a node held from a failing jenkins job? | 21:08 |
TheJulia | ironic-grenade, we're seeing it fail on numerous changes right now :( | 21:12 |
Clark[m] | We don't have Jenkins :) is there a specific repo or change or just that job? | 21:18 |
JayF | Pretty much any failing ironic-grenade run is likely to work for the hold | 21:18 |
JayF | unless we discover some different way for it to fail :-| | 21:18 |
clarkb | apparently project is a required field, so I'll set it to openstack/ironic since the vast majority of the jobs run against that project | 21:22 |
clarkb | JayF: TheJulia the next failure of ironic-grenade against openstack/ironic will be held | 21:23 |
JayF | thank you! | 21:23 |
TheJulia | clarkb: muchas gracias | 21:23 |
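(For reference, the hold set up above corresponds roughly to an autohold request like the one sketched below; the tenant name, reason text, and count are illustrative assumptions, and the exact client invocation may differ.)

    # hedged sketch of the autohold clarkb describes; values are illustrative
    zuul-client autohold --tenant openstack \
        --project openstack/ironic \
        --job ironic-grenade \
        --reason "debug ironic-grenade failures for TheJulia/JayF" \
        --count 1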
JayF | TheJulia: I'm going to go recheck that grenade-only change iury posted and see if we can get it to spawn a failure | 21:23 |
TheJulia | JayF: I'm going to recheck one of the other stacked changes since it had 2x fails in a row and is entirely unrelated | 21:23 |
TheJulia | unless you scream "noooo" in the next minute | 21:24 |
JayF | is that the ironic or ironic-lib? | 21:24 |
TheJulia | ironic | 21:24 |
JayF | ack; good. I thought the one you were looking at was -lib or I wouldn't have rechecked iury's :D | 21:24 |
TheJulia | i figured out the ironic-lib one | 21:24 |
TheJulia | it was me mixing up the mock | 21:24 |
JayF | actual failure hiding inside all the random failures | 21:24 |
TheJulia | heh | 21:28 |
fungi | real bugs like to hide in thickets of false bugs to evade detection | 21:38 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: remove-registry-tag: no_log assert https://review.opendev.org/c/zuul/zuul-jobs/+/879518 | 21:41 |
clarkb | ianw: I've reviewed the wheel audit script and it lgtm | 21:41 |
clarkb | left a comment to indicate that in gerrit too | 21:41 |
ianw | clarkb: thanks! | 21:43 |
ianw | have we used "vos backup" before? | 21:43 |
clarkb | if we have I don't remember :) | 21:44 |
ianw | i'm thinking to start cleaning up the volumes more or less one-by-one; we can back up before we run anything destructive, then remove the backup when we're happy | 21:45 |
fungi | i document vos backup in the file deletion section, i've used it on occasion for that process | 21:46 |
ianw | oh right, yeah you can actually *read* documentation, haha https://docs.opendev.org/opendev/system-config/latest/afs.html#deleting-files | 21:48 |
ianw | i guess you just "vos remove" the backup volume? | 21:49 |
fungi | yep | 21:51 |
fungi | it's a judgement call. i don't usually do it unless i'm making file changes that i deem especially risky/error prone | 21:52 |
fungi | and mainly useful in case you need to immediately undo something | 21:52 |
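(A sketch of the backup-then-cleanup flow described above; the volume name is a placeholder and server-side auth options are omitted. The afs.html deletion docs linked earlier remain the authoritative procedure.)

    # create a .backup clone of the read/write volume before destructive work
    vos backup -id mirror.example         # produces mirror.example.backup
    vos examine mirror.example.backup     # confirm the clone exists
    # ... perform the risky cleanup on the read/write volume ...
    vos remove -id mirror.example.backup  # drop the clone once happy with the result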
opendevreview | Ian Wienand proposed opendev/project-config master: Add renames for April 6th outage https://review.opendev.org/c/opendev/project-config/+/879414 | 21:57 |
opendevreview | Merged zuul/zuul-jobs master: promote-image-container: do not delete tags https://review.opendev.org/c/zuul/zuul-jobs/+/878612 | 22:02 |
opendevreview | Merged zuul/zuul-jobs master: build-container-image: expand docs https://review.opendev.org/c/zuul/zuul-jobs/+/879008 | 22:02 |
opendevreview | Merged zuul/zuul-jobs master: promote-container-image: add promote_container_image_method https://review.opendev.org/c/zuul/zuul-jobs/+/879009 | 22:04 |
opendevreview | Merged zuul/zuul-jobs master: remove-registry-tag: role to delete tags from registry https://review.opendev.org/c/zuul/zuul-jobs/+/878614 | 22:04 |
opendevreview | Merged zuul/zuul-jobs master: promote-container-image: use generic tag removal role https://review.opendev.org/c/zuul/zuul-jobs/+/878740 | 22:04 |
opendevreview | Merged zuul/zuul-jobs master: remove-registry-tag: update docker age match https://review.opendev.org/c/zuul/zuul-jobs/+/878810 | 22:04 |
opendevreview | Merged zuul/zuul-jobs master: remove-registry-tag: no_log assert https://review.opendev.org/c/zuul/zuul-jobs/+/879518 | 22:04 |
ianw | clarkb / fungi: i've updated https://etherpad.opendev.org/p/gerrit-upgrade-3.7 in the renames part, section 4 "force merge changes", to do what fungi suggested: merge the first two renames, wait for and clear the no-op manage-projects job, then unemergency for the last | 22:15 |
fungi | i suggested it half in jest, but it really does require the fewest changes to the process. though it also requires a bit of extra care | 22:17 |
ianw | i think it's the best idea, we just want to confirm the jobs are doing what we think they're doing on the day | 22:18 |
Clark[m] | Ya it's nice to see that everything is running as expected at the end | 22:19 |
fungi | good point, i do appreciate that it's a quick confidence test | 22:20 |
opendevreview | Ian Wienand proposed openstack/project-config master: Ironic program adopting virtualpdu https://review.opendev.org/c/openstack/project-config/+/876231 | 22:22 |
opendevreview | Ian Wienand proposed openstack/project-config master: Rename x/xstatic-angular-fileupload->openstack/xstatic-angular-fileupload https://review.opendev.org/c/openstack/project-config/+/873843 | 22:22 |
opendevreview | Ian Wienand proposed openstack/project-config master: Rename x/ovn-bgp-agent to openstack/ovn-bgp-agent https://review.opendev.org/c/openstack/project-config/+/879456 | 22:22 |
ianw | ^ there were no conflicts, but i've stacked those on top of each other in the order they came in | 22:22 |
fungi | also i'm not a huge fan of the squashing idea because we lose some easy tracking of the individual changes back to their authors if we abandon 2/3 of them | 22:23 |
fungi | granted, it's only a very minor concern | 22:24 |
ianw | i think if we want to call it on any further renames, we can commit https://review.opendev.org/c/opendev/project-config/+/879414 | 22:26 |
fungi | when did we schedule the maintenance window for? i've already forgotten | 22:27 |
ianw | planned April 6, 2023 beginning at 22:00 UTC | 22:27 |
fungi | thanks, i wanted to say utc thursday but i wasn't certain | 22:27 |
fungi | so yeah, we're 48 hours out, this seems like a reasonable cut-off time | 22:27 |
ianw | just for an abundance of caution -> https://opendev.org/opendev/system-config/src/branch/master/playbooks/manage-projects.yaml i'm confident that manage-projects will do nothing with gitea and review while they're in the emergency file | 22:30 |