fungi | oh cool | 00:11 |
---|---|---|
opendevreview | Steve Kowalik proposed opendev/bindep master: Support and prefer unittest.mock https://review.opendev.org/c/opendev/bindep/+/878733 | 02:17 |
frickler | Clark[m]: ildikov: codesearch found lots of references like this, not sure if they're actually being used https://opendev.org/airship/tempest-plugin/src/branch/master/.zuul.yaml#L53 | 04:25 |
ildikov | frickler: thank you, I’ll check them tomorrow! | 04:40 |
opendevreview | Merged openstack/project-config master: Add metal3-io/metal3-dev-env to openstack tenant https://review.opendev.org/c/openstack/project-config/+/878223 | 05:40 |
*** amoralej|off is now known as amoralej | 06:17 | |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: registry-tag-remove: role to delete tags from registry https://review.opendev.org/c/zuul/zuul-jobs/+/878614 | 06:18 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: promote-container-image: use generic tag removal role https://review.opendev.org/c/zuul/zuul-jobs/+/878740 | 06:18 |
priteau | Hello. Did something change with regards to topics in Gerrit? I have removed the topic from https://review.opendev.org/c/openstack/kolla-ansible/+/871279 to change it, and I don't see any UI element to add a topic back. | 07:15 |
priteau | Nevermind, it is hidden under SHOW ALL | 07:15 |
*** jpena|off is now known as jpena | 07:20 | |
frickler | priteau: yes, I had the same issue the other day, not sure if that is a recent UI "optimization" | 07:25 |
mnasiadka | Speaking about Gerrit - in Kolla-Ansible I see Review-Priority ticked as green even if it's not voted (i.e. 0) - where do we fix that? | 07:42 |
frickler | mnasiadka: that's not a bug from gerrit pov, it only considers submit-requirements. since review-priority is non-blocking, this is satisfied with RP=0 | 07:53 |
frickler | I think clarkb had some more context for that | 07:53 |
mnasiadka | ok, that makes sense | 07:53 |
frickler | infra-root: looks like we may need to force-merge some horizon zuul config changes. they neglected to remove the xstatic-font-awesome repo before its retirement and now patches to drop it fail because of issues in different branches | 09:04 |
frickler | https://review.opendev.org/c/openstack/horizon/+/878751 vs. https://review.opendev.org/c/openstack/horizon/+/875326 | 09:04 |
fungi | mnasiadka: frickler: yes, basically if review-priority doesn't require a positive vote to merge then it will show a green checkmark indicating that requirement is satisfied, since apparently that's what gerrit upstream thinks the change list should be showing to people (whether or not a change is ready to merge) | 12:17 |
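For context, gerrit models this with submit requirements in project.config; a non-blocking label gets a submittable expression that RP=0 already satisfies, hence the green checkmark. A rough illustrative sketch (not opendev's actual configuration, and the exact expression here is an assumption):

```ini
[submit-requirement "Review-Priority"]
    description = Advisory review prioritization; does not block submission
    # satisfied unless someone votes the minimum value, so an unvoted
    # RP=0 already shows as a satisfied (green) requirement
    submittableIf = -label:Review-Priority=MIN
```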
*** amoralej is now known as amoralej|lunch | 12:18 | |
lecris[m] | I've noticed that the opendev documentation says that we can host projects on OpenDev, but I can't find any instructions on where to get started | 12:18 |
lecris[m] | The Get Started page is about developing on already present projects, not on making a new one | 12:19 |
lecris[m] | Or alternatively is there another way to get a zuul workflow for a project? | 12:21 |
*** gthiemon1e is now known as gthiemonge | 12:24 | |
frickler | fungi: yes, technically this makes sense, but it destroyed the intended use of the RP field, unless one is using custom dashboards | 12:28 |
fungi | lecris[m]: zuul is open source, anyone can deploy and run the requisite services and connect them to the supported code review platform of their choice. there are also companies like vexxhost who offer managed zuul-as-a-service and other project hosting platforms like softwarefactory which also provide zuul services to projects hosted with them. as for opendev's project creation | 12:30 |
fungi | process, we have it documented in our infrastructure manual here: https://docs.opendev.org/opendev/infra-manual/latest/creators.html | 12:30 |
fungi | it's basically all done via code review adding project information to structured data files in a configuration repository, and then project creation in gerrit and related systems is handled through post-merge continuous deployment automation (also driven by zuul) | 12:32 |
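Concretely, registering a project is roughly a code review adding an entry of this shape to project-config's gerrit/projects.yaml (the project name below is hypothetical; the creators guide linked above has the authoritative format and the additional fields):

```yaml
- project: example/widget-tools
  description: Tooling for managing widgets.
```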
lecris[m] | Thanks for the link, it might be helpful to link this on the front page: https://opendev.org/ | 12:34 |
fungi | good point, though mostly we cater to projects that want to be hosted in opendev due to their maintainers having already contributed to other projects hosted in opendev and who therefore have some familiarity with the required workflows and processes | 12:37 |
fungi | since we're essentially a community collective of projects who pool our resources to help maintain a common set of open source collaboration tools for each other | 12:38 |
lecris[m] | So currently we cannot make user level projects, and it needs to be reviewed. I'll give it a thought. There is no free/trial version for hosted zuul to play around? | 12:38 |
fungi | lecris[m]: the opendev collaboratory isn't zuul-as-a-service, if that was your impression. we're running an integrated project hosting platform which happens to include zuul as its ci/cd component, but it's a necessarily opinionated set of connected tools and workflows, not simply a means of getting access to a zuul someone has hosted for you | 12:40 |
lecris[m] | Indeed, I am vaguely familiar with the openstack projects on opendev, I was wondering about the other zuul services you've mentioned | 12:41 |
fungi | lecris[m]: at https://zuul-ci.org/start there's a link to vexxhost which has a commercial zuul-as-a-service offering, not sure if they have a free trial | 12:43 |
lecris[m] | Thanks I'll check it out | 12:45 |
fungi | lecris[m]: also red hat runs https://softwarefactory-project.io/ which is a hosting platform that incorporates zuul (they run one that projects can be hosted in, and they also make a sort of distribution of it) | 12:47 |
fungi | lecris[m]: as for hosting in opendev, yes we discourage creating throwaway test projects, but are happy to host an open source project you have some plan to maintain | 12:49 |
lecris[m] | Ok, I've checked them both, but it seems making a self-managed zuul would be more accessible than those | 12:50 |
fungi | if you're just wanting to play around with zuul, there's a "quickstart" all-in-one container based (docker-compose) deployment in the zuul/zuul repository which includes a running gerrit container connected to it and some rudimentary log hosting through apache | 12:50 |
fungi | that "quick start installation" linked from https://zuul-ci.org/start | 12:51 |
lecris[m] | Is there a playbook for it? | 12:51 |
fungi | yeah, it actually gets run on every proposed change for the zuul project as a full integration test... i'll dig up the link | 12:52 |
fungi | lecris[m]: https://opendev.org/zuul/zuul/src/branch/master/playbooks/tutorial | 12:53 |
fungi | if you're curious, here's how it gets run as part of a zuul job too: https://opendev.org/zuul/zuul/src/branch/master/.zuul.yaml#L181-L194 | 12:54 |
frickler | lecris[m]: if you are done with the quickstart and want a more production like deployment, maybe have a look at https://github.com/osism/ansible-collection-services/tree/main/roles/zuul, this is what I (co-)wrote and use for our downstream zuul | 12:54 |
lecris[m] | Thanks for the links 🎉 | 12:55 |
lecris[m] | So even the playbook uses docker-compose I see | 12:56 |
lecris[m] | Still works on podman though right? | 12:57 |
fungi | yes, i think we actually use podman for that, but would need to check | 12:58 |
*** amoralej|lunch is now known as amoralej | 13:00 | |
fungi | mmm, https://opendev.org/zuul/zuul/src/branch/master/playbooks/tutorial/pre.yaml#L4 calls the ensure-docker role: https://zuul-ci.org/docs/zuul-jobs/container-roles.html#role-ensure-docker but it may "just work" if switched to https://zuul-ci.org/docs/zuul-jobs/container-roles.html#role-ensure-podman | 13:03 |
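The swap fungi speculates about would amount to changing the role list in the tutorial's pre.yaml, something like the following untested sketch (ensure-docker and ensure-podman are real zuul-jobs roles; the rest of the play is elided here):

```yaml
- hosts: all
  roles:
    # was: ensure-docker
    - ensure-podman
```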
mnasiadka | Is there a way to set a storyboard project as read only? (we'd like to move Kayobe to launchpad) | 13:24 |
fungi | mnasiadka: yes, i should be able to switch it to disabled, just a sec | 13:25 |
fungi | confirmed, there's an is_active column in the projects table. looks like we switched that to 0 and updated the description for the project when we did this in the past | 13:27 |
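In SQL terms the disable step fungi describes comes down to roughly this (a sketch; the table and column names come from the message above, while the project name and description wording are hypothetical):

```sql
UPDATE projects
   SET is_active = 0,
       description = 'Kayobe (read-only; bug tracking has moved to Launchpad)'
 WHERE name = 'openstack/kayobe';
```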
fungi | mnasiadka: so we should be able to do it to kayobe, are you wanting that done now or on a certain day? | 13:28 |
mnasiadka | fungi: I think we'd like to create a launchpad project first - I'll reach out with a request to do it once we have it | 13:29 |
mnasiadka | speaking of which - is there some special magic to create a launchpad project? | 13:29 |
fungi | mnasiadka: i haven't done it in a long time, so you're likely to figure it out as fast as i do. just make sure you set the owner to a group that's owned by the openstack administrators group, like how nova's is configured, that allows me or someone else in that group to step in and handle things like ptl replacement if something happens and you're not available to conduct a smooth transition of | 13:31 |
fungi | leadership | 13:31 |
lecris[m] | I never understood why some projects have storyboard, while most are on launchpad. Historical reasons, or are there good use cases for storyboard? | 13:31 |
mnasiadka | fungi: will do | 13:32 |
fungi | lecris[m]: launchpad was unmaintained for a very long time when we started creating an alternative. also there were lots of projects using closed-source kanban boards for task tracking and prioritization (trello especially) and we wanted to offer them an open source alternative to that | 13:33 |
fungi | however the people doing sb development came and went, and we were never able to build a critical mass of people using it who also wanted to help develop/maintain the code for it | 13:34 |
fungi | probably the biggest challenge is that most of our projects, traditionally, were python based and web frameworks had popularly started shifting to javascript which very few contributors had significant experience or comfort level with | 13:35 |
lecris[m] | Hmm, but why not gitea issues? Launchpad doesn't have kanban either right? | 13:35 |
fungi | gitea wasn't even a twinkle in anyone's eye in 2013 | 13:35 |
lecris[m] | Oh, that long ago. But eventually opendev adopted gitea, so why not then? | 13:36 |
fungi | also gitea issues will require a pretty heavy lift to how we currently deploy and run gitea, right now we just have a farm of independent read-only gitea servers we replicate commits into from gerrit, no user account management | 13:37 |
frickler | https://lists.opendev.org/pipermail/service-discuss/2022-November/000382.html has some context about that | 13:38 |
fungi | we used gitea as a replacement for cgit, because people preferred it for code browsing. right now it's really just set up as a code browser interface for us | 13:38 |
lecris[m] | Oh yeah. Probably will be hard until federated users is a thing 🤔 | 13:38 |
fungi | (and we had been using cgit as a replacement for gitweb) | 13:38 |
fungi | even solving the account management for gitea, we'll need some sort of shared backend because we get too much traffic for a single gitea server | 13:40 |
fungi | i don't think we've ruled it out, we're just operating on a skeleton crew trying to continue maintaining things, and have limited time for significant feature work unless more people start helping out with general maintenance tasks first | 13:41 |
lecris[m] | Hmm, I thought that would be automatically handled by the git ssh, which is not (or at least can be configured not to be) managed by the gitea server | 13:42 |
fungi | we've learned that volunteers to maintain a new service or implement some new feature doesn't help much, because people get bored and move on quickly and leave the rest of us stuck maintaining whatever they built and abandoned | 13:42 |
fungi | lecris[m]: what would be handled? offloading the request volume? | 13:43 |
fungi | or the shared data backend? | 13:43 |
lecris[m] | git traffic through ssh. That is the main bottleneck right? Not the actual frontend management | 13:44 |
fungi | we don't serve git repositories over ssh, they're over https | 13:44 |
lecris[m] | Oh, right, that would be a strain on gitea directly | 13:45 |
fungi | ssh doesn't work from lots of places... corporate networks where the firewall admins think ssh means hackers have taken control of your workstation, mainland china, et cetera | 13:46 |
lecris[m] | What about for the backend? | 13:47 |
fungi | but also we get a lot more non-git web traffic than you might expect, particularly crawlers | 13:47 |
fungi | synchronizing git content between the servers is a solved problem, we do that already, the challenge is other features (which we currently have disabled) like issues, wiki, anything that needs user management | 13:48 |
fungi | keeping all that synchronized between multiple gitea servers historically wasn't trivial, though maybe it is now | 13:49 |
lecris[m] | Ok, I've found probably the [relevant issue](https://gitea.com/gitea/helm-chart/pulls/205) on the HA setup | 13:50 |
fungi | looks like that design relies on a shared block device writeable from multiple systems | 13:51 |
lecris[m] | Well, one place that has it solved would be gitlab, but probably too much of a hassle to migrate there | 13:53 |
frickler | which might in fact be available in vexxhost. but certainly would require a good amount of testing | 13:53 |
fungi | and it's not clear that it's a solution to load distribution (active-active clustering), that may be purely for failover (active-standby) | 13:53 |
fungi | lecris[m]: also there's the problem that gitlab is open-core, and we try to avoid open-core projects | 13:54 |
lecris[m] | Indeed. Didn't gitea start to move to open-core? | 13:55 |
frickler | at least there is a fork that seems to have gained some traction | 13:56 |
fungi | they created a company, i don't think they (yet) said they were going to make a commercial distribution with features absent from the open source version and reject similar patches to add those features to the open source version | 13:56 |
fungi | and also, yes, there's a separate community fork of gitea which we've been tracking as an alternative if that were to happen | 13:57 |
lecris[m] | What's the fork named? | 13:58 |
fungi | forgejo | 13:58 |
lecris[m] | Thnx. Funny how we have a fork of a fork | 14:00 |
fungi | yep | 14:00 |
fungi | granted, many open source projects are forks of forks of forks dating back decades | 14:01 |
fungi | https://forgejo.org/ seems to be the current url for it | 14:02 |
clarkb | fwiw mordred and corvus sort of pioneered running gitea as an active-active cluster. All the history of that remains in opendev/system-config. We didn't realize until after the initial deployment that the gitea processes were trampling over each other via the code search index | 14:25 |
clarkb | it took a while but gitea eventually added support for an external index like elasticsearch in response to that, but by then we had abandoned the gitea cluster in favor of the independent giteas and switching back is a bit of work at this point | 14:26 |
clarkb | in particular the original design relied on kubernetes and we don't currently have a good way to deploy kubernetes except by hand | 14:26 |
lecris[m] | Main/all infrastructure is ansible based? | 14:28 |
clarkb | lecris[m]: I'm not sure what Main/all is. But a number of years ago https://opendev.org/opendev/system-config/src/branch/master/kubernetes deployed a (broken due to code indexes) active active gitea cluster on k8s | 14:28 |
clarkb | bugs were filed with gitea with what was learned and we changed the deployment as gitea didn't support that deployment setup | 14:29 |
lecris[m] | I mean for example, gitea is deployed via ansible for opendev.org instead of kubernetes or other such deployments? | 14:31 |
clarkb | lecris[m]: the current deployment system of our production giteas is ansible managing docker-compose/docker | 14:31 |
lecris[m] | Ok, so more or less similar to openstack deployments afaiu? | 14:33 |
clarkb | lecris[m]: https://review.opendev.org/c/opendev/system-config/+/877541 gives a good window into what the ongoing maintenance is like. This change if/when landed will build and publish new gitea 1.19 docker images, then followup jobs will run ansible to upgrade our giteas in a rolling fashion behind a load balancer which should result in no user visible downtime | 14:33 |
clarkb | lecris[m]: it has nothing to do with openstack deployments | 14:33 |
clarkb | some people also use docker compose and docker and ansible to deploy openstack, but there isn't any shared machinery other than using the same tools | 14:33 |
lecris[m] | I thought nowadays, only the ansible+docker based deployments are actively maintained for openstack. Tripleo, kayobe, kolla | 14:35 |
clarkb | lecris[m]: I don't think that is accurate. vexxhost has their operator based system iirc. Charms still receive changes and so on | 14:35 |
clarkb | But even if that were the case that doesn't give openstack a monopoly on deploying things with docker and ansible. My point is that we may share common tools but the deployment systems and playbooks and roles used don't have any overlap. It's not openstack related | 14:36 |
fungi | and rdo has rhel packages, debian has deb packages, ubuntu has uca with packages and their charms orchestration... | 14:37 |
clarkb | I would say the choices we have made are very much not kolla like :) | 14:37 |
clarkb | kolla spends a lot of energy on building images on different platforms from source and distro packages and we explicitly do not. We build a single image using common base tools for the language or operating environment | 14:38 |
fungi | oh, actually the main debian deployment solution relies on puppet modules now for a lot, i think | 14:38 |
lecris[m] | Yes indeed, just that knowing a bit on the whole stack of one, helps with the other. I've basically learned how to debug docker containers, ansible workflows, etc from tripleo. | 14:39 |
corvus | lecris: hi, i'm the original author and one of the maintainers of zuul. i also provide support for self-hosted zuul via my company Acme Gating. i also have a zuul configurator on the acme website that might help you get started quickly with several of zuul's code hosting options and commercial cloud providers (i also support openstack of course, but that's not as easy to package up :) here's a link: | 14:41 |
corvus | https://acmegating.com/acme-enterprise-zuul/#start | 14:41 |
lecris[m] | clarkb: Speaking of which, would still love to see Fedora support someday :D. TripleO at least has a bootstrapping issue from centos to Fedora due to missing `btrfs` | 14:42 |
frickler | clarkb: fungi: corvus: any thoughts on the horizon issue I mentioned earlier? | 14:43 |
clarkb | frickler: in https://review.opendev.org/c/openstack/horizon/+/878751 can the missing job also be fixed at the same time? | 14:43 |
clarkb | frickler: same question for https://review.opendev.org/c/openstack/horizon/+/875326 looks like. Basically you should be able to remove the missing job from the config so zuul doesn't complain about it while also removing the repo? | 14:44 |
lecris[m] | Thanks, I want to get my feet wet with some test projects before committing to one or another. Thanks for the information though, I'll keep it in mind | 14:44 |
corvus | makes sense. feel free to use the configurator there -- it's very similar to the zuul quickstart (which i also wrote) but includes cloud and code review configuration out of the box | 14:46 |
corvus | basically, the zuul quickstart is designed to teach you how to bootstrap a zuul system using fundamental concepts. the configurator is designed to get a zuul system up and running immediately (without teaching you anything). | 14:47 |
frickler | clarkb: the ussuri patch removes the job that the master patch complains about, not sure what else is to be removed. each patch is still blocked by the other one | 14:48 |
fungi | so cross-branch deadlock basically, requiring one or the other to bypass testing? | 14:50 |
lecris[m] | Hmm, probably will play around with the quickstart and the role since I am also writing a playbook for my test cluster. Btw, why not also a openstack cloud settings? Too much variability? | 14:50 |
frickler | fungi: that's how I read it, yes | 14:50 |
fungi | and this was the result of removing the project from the tenant config i guess? | 14:52 |
frickler | "Zuul encountered a syntax error while parsing its configuration in the repo openstack/horizon on branch master." on the ussuri patch and vice versa | 14:52 |
clarkb | the master job that the ussuri change complains about is present though | 14:52 |
clarkb | but I guess it doesn't see that because the master config is broken so that new config isn't loadable | 14:52 |
corvus | lecris: exactly -- with the configurator, i was trying to make something that works out of the box for the most people. i have customers using openstack and help them with it, but it's not something that's easy to pre-package. | 14:52 |
clarkb | frickler: fungi: in that case we might be able to force merge just the master change and then ussuri would see the config as valid and be mergable normally? | 14:53 |
frickler | clarkb: ah, o.k., I didn't check that deep. then maybe the job removal isn't actually needed, just the project drop | 14:53 |
frickler | but that would also explain how this situation could happen, since I was wondering how the job could get removed without zuul complaining in the first place | 14:54 |
clarkb | ya I think my suggestion would be to force merge the xstatic project removal from horizon master. Then recheck ussuri | 14:54 |
corvus | frickler: it would be interesting to see the results of just proposing a change with the project removal | 14:55 |
corvus | there's a possibility it might not trip zuul's "this change made this error" check | 14:56 |
frickler | I'll update the ussuri patch. https://review.opendev.org/c/openstack/horizon/+/875326 is already just that | 14:56 |
corvus | oh then clarkb 's suggestion is probably best. alternatively, we could enable circular deps, but i think we'd probably prefer to force-merge :) | 14:58 |
lecris[m] | Speaking of horizon, is there a roadmap for skyline if it will replace it at some time? What's the main advantage of developing that? | 14:58 |
fungi | lecris[m]: we don't really get that deep into the projects being hosted in opendev, it's probably more of a question for the #openstack-tc channel or the openstack-discuss mailing list | 14:59 |
lecris[m] | Oh, ok | 15:00 |
fungi | we do use a variety of openstack deployments, but mainly via api integration and cli, so not all that up on the webui scene for it | 15:01 |
frickler | corvus: still the same failure. maybe one of you can do the merge, I'm still in a PTG session in parallel, otherwise I can do it in my morning | 15:01 |
mnasiadka | Getting NODE_FAILURE for all aarch64 jobs that Kolla runs - https://review.opendev.org/c/openstack/kolla/+/878663 | 15:26 |
clarkb | mnasiadka: looking at https://grafana.opendev.org/d/391eb7bb3c/nodepool-linaro?orgId=1 and https://grafana.opendev.org/d/42e9cd74b4/nodepool-osuosl?orgId=1 it doesn't seem to be a complete failure. We probably need to look at the nb03.opendev.org launcher service logs to see more | 15:28 |
clarkb | mnasiadka: is it specific to a particular distro or distro release maybe? | 15:28 |
mnasiadka | centos9, debian, ubuntu and rocky9 is probably enough of a mix to assume at least one would be working ;-) | 15:29 |
clarkb | I'm in a meeting now but will log in to check logs as soon as this is done | 15:33 |
lecris[m] | Can someone point me to where I can get the spec files for rdo? I want to port some of them to copr to test some basic Fedora migration | 15:33 |
clarkb | lecris[m]: you'll need to ask rdo. | 15:34 |
fungi | they have a #rdo irc channel, but looks like you just found it | 15:40 |
clarkb | mnasiadka: linaro failed 300-0020850007 with "no valid host found" and OSUOSL failed with "No fixed IP addresses available" | 15:48 |
clarkb | the linaro failure occurred at ~14:40 which is when linaro was busy | 15:49 |
clarkb | so the no valid host is possibly a valid failure and we've got our quota limits set improperly? | 15:49 |
fungi | or a leak | 15:49 |
clarkb | fungi: yes that is possible too. | 15:50 |
clarkb | on the osuosl side I suspect we share the ip pool with other users and there isn't much we can do about that | 15:50 |
clarkb | anyway I think a recheck would be appropriate and may generate more useful datapoints | 15:50 |
clarkb | ok ya the arm64 pipeline is running jobs now and the grafana graphs reflect nodes are booting for that | 16:05 |
clarkb | I think this may have been an unfortunate timing issue combining lack of resources in both clouds | 16:05 |
clarkb | frickler: fungi: when you get a chance can I get reviews on https://review.opendev.org/c/opendev/zone-opendev.org/+/878710 to finish up gitea01-04 cleanups | 16:10 |
*** jpena is now known as jpena|off | 16:22 | |
frickler | Ramereth: ^^ maybe you can check that IP failure | 16:27 |
Ramereth | frickler: looking | 16:39 |
Ramereth | looks like we're running low on public IPs on that network | 16:41 |
clarkb | we can dial back max-servers if necessary. Also nodepool will respect quota changes if, say, you change the instance quota on the cloud side | 16:43 |
Ramereth | do you require public IP addresses? I was going to work on deploying a new NAT provider network | 16:44 |
Ramereth | (not sure why I didn't do that in the first place a long time ago and just rely on floating IPs, but dealing with that now...) | 16:45 |
*** amoralej is now known as amoralej|off | 16:49 | |
clarkb | Ramereth: we do require it as it vastly simplifies our multicloud deployment | 16:53 |
clarkb | Ramereth: we can do ipv6 only test nodes for public IPs though and then outbound can NAT through a single ipv4 address | 16:53 |
clarkb | unfortunately ipv4 still required due to github and other resources not having ipv6 addresses | 16:53 |
clarkb | basically if we get ipv6 then I think we end up only needing ~1.5 IPv4 addrs. 1 for our mirror/proxy caching node and the other for general test node outbound NAT which can be shared with the rest of the cloud hence the .5 | 16:54 |
*** sfinucan is now known as stephenfin | 17:20 | |
Ramereth[m] | I will see what we can do. I have limited hours this week as I have company visiting. But last I looked there were only 3 IPs available. So if you limit your jobs to that, it should get you by at least. | 17:24 |
opendevreview | Merged opendev/zone-opendev.org master: Remove gitea01-04 server DNS records https://review.opendev.org/c/opendev/zone-opendev.org/+/878710 | 17:40 |
*** dhill is now known as Guest9167 | 20:11 | |
clarkb | fungi: is there any way to tell pip to not cache wheels it builds for certain packages? In particular zuul's use of libre2 ends up causing problems on tumbleweed for me because it updates often enough that the .so moves and the wheel is invalid | 21:00 |
fungi | clarkb: there's a --no-cache-dir option according to --help | 21:03 |
fungi | labeled as "Disable the cache." | 21:04 |
clarkb | I think that disables the entire cache which I don't want | 21:04 |
fungi | not sure about on a per-project basis though | 21:04 |
clarkb | it's really only re2-linked things that I regularly have to run a find in .cache/pip/wheels to delete | 21:04 |
fungi | i suppose you could selectively scrub the cache with a script between invocations | 21:04 |
clarkb | ya maybe that's the best thing, just store the find incantation to delete them somewhere | 21:04 |
fungi | i wipe the cache every time i rebuild my default python, but sounds like you're seeing issues more often than that | 21:05 |
clarkb | yes, in this case it's the libre2(-devel) package updating, creating a new .so file and breaking existing wheels | 21:05 |
fungi | i suppose you could also wipe it any time your libre2 is upgraded | 21:05 |
clarkb | but pip has no validation of wheel external links before using the wheels | 21:05 |
fungi | well, also wheels don't build binary extensions, they only ship prebuilt ones | 21:06 |
fungi | i guess you're building re2 from sdist on install due to lack of sufficient wheel for your platform? | 21:07 |
clarkb | there are no wheels published on pypi iirc | 21:07 |
fungi | pip does cache what it builds, and that could lead to what you're seeing | 21:07 |
clarkb | so ya its building from scratch and then caching that | 21:07 |
clarkb | then 3 months later I'm rebuilding the zuul test env and it explodes | 21:07 |
fungi | not a lot of options for specifically disabling the built cache vs the download cache | 21:08 |
clarkb | with a stable distro release this is never an issue because the .so file never moves | 21:11 |
clarkb | and should remain compatible, though I suppose it's possible a security update could require a rebuild too? probably very unlikely though | 21:11 |
fungi | yeah, sover bumps are verboten within a debian stable release, for example | 21:16 |
fungi | but i run on debian unstable, and also i build my python interpreters from upstream source anyway, so i end up manually invalidating my pip cache extremely often | 21:17 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: promote-image-container: do not delete tags https://review.opendev.org/c/zuul/zuul-jobs/+/878612 | 21:30 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: registry-tag-remove: role to delete tags from registry https://review.opendev.org/c/zuul/zuul-jobs/+/878614 | 21:30 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: promote-container-image: use generic tag removal role https://review.opendev.org/c/zuul/zuul-jobs/+/878740 | 21:30 |
opendevreview | Merged zuul/zuul-jobs master: build-container-image: directly push with buildx https://review.opendev.org/c/zuul/zuul-jobs/+/878487 | 21:50 |
Ramereth | clarkb: I'm going to setup another public provider network, but it will be a few days (likely not until next week) until it's ready | 21:58 |
clarkb | Ramereth: no rush. This is why we've got the other cloud to supplement things. As I mentioned I think the timing was just really bad for that one change and rerunning things seemed to go much better. Thank you for looking at it and that sounds like a good plan to mitigate this | 22:03 |
ianw | clarkb: yeah that gets me *every* time running zuul tests | 22:55 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!