opendevreview | Merged openstack/project-config master: Drop gating for x/networking-opencontrail https://review.opendev.org/c/openstack/project-config/+/912678 | 00:20 |
opendevreview | OpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml https://review.opendev.org/c/openstack/project-config/+/912694 | 02:32 |
*** mrunge_ is now known as mrunge | 06:54 | |
hashar | fungi: +1ed thanks for the git-review release notes | 07:11 |
hashar | on another topic, I had a look at upgrading tox from 3 to 4 and it turned out to be a nightmare. I have noticed you have switched to use `nox` and I am wondering whether some rationale was written about it | 07:16 |
hashar | or how devs accepted switching from `tox` to `nox` | 07:16 |
hashar | my guess is that it is merely in the commit message and the switch has "just happened" ™ (ex https://review.opendev.org/c/zuul/zuul/+/867057 ) | 07:18 |
opendevreview | Takashi Kajinami proposed openstack/project-config master: Retire puppet-ec2api: End Project Gating https://review.opendev.org/c/openstack/project-config/+/912710 | 07:38 |
opendevreview | Takashi Kajinami proposed openstack/project-config master: Retire puppet-ec2api: Remove Project from Infrastructure System https://review.opendev.org/c/openstack/project-config/+/912713 | 07:45 |
frickler | hashar: there was a lot of discussion and testing happening when tox 4 came out, but I don't remember anything being permanently documented. note also that openstack itself is still using tox | 08:12 |
frickler | seems this is the best I can find https://meetings.opendev.org/meetings/infra/2023/infra.2023-01-10-19.01.log.html#l-76 | 08:16 |
opendevreview | Dmitriy Rabotyagov proposed openstack/project-config master: Add Freezer to gerritbot https://review.opendev.org/c/openstack/project-config/+/912716 | 08:25 |
opendevreview | Dmitriy Rabotyagov proposed openstack/project-config master: Notify in IRC regarding patches to OSA unmaintained branches https://review.opendev.org/c/openstack/project-config/+/912717 | 08:29 |
hashar | frickler: that is awesome, thank you so much for having taken the time to find some meeting minutes :) | 09:17 |
hashar | I had my share of tox v4 related issues which I have documented on https://phabricator.wikimedia.org/T345695 | 09:17 |
hashar | but I think ultimately it is probably not worth pursuing any further and I am leaning toward nox based on opendev adoption | 09:18 |
hashar | I gotta reach out to our python devs and see what they think of it hence why I am looking for some arguments :) | 09:18 |
fungi | hashar: the tox upgrade challenges are tractable once you know where the rough edges are (especially around use_develop and skip_sdist), but also nox seems to work fairly well | 12:50 |
opendevreview | Dmitriy Rabotyagov proposed openstack/project-config master: Add #openstack-freezer to accessbot https://review.opendev.org/c/openstack/project-config/+/912767 | 13:11 |
opendevreview | Dmitriy Rabotyagov proposed openstack/project-config master: Add Freezer to gerritbot https://review.opendev.org/c/openstack/project-config/+/912716 | 13:11 |
noonedeadpunk | fungi: that is an unfortunate side effect I didn't think about.... | 13:44 |
fungi | yeah, normally the first person to /join gets implicitly set as a channel operator, but if others join and they drop and the channel never goes completely empty, then they can't regain that if they didn't register the channel to begin with | 13:45 |
noonedeadpunk | I wonder what would be a reasonable thing to do here | 13:46 |
noonedeadpunk | as getting everyone out sounds tricky | 13:46 |
noonedeadpunk | Like when I joined there were a couple of ppl already, as you might imagine | 13:47 |
fungi | calling them by name in the channel and asking them to /part for a few days might get their attention | 13:48 |
fungi | worst case, we have contacts in oftc who can bypass the restrictions to override that for us when the channel name begins with a prefix they associate with our projects (#openstack- being one), but it takes time and i would have to dig up notes on the process so preferable if other options are exhausted first | 13:49 |
noonedeadpunk | ++ | 13:50 |
noonedeadpunk | I was thinking also of doing just #freezer - as it's empty (or well, I'm chan op there) | 13:51 |
fungi | that's certainly a faster option | 13:51 |
noonedeadpunk | but would prefer it being prefixed | 13:51 |
noonedeadpunk | well, it should go through governance first I assume. So not that sure what would be faster :D | 13:52 |
fungi | i mean, as far as adding a channel to accessbot and then other bots it would be faster, but yes updating the channel mentioned for the project in openstack/governance may not be | 13:55 |
frickler | a simple project update should just need two tc members to approve, no mandatory waiting period | 13:59 |
frickler | but I'd certainly also prefer to stick to #openstack- prefixed channel names, the existing exceptions to that always cause confusion for me | 14:00 |
corvus | plus we actually do have the group contact benefits of #openstack- | 14:01 |
corvus | https://docs.opendev.org/opendev/system-config/latest/irc.html#access are the docs for creating an irc channel | 14:03 |
*** blarnath is now known as d34dh0r53 | 14:51 | |
hashar | fungi: I have hit a few walls with tox, such as the lack of a way to force an env variable to be passed (eg: `XDG_CACHE_HOME` and `CI`) and a chicken-and-egg problem with the latest tox depending on an arbitrarily recent version of setuptools which does not support some older Python 3 versions :D | 14:52 |
hashar | I guess I will stick to tox 3 and investigate nox :) | 14:53 |
corvus | hashar: i love nox. zuul has a slightly more complex than average noxfile if you want to see some advanced stuff: https://opendev.org/zuul/zuul/src/branch/master/noxfile.py | 14:54 |
hashar | neat thank you! | 14:55 |
clarkb | I think nox has worked well. I really like that it composes standard tools more so than tox and its apis have been very stable so far | 15:09 |
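The noxfile linked above is the authoritative example; below is a minimal sketch of the same idea for anyone comparing with tox (the session names, requirements files, and the stestr/flake8 invocations are assumptions for illustration, not copied from zuul's noxfile). It also shows one answer to the env-var complaint earlier: nox passes the caller's environment through by default, and extra variables can be set explicitly per run.

```python
# noxfile.py -- illustrative sketch only, not zuul's actual noxfile
import nox

# Reuse virtualenvs between invocations to keep local iteration fast.
nox.options.reuse_existing_virtualenvs = True


@nox.session(python="3")
def tests(session):
    # Requirements file names here are assumptions for the example.
    session.install("-r", "requirements.txt", "-r", "test-requirements.txt")
    session.install("-e", ".")
    # Unlike tox, nox hands the caller's environment (CI, XDG_CACHE_HOME, ...)
    # to the session by default; extra variables can still be set explicitly.
    session.run("stestr", "run", *session.posargs, env={"CI": "1"})


@nox.session
def linters(session):
    session.install("flake8")
    session.run("flake8")
```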
opendevreview | Clark Boylan proposed opendev/base-jobs master: Remove centos-7 nodeset https://review.opendev.org/c/opendev/base-jobs/+/912786 | 15:23 |
opendevreview | Clark Boylan proposed openstack/project-config master: Remove centos-7 image uploads from Nodepool https://review.opendev.org/c/openstack/project-config/+/912787 | 15:24 |
opendevreview | Clark Boylan proposed openstack/project-config master: Remove centos-7 nodepool image builds https://review.opendev.org/c/openstack/project-config/+/912788 | 15:30 |
clarkb | those changes should have appropriate WIP flags and depends on to prevent early merging | 15:31 |
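For anyone unfamiliar with the mechanism, Depends-On is just a footer line in the Gerrit commit message that Zuul resolves before allowing a change to merge; a hedged sketch follows (which of the three changes above depends on which is illustrative here, not stated in the log):

```
Remove centos-7 image uploads from Nodepool

Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/912786
```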
clarkb | looks like openstack ansible still has stable/pike branches open and they use the centos-7 nodeset... | 15:34 |
fungi | yeah, noonedeadpunk and jrosser were talking about manual branch deletion options | 15:35 |
noonedeadpunk | yup, it's not that we wanna have them | 15:35 |
noonedeadpunk | I was going to gather a list of repos/branches somehow, as they're really more random things here and there | 15:36 |
fungi | noonedeadpunk: the ironic folks did something similar for their bugfix branches since those aren't managed by the release team, though that's a more ongoing activity i guess | 15:37 |
fungi | https://opendev.org/openstack/project-config/src/branch/master/gerrit/acls/openstack/ironic.config#L17 | 15:38 |
noonedeadpunk | I mean, I could do that and then revoke access | 15:39 |
noonedeadpunk | *revert the patch | 15:39 |
fungi | yes, some projects have also done that in the past, it's a reasonable enough process | 15:42 |
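For reference, the ironic approach linked above works by granting branch deletion in the project's Gerrit ACL file; a hedged sketch of the general shape of such a stanza (the ref pattern and group names are illustrative, not copied from ironic.config):

```ini
[access "refs/heads/bugfix/*"]
  abandon = group example-project-core
  create = group example-project-release
  delete = group example-project-release
```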
fungi | in centos-7 news, i got go-ahead from keystone and neutron ptls to bypass review/gating and merge those removals, i've not had any luck getting in touch with the solum ptl but the tc has declared them inactive anyway so i'll just do it there too | 15:46 |
clarkb | sounds like a plan | 15:47 |
fungi | then i'll do the same with the remaining devstack change and see what new zuul config errors it produces | 15:47 |
clayg | clarkb: anything we can do to get a bump on this feature branch? 911950: Add feature/mpu branch for Swift | https://review.opendev.org/c/openstack/releases/+/911950 | 16:33 |
fungi | clayg: looks like that's for the openstack release managers in the #openstack-release channel? | 16:33 |
fungi | i've temporarily merged the cleanup change in devstack (after merging the outstanding blockers we knew about in keystone, neutron-vpnaas and solum). it appears this increased the config error count by only 3, which is orders of magnitude fewer than i expected | 16:35 |
fungi | diffing, it's for openstack/keystone branches stable/victoria, stable/wallaby, stable/xena which i already merged cleanup changes on | 16:38 |
fungi | i wonder if configuration was already broken on those branches and zuul is operating off a cached version? | 16:38 |
fungi | huh, nope, looks like i may have never pushed those three backports for some reason and somehow didn't notice | 16:40 |
fungi | great news! after merging the missing three keystone stable backports, the zuul config errors list is identical to what it was before i merged the devstack cleanup, so that was actually everything related to the custom devstack-single-node-centos-7 nodeset | 16:51 |
fungi | so i'll abandon the revert and unrevert | 16:52 |
frickler | that sounds like success | 17:00 |
fungi | it's far more success than i was expecting at least | 17:01 |
fungi | i'll gladly take it as a win | 17:01 |
fungi | and now we have no centos-7 use on master branches of anything (nor on stable/2024.1 branches for openstack projects which have them), so if nothing else the removal on friday shouldn't impact openstack release work | 17:02 |
fungi | which was my main concern, given the relative timing | 17:03 |
opendevreview | Merged opendev/git-review master: Add missing release notes and manpage updates https://review.opendev.org/c/opendev/git-review/+/912653 | 17:08 |
opendevreview | Merged opendev/git-review master: It's patchset not patch set https://review.opendev.org/c/opendev/git-review/+/912681 | 17:08 |
fungi | heading out for a quick lunch, should hopefully be back in an hour (or less) | 17:17 |
noonedeadpunk | ugh, there's just 1 person left in the freezer channel, with the nick osn.. does anyone happen to know who that is? As they are present in many openstack channels today... | 17:40 |
noonedeadpunk | and frankly - not sure they're active, so it's unlikely they will quit it. | 17:55 |
fungi | noonedeadpunk: /whois claims they're connected via an aws vm, so might be a total zombie. in a bunch of other #openstack-.* channels too, but only idle 5.5 days so possibly still around | 19:18 |
noonedeadpunk | yup, seen that already | 19:19 |
noonedeadpunk | 5 days is promising though | 19:19 |
fungi | i'll do a quick search against all the logged openstack channels for their possible activity | 19:24 |
fungi | noonedeadpunk: no luck, all the matches for "osn" this year (case insensitive) other than this exact discussion were typos | 21:46 |
fungi | clarkb: there was some question from other reviewers on whether https://review.opendev.org/849219 should be included in the next git-review release, and i seem to recall you had an opinion | 21:51 |
opendevreview | Birger J. Nordølum proposed openstack/diskimage-builder master: fix(rocky): don't uninstall linux-firmware https://review.opendev.org/c/openstack/diskimage-builder/+/912816 | 22:49 |
opendevreview | Birger J. Nordølum proposed openstack/diskimage-builder master: fix(rocky): don't uninstall linux-firmware https://review.opendev.org/c/openstack/diskimage-builder/+/912816 | 22:50 |
Reverbverbverb | Has anyone seen this sort of thing? Fails consistently (same 5-byte error). Some sort of network error? I'm on a cellular hotspot, tethered to my iPhone, if that's a factor. https://paste.openstack.org/show/bN9V9KQqo6z3mYnE6Hc7/ | 22:59 |