*** tosky has quit IRC | 00:15 | |
*** DSpider has quit IRC | 01:33 | |
*** dviroel has quit IRC | 02:36 | |
*** raukadah is now known as chandankumar | 04:35 | |
*** ykarel|away has joined #opendev | 05:08 | |
*** ysandeep|away is now known as ysandeep|rover | 05:12 | |
*** ykarel|away is now known as ykarel | 05:19 | |
*** redrobot0 has joined #opendev | 05:30 | |
*** redrobot has quit IRC | 05:34 | |
*** redrobot0 is now known as redrobot | 05:34 | |
*** d34dh0r53 has joined #opendev | 05:53 | |
*** marios has joined #opendev | 05:54 | |
*** ykarel_ has joined #opendev | 06:00 | |
*** ykarel has quit IRC | 06:01 | |
*** ykarel_ is now known as ykarel | 06:12 | |
*** whoami-rajat__ has joined #opendev | 06:13 | |
*** ykarel_ has joined #opendev | 06:27 | |
*** ykarel has quit IRC | 06:28 | |
*** sboyron has joined #opendev | 06:41 | |
*** ykarel_ is now known as ykarel | 06:45 | |
*** ykarel has quit IRC | 07:08 | |
*** eolivare has joined #opendev | 07:37 | |
*** ralonsoh has joined #opendev | 07:38 | |
*** ykarel has joined #opendev | 07:45 | |
*** hashar has joined #opendev | 07:53 | |
*** fressi has joined #opendev | 07:57 | |
*** slaweq_ has joined #opendev | 07:59 | |
*** rpittau|afk is now known as rpittau | 08:00 | |
ykarel | ysandeep|rover, looks like limestone got hit again | 08:06 |
ykarel | i see jobs failing there | 08:06 |
ykarel | example https://578d70a47ab08636787d-9f757b11a1d2b00e739d31e1ecad199a.ssl.cf2.rackcdn.com/774394/3/check/tripleo-ci-centos-8-containers-multinode/b324d95/job-output.txt | 08:06 |
ykarel | https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_1aa/774394/3/check/tripleo-ci-centos-8-standalone/1aaa456/job-output.txt | 08:06 |
ysandeep|rover | ykarel, ack | 08:06 |
ykarel | i see more like this | 08:07 |
ykarel | https://zuul.openstack.org/status | 08:07 |
ysandeep|rover | #opendev infra anyone available to check issue with limestone mirror | 08:07 |
ianw | http://mirror.regionone.limestone.opendev.org/ seems ok to me | 08:09 |
ykarel | seems issue is with ipv6,ipv4 seems fine | 08:13 |
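A quick way to reproduce ykarel's observation from a test node would be to hit the same mirror URL over each address family; a minimal sketch (not taken from the log), where curl's -4/-6 flags force the family and a hang or error on the -6 call alongside a clean -4 call points at the v6 path rather than the mirror itself:

    # hypothetical spot check of the mirror over each address family
    curl -4 -sS -o /dev/null -w 'ipv4: %{http_code}\n' http://mirror.regionone.limestone.opendev.org/
    curl -6 -sS -o /dev/null -w 'ipv6: %{http_code}\n' http://mirror.regionone.limestone.opendev.org/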
ianw | [Mon Feb 8 07:52:53 2021] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: | 08:16 |
ianw | there were some warnings starting around then | 08:16 |
ianw | [Mon Feb 8 08:00:51 2021] afs: file server 104.130.138.161 in cell openstack.org is back up (code 0) (multi-homed address; other same-host interfaces may still be down) | 08:16 |
ianw | and then nothing since then. | 08:16 |
ianw | so possibly between that time it was a bit unhappy | 08:16 |
ykarel | but as per logs i see errors before 07:52 too | 08:19 |
ykarel | around 07:45 | 08:19 |
ianw | well 7:52 was when it decided tasks were stuck and started putting things in dmesg ... so it might still be related | 08:20 |
ianw | weirdly there are no symbols in the backtrace | 08:21 |
*** slaweq_ is now known as slaweq | 08:23 | |
ykarel | i see errors at 8:02 but maybe that's related too, let's see if it hits again now | 08:24 |
*** hemanth_n has joined #opendev | 08:24 | |
ykarel | https://e29f078388bc55cedcb5-0aa6565d1e0c06a8a7c8c1f2b9ada9fc.ssl.cf1.rackcdn.com/773537/4/check/openstack-tox-pep8/3fea8e3/job-output.txt | 08:24 |
*** jpena|off is now known as jpena | 08:26 | |
*** tosky has joined #opendev | 08:45 | |
*** zbr is now known as zbr|pto | 08:45 | |
akahat | chandankumar, arxcruz|ruck ysandeep|rover marios sshnaidm|afk bhagyashris please take a look when you are free: https://review.opendev.org/c/openstack/tripleo-ci/+/773413 https://review.rdoproject.org/r/#/c/31659/ | 09:27 |
*** sshnaidm|afk is now known as sshnaidm | 10:21 | |
*** DSpider has joined #opendev | 10:21 | |
*** dtantsur|afk is now known as dtantsur | 10:48 | |
*** dviroel has joined #opendev | 11:17 | |
*** jpena is now known as jpena|lunch | 12:36 | |
*** hemanth_n has quit IRC | 12:38 | |
ysandeep|rover | ianw, hey o/ any luck with limestone mirror? | 12:58 |
*** dtantsur is now known as dtantsur|brb | 13:01 | |
*** clayg has quit IRC | 13:26 | |
*** mattmceuen has quit IRC | 13:26 | |
*** gouthamr has quit IRC | 13:26 | |
*** portdirect has quit IRC | 13:26 | |
*** CeeMac has quit IRC | 13:27 | |
*** walshh_ has quit IRC | 13:27 | |
*** whoami-rajat__ has quit IRC | 13:27 | |
*** hillpd has quit IRC | 13:27 | |
*** snbuback has quit IRC | 13:27 | |
*** guilhermesp has quit IRC | 13:27 | |
*** dviroel has quit IRC | 13:27 | |
*** ildikov has quit IRC | 13:27 | |
*** aprice has quit IRC | 13:27 | |
*** jmorgan has quit IRC | 13:27 | |
*** diablo_rojo_phon has quit IRC | 13:27 | |
*** mwhahaha has quit IRC | 13:27 | |
*** johnsom has quit IRC | 13:27 | |
*** jrosser has quit IRC | 13:27 | |
*** gmann has quit IRC | 13:27 | |
*** knikolla has quit IRC | 13:27 | |
*** mnaser has quit IRC | 13:27 | |
*** zaro has quit IRC | 13:27 | |
*** parallax has quit IRC | 13:27 | |
*** clayg has joined #opendev | 13:28 | |
*** CeeMac has joined #opendev | 13:28 | |
*** hillpd has joined #opendev | 13:28 | |
*** guilhermesp has joined #opendev | 13:28 | |
*** walshh_ has joined #opendev | 13:28 | |
*** portdirect has joined #opendev | 13:28 | |
*** johnsom has joined #opendev | 13:28 | |
*** mnaser has joined #opendev | 13:29 | |
*** jpena|lunch is now known as jpena | 13:29 | |
*** gouthamr has joined #opendev | 13:30 | |
*** mattmceuen has joined #opendev | 13:31 | |
*** snbuback has joined #opendev | 13:31 | |
*** whoami-rajat__ has joined #opendev | 13:31 | |
*** jmorgan has joined #opendev | 13:31 | |
*** parallax has joined #opendev | 13:31 | |
*** gmann has joined #opendev | 13:31 | |
*** TheJulia has quit IRC | 13:32 | |
*** dviroel has joined #opendev | 13:32 | |
*** aprice has joined #opendev | 13:32 | |
*** knikolla has joined #opendev | 13:32 | |
*** ildikov has joined #opendev | 13:32 | |
*** zaro has joined #opendev | 13:33 | |
*** mwhahaha has joined #opendev | 13:34 | |
*** jrosser has joined #opendev | 13:34 | |
*** artom has joined #opendev | 13:38 | |
*** TheJulia has joined #opendev | 13:44 | |
fungi | happened a few days ago too... i wonder if ipv6 connectivity between the nodes and the mirror server occasionally breaks | 14:19 |
fungi | or maybe we're seeing a recurrence of the rogue route announcement leaks between test nodes and the mirror? | 14:19 |
fungi | we did catch it doing that in limestone a year or so ago, before we saw it happen in vexxhost | 14:20 |
*** ysandeep|rover is now known as ysandeep|mtg | 14:22 | |
*** dtantsur|brb is now known as dtantsur | 14:28 | |
*** ysandeep|mtg is now known as ysandeep|rover | 14:43 | |
dtantsur | hey folks. a lot of our jobs are RED with ModuleNotFoundError: No module named 'setuptools_rust' when installing cryptography | 14:53 |
dtantsur | does it ring any bells? I assume it's an upstream problem | 14:53 |
dtantsur | example https://zuul.opendev.org/t/openstack/build/2b7a39718d9641eca58732dbc2e709ee/log/job-output.txt#28928 | 14:53 |
dtantsur | also breaks release test jobs | 14:53 |
dtantsur | honestly, "update to the latest pip" is something I'd prefer not to do, since it usually brings its own problems | 14:54 |
dtantsur | I see some people are working around: https://review.opendev.org/c/openstack/tenks/+/774474 | 14:55 |
fungi | first i've heard of it, but i'll start researching, thanks | 14:56 |
fungi | mtreinish: ^ you may also have some ideas, seems like i remember you knowing arcane things about python rust integration | 14:56 |
* dtantsur loves to see rust integration, but not when it breaks him | 14:57 | |
fungi | initial speculation on just seeing the error message is that it actually wants a newer version of setuptools, not pip | 14:58 |
*** fressi has quit IRC | 14:59 | |
fungi | oh, i see, setuptools_rust is its own package on pypi | 15:01 |
fungi | so maybe it's an install_requires of cryptography | 15:01 |
fungi | cryptography 3.3.2, 3.4 and 3.4.1 were released yesterday | 15:02 |
fungi | aha, they decided to implement it in a pyproject.toml file | 15:04 |
fungi | so all but fairly recent pip have no idea it needs to be installed | 15:04 |
dtantsur | le sigh | 15:05 |
fungi | probably time to start just pinning cryptography, or rip back out all the effort we put into using distro packaged pip | 15:05 |
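The pinning fungi mentions would just hold jobs on the last pre-Rust release line until wheels and tooling catch up; a minimal sketch, with the exact specifier being an assumption rather than anything decided in this conversation:

    # stay below 3.4, the first series whose sdist build-requires setuptools-rust
    pip install 'cryptography<3.4'
    # or express the same pin through a constraints file
    echo 'cryptography==3.3.2' > local-constraints.txt
    pip install -c local-constraints.txt cryptography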
dtantsur | worth raising to the distros? | 15:05 |
fungi | feel free, but the previous times i brought it up the answer was e.g. "you should be using distro packaged python-cryptography, not trying to pip install" | 15:06 |
fungi | at least debian/ubuntu are unlikely to backport newer pip to their existing releases | 15:09 |
fungi | interestingly though, your sample error happened on an ubuntu-focal (20.04 lts) node, which should have pip 20.0.2 | 15:09 |
fungi | https://github.com/pyca/cryptography/issues/5753 | 15:10 |
fungi | dtantsur: ^ that's where i expect discussion to continue | 15:10 |
fungi | as far as whether upstream unwinds it, i mean | 15:10 |
dtantsur | hmm, 20.0.2 is weird | 15:11 |
dtantsur | I wonder if they bundle the same pip with virtualenv | 15:12 |
dtantsur | we'll also need a pretty recent rust compiler | 15:13 |
fungi | yeah, the distro packaged python3-virtualenv on focal installs python-pip-whl 20.0.2 for virtualenv to use | 15:14 |
fungi | https://packages.ubuntu.com/python-pip-whl | 15:15 |
fungi | https://packages.ubuntu.com/focal/python3-virtualenv | 15:16 |
* dtantsur tries in podman | 15:16 | |
jrosser | in osa world we use the distro pip to build a virtualenv, then upgrade the venv pip to be some specific version we pin in our repo | 15:16 |
jrosser | that works quite solidly and means we're not at the mercy of the distro | 15:17 |
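jrosser's pattern, sketched with an illustrative pip pin (not OSA's actual one): the distro tooling only bootstraps the virtualenv, and everything after that uses the pip installed inside it.

    python3 -m venv /opt/deploy-venv                  # bootstrap with distro python/pip
    /opt/deploy-venv/bin/pip install 'pip==21.0.1'    # then pin a known-good pip inside the venv
    /opt/deploy-venv/bin/pip install cryptography     # new pip understands pyproject.toml build requirements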
fungi | that's a viable workaround, though if this is virtualenv invoked by tox you need to jump through extra hoops to make it work | 15:17 |
dtantsur | I've successfully installed cryptography 3.4.1 in a venv on focal without updating pip | 15:18 |
dtantsur | from a wheel, if it matters (so maybe it has the rust bits already compiled?) | 15:18 |
fungi | we also have a parameter to the ensure-virtualenv role (i think that's the one) to override and use versions from pip | 15:18 |
fungi | dtantsur: i wonder, did the example failures maybe race the wheel upload to pypi? | 15:19 |
fungi | also i think you can pip install --no-binary to force it to try with the sdist instead | 15:20 |
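The --no-binary switch fungi mentions is a real pip option and handy for reproducing this on a machine that would otherwise pick up the wheel; a small sketch:

    pip install --no-binary cryptography cryptography   # build just cryptography from its sdist
    pip install --no-binary :all: cryptography          # or refuse wheels for everything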
dtantsur | well, not impossible. at least in the example I have it's downloaded from a tarball | 15:20 |
dtantsur | (but it's with pip 9.0.3) | 15:20 |
fungi | 9.0.3 would be weird, that's what ubuntu-bionic ships i think? | 15:20 |
dtantsur | CentOS 8 in our case | 15:20 |
fungi | oh, that ironic-lib-wholedisk-bios-ipmi-direct-src job is building in a chroot? | 15:21 |
dtantsur | yup (in DIB) | 15:21 |
fungi | aha, that explains it then | 15:22 |
*** tbarron is now known as tbarron|out | 15:22 | |
fungi | yeah, so you need a centos oriented solution, and you're going through some extra layes | 15:22 |
fungi | layers | 15:22 |
dtantsur | works if I update pip to 19.1.1 | 15:22 |
*** hashar has quit IRC | 15:24 | |
*** sshnaidm has quit IRC | 15:34 | |
*** hashar has joined #opendev | 15:37 | |
*** jpena is now known as jpena|brb | 15:46 | |
*** ysandeep|rover is now known as ysandeep|away | 15:57 | |
*** sshnaidm has joined #opendev | 15:58 | |
*** jpena|brb is now known as jpena | 16:01 | |
clarkb | the wheel race is normal with cryptography | 16:03 |
clarkb | they always update sdists first, then we break for like 6 hours and then everything is fine again when the wheels show up | 16:04 |
clarkb | this was typical even before they started integrating rust because many jobs don't install libssl-dev or whatever the header package is for openssl | 16:04 |
fungi | yeah, so to recap, distro packaged pip on centos and ubuntu bionic or earlier can't install latest cryptography from sdist | 16:04 |
fungi | doesn't help that they released three new versions in a day, so probably went quite a while with no wheels for whatever was latest at each of those | 16:05 |
clarkb | I've suggested that they upload wheels then sdists to avoid this problem but there was some issue related to that. I think it has to do with wheel builds being a bit more "as we are able" | 16:06 |
clarkb | but I don't recall the specific reason | 16:06 |
fungi | yeah, they build lots of different wheels on different platforms, they may not want to block uploading the sdist if one of those isn't working | 16:07 |
fungi | clarkb: two other items... more signs of intermittent ipv6 unreachability for the limestone mirror (starting to wonder if it's router announcements leaking between tenants again), and also we've got jobs complaining they're out of disk space on citycloud (failing to resize the rootfs?) | 16:08 |
clarkb | the rootfs resizing is done by all our nodes with growfs or similar iirc. I would expect similar failures across the board if it was that. Did the image disk size change maybe? | 16:09 |
fungi | not sure, i added df calls to validate-host so we can try to get a better idea of what they look like at the start of those builds | 16:10 |
fungi | and right, it would be weird for that to only happen in one provider | 16:11 |
fungi | another possibility is the rootfs size associated with that flavor we're using in citycloud shrunk | 16:11 |
clarkb | ya, or particular jobs running there (airship?) have suddenly increased disk needs | 16:12 |
fungi | in the cases which were brought up, it was devstack in the context of grenade claiming there was not enough free space remaining in /var to download more packages | 16:13 |
fungi | well, more specifically, an apt-get call (one of many, well into the setup) saying there was insufficient space in /var/cache/apt/archives to download more packages | 16:13 |
*** _mlavalle_1 has quit IRC | 16:15 | |
clarkb | that it doesn't fail the first time it tries to do writes implies growfs is able to find some room. I believe the free space on the image without growing the filesystem is tiny | 16:16 |
clarkb | but might need to just boot an instance there and check it out too. These are jobs running on bionic or focal? | 16:16 |
fungi | probably focal, but i'll see if i can find a recent failure from it | 16:17 |
fungi | clarkb: https://zuul.opendev.org/t/openstack/build/1e5ee6221cc74100b90b1714895cd5b3 | 16:20 |
fungi | looks like the rootfs starts out at 15gb with around 10gb used | 16:20 |
clarkb | huh that's more free space than I expected | 16:21 |
fungi | well, that's also before we add the swapfile i think | 16:22 |
clarkb | ya that is super early | 16:22 |
clarkb | however that should happen after growfs which is during boot | 16:23 |
clarkb | so that's showing us that either growfs is failing or the disk is small. Maybe syslog will shed some light if we capture the early boot portion of it? | 16:23 |
fungi | yeah, the job fails to early to capture the usual files we would in a normal run | 16:24 |
fungi | er, too early | 16:24 |
fungi | so probably need to hold/build a node there | 16:24 |
clarkb | ah | 16:24 |
fungi | devstack/grenade jobs normally collect df output on their own, but due to the early failure conditions i ended up adding it to validate-host because then it gets sucked up with the zuul info | 16:26 |
*** mlavalle has joined #opendev | 16:27 | |
fungi | oh, and that example was on ubuntu-bionic after all. i guess master branch grenade jobs still use bionic | 16:29 |
*** ykarel has quit IRC | 16:29 | |
clarkb | fungi: would lsblk show us raw disk sizes? | 16:29 |
fungi | yeah, should anyway | 16:30 |
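A held node would let someone compare the raw device size against the mounted filesystem, which is enough to tell a small flavor apart from a failed grow; a sketch, assuming the root device is /dev/vda as in the df output elsewhere in this log:

    lsblk -b -o NAME,SIZE,MOUNTPOINT /dev/vda   # raw block device size in bytes
    df -h /                                     # filesystem size as actually grown and mounted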
*** marios has quit IRC | 16:35 | |
fungi | seeing if i can catch a normal ubuntu-bionic node in use there and snag its syslog | 16:45 |
fungi | and i'll lsblk on it too | 16:46 |
fungi | worst case i'll boot something there manually | 16:46 |
fungi | not a bionic node, but i jumped onto a regular ubuntu-focal node which was in use and /dev/vda1 is 94G not 15 | 16:51 |
fungi | so either we're getting different sized images some times, or we're failing to grow the rootfs sometimes | 16:52 |
fungi | er, different sized flavors i mean | 16:52 |
fungi | | 0022913763 | airship-kna1 | ubuntu-bionic | 0a40080f-03df-414c-96ef-9b4b784cabb3 | 89.46.86.200 | | in-use | 00:00:00:06 | locked | | 16:58 |
fungi | /dev/vda1 94G 9.4G 80G 11% / | 16:58 |
fungi | so it's not even all ubuntu-bionic nodes | 16:58 |
fungi | this is going to prove hard to pin down | 16:59 |
clarkb | that certainly makes the mystery more mysterious | 17:03 |
*** rpittau is now known as rpittau|afk | 17:05 | |
*** artom has quit IRC | 17:14 | |
*** ralonsoh has quit IRC | 17:38 | |
*** eolivare has quit IRC | 17:43 | |
*** jpena is now known as jpena|off | 18:00 | |
*** hashar has quit IRC | 18:01 | |
*** artom has joined #opendev | 18:11 | |
*** ildikov has quit IRC | 18:25 | |
*** ildikov has joined #opendev | 18:25 | |
*** artom has quit IRC | 18:56 | |
*** diablo_rojo has joined #opendev | 19:33 | |
diablo_rojo | fungi, let me know when you wanna chat about channel renames :) | 19:34 |
fungi | diablo_rojo: yep, i should be done cooking in roughly an hour if you'll be around | 19:47 |
*** whoami-rajat__ has quit IRC | 19:53 | |
diablo_rojo | fungi, works for me! | 19:55 |
*** artom has joined #opendev | 20:11 | |
*** dtantsur is now known as dtantsur|afk | 20:20 | |
clarkb | I've updated the meeting agenda for tomorrow. Plan to send it out shortly. Please add (or remove) any agenda items if you've got them | 20:35 |
clarkb | fungi: ianw: if you get a sec for reviews https://review.opendev.org/c/opendev/system-config/+/765021 and child are some gerrit housekeeping things I've had up a while (the 3.3 image builds) | 20:45 |
clarkb | and now to look at gerrit accounts again I guess | 20:45 |
* clarkb has to put on the brain melter prevention helmet | 20:46 | |
diablo_rojo | Good luck! | 20:47 |
*** d34dh0r53 has quit IRC | 20:49 | |
*** d34dh0r53 has joined #opendev | 20:59 | |
clarkb | thanks! | 21:07 |
fungi | diablo_rojo: sorry, taking longer than anticipated but will be done eating shortly | 21:10 |
diablo_rojo | fungi, no worries at all! | 21:11 |
diablo_rojo | Started working on some of the steps. | 21:11 |
clarkb | there are definitely a few patterns starting to emerge in the external id issues as well | 21:27 |
clarkb | one appears to be a variant of the issues with preferred emails where people seem to change employer, update their email then things get sad | 21:28 |
clarkb | but in this case the preferred email continues to have an external id in two different accounts so the error isn't a missing external id but a conflict | 21:28 |
clarkb | anyway I'm doing human detectiving to keep trying to classify things (I think I have ~4 groups now) and am writing down some ideas for fixing the various situations | 21:29 |
clarkb | another interesting thing is that almost universally these accounts seem to have not been used recently. I think if people have trouble with accounts and are active users they tend to talk to us about it | 21:30 |
clarkb | another group (though with only one set of conflicts in it currently) is the third party CIs with the same email addr | 21:31 |
fungi | diablo_rojo: so we're on https://etherpad.opendev.org/p/openinfra-channel-renames | 21:39 |
fungi | i'll add a link in there to https://docs.opendev.org/opendev/system-config/latest/irc.html#renaming-an-irc-channel for reference | 21:40 |
fungi | also https://docs.opendev.org/opendev/system-config/latest/irc.html#access since most of those still need that done as well | 21:42 |
fungi | diablo_rojo: what are the email and kick entries for? | 21:43 |
fungi | for the bots line, that's going to be opendev/system-config (meetbot) and openstack/project-config (accessbot, statusbut) changes | 21:45 |
fungi | we can batch them up though, in my opinion | 21:46 |
fungi | one project-config change to add the new channels there, then system-config change depends on that, then we can stage removals for the old channels in both repos to be merged after the transitional period is completed | 21:47 |
diablo_rojo | fungi, I figured we would email the lists when the changes are done | 21:47 |
diablo_rojo | I started a project config change | 21:47 |
fungi | aha, so that's signing off on an announcement | 21:47 |
diablo_rojo | Yeah | 21:47 |
fungi | and kick is...? | 21:47 |
fungi | oh, when we kick people out of the old channel, got it | 21:47 |
diablo_rojo | If after a certain period of time we wanted to kick people out of the old channel | 21:48 |
diablo_rojo | yup | 21:48 |
fungi | yeah, sorry, i'm slow ;) | 21:48 |
diablo_rojo | So far I have registered, set guards, and added openinfra access to ptg, forum, diversity, and board | 21:49 |
diablo_rojo | Beautiful. | 21:49 |
diablo_rojo | For the project config change, I was adding them to the accessbot. Should I remove the openstack versions from that file as well? | 21:50 |
fungi | what about ttx's suggestion that ptg and forum should share a common events channel (summit too)? | 21:50 |
diablo_rojo | Oh I am cool with that. | 21:51 |
diablo_rojo | I missed that suggestion on the ML I guess. | 21:51 |
fungi | is that something we want to entertain, or do you feel that the discussion played out contrary to that | 21:51 |
fungi | ? | 21:51 |
* fungi double-checks his memory | 21:51 | |
diablo_rojo | No I think one channel makes sense. | 21:51 |
diablo_rojo | openinfra-events then? | 21:51 |
fungi | http://lists.openstack.org/pipermail/foundation/2021-January/002951.html | 21:52 |
diablo_rojo | Oh perfect. | 21:53 |
diablo_rojo | Yeah that makes sense. | 21:53 |
fungi | also -summit is missing from the pad? | 21:53 |
diablo_rojo | Reduced siloing | 21:53 |
diablo_rojo | I legit forgot we had one | 21:54 |
diablo_rojo | Added now though. | 21:54 |
fungi | looks like we also used #openinfra-summit last time, but it was never registered so i'd err on the side of just letting that fade back into the aether | 21:54 |
diablo_rojo | Yes +2 | 21:54 |
diablo_rojo | Totally agree | 21:54 |
fungi | granted there are 60+ folks in it at the moment | 21:55 |
diablo_rojo | We could set up a redirect for it while we are doing the rest of this | 21:55 |
fungi | probably joined during the summit and just never left | 21:55 |
fungi | well, it's not registered | 21:55 |
fungi | oh, though i'm the chanop | 21:55 |
diablo_rojo | fungi, finished registering, setting the guard and adding the openstack access to #openinfra-events | 21:55 |
fungi | so i can register it and do the stuff | 21:55 |
fungi | yeah, lemme do the reg and access dance on #openinfra-summit so we can prep to reroute it | 21:56 |
diablo_rojo | perfect | 21:56 |
diablo_rojo | And I guess we want to do a redirect from openstack-foundation to openinfra as well? | 21:57 |
fungi | yes | 21:57 |
fungi | that's already on your pad though | 21:58 |
diablo_rojo | Right. | 21:58 |
diablo_rojo | So many things to keep track of lol | 21:58 |
fungi | our duelling red and black text backgrounds remind me of my favorite double set of dice | 22:00 |
diablo_rojo | It's very pretty :) | 22:00 |
diablo_rojo | Do any of the 4 channels need gerritbot? (events, board, diversity, or the regular openinfra) | 22:02 |
fungi | i'll check whether their source channels have it, but probably not | 22:02 |
fungi | what was going to happen with the refstack channel, btw? | 22:02 |
diablo_rojo | Ahh good call. I can work on that if you wanna do the system-config change. | 22:03 |
diablo_rojo | Since I've already started the project-config one. | 22:03 |
fungi | oh, yeah, the ptg channel gets gerritbot announcements of ptgbot changes | 22:04 |
fungi | the rest are just accessbot and meetbot | 22:04 |
fungi | and statusbot | 22:05 |
diablo_rojo | Perfect. | 22:07 |
diablo_rojo | Got accessbot and gerritbot stuff done. | 22:08 |
diablo_rojo | You said statusbot is also in project config? | 22:08 |
fungi | i think so, ckecking... | 22:09 |
diablo_rojo | Not seeing a top level dir, but it might be hiding | 22:09 |
fungi | ahh, nope | 22:09 |
fungi | system-config in hiera/common.yaml, same file as meetbot just another section | 22:10 |
diablo_rojo | ohhhh yeahhhh that sounds right. | 22:10 |
diablo_rojo | Cool, then I will go ahead and push this project config patch | 22:11 |
openstackgerrit | Kendall Nelson proposed openstack/project-config master: Setup OpenInfra Channels https://review.opendev.org/c/openstack/project-config/+/774550 | 22:11 |
fungi | reviewing | 22:13 |
diablo_rojo | Merci! | 22:13 |
diablo_rojo | I don't seem to be able to update topics in openstack-diversity or openstack-board. | 22:18 |
fungi | diablo_rojo: propose a change to add your nick to the operators list in accessbot/channels.yaml | 22:22 |
fungi | i'll fast approve it | 22:22 |
fungi | that'll give you operator access to all the channels we manage | 22:22 |
fungi | assuming you're okay with the idea of basically volunteering to help shepherd that collective set of irc channels | 22:23 |
fungi | diablo_rojo: the access job failure on the first patchset claims the three new channels aren't registered with chanserv... looking into why it would think that | 22:26 |
diablo_rojo | I will get that change up now | 22:27 |
fungi | diablo_rojo: `/msg chanserv access #openinfra-board list` and so on definitely claims "#openinfra-board is not registered." | 22:28 |
fungi | you did the commands listed at https://docs.opendev.org/opendev/system-config/latest/irc.html#access ? | 22:28 |
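For reference, the sequence from that doc amounts to a few ChanServ commands issued from an IRC client while holding ops in the channel (ChanServ refuses register otherwise, which is what the conversation below circles in on); a rough sketch against one of the channels in question, with the access-flag string left out because it comes from the doc rather than from this log:

    /msg ChanServ register #openinfra-board
    /msg ChanServ set #openinfra-board guard on
    /msg ChanServ access #openinfra-board list    # should list the registrant once it worked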
openstackgerrit | Kendall Nelson proposed openstack/project-config master: Add diablo_rojo to AccessBot Operators https://review.opendev.org/c/openstack/project-config/+/774555 | 22:29 |
diablo_rojo | fungi, yeah I did | 22:29 |
diablo_rojo | My history shows me running them at 12:59:52 (my time) and onwards | 22:30 |
fungi | weird | 22:32 |
fungi | if you run the command i quotes above, what does it say? | 22:32 |
fungi | er, command i quoted above | 22:32 |
fungi | i'm starting to regret using a qwerty keyboard with every time i type s where i meant d and vice versa | 22:33 |
diablo_rojo | Uhh when I run the list command nothing comes back.. | 22:37 |
diablo_rojo | No error message though | 22:37 |
*** slaweq has quit IRC | 22:38 | |
diablo_rojo | I guess you could try registering them fungi? | 22:43 |
fungi | were you joined to the channel and an operator when you registered it? | 22:45 |
*** sboyron has quit IRC | 22:46 | |
diablo_rojo | I was definitely joined to the channel. | 22:47 |
diablo_rojo | I might have missed the op step | 22:47 |
diablo_rojo | I ran the op command and then the register command again. Then I ran the list command and still nothing. | 22:49 |
fungi | if you were the only person in the channel you'd have automatically been the op | 22:49 |
fungi | if you're not, then whoever joined the channel before you will need to do it | 22:49 |
diablo_rojo | Oh well there were people in the openinfra-board channel already | 22:50 |
diablo_rojo | looks like none of them have op though? | 22:50 |
diablo_rojo | Well no one currently there. | 22:50 |
diablo_rojo | Maybe it was jbryce? During the last board meeting? | 22:51 |
fungi | looks like jbryce was a chanop when i originally joined | 22:51 |
fungi | but is no longer identified as a chanop | 22:52 |
jbryce | i appear to no longer have any special privileges there and can't register it either | 22:56 |
diablo_rojo | Huh. | 22:57 |
fungi | jbryce: thanks for confirming, that is certainly odd | 22:58 |
fungi | we can request that the people in that channel /part and hope that they all do, at which point the first person to rejoin will be a chanop | 22:59 |
fungi | alternatively, as we now have control of the #openinfra channel we can go ahead with registering an organization namespace with freenode, though their process for that is convoluted and requires legal documentation, last i looked through it | 22:59 |
fungi | diablo_rojo: what about the others? we can break #openinfra-board out from the rest | 23:01 |
fungi | or revisit my earlier suggestion to merge the board and foundation channels | 23:01 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] add script for pruning borg backups https://review.opendev.org/c/opendev/system-config/+/774561 | 23:03 |
*** CeeMac has quit IRC | 23:03 | |
*** parallax has quit IRC | 23:04 | |
*** walshh_ has quit IRC | 23:04 | |
*** walshh_ has joined #opendev | 23:04 | |
*** CeeMac has joined #opendev | 23:04 | |
*** mwhahaha has quit IRC | 23:04 | |
*** snbuback has quit IRC | 23:04 | |
diablo_rojo | fungi, diversity and events should be good to go | 23:04 |
*** snbuback has joined #opendev | 23:05 | |
*** mwhahaha has joined #opendev | 23:05 | |
*** parallax has joined #opendev | 23:05 | |
fungi | and #openinfra as well (it's already added to accessbot too) | 23:05 |
diablo_rojo | There were already people in openinfra and I couldn't get op there either | 23:07 |
fungi | yeah, i took care of it already | 23:07 |
fungi | that one took negotiating with the previous registrants | 23:07 |
fungi | so out options for #openinfra-board are: A. ask everyone to /part and hope they do so we can claim chanop; B. register #openinfra as an organizational namespace and ask freenode staff to grant us chanop; C. decode to forward #openstack-board to a different channel e.g. #openinfra | 23:08 |
fungi | s/out/our/ | 23:08 |
fungi | s/decode/decide/ | 23:08 |
fungi | it's getting to be that time of night where i shouldn't be typing | 23:08 |
diablo_rojo | I would lean towards A first? | 23:09 |
fungi | my adjacent qwerty key typos are growing | 23:09 |
diablo_rojo | I can ping each person in the channel. | 23:09 |
fungi | yeah, i'll do my part by /part'ing now ;) | 23:09 |
diablo_rojo | We can pick up tomorrow if you're looking at wrapping up for today :) | 23:09 |
fungi | i'm still around | 23:10 |
fungi | just may be of decreasing utility as the evening proceeds | 23:10 |
diablo_rojo | Totally fair :) | 23:11 |
diablo_rojo | I pinged everyone in the channel so hopefully we see that list shrink. | 23:11 |
diablo_rojo | In the meantime then, should I get the system-config change together? Or did you have that going already? | 23:12 |
fungi | it's not a ton, otherwise yeah i'd have written option A off as entirely intractable | 23:12 |
diablo_rojo | Yeah and they are all pretty responsive people and we know them reasonably well :) | 23:13 |
fungi | i haven't started the system-config change, you'll need to set it depends-on your project-config change because our checks will prevent adding bots to any channel for which we lack access | 23:13 |
fungi | but if you want to split #openinfra-board out, we can probably get the others merged sooner | 23:13 |
diablo_rojo | Okay. I can get that together and set the depends-on. | 23:13 |
diablo_rojo | Yeah that might be best. | 23:13 |
openstackgerrit | Kendall Nelson proposed openstack/project-config master: Setup OpenInfra Channels https://review.opendev.org/c/openstack/project-config/+/774550 | 23:15 |
openstackgerrit | Kendall Nelson proposed opendev/system-config master: Setup OpenInfra Channels https://review.opendev.org/c/opendev/system-config/+/774563 | 23:19 |
diablo_rojo | Done! | 23:19 |
diablo_rojo | fungi, ^ | 23:19 |
diablo_rojo | ( also linked to it in the etherpad) | 23:20 |
fungi | awesome, taking a look | 23:20 |
diablo_rojo | Thank you! | 23:20 |
fungi | but really, we have fairly thorough check jobs for these | 23:20 |
diablo_rojo | Yeah I figured :) | 23:20 |
fungi | so i'll mostly wait for them to make sure stuff matches | 23:20 |
diablo_rojo | Oh shit I forgot the depends on | 23:20 |
diablo_rojo | Ugh | 23:20 |
fungi | it's only a commit --amend away | 23:20 |
diablo_rojo | Doing it now. | 23:20 |
fungi | the depends-on won't affect testing anyway i don't think, it's mostly to record that we can't merge it until the first change merges and is deployed | 23:21 |
fungi | so just a helpful reference to reviewers | 23:21 |
diablo_rojo | The system config needs to depend on the status config right? | 23:21 |
fungi | system-config depends on project-config | 23:22 |
fungi | for this | 23:22 |
fungi | it's really the accessbot addition which needs to deploy first, and that's in project-config | 23:22 |
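So the footer in the system-config commit message simply points at the project-config change; something along these lines, where only the Depends-On trailer matters and the rest of the message body is illustrative:

    Setup OpenInfra Channels

    Add meetbot and statusbot configuration for the new channels.

    Depends-On: https://review.opendev.org/c/openstack/project-config/+/774550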
openstackgerrit | Kendall Nelson proposed opendev/system-config master: Setup OpenInfra Channels https://review.opendev.org/c/opendev/system-config/+/774563 | 23:22 |
diablo_rojo | Hokay. Hopefully that's all good now. | 23:22 |
fungi | the check for system-config may actually follow the depends-on, we'll have to see | 23:23 |
clarkb | fungi: I wonder if we need to clarify the statement made at https://github.com/pyca/cryptography/issues/5771#issuecomment-775477095 | 23:24 |
clarkb | we provide constraints, but we also expect you to use an up to date cryptography if we haven't pinned it | 23:24 |
clarkb | but I don't want to add to the noise on that issue with a side thing | 23:24 |
fungi | yeah, maybe better to just let it slide as an inaccurate example | 23:25 |
fungi | and also there are already calls to lock the issue | 23:28 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] add script for pruning borg backups https://review.opendev.org/c/opendev/system-config/+/774561 | 23:44 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] backup monitor https://review.opendev.org/c/opendev/system-config/+/774564 | 23:44 |
fungi | diablo_rojo: subtle typo on 774550 is causing the failure (yay testing!), see inline comment | 23:53 |