SpamapS | Oh my.. it's been a while since I developed on Zuul on this box.. | 00:44 |
SpamapS | f7f8ea61..0a5e3308 master -> origin/master | 00:44 |
SpamapS | * [new tag] 3.10.0 -> 3.10.0 | 00:44 |
SpamapS | Your branch is behind 'origin/master' by 2891 commits, and can be fast-forwarded. | 00:45 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: Switch jobs to use fedora-34 nodes https://review.opendev.org/c/zuul/zuul-jobs/+/795636 | 01:10 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: Ensure dnf-plugins-core before calling "dnf copr" https://review.opendev.org/c/zuul/zuul-jobs/+/796979 | 01:10 |
ianw | mnaser: ^ couple of tweaks | 01:11 |
corvus | spamaps: lol :) btw, zuul/tools/test-setup-docker.sh is what i use to get an environment for running tests (runs mysql and zk in containers) | 01:20 |
corvus | spamaps: that's probably new since the last time you pulled :) | 01:20 |
corvus | spamaps: and if you want to run tests on python >= 3.9, then add "tests.unit.test_scheduler.TestSchedulerSSL" to your exclude list (there's a fix for that, but we're semi-frozen and also about to remove gearman, so we might just wait for that) | 01:22 |
SpamapS | Ty, I think I remember that script but tox was in my muscle memory. | 01:34 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: Switch jobs to use fedora-34 nodes https://review.opendev.org/c/zuul/zuul-jobs/+/795636 | 02:14 |
opendevreview | James E. Blair proposed zuul/zuul master: Replace TreeCache in component registry https://review.opendev.org/c/zuul/zuul/+/796582 | 02:21 |
opendevreview | James E. Blair proposed zuul/zuul master: Add ExecutorApi https://review.opendev.org/c/zuul/zuul/+/770902 | 02:21 |
opendevreview | James E. Blair proposed zuul/zuul master: Change zone handling in ExecutorApi https://review.opendev.org/c/zuul/zuul/+/787833 | 02:21 |
opendevreview | James E. Blair proposed zuul/zuul master: Switch to string constants in BuildRequest https://review.opendev.org/c/zuul/zuul/+/791849 | 02:21 |
opendevreview | James E. Blair proposed zuul/zuul master: Clean up Executor API build request locking and add tests https://review.opendev.org/c/zuul/zuul/+/788624 | 02:21 |
opendevreview | James E. Blair proposed zuul/zuul master: Fix race with watches in ExecutorAPI https://review.opendev.org/c/zuul/zuul/+/792300 | 02:21 |
opendevreview | James E. Blair proposed zuul/zuul master: Execute builds via ZooKeeper https://review.opendev.org/c/zuul/zuul/+/788988 | 02:21 |
opendevreview | James E. Blair proposed zuul/zuul master: Move build request cleanup from executor to scheduler https://review.opendev.org/c/zuul/zuul/+/794687 | 02:21 |
opendevreview | James E. Blair proposed zuul/zuul master: Handle errors in the executor main loop https://review.opendev.org/c/zuul/zuul/+/796583 | 02:21 |
SpamapS | Oh wow the test suite got much bigger! | 02:47 |
SpamapS | My little crappy laptop has been running tests for 45 minutes now. ;) | 02:47 |
corvus | yeah... it's... well tested. :) some of the cloud nodes take 1hr20m. selective testing during development is helpful | 02:59 |
corvus | (i think we could speed tests up with a db migration rollup strategy, but that's going to take some planning) | 03:00 |
SpamapS | Yeah I figured I should run them all just to validate and knock the rust off things. :) | 03:04 |
SpamapS | I should probably spin up an Ubuntu VM on my gaming laptop and use that.. it has a lot more oomph :) | 03:05 |
corvus | they parallelize very well :) | 03:05 |
SpamapS | I have 2 cores! But.. they're BAD cores. model name: AMD A9-9420e RADEON R5, 5 COMPUTE CORES 2C+3G | 03:08 |
SpamapS | {0} tests.unit.test_gerrit_legacy_crd.TestGerritLegacyCRD.test_crd_branch [45.901647s] ... ok | 03:12 |
opendevreview | Merged zuul/zuul-jobs master: Ensure dnf-plugins-core before calling "dnf copr" https://review.opendev.org/c/zuul/zuul-jobs/+/796979 | 03:12 |
opendevreview | Merged zuul/zuul-jobs master: Switch jobs to use fedora-34 nodes https://review.opendev.org/c/zuul/zuul-jobs/+/795636 | 03:30 |
opendevreview | Merged zuul/zuul-jobs master: ensure-zookeeper: better match return code https://review.opendev.org/c/zuul/zuul-jobs/+/793537 | 03:30 |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 05:49 | |
*** marios is now known as marios|ruck | 06:02 | |
*** jpena|off is now known as jpena | 07:18 | |
*** rpittau|afk is now known as rpittau | 08:17 | |
*** raukadah is now known as chandankumar | 09:26 | |
swest | corvus: https://github.com/python-zk/kazoo/issues/645 | 10:12 |
*** jpena is now known as jpena|lunch | 11:41 | |
*** bhagyashris_ is now known as bhagyashris | 11:50 | |
mhuin | hello, anybody ever run into zookeeper connection timeouts? My Zookeeper service seems up and running, I've set up TLS as explained in zuul's doc, and I can netcat into port 2281. But zuul can't seem to connect | 12:26 |
gtema | ZK is really messy software. I have some issues with it (e.g. zuul-web starts slowly, with some delay connecting to it), but not timeouts. Check the ZK logs, because e.g. if you try to establish a non-SSL connection to the SSL port you get a broken connection | 12:30 |
gtema | or if the cluster is broken you can maybe establish a connection but not be able to do anything with it | 12:33 |
mhuin | gtema, thanks for the tips | 12:34 |
*** jpena|lunch is now known as jpena | 12:38 | |
*** raukadah is now known as chandankumar | 13:13 | |
*** rpittau is now known as rpittau|afk | 14:09 | |
opendevreview | James E. Blair proposed zuul/zuul master: WIP reenqueue https://review.opendev.org/c/zuul/zuul/+/797116 | 14:22 |
masterpe[m] | I'm trying to configure zookeeper to use tls, I have disabled clientPort in zoo.cfg and I have specified hosts=localhost:2281 in zuul.conf but I get in the logs "WARNING zuul.zk.base.ZooKeeperClient: Retrying zookeeper connection". But I don't see any traffic on lo when I sniff with tcpdump? | 14:29 |
mhuin | masterpe[m], ha! I am having the same problem as well and it's driving me crazy | 14:29 |
mhuin | I've tried to follow the steps in the ensure-zookeeper role in zuul-jobs, but no success | 14:30 |
mhuin | https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-zookeeper/tasks/setup_tls.yaml | 14:30 |
fungi | this is what our zk config for opendev looks like: https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/zookeeper/templates/zoo.cfg.j2 | 14:32 |
corvus | masterpe: any info in the zk server logs? | 14:33 |
*** marios is now known as marios|ruck | 14:34 | |
mhuin | fungi, I have a standalone deployment, but I don't think that matters in that case, right? | 14:36 |
fungi | mhuin: by standalone you mean single-node zk deployment rather than a cluster? i don't think that's likely to matter from a connectivity perspective | 14:37 |
gtema | what definitely makes sense for SSL ZK: | 14:37 |
gtema | - forward and reverse DNS | 14:38 |
mhuin | fungi, yep | 14:38 |
gtema | (dns is crucial for host verification) | 14:38 |
gtema | ensuring ZK starts properly and is able to get quorum - in an SSL setup that's not a given | 14:39 |
corvus | mhuin: one zk server, or multiple? | 14:43 |
mhuin | corvus: one standalone | 14:43 |
mhuin | I know the 2281 port is open and listening | 14:43 |
mhuin | but logs aren't very helpful on zuul's nor zk's side. I'm probably missing something though... | 14:44 |
gtema | can you try zkCli.sh? | 14:44 |
gtema | maybe you need zkCli.sh -server localhost:2281 | 14:44 |
corvus | mhuin: that setup is similar to the unit tests; tools/test-setup-docker.sh sets up that environment | 14:45 |
gtema | or depending on the SSL setup you might really need to connect using your real IP address, otherwise ZK rejects the connection since it fails to validate | 14:45 |
mhuin | gtema you might be on to something here | 14:45 |
corvus | i mean to say, that script will set up a single-node zk environment on localhost, which i use when running unit tests; the unit tests configure zuul to connect to localhost:2281 | 14:46 |
mhuin | corvus, yes, I looked at the unit tests first to see how to do it | 14:46 |
gtema | in my real cluster SSL setup I can't connect using localhost or 127.0.0.1 - I need a real IP address (but I run ZK inside of container) | 14:47 |
mhuin | gtema: progress with zkCli > SASL config status: Will not attempt to authenticate using SASL (unknown error) | 14:47 |
mhuin | love unknown errors | 14:47 |
gtema | you definitely need to look into zk logs | 14:48 |
*** jpena is now known as jpena|out | 14:48 | |
gtema | that is all the reason I really "love" java apps | 14:48 |
corvus | (we shouldn't be using sasl) | 14:48 |
gtema | it's a normal condition to only throw exceptions. Good luck figuring out | 14:48 |
gtema | @corvus - that's right, but zkCli somehow tries this by default | 14:48 |
gtema | if you only mention ssl - it wants sasl | 14:49 |
gtema | anyway, when your client can't connect properly you only have a chance of figuring it out from the zk logs (I know how great they are) | 14:49 |
corvus | mhuin: maybe the most productive thing would be to paste the zk server logs from when zuul is trying to connect? | 14:52 |
gtema | maybe | 14:52 |
mhuin | corvus, I'm tailing both the logs from zuul and from zk, but I only see the timeout notifications from zuul. it's like nothing happens on zk | 14:53 |
mhuin | I did see the connection errors when using zkCli though | 14:53 |
gtema | what if your zuul config is trying to connect to the wrong ip (localhost from a container, etc) | 14:54 |
corvus | mhuin: are you sure you got the port number right? i switch the numbers all the time | 14:54 |
mhuin | corvus, I just checked, it's set to 2281 on both sides | 14:55 |
mhuin | gtema, I'm not using containers and everything is on the same VM | 14:55 |
gtema | ok | 14:55 |
gtema | and on both sides it is localhost? | 14:57 |
mhuin | gtema, yes, I think so | 14:57 |
gtema | maybe just for ensuring try to set real ip in zuul config | 14:58 |
gtema | sometimes apps can start only listening on ipv6 | 14:58 |
corvus | yeah, v4/v6 could be an issue | 14:59 |
* corvus < https://matrix.org/_matrix/media/r0/download/matrix.org/JdcNmEdtMlFKqiMiJDJETniw/message.txt > | 14:59 | |
corvus | did that pastebomb irc? | 14:59 |
mhuin | it turned into a link | 15:00 |
corvus | awesome | 15:00 |
gtema | :) matrix is absolutely fine with that | 15:00 |
corvus | mhuin: ^ there's a hello-world script for you, may make testing easier | 15:00 |
corvus | that's more or less zuul's zk client connection setup | 15:00 |
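The pasted message.txt above isn't reproduced in the log. As a rough stand-in, here is a minimal sketch of the kind of kazoo TLS "hello world" being described, assuming kazoo's SSL support (2.7+) and a single-node ZK on localhost:2281; the certificate paths are placeholders, not the values from corvus' paste.

```python
# Minimal kazoo TLS connection sketch -- roughly the shape of Zuul's ZK client
# setup, not the actual pasted script. Cert paths and host:port are assumptions.
from kazoo.client import KazooClient

client = KazooClient(
    hosts='localhost:2281',
    use_ssl=True,
    verify_certs=True,
    ca='/path/to/ca.pem',            # CA that signed the server certificate
    certfile='/path/to/client.pem',  # client certificate
    keyfile='/path/to/client.key',   # client private key
)
client.start(timeout=10)   # raises a timeout error if it can't connect
client.ensure_path('/hello')
print(client.get_children('/'))
client.stop()
```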
mhuin | thanks corvus | 15:01 |
corvus | (i ran that successfully against my zk running in docker with tools/test-setup-docker.sh) | 15:01 |
corvus | netstat -l |grep 2281 shows: | 15:02 |
corvus | tcp 0 0 0.0.0.0:2281 0.0.0.0:* LISTEN | 15:02 |
corvus | so my zk is listening on ipv4 only | 15:02 |
mhuin | ugh, it's listening on ipv6 | 15:04 |
fungi | i seem to recall making java apps dual-stack involves separate listening sockets for each address family | 15:04 |
mhuin | netstat -laputen | grep 2281 | 15:04 |
mhuin | tcp6 0 0 :::2281 :::* LISTEN 0 476052 369509/java | 15:04 |
fungi | mhuin: can you confirm whether that's reachable over a valid ipv4 address for the host? | 15:05 |
fungi | it's possible modern java has solved the single socket dual-stack problem | 15:06 |
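To check from Python which address families actually accept connections on the ZK port (the same question the netstat output above answers), a small sketch like the following works; the loopback addresses and port 2281 are assumptions for this single-node setup.

```python
# Probe which address families accept TCP connections on the ZK client port.
# A successful connect only proves something is listening; TLS is not checked.
import socket

def can_connect(family, addr, port=2281):
    try:
        with socket.socket(family, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            s.connect((addr, port))
        return True
    except OSError:
        return False

print('IPv4 127.0.0.1 ->', can_connect(socket.AF_INET, '127.0.0.1'))
print('IPv6 ::1       ->', can_connect(socket.AF_INET6, '::1'))
```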
mhuin | ok so corvus' test script did manage to connect | 15:08 |
mhuin | ok ... bad file permissions | 15:15 |
* mhuin facepalms | 15:15 | |
corvus | mhuin: for the certs? | 15:15 |
fungi | basic things like that are often what consumes most of my personal sanity as well | 15:15 |
corvus | (client side?) | 15:16 |
corvus | just wondering if there's a check we can add to zuul | 15:16 |
mhuin | corvus, yes - mind you, I'm testing the packaging of zuul for fedora. so zuul runs as the zuul user on the system | 15:16 |
mhuin | the copy of the certs used by zuul needed to be owned by zuul | 15:16 |
corvus | so we should add a read check to the zuul client init | 15:16 |
mhuin | corvus, it's probably specific to the way we deploy | 15:17 |
fungi | ahh, yeah the client connection does need read access to its private key | 15:17 |
mhuin | but it probably wouldn't hurt to ensure the files are readable | 15:17 |
corvus | yeah, but it's an easy error for any deployer to make, and if the kazoo client initializer doesn't emit a useful error in that case, it's better that we do :) | 15:17 |
mhuin | I'm actually surprised no permission error is brought up | 15:17 |
corvus | exactly | 15:17 |
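For illustration, the kind of read check being discussed might look roughly like the sketch below; this is a hypothetical helper, not the actual implementation later proposed for Zuul.

```python
# Hypothetical pre-flight check: fail early with a clear error if the TLS
# files configured for the ZooKeeper connection aren't readable by the user
# Zuul runs as. Function name and error text are illustrative only.
import os

def check_zk_tls_files_readable(certfile, keyfile, cafile):
    for label, path in (('cert', certfile), ('key', keyfile), ('ca', cafile)):
        if path and not os.access(path, os.R_OK):
            raise RuntimeError(
                "ZooKeeper TLS %s file %r is not readable by uid %d" %
                (label, path, os.getuid()))
```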
* masterpe[m] < https://matrix.org/_matrix/media/r0/download/matrix.org/loWCPRvVyqcBSAzmLcLskjyv/message.txt > | 15:43 | |
* masterpe[m] < https://matrix.org/_matrix/media/r0/download/matrix.org/KXoAOpfteqaiDSVuAjhIbznW/message.txt > | 15:43 | |
* masterpe[m] < https://matrix.org/_matrix/media/r0/download/matrix.org/zakAsLKxGTxJabdqUCGsawPl/message.txt > | 15:43 | |
mhuin | masterpe[m], so for me it was a file permissions problem on the certs, zuul could not read them | 15:44 |
* masterpe[m] < https://matrix.org/_matrix/media/r0/download/matrix.org/MlPACqwwTfmrCjpkTxlxjzGp/message.txt > | 15:44 | |
corvus | masterpe: any chance you have the same problem as mhuin? zuul running as a different user and couldn't read the certs? | 15:44 |
masterpe[m] | ohw | 15:45 |
masterpe[m] | That can be | 15:45 |
mhuin | although now I have another problem, the scheduler complains about the key store password | 15:45 |
mhuin | there's no mention of that on the doc though | 15:45 |
mhuin | raise RuntimeError("No key store password configured!") | 15:46 |
mhuin | well I see it mentioned here: https://zuul-ci.org/docs/zuul/discussion/components.html?highlight=key%20store#attr-keystore | 15:48 |
opendevreview | James E. Blair proposed zuul/zuul master: Verify ZK certs can be read https://review.opendev.org/c/zuul/zuul/+/797135 | 15:49 |
corvus | mhuin: oh, that doc could possibly be clearer -- it's nothing related to the zookeeper connection | 15:50 |
corvus | mhuin: it's just a password that zuul uses to encrypt its own data | 15:50 |
mhuin | corvus, ok, so it's not related to the error I'm seeing now? | 15:50 |
corvus | mhuin: it is related to that error | 15:51 |
corvus | it's just not a zookeeper connection issue | 15:51 |
corvus | mhuin: basically: just make up a password and put it in zuul.conf and you're done ;) | 15:51 |
mhuin | ah gotcha | 15:51 |
mhuin | is that something recent? we have a fairly recent deployment of zuul for sf.io and I don't see this option set | 15:52 |
corvus | mhuin: but make it really long and random; like "pwgen -s 256" or something -- (really, i think fungi suggested it should be at least as long as any private keys you are likely to encrypt as zuul secrets) | 15:53 |
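If pwgen isn't handy, the Python stdlib can produce an equally long random value; using 192 random bytes (about 256 URL-safe characters) is just an assumption that matches the "at least as long as your keys" advice.

```python
# Generate a long random keystore password, as an alternative to `pwgen -s 256`.
# 192 random bytes encode to 256 URL-safe base64 characters.
import secrets

print(secrets.token_urlsafe(192))
```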
corvus | mhuin: pretty soon after 4.0 i think | 15:53 |
corvus | it's been a few months | 15:53 |
mhuin | hmm weird, I'll check the logs then | 15:54 |
gtema | @corvus - any info on whether I ever need to rotate it, or what to do if it's lost, etc? | 15:55 |
fungi | yes, in short since the password you provide for the secret store is going to be protecting keys held in that keystore, it would be unfortunate for brute-forcing the password protecting those keys to be easier than brute-forcing one of the keys themselves | 15:56 |
corvus | mhuin: fyi, the purpose of that is so that we can put the zuul secrets encryption keys in zookeeper, in encrypted form, without worrying about having to secure the zk data storage (some locations have "encryption at rest" policies, which this should hopefully comply with) that's why it's called the "keystore" password | 15:56 |
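As a generic illustration of that idea -- encrypting key material under a password before storing it somewhere like ZK -- and explicitly not Zuul's actual keystore format, a sketch with the cryptography library could look like this.

```python
# Generic password-protected encryption of key material (NOT Zuul's actual
# keystore scheme): derive a symmetric key from the password, then encrypt.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_with_password(data: bytes, password: str):
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=390000)
    key = base64.urlsafe_b64encode(kdf.derive(password.encode()))
    return salt, Fernet(key).encrypt(data)  # store salt alongside ciphertext
```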
gtema | yeah, it's relatively clear. But apparently I initially used a relatively short pwd and have no clue now whether I can simply change it, or whether I need to clean ZK if I decide to change it (export/import of the keys) | 15:57 |
corvus | gtema: right now, the keys are still written to disk on the scheduler, and, if they don't exist in zk, are read from disk. so they are effectively a backup of the data in zk. you should be able to stop the scheduler, delete all of zk, change the password, then start the scheduler and it will re-read the keys from disk and use the new password. | 15:58 |
corvus | gtema: before we remove the "filesystem backup" feature, we'll make sure we have a command line utility to export/import the keys, so the same would be possible. | 15:58 |
corvus | and then eventually, hopefully a real key rotation mechanism :) | 15:58 |
gtema | oki, thks | 15:59 |
*** marios|ruck is now known as marios|out | 16:02 | |
*** jpena|out is now known as jpena | 16:22 | |
opendevreview | James E. Blair proposed zuul/zuul master: Execute builds via ZooKeeper https://review.opendev.org/c/zuul/zuul/+/788988 | 16:30 |
opendevreview | James E. Blair proposed zuul/zuul master: Move build request cleanup from executor to scheduler https://review.opendev.org/c/zuul/zuul/+/794687 | 16:30 |
opendevreview | James E. Blair proposed zuul/zuul master: Handle errors in the executor main loop https://review.opendev.org/c/zuul/zuul/+/796583 | 16:30 |
*** jpena is now known as jpena|off | 16:48 | |
SpamapS | interesting, a bunch of tests failed with this: " OSError: [Errno 24] Too many open files" | 17:06 |
fungi | sounds like a personal problem ;) | 17:07 |
clarkb | open files (-n) 1024 is my local ulimit value and I was able to run the zuul test suite as recently as a week ago | 17:07 |
fungi | but yeah, maybe low open file limit? or maybe some runaway test opened waaaay too many files | 17:07 |
SpamapS | I wonder if it's something with the slowness of the system | 17:07 |
fungi | that could also maybe cause a pileup | 17:08 |
clarkb | my ulimit isn't super high which makes me suspect some sort of runaway. I guess slow tests hanging around and piling up fds could do it | 17:08 |
SpamapS | Ran: 1129 tests in 7014.8591 sec. | 17:08 |
fungi | that's... a lot of seconds | 17:08 |
clarkb | locally the tests run in about 2400 seconds | 17:08 |
SpamapS | - Failed: 586 | 17:08 |
SpamapS | All that I've seen so far failed with too many open files in the kazoo code | 17:08 |
clarkb | SpamapS: are you using the tools/test-setup-docker.sh script to set up zookeeper and friends? | 17:09 |
SpamapS | yes | 17:09 |
SpamapS | It's possible though that this laptop just isn't beefy enough to run zk | 17:09 |
clarkb | ok, that mounts zookeepers data dir on tmpfs which should help a bit with speed | 17:09 |
SpamapS | my IO is pretty fast as it's an SSD. It's the CPU and memory bus that are just incredibly slow | 17:10 |
SpamapS | so far I re-ran 10 of the failing tests and they all passed | 17:12 |
SpamapS | so yeah, something that closes sockets just got behind | 17:12 |
fungi | maybe try reducing the parallelism? | 17:12 |
fungi | it's probably guessing high based on cpu core count | 17:13 |
clarkb | --concurrency=`python -c "import multiprocessing; print(int(multiprocessing.cpu_count()/2))"` | 17:13 |
SpamapS | clint@clint-Inspiron-3185:~/src/zuul-ci/zuul$ python -c "import multiprocessing; print(int(multiprocessing.cpu_count()/2))" | 17:14 |
SpamapS | 1 | 17:14 |
SpamapS | So it's not that. :) | 17:14 |
fungi | yeah, don't think you can go much lower without having to deal with infinities ;) | 17:14 |
SpamapS | No I really think that it's just the thread or task that closes sockets just not getting CPU time. | 17:15 |
SpamapS | Not a real concern at all. Nobody should be using this terrible machine to run the entire suite. :) | 17:15 |
clarkb | between mysql, zookeeper and the test suite 2 total cpus may not be sufficient | 17:15 |
avass[m] | yarn usually likes opening a lot of files, could it be that somehow? | 17:15 |
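For completeness, the per-process descriptor limit can also be inspected and raised from Python, the counterpart of `ulimit -n`; the 4096 target below mirrors the value tried later in the log and is otherwise an assumption, bounded by the hard limit.

```python
# Inspect and optionally raise the soft open-files limit for this process,
# the Python counterpart of `ulimit -n`. The hard limit caps how far it can go.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print('soft=%s hard=%s' % (soft, hard))
target = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
if soft < target:
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
```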
opendevreview | James E. Blair proposed zuul/zuul master: Shard BuildRequest parameters https://review.opendev.org/c/zuul/zuul/+/797149 | 18:37 |
opendevreview | Merged zuul/zuul-jobs master: Add role to enable FIPS on a node https://review.opendev.org/c/zuul/zuul-jobs/+/788778 | 18:50 |
opendevreview | James E. Blair proposed zuul/zuul master: Shard BuildRequest parameters https://review.opendev.org/c/zuul/zuul/+/797149 | 20:05 |
opendevreview | James E. Blair proposed zuul/zuul master: Compress sharded ZK data https://review.opendev.org/c/zuul/zuul/+/797156 | 20:14 |
corvus | masterpe: element tells me you are a mod (power level 50) in the matrix room -- do you know how that happened? | 20:50 |
masterpe[m] | no | 21:00 |
masterpe[m] | Can it be that I'm using my own home server? | 21:01 |
gtema | Corvus, I guess this happened when people joined room before they became managed by us. I got also mod on ansible-sig room (pretty much joined first) | 21:02 |
gtema | Perhaps created IRC room through matrix | 21:03 |
masterpe[m] | I think I joined this oftc room first yesterday. | 21:03 |
gtema | Hm, then it's something different | 21:03 |
corvus | several other people are on their own homeserver, and it's been around for quite a while... | 21:03 |
corvus | i'll see if i can ask in #matrix-irc | 21:04 |
corvus | masterpe: don't worry about removing perms right now; let's leave it alone to see if we can figure out how it happened | 21:04 |
corvus | okay, super weird idea: one of the access levels supported by oftc's chanserv access list is "MASTER"... and... well, that's a substring of masterpe. that makes no sense to me, but it's the only connection i see. | 21:09 |
masterpe[m] | Then I must have moderator rights on multiple channels. | 21:10 |
corvus | masterpe: but only OFTC portal rooms -- are you in any others? if not, consider joining #_oftc_#opendev:matrix.org and we can see what happens there | 21:11 |
corvus | (this would not apply to freenode or libera.chat) | 21:11 |
corvus | masterpe: i note in an irc client, you are not a chanop, so it's just a matrix power level issue | 21:21 |
*** ChanServ sets mode: +o corvus | 21:21 | |
*** ChanServ sets mode: -o corvus | 21:22 | |
corvus | apparently the main bridge developer is on holiday, so i'm not expecting an answer any time soon :) | 21:30 |
corvus | i'm assuming we could fix the problem by toggling the mod status like i just did for myself, but i'm still curious how it happened. so i'll leave it for a bit in case we do get a response | 21:30 |
corvus | masterpe: thanks for the info and the #opendev test :) | 21:30 |
clarkb | corvus: left a few comments on https://review.opendev.org/c/zuul/nodepool/+/781926 one of which I felt was worth a -1 | 21:31 |
masterpe[m] | corvus: You're welcome. | 21:32 |
opendevreview | James E. Blair proposed zuul/nodepool master: Azure: update documentation https://review.opendev.org/c/zuul/nodepool/+/781926 | 21:40 |
corvus | clarkb: thanks, fixed! | 21:40 |
opendevreview | James E. Blair proposed zuul/nodepool master: Rename pip4/6 to public_ipv4 https://review.opendev.org/c/zuul/nodepool/+/793508 | 21:41 |
opendevreview | Merged zuul/nodepool master: Azure: don't require full subnet id https://review.opendev.org/c/zuul/nodepool/+/780402 | 21:47 |
opendevreview | Merged zuul/nodepool master: Azure: add quota support https://review.opendev.org/c/zuul/nodepool/+/780439 | 21:47 |
corvus | i just ran the test suite repeatedly on several changes in the build-requests-in-zk stack; "sum of execute time" stays around 9700 seconds, so that's probably not going to be a huge performance impact. | 21:58 |
opendevreview | James E. Blair proposed zuul/zuul master: Add item UUID to MQTT reporter https://review.opendev.org/c/zuul/zuul/+/797165 | 22:09 |
SpamapS | With ulimit set to 4096, I don't get any too many files errors, but I do get a lot of other tests breaking that pass when run individually. I just think my little laptop is too slow to run the whole suite. :-P | 23:06 |
SpamapS | There may be races that only show up on really slow CPUs. ;) | 23:07 |
opendevreview | Merged zuul/nodepool master: Azure: implement launch retries https://review.opendev.org/c/zuul/nodepool/+/780682 | 23:10 |
SpamapS | So, question: what's this ansible-core vs. ansible community thing? | 23:12 |
SpamapS | And... what version of which does Zuul care about? | 23:12 |
SpamapS | wow what a mess of a thing | 23:13 |
SpamapS | Ok I think I get it... ansible-core is the engine and ansible community is all the "batteries included" modules. | 23:14 |
SpamapS | So looks like Zuul would care about community. | 23:14 |
corvus | spamaps: yeah, i think so -- at least, assuming community is more or less the set of modules that we took for granted before | 23:29 |
SpamapS | The thing we're installing is community. | 23:29 |
SpamapS | And I think that's the simpler if not fatter choice. | 23:30 |
corvus | cool :) | 23:30 |
SpamapS | About to test 2.10 ;) | 23:30 |
corvus | maybe when we grow native support for galaxy or whatever, we could pare down what zuul installs, then jobs can request more. then again, maybe everyone wants community and we just keep doing that. :) | 23:30 |
SpamapS | Yeah it could result in a leaner Zuul. | 23:31 |
SpamapS | Though I don't know if anybody is complaining about the installed size of zuul itself. | 23:32 |