corvus | clarkb: remote: https://review.opendev.org/c/zuul/zuul/+/956058 Launcher: fix error when retrying multinode request [NEW] | 00:18 |
---|---|---|
corvus | i think that's what happened with those ready nodes. easiest way to un-wedge is actually to dequeue the changes that correspond to those requests. i'll do that now. | 00:18 |
corvus | i dequeued two changes from periodic-weekly and there are no nodes in bhs1 now | 00:23 |
clarkb | thanks +2 from me | 00:36 |
clarkb | I guess if that provider name didn't match and any one failed to enter the block we'd process them all. makes sense given the commit message that this bug would cause problems | 00:37 |
corvus | yep, https://paste.opendev.org/show/bfzxqcmJZY0uH5OqWQwP/ is the smoking gun | 00:50 |
corvus | 3 of those 4 were ready nodes at the time | 00:50 |
*** sfinucan is now known as stephenfin | 10:06 |
*** ykarel_ is now known as ykarel | 10:15 | |
*** darmach4 is now known as darmach | 12:43 | |
corvus | i restarted the launchers with the new bugfix for retries | 14:53 |
fungi | thanks corvus! | 14:56 |
clarkb | looks like that reset the state of all the nodes in raxflex sjc3 | 15:22 |
clarkb | infra-root: I haven't yet but plan to add that ibm ip that fails to ssh to gerrit to the disallowed list for review03 after some breakfast. Does anyone have a system-config change we can land afterwards to trigger the infra-prod-base job sooner than the daily run later today? | 15:23 |
clarkb | I'm hoping to confirm that applied correctly well in advance of the 0200 daily start | 15:23 |
fungi | clarkb: i have https://review.opendev.org/955386 that could use approving, but not sure if it will trigger the job you want since it's only changing files under doc/source | 15:34 |
clarkb | fungi: I suspect it won't but I'll approve it once private vars are updated and we'll find out | 15:35 |
fungi | wfm | 15:35 |
corvus | are we interested in merging this presumed fix to puppet-storyboard? https://review.opendev.org/952947 | 15:37 |
corvus | i wrote that when i cleaned up the corresponding errors in the system-config puppet job | 15:37 |
frickler | clarkb: https://review.opendev.org/c/opendev/system-config/+/954547 looks like it should work? | 15:37 |
clarkb | corvus: I think system config uses its own list but we can land that one for completeness | 15:38 |
corvus | yes it does | 15:39 |
clarkb | frickler: yup that one should do it, let me finish my toast then I can get these balls rolling | 15:39 |
clarkb | infra-root ok private vars on bridge are updated. Did anyone want to double check that before I approve some of these changes? | 15:47 |
fungi | looking | 15:50 |
clarkb | if you want to double check the ip you can look at the gerrit error log | 15:50 |
clarkb | its full of messages complaining about failed ssh connections from the ip | 15:50 |
fungi | "Unable to negotiate key exchange" | 15:53 |
fungi | 27k lines match that address in the current error_log | 15:54 |
clarkb | ya I believe it is a very old jenkins with java ssh client that can't do key negotiations | 15:54 |
clarkb | unfortunately it occurs too early in the auth process to record a username | 15:54 |
fungi | and yeah, i was expecting to see it in sshd_log but i guess it doesn't make it that far | 15:54 |
clarkb | so we'll block the ip instead | 15:54 |
fungi | lgtm | 15:54 |
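A rough sketch of the check and the resulting firewall behavior being discussed; the address and log path below are placeholders, and in practice the address is added to the private host vars consumed by the iptables role rather than applied by hand:

```bash
# Count how many error_log lines mention the offending client
# (address and log path are placeholders, not the real values)
grep -c '192.0.2.10' /home/gerrit2/review_site/logs/error_log

# The net effect the iptables role change should produce on review03:
iptables -I INPUT -s 192.0.2.10 -j DROP
```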
fungi | also interesting that we don't have any group_vars for gerrit, just the individual review servers over time | 15:55 |
fungi | we probably don't need the files for review01/02 any longer | 15:56 |
fungi | nor the group_vars for gerrit-dev | 15:56 |
clarkb | I think the reason for that is the way we deploy new servers requires things to happen in a bit of an order/lockstep between instances? | 15:56 |
clarkb | but it may also just be cargo culting | 15:56 |
fungi | probably most of it could go in a gerrit group_vars file, and just the minimal things that need to differ in host-specific files | 15:57 |
clarkb | ya that is likely so | 15:58 |
clarkb | corvus: re the rabbitmq change you note the new home is https://github.com/voxpupuli/puppet-rabbitmq but the change effectively does s/puppetlabs/puppet/ in the module name. I guess those are equivalent on the module index? Either way I think its largely a noop for us beyond record keeping so I +2'd | 15:59 |
opendevreview | James E. Blair proposed opendev/system-config master: Run python container mirror job https://review.opendev.org/c/opendev/system-config/+/956095 | 16:00 |
clarkb | oops I should've caught that in my review | 16:02 |
corvus | clarkb: yes, that's my understanding as someone who doesn't understand :) | 16:02 |
opendevreview | Merged opendev/system-config master: Drop devstack-gate documentation https://review.opendev.org/c/opendev/system-config/+/955386 | 16:06 |
clarkb | that change has only enqueued jobs for bridge | 16:07 |
clarkb | fungi: https://review.opendev.org/c/opendev/system-config/+/954547 is the next change I'll attempt to land to deploy the iptables update on review03. Do you want to review it before I approve it? | 16:08 |
fungi | +2 | 16:10 |
opendevreview | Merged opendev/system-config master: Run python container mirror job https://review.opendev.org/c/opendev/system-config/+/956095 | 16:11 |
clarkb | thanks I've approved it | 16:11 |
opendevreview | Merged opendev/system-config master: Update Ansible config to set ssh ServerAliveInterval https://review.opendev.org/c/opendev/system-config/+/954547 | 16:21 |
clarkb | ha that only triggered the bootstrap bridge job I think because it modifies the ansible config on bridge | 16:21 |
opendevreview | Clark Boylan proposed opendev/system-config master: Update base roles README https://review.opendev.org/c/opendev/system-config/+/956097 | 16:25 |
clarkb | ok ^ is my solution. Is it a hack? yes but it should get the job done | 16:25 |
corvus | clarkb: you trying to run infra-prod-base? | 16:45 |
clarkb | corvus: yes, that playbook runs the iptables role against all hosts so should update iptables in review03 with the ip address block rule | 16:46 |
corvus | clarkb: i was thinking of enqueing a periodic run to get the python image. that runs that job too. | 16:46 |
corvus | if i go ahead and do that now, that may get you your job sooner than the gate would | 16:47 |
clarkb | wfm | 16:47 |
clarkb | I can abandon the other change if that ends up getting it done | 16:47 |
corvus | ok. and yeah, i wouldn't abandon it until it's done. :) | 16:48 |
corvus | clarkb: enqueued now; base is just waiting on bootstrap | 16:48 |
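For reference, manually enqueueing a periodic buildset looks roughly like the following zuul-client invocation; the tenant, project, and ref here are illustrative guesses rather than the exact command used, and it needs an auth token or a configured zuul-client conf:

```bash
# Re-enqueue a periodic pipeline run for a project (values are illustrative)
zuul-client --zuul-url https://zuul.opendev.org \
    enqueue-ref --tenant openstack --pipeline periodic \
    --project opendev/system-config --ref refs/heads/master
```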
clarkb | looks like the two LE cert renewal warning emails we got may be due to the LE job not running since Sunday and these two certs needed to renew either monday or tuesday utc? | 16:49 |
clarkb | so its possible that enqueing the periodic buildset will also fix those a bit earlier assuming there isn't some larger problem preventing LE from running | 16:50 |
clarkb | ok iptables updated on review03 but the base job failed because the mirror in sjc3 is unreachable and because the vexxhost backup server is unreachable | 16:57 |
corvus | are either of those problems known? | 16:58 |
clarkb | corvus: the vexxhost backup server is known | 16:58 |
clarkb | I don't think we made the connection to it breaking the base job preventing other jobs from running yesterday, but the main issue is known | 16:59 |
clarkb | I'll look at the sjc3 mirror now | 16:59 |
clarkb | just going to double check review03 connectivity looks correct after its update first | 16:59 |
corvus | anything i can do? | 16:59 |
clarkb | corvus: I think I'm good for now | 17:00 |
clarkb | I can reach review03 via https, ssh port 22 and port 29418 and the last occurrence of the ibm ip filling our error_log is from [2025-07-29T16:56:22.977Z] | 17:00 |
clarkb | so I think review03 is looking good to me. Now to check on our sjc3 mirror | 17:01 |
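The verification amounts to a few quick port checks, assuming review.opendev.org fronts review03:

```bash
nc -zv review.opendev.org 22                      # admin ssh
nc -zv review.opendev.org 29418                   # gerrit ssh
curl -sI https://review.opendev.org/ | head -n1   # web/https
```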
corvus | according to scrollback, we were thinking of just booting the backup server today. i'll go ahead and do that. | 17:01 |
clarkb | ack | 17:01 |
clarkb | mirror02.sjc3.raxflex.opendev.org is in an ERROR state but server show doesn't give any indication of why | 17:02 |
clarkb | and console log show fails with an error indicating there is no console log to show | 17:03 |
clarkb | dan_with: ^ is this something you might be interested in? The server does use a cinder volume but that is marked in-use and doesn't appear to be part of any problem | 17:04 |
clarkb | heh after trying to run the console log show there is a fault now. The one indicating there is no console log to show. So something went wrong and nova didn't record the specifics but marked it as ERROR anyway | 17:05 |
clarkb | then when I tried to perform an operation against it it updated the server with the details for that specific fault | 17:05 |
clarkb | I'll go ahead and attempt to stop and start this mirror | 17:05 |
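Roughly the sequence being run against the API here, assuming the matching --os-cloud/region is selected:

```bash
openstack server show mirror02.sjc3.raxflex.opendev.org       # status ERROR, no fault recorded initially
openstack console log show mirror02.sjc3.raxflex.opendev.org  # errors out: no console log to show

# stop/start rather than reboot, since the instance is in ERROR state
openstack server stop mirror02.sjc3.raxflex.opendev.org
openstack server start mirror02.sjc3.raxflex.opendev.org
```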
clarkb | it pings again but pam says sshd isn't ready for me yet | 17:07 |
clarkb | maybe it is fscking? | 17:07 |
corvus | any better luck with console show? | 17:08 |
clarkb | yup that shows a login prompt so let me try ssh again | 17:08 |
clarkb | ok its up I think its happy again. I'll test the web frontend next | 17:08 |
clarkb | https://mirror.sjc3.raxflex.opendev.org/ responds and https://mirror.sjc3.raxflex.opendev.org/ubuntu/ has content implying afs is working too | 17:09 |
clarkb | so I think the mirror is happy again though we don't know why it crashed yet | 17:09 |
corvus | on the subject of backup02.ca-ymq-1.vexxhost.opendev.org -- server show said it was running; console log show indicated no problems. but i rebooted anyway. it's up again now, with the same behavior as before. i'm now suspecting it was up the whole time, and we're having connectivity problems. | 17:10 |
corvus | ssh -4 backup02.ca-ymq-1.vexxhost.opendev.org | 17:10 |
corvus | ssh: connect to host backup02.ca-ymq-1.vexxhost.opendev.org port 22: No route to host | 17:10 |
clarkb | 2025-07-27T03:35:01.776920+00:00 mirror02 CRON[243622]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1) | 17:10 |
clarkb | this is the last message in syslog on the mirror prior to me booting it back up again | 17:11 |
clarkb | so I think it did properly crash rather than just have ssh connectivity issues | 17:11 |
clarkb | dan_with: ^ that gives us a time window I think | 17:11 |
clarkb | corvus: hrm | 17:11 |
corvus | also gotten this: ssh: connect to host backup02.ca-ymq-1.vexxhost.opendev.org port 22: Connection timed out | 17:12 |
clarkb | I tested ping from review03 since it is in the same cloud region to rule out edge router stuff and it cannot ping via ipv4 or ipv6 either | 17:12 |
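The connectivity checks amount to something like this, run both from outside and from review03 in the same region:

```bash
ssh -4 backup02.ca-ymq-1.vexxhost.opendev.org     # "No route to host" / connection timed out
ping -4 -c 3 backup02.ca-ymq-1.vexxhost.opendev.org
ping -6 -c 3 backup02.ca-ymq-1.vexxhost.opendev.org
nc -zv -w 5 199.204.45.196 22
```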
clarkb | guilhermesp ricolin may need to look into that one? We don't set up security group rules differently for review03 and the backup server so I don't think its anything on our config side | 17:13 |
fungi | clarkb: fwiw, a personal server of mine in flex sjc3 has spontaneously rebooted several times in recent weeks, not sure what may be going on there, maybe unannounced maintenance of some sort | 17:13 |
clarkb | fungi: in this case the server didn't reboot it was just down. But ya maybe related | 17:14 |
corvus | guilhermesp: ricolin 5665c088-8ce4-410d-9edd-53633c0d0b76 is the server that we can't reach at 199.204.45.196 | 17:14 |
corvus | clarkb: do you think if we put the mirror in emergency the base job will be happy? | 17:14 |
clarkb | corvus: yes | 17:14 |
fungi | corvus: you were able to get a console log show from backup02.ca-ymq-1.vexxhost? when i was trying yesterday i kept getting back connection timeout errors from the backend | 17:14 |
corvus | fungi: yes, no problem there; fast response, and no error indications that i saw | 17:15 |
fungi | huh, odd | 17:15 |
clarkb | fungi: corvus I wonder if that implies the vm was down for a time (when fungi observed it) then came back up but when it came back up its network ports were not properly configured | 17:16 |
clarkb | and that not properly configured network port state persists after a subsequent reboot (the one corvus triggered) | 17:16 |
corvus | clarkb: good theory; wish we had timestamps | 17:16 |
fungi | well, nova was reporting it in an "active" state when i looked, just wouldn't return a console log | 17:16 |
corvus | #status log added backup02.ca-ymq-1.vexxhost.opendev.org to emergency file since server has lost connectivity | 17:17 |
opendevstatus | corvus: finished logging | 17:17 |
corvus | do we think a power off and on would be any different than reboot for something like that? | 17:18 |
clarkb | corvus: I doubt it. I suspect the underlying libvirt stuff is very similar and relies on acpi | 17:18 |
corvus | (i guess that's "server stop" and "server start") | 17:18 |
clarkb | I suspect we need something different for nova and neutron to reevaluate networking details if that is the issue | 17:18 |
fungi | yeah, if it's a misbehaving hypervisor host, probably the only out would be forcing a migration | 17:19 |
fungi | i guess a "resize" might have that effect | 17:19 |
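For comparison, the API operations under discussion; the flavor name is a placeholder and a resize would need capacity on the cloud side plus a confirm/revert afterwards:

```bash
openstack server reboot --hard backup02.ca-ymq-1.vexxhost.opendev.org   # stays on the same hypervisor

# full power cycle ("server stop" and "server start")
openstack server stop backup02.ca-ymq-1.vexxhost.opendev.org
openstack server start backup02.ca-ymq-1.vexxhost.opendev.org

# a resize typically forces a migration to another hypervisor
# (the resize then needs to be confirmed or reverted once it completes)
openstack server resize --flavor <bigger-flavor> backup02.ca-ymq-1.vexxhost.opendev.org
```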
guilhermesp | hi guys sorry, a lot going on. tl;dr of the issue and region? | 17:20 |
clarkb | guilhermesp: in ca-ymq-1 backup02.ca-ymq-1.vexxhost.opendev.org is not reachable via network | 17:20 |
fungi | server id 5665c088-8ce4-410d-9edd-53633c0d0b76 | 17:20 |
clarkb | guilhermesp: yesterday we couldn't get the console log for it either. At that time nova considered it ACTIVE | 17:20 |
guilhermesp | for some time or just started today? | 17:20 |
guilhermesp | lemme take a look at this guy | 17:20 |
fungi | over 24 hours at least | 17:20 |
clarkb | then today we tried a manual reboot to see if that would change the situation and it did not help | 17:21 |
clarkb | (reboot via nova api) | 17:21 |
guilhermesp | ack let me see whats going on | 17:21 |
clarkb | thank you! | 17:21 |
clarkb | frickler: re mirroring trixie I think we can also clear out xenial mirrors and arm64 bionic mirrors due to not having those images in niz now? | 17:23 |
fungi | yes | 17:24 |
frickler | +1 | 17:24 |
guilhermesp | i think i found a root cause here clarkb fungi gimme a moment to get it sorted | 17:28 |
clarkb | guilhermesp: awesome let us know if we can do anything to help | 17:28 |
opendevreview | Merged opendev/system-config master: Update base roles README https://review.opendev.org/c/opendev/system-config/+/956097 | 17:30 |
clarkb | fungi: fwiw ^ that ended up not being needed I just didn't end up getting around to marking it abandoned yet. But it shouldn't hurt anything and is a good check that the base job can succeed now | 17:31 |
fungi | sure | 17:34 |
clarkb | yup it succeeded | 17:38 |
guilhermesp | clarkb: fungi its responding to me now | 17:46 |
guilhermesp | could you check? | 17:46 |
clarkb | guilhermesp: I am able to ssh in now thank you | 17:47 |
fungi | yeah, i just logged in too | 17:47 |
fungi | lgtm | 17:47 |
guilhermesp | no worries, apologies for missing the other pings here | 17:47 |
clarkb | looks like the mount point is there properly | 17:47 |
clarkb | fungi: can you check the status of your prune that was running over the weekend? | 17:47 |
fungi | working on it | 17:47 |
clarkb | guilhermesp: do you know if there was anything we could do to debug or resolve it from our side? just wondering if we could have addressed it ourselves or identified the issue ourselves | 17:48 |
fungi | i suspect the prune had already completed since disk utilization on /opt/backups-202010 is down to 61%, but i'll start another one shortly just to be sure | 17:48 |
corvus | #status log removed backup02.ca-ymq-1.vexxhost.opendev.org from emergency file | 17:48 |
guilhermesp | no clarkb -- it was an issue on openvswitch level | 17:48 |
opendevstatus | corvus: finished logging | 17:48 |
guilhermesp | something i can suggest next time is maybe shooting an urgent ticket through vexxhost`s portal so someone can be paged and look into it right away | 17:49 |
clarkb | guilhermesp: ok that is good to know. Thanks | 17:49 |
clarkb | fungi: I think the command we use logs to disk too? | 17:49 |
fungi | i ran the prune again just now and it ended within ~20 seconds | 17:50 |
clarkb | any obvious errors? if not I think we're good | 17:50 |
fungi | /opt/backups/prune-2025-07-27-18-43-49.log ends with... | 17:50 |
fungi | terminating with success status, rc 0 | 17:51 |
fungi | | Sun Jul 27 19:30:13 UTC 2025 done! | 17:51 |
clarkb | perfect | 17:51 |
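What the check looks like on the backup server, using the paths quoted above:

```bash
df -h /opt/backups-202010                             # utilization back down to ~61%
tail -n 5 /opt/backups/prune-2025-07-27-18-43-49.log  # "...terminating with success status, rc 0"
```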
opendevreview | Merged zuul/zuul-jobs master: Remove xenial test jobs https://review.opendev.org/c/zuul/zuul-jobs/+/955903 | 18:42 |
clarkb | guilhermesp: if you are still around we also replaced gitea-lb02 with gitea-lb03 due to problems with ipv6 addresses sticking on the node in the sjc1 region. We kept gitea-lb02 around in case it was something you wanted to debug | 19:07 |
clarkb | guilhermesp: but gitea-lb03 seems to be working well now so we can clean up the old VM if not | 19:07 |
clarkb | corvus: I took a quick look at raxflex sjc3 and I think there is only one leaked fip. The rest of the fips appear to belong to actual nodes. Those nodes have the new non sequential ids in names so I don't think they are leftovers from nodepool. Neither grafana nor https://zuul.opendev.org/t/openstack/nodes seem to be aware of those nodes so I think the nodes may have leaked? | 19:48 |
clarkb | it does occur to me that we can enable the floating ip leak cleanup periodic task within zuul launcher now that nodepool is shut down so I'll figure out how to do that and push a change for that | 19:49 |
clarkb | but I suspect there is some leak/bug affecting sjc3 nodes otherwise | 19:49 |
fungi | could be the api is reporting them back as deleted and then they change back to active, or maybe they go away completely and then somehow return from the grave later | 19:50 |
fungi | like if there was a nova database rollback or something | 19:50 |
clarkb | could be. They all show as ACTIVE now | 19:51 |
opendevreview | Clark Boylan proposed opendev/zuul-providers master: Enable floating IP cleanup in raxflex regions https://review.opendev.org/c/opendev/zuul-providers/+/956110 | 19:54 |
clarkb | 335aa3a3-8c58-4a19-baac-e88355a5763e is the leaked fip. I won't manually delete it in order to give ^ something to work on | 19:55 |
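If the periodic cleanup never picks it up, the manual fallback is simple (UUID from the line above):

```bash
openstack floating ip list     # look for addresses with no fixed IP / attached port
openstack floating ip show 335aa3a3-8c58-4a19-baac-e88355a5763e
openstack floating ip delete 335aa3a3-8c58-4a19-baac-e88355a5763e
```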
corvus | clarkb: thanks... i'm looking into the possibly-leaked nodes | 19:57 |
clarkb | corvus: any concern with landing https://review.opendev.org/c/opendev/zuul-providers/+/956110 at this point? Not sure if we think that might impact your debugging | 20:43 |
corvus | clarkb: nope, approved. i have a handle on the issue, just working on a solution | 20:45 |
opendevreview | Merged opendev/zuul-providers master: Enable floating IP cleanup in raxflex regions https://review.opendev.org/c/opendev/zuul-providers/+/956110 | 20:45 |
corvus | clarkb: (they aren't actually leaked; they are ready nodes, but they won't be used because their provider is the lowest priority because it has the highest quota usage. i'm working on changing that. later we will also need to make them more visible; right now unassigned ready nodes don't show up in the ui, but they should. so 2 probs.) | 20:47 |
clarkb | aha that explains why they aren't listed | 20:48 |
clarkb | however the grafana graphs didn't show them as useable either so maybe there is a bug there too? | 20:48 |
corvus | same basic bug; they aren't attached to any provider (because they can be used by multiple providers). so the real solution there is going to be a generic way to get ready nodes for providers that we can use in several places. | 20:50 |
corvus | (the stats are generated from providers) | 20:51 |
clarkb | got it | 20:51 |
opendevreview | Clark Boylan proposed opendev/system-config master: Stop mirroring Xenial mirror content https://review.opendev.org/c/opendev/system-config/+/956114 | 21:01 |
opendevreview | Clark Boylan proposed opendev/system-config master: Drop Bionic ARM64 mirror content https://review.opendev.org/c/opendev/system-config/+/956115 | 21:05 |
clarkb | ok pushed those changes so we don't forget. I'm not sure what manual intervention is necessary and whether that happens before or after landing those changes (I think after) | 21:06 |
clarkb | the leaked fip is still in place. Not sure how often that runs or if we need to restart the process to pick up changes like that? I'll keep an eye on it | 21:07 |
fungi | i think we have some documentation on the process for dropping old reprepro content | 21:07 |
clarkb | https://docs.opendev.org/opendev/system-config/latest/reprepro.html#removing-components looks like we do | 21:07 |
clarkb | and that confirms that landing the changes first is the correct step | 21:08 |
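Per the linked docs, the manual step after the config changes land is roughly the following, run on the mirror-update host under the relevant lock; the confdir and AFS volume name are placeholders:

```bash
# drop database entries (and files) for distributions no longer in the config
reprepro --confdir /etc/reprepro/<mirror> --delete clearvanished

# then publish the trimmed volume
vos release mirror.<name>
```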
fungi | looking at eavesdrop01, i think the reason we put /var/lib/limnoria on a cinder volume is that it's running on a 2gb flavor which only gets a 40gb rootfs and a 20gb ephemeral disk, but we've got 20gb of logs already | 21:48 |
clarkb | makes sense | 21:49 |
clarkb | That and the data is semi precious so not having it on an ephemeral disk is a good idea | 21:49 |
fungi | i'm double-checking that there's not a bunch of cruft data on there inflating the numbers | 21:50 |
fungi | 18gb in /var/lib/limnoria/opendev/logs/ChannelLogger/oftc and the largest directory under it is #openstack which is 1.3gb, so seems legit | 21:51 |
fungi | putting it in the ephemeral disk is just a no-go size wise unless we use a bigger flavor, and having it on the rootfs would be a liability if it filled up | 21:52 |
clarkb | ya I think we should continue to use the volume or a new volume and copy data over | 21:54 |
fungi | can try moving the existing volume in a quick cut-over (stop services on old server, detach there, attach to new side, proceed merging changes for everything), and if that fails early at step #2 it's easy to roll back and switch to the rsync plan, if it doesn't fail at step #2 then the rest should be smooth | 21:56 |
fungi | the basic steps to detaching cleanly are to make sure lsof reports nothing open on the device, umount it, deactivate the vg, then should be able to detach through the nova api | 21:58 |
fungi | hard umount obviously, not soft umount or vgchange won't be able to deactivate the group | 22:00 |
fungi | unless there are no open files, which of course would be the whole reason to do soft anyway | 22:01 |
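A sketch of the cut-over fungi describes; the volume group name and volume ID are placeholders, and eavesdrop02 is the replacement server proposed below:

```bash
# on eavesdrop01, with the bots stopped:
lsof +D /var/lib/limnoria        # must come back empty
umount /var/lib/limnoria
vgchange -an <vg-name>           # deactivate the volume group

# then move the volume via the nova API
openstack server remove volume eavesdrop01.opendev.org <volume-id>
openstack server add volume eavesdrop02.opendev.org <volume-id>
```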
fungi | clarkb: for 956114 there's a testinfra test looking for debian-docker-xenial | 22:09 |
clarkb | fungi: thanks I'll fix that | 22:10 |
fungi | testinfra/test_mirror-update.py::test_reprepro_configs | 22:10 |
opendevreview | Clark Boylan proposed opendev/system-config master: Stop mirroring Xenial mirror content https://review.opendev.org/c/opendev/system-config/+/956114 | 22:11 |
opendevreview | Clark Boylan proposed opendev/system-config master: Drop Bionic ARM64 mirror content https://review.opendev.org/c/opendev/system-config/+/956115 | 22:11 |
clarkb | reading the zuul launcher code we should cleanup floating IPs when listing instances which I would expect to happen fairly regularly. Its been over an hour but the fip is still there so maybe this doesn't work as expected? | 22:12 |
fungi | clarkb: any guess as to why all of these show "updated last month" even though most have been dead for years? https://opendev.org/openinfra | 22:21 |
fungi | if you drill down into them then they show the expected last updates | 22:21 |
corvus | clarkb: remote: https://review.opendev.org/c/zuul/zuul/+/956119 Improve handling of unassigned ready nodes [NEW] | 22:24 |
corvus | that should get us our sjc3 quota back sooner | 22:24 |
corvus | not sure why i said sooner | 22:24 |
fungi | sooner than never? ;) | 22:25 |
corvus | anyway, that should use the ready nodes, or -- once the entire system is at quota, it should get around to using them. that will probably happen when the periodic jobs run. | 22:25 |
corvus | i don't know that i'd want to restart with that change today, so, probably the periodic jobs will end up taking care of it | 22:26 |
clarkb | fungi: I think those timestamps are based on database records | 22:29 |
clarkb | fungi: so we did something (maybe when we rescanned branches /tags) to get the db records to update | 22:29 |
fungi | so maybe if the server is replaced they all get new timestamps | 22:29 |
clarkb | when we replace the server we copy the db over | 22:29 |
fungi | oh, or that | 22:29 |
clarkb | I think it has to do with gitea just being unsmart about db records and something side effected the rows and bumped up their timestamps | 22:30 |
fungi | yeah, makes sense | 22:31 |
opendevreview | Jeremy Stanley proposed opendev/zone-opendev.org master: Add eavesdrop02 records https://review.opendev.org/c/opendev/zone-opendev.org/+/956121 | 22:46 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Add eavesdrop02 to inventory https://review.opendev.org/c/opendev/system-config/+/956122 | 22:47 |
clarkb | corvus: in my head I keep thinking the cloud is the provider but really its the abstraction of the tenant config within zuul that maps onto a cloud | 22:57 |
clarkb | anyway the terminology here is something I'm going to have to rewire in my brain | 22:57 |
fungi | yeah, i have to keep reminding myself that while the ready nodes are associated with what i generally refer to as a provider, they're not in a provider structure in zuul | 22:59 |
corvus | closest thing to the real cloud is an endpoint | 23:00 |
corvus | (but what is a cloud anyway? and are we sure they're real?) | 23:00 |
fungi | i can see them when i look up toward space | 23:01 |
fungi | except today all i see is a blinding star, so maybe imaginary after all? | 23:01 |
corvus | that's just what the water balls in your face want your brain to think | 23:02 |
* fungi is a noisy sack of mostly water | 23:03 |
clarkb | corvus: ok I posted two questions to that change | 23:05 |
corvus | clarkb: thx replied | 23:12 |
clarkb | thanks that helps +2 from me | 23:13 |
clarkb | looks like unit tests will fail for that change though | 23:18 |
clarkb | maybe more fallout from the change in api output? Seems like that is a common side effect problem with other tests failing that I've run into in the past | 23:18 |
corvus | maybe i'll try it with a git add | 23:21 |
*** adamcarthur56 is now known as adamcarthur5 | 23:51 |