*** bhagyashris is now known as bhagyashris|rover | 04:08 | |
*** ysandeep|away is now known as ysandeep | 05:51 | |
*** anbanerj|ruck is now known as frenzy_friday | 07:08 | |
*** jpena|off is now known as jpena | 07:30 | |
*** rpittau|afk is now known as rpittau | 07:30 | |
opendevreview | Merged openstack/project-config master: grafana: further path fixes https://review.opendev.org/c/openstack/project-config/+/810338 | 07:47 |
*** ysandeep is now known as ysandeep|lunch | 09:24 | |
*** ysandeep|lunch is now known as ysandeep | 10:24 | |
*** jcapitao is now known as jcapitao_lunch | 10:42 | |
*** rlandy is now known as rlandy|rover | 10:49 | |
*** ysandeep is now known as ysandeep|afk | 11:19 | |
*** ysandeep|afk is now known as ysandeep | 11:29 | |
*** jpena is now known as jpena|lunch | 11:31 | |
*** jcapitao_lunch is now known as jcapitao | 11:50 | |
*** jpena|lunch is now known as jpena | 12:33 | |
dpawlik | hey fungi, should I push another patch set that changes the CI job nodeset to something newer for change https://review.opendev.org/c/opendev/puppet-log_processor/+/809424 ? | 12:52 |
fungi | dpawlik: that's part of why we want to stop running it. there's no compatible version of puppet for newer ubuntu, so you'd also have to redo all the configuration management | 13:48 |
dpawlik | fungi: eh, that's bad | 15:04 |
fungi | dpawlik: yes, it's a major factor in why we don't want to continue to maintain all of it, it's stuck on an eol ubuntu version because we'd need to redo the configuration management for a newer version of puppet which isn't backward-compatible with the versions which were available for xenial | 15:09 |
fungi | and also we had decided well before then to stop using puppet for new systems and switch to ansible (plus docker where appropriate) | 15:09 |
dpawlik | good move. I think for many it was a big issue to move to the new puppet release, especially when it all lands in production; sure, it gets tested in a beta zone, but sometimes you can't predict every scenario in testing | 15:14 |
dpawlik | fungi: I created a simple script for pushing logs to another gearman service: https://softwarefactory-project.io/r/c/software-factory/sf-infra/+/22669 . Maybe it can be optimized further... but so far this version works in my local lab | 15:15 |
dpawlik | fungi: there are still some FIXME questions to fill in, so the final release will come soon ;) | 15:16 |
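For context, a client feeding a gearman-based log processor essentially just connects to the gearman server and submits one job per log file. Below is a minimal sketch using the pure-Python `gear` library; the server address, job name (`push-log`), and payload fields are illustrative assumptions, not taken from dpawlik's actual sf-infra change.

```python
# Minimal sketch: submit one log-processing job to a gearman server with
# the "gear" library. Host, port, job name, and payload fields are
# assumptions for illustration only.
import json

import gear

client = gear.Client()
client.addServer('gearman.example.org', 4730)  # hypothetical gearman host
client.waitForServer()

# Describe one log file to push; these field names are assumptions.
payload = {
    'source_url': 'https://logs.example.org/build/1234/job-output.txt',
    'retry': False,
}

job = gear.Job(b'push-log', json.dumps(payload).encode('utf8'))
client.submitJob(job, background=True)  # fire-and-forget submission
client.shutdown()
```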
opendevreview | Merged openstack/project-config master: Retire puppet-freezer - Step 1: End project Gating https://review.opendev.org/c/openstack/project-config/+/808072 | 15:50 |
*** rpittau is now known as rpittau|afk | 16:02 | |
*** ysandeep is now known as ysandeep|dinner | 16:06 | |
*** rlandy|rover is now known as rlandy|ruck | 16:24 | |
opendevreview | Merged openstack/project-config master: Retire puppet-freezer - Step 3: Remove Project https://review.opendev.org/c/openstack/project-config/+/808675 | 16:31 |
*** jpena is now known as jpena|off | 16:33 | |
*** ysandeep|dinner is now known as ysandeep | 17:09 | |
gouthamr | hola, https://ethercalc.openstack.org/ times out often - i didn't see any maintenance messages | 18:14 |
gouthamr | wondering if it's just seeing a lot of traffic and folding? | 18:14 |
gouthamr | cc: vhari | 18:15 |
fungi | gouthamr: it has a longstanding bug not fixed upstream afaik which allows a user to cause it to crash. our configuration management seems to start it again as a side effect of how it's deployed, but i can look at it now | 18:26 |
gouthamr | oh, i didn't know | 18:26 |
gouthamr | thanks fungi! | 18:26 |
fungi | though looks like it's been running steadily since friday | 18:26 |
fungi | it could be there's other things going on as well | 18:26 |
fungi | digging deeper now | 18:27 |
fungi | resource utilization graphs don't show anything particularly out of sorts | 18:28 |
gouthamr | is it loading for you? | 18:35 |
fungi | no, i'm trying to figure out where the node process for it might be logging before i decide to restart it. the process seems like it might be hung though, as apache is logging that it's getting a socket timeout communicating with the node server | 18:36 |
fungi | ahh, looks like the systemd journal is catching the stdout/stderr from it | 18:39 |
fungi | but nothing useful is being emitted | 18:40 |
fungi | #status log Restarted the ethercalc service on ethercalc.openstack.org as it seemed to be hung from the Apache proxy's perspective (though was not logging anything useful to journald) | 18:43 |
opendevstatus | fungi: finished logging | 18:43 |
fungi | gouthamr: vhari: see if that's any better? | 18:43 |
gouthamr | fungi: Yes! It loads fine for me :) thanks for kicking it back up | 18:44 |
fungi | thanks for letting us know. i honestly have no idea what it was doing, the process was still running and still logging that it was getting connections, but the apache proxy which front-ends it was complaining about socket timeouts (all over localhost) | 18:46 |
fungi | also node is a bit of a black box to me | 18:46 |
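To make the symptom fungi describes concrete (the node process still accepts TCP connections on localhost, yet the Apache proxy times out waiting for a response), a quick probe like the one below distinguishes "connection refused" from "connected but hung". The backend port (8000) is an assumption about where ethercalc listens, not something confirmed in this log.

```python
# Probe a localhost backend: connect, send a bare HTTP request, and see
# whether anything comes back before the timeout. A hung-but-listening
# process accepts the connection and then never answers.
import socket

def probe(host='127.0.0.1', port=8000, timeout=5):
    # Port 8000 is an assumed ethercalc listen port, for illustration only.
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b'GET / HTTP/1.0\r\nHost: localhost\r\n\r\n')
        sock.settimeout(timeout)
        try:
            data = sock.recv(1024)
        except socket.timeout:
            return 'connected, but no response before timeout (looks hung)'
        return 'backend responded: %r' % data[:60]

if __name__ == '__main__':
    print(probe())
```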
gouthamr | fungi: ack, and this may be a red herring - but it crashed a couple of times in the past two weeks when I shared a link during the manila meeting | 18:51 |
gouthamr | this one: https://ethercalc.openstack.org/kxz9jz7tuopk | 18:52 |
gouthamr | possibly ~10 people accessing it at once - so wondering if it's something the proxy couldn't handle | 18:53 |
fungi | it's at least helpful feedback, though that seems like a small enough number it shouldn't have been a problem | 18:59 |
fungi | might be something about a particular browser/plugin combo one of those users has which triggers a bug though | 18:59 |
*** ysandeep is now known as ysandeep|out | 19:33 | |
*** geguileo is now known as Guest815 | 23:02 | |