*** adamcarthur55 is now known as adamcarthur5 | 02:52 | |
opendevreview | Ghanshyam proposed opendev/irc-meetings master: Retire TripleO: remove the TripleO meeting info https://review.opendev.org/c/opendev/irc-meetings/+/910692 | 02:58 |
opendevreview | Merged opendev/irc-meetings master: Retire TripleO: remove the TripleO meeting info https://review.opendev.org/c/opendev/irc-meetings/+/910692 | 03:28 |
tkajinam | I'm wondering if there is an easy way to partially restore tripleo projects (like only the zuul project definitions?), to unblock a few projects | 07:29 |
tkajinam | I know ideally we should address these config errors, but we are close to freeze/release time, so it may be helpful if we can postpone that work until the release is made... | 07:30 |
frickler | tkajinam: this should likely rather be discussed in an openstack context, but do you have examples where this is actually blocking? I would much rather help things move forward than have a release that somehow still depends on tripleo things | 09:23 |
*** ykarel is now known as ykarel|away | 14:33 | |
opendevreview | Merged openstack/project-config master: Add OpenAPI related repos to OpenStackSDK project https://review.opendev.org/c/openstack/project-config/+/910580 | 14:39 |
TheJulia | frickler: tkajinam: One thing I suspect all involved might want to be careful of is the desire to maintain continuity, whereas maybe we don't need continuity. It is okay to let things go, after all. Just... difficult. :) | 14:54 |
TheJulia | Just thinking out loud, it might make sense to leverage coredumpctl for core files on CI nodes, in the event the run context of the process cannot write to the current working directory | 15:05 |
fungi | what's involved in turning that on? | 15:07 |
TheJulia | basically resetting the sysctl | 15:07 |
fungi | does that have to be done during boot or can it be done at runtime? | 15:07 |
TheJulia | kernel.core_pattern=|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %e | 15:08 |
TheJulia | runtime it looks like, although I'm still fighting to get dnsmasq to give me a core dump | 15:08 |
TheJulia | okay, apparently it is resource limiting which is preventing me at the moment | 15:08 |
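The runtime switch discussed above can be sketched in a couple of commands. A hedged sketch of the standard sysctl/ulimit knobs, not a transcript of what was actually run on the CI node:

```shell
# Inspect the current core handler (readable without root)
cat /proc/sys/kernel/core_pattern

# Pipe core dumps to systemd-coredump at runtime (needs root);
# this is the same kernel.core_pattern value quoted above
sudo sysctl -w "kernel.core_pattern=|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %e"

# Lift the per-process core size limit for this shell and its children;
# the default soft limit is often 0, which suppresses dumps entirely
ulimit -c unlimited
```

Note that `ulimit -c` only affects processes started from that shell; already-running daemons keep whatever limit they inherited.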
fungi | if you do manage to figure it out, that'll be the first step in collecting/recording crashdumps in jobs (obviously we'd want to avoid doing that for sensitive jobs where it might leak memory regions containing secrets or something) | 15:09 |
TheJulia | yeah | 15:09 |
Clark[m] | Ya that seems like something jobs should opt into explicitly rather than us setting a default to on | 15:11 |
Clark[m] | But we can have a role to do it and make it easy | 15:11 |
TheJulia | the other challenge is that the ulimit also defaults to something stupidly low | 15:24 |
fungi | heading out to grab lunch and vote but shouldn't be too long if anyone needs anything | 15:59 |
fungi | should be back by 17:30 utc at the latest | 15:59 |
SvenKieske | mhm, it is possible to have multiple parents inside a zuul job, isn't it? | 16:16 |
SvenKieske | when can we replace dnsmasq with an evolved dnsdist? :D | 16:17 |
TheJulia | looks like the only way to get a core out of something as deep in the stack as dnsmasq is to manually adjust the process, because its core dump limit gets constrained down to 0 somewhere along the way | 16:21 |
TheJulia | so prlimit with the --pid option | 16:21 |
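prlimit can retune RLIMIT_CORE on a process that is already running, which is what is needed when a daemon's limit was zeroed at startup. A sketch, assuming dnsmasq is the target and you have root:

```shell
# Raise the core-dump limit (soft:hard) on a running process;
# targeting dnsmasq via pidof is an assumption for illustration
sudo prlimit --pid "$(pidof dnsmasq)" --core=unlimited:unlimited

# Confirm the new limit took effect
sudo prlimit --pid "$(pidof dnsmasq)" --core
```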
Clark[m] | SvenKieske it is possible to have multiple parents but I'm not sure it is encouraged/documented. system-config-run-containers in opendev/system-config is an example of this | 16:22 |
SvenKieske | Clark[m]: thanks, for now I have avoided doing it and just copied instead some stuff around, it's all in a testing stage currently, so not a big deal | 16:24 |
SvenKieske | my linter also yells at me when I add multiple parents, but that might just be misconfigured. | 16:24 |
Clark[m] | You can't provide the parents as a list. The way you declare this is pretty specific | 16:25 |
SvenKieske | TheJulia: I fuzzily remember that the ability to be core dumped is somehow tied to CAP_SYS_PTRACE, do you happen to know if that is correct? because that might just be dropped, especially in containerized deployments. | 16:25 |
TheJulia | ... that might actually be it | 16:26 |
SvenKieske | Clark[m]: ah I see, yeah it wants a string, so comma delimited or something I guess? | 16:26 |
TheJulia | I just... I'm already low on spoons and it being in a namespace has not helped :) | 16:26 |
TheJulia | but I got one | 16:26 |
SvenKieske | well that's why I wrote :) hope it helps | 16:26 |
Clark[m] | No, it is more complicated than that. Hence the lack of docs and recommendation. You'll want to look at the example if you try it | 16:28 |
SvenKieske | also dnsmasq afaik drops some caps on its own (also a fuzzy and maybe wrong memory). we recently had to change podman caps for dnsmasq to also let cap_net_raw through because cap_admin was not enough; there was an old ML thread linked where the handling of caps in dnsmasq was discussed | 16:29 |
SvenKieske | if you're interested it's all linked here, maybe there is something useful for your debugging as well there: https://bugs.launchpad.net/kolla-ansible/+bug/2055282 | 16:30 |
SvenKieske | I really wish for a better/modern dnsmasq alternative though :-/ it's very useful software for sure, but I feel it's not really well equipped for various cloud scenarios. | 16:31 |
TheJulia | SvenKieske: so, I've got one of the folks who lives/breathes it in another window, and it took him a while, but in his case it was the systemd unit config on his test machine | 16:33 |
TheJulia | :) | 16:33 |
SvenKieske | ah, yeah well, systemd units also drop many capabilities by default these days | 16:33 |
TheJulia | yeah | 16:34 |
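When the systemd unit itself is what zeroes the core limit (or drops capabilities), a drop-in override is the usual fix. A sketch assuming a `dnsmasq.service` unit and systemd-coredump already wired up as the core handler:

```shell
# Create a drop-in override for the service (opens an editor)
sudo systemctl edit dnsmasq.service
# ...and add:
#   [Service]
#   LimitCORE=infinity

sudo systemctl daemon-reload
sudo systemctl restart dnsmasq.service

# After the next crash, inspect what systemd-coredump collected
coredumpctl list dnsmasq
coredumpctl info dnsmasq
```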
SvenKieske | oh oh... something broken on the interwebz? ERROR! Error when finding available api versions from default (https://galaxy.ansible.com) (HTTP Code: 503, Message: Service Unavailable) | 16:35 |
SvenKieske | mhm, works manually at least from my machine | 16:35 |
Clark[m] | I would say use our proxy cache but it hasn't worked since galaxy redid all of its APIs | 16:42 |
SvenKieske | yeah well, it failed in opendev CI :D https://zuul.opendev.org/t/openstack/build/81a67d99f02a4591b681af74432376b0 | 16:55 |
Clark[m] | Right but that was against the upstream servers. I can't make them not 503 | 16:56 |
SvenKieske | I'm used to mirrors being flaky and crumbling CI and all, but I think it's the first time I saw an actual 503 error code in opendev ci | 16:56 |
SvenKieske | sure :) | 16:56 |
Clark[m] | It has nothing to do with us. 503 is a server error and the server is wherever galaxy hosts | 16:56 |
SvenKieske | yes I know, that was a sincere "sure"! | 16:58 |
SvenKieske | communication via the internet is still hard. | 16:59 |
SvenKieske | mhm, github also doesn't like me, prints an angry looking unicorn instead :D | 17:01 |
SvenKieske | reload helps | 17:01 |
gtema | svenkieske - it's friday evening. What else do you expect, everybody is applying fixes now | 17:02 |
SvenKieske | gtema: I personally am a big fan of read only friday, I have spent enough weekends fixing production after botched friday noon deployments :) | 17:02 |
SvenKieske | I guess I call it a day :D | 17:03 |
gtema | :) | 17:03 |
SvenKieske | did you get any positive replies regarding the new rusty openstacksdk stuff btw? | 17:03 |
SvenKieske | that being said, I wanted to post to the mailing list to voice some public support there | 17:04 |
gtema | not really, you are still welcome | 17:04 |
SvenKieske | will do that! | 17:07 |
gtema | ty | 17:07 |
fungi | okay, i'm back. looks like i missed some lively banter while lunching | 17:31 |
clarkb | I'm finding myself having a lazy morning, and this afternoon I'm doing taxes. Probably for the best after yesterday's fun :) | 17:43 |
clarkb | I did start to get some initial centos 7 and xenial cleanup changes pushed to openstack requirements yesterday though | 17:43 |
fungi | yeah, today seems like a good day for catching up on stuff, far less exciting than yesterday anyway | 17:54 |
clarkb | also it is kinda snowing | 18:02 |
fungi | kinda snowing is the best kinda snowing | 18:03 |
TheJulia | clarkb: fungi: unless you guys scream in short order, I'm going to give ssh access to that held vm to one of RH's developers to prod the coredump some | 18:07 |
clarkb | TheJulia: I think that is fine | 18:07 |
clarkb | general test nodes should be safe to hand out like that as they don't have any important credentials on them (just public keys) | 18:07 |
clarkb | this isn't universally true but should be for your held node | 18:07 |
clarkb | thank you for the heads up | 18:07 |
fungi | TheJulia: go for it | 18:09 |
clarkb | the centos 7 stuff definitely has more tendrils than buster or opensuse did. swift seems to still run jobs for it across their projects so I've asked them about cleaning up there | 18:14 |
clarkb | then a handful of other places appear to have centos 7 jobs too | 18:14 |
fungi | a topic on the python discourse was asking about retiring the manylinux1 standard since it's based on rhel/centos 7 | 18:24 |
timburke | the main tricky thing i see there is that centos 8 stream ships pip 9.0.3, which doesn't know about manylinux2010 | 18:37 |
*** ralonsoh_ is now known as ralonsoh | 19:08 | |
prometheanfire | I guess fyi, some recent python updates (beyond urwid) seem to have stopped gertty from syncing. I think maybe I'll try it in a venv, downgrading libs to see what fixes it | 20:17 |
corvus | prometheanfire: side note: https://review.opendev.org/910358 looks like it's not compatible with the old urwid version (2.1.2); i haven't tried it with newer | 20:41 |
corvus | prometheanfire: let me know if you'd like a pip freeze from my venvs to set a lower bound to help with tracking anything down | 20:42 |
prometheanfire | corvus: sure, that could help, latest release of urwid and git master of gertty should work | 20:47 |
prometheanfire | it's other libs that are not working as well anymore | 20:47 |
corvus | prometheanfire: https://paste.opendev.org/show/b0GGmCDSccHVUu2Jk5nc/ | 20:50 |
prometheanfire | older sqlalchemy will help a bit too | 21:02 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!