tonyb | clarkb, fungi: I expect it's a Tipit issue but the cert on openstackid.org has expired. `openssl s_client -servername openstackid.org -connect openstackid.org:443 </dev/null | grep -E '^[dv]'` | 01:59 |
tonyb | notAfter=Jun 18 17:09:38 2024 GMT | 01:59 |
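(The `notAfter` field above uses OpenSSL's fixed timestamp format; as a minimal sketch, it can be parsed and compared against the current time in Python. The expiry string here is the one from the log; the `is_expired` helper is illustrative, not part of any existing tooling.)

```python
from datetime import datetime, timezone

# Parse the notAfter field as printed by openssl
# (format: "Jun 18 17:09:38 2024 GMT"). strptime returns a naive
# datetime, so attach UTC explicitly.
not_after = datetime.strptime(
    "Jun 18 17:09:38 2024 GMT", "%b %d %H:%M:%S %Y %Z"
).replace(tzinfo=timezone.utc)

def is_expired(expiry, now=None):
    """Return True if the certificate expiry time has passed."""
    now = now or datetime.now(timezone.utc)
    return now > expiry

print(is_expired(not_after))
```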
fungi | tonyb: thanks! i'll give them a heads up, but also id.openinfra.dev is the canonical name for the service these days and i think they only maintained the old hostname for backward-compatibility | 02:03 |
tonyb | That makes sense. I don't actually recall what click trail I followed that ended up at openstackid.org | 02:05 |
tonyb | whatever it was, clearly needs to be updated | 02:05 |
frickler | clarkb: I found out that the hostId is not shown in the metadata, but in the nova API. "openstack server show gitea09.opendev.org -c hostId". there's a shared host for 09+14, another for 11+13; 10 and 12 are different | 05:33 |
frickler | see also https://docs.openstack.org/api-ref/compute/#id21 | 05:52 |
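(The shared-host finding above can be reproduced by collecting each server's hostId and inverting the mapping. A sketch of that grouping step; the hostId values below are made up for illustration, the real ones come from `openstack server show <name> -c hostId`.)

```python
from collections import defaultdict

# Hypothetical hostId per gitea backend, as reported by the nova API.
host_ids = {
    "gitea09": "aaa111",
    "gitea10": "bbb222",
    "gitea11": "ccc333",
    "gitea12": "ddd444",
    "gitea13": "ccc333",
    "gitea14": "aaa111",
}

# Invert the mapping: servers with the same hostId share a hypervisor.
by_host = defaultdict(list)
for server, host_id in sorted(host_ids.items()):
    by_host[host_id].append(server)

shared = {h: s for h, s in by_host.items() if len(s) > 1}
print(shared)  # → {'aaa111': ['gitea09', 'gitea14'], 'ccc333': ['gitea11', 'gitea13']}
```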
frickler | the nodepool builders are generating some CryptographyDeprecationWarning in their cron jobs for a couple of days now. likely not critical yet, but maybe keep an eye on it? | 07:20 |
frickler | fwiw, abandoning a change in gerrit also had a noticeable delay for me twice just now, though "only" about 5s instead of 15 | 08:35 |
frickler | also, just for the record, the last nova CVE patch (for now) merged tonight, so nothing to look out for anymore. I've thus self-approved https://review.opendev.org/c/openstack/project-config/+/923509, waited long enough IMO | 08:38 |
opendevreview | Merged openstack/project-config master: Drop repos with config errors from x/ namespace https://review.opendev.org/c/openstack/project-config/+/923509 | 09:04 |
opendevreview | Jan Gutter proposed zuul/zuul-jobs master: Update ensure-kubernetes with podman support https://review.opendev.org/c/zuul/zuul-jobs/+/924970 | 09:49 |
opendevreview | Jan Gutter proposed zuul/zuul-jobs master: Update ensure-kubernetes with podman support https://review.opendev.org/c/zuul/zuul-jobs/+/924970 | 10:04 |
frickler | gitea found out that I'm a Ghost, no more hiding https://opendev.org/openstack/tap-as-a-service/releases/tag/ocata-eol | 10:17 |
opendevreview | Jan Gutter proposed zuul/zuul-jobs master: Update ensure-kubernetes with podman support https://review.opendev.org/c/zuul/zuul-jobs/+/924970 | 10:20 |
frickler | ok, it is not only the UI for gerrit, "git push -d gerrit stable/ocata" took about 20s showing "remote: Processing changes". I'll add "time" next time for more data, got a bunch to clean up still | 10:21 |
frickler | but I think this is consistent with slow IO being the cause | 10:22 |
opendevreview | Jan Gutter proposed zuul/zuul-jobs master: Update ensure-kubernetes with podman support https://review.opendev.org/c/zuul/zuul-jobs/+/924970 | 10:35 |
frickler | hmm, all further deletions for that repo (openstack/tap-as-a-service) went in < 3s | 10:37 |
opendevreview | Jan Gutter proposed zuul/zuul-jobs master: Update ensure-kubernetes with podman support https://review.opendev.org/c/zuul/zuul-jobs/+/924970 | 11:22 |
frickler | oh, nice, openstack-ansible-tests gate is running in an unnamed queue with window_size: infinity. now that's what I call headroom ;) | 11:46 |
fungi | frickler: i think the cryptography deprecation message is also showing up in upstream zuul tests, there was some discussion of it in the zuul matrix too | 12:25 |
*** elodilles_ooo is now known as elodilles | 12:32 | |
fungi | discussion there landed at https://github.com/paramiko/paramiko/pull/2421 | 12:42 |
fungi | so i guess once there's a new paramiko release with that or a similar fix and we get newer nodepool container images with that version installed, it'll go away again | 12:44 |
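(Until a fixed paramiko lands in the images, one possible stopgap — a sketch only, not what the builders actually do — is to filter the deprecation warning in the Python entry point. The warning class below is a stand-in for `cryptography.utils.CryptographyDeprecationWarning`, defined locally so the example is self-contained; the warning message is invented.)

```python
import warnings

class CryptographyDeprecationWarning(DeprecationWarning):
    """Stand-in for cryptography.utils.CryptographyDeprecationWarning."""

def noisy_call():
    # Simulates a paramiko import that emits the deprecation warning.
    warnings.warn("algorithm is deprecated", CryptographyDeprecationWarning)
    return "ok"

with warnings.catch_warnings(record=True) as caught:
    # Ignore only this warning class; everything else still surfaces.
    warnings.simplefilter("ignore", CryptographyDeprecationWarning)
    result = noisy_call()

print(result, len(caught))  # → ok 0
```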
frickler | ah, thx for the pointer | 13:01 |
opendevreview | Jan Gutter proposed zuul/zuul-jobs master: Update ensure-kubernetes with podman support https://review.opendev.org/c/zuul/zuul-jobs/+/924970 | 13:11 |
opendevreview | Jan Gutter proposed zuul/zuul-jobs master: Update ensure-kubernetes with podman support https://review.opendev.org/c/zuul/zuul-jobs/+/924970 | 13:41 |
opendevreview | Jan Gutter proposed zuul/zuul-jobs master: Update ensure-kubernetes with podman support https://review.opendev.org/c/zuul/zuul-jobs/+/924970 | 14:04 |
clarkb | the gitea ghost user appears to be the "we don't know who you are" stand-in value | 15:34 |
clarkb | infra-root I just forwarded an email from the Works on Arm folks about the hosting for the current linaro arm cloud. TL;DR is that it is going away on August 10 (15 days notice...). I guess I'll respond to that email and let them know I can start winding down our usage of it next week | 15:51 |
frickler | ah, I received some weird github notification the other day but forgot to mention it. also where did you forward the mail to? | 15:54 |
clarkb | frickler: I forwarded it to infra-root. And ya they posted initially to the original github issue saying to check our inboxes, but I had no email so I emailed them directly. They didn't respond to that; they just sent a new email, but now we know | 15:55 |
clarkb | in any case I'll write a response today thanking them for the use of the hardware and let them know I'll start sunsetting the usage on our side next week. Looks like the arm builder is in osuosl so we don't need to redeploy anything; should be straightforward | 15:56 |
frickler | +1 | 15:57 |
fungi | any idea if osuosl's hardware is also on load from works-on-arm or was directly donated to them? | 16:03 |
fungi | i suppose Ramereth would know, if around | 16:04 |
fungi | s/on load/on loan/ | 16:04 |
clarkb | I suspect osuosl's is completely separate. They have their own datacenters and everything | 16:04 |
clarkb | it's also different hardware iirc | 16:04 |
Ramereth[m] | ours are separate | 16:04 |
fungi | ah, yeah maybe this is less of a concern for the hardware itself and more that arm/ampere doesn't want to keep paying equinix for the colo | 16:05 |
fungi | thanks for confirming Ramereth[m]! | 16:05 |
Ramereth[m] | Do you need anything else from me? | 16:05 |
Ramereth[m] | Are you needing more capacity? | 16:05 |
fungi | Ramereth[m]: we probably wouldn't turn down additional capacity, and will have less when the linaro environment disappears on us, but probably what we're really missing more is diversity of providers for arm resources so just adding more at osuosl wouldn't necessarily help with that | 16:06 |
*** dasm is now known as Guest1321 | 16:07 | |
clarkb | ya I think we can see how we do after the shutdown and if things become painful we can talk about capacity. But for now I think we're good and in a wait and see mode | 16:07 |
Ramereth[m] | fungi: okay good to know. I have regular contact with Ampere so I could bring that up if you'd like. Or if you have other questions/concerns related to their support | 16:07 |
fungi | Ramereth[m]: that's great to know, thanks! will definitely keep that in mind once we see what happens with this | 16:07 |
clarkb | Ramereth[m]: I may also end up at FOSSY or around FOSSY (not completely sure yet) if you wanted to discuss more in person | 16:08 |
fungi | and also the existing resources there are very much appreciated, thanks again for all the hard work you and your crew put into it | 16:08 |
clarkb | but I don't think there is any urgency on your side. We continue to appreciate the resources you've made available to us! | 16:08 |
Ramereth[m] | clarkb: great! let me know. It would be great to sync up | 16:10 |
opendevreview | Clark Boylan proposed openstack/project-config master: Set linaro cloud's max servers to 0 https://review.opendev.org/c/openstack/project-config/+/925029 | 16:38 |
clarkb | I don't think we are in a rush to land ^ but figured I may as well push some of the changes | 16:39 |
opendevreview | Carlos Eduardo proposed openstack/project-config master: Implement manila-unmaintained-core group https://review.opendev.org/c/openstack/project-config/+/924430 | 18:05 |
JayF | I'm having a really weird experience r/n with Gerrit. https://review.opendev.org/c/openstack/ironic-python-agent/+/924634 change, click on the -ipmi-direct-src job, click logs tab, navigate to controller/logs/ironic-bm-logs, click the first node-0 log. See it says "This logfile could not be found", but if you then go to the top right, and click "View log", you can navigate to the logs from the web directory of log files. | 18:45 |
fungi | that's technically zuul not gerrit, but yeah, looking into it now | 18:49 |
fungi | and the raw link when clicked tries to download rather than displaying in my browser, content type must be odd | 18:52 |
fungi | i wonder if it could be the url encoding of : throwing it off | 18:53 |
fungi | JayF: do all log files exhibiting that behavior have : in their names, and only those as far as you've seen? | 18:56 |
JayF | this is literally the only time I've ever seen this behavior | 18:56 |
JayF | but I assumed the same you did | 18:56 |
fungi | the zuul manifest for that build has raw : in the log filenames, the raw url has %-encoded : but i would assume the zuul dashboard code is smart enough to know that since it's what's providing both links | 18:59 |
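(The encoding mismatch described above is easy to reproduce: a manifest stores the raw filename while the URL layer percent-encodes `:`, so a naive string comparison between the two misses. A sketch with Python's `urllib.parse`; the filename is hypothetical, not the actual log name from that build.)

```python
from urllib.parse import quote, unquote

# Hypothetical log filename containing ':' like the console logs in question.
raw_name = "node-0_2024-07-26T18:00:00.log"

encoded = quote(raw_name)  # ':' is not an unreserved character, so it
print(encoded)             # becomes %3A in the URL path

# Comparing the encoded URL segment against the raw manifest entry fails...
print(encoded == raw_name)          # → False
# ...unless one side is normalized first.
print(unquote(encoded) == raw_name) # → True
```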
Clark[m] | The manifest is generated by the jobs not the dashboard I think | 19:08 |
Clark[m] | So it could be a bug | 19:08 |
opendevreview | Dan Smith proposed openstack/project-config master: Add openstack/os-test-images project under glance https://review.opendev.org/c/openstack/project-config/+/925043 | 19:09 |
opendevreview | Dan Smith proposed openstack/project-config master: Add openstack/os-test-images project under glance https://review.opendev.org/c/openstack/project-config/+/925043 | 19:12 |
opendevreview | Dan Smith proposed openstack/project-config master: Add openstack/os-test-images project under glance https://review.opendev.org/c/openstack/project-config/+/925043 | 21:05 |
opendevreview | Clark Boylan proposed openstack/project-config master: Use the osuosl mirror for deb packages in image builds https://review.opendev.org/c/openstack/project-config/+/925048 | 22:00 |
opendevreview | Clark Boylan proposed openstack/project-config master: Remove labels and diskimages from the linaro cloud https://review.opendev.org/c/openstack/project-config/+/925049 | 22:00 |
opendevreview | Clark Boylan proposed openstack/project-config master: Remove linaro cloud from Nodepool https://review.opendev.org/c/openstack/project-config/+/925050 | 22:00 |
clarkb | https://review.opendev.org/c/openstack/project-config/+/925048 and https://review.opendev.org/c/openstack/project-config/+/925029 should be safe to land at any time | 22:00 |
clarkb | I can shepherd the other changes through as these initial cleanups land | 22:01 |
clarkb | looking at my calendar we did service coordinator nominations from Tuesday February 6 to Tuesday February 20 previously. 6 months from then is August 6 to August 20. I think we can do nominations from August 6 to August 20 then have an election from August 21 to August 28 if necessary. That also neatly ensures things are done before I'm traveling for the summit | 22:09 |
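(The calendar arithmetic above checks out: both windows are two-week spans starting on a Tuesday, with the election week immediately following. A quick sketch verifying it with `datetime`; the years are assumed to be 2024 based on the surrounding log.)

```python
from datetime import date, timedelta

# Previous nomination window and the proposed one six months later.
prev_open, prev_close = date(2024, 2, 6), date(2024, 2, 20)
nom_open, nom_close = date(2024, 8, 6), date(2024, 8, 20)
elec_open, elec_close = date(2024, 8, 21), date(2024, 8, 28)

# Both nomination windows span exactly two weeks, Tuesday to Tuesday.
assert (prev_close - prev_open).days == 14
assert (nom_close - nom_open).days == 14
assert prev_open.weekday() == nom_open.weekday() == 1  # 1 == Tuesday

# The optional election starts the day after nominations close.
assert elec_open == nom_close + timedelta(days=1)
print((elec_close - elec_open).days)  # → 7
```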
clarkb | I'll put that on next week's meeting agenda and bring it up more formally there. But if there are concerns or suggestions sooner feel free to bring them up sooner (email or irc is fine) | 22:10 |
clarkb | I've approved https://review.opendev.org/c/openstack/project-config/+/924430 for manila-unmaintained-core. I don't think we need permission or anything for that, since openstack unmaintained branches are meant to be less controlled and open to whoever is willing to manage them | 22:40 |
clarkb | that said I'm not sure who should seed the new group. is it the change owner for that change? cc fungi | 22:40 |
opendevreview | Merged openstack/project-config master: Implement manila-unmaintained-core group https://review.opendev.org/c/openstack/project-config/+/924430 | 22:52 |
fungi | clarkb: yeah, i'm not really sure either. i'd defer to the change proposer as well, but it's not entirely clear | 23:19 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!