NeilHanlon | hm. that's a tough one if using rsync. `dnf reposync` has a '--newest-only' feature which only keeps the most recent NEVRA in a repo when syncing it, but, that has some downfalls too | 01:10 |
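The `--newest-only` behaviour NeilHanlon mentions can be sketched as below. This is a hedged sketch: the wrapper function, repo id, and download path are hypothetical placeholders, not the actual mirror configuration.

```shell
# Hypothetical wrapper around `dnf reposync`; the repo id and
# destination path are placeholders.
sync_newest() {
    repoid=$1
    dest=$2
    # --newest-only downloads only the most recent NEVRA of each package
    dnf reposync --repoid="$repoid" --newest-only --download-path="$dest"
}
```

Note this only limits what gets downloaded on each sync; it does not remove older packages already present in the mirror, which is the drawback being discussed.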
tonyb | clarkb: I think the update went okay: | 01:16 |
tonyb | 0 s:CN = openinfraci.linaro.cloud | 01:16 |
tonyb | i:C = US, O = Let's Encrypt, CN = R3 | 01:16 |
tonyb | a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256 | 01:16 |
tonyb | v:NotBefore: Jan 16 22:54:35 2024 GMT; NotAfter: Apr 15 22:54:34 2024 GMT | 01:16 |
Clark[m] | tonyb: great I will check it shortly. Did the how to on the server make sense? I'm glad I wrote that last time I did this | 01:22 |
tonyb | Yeah the doc was good. | 01:23 |
clarkb | `openssl s_client -connect openinfraci.linaro.cloud:5000` lgtm as well | 01:24 |
NeilHanlon | also looks good for me fwiw | 01:28 |
NeilHanlon | clarkb: https://github.com/rpm-software-management/dnf-plugins-core/blob/master/doc/repomanage.rst might be of interest, to prune the repo. this is what I was trying to remember. | 01:29 |
NeilHanlon | essentially, run this against each repo in some manner, prune the old packages, and then run 'createrepo' on them. | 01:30 |
NeilHanlon | though, this means those paths would also have to be added to an excludelist when pruned, so they are not re-synced | 01:30 |
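The prune-then-reindex flow NeilHanlon describes could be sketched roughly as follows. This is a hedged sketch, not the actual tooling: the `--keep` depth, the use of `createrepo_c`, and the helper function itself are assumptions, and the excludelist step mentioned above is left out.

```shell
# Hedged sketch: list old packages with `dnf repomanage`, delete them,
# then rebuild the repo metadata. Paths and keep-depth are placeholders.
prune_repo() {
    repo_dir=$1
    keep=${2:-1}
    # `dnf repomanage --old --keep N` prints packages older than the
    # newest N versions of each name; delete whatever it prints
    dnf repomanage --old --keep "$keep" "$repo_dir" | xargs -r rm -f
    # regenerate repodata so clients see the pruned package set
    createrepo_c --update "$repo_dir"
}
```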
Clark[m] | NeilHanlon: thanks that gives me something to look at | 02:13 |
opendevreview | OpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml https://review.opendev.org/c/openstack/project-config/+/905809 | 02:33 |
opendevreview | Merged openstack/project-config master: Add weblate api key in zuul secret https://review.opendev.org/c/openstack/project-config/+/905701 | 02:55 |
opendevreview | Merged openstack/project-config master: Normalize projects.yaml https://review.opendev.org/c/openstack/project-config/+/905809 | 03:23 |
opendevreview | Takashi Kajinami proposed openstack/project-config master: Retire heat-cfnclient: End Project Gating https://review.opendev.org/c/openstack/project-config/+/905818 | 03:54 |
opendevreview | Takashi Kajinami proposed openstack/project-config master: Retire heat-cfnclient: Remove Project from Infrastructure System https://review.opendev.org/c/openstack/project-config/+/905820 | 03:58 |
opendevreview | Takashi Kajinami proposed openstack/project-config master: Retire heat-cfnclient: Remove Project from Infrastructure System https://review.opendev.org/c/openstack/project-config/+/905820 | 04:02 |
*** benj_2 is now known as benj_ | 08:41 | |
*** dhill is now known as Guest14435 | 13:36 | |
seongsoocho | Hi team. I'm working on migrating the i18n translation tools from Zanata to Weblate. Currently, only openinfra accounts can be registered as users in Weblate. I am curious whether the zanata@openstack.org email that Zanata previously used to call the Zuul API is also an openinfra account. | 14:09 |
fungi | seongsoocho: i'll check, hold on | 14:12 |
fungi | seongsoocho: yes, it is: https://openstackid-resources.openstack.org/api/public/v1/members?expand=groups,all_affiliations&relations=affiliations,groups&filter[]=email==zanata@openstack.org | 14:13 |
seongsoocho | okay. Then, can I receive the password for that account by email? | 14:14 |
fungi | seongsoocho: i'll have to figure out where messages get routed for that address but there's a good chance i have access to it and can help there | 14:15 |
seongsoocho | thanks. It's not urgent. | 14:16 |
fungi | seongsoocho: it's an alias for our infra-root inbox, so yes i have access to any messages sent to that address | 14:17 |
fungi | seongsoocho: also, it looks like i have a record of what the password for that account was at one time, it may still be the same | 14:21 |
fungi | testing it out now | 14:21 |
Clark[m] | Is that the account we use in zuul to push to zanata? If so changing that password may break those jobs | 14:22 |
seongsoocho | Yes | 14:23 |
fungi | seongsoocho: the password we have on file for that login still works at id.openinfra.dev, i just tested it | 14:23 |
seongsoocho | That's good news | 14:23 |
fungi | are you trying to create a zuul secret containing it? for what repo? | 14:23 |
seongsoocho | for i18n's new translation platform "weblate" | 14:24 |
seongsoocho | openstack.weblate.cloud | 14:24 |
fungi | seongsoocho: i get that, but what are you trying to do with the password? create a zuul secret for use in jobs (in which case you might be able to reuse the existing one unless you're putting it in a different repo), or something else? | 14:25 |
seongsoocho | Or you can just give me an api key of zanata user from weblate . https://openstack.weblate.cloud/accounts/profile/#api | 14:26 |
ianychoi | I believe seongsoocho's intention is to make https://review.opendev.org/c/openstack/project-config/+/905701/1/zuul.d/secrets.yaml exactly match the secret of an account that has an openstackid for weblate login. Another way would be to create another alias like weblate@openstack.org with an openstackid, create a user on openstack.weblate.cloud, and update the secret | 14:26 |
fungi | and openstack.weblate.cloud can't be authenticated via id.openinfra.dev? | 14:27 |
seongsoocho | You can auth via id.openinfra.dev. and I want to make an api key for zanata@openstack.org account. | 14:28 |
ianychoi | openstack.weblate.cloud's only authentication method is now via id.openinfra.dev. seongsoocho needs the CI user's credentials, which push/pull translations between Weblate and the openstack repos | 14:29 |
ianychoi | (Thanks to Brian's development!!) | 14:29 |
fungi | oh, you want to use the zanata@openstack.org account to log into openstack.weblate.cloud interactively, and create a weblate api key using that account, then create a zuul secret that contains that api key | 14:30 |
ianychoi | Exactly (and from my side I am not insisting we keep using zanata@openstack.org. It can be another alias like infra@ or weblate@ ...) | 14:31 |
fungi | Clark[m]: ^ do you think we should reuse the existing openinfraid account for zanata, or create a new one for weblate? i can probably add the new alias with my admin access to the foundation mail hosting for the openstack.org domain | 14:32 |
Clark[m] | Reusing may be confusing long term. Like when we had zuul commenting as Jenkins in Gerrit. I think if we can avoid that it would be best | 14:35 |
fungi | okay, i'll add a weblate@ alias and create an account in openinfraid for that | 14:36 |
fungi | seongsoocho: ianychoi: infra-root: i've created a dedicated id.openinfra.dev account for weblate@openstack.org (password added to our list) and an e-mail alias for that forwarding to our infra-root inbox; i've also created an account on openstack.weblate.cloud with that id and generated a rest api key there which i've added to our list. working on the zuul secret for it now | 14:48 |
Clark[m] | Thank you! | 14:51 |
*** blarnath is now known as d34dh0r53 | 14:56 | |
opendevreview | Jeremy Stanley proposed openstack/project-config master: Update secret for Weblate REST API key https://review.opendev.org/c/openstack/project-config/+/905884 | 14:58 |
fungi | seongsoocho: ianychoi: ^ is that what you wanted? | 14:58 |
ianychoi | Yep that's what I understand from today's I18n meeting, thank you fungi! I think seongsoocho is now sleeping :p | 15:00 |
fungi | i understand. i would be too! | 15:00 |
fungi | anyway, if either of you need anything else for this, just let us know | 15:00 |
ianychoi | Sure really appreciate it! | 15:01 |
seongsoocho | thanks fungi !!!!! | 15:01 |
fungi | any time | 15:01 |
ianychoi | (Oh he is not sleeping LoL) | 15:01 |
fungi | and sorry it took me a few minutes to wrap my head around what you were asking for ;) | 15:01 |
seongsoocho | it's okay. I should have gone into more detail. Thank you for doing the favor so quickly. | 15:06 |
fungi | it was my pleasure, as always | 15:07 |
ianychoi | No worries I personally appreciate seongsoocho's help a lot, while I still lack a lot of knowledge about the infra and opendev. I am seeing the secret part of the repo for the first time :p | 15:07 |
opendevreview | Elod Illes proposed openstack/project-config master: Add puppet-ceph-release right for special stable branch handling https://review.opendev.org/c/openstack/project-config/+/905976 | 16:26 |
*** gthiemon1e is now known as gthiemonge | 16:41 | |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Switch from legacy to new style keycloak container https://review.opendev.org/c/opendev/system-config/+/905469 | 17:54 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Switch from legacy to new style keycloak container https://review.opendev.org/c/opendev/system-config/+/905469 | 18:21 |
clarkb | automated cert refreshing seems consistently broken so we'll need to dig into that. openstack.org and the linaro cloud are both happy now | 19:15 |
clarkb | judging by acme.sh log file timestamps on mirror02.ord, the last time acme.sh ran there was Jan 13 | 19:41 |
clarkb | which was right before it needed to renew | 19:42 |
clarkb | the timing is kind of amazingly good there | 19:42 |
clarkb | and that was the last time the job ran | 19:42 |
clarkb | infra-prod-base is failing | 19:44 |
clarkb | which causes the other jobs run daily to be skipped | 19:44 |
clarkb | codesearch ran out of disk and that breaks ansible's ability to run the base playbooks on codesearch | 19:46 |
clarkb | I'm digging into why codesearch is sad now | 19:46 |
clarkb | /var/lib/hound/data is 34GB and / is 40GB total | 19:49 |
clarkb | is an appropriate action here to stop the service (this might be difficult because docker can't run without disk space; maybe we can compact journals first), delete the contents of that dir, then reboot, start the service again, and let it rebuild that data? | 19:49 |
clarkb | fungi: ^ do you know? | 19:59 |
fungi | i wouldn't consider the timing amazing. when that job starts failing, the servers we'll get notified about soon thereafter are the ones that were just on the cusp of needing to get their certs refreshed | 20:02 |
fungi | clarkb: not really sure what the ideal remediation for hound is, but i'll see if there's anything in my shell history. we should be able to add it to the emergency disable list to get deploy jobs running again though | 20:04 |
clarkb | good idea. Though we only need to do that if this persists through this evening since the periodic jobs queue at 02:00 UTC | 20:05 |
fungi | clarkb: it looks like in the past i used `journalctl --vacuum-size=500M` to free up some space... | 20:05 |
clarkb | ya that part should be fine. It's more of a what do we do about the 34GB of codesearch data | 20:06 |
fungi | and then downed and upped the container | 20:06 |
fungi | oh, i blew away /var/lib/hound/data/ | 20:06 |
clarkb | aha ok so the process I described above is the one to use | 20:06 |
fungi | well, `sudo rm -rf /var/lib/hound/data/*` anyway | 20:06 |
fungi | before starting the container | 20:06 |
clarkb | I'll start on that process. Thank you for checking | 20:07 |
fungi | but also reboot | 20:07 |
clarkb | ++ to reboot after clearing things up and before starting services | 20:07 |
fungi | since the rootfs filled up, can't really trust anything that was running after that occurred | 20:07 |
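The recovery sequence fungi and clarkb settle on above could be collected into a single script like this. A hedged sketch only: the `docker compose` invocation, its file path, and the helper function are assumptions, and restarting the container is left to boot-time automation after the reboot.

```shell
# Hedged sketch of the codesearch recovery steps discussed above.
# The compose file path is a placeholder, not the real deployment path.
recover_codesearch() {
    data_dir=${1:-/var/lib/hound/data}
    journalctl --vacuum-size=500M     # free some rootfs space first
    docker compose -f /etc/hound-docker/docker-compose.yaml down
    rm -rf "${data_dir:?}"/*          # drop cached clones and indexes; hound rebuilds them
    systemctl reboot                  # rootfs filled, so distrust anything still running
}
```

The `${data_dir:?}` expansion guards against an empty variable turning the `rm -rf` into `rm -rf /*`.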
clarkb | I'm going to delete the idx- and vcs- dirs separately so I can get a sense for the data distribution here | 20:08 |
clarkb | indexes are about 1.9GB | 20:09 |
clarkb | and vcs-* was about 32GB | 20:10 |
clarkb | rebooting momentarily | 20:10 |
clarkb | back and the service is starting up. I believe this will take about 10 minutes? | 20:12 |
fungi | hopefully. i don't recall, honestly | 20:12 |
clarkb | it's filling data/ back up again at least. I'm watching it log the repos it is processing in the container log file | 20:15 |
clarkb | it took closer to 19-20 minutes. The service is back up again | 20:30 |
clarkb | there is 28GB of disk available. I guess those vcs dirs must be leaky | 20:30 |
clarkb | the indexes could be leaky too but they were too small to account for that size delta | 20:31 |
fungi | looks like our cacti tracking for that filesystem only extends back to october... http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=71508&rra_id=all | 20:33 |
fungi | definitely looks like it could warrant a separate cinder volume for the data though | 20:34 |
fungi | or use the ephemeral disk | 20:34 |
clarkb | ya then we won't have to clear it out as often | 20:35 |
clarkb | frickler: ^ fyi I think this should enable the LE cert refresh job to run now. And if all goes well the cert alerts should go away afterwards | 20:36 |
* clarkb finds lunch | 20:36 | |
fungi | thanks! | 20:46 |
frickler | oh, so a totally not LE related failure source, nice | 21:00 |
fungi | yeah, something failing a task on the base job has been the reason for certs not updating at least 95% of the time | 21:08 |
tonyb | when I build a new hound server I can add an additional disk for that data. | 22:38 |
fungi | that's an option, but if we build it in rackspace we can just put it on the ephemeral disk (and the same can be done with the existing server for that matter) | 22:43 |
tonyb | fungi: Oh, I didn't realise we always got an ephemeral disk in rax. | 23:08 |
fungi | yeah, it comes by default, 80gb i think? | 23:11 |
fungi | we wouldn't normally use it for important data, but since this is just all cache really it's not important | 23:11 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!