ianw | fungi: if you get a chance before we do gerrit things too; https://review.opendev.org/c/opendev/system-config/+/812279 adds its/actions.config which we seemed to drop in the puppet switch. should be a no-op in production | 00:08 |
fungi | oh, yep thanks! | 00:08 |
opendevreview | Ian Wienand proposed opendev/system-config master: [wip] letsencrypt : don't hit staging in the gate https://review.opendev.org/c/opendev/system-config/+/812610 | 00:50 |
opendevreview | Merged opendev/system-config master: gerrit: add its actions.config file https://review.opendev.org/c/opendev/system-config/+/812279 | 01:19 |
opendevreview | Ian Wienand proposed opendev/system-config master: [wip] letsencrypt : don't hit staging in the gate https://review.opendev.org/c/opendev/system-config/+/812610 | 01:57 |
opendevreview | Ian Wienand proposed opendev/system-config master: [wip] letsencrypt : don't hit staging in the gate https://review.opendev.org/c/opendev/system-config/+/812610 | 02:50 |
ianw | clarkb: one thing to confirm maybe is i see the "avatar" image broken @ https://opendev.org/zuul/ whilst it is something random on your new https://198.72.124.104:3081/zuul/ instance. is that a fix, or something we have different in production? | 03:00 |
opendevreview | Ian Wienand proposed opendev/system-config master: [wip] letsencrypt : don't hit staging in the gate https://review.opendev.org/c/opendev/system-config/+/812610 | 03:28 |
opendevreview | Ian Wienand proposed opendev/system-config master: [wip] letsencrypt : don't hit staging in the gate https://review.opendev.org/c/opendev/system-config/+/812610 | 03:48 |
*** ykarel|away is now known as ykarel | 04:18 | |
opendevreview | Ian Wienand proposed opendev/system-config master: letsencrypt : don't use staging in the gate https://review.opendev.org/c/opendev/system-config/+/812610 | 04:39 |
opendevreview | Ian Wienand proposed opendev/system-config master: Setup Letsencrypt for ptgbot site https://review.opendev.org/c/opendev/system-config/+/804791 | 04:39 |
opendevreview | Ian Wienand proposed opendev/system-config master: Setting Up Ansible For ptgbot https://review.opendev.org/c/opendev/system-config/+/803190 | 04:39 |
opendevreview | Ian Wienand proposed opendev/system-config master: ptgbot: setup web interface https://review.opendev.org/c/opendev/system-config/+/812419 | 04:39 |
*** ysandeep|out is now known as ysandeep | 05:27 | |
priteau | Good morning. Is Zuul struggling again today? Seeing many of the recently submitted jobs all in queued or waiting state. | 06:33 |
*** jpena|off is now known as jpena | 07:29 | |
opendevreview | Marios Andreou proposed openstack/diskimage-builder master: Correct path for CentOS 9 stream base image https://review.opendev.org/c/openstack/diskimage-builder/+/806819 | 07:54 |
*** ykarel is now known as ykarel|lunch | 08:15 | |
*** ysandeep is now known as ysandeep|lunch | 08:28 | |
*** ykarel|lunch is now known as ykarel | 09:45 | |
*** ysandeep|lunch is now known as ysandeep | 09:46 | |
opendevreview | Radosław Piliszek proposed opendev/irc-meetings master: Cancel Masakari team meeting https://review.opendev.org/c/opendev/irc-meetings/+/812650 | 09:52 |
*** hjensas is now known as hjensas|lunch | 09:58 | |
frickler | priteau: seems we have a bit of backlog, likely due to the release being in progress. hopefully that should resolve itself later today | 10:02 |
fungi | priteau: are the ones you were looking at a few hours ago still a problem or have they been making progress? | 10:11 |
priteau | They've completed. It looks like there's around 30 minutes queue for new jobs. | 10:12 |
fungi | yeah, seems the cycle-trailing deployment projects (tripleo and osa in particular) have approved a bunch of changes, and also the numerous periodic jobs which start around 06:00 utc are still finishing up | 10:15 |
fungi | according to https://grafana.opendev.org/d/5Imot6EMk/zuul-status we're backlogged on available job nodes and on executor memory | 10:18 |
*** ysandeep is now known as ysandeep|away | 11:00 | |
opendevreview | Merged openstack/project-config master: Add neutron-dynamic-routing-stable-maint group https://review.opendev.org/c/openstack/project-config/+/809232 | 11:17 |
opendevreview | Merged opendev/irc-meetings master: Cancel Masakari team meeting https://review.opendev.org/c/opendev/irc-meetings/+/812650 | 11:34 |
*** dviroel|out is now known as dviroel | 11:41 | |
opendevreview | Merged openstack/diskimage-builder master: Drop lower version requirement for networkx https://review.opendev.org/c/openstack/diskimage-builder/+/812453 | 11:42 |
*** jpena is now known as jpena|lunch | 12:00 | |
*** ysandeep|away is now known as ysandeep | 12:44 | |
opendevreview | Merged opendev/system-config master: letsencrypt : don't use staging in the gate https://review.opendev.org/c/opendev/system-config/+/812610 | 13:00 |
opendevreview | Merged opendev/system-config master: Setup Letsencrypt for ptgbot site https://review.opendev.org/c/opendev/system-config/+/804791 | 13:00 |
Clark[m] | fungi: anything I can help look at with the release? | 13:05 |
Clark[m] | Are we waiting on the items enqueued to the tag pipeline? I wonder if that pipeline's priority is lower than the release pipeline's priority | 13:07 |
Clark[m] | No, tag has precedence high like release and other post-merge pipelines | 13:14 |
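For context, a Zuul pipeline's node-request priority is set with the `precedence` attribute in its definition. A minimal, illustrative sketch (names and triggers are assumptions, not copied from opendev's actual project-config):

```yaml
# Illustrative Zuul pipeline definition; "precedence" controls how this
# pipeline's node requests rank against other pipelines' requests.
- pipeline:
    name: tag
    manager: independent
    precedence: high
    trigger:
      gerrit:
        - event: ref-updated
          ref: ^refs/tags/.*$
```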
clarkb | aha it is slow due to a semaphore | 13:21 |
clarkb | in that case working as designed I guess | 13:21 |
opendevreview | Merged openstack/diskimage-builder master: Fix cron not installed in debian https://review.opendev.org/c/openstack/diskimage-builder/+/806990 | 13:23 |
clarkb | fungi: I think we're looking at at least 5 hours or so to get through that tag queue. Likely longer due to waiting for nodes after getting the semaphore | 13:23 |
clarkb | fungi: any idea if that is a problem? | 13:23 |
*** jpena|lunch is now known as jpena | 13:24 | |
clarkb | ianw: for the avatar thing the current gitea requests them using a sequential numeric id which 404s and the updated gitea uses what appears to be more like a uuid | 13:28 |
clarkb | ianw: I can't say for sure that 1.15.3 will fix that issue, but it does seem the behavior changes so it wouldn't surprise me. It is possible they updated the db table for that but then didn't update the api request side or something | 13:28 |
clarkb | I don't think it is critical, but something we followup on after the upgrade I guess | 13:28 |
fungi | clarkb: the release notes builds aren't blockers for the release, no | 13:36 |
clarkb | diablo_rojo_phone: ianw: fungi: note that https://review.opendev.org/c/opendev/system-config/+/804791/ isn't running eavesdrop job(s) so there may be missing file matchers | 13:47 |
clarkb | the child changes are probably more likely to match and then run the job though | 13:47 |
opendevreview | Shnaidman Sagi (Sergey) proposed zuul/zuul-jobs master: Include podman installation with molecule https://review.opendev.org/c/zuul/zuul-jobs/+/803471 | 14:17 |
opendevreview | Merged opendev/system-config master: Setting Up Ansible For ptgbot https://review.opendev.org/c/opendev/system-config/+/803190 | 14:18 |
fungi | clarkb: yeah, i wondered why that didn't deploy | 14:22 |
fungi | infra-prod-service-eavesdrop is queued in deploy for 803190 now | 14:26 |
fungi | but it's also queued in the opendev-prod-hourly pipeline so may get a cert sooner | 14:27 |
*** dviroel is now known as dviroel|afk | 14:41 | |
fungi | mmm, infra-prod-base and infra-prod-letsencrypt failed, will dig into those momentarily | 14:54 |
fungi | looks like the base failure is due to an interrupted package upgrade on afs01.ord.openstack.org | 15:23 |
fungi | i'll see what's up with that | 15:23 |
clarkb | The task includes an option with an undefined variable. The error was: 'letsencrypt_self_generate_tokens' is undefined is the LE issue. I'll push a fix for that one | 15:24 |
fungi | rerunning dpkg --configure -a | 15:24 |
fungi | clarkb: thanks! i guess we missed setting a default there | 15:24 |
clarkb | yup I think we just need to add it to the defaults file | 15:25 |
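The fix being described is to give the variable a default in the role. A minimal sketch, assuming the conventional Ansible role layout (`defaults/main.yaml`; the exact path inside system-config may differ):

```yaml
# playbooks/roles/letsencrypt/defaults/main.yaml (assumed path)
# Defining the variable here means any host that doesn't override it no
# longer fails with "'letsencrypt_self_generate_tokens' is undefined".
letsencrypt_self_generate_tokens: false
```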
fungi | somehow we ended up with mismatched kernel header packages on afs01.ord | 15:26 |
fungi | The following NEW packages will be installed: linux-headers-5.4.0-88 linux-headers-5.4.0-88-generic linux-image-5.4.0-88-generic linux-modules-5.4.0-88-generic linux-modules-extra-5.4.0-88-generic | 15:27 |
fungi | The following packages will be upgraded: linux-generic linux-headers-5.4.0-84-generic linux-headers-generic linux-image-generic | 15:27 |
fungi | once those are in place, it should be able to recover from the dkms build failure | 15:28 |
opendevreview | Clark Boylan proposed opendev/system-config master: Fix letsencrypt_self_generate_tokens defaults https://review.opendev.org/c/opendev/system-config/+/812720 | 15:31 |
clarkb | fungi: ^ I think that will fix the LE issue | 15:31 |
fungi | lookin' | 15:31 |
fungi | clarkb: should we exercise that by dropping the explicit False in the letsencrypt test job? | 15:33 |
fungi | (assuming that works the way i think it does) | 15:33 |
clarkb | fungi: ++ let me see if I can do that | 15:35 |
opendevreview | Clark Boylan proposed opendev/system-config master: Fix letsencrypt_self_generate_tokens defaults https://review.opendev.org/c/opendev/system-config/+/812720 | 15:36 |
clarkb | that should do it I think | 15:36 |
fungi | that way if the default doesn't work like we expect, the job should now fail | 15:36 |
*** ykarel is now known as ykarel|away | 15:38 | |
fungi | clarkb: mmm, no i don't think that's going to do what i was thinking | 15:39 |
clarkb | fungi: oh right I agree it will see the zuul specific override | 15:39 |
fungi | i guess we override the default in playbooks/zuul/templates/group_vars/letsencrypt.yaml.j2 | 15:39 |
clarkb | ya I don't think there is a way to do that unless we use a different group vars for that specific job | 13:39 |
clarkb | and figuring that out in ansible is likely to be annoying. I'll revert the latest update for now | 13:39 |
fungi | right, i'm fine with patchset 1 then | 15:40 |
opendevreview | Clark Boylan proposed opendev/system-config master: Fix letsencrypt_self_generate_tokens defaults https://review.opendev.org/c/opendev/system-config/+/812720 | 15:40 |
fungi | thanks, sorry for the runaround | 15:41 |
clarkb | no worries, it was a good idea | 15:41 |
*** jpena is now known as jpena|off | 15:43 | |
*** jpena|off is now known as jpena | 15:44 | |
fungi | should this worry us? https://grafana.opendev.org/d/5Imot6EMk/zuul-status?viewPanel=37&orgId=1&from=now-7d&to=now | 15:47 |
clarkb | fungi: yes, that points to a likely znode leak in zuul | 15:48 |
clarkb | corvus: ^ fyi | 15:48 |
clarkb | I guess introduced with a restart on the 4th? was there a restart on monday? | 15:48 |
fungi | looks like it was fairly steady for the week prior to that | 15:49 |
clarkb | ah ok | 15:49 |
clarkb | inspecting the zk db directly will probably make it clear where the leak is happening | 15:49 |
fungi | steady as in flat | 15:49 |
fungi | and then on or around the 4th started to grow linearly | 15:49 |
fungi | the last zuul upgrade was 2021-09-30 according to https://wiki.openstack.org/wiki/Infrastructure_Status | 15:51 |
opendevreview | Merged opendev/system-config master: Update our gitea images to bullseye https://review.opendev.org/c/opendev/system-config/+/809269 | 15:53 |
clarkb | the big jumps we see are periodic jobs queuing up I think | 15:53 |
clarkb | note based on napkin math we did a while back I think we expect our cluster to be able to do millions of nodes without too much trouble. I don't think this is a restart and delete all znodes reset just yet | 15:55 |
clarkb | that said zuul does seem a bit slow to work through some of the events it's got, though it is also a busy morning | 15:58 |
fungi | nah, more "why is this steadily going upward and never really downward but just since monday" sort of thing | 15:59 |
*** sboyron_ is now known as sboyron | 16:01 | |
*** sboyron is now known as Guest2002 | 16:02 | |
clarkb | the gitea job has begun. It will work through giteas one by one upgrading them and will stop if any fail | 16:02 |
clarkb | it does them in order too so you can check 01 first. one thing we should probably double check is replication too in addition to web updates, since we removed the bullseye backports openssh install now that we are on bullseye | 16:03 |
clarkb | I don't expect issues since it should be the same openssh, but it is probably the main thing affected by this since gitea itself is a go binary | 16:03 |
fungi | we no longer have the bug where the sshd discards git pushes while gitea is restarting, right? | 16:06 |
clarkb | correct, that was fixed by starting ssh after gitea web was started | 16:06 |
clarkb | the problem before was starting ssh first allowed gerrit to do the push but then the web component was never made aware of it | 16:06 |
clarkb | https://gitea01.opendev.org:3081/opendev/system-config web seems to be working | 16:07 |
clarkb | I see replication events exiting the queue (with a much smaller number of retries likely due to the restarts) | 16:10 |
clarkb | I think we're good, but a good final check would be to verify refs/changes/xyz/yz is present after pushing a new patchset, or similar after merging something | 16:11 |
corvus | clarkb, fungi: thanks, i'll take a look at znodes | 16:13 |
*** ysandeep is now known as ysandeep|out | 16:15 | |
clarkb | The gitea image upgrade to bullseye appears to have completed. I haven't seen anything that concerns me yet | 16:25 |
clarkb | I'd like to get a bike ride in in a bit so won't land the 1.15.3 upgrade just yet but would like to do that next when I can help babysit | 16:25 |
corvus | ah, looks like there's a problem with the change cache cleanup | 16:33 |
corvus | i need to use the repl to investigate more | 16:34 |
clarkb | the openstack release is basically done at this point so that should be safe. There are 38 release notes build jobs still in the queue but the release team didn't think those were super important | 16:35 |
fungi | those are mainly just refreshing the release notes to make sure the tagged versions show up instead of "under development" | 16:35 |
corvus | cool. i'm still planning on proceeding lightly and don't expect any interruption yet. i'll let you know if what i find changes that. | 16:36 |
fungi | (or instead of release candidate versions) | 16:36 |
clarkb | yuriys: related to the above message I think you're clear to set instance quota to 0 in the inmotion cloud and fiddle with placement settings whenever you like | 16:37 |
corvus | clarkb, fungi: i understand the issue. it's not something i can fix easily with the repl; i think we should fix the zuul bug and restart as soon as is convenient. i do think we can allow this to grow for a while (days?) without it being urgent. | 16:46 |
fungi | corvus: yes, as clark mentioned we estimated being able to support orders of magnitude more znodes | 16:47 |
clarkb | corvus: I agree, last time we did some metrics on this we figured millions of znodes would be fine on our cluster | 16:47 |
fungi | i just happened to notice its growth looked unbounded | 16:47 |
clarkb | fungi if you have time https://review.opendev.org/c/opendev/system-config/+/803231 is the next change I'd like to land post bike ride | 16:49 |
clarkb | fungi: the child change has a held node on the gitea job which you can use to inspect things | 16:49 |
fungi | yeah, i'll look it over | 16:49 |
fungi | thanks | 16:49 |
*** jpena is now known as jpena|off | 17:00 | |
corvus | i have shut down the zuul repl | 17:21 |
fungi | thanks corvus! | 17:23 |
opendevreview | Merged zuul/zuul-jobs master: Revert "Revert "Include tox_extra_args in tox siblings tasks"" https://review.opendev.org/c/zuul/zuul-jobs/+/812005 | 17:31 |
fungi | just a heads up, that ^ caused some problems for tripleo jobs the first time it merged, it's now exercised with the case they ran into (and fixed), but there could be more corner cases lingering we didn't spot before it got reverted | 17:35 |
*** dviroel|afk is now known as dviroel | 17:41 | |
fungi | clarkb: also when you're back, ianw commented in here earlier about suddenly having generated avatars on org pages like https://198.72.124.104:3081/zuul/ | 17:50 |
fungi | doesn't seem like it presents a problem, but does seem to be a difference | 17:51 |
fungi | and https://198.72.124.104:3081/opendev/disk-image-builder (as opposed to openstack) is the result of a rename test, correct? | 17:53 |
fungi | #status log Manually corrected an incomplete unattended upgrade on afs01.ord.openstack.org | 17:56 |
opendevstatus | fungi: finished logging | 17:57 |
*** ysandeep|out is now known as ysandeep | 18:03 | |
opendevreview | Merged opendev/system-config master: Fix letsencrypt_self_generate_tokens defaults https://review.opendev.org/c/opendev/system-config/+/812720 | 18:59 |
opendevreview | Danni Shi proposed openstack/diskimage-builder master: Update keylime-agent and tpm-emulator elements https://review.opendev.org/c/openstack/diskimage-builder/+/810254 | 19:05 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Finish ptgbot configuration https://review.opendev.org/c/opendev/system-config/+/812739 | 19:10 |
fungi | diablo_rojo_phone: clarkb: ianw: ^ | 19:10 |
Clark[m] | fungi: ya OpenDev/system-config is pushed in to show us what a real repo looks like and dib is part of rename testing | 19:11 |
Clark[m] | Re the avatar stuff I think it did the generated one before and they may have broken it and now it is fixed again? I'm thinking we upgrade and dig further from there if necessary | 19:11 |
fungi | also we have a cert for ptgbot now, the le fix worked! | 19:11 |
fungi | i'll approve the web config change | 19:12 |
Clark[m] | Yay | 19:12 |
yuriys | I've set time aside for Friday clark! but appreciate the greenlight. | 19:12 |
Clark[m] | fungi I chose system-config as the push repo since it is already available on the test node. Made it easy | 19:15 |
*** ysandeep is now known as ysandeep|out | 19:24 | |
clarkb | fungi: I guess we can have ianw review it and ack the avatar thing and I can be around to babysit it from there | 19:34 |
fungi | wfm | 19:38 |
fungi | clarkb: if you have a moment, can you take a look at 812739? as far as i know that's all we're still missing for the ptgbot deployment work | 19:47 |
clarkb | I sure do, let me see | 19:47 |
fungi | aside from ianw's 812419 which i approved a few minutes ago | 19:48 |
clarkb | I went ahead and approved 812739 since impact if anything isn't quite right is minimal | 19:49 |
fungi | yeah, worst case it continues not working ;) | 19:49 |
opendevreview | Merged opendev/system-config master: ptgbot: setup web interface https://review.opendev.org/c/opendev/system-config/+/812419 | 19:51 |
opendevreview | Merged opendev/system-config master: Finish ptgbot configuration https://review.opendev.org/c/opendev/system-config/+/812739 | 20:22 |
ianw | fungi: so it seems like we're just waiting for 812739 to deploy and then we expect ptgbot to be up? | 21:22 |
ianw | the other thing was we probably want to cname ptgbot.openstack.org; i can look into that if you like | 21:23 |
clarkb | ianw: re deployment yes that was my understanding. Also if you want to rereview https://review.opendev.org/c/opendev/system-config/+/803231 I think we can land that now and debug avatar stuff afterwards if the issue persists | 21:24 |
clarkb | I'm around to babysit that for the next 3-4 hours or so | 21:24 |
ianw | wfm | 21:25 |
ianw | avatar stuff wasn't a big issue, just the broken image looks a bit ugly | 21:26 |
clarkb | the debian upgrade happened this morning. My rough plan is to look at the gerrit stuff tomorrow. I think the foundation has some updates to the cla that we can batch into that restart | 21:27 |
ianw | ok sounds good, yeah i was going to write a checklist tomorrow so that sounds good | 21:28 |
clarkb | the debian upgrade for gitea I mean | 21:30 |
ianw | looks like that deployed, ptgbot.opendev.org looks stubborn | 21:34 |
ianw | Up About an hour ptgbot-docker_ptgbot_1 | 21:35 |
ianw | i'll do a manual restart, but we can double check the restart logic | 21:35 |
ianw | localhost:8000 is responding | 21:36 |
ianw | oh i see, a typo in the http side. https://ptg.opendev.org works | 21:37 |
opendevreview | Ian Wienand proposed opendev/system-config master: ptgbot: fix servername on http side https://review.opendev.org/c/opendev/system-config/+/812749 | 21:38 |
clarkb | ianw: +A'd | 21:39 |
fungi | +2'd | 21:40 |
fungi | thanks! | 21:40 |
ianw | {{ ptgbot_config_copied is changed | ternary('--force-recreate', '') }}" | 21:41 |
ianw | it definitely feels like that deploy should have force recreated the container | 21:41 |
jrosser | i always get nervous when the thing preceding a ternary is not in ( ) | 21:42 |
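In Jinja2, tests (`is changed`) and filters (`| ternary(...)`) apply left to right at the same postfix level, so the quoted expression should already parse as intended; parentheses just make that explicit. A hedged sketch of the more defensive form (worth verifying against the Jinja version in use):

```jinja
{# Equivalent, but unambiguous about what ternary() operates on #}
{{ (ptgbot_config_copied is changed) | ternary('--force-recreate', '') }}
```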
ianw | TASK [ptgbot : Put ptgbot config in place] | 21:43 |
ianw | "changed": false, | 21:43 |
ianw | fungi: is it possible you hand-edited in the fixes, and since it reapplied the same thing didn't flag as changed? | 21:43 |
fungi | ianw: i did not, no | 21:44 |
fungi | only looked at what was deployed and proposed changes based on that problem | 21:44 |
ianw | ... interesting ... | 21:44 |
ianw | it changed in "Running 2021-10-06T20:26:12Z" | 21:49 |
ianw | where it did recreate. maybe i just fooled myself thinking it wasn't working because i was trying to hit http | 21:51 |
ianw | that seems to be it | 21:52 |
clarkb | I'm not actually in the channel on oftc, what is the channel? I probably should join it | 21:52 |
diablo_rojo_phone | #openinfra-events ? | 21:52 |
clarkb | thats the one, thanks | 21:52 |
diablo_rojo_phone | No problem. | 21:53 |
clarkb | I don't think I see the bot in there? So it might still be having problems | 21:53 |
clarkb | ianw: ^ fyi | 21:53 |
ianw | hrm, indeed | 21:53 |
clarkb | Did this bot get the necessary updates to properly identify with oftc since they don't do sasl? | 21:54 |
clarkb | I wonder if that is the problem | 21:54 |
ianw | i believe it did | 21:54 |
fungi | the account in private hostvars was created and tested during the oftc migration | 21:56 |
fungi | or maybe i only thought i'd done that | 21:57 |
fungi | also entirely possible, there was rather a lot going on at that time | 21:57 |
clarkb | heh hpe is still sending us email with my name on it from when I set up the hpcloud account | 22:05 |
clarkb | that one was fun too because they charged my corporate account the first billing cycle and I had to go explain to people why it was cheaper for them to not bill me and have me do an expense report | 22:05 |
ianw | ok; problem part 2 : openinfraptg openinfra-events :Illegal channel name | 22:06 |
clarkb | does it need the # | 22:07 |
clarkb | ? | 22:07 |
ianw | i think so, let me try | 22:07 |
ianw | ok, it's there now | 22:08 |
clarkb | I confirm I see it in the channel | 22:08 |
clarkb | ianw: I can fast approve an update to the channel list for the eavesdrop group ansible vars file | 22:08 |
clarkb | or let me know if you want me to push that update | 22:09 |
clarkb | I think you might have to quote the name because # is a yaml comment starter | 22:09 |
opendevreview | Ian Wienand proposed opendev/system-config master: ptgbot: add leading # to channel name https://review.opendev.org/c/opendev/system-config/+/812754 | 22:10 |
clarkb | approved, hopefully yaml is happy with that | 22:11 |
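The reason quoting is needed: an unquoted `#` starts a YAML comment, silently truncating the value. A minimal illustration (the variable name here is made up):

```yaml
# Without quotes the value below would parse as an empty scalar followed
# by a comment; quoting preserves the leading "#".
ptgbot_channel: '#openinfra-events'
```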
fungi | i guess it's not like statusbot in that regard | 22:13 |
opendevreview | Ian Wienand proposed opendev/system-config master: ptgbot: add certificate for ptgbot.openstack.org https://review.opendev.org/c/opendev/system-config/+/812755 | 22:21 |
ianw | ^ i've created the _acme-challenge record for that | 22:21 |
clarkb | I've +2'd but will let fungi double check if he likes (otherwise I think you can +A) | 22:23 |
ianw | hrm, hang on ... | 22:25 |
ianw | we've got "ptg.openstack.org" and "ptgbot.opendev.org" | 22:25 |
clarkb | oh I suspect we may want both to be ptg.o.o | 22:26 |
clarkb | since we don't care too much about it being an irc bot on the web front end | 22:26 |
clarkb | fungi: diablo_rojo_phone ^ does that make sense? | 22:26 |
ianw | yeah i think we might have all just been so used to typing "bot" it snuck in there | 22:26 |
diablo_rojo_phone | Yeah that makes sense. | 22:28 |
fungi | the old site was ptg.openstack.org, the new site is ptgbot.opendev.org, right | 22:29 |
fungi | or are you saying we should make it ptg.opendev.org? | 22:29 |
clarkb | fungi: correct ptg.opendev.org | 22:30 |
fungi | i thought i had already asked that on earlier changes and the consensus was for ptgbot.opendev.org as the new site name (because i had the same question) | 22:30 |
clarkb | since the 'bot' is an implementation details | 22:30 |
diablo_rojo_phone | I feel like it should just change to opendev from openstack. | 22:30 |
diablo_rojo_phone | Right. I don't think bot needs to be included in the url | 22:30 |
clarkb | diablo_rojo_phone: ya I suspect we'll do a redirect but you still need an ssl cert for the old name to make that work | 22:30 |
fungi | okay, that's fine, having ptgbot as the site name seemed to have been intentional in the ansible change | 22:31 |
ianw | fungi: yeah, i think it's just a situation normal wires crossed :) | 22:31 |
diablo_rojo_phone | I think that was mostly because I was using status bot as an example. | 22:31 |
diablo_rojo_phone | Exactly like what ianw said. | 22:31 |
ianw | i think at this point, we'd be best just to put in some s/ptgbot/ptg/ edits before anyone links to it | 22:32 |
diablo_rojo_phone | That's definitely my mistake. | 22:32 |
fungi | ahh, okay wfm, i had previously told foundation folks ptgbot.opendev.org though since that's what the changes were implementing | 22:32 |
fungi | so i think it's already written into draft announcements, we'll just need to make sure they fix those | 22:32 |
fungi | sorry, i assumed the site name used in the earlier changes was intentional | 22:33 |
fungi | (i had actually recommended ptg.openinfra.dev, for that matter) | 22:34 |
fungi | infra-prod-remote-puppet-else failed in deploy for 812739: https://zuul.opendev.org/t/openstack/build/ce49e1aa83bd4e16bbba75043db98a15 | 22:35 |
clarkb | that shouldn't matter too much since there isn't any puppet left in eavesdrop | 22:36 |
clarkb | but probably want to understand why in general | 22:36 |
diablo_rojo_phone | I can make sure that gets updated in future emails fungi | 22:36 |
opendevreview | Ian Wienand proposed opendev/zone-opendev.org master: Rename ptgbot to ptg https://review.opendev.org/c/opendev/zone-opendev.org/+/812757 | 22:37 |
clarkb | I was 30 seconds too late on that one | 22:39 |
clarkb | ya'll are quick | 22:39 |
opendevreview | Ian Wienand proposed opendev/system-config master: ptgbot: rename site to ptg.opendev.org https://review.opendev.org/c/opendev/system-config/+/812755 | 22:40 |
clarkb | and again | 22:42 |
ianw | in my defense, everything was pretty much open in emacs :) | 22:42 |
clarkb | no its a good thing. Less thinking for me :) | 22:43 |
opendevreview | Ian Wienand proposed opendev/system-config master: ptgbot: rename site to ptg.opendev.org https://review.opendev.org/c/opendev/system-config/+/812755 | 22:43 |
ianw | ^ missed testinfra | 22:44 |
clarkb | the gitea upgrade should be merging shortly | 22:45 |
opendevreview | Merged opendev/system-config master: Upgrade gitea to 1.15.3 https://review.opendev.org/c/opendev/system-config/+/803231 | 22:47 |
opendevreview | Merged opendev/system-config master: ptgbot: fix servername on http side https://review.opendev.org/c/opendev/system-config/+/812749 | 22:47 |
opendevreview | Merged opendev/zone-opendev.org master: Rename ptgbot to ptg https://review.opendev.org/c/opendev/zone-opendev.org/+/812757 | 22:51 |
clarkb | https://gitea01.opendev.org:3081/opendev/system-config has been upgraded | 22:55 |
clarkb | avatars are still broken | 22:55 |
fungi | so maybe their working is an artifact of our tests | 22:56 |
clarkb | ya, I suspect something around our upgrading from old versions maybe | 22:57 |
clarkb | if you look at the requests they are different in that we have a simple numeric id request in prod and a uuid request in testing | 22:57 |
clarkb | but other than that it is looking happy from what I can see so far | 22:57 |
clarkb | and that isn't a new regression | 22:57 |
clarkb | we have upgraded through 04 at this point. Still seems good. I expect it will get to 08 without any problems | 23:01 |
clarkb | ianw: fungi: I expect the foundation should be proposing cla.html file updates, should we go ahead and approve the bullseye update for gerrit now? then plan to restart once the cla.html is updated? | 23:03 |
clarkb | I'm not super concerned about that update since most of the software runs in its own vm (similar to gitea running in its own independent binary) | 23:03 |
clarkb | the biggest risk is going to be jeepyb but that has decent testing on python3 now too iirc | 23:03 |
fungi | sounds good to me, i'm around if things go sideways | 23:04 |
clarkb | fungi: https://review.opendev.org/c/opendev/system-config/+/809286 is the gerrit bullseye change if you want to give that a look | 23:05 |
fungi | already there | 23:05 |
clarkb | thanks! | 23:06 |
ianw | ++ on the upgrade | 23:08 |
fungi | interesting that the undifferentiated openjdk:11 image is buster (oldstable) instead of bullseye (stable) | 23:08 |
clarkb | fungi: ya I don't know when they tend to update those | 23:08 |
clarkb | I figure it is better to be explicit either way (there is a buster specific image too) | 23:08 |
fungi | agreed, lgtm | 23:09 |
clarkb | gitea08 is done. the job is still running but this looks like a successful gitea upgrade | 23:10 |
clarkb | cue "we did it, we did it, we did it, yay!" | 23:11 |
clarkb | (Dora the Explorer for anyone curious) | 23:11 |
ianw | now i'm going to have "i'm a map i'm a map i'm a map i'm a map" in my head | 23:12 |
ianw | (i think i forgot a i'm a map) | 23:12 |
opendevreview | Merged opendev/system-config master: ptgbot: add leading # to channel name https://review.opendev.org/c/opendev/system-config/+/812754 | 23:12 |
fungi | i'm counting myself lucky to barely know what you're talking about | 23:14 |
clarkb | fungi: haha, there are definitely worse programs but yes | 23:15 |
ianw | fungi: i think for ptg.openstack.org we need a separate vhost to do a 301 right? | 23:22 |
ianw | i note for something like codesearch.openstack.org we don't do that, but i think it just happens to work because there's only one site running on codesearch01.opendev.org and it defaults to that | 23:22 |
fungi | ianw: no need for a separate vhost, you can make it a serveralias and then redirect within that same vhost to the preferred url | 23:24 |
fungi | oh, though with multiple sites and sni that does become challenging if the serveralias isn't a subjectaltname on the same cert | 23:24 |
fungi | with sni in play, apache will be using the cert to match up the vhost i think | 23:26 |
clarkb | I've always done it as a separate vhost. Possibly in the same file though | 23:26 |
ianw | in this case i have added ptg.openstack.org to the same cert | 23:28 |
fungi | then it should be able to just be a redirect/rewrite within the vhost | 23:28 |
fungi | the rewritecond we use on codesearch is a fine example, yep | 23:30 |
fungi | but we need it as a serveralias in that vhost | 23:30 |
fungi | which wasn't necessary on codesearch because it has only one vhost | 23:31 |
opendevreview | Ian Wienand proposed opendev/system-config master: ptgbot: Add ServerAlias for ptg.openstack.org https://review.opendev.org/c/opendev/system-config/+/812763 | 23:35 |
ianw | fungi: ^ that? | 23:35 |
clarkb | I see we already have the redirect there we just need the alias to match and then do the redirect | 23:36 |
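Putting those pieces together, the vhost pattern under discussion looks roughly like this (a sketch, not the actual system-config template; TLS certificate directives are omitted, and the legacy name must be a SAN on the same cert for SNI to route it here at all):

```apache
<VirtualHost *:443>
    ServerName ptg.opendev.org
    # Without this alias, requests for the old name never reach this vhost.
    ServerAlias ptg.openstack.org

    RewriteEngine On
    # Redirect anything that arrived via the legacy hostname.
    RewriteCond %{HTTP_HOST} ^ptg\.openstack\.org$ [NC]
    RewriteRule ^/(.*)$ https://ptg.opendev.org/$1 [L,R=301]
</VirtualHost>
```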
ianw | can anyone think of a files: .zuul match example for "everything but this file"? | 23:57 |
ianw | the letsencrypt job shouldn't match on playbooks/roles/letsencrypt-create-certs/handlers/main.yaml | 23:58 |
Clark[m] | I think zuul uses re2 for those so no negative lookahead match | 23:59 |
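Since re2 drops negative lookahead, the usual workaround is to enumerate positive patterns that simply never match the excluded file. A small sketch of that idea (the role paths are illustrative, and Python's `re` stands in for re2 here since only re2-compatible syntax is used):

```python
import re

# Patterns a job's "files" matcher might list instead of a negative
# lookahead; none of them can match the handlers file.
patterns = [
    r"^playbooks/roles/letsencrypt/",
    r"^playbooks/roles/letsencrypt-create-certs/tasks/",
]

def job_should_run(path):
    """Return True if a changed file path matches any positive pattern."""
    return any(re.match(p, path) for p in patterns)

print(job_should_run("playbooks/roles/letsencrypt-create-certs/handlers/main.yaml"))  # False
print(job_should_run("playbooks/roles/letsencrypt-create-certs/tasks/main.yaml"))     # True
```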
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!