| *** elodilles is now known as elodilles_pto | 09:18 | |
| corvus | i'm restarting the schedulers and launchers to pick up some changes that didn't make the weekend restart; they don't affect the mergers and executors | 15:20 |
| fungi | thanks! | 15:34 |
| fungi | i guess we're hoping for another zuul release rsn | 15:34 |
| corvus | well, there was a pair of changes that got split across the restart, it's not critical, but i'd like us to be testing the end result, not the midway | 15:36 |
| fungi | makes sense | 15:41 |
| clarkb | corvus: was that the stack that looks at transaction ids in objects? | 15:50 |
| corvus | clarkb: that landed on friday; the split was the gerrit event revert (the revert landed on friday, but the delay config landed on saturday). it shouldn't change anything for us. | 15:53 |
| corvus | (also, a change to remove some of the ignorable zkobject errors merged, so i'm restarting the launchers so we benefit from that) | 15:53 |
| clarkb | got it thanks | 15:53 |
| clarkb | so I guess we're running zk 3.9 now with the changes in zuul that take advantage of the extra functionality | 15:54 |
| corvus | yep; theoretically that even happened right after the launchers reconnected (since they perform the capability check on connection). it would be pretty hard to verify that though; the behavior's pretty subtle. | 15:57 |
| clarkb | infra-root there is a gpg-agent running on bridge so I can't (easily) pull up secrets right now. Looks like the process is parented to init (1) and was started on the 14th | 16:01 |
| clarkb | I suspect that this may be related to the matrix room setup that tonyb did. Does anyone else know what may have created it? | 16:02 |
| clarkb | I am hoping to get into the EMS control panel to update billing details, so related to matrix but on the management side not the usage side | 16:02 |
| clarkb | (I really don't understand how gpg intends for this to work outside of a desktop environment. Auto starting an agent then refusing to use an already running agent and dying when you try to do anything is a really poor experience) | 16:04 |
| clarkb | jproulx has an interesting question about running AFS over openstack with OVS and NAT. I don't think we've seen any problems but I don't know that we have that setup in any of our clouds (most use direct IPs and not NAT and the one that does use floating IPs is using OVN I think) | 16:08 |
| corvus | clarkb: yeah i ran into that too; i tried killing it and running edit-secrets and it just kept doing the same thing. i eventually just decided to let emacs use it. | 16:09 |
| corvus | i don't know if something changed in the packages, or if we need to reboot or what | 16:09 |
| clarkb | corvus: oh interesting. You're saying you killed the process and it came back automatically? | 16:10 |
| clarkb | or did it only come back after running emacs again? | 16:10 |
| clarkb | (I'm trying to understand what my workarounds might be here) | 16:10 |
| clarkb | the gpg-agent process is running with --use-standard-socket which seems to create a few sockets in ~/.gnupg/ like ~/.gnupg/S.gpg-agent. The man page says that applications should try to use the socket in GPG_AGENT_INFO then fall back to this one. I guess the way we run emacs with an explicit gpg-agent invocation means we can't fall back since we want a new agent? | 16:22 |
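(A minimal sketch of how one might inspect the stray agent and its standard socket, assuming GnuPG 2.x and root's homedir on bridge; the paths come from the discussion above, not verified output.)

```
# Sketch only: inspect a stray gpg-agent and its standard socket.
# Assumes GnuPG 2.x; the /root/.gnupg path is an assumption based on the
# discussion above, not verified output from bridge.
pgrep -a gpg-agent                 # show any running agent and its arguments
gpgconf --list-dirs agent-socket   # print the standard socket path
ls -l /root/.gnupg/S.gpg-agent     # check whether the standard socket exists
```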
| clarkb | corvus: were you setting GPG_AGENT_INFO to the socket value /root/.gnupg/S.gpg-agent and then running emacs directly? | 16:22 |
| corvus | after running again | 16:24 |
| fungi | clarkb: yeah i was just about to hit send on a reply to the afs thread | 16:24 |
| corvus | i just ran edit-secrets again | 16:24 |
| clarkb | corvus: got it so it should be safe for me to kill the gpg agent and run edit-secrets? Then when I'm done maybe check to see if it cleaned up after itself? | 16:24 |
| clarkb | emacs and gpg are two things I don't use regularly so just want to double check I'm not doing anything silly before proceeding | 16:25 |
| corvus | i think so | 16:30 |
| clarkb | ack I'll proceed with that approach if no one objects in the next little bit | 16:30 |
| fungi | my notes say i use `gpgconf --kill gpg-agent` to make sure it's gone | 16:39 |
| fungi | should be fine | 16:39 |
| clarkb | corvus: killing the agent then running edit-secrets seems to have worked for me. After closing emacs via ^X^C there is no gpg-agent running for me any longer | 16:40 |
| clarkb | fungi: ah ok. I did kill $pid and it shutdown after that | 16:40 |
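(A sketch of the workaround described above; it assumes root on bridge and the existing edit-secrets wrapper, and is not a verified procedure.)

```
# Sketch of the workaround discussed above (assumes root on bridge and the
# existing edit-secrets wrapper; not a verified procedure).
gpgconf --kill gpg-agent    # or: kill <pid>, as done above
edit-secrets                # relaunch emacs/gpg via the usual wrapper
# after exiting emacs (C-x C-c), verify nothing was left behind:
pgrep -a gpg-agent
```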
| clarkb | so now I'm wondering if A) it has to do with how emacs is asked to exit maybe if it returns some nonzero rc gpg-agent doesn't exit or B) something to do with the ssh env affecting the runtime? I do a sudo su - to clear out the env | 16:41 |
| corvus | oh weird. maybe there was something about my environment causing it to re-run. i thought i tried a new shell.... :/ | 16:43 |
| corvus | i just tried it again and it worked normally | 16:45 |
| corvus | (and cleaned up) | 16:45 |
| clarkb | weird | 16:45 |
| clarkb | I went ahead and tested one more time just to see if it is consistent and so far it is for me. No leaked agent | 16:59 |
| clarkb | and also EMS billing should be all sorted | 16:59 |
| corvus | btw, i did statusbot matrix on friday: https://review.opendev.org/967252 | 17:00 |
| corvus | do we have a topic or hashtag? | 17:00 |
| clarkb | corvus: the spec says `opendev-matrix` | 17:02 |
| clarkb | I think our spec template still says we should use topics, but I think using hashtags at this point is probably a good idea | 17:02 |
| clarkb | I'll update the matrix for opendev meeting agenda item to link to the hashtag url for that hashtag | 17:05 |
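(A hypothetical example of the hashtag query URL being referenced; the hashtag name comes from the spec mentioned above, and /q/hashtag: is standard Gerrit search syntax.)

```
# Hypothetical example of the hashtag search URL referenced above
# (hashtag name from the spec; /q/hashtag: is standard Gerrit search syntax).
xdg-open "https://review.opendev.org/q/hashtag:opendev-matrix"
```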
| clarkb | Looks like a number of agenda items can be cleared out. Anything else to add? | 17:05 |
| opendevreview | James E. Blair proposed opendev/infra-specs master: Use hashtags instead of topics https://review.opendev.org/c/opendev/infra-specs/+/967401 | 17:07 |
| corvus | i think https://review.opendev.org/965972 is moot, right? | 17:08 |
| clarkb | corvus: yes I think we ended up rebalancing up nearer to the quota values in a change from fungi | 17:10 |
| fungi | oh, sorry i missed that it was hanging out there and should have incorporated it more directly with depends-on or through a revision | 17:11 |
| corvus | no problem! | 17:12 |
| corvus | i mean, i reviewed the changes from fungi | 17:12 |
| fungi | i am sometimes in a hurry and forget to check conflicting changes | 17:13 |
| clarkb | corvus: https://review.opendev.org/c/opendev/system-config/+/962826 is another possibly superseded change now that we only allow access to gitea via the load balancer. fungi indicated last week that he would still be in favor of landing it. Do you have an opinion? I'm kinda thinking not landing it is maybe best if we think there isn't a good reason to anymore simply due to possible | 17:13 |
| clarkb | issues if we get the canonical url values wrong and search engines see that info | 17:13 |
| clarkb | that said our test cases there should be decent and help us avoid that particular issue | 17:14 |
| clarkb | if we feel strongly about proceeding with setting the canonical url values I think we can do so | 17:15 |
| fungi | mainly i think otherwise search engines which previously indexed the alternative urls might take a very long time to expire those entries | 17:15 |
| fungi | but i agree it's not strictly necessary and this will be eventually consistent anyway | 17:16 |
| fungi | (and they might even take longer to act on the canonical url metadata from the correct urls than they take to notice the incorrect ones are no longer reachable) | 17:16 |
| clarkb | I checked gitea load averages this morning and they look good. I can't see for sure we actually made things better vs crawlers being less aggressive but it makes me hopeful this is an improvement to not have consistently poor performance | 17:19 |
| opendevreview | Merged opendev/infra-specs master: Use hashtags instead of topics https://review.opendev.org/c/opendev/infra-specs/+/967401 | 17:19 |
| clarkb | I quickly approved ^ because it's a mechanics-of-change update, not a change to what we want to do | 17:20 |
| corvus | yeah, i lean toward fungi's argument that even if it's not strictly necessary due to the current routing, i think it's a good change. having said that, i understand it's risky and would require work that i'm not volunteering to do, so i'm not going to ask that you merge it, but if you're willing, i think it's worth it. :) | 17:20 |
| clarkb | ack In that case I won't abandon it and will instead plan to refresh my understanding of the topic and figure out proceeding with it | 17:21 |
| clarkb | ok I remember what this change is doing. Basically it sets rel="canonical" Link headers for urls so that the backend urls don't pollute things in search engines. We do that in apache so that we don't have to patch gitea and we do so with some mod rewrite magic | 17:28 |
| clarkb | I think the main risks are that either A) the default ROOT_URL value doesn't get set properly or B) there is some additional url construction case we aren't covering so have canonical urls that don't match what they should point to | 17:29 |
| clarkb | both seem like low risks so ya I think we can probably proceed, even today if we want to | 17:29 |
| fungi | i think it also serves as a good template for other services where we might want to do something similar in the future, for different reasons | 17:30 |
| fungi | though as previously discussed, for zuul.o.o we could just redirect from the individual backend names by rewriting any request that isn't for the canonical name | 17:31 |
| fungi | since it didn't seem like there was any value in even us trying to connect to them individually | 17:32 |
| fungi | and even then, we could just use /etc/hosts overrides if needed to work around the redirect behavior | 17:32 |
| clarkb | ya I'm not too worried about the impacts it might have to us. More want to ensure whatever data we identify as canonical is actually canonical | 17:33 |
| opendevreview | Merged opendev/system-config master: Update zuul-client image location https://review.opendev.org/c/opendev/system-config/+/967111 | 17:37 |
| clarkb | mnasiadka: I see the mirror config role update change is marked WIP. I guess I should avoid reviewing it for now then? | 18:31 |
| mnasiadka | clarkb: yes (do not review), I will work on it in coming days - was busy with Kolla Flamingo release | 18:39 |
| clarkb | thinking about that gitea change further I think we will restart apache2 but not gitea when that change deploys. I think this is ok since the interesting bits happen in apache and not gitea. That said we change the ROOT_URL from https://opendev.org/ to https://opendev.org (no more trailing /) so probably still worth a manual gitea restart to ensure all is happy afterwards. That is | 18:39 |
| clarkb | the only other gotcha I can think of at the moment. I guess I should plan to approve it soon. corvus did you want to re-review it (you +2'd a previous patchset)? | 18:39 |
| clarkb | mnasiadka: got it, I want to make sure I don't ignore that change so feel free to let me know when you think it is ready to be looked at | 18:39 |
| mnasiadka | Will do | 18:39 |
| clarkb | infra-root I'll plan to approve https://review.opendev.org/c/opendev/system-config/+/962826 at ~2000 UTC if there is no other input beforehand. That should give me plenty of time to eat lunch while it runs through gate testing | 18:43 |
| clarkb | (if I approve it now lunch will likely be interrupted by checking the results of that update) | 18:43 |
| fungi | sgtm | 18:44 |
| corvus | ++ | 18:56 |
| clarkb | fungi: for followups to jproulx I believe all three mirrors in raxflex are behind a floating ip and I suspect (but don't know for certain) that that cloud runs ovn | 19:52 |
| clarkb | ah looks like you just responded | 19:52 |
| fungi | yeah | 19:53 |
| clarkb | oh wait the gitea ROOT_URL appends a / | 19:56 |
| clarkb | so the value shouldn't change at all | 19:56 |
| clarkb | so ya the main thing to look at is apache | 19:56 |
| clarkb | and I'm just about ready to approve the change | 19:57 |
| clarkb | ok change is approved now | 20:02 |
| fungi | thanks! | 20:02 |
| clarkb | I've got a socks proxy going to gitea-lb03 again so that I can check the rollout of the change using firefox web developer tools (the response headers should update post rollout to include the canonical Link header info) | 21:06 |
| clarkb | and once it is done on each backend everyone should be able to get the info talking to the load balancer | 21:07 |
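(A rough sketch of the kind of SOCKS proxy check described above; the port, user, and exact proxy setup are assumptions, and the backend URL is the one checked below.)

```
# Rough sketch of the SOCKS-proxy check described above. The port and user
# are assumptions; the backend URL matches the one checked below.
ssh -D 1080 -N root@gitea-lb03.opendev.org &
curl -sI --socks5-hostname localhost:1080 \
  https://gitea09.opendev.org:3081/opendev/system-config/ | grep -i '^link:'
```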
| fungi | looks like the gate jobs are finishing up | 21:36 |
| fungi | i'm going to need to step away before the deploy job really gets going though, it's looking like | 21:36 |
| clarkb | I'm still around waiting patiently | 21:37 |
| opendevreview | Merged opendev/system-config master: Set canonical Link paths for gitea resources https://review.opendev.org/c/opendev/system-config/+/962826 | 21:37 |
| fungi | infra-prod-service-gitea is starting finally, but i have to run | 21:40 |
| fungi | i'll check back in a couple of hours, hopefully it goes smoothly | 21:40 |
| clarkb | thanks | 21:40 |
| clarkb | 09 has updated | 21:42 |
| clarkb | https://gitea09.opendev.org:3081/opendev/system-config/ through socks proxy has this header now: Link: <https://opendev.org/opendev/system-config/>; rel="canonical" | 21:43 |
| clarkb | https://gitea09.opendev.org:3081/assets/css/index.css?v=v1.25.1 has this header now: Link: <https://opendev.org/assets/css/index.css?v=v1.25.1>; rel="canonical" | 21:45 |
| clarkb | so I think the non query string and query string cases are both looking correct | 21:45 |
| clarkb | and as expected the app.ini config file for gitea was not updated and the containers were not restarted. Only apache2 config was updated and I think we only reloaded it too | 21:46 |
| clarkb | I've only checked http responses for 09 so far as I switched to monitoring the server side rollout once I was happy with 09's headers. 09-12 lgtm and all their apache2 processes are new. 13 and 14 are still rolling through the old processes | 21:49 |
| clarkb | the job just finished successfully. Once apache2 looks good on 13 and 14 I'll check headers directly via socks proxy on the other backends | 21:49 |
| clarkb | ok apache processes all look up to date | 21:50 |
| clarkb | directly checking each backend lgtm. I can also git clone through the load balancer without problems (I just wanted to sanity check git isn't affected in an unexpected way) | 21:53 |
| clarkb | so I think this is good. If anyone else has a moment to check I don't think you need to look at each backend directly and can just hit the load balancer frontend and check headers with browser tools or curl or whatever | 21:54 |
| clarkb | and if we find anything that doesn't look right reverting should be quick, safe, and easy | 21:55 |
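(For anyone checking via the load balancer, something like the following should show the new header; the expected value is the one quoted earlier, and the clone is just a sanity check like the one described above.)

```
# Quick check against the frontend; the expected value is the one quoted above.
curl -sI https://opendev.org/opendev/system-config/ | grep -i '^link:'
# link: <https://opendev.org/opendev/system-config/>; rel="canonical"
# and a clone through the load balancer as a sanity check:
git clone https://opendev.org/opendev/system-config
```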
| clarkb | the meeting agenda is looking fairly light this week which is not a bad thing. I think we've been managing to clear out some of the backlog queue over the last couple of weeks. Thank you to everyone who has helped make that happen | 21:56 |
| clarkb | But also if there is something you would like to see on the agenda or think I removed something too soon feel free to let me know. I'll be sending that out later today and happy to get updates in now | 21:56 |
| fungi | okay, back and the apache responses lgtm | 23:40 |
| tonyb | Yup gitea looks good to me | 23:47 |
| clarkb | last call on meeting agenda updates. Otherwise I'm sending it out in a few minutes | 23:56 |