| Sender | Message | Time |
|---|---|---|
| @harbott.osism.tech:regio.chat | lgtm | 07:10 |
| @harbott.osism.tech:regio.chat | fyi I'm still seeing this, can anyone double check? not sure if that is a regression in zuul or some issue with our deployment? | 07:53 |
| @tafkamax:matrix.org | If it's a UI problem, have you checked the console logs? maybe there is something there 😅 | 08:38 |
| @tafkamax:matrix.org | I always start with that if I feel the UI is not as it should be. | 08:38 |
| @tafkamax:matrix.org | Could be easier to identify ze problem. | 08:41 |
| @harbott.osism.tech:regio.chat | infra-root: just ftr I force-merged https://review.opendev.org/c/openstack/tempest/+/977922 to unblock nova | 09:03 |
| -@gerrit:opendev.org- | Zuul merged on behalf of Takashi Kajinami: [openstack/diskimage-builder] 974342: Declare Python 3.11/12 support https://review.opendev.org/c/openstack/diskimage-builder/+/974342 | 13:36 |
| -@gerrit:opendev.org- | Zuul merged on behalf of Takashi Kajinami: [openstack/diskimage-builder] 974343: Bump minimum python version to 3.8 https://review.opendev.org/c/openstack/diskimage-builder/+/974343 | 14:02 |
| @fungicide:matrix.org | #status log Restarted Mailman services on the lists server in order to free up some memory and restore the system to a more responsive state | 14:28 |
| @status:opendev.org | @fungicide:matrix.org: finished logging | 14:28 |
| @harbott.osism.tech:regio.chat | heads up: yet another new tox version released, depends on latest virtualenv. will likely break openstack fully once new images built to include it become active | 14:31 |
| @tkajinam:matrix.org | The problem is not solely caused by the new tox; it's caused by the combination of an old virtualenv cap in upper-constraints which is not compatible with the new tox | 14:33 |
| @fungicide:matrix.org | in the past we deemed pinning back to old virtualenv versions suboptimal for this sort of reason | 14:42 |
| @tkajinam:matrix.org | yeah we pinned tox to <4 to old stable branches | 14:42 |
| @tkajinam:matrix.org | though that required updating a bunch of jobs with the `ensure_tox_version` var IIRC | 14:43 |
| @fungicide:matrix.org | or pinning any of the packaging and environment toolchain really, though part of that was because easy_install for setup_requires ignored it completely | 14:43 |
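The failure mode described above can be sketched in a few lines: a pinned (capped) version in upper-constraints fails to satisfy the minimum that a newly released tool requires. The version numbers below are purely illustrative, not the actual caps involved.

```python
# Minimal sketch of the constraint conflict described above. The
# version numbers are hypothetical examples, not the real caps.
def parse_version(v: str) -> tuple:
    """Parse a dotted version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def cap_satisfies_floor(capped: str, floor: str) -> bool:
    """True if the constrained (capped) version meets a tool's new minimum."""
    return parse_version(capped) >= parse_version(floor)

# hypothetical: constraints pin virtualenv to 20.21.1, new tox wants >=20.23.0
print(cap_satisfies_floor("20.21.1", "20.23.0"))  # → False: the pin breaks new tox
```

This is why pinning the environment toolchain is deemed suboptimal: the pin silently becomes incompatible the moment an unpinned tool above it raises its floor.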
| -@gerrit:opendev.org- | Zuul merged on behalf of Clark Boylan: [opendev/system-config] 977897: Cap django-health-check for ara installations https://review.opendev.org/c/opendev/system-config/+/977897 | 14:44 |
| @clarkb:matrix.org | fungi: ^ I guess we can try to land the set of changes that failed yesterday whenever we're ready? | 16:16 |
| @fungicide:matrix.org | yep, just waiting until others were around, i'll recheck the gitea one now | 16:25 |
| @clarkb:matrix.org | thanks | 16:25 |
| @fungicide:matrix.org | i also rechecked the lists.o.o waf addition | 17:49 |
| @fungicide:matrix.org | today looks like a bad day schedule-wise to try the limnoria trixie change again | 17:49 |
| @fungicide:matrix.org | lots of teams with meetings continuing into late-utc hours on wednesdays | 17:50 |
| @fungicide:matrix.org | if we could time it to happen between 19:00 and 21:00 utc it would probably not disrupt anyone (looks like the openstack requirements team hasn't been using their 20:30 slot for the past 5 years), but that's still cutting it tight | 17:52 |
| @fungicide:matrix.org | tomorrow we could land it any time after 16:00 utc, looks like (there are two meetings listed for the 16:00 block but one hasn't been used for 4 years and the other is for a group that was dissolved years ago) | 17:56 |
| @clarkb:matrix.org | that seems fine to do it tomorrow | 17:58 |
| -@gerrit:opendev.org- | Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed: [opendev/irc-meetings] 978008: Drop the Interop WG meeting https://review.opendev.org/c/opendev/irc-meetings/+/978008 | 17:59 |
| -@gerrit:opendev.org- | Zuul merged on behalf of Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org: [opendev/irc-meetings] 978008: Drop the Interop WG meeting https://review.opendev.org/c/opendev/irc-meetings/+/978008 | 18:15 |
| @fungicide:matrix.org | okay, both system-config changes are in the gate now | 18:27 |
| @fungicide:matrix.org | ooh! i think our waf trap on docs.opendev.org caught and blocked some new crawlers earlier today! | 18:41 |
| @fungicide:matrix.org | two addresses within the same /24 identified themselves as `Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)` and they have reverse dns to records in the crawl.baidu.com domain, so it seems like baiduspider ignores our disallow in https://docs.opendev.org/robots.txt | 18:43 |
| @fungicide:matrix.org | two more from addresses in wholly different /8 ranges (no reverse dns but whois places both of them in chinanet allocations) declared they were `Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36` | 18:46 |
| @fungicide:matrix.org | so three different crawlers in total that we've seen touching the tripwire, not a lot but maybe it's accelerating | 18:48 |
| @clarkb:matrix.org | I'm half tempted to throw the baidu spider in our default block list with this evidence they ignore our robots.txt | 18:52 |
| @fungicide:matrix.org | even more damning, those four addresses all hit the tripwire within a 3-second timespan, which i think implies that baidu is not only refusing to obey it, they're also using drones in other networks pretending to be browsers | 18:53 |
| @fungicide:matrix.org | so maybe we didn't block two different crawlers, we just blocked one | 18:54 |
| @clarkb:matrix.org | interesting | 18:54 |
| @fungicide:matrix.org | the requests claiming to be baiduspider came in at 03:52:44 and 03:52:45, then the requests claiming to be chrome 48 arrived at 03:52:46 | 19:01 |
| @fungicide:matrix.org | and just going by the user agent strings alone i'd have assumed the baiduspider ones were spoofed, but they had matching forward and reverse dns in crawl.baidu.com which means at best someone's hacked baidu's domain hosting or servers | 19:03 |
| @fungicide:matrix.org | i'm reading that baiduspider is supposed to obey robots.txt, but also that they may cache an old robots.txt for a while and not refresh it often, so it's possible they simply didn't notice we added one to that domain recently | 19:25 |
| @clarkb:matrix.org | got it. So probably don't change anything for a bit in case it is a stale cache | 19:26 |
| @fungicide:matrix.org | yeah, but also those other two random ip addresses requesting the same exact url one second after baiduspider while using spoofed browser ua strings seems like a nearly impossible coincidence | 19:28 |
| @clarkb:matrix.org | yes definitely feels fishy | 19:29 |
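The verification described above — matching forward and reverse DNS in crawl.baidu.com — is the standard forward-confirmed reverse DNS (FCrDNS) check major crawlers document for distinguishing genuine bots from spoofed user agents. A rough sketch (the helper names are ours, and the second function makes live DNS queries):

```python
import socket

def name_in_domain(host: str, domain: str) -> bool:
    """Pure helper: hostname (optionally with a trailing dot) lies in domain."""
    host = host.rstrip(".").lower()
    return host == domain or host.endswith("." + domain)

def crawler_is_genuine(ip: str, domain: str = "crawl.baidu.com") -> bool:
    """Forward-confirmed reverse DNS: the PTR record must be in the
    crawler's domain AND that name must resolve back to the same IP,
    so a spoofed PTR alone is not enough. Performs live DNS lookups."""
    try:
        ptr = socket.gethostbyaddr(ip)[0]
    except OSError:
        return False
    if not name_in_domain(ptr, domain):
        return False
    try:
        forward_ips = {info[4][0] for info in socket.getaddrinfo(ptr, None)}
    except OSError:
        return False
    return ip in forward_ips
```

Note the suffix check guards against tricks like `crawl.baidu.com.attacker.example`, which ends with the domain string but is not actually within it.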
| -@gerrit:opendev.org- | Zuul merged on behalf of Clark Boylan: [opendev/system-config] 975320: Build the gitea images on debian trixie https://review.opendev.org/c/opendev/system-config/+/975320 | 19:31 |
| -@gerrit:opendev.org- | Zuul merged on behalf of Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org: [opendev/system-config] 974933: Add modsecurity waf rules to lists.opendev.org https://review.opendev.org/c/opendev/system-config/+/974933 | 19:31 |
| @fungicide:matrix.org | those are in deploy, looks like we're probably 5 minutes out from infra-prod-service-gitea starting | 19:32 |
| @fungicide:matrix.org | i take that back, it's starting now | 19:33 |
| @clarkb:matrix.org | my current load balancer destination is gitea10 | 19:34 |
| @clarkb:matrix.org | so I may not set up the socks proxy and just check 10 once it updates | 19:34 |
| @fungicide:matrix.org | the container is restarting on gitea09 already | 19:34 |
| @fungicide:matrix.org | er, not yet, pulling | 19:34 |
| @fungicide:matrix.org | and now restarting | 19:34 |
| @fungicide:matrix.org | it's on to gitea10, so you'll presumably get rebalanced to another backend shortly | 19:35 |
| @clarkb:matrix.org | it looks like I'm still balanced to 10 and it seems to work | 19:37 |
| @clarkb:matrix.org | once it is done we'll want to check that replication is happening successfully too | 19:38 |
| @fungicide:matrix.org | yeah, it's pulling on 11 now | 19:38 |
| @fungicide:matrix.org | so if 10 lgty this was probably successful | 19:39 |
| @clarkb:matrix.org | visually it looks fine. I haven't tested git stuff or looked for any replicated content yet | 19:42 |
| @fungicide:matrix.org | containers on gitea14 have restarted and the job is wrapping up | 19:43 |
| @clarkb:matrix.org | I think it is only through 13? | 19:44 |
| @clarkb:matrix.org | but 14 should be done shortly | 19:44 |
| @fungicide:matrix.org | oh, you're right, that was 13 i saw restart, thought it was already on to 14 at that point | 19:44 |
| @clarkb:matrix.org | and now 14 is done | 19:45 |
| @fungicide:matrix.org | okay, **now** all of them are done | 19:45 |
| @clarkb:matrix.org | now we just need someone to push a new change or patchset | 19:46 |
| @fungicide:matrix.org | and the lists.o.o deploy will be starting in a few | 19:46 |
| @clarkb:matrix.org | git clone worked for me | 19:46 |
| @fungicide:matrix.org | and for me | 19:46 |
| @clarkb:matrix.org | https://zuul.opendev.org/t/openstack/buildset/4db4997bfbf3483e933c5fd2a8abb9b3 is the successful deploy buildset | 19:47 |
| @fungicide:matrix.org | infra-prod-service-lists3 is running | 19:48 |
| @clarkb:matrix.org | still no new patchsets in the gerrit open queue | 19:51 |
| @clarkb:matrix.org | https://opendev.org/openstack/python-watcherclient/commits/branch/master shows the merged change from https://review.opendev.org/c/openstack/python-watcherclient/+/976000 so I think replication is working | 19:56 |
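A replication spot-check like the one above can also be scripted by comparing the refs the source advertises against what the mirror serves. A small sketch working over captured `git ls-remote` output (actually fetching that output, e.g. via subprocess, is left out):

```python
# Sketch: confirm Gerrit-to-Gitea replication by diffing ref listings.
# Input is plain `git ls-remote` style text: "<sha>\t<refname>" per line.
def parse_ls_remote(output: str) -> dict:
    """Map ref name -> sha from ls-remote style output."""
    refs = {}
    for line in output.strip().splitlines():
        sha, ref = line.split()
        refs[ref] = sha
    return refs

def missing_or_stale(source_out: str, mirror_out: str) -> dict:
    """Refs on the source whose sha is absent or different on the mirror."""
    src = parse_ls_remote(source_out)
    dst = parse_ls_remote(mirror_out)
    return {ref: sha for ref, sha in src.items() if dst.get(ref) != sha}
```

An empty result means every source ref is present on the mirror with a matching sha, i.e. replication is caught up for that repository.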
| @clarkb:matrix.org | https://zuul.opendev.org/t/openstack/buildset/3ae9f5ab59d545e68b98bd2d452f2f2d shows successful deployment to lists | 19:57 |
| @clarkb:matrix.org | and I can still see lists.opendev.org archives | 19:57 |
| @fungicide:matrix.org | it looks like we may not automatically pull new images in infra-prod-service-lists3 | 19:57 |
| @fungicide:matrix.org | `docker image list` is returning an opendevorg/mailman-core from "16 months ago" | 19:58 |
| @fungicide:matrix.org | i'm going to start a root screen session on lists01 and pull/restart | 19:58 |
| @clarkb:matrix.org | fungi: hold on | 19:58 |
| @fungicide:matrix.org | holding | 19:58 |
| @clarkb:matrix.org | I want to check one thing first | 19:59 |
| @fungicide:matrix.org | sure | 19:59 |
| @clarkb:matrix.org | fungi: I wanted to check that we didn't switch to a quay.io hosted image and have a newer quay.io/opendevorg/mailman-core/web, but we don't because we haven't | 19:59 |
| @clarkb:matrix.org | you'll see on some servers that we have an old docker hub hosted image and have transitioned to the quay.io hosted image, so they both show up in the image list. But lists is jammy so we haven't switched the container image location to quay yet, and your analysis is good. However, we also didn't run an image promotion job in that deploy buildset | 20:00 |
| @clarkb:matrix.org | did we build a new image? | 20:00 |
| @clarkb:matrix.org | fungi: I don't think we expected a new image this appears to have only affected apache config on the host | 20:01 |
| @fungicide:matrix.org | oh, you're right, that change was only adding apache configuration | 20:01 |
| @fungicide:matrix.org | i was confusing it with the trixie image upgrades | 20:01 |
| @fungicide:matrix.org | which this wasn't | 20:01 |
| @clarkb:matrix.org | https://lists.opendev.org/robots.txt has the expected content | 20:02 |
| @clarkb:matrix.org | and presumably the waf rules are also in place now. So I don't think we need to pull and restart | 20:02 |
| @fungicide:matrix.org | as does /etc/apache2/sites-enabled/50-lists.opendev.org.conf | 20:02 |
| @fungicide:matrix.org | process timestamps on the apache workers also indicate they restarted | 20:02 |
| @fungicide:matrix.org | i'll test the tripwire | 20:03 |
| @fungicide:matrix.org | works | 20:05 |
| @fungicide:matrix.org | i'll note it also works cross-domain, from a test location i hit the tripwire and blocked my access on one domain, then tried to request a url from another domain on the same server and got back the expected http/403 | 20:06 |
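A ModSecurity rule set along these lines would produce the shared-blocklist behavior just described (the rule ids, trap path, and expiry below are hypothetical illustrations, not the actual opendev system-config rules):

```apache
# Hypothetical sketch, not the deployed rules. The persistent "ip"
# collection is keyed on the client address and stored server-wide,
# which is why a block triggered under one vhost can apply to every
# vhost that loads the same rules.
SecAction "id:900100,phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR}"

# The trap URL is disallowed in robots.txt, so only clients ignoring
# robots.txt ever request it; doing so flags the IP for 24 hours.
SecRule REQUEST_URI "@beginsWith /trap-path/" \
    "id:900101,phase:1,deny,status:403,setvar:ip.blocked=1,expirevar:ip.blocked=86400"

# Every subsequent request from a flagged IP is refused.
SecRule IP:blocked "@eq 1" "id:900102,phase:1,deny,status:403"
```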
| @fungicide:matrix.org | in a few days i'll work up the mod_proxy equivalent of the mod_sed change we installed on docs.opendev.org | 20:07 |
| @clarkb:matrix.org | I think that implies the ip table is shared amongst vhosts | 20:07 |
| @clarkb:matrix.org | good to know | 20:07 |
| @fungicide:matrix.org | note that all the domains on lists01 share a common vhost with a list of serveraliases; content is differentiated by domain at the application level | 20:08 |
| @clarkb:matrix.org | oh that may explain it instead | 20:08 |
| @clarkb:matrix.org | we use a vhost template so I assumed it was multiple vhosts, but I guess not | 20:08 |
| @fungicide:matrix.org | compare to static02 where docs.opendev.org and docs.openstack.org use separate apache vhost definitions, the block on one isn't transitive to the other | 20:10 |
| @fungicide:matrix.org | (i just tested that one again to confirm) | 20:10 |
| @clarkb:matrix.org | https://github.com/yeongbin05/django-xbench may be useful for profiling mailman | 20:11 |
| @fungicide:matrix.org | though maybe if we added the same secrule configuration to docs.openstack.org it might share the same table as docs.opendev.org, so we should test again if we expand it to more domains on that server | 20:12 |
| @clarkb:matrix.org | ++ | 20:14 |
| @fungicide:matrix.org | in fact, when making that change, we can add the test for it ahead of time | 20:14 |
| @clarkb:matrix.org | Re django-xbench I half wonder if we can find some cheap performance wins in the codebase if we go looking | 20:15 |
| @clarkb:matrix.org | I've managed to check replication on another change and it looks good. I'll pop out for a late lunch shortly since I think both of these deployments are operating as expected | 20:21 |
| @fungicide:matrix.org | cool, enjoy! | 20:30 |
| @fungicide:matrix.org | i'll probably do the same in about an hour for an early dinner | 20:30 |
| @fungicide:matrix.org | looking at what vhosts on static02 currently serve an existing robots.txt and aren't merely a redirect to some other site/service with one, that's: developer.openstack.org docs.opendev.org docs.openstack.org | 20:41 |
| @fungicide:matrix.org | one of which we've already covered | 20:41 |
| @fungicide:matrix.org | so we'll need separate changes for the repositories hosting developer.openstack.org and docs.openstack.org robots.txt files if we want to add disallow entries for those | 20:42 |
| @fungicide:matrix.org | for the rest we can probably just add an alias entry to the generic robots.txt on the server, like docs.opendev.org is already using | 20:44 |
| @fungicide:matrix.org | and some of those can also be skipped because they're just deep-link redirects into another site | 20:46 |
| @fungicide:matrix.org | the remainder that we could add the generic robots.txt to: ask.openstack.org docs.airshipit.org docs.starlingx.io gating.dev governance.openstack.org meetings.opendev.org releases.openstack.org security.openstack.org service-types.openstack.org specs.openstack.org static.opendev.org tarballs.opendev.org | 20:48 |
| @fungicide:matrix.org | i'll try to work up the robots.txt alias addition for those dozen vhosts and separate changes for developer.openstack.org and docs.openstack.org robots.txt entries | 20:49 |
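The alias approach described above amounts to a one-line addition per vhost pointing `/robots.txt` at a shared file on disk. A hypothetical sketch (the file path and vhost shown are illustrative, not the actual system-config layout):

```apache
# Hypothetical sketch: serve one generic robots.txt to many vhosts by
# aliasing the same on-disk file in each. Path is illustrative only.
<VirtualHost *:443>
    ServerName governance.openstack.org
    Alias /robots.txt /var/www/generic/robots.txt
    # ... rest of the existing vhost configuration ...
</VirtualHost>
```

Sites like developer.openstack.org and docs.openstack.org, whose robots.txt files are published from their own source repositories, need the disallow entries added there instead.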
| @fungicide:matrix.org | okay, stepping out for a bite and a quick errand, back shortly | 20:59 |
Generated by irclog2html.py 4.0.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!