Wednesday, 2026-02-25

07:10 <@harbott.osism.tech:regio.chat> lgtm
07:53 <@harbott.osism.tech:regio.chat> fyi I'm still seeing this, can anyone double check? not sure if that is a regression in zuul or some issue with our deployment?
08:38 <@tafkamax:matrix.org> If it's a UI problem, have you checked the console logs? maybe there is something there 😅
08:38 <@tafkamax:matrix.org> I always start with that if I feel the UI is not as it should be.
08:41 <@tafkamax:matrix.org> Could make it easier to identify the problem.
09:03 <@harbott.osism.tech:regio.chat> infra-root: just ftr I force-merged https://review.opendev.org/c/openstack/tempest/+/977922 to unblock nova
13:36 -@gerrit:opendev.org- Zuul merged on behalf of Takashi Kajinami: [openstack/diskimage-builder] 974342: Declare Python 3.11/12 support https://review.opendev.org/c/openstack/diskimage-builder/+/974342
14:02 -@gerrit:opendev.org- Zuul merged on behalf of Takashi Kajinami: [openstack/diskimage-builder] 974343: Bump minimum python version to 3.8 https://review.opendev.org/c/openstack/diskimage-builder/+/974343
14:28 <@fungicide:matrix.org> #status log Restarted Mailman services on the lists server in order to free up some memory and restore the system to a more responsive state
14:28 <@status:opendev.org> @fungicide:matrix.org: finished logging
14:31 <@harbott.osism.tech:regio.chat> heads up: yet another new tox version released, depends on latest virtualenv. will likely break openstack fully once new images built to include it become active
14:33 <@tkajinam:matrix.org> The problem is not solely caused by the new tox; it's caused by the combination of the old virtualenv cap in upper-constraints, which is not compatible with the new tox
14:42 <@fungicide:matrix.org> in the past we deemed pinning back to old virtualenv versions suboptimal for this sort of reason
14:42 <@tkajinam:matrix.org> yeah, we pinned tox to <4 on old stable branches
14:43 <@tkajinam:matrix.org> though that required updating a bunch of jobs with the `ensure_tox_version` var IIRC
14:43 <@fungicide:matrix.org> or pinning any of the packaging and environment toolchain really, though part of that was because easy_install for setup_requires ignored it completely
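The conflict being described, a new tox that needs a virtualenv newer than the upper-constraints cap allows, boils down to a simple version comparison. A stdlib-only sketch; the version numbers below are illustrative assumptions, not the actual releases involved:

```python
# Stdlib-only sketch of the tox/virtualenv conflict discussed above.
# Both version strings are illustrative assumptions, not the actual
# releases involved.

def parse(version):
    """Turn a dotted version string like '20.21.0' into (20, 21, 0)."""
    return tuple(int(part) for part in version.split("."))

pinned_virtualenv = "20.21.0"  # assumed cap in upper-constraints
tox_minimum = "20.26.0"        # assumed minimum virtualenv the new tox needs

compatible = parse(pinned_virtualenv) >= parse(tox_minimum)
print(compatible)  # False: the cap blocks installing a virtualenv new enough
```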
14:44 -@gerrit:opendev.org- Zuul merged on behalf of Clark Boylan: [opendev/system-config] 977897: Cap django-health-check for ara installations https://review.opendev.org/c/opendev/system-config/+/977897
16:16 <@clarkb:matrix.org> fungi: ^ I guess we can try to land the set of changes that failed yesterday whenever we're ready?
16:25 <@fungicide:matrix.org> yep, just waiting until others were around, i'll recheck the gitea one now
16:25 <@clarkb:matrix.org> thanks
17:49 <@fungicide:matrix.org> i also rechecked the lists.o.o waf addition
17:49 <@fungicide:matrix.org> today looks like a bad day schedule-wise to try the limnoria trixie change again
17:50 <@fungicide:matrix.org> lots of teams with meetings continuing into late-utc hours on wednesdays
17:52 <@fungicide:matrix.org> if we could time it to happen between 19:00 and 21:00 utc it would probably not disrupt anyone (looks like the openstack requirements team hasn't been using their 20:30 slot for the past 5 years), but that's still cutting it tight
17:56 <@fungicide:matrix.org> tomorrow we could land it any time after 16:00 utc, looks like (there are two meetings listed for the 16:00 block, but one hasn't been used for 4 years and the other is for a group that was dissolved years ago)
17:58 <@clarkb:matrix.org> that seems fine, let's do it tomorrow
17:59 -@gerrit:opendev.org- Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org proposed: [opendev/irc-meetings] 978008: Drop the Interop WG meeting https://review.opendev.org/c/opendev/irc-meetings/+/978008
18:15 -@gerrit:opendev.org- Zuul merged on behalf of Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org: [opendev/irc-meetings] 978008: Drop the Interop WG meeting https://review.opendev.org/c/opendev/irc-meetings/+/978008
18:27 <@fungicide:matrix.org> okay, both system-config changes are in the gate now
18:41 <@fungicide:matrix.org> ooh! i think our waf trap on docs.opendev.org caught and blocked some new crawlers earlier today!
18:43 <@fungicide:matrix.org> two addresses within the same /24 identified themselves as `Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)` and they have reverse dns to records in the crawl.baidu.com domain, so it seems like baiduspider ignores our disallow in https://docs.opendev.org/robots.txt
18:46 <@fungicide:matrix.org> two more from addresses in wholly different /8 ranges (no reverse dns but whois places both of them in chinanet allocations) declared they were `Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36`
18:48 <@fungicide:matrix.org> so three different crawlers in total that we've seen touching the tripwire, not a lot but maybe it's accelerating
18:52 <@clarkb:matrix.org> I'm half tempted to throw the baidu spider in our default block list with this evidence that they ignore our robots.txt
18:53 <@fungicide:matrix.org> even more damning, those four addresses all hit the tripwire within a 3-second timespan, which i think implies that baidu is not only refusing to obey it, they're also using drones in other networks pretending to be browsers
18:54 <@fungicide:matrix.org> so maybe we didn't block two different crawlers, we just blocked one
18:54 <@clarkb:matrix.org> interesting
19:01 <@fungicide:matrix.org> the requests claiming to be baiduspider came in at 03:52:44 and 03:52:45, then the requests claiming to be chrome 48 arrived at 03:52:46
19:03 <@fungicide:matrix.org> and just going by the user agent strings alone i'd have assumed the baiduspider ones were spoofed, but they had matching forward and reverse dns in crawl.baidu.com, which means at best someone's hacked baidu's domain hosting or servers
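The forward-confirmed reverse DNS check described above can be sketched with the stdlib: resolve the client IP to its PTR hostname, check the hostname's domain, then resolve the hostname forward and confirm the original IP is among the results. The IP and domain suffix in the example are placeholders, not the actual addresses from the logs:

```python
# Sketch of a forward-confirmed reverse DNS (FCrDNS) check.
import socket

def fcrdns_matches(ip, expected_suffix):
    """True only if ip's PTR hostname ends with expected_suffix and that
    hostname resolves forward to a set of addresses containing ip."""
    try:
        host, _aliases, _addrs = socket.gethostbyaddr(ip)  # reverse (PTR) lookup
        forward_ips = {info[4][0] for info in socket.getaddrinfo(host, None)}
    except (socket.herror, socket.gaierror):
        return False  # no PTR record, or the forward lookup failed
    return host.endswith(expected_suffix) and ip in forward_ips

# 192.0.2.1 is a TEST-NET placeholder address with no PTR record,
# so this check fails for it.
print(fcrdns_matches("192.0.2.1", ".crawl.baidu.com"))
```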
19:25 <@fungicide:matrix.org> i'm reading that baiduspider is supposed to obey robots.txt, but also that they may cache an old robots.txt for a while and not refresh it often, so it's possible they simply didn't notice we added one to that domain recently
19:26 <@clarkb:matrix.org> got it. So probably don't change anything for a bit in case it is a stale cache
19:28 <@fungicide:matrix.org> yeah, but also those other two random ip addresses requesting the exact same url one second after baiduspider while using spoofed browser ua strings seems like a nearly impossible coincidence
19:29 <@clarkb:matrix.org> yes, definitely feels fishy
19:31 -@gerrit:opendev.org- Zuul merged on behalf of Clark Boylan: [opendev/system-config] 975320: Build the gitea images on debian trixie https://review.opendev.org/c/opendev/system-config/+/975320
19:31 -@gerrit:opendev.org- Zuul merged on behalf of Jeremy Stanley https://matrix.to/#/@fungicide:matrix.org: [opendev/system-config] 974933: Add modsecurity waf rules to lists.opendev.org https://review.opendev.org/c/opendev/system-config/+/974933
19:32 <@fungicide:matrix.org> those are in deploy, looks like we're probably 5 minutes out from infra-prod-service-gitea starting
19:33 <@fungicide:matrix.org> i take that back, it's starting now
19:34 <@clarkb:matrix.org> my current load balancer destination is gitea10
19:34 <@clarkb:matrix.org> so I may not set up the socks proxy and just check 10 once it updates
19:34 <@fungicide:matrix.org> the container is restarting on gitea09 already
19:34 <@fungicide:matrix.org> er, not yet, pulling
19:34 <@fungicide:matrix.org> and now restarting
19:35 <@fungicide:matrix.org> it's on to gitea10, so you'll presumably get rebalanced to another backend shortly
19:37 <@clarkb:matrix.org> it looks like I'm still balanced to 10 and it seems to work
19:38 <@clarkb:matrix.org> once it is done we'll want to check that replication is happening successfully too
19:38 <@fungicide:matrix.org> yeah, it's pulling on 11 now
19:39 <@fungicide:matrix.org> so if 10 lgty this was probably successful
19:42 <@clarkb:matrix.org> visually it looks fine. I haven't tested git stuff or looked for any replicated content yet
19:43 <@fungicide:matrix.org> containers on gitea14 have restarted and the job is wrapping up
19:44 <@clarkb:matrix.org> I think it is only through 13?
19:44 <@clarkb:matrix.org> but 14 should be done shortly
19:44 <@fungicide:matrix.org> oh, you're right, that was 13 i saw restart, thought it was already on to 14 at that point
19:45 <@clarkb:matrix.org> and now 14 is done
19:45 <@fungicide:matrix.org> okay, **now** all of them are done
19:46 <@clarkb:matrix.org> now we just need someone to push a new change or patchset
19:46 <@fungicide:matrix.org> and the lists.o.o deploy will be starting in a few
19:46 <@clarkb:matrix.org> git clone worked for me
19:46 <@fungicide:matrix.org> and for me
19:47 <@clarkb:matrix.org> https://zuul.opendev.org/t/openstack/buildset/4db4997bfbf3483e933c5fd2a8abb9b3 is the successful deploy buildset
19:48 <@fungicide:matrix.org> infra-prod-service-lists3 is running
19:51 <@clarkb:matrix.org> still no new patchsets in the gerrit open queue
19:56 <@clarkb:matrix.org> https://opendev.org/openstack/python-watcherclient/commits/branch/master shows the merged change from https://review.opendev.org/c/openstack/python-watcherclient/+/976000 so I think replication is working
19:57 <@clarkb:matrix.org> https://zuul.opendev.org/t/openstack/buildset/3ae9f5ab59d545e68b98bd2d452f2f2d shows successful deployment to lists
19:57 <@clarkb:matrix.org> and I can still see lists.opendev.org archives
19:57 <@fungicide:matrix.org> it looks like we may not automatically pull new images in infra-prod-service-lists3
19:58 <@fungicide:matrix.org> `docker image list` is returning an opendevorg/mailman-core from "16 months ago"
19:58 <@fungicide:matrix.org> i'm going to start a root screen session on lists01 and pull/restart
19:58 <@clarkb:matrix.org> fungi: hold on
19:58 <@fungicide:matrix.org> holding
19:59 <@clarkb:matrix.org> I want to check one thing first
19:59 <@fungicide:matrix.org> sure
19:59 <@clarkb:matrix.org> fungi: I wanted to check that we didn't switch to a quay.io hosted image and have a newer quay.io/opendevorg/mailman-core/web, but we don't because we haven't
20:00 <@clarkb:matrix.org> you'll see on some servers that we have an old docker hub hosted image and we've transitioned to the quay.io hosted image, and they both show up in the image list. But lists is jammy so we haven't switched the container image location to quay yet, and your analysis is good. However, we also didn't run an image promotion job in that deploy buildset
20:00 <@clarkb:matrix.org> did we build a new image?
20:01 <@clarkb:matrix.org> fungi: I don't think we expected a new image; this appears to have only affected the apache config on the host
20:01 <@fungicide:matrix.org> oh, you're right, that change was only adding apache configuration
20:01 <@fungicide:matrix.org> i was confusing it with the trixie image upgrades
20:01 <@fungicide:matrix.org> which this wasn't
20:02 <@clarkb:matrix.org> https://lists.opendev.org/robots.txt has the expected content
20:02 <@clarkb:matrix.org> and presumably the waf rules are also in place now. So I don't think we need to pull and restart
20:02 <@fungicide:matrix.org> as does /etc/apache2/sites-enabled/50-lists.opendev.org.conf
20:02 <@fungicide:matrix.org> process timestamps on the apache workers also indicate they restarted
20:03 <@fungicide:matrix.org> i'll test the tripwire
20:05 <@fungicide:matrix.org> works
20:06 <@fungicide:matrix.org> i'll note it also works cross-domain; from a test location i hit the tripwire and blocked my access on one domain, then tried to request a url from another domain on the same server and got back the expected http/403
20:07 <@fungicide:matrix.org> in a few days i'll work up the mod_proxy equivalent of the mod_sed change we installed on docs.opendev.org
20:07 <@clarkb:matrix.org> I think that implies the ip table is shared amongst vhosts
20:07 <@clarkb:matrix.org> good to know
20:08 <@fungicide:matrix.org> note that all the domains on lists01 share a common vhost with a list of serveraliases; content is differentiated by domain at the application level
20:08 <@clarkb:matrix.org> oh, that may explain it instead
20:08 <@clarkb:matrix.org> we use a vhost template so I assumed it was multiple vhosts, but I guess not
20:10 <@fungicide:matrix.org> compare to static02, where docs.opendev.org and docs.openstack.org use separate apache vhost definitions; the block on one isn't transitive to the other
20:10 <@fungicide:matrix.org> (i just tested that one again to confirm)
20:11 <@clarkb:matrix.org> https://github.com/yeongbin05/django-xbench may be useful for profiling mailman
20:12 <@fungicide:matrix.org> though maybe if we added the same secrule configuration to docs.openstack.org it might share the same table as docs.opendev.org, so we should test again if we expand it to more domains on that server
20:14 <@clarkb:matrix.org> ++
20:14 <@fungicide:matrix.org> in fact, when making that change, we can add the test for it ahead of time
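The shared-table behavior being discussed would come from a mod_security setup along these lines. This is a hypothetical sketch (rule ids, trap URL, and expiry are invented for illustration, not the deployed system-config rules); because the persistent `ip` collection is keyed only on the client address, any vhost loading the same rules consults the same table:

```apache
# Hypothetical tripwire sketch (not the actual deployed rules).
# Initialize a persistent per-client-IP collection on every request.
SecAction "id:1000,phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR}"
# Any request for the trap URL marks the client IP as blocked for a day.
SecRule REQUEST_URI "@streq /trap-url" \
    "id:1001,phase:1,deny,status:403,setvar:ip.blocked=1,expirevar:ip.blocked=86400"
# All later requests from a marked IP are denied, regardless of vhost.
SecRule IP:blocked "@eq 1" "id:1002,phase:1,deny,status:403"
```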
20:15 <@clarkb:matrix.org> Re django-xbench, I half wonder if we can find some cheap performance wins in the codebase if we go looking
20:21 <@clarkb:matrix.org> I've managed to check replication on another change and it looks good. I'll pop out for a late lunch shortly since I think both of these deployments are operating as expected
20:30 <@fungicide:matrix.org> cool, enjoy!
20:30 <@fungicide:matrix.org> i'll probably do the same in about an hour for an early dinner
20:41 <@fungicide:matrix.org> looking at what vhosts on static02 currently serve an existing robots.txt and aren't merely a redirect to some other site/service with one, that's: developer.openstack.org docs.opendev.org docs.openstack.org
20:41 <@fungicide:matrix.org> one of which we've already covered
20:42 <@fungicide:matrix.org> so we'll need separate changes for the repositories hosting the developer.openstack.org and docs.openstack.org robots.txt files if we want to add disallow entries for those
20:44 <@fungicide:matrix.org> for the rest we can probably just add an alias entry to the generic robots.txt on the server, like docs.opendev.org is already using
20:46 <@fungicide:matrix.org> and some of those can also be skipped because they're just deep-link redirects into another site
20:48 <@fungicide:matrix.org> the remainder that we could add the generic robots.txt to: ask.openstack.org docs.airshipit.org docs.starlingx.io gating.dev governance.openstack.org meetings.opendev.org releases.openstack.org security.openstack.org service-types.openstack.org specs.openstack.org static.opendev.org tarballs.opendev.org
20:49 <@fungicide:matrix.org> i'll try to work up the robots.txt alias addition for those dozen vhosts and separate changes for developer.openstack.org and docs.openstack.org robots.txt entries
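For the vhosts getting the generic file, the per-vhost alias entry would presumably look something like this. A hypothetical sketch; the filesystem path and the vhost shown are placeholders, not the real system-config layout:

```apache
# Hypothetical sketch: serve one shared robots.txt from several vhosts.
<VirtualHost *:443>
    ServerName releases.openstack.org
    # Map /robots.txt to the generic server-wide file.
    Alias /robots.txt /var/www/generic/robots.txt
</VirtualHost>
```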
20:59 <@fungicide:matrix.org> okay, stepping out for a bite and a quick errand, back shortly

Generated by irclog2html.py 4.0.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!