openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] reprepro https://review.opendev.org/757660 | 00:11 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: WIP: boot test of containerfile image https://review.opendev.org/722148 | 00:21 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] reprepro https://review.opendev.org/757660 | 00:40 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] reprepro https://review.opendev.org/757660 | 00:58 |
*** hamalq has quit IRC | 01:04 | |
*** DSpider has quit IRC | 01:07 | |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: WIP: boot test of containerfile image https://review.opendev.org/722148 | 01:42 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] reprepro https://review.opendev.org/757660 | 02:07 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] reprepro https://review.opendev.org/757660 | 02:14 |
*** chandankumar has quit IRC | 02:18 | |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] reprepro https://review.opendev.org/757660 | 02:19 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] reprepro https://review.opendev.org/757660 | 03:35 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: WIP: boot test of containerfile image https://review.opendev.org/722148 | 03:56 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] reprepro https://review.opendev.org/757660 | 03:57 |
openstackgerrit | Merged opendev/system-config master: borg-backups: add some extra excludes https://review.opendev.org/757965 | 04:15 |
*** chandankumar has joined #opendev | 04:25 | |
*** ykarel|away has joined #opendev | 04:44 | |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] reprepro https://review.opendev.org/757660 | 04:51 |
*** ykarel|away is now known as ykarel | 04:58 | |
*** ykarel_ has joined #opendev | 05:04 | |
*** ykarel has quit IRC | 05:08 | |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: WIP: boot test of containerfile image https://review.opendev.org/722148 | 05:09 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] reprepro https://review.opendev.org/757660 | 05:12 |
*** ykarel_ is now known as ykarel | 05:18 | |
*** marios has joined #opendev | 05:20 | |
*** prometheanfire has quit IRC | 05:32 | |
*** ralonsoh has joined #opendev | 05:33 | |
*** sboyron has joined #opendev | 05:47 | |
*** prometheanfire has joined #opendev | 05:48 | |
*** slaweq has joined #opendev | 06:08 | |
*** roman_g has joined #opendev | 06:11 | |
*** eolivare has joined #opendev | 06:31 | |
openstackgerrit | Ian Wienand proposed opendev/system-config master: [wip] reprepro https://review.opendev.org/757660 | 06:46 |
*** andrewbonney has joined #opendev | 07:06 | |
*** fressi has joined #opendev | 07:06 | |
*** ysandeep|away is now known as ysandeep | 07:09 | |
*** sshnaidm|afk is now known as sshnaidm | 07:36 | |
*** DSpider has joined #opendev | 07:39 | |
*** tosky has joined #opendev | 07:43 | |
*** lpetrut has joined #opendev | 07:50 | |
*** rpittau|afk is now known as rpittau | 07:52 | |
*** mkalcok has joined #opendev | 07:58 | |
*** hashar has joined #opendev | 08:24 | |
*** roman_g has quit IRC | 09:08 | |
*** roman_g has joined #opendev | 09:16 | |
*** moguimar has left #opendev | 10:01 | |
*** zigo has joined #opendev | 10:11 | |
*** Dmitrii-Sh has quit IRC | 11:20 | |
*** Dmitrii-Sh has joined #opendev | 11:20 | |
*** ysandeep is now known as ysandeep|brb | 11:23 | |
*** ysandeep|brb is now known as ysandeep|afk | 11:32 | |
*** priteau has joined #opendev | 11:39 | |
openstackgerrit | Jeremy Stanley proposed opendev/system-config master: Switch openstack/compute-hyperv->x tarball redir https://review.opendev.org/758096 | 11:54 |
fungi | infra-root: ^ relatively urgent, i'll handle the file moves myself once it merges | 11:55 |
fungi | i'm going to go ahead and get started on the corresponding file move | 12:07 |
fungi | #status log manually moved files from project/tarballs.opendev.org/x/compute-hyperv into .../openstack/compute-hyperv per 758096 | 12:10 |
openstackstatus | fungi: finished logging | 12:10 |
clarkb | fungi: is there also a job publishing update that needs to happen? | 12:12 |
fungi | clarkb: nope, that's how we discovered the problem | 12:12 |
fungi | tarballs for it are being published into openstack not x | 12:12 |
clarkb | got it | 12:12 |
fungi | so we moved the previous tarballs and redirected, but the release scripts couldn't find the new ones because they weren't being published to where the redirect goes | 12:13 |
clarkb | I think you can self approve the redirect fix now if you want | 12:14 |
fungi | i figured i'd give it until check results returned in case anyone else wanted to review, since i wasn't planning to direct-enqueue it to the gate anyway | 12:16 |
fungi | openstackgerrit has gone silent on us | 12:31 |
*** fressi has quit IRC | 12:34 | |
*** ysandeep|afk is now known as ysandeep | 12:38 | |
openstackgerrit | Aurelien Lourot proposed openstack/project-config master: Mirror ironic charms to GitHub https://review.opendev.org/758109 | 12:52 |
openstackgerrit | Merged opendev/system-config master: Switch openstack/compute-hyperv->x tarball redir https://review.opendev.org/758096 | 13:04 |
*** hashar has quit IRC | 13:11 | |
*** priteau has quit IRC | 14:09 | |
*** priteau has joined #opendev | 14:18 | |
*** lpetrut has quit IRC | 14:40 | |
AJaeger | config-core, please review https://review.opendev.org/756717 and https://review.opendev.org/758109, https://review.opendev.org/753199 | 14:49 |
*** ysandeep is now known as ysandeep|away | 15:25 | |
clarkb | review-test has finished replicating to gitea99. Total disk use was 41GB after that. I'm running the gc now to compare post gc size | 15:26 |
clarkb | Then I'll double check we haven't replicated all-users and open the firewall for the web server there again | 15:26 |
clarkb | but I think it looks good | 15:27 |
clarkb | oh I should trigger another full replication and see how slow that is now that it's replicated too | 15:28 |
clarkb | I'll do that post gc | 15:28 |
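(Editor's note: the gc-and-compare step clarkb describes can be sketched roughly as below. The repository path is the /var/gitea one quoted later in this log; the gc flags and directory layout depth are assumptions for illustration, not commands copied from the log.)

```sh
# Size before gc
du -sh /var/gitea/data/git/repositories

# Run gc in every bare repo gitea hosts
# (layout assumed: repositories/<org>/<repo>.git)
find /var/gitea/data/git/repositories -maxdepth 2 -type d -name '*.git' |
  while read -r repo; do
    git -C "$repo" gc --quiet
  done

# Size after gc, for comparison
du -sh /var/gitea/data/git/repositories
```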
openstackgerrit | Merged openstack/project-config master: Mirror ironic charms to GitHub https://review.opendev.org/758109 | 15:32 |
*** ykarel is now known as ykarel|away | 15:34 | |
openstackgerrit | Merged openstack/project-config master: Add SNMP Armada App to StarlingX. https://review.opendev.org/756717 | 15:36 |
mnaser | i'm getting really slow load times when i open http://opendev.org/openstack/governance | 15:42 |
*** ykarel|away has quit IRC | 15:42 | |
fungi | i'll check if the server farm is getting overloaded again | 15:43 |
clarkb | mnaser: if you inspect the ssl cert you get it will tell you which backend you're talking to | 15:44 |
clarkb | (it is one of the cert names) | 15:44 |
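(Editor's note: one way to do that inspection from a shell, sketched with standard openssl tooling rather than quoted from the log; the backend name shows up among the certificate's subject alternative names.)

```sh
# Ask the load balancer for its certificate and print the SANs;
# one of the names identifies the gitea backend you were routed to.
echo | openssl s_client -connect opendev.org:443 -servername opendev.org 2>/dev/null |
  openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
```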
fungi | was slow for me with gitea03 | 15:45 |
clarkb | firefox dev tools say a good chunk of the time for me is tls setup | 15:46 |
mnaser | in my case i did a local curl and it was even the http backend that did a 301 redirect that was slow | 15:47 |
clarkb | we just did a pass of project config updates too | 15:48 |
clarkb | I wonder if that invalidates our caches | 15:48 |
fungi | established tcp connections through the lb jumped by a couple orders of magnitude a little before 02:00 utc | 15:48 |
clarkb | that was a result of applying https://review.opendev.org/756717 | 15:48 |
fungi | http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=66611&rra_id=all | 15:49 |
clarkb | oh ya that's it then, that's also our haproxy limit | 15:49 |
clarkb | which would explain slow connection setups | 15:49 |
clarkb | we're queueing in there behind the dos | 15:49 |
fungi | odds are we're seeing attack/abuse again but i'll check specific backends too | 15:49 |
clarkb | the week 39 block that looks like this was the ddos from china ISP IPs | 15:49 |
fungi | yep, i was thinking the same | 15:50 |
clarkb | if we're seeing the same sort of attack then we can flip the backend ports to 3081 to try ianw's filtering | 15:50 |
fungi | seeing basically the same jump on all backends | 15:50 |
clarkb | ya if you look at /var/gitea/logs/access.log you see the silly user agent strings again | 15:51 |
fungi | so i guess the next step is to see if we can do that, yeah | 15:51 |
clarkb | https://gitea01.opendev.org:3081/ is the filtered version | 15:52 |
fungi | "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)" | 15:52 |
clarkb | I think we can tell haproxy to talk to 3081 instead of 3000 ? | 15:52 |
fungi | also a lot of "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" | 15:52 |
clarkb | and then does apache do the http -> https redirect for us /me looks at that config | 15:52 |
fungi | actually this may not be so easy | 15:54 |
fungi | the winning ua by a factor of >6x the next most common one is "git/2.18.4" | 15:55 |
*** slaweq has quit IRC | 15:55 | |
fungi | though i suppose we assume it's only browser traffic which matters, in which case "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)" is 2x more common than the 3rd place ua | 15:56 |
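(Editor's note: the kind of tally fungi is quoting can be produced with a one-liner like this; it assumes the combined log format, where the user agent is the final quoted field.)

```sh
# Count requests per user agent, most frequent first
awk -F'"' '{print $(NF-1)}' /var/gitea/logs/access.log |
  sort | uniq -c | sort -rn | head -20
```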
openstackgerrit | Clark Boylan proposed opendev/system-config master: Switch to filtered apache backends https://review.opendev.org/758200 | 15:56 |
clarkb | fungi: I think ^ will put our planned mitigation for this in place | 15:56 |
clarkb | if we want to apply that by hand to a single backend first we can do that too | 15:56 |
clarkb | or maybe do a 50-50 | 15:56 |
fungi | aha, i see, there's a bunch of agent strings we thwap in that vhost | 15:57 |
clarkb | yes, ianw implemented that in case what happened last time reappeared, and this appears to be it now :) | 15:58 |
*** slaweq has joined #opendev | 15:58 | |
*** marios has quit IRC | 15:58 | |
clarkb | we edit /var/haproxy/etc/haproxy.cfg by hand then run `docker-compose kill -s HUP haproxy` to pick up the new config | 15:58 |
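(Editor's note: spelled out, the by-hand procedure clarkb describes looks roughly like this; the sed pattern is illustrative, since the exact server lines in haproxy.cfg are not shown in the log.)

```sh
# Point the backends at the UA-filtering apache on 3081 instead of gitea on 3000
sudo sed -i 's/\.opendev\.org:3000/.opendev.org:3081/g' /var/haproxy/etc/haproxy.cfg

# From the directory holding haproxy's docker-compose.yml (path assumed),
# HUP the container so haproxy reloads its config without dropping the listener
sudo docker-compose kill -s HUP haproxy
```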
fungi | i say we just go for it. this explains a failure we saw during openstack release processing earlier too | 15:58 |
fungi | i'll try gitea01 first and see if it calms down there | 15:59 |
clarkb | works for me. I think my biggest concern is that the http -> https redirects may not work but I think gitea does that as a port 80 to port 443 redirect which the load balancer then handles | 15:59 |
clarkb | ++ | 15:59 |
clarkb | fungi: let me know if you want me to help with anything | 16:01 |
fungi | i've done the above for gitea01 | 16:01 |
clarkb | now we wait for haproxy to bleed the old connections | 16:02 |
clarkb | I believe I talk to gitea01 /me tests | 16:02 |
fungi | and cacti to poll | 16:02 |
clarkb | confirmed I talk to gitea01 from my source IP and it seems much quicker? | 16:02 |
clarkb | also http -> https redirecting seems to work | 16:02 |
clarkb | at the very least doesn't seem to be a regression | 16:03 |
clarkb | still seeing some weird user agents though | 16:04 |
fungi | actually i'm talking to gitea06 on this machine now and it's still faster than it was | 16:04 |
clarkb | maybe they are on persistent tcp connections to the old haproxy which should eventually time out | 16:04 |
clarkb | fungi: I vote we move the other 7 to 3081 and approve https://review.opendev.org/758200 then we can refine the UA filtration list as necessary | 16:05 |
clarkb | (I think the ones that are getting through are different than what we already filter) | 16:05 |
*** rpittau is now known as rpittau|afk | 16:05 | |
fungi | doing that now | 16:06 |
fungi | and done | 16:06 |
fungi | i'm still able to browse stuff even after a hard refresh | 16:07 |
clarkb | \"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0" <- that seems to be a common one sneaking through | 16:07 |
clarkb | mnaser: ^ maybe give it another minute or two but then try again and see if it is happier for you? | 16:08 |
fungi | trident/5.0 seems to be the backend for msie 9.0 | 16:09 |
clarkb | which is like windows XP era | 16:10 |
clarkb | I don't mind blocking that :) | 16:10 |
clarkb | sorry it was vista :) | 16:10 |
clarkb | \"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" ie 6 is xp :P | 16:11 |
fungi | https://en.wikipedia.org/wiki/Internet_Explorer_9#User_agent_string | 16:11 |
fungi | circa 2011 for initial release | 16:12 |
clarkb | cacti is showing the connections fall off now too | 16:12 |
fungi | msie9 is apparently ~0.06% of the desktop browser share based on recent stats | 16:13 |
clarkb | do you think we should update our UA list for completeness or leave it as is since what we've got seems to be sufficient? | 16:13 |
fungi | we should probably at least wait for the change to roll out and make sure things are stable before we add any more to the filter list | 16:14 |
clarkb | sounds good | 16:14 |
*** hashar has joined #opendev | 16:15 | |
clarkb | memory consumption looks good too so we aren't all of a sudden disrupting that balance via addition of apache | 16:17 |
fungi | oh yeah, gitea01 has recovered rapidly: http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=66632&rra_id=all | 16:18 |
clarkb | ya it's basically able to throw away 80% of the connections immediately and then worry about the other mostly valid connections | 16:19 |
smcginnis | IE 9? Yikes. | 16:20 |
clarkb | smcginnis: it's not actually IE9, what ianw discovered last time this happened is that the weird UAs we were seeing lined up almost perfectly with some botnet ddos tool on github | 16:21 |
clarkb | and from that ianw built the apache filter list we're now using as of 20 minutes ago or so | 16:21 |
*** tosky has quit IRC | 16:21 | |
smcginnis | Interesting choice for a user agent string for a DDOS tool. :) | 16:22 |
clarkb | oh those aren't even the fun ones | 16:23 |
clarkb | there are a bunch for things like ipods and hp tablets | 16:24 |
smcginnis | Hah | 16:24 |
* smcginnis imagines a farm of ipods | 16:24 | |
fungi | yeah, basically it has some 50 different legitimate (if old) looking uas it rotates through at random for its requests | 16:25 |
fungi | in order to evade detection | 16:25 |
fungi | and doesn't obey robots.txt exclusions or nofollow attributes for links | 16:25 |
fungi | we've had to set up a proxy filter in front of each of the gitea servers blocking requests based on that list of uas | 16:27 |
fungi | because the workload is also spread across random ip addresses from most of the major telecommunications providers in china | 16:27 |
fungi | so the alternative (and what we did last time) is basically to block most chinese users | 16:27 |
clarkb | we all owe ianw a beer now too :P | 16:28 |
fungi | i'd argue we already did ;) | 16:28 |
fungi | a case of the finest chinese beer for ianw! | 16:28 |
*** eolivare has quit IRC | 16:34 | |
openstackgerrit | Bernard Cafarelli proposed openstack/project-config master: Update neutron grafana dashboard https://review.opendev.org/758208 | 16:36 |
clarkb | after git gc we are down to 28GB of disk used by gitea on gitea99 | 16:40 |
fungi | how does that compare to production gitea servers? | 16:40 |
clarkb | pulling those numbers now too | 16:41 |
clarkb | it's harder to quantify on prod because it's a single volume for all of / | 16:41 |
clarkb | so I have to wait for du to run | 16:41 |
*** iurygregory has quit IRC | 16:41 | |
clarkb | /var/gitea/data/git/repositories is 18GB on gitea99 and 13GB on gitea01 | 16:41 |
fungi | connections through the lb are looking much better now: http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=66611&rra_id=all | 16:41 |
*** hamalq has joined #opendev | 16:41 | |
fungi | though you can tell it's still easily an order of magnitude higher than normal for us | 16:42 |
clarkb | 9.3GB of index data for both | 16:42 |
clarkb | fungi: ya that's the extra UAs that are sneaking through I think | 16:42 |
fungi | i expect so | 16:42 |
clarkb | re gitea disk use I think that means we're fine. We've got 29GB to grow into on gitea01, which is more than the total size we expect, and then when we gc the disk use will fall dramatically | 16:43 |
clarkb | I also think we can gc while replication is happening | 16:43 |
clarkb | now to rerun replication and see if it is super slow still | 16:43 |
*** iurygregory has joined #opendev | 16:44 | |
clarkb | also I've done more digging in git docs and the refspecs seem to only support the * glob | 16:44 |
clarkb | I was hoping we might be able to do something like character classes and only replicate changes/XY/ABCXY/[0-9]* | 16:45 |
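(Editor's note: for context, replication targets are configured with push refspecs in gerrit's replication.config, and '*' is the only wildcard git refspecs understand; a hedged sketch of a remote stanza, with the hostname and port as placeholders rather than production values.)

```sh
# Illustrative stanza only (host/port are placeholders)
cat <<'EOF' > /tmp/replication.config.example
[remote "gitea99"]
    url = ssh://git@gitea99.opendev.org:222/${name}.git
    push = +refs/heads/*:refs/heads/*
    push = +refs/tags/*:refs/tags/*
EOF
# a refspec like +refs/changes/*/[0-9]*:refs/changes/*/[0-9]* would be
# rejected: character classes are not part of refspec syntax
```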
clarkb | so far I think the only thing that has broken where we were using an existing true api was the commentlinks | 16:48 |
clarkb | I've just jinxed it :P | 16:48 |
clarkb | everything else is hacks like the js stuff and direct db access | 16:49 |
clarkb | noop replication is much quicker fwiw | 16:49 |
clarkb | should be done in a few more minutes | 16:49 |
fungi | oh good | 16:50 |
clarkb | replication finished at some point between me saying it had a few minutes left and me returning from getting a drink | 17:08 |
*** hamalq has quit IRC | 17:08 | |
fungi | bar trips are a legitimate unit of time | 17:09 |
fungi | sub-bt completion | 17:09 |
clarkb | fungi: https://review.opendev.org/#/c/758200/ passed testing if you want to approve it to match production now | 17:10 |
*** hamalq has joined #opendev | 17:12 | |
*** sgw has joined #opendev | 17:12 | |
*** mlavalle has joined #opendev | 17:26 | |
openstackgerrit | Clark Boylan proposed opendev/system-config master: Add four more gitea ddos UA strings https://review.opendev.org/758219 | 17:31 |
clarkb | fungi: ^ those are the 4 common ones I see sneaking through | 17:31 |
clarkb | Also I don't know why I said "all three" in the commit message after saying "Add four" | 17:36 |
clarkb | I was up early today and my brain is not quite in gear | 17:37 |
fungi | meh | 17:37 |
fungi | yeah | 17:37 |
fungi | same here, along with a late night closing out elections | 17:37 |
*** ralonsoh has quit IRC | 17:42 | |
clarkb | we've also got 3 more well behaved bots: http://www.semrush.com/bot.html https://aspiegel.com/petalbot and https://zhanzhang.toutiao.com/ | 17:47 |
*** ykarel|away has joined #opendev | 17:48 | |
*** ykarel|away has quit IRC | 17:53 | |
*** hashar is now known as hasharDinner | 17:55 | |
*** smcginnis has quit IRC | 17:59 | |
*** smcginnis has joined #opendev | 18:01 | |
openstackgerrit | Merged opendev/system-config master: Switch to filtered apache backends https://review.opendev.org/758200 | 18:09 |
AJaeger | I'd like to merge https://review.opendev.org/742736, a zuul-jobs change that consolidates our log uploading. Any infra-root around to monitor in case it does not work as expected? Then I'll approve now... | 18:13 |
fungi | looking now, but yes i'm around for at least a few more hours | 18:13 |
clarkb | AJaeger: I'm watching the kids for a bit but am around enough | 18:13 |
AJaeger | do you want to review first, fungi? | 18:13 |
AJaeger | thanks, clarkb and fungi | 18:13 |
fungi | AJaeger: nah, i just wanted to familiarize myself with it so i know what sort of breakage to be on the lookout for | 18:15 |
fungi | merge at your discretion | 18:15 |
AJaeger | approved - thanks | 18:15 |
AJaeger | are we running https://review.opendev.org/#/c/753222/ in our Zuul instance now? So, can we merge https://review.opendev.org/#/c/753199/ which "Refactor fetch-sphinx-tarball to be executor safe"? | 18:17 |
fungi | i'll check | 18:17 |
fungi | but i think that eneded up in the most recent restarts | 18:18 |
AJaeger | fungi: if it is, would be great if you could review 753199 as well. Thanks for checking! | 18:18 |
AJaeger | change merged two weeks ago - if we restarted everything, we should be fine | 18:18 |
clarkb | if it passes testing then it must be restarted with that change | 18:18 |
clarkb | otherwise it would fail | 18:18 |
fungi | there's probably a docker-esque way to check start times, but stat on /var/lib/zuul/executor.socket seems to be fairly accurate | 18:20 |
fungi | the oldest executor.socket is on ze01, created 2020-10-01 16:10:38 utc | 18:21 |
clarkb | fungi: you can do docker ps -a or just use normal ps | 18:22 |
AJaeger | and the change merged before that - ok, please review 753199 | 18:22 |
fungi | 753222 was uploaded to dockerhub at 2020-09-30 23:52:35 | 18:22 |
*** mkalcok has quit IRC | 18:22 | |
fungi | so that's nearly 16.5 hours for it to pull an updated image | 18:22 |
fungi | clarkb: yeah, i never can remember how to get ps to show precise timestamps when a process is more than a day old | 18:23 |
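(Editor's note: for the record, a few ways to answer the "when did this start" question from this exchange; the socket path is the one fungi used, the rest is standard docker/ps usage, and lstart is the ps field fungi could not recall.)

```sh
# mtime of the executor socket, fungi's approach
stat -c '%y' /var/lib/zuul/executor.socket

# container creation times
docker ps --format '{{.Names}}\t{{.CreatedAt}}'

# full start timestamp from ps, precise even for processes older than a day
ps -eo pid,lstart,cmd | grep '[z]uul-executor'
```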
openstackgerrit | Merged zuul/zuul-jobs master: Consolidate common log upload code into module_utils https://review.opendev.org/742736 | 18:28 |
fungi | AJaeger: so i guess there's another change we've got to land to take advantage of 753199 and run the role on executors again? | 18:38 |
openstackgerrit | Merged zuul/zuul-jobs master: Refactor fetch-sphinx-tarball to be executor safe https://review.opendev.org/753199 | 18:51 |
clarkb | new thread on gerrit list says 2.16.23 will improve notedb migration performance | 18:51 |
clarkb | no indication of when that will release though | 18:51 |
*** andrewbonney has quit IRC | 18:55 | |
clarkb | actually we may already be building with those changes | 18:56 |
fungi | depending on how much it improves, could allow us to pack the entire upgrade sequence into a single-day outage, but honestly i like the break the notedb step gives us | 18:56 |
clarkb | yup we checkout stable-2.16 so we should have them | 18:56 |
fungi | we could likely all use the midpoint rest | 18:56 |
clarkb | they are all about 2 months old and our image is much newer | 18:57 |
clarkb | unfortunately I think that means our 8 hour migration is the fast version | 18:57 |
fungi | fortunately, we're getting the 8 hour migration and not the 80 hour migration, i guess | 18:58 |
clarkb | I got all excited for a minute there | 18:58 |
clarkb | I guess on the plus side they are continuing to improve the process and that will help us | 18:58 |
*** avass has joined #opendev | 19:02 | |
*** avass has left #opendev | 19:03 | |
*** avass has joined #opendev | 19:06 | |
*** yourname_ has joined #opendev | 19:07 | |
*** yourname_ has quit IRC | 19:09 | |
*** yourname_ has joined #opendev | 19:09 | |
clarkb | to test manage-projects I've realized I need to sort out authentication because we restored a prod db which has the prod pubkey in the user account but we shouldn't use the same key to avoid any chance of cross talk (I need to check if we even have a key too) | 19:16 |
clarkb | ya I think we have a new key that mordred generated I just need to set up the account on review-test | 19:19 |
*** priteau has quit IRC | 19:20 | |
clarkb | yup I get permission denied as expected | 19:20 |
fungi | can probably still auth with the ssh host key as "Gerrit Code Review" and then add the user that way | 19:21 |
fungi | or, yeah, your openid is probably an admin | 19:22 |
fungi | i keep forgetting it's a production snapshot ;) | 19:22 |
clarkb | ya I'm trying to do it with my account using `gerrit set-account` but it is telling me unavailable | 19:24 |
*** roman_g has quit IRC | 19:25 | |
clarkb | that makes me wonder if it is an acl issue | 19:28 |
clarkb | hrm actually it may be because that user has no email address set? there is a traceback complaining about a null email | 19:29 |
* clarkb tries setting the email too | 19:29 | |
clarkb | wow I think that was it, I added a bogus email addr and now it is happy | 19:31 |
clarkb | fungi: maybe we should add an email addr to that account now pre migration to avoid dealing with this in the future? | 19:31 |
clarkb | root@ maybe? | 19:31 |
clarkb | `ssh -p 29418 -i /home/gerrit2/review_site/etc/ssh_project_rsa_key openstack-project-creator@review-test.opendev.org gerrit ls-projects` is working now | 19:32 |
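(Editor's note: a hedged reconstruction of the fixup sequence described above. The account name and the bogus-address workaround come from the log; the email value and the key path are hypothetical placeholders, and the commands assume gerrit's stock set-account ssh command.)

```sh
# add an email first; the null-email traceback clarkb hit appears
# to go away once the account has an address set
ssh -p 29418 review-test.opendev.org gerrit set-account \
    --add-email openstack-project-creator@example.invalid \
    openstack-project-creator

# add the new (non-production) ssh key to the account
ssh -p 29418 review-test.opendev.org gerrit set-account \
    --add-ssh-key "'$(cat /path/to/new-creator-key.pub)'" \
    openstack-project-creator
```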
fungi | mmm, yeah probably | 19:35 |
fungi | that reminds me we still need to decide how we want to set up our shared role addresses going forward so we can wean ourselves off openstack.org for one more thing | 19:36 |
fungi | maybe just exim+(courier/cyrus/something) on a small vm, we don't really need a webmail interface for those | 19:37 |
fungi | heck, i'd be satisfied with exim local delivery and mailx from a shell prompt, at least initially | 19:37 |
clarkb | pine! | 19:42 |
fungi | see, you're getting fancy | 19:45 |
fungi | but yeah, an imap mailbox or several would not be hard to manage. i've been maintaining my own mailserver on debian for decades | 19:46 |
clarkb | fungi: if you get a chance today can you look at review-test's manage-projects setup? I updated /usr/local/bin/manage-projects to use the correct gerrit docker image, added a ~gerrit2/acls/clarkb/clarkb-test-project.yaml, added a ~gerrit2/projects.yaml as well as the earlier ssh key addition to the openstack-project-creator account. ~gerrit2/projects.ini was not modified as it appears to be already set up to | 19:51 |
clarkb | talk to review-test but should be double checked too | 19:51 |
clarkb | I'm mostly worried about wire crossing and accidentally adding a clarkb-test-project to prod | 19:51 |
fungi | will do | 19:52 |
fungi | i agree it would be unfortunate if it tried to update prod, though it really should not have sufficient credentials to do that | 19:52 |
clarkb | ya that is my expectations | 19:52 |
fungi | anyway, i'll double-check | 19:52 |
clarkb | thanks | 19:53 |
*** tosky has joined #opendev | 19:53 | |
clarkb | I've also noticed that jeepyb is hardcoded to create local mirror dirs | 19:55 |
clarkb | not today, but soon I'll see about making that conditional | 19:55 |
fungi | oh, good catch, yeah we wanted that at one time, even still not that long ago | 19:58 |
fungi | clarkb: the jeepyb config looks safe to me | 20:25 |
clarkb | fungi: ok want to run it in the root screen really quickly? | 20:26 |
clarkb | I've got the command queued up in the root screen and will run it momentarily | 20:27 |
fungi | oh, yeah, i had to reboot my workstation but i'll reattach now | 20:27 |
fungi | running now | 20:27 |
fungi | and done | 20:27 |
clarkb | that seems suspiciously quick and quiet | 20:28 |
fungi | yes | 20:28 |
fungi | also screen seems to have switched to your geometry | 16:28 |
clarkb | fungi: ya I made it smaller for you | 20:28 |
clarkb | I had disconnected too so when I reconnected it was huge then I made it arbitrarily smaller | 20:29 |
clarkb | it doesn't seem to have created the project | 20:29 |
fungi | that looks suspiciously empty | 20:29 |
clarkb | https://review-test.opendev.org/admin/repos/q/filter:clarkb is empty | 20:29 |
clarkb | the writing cache file message is in a finally block | 20:30 |
fungi | this seems like something to tackle tomorrow when we're both less busy | 20:30 |
clarkb | ya I'll look at it a bit more but I'm pretty sure it basically ssh'd then did nothing | 20:31 |
fungi | that's what it seems like, at least | 20:32 |
clarkb | my hunch based on lack of output is we're hitting if args.projects and project not in args.projects: continue | 20:32 |
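(Editor's note: if that hunch is right, re-running with the exact name from projects.yaml should confirm it; the flags below follow jeepyb's usual CLI, sketched as an assumption rather than quoted from the log.)

```sh
# verbose run restricted to the one project we expected to be created
/usr/local/bin/manage-projects -v clarkb-test-project
```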
openstackgerrit | James E. Blair proposed zuul/zuul-jobs master: Revert "Consolidate common log upload code into module_utils" https://review.opendev.org/758247 | 20:33 |
*** yourname_ is now known as avass | 20:33 | |
openstackgerrit | Tobias Henkel proposed zuul/zuul-jobs master: Remove unneeded gce from upload_utils https://review.opendev.org/758248 | 20:34 |
clarkb | and I'm off to make lunch now. Will pick this up later | 20:39 |
fungi | enjoy! i'm about to go pitch in on dinner | 20:40 |
openstackgerrit | Tobias Henkel proposed zuul/zuul-jobs master: Remove unneeded gce from upload_utils https://review.opendev.org/758248 | 20:43 |
openstackgerrit | James E. Blair proposed zuul/zuul-jobs master: Revert "Refactor fetch-sphinx-tarball to be executor safe" https://review.opendev.org/758250 | 20:54 |
*** hasharDinner has quit IRC | 20:57 | |
openstackgerrit | Merged zuul/zuul-jobs master: Remove unneeded gce from upload_utils https://review.opendev.org/758248 | 20:58 |
openstackgerrit | Merged zuul/zuul-jobs master: Revert "Refactor fetch-sphinx-tarball to be executor safe" https://review.opendev.org/758250 | 21:07 |
*** sboyron has quit IRC | 21:09 | |
clarkb | ianw: we deployed your gitea user agent filtering to prod today | 21:11 |
clarkb | we got hit by the same ddos again. I have a change up to add 4 more UAs too | 21:12 |
fungi | ianw: oh, in other news, we merged an emergency change to un-move one of the tarballs projects which had been part of the original openstack namespace exodus but was subsequently adopted by an official team and moved back | 21:15 |
fungi | hopefully that was the only one in that situation | 21:15 |
ianw | yeah, just catching up on various scrollback :) | 21:15 |
ianw | thanks for fixing my screw-ups :) | 21:15 |
ianw | fungi: it might be worth updating the opendev/project-config files with that, in case we have to make any more adjustments? | 21:16 |
ianw | seems unlikely though | 21:16 |
fungi | i suspect we did, since we use that for input into the gerrit rename playbook, but i haven't checked | 21:17 |
ianw | hrm, possibly my script was wrong then | 21:18 |
ianw | 20190531.yaml: - old: x/compute-hyperv | 21:19 |
ianw | 20190531.yaml: new: openstack/compute-hyperv | 21:19 |
ianw | my script probably didn't handle the second occurrence moving it back | 21:19 |
ianw | if old_tenant.startswith('openstack') and \ | 21:21 |
ianw | not new_tenant.startswith('openstack'): | 21:21 |
fungi | hah | 21:21 |
ianw | that needed to be expanded with something like "if new_tenant is openstack and old-tenant was ..." | 21:22 |
ianw | is that the only instance or should i go through it again? | 21:22 |
ianw | - old: x/whitebox-tempest-plugin | 21:23 |
ianw | new: openstack/whitebox-tempest-plugin | 21:23 |
fungi | it's the only one handled by release management, but yeah there may be release independent stuff | 21:23 |
ianw | - old: x/devstack-plugin-nfs | 21:23 |
ianw | new: openstack/devstack-plugin-nfs | 21:23 |
fungi | okay, so we have a handful of others to clean up i guess | 21:24 |
fungi | odds are nobody was using the tarballs site for those | 21:24 |
ianw | kayobe | 21:25 |
ianw | fungi: this should be the list -> http://paste.openstack.org/show/799052/ | 21:27 |
fungi | ianw: yeah those all look familiar. keep in mind that some may have tagged new releases since they were moved, so now have old tarballs in x and new tarballs in openstack (that was at least the case for compute-hyperv) | 21:29 |
ianw | yep, ok i gotta do school run but will come back and fix these up | 21:30 |
ianw | clarkb: now that i think about it, i'm not sure that adding those strings will cause apache to pick up the new config? i'm not sure there's a handler for that | 21:55 |
clarkb | hrm we can either manually restart or add a handler I guess | 21:56 |
clarkb | sorry my brain is checked out at this point. Was up early to check on release things and never caught back up from there | 21:56 |
*** slaweq_ has joined #opendev | 21:56 | |
*** slaweq has quit IRC | 21:59 | |
ianw | i can look at a handler for future and do a restart for now | 22:00 |
ianw | x/pyeclib openstack/pyeclib | 22:00 |
ianw | x/whitebox-tempest-plugin openstack/whitebox-tempest-plugin | 22:00 |
ianw | x/kayobe openstack/kayobe | 22:00 |
ianw | are the 3 remaining projects that have tarballs that should be moved back to openstack | 22:01 |
*** slaweq_ has quit IRC | 22:02 | |
fungi | yeah, i guess none of those released as part of victoria | 22:04 |
*** slaweq has joined #opendev | 22:05 | |
ianw | i'll just fix that up manually now | 22:07 |
*** erbarr has quit IRC | 22:13 | |
openstackgerrit | Ian Wienand proposed opendev/project-config master: [dnm] scripts to cleanup tarballs.opendev.org https://review.opendev.org/754257 | 22:13 |
*** erbarr has joined #opendev | 22:15 | |
openstackgerrit | Ian Wienand proposed opendev/system-config master: tarballs: remove incorrect redirects https://review.opendev.org/758259 | 22:16 |
*** jentoio has quit IRC | 22:18 | |
*** jentoio has joined #opendev | 22:18 | |
ianw | clarkb: no i'm wrong -- notify: gitea Reload apache2 -- should deploy ok | 22:20 |
ianw | #status log moved x/pyeclib x/whitebox-tempest-plugin x/kayobe back under openstack/ in tarballs AFS (https://review.opendev.org/758259) | 22:21 |
openstackstatus | ianw: finished logging | 22:21 |
*** slaweq has quit IRC | 22:36 | |
*** qchris has quit IRC | 22:41 | |
ianw | those extra matches failed in the gitea test | 22:47 |
ianw | HTTPSConnectionPool(host='localhost', port=3000): Max retries exceeded with url: /api/v1/user/orgs?limit=50&page=1 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f204cff35f8>: Failed to establish a new connection: [Errno 111] Connection refused',)) | 22:47 |
ianw | 2020/10/14 21:55:26 cmd/web.go:204: runWeb() [C] Failed to start server: open /certs/cert.pem: no such file or directory | 22:48 |
ianw | we don't capture the acme.sh output but i guess the letsencrypt failed | 22:50 |
fungi | that seems like the most fragile dependency for installing that file | 22:51 |
fungi | so a reasonable guess | 22:51 |
ianw | there's a couple of containers around that provide for fake acme workflows, it's on my todo list to implement something like that for gate testing | 22:52 |
*** qchris has joined #opendev | 22:54 | |
*** hamalq has quit IRC | 23:02 | |
fungi | infra-root: i'm not really around at the moment, but i see that rackspace has just opened tickets for us about possible host outages impacting zk03 and nl01 | 23:04 |
fungi | may be worth keeping an eye on | 23:04 |
ianw | ok, will do; with those services the first thing we'll probably notice is system-config deployment failing | 23:06 |
*** hamalq has joined #opendev | 23:08 | |
*** tosky has quit IRC | 23:27 | |
*** mlavalle has quit IRC | 23:29 |