Hey everyone, you can now follow @codebergstatus to get more granular notifications about the current status of our systems. We decided to keep the main account free from the noise of smaller issues and let you explicitly opt in to such notifications.
Important and positive news will continue to be shared on this account 😉
Once again, we are fighting an ongoing DoS attack. We are very sorry for the disruption of the service and hope to bring everything back online soon.
We are currently fighting a DDoS attack targeting both our service and our status page. We are analyzing network traffic with the help of our ISP and will let you know once we have updates to share.
Every few months, a couple of us come together to write down what we've been working on behind the scenes. The last time we did that was in June. Many exciting things (for us, at least!) have happened since; you can read all about them in the update that we are sharing with you today:
And since we're sharing background: Codeberg has a working group for matters involving Public Relations. Some of us are deeply involved with Codeberg's day-to-day operations; others have experience in putting words together for projects and group efforts like ours. (Teamwork makes the dream work!)
We did spend a lot of time on this update (and went over many details, which is why we're posting on here so late 🌃).
Hey everyone! Codeberg will go down for about 10 minutes starting at 19:00 UTC (20:00 CET) to create database indices, which are expected to improve performance. This requires a short interruption of service.
We apologize for the inconvenience and short notice.
Great news: Today, we have managed to distribute our Forgejo instance across two servers. A main instance still handles most traffic, but requests from #CodebergPages are now handled by another server. Going forward, we'll try to route more read-only traffic there to distribute the load.
This is an important milestone, because such a setup was never tested with #Forgejo at this scale.
This should improve availability of both Codeberg.org and Codeberg Pages.
We apologize for the long performance degradation today. We have finally identified all of the 'tricks' that AI crawlers used today. They no longer bypass the Anubis proof-of-work challenges.
A novelty for us: the AI crawlers not only crawled URLs that were actually presented to them by our frontend, they also converted those URLs into a format that bypassed our filter rules.
By the way, you can track the changes we have been making via
Two of our three servers decided to restart at around 04:00 CEST today, which caused Codeberg to be offline for several hours. We have restarted the servers and all services should be working again. We are currently investigating why they restarted themselves.
If you use Git via HTTPS and get rate-limited now, please provide feedback to us about your usage. Also, feel free to use SSH or to make more use of caching, if possible.
Today, we're struggling under the load of excessive Git cloning of all repos, and we have made the hard decision to restrict these operations for all users.
Your code is free and libre, but not for sale to large actors.
@thesamesam Unfortunately, I'm not sure that encouraging anyone to reinforce the vendor lock-in of Microsoft GitHub, by making maintainers financially dependent on that platform, is in the spirit of our mission. ~f
@zacchiro Yes, the crawlers completed the challenges. We tried to verify if they are sharing the same cookie value across machines, but that doesn't seem to be the case.
Calling our use of Anubis an attack on our users is far-fetched. But feel free to move elsewhere, or to host an alternative without resorting to extreme measures. We're happy to see working proof that any other protection can be scaled up to the level of Codeberg. ~f
However, as far as we can see, it does not sufficiently protect against crawling. As the bot armies successfully spread over many servers and addresses, taking down one of them doesn't prevent the next one from making harmful requests, unfortunately. ~f
@gturri Anubis sends a challenge. The browser needs to do "heavy" work to compute the answer; the server then only has "light" work to verify it.
As far as we can tell, the crawlers actually do the computation and send the correct response. ~f
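The heavy-to-solve, light-to-verify asymmetry described above can be sketched in Python. Note this is an illustrative proof-of-work sketch, not Anubis's actual algorithm; the hash function, nonce encoding, and difficulty value are all assumptions for the example:

```python
import hashlib
import os

DIFFICULTY_BITS = 16  # required leading zero bits; illustrative value only

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: bytes) -> int:
    """Client side ("heavy" work): try nonces until the hash meets the target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int) -> bool:
    """Server side ("light" work): a single hash recomputation."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

# The server issues a random challenge; the client burns CPU to answer it.
challenge = os.urandom(16)
nonce = solve(challenge)
print(verify(challenge, nonce))  # prints True
```

The point of the scheme: the client does on the order of 2^DIFFICULTY_BITS hash attempts, while the server verifies with one. A crawler that genuinely computes the answer (as ours apparently did) pays that cost per challenge.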
We are a non-profit, community-led organization that helps free and open source projects prosper. Our services include Git hosting (using @forgejo), Weblate, Woodpecker CI and Pages.
This account is managed by three volunteers who dedicate a few hours per week to social media; this prevents us from responding to everyone's inquiries. If you need assistance or want to report a problem, see: https://docs.codeberg.org/contact
FAQ: https://docs.codeberg.org/getting-started/faq/