@shitpisscum @7666 @Economic_Hitman
>I left it federating for over 2 years so the db is too big (~11,000,000 activities)
Rookie numbers.
pleroma=# select count(1) from activities;
  count
----------
 28892409
(1 row)
That's a year and a half, I think. Seems to be chugging along fine on a spare PC with a shitty pre-Ryzen AMD APU. So yeah, you probably need a little more RAM and clock cycles. Upgrading both postgres and pleromer would definitely help, and you might also be able to spare some queries by caching objects via nginx.
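If you want to sanity-check whether more RAM would actually cover the hot set, the on-disk size of activities (indexes and TOAST included) is one query away. Table name here is the stock Pleroma one, adjust if yours differs:
pleroma=# -- total relation size: heap + indexes + TOAST
pleroma=# select pg_size_pretty(pg_total_relation_size('activities'));
If that number is well past your RAM, shared_buffers and effective_cache_size can only do so much for you.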
@7666 @Economic_Hitman @shitpisscum
>what are you goofballs doing to cause this
I left it federating for over 2 years so the db is too big (~11,000,000 activities) and is often timing out. For some reason Pleroma is extremely bad at handling db timeouts and often locks up (not crashing; if it were crashing, systemd would restart it).
FAQ
How are you going to fix it?
Getting a more powerful db server.
An alternative would be to hire Erlang devs to fork the project and implement some sort of exception handling, but then you're stuck paying someone to maintain your own fork. And your db is still timing out, you've just kind of hidden it (getting a blank tl instead of "502 uwu shit's fucked hihih"). So yea, I'll just get a more powerful server lol.
What about pg_repack, updating Pleroma, pgtune.leopard.in.ua etc?
Might give some temporary performance boost but I guarantee it'll be back here in a few months.
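For what it's worth, before bothering with pg_repack you can check whether there's actually bloat to reclaim. This is the standard pg_stat_user_tables view, nothing Pleroma-specific:
pleroma=# -- lots of dead tuples relative to live ones = bloat pg_repack could reclaim
pleroma=# select relname, n_live_tup, n_dead_tup, last_autovacuum
pleroma-# from pg_stat_user_tables order by n_dead_tup desc limit 5;
If n_dead_tup is tiny, repacking won't buy you anything and the "few months" prediction stands.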
Logs?
[error] Internal server error: %DBConnection.ConnectionError{message: "connection not available and request was dropped from queue after 414ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:\n\n 1. Ensuring your database is available and that you can connect to it
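That "dropped from queue" error is Pleroma's connection pool giving up, not Postgres itself, but it's worth looking at what the backend connections are doing when it fires. Plain pg_stat_activity, works on any Postgres:
pleroma=# -- how many connections, and what they're doing
pleroma=# select state, count(*) from pg_stat_activity group by state;
pleroma=# -- server-side ceiling for comparison
pleroma=# show max_connections;
A pile of long-running "active" rows points at slow queries; mostly "idle" with the error still firing points at a pool that's simply too small for the load.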
>mine's fine
Congrats, you won the "Works on my machine" award. You will be contacted by ShitPissCum Services South Eastern Europe within the next 5 to 10 weekdays to arrange the delivery
@mint @7666 @shitpisscum @Economic_Hitman
Over two years:
select count(1) from activities;
count
----------
53683089
(1 row)
Postgres can be tuned to use bigger caches and more per-connection memory. https://pgtune.leopard.in.ua/ gives reasonable defaults. In practice you want the caches as big as available memory allows, and to limit the total number of workers accordingly. And naturally, the most recent, hottest part of the tables should live on an NVMe SSD.
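To see where you currently stand versus what pgtune suggests, the relevant knobs are all in pg_settings. And note the "most recent part on nvme" idea needs partitioning to do properly, since a tablespace only moves whole relations; the path and index name below are made up for illustration:
pleroma=# select name, setting, unit from pg_settings
pleroma-# where name in ('shared_buffers','effective_cache_size','work_mem','max_connections');
pleroma=# -- hypothetical NVMe mount, must already exist and be owned by the postgres user
pleroma=# create tablespace nvme location '/mnt/nvme/pg';
pleroma=# -- moving a relation rewrites it and takes a lock; index name assumed, check \d activities
pleroma=# alter index activities_pkey set tablespace nvme;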