@matrix @PurpCat @phnt honestly it's really frustrating that, out of the box with no config, a typescript node.js application doesn't have the same problems
@lain @PurpCat @phnt @matrix it's ok right now; the current issue is a bug in Pleroma where disabling unauthenticated access to local objects kills some federation
@matrix @sun @PurpCat feld had an almost-working prototype for multiple nodes at some point, I think. There was an issue with Cachex and that was it.
@mint @PurpCat @matrix @sun You can load-balance multiple Postgres nodes in a read scenario. And in the case of Pleroma, writing to the DB usually isn't as I/O-heavy.
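for reference, the usual way to do that read/write split in Ecto is the read-replica pattern from its own docs; just a sketch, the module names and replica count here are made up:

```elixir
# Primary repo takes writes; the read_only replicas take reads.
# MyApp.* names and the random pick are purely illustrative.
defmodule MyApp.Repo do
  use Ecto.Repo,
    otp_app: :my_app,
    adapter: Ecto.Adapters.Postgres

  @replicas [MyApp.Repo.Replica1, MyApp.Repo.Replica2]

  # Pick a replica for each read, e.g. at random.
  def replica, do: Enum.random(@replicas)

  for repo <- @replicas do
    defmodule repo do
      use Ecto.Repo,
        otp_app: :my_app,
        adapter: Ecto.Adapters.Postgres,
        read_only: true
    end
  end
end

# Reads:  MyApp.Repo.replica().all(MyApp.User)
# Writes: MyApp.Repo.insert!(%MyApp.User{})
```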
@matrix @PurpCat @phnt @sun there is a cluster-native caching mechanism available to use which has a beautifully simple syntax: you just decorate the functions you want to cache and the ones that should bust the cache. Then you also configure which caches should be independent per node and which should be shared across the cluster
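roughly what that looks like with Nebulex's decorators; the cache/module names here are invented, and the adapter choice is what decides per-node vs cluster-shared:

```elixir
# Per-node cache: the Local adapter keeps entries in this node's ETS only.
defmodule MyApp.LocalCache do
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Local
end

# Cluster-shared cache: the Replicated adapter syncs entries across nodes.
defmodule MyApp.SharedCache do
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Replicated
end

defmodule MyApp.Users do
  use Nebulex.Caching

  # Cache the result under {:user, id}; later calls skip the DB.
  @decorate cacheable(cache: MyApp.SharedCache, key: {:user, id})
  def get_user(id), do: MyApp.Repo.get(MyApp.User, id)

  # Bust the cached entry whenever the user changes.
  @decorate cache_evict(cache: MyApp.SharedCache, key: {:user, user.id})
  def update_user(user, attrs) do
    user |> MyApp.User.changeset(attrs) |> MyApp.Repo.update()
  end
end
```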
@matrix @PurpCat @phnt @sun also yes, the main reason for not using Redis/Valkey is that it's an unnecessary dependency when this type of functionality is core to the language already, and it will just perform better because the OS doesn't need to context switch to access the cache
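(the "core to the language" bit is ETS, the in-memory tables built into the BEAM: a cache read is a lookup inside the same OS process, no socket or syscall out to a separate daemon. tiny illustration, table name made up:)

```elixir
# ETS lives inside the BEAM's own memory, so a read is a plain
# function call rather than a round-trip to an external server.
table = :ets.new(:demo_cache, [:set, :public, read_concurrency: true])

:ets.insert(table, {"user:1", %{name: "mint"}})

case :ets.lookup(table, "user:1") do
  [{_key, value}] -> IO.inspect(value, label: "cache hit")
  [] -> IO.puts("cache miss")
end
```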
@matrix @PurpCat @phnt @sun also, if you still really, really wanted to use Redis, Nebulex supports using it as a backend without needing any code changes, so it would give people flexibility
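the swap is basically just the adapter line plus config; untested sketch assuming the nebulex_redis_adapter package:

```elixir
# Same cache module, different backend: callers and the @decorate
# functions don't change at all.
defmodule MyApp.SharedCache do
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: NebulexRedisAdapter
end

# config/config.exs -- conn_opts are handed to Redix underneath.
config :my_app, MyApp.SharedCache,
  conn_opts: [host: "127.0.0.1", port: 6379]
```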
@sun @PurpCat @feld @matrix @lain I also remember graf talking about the failover Pleroma (Rebased at the time) nodes he had set up. Same with Postgres. So in this one specific case, he almost does run multi-node.
> Multiple nodes that still have a single bottleneck in the form of pumping I/O back and forth between the node and postgres.
if you're evenly distributing ingress traffic from the proxy to the nodes, it should (depending on the lb algorithm) spread traffic evenly(ish) across the nodes, then down the io tunnel to pg. if the pg node has a low-latency link (i.e. same network/lan), i/o should be less of an issue, i would think.
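something like a least_conn upstream in nginx, addresses/ports invented:

```nginx
# Two Pleroma nodes behind one proxy; least_conn keeps slow requests
# from piling up on a single backend. IPs/ports are made up.
upstream pleroma_nodes {
    least_conn;
    server 10.0.0.11:4000;
    server 10.0.0.12:4000;
}

server {
    listen 443 ssl;
    server_name example.social;

    location / {
        proxy_pass http://pleroma_nodes;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```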
i recently experimented with pleroma on kubernetes with an out-of-band pg backend over wireguard (local, not via a vps) and threw stressor at it. performance was pretty good tbh
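the shape of that setup, heavily trimmed and with every name/address invented (the pg host only being reachable over the wireguard tunnel ip):

```yaml
# Sketch of a Pleroma Deployment pointing at an out-of-cluster
# Postgres over WireGuard; image, env names, and 10.8.0.1 are invented.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pleroma
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pleroma
  template:
    metadata:
      labels:
        app: pleroma
    spec:
      containers:
        - name: pleroma
          image: registry.example/pleroma:latest
          ports:
            - containerPort: 4000
          env:
            # Postgres sits outside the cluster, reachable only
            # through the WireGuard tunnel.
            - name: DB_HOST
              value: "10.8.0.1"
            - name: DB_PORT
              value: "5432"
```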