@i@mint Mind you we had full-time devs on it from late 2017 to 2020, but the focus was almost exclusively on the backend. There were two frontend projects exploring the space as well. If I told you how much money was spent exploring the viability of the fediverse you'd never believe me. It has a lot of zeroes. Unfortunately the timing was just off and something else filled the void for my employer.
@i@mint Lifeline will recover stuck/orphaned jobs but will not retry them if they were on their "last attempt". Oban Pro has a better Lifeline plugin that can rescue these too based on custom rules.
I just forked theirs to detect whether it was the job's last attempt and reset it so it will be tried again.
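Not the actual fork, but roughly the idea, as a minimal sketch assuming Pleroma.Repo and the stock Oban.Job schema: find jobs stuck in "executing" past some rescue window whose attempts are exhausted, bump max_attempts, and put them back to "available".

```elixir
# Minimal sketch of the idea (not the actual fork). Assumes Pleroma.Repo and
# the standard Oban.Job schema; the 60-minute rescue window is arbitrary.
import Ecto.Query

from(j in Oban.Job,
  where: j.state == "executing",
  where: j.attempted_at < ago(60, "minute"),
  where: j.attempt >= j.max_attempts,
  update: [set: [state: "available"], inc: [max_attempts: 1]]
)
|> Pleroma.Repo.update_all([])
```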
@i@mint It's true, but the dude made like the world's best modern job queue software so I can't be too mad at him for wanting to run a business and make a living.
Back in ~2018 I almost recruited him to work on Pleroma. We had funding for him, but he was too involved in contract work so it didn't go anywhere beyond initial discussion. I was ready to drive down to Chicago and bring him into our office too 😢
@i@mint It's still a passion project and I really love Elixir, so it's not going away. Lain wants to start cutting out complexity and unused functionality, and there's low-hanging fruit for performance improvements. For example, I have plans to completely refactor and simplify the caching with a new, better approach (Nebulex); a rough sketch of what that looks like is below. I hope to soon be running a Pleroma instance across multiple redundant tiny computers at home as a proof of concept that we can scale horizontally and scale down just fine (database and media on another server, but good enough; serious people can cluster / load balance that too).
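Not the eventual Pleroma implementation, just a sketch of what a Nebulex-backed cache looks like; the module and app names here are placeholders:

```elixir
# Sketch of a Nebulex cache module; names are placeholders, not Pleroma's
# eventual implementation. Other Nebulex adapters cover distributed setups.
defmodule MyApp.Cache do
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Local
end

# usage once the cache is in the supervision tree:
MyApp.Cache.put("user:1", %{nickname: "feld"}, ttl: :timer.minutes(5))
MyApp.Cache.get("user:1")
```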
It's also possible to run Pleroma with no frontend webserver in front of it. I've done this in another project to experiment. Works great! It can get its own certificate with Let's Encrypt and bind directly on the real ports 80/443.
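The Let's Encrypt part would come from an ACME client library; the binding side is just endpoint config, roughly like this sketch (cert paths are placeholders, not shipped config):

```elixir
# Sketch of instance config for direct binding; cert paths are placeholders.
# Binding to ports below 1024 needs root or setcap on the BEAM binary.
import Config

config :pleroma, Pleroma.Web.Endpoint,
  http: [ip: {0, 0, 0, 0}, port: 80],
  https: [
    ip: {0, 0, 0, 0},
    port: 443,
    cipher_suite: :strong,
    certfile: "/var/lib/pleroma/certs/fullchain.pem",
    keyfile: "/var/lib/pleroma/certs/privkey.pem"
  ]
```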
Older Oban used database triggers. They add a very slight overhead, so they changed to the new method for better performance in the most demanding use cases.
If Postgres can't keep up with the work and queries start timing out, Ecto/Postgrex (the db driver and connection pooler) crashes and restarts. This cascades up to Oban, and I think in some edge case it can cause Oban to come back online without properly starting the queue processing.
Now you've stopped processing jobs. Super weird.
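If you suspect this has hit you, something like this from a remote console can confirm whether a queue's producer actually came back; the queue name and limit are just examples, not a prescribed fix:

```elixir
# Sketch: check whether a queue's producer is running and kick it if not.
# Assumes check_queue returns nil when no producer exists for the queue.
case Oban.check_queue(queue: :federator_outgoing) do
  nil ->
    # no running producer for this queue; start it again
    Oban.start_queue(queue: :federator_outgoing, limit: 5)

  %{paused: true} ->
    Oban.resume_queue(queue: :federator_outgoing)

  info ->
    # producer is up; inspect its state
    info
end
```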
I have a lower-resource test server running now that's following some relays: feld@friedcheese.us. Feel free to flood me with follow requests from a giant bot network; I'll need more followers to stress this further 🤭
@mint I take it you had another freeze/crash even with the latest changes in the branch? Well, let's see how long you can go on Oban 2.18. Nothing in that changelog looks relevant but hey, stranger things have happened 🤪
"State congressional power (representatives) is limited by the population, and we don't want to give the slave states too much power so slaves are only 2/3 a person, and then we should make sure that we have our own stooges decide who really won the election in case the people have been compromised..."
A democracy founded on intense fear and distrust of its own people is how it seems to read sometimes