GNU social JP
GNU social JP is a Japanese GNU social server.

Conversation

Notices

  1. Phantasm (phnt@fluffytail.org)'s status on Tuesday, 30-Sep-2025 02:56:31 JST

    fedi-pleroma-maintenance-brain.webp
    In conversation about 2 months ago from fluffytail.org

    Attachments


    1. https://upload.fluffytail.org/media/38/26/5d/38265d5e2bf5fc2ca41f06d6c7c2de8b0d2ff1bfa67aa9e6b3f55fad5515c4b4.webp?name=fedi-pleroma-maintenance-brain.webp
    • Doughnut Lollipop 【記録係】:blobfoxgooglymlem:, snacks and Johnny Peligro like this.
    • Phantasm (phnt@fluffytail.org)'s status on Tuesday, 30-Sep-2025 05:17:11 JST
      in reply to
      • lainy
      @lain Verita is running that task daily and switches Postgres into and out of replication mode on the fly, because it is supposed to help it go quicker.
      In conversation about 2 months ago
      Johnny Peligro likes this.
    • lainy (lain@lain.com)'s status on Tuesday, 30-Sep-2025 05:17:12 JST
      in reply to
      @phnt i TOLD him not to do the stupid object removal task
      In conversation about 2 months ago
      Johnny Peligro likes this.
    • Phantasm (phnt@fluffytail.org)'s status on Tuesday, 30-Sep-2025 05:17:28 JST
      in reply to
      • lainy
      • :blank:
      @i @lain Then run it, but the Pleroma DB bloat is largely a meme from lack of any maintenance. Nyanide was around 80GB after 2 years; my test instance was 4GB after consuming posts from my follows with zero relays for a year; my instance is ~30GB after 2.5 years of being subscribed to the largest Pleroma instances' relays; and Ryona Agency is, I think, also around the 80GB mark after ~3 years with mostly only bot spam deleted. My instances and Ryona reject deletes.

      I ran the task on my test instance and it halved the size to around 2.2GB. It works.

      It is also manageable. I pay 9.90 USD for this 120GB garbage IO box and most of it is for my Git mirrors.
      In conversation about 2 months ago
      Johnny Peligro likes this.
    • :blank: (i@declin.eu)'s status on Tuesday, 30-Sep-2025 05:17:30 JST
      in reply to
      • lainy
      @lain @phnt what's the alternative? can't keep buying a bigger slab forever
      In conversation about 2 months ago
    • lainy (lain@lain.com)'s status on Tuesday, 30-Sep-2025 05:17:31 JST
      in reply to
      @phnt this is the defrag of pleroma maintenance
      In conversation about 2 months ago
    • Johnny Peligro (mischievoustomato@tsundere.love)'s status on Tuesday, 30-Sep-2025 05:17:49 JST
      in reply to
      • lainy
      • :blank:
      @i @phnt @lain delete the whole thing, start again fresh on another subdomain
      In conversation about 2 months ago
    • Phantasm (phnt@fluffytail.org)'s status on Tuesday, 30-Sep-2025 05:20:22 JST
      in reply to
      • :blank:
      @i The answer isn't a "have you tried being less of a poor?", since I've literally told you to run the prune task if you need to. It solves the problem. If you want to archive the Fediverse, then that's your decision and disk space is part of that consideration. More archived data always means more disk usage. Choose one or the other.

      And the prune taking too long is mostly an issue of shitty IO on VPSes. The prune on cawfee club took like 3 weeks and did not finish on the BuyVM crap 200 IOPS limited slab, and it finished in something like 2 days on grips' laptop. Same with cum salon: pernia ran it on a VPS which was hosted, I think, on Oracle, and I assume the IO is also shit there. My IO is also shit here; a repack takes like 5 hours on this ~30GB DB, because it is limited to 35MB/s.

      The prune on the 4GB DB of pl.borked.technology on OVH's "secondary" disk limited to ~5MB/s took 8 hours to finish without a repack.

      >not to mention needing even more space to fit a repack in the first place
      pg_dump into a compressed file, dropdb pleroma, pg_restore.
      In conversation about 2 months ago
    • Phantasm (phnt@fluffytail.org)'s status on Tuesday, 30-Sep-2025 05:20:22 JST
      in reply to
      • :blank:

      @i Also since you are subscribed to like 100+ relays, it might be time to run delete from activities where data->>'type' = 'Announce' and split_part(data->>'actor', '/', 4) = 'relay'. I think that's the right query.

      In conversation about 2 months ago
      prettygood likes this.
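
      For readability, here is the relay-cleanup query from the post above as it might be run in psql, preceded by a dry-run count. The DELETE's WHERE clause is copied verbatim from the post; the SELECT wrapper is an added precaution, and the table/column names (Pleroma's activities table with a jsonb data column) should be checked against your own schema before deleting anything.

        -- Dry run: count relay Announce activities before touching anything.
        -- Assumes Pleroma's "activities" table with a jsonb "data" column.
        SELECT count(*)
        FROM activities
        WHERE data->>'type' = 'Announce'
          AND split_part(data->>'actor', '/', 4) = 'relay';

        -- Same condition as the query quoted above; run only if the count looks sane.
        DELETE FROM activities
        WHERE data->>'type' = 'Announce'
          AND split_part(data->>'actor', '/', 4) = 'relay';

      As the post itself hedges, whether this matches every relay depends on how the relay actors' URLs are shaped (split_part here picks out the first path segment), so checking the count, or a sample of matching rows, first is the cheap safeguard.
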
    • :blank: (i@declin.eu)'s status on Tuesday, 30-Sep-2025 05:20:25 JST
      in reply to
      • lainy
      @phnt @lain the answer shouldn't be "have you tried being less of a poor?"; cum.salon couldn't even finish the default db prune in two weeks, not to mention needing even more space to fit a repack in the first place

      at least a reinstall doesn't brick the domain forever anymore
      In conversation about 2 months ago
    • prettygood (prettygood@socially.drinkingatmy.computer)'s status on Tuesday, 30-Sep-2025 05:48:34 JST
      in reply to
      • :blank:
      @phnt @i reminds me I need to share my maintenance script. I've been pruning old posts and clearing the relay activities weekly (yeah I know it's turbo aggressive) during a scheduled downtime and my disk usage is very modest.
      In conversation about 2 months ago
    • prettygood (prettygood@socially.drinkingatmy.computer)'s status on Tuesday, 30-Sep-2025 06:01:13 JST
      in reply to
      • :blank:
      @phnt @i hmm. I dunno if Linode is that shitty. I should look at some storage latency stats or something. Hell I could get the storage and just set up a replica writing to it and compare, that's valid.
      In conversation about 2 months ago
    • Phantasm (phnt@fluffytail.org)'s status on Tuesday, 30-Sep-2025 06:01:14 JST
      in reply to
      • :blank:
      • prettygood
      @prettygood @i The bolt-on storage usually has shittier performance than the main storage anyway. You would have to do a complete reinstall and set up lvmcache for it to have a chance of working reasonably.

      Like running Pleroma on the BuyVM slab is basically impossible after a few months, because it is that limited. Just loading FE would probably kick it over for a few minutes. Same with search.
      In conversation about 2 months ago
    • prettygood (prettygood@socially.drinkingatmy.computer)'s status on Tuesday, 30-Sep-2025 06:01:15 JST
      in reply to
      • :blank:
      @phnt @i my instance runs on a very heavily taxed VPS and I'm too cheap to buy bolt-on storage just to move the postgres DB onto it. I interact with things I want to keep around.
      In conversation about 2 months ago
    • Phantasm (phnt@fluffytail.org)'s status on Tuesday, 30-Sep-2025 06:01:16 JST
      in reply to
      • :blank:
      • prettygood
      @prettygood @i If I were to delete the objects of the image bots I follow that have zero interactions, I would also probably cut the DB size down by a third. I don't run the prune task mostly because I like to search for posts from a year ago that are relevant $today in some way. And I can handle the data.

      LVM-VDO with lz4 compression would also help with disk space usage.
      In conversation about 2 months ago
    • (mint@ryona.agency)'s status on Tuesday, 30-Sep-2025 20:35:46 JST
      in reply to
      • Yukkuri
      • :blank:
      • prettygood
      • di0nysius the patomskyite
      @iamtakingiteasy @i @w0rm @phnt @prettygood How does that impact post editing or any other activities resulting in object change if said object is on the archived partition?
      In conversation about 2 months ago
      prettygood likes this.
    • Yukkuri (iamtakingiteasy@eientei.org)'s status on Tuesday, 30-Sep-2025 20:35:51 JST
      in reply to
      • :blank:
      • prettygood
      • di0nysius the patomskyite
      @phnt @i @w0rm @prettygood Here; I also pushed a commit inverting known activity types to the exclude_type filter, so an index would be used during post deletes.

      https://eientei.org/objects/fafbe44b-51a9-469e-a9cc-95a4d877693c
      In conversation about 2 months ago
    • Phantasm (phnt@fluffytail.org)'s status on Tuesday, 30-Sep-2025 20:35:55 JST
      in reply to
      • Yukkuri
      • :blank:
      • prettygood
      • di0nysius the patomskyite
      @w0rm @i @prettygood You can also do partitioned tables with Pleroma where old posts live on slow storage and new posts live on fast storage. It requires some Pleroma patches though. @iamtakingiteasy did just that semi-recently.
      In conversation about 2 months ago
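
      As a rough illustration of the partitioning idea in the post above (not the actual schema or the patches @iamtakingiteasy used; see the linked commit for those), stock PostgreSQL can place range partitions of one table on different tablespaces, so older rows sit on slow storage while recent ones stay on fast storage. The table, column, and path names below are hypothetical.

        -- Illustrative sketch only: declarative range partitioning with a
        -- per-partition tablespace. Names are made up, not Pleroma's real schema.
        CREATE TABLESPACE slow_disk LOCATION '/mnt/slow/pgdata';

        CREATE TABLE posts (
            id          bigint      NOT NULL,
            inserted_at timestamptz NOT NULL,
            data        jsonb       NOT NULL
        ) PARTITION BY RANGE (inserted_at);

        -- Archived rows live on the slow tablespace, current rows on the default (fast) one.
        CREATE TABLE posts_archive PARTITION OF posts
            FOR VALUES FROM (MINVALUE) TO ('2025-01-01') TABLESPACE slow_disk;
        CREATE TABLE posts_current PARTITION OF posts
            FOR VALUES FROM ('2025-01-01') TO (MAXVALUE);

      On the post-editing question raised earlier in the thread: with plain range partitioning like this, an edit that only updates a row's data is rewritten in place on whichever partition the row lives on (just at that partition's slower I/O); only a change to the partition key itself moves the row to another partition, which PostgreSQL has handled automatically since version 11.
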
    • di0nysius the patomskyite (w0rm@dsmc.space)'s status on Tuesday, 30-Sep-2025 20:35:57 JST
      in reply to
      • :blank:
      • prettygood
      @phnt @i @prettygood

      A) Post ephemerality
      B) VPS wg backhaul because storage is actually cheap.
      In conversation about 2 months ago
