Conversation

Notices

  1. ♡ miss olivia ♡ (olivia@pl.voltrina.net)'s status on Tuesday, 15-Oct-2024 01:17:01 JST
    >pleroma database has grown by 7GB in a span of 4 days
    what
    In conversation about 7 months ago from pl.voltrina.net
    • mint (mint@ryona.agency)'s status on Tuesday, 15-Oct-2024 01:16:56 JST
      in reply to
      • :blank:
      @i @olivia Bet a good half of that would happen to be a bunch of those mystery misskey.io posts.
      In conversation about 7 months ago

      Attachments

      1. Misskey.io
        Misskey.io is a decentralized microblogging SNS born on planet Earth. Since it exists within the Fediverse (a universe made up of many different social networks), it is interconnected with other SNSs. Why not step away from the hustle and bustle of the city for a while and dive into a new internet? For inquiries: https://go.misskey.io/support Powered by Misskey
    • :blank: (i@declin.eu)'s status on Tuesday, 15-Oct-2024 01:16:57 JST
      @olivia how do you have so many stuck jobs lol, normal oban should hover at a couple of k jobs, if you don't mind losing the unfederated backlog, you can just delete from the table freely

      mind showing what pleroma=> select state, queue, attempt, count(*) from oban_jobs group by state, queue, worker, attempt order by count(*) desc; prints?
      In conversation about 7 months ago
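
      (The diagnostic query above, restated on its own lines for readability. Note the quirk acknowledged later in the thread: worker is in the GROUP BY but not the SELECT, so visually identical rows can appear more than once in the output.)

        SELECT state, queue, attempt, count(*)
        FROM oban_jobs
        GROUP BY state, queue, worker, attempt
        ORDER BY count(*) DESC;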
    • ♡ miss olivia ♡ (olivia@pl.voltrina.net)'s status on Tuesday, 15-Oct-2024 01:16:59 JST
      in reply to
      • :blank:
      oban_jobs seems to be the culprit
      In conversation about 7 months ago

      Attachments


      1. https://media.voltrina.net/media/3a574e2e93ce8140cd8d452693c056819c9cf7bbef73e08ba89067818f48296d.png
    • :blank: (i@declin.eu)'s status on Tuesday, 15-Oct-2024 01:17:00 JST
      @olivia which tables? https://pl.voltrina.net/phoenix/live_dashboard/ecto_stats

      some people keep seeing massive instance killing database growth but never share the details
      In conversation about 7 months ago
    • mint (mint@ryona.agency)'s status on Tuesday, 15-Oct-2024 01:32:19 JST
      in reply to
      • :blank:
      @i @olivia Here's another report of the same shit happening: https://git.pleroma.social/pleroma/pleroma/-/issues/3335
      20 retries might indeed be too much (I have around 1500 jobs right now, most of which are completed; a good half of those that aren't are retries for DRC, since verita's :cloudflare: rules keep blocking me or something), but I'm more interested in how the fuck they manage to pile up so hard. Forcefetched the mentioned post without any issues.
      In conversation about 7 months ago

      Attachments

      1. Pleroma server flooding "Object rejected while fetching" and using all CPU like a runaway diesel engine (#3335) · Issues · Pleroma / pleroma · GitLab
        guys help... one of my pleromas is flooding this same error over and over again extremely rapidly, the beam and postgres are maxing out all cpus, the log...
    • :blank: (i@declin.eu)'s status on Tuesday, 15-Oct-2024 01:32:20 JST
      @olivia forgot feld made most things a background job when removing worker from select but not group; probably as mint says, remote fetch workers stuck with 20(!) retries, working through spam

      run watch 'sudo -Hu postgres psql pleroma -c "delete from oban_jobs where attempt > 3;"' for some hours and it should clear up
      In conversation about 7 months ago
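      (The cleanup command above, spelled out: watch re-runs the delete every two seconds by default, and each pass removes only jobs that have already been retried more than three times, so fresh jobs are left alone.)

        watch 'sudo -Hu postgres psql pleroma -c "delete from oban_jobs where attempt > 3;"'
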
    • ♡ miss olivia ♡ (olivia@pl.voltrina.net)'s status on Tuesday, 15-Oct-2024 01:32:21 JST
      in reply to
      • :blank:

      there you go

       state     | queue                | attempt |   count
      -----------+----------------------+---------+---------
       available | background           |       0 | 3773277
       available | background           |       2 | 1863333
       available | background           |       1 | 1022895
       available | background           |       3 |  494206
       available | background           |       4 |  266730
       available | background           |       5 |  130975
       available | background           |       6 |   67975
       available | background           |       7 |   34023
       available | background           |       0 |   28042
       available | background           |       8 |   16993
       available | background           |       9 |    8558
       available | notifications        |       0 |    6916
       available | background           |      10 |    4201
       available | background           |      11 |    2264
       available | background           |      12 |     988
       available | background           |      13 |     480
       available | background           |      14 |     205
       completed | federator_outgoing   |       1 |     150
       completed | federator_incoming   |       1 |     130
       completed | search_indexing      |       1 |     127
       available | background           |      15 |     117
       cancelled | federator_incoming   |       1 |      85
       scheduled | background           |       0 |      77
       retryable | background           |      10 |      67
       retryable | background           |       2 |      56
       available | background           |       0 |      55
       retryable | background           |       3 |      34
       retryable | slow                 |      19 |      28
       cancelled | slow                 |       1 |      25
       executing | federator_incoming   |       1 |      23
       available | check_domain_resolve |       0 |      20
       available | mailer               |       0 |      20
       retryable | slow                 |      18 |      17
       retryable | background           |      13 |      16
       discarded | federator_incoming   |       5 |      16
       executing | background           |       3 |      12
       executing | background           |       2 |      12
       executing | federator_incoming   |       4 |      11
       retryable | background           |       4 |      11
       executing | federator_incoming   |       2 |      10
       retryable | background           |       6 |       9
       retryable | federator_outgoing   |       4 |       9
       executing | federator_incoming   |       3 |       9
       retryable | background           |      19 |       8
       available | background           |      19 |       8
       retryable | slow                 |      17 |       8
       retryable | background           |      12 |       6
       retryable | background           |       9 |       6
       executing | federator_outgoing   |       1 |       6
       retryable | background           |       5 |       6
       available | background           |       0 |       5
       executing | federator_incoming   |       5 |       5
       available | background           |       0 |       5
       discarded | slow                 |       5 |       5
       retryable | background           |       8 |       5
       executing | background           |       4 |       4
       retryable | background           |      11 |       4
       retryable | background           |       7 |       4
       available | background           |      18 |       3
       retryable | slow                 |      15 |       3
       completed | federator_incoming   |       2 |       2
       executing | federator_outgoing   |       5 |       1
       retryable | background           |      14 |       1
       retryable | background           |      15 |       1
       retryable | slow                 |      11 |       1
       retryable | slow                 |      16 |       1
       executing | background           |       1 |       1
       completed | slow                 |       1 |       1
       completed | web_push             |       1 |       1
       available | background           |      17 |       1
       available | background           |       0 |       1
      (71 rows)
      In conversation about 7 months ago
    • mint (mint@ryona.agency)'s status on Tuesday, 15-Oct-2024 01:36:23 JST
      in reply to
      • :blank:
      @i @olivia Maybe it's related to pinned posts; I have patched my pleromer to not fetch them when seeing a new actor.
      In conversation about 7 months ago
    • feld (feld@friedcheese.us)'s status on Tuesday, 15-Oct-2024 01:42:21 JST
      in reply to
      • :blank:
      @i @olivia @mint dedup for what exactly?
      In conversation about 7 months ago
    • :blank: (i@declin.eu)'s status on Tuesday, 15-Oct-2024 01:42:22 JST
      in reply to
      • feld
      @mint @olivia i wonder if @feld would ever consider adding a dedup MRF to pleroma via simhash_ex or ex_lsh, since it would also require a cachex table, and those have to be defined ahead of time, unlike whenever we eventually switch to nebulex
      In conversation about 7 months ago
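
      (A minimal sketch of the dedup MRF idea floated above, assuming Pleroma's MRF policy behaviour with a filter/1 callback plus a Cachex table started ahead of time in the supervision tree, as the post notes. The module name, cache name, threshold, and the Simhash.similarity/2 call are illustrative assumptions, not existing Pleroma code.)

        defmodule Pleroma.Web.ActivityPub.MRF.DedupPolicy do
          @behaviour Pleroma.Web.ActivityPub.MRF.Policy

          @cache :mrf_dedup_cache   # must be defined ahead of time, per the post above
          @threshold 0.99           # reject >= 99% matches of the same text

          @impl true
          def filter(%{"type" => "Create", "object" => %{"content" => content}} = activity) do
            # Compare the incoming post text against recently seen posts.
            recent =
              case Cachex.get(@cache, :recent_posts) do
                {:ok, list} when is_list(list) -> list
                _ -> []
              end

            if Enum.any?(recent, &(Simhash.similarity(&1, content) >= @threshold)) do
              {:reject, "[DedupPolicy] near-duplicate of a recently seen post"}
            else
              # Remember the last 500 post bodies (a sketch; a real policy
              # would store hashes, not raw text).
              Cachex.put(@cache, :recent_posts, Enum.take([content | recent], 500))
              {:ok, activity}
            end
          end

          def filter(activity), do: {:ok, activity}

          @impl true
          def describe, do: {:ok, %{}}
        end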
    • feld (feld@friedcheese.us)'s status on Tuesday, 15-Oct-2024 01:46:18 JST
      in reply to
      • :blank:
      @i @olivia @mint was there a spam campaign of mostly same text, but no links and the existing MRFs don't detect it?
      In conversation about 7 months ago
    • :blank: (i@declin.eu)'s status on Tuesday, 15-Oct-2024 01:46:19 JST
      in reply to
      • feld
      @feld @olivia @mint the PASTA WITH EXTRA SPAM; like almost all the previous nuisances, it would have been discarded if 99% matches of the exact same post text were ignored
      In conversation about 7 months ago
    • mint (mint@ryona.agency)'s status on Tuesday, 15-Oct-2024 01:51:14 JST
      in reply to
      • :blank:
      • feld
      @feld @i @olivia There was; I wasn't affected, and some used AntiMentionSpam, keyword, or reject policies. That said, it isn't related to the current issue with RemoteFetcherWorkers piling up into the millions (which I believe are only spawned by the pinned post fetching pipeline in vanilla pleromer).
      In conversation about 7 months ago
    • mint (mint@ryona.agency)'s status on Tuesday, 15-Oct-2024 01:56:22 JST
      in reply to
      • :blank:
      • feld
      @feld @i @olivia Indeed, the three posts mentioned in the issue are the same three posts that are pinned on the affected actor's profile. Don't notice anything out of the ordinary in his collection aside from said posts having a shitton of emojis.
      In conversation about 7 months ago
    • feld (feld@friedcheese.us)'s status on Tuesday, 15-Oct-2024 01:58:10 JST
      in reply to
      • :blank:
      @mint @i @olivia weird, why would it keep fetching them? can you confirm for me the profile so I can take a closer look?


      also the dupes shouldn't happen with the latest develop branch; at least if it tried, it would cancel inserting the job every time because a duplicate one existed (until the pruner kicks in and clears up old Oban jobs)
      In conversation about 7 months ago
    • mint (mint@ryona.agency)'s status on Tuesday, 15-Oct-2024 01:59:43 JST
      in reply to
      • :blank:
      • feld
      @feld @i @olivia Profile is https://misskey.io/users/9mhsmldaly3m08ft; the issue mentions some transmogrifier error, but I haven't gotten it when forcefetching all three posts.
      In conversation about 7 months ago
    • feld (feld@friedcheese.us)'s status on Tuesday, 15-Oct-2024 02:04:36 JST
      in reply to
      • :blank:
      @mint @i @olivia that redirects me to an account named beer_bastard. Same?
      In conversation about 7 months ago
    • mint (mint@ryona.agency)'s status on Tuesday, 15-Oct-2024 02:09:36 JST
      in reply to
      • :blank:
      • feld
      @feld @i @olivia The exact error might be irrelevant, since they might also have some geoblocks or other :cloudflare: shenanigans going on. What's more concerning is the pileup happening in the first place; now that I think about it, it might be recursion.
      1. pleromer receives an activity referencing that guy's profile/post
      2. it fetches them
      3. fetch pipeline kicks in
      4. pinned posts fetching happens as a part of pipeline
      5. pleromer inserts RemoteFetcherWorker jobs for those posts
      6. said jobs try to fetch pinned posts again
      If that's the case (too lazy to confirm, sorry) and fetcher jobs start erroring out, the queue grows exponentially. Hopefully not?
      In conversation about 7 months ago
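
      (mint's hypothesized loop, condensed into a self-contained Elixir sketch; the function names and stub data are illustrative, not Pleroma's actual pipeline. Calling FetchLoopSketch.fetch_and_ingest/1 never terminates, which is the point: step 6 re-enters step 2, and with errors plus up to 20 retries per job the backlog snowballs instead of draining.)

        defmodule FetchLoopSketch do
          # Stand-ins for the real fetcher: every fetched object carries pinned posts.
          def fetch(id), do: %{id: id, pinned: ["pin1", "pin2", "pin3"]}
          def pinned_posts(object), do: object.pinned

          # Steps 1-3: an incoming activity references a profile/post, it gets
          # fetched, and the fetch pipeline kicks in.
          def fetch_and_ingest(id) do
            id |> fetch() |> run_pipeline()
          end

          # Steps 4-5: the pipeline enqueues a fetch job per pinned post.
          # Step 6: each job calls fetch_and_ingest/1 again, back to step 2.
          def run_pipeline(object) do
            for pinned_id <- pinned_posts(object) do
              enqueue_remote_fetch(pinned_id)
            end
          end

          # Stands in for inserting and performing a RemoteFetcherWorker job.
          def enqueue_remote_fetch(id), do: fetch_and_ingest(id)
        end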
    • mint (mint@ryona.agency)'s status on Tuesday, 15-Oct-2024 02:10:08 JST
      in reply to
      • :blank:
      • feld
      @feld @i @olivia Yeah, misskey uses flakes in their AP actor IDs.
      In conversation about 7 months ago
    • feld (feld@friedcheese.us)'s status on Tuesday, 15-Oct-2024 02:11:37 JST
      in reply to
      • :blank:
      @mint @i @olivia

      > 5. pleromer inserts RemoteFetcherWorker jobs for those posts

      these inserts were not set to be unique, but they are now
      In conversation about 7 months ago
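
      (A sketch of what "set to be unique" means in Oban terms, using Oban's documented unique-job options. The exact keys and states Pleroma chose are in the MR linked later in the thread, so the option values here are illustrative assumptions, not the actual diff.)

        defmodule Pleroma.Workers.RemoteFetcherWorker do
          # Declaring the job unique makes Oban drop duplicate inserts instead
          # of piling up millions of fetch jobs for the same pinned post.
          use Oban.Worker,
            queue: :background,
            unique: [
              period: :infinity,   # the MR description mentions changing the limit to :infinity
              states: [:available, :scheduled, :executing, :retryable, :completed],
              keys: [:op, :id]
            ]

          @impl Oban.Worker
          def perform(%Oban.Job{args: %{"op" => "fetch_remote", "id" => _id}}) do
            # actual fetch logic elided in this sketch
            :ok
          end
        end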
    • mint (mint@ryona.agency)'s status on Tuesday, 15-Oct-2024 02:14:54 JST
      in reply to
      • :blank:
      • feld
      @feld @i @olivia Indeed, but that's more of a last-frontier measure. There would still be some friction left around checking whether such a job exists and raising an exception if it does.
      In conversation about 7 months ago
    • feld (feld@friedcheese.us)'s status on Tuesday, 15-Oct-2024 02:17:24 JST
      in reply to
      • :blank:
      @mint @i @olivia you don't want to raise an exception on a duplicate job in Oban; that would break a lot of stuff needlessly. It just drops the job silently. It's not an error scenario that needs to raise / cause the process to abort.
      In conversation about 7 months ago
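
      (The behaviour feld describes, illustrated against the worker sketched above: inserting a duplicate unique job is not an error. Oban.insert/1 still returns {:ok, job} and only flags the returned struct; nothing raises.)

        # Both inserts succeed; the second is silently dropped, not raised on.
        args = %{"op" => "fetch_remote", "id" => "https://example.com/objects/1"}

        {:ok, _job} = Oban.insert(Pleroma.Workers.RemoteFetcherWorker.new(args))
        {:ok, dupe} = Oban.insert(Pleroma.Workers.RemoteFetcherWorker.new(args))

        dupe.conflict?   #=> true, the duplicate was discarded without an exception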
    • Vaghrad (vaghrad@asphodel.rip)'s status on Tuesday, 15-Oct-2024 21:13:57 JST
      in reply to
      • :blank:
      • feld
      @mint @i @feld @olivia found this post by searching for the beer bastard guy, guess that explains why my cpu was cooking itself to death the past few days :FF_Bomb:
      implementing https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4253/diffs?commit_id=a887188890a6b8c9e97c6cafe1776bb151e63843 from this thread and deleting the stuck oban jobs seems to have fixed it for now, thank you guys
      In conversation about 7 months ago

      Attachments

      1. Oban: more unique job constraints (!4253) · Merge requests · Pleroma / pleroma · GitLab
        A couple of these had unique settings applied, but we're changing the limit to :infinity We have the Pruner enabled by default which will prune completed/cancelled/errored...
