@silverpill Thanks, I looked up what the cc field was yesterday and it makes sense now. I was thinking of making some kind of auto-filter script that monitors queue length and, once it hits a threshold, looks at the logs and filters the problem instances. Maybe even have it poll the instance every 5-10 minutes if it's returning a 502 and remove the filter when it comes back up. It could also be possible to do something even better in Mitra: if we hit a 502, move all queued activities destined for that server to a secondary queue that does the slow poll until it's working again, then move everything back to the main queue? Just some thoughts, might be too much to maintain in Mitra itself
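Roughly what I have in mind for the script, as a sketch only (assumptions: mitra logs to journald under the unit name "mitra", the 502 warnings look like the log line in the next post, and add_filter()/remove_filter() are hypothetical stubs for however the filter actually gets applied and removed):

```python
#!/usr/bin/env python3
"""Sketch of an auto-filter loop: count 502s per host in recent mitra
logs, filter hosts over a threshold, slow-poll them every ~10 minutes,
and lift the filter once they answer again."""
import re
import subprocess
import time
import urllib.request
from collections import Counter

FAIL_THRESHOLD = 50      # 502s per host before filtering (arbitrary)
POLL_INTERVAL = 10 * 60  # slow-poll filtered hosts every 10 minutes

# Pulls the host out of warnings like:
#   failed to process activity (HTTP status server error (502 Bad Gateway)
#   for url (https://pubeurope.com/users/fr/statuses/...))
FAIL_RE = re.compile(r"502 Bad Gateway\).*?for url \(https://([^/]+)/")

def recent_502_counts() -> Counter:
    """Count 502 failures per host in the last hour of the mitra journal."""
    out = subprocess.run(
        ["journalctl", "-u", "mitra", "--since", "-1h", "-o", "cat"],
        capture_output=True, text=True,
    ).stdout
    return Counter(FAIL_RE.findall(out))

def host_is_up(host: str) -> bool:
    """Slow poll: does the instance answer with something below 500 again?"""
    try:
        with urllib.request.urlopen(f"https://{host}/", timeout=10) as resp:
            return resp.status < 500
    except Exception:
        return False

def add_filter(host: str) -> None: ...     # hypothetical: apply the filter
def remove_filter(host: str) -> None: ...  # hypothetical: lift the filter

filtered: set[str] = set()
while True:
    for host, count in recent_502_counts().items():
        if count >= FAIL_THRESHOLD and host not in filtered:
            add_filter(host)
            filtered.add(host)
    for host in list(filtered):
        if host_is_up(host):
            remove_filter(host)
            filtered.discard(host)
    time.sleep(POLL_INTERVAL)
```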
Jul 02 01:40:36 wizard.casa mitra[1433501]: 2025-07-02T01:40:36 mitra_activitypub::queues [WARN] failed to process activity (HTTP status server error (502 Bad Gateway) for url (https://pubeurope.com/users/fr/statuses/114780187162115427)) (attempt #1): {"@context":"https://www.w3.org/ns/activitystreams","actor":"https://newsmast.community/users/politics","cc":["https://pubeurope.com/users/fr","https://www.w3.org/ns/activitystreams#Public"],"id":"https://newsmast.community/users/politics/statuses/114780189123638405/activity","object":"https://pubeurope.com/users/fr/statuses/114780187162115427","published":"2025-07-01T21:51:34Z","to":["https://newsmast.community/users/politics/followers"],"type":"Announce"}
I'm not sure how newsmast works; maybe I should've just filtered pubeurope.com instead? I see newsmast cc's other instances that throw up failures, like threads.net, so I'll remove the filter and see what happens
also filtered loma.ml and bsky.brid.gy (might bring loma back if anyone misses it and its federation stabilizes, but I saw almost 400 failed-to-re-fetch errors in the logs since 12:00, and over 2000 for kemono :aaa:)
@ps I see, I can access and use the instance but not federate over yggdrasil. @silverpill the only thing I can think of that might work is using the proxy_url param with something to route yggdrasil traffic? Or am I overthinking it
pretty sure we're blocked by threads (which is what caused the queue backup). I haven't tried removing the filter to see what happens, but I honestly don't see much value in federating with threads unless someone here wants me to double-check
mostly showing off the grafana dash on the new VPS with this post :average_enjoyer:
@silverpill I don't remember having to do anything crazy, I enabled the endpoint w/basic auth
then in telegraf.conf:

[[inputs.prometheus]]
  urls = ["$endpoint_url"]
  username = "..."
  password = "..."
then I ran it into my influxdbv2 bucket. I think I did get an error when trying the prometheus datasource in grafana directly, but I wanted historical data, so I didn't really look that hard
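In case anyone wants to copy it, the output side of that telegraf.conf would be something like this (a sketch only; the url, token, org, and bucket here are placeholders, not my actual values):

```toml
[[outputs.influxdb_v2]]
  urls = ["http://127.0.0.1:8086"]  # placeholder: local influxdb v2 instance
  token = "$INFLUX_TOKEN"           # placeholder, read from the environment
  organization = "my-org"           # placeholder
  bucket = "mitra"                  # placeholder bucket name
```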