For anyone backing up a Pleroma instance DB: if you're using an SQL backup (any variant of pg_dump) - YA DON'T HAVE A WORKABLE RESTORE WITHOUT SOME MASSAGING! You MUST have the indexes pre-created before loading the data. Creating the indexes after the data load effectively never finishes... unless your DB is small.
Soooo I think I know what causes the restores to basically never complete, maybe... I'm near the limits of my psql debugging capability here - I've never hit this before, so I didn't actually know how to see what the CREATE INDEX was actively doing. But tl;dr: 1.8T rows. One Point Eight TRILLION rows. But... HOW?!
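(Aside: the "see what CREATE INDEX is actively doing" part does have an answer on PostgreSQL 12 and newer, which ship a progress view. A minimal peek from a second psql session connected to the same database:)

-- Watch an in-flight index build from another session;
-- blocks_done vs blocks_total gives a rough sense of progress
-- for the current phase (e.g. 'building index: scanning table').
SELECT pid, phase, blocks_done, blocks_total
  FROM pg_stat_progress_create_index;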
The dump+load upgrade method I tend to use causes this. In the default mode it creates the schema (sequence setup, all the functions, that sort of thing, BUT NO INDEXES), loads the data, then creates the indexes (you can see that ordering for yourself; sketch below).
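A quick way to confirm the ordering, assuming a custom-format dump named pleroma.dump (for a plain-text SQL dump, just grep the file instead):

# The dump's table of contents lists TABLE DATA entries before
# INDEX entries, i.e. indexes are only built after all rows load.
pg_restore -l pleroma.dump | grep -E 'TABLE DATA|INDEX'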
The index that never finishes building calls this function:
CREATE OR REPLACE FUNCTION public.activity_visibility(actor character varying, recipients character varying[], data jsonb)
 RETURNS character varying
 LANGUAGE plpgsql
 IMMUTABLE PARALLEL SAFE SECURITY DEFINER
AS $function$
DECLARE
  fa varchar;
  public varchar := 'https://www.w3.org/ns/activitystreams#Public';
BEGIN
  SELECT COALESCE(users.follower_address, '') INTO fa
    FROM public.users WHERE users.ap_id = actor;
  IF data->'to' ? public THEN
    RETURN 'public';
  ELSIF data->'cc' ? public THEN
    RETURN 'unlisted';
  ELSIF ARRAY[fa] && recipients THEN
    RETURN 'private';
  ELSIF NOT (ARRAY[fa, public] && recipients) THEN
    RETURN 'direct';
  ELSE
    RETURN 'unknown';
  END IF;
END;
$function$

Well. users is ~500k rows, and this function gets called once per row of the ~3.7M-row activities table... at a point in time where users has no indexes! So each invocation of activity_visibility sequentially scans ~500k users rows, times ~3.7M activities rows, and that's ~1.8 TRILLION rows visited.
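If you're stuck mid-restore, one escape hatch (a sketch, not a tested procedure - the index name below is made up, grab the real definition from your schema dump) is to cancel the stuck build, index the column the function probes, then re-run the index step:

-- Hypothetical name; the point is just that users.ap_id needs an
-- index BEFORE the activities expression index calls
-- activity_visibility() once per activities row.
CREATE INDEX IF NOT EXISTS users_ap_id_index ON public.users (ap_id);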
Why did I footgun myself like this when pg_upgrade exists? Well, binary incompatibilities and index collation differences have caused Major Heartburn in the past. I'm not going to dive into how the dump+load procedure itself might be fixed, since, well, yeah. But for anyone backing up a Pleroma DB, be warned: either take a binary copy (f/ex pg_basebackup), or dump schema and data separately so you can load the data with the indexes already in place (rough sketch below). Or find another workaround!
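A minimal sketch of the split dump+load, assuming database names pleroma (old) and pleroma_new (new); note it glosses over the fact that activity_visibility() reads from users, so table load order still matters:

# Schema (tables, functions, indexes, constraints) and data, separately.
pg_dump --schema-only pleroma > schema.sql
pg_dump --data-only -Fc pleroma > data.dump

# Create everything first, THEN load rows into the already-indexed tables.
# --disable-triggers sidesteps FK checks during the load (needs superuser).
psql -d pleroma_new -f schema.sql
pg_restore -d pleroma_new --data-only --disable-triggers data.dump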
cc @feld
I think this is too complicated. Also, Move is meant to be used when origin and target are collections, but in the FEP they are both actors.
Why not Update?
Or, if you want to be explicit, you could introduce a new activity. For example, UpdateAudience:
{ "id": "https://example.social/update-audience" "type": "UpdateAudience", "actor": "https://example.social/uid/1", "to": ["https://www.w3.org/ns/activitystreams#Public"], "cc": [ "https://example.social/audience/1/followers", "https://example.social/audience/2/followers", ], "object": "https://example.social/context/1", "prevAudience": ["https://example.social/audience/1"], "nextAudience": ["https://example.social/audience/2"], }@phnt C2S API has always been a solution looking for a problem, but it is similar enough to FEP-ae97 API, so I have no issue with people devoting their time to fixing C2S.
However, almost nobody actually works on it. There is a lot of cheap talk, but anyone who actually tries to implement C2S quickly realizes how broken it is and gives up. Most progress so far has been made by a single developer (btw: I began to document some aspects of his implementation in FEP-9f9f: Collections).
>fixing the complete mess of a specification and making a v2 spec that isn't ambiguous and open-ended as a typical corporate privacy policy
The working group is too busy renaming https://www.w3.org/ns/activitystreams#Public to as:Public