@oscherler @mattly @jenniferplusplus The scenario I fear will come true is like this:
* Platinum care: human diagnosis (maybe additional AI)
* Gold care: AI care checked by humans
* Standard care: AI care (only checked by humans after complaints)
Also, of course doctors will be able to see 50% more patients because of all the AI help. 🤪
@jenniferplusplus Collapsing notifications is one good reason to use alternate clients. My favorite is Phanpy: https://phanpy.social/ but others like Elk are also good. (I know there is at least one native mobile client too, but both of the above web clients also work well on mobile.)
At this point I can't imagine using the "Mastodon" clients for any reason other than compatibility testing.
I'm wondering what kind of safety tools you have in mind? I've been thinking about possible chosen third-party screening to help protect frequently targeted people (in both public and direct/semi-private messages). In my idea, a person could choose the level of screening they want, from filtering only severe abuse all the way up to filtering out non-constructive or negative messages (rough sketch of the levels below).
I'll spare you the full rant this time unless you want it.
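Very roughly, the levels could be modeled like this; the names and the ScreeningLevel enum are just my own illustration, nothing that exists today:

```python
from enum import IntEnum

class ScreeningLevel(IntEnum):
    """Hypothetical screening strictness a person could pick."""
    OFF = 0                     # no third-party screening
    SEVERE_ABUSE_ONLY = 1       # filter only clear, severe abuse
    NEGATIVE_OR_UNHELPFUL = 2   # also filter non-constructive or negative messages

# Example: a frequently targeted person opts for the strictest level.
my_preference = ScreeningLevel.NEGATIVE_OR_UNHELPFUL
assert my_preference >= ScreeningLevel.SEVERE_ABUSE_ONLY  # stricter levels include the lower ones
```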
@hrefna The examples you list sound good--I hadn't considered limiting notifications before. (Probably because I turn off all real-time notifications.) I like requiring extra clicks as friction (like a USEFUL CW--though I disable CWs myself because almost all of them are just annoying).
I can't quite picture a reasonable UI for blocks affecting others in my social network (other than a shared blocklist). I am slowly working on ideas for score-based ranking of notifications and replies, but only with local data. 1/x
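For what it's worth, the kind of local-only scoring I'm sketching looks roughly like this; every signal and weight is a placeholder I made up, the point being that all inputs are data the home server already holds:

```python
def score_notification(author, note, me):
    """Toy local-only score: higher = more likely worth seeing first."""
    score = 0.0
    if author["id"] in me["following"]:
        score += 2.0                                   # I chose to follow them
    score += 0.5 * me["past_interactions"].get(author["id"], 0)  # prior back-and-forth
    if me["id"] in note["mentions"]:
        score += 1.0                                   # directly addressed to me
    if author["account_age_days"] < 7:
        score -= 1.5                                   # brand-new accounts rank lower
    return score

def rank(notifications, me):
    """Sort locally, highest score first; no remote data needed."""
    return sorted(
        notifications,
        key=lambda n: score_notification(n["author"], n["note"], me),
        reverse=True,
    )
```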
I am suggesting a new screening option for reply limits. This would be in addition to easy rules like "mentioned people may reply" and "people followed by the original poster may reply".
The basic idea would be that the original poster somehow lists (maybe on their profile?) one or more screening services that they trust. When a stranger replies, the original poster's server would hold the reply hidden until a screening service approves it.
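A rough sketch of what the original poster's server might do; the field names, the pending_replies queue, and the trusted_screeners list are all invented for illustration:

```python
trusted_screeners = ["https://screener.example/api"]   # advertised on my profile

pending_replies = {}   # reply id -> reply object, kept out of the thread for now

def on_incoming_reply(reply, original_post, poster_follows):
    author = reply["author"]
    # Easy rules still apply: mentioned people and followed people may reply.
    if author in original_post["mentions"] or author in poster_follows:
        return "show"
    # Stranger: hold the reply, hidden, until a trusted screener approves it.
    pending_replies[reply["id"]] = reply
    return "held_for_screening"

def on_screener_verdict(reply_id, approved):
    reply = pending_replies.pop(reply_id, None)
    if reply is None:
        return "unknown_reply"
    return "show" if approved else "keep_hidden"
```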
@hrefna Reply limits are useful. Followers-only replies are a good way to require a little effort, but they have some issues IMO:
* With open following it is too easy for bad actors to follow just to get a reply in, while the requirement is *also* too much of a barrier for many good actors who want to contribute.
* Approval-required follows are way too much effort for most people. Even if promptly approved, it is likely too late for a good actor who wanted to make a quick positive comment.
* Perhaps in low-risk situations the original poster could also click through and approve replies themselves.
* One would set up an account with a screening service and link it to one's fedi account. The original poster's server would forward held messages to that service for approval.
* I think this could be done with changes only to the original poster's server, not the reply-guy's server. It would be nice if the reply server knew about the screening so it could warn the person replying. (Rough sketch of the forwarding step below.)
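Here's roughly what the forwarding step could look like; the /screen endpoint and the payload shape are invented, since a real screening service would define its own API:

```python
import json
import urllib.request

def forward_for_screening(reply, screener_url, poster_account):
    """Original poster's server sends a held reply to the linked screening service."""
    payload = {
        "for_account": poster_account,    # whose screening preferences apply
        "reply_id": reply["id"],
        "author": reply["author"],
        "content": reply["content"],
    }
    req = urllib.request.Request(
        screener_url + "/screen",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        verdict = json.load(resp)         # e.g. {"approved": true}
    return verdict.get("approved", False)
```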
@tchambers @lrhodes In addition to reply controls, I'd like the original thread poster to have the ability to "cut/prune" bad replies--just removing them from the thread tree. They would not be deleted (that is a decision for a moderator), just no longer attached to the original thread.
Technically this would probably be a new message sent out informing clients that message "id2" should no longer be threaded under message "id1".
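Something like this, perhaps; the activity type and field meanings are purely speculative (a real design might instead reuse the ActivityStreams Remove activity with the original post's replies collection as the target):

```python
# Speculative shape for the "prune" notice, written as a Python dict so it can
# be serialized to JSON. "RemoveReply" is an invented type name.
prune_notice = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "RemoveReply",                          # hypothetical activity type
    "actor": "https://example.social/users/original_poster",
    "object": "https://other.example/notes/id2",    # the reply being detached
    "target": "https://example.social/notes/id1",   # the post it no longer threads under
}

# A receiving client would stop displaying id2 under id1; the reply itself is
# not deleted and still exists on its author's server.
```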
This is missing something: people still see bad *replies*, which were not chosen by the person they follow or by a booster.
Even an innocent cat picture can get nasty replies, and normal users will read every reply they are mentioned in, even if it is downranked.
If reply controls are implemented in time and are widely available and used, this may be less of a problem, although at the cost of losing friendly replies from strangers. 1/x
@renchap @b0rk Another idea to consider is showing the profile text of the person being replied to. In that text one could say things like "Read my FAQ before replying." or "Please, no advice unless I specifically ask for it."