A coworker just shared a couple of scary stories about how targeted scams have become.
In one scam attempt, he received a phone call in his daughter's voice asking for help paying for a tow truck. Something didn't sound right, so he hung up and called her directly; she answered her phone nonchalantly with no idea what was going on. But someone had found her voice somewhere, knew her relationship to him, and built the scam around those precise details.
In another scam attempt, someone from the soccer league he coaches for contacted him asking for help buying gift cards for some legitimate-sounding purpose. In that case, it took a few rounds of communication before he realized something wasn't right. But again, someone had the data on him and his role in the soccer league, and customized the scam to a degree that he didn't think anything of it at first.
It's scary out there. I think we're all at risk, but I worry most about my older relatives living on their own.
@BeAware AI-augmented moderation would actually be pretty easy to put together, though you probably shouldn't give it the banhammer directly; just have it report to you.
Set up an LLM (Llama 3 8B based models should perform well), then craft a standard prompt explaining in detail what you're looking for. The prompt can get quite long to nail down all the details.
In your prompt, include a specification for a machine-readable classification string that can be extracted via script. Then run it over posts as they come in, something like the sketch below.
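To make that concrete, here's a minimal sketch in Python. It assumes a locally hosted Llama 3 8B behind an Ollama-style /api/generate endpoint (the URL, model name, and example posts are all placeholders; swap in whatever serving setup and post source your instance actually uses). The prompt asks the model to end with a fixed classification line, and a regex pulls that out for the script to act on:

```python
import re
import requests

# Assumed local endpoint and model name; adjust for your own setup.
LLM_URL = "http://localhost:11434/api/generate"
MODEL = "llama3:8b"

MODERATION_PROMPT = """You are a moderation assistant for a small fediverse instance.
Review the post below and decide whether it looks like spam, a scam attempt,
harassment, or none of these. Briefly explain your reasoning, then end your
reply with exactly one line of the form:

CLASSIFICATION: [SPAM|SCAM|HARASSMENT|OK]

Post:
{post}
"""

def classify_post(post_text: str) -> str:
    """Ask the LLM for a verdict and extract the machine-readable tag."""
    resp = requests.post(
        LLM_URL,
        json={"model": MODEL,
              "prompt": MODERATION_PROMPT.format(post=post_text),
              "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    answer = resp.json().get("response", "")
    match = re.search(r"CLASSIFICATION:\s*\[?(\w+)\]?", answer)
    return match.group(1).upper() if match else "UNPARSEABLE"

if __name__ == "__main__":
    # Posts would come from your instance's API or database; hardcoded here.
    for post in ["Buy cheap gift cards now!!!",
                 "Great game this weekend, everyone."]:
        verdict = classify_post(post)
        if verdict != "OK":
            # Report for human review rather than acting automatically.
            print(f"FLAG ({verdict}): {post}")
```

The key design point is the fixed "CLASSIFICATION:" line: it keeps the model's free-form reasoning out of the decision path, and anything that doesn't parse cleanly just gets flagged for a human to look at.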