Attending a Bosch event in my area; for a change I'm not on the panel. Oh interesting, not only is Kate Crawford on the panel, but Microsoft just donated a lot of $$ to "Germany's" (& through that the EU's) AI. The event is under the Chatham House Rule, so I can't say who says what. Other people on the panel: Alexandra Geese, Jeanette Hofmann, Pradnya Bivalkar. Someone thinks there's an epistemic democratic crisis because no one can tell what's true. Why this is more true of AI than human liars I have no idea.
@j2bryson I think a major problem is that legacy media has in many ways discredited itself, as have most ostensibly democratic political systems. Never more so than with the ongoing genocide in Gaza, with its massive material support from Western states and largely incidental reporting by many media outlets.
It's blatantly obvious, if one cares to look, that there are massive amounts of hypocrisy and very few (if any) principles that can't be bent or broken to support Your Own Team.
Another speaker: the impact of disinformation is entirely dependent on whether there is good media. The problem is in countries, including the US, where politicians and major media promote disinformation; there people are just using misinformation as an identity flag. [I've been writing about this for years too! Well, mostly blogging, but some forthcoming science finally, yeah.] I love this speaker, they're pointing out the historical fear of new media & manipulation around the printing press & photography.
Yet another speaker misquotes the speaker I liked as saying that trad media can counter social media, then says none of it has money so they all need clickbait, and money only comes mediated via Google and Facebook. The previous speaker agrees about the funding issue for media, but that's not the same as ascribing power to AI. Says the problem is kids these days not reading legacy media, so how do we create a sound media landscape that reaches out?
@j2bryson Yeah, in theory everything can be faked, but there are still frictions and ways to verify sources. Don't trust screenshots – go to the site itself. Don't follow links; double-check the URL. There are ways to introduce frictions and to make it harder to get tricked.
A speaker shows that targeting works with non-private data, e.g. product purchases or favourite entertainers. Another speaker talks about not wanting to be censored by big tech: we cannot make infrastructure do our work for us, a good reason to exempt platforms for the news. Someone says Facebook chose to promote negative stuff; it wasn't just the people's clicks, it was preemptive. "Varieties of Democracy" says autocratisation rises with toxic polarisation; it's not economics but hate speech [wrong] that autocratises.
In India it's really sad: people lose lives, they don't just shout at each other.
We haven't seen culture controlled by 8 people before.
OK, we're on to Q&A now. I asked why we can't teach people about recognising a valid source; someone said it was the same as watermarks, the source can be faked. I don't see that – URLs, newspapers, channels. That argument seems to rest on the assumption that we just find every story floating in the æther with some kind of tag on it that we won't believe?
We can now change this: we could ban microtargeting. The #DSA kind of half-bans it – all kinds of things you can't target on "sensitive" data, but as usual that can be guessed from insensitive data like where you live. Big problem: we need #antitrust and #competition because only big tech controls the advertising. [#DMA] Germany unfortunately keeps talking about literacy; she sides with the "it's all power" camp. [Wrong. Recognising truth is not recognising image manipulation, it's recognising reliable sources.]
Now we're hearing about the #DSA (as we should; the #AIAct is not as big a deal for elections). The companies that own AI also own social networks, and they have a profile on everyone. They not only target advertisements but also content to you [if you use recommenders, which I think you must on TikTok and can't on Mastodon]. Negative content gets a lot of reach, which means lots of rightwing crap. And these phenomena interact. The German foreign office found a Russian misinfo Twitter bot; it took ages to get it taken down.
A speaker disagrees, thinks social media has been a conduit to news for decades. But now the means of production are owned by the same people making generative AI. Musk wiped out Twitter's misinformation people and simultaneously created Grok. So this is fundamentally a different landscape than even 12–18 months ago. Speaker change: these debates happen every time there's a new medium. The "toxic destabilising" speaker is unfortunately the more charismatic one.