Conversation

Prof. Catherine Flick (catherineflick@mastodon.me.uk), Friday, 02-Jun-2023 18:21:25 JST:
AI should be a decision support system, not an autonomous one. We’ve seen time and time again that it doesn’t “think”; it just optimises for the end goal. Anthropomorphising it is dangerous because it lulls us into thinking it can imitate human decision making. https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test

pettter (pettter@mastodon.acc.umu.se), Friday, 02-Jun-2023 18:21:24 JST:
@CatherineFlick Even as decision support it can be very dangerous, since it can lend an impression of ‘objectivity’ to fundamentally social and subjective metrics.