I'm frequently struck by the phrase "unintentional bias" which comes up frequently in work on LLMs. Whose intentions are in question? Why do we have to absolve those people all the time?
Prof. Emily M. Bender(she/her) (emilymbender@dair-community.social)'s status on Monday, 17-Feb-2025 06:41:36 JST Prof. Emily M. Bender(she/her)
Nicole Parsons (npars01@mstdn.social)'s status on Monday, 17-Feb-2025 12:02:32 JST Nicole Parsons
The developers of LLMs and AI have been warned about bias for over a decade, yet the industry makes strong efforts to purge the harbingers.
https://www.rollingstone.com/culture/culture-features/women-warnings-ai-danger-risk-before-chatgpt-1234804367/
https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/
Why?
Purges of dissenting staff occur because the investors behind these initiatives want the results so badly that they're prepared to fund the work limitlessly, no matter how poor its quality.
https://www.theregister.com/2025/02/12/larry_ellison_wants_all_data/
https://www.theregister.com/2024/09/16/oracle_ai_mass_surveillance_cloud/
Silicon Valley is more interested...
1/2