100%! We did it! 🏆🥇
Great job everyone!
@mekkaokereke I'm sure this is an easy fix. Maybe put "don't be racist" in the prompt.
@tob 🤣
@mekkaokereke Turing Test but for racism
@mekkaokereke didn't Amazon try this - and immediately shut it down for the same reason - like a decade ago?
No, that was completely different! 🤡
That was a primitive model, using random forests or something, that accidentally quantified and exposed bias. Old school. Remedial. An inch above trying to make fire by rubbing sticks together, or slapping flint rocks together above a clump of moss. Basic.
This is cutting edge LLMs, based on Transformer architectures... that accidentally quantified and exposed bias. High tech! Much innovation!
The future is now(tm)!
Generally unsurprising. Use biased data to produce a biased algorithm, get biased results.
But I am surprised that "name" is included in the evaluation factors. I can see no use for including that.
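For what it's worth, the "biased data in, biased model out" point is easy to demonstrate with a toy sketch. Everything below is hypothetical (the data, the naive scorer, the group labels); it just shows how a model trained on biased historical hiring decisions will happily learn a name-group feature if you feed it one:

```python
# Toy sketch, all data hypothetical: a "resume scorer" trained on biased
# historical decisions learns to score by name group.
from collections import defaultdict

# Synthetic history: (name_group, hired). Bias baked in: group "A" was
# hired 3 of 4 times, group "B" only 1 of 4, for equivalent candidates.
history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

# "Train" the naive scorer: per-group average hire rate.
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def score(group):
    hires, total = counts[group]
    return hires / total

# Name-swap audit: identical resume, only the name group differs.
print(score("A"))  # 0.75
print(score("B"))  # 0.25
```

Same candidate, different name, different score: the model reproduces exactly the bias in its training data.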
@SkipHuffman @mekkaokereke That's part of the secret sauce! If half your workforce is named "John", another John cannot hurt. And that's the kind of invaluable functionality that's promising to convince droves of CEOs they have hit pure gold.
GNU social JP is a social network, courtesy of the GNU social JP administrator. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.
All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.