Conversation
hypolite (hypolite@friendica.mrpetovan.com)'s status on Friday, 06-Jan-2023 00:21:09 JST

@thelinuxEXP "Unlearning" isn't feasible as far as I know, but there is a kind of "negative learning" where the model is trained to steer clear of the provided material. I believe that's what is used to suppress most sensitive output from the current public models.
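(For readers wondering what such "negative learning" might look like, here is a minimal sketch of one approach discussed in the machine-unlearning literature: gradient ascent on a "forget" set alongside ordinary training on a "retain" set. All names here are hypothetical, and this is an illustration of the general idea, not a claim about how any particular public model actually does it.)

```python
# Minimal sketch of "negative learning" via gradient ascent on a forget set.
# Assumes a PyTorch classifier and two data batches; names are hypothetical.
import torch
import torch.nn.functional as F

def unlearning_step(model, optimizer, retain_batch, forget_batch, alpha=1.0):
    """One update that preserves performance on retain_batch while
    pushing the model away from reproducing forget_batch."""
    optimizer.zero_grad()

    # Ordinary descent on data we still want the model to fit well.
    retain_x, retain_y = retain_batch
    retain_loss = F.cross_entropy(model(retain_x), retain_y)

    # "Negative" term: subtracting the forget-set loss means the optimizer
    # *raises* the model's loss on that material, steering clear of it.
    forget_x, forget_y = forget_batch
    forget_loss = F.cross_entropy(model(forget_x), forget_y)

    (retain_loss - alpha * forget_loss).backward()
    optimizer.step()
```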
Nick @ The Linux Experiment (thelinuxexp@mastodon.social)'s status on Friday, 06-Jan-2023 00:21:10 JST

The more I learn about AI tools for writing, image generation, and the like, the more I think they should all be put on pause while regulation is put in place to settle who owns what, how you can opt out of having your work used to train an algorithm, and what you're entitled to if it is used.

Datasets should be fully open and accessible, and it should be possible to say you don't want your creations included, and to have them "unlearned" as well, if that's even technically feasible.