I often program and write for pleasure. Both are skills I like to exercise. Often, but not always, I'll give the results of that hobby away for free.
But here comes Google, saying "We want a presumption that we can use all of your works -- whatever copyright license you choose -- to train our AI systems."
Just fuck off. You already use some of my software for free. Why, when I say that this particular item of prose is for me, do you presume to want that too?
It's some sort of tech-bro colonialism.
(Don't give me any "But you can opt out" b.s. "Just like robots.txt" isn't a viable opt-out mechanism. The people who do metadata and rights professionally -- libraries and rights collection agencies -- don't use that mechanism, but link the metadata to the work via ISBNs and the like, or embed the metadata, as seen in the copyright notice on every movie or in Dublin Core markup. How about we use an opt-in mechanism, a set of rights to make a copy -- we could call it "copy right".)
Australia doesn't have fair use in its copyright law. It has fair dealing -- a list of allowed exceptions to copyright, narrowly drawn. There is no way that list includes "training AI".
Q: Is it interesting to you how their [Gebru and Mitchell's] warnings were received compared with the fears of existential risk expressed by ex-Google “godfather of AI” Geoffrey Hinton recently?
A: If you were to heed Timnit [Gebru]’s warnings you would have to significantly change the business and the structure of these companies [such as Google]. If you heed Geoff’s warnings, you sit around a table at Davos and feel scared.
Geoff’s warnings are much more convenient, because they project everything into the far future so they leave the status quo untouched. And if the status quo is untouched you’re going to see these companies and their systems further entrench their dominance such that it becomes impossible to regulate. This is not an inconvenient narrative at all.
Q: Unlike many other tech entrepreneurs and academics, you didn't sign either of the two recent petitions: the Future of Life Institute "pause AI" letter or last month's Center for AI Safety "existential threat" letter.
A: No. I don’t think they’re good faith. These are the people who could actually pause it if they wanted to. They could unplug the data centres. ...
@evan I'd also object to this for a second reason -- curriculum overload.
There is currently too much crammed into schooling, and that pushes against time for teaching basic life skills. If you want to teach digital literacy then something else has to go. That should be an explicit choice, because implicit choices nearly always remove play and exploration, which are among the most valuable schooling activities for both primary and secondary learning.
Life: cycling, bushwalking, amateur radio VK5TU, Linux tinkering.
Work: network engineering, systems programming, technical team leadership.
Location: Adelaide, Australia.
You're welcome to use the content without attribution, except for art like photos and films. Get in touch if they're not marked with a copyright license you find useful.
Posts SFW, with M-rated language at times.