AI is a marketing term, not a technical term of art.
The term “artificial intelligence” was coined in 1956 by cognitive and computer scientist John McCarthy
– about a decade after the first proto-neural network architectures were created.
In subsequent interviews, McCarthy was very clear about why he coined the term.
First, he didn’t want to include the mathematician and philosopher Norbert Wiener in a workshop he was hosting that summer.
You see, Wiener had already coined the term “cybernetics,” under whose umbrella the field was then organized.
McCarthy wanted to create his own field, not to contribute to Norbert’s
– which is how you become the “father” instead of a dutiful disciple.
This dynamic will be familiar to anyone acquainted with “name and claim” academic politics.
Second, McCarthy wanted grant money.
And he thought the phrase “artificial intelligence” was catchy enough to attract such funding from the US government,
which at the time was pouring significant resources into technical research in service of post-WWII Cold War dominance.
Now, over the term’s 70-plus-year history, “artificial intelligence” has been applied to a vast and heterogeneous array of technologies that bear little resemblance to one another.
Today, as throughout that history, it connotes more aspiration and marketing than any coherent technical approach.
And its use has gone in and out of fashion, in step with funding priorities and the hype-to-disappointment cycle.
So why, then, is AI everywhere now?
Or, why did it crop up in the last decade as the big new thing?
(2/8)