snippet from the NYT article: For this reason, the predictions of machine learning systems will always be superficial and dubious. Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that "John is too stubborn to talk to" means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd? [... because it might analogize...] The correct explanations of language are complicated and cannot be learned just by marinating in big data.