A small indicator of the level of semantic understanding and accuracy of LLMs:
the number of times autocorrect inserts “it’s” when it should be “its”.
Now imagine that manner of ‘understanding’ being applied to, say, whether you should be considered for a job interview, or whether your face matches a criminal suspect’s.