What do expert annotators see that nonexperts don't? To understand why experts far outperformed nonexperts at detecting AI-generated text, we analyze the comments each annotator provided in their explanations. Overall, nonexperts often fixate on linguistic properties that are unreliable signals of AI authorship. One example is vocabulary choice: nonexperts interpret the presence of any "fancy" or otherwise low-frequency words as a sign of AI-generated text, whereas experts are familiar with the exact words and phrases that AI overuses (e.g., testament, crucial). Nonexperts also believe that human authors are more likely to form grammatically correct sentences and thus attribute run-on sentences to AI, while experts realize the opposite is true: humans are more likely than AI to produce ungrammatical or run-on sentences. Finally, nonexperts attribute any text written in a neutral tone to AI, which results in many false positives because formal human writing is also often neutral in tone.
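To make the expert vocabulary cue concrete, here is a minimal Python sketch of a naive lexical heuristic that scores a text by how often it uses words known to be overrepresented in AI output. This is an illustration only, not the detection method studied here; aside from testament and crucial (mentioned above), the word list is a hypothetical sample.

```python
import re
from collections import Counter

# Hypothetical list of AI-overused words. "testament" and "crucial" come
# from the expert comments discussed above; the rest are assumed examples.
AI_OVERUSED = {"testament", "crucial", "delve", "tapestry", "boasts"}

def overused_word_rate(text: str) -> float:
    """Fraction of tokens that appear in the AI-overused word list."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = Counter(t for t in tokens if t in AI_OVERUSED)
    return sum(hits.values()) / len(tokens)

sample = ("The museum is a testament to the city's rich tapestry "
          "of history, and preserving it is crucial.")
print(f"overused-word rate: {overused_word_rate(sample):.3f}")
```

Note that such a heuristic mirrors only the expert strategy of keying on specific overused items; a nonexpert-style heuristic that flagged all low-frequency "fancy" words would, per the analysis above, produce many false positives on formal human writing.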