I look forward to this providing a scientific measurement of how unreliable entire categories of thought are for producing meaningful output, like "we determined that critical race theory can be used to justify anything"
I can see one possible difficulty with their method: since LLMs are just doing textual analysis, they may not be able to tell the difference between a category full of bullshit and a category that is just plain filled with a lot of real-world context and ambiguity. And this may be especially true of the social sciences.
@lain so yeah, I think it's still useful information, because it can warn you when a field of thought just plain has more places for manipulators to hide and justify things. Maybe race research is necessarily filled with complexity, but then you need to be extra vigilant about not just believing what you're told. Which is the opposite of what people in these fields tell you, curious