@emilymbender I don't have an article, but to summarize: it's pretty much an advanced form of lossy compression for text, running on someone else's server, accessed through a web interface written in JavaScript under a proprietary license.
"OpenAI" has scraped a bunch of text from the internet (most under proprietary terms, some under free terms with requirements like attribution) and shoved the whole lot through a neural network. The end result is a very energy-hungry language model that accepts a question and vomits out convincing recombinations of the input text that somewhat match the question, with the output filtered through grammar, spelling and tone checking so it looks very convincing.
By pure chance it gets some things right, but it gets most things wrong, both because so much of the input was incorrect and because of the lossy nature of the reproduction.
@emilymbender would you count all the stuff on “prompt engineering” as part of the lay literature? This is a really interesting question, getting at what people think about when they think about ChatGPT and AI.
@emilymbender Not specific to ChatGPT, but this Drexel/IBM paper is along those lines: "People’s Perceptions Toward Bias and Related Concepts in Large Language Models: A Systematic Review" - https://arxiv.org/pdf/2309.14504.pdf
@emilymbender @johnlaudun not quite what you're looking for, but I have been informally collecting examples of 'advice to teachers' and feel there is a need to understand how people become generative AI 'experts'. Some of the prompts suggest the experts are not even trying out their own recommendations, let alone exploring them in collaboration with students. I could point you to some positivist studies of student 'perceptions of genAI', treating students as lay people and perceptions as measurable.
Kim, K., Kwon, K., Ottenbreit-Leftwich, A. et al. Exploring middle school students’ common naive conceptions of Artificial Intelligence concepts, and the evolution of these ideas. Educ Inf Technol 28, 9827–9854 (2023).
This study explores middle schoolers' common naive conceptions of AI and how those conceptions evolve during an AI summer camp. Data were collected from 14 middle school students.
@emilymbender I've not seen academic studies, but have been surprised, when speaking to lay audiences, by how well a basic explanation (text prediction + RLHF) gets them to the point where they can answer the questions they identified at the beginning of the sessions: can you trust the answers; is there bias; what sort of questions do they answer well...?
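For anyone who wants to make the "text prediction" half of that explanation concrete, here is a toy sketch: a bigram model that picks the next word by sampling from counts of what followed each word in a tiny corpus. This is an illustration only, not how GPT-style models actually work internally (they use neural networks over subword tokens, not word counts), and the corpus and function names below are made up for the example.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, n_words, seed=0):
    """Repeatedly predict the next word and append it -- the same
    loop shape as LLM text generation, minus the neural network."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        counts = following.get(out[-1])
        if not counts:  # dead end: nothing ever followed this word
            break
        words = list(counts)
        weights = [counts[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 5))
```

The point the sketch makes for a lay audience: every word is chosen because it plausibly follows the previous text in the training data, not because the model checked it against reality, which is why the trust and bias questions have the answers they do.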