@futurebird @david_chisnall @CptSuperlative @emilymbender Not a criticism but an observation: despite knowing a priori that it couldn't work, you tried it with some expectation (I think?) that it might work.
This makes me realize that a large part of touching LLMs safely isn't just having a sound mental model of how they work, but also of how human minds, including your own, work (and might be fooled by them).
Or, one can take the safe path and just never touch LLMs to begin with.