@ramsey I don’t follow. A prompt is an instruction, no? The whole idea is that the data and instructions are intertwined which is what led to the initial set of prompt injection attempts that have now been mitigated to one extent or other. So a more accurate statement would be along the lines of: It shouldn’t accept instructions that contradict its initial prompt.