This is an inescapable, biological aspect of human cognition: we *can't* maintain vigilance for rare outcomes. This has long been understood in automation circles, where it is called "#AutomationBlindness" or "#AutomationInattention":
https://pubmed.ncbi.nlm.nih.gov/29939767/
Here's the thing: if the machine does the right thing nearly all of the time, the human "supervisor" who oversees it becomes *incapable* of spotting its errors.
10/