I've wondered lately whether it would be good to double down on the self-voicing (or, more generically, self-outputting?) approach. The current Windows "screen reader APIs" (that's what we call them), including the one I developed myself 10+ years ago, are too simplistic: they don't let an application be called back when a speech utterance is complete or when a button is pressed on a Braille display, and they don't let applications take over screen reader keyboard commands.
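
To make the gap concrete, here's a minimal sketch of the kind of callback-based interface such an API would need. It's written in Rust only for brevity; every name and type here is hypothetical and doesn't correspond to any existing screen reader API.

```rust
// Hypothetical sketch of a richer self-outputting API. None of these types
// exist in any current Windows screen reader API; they just illustrate the
// callbacks that are missing today.

/// Placeholder identifiers for the sketch.
#[derive(Debug, Clone, Copy)]
pub struct BrailleButton(pub u32);
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct ScreenReaderCommand(pub u32);

/// Events the screen reader would deliver back to a self-outputting app.
pub trait OutputEventHandler {
    /// Called when a previously queued utterance has finished speaking.
    fn utterance_complete(&mut self, utterance_id: u64);

    /// Called when the user presses a button on a connected Braille display.
    fn braille_button_pressed(&mut self, button: BrailleButton);

    /// Called when the user issues a screen reader keyboard command the app
    /// has claimed; return true to consume it, false to let the screen
    /// reader handle it as usual.
    fn keyboard_command(&mut self, command: ScreenReaderCommand) -> bool;
}

/// The output side of the hypothetical API.
pub trait ScreenReaderOutput {
    /// Queue text for speech; the returned ID is later passed to
    /// `utterance_complete`.
    fn speak(&mut self, text: &str) -> u64;

    /// Show text on the Braille display, if one is present.
    fn show_braille(&mut self, text: &str);

    /// Ask the screen reader to hand the listed keyboard commands to the app.
    fn claim_commands(&mut self, commands: &[ScreenReaderCommand]);
}
```

With a completion callback like `utterance_complete`, a self-voicing app could, for example, speak a long document in chunks and only queue the next chunk when the previous one finishes; without it, that kind of pacing simply isn't possible through the current APIs.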