I have an #accessibility question for #blind and #PartiallySighted users who use text-to-speech tools.
Has the recent development of machine-learning-based synthetic voices such as Piper, Coqui, Mimic3, and Amazon Polly had any useful impact for you, or do you still use eSpeak-style voices?
Do the tools you regularly use support "Neural Voices" of the kind mentioned above?
I've noticed that integration is limited - for example, I can get Speech Note (https://flathub.org/apps/net.mkiol.SpeechNote), a transcription and reading tool, to read me anything I like in a range of up-to-date voices, but GNOME Orca - my OS-wide screen reader - remains intensely hard to customise.
But I'm a Linux user, and thus aware that accessibility remains an embarrassingly low priority in practical open source development, even where the component parts exist.
For example, few of the projects linked by the Piper team (https://github.com/rhasspy/piper?tab=readme-ov-file#people-using-piper) are focused on accessibility.
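To illustrate what I mean by "the component parts exist": this is roughly all it takes to get speech out of Piper on its own. A minimal sketch, assuming the piper binary is on your PATH and you've downloaded a voice model (the "en_GB-alan-medium.onnx" filename here is just an example) - the screen reader integration is the missing piece, not the synthesis.

```python
import subprocess

# Minimal sketch: pipe text into the Piper CLI and write a WAV file.
# Assumes `piper` is installed and the voice model file exists locally.
text = "Hello from a neural voice."
subprocess.run(
    ["piper", "--model", "en_GB-alan-medium.onnx", "--output_file", "hello.wav"],
    input=text.encode("utf-8"),
    check=True,
)
# Play the result with any audio player, e.g. `aplay hello.wav`.
```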