I find it interesting that NVDA does not read captions associated with figures during first-letter navigation (and it may be that no screen reader does). It makes reading through articles like this one, where each heading has a picture under it as well as text, quite fatiguing. If you navigate by graphics alone with a screen reader, there's no knowing which image belongs to which heading, and no knowing which headings can be skipped, because the images carry the same alt text as their captions. https://www.businessinsider.com/burger-king-closing-restaurants-review-2023-11
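To illustrate the kind of markup I mean (a made-up snippet, not taken from the article): when the alt text simply repeats the caption, jumping from graphic to graphic reads the same short string every time and gives no hint which heading the image sits under.

```html
<!-- Hypothetical figure markup: the alt text duplicates the caption, so
     navigating by graphics reads the caption text twice in effect and
     never mentions the heading above it. -->
<h2>Example article heading</h2>
<figure>
  <img src="storefront.jpg" alt="A Burger King storefront with a closed sign">
  <figcaption>A Burger King storefront with a closed sign</figcaption>
</figure>
```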
Aha, progress: Voicify removed the Hitler AI voice model. Sure, the RVC model and files will still exist online, and people will use them if they want to, but at least they, as a platform, decided to stand against it and finally take it down. Good on them; at least their service can't be used to generate content in the likeness of that voice. Some will say it's their freedom and right to do so, but it is also the freedom, or collective reasoning, of a platform to choose not to host such models, publicly anyway.
Many blind people, as I just found out, may not be aware that you can use touch typing to enter your Apple Watch passcode (since watchOS 10), rather than double-tapping each key individually. This surprised me. When you are on the passcode screen, rotate the rotor to find "typing style" and, just like on iOS, change it to "touch typing." It may simply not be obvious to people that this screen presents a standard keyboard, so they never check.
To be fair, we can credit GPT-4 with this correct answer; the earlier response was from GPT-3.5: "To bring up the NVDA menu, you can press the `NVDA` key + `N`. The `NVDA` key is usually either the `Insert` key or the `Caps Lock` key, depending on your settings. Once the menu is open, you can navigate through it using your arrow keys. If you're looking to customize the key that acts as the `NVDA` key or explore more shortcuts, you can usually do that from the Preferences or Settings within the NVDA menu."
Ahahahahaha, this is funny. One way that GPT can hallucinate: "To bring up the NVDA menu, you can use JavaScript to trigger it. Here's a code snippet that demonstrates how to do this:"

```
// Get a reference to the NVDA controller element
const nvdaController = document.querySelector('#nvda-controller');

function openNVDAContextMenu() {
  const event = new MouseEvent('click', {
    bubbles: true,
    cancelable: true,
    view: window,
  });
  nvdaController.dispatchEvent(event);
}
```

Yeah, right!
Oh yes, GPT-4 will happily tell you to shift the keyboard focus to the form field that has an error and fire off an alert message at the same time, even though combining those two can be an anti-pattern: the focus shift will most likely keep the alert from being announced. It took a bit of prodding to get it to suggest instead putting a link next to the error message that takes the person to the errored field, because that leaves the user in control. This is why I worry about AI being used for accessibility work by non-experts.
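Here's a minimal sketch of that link-based pattern (my own illustration, not what GPT-4 produced; the ids, field names, and messages are made up). The error summary is announced as an alert, and the link inside it lets the person decide when to move to the field:

```html
<!-- Error summary: announced when rendered, but it does not steal focus. -->
<div role="alert">
  <p>There is a problem with your submission.</p>
  <p><a class="error-link" href="#email">Enter a valid email address</a></p>
</div>

<label for="email">Email address</label>
<input id="email" name="email" type="email" aria-invalid="true" aria-describedby="email-error">
<span id="email-error">Use the format name@example.com</span>

<script>
  // Fragment links do not reliably move focus to form controls in every
  // browser, so focus the field explicitly when an error link is activated.
  document.querySelectorAll('.error-link').forEach((link) => {
    link.addEventListener('click', (event) => {
      const field = document.querySelector(link.getAttribute('href'));
      if (field) {
        event.preventDefault();
        field.focus();
      }
    });
  });
</script>
```

The key difference from the focus-plus-alert suggestion is that nothing moves the user anywhere until they choose to follow the link.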
Originally from Hungary, but living and working in the US now. Web engineer, avid philosopher. Fun, random, optimistic. Friend to many, with an open mind. Passionate about accessibility and usability. Tinkering with Raspberry Pi, meditation, ham radio (K7HUN), and more.