Google Chrome Developers
Published July 27, 2016
Continuing our series on Accessibility (a11y), today we're talking about screen readers! Screen readers take the semantic information from your elements and produce an alternative, spoken UI for users with visual impairments. But when we're designing Web Components, we're creating tags that have never existed before in the browser, meaning they don't have built-in semantics. So what's a screen reader supposed to do? Today on Polycasts we'll look at how we can leverage ARIA to add semantics to our elements, to make sure they're properly announced and all of our users can interact with them.
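As a rough illustration of the idea in the video (this is a sketch, not the episode's exact code, and `howto-toggle` is a made-up tag name): a custom element has no built-in role, so we can set `role`, `tabindex`, and `aria-pressed` ourselves to give screen readers something to announce.

```javascript
// Sketch: give a brand-new custom element button semantics via ARIA.
// Guard the base class so the snippet also loads outside a browser.
const Base = typeof HTMLElement !== 'undefined' ? HTMLElement : class {};

class HowtoToggle extends Base {
  connectedCallback() {
    // Custom tags have no implicit role, so declare one with ARIA,
    // but don't clobber values the page author already set.
    if (!this.hasAttribute('role')) this.setAttribute('role', 'button');
    if (!this.hasAttribute('tabindex')) this.setAttribute('tabindex', '0');
    if (!this.hasAttribute('aria-pressed')) this.setAttribute('aria-pressed', 'false');

    this.addEventListener('click', () => this._toggle());
    this.addEventListener('keydown', (e) => {
      // Native buttons activate on Space and Enter; replicate that here.
      if (e.key === ' ' || e.key === 'Enter') {
        e.preventDefault();
        this._toggle();
      }
    });
  }

  _toggle() {
    // Flipping aria-pressed is what makes a screen reader announce
    // the new state ("pressed" / "not pressed").
    const pressed = this.getAttribute('aria-pressed') === 'true';
    this.setAttribute('aria-pressed', String(!pressed));
  }
}

if (typeof customElements !== 'undefined') {
  customElements.define('howto-toggle', HowtoToggle);
}
```

With this in place, `<howto-toggle>Mute</howto-toggle>` is focusable, keyboard-operable, and announced as a toggle button; the ARIA Authoring Practices guide linked below describes the expected semantics for this and many other widget patterns.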
Screen Readers:
VoiceOver
apple.com/accessibility/osx/vo...
NVDA
nvaccess.org
JAWS
freedomscientific.com/Products...
ChromeVox
chromevox.com
Although we didn't mention it in the video, Windows also includes the Narrator screen reader (Win + Enter to activate).
W3C ARIA Authoring Practices 1.1
goo.gl/8qs7VF
Polymer Slack:
goo.gl/WHjzMH