Inclusive AI: Why Organisations Need an Accessibility Review in Their AI Strategy
AI is transforming how organisations design, deliver, and scale digital services. But for people with disabilities, it’s also creating new barriers at unprecedented speed.
Recent research shows AI systems frequently misrepresent disabled people, reinforce stereotypes, and make biased decisions in hiring and screening. The University of Washington found that GPT‑based résumé screening downranks candidates when disability is implied, and even offers ableist justifications (UW News). The challenges AI poses include:
- Regulatory & legal pressure – the EU AI Act requires transparency for interactive/generative AI (Article 50 overview); digital accessibility lawsuits remain elevated (UsableNet 2025 mid‑year).
- Stereotyped or harmful content – AI-generated text and images often render disabled people as tragic, passive, or invisible (NYC Bar Association report).
- Dataset gaps → inaccessible outputs – Generative models can fail on disability-relevant prompts because disabled people are underrepresented in training data (Springer AI & Society); studies also document bias in image generators (CVPR/arXiv).
- Everyday interaction failures – Accessibility concerns (e.g. the privacy of information users must disclose to obtain improved accessibility) recur across domains; experts call for disability-representative data in model training and LLM testing (AFB consensus slides) and for guiding principles (AFB principles).