Can AI help boost accessibility? These researchers tested it for themselves

Engineering  |  News releases  |  Research  |  Technology

November 2, 2023

Seven researchers at the University of Washington tested AI tools’ utility for accessibility. Though the researchers found cases in which the tools were helpful, they also found significant problems. AI-generated images such as these helped one researcher with aphantasia (an inability to visualize) interpret imagery from books and visualize concept sketches of crafts, yet other images perpetuated ableist biases. University of Washington/Midjourney

Generative artificial intelligence tools like ChatGPT, an AI-powered language tool, and Midjourney, an AI-powered image generator, can potentially assist people with various disabilities. These tools could summarize content, compose messages or describe images. Yet the degree of this potential is an open question, since, in addition to regularly spouting inaccuracies and failing at basic reasoning, these tools can perpetuate ableist biases.
This year, seven researchers at the University of Washington conducted a three-month autoethnographic study — drawing on their own experiences as people with and without disabilities — to test AI tools’ utility for accessibility. Though the researchers found cases in which the tools were helpful, they also found significant problems with AI tools in most use cases, whether they were generating images, writing Slack messages, summarizing writing or trying to improve the accessibility of documents.
The team presented its findings Oct. 22 at the ASSETS 2023 conference in New York.
“When technology changes rapidly, there’s always a risk that disabled people get left behind,” said senior author Jennifer Mankoff, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “I’m a really strong believer in the value of first-person accounts to help us understand things. Because our group had a number of people who could experience AI as disabled people and see what worked and what didn’t, we thought we had a unique opportunity to tell a story and learn about this.”
The group presented its research in seven vignettes, often amalgamating experiences into single accounts to preserve anonymity. For instance, in the first account, “Mia,” who has intermittent brain fog, used ChatPDF.com, which summarizes PDFs, to help with work. While the tool was occasionally accurate, it often gave “completely incorrect answers.” In one case, the tool was both inaccurate and ableist, changing a paper’s argument to sound like researchers should talk to caregivers instead of to chronically ill people. “Mia” was able to catch this, since the researcher knew the paper well, but Mankoff said such subtle errors are some of the “most insidious” problems with using AI, since they can easily go unnoticed.
Yet in the same vignette, “Mia” used chatbots to create and format references for a paper they were working on while experiencing brain fog. The AI models still made errors, but the technology proved useful in this case.
Mankoff, who has spoken publicly about having Lyme disease, contributed to this account. “Using AI for this task still required work, but it lessened the cognitive load. By switching from a ‘generation’ task to a ‘verification’ task, I was able to avoid some of the accessibility issues I was facing,” Mankoff said.
The results of the other tests the researchers selected were similarly mixed:

One author, who is autistic, found AI helped to write Slack messages at work without spending too much time troubling over the wording. Peers found the messages “robotic,” yet the tool still made the author feel more confident in these interactions.
Three authors tried using AI tools to increase the accessibility of content such as tables for a research paper or a slideshow for a class. The AI programs were able to state accessibility rules but couldn’t apply them consistently when creating content.
Image-generating AI tools helped an author with aphantasia (an inability to visualize) interpret imagery from books. Yet when they used the AI tool to create an illustration of “people with a variety of disabilities looking happy but not at a party,” the program could conjure only fraught images of people at a party that included ableist incongruities, such as a disembodied hand resting on a disembodied prosthetic leg.

“I was surprised at just how dramatically the results and outcomes varied, depending on the task,” said lead author Kate Glazko, a UW doctoral student in the Allen School. “In some cases, such as creating a picture of people with disabilities looking happy, even with specific prompting — can you make it this way? — the results didn’t achieve what the authors wanted.”
The researchers note that more work is needed to develop solutions to the problems the study revealed. One particularly complex problem involves developing new ways for people with disabilities to validate the products of AI tools, because in many cases when AI is used for accessibility, either the source document or the AI-generated result is inaccessible. This happened in the ableist summary ChatPDF gave “Mia” and when “Jay,” who is legally blind, used an AI tool to generate code for a data visualization. He could not verify the result himself, but a colleague said it “didn’t make any sense at all.” The frequency of AI-caused errors, Mankoff said, “makes research into accessible validation especially important.”
Mankoff also plans to research ways to document the kinds of ableism and inaccessibility present in AI-generated content, as well as investigate problems in other areas, such as AI-written code.
“Whenever software engineering practices change, there is a risk that apps and websites become less accessible if good defaults are not in place,” Glazko said. “For example, if AI-generated code were accessible by default, this could help developers learn about and improve the accessibility of their apps and websites.”
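To illustrate what such a default could look like, here is a minimal hypothetical sketch in TypeScript, not code from the study: two versions of a small function a code-generating AI might produce for a web page. The function names and markup are illustrative assumptions.

    // Hypothetical sketch: two ways an AI assistant might generate the same UI.

    // An inaccessible default: no alternative text on the image, and a clickable
    // <div> that screen readers will not announce as a button.
    function renderCardInaccessible(imageUrl: string): string {
      return `<div class="card">
        <img src="${imageUrl}">
        <div class="close" onclick="dismiss()">X</div>
      </div>`;
    }

    // An accessible default: alt text is a required parameter, the control is a
    // real <button>, and the icon-only control carries an aria-label.
    function renderCardAccessible(imageUrl: string, altText: string): string {
      return `<div class="card">
        <img src="${imageUrl}" alt="${altText}">
        <button type="button" aria-label="Dismiss card" onclick="dismiss()">X</button>
      </div>`;
    }

If generators emitted the second form by default, developers copying the output would inherit accessible patterns, such as alt text and properly labeled buttons, rather than having to retrofit them.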
Co-authors on this paper are Momona Yamagami, who completed this research as a UW postdoctoral scholar in the Allen School and is now at Rice University; Aashaka Desai, Kelly Avery Mack and Venkatesh Potluri, all UW doctoral students in the Allen School; and Xuhai Xu, who completed this work as a UW doctoral student in the Information School and is now at the Massachusetts Institute of Technology. This research was funded by Meta, the Center for Research and Education on Accessible Technology and Experiences (CREATE), Google, an NIDILRR ARRT grant and the National Science Foundation.
For more information, contact Glazko at [email protected] and Mankoff at [email protected].

Tag(s): Center for Research and Education on Accessible Technology and Experiences • Jennifer Mankoff • Kate Glazko • Paul G. Allen School of Computer Science & Engineering

