A research team from several universities has introduced A11yShape, a new tool that aims to make three-dimensional modeling accessible to blind and low-vision programmers. Traditionally, 3D modeling software has required users to visually manipulate shapes on a screen, creating barriers for those who cannot see.
The development team includes Anhong Guo, assistant professor of electrical engineering and computer science at the University of Michigan, along with researchers from the University of Texas at Dallas, University of Washington, Purdue University, and other partner institutions. Among them is Gene S-H Kim of Stanford University, who is part of the blind and low-vision community.
A11yShape integrates OpenSCAD—a code-based 3D modeling editor—with the large language model GPT-4o. While OpenSCAD allows users to create shapes through written commands rather than mouse actions, blind users have had no way to interpret the resulting visual models. A11yShape addresses this by providing AI-generated descriptions, semantic hierarchies, and visual renderings alongside the code. This combination offers multiple ways for blind and low-vision programmers to understand and refine their models independently.
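To give a sense of what code-based modeling looks like in practice, consider a short OpenSCAD sketch (a hypothetical illustration, not one of the study's models). A few written commands, using standard OpenSCAD primitives such as cube, cylinder, translate and difference, are enough to describe a small open box with a peg for hanging:

    // A small open box with a peg, described entirely in text rather than with a mouse.
    difference() {
        cube([40, 30, 20]);              // outer block: 40 x 30 x 20 mm
        translate([3, 3, 3])
            cube([34, 24, 20]);          // carve out the interior, leaving 3 mm walls and floor
    }
    // A cylindrical peg on one side wall, e.g. for hanging the box on a hook.
    translate([20, 30, 10])
        rotate([-90, 0, 0])
            cylinder(h = 10, r = 4);

In A11yShape, code like this sits alongside an AI-generated description and a semantic hierarchy of the model's parts, so a blind or low-vision programmer can check and refine the result without relying on the visual rendering.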
To test A11yShape, the researchers conducted a multisession study with four blind or low-vision programmers who had no prior experience in 3D modeling. After an introductory tutorial, participants used the tool over three sessions to complete twelve different models. All participants were able to finish both guided and independent tasks. The system received a mean usability score of 80.6 out of 100. One participant commented: “I had never modeled before and never thought I could. … It provided us (the BLV community) with a new perspective on 3D modeling, demonstrating that we can indeed create relatively simple structures.”
The study found that participants used the tool in different ways. Some wrote most of their own code while using the AI for descriptions; others relied more on the AI to generate initial models before refining them. Features such as version control and hierarchical navigation helped users fix mistakes and find parts within their models.
Despite its benefits, A11yShape still presents challenges. Participants reported that lengthy text descriptions could be overwhelming. They also found it difficult to judge spatial relationships without tactile feedback, sometimes resulting in misaligned model parts.
Researchers believe that A11yShape marks progress toward more accessible creative tools for blind and low-vision users. Future updates may include shorter AI descriptions, improved code auto-completion, and integration with tactile feedback devices or 3D printing.
“Our vision for A11yShape is to open a door for blind and low-vision creators to step into a world of creative activities, such as 3D modeling, and to make what once seemed impossible, possible,” said Liang He from the University of Texas at Dallas.
“We’re just at the beginning,” Guo said. “Our hope is that this approach will not only make 3D modeling more accessible but inspire similar designs across other creative domains.”