A new method to help children use augmented reality
Researchers at The University of Texas at San Antonio (UTSA) have identified a new approach to help children use Augmented Reality (AR). According to UTSA computer science experts, a major barrier to wider adoption of the technology for experiential learning is that AR designs are geared toward adults and rely on voice or gesture commands. Through in-classroom testing with elementary school students, UTSA researchers found that AR programs work best for children when delivered through controller commands, followed by programs that communicate in age-appropriate language.
"The majority of AR programs urge users to speak commands such as 'select' but a child doesn't necessarily communicate in this manner. We have to create AR experiences that are designed with a child in mind. It's about making experiential learning grow and adapt with the intended user," said John Quarles, co-author and associate professor in the UTSA Department of Computer Science. "Currently, many voice commands are built to recognize adult voices but not children."
Quarles, along with Brita Munsinger, co-lead on the project, designed the study to replace more complex word instructions with simpler commands better understood by younger subjects. This allowed the children to complete a series of tasks faster and with fewer errors.
"One of my favorite parts of working in human-computer interaction is the impact your work can have. Any time someone uses technology, there's an opportunity to improve how they interact with it," said Munsinger. "With this project, we hope to eventually make augmented reality a useful tool for teaching STEM subjects to kids."
The UTSA study was conducted in classrooms with children ages 9-11, who wore Microsoft HoloLens headsets and were asked to complete a series of tasks. In the analysis, students made far fewer errors, reported less fatigue, and rated usability higher when their interaction with AR relied on hardware controllers. Voice and gesture selection both took longer than controller selection. Children's fatigue levels were highest when they had to make gesture commands. Moreover, gestures were rated the least usable interaction, while the controller was rated highest on usability.
(Content Courtesy: https://www.eurekalert.org/pub_releases/2020-02/uota-uft022420.php)