Play-based learning earns NSF grant for visualization prof

Sharon Chu

See more animated stories created by children at the TAMU Embodied Interaction Lab

Using motion-tracking technology, Texas A&M visualization researchers are developing and testing an enhanced play system aimed at boosting children's imaginations and enriching their storytelling and writing skills.

“The goal is to use play, something very natural to children, to let them express ideas without being blocked by the technical aspects of language,” said Sharon Chu, an assistant professor of visualization and principal investigator on the three-year, $550,000 National Science Foundation project.

The process entails using motion-tracking cameras to film children, outfitted with sensors, as they act out a story of their own creation. Their movement, translated in real time by software, is mirrored on screen by an animated avatar previously drawn by the actor. The initial product: a video drawn, acted and told by the child.

The video is next viewed by the child actor, who, when tasked with writing the story, draws inspiration from the experience gained in the creation process.

“Tom walked to the store,” said Chu. “Any child can write that. But when they enact it, they might use richer language. It’s part of why enactment works.”

The idea, Chu said, is “to split the learning so they focus first on the ideas, then on the technical aspects of language, ultimately improving their storytelling and writing abilities.”

These steps harness embodied cognition, a psychological theory holding that what someone does shapes how they think about it. Because the child assumes a character they created and made real, they feel more committed when they write about it and do so more imaginatively.

Chu plans to package the motion capturing technology in a portable classroom kit for teachers.

The system is aimed at students ages nine to 11, who are learning to write more technically in school. It will be particularly helpful, she said, for non-native English speakers trying to master the specifics of the language, but she expects it to help native speakers improve their grammar, spelling and writing as well.

The technique has also been shown to boost shy children’s self-esteem.

“Introverted children don’t offer as many ideas in groups, but we’ve found, when they see their animation, they offer many more ideas,” she said, “and become assertive and collaborative.”

Keeping the video animation simple allows children to draw their own 2-D characters, backgrounds and props.

“We wanted it to be like a paper character,” Chu said. “Doing a full tracking in 3-D is very complicated and it is hard to make it look appealing unless it’s hyperrealistic. Kids also like the cartoon effect.”

Chu’s finished Digital Micro-Enactment Storytelling System, or D.I.M.E., will require only a short training session before an educator is able to operate it.

Having tested the first version, which requires four cameras, trace points and wearable motion-tracking gear in a large empty space, Chu said it will take about a year to develop a smaller, low-cost portable system more easily deployed in a classroom setting.

The first year of the three-year project will be spent on preliminary studies and system designs that will be tested the following year in charter schools. In the third year, Chu expects a few local language arts classes to adopt the technology.

“If, after three years we have a sustainable system, we can either apply for another grant to deploy it on a larger scale or pursue its commercialization,” she said.

Sarah Wilson

posted December 8, 2017