Language Facilitation through Graphics and Graphical Animation

--------------------------------------------------------------------


Contributors
Beth Mineo, Denise Peischl, Chris Pennington
Abstract
The project is designed to gather information about the way individuals respond to and operate with two-dimensional representations of actions. The activities necessary to accomplish this will proceed in three phases. First, the stimulus presentation system must be developed. Next, data must be collected. Finally, the data must be analyzed and interpreted and the findings shared with consumers, service providers, other researchers, and the manufacturing community.


Last Updated: Wed Sep 19 1994 by Chris Pennington <penningt@asel.udel.edu>


Background

There are many products on the AAC market that use pictures as the means for transferring meaning, and we are beginning to see the emergence of animation capabilities in a subset of these products. We currently know very little about how individuals with disabilities operate with these kinds of two-dimensional representations. We can, however, draw on what we know about language acquisition and language behavior to give the AAC field a scaffold for exploring questions of picture-based representation.

Purpose

This project will investigate the representation of actions in two-dimensional forms. It will determine the relative efficacy of a number of approaches for representing movement, including static pictures, video, and animated pictures. The results will provide guidance to those selecting and customizing AAC systems, as well as to manufacturers who are trying to make their products maximally responsive to the needs of people who rely on picture-based AAC systems.

Method

We plan to conduct a series of investigations regarding the ease with which individuals understand and use picture-based representations of action concepts. Studies will examine the relative effectiveness of various approaches to representing action, including static photographs with disequilibrium cues, static line drawings with disequilibrium cues, static line drawings with visual marking cues, video segments depicting action, and animated line drawings. These will be compared to measures of the subjects' comprehension of the actions when presented via a live model. Studies will include subjects of a variety of ages and cognitive levels, both with and without severe communication disabilities. We will examine the effect of representation type on learnability and language performance. We will also assess the relationship between language skills and operation with the various types of action depictions.

Data collection protocols will define the order of screen presentations to the subjects and the items depicted on each screen. In most cases, four items will be presented simultaneously and the subject will have to select the one requested by the examiner/system. For example, a screen would be segmented into quadrants, and a static line drawing with visual marking cues would appear in each cell. The drawings would represent four different actions. In the assessment of comprehension of two-dimensional representations, subjects will be asked to indicate the item that corresponds to a spoken action word. In the assessment of learnability, subjects will receive feedback regarding the accuracy of their responses. The dependent variable will be the number of trials needed to reach a criterion level of identification performance. In the video and animated line drawing conditions, the contents of all cells will be in motion when presented to the subject. In the live model condition, four people will simultaneously engage in the target action and three foil actions. The order of presentation of the protocols will be counterbalanced across subjects.
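The learnability protocol above can be sketched in pseudocode form. This is a minimal illustration only, with hypothetical names (`run_learnability_block`, `get_response`, `CRITERION_CORRECT`) and an assumed criterion of eight consecutive correct trials; the actual system, criterion level, and feedback procedure are not specified at this level of detail in the project plan.

```python
import random

# Assumed criterion for illustration: 8 consecutive correct identifications.
CRITERION_CORRECT = 8

def run_learnability_block(actions, get_response):
    """Run four-choice trials until the subject reaches criterion.

    `actions` is a list of action labels; `get_response(target, choices)`
    stands in for the touch-screen selection and returns the chosen label.
    Returns the number of trials needed to reach criterion (the dependent
    variable in the learnability assessment).
    """
    trials = 0
    consecutive_correct = 0
    while consecutive_correct < CRITERION_CORRECT:
        target = random.choice(actions)
        # One target plus three foil actions, one per screen quadrant.
        foils = random.sample([a for a in actions if a != target], 3)
        choices = foils + [target]
        random.shuffle(choices)          # randomize quadrant placement
        response = get_response(target, choices)
        trials += 1
        if response == target:
            consecutive_correct += 1     # feedback: correct
        else:
            consecutive_correct = 0      # feedback: incorrect; reset the run
    return trials
```

A subject who responds correctly on every trial would reach this assumed criterion in exactly eight trials.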

Subject performance with each of the levels of representation will be analyzed in regard to the subjects' age, physical abilities, cognitive level, and extant receptive and expressive communication skills.

System Description

Portions of this research will be accomplished via the implementation of an automated stimulus presentation system. The system will be implemented on an Apple Macintosh Quadra computer with a 20" monitor. This system provides high-performance graphics capabilities, built-in digitized sound support, and facilities to develop animation software through the Apple QuickTime extensions. The system includes a touch screen for subject input. Stimulus sets will contain representations of 20 action words demonstrated to be among those emerging earliest in the vocabularies of young children (Huttenlocher et al., 1983). The words in each stimulus set will be represented at the same level of complexity (video, static line drawings with visual marking cues, etc.).
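The organization of the stimulus sets, and the counterbalancing of protocol order mentioned in the Method section, might be structured as in the following sketch. The representation-type labels and function names are illustrative assumptions; the actual 20-word list and the counterbalancing scheme used in the study are not reproduced here.

```python
# Illustrative representation types drawn from the Method section.
REPRESENTATION_TYPES = [
    "live model",
    "static photograph with disequilibrium cues",
    "static line drawing with disequilibrium cues",
    "static line drawing with visual marking cues",
    "video segment",
    "animated line drawing",
]

def build_stimulus_sets(action_words, types=REPRESENTATION_TYPES):
    """One stimulus set per representation type, each covering the same
    20 early-emerging action words at a single level of complexity."""
    if len(action_words) != 20:
        raise ValueError("each set covers 20 early-emerging action words")
    return {t: list(action_words) for t in types}

def counterbalanced_order(types, subject_index):
    """A simple Latin-square rotation (an assumed scheme) so that protocol
    order varies systematically across subjects."""
    k = subject_index % len(types)
    return types[k:] + types[:k]
```

Under this sketch, successive subjects begin with successive representation types, so each type appears in each ordinal position equally often across a full rotation of subjects.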

Progress

Results

Publications

Acknowledgements

This work has been supported by a Rehabilitation Engineering Center grant from the National Institute on Disability and Rehabilitation Research. Additional support has been provided by the Nemours Foundation.