CopyCat

CopyCat ASL recognition system

CopyCat is designed both as a platform to collect gesture data for our ASL recognition system and as a practical application which helps deaf children develop working memory and language skills while they play the game.


The system uses a video camera and wrist-mounted accelerometers as the primary sensors. In CopyCat, children use ASL to communicate with the heroine of the game, Iris the cat. For example, the child signs to Iris, "ALLIGATOR ON CHAIR" (glossed from ASL). If the child signs poorly, Iris looks puzzled, and the child is encouraged to attempt the phrase again. If the child signs clearly, Iris "poofs" the villain and continues on her way. If the child cannot remember the correct phrase to direct Iris, she can click a "help" button. The system then shows a short video of a signer demonstrating the correct ASL phrase. The child can then mimic the signer to communicate with Iris. This is similar to the help a mother provides a child who is unsure what to say in a situation.

Gesture-based interaction expands the possibilities for deaf educational technology by allowing signing children to interact with the computer using their gesture-based language. An initial goal of the system, suggested by our partners at the Atlanta Area School for the Deaf, is to elicit three- and four-sign phrases from children who normally sign in phrases of one or two signs. This task encourages more complex sign construction and helps develop working memory for language.

The initial version of CopyCat used a "Wizard of Oz" approach in which an interpreter simulated the computer recognizer. This method allowed research into the development of an appropriate game interface as well as data collection to train our hidden Markov model (HMM) based ASL recognition system. The current version uses computer recognition to determine the correctness of the children's signing, and the recognizer correctly assesses the children's signing 85% of the time. This level of accuracy did not cause the children undue frustration and allowed them to make significant gains on measures of working memory, language comprehension, and language expression.
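At its core, HMM-based verification scores a signed attempt against the model for the expected phrase and accepts it if the score is high enough. The sketch below illustrates that idea with the standard forward algorithm on a toy discrete-symbol HMM; all state counts, probabilities, and the threshold are illustrative assumptions, not CopyCat's actual models or features.

```python
import math

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Score an observation sequence under a discrete-output HMM
    using the forward algorithm; returns the log-likelihood."""
    n_states = len(start_p)
    # Initialize with the first observation symbol.
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n_states)]
    # Recurse over the remaining observations.
    for o in obs[1:]:
        alpha = [
            sum(alpha[sp] * trans_p[sp][s] for sp in range(n_states)) * emit_p[s][o]
            for s in range(n_states)
        ]
    return math.log(sum(alpha))

# Toy two-state, left-to-right model for a single sign; observations
# are quantized feature symbols 0/1/2 (purely illustrative values).
start = [0.9, 0.1]
trans = [[0.7, 0.3],
         [0.0, 1.0]]
emit = [[0.6, 0.3, 0.1],
        [0.1, 0.3, 0.6]]

score = forward_log_likelihood([0, 0, 1, 2, 2], start, trans, emit)

# Verification: accept the attempt if the phrase model's score clears
# a threshold (in practice a score relative to a competing model,
# rather than a fixed constant as assumed here).
THRESHOLD = -8.0  # illustrative value
accepted = score > THRESHOLD
```

A real verifier would work with continuous accelerometer and vision features (e.g. Gaussian emissions) and chain per-sign models into phrase models, but the accept/reject decision follows the same score-and-threshold pattern.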

Publications

Zafrulla, Z., Brashear, H., Hamilton, H., and Starner, T. (2010). A novel approach to American Sign Language (ASL) Phrase Verification using Reversed Signing. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.

Weaver, K. A., Hamilton, H., Zafrulla, Z., Brashear, H., Starner, T., Presti, P., and Bruckman, A. (2010). Improving the Language Ability of Deaf Signing Children through an Interactive American Sign Language-Based Video Game. In Proceedings of the 9th International Conference of the Learning Sciences.

Zafrulla, Z., Brashear, H., Hamilton, H., and Starner, T. (2010). Towards an American Sign Language Verifier for Educational Game for Deaf Children. In Proceedings of the International Conference on Pattern Recognition.

Brashear, H. (2010). Improving the Efficacy of Automated Sign Language Practice Tools. PhD thesis, Georgia Institute of Technology, College of Computing.

Brashear, H., Zafrulla, Z., Starner, T., Hamilton, H., Presti, P., and Lee, S. (2010). CopyCat: A Corpus for Verifying American Sign Language During Game Play by Deaf Children. In 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC).

Yin, P. (2010). Segmental Discriminative Analysis For American Sign Language Recognition And Verification. PhD thesis, Georgia Institute of Technology, College of Computing.

Yin, P., Essa, I., Starner, T., and Rehg, J. M. (2008). Discriminative Feature Selection for Hidden Markov Models Using Segmental Boosting. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2008), Las Vegas, Nevada, USA.

Yin, P., Starner, T., Hamilton, H., Essa, I., and Rehg, J. M. (2009). Learning Basic Units in American Sign Language Using Discriminative Segmental Feature Selection. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2009), Taipei, Taiwan.

Brashear, H., Park, K.-H., Lee, S., Henderson, V., Hamilton, H., and Starner, T. (2006). American Sign Language Recognition in Game Development for Deaf Children. In Assets '06: Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility, Portland, Oregon. ACM Press.

Lee, S., Henderson, V., Brashear, H., Starner, T., Hamilton, S., and Hamilton, H. (2005). User-centered Development of a Gesture-based American Sign Language Game. In NTID Instructional Technology and Education of the Deaf Symposium, Rochester, NY.

Henderson, V., Lee, S., Brashear, H., Hamilton, H., Starner, T., and Hamilton, S. (2005). Development of an American Sign Language Game for Deaf Children. In IDC '05: Proceedings of the 2005 Conference on Interaction Design and Children, New York, NY, USA. ACM Press.

Lee, S., Henderson, V., Hamilton, H., Starner, T., Brashear, H., and Hamilton, S. (2005). A Gesture-based American Sign Language Game for Deaf Children. In Proceedings of CHI, pages 1589–1592, Portland, Oregon.
