Abstract/Details

Generating American Sign Language classifier predicates for English-to-ASL machine translation


2006


Abstract (summary)

A majority of deaf 18-year-olds in the United States have an English reading level below that of a typical 10-year-old student, and so machine translation (MT) software that could translate English text into American Sign Language (ASL) animations could significantly improve these individuals' access to information, communication, and services. Previous English-to-ASL MT projects have made limited progress by restricting their output to subsets of ASL phenomena, thereby avoiding important linguistic and animation issues. None of these systems have shown how to generate classifier predicates (CPs), a phenomenon in which signers use special hand movements to indicate the location and movement of invisible objects (representing entities under discussion) in space around their bodies. CPs are frequent in ASL and are necessary for conveying many concepts.

This project has created an English-to-ASL MT design capable of producing classifier predicates. The classifier predicate generator inside this design has a planning-based architecture that uses a 3D "visualization" model of the arrangement of objects in a scene discussed by the English input text. This generator would be one pathway in a multi-path English-to-ASL MT design; separate processing pathways would be used to generate classifier predicates, to generate other ASL sentences, and to generate animations of Signed English (if the system lacked lexical resources for some input).
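
As a rough illustration of the multi-path idea described above (not code from the dissertation; every class, function, and pathway name here is hypothetical), a fallback chain of generation pathways might be sketched in Python as follows: a planning-based classifier-predicate pathway that consults a 3D scene model, an ordinary ASL-sentence pathway, and a Signed English pathway used when lexical resources are missing.

```python
# Hypothetical sketch of a multi-path English-to-ASL pipeline.
# SceneModel and the pathway functions are illustrative only.

from dataclasses import dataclass, field


@dataclass
class SceneModel:
    """A toy 3D 'visualization' of discourse entities placed in signing space."""
    locations: dict = field(default_factory=dict)  # entity -> (x, y, z)

    def place(self, entity, xyz):
        self.locations[entity] = xyz


def generate_classifier_predicate(sentence, scene):
    """Planning-based pathway: succeeds only when the scene model
    contains the entities the sentence mentions (toy heuristic)."""
    entities = [w for w in sentence.lower().split() if w in scene.locations]
    if not entities:
        return None  # this pathway cannot handle the input
    return [f"CP:move({e}, {scene.locations[e]})" for e in entities]


def generate_asl_sentence(sentence, lexicon):
    """Pathway for other ASL sentences: simple gloss lookup."""
    glosses = [lexicon.get(w.lower()) for w in sentence.split()]
    if any(g is None for g in glosses):
        return None
    return glosses


def generate_signed_english(sentence):
    """Last-resort pathway: render each English word as a Signed English sign."""
    return [f"SE:{w.upper()}" for w in sentence.split()]


def translate(sentence, scene, lexicon):
    """Try the most expressive pathway first, then fall back."""
    for pathway in (
        lambda s: generate_classifier_predicate(s, scene),
        lambda s: generate_asl_sentence(s, lexicon),
        generate_signed_english,
    ):
        output = pathway(sentence)
        if output is not None:
            return output


if __name__ == "__main__":
    scene = SceneModel()
    scene.place("car", (0.3, 0.0, 0.5))
    scene.place("house", (-0.4, 0.0, 0.6))
    print(translate("the car drives past the house", scene, lexicon={}))
```

The point of the sketch is only the control flow: the most expressive pathway is attempted first, and each pathway is free to decline inputs it cannot handle, which is the motivation for the multi-path design.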

Instead of representing the ASL animation as a string (of individual signs to perform), this system encodes the multimodal language signal as multiple channels that are hierarchically structured and coordinated over time. While this design feature and others have been prompted by the unique requirements of generating a sign language, these technologies have applications for the machine translation of written languages, the representation of other multimodal language signals, and the production of meaningful gestures by other animated virtual human characters.
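
The multichannel encoding can likewise be illustrated with a small, hypothetical data structure (again, not the dissertation's actual representation): each articulator occupies its own channel of timed events, and channels are grouped into hierarchically nested constituents whose events remain coordinated in time.

```python
# Illustrative sketch of a multichannel, hierarchical animation script;
# the articulator names and the classes below are assumptions, not the
# thesis's actual encoding.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Event:
    name: str      # e.g. a handshape, movement, or facial expression
    start: float   # seconds, relative to the start of the utterance
    end: float


@dataclass
class Channel:
    articulator: str                      # "right-hand", "eye-gaze", ...
    events: List[Event] = field(default_factory=list)


@dataclass
class Constituent:
    """A node in the hierarchy: spans several channels and may contain
    sub-constituents whose events must stay time-aligned with it."""
    label: str
    channels: List[Channel] = field(default_factory=list)
    children: List["Constituent"] = field(default_factory=list)

    def span(self):
        """Earliest start and latest end over all events, recursively."""
        times = [(e.start, e.end) for c in self.channels for e in c.events]
        times += [child.span() for child in self.children]
        starts, ends = zip(*times)
        return min(starts), max(ends)


if __name__ == "__main__":
    cp = Constituent(
        label="CP: vehicle moves left-to-right",
        channels=[
            Channel("right-hand", [Event("3-handshape path movement", 0.0, 1.2)]),
            Channel("eye-gaze",   [Event("track the moving hand", 0.0, 1.2)]),
            Channel("head",       [Event("slight tilt toward the path", 0.1, 1.0)]),
        ],
    )
    print(cp.span())  # -> (0.0, 1.2)
```

A string of glosses cannot express that the eye-gaze and head events overlap the hand movement rather than follow it; a channel-per-articulator structure like this makes such temporal coordination explicit.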

To evaluate the functionality and scalability of the most novel portion of this English-to-ASL MT design, this project has implemented a prototype version of the planning-based classifier predicate generator. The classifier predicate animations produced by the system were shown to native ASL signers, who evaluated the output.

Indexing (details)


Subject
Computer science
Classification
0984: Computer science
Identifier / keyword
Applied sciences; American Sign Language; Classifier predicates; Machine translation
Title
Generating American Sign Language classifier predicates for English-to-ASL machine translation
Author
Huenerfauth, Matt
Number of pages
296
Publication year
2006
Degree date
2006
School code
0175
Source
DAI-B 67/12, Dissertation Abstracts International
Place of publication
Ann Arbor
Country of publication
United States
Advisor
Marcus, Mitch; Palmer, Martha
University/institution
University of Pennsylvania
University location
United States -- Pennsylvania
Degree
Ph.D.
Source type
Dissertations & Theses
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
3246169
ProQuest document ID
305259134
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Document URL
http://search.proquest.com/docview/305259134