American Sign Language
Linguistic Research Project

 

[Under construction! This page is a work in progress...]

ASL feature tracking and identification of non-manual expressions of linguistic significance: Data visualizations

The linguistically annotated American Sign Language (ASL) corpora collected from native ASL signers by linguists at Boston University have formed the basis for collaborative research with computer scientists at Rutgers University, supported by the National Science Foundation, to automate the computer-based detection of essential linguistic information conveyed through facial expressions and head movements. We have tracked head position and facial deformations, and used computational learning methods to discern specific grammatical markings. Our ability to detect, identify, and temporally localize the occurrence of such markings in ASL videos has recently been improved by the incorporation of (1) new techniques for deformable model-based 3D tracking of head position and facial expressions, which provide significantly better tracking accuracy and recover quickly from temporary loss of track due to occlusion; and (2) a computational learning approach incorporating 2-level Conditional Random Fields (CRFs), suited to the multi-scale spatio-temporal characteristics of the data, which analyzes not only low-level appearance characteristics but also the patterns that enable identification of significant gestural components, such as periodic head movements and raised or lowered eyebrows.
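
As a rough illustration of how the second component can be set up, here is a minimal sketch in Python: per-frame tracking measurements (eyebrow height, eye aperture, head rotation) are converted into feature dictionaries and fed to an off-the-shelf linear-chain CRF from the sklearn-crfsuite package. All feature names, thresholds, and labels below are invented for the example, and a single-level chain is used as a stand-in; this is not the 2-level CRF model described in publications [1]-[4].

import sklearn_crfsuite

def frame_features(track, t):
    """One frame of tracking output -> CRF feature dictionary (illustrative names only)."""
    feats = {
        "brow_height": track["brow_height"][t],    # normalized eyebrow height
        "eye_aperture": track["eye_aperture"][t],  # degree of eye openness
        "head_pitch": track["head_pitch"][t],      # rotation about the x axis
        "head_yaw": track["head_yaw"][t],          # rotation about the y axis
        "head_roll": track["head_roll"][t],        # rotation about the z axis
        "brows_raised": track["brow_height"][t] > 0.6,  # crude discretization; threshold is an assumption
    }
    if t > 0:
        # Simple temporal context: frame-to-frame change in head yaw,
        # a rough cue for periodic head movements such as headshakes.
        feats["delta_head_yaw"] = track["head_yaw"][t] - track["head_yaw"][t - 1]
    return feats

def sequence_features(track, n_frames):
    return [frame_features(track, t) for t in range(n_frames)]

def train_marker_labeler(tracks, frame_labels):
    """tracks: per-utterance dicts of per-frame measurements.
    frame_labels: per-utterance lists of frame labels, e.g. "negation" / "other"."""
    X = [sequence_features(tr, len(labs)) for tr, labs in zip(tracks, frame_labels)]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
    crf.fit(X, frame_labels)
    return crf

In the 2-level approach described in the publications, the output of such frame-level analysis is analyzed further to identify and temporally localize complete gestural components; that second level is omitted from this sketch.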

The details of this research are described in publications [1]-[4] listed at the bottom of this page.

Here we present the visualizations that have resulted from the analysis of markings associated with specific types of grammatical information in ASL, showing computer-based measurements of eyebrow height, eye aperture, head rotation about the three axes, and the forward/backward position of the upper body. The non-manual expressions have been grouped as outlined below; conditional and 'when' clauses have been identified with a single label, as have several different types of markings identifying topic and focus.

Clicking on each image will play a movie that steps through the video frame by frame; the alignment indicator in the graph marks the position of the frame currently displayed.
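
For readers who would like to produce a similar display from their own measurements, the following is a minimal Python/matplotlib sketch of the general idea: several per-frame measurement curves stacked in one figure, with a vertical alignment indicator marking the frame currently displayed. The data and measurement names are invented placeholders; this is not the code that generated the movies below.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-frame measurements for one utterance (placeholders only).
n_frames = 120
frames = np.arange(n_frames)
measurements = {
    "eyebrow height": np.random.rand(n_frames),
    "eye aperture": np.random.rand(n_frames),
    "head yaw": np.random.rand(n_frames),
}

def plot_with_indicator(current_frame):
    """Stack the measurement curves and mark the frame currently shown in the video."""
    fig, axes = plt.subplots(len(measurements), 1, sharex=True, figsize=(8, 6))
    for ax, (name, values) in zip(axes, measurements.items()):
        ax.plot(frames, values)
        ax.set_ylabel(name)
        # Alignment indicator: vertical line at the current frame.
        ax.axvline(current_frame, color="red", linestyle="--")
    axes[-1].set_xlabel("video frame")
    fig.tight_layout()
    return fig

plot_with_indicator(current_frame=42)
plt.show()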

The plan is to integrate such displays with our DAI Web interface [see [5] below] (http://secrets.rutgers.edu/dai/queryPages/), allowing searchability and fuller access to the annotations and to the synchronized video files with multiple views of the signing (front view; side view; and the close-up view of the face shown here).

Constructions illustrated below:

Conditional/When Clauses
Negation
Relative Clauses
Rhetorical Questions
Topic/Focus
Wh-Questions
Yes/No Questions


WARNING: The movie files are large and may be slow to load. Patience is a virtue.


Conditional/When Clauses

File 36 - U 0

File 36 - U 2

File 36 - U 3

File 36 - U 5

File 36 - U 6

File 36 - U 7

File 36 - U 8

File 36 - U 9

Warning: frame loss in the above example

File 36 - U 10

Warning: frame loss in the above example

File 36 - U 12

Warning: frame loss in the above example

File 36 - U 13

Warning: frame loss in the above example

File 36 - U 14

File 36 - U 15

File 36 - U 16

File 36 - U 17

File 36 - U 18

File 36 - U 21

File 36 - U 22

File 36 - U 23

File 36 - U 24

File 36 - U 25

File 38 - U 0

File 40 - U 8

Warning: frame loss in the above example

File 50 - U 17

File 50 - U 18

File 51 - U 19

File 51 - U 20

File 51 - U 28

Warning: frame loss in the above example

File 52 - U 6

File 52 - U 7

Warning: frame loss in the above example

File 52 - U 11

 

Back to the top


Negation

File 36 - U 3

File 36 - U 9

Warning: frame loss in the above example

File 39 - U 0

File 39 - U 2

Warning: frame loss in the above example

File 39 - U 3

Warning: bad tracking in the above example

File 39 - U 4

Warning: bad tracking in the above example

File 39 - U 10

File 39 - U 11

File 40 - U 0

File 40 - U 2

File 40 - U 3

Warning: frame loss in the above example

File 40 - U 4

File 40 - U 5

File 40 - U 7

File 40 - U 8

Warning: frame loss in the above example

File 40 - U 9

File 42 - U 6

File 50 - U 2

File 50 - U 6

File 50 - U 7

File 50 - U 8

File 50 - U 9

File 50 - U 11

File 50 - U 12

File 50 - U 15

File 50 - U 17

File 50 - U 18

File 50 - U 19

File 50 - U 20

File 50 - U 23

File 51 - U 3

File 51 - U 4

File 51 - U 5

File 51 - U 9

File 51 - U 13

File 51 - U 14

File 51 - U 16

File 51 - U 17

File 51 - U 18

File 51 - U 26

File 51 - U 27

Warning: frame loss in the above example

File 51 - U 28

Warning: frame loss in the above example

File 51 - U 29

File 51 - U 30

File 51 - U 31

File 52 - U 2

File 52 - U 14

File 52 - U 16

 

Back to the top


Relative Clauses

File 42 - U 0

File 42 - U 1

File 42 - U 2

File 42 - U 3

File 42 - U 4

File 42 - U 5

Warning: frame loss in the above example

File 42 - U 6

File 42 - U 7

File 42 - U 8

File 42 - U 9

File 52 - U 13

 

Back to the top


Rhetorical Questions

canonical: with raised brows

File 41 - U 2

File 51 - U 26

type 2: with lowered brows, often followed by an eyebrow raise

File 41 - U 1

File 41 - U 3

File 41 - U 11

 

Back to the top


Topic/Focus

File 36 - U 6

File 37 - U 2

Warning: frame loss in the above example

File 37 - U 5

File 37 - U 10

Warning: bad tracking in the above example

File 37 - U 13

Warning: bad tracking in the above example

File 38 - U 0

File 38 - U 3

File 38 - U 4

Warning: frame loss in the above example

File 38 - U 5

File 38 - U 7

File 38 - U 8

File 38 - U 10

File 39 - U 0

File 39 - U 1

File 39 - U 2

Warning: frame loss in the above example

File 39 - U 3 ("as for" topic)

Warning: bad tracking in the above example

File 39 - U 3 (contrastive focus)

Warning: bad tracking in the above example

File 39 - U 4

Warning: bad tracking in the above example

File 39 - U 5 ("as for" topic)

File 39 - U 5 (contrastive focus)

File 39 - U 6

File 39 - U 7

Warning: frame loss in the above example

File 39 - U 8

File 39 - U 9

File 39 - U 10

File 39 - U 11

File 40 - U 3

Warning: frame loss in the above example

File 40 - U 4

File 40 - U 8

Warning: frame loss in the above example

File 42 - U 5

File 50 - U 1

File 50 - U 2

File 50 - U 3

File 50 - U 4

File 50 - U 5

File 50 - U 6

File 50 - U 7

File 50 - U 8

File 50 - U 9

File 50 - U 11

File 50 - U 20 - eyebrow height problems

File 50 - U 21

File 50 - U 22

File 51 - U 2

File 51 - U 4

File 51 - U 7

File 51 - U 9

File 51 - U 12

File 51 - U 13

File 51 - U 14

File 51 - U 15

File 51 - U 16

File 51 - U 17

File 51 - U 18

File 51 - U 19

File 51 - U 21

File 51 - U 22

File 51 - U 23

File 51 - U 24

File 51 - U 25

File 51 - U 27

Warning: frame loss in the above example

File 52 - U 1

File 52 - U 2

File 52 - U 4

File 52 - U 5

File 52 - U 8

File 52 - U 9

File 52 - U 10

File 52 - U 14

File 52 - U 15

File 52 - U 16

File 52 - U 17

File 52 - U 18

 

Back to the top


Wh-Questions

File 38 - U 0        

File 38 - U 3

File 38 - U 4

Warning: frame loss in the above example

File 38 - U 5

File 38 - U 7

File 38 - U 8

File 38 - U 9

File 38 - U 10

File 38 - U 11

File 50 - U 11

File 50 - U 19

 

Back to the top


Yes/No Questions

File 37 - U 2

Warning: frame loss in the above example

File 37 - U 4

File 37 - U 5

File 37 - U 6

File 37 - U 8

File 37 - U 9

File 37 - U 10

Warning: bad tracking in the above example

File 37 - U 11

Warning: frame loss in the above example

File 37 - U 13

Warning: bad tracking in the above example

File 51 - U 5

File 52 - U 3

 

Back to the top



Credits

The illustrations provided here reflect collaborative research by computer scientists at Rutgers University (Dimitris Metaxas, Bo Liu, Jinjing Liu, Xi Peng, Yu Tian, Fei Yang, Peng Yang, Xiang Yu, and Shaoting Zhang) and linguists at Boston University (Carol Neidle, assisted by many BU students, including Corbin Kuntze, Robert G. Lee, Rebecca Lopez, Joan Nash, Indya Oliver, Emma Preston, Tory Sampson, Jessica Scott, Blaze Travis, among many others). In ongoing collaborative research with Matt Huenerfauth, we are also working to apply the findings from this research to improving sign language generation via computer-driven signing avatars.

References

Acknowledgments

We are grateful for support from the National Science Foundation (grants #1065013 and #1059218). We are also immensely grateful to the ASL consultants who have contributed invaluably to this research, especially, for this project: Rachel Benedict, Jonathan McMillan, Braden Painter, and Cory Behm. We would also like to thank Charles McGrew at Rutgers University for help in putting together this webpage.