Researchers unpack sign language’s visual advantage

by Pelican Press

Sign language interpreter. Credit: Pixabay/CC0 Public Domain

Linguists have long known that sign languages are as grammatically and logically sophisticated as spoken languages—and also make greater use of “iconicity,” the property by which some words refer to things by resembling them. For instance, the English word “bang” sounds like the sharp noise it names, and “meow” resembles a cat’s cry.

Notably, in American Sign Language (ASL) and in numerous other sign languages, there are often two ways to say roughly the same thing—one using standard words (signs) and the other using highly iconic expressions, called “classifiers,” which serve to create visual animations.

But how normal signs and pictorial-like representations are integrated to create meaning is not well understood.

Philippe Schlenker, a researcher at France’s National Center for Scientific Research (CNRS) and New York University, and Jonathan Lamberton, a Deaf native signer of ASL, an independent researcher and a former interpreter for the New York City mayor’s office, propose an answer in a pair of studies in the journal Linguistics & Philosophy, the first of which was co-authored with Marion Bonnet, Jason Lamberton, Emmanuel Chemla, Mirko Santoro, and Carlo Geraci.

They conclude that ASL can supplement its usual grammar (often with the word order subject-verb-object) with a distinct pictorial grammar in which iconic representations appear in the order they would on a comic book’s illustrated panels—not because ASL borrows techniques from comics, but because the same cognitive mechanism, pictorial representation, is involved in classifiers and comics.

Furthermore, just as in comic-book drawings, the choice of viewpoint is crucial to how classifiers are represented. Spoken language must resort to different modalities (speech and gestures) to create a comparable synthesis of grammar and pictorial representations.

“These studies highlight the importance of visual animations in language, with consequences for grammar and meaning alike,” explains Schlenker. “The traditional view of language as a discrete system is thus incomplete: Within language, discrete words can be complemented with gradient visual animations, in one and the same modality in sign language and in two modalities—speech versus gestures—in spoken language.”

The simultaneous presence of normal signs and highly iconic classifiers in sign language has long been known. For instance, if an instructor wants to say, “Yesterday, during the break, a student left,” there are two ways to express the action in ASL. As with the English word “leave,” the signer may use a normal verb that does not specify the manner of movement. But the signer may also use an upright index finger—a classifier—to create a simplified animation of an upright person moving out of the room: fast or slow, towards the right or the left, directly or with a detour.

“The classifier functions as a kind of animated puppet inserted in the middle of a sentence,” explains Jonathan Lamberton. “This gives rise to an extraordinary mix of normal signs and pictorial-like representations.”

But how are these two components integrated? Schlenker and Jonathan Lamberton, together with Bonnet, Jason Lamberton, Chemla, Santoro, and Geraci, started with word order. In ASL, the basic word order is SVO—subject-verb-object—as in English. But it has been observed that classifiers often prefer their objects to come before the verb—e.g., subject-object-verb (SOV).

The researchers propose that classifiers override the basic word order of ASL because they create visual animations. But they start with an observation that initially deepens the mystery. If a classifier is used to represent a crocodile eating a ball, both the subject and the object preferably appear before the verb—e.g. SOV. But if the classifier represents a crocodile spitting out a ball it had previously ingested, SVO order is regained.

Here’s how this can be explained by analyzing classifiers as pictorial-like representations, the authors say. Owing to their pictorial-like nature, classifiers preferably go with the order that would be found in a comic—if you were to view it left to right. For the crocodile eating a ball, one would typically see the crocodile and the ball before the eating, which is why the subject and object come before the verb (e.g. SOV). By contrast, for the crocodile spitting out a ball, one sees the crocodile (the subject) and the spitting first, and only then the ball (the object) coming out of the crocodile—which is why SVO order is regained.
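
To make this comic-panel logic concrete, here is a minimal toy sketch in Python (an illustration of the idea, not the authors’ formal model): the predicted sign order is taken to be the order in which the subject (S), verb (V), and object (O) would first become visible across the panels of a comic-strip rendering of the event. The panel contents below are illustrative assumptions.

```python
# Toy sketch (not from the studies): predict classifier word order from the
# order in which each grammatical role would first appear in a comic strip.

def comic_order(panels):
    """Given comic panels (each a list of visible roles), return the roles
    in the order they first become visible, read panel by panel."""
    order = []
    for panel in panels:
        for role in panel:
            if role not in order:
                order.append(role)
    return "".join(order)

# Crocodile eats a ball: panel 1 shows the crocodile (S) and the ball (O);
# panel 2 shows the eating (V).
print(comic_order([["S", "O"], ["V"]]))  # -> SOV

# Crocodile spits out a ball it had ingested: panel 1 shows the crocodile (S)
# spitting (V); panel 2 shows the ball (O) emerging.
print(comic_order([["S", "V"], ["O"]]))  # -> SVO
```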

“In spoken language, words can’t create visual animations, but gestures can,” notes Schlenker. “This work on sign language classifiers offers a new perspective on gestures in spoken language.”

It is an old observation that in sequences of silent gestures (pantomimes), speakers of diverse languages prefer SOV order, even when this goes against the order of their native language, as with English (which is SVO). But the authors show that this SOV preference only holds for “eat-up-type” gestures. When “spit-out-type” gestures are considered, SVO order is regained, just as with ASL classifiers. And here, too, the explanation is that gestures appear in the order that would be found in a comic.

In their second study, Schlenker and Lamberton ask how the meanings of standard signs and classifiers are integrated. Since the 1960s, the researchers explain, the meaning of sentences has been analyzed with logical methods. Some have recently posited that there can also be a logic of pictorial representations. Schlenker and Lamberton propose that the rich meaning components of sign language are integrated by combining the logic of words with the logic of pictorial-like representations.

More specifically, the “glue” between them is the notion of a viewpoint, corresponding to the position of a video camera: the camera position for the animation representing the student leaving will likely correspond to the instructor’s viewpoint.

However, there is considerable flexibility in the manipulation of viewpoints. Sometimes two classifiers in the same sentence are evaluated with respect to distinct viewpoints. Suppose, for instance, that an instructor teaches linguistics in one classroom and philosophy in another, and wants to represent one student leaving the philosophy classroom fast and another student leaving the linguistics classroom slowly: each animation can come with its own camera position, or viewpoint.

“This is but the tip of the iceberg, as viewpoint manipulation can get even more sophisticated,” notes Schlenker.

Here too, sign language classifiers offer a new perspective on gestures in spoken language. While spoken words can’t create visual animations, gestures can. And the viewpoint-dependency of sign language classifiers can be found in gestures as well, down to the details.

This dovetails with an old idea, the authors conclude: While speech alone can’t match the rich iconic component of sign language, speech with gestures sometimes can.

More information:
Philippe Schlenker et al., Iconic Syntax: sign language classifier predicates and gesture sequences, Linguistics and Philosophy (2023). DOI: 10.1007/s10988-023-09388-z

Philippe Schlenker et al., Iconological Semantics, Linguistics and Philosophy (2024). DOI: 10.1007/s10988-024-09411-x

Provided by
New York University


Citation: Researchers unpack sign language’s visual advantage (2024, September 24), retrieved 24 September 2024.




