Vivian Cook   Online Writings

What is human about language?

Chapter cut from V.J. Cook, Inside Language 1997 for space reasons

Note: figs and IPA not working well

The previous chapters have shown that both the variation between different languages and the variation within each language are within the bounds of an overall system. All languages have a phoneme system, even if their phonemes differ; all languages have phrase structure, even if their word orders vary. This chapter examines whether these common factors in languages are unique to human beings or whether they are used in other communication systems, looking first at animal communication and the unique characteristics of human language, then at whether human language can be taught to apes and how computers can use language.

Senses of ‘Language’
To show the range of senses for the word ‘language’ in English, here is a selection of the many book titles starting with The Language of .....
The Language of ...

Advertising, Art, Ballet, Change, Clothes, Colour, Decision, the Devil, Drama, Critical Theory, Drink, the Elderly, Fiction, Flowers, the Garden, the Genes, the Goddess, Goldfish, Hair, the Heart, Heroes, the Horse, Inequality, Layout, Love, Madness, Magic and Gardening, Modern Politics, Morals, Mystery, Nature, Negotiation, the Night, Philosophy, Purcell, Renaissance Poetry, Risk, the Rite, Shakespeare, the Self, Space, Sport, Stamp Collecting, Television, Your Cat

Needless to say, most of these go way outside the meaning of language used in this book.

I. Language and other species

People’s first impression is that animals speak a foreign language. Cat-owners are convinced that, if they knew the words and the grammar, they could understand what their cats were saying, rather like Dr Dolittle. There is indeed a book called How to Talk to your Cat. This section looks at some of the best-known animal systems to see if there is any substance to this belief.

The best-known communication system in another species, based on the pioneering work of Karl von Frisch, is the stylised ‘dances’ of bees. When an exploring bee finds a suitable source of honey, it flies back to the hive and communicates its location to the other bees by dancing in semi-circles to right and to left of a straight axis, hence known as a ‘wagging’ dance. The other bees join in this dance with the original messenger and then go off to find the honey.

In the version of the dance that takes place on the flat shelf at the front of the hive the bee dances in a figure of eight pattern, alternately to right and left of a short straight section. The top of the dance points towards the position of the sun in the sky, just as a compass sign on a map points to the north. Shifting the straight part of the dance to right or to left indicates the angle away from the sun. The bees therefore signal the direction of the honey by relating it to the position of the sun in the sky.

So, if the honey is 90° to the right of the sun, the straight portion of the dance shifts 90° to the right.

If the honey is 45° to the left of the sun, the straight axis of the dance shifts 45° to the left, and so on for all other directions. The bees have a highly effective way of signalling direction by using the straight axis of the dance as a pointer. When the bees are clinging to a vertical surface inside the hive rather than to a horizontal surface, the dance has to be modified so that the direction of the sun changes to ‘up’ and the angle is marked off by slanting the axis of the dance away from the vertical, as can be seen by holding the diagram vertically. How fast the bee dances communicates how far away the honey is: the slower the dance, the further the bees have to go. Putting direction and distance together, the bees can go straight to the honey, ignoring other possible sources to right or to left or nearer or further away. Other types of communication may also be involved, such as the noise made by the dance and the smell of the pollen on the bee.
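The direction-and-distance code described above can be sketched as a small program. This is purely an illustrative model, not anything from the chapter: the tempo formula and its constants are invented assumptions, chosen only to capture the qualitative rule that slower dancing means further honey.

```python
# Illustrative model of the waggle dance code: the angle of the dance's
# straight axis (relative to 'up' on the vertical comb, which stands in
# for the sun's direction) encodes the food's bearing relative to the sun,
# and the dance tempo falls as the distance grows.

def dance_angle(food_bearing_deg, sun_bearing_deg):
    """Signed angle of the dance axis: positive = right of 'up',
    negative = left, wrapped into the range (-180, 180]."""
    return (food_bearing_deg - sun_bearing_deg + 180) % 360 - 180

def dance_tempo(distance_m, k=100.0):
    """Hypothetical tempo (circuits per minute): an assumed formula
    in which nearer food is danced faster than further food."""
    return k / (1.0 + distance_m / 100.0)

# Honey 90 degrees to the right of the sun: axis shifts 90 to the right.
print(dance_angle(135, 45))
# Honey 45 degrees to the left of the sun: axis shifts 45 to the left.
print(dance_angle(0, 45))
# Nearer honey is danced faster than further honey.
print(dance_tempo(100) > dance_tempo(1000))
```

The same mapping works on the horizontal shelf (axis relative to the sun itself) and on the vertical comb (axis relative to gravity's 'up'), which is exactly the modification the bees make.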

The same system can be used to convey other information than the location of honey. For example, when the bees are swarming and need to find a new home, a bee who finds a suitable place dances its location in a wagging dance. Other bees set out, find it, and join in the dance on their return. But a rival bee may find somewhere else equally attractive and dance out another location. It too gets adherents who join in its dance. After several hours, the whole swarm decides between the competing dances and flies off to the new location. On the face of it, this procedure bears a strong resemblance to a democratic election in which the candidates dance out their policies and eventually win a majority of the electorate.

The bees have a precise method of conveying information, adaptable to vertical or horizontal dimensions. However, the system depends on the sky being visible so that the bees can orientate themselves by the sun’s light. An overcast day makes communication difficult. Nor can the system deal with unusual directions. Bees do not succeed in communicating about honey suspended immediately above them or honey that is put directly in their hive.

While bees unquestionably communicate with each other, the doubt remains whether this process resembles communication through human language. Though information is indeed being transferred from one bee to others, the dance only tells the bees where to find honey. It is restricted in scope to the location of objects, a language consisting only of map references. Unusual locations or weather conditions cannot be dealt with, let alone topics such as the reason why Arsenal were beaten by West Ham last week. Human language, however, can in principle be used to talk about anything, whether real or imaginary. It is not limited to certain topics or ideas but is free to include any message at all. In addition, it has many functions other than communication of ideas, as was seen in Chapter Seven.

Perhaps other species closer to human beings than bees communicate in more human-like ways. In the wild, chimps greet each other in a range of ways: they shake hands, pat each other on the back, and hug each other. They indicate ‘hurry’ by vigorous shaking of the hand from the wrist; they beckon ‘come here’ by stretching their hands out. Gorillas too have gestures for beckoning and ‘hurry’. But such gesture language does not have the precise information content of the bees’ dance and it does not appear to form part of a whole communication system. Apes also use a variety of calls, but most express feelings such as rage or fear rather than transferring information from one ape to another. There is no clear ‘message’ being communicated from one to the other.

Some animal calls appear to have greater depths of meaning. Vervet monkeys have distinct warning cries for different kinds of danger. One cry makes the other monkeys climb the trees to avoid a leopard; another makes them hide in bushes and look at the sky for an eagle; a third makes them band together to fight a snake. Other species such as squirrels and lemurs have signals for different predators. In a sense these cries communicate different meanings. It is, however, unclear whether a particular signal communicates ‘Take to the trees’ or ‘Beware leopards’. They are more like the automatic alarm signal of the rabbit’s white tail than a deliberately chosen ‘message’.

The evidence for true communication systems in other species is still sparse. Dolphins indeed make complex noises but no consistent meaning has yet been found for them apart from hints that some sounds signal emotions and others signal individual identity. Humpback whales produce complex songs with a linear order; the songs evolve gradually from one year to the next. But at the moment the bees’ clear-cut message system seems the only true non-human communication system for conveying precise information—at least that human beings have been able to work out.

II. Unique features of human language

Even conceding that these animal communication systems are languages, the question remains whether they have the characteristics of human language. This section looks at some of the features of human language that distinguish it from animal systems, starting with the processes of speech production and perception, and going on to more abstract properties of language such as the arbitrary connections between sound and meaning, and the ability to create new messages.

A. Language and human speech production

To some extent the nature of human language must depend upon the organs that human beings use for speech, with the exception of deaf sign languages and Warlpiri sign language. Chapter Three showed that the essentials for speech were lungs to put air in motion, vocal cords to produce voice, a tongue and lips to modify the sounds as they pass through the airway, and a nose to produce nasal sounds.

Many species possess this basic apparatus. However, human speech makes use of unique features of the human body. In ordinary speech volume and pitch are maintained fairly constant rather than linked to the amount of air in the lungs. To achieve this effect, it is necessary to keep the air pressure constant during the whole period of breathing out; that is to say, the chest muscles have to gradually compensate for the falling pressure as the lungs deflate. The way of achieving this even pressure anatomically is to have ribs that slant downwards from the spine, a characteristic shared by human beings, chimpanzees, and gorillas, but not by human babies under three months. Hence the production of a continuous stream of speech is linked to the physical peculiarities of a small number of species.

The vocal cords in the larynx have the function in many species of preventing food from entering the lungs, since the throat is used for both eating and breathing. Like the vocal cords of humans, those of monkeys lend themselves to producing sounds by allowing rapid puffs of air to pass through. Physiologically, primates could therefore produce voice from the vocal cords. The unique human feature is how the passages to the stomach and the lungs connect. Other mammals separate these two functions by having high larynxes in the throat, so that animals can breathe and drink at the same time. For example, a cat with a stuffy nose cannot breathe through its mouth. In humans, the larynx is lower and the two passages are not separate. The vocal cords in human beings are therefore not so efficient at sealing off the air passage as in other species, as demonstrated by the experience of food ‘going down the wrong way’. Most parents acquire techniques for ejecting such obstructions from the throats of children.

Small babies, however, can manage to breathe and drink at the same time, like other mammals, because, up to three months of age, their larynxes are higher than those of adults. Only after this period does the larynx descend in the throat, necessitating a different type of breathing. A possible cause of some cot-deaths (Sudden Infant Death Syndrome) is the baby trying to revert to the earlier breathing technique with a throat that will no longer permit it. Reconstructions of Neanderthal man suggest that they had high larynxes utilising the connection between breathing and swallowing common to other mammals rather than that peculiar to man. If they had speech, it bore little resemblance to that of humans.

B. Sound space in the mind

The characteristics of speech may also have a neurological basis in the human brain. Take the /i/ sounds of a year-old child, an adult woman, and an adult man. The frequencies of the note and the modifications made by the differing size of the mouth, lungs, etc, produce sounds that are very different objectively speaking. Yet the listener is somehow able to recognise a common /i/ sound in all of them. One theory is that listeners’ minds instantly calculate the size of the speaker’s vocal organs and then extrapolate from a single /i/ sound the likely size of the speaker, whether a baby or Pavarotti, to take two extremes. This calculation forms a basis for adjusting their perception of the whole repertoire of sounds they hear from that person. So, once they have heard an /i/, they can adjust their hearing to the speaker’s /a/ sound and to all the other sounds. A single sound lets them tune in to the speaker’s voice.

In one experiment, listeners heard an /i/ from different voices, followed by the same vowel sound. They interpreted the second sound differently according to which voice they had heard first. Their prior exposure had given them different starting points for estimating the size of the speaker’s mouth. Chapter Three described how the basic two dimensions of the human mouth, back to front and open to close, led to the simplest vowel system being a three way contrast between /i/, /u/, and /a/. The listeners’ perception of the speaker’s sounds is based on these dimensions. They tune in to the other vowels by estimating what the person’s /i/ and /u/ must be: they discover the dimensions of their ‘vowel space’. Indeed one reason for exchanging hello on the phone may be to provide a short standardised sample of speech to allow the listener to tune in to a particular voice.
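The normalization theory above can be sketched as a toy calculation: suppose a speaker's vowel frequencies scale with vocal-tract size, so that one heard /i/ is enough to estimate the scale factor and predict where that speaker's other vowels should fall. The reference frequencies below are rough illustrative figures, not measurements from the experiments.

```python
# A toy sketch of vowel-space normalization: one heard /i/ fixes a
# speaker-specific scale factor, which then predicts the rest of that
# speaker's vowel space. Reference values are assumed, not measured.

REFERENCE = {'i': 2300.0, 'u': 800.0, 'a': 1100.0}  # assumed formant values (Hz)

def scale_from_i(heard_i_hz):
    """Estimate the speaker's scale factor from a single heard /i/."""
    return heard_i_hz / REFERENCE['i']

def predict(vowel, scale):
    """Predict where this speaker's other vowels should fall."""
    return REFERENCE[vowel] * scale

s = scale_from_i(2760.0)       # a higher /i/, e.g. from a child's smaller tract
print(round(predict('a', s)))  # the listener now expects a higher /a/ as well
```

The point of the sketch is only that a single sample recalibrates the whole space, which is why a hello on the phone would be enough to tune in.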

A Neanderthal, or indeed a monkey, would be unable to carry out this adjustment because the radically different shapes of the mouth and tongue form a sound ‘space’ with very different dimensions inside their minds. Whatever other system they might use, they would have extreme difficulty with human speech sounds since they do not relate to the natural sound space of their own mouths. In a sense it takes a human mind in a human body to make the mental model used in speech perception.

C. Perceiving in categories

Categorial perception, first seen in Chapter Three, has been linked to the basis of speech perception in humans. In listening to speech, a sound is always one sound or another, rarely something mid-way between. While each English phoneme has a variety of pronunciations (allophones), for example the more ‘front’ tongue position of the /k/ of kit compared with the more ‘back’ /k/ of car, one phoneme never shades into another. Regardless of the different ways in which /k/ can be said in English, the sound is always perceived as a /k/ or a /g/, never something in between. That is to say, human perception of consonants proceeds in discrete jumps, as described in Chapter Three: a sound is either one phoneme or another, never something halfway between, rather like one of those optical illusion figures in which a person sees either an old or a young woman, but never both at the same time. Humans fit speech sounds into boxes, into categories, rather than treating sounds as a continuous gradation.

This ‘categorial perception’ was once thought to be unique to human beings. Animal cries vary in intensity but do not shift abruptly from one category to another. A gorilla’s cry may shade from anger to intense rage through sheer volume but it does not become a different sound. Human beings can cope with the fast speed of speech because they sort the sounds automatically into categories. However, other species have now been found capable of categorial perception. The Voice Onset Time experiments with /t/ and /d/ described in Chapter Three were given to chinchillas, a mammal closer to a rat than a human, who could successfully tell the difference between the sounds, as could pygmy marmosets and macaque monkeys. Hence categorial perception cannot be unique to human beings. Some anecdotes of animals imply understanding of human speech involving categorial perception. Konrad Lorenz mentions three dogs called Aris, Paris, and Harris, who had no problem in distinguishing their own names, and a dog who reacted differently to Katzi (kitten), Spatzi (sparrow) and Eichkatzi (squirrel). Kanzi, the chimp to be described below, comprehends 150 words of spoken English. Though the skill of perceiving human speech sounds is rare among non-human species, it nevertheless exists.
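The contrast between gradient animal cries and categorial speech perception can be shown with a minimal sketch of the Voice Onset Time experiments. The 30 ms boundary used here is an assumed illustrative value, not a figure from the chapter: the point is only that a smooth continuum of stimuli is heard as exactly two categories, with a sharp jump rather than a shading.

```python
# A toy model of categorial perception of Voice Onset Time (VOT):
# a continuous acoustic value is mapped onto a discrete phoneme
# category, with an abrupt /d/-to-/t/ boundary (assumed at 30 ms).

def perceive(vot_ms, boundary_ms=30):
    """Map a continuous VOT value onto a discrete phoneme category."""
    return '/d/' if vot_ms < boundary_ms else '/t/'

# An evenly spaced continuum of stimuli is heard as only two categories.
print([perceive(v) for v in (0, 10, 20, 40, 50, 60)])
```

A gorilla's cry, by contrast, would be modelled as a continuous function of volume with no such boundary at all.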

The fact that the processes of both production and comprehension employ specific means unavailable to other species does not prove that these are indispensable to language. Chapter Three mentioned sign languages where the usual speech or hearing organs are not involved in language but which have all the characteristics of human language. The production and perception processes of language may not be intrinsic to human language itself but accidental consequences of the human body. The comparison could equally well go in the other direction: human beings would find it hard to produce the range of gorilla sounds or hear the sounds of bats.

D. Creativity of human language

Human languages create or borrow new words for new things whenever they are needed, as seen in Chapter Four. I have just faxed someone through my modem; fax and modem are new objects with new words that scarcely existed ten years ago. Human language is inherently flexible and adapts to new circumstances and new things to say. Animal languages are inflexible because their stock of ‘words’ is effectively fixed.

Since Chomsky’s work of the 1950s, one of the main distinctive features of human language is seen to be its creativity in being able to communicate new messages. Given that I want to say Twenty five elephants are dancing the tango on my lawn, the English language rises to the occasion by supplying a grammatical form and vocabulary, despite the fact that nobody has ever wanted to say this sentence before or ever will again. Most of the sentences people produce or hear in the course of a day are new in so far as they have never been said or heard in that precise form before. New sentences are created all the time. The newspaper caption to a famous photograph is President Mandela grips Speaker Betty Boothroyd’s hand as the pair descend into Westminster Hall. Ten years before no-one could have predicted that English would need to provide this sentence.

Animal languages seem fixed in a single form; a cat cannot say anything new, only repeat what has been said before. A bee can make new ‘sentences’, provided they concern the location of honey or hives. Human language is creative in the technical sense that any speaker can make up a sentence no-one has ever heard before; any listener can understand a novel sentence no-one has ever said before. Creativity is not just W.B. Yeats putting words together to create new sentences such as The unpurged images of day recede. All of us have the talent of creating new sentences, even if less effectively, for example the child’s sentence heard by Julian Dakin My guinea-pig died with its legs crossed. Creativity is a basic fact of human language, not an added extra. Chomsky originally used the notion of creativity to attack associationist theories by arguing that in principle connections of stimulus and response cannot explain totally new sentences. The secret of creativity seems to be the grammatical system through which new sentences can be produced. One of the most crucial things that children have to acquire is the creativity of language.

    Novel Sentences

To illustrate creativity, here is a sample of sentences from diverse sources that are unlikely to have occurred in this form before, yet are perfectly comprehensible.

The last time a private eye solved a murder was never. (James Ellroy)
Good is not what it has been this morning. (traffic reporter on local radio)
Sport for the average man is actually being priced out of his pocket. (sports pundit on radio)
What a two days Shouaa has had from Syria. (TV commentator)
This door is alarmed. (Marks and Spencers’ exit)
You’ll open wide him—I’ll subdivide him. (Guinevere and Knight in the film Camelot)
If you have been, thank you for listening. (radio presenter)
The Hungarian Grand Prix is now on with a vengeance. (sports commentator)
The chat as ever is with yourselves on the telephone. (radio d.j.)
If you want to make sure you’re eating enough of fibre, it’s worth knowing that there’s as much fibre in twenty-one new potatoes as you find in just a single bowl. (TV commercial)
O dark dark dark. They all go into the dark,
The vacant interstellar spaces, the vacant into the vacant. (T.S. Eliot, after Milton)

E. The arbitrariness of human language

A less definable characteristic of human language is its arbitrariness, which takes several forms. First there is no necessary connection between the object and the word that represents it. A rose could be called a sorp and smell as sweet. Different languages indeed call the same object by different names. English rose may be rose in French but it is bara in Japanese and warda in Arabic. The connection between objects and words is largely arbitrary.

Language is also arbitrary in that it relies on combinations of a small set of sounds or shapes that do not have meaning in themselves. The sounds /b/, /ɒ/, /g/ have no meaning separately; the question ‘What’s an /ɒ/?’ cannot be answered by explaining what /ɒ/ means. Only when /ɒ/ is combined with the other sounds of English to get /bɒg/ (bog) or /gɒb/ (gob) does the sound become meaningful. Phonemes and letters do not have meaning but they combine to form words that do, as seen in Chapters Three and Five.

Animal languages in a sense have a limited list of ‘words’, like those Konrad Lorenz found in crows. In animal communication, a ‘word’ is an entity of its own. Each of the monkeys’ cries has a distinctive meaning, ‘snake’, ‘eagle’, and so on. They cannot be decomposed into a small set of meaningless components like phonemes. Animals have a dictionary consisting of a limited number of signs but they do not have sound or writing systems.

In human languages the set of words is open-ended, formed from a strictly limited set of components, whether phonemes, gestures, or letters. The fact that these symbols are themselves meaningless and arbitrary allows them to generate a vast stock of words. Though Roman alphabets vary slightly from one language to another, their 26 letters can encode, not only all the words in the Oxford English Dictionary, say, but all the words in the dictionaries of French, Italian, Malaysian, etc, as well, with a handful of additional symbols. Arbitrariness of the actual phonemes or letters is a highly useful characteristic that gives language its infinite flexibility, unlike the total rigidity of animal systems.
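The combinatorial power of a small, meaningless symbol set can be made concrete with a line of arithmetic: the number of possible strings grows exponentially with length, so even short strings over 26 letters vastly outnumber the words any dictionary holds.

```python
# Why a small set of meaningless symbols is so powerful: counting all
# possible strings up to a given length over a fixed alphabet shows
# exponential growth. 26 letters yield over twelve million strings of
# five letters or fewer, far more than any dictionary needs.

def possible_strings(alphabet_size, max_length):
    """Count all strings of length 1..max_length over the alphabet."""
    return sum(alphabet_size ** n for n in range(1, max_length + 1))

print(possible_strings(26, 5))
```

An animal system with, say, three fixed unanalysable cries has exactly three messages; a combinatorial system of the same size already has far more, and the gap widens with every added symbol or extra position.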

III. Teaching monkeys human language

In their natural habitat, animals show few signs of the kind of communication system that makes up human language. The potential for language could nevertheless be latent inside them, waiting to be revealed. A number of experiments in the last thirty years have attempted to establish whether a human language can be taught to another species. The assumption is that the problems apes have with human speech come from their different physiology and vowel ‘space’, rather than from mental incapacity for language. If apes were taught a human language that did not involve speech, their ability to learn language could be tested independently of their problems with producing and hearing speech sounds.

Apes and visual language

One approach was to invent a new visual language called Yerkish that could be taught to apes. The apes communicated with a human being via a computer keyboard. When the ape touched a key, a ‘lexigram’ was displayed on a screen above. Lexigrams consisted of simple geometric signs similar to crude characters: one symbol, for example, stood for ‘dessert’ and another for ‘coconut’.

The answers to the ape came from a human being sitting in another room, partly through signs on the screen, but also through tickling, the provision of sweets, and so on. A typical example is an ape called Sherman requesting ‘Want orange drink’ on the keyboard; when he got the drink, he typed ‘Pour orange drink’; later, to get a drink out of his reach, he requested ‘Give straw’. By typing in the right symbols, the ape could thus get food, drink, or even films, as well as attention from human beings. Yerkish functioned as a communication system where information was transferred from ape to human. Yerkish furthermore was arbitrary in that the symbols consisted of combinations of nine abstract shapes. A sentence in Yerkish could be up to seven symbols long.

The first chimp to be taught by this system, called Lana, succeeded in producing strings of Yerkish symbols, such as ‘Please Tim give apple’ or ‘Question you give coke to Lana in cup’. She could also put together new combinations of lexigrams for objects for which there was no word in her vocabulary. When she wanted an orange, for example, she produced the signs ‘Question Tim give apple which-is orange’.

Still more remarkable is the pygmy chimpanzee called Kanzi. Kanzi’s mother was taught Yerkish in the usual way, accompanied by her son, who appeared to take little interest in what was going on. But, when she left the project temporarily, Kanzi suddenly showed that he had picked up Yerkish simply by observing his mother being taught. By the age of five years, he was handling about 150 ‘words’; at six he could respond successfully to around 300 different ‘sentences’ in natural settings, using a transportable board with Yerkish symbols. One successful routine involved Kanzi naming any one of seventeen locations in the surrounding estate, such as ‘tree-house’, and then taking the human being there, with 100% accuracy. Clearly Kanzi was able to comprehend certain aspects of communication, although much of his conversation was only concerned with food.

Apes and sign language

Sign language is another way of circumventing the problems of producing speech from non-human speech organs. Human sign languages have fully developed gesture systems rather similar to those of speech, as mentioned in Chapter Three. While some of the gestures of human sign languages resemble ‘natural’ gestures like those of the wild chimp, most are as stylised and arbitrary as any of the sounds of speech, just as the origins of the letter "A" as the shape of an ox’s horns are now less than obvious. The British Sign Language gesture of pointing at someone with the right hand clenched and first finger extended may well be readily interpreted as ‘you’, but it is far from obvious that the first finger of the right hand stroking down the right cheek from ear to chin means ‘woman’. Hence there is no more a universal world sign language than a universal spoken language, with large differences between American Sign Language, British Sign Language, Chinese Sign Language, and so on.

A gorilla called Koko was taught American Sign Language (ASL) and spoken English simultaneously from one year of age; a sentence in ASL was used at the same time as a spoken English equivalent. Initially she was taught ASL for five hours a day, partly by imitating signs, partly by moving her hands in the appropriate ways. Then she was put in an environment where ASL was used for about ten hours a day by a variety of human companions. By the age of 5½, she had mastered 246 signs of ASL, such as ‘alligator’, ‘cake’, ‘small’, and ‘pour’. More importantly, she had started to put these separate signs together into two-word combinations such as ‘Food-more’, ‘Me-up-hurry’, and ‘No-gorilla’, many of which she could not have received from her human companions. A toy zebra was called a ‘white tiger’, a cigarette lighter a ‘bottle match’ and a mask a ‘face hat’. These sentences have an uncanny resemblance to the two-word sentences like mommy sock found in children’s early speech, as discussed in Chapter Seven.

Can apes be taught language?

At first sight these experiments, and the several others that were performed, are impressive. Apes do seem capable of acquiring a rudimentary human communication system together with a reasonable number of words. They also use a definite word order. Koko for instance used adjective–noun order 75% of the time, as in ‘dirty mouth’, ‘dirty taste’ and ‘old wrong grass’. Moreover the apes are capable of producing new ‘sentences’ such as ‘cookie rock’ when required – the creativity previously attributed only to human beings.

Yet linguists have reacted to these experiments with extreme scepticism. For instance one danger is reading things into the apes’ language which are not actually there. It is all very well to claim that Lana said ‘Question Tim give apple which-is orange’. What actually happened is that she tapped a series of buttons that produced six symbols on a screen. What she ‘meant’ by them is unknown. Yet, to give her credit, a human being might find a computer that talked back to them fairly confusing. Communicating in Yerkish was hardly like communicating directly with a visible listener in a concrete interaction, as in most human language.

The signs that the apes produced were analysed as if they were ASL. Some, however, may have been versions of the ape’s natural signs. A chimp called Washoe for example used the natural ‘hurry-up’ and ‘give me’ signs. But her natural gesture for ‘give’ was interpreted by observers as an ASL sign. Her spontaneous gesture of putting the arms up to be tickled was interpreted as the ASL sign for ‘more’. However, in both cases they lacked crucial features of the ASL sign. Hence much of the apes’ apparent signing ability in ASL is little more than their natural gesture system, lacking the arbitrary nature of a true sign language.

More crucially, ASL has meaningless components that combine to make words, in the usual human fashion. The ASL signs for ‘summer’ and ‘ugly’ use the same handshapes and movement but differ in where they are produced. Like the difference in tongue contact between /t/ and /d/, it is a matter of articulation. A single ASL sign is made up of several components. ASL does not consist of natural gestures but is a structured system of arbitrary signs, even if the natural origins of some signs are transparent. The chimps, however, treated each sign as a whole, not as having separate components. In particular they did not see any significance to the place where the sign was made, for example on the shoulder versus on the stomach, but used it anywhere on the body, a similar ‘mistake’, say, to treating /k/ as /t/.

So it is debatable whether the apes were showing any capacity for language other than the ability to acquire a certain number of signs and to put some of them in a sequence. Chapter Two saw how human language has phrase structure; the sentence is formed out of interlocking phrases; it is not just words in order. At best the apes had a linear sequence of signs. Koko was also only 75% correct on adjective–noun order. A human child rarely makes a mistake with the position of adjectives at any stage of language acquisition; book blue for blue book is highly unlikely (apart from the fact that in child English the verb is can be missing, i.e. book blue meaning ‘the book is blue’ rather than ‘the blue book’).

There are then considerable doubts whether the apes’ communication was truly language-like. However, to some extent the goal-posts have been shifted. Apes can undoubtedly be taught to communicate information through a language system of a sort. The slightly different claim now is that the type of system they employ does not resemble a human language. The upshot is that it still has not been proved that a human language can be taught to another species. While the conditions for the acquisition of language in these experiments were ideal, it did not take place convincingly. The missing factor is the inherent structure of the human mind, to be elaborated below. To be fair, it should be pointed out that several of the case studies such as Kanzi and Koko actually involve ‘bilingual’ acquisition of two languages at once (Yerkish or ASL alongside spoken English) rather than ‘monolingual’ acquisition of a single language. While human children have no problems in acquiring more than one language in early childhood, apes may find it more confusing to acquire two languages at once.

Even compared with a young human child, the apes did not begin to tap the complexity of human language. Furthermore, apart perhaps from Kanzi, apes had to be taught language, as Chomsky has often argued; they did not pick it up from their parents as described in Chapter Seven. Even if apes have been taught a human language successfully, this does not explain why language has not spontaneously arisen in the wild if the capacity is latent in their minds. It would be strange for it to be latent without them ever actually using it. But the features of our own human language may of course blind us to the languages of animals.

An interesting counter-experiment is to see whether the languages taught to apes could be taught to human beings who have failed to learn language successfully. What happens if you teach language-deficient children a simple visual language like Yerkish? Four such children were trained for nine weeks to use Yerkish with plastic shapes. The children were able to express ideas such as ‘give’ and ‘take’, negatives, and questions, so they had in effect acquired the same type of system as the apes. If children who are relatively unsuccessful at learning human language can learn such systems with ease, the systems are evidently not the same as ordinary human language.

However, the same argument about chimps not being able to speak can be turned around: these human children are capable of making human speech sounds, so the experiment may have overlooked the advantage that sounds give to human beings. This loophole was closed by an experiment with a computer system which substituted sounds for the shapes. The four children tested learnt at least six ‘nouns’ and four ‘verbs’. After nine sessions, one child could produce two- and three-element ‘sentences’ such as ‘Give doll’ and even some four-element ‘sentences’ such as ‘Doll give monkey to-cat’. Sounds were not the crucial element, since the children could manage without them.

To sum up this section, teaching language to apes has met with limited and controversial success. The parts of language that apes acquire do not seem to be those that are most crucial to human language. Language appears to be unique to the human species, at least in the form that we know it.

A Day in the Life of a Chimp
Here is a selection of the Yerkish sentences produced by Sherman, a 4½-year-old chimp, on one day, given in order of frequency.
Out room Pour orange drink Austin [another chimp] Open Sue [a researcher] M&M [a sweet] Give Columbus Stick Columbus Room out Outdoors Gone Wrench Room Give out room Money Milk Magnet Give Sherman M&M Sherman out room Want orange drink Give open Tickle Door Sherman give M&M Sherman M&M Sherman room out Give pudding Scare Sweet potato Give pour drink  Slide Door Austin Yes Sue Go outdoors Orange drink Sponge Juice Blanket Key No Give money Give milk Yes 

Source: Savage-Rumbaugh and Rumbaugh, 1980

It is hard to decide whether the resemblances between these and the two-word utterances of children seen in Chapter Seven are accidental or show a similar knowledge of language.

IV. Computers and language

The question of whether computers can think has been raised ever since they were first invented. The problem is deciding when a computer is doing something that would be called thinking in a human being. Alan Turing proposed a classic way of testing this, known as the ‘Turing test’. Suppose a person sits in a room with only a computer terminal for company, rather like Sherman. The person has to communicate with two others through the keyboard; one is a human being sitting at another terminal, the other is the computer program itself. If the person can tell whether a person or a machine is ‘communicating’, the computer has failed the test; if the computer cannot be distinguished from the real person, it passes. In other words, if the behaviour of a computer cannot be distinguished from that of a thinking person, it must be credited with thinking, just as we credit our friends and relations with thinking. If it smells like a Boojum, tastes like a Boojum, and looks like a Boojum, there is not much point in denying it is a Boojum. The question has shifted from thinking to language: if a computer can use language in the same way as a human being, then it must be able to think.

Of course a computer could easily be distinguished from a person by its superior ability to carry out complex activities. Ask it the square root of 34.7879 and it produces 5.898127 in a split second; a human using pen and paper would come in many minutes later. But this answer is a matter of rote calculation, not thinking. Language provides a better test-case than mathematics. If a person cannot tell a computer’s language from that of a human being, the computer is in effect using language in the same way as a human being: in which case, language would no longer be unique to human beings.
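The rote calculation is a one-line affair in a modern programming language such as Python:

```python
import math

# Rote calculation: the machine returns the square root instantly,
# but nothing resembling thinking is involved.
print(round(math.sqrt(34.7879), 6))  # 5.898127
```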

Joseph Weizenbaum devised a program called ELIZA to take the Turing test. He started by making the same concession to the computer as to the ape by letting it bypass the actual sounds of speech and communicate in writing. Here is a short extract from an ELIZA session at a computer keyboard in which a ‘patient’ is communicating with a computer-simulated psychiatrist. The convention is that the computer talks in capital letters.

Men are all alike.
IN WHAT WAY
They’re always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE
Well my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I’m depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED
It’s true I am unhappy.
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY?
I need some help, that much seems certain.
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP?
Perhaps I could learn to get along with my mother
TELL ME MORE ABOUT YOUR FAMILY ....

Apparently the computer is responding intelligently to whatever the person types in. It comments on feelings, I AM SORRY TO HEAR YOU ARE DEPRESSED; it asks for elaboration, TELL ME MORE ABOUT YOUR FAMILY; it asks for reasons, DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY? In other words it seems to be fully engaged in the conversation, like a human being. People who used this program felt the strong illusion that the computer was understanding them; Weizenbaum’s secretary would ask him to leave the room so that she could converse with it in private. The users often treated the computer as if it were a human being, and thus it in effect passed the Turing test.

The actual computer program, however, gives the game away. The computer is only looking for strings of letters in the input which match those in its word-list; when it finds a match, it produces another pre-set string. So it is programmed to look for the six-letter string mother. When it finds it, the computer produces the predetermined string TELL ME MORE ABOUT YOUR FAMILY. It will do exactly the same if it finds I hate my mother, I love my mother, the mother of wars, Mothering Sunday or, probably, He smothered it with attention. There is no sense in which it knows that mother is a word with a meaning; all it possesses is a match between one string of letters and another, which might as well be §¢Ð and âå µµÒ¾‰ as mother and TELL ME MORE ABOUT YOUR FAMILY. The computer is also programmed to produce the sequence I AM SORRY TO HEAR YOU ARE DEPRESSED whenever it encounters the nine-letter sequence depressed. It would apply this technique equally to the string I am not depressed or He depressed the button.
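The keyword trick can be sketched in a few lines of Python; this is an illustrative reconstruction, not Weizenbaum’s actual code.

```python
# Illustrative sketch of ELIZA's keyword trick: find a substring in
# the input and emit a canned reply. Meaning plays no part at all.
RESPONSES = {
    "mother": "TELL ME MORE ABOUT YOUR FAMILY",
    "depressed": "I AM SORRY TO HEAR YOU ARE DEPRESSED",
}

def keyword_reply(text):
    lower = text.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in lower:  # blind substring match, nothing more
            return reply
    return "PLEASE GO ON"  # stock reply when no keyword matches

print(keyword_reply("I love my mother"))
print(keyword_reply("He smothered it with attention"))  # also matches 'mother'
```

The second call shows the weakness the text describes: smothered contains the letter-string mother, so the program blunders into the same reply.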

A second trick is repeating what is typed in with a small twist, a common enough move in ordinary conversation. Given a string of letters including the two-letter strings my and me, the program changes them to the strings your and you and repeats the sentence. So My boyfriend made me come here becomes YOUR BOYFRIEND MADE YOU COME HERE. Similarly Yes sir she’s my baby becomes YES SIR SHE’S YOUR BABY, and ‘Tis my delight on a shiny night becomes ‘TIS YOUR DELIGHT ON A SHINY NIGHT, all without the computer doing more than change a few letters. The program has not comprehended the sentence in any way; it does not understand the grammatical structure or even that these strings of letters between spaces are ‘words’. A few other tricks add to the illusion. A string of words from the user’s input may be stored and then flung back later: TELL ME WHAT YOU MEANT WHEN YOU SAID ... YOU LIKE BANANAS / COME FROM WIGAN / PLAY THE PIANO.
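The reflection trick is equally mechanical. A minimal Python sketch, again illustrative and with only a handful of swaps, might run:

```python
# Illustrative sketch of ELIZA's reflection trick: swap first-person
# words for second-person ones and echo the line back in capitals.
SWAPS = {"my": "YOUR", "me": "YOU", "i": "YOU", "am": "ARE"}

def reflect(sentence):
    words = sentence.rstrip(".!?").split()
    return " ".join(SWAPS.get(w.lower(), w.upper()) for w in words)

print(reflect("My boyfriend made me come here."))
# YOUR BOYFRIEND MADE YOU COME HERE
```

No grammar is consulted at any point; the program swaps a few letter-strings and capitalises the rest.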

The ELIZA program almost passes the Turing test. But this assembly of tricks cannot reflect how people use language. The Turing test is not demanding enough, particularly when it admits reflective psychotherapy in which the analyst deliberately turns the patient’s remarks back onto themselves. It is all too easy for the computer to pass. Using language in a human way involves knowing what it means, which involves at least having a dictionary of ‘words’ and meanings, a grammar for establishing the structure and for creating new sentences, and all the other paraphernalia introduced so far in this book.


This ELIZA-style dialogue between a human user and a computer comes from a program for teaching English as a Foreign Language, called Escape from Utopia. The goal is for the student to escape from Utopia with a secret message; in this extract from an actual session, the user has been apprehended by the secret police and is being interrogated.
Computer: So we’ve caught you at last! What is the name you’re giving at the moment?
User: Smith.
C: Mr, Mrs, or Ms?
U: Mr.
C: What do you say is your nationality, Mr Smith?
U: Ruritanian.
C: And what job did you say you have?
U: I am a plumber.
C: Your name isn’t really Smith, is it?
U: Yes it is.
C: When did you start to be a spy?
U: I am not a spy.
C: You mean my information is incorrect then?
U: Yes it is.
C: You sound very positive about that.
U: I have never been a spy.
C: Only stupid people are plumbers! Where were you trained as a terrorist, Mr Smith?
U: Do I have to answer?
C: I am not here to answer questions, you are.
U: I want to leave.
C: What colour does it say your eyes are on your passport, Smith?
U: Brown.

Source: BBC B program by V. Cook

A more recent test of thinking has been applied by John Searle, called the Chinese Room. Imagine a sealed room with two letterbox slots. Into one goes a message in English; two minutes later the same message comes out of the other in Chinese. Inside the room there might be a bilingual speaker of English and Chinese, who spends the time actually translating the message; or there might be someone who knows no Chinese but has a complex set of instructions: if the letters spell man, choose one Chinese character; if the letters spell walk, choose another; and so on. In the one case there is meaning, since the person has to understand the English word before they can produce the Chinese version; in the other there is a meaningless mechanical system with no language as such. And yet, from outside the Chinese Room, the observer cannot tell which is which. According to Searle, the computer never does anything but match mechanically; all computer language is like ELIZA.
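The mechanical inhabitant of the room amounts to no more than table lookup. A Python sketch with two entries makes the point; the particular character choices are mine, purely for illustration.

```python
# Searle's mechanical room as bare table lookup: an English string
# goes in, a Chinese string comes out, and nothing in between
# understands either language.
RULEBOOK = {"man": "人", "walk": "走"}  # illustrative entries only

def chinese_room(message):
    # Replace each known word by its table entry; anything else is
    # passed through untouched.
    return " ".join(RULEBOOK.get(word, word) for word in message.split())

print(chinese_room("man walk"))  # 人 走
```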

Computers and background knowledge of the world

For at least thirty years, computer experts have been predicting that the breakthrough with computer use of human language would happen tomorrow. The programs that ‘understand’ and use natural human speech or indeed writing with any human-like level of skill are still for tomorrow, not today. One of the main obstacles has proved to be the human being’s knowledge of the speech situation. The computer can be programmed to handle the words and even the grammar of written language to some extent — to recognise that happy is an adjective, John a proper name, and will be is a combination of auxiliary will and verb be, or even that, put together, they may make up a question, Will John be happy? But it does not know the context within which the sentence makes sense.
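What the computer can manage might be sketched as a toy lexicon and a single question pattern in Python; this is purely illustrative, as real parsers are far more elaborate.

```python
# Toy illustration: tagging words and spotting a yes/no question
# pattern is mechanical; knowing the situation behind it is not.
LEXICON = {
    "will": "auxiliary",
    "john": "proper-name",
    "be": "verb",
    "happy": "adjective",
}

def is_question(sentence):
    words = sentence.rstrip("?.!").lower().split()
    tags = [LEXICON.get(w, "unknown") for w in words]
    # An auxiliary moved to the front signals a yes/no question.
    return tags[0] == "auxiliary"

print(is_question("Will John be happy?"))  # True
print(is_question("John will be happy."))  # False
```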

Suppose a computer is programmed to deal with restaurants. A customer might ask for Egg and chips please, a simple request. This presents no problem to the computer, which can match egg, chips etc in its dictionary and retrieve information about them. But this capacity is far from enough to deal with the language of the restaurant. Customers know, but computers do not, that the food in a restaurant is usually prepared or cooked, that it will be brought to the table, and that it must be paid for. None of this background is stated in the language involved. People don’t need to be told what restaurants are for, but computers do. It is only if these standard requirements are breached in some way that they get mentioned: the restaurant is a help-yourself buffet; the customer has run off without paying, or whatever. But computer programs have to be specifically told such background information in order to get anything out of even such a mundane remark as Egg and chips please.

The expectations of each situation were described by Roger Schank in terms of a ‘script’. A script has a certain number of dramatic ‘roles’; a restaurant conversation needs a customer and a restaurant employee, whether a waiter, waitress, counter assistant, or cashier. The customer and the waiter know the behaviour expected of them: the types of request they can make, Egg and chips please, not I demand to be taken to your leader, and the types of response, OK, not What a disgraceful suggestion.

The restaurant script consists of a number of separate ‘scenes’: finding a seat, getting the menu, ordering the food, getting the courses in sequence, asking for the bill, paying the bill. Each of these has its appropriate actions and language. A waiter would be assumed to be unbalanced if he presented us with the bill at the start of the meal, or gave us coffee before soup. Finally, like any play, scripts require props. Any context has its own paraphernalia; a restaurant has tables and chairs, plates, food, and so forth. Customers know which props to expect; if they haven’t got a knife, they make the waiter bring one.
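A script of this kind lends itself to representation as structured data. Here is a minimal Python sketch of the restaurant script; the field names are mine, not Schank’s notation.

```python
# Minimal sketch of a Schank-style script: roles, ordered scenes, props.
restaurant_script = {
    "roles": ["customer", "waiter"],
    "scenes": [
        "finding a seat",
        "getting the menu",
        "ordering the food",
        "getting the courses in sequence",
        "asking for the bill",
        "paying the bill",
    ],
    "props": ["table", "chairs", "plates", "food", "menu", "knife"],
}

def scene_order_ok(first, second, script):
    # A waiter who reverses the expected scene order breaches the script.
    scenes = script["scenes"]
    return scenes.index(first) < scenes.index(second)

print(scene_order_ok("ordering the food", "paying the bill", restaurant_script))  # True
print(scene_order_ok("paying the bill", "ordering the food", restaurant_script))  # False
```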

The speakers’ view of the world contains many scripts, each with its roles, scenes and props: going to the doctor’s, travelling by bus, taking an examination, getting married, and so on. Unfortunately for the computer, the number of possible scripts is vast and it is not necessarily clear which one will be summoned up on a given occasion. Going to a restaurant for a meal may invoke additional scripts for meeting the opposite sex, disciplining children, exchanging secrets, negotiating contracts, having emergency medical treatment, and so on. A single situation seldom confines human beings to a single script. On one occasion the only speech addressed to me in a restaurant in France was C’est beau, être grand-père (Isn’t it nice to be a grandfather?), completely unpredictable in the situation, but natural enough as an apology from the cashier for holding up the queue by listening to the previous customer going on about his grandchildren. In short, to be able to handle language in the way that human beings do, the computer needs to be programmed not only with the sheer language knowledge of vocabulary and grammar but also with all the information that people take for granted about the world they see every day.

All in all, computers are not very comfortable with human language. They can pretend to use it but they have no idea what it means. The day when the computer will be able to use language effectively is still in the future, as witnessed by the still laborious progress towards an efficient system for machine translation between languages.

V. The distinctive features of human language

The main features of language discussed in this chapter can be summed up in the following list. Needless to say, each part of it would require much elaboration and qualification to be fully tenable. Other aspects are not included, such as the links between language and human thinking encountered in Chapter Four and the use of language for social purposes seen in Chapter Seven, which may well be shared with animal systems.

Overall then, while there is considerable uncertainty about many of the details, it seems that human language is indeed the sole property of the human race, if language is defined by the above characteristics. The proof of its uniqueness can only be found by testing the language of another intelligent species, perhaps on some planet in Alpha Centauri, to see whether these characteristic are due to the human beings who use language or are inevitable components in any language.

Some Crucial Features of Language

[Table not reproduced: it compared species such as humans and wild apes on features including communication of specific information, perceiving in categories, and phrase structure.]




Sources and further references

I. language and other species

The classic work on bees is: von Frisch, K. (1953), The Dancing Bees, trans. D. Ilse, Harcourt Brace Jovanovich, New York. An account of natural ape communication is in: Petersen, M.R., and Jusczyk, P.W. (1984), ‘On perceptual predispositions for human speech and monkey vocalisations’, in P. Marler and H. Terrace (eds.), The Biology of Learning, Springer, 585-616. A book on talking to cats is: Moyes, P. (1978), How to Talk to your Cat, A. Barker, London.

II. Unique Features of human language

For differences in children’s vocal tracts see: Kent, R.D. & Miolo, G. (1995), ‘Phonetic Abilities in the First Year of Life’, in Fletcher, P. & MacWhinney, B. (eds.), The Handbook of Child Language, Blackwell. The classic work on Neanderthals and sound space is: Lieberman, P. (1984), The Biology and Evolution of Language, Harvard U.P. Lorenz’s encounters with animals are described inter alia in: Lorenz, K. (1954), Man Meets Dog, Methuen, London. The notion of creativity was first found in Chomsky’s famous review: Chomsky, N. (1959), ‘Review of B.F. Skinner Verbal Behavior’, Language, 35, 26-58, and is in many of his later publications.

III. Teaching monkeys human language

The most accessible and fair-minded account of the research with apes is: Wallman, J. (1992), Aping Language, CUP. Some of the original accounts are: Savage-Rumbaugh, S., and Rumbaugh, D.M. (1980), ‘Language Analogue Project: Phase II’, in K. Nelson (ed.), Children’s Language Vol. 2, Gardner Press, 267-308; Savage-Rumbaugh, S., and Lewin, R. (1994), Kanzi: the Ape at the Brink of the Human Mind, Doubleday, London. The work with human children is in: Barna, S. (1975), ‘Childhood aphasia: a preliminary investigation of some auditory and linguistic variables’, Brunel University, reported in Cromer (1991); Hughes, J. (1974-5), ‘Acquisition of a non-vocal “language” by aphasic children’, Cognition, 3, 41-55.

IV. Computers and language

The Turing test comes from: Turing, A.M. (1950), ‘Computing machinery and intelligence’, Mind, LIX, 433-60. The original ELIZA work is described in: Weizenbaum, J. (1976), Computer Power and Human Reason, Freeman, San Francisco. The Chinese Room is in: Searle, J. (1980), ‘Minds, Brains and Programs’, Behavioral and Brain Sciences, 3, 417-57. The concept of scripts comes from: Schank, R., and Abelson, R. (1977), Scripts, Plans, Goals, and Understanding, Lawrence Erlbaum, New Jersey.

V. The distinctive features of human language

The most cited source on the design features of human language is: Hockett, C.F. (1960), ‘The origin of speech’, Scientific American, 203, 88-96.
