
(No.27) Our Brain is Full of Language?

Language, normally carried by voice or vision, is a string of signals used for communication. It started out as voice only, and even today many languages remain spoken only. Other types of signals are also common, such as sign language, which uses gesture, or braille, which uses touch. A honeybee uses figure-eight dances combined with smell and contact.

Signals used by animals or insects disappear once the communication is over. Human languages are different. A written language evolves after its inception, but not all at the same speed. One of the oldest scripts is Sumerian, whose characters are thought to have imitated the shapes of oxen or sheep. After a character's shape is fixed, it can represent the object itself, the pronunciation of the object, or a combination of the two used as a phonetic equivalent, allowing sentences to be written. The hieroglyphs of Egypt, the Han characters of China, and the Mayan script are typical examples.

Written languages introduced much later have a different story. Japanese, for example, was first written using the Han characters of China, so the first job of a Japanese writer was to match Han characters (Hanzi) to Japanese pronunciations. Most written languages introduced within the last 3000 years belong to this type; Greek, Latin, and Arabic all fall into this group. Many scholars believe that even Chinese Hanzi belongs here, since at the early stage of its introduction we find no evidence of a trial-and-error stage.

As mentioned before (Column No. 25, "Touch One's Heart with Images"), the visual cortex of the human brain has a powerful pattern-matching function. When the number of patterns is small, the matching speed and robustness are astonishing. In a classic experiment, a kitten raised in an environment of vertical stripes could not later recognize horizontal stripes, which indicates that an animal's ability to recognize specific patterns is created only at a specific age, and that this capability is used unconsciously (C. Blakemore and G. F. Cooper, "Development of the brain depends on the visual environment", Nature 228, pp. 477-478, 1970). Speed-reading techniques for European languages may rely on the same matching capability.

It may be reasonable to place candidate patterns at nearby locations in the brain when they require high-speed access and matching; otherwise, signal transmission takes longer. The Han script (Kanji in Japanese) has thousands of character patterns. If that large number of patterns were matched sequentially, the total processing time would be much longer than for other scripts, English for example. The matching process can be divided into three levels, as shown in Fig. 1: at the 1st level the strokes are identified, at the 2nd level the radical, and at the 3rd level the full character pattern is matched. When the individual matching times are represented as t_a, t_b, t_c, the total matching time is t_a + t_b + t_c. With n representing the number of patterns, n ≈ 10^4, and

t_a + t_b + t_c ≪ n · t_a

which shows that the total time required to match the three levels is much shorter than with serial matching, n · t_a.
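The inequality above can be checked with a toy calculation. This is only an illustrative sketch: the per-level times below are arbitrary values (in milliseconds) chosen by me, not measurements from the article; only n ≈ 10^4 comes from the text.

```python
# Illustrative comparison of three-level vs serial matching.
# t_a, t_b, t_c are assumed per-level matching times in ms (arbitrary).
t_a = 10
t_b = 10
t_c = 10
n = 10_000   # roughly the number of Kanji patterns (n ≈ 10^4)

hierarchical = t_a + t_b + t_c   # three lookups, one per level
serial = n * t_a                 # naive one-by-one comparison

print(hierarchical)  # 30
print(serial)        # 100000
```

Whatever values are chosen, the hierarchical total stays constant in n while the serial total grows linearly, which is the point of the inequality.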

Fig. 1: Input pattern and character pattern matching


Let's now look at dictation. In normal speech, about 20 phonemes are pronounced per second, which corresponds to roughly 5 to 10 syllables. Phonemes are categorized as consonants and vowels, each of which takes a human 20 to 100 milliseconds to recognize, excluding exceptions such as very slow speech or long vowels.
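A quick back-of-the-envelope check of these figures: the numbers (20 phonemes per second, 20-100 ms recognition time) are the article's; the arithmetic connecting them is mine.

```python
# Speech arrives at about 20 phonemes per second...
phonemes_per_second = 20
ms_per_phoneme = 1000 / phonemes_per_second   # 50 ms of speech per phoneme
print(ms_per_phoneme)  # 50.0

# ...while recognizing one phoneme takes 20-100 ms. At the slow end,
# recognition alone cannot keep up phoneme by phoneme, implying some
# buffering or overlapped processing in the listener.
recognition_ms_fast, recognition_ms_slow = 20, 100
print(recognition_ms_slow > ms_per_phoneme)  # True
```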

In my own experience, I recognize Kana, the syllable units denoting Japanese pronunciation. When dictating, I identify words from several candidate syllable strings. If the meaning is clear, I write in Kanji or Kana. Sometimes I can't recall the specific Kanji; then I write just a radical, to be cleaned up later (Fig. 2). With English, the process is more complicated. For English pronounced by Japanese speakers, I wait until each word is clearly spoken and then write the English word; if the spelling is unclear, I use Katakana. For native-like speakers of English, I recognize syllables and then translate them into an English word; if the spelling is unclear, I just write down a plausible syllable in letters of the alphabet. I have no difficulty searching for alphabetic expressions, as there are only 26 letters.

Fig.2: Recognition step for Japanese


What about Chinese, which has thousands of characters? A colleague uses Pinyin for unknown characters, or for characters he can't recall. What did the older generation do, having never learned Pinyin? It seems they often used other characters with the same or a similar sound, or sometimes just wrote part of the character, a radical (Fig. 3). We can understand that simple and frequently used characters are easy to remember. This indicates that character pattern memory also has a hierarchy.

Fig.3: Chinese recognition steps


The Korean alphabet (Hangul) has fewer than 30 phonemes, so it should present no problem for dictation, even though the number of syllable combinations runs into the thousands. I have no way to confirm this assumption without a Korean friend nearby. Fig. 4 summarizes the response time needed to identify each kind of symbol. I believe these response times are similar across languages. The graph indicates orders of magnitude; word recognition requires 0.1 to 1.0 second.

Fig. 4: Recognition time


When the complete character set is ready, how is the relationship between character strings and phonetics organized? Over the past 200 years, the relationship between the human brain and perception has become increasingly clear. It is often said that speech signals are processed in the left hemisphere of the brain, while visual signals are processed in the right hemisphere (tatsuru.com, in Japanese): "For Westerners, drawings and phonetic symbols do not mix; each is handled by a different part of the brain. This is demonstrated by the symptom called aphasia. A native speaker of an alphabetic language suffers aphasia if the region where phonetic symbols are processed is damaged. If the same region is damaged in a Japanese speaker, however, Hiragana becomes a problem, but Kanji can still be read. Apparently, two regions are activated at the same time when dealing with Japanese." In other words, Japanese, and perhaps Chinese too, store a large number of character patterns, which require far more room than the left hemisphere offers, in the right hemisphere (Fig. 5).

Fig.5: Character pattern storage model in human brain
(central part is assumed to be high speed accessing area)


Fig. 4 and Fig. 5 indicate that there seems to be a high-speed response memory that processes phoneme signals near the voice-signal input neurons. High speed implies a short distance between neurons, which in turn implies that high-speed processing neurons must be packed into a small region.
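The "high speed implies short distance" step can be made concrete with a rough transmission-time calculation. The conduction velocity used here (about 1 m/s, typical of slow unmyelinated cortical fibres) is an assumption of mine, not a figure from the article.

```python
# Signal transmission time as a function of neuron-to-neuron distance.
# Assumed conduction velocity: ~1 m/s (slow unmyelinated fibre).
def transmission_ms(distance_mm, velocity_m_per_s):
    # distance in mm divided by velocity in m/s yields time in ms
    return distance_mm / velocity_m_per_s

print(transmission_ms(1, 1.0))    # 1.0 ms within a tightly packed region
print(transmission_ms(100, 1.0))  # 100.0 ms across ~10 cm of cortex
```

At this velocity, a 10 cm path already costs as much time as recognizing a whole phoneme, which is why packing the relevant neurons close together matters.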

Some people have an unusual mixing of the senses, known as synesthesia. For example, a man may feel "columns" when he tastes mint, or see "red" when his pager rings (Richard E. Cytowic, The Man Who Tasted Shapes, 1995). Conversely, some people feel pain when they see specific patterns. This phenomenon can be reasonably explained by assuming that high-speed response regions are packed into a small area, which causes interference among signals.

On the other hand, there is no central controller in the brain (Richard E. Cytowic). Processing functions are distributed throughout the brain and interconnected by networks; the structure is very much like the modern Internet. The human brain requires about 25% of the body's energy just to sustain itself, and this consumption changes little whether the brain is engaged in deep thinking or in recall, which also resembles a computer network.

Can we increase the number of hierarchical layers in language processing to evolve language further? We may find room by recruiting distant regions of the brain. The human brain has hundreds of billions of neurons, each with about 1000 connections, so the total number of connections approaches a quadrillion (Nancy C. Andreasen, The Creating Brain: The Neuroscience of Genius). We have all heard the story of London taxi drivers, who must pass a strict test after memorizing a detailed map of the city; medical examinations reveal that these drivers have a larger hippocampus than a control group. Across mammals, brain size has a weak but positive correlation with intelligence. All things considered, we may already be approaching the limit of memory management in the brain.

The Japanese spent long periods learning high-level concepts and religion from China. It is generally said that the high-level expressions of a language reflect the cultural level of its civilization and take a long time to master (T. Kamei et al., "History of Japanese-7", Heibonsha, p. 629, 2007, in Japanese). As a result, most high-level concepts in Japanese are expressed in Kanji (Chinese characters). With this base, the Japanese found it easy to understand concepts newly introduced from the West in the Edo era of the 18th and 19th centuries, and to translate them into newly invented Kanji phrases for philosophy, science, economy, and gymnasium. These new Kanji phrases were then reintroduced to China and now form common ground between the two countries. This kind of interrelation will continue to grow.

(Ej, 2008.02)

