The Effect Of Language On Intelligence And Psychology

Language is the fundamental mode of communication. Most of us are encouraged to learn more than one language during our lifetime, and there is a good reason for that. Read on to find out more.

Language is the means through which individuals express themselves. It is the basic medium of communication, and it lets people convey their innermost desires, their immediate intentions and much else. Many of us are encouraged to learn new languages during our lifetime, not merely to add another feather to the metaphorical hat, but because doing so has profound psychological effects. The acquisition of language is itself shaped by various external and internal factors, intelligence being one of them, and several other factors influence language and its acquisition in turn.

Language And Psychology

After extensive study we at newspsychology.com have found that language both influences and is influenced by many other faculties and processes in the body, especially our psychology. When a child is born, the auditory and visual senses immediately kick into action, helping him or her learn from what others are saying nearby.

Because imitation is one of our most primal instincts, infants try to reproduce the sounds they are exposed to. In the growing years, the acquisition of language becomes easier, as the brain is by then a fully functional organ. Language has also been seen to make people more receptive to their surroundings, better able to express themselves and, in general, more intelligent. That is why, in several professional fields, the acquisition of a new language is highly encouraged.

On our website you can find several informative articles about intelligence; our ongoing research work explores these topics as well.

Emotion detector enables design of tailor-made election campaigns

NewsPsychology (Sep. 25, 2012) — Messages, attire, gestures, themes or melodies that the public likes are some of the elements that contribute to the success of a political party. FIK and TECNALIA are now helping to identify such feelings thanks to Sentient, a “feelings detector.” This device issues reports on people's positive or negative reactions to stimuli in their environment. FIK and TECNALIA thus provide campaign managers with the information they need to determine, adjust or even enhance the elements influencing voters’ intentions.

Sentient is built around a heart rate monitor that measures variations in the heartbeat and certain parameters derived from it. TECNALIA researchers have created a system that, using the readings of a standard device (a commercially available heart rate monitor), discerns the intensity and emotional value of an individual's state at a given moment and transmits this information via Bluetooth to a smartphone for processing.
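How Sentient turns heart data into an emotional reading is not disclosed in the article, so the following is only a minimal Python sketch of the kind of heart-rate-variability features (mean heart rate and RMSSD computed from RR intervals) that such a pipeline might plausibly start from; the function name and the sample values are invented for illustration.

    import math

    def hrv_features(rr_intervals_ms):
        """Mean heart rate (bpm) and RMSSD computed from successive RR intervals (ms)."""
        mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
        diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
        rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
        return {"mean_hr_bpm": 60000.0 / mean_rr, "rmssd_ms": rmssd}

    # Hypothetical RR-interval series (ms), as a commercial monitor might report over Bluetooth.
    print(hrv_features([810, 790, 820, 805, 830, 800, 815]))

Short-term variability measures such as RMSSD are commonly used as rough proxies for arousal; mapping them to the "emotional value" the article mentions would require a calibration step that Sentient's makers do not describe.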

This emotion detector is used under controlled conditions, on a group of people selected according to the needs of the political party and the population groups it wishes to target, usually undecided voters. Sentient transmits a series of parameters that may help define how an election campaign should be developed for a specific political party to earn voters' approval by establishing an emotional link with them. In this way, TECNALIA and its partner El Bureau de la Comunicacion, a Bilbao-based state-of-the-art advertising agency, can help political parties design the most appropriate campaign strategies to achieve their goals.

Sentient measures the study group's initial perception of the candidate (positive or negative) and its intensity, the effectiveness of the speech (whether listeners like how things are said), and whether its structure and language are appropriate. It can also produce a comparative assessment of different candidates on specific issues, such as the economy, education and immigration. Sentient thus provides campaign managers with information they can use to adjust these parameters and achieve a greater impact on voters.

Other uses for Sentient

The original goal of this device is to function as an automatic emotional transmitter. The technology can be used to measure the emotional response to different types of stimuli and therefore has numerous applications, such as in marketing (to gauge the impact of advertisements and television on viewers) and in healthcare (to transmit the feelings of people who are unable to communicate).

In fact, Sentient has been used in a neuromarketing study that measured the emotional response to a series of TV adverts with one goal: to inform the debate on whether publicity about social issues should focus on positive or negative messages.


Story Source:

The above story is reprinted from materials provided by Basque Research.



Study shows ancient relations between language families

Network representation showing how language families cluster based on their stability profiles. (Credit: © Dan Dediu/MPI for Psycholinguistics)

How do language families evolve over many thousands of years? How stable over time are structural features of languages? Researchers Dan Dediu and Stephen Levinson of the Max Planck Institute for Psycholinguistics in Nijmegen introduced a new method using Bayesian phylogenetic approaches to analyse the evolution of structural features in more than 50 language families.

Their paper 'Abstract profiles of structural stability point to universal tendencies, family-specific factors, and ancient connections between languages' will be published online on Sept. 20 in PLoS ONE.

Language is one of the best examples of a cultural evolutionary system. How vocabularies evolve has been extensively studied, but researchers know relatively little about the stability of structural properties of language — phonology, morphology and syntax. In their PLoS ONE paper, Dan Dediu (MPI's Language and Genetics Department) and Stephen Levinson (director of MPI's Language and Cognition Department) asked how stable over time the structural features of languages are — aspects like word order, the inventory of sounds, or plural marking of nouns.

"If at least some of them are relatively stable over long time periods, they promise a way to get at ancient language relationships," the researchers state in their paper. "But opinion has been divided, some researchers holding that universally there is a hierarchy of stability for such features, others claiming that individual language families show their own idiosyncrasies in what features are stable and which not."

Ancient relations between language families

Using a large database and many alternative methods Dediu and Levinson show that both positions are right: there are universal tendencies for some features to be more stable than others, but individual language families have their own distinctive profile. These distinctive profiles can then be used to probe ancient relations between what are today independent language families.
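The study's actual analysis is Bayesian and phylogenetic, but the idea of comparing families through their stability profiles can be illustrated with a much simpler Python sketch: summarise each family as a vector of per-feature stability estimates and cluster those vectors. The family names below are real, but every number is invented for illustration.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram

    # Rows: language families; columns: hypothetical stability estimates for three
    # structural features (say, word order, sound inventory, plural marking).
    families = ["Indo-European", "Uralic", "Turkic", "Austronesian"]
    profiles = np.array([
        [0.90, 0.40, 0.70],
        [0.85, 0.45, 0.65],
        [0.80, 0.50, 0.60],
        [0.30, 0.90, 0.20],
    ])

    # Average-linkage clustering of the profiles; families that end up close together
    # are those whose features have been stable (or unstable) in similar ways.
    tree = linkage(profiles, method="average", metric="euclidean")
    print(dendrogram(tree, labels=families, no_plot=True)["ivl"])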

"Using this technique we find for instance probable connections between the languages of the Americas and those of NE Eurasia, presumably dating back to the peopling of the Americas 12,000 years or more ago," Levinson explains. "We also find likely connections between most of the Eurasian language families, presumably pre-dating the split off of Indo-European around 9000 years ago."

Universal tendencies and distinctive profiles

This work thus has implications for our understanding of differential rates of language change, and by identifying distinctive patterns of change it provides a new window into very old historical processes that have shaped the linguistic map of the world. It shows that there is no conflict between the existence of universal tendencies and factors specific to a language family or geographic area. It also makes the strong point that information about deep relationships between languages is contained in abstract, higher-level properties derived from large sets of structural features as opposed to just a few highly stable aspects of language. In addition, this work introduces innovative quantitative techniques for finding and testing the statistical reliability of both universal tendencies and distinctive language-family profiles.

"Our findings strongly support the existence of a universal tendency across language families for some specific structural features to be intrinsically stable across language families and geographic regions," Dediu concludes.

 

Journal Reference:

  1. Dan Dediu, Stephen C. Levinson. Abstract Profiles of Structural Stability Point to Universal Tendencies, Family-Specific Factors, and Ancient Connections between Languages. PLoS ONE, 2012; 7 (9): e45198 DOI: 10.1371/journal.pone.0045198

Know how much you're texting while driving? Study says no

Texting while driving is a serious threat to public safety, but a new University of Michigan study suggests that we might not be aware of our actions.

U-M researchers found that texting while driving is predicted by a person's level of "habit" — more so than how much someone texts.

When people check their cell phones without thinking about it, the habit represents a type of automatic behavior, or automaticity, the researchers say. Automaticity, which was the key variable in the study, is triggered by situational cues and lacks control, awareness, intention and attention.

"In other words, some individuals automatically feel compelled to check for, read and respond to new messages, and may not even realize they have done so while driving until after the fact," says Joseph Bayer, a doctoral student in the Department of Communication Studies and the study's lead author.

This first-of-its-kind study identifies the role of unconscious thought processes in texting while driving, unlike other research, which has focused on the effects of the behavior. The current study thus investigates the role of habit in texting while driving, with a focus on how (rather than how much) the behavior is carried out.

Scott Campbell, associate professor of communication studies and Pohs Professor of Telecommunications, says that understanding this behavior is not just about knowing how much people text — it's about understanding how they process it.

"A texting cue, for instance, could manifest as a vibration, a 'new message' symbol, a peripheral glance at a phone, an internal 'alarm clock,' a specific context or perhaps a mental state," Campbell says. "In the case of more habitual behavior, reacting to these cues becomes automatic to the point that the person may do so without even meaning to do it."

In the study, several hundred undergraduate students responded to a questionnaire asking about their perceptions and uses of various aspects of mobile communication technology. They were asked about the level of automaticity and frequency of texting, as well as norms and attitudes toward texting and driving.

The findings show that automatic tendencies are a significant and positive predictor of both sending and reading texts behind the wheel, even when accounting for how much individuals text overall, as well as their norms and attitudes.

"Two mobile phone users, then, could use their devices at an equal rate, but differ in the degree to which they perform the behavior automatically," Campbell says.

Bayer says the implications of the study may help provide solutions to texting and driving.

"Campaigns to change attitudes about texting while driving can only do so much if individuals don't realize the level at which they are doing it," Bayer says. "By targeting these automatic mechanisms, we can design specific self-control strategies for drivers."

Despite these findings, the researchers say more work is needed to determine whether the results hold across age groups beyond young adults.


Journal Reference:

  1. Joseph B. Bayer, Scott W. Campbell. Texting while driving on automatic: Considering the frequency-independent side of habit. Computers in Human Behavior, 2012; 28 (6): 2083 DOI: 10.1016/j.chb.2012.06.012

U.S. presidential candidates could get medieval with 'indirect aggression' debate tactics

As Barack Obama and Mitt Romney prepare to square off in a series of presidential debates, the candidates and their running mates could go medieval on their opponents by using a rhetorical technique that dates back to Nordic and Germanic legends of the Middle Ages, says a scholar of medieval literature at Missouri University of Science and Technology.

According to Dr. Eric S. Bryan, an assistant professor of English and technical communication at Missouri S&T, the candidate who does the best job using "indirect aggression" techniques in a debate could be perceived as the winner of that debate.

Indirect aggression — speech that requires interpretation, such as sarcasm or veiled threats — is a rhetorical device that dates back to the Middle Ages, if not earlier, says Bryan. In a paper to be published in Neophilologus, an international journal of modern and medieval language and literature, he examines how two characters from a medieval legend that formed the basis for Richard Wagner's opera "Der Ring des Nibelungen" ("The Ring of the Nibelung") used indirect aggression to gain the upper hand in their argument.

The same rhetorical approaches are still in use today, he says, although "modern culture seems to have lost its talent for it."

In his article, Bryan discusses how two fictional queens used indirect aggression in the Nibelungen legend, which dates back to the 13th century. He notes that indirect aggressive speech is typically associated with the queen who holds the upper hand in an argument.

In similar fashion, candidates in political debates who deliver the best indirect one-liner can be perceived as the winner of the contest, he says.

'…no Jack Kennedy'

One famous example of this occurred during a 1988 vice presidential debate between Texas Sen. Lloyd Bentsen and Indiana Sen. Dan Quayle. After Quayle defended his inexperience as similar to that of John F. Kennedy, Bentsen replied: "Senator, I served with Jack Kennedy. I knew Jack Kennedy. Jack Kennedy was a friend of mine. Senator, you are no Jack Kennedy."

That slight is an example of indirect aggression, Bryan says, because it requires interpretation on the part of the opponent.

"It raises the question, 'If I'm no Jack Kennedy, then what am I? I must be something less than Jack Kennedy,'" Bryan says.

In his article, Bryan examines an argument that takes place between two queens involved in a struggle to achieve status.

Bryan analyzes "the verbal conflict in the so-called 'Quarrel of the Queens' episode" from three different texts of the Nibelung legends. One version is German, one Norwegian and the third Icelandic. The two queens, called Prunhilt and Kriemhilt in one version but similarly named in the other two, "argue fiercely about who has the stronger, braver husband" and "give as good as they get in the argument," Bryan writes in his paper, "Indirect Aggression: A Pragmatic Analysis of the Quarrel of the Queens in Völsungasaga, Þiðreks Saga and Das Nibelungenlied."

Although each of the three text sources takes a different approach to the two queens' argument, "each relies heavily upon a strategy of verbal conflict that vacillates between indirectness in speech … and directness of speech." The indirect approaches employ sarcasm and veiled threats that require interpretation, while the direct approaches can be taken at face value.

"The arguer perceived (or who perceives herself) as holding the stronger position in the argument tends to maintain a veil of indirectness, while the arguer in the losing position may either attempt to gain the upper hand by intensifying indirectness or, conceding the weaker position, attempt to salvage her status by resorting to directness in speech."

In other words, Bryan says, "Indirectness reflects a position of strength, whereas directness reflects the weaker rhetorical and social status."

24/7 news media

Could the same approach to rhetoric hold true among political candidates? Could the candidate who takes pride in being the "straight talker," as GOP nominee John McCain did in 2008, actually be at a disadvantage to one who is less direct?

Bryan believes that is possible. But in modern political campaigns, one factor comes into play that didn't exist in the Middle Ages: the 24/7 news media.

"Modern politicians have a huge problem," Bryan says. "They have to understand all of the policy issues, and then they have to translate all of that into something that all Americans understand, regardless of education or status. So there's this translation that happens through the news media and on the campaign trail that has to appeal to a wide audience."

This has become an issue recently for Mitt Romney after a tape from a campaign fundraising dinner held last spring became public. In that tape, Romney discusses issues in terms he had not used in public venues, such as campaign speeches or media interviews.

"He wasn't speaking to that mass audience," Bryan says. "He was speaking to ultra-rich donors. It was still a gaffe, but in that room, with that audience, it was not."

In a media-saturated world, political debates may be one of the few opportunities political candidates have to come across as relatively unfiltered. For those skilled in rhetoric, this can be an advantage.

"Most of the time in political discourse, the politicians aren't talking directly to each other," Bryan says. "They're talking around each other and talking to the audience."

The rise of print — and the transition of communication from oral to written — has lessened the impact of rhetorical techniques such as indirect aggression over the centuries, Bryan says. As a result, people have become less skilled at it.

"The interesting thing to me is that, while we do use the same tactics of argumentation today, modern culture seems to have lost its talent for it," says Bryan. "These medieval texts actually show far greater nuance and sophistication in their strategies of indirect aggression than anything employed today."

The lessons of going medieval

He sees lessons to be learned from studying the rhetoric of the Quarrel of the Queens and similar vignettes from medieval legends.

"We can really learn something by looking at a time like this when aggression was a political and economic instrument," Bryan says. "Understanding aggression and conflict in a different way, constructively rather than something that should be avoided at all costs, would be a good thing."

Bryan will be watching the presidential debates closely, as will students in his English 306 class, "A Linguistic Study of Modern English."

"We'll be doing a lot of discourse analysis" around the presidential debates, he says.


Journal Reference:

  1. Eric Shane Bryan. Indirect Aggression: A Pragmatic Analysis of the Quarrel of the Queens in Völsungasaga, Þiðreks Saga, and Das Nibelungenlied. Neophilologus, 2012; DOI: 10.1007/s11061-012-9322-4

Training computers to understand the human brain

The activation maps of the two contrasts (hot color: mammal > tool; cool color: tool > mammal) computed from the 10 datasets of our participants. (Credit: Image courtesy of Tokyo Institute of Technology)

 Tokyo Institute of Technology researchers use fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people.

Understanding how the human brain categorizes information through signs and language is a key part of developing computers that can 'think' and 'see' in the same way as humans. Hiroyuki Akama at the Graduate School of Decision Science and Technology, Tokyo Institute of Technology, together with co-workers in Yokohama, the USA, Italy and the UK, have completed a study using fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people.

The participants were asked to look at pictures of animals and hand tools together with an auditory or written (orthographic) description. They were asked to silently 'label' each pictured object with certain properties, whilst undergoing an fMRI brain scan. The resulting scans were analysed using algorithms that identified patterns relating to the two separate semantic groups (animal or tool).

After 'training' the algorithms in this way using some of the auditory session data, the computer correctly identified the remaining scans 80-90% of the time. Similar results were obtained with the orthographic session data. A cross-modal approach, namely training the computer using auditory data but testing it using orthographic data, reduced performance to 65-75%. Continued research in this area could lead to systems that allow people to speak through a computer simply by thinking about what they want to say.
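The paper's own analysis is an MVPA study of real fMRI data; the Python sketch below only illustrates the logic of training on one modality and testing on the other, using synthetic stand-in "voxel patterns" and a linear classifier. The data generator, the modality shift and all numbers are assumptions made for illustration, not the authors' pipeline.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 80, 200

    def make_session(modality_shift):
        """Synthetic 'voxel patterns' for animal (0) vs. tool (1) trials."""
        labels = rng.integers(0, 2, n_trials)
        signal = np.outer(labels, np.linspace(1.0, 0.0, n_voxels))  # category effect
        noise = rng.normal(scale=1.0, size=(n_trials, n_voxels))
        return signal + modality_shift + noise, labels

    X_aud, y_aud = make_session(modality_shift=0.0)    # stand-in 'auditory' session
    X_orth, y_orth = make_session(modality_shift=0.3)  # stand-in 'orthographic' session

    train, test = slice(0, 60), slice(60, 80)
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X_aud[train], y_aud[train])
    print("within-modality accuracy:", accuracy_score(y_aud[test], clf.predict(X_aud[test])))
    print("cross-modal accuracy:    ", accuracy_score(y_orth, clf.predict(X_orth)))

As in the study, the interesting comparison is between the two scores: the classifier has never seen the second modality's signal, so cross-modal decoding is the harder test.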

 

Journal Reference:

  1. Hiroyuki Akama, Brian Murphy, Li Na, Yumiko Shimizu, Massimo Poesio. Decoding semantics across fMRI sessions with different stimulus modalities: a practical MVPA study. Frontiers in Neuroinformatics, 2012; 6 DOI: 10.3389/fninf.2012.00024

Applying information theory to linguistics

The majority of languages — roughly 85 percent of them — can be sorted into two categories: those, like English, in which the basic sentence form is subject-verb-object ("the girl kicks the ball"), and those, like Japanese, in which the basic sentence form is subject-object-verb ("the girl the ball kicks").

The reason for the difference has remained somewhat mysterious, but researchers from MIT's Department of Brain and Cognitive Sciences now believe that they can account for it using concepts borrowed from information theory, the discipline, invented almost singlehandedly by longtime MIT professor Claude Shannon, that led to the digital revolution in communications. The researchers will present their hypothesis in an upcoming issue of the journal Psychological Science.

Shannon was largely concerned with faithful communication in the presence of "noise" — any external influence that can corrupt a message on its way from sender to receiver. Ted Gibson, a professor of cognitive sciences at MIT and corresponding author on the new paper, argues that human speech is an example of what Shannon called a "noisy channel."

"If I'm getting an idea across to you, there's noise in what I'm saying," Gibson says. "I may not say what I mean — I pick up the wrong word, or whatever. Even if I say something right, you may hear the wrong thing. And then there's ambient stuff in between on the signal, which can screw us up. It's a real problem." In their paper, the MIT researchers argue that languages develop the word order rules they do in order to minimize the risk of miscommunication across a noisy channel.

Gibson is joined on the paper by Rebecca Saxe, an associate professor of cognitive neuroscience; Steven Piantadosi, a postdoc at the University of Rochester who did his doctoral work with Gibson; Leon Bergen, a graduate student in Gibson's group; research affiliate Eunice Lim; and Kimberly Brink, who graduated from MIT in 2010.

Mixed signals

The researchers' hypothesis was born of an attempt to explain the peculiar results of an experiment reported in the Proceedings of the National Academy of Sciences in 2008; Brink reproduced the experiment as a class project for a course taught by Saxe. In the experiment, native English speakers were shown crude digital animations of simple events and asked to describe them using only gestures. Oddly, when presented with events in which a human acts on an inanimate object, such as a girl kicking a ball, volunteers usually attempted to convey the object of the sentence before trying to convey the verb — even though, in English, verbs generally precede objects. With events in which a human acts on another human, such as a girl kicking a boy, however, the volunteers would generally mime the verb before the object.

"It's not subtle at all," Gibson says. "It's about 70 percent each way, so it's a shift of about 40 percent."

The tendency even of speakers of a subject-verb-object (SVO) language like English to gesture subject-object-verb (SOV), Gibson says, may be an example of an innate human preference for linguistically recapitulating old information before introducing new information. The "old before new" theory — which, according to the University of Pennsylvania linguist Ellen Prince, is also known as the given-new, known-new, and presupposition-focus theory — has a rich history in the linguistic literature, dating back to at least the work of the German philosopher Hermann Paul, in 1880.

Imagine, for instance, the circumstances in which someone would actually say, in ordinary conversation, "the girl kicked the ball." Chances are, the speaker would already have introduced both the girl and the ball — say, in telling a story about a soccer game. The sole new piece of information would be the fact of the kick.

Assuming a natural preference for the SOV word order, then — at least in cases where the verb is the new piece of information — why would the volunteers in the PNAS experiments mime SVO when both the subject and the object were people? The MIT researchers' explanation is that the SVO ordering has a better chance of preserving information if the communications channel is noisy.

Suppose that the sentence is "the girl kicked the boy," and that one of the nouns in the sentence — either the subject or the object — will be lost in transmission. If the word order is SOV, then the listener will receive one of two messages: either "the girl kicked" or "the boy kicked." If the word order is SVO, however, the two possible messages on the receiving end are "the girl kicked" and "kicked the boy": More information will have made it through the noisy channel.
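A toy simulation (not the authors' model) makes the asymmetry concrete: delete one of the two nouns at random and list what can still be received under each word order. Everything in this Python sketch is built only from the example sentence above.

    import random

    def transmit(order, subject="the girl", verb="kicked", obj="the boy"):
        """Drop one noun at random and return the words that survive, in order."""
        words = {"S": subject, "V": verb, "O": obj}
        lost = random.choice(["S", "O"])
        return " ".join(words[slot] for slot in order if slot != lost)

    random.seed(1)
    for order in ("SOV", "SVO"):
        received = {transmit(order) for _ in range(100)}
        print(order, "->", sorted(received))

    # SOV can only yield "the girl kicked" or "the boy kicked", so the listener cannot
    # tell whether the surviving noun did the kicking or got kicked.
    # SVO yields "the girl kicked" or "kicked the boy": word position still marks the role.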

Down to cases

That is the MIT researchers' explanation for the experimental findings reported in the 2008 PNAS paper. But how about the differences in word order across languages? A preliminary investigation, Gibson says, suggests that there is a very strong correlation between word order and the strength of a language's "case markings." Case marking means that words change depending on their syntactic function: In English, for instance, the pronoun "she" changes to "her" if the kicker becomes the kicked. But case marking is rare in English, and English is an SVO language. Japanese, a strongly case-marked language, is SOV. That is, in Japanese, there are other cues as to which noun is subject and which is object, so Japanese speakers can default to their natural preference for old before new.

Gibson adds that, in fact, some languages have case markings only for animate objects — an observation that accords particularly well with the MIT researchers' theory.

"It's an extremely valuable study," says Steven Pinker, the Johnstone Family Professor in the Department of Psychology at Harvard University. "The design of any language reflects a compromise between properties that make it more useful — clarity, expressiveness, ease of articulation — and properties that are standardized across a community of speakers so that everyone is using the same code. Most grammatical theorists have focused on the arbitrary nature of the community-wide grammar. Gibson has now shed light on how each of these grammars has evolved, in a few predictable ways, to maximize clarity in communicating who did what to whom. That is, much more can be said than just 'That's the way English is; that's the way Turkish is,' and so on. Gibson's study shows that there is a great deal of functional design in seemingly arbitrary patterns of variation across languages."

In order to make their information-theoretical model of word order more rigorous, Gibson says, he and his colleagues need to better characterize the "noise characteristics" of spoken conversation — what types of errors typically arise, and how frequent they are. That's the topic of ongoing experiments, in which the researchers gauge people's interpretations of sentences in which words have been deleted or inserted.


Journal Reference:

  1. E. Gibson, S.T. Piantadosi, K. Brink, L. Bergen, E. Lim, and R. Saxe. A noisy-channel account of crosslinguistic word order variation. Psychological Science, (accepted) 2012

Language learning makes the brain grow, Swedish study suggests

Scientists have measured brains before and after language training and suggest that language learning makes the brain grow. (Credit: © pixologic / Fotolia)

At the Swedish Armed Forces Interpreter Academy, young recruits learn a new language at a very fast pace. By measuring their brains before and after the language training, a group of researchers has had an almost unique opportunity to observe what happens to the brain when we learn a new language in a short period of time.

At the Swedish Armed Forces Interpreter Academy in the city of Uppsala, young people with a flair for languages go from having no knowledge of a language such as Arabic, Russian or Dari to speaking it fluently in the space of 13 months. From morning to evening, weekdays and weekends, the recruits study at a pace unlike on any other language course.

As a control group, the researchers used medicine and cognitive science students at Umeå University — students who also study hard, but not languages. Both groups were given MRI scans before and after a three-month period of intensive study. While the brain structure of the control group remained unchanged, specific parts of the brain of the language students grew. The parts that developed in size were the hippocampus, a deep-lying brain structure that is involved in learning new material and spatial navigation, and three areas in the cerebral cortex.

"We were surprised that different parts of the brain developed to different degrees depending on how well the students performed and how much effort they had had to put in to keep up with the course," says Johan Mårtensson, a researcher in psychology at Lund University, Sweden.

Students with greater growth in the hippocampus and areas of the cerebral cortex related to language learning (superior temporal gyrus) had better language skills than the other students. In students who had to put more effort into their learning, greater growth was seen in an area of the motor region of the cerebral cortex (middle frontal gyrus). The areas of the brain in which the changes take place are thus linked to how easy one finds it to learn a language and development varies according to performance.

Previous research from other groups has indicated that Alzheimer's disease has a later onset in bilingual or multilingual groups.

"Even if we cannot compare three months of intensive language study with a lifetime of being bilingual, there is a lot to suggest that learning languages is a good way to keep the brain in shape," says Johan Mårtensson.

 

Journal Reference:

  1. Johan Mårtensson, Johan Eriksson, Nils Christian Bodammer, Magnus Lindgren, Mikael Johansson, Lars Nyberg, Martin Lövdén. Growth of language-related brain areas after foreign language learning. NeuroImage, 2012; 63 (1): 240 DOI: 10.1016/j.neuroimage.2012.06.043

A little science goes a long way: Engaging kids improves math, language scores

A Washington State University researcher has found that engaging elementary school students in science for as little as 10 hours a year can lead to improved test scores in math and language arts.

Samantha Gizerian, a clinical assistant professor in WSU's Department of Veterinary and Comparative Anatomy, Pharmacology and Physiology, saw improved test scores among fourth-grade students in South Los Angeles after students from the Charles R. Drew University of Medicine and Science gave 10 one-hour presentations on science.

"A lot of students say things like, 'I didn't know science was fun,'" says Gizerian, who helped with the classes while on the Drew faculty. "And because they think it's fun, all of a sudden it's not work anymore. It's not homework. It's not something extra that they have to do."

The fourth-graders in turn took home nonfiction books and showed a greater willingness to practice reading and math, says Gizerian.

Test scores bear that out.

According to a poster Gizerian presented at the recent annual meeting of the Society for Neuroscience, the students' average percentile rank in math on a standardized test increased from 53.2 in the third grade to 63.4 in the fourth grade. The language arts percentile improved even more dramatically, rising from 42.8 in the third grade to 60.3.

The study was part of a science-education initiative in which students from Drew acted as science mentors and gave science lessons. The program, funded by a National Center for Research Resources Science Education Partnership Award, improved the Drew students' ability to describe difficult scientific concepts, says Gizerian, "under the premise that, if you can teach a fourth grader a complex science concept, then you can teach anybody."

The Drew students, most of whom are ethnic minorities, served as role models for the pupils, who come from predominantly low-income, minority neighborhoods.

The pupils' prevailing attitude, says Gizerian, is, "in our culture, science isn't something we do. Science is for 'them.' To have kids in their classroom whose faces are the same colors, and for them to say, 'science is for me,' that's a big thing that we do."

In some cases, a lesson could be as simple and eye-opening as a microscope slide and the tiny life forms visible on it.

"It's really amazing when you hand them a piece of glass that's a microscope slide and you tell them, 'This is a real microscope slide — I use these in my lab,'" says Gizerian. "All of a sudden there's just complete reverence. They're just completely blown away by the idea that they're doing real science."

Gizerian's study concludes that the science lessons, while effective in themselves, also serve "as a spark to ignite a child's interest in lifelong learning in all areas."

Neandertal's right-handedness verified, hints at language capacity

Scratch marks on the teeth from the Neandertal skeleton Regourdou. (Credit: Volpato et al, Hand to Mouth in a Neandertal: Right-Handedness in Regourdou 1. PLoS ONE, 2012; 7 (8): e43949 DOI: 10.1371/journal.pone.0043949)

There are precious few Neandertal skeletons available to science. One of the more complete was discovered in 1957 in France, roughly 900 yards away from the famous Lascaux Cave. That skeleton was dubbed "Regourdou." Then, about two decades ago, researchers examined Regourdou's arm bones and theorized that he had been right-handed.

"This skeleton had a mandible and parts of the skeleton below the neck," said David Frayer, professor of anthropology at the University of Kansas. "Twenty-plus years ago, some people studied the skeleton and argued that it was a right-handed individual based on the muscularity of the right arm versus the left arm."

Handedness, a uniquely human trait, signals brain lateralization, where each of the brain's two hemispheres is specialized. The left brain controls the right side of the body and in a human plays a primary role for language. So, if Neandertals were primarily right-handed, like modern humans, that fact could suggest a capacity for language.

Now, a new investigation by Frayer and an international team led by Virginie Volpato of the Senckenberg Institute in Frankfurt, Germany, has confirmed Regourdou's right-handedness by looking more closely at the robustness of the arms and shoulders, and comparing it with scratches on his teeth. Their findings are published August 23 in the journal PLoS ONE.

"We've been studying scratch marks on Neandertal teeth, but in all cases they were isolated teeth, or teeth in mandibles not directly associated with skeletal material," said Frayer. "This is the first time we can check the pattern that's seen in the teeth with the pattern that's seen in the arms. We did more sophisticated analysis of the arms — the collarbone, the humerus, the radius and the ulna — because we have them on both sides. And we looked at cortical thickness and other biomechanical measurements. All of them confirmed that everything was more robust on the right side then the left."

Frayer said Neandertals used their mouths like a "third hand" and that produced more wear and tear on the front teeth than their back ones. "It's long been known the Neandertals had been heavily processing things with their incisors and canines," he said.

Frayer's research on Regourdou's teeth confirmed the individual's right-handedness.

"We looked at the cut marks on the lower incisors and canines," said the KU researcher. "The marks that are on the lip side of the incisor teeth are oblique, or angled in such away that it indicates they were gripping with the left hand and cutting with the right, and every now and then they'd hit the teeth and leave these scratch marks that were there for the life of the individual."

Frayer said that the research on Regourdou shows that 89 percent of European Neandertal fossils (16 of 18) showed clear preference for their right hands. This is very similar to the prevalence of right-handers in modern human populations — about 90 percent of people alive today favor their right hands.

Frayer and his co-authors conclude that such ratios suggest a Neandertal capacity for language.

"The long-known connection between brain asymmetry, handedness and language in living populations serves as a proxy for estimating brain lateralization in the fossil record and the likelihood of language capacity in fossils," they write.

 

Journal Reference:

  1. Virginie Volpato, Roberto Macchiarelli, Debbie Guatelli-Steinberg, Ivana Fiore, Luca Bondioli, David W. Frayer. Hand to Mouth in a Neandertal: Right-Handedness in Regourdou 1. PLoS ONE, 2012; 7 (8): e43949 DOI: 10.1371/journal.pone.0043949