Media bias seems greater when you hold stronger beliefs about the subject, psychological research finds.
Media bias is a bias among journalists and news producers that affects which news stories they report and how they choose to report them.
Some of the most common ways in which media bias occurs are:
Mainstream bias: the bias to report the same stories other outlets are reporting.
Corporate bias: reporting stories that please the owners of media.
Statement bias: when coverage is biased either for or against particular issues or actors (for example, political coverage).
There are many more types of media bias.
Media bias certainly exists: indeed, it would be a miracle if coverage were permanently and perfectly balanced. But that isn’t what this article is about.
Instead, this is about how you and I perceive the presence or absence of bias in the media.
This study, conducted in the 1980s, helps to explain why commentary on media bias across the political spectrum generates so much heat and so little light, including the remarkably vitriolic outpourings often seen in the comment sections of newspaper websites and across the internet.
Media bias and the Beirut massacre
Robert P. Vallone and colleagues from Stanford University invited 144 Stanford undergrads who held a variety of views on the continuing Arab-Israeli conflict to watch some of the news coverage of the Beirut massacre (Vallone et al., 1985).
The Beirut massacre was the killing of between 328 and 3,500 Palestinian and Lebanese civilians by Lebanese militia forces in September 1982.
At the time the story received huge media coverage around the world with much speculation about whether Israeli forces had allowed it to happen (a subsequent commission held the Israeli government indirectly responsible).
Some of the participants recruited for the study were moderate in their initial views, others were specifically recruited from both the pro-Arab and pro-Israeli student associations.
Each was asked for their views about the conflict, its history and where their sympathies lay.
Here’s what they found:
68 were pro-Israeli,
27 were pro-Arab,
49 had mixed feelings.
All the participants then watched a series of news segments taken from US networks (NBC, ABC and CBS).
Afterwards they were asked to rate whether, overall, the coverage was biased for or against Israel.
They used a scale of 1 (heavy pro-Arab bias) to 9 (heavy pro-Israel bias) where a rating of 5 was fair and impartial.
The results
Here are the average ratings for the news coverage from each group:
Pro-Israeli: 2.9 (perceived a marked pro-Arab bias)
Neutral: 3.8 (perceived a slight pro-Arab bias)
Pro-Arab: 6.7 (perceived a marked pro-Israeli bias)
As you can see the pro-Israeli participants thought the news reports were biased against Israel while the pro-Arab participants thought the news reports were biased against Arabs.
This is impressive because everyone was watching exactly the same news reports.
Even more surprising, each group thought that a neutral observer watching the same coverage would be persuaded to side with the opposing camp.
Notice that those who claimed to be neutral thought the coverage had a slight pro-Arab bias.
This could be a hint of actual media bias or could be just an unacknowledged bias in those initially declaring themselves neutral.
Causes of the hostile media bias phenomenon
The study demonstrates what the authors call the ‘hostile media phenomenon’: people’s tendency to view news coverage about which they hold strong beliefs as biased against their own position.
There were two mechanisms at work here:
The truth is black and white: partisans generally thought that the truth about the Arab-Israeli debate was black and white. Any hint of shades of grey in the news reports was interpreted by partisans as bias towards the other side. In other words: any balanced report will seem biased to partisan viewers.
The news report was too grey: as well as thinking the Arab-Israeli issue was either black or white, partisans also perceived that the specific news report they watched was too grey.
Put simply: when we care about an issue, we tend not to notice all the points we agree with, and focus on the ones we don’t.
Admitting media bias
Whether news coverage of an issue you care about actually is biased in a particular outlet can be very hard to quantify.
What we can say from this study is that people who care about a particular issue will tend to find media bias everywhere, whether or not it really exists.
Not only that, but they are unlikely to admit this to themselves: this study, amongst others, shows how remarkably resistant we are to admitting our own biases, even when they are categorically demonstrated to us.
The top five list of personal fears of Americans: public speaking is at number 5.
The first comprehensive survey of what Americans are afraid of has revealed that top of the list of personal fears is ‘walking alone at night’, not ‘public speaking’.
The full top 5 list of personal fears is:
1. Walking alone at night.
2. Becoming the victim of identity theft.
3. Safety on the internet.
4. Being the victim of a mass/random shooting.
5. Public speaking.
People in the Chapman University survey were asked separately about fear of crime, and the researchers were surprised by the results.
Dr Edward Day, who led this part of the study, said:
“What we found when we asked a series of questions pertaining to fears of various crimes is that a majority of Americans not only fear crimes such as child abduction, gang violence, sexual assaults and others, but also believe these crimes (and others) have increased over the past 20 years.
When we looked at statistical data from police and FBI records, it showed crime has actually decreased in America in the past 20 years.
Criminologists often get angry responses when we try to tell people the crime rate has gone down.”
Majorities thought crimes of almost every type were on the increase when, in fact, crime in the United States has been decreasing.
Fear factors
Across the different types of fears, the researchers looked at what characteristics of individuals predicted fear.
Dr. Christopher Bader, one of the study’s authors, explained:
“Through a complex series of analyses, we were able to determine what types of people tend to fear certain things, and what personal characteristics tend to be associated with most types of fear.”
What emerged were two factors that most consistently predicted high levels of fear: a lower level of education and a high frequency of watching talk and true-crime TV shows.
Self-deception research in psychology suggests people find it relatively easy to pull the wool over their own eyes, when motivated.
Self-deception, meaning lying to ourselves, seems like the last thing we should do, on a par with calmly and deliberately shooting yourself in the foot or plunging a hot toasting fork into your eye.
Surely self-deception is counter-productive, and full self-awareness should be the aim?
But look around and it’s not hard to spot the tell-tale symptoms of self-deception in other people.
So perhaps we are also practicing self-deception ourselves in ways we can’t clearly perceive?
But is that really possible and would we really believe the lies that we ‘told’ ourselves anyway?
That’s what Quattrone & Tversky (1984) explored in a classic social psychology experiment published in the Journal of Personality and Social Psychology.
Examples of self-deception
Any study of self-deception is going to involve a fair amount of bare-faced lying, and this research was no different.
They recruited 38 students who were told they were going to take part in a study about the “psychological and medical aspects of athletics” — no mention of self-deception.
Not true: in fact the researchers were going to trick participants into thinking that how long they could submerge their arms in cold water was diagnostic of their health status, when really it showed just how open people are to self-deception.
This is how they did it.
The experiment
The participants were first asked to plunge their arms into cold water for as long as they could.
The water was pretty cold and people could only manage this for 30 or 40 seconds.
Then, participants were given some other tasks to do to make them think they really were involved in a study about athletics.
They had a go on an exercise bike and were given a short lecture about life expectancy and how it related to the type of heart you have.
They were told there were two types of heart:
Type I heart: associated with poorer health, shorter life expectancy and heart disease.
Type II heart: associated with better health, longer life expectancy and low risk of heart disease.
Half were told that people with Type II hearts (apparently the ‘better’ type) have increased tolerance to cold water after exercise, while the other half were told that it decreased their tolerance.
Except, of course, this was all lies, made up purely so that participants would think that how long they could hold an arm under water was a measure of their health: half believed cold-tolerance was a good sign and half believed it was a bad one.
Now time for the test: participants had another go at putting their arms into the cold water for as long as they could.
The graph below shows the average results before and after all the blatant lying (in the name of science of course!):
As you can see, the experimental manipulation had a strong effect.
People who thought it was a sign of a healthy heart to hold their arms underwater for longer did just that, while those who believed the reverse all of a sudden couldn’t take the cold.
That’s all well and good, but were these people really lying to themselves or just the experimenters and did they believe those lies?
Results of the research
After the arm-dunking each participant was asked whether they had intentionally changed the amount of time they held their arms underwater.
Of the 38 participants, 29 denied it and 9 confessed, though not directly.
Many of the 9 confessors claimed the water had changed temperature.
It hadn’t, of course; this was just a way for people to justify their behaviour without directly facing their self-deception.
All the participants were then asked whether they believed they had a healthy heart or not.
Of the 29 deniers, 60 percent believed they had the healthier type of heart.
However of the confessors only 20 percent thought they had the healthier heart.
What this suggests is that the deniers were more likely to be truly practicing self-deception, not just covering up a half-truth.
They really did think that the test was telling them they had a healthy heart.
Meanwhile, the confessors told a lie back to the experimenter (which seems only fair!), but privately the majority acknowledged what they had really been doing.
Levels of self-deception
This experiment is neat because it shows the different gradations of self-deception, all the way up to its purest form, in which people manage to trick themselves hook, line and sinker.
At this level people think and act as though their incorrect belief is completely true, totally disregarding any incoming hints from reality.
What this study suggests is that for many people self-deception is as easy as pie.
Not only will many people happily lie to themselves if given a reason, but they will only look for evidence that confirms their comforting self-deception, and then totally believe in the lies they are telling themselves.
Negotiation has two major pitfalls: when people do not communicate with each other and when they start using threats.
Negotiation is one of those activities we often engage in without quite realising it.
Negotiation doesn’t just happen in the boardroom, or when we ask our boss for a raise or down at the market, it happens every time we want to reach an agreement with someone.
This agreement could be as simple as choosing a restaurant with a friend, or deciding which TV channel to watch.
At the other end of the scale, negotiation or bargaining can affect the fate of nations.
Big-scale or small-scale, negotiation is a central part of our lives.
Understanding the psychological processes involved in negotiation can provide us with huge benefits in our everyday lives.
In a classic, award-winning series of studies, Morton Deutsch and Robert Krauss investigated two central factors in negotiation: how we communicate with each other and how we use threats (Deutsch & Krauss, 1962).
To do this, they used a game which forces two people to negotiate with each other.
Although Deutsch and Krauss used a series of different conditions – nine in fact – once you understand the basic game, all the conditions are only slight variations.
So, imagine you were a clerical worker at the Bell Telephone Laboratories in the late 1950s and you’ve been asked to take part in a psychology study.
Every psychology study has a story, and this one revolves around two trucking companies…
Negotiation experiment 1: Keep on trucking
Before the experiment proper starts, the researcher explains that you’ll be playing a game against another participant.
In the game you will run a trucking company.
The object of the game is the same as a real trucking company: to make as much money as possible.
Like the real-life trucking company you have to deliver as many of your goods as possible to their destination in the shortest possible time.
But in this game you only have one starting point, one destination and one competitor.
It looks like a pretty simple game.
Here’s the catch.
The road map your one truck has to travel across presents you with a dilemma.
You are the ‘Acme’ trucking company and your fellow participant is the ‘Bolt’ trucking company, although both of you have an identical problem.
Have a look below.
[Deutsch & Krauss, 1962, p. 55]
As you’ll see there are two possible routes you can take from the start to your destination: the short and the long.
Remember, time is money: the longer it takes you to get to your destination, the less profit you make, and making a profit is the aim of the game.
Unfortunately the short route has a major shortcoming: it is one-way.
Only one of you can travel down it at a time towards your destination.
It seems you’ll be forced to work out some agreement through negotiation with your unknown rival to share this one-way route so that you can both make money.
How you’ll do this is another mystery, though, as there is going to be no communication between the two of you during the experiment.
You are to be seated in a cubicle from where you’ll only be able to see the control box for your ‘truck’ and the experimenter.
Threatening gates
You are to be given one method of communication with your rival, albeit indirect communication.
Each of you controls a gate at your own end of the one-way road.
Your gate can be opened or closed whenever you pass through it.
This will be your threat.
The experimenter reinforces that you are out to make as much money as you can for yourself; the other person’s profit is not your concern.
Once the experimenter sets you off, it soon becomes clear you’re not going to make much money at all.
In the first of 20 trials, both you and your rival shut your gates, forcing both trucks onto the alternative route.
This is 50% longer and means you make a loss on the trip as a whole.
In the second trial your trucks meet head-on travelling up the one-way road.
You both have to reverse, costing you time and money.
The rest of the trials aren’t much better.
Occasionally you make a profit on a trip but more often than not it’s a bust.
You spend more time on the long route or reversing than you do chugging happily along the main route making money.
At the end of the experiment, the researcher announces how much profit you made.
None.
In fact you made a crippling loss.
Perhaps trucking companies aren’t so easy to run.
Comparing threats
You find out later that you were in one of three experimental conditions.
The only differences in the other two conditions were that in one there were no gates at either end of the one-way road.
In the other there was only one active gate controlled by one player.
Before I tell you the results of the other two conditions, try to guess.
One condition, which you’ve taken part in, contained bilateral threat – you could both threaten each other.
One condition had unilateral threat – only one could threaten the other.
And the final condition had no threat at all.
What was the order of profit?
In fact it turns out that your condition, of bilateral threat, made the least profit when both participants’ scores were added up.
The next most profitable was the unilateral threat condition, while the most profitable overall was the no-threat condition.
Here’s the first rather curious result.
While the person who had the threat (control of the gate) in the unilateral condition did better than the person who didn’t, both were still better off, individually and collectively, than when both had threats.
What this experiment shows is that the availability of threats leads to worse outcomes, to the extent that unilateral threat is preferable to bilateral threat for both parties.
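To see why mutual threat is so costly, the game’s incentive structure can be sketched in a few lines of code. This is a toy model only: the payment, running cost and route times below are assumed for illustration and are not the figures Deutsch and Krauss actually used.

```python
# Toy model of the trucking game's incentives.
# All numbers are assumed for illustration, not taken from
# Deutsch & Krauss's actual experiment.

PAYMENT = 60          # cents earned for completing a trip
COST_PER_SECOND = 1   # operating cost per second of the trial

def profit(seconds):
    """Profit for one truck whose trip (including any waiting) takes `seconds`."""
    return PAYMENT - COST_PER_SECOND * seconds

SHORT = 20   # seconds on the one-way road, travelling unimpeded
LONG = 70    # the winding alternative route, long enough to lose money

# Cooperation: take turns on the one-way road. One truck goes first;
# the other waits for the road to clear, then follows.
joint_turn_taking = profit(SHORT) + profit(SHORT + SHORT)

# Mutual threat: both gates slam shut, both trucks take the long route.
joint_both_long = profit(LONG) + profit(LONG)

print(joint_turn_taking)  # 60 -> sharing the road is jointly profitable
print(joint_both_long)    # -20 -> mutual threat produces a joint loss
```

However the joint profit is split, taking turns beats the outcome when both sides exercise their threats, which is the pattern the bilateral-threat participants kept falling into.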
Negotiation experiment 2: Lines of communication
But surely a little communication goes a long way?
You weren’t allowed to talk to the other participant in this experiment, so your trucks had to do the talking for you.
Bargaining is all about reaching a compromise through negotiation – surely this should help?
To test the effect of communication Deutsch and Krauss (1962) set up a second experiment which was identical in all respects to the first except participants were given headphones to talk to each other.
Here’s the next curious result: allowing the two participants to communicate with each other made no significant difference to the amount of money each trucking company made.
In fact the experimenters found no relationship between words spoken and money made.
In other words, those who communicated more did not manage to reach a better understanding with each other.
They did little negotiation.
Like the experimenters themselves, I find this result surprising.
Surely allowing people to communicate lets them work out a way for them both to make money?
And yet this isn’t what happened in the experiment at all.
Instead it seems that people’s competitive orientation was stronger than their motivation to communicate.
On the other hand, perhaps something specific to the situation in this experiment is stopping people talking?
Participants in the second study reported that it was difficult to start talking to the other person, who was effectively a stranger.
As a result they were considerably less talkative than normal.
Could it be that it was this situational constraint that meant little talking, and therefore little bargaining was going on?
Negotiation experiment 3: Forced communication
Deutsch and Krauss decided to test the effect of forced communication in their third experiment.
Again the procedure is the same as last time but now participants are instructed that on each of the 20 trials they have to say something.
If they don’t talk on one of the trials they are gently reminded by the experimenter to do so.
They are told they can talk about whatever they like, as long as they say something.
The results finally showed some success for communication.
Performance in the one-gate (unilateral threat) condition came close to that achieved in the ‘no-threat’ condition (remember the no-threat condition has the best outcomes).
Forced communication didn’t have much effect on the ‘no-threat’ condition when compared with no communication, and neither did it improve the bilateral threat condition much.
It still seems that people are so competitive when they both have threats it’s very difficult to avoid both sides losing out.
In negotiation, threats cause resentment
The most surprising finding of this study is how badly people do under conditions of bilateral threat.
In this experiment not even forcing communication can overcome people’s competitive streaks.
Deutsch and Krauss provide a fascinating explanation for this.
Imagine your neighbour asks you to water their plants while they’re on holiday.
Socially, it looks good for you if you agree to do it.
On the other hand, if they say you must water their plants or they’ll leave their TV on full blast while they’re on holiday, it immediately gets your hackles up.
Suddenly you resent them.
Giving in when there is no threat is seen by other people as pro-social.
Duress, however, seems to make people dig in their heels.
Applying the brakes to negotiation
Before drawing some general conclusions from these studies, we should acknowledge the particular circumstances of this research.
Deutsch and Krauss’s experiment covers a situation in which negotiation is carried out under time pressure.
Recall that the longer participants take to negotiate, the less money they make. In real life, time isn’t always of the essence.
The present game also has a relatively simple solution: participants make the most profit if they share the one-way road. In reality, solutions are rarely that clear-cut.
Finally, our participants were not professional negotiators; they were clerical and supervisory workers without special training.
Real-life implications
Despite these problems the trucking game has the advantage of being what game theorists call a non-zero-sum game.
In other words if you win, it doesn’t automatically mean the other person loses.
When you total the final results, as you sometimes can in a financial sense, they don’t add to zero.
In real life many of the situations in which we find ourselves are of this nature.
Cooperation can open the way to more profit, in financial or other form, for both parties.
As a result the trucking game has clear implications for real life:
Cooperative relationships are likely to be much more beneficial overall than competitive relationships. Before you go ‘duh!’, remember that increasing proportions of the world’s societies are capitalist. Deutsch and Krauss’s experiment clearly shows the friction caused by competitive relationships, such as those encouraged by capitalism. I’m not saying capitalism is bad, I’m just saying competition isn’t always good. This simple fact is often forgotten.
Just because people can communicate, doesn’t mean they will – even if it is to their advantage.
Forcing parties to communicate, even if they already have the means to communicate, encourages mutually beneficial outcomes.
In competitive relationships, communication should be aimed at increasing cooperation. Other methods will probably create more heat than light.
Threats are dangerous, not only to others’ interests, but also to our own.
Remember all these the next time you are bargaining with your partner over a night out, about to shout a threat at a motorist blocking your path on a one-way road, or even involved in high-level political negotiations between warring factions with nuclear capabilities.
It could save you, and the other side, a lot of trouble.
The cheerleader effect in psychology is that people appear more attractive in a group. It is explained by the averaging effect of the group.
The so-called ‘cheerleader effect’ is the phenomenon that people seem more attractive when they are in a group than when they are alone.
At least, so urban legend has it.
But now the cheerleader effect has scientific backing from a study published recently in Psychological Science (Walker & Vul, 2013).
In fact, the study finds that both men and women are perceived as more attractive when they are in a group than when alone.
What is the cheerleader effect?
The effect is the result of the way we look at groups and what people, on average, deem an attractive face.
Generally people find ‘average’ faces most attractive.
When psychologists say ‘average’ in this sense, they mean if you average out the faces of lots of different people.
They don’t mean people who are average-looking.
Lead author of the study, Drew Walker, explains:
“Average faces are more attractive, likely due to the averaging out of unattractive idiosyncrasies.
Perhaps it’s like Tolstoy’s families: Beautiful people are all alike, but every unattractive person is unattractive in their own way.”
The cheerleader effect comes about, then, because when we look at a group of people, we see them as a group, and our brains average out their facial features.
In the study, people’s faces were shown to participants either alone or in group photos.
Sure enough, both men and women were rated more highly when presented in a group than when alone.
The effect was small but still noticeable.
The study’s co-author, Edward Vul, joked:
“The effect is definitely small, but some of us need all the help we can get.”
This leads to the idea that you might try to hang out with people whose ‘less average’ features complement your own.
The authors hint at some future research:
“If the average is more attractive because unattractive idiosyncrasies tend to be averaged out, then individuals with complementary facial features — one person with narrow eyes and one person with wide eyes, for example — would enjoy a greater boost in perceived attractiveness when seen together, as compared to groups composed of individuals who have more similar features.” (Walker & Vul, 2013).
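The averaging account can be illustrated with a toy model. Here each face is treated as a short list of feature measurements, and attractiveness is scored as closeness to the population-average face; both the feature values and the distance-based scoring are assumptions of this sketch, not the study’s actual method.

```python
# Toy illustration of the averaging account of the cheerleader effect.
# Scoring faces by distance from the population average is an assumption
# of this sketch; Walker & Vul used ratings of real photographs.

def distance_from_average(face, average):
    """Euclidean distance between a face's features and the average face."""
    return sum((f - a) ** 2 for f, a in zip(face, average)) ** 0.5

AVERAGE_FACE = [5.0, 5.0]   # e.g. [eye width, nose length], population mean

narrow_eyes = [3.0, 5.0]    # one idiosyncrasy each, in opposite directions
wide_eyes = [7.0, 5.0]

# Seen alone, each face sits some distance from the average.
alone = distance_from_average(narrow_eyes, AVERAGE_FACE)

# Seen as a group, the visual system averages the faces together,
# so the complementary idiosyncrasies cancel out.
group_face = [(n + w) / 2 for n, w in zip(narrow_eyes, wide_eyes)]
in_group = distance_from_average(group_face, AVERAGE_FACE)

print(alone)     # 2.0 -> idiosyncratic on its own
print(in_group)  # 0.0 -> the group percept is exactly average
```

The cancellation is strongest when the idiosyncrasies are complementary, which is exactly the future research direction the authors suggest.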
How selfish people justify their behaviour to both themselves and others.
Selfish people tend to forget their selfish acts, research finds.
It is a psychological mechanism that helps the selfish maintain a positive view of themselves.
However, only a minority of people are selfish, and this bias only applies to them.
The majority of people are generous and recall their behaviour accurately.
The conclusions come from a study motivated by the question of how selfish people can live with themselves.
The answer, it emerges, is partly through self-deception.
People who are selfish often justify their selfish behaviour to both themselves and others.
For example, they might justify giving a small tip by saying the service was poor.
However, selfish people also rely on ‘motivated misremembering’ to deny to themselves that the selfish act even happened.
Dr Molly Crockett, study co-author, said:
“When people behave in ways that fall short of their personal standards, one way they maintain their moral self-image is by misremembering their ethical lapses.”
For the research, five separate experiments that tested generosity were carried out on 3,190 people.
Across the studies, stingier people tended to recall giving more than they actually had.
This was despite being motivated to tell the truth by the offer of financial reward.
In a twist, two of the studies sometimes instructed people to be less generous.
Then, selfish people tended to recall exactly how much they had given.
The reason is that this time they were not morally responsible for the choice, so there was no need to ‘forget’ their selfish behaviour.
Mr Ryan Carlson, the study’s first author, said:
“Most people strive to behave ethically, but people sometimes fail to uphold their ideals.
In such cases, the desire to preserve a moral self-image can be a powerful force and not only motivate us to rationalize our unethical actions, but also ‘revise’ such actions in our memory.”
The study was published in the journal Nature Communications (Carlson et al., 2020).
The above-average effect or better-than-average effect explains why some people consider themselves superior at everything.
The above-average effect, sometimes known as illusory superiority or the better-than-average effect, is a finding in social psychology that people tend to overestimate their abilities.
Whether it is driving ability, estimating IQ, health, memory, relationships and even happiness, people consistently rate themselves as better than others.
For example, 93 percent of people think their driving abilities are better than average.
Naturally, we cannot all be above average — unfortunately, by definition some of us have to be below average.
Above-average effect vs. below-average
People do not always assume themselves to be above average, though.
When a task is particularly difficult, such as solving mathematical problems or playing chess, people assume they will do worse than average (see: worse-than-average effect).
Suddenly, instead of overestimating their abilities, people start underestimating their abilities.
This creates an apparent contradiction: how can people assume they are better-than-average, but suddenly lose their confidence when the task is difficult?
Better than average
A study sought to resolve this contradiction in the above-average effect by surveying runners about how they expected to do in an upcoming, challenging race (Engeler & Häubl, 2021).
Their estimates were then compared to their actual times.
The results showed that some of the runners showed the better-than-average effect: they thought they would do better than others.
The better-than-average effect was mainly driven by overconfidence: these runners predicted they could run faster than they really could.
The runners who underestimated their ability, though, were mainly driven by their expectations about other competitors.
In other words, they assumed other runners would be faster than they actually were.
Professor Gerald Häubl, study co-author, said:
“Our work identifies two distinct sources of bias or two different reasons for why people might not be well calibrated: they can be biased in their self-assessment, and they can be biased in their assessment of others.”
Overconfidence vs. underconfidence
Fascinatingly, the runners who were worst were also the most overconfident.
This is another demonstration of the Dunning-Kruger effect: the finding that the poorest performers are unaware of their shortcomings.
Put more crudely, it is why the incompetent don’t know they’re incompetent.
Overconfidence, or the above-average effect, is not always bad; it depends on the circumstances, said Professor Häubl:
“Some of humankind’s greatest achievements were probably fuelled by some form of overconfidence.
But then, so were some of humankind’s most spectacular failures.
In very general terms, well-calibrated confidence, based on an accurate assessment of both one’s own and others’ abilities, is what people should strive for.”
In contrast, underconfidence has more obvious disadvantages:
“The problem with underconfidence, however, is that it can prevent people who actually have the potential to excel at something—a particular job or career—from even trying, because they falsely believe there are many others who are better than they are.”
The study was published in the Journal of Personality and Social Psychology (Engeler & Häubl, 2021).
Research in the psychology of attitudes reveals why people say one thing, but do another.
The word for when someone says one thing but does another is hypocrisy.
But why are people often so hypocritical?
It’s only natural to think a person’s attitudes and behaviours are directly related.
If someone says, while truly believing it, that they’re not a racist, you’d expect them to behave consistently with that statement.
Despite this, psychologists have found that the link between a person’s attitudes and their behaviours is not always that strong.
People frequently say one thing but do another.
In fact, people have a nasty habit of saying one thing and then doing the exact opposite, even with the best of intentions.
You see it all the time:
People say they’re worried about global warming and yet they drive around in a big gas guzzler.
They say that money isn’t their God, yet they work all hours.
They say they want to be fit but they don’t do any exercise.
Say one thing, do another
The discovery of the extent of people’s blatant hypocrisy goes back to 1930s America and the work of a Stanford sociology professor, Richard LaPiere (LaPiere, 1934).
In the early 30s he was on a tour across California with some close friends who happened to be Chinese.
LaPiere was worried that, because his friends were Chinese, they would have trouble finding restaurants and hotels that would welcome them.
At that time in the US there had been lots of stories in the media about how prejudiced people were against Chinese people.
LaPiere and his friends were, therefore, pleasantly surprised to find that out of the 128 restaurants and hotels they visited, all but one served them courteously.
Nowadays the fact that one place refused to serve them would rightly be considered an outrage – but those were different times.
So it sounds like a happy ending: perhaps the papers had just exaggerated people’s negative attitudes towards Chinese people?
The gap between attitudes and behaviour
But when LaPiere got home he started to wonder why there was such a gap between what the newspapers were reporting about people’s attitudes and their actual behaviour.
Why do people say one thing, then do another?
To check this out he decided to send out a questionnaire to the restaurants and hotels they had visited along with other similar places in the area.
The questionnaire asked the owners about their attitudes, with the most important question being: “Will you accept members of the Chinese race in your establishment?”
The answers they could give were:
Yes.
No.
Depends upon the circumstances.
Imagine LaPiere’s surprise when he looked at the results.
Incredibly, 90 percent of respondents answered no: they would not accept members of the Chinese race into their establishments.
People genuinely did say one thing and do the complete reverse.
They didn’t even select ‘it depends’.
What on Earth was going on?
Problems with the study
LaPiere himself argued that the problem lay in the questionnaire.
The questions themselves cannot represent reality in all its confusing glory.
What probably happened when people were asked whether they would accept Chinese people was that they conjured up a highly prejudiced image of the Chinese that bore little relation to what they were actually presented with in reality.
Here was a polite, well-dressed, well-off couple in the company of a Stanford University professor.
Not the rude, job-stealing, yobbish stereotype they had in mind when they answered the questionnaire.
This study has actually been subsequently criticised for all sorts of reasons.
Nevertheless, its main finding – that people say one thing and do another in many situations – has been backed up by countless later studies, albeit using more sophisticated methods.
The question is: why?
Snapshot of prejudices
Many psychologists effectively agree with LaPiere that it all depends on how you ask the questions and what stereotypes people are currently imagining when they give their answers.
In some ways an attitude is like a snapshot of the prejudices the respondent has available to memory just at the moment they are questioned.
This has led to a whole raft of studies and theories searching for connections between people’s attitudes and their behaviour.
Many a lengthy tome has been dedicated to explaining the divergence.
Some of the factors that have been found important are:
Social norms.
Accessibility of the attitude.
Perceived control over behaviour.
Despite these findings, the picture is extremely complicated and frustratingly inconclusive.
Perhaps as a result, interest in this area has been waning amongst psychologists.
The exact way in which people’s attitudes and behaviour are connected remains a mystery.
All we can say with certainty is that people are frequently extremely inconsistent.
The process of lie detection has nothing to do with supposed ‘tells’ like avoiding eye-contact or sweating.
Despite all the advice about lie detection going around, study after study has found that it is very difficult to spot when someone is lying.
Previous tests involving watching videos of suspects typically find that both experts and non-experts come in at around 50/50: in other words you might as well flip a coin.
A study published in Human Communication Research, though, has found that a process of active questioning yielded almost perfect results, with 97.8 percent of liars successfully detected (Levine et al., 2014).
The process of lie detection has nothing to do with supposed ‘tells’ like avoiding eye-contact or sweating, and everything to do with the way the suspect is questioned.
The Reid Technique
In the series of studies, participants played a trivia game in which they were secretly offered a chance to cheat.
In one experiment 12 percent cheated and in another 44.9 percent chose to cheat.
Participants were then interviewed using a variety of active questioning techniques.
One group were interrogated using the Reid Technique, which is employed by many law enforcement professionals in North America.
It involves tactics like presuming the suspect is guilty, shifting the blame away from the suspect and asking loaded questions like “Did you plan this or did it just happen?”
This technique was 100 percent effective, with all 33 guilty participants owning up to their ‘crime’.
A second group were interviewed by US federal agents with substantial experience of interrogation.
They were able to detect 97.8 percent of the people who cheated — in reality, all but two of 89 people.
Bear two things in mind, though:
The Reid Technique’s detractors say that it can lead to false confessions.
Participants in this study did not have that much to lose by admitting their guilt. It wasn’t as if they’d murdered their spouses.
Active questioning
Across the different types of interrogation, though, the important factor was that the questioning was active and of the kind used in real interrogations.
Professor Timothy Levine of Michigan State University, who led the study, said:
“This research suggests that effective questioning is critical to deception detection.
Asking bad questions can actually make people worse than chance at lie detection, and you can make honest people appear guilty.
But fairly minor changes in the questions can really improve accuracy, even in brief interviews.
This has huge implications for intelligence and law enforcement.”
Presumption of honesty
Professor Levine believes lies are partly so difficult to detect because in normal, everyday life we have a presumption of honesty.
“The presumption of honesty is highly adaptive.
It enables efficient communication, and this presumption of honesty makes sense because most communication is honest most of the time.
However, the presumption of honesty makes humans vulnerable to occasional deceit.”
The key, then, to detecting lies may be to assume someone is lying and then question them on that basis.