Don't Believe Everything You Read and Hear

By Diana Holbourn

Article Summary

Welcome

Besides fake news and outright fraudulent claims, there's a lot of information around that is misleading, but sometimes not intentionally so. Some of it is biased, without a deliberate intent to deceive on the part of the one spreading it. But some of it is intended to manipulate people into doing things that will benefit those spreading it in some way, such as by advertising some things in a way that makes them sound more impressive or useful than they really are, to make money.

This article is about various kinds of common misinformation, from fraud to misinterpreted study results. It covers a range of topics, such as reasons not to instantly trust claims of alternative health cures, dubious studies, how opinion polls carried out by people trying to give the impression there's a lot of support for something they want to achieve can be manipulated to persuade people there is, how statistics can give a false impression of things, how backbiting can unfairly turn people against others, and other such things.

Skip past the following quotes if you'd like to get straight down to reading the article.


Quotes and Insightful Quips - Some Thought-Provoking - About Opinions, Thinking and Statistics

Error is a hardy plant; it flourishes in every soil.
--Martin F. Tupper

All opinions are not equal. Some are a very great deal more robust, sophisticated and well supported in logic and argument than others.
--Douglas Adams

To obtain a man’s opinion of you, make him mad.
--Oliver Wendell Holmes

Inquire not what are the opinions of any one; but inquire what is truth.
--John Calvin

Belief is when someone else does the thinking.
--Buckminster Fuller, 1972

Reading without reflecting is like eating without digesting.
--Edmund Burke

Opinions don’t affect facts. But facts should affect opinions, and do, if you’re rational.
--Ricky Gervais

Do not put your faith in what statistics say until you have carefully considered what they do not say.
--William W. Watt

Most people use statistics the way a drunkard uses a lamp post, more for support than illumination.
--Andrew Lang

When even the brightest mind in our world has been trained up from childhood in a superstition of any kind, it will never be possible for that mind, in its maturity, to examine sincerely, dispassionately, and conscientiously any evidence or any circumstance which shall seem to cast a doubt upon the validity of that superstition. I doubt if I could do it myself.
--Mark Twain

In science it often happens that scientists say, 'You know that’s a really good argument; my position is mistaken,' and then they would actually change their minds and you never hear that old view from them again. They really do it. It doesn’t happen as often as it should, because scientists are human and change is sometimes painful. But it happens every day. I cannot recall the last time something like that happened in politics or religion.
--Carl Sagan

Advances are made by answering questions. Discoveries are made by questioning answers.
--Bernhard Haisch, astrophysicist

The Main Contents

  1. Don't be Too Quick to Believe Reports and Recommendations About Unorthodox Health Cures
  2. Being Misled by Other Information
  3. Flawed Studies and Misleading Reporting of Them
  4. Believing Misleading Claims Made by People Such as Advertisers and People who Claim to be Psychic
  5. The Importance of Trying to Find Out More Than One Side of a Story
  6. Being Careful Not to Put Too Much Faith in Reported Statistics
  7. The Accuracy of Opinion Polls

Thinking

Part One
Don't be Too Quick to Believe Reports and Recommendations About Unorthodox Health Cures

Beware of Being Too Trusting of Anecdotal Evidence

Queasy

Alternative medicine can seem attractive, especially for people with conditions that can't be or haven't been cured by conventional medical treatment. But people often ought to be cautious about trying it, because there are charlatans who take advantage of people's needs, and other practitioners who are genuinely convinced their health cures work when really they're mistaken. There are several reasons why that can happen, and also why people can genuinely think they've been cured by such treatments when they haven't really been.

Sometimes, people can even make serious decisions that turn out to be tragic, because they're misled by a claim that seems convincing but isn't really based on good evidence at all. People can sound as if they really know what they're talking about when the evidence for what they're praising isn't nearly as good as they think it is, and their words convince others because they sound so sure.

For instance, they can assume that things they've found out about, or that people they know think worked for them, are greater proof of something than they really are. And if they've got some kind of special interest in proving to people that they're right, or if they've become convinced themselves that what they're recommending is a good thing but are having a hard time persuading others, they can reach for phrases like, "I know someone who it worked for", "You see it all the time!" or, "I've seen it happen." But they can often be mistaken. Sometimes, people will know they haven't got solid evidence, and use personal stories to compensate, hoping they'll be convincing, or that they'll make the other person stop challenging and arguing with them for fear of looking foolish. But sometimes, they really believe there is good scientific evidence for what they've come to believe.

One example is where someone might read or hear about someone who had cancer trying to cure themselves with positive thinking alone, and apparently succeeding because the cancer went away. That might seem like convincing evidence, so the person who hears about it might start promoting the idea that cancer can be cured by positive thinking. But they might be unaware that there could be hundreds of people who've tried the positive thinking cure and got worse or died. And occasionally, cancer just seems to go away by itself - probably in reality through natural processes, thought to include the immune system being spurred into greater action by a fever-causing infection. So the cancer of the person they heard about might have gone away even if they'd never tried positive thinking.

It's for reasons like this that medical treatments are tested scientifically in trials, instead of drug companies just going with what some people say seems to have worked for them. (Having said that, some drug companies have been known to hide information that was unfavourable to some of the drugs they were developing, to make the trials seem more successful than they really were).

Many practitioners of alternative medicine are convinced their treatments work even though there's no scientific evidence for them, partly because time and time again, they've heard people say they're working, or seen their symptoms fade after they've been taking or doing something they've told them to. But though it might sound convincing when a lot of people feel sure such remedies are working, it doesn't necessarily mean they really are. Here's why:

How Treatments For Ailments Can Seem to Work When They Don't

Disappointed

There are several reasons why people can mistakenly believe a particular treatment has cured them when it was actually something else, or when they're not really cured. With some diseases, symptoms get better and worse over time just by themselves. They might seem to have gone, but they're still hanging around undetected by the person who has them, until something triggers them to flare up again. If the person took something for them just before the symptoms died down, they might think that whatever it was cured the disease. Cold sores are one example. They can go away all summer, and a person can think they must have gone completely, and that a healing ceremony, or something they took for them just before they went, made them go for good. But then they might be back just as badly when the weather gets colder again.

Many diseases fluctuate in severity; bad days will often be followed by good days automatically, or bad months can be followed by better ones. But if a person's taking something, they can mistakenly think that what they took helped them improve, when they would have improved anyway.
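
The way fluctuating symptoms can flatter a remedy that does nothing can be sketched with a small simulation. All the numbers here are invented purely for illustration - day-to-day severity is just a random score from 1 to 10, and the "remedy" has no effect at all:

```python
import random

random.seed(1)

def average_severity_after_remedy(days=10000, bad_day_threshold=8):
    """Symptom severity fluctuates randomly from day to day (1-10).
    A completely useless remedy is taken whenever severity reaches the
    threshold; we record the average severity on the following day."""
    severities = [random.randint(1, 10) for _ in range(days)]
    after_remedy = [severities[i + 1]
                    for i in range(days - 1)
                    if severities[i] >= bad_day_threshold]
    return sum(after_remedy) / len(after_remedy)

avg_after = average_severity_after_remedy()
# The day after "taking the remedy", severity averages about 5.5 - a
# big apparent improvement on the bad days (8+) that prompted taking
# it, even though the remedy did nothing whatsoever.
print(round(avg_after, 1))
```

Because people reach for a remedy on their worst days, and bad days tend to be followed by more ordinary ones anyway, almost anything taken on a bad day will look as if it helped.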

So scientists don't take people's own reports of cures, or a practitioner of alternative medicine's claims to have had many successes, as good evidence that something works. They might dismiss such reports as mere anecdotal evidence, which means they're based on stories people have told of how they think something happened, based on their own or others' experiences. Such evidence tends to be thought of lightly, because there are several reasons why the one telling the story could be mistaken, or even not telling the whole truth.

Another reason why people can think something cured them when it didn't is that some diseases can just clear up by themselves. So, for instance, if someone has a cold, and they take a particular cold remedy, and then a day or two later, their cold vanishes, they might assume it was the cold remedy that cured them, when actually, the cold would have gone away anyway.

Another reason is that stress can cause some symptoms to be made worse. That's particularly true for pain. If someone is tense, they can hold themselves awkwardly, and if something's giving them pain, the muscle tension can make the pain worse. Being with a nice soothing person who relaxes them can make some pain fade quite a bit. That's all well and good in itself, but some people can mistakenly believe the actual disease they've got is going. That can be dangerous if they stop taking medication because of it.

Another thing that can relieve some pain temporarily is excitement and distraction. Some faith healers might seem successful because they fill people at their meetings with the expectation that something good's going to happen, and people in pain are in a crowd all getting absorbed in joyful singing and so on. Some of them will be absorbed in it themselves, with the buzz of anticipation giving them a rush of endorphins and adrenaline, and they can feel better because of that, and imagine they've been cured - even going up to the front and telling the congregation they have been - not realising that when the excitement wears off, the pain will come back just as badly. Again, that can be dangerous if they're so convinced they've been healed that they throw away their medication, or anything else they usually need. Apparently, people have even died because they've had so much faith they've been healed that they haven't sought proper medical attention afterwards.

Another reason people can be mistaken about whether a treatment's worked for them is if they're taking several different remedies, either at once or one by one, and while they're taking one, their symptoms improve. They might assume it was the treatment they were taking when they improved that helped, and tell all their friends that treatment really helped them; but since they were taking a treatment all the time, they don't know how much they'd have improved if they hadn't taken anything, or whether the previous treatment they took could have just started to work, having taken a little while, rather than it being the latest one that helped.

Another reason people can mistakenly come to believe certain treatments are more effective than they are is because they're more likely to hear reports of success than reports of failures.

For instance, in a support group for cancer survivors, the people who followed a treatment regimen recommended by a practitioner of alternative medicine and then died just won't be around to tell others what happened; and if they died before they even got to go to the support group, there might be no way that anyone in it will find out that they died. For instance, if one treatment being recommended by someone locally is a diet of fresh vegetables and water, supposedly because the diet will be toxin-free and that'll take a burden off the immune system so it'll help it fight the cancer, anyone who tried such a thing and then died just won't be there to provide the story to counterbalance any stories told of people who tried the diet and at the same time, their symptoms improved a bit.
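
A purely illustrative simulation can make the point. The percentages below are invented assumptions, not real figures - the diet here is assumed to do nothing, with each patient's outcome decided by chance alone:

```python
import random

random.seed(2)

# Invented assumption: the diet has no effect, and each patient who
# tries it independently improves (30%), stays the same (40%), or
# dies (30%) for reasons entirely unrelated to the diet.
outcomes = [random.choices(["improved", "same", "died"],
                           weights=[30, 40, 30])[0]
            for _ in range(10000)]

improved = outcomes.count("improved")
survivors = len(outcomes) - outcomes.count("died")

true_improvement_rate = improved / len(outcomes)  # about 30%
# Only survivors can turn up at a support group to tell their story,
# so the improvement rate the group actually hears about looks higher:
rate_among_survivors = improved / survivors       # about 43%

print(f"Improved overall: {true_improvement_rate:.0%}")
print(f"Improved, among those still around to talk: {rate_among_survivors:.0%}")
```

The treatment did nothing, yet the stories reaching the support group suggest it helps nearly half the people who try it, simply because the worst outcomes never get reported.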

And those who do tell of improvements might not mention that they've also been having chemotherapy while they've been on the diet. If they did, it might be assumed that that's been helping the most. If they just mention the diet because it seems something remarkable and thus more worth mentioning, they might mistakenly give people the impression that the diet's what caused the improvement.

People might get the impression that such a thing works all the more because it's more of a talking point when a treatment seems to work, so they're far more likely to hear stories of treatments appearing to work than stories from people who took them and found they did nothing. People are far more likely to say, "I tried this diet and my symptoms improved!" than to announce out of the blue, "Hey, you know what? I tried this treatment the other day, and it didn't work at all." So people's impressions of a treatment's success can be skewed, simply because they're less likely to hear about the times it didn't do a thing than about the times something quite newsworthy seemed to happen.

Another reason people in a medical-type field might get a skewed impression of how successful their treatments are is that people who don't think the treatments are working may usually simply not go back. So practitioners are more likely to hear reports of their apparent successes than of their failures. No one's going to pay for a practitioner's time just to tell them they no longer believe in their treatments because they haven't worked; most people who've seen no benefit will simply stay away. Only if some people still believe in the skill of the practitioner and want to try another of their treatments, or if something goes dramatically wrong, are practitioners likely to hear about their failures. So they can carry on using the same treatments or therapies, assuming they're working, when actually they're not.

Another reason people can think certain treatments are more successful than they really are is because it's easier to remember stories of apparent success than it is to remember times when people say they tried one but nothing happened. They just tend to stick in the mind more. Or if a person wants to find out whether a treatment works, it's instinctive to just look for evidence that it's been successful; it takes extra thinking to take the step of looking for evidence that it often doesn't work, and that step might especially not be taken if a person really wants to believe it works because they're desperate for a cure.

Some people offer very expensive treatments to people desperate for a cure, and lure them into finding the money to get treated by filling their advertising literature with claims from people saying how much the treatment helped them. These might not all be made up; some people who tried it might genuinely feel better for a time. What a person reading the advertising won't know is whether those people still felt better six months later. That's besides the fact that they often won't be told the treatment's safety record, or the percentage of successes compared to failures.

So reputable scientists won't consider people's reports of improvement of their symptoms reliable on their own; they'll want to look for additional evidence, such as - in circumstances that aren't too serious to risk such things - how well a treatment does in a controlled trial, where they observe what's happening from the start, comparing what happens to people who take the treatment with people who do nothing, or who are given a sham treatment, without knowing whether they're being given the sham treatment or the real one.

Some practitioners of treatments that aren't scientifically recognised have apparently been known to tell patients wondering how reputable they are that studies proving their success have been mentioned in scientific journals. But not all journals are equally respectable. And it has been known for a journal article analysing a number of studies of such a practice to say at first that they appear to show the method is successful, while anyone reading further on will discover that the studies are all criticised for having flaws, and possibly that better-done studies have found no evidence for its success.

Also, since practitioners of a lot of alternative treatments aren't given medical licences, bodies have sometimes been set up to license them instead; and although these have respectable-sounding names, they're sometimes really run by people promoting the very same scientifically unsupported practices.


Part Two
Being Misled by Other Information

Though there's a great deal of accurate information around, there's also a fair bit of information that's misleading, or that people can easily read too much into, which can lead to problems. It's especially easy to take it on trust if the information comes from sources that seem respectable. Here are some examples:

Astrology

Reading the paper

One mistaken thing people can do is to form stronger beliefs in things than they should, because it's easy to be convinced by things that aren't really good evidence, but seem to be. One example is when people believe in horoscopes, which they can sometimes do partly because they've heard people say such things as that they've had experience of having some of them predict things for them that came true. So many predictions are made by horoscopes in the papers and things that some are bound to come true by chance sometimes. What someone who's convinced they must be true because something predicted for them or for someone they know or have heard about came true isn't taking into account is how many predictions about them and others haven't come true.
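
The sheer-number-of-predictions point can be made concrete with a little simulation. Every figure here is an invented assumption - suppose each daily prediction has only a small chance of matching something in a reader's day by pure coincidence:

```python
import random

random.seed(3)

# Invented assumptions: 1,000 readers each see 365 daily predictions a
# year, and each prediction has just a 5% chance of coincidentally
# matching something that happens to that reader.
predictions_per_year = 365
readers = 1000
chance_of_coincidence = 0.05

hits = sum(1 for _ in range(readers * predictions_per_year)
           if random.random() < chance_of_coincidence)

# Even at such low odds, that's roughly 18,000 "amazing" correct
# predictions a year across those readers - plenty of stories to
# retell, while the hundreds of thousands of misses are forgotten.
print(hits)
```

The few chance hits are the stories that get repeated; nobody passes on the far more numerous predictions that quietly came to nothing.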

Oh yes, I once read about someone who got agoraphobia after she read a horoscope that said people with her star sign might be at a greater risk of danger that day, and she had an accident. She said she'd read the horoscope not believing it, just thinking it was a bit of fun. But after she had the accident, she became convinced they must be true, and became scared to go out, rather than putting it down to chance. Still, there can't be many people that happens to; it couldn't be said that horoscopes are detrimental to people because of that kind of thing. Then again, though not many people take horoscopes all that seriously in the Western world, in some countries they do. I read about how a whole town full of people ran away from their town in India one day because certain astrologers had predicted there would be a disaster there that day. It didn't happen.

So such beliefs can have a big impact on people, when in reality, there's no good evidence that astrology works.

People Adopting Children Going on to Have Their Own

Child having a tantrum

Another example of how people can read too much into things is the way some people can draw conclusions after hearing about a few families who have tried to have a child for a while, given up and adopted one or two, and then almost immediately managed to have a child of their own. It might seem amazing, especially if a person hears about more than one couple it happened to. But still it's worth being cautious about what gets read into it.

It seems some people may have assumed it must mean that the reason some couples can't conceive is that they're over-anxious about having a child, which hinders the process, and that after they've adopted one, they're more relaxed about it, so they're more likely to conceive their own. That view can be unfortunate, because people who think it might be the case might urge another couple having difficulty conceiving to relax, thoroughly embarrassing them or hurting their feelings, when in fact their problem might be something else entirely. And after all, they won't know what was really causing the problems of the couples who adopted and then found they were having children of their own. Nor will they know how many people adopt and then don't manage to have their own child. Nor will they know how many people had trouble conceiving for some time but then managed to do so without adopting a child first.

That's just an example to show that it's worth being cautious about reading a lot into things we can't really be sure about.

People can become more convinced about conclusions like that because of the way they go looking for evidence. When they hear about something happening and have a theory about why it might have happened, they can decide to look online or elsewhere for information on whether the theory seems true, thinking it will be good evidence if they find that things commonly happen the way the theory predicts. For instance, people whose theory is that adopting children might have taken the feeling of pressure to have them off the parents, helping them conceive their own, might look online and find quite a few cases of couples who had difficulty conceiving, adopted, and then had a child of their own soon afterwards.

If they read quite a few stories about that happening, they might become convinced that when couples have children of their own soon after adopting them, it must mean they might have had their own earlier if they hadn't been so stressed about whether they'd have kids, so that other couples who can't conceive would often be able to if they just stressed over the idea less.

It's less likely to occur to people to think to look for evidence of where what they think must happen hasn't happened; and in fact, that's harder to do, because when remarkable things don't happen, they just don't tend to get talked about so much. A couple having a baby of their own after adopting a child is a talking point, for example, whereas if a couple are trying for their own baby after adopting other children but nothing's happening, they might not even tell anyone but their closest friends. So people looking online for evidence might become more convinced of their theory, because they're not looking for, or finding, evidence that contradicts it, not to mention evidence that other things entirely are causing what's happening.

Worrying About Stories of Violent Crime in the News

Anxious

It's a bit similar, in one way, to what goes on in the news: people can easily get the impression that society's more violent than it is if they read too much into it. The news doesn't tend to broadcast stories about nice things happening, or about the times when nothing bad happened; such things just aren't the stuff of the sensationalist material it has an agenda to broadcast.

And when bad things happen a lot, they become normal, so they no longer make headlines. So the stories on the news about such things as people being murdered in broad daylight might give the impression that going out is dangerous, when in reality, the very fact that only one or two stories about such things are broadcast every day shows that they don't happen all that often - the complete opposite of the impression they can give.

But it can lead to genuinely serious consequences, such as some people getting depressed or becoming anxious about going out, or people keeping their children indoors on lovely warm days because they're scared they'll be attacked by a paedophile, when in fact most people who molest children are people known to them. It can happen that a child molester will abduct a child from a group of children playing, but it's rare. So while keeping a reasonably regular check on the whereabouts of children is wise, restricting their freedom a lot because of some news story about something that didn't even happen locally is probably going too far.

Trusting Religious and Other Authority Figures Too Much

Praying

People should never put critical thinking skills aside and trustingly believe anyone, whether that be religious leaders, political figures, or anyone else. It can even be dangerous. Some people are in positions of authority that suggest they can only have got there if they're intelligent or caring and respectable, or that they must be if they want to do the job they do. But it isn't necessarily true that people in positions of the most influence are the brightest and best; and people have all kinds of different motives for trying to get into positions of authority, or promoting others into them. And people at the top can be just as prone to doing things out of jealousy, selfishness, a desire for power or prestige, a sincere but misguided belief in something, and a range of other unfortunate motives, as anyone else.

The more someone trusts another, or is confident they're on-side because they hold some views they already agree with, the more likely they are to believe other things they say. But a lot of people are accorded trust simply because they've been given those positions of authority by others, not because what they say has actually been proven correct.

Often, people have little option but to put their trust in authority figures. For instance, when voting for people running for political office, politicians might tell the public what they'd like them to know about them. But people are expected to vote for them without having a clue how honest and caring they really are. Details that would give important clues as to a political candidate's real nature, such as, perhaps, "This man seems to think of war as a game; he jumped up and down with glee when he saw the bodies of children from a country we've been at war with", are never likely to be widely publicised if they've only been witnessed by allies in private, so people could be voting psychopaths into office who'd lead the country to war without a second thought, without having a clue they're doing it, through no fault of their own.

It's best not to take any big decision - or even a little one - merely because an authority figure or group of like-minded ones says so. And sometimes it's easy to find out they're wrong.

For instance, one Christian sect teaches that women should never cut their hair, so those who belong to it never even trim it, no matter how straggly it gets. But any one of them who looks in the Bible, the supposed place where the sect gets the teaching that women should never cut their hair, will find it doesn't say that at all. It just says that long hair looks best on a woman, and that women should dress and wear their hair in a way that looks modest.

There are preachers who'll quote what they say are Bible verses, to convince people to do more serious things, such as giving the church leaders a lot of their money; but they can be distorting the meaning of the verses, quoting them out of context, or using a version of the Bible that isn't generally accepted, and was written by people who don't even have any familiarity with the original languages it was written in, and want to distort the meaning of some of the words to fit their own agenda.

For example, verses from the Old Testament about giving money to the priests can be quoted as proof that congregations should do just that, when only a few verses on from one of the ones quoted, in a verse that they don't quote, it says people should give such a thing once every three years or something!

Even a genuinely caring and honest pastor can still be dangerously misguided. For example, a pastor who sincerely believes in God's power to heal those who ask him might fill a member of the congregation with hope so much when he prays for them that they decide to stop taking important medication, in faith that they'll be healed, only to have a bad reaction that puts them in hospital later in the day.

Similarly, a union leader, who might be presumed to be in the job because he cares about the well-being of those in the union, might in reality just be there because he enjoys status and gets a thrill from making trouble; and he might rally the workers behind him, making impassioned speeches about how employers are all out for themselves, and how some need to be taught a lesson because it's about time they shared more of the money they make, when in reality, times are hard, and some could be driven out of business by militant demands for higher wages.

Or a charity can make touching appeals for money, when their managing director gets a salary of half a million pounds a year, and others at the top earn high wages too, and the money that does get allotted to the needy isn't even well spent.

There's still a lot of goodness and generosity around, so it would be wrong to be discouraged about every effort. It's just best not to be too quick to trust people, even when they seem to be well-intentioned, but instead to find other opinions on what they're saying first.

Mistakes Can Creep Into Textbooks, and Media Reporting Can be Inaccurate

Reading a book

It's just as well to double-check, or even do more than that, to find out whether something's true, before relying on it too heavily when making decisions, or doing other such things. Even textbooks can contain errors, sometimes in stories they're telling to illustrate a point. If a story didn't really happen the way they say it did, perhaps the point they're making isn't so valid after all, or else they could have used a better illustration of it.

While the vast majority of textbooks are probably completely accurate, it can't be guaranteed that they all will be. To give an example of a story that was repeated in lots of textbooks, where it often had a lot of little inaccuracies:

A psychologist in 1919, when psychology hadn't been around for that long as a recognised discipline, wanted to find out more about how easily children can develop phobias and other unfortunate symptoms, feeling convinced it could come about easily, which was the opposite of what psychoanalysts like Freud were declaring - that phobias had deep-rooted causes, and that a lot of therapy was necessary to uncover them. The psychologist wanted to prove that that point of view was inaccurate, and that his own theory was right.

Child screaming

He did an experiment, scaring a young toddler he called Albert, to see if he could make him develop a phobia of little animals and other furry things. Experiments involving frightening a baby again and again wouldn't be allowed today, but he was allowed to do it then. The experiment basically involved first showing the baby little animals, to make sure he wasn't scared of them in the first place, and then bashing a claw hammer hard against a pipe near his ear, making a very loud noise, whenever he was shown a rat or other furry things, to scare him. The aim was to see if it made him scared of the animals, and also of objects that had similarities to them, such as a fur coat, even when no loud noise came with them. The psychologist wrote a paper all about it, and it's become very famous, and is described on a lot of study courses introducing psychology. But apparently, the facts are sometimes reported inaccurately.

There were problems with the way the experiment was done, so the psychologist's findings couldn't be considered fully reliable. For instance, he made the loud noise when the baby was shown a few of the other things too, which made it difficult for him to prove his idea that children will just naturally become afraid of things that have similarities to what they originally became afraid of. He may well have been right that that often happens; but it seems his experiment didn't do a good job of proving it.

Also, it seems that the experiment failed to make clear that it really was the animals and objects themselves that the baby was afraid of, rather than being scared that their presence meant someone was about to make a horrible loud noise again right near his ear. Because he couldn't talk, he couldn't explain exactly what was upsetting him.

But apparently, many textbooks have even added to the inaccuracy. Some say the psychologist taught the baby not to be frightened; but he didn't. He said there wasn't time - perhaps because he was in a hurry to get his paper on the experiment published. Some textbooks make the experiment sound more dramatic than it was, by giving the impression that after one single frightening event, the baby became frightened of everything that was shown to him. In reality, he was frightened of some things more than others; and where he showed little fear of something, the psychologist or his assistant frightened him again, bashing the pipe with the claw hammer right near his ear when he showed interest in what was in front of him, to see if doing that repeatedly could frighten him more permanently.

Part of the reason for the textbooks' inaccuracies is that the psychologist himself apparently left certain details out of the story when he retold it, and may sometimes have given a false impression in other ways. But it's been suggested that another reason might be that the authors want to use it to illustrate that once people have a phobia of one thing, they can start becoming scared of things with similarities to it; and the details that make the experiment less valid get in the way of the story, so they're left out. It's not at all clear whether that's really the case, though. Also, authors may have copied from each other a lot of the time, rather than going back to the original sources; so if one person's account contains errors, the accounts of those who learned from them will too. And it might be that the accounts they're getting the information from were themselves, unbeknownst to them, copied from sources that contained errors.


What?

Even some school textbooks have been found to contain errors, even recently.

Apparently, in India, a school textbook even called the Suez Canal the Sewage Canal.

Some errors apparently find their way into lots of different school and even college textbooks. It's been suggested that one reason may be that some authors don't research all the information they put in them from scratch - which would, after all, often be a lot more difficult - but instead rely on what other textbooks say for quite a bit of their information.

They might assume the information's accurate because they think the authors they're copying from have probably taken what they've written straight from the original papers. But if, unbeknownst to them, they're copying from an author who copied from someone else, who copied from someone else, and so on, the information can be more inaccurate than they realise. Each time information is copied, a little error or two can creep in - perhaps because an author reads the information, thinks they remember it, and writes it down without double-checking, not realising they're remembering it slightly inaccurately; or because they want one or two details to be different, since then the details will better help prove something they're trying to prove; or for other reasons. So over time, new textbooks where quite a bit of information is copied from old ones can come to contain quite a few errors.

Having said that, virtually all of them are probably pretty reliable.

But some notable errors have been discovered. An organisation called the Textbook League reviews textbooks for errors. Some widespread, or notable, errors in recent textbooks have apparently included:

Sad as it may seem, it's best if someone interested in a particular event or description of something else goes to several unconnected sources to find out about it. And if someone's claiming to be saying something very important, or to have made a great new discovery, it's worth looking to see if anyone's making criticisms of it, and what they're saying.


Part Three
Flawed Studies and Misleading Reporting of Them

Some Reasons Why Studies or Reports of Them Can be Inaccurate, and Some Examples of Questionable Research

Disagreeing

Parts of the media should sometimes be more careful to report studies accurately. But even with their best efforts, there's still no guarantee that the information they're reporting on will itself be accurate; so if an article's misleading in some way, the fault won't always lie with them. They may, for instance, have received an overly optimistic or exaggeratedly negative assessment of something via a press release put out by, say, the organisation that did a study, reporting it to the media in a way that only makes known the most dramatic-sounding findings - for example, announcing that research is being done into a cure for a particular disease, but not mentioning that it looks as if it might come to nothing, or that a cure is likely a decade away. The organisation might do that because they want to make special efforts to get their findings noticed, thinking they're important, or because they want to publicise what they've done to attract funding for more studies into what they think looks like promising research.

Or the study they've done might actually be flawed, without anyone who hasn't looked into the way it was carried out realising.

Inaccuracies won't necessarily be deliberate at all. People carrying out studies will sometimes have no idea that their research is faulty. And the studies can be repeated by other people and get the same results, because they don't realise the methods are faulty either.

Hundreds of psychological studies in particular can all find the same thing, and textbooks can even announce that human nature has been found to be a particular way, and that the proof is that hundreds of studies have all got the same results; but it's certainly possible that what's really happened is that all the psychologists doing the studies interpreted the results wrongly, or used study methods that were flawed in some way.

Biting nails

For instance, a study was done that seemed to get the worrying result that people behave differently if they're wearing a uniform, and that their behaviour can change according to which uniform they're wearing. They were asked to give electric shocks to people. They were sham ones, but the participants didn't know that. People were found to give more shocks to people when they were wearing a military uniform, and fewer when they were wearing a nurse's uniform. Some wore Ku Klux Klan uniforms, and were the most aggressive of the lot.

Apparently the psychologists reached the worrying conclusion that people's behaviour changes according to even little things like what uniform they're wearing; and books speculated about what such a terrible finding could mean - since if a uniform could have that effect in a lab setting, what might it do in real life! Other psychologists may have tested whether people change their behaviour while wearing a uniform, got the same results, and thought it meant the original psychologists must be right. But it's possible that none of the psychologists who did the studies considered the possibility that, rather than meaning terrible things about what people do when wearing uniforms, the study results didn't actually show anything of value at all: participants played the role of whoever's uniform they were wearing for a little while, but if the psychologists had followed them for days rather than minutes to see how they were behaving, they might well have found the effects of wearing the uniform wore off after twenty minutes or so!

Trying to think

Another example is an experiment where people were given a series of three numbers - 2, 4 and 6 - and told it followed a particular rule, which they had to guess. Before guessing, they could test each idea they had about the rule by suggesting sequences of three numbers they thought might obey it, and they'd be told whether the sequences did or not.

People couldn't guess what the rule really was, or took a long time to work it out. They would do such things as suggesting another set of even numbers going up in twos, and, when told that those numbers obeyed the rule, guess that the rule must be that the numbers had to be even numbers going up in twos. When told they were wrong, they'd suggest odd numbers going up in twos, wondering if the rule could be that sequences could be either even or odd numbers going up in twos. When told that was wrong too, they often suggested numbers that went down in twos, wondering if the rule was that sequences could go either up or down in twos.

The rule was actually that the numbers could be anything at all, as long as they were ascending. The experimenters apparently claimed that people's failure to easily guess what the rule was was evidence of people's tendency to think of a theory as to why things are the way they are, and then try to find evidence that confirms it, and neglect to look for evidence that might prove it's not true, which might otherwise help them decide more easily whether it is or not. People likely often do do that, partly because most people will understandably be likely to be a lot keener to prove something they feel sure must be true really is, than to risk disappointing themselves by diligently trying to find evidence that their ideas are wrong. Also, if people are pretty certain they're right, they might be carried away with enthusiasm to prove that they really are; or they might not see the point in trying to find out if they're wrong.
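To make the setup concrete, the whole task boils down to a one-line hidden rule. The following sketch is a hypothetical reconstruction (the experimenters used no computer, and every number beyond the original 2, 4, 6 is invented for illustration); it shows why "yes" answers to twos-based guesses felt like confirmation without actually narrowing anything down:

```python
def obeys_rule(a, b, c):
    """The hidden rule: any three numbers in strictly ascending order."""
    return a < b < c

# The seed triple participants were shown:
assert obeys_rule(2, 4, 6)

# Typical guesses people tested - all consistent with "going up in twos",
# so each "yes" answer seemed to confirm their too-narrow theory:
assert obeys_rule(8, 10, 12)   # even numbers going up in twos - yes
assert obeys_rule(3, 5, 7)     # odd numbers going up in twos - yes

# A triple with nothing to do with twos also fits the real rule -
# a "yes" here would have been a usefully surprising result:
assert obeys_rule(1, 17, 200)

# Descending numbers don't fit, however regular the step:
assert not obeys_rule(6, 4, 2)
```

The sketch makes the trap visible: a "yes" to 8, 10, 12 is equally a "yes" to the twos theory and to the real rule, so it can't distinguish between them; only an off-theory probe like 1, 17, 200 could.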

For example, if someone hears someone enthusing over how they think homeopathic medicine has helped them, and they want to find out more about whether it really works before entrusting their medical treatment to it themselves, but they're hopeful about it working for a problem they have, because of the enthusiasm of the person they've been talking to, they might be far more likely to look for articles that claim that it works than evidence that it doesn't.

But the fact that people kept suggesting sequences of numbers going up or down in twos, rather than completely different kinds of sequences, almost certainly wasn't actually good evidence that their problem was that they were unwilling to look for evidence that would disprove their theory about twos being the important thing.

The claim that the experiment showed that people are unwilling to look at evidence that disproves their theories was repeated in books by professors and other educated people, one or more of whom tried doing the experiment on people themselves. When they found that people were finding it hard to work out the rule, and just sticking to the idea of numbers going up (or both up and down) in twos, it seems they assumed they'd found the same thing - that people conclude one thing and then look for evidence that confirms it, rather than considering alternative theories that might show that something different is going on, even when it would make more sense to look at a different theory.

Funnily enough, it seems that proved that they themselves were doing that very thing - sticking to a theory they had about why people were doing what they were doing, instead of trying to think up alternative possibilities. The results likely had a completely different explanation to the one they assumed: people simply wouldn't expect the rule to be anything as simple as, "It can be a sequence of any old numbers, as long as they're in ascending order". So they would be looking for slightly more complicated rules; and because they wouldn't have been able to think of many, some would have kept coming back to the idea that it must be something to do with sequences of twos.

When they kept suggesting rules to do with twos, those who set them the challenge would take it as evidence that they didn't want to abandon their theory that the rule was something to do with twos. One said that the cleverer thing to do would be to think of sets of completely random numbers, to see if they disproved their theory about twos, and ask the psychologists if they obeyed the rule, since if they were told they did, they'd immediately know they should abandon their twos theory.

While doing such things might very well be the cleverer thing to do in a lot of circumstances, people's failure to do that with the numbers probably had far more to do with the fact that they wouldn't have seen a point in throwing out three random numbers. Since they were looking for something more complex than "It can be any old sequence of numbers, just so long as the three of them are in ascending order", being told that their random numbers fitted the rule would have left them with no more of a clue as to what the rule was than being told they didn't fit it - unless they'd tried out lots of sequences of ascending random numbers just for the heck of it, and then concluded that the rule simply had to be that, since they all obeyed it.

So basically, it's best not to assume that someone more specialised in a particular subject must necessarily have a better idea of what they're talking about than someone less informed about it does, if they seem to be saying something logically questionable.

Asking Questions That Are Not Likely to Elicit the Information the People Asking Them Think They Will

Doh

Sometimes, questions can be asked that are bound to be answered in a way that gives a false impression, regardless of the intentions of both the one who asks and the one who answers. That's as true of what could be called little personal studies as it is of more professional ones.

An example is if someone wants to find out whether each person in a new school class they're in is extroverted or introverted, because a teacher has set them the task of finding out. One question some might ask each one is, "How would you liven up a party?", thinking that extroverts would be bound to be better at it, and that would show in their answers, or that introverts would likely say they wouldn't even attempt such a task. But though introverted people might not be keen on having to liven up a party, many would know what to do if they had to. So most might answer the question by describing a technique that would likely be effective, unwittingly giving the impression that they're extroverted, not because they want to, but just because they're giving the one who asked the question the information they think they want to know. A better question would be, "How do you usually behave at parties?"

In fact, people can make the mistake again of only looking for information that confirms a particular idea they have about people's behaviour. For instance, if someone's instructed to find out how extroverted another person is, they might start off with the idea that they might be extroverted, and ask questions to get information about how extrovertedly they can behave - thinking that if they find evidence that the person can behave in an extroverted way, it'll mean they're extroverted - rather than theorising that the person could be one thing or the other, and asking questions about whether they normally behave in a lively way or a quiet way, where the answer could go either way.

Similarly, if they're asked to find out how introverted a person is, they might focus on the possibility that they're introverted, and ask them questions about how introvertedly they can behave at times. Since even the most extroverted extrovert may have days when they don't feel like mixing with others, they might talk about those times, again saying what they assume the person who asked the question wants to know, and come across as being more introverted than they really are.

A Note About How Not All Therapy is Good

Crying after counselling

Just as psychological research ranges from useful and insightful to no good at all, so does psychotherapy - in fact some types can be harmful. Anyone wanting therapy should find out exactly what's involved in any particular therapy a counsellor or psychotherapist is offering before entrusting themselves to their care. The best therapy is forward-looking and doesn't take forever. Some counselling might feel helpful and therapeutic at the time, but actually makes people feel worse, because it encourages them to start thinking about all the worst things that have happened in their pasts, and they go home and think a lot more about them, and just get more and more depressed. And they're nowhere nearer working out what to actually do about their problems than they were before they started. The best therapy will give people new ideas from the start.

More Examples of Problems With Some Psychological and Scientific Studies and the Way They Can be Reported

One problem with some psychology experiments is that the psychologists doing them don't seem to ask the people who've been experimented on to consider and then explain why they behaved as they did during the experiments. They just seem to assume they know why, having predicted before the study that if people behave a certain way, it'll prove a theory they have about behaviour in certain circumstances; and then they write in their papers that they've found that people do such-and-such a thing for such-and-such a reason, regardless of whether they're really right about the reasons. Then books by other people who just assume the psychologists know what they're talking about can refer to the studies uncritically, saying they proved that people think a certain way. It's unclear how much that's really the case, but they can certainly give that impression.

People eating a meal

For instance, in another experiment, one group in a restaurant was apparently told that a complimentary glass of wine they were given came from California, which might possibly have conjured up images in their minds of enjoying relaxing long sunny days on the beach, or something similar, while another group of people were given a glass of free wine each and told it came from North Dakota. The ones in that group didn't eat so much of their meals, and left the restaurant earlier than the group who'd been told their wine came from California. They rated their meals and the wine less favourably, and were less likely to say they'd come back again. It seems the psychologist who did the experiment read quite a bit of significance into that, and interpreted it to mean that just thinking about where a certain part of your meal comes from can make the food taste better or worse, and influence your behaviour as much as that.

But realistically, one group was almost bound to leave earlier than the other; even if neither group had been told anything, all other things being equal, there was a 50/50 chance that that group would leave earliest anyway. So how much did their leaving early really prove? Also, what if, instead of genuinely thinking their meal tasted better when told their wine came from California, the people just thought they'd been treated to something a bit special, so they felt obligated to rate the experience more favourably than they otherwise would? And what if feeling as if they'd been given more of a treat put them in a better mood, so they felt like staying longer, which meant they ate more?

It's best not to accept a simple single explanation for any study result when it comes to what it proves about human nature, but to think through different possibilities as to what it could mean, including the possibility that it means nothing of any significance whatsoever.

Sometimes though, studies are more complex than they're reported as being in the media, so psychologists shouldn't immediately be dismissed as quacks or uninsightful if their study findings sound silly; it's as well to find out more about them before making a definite judgment about that.

The most important thing is that when people read a claim that's made about something, whether in the news, in a textbook or anything else, if it's important enough to them that they might act on it in some way, they should never just accept it, but should look at other sources of information to see if they back up or disagree with it. The importance of doing that will naturally vary with the amount of effect the information might have on their lives.

Fruit

For instance, suppose a website claims that a certain diet works wonders, but following it would mean cutting out altogether certain types of food that are believed to have important nutritional content. Even though it might be tempting for someone who needs to go on a diet to follow it because of glowing reports of its success, it'll be important that they go to respected websites where people talk - using words the general public can understand rather than technical jargon - about whether the diet has real benefits, or whether there could be long-term health problems with it, in which case a different kind of food reduction plan might be better. After all, there are lots of different kinds of diets out there; and in fact, just making a personal plan to cut down on certain high-calorie foods can work as well as any official diet.

But the importance of trying to find out a variety of views from science-minded people who are supposed to know what they're talking about increases even more when health could be affected in more drastic ways.

The Famous MMR Scandal

Chickenpox

One well-known example is the MMR scandal, where lots of parents were scared off getting their babies vaccinated against measles, mumps and rubella, because they were informed by the media that someone who was actually supposed to know what he was talking about - a doctor, no less - claimed the vaccination could cause autism. Perhaps the fact that he was highly qualified made the media take his supposed findings a lot more seriously than they would have done otherwise. But with the scary media reporting, which included interviews with worried parents, it's no wonder so many parents didn't get their babies vaccinated - after all, it's commonly thought that measles, mumps and rubella are minor childhood illnesses that might last days or weeks at most, as compared with autism, which can be very disabling and lasts a lifetime.

The BBC has a rule about how its news coverage should be impartial, so they try to give equal airtime to both sides of an argument. But while that might be good in a lot of circumstances, they realised afterwards that when they give people who are promoting studies that turn out to be faulty equal airtime with scientists trying to debunk them, it gives the impression that it's hard to really know what the truth is, because they could both be making arguments that seem as if they could be valid to people who don't know much about the matter.

Also, giving a similar amount of airtime to parents who were scared that their children might have been victims of harmful vaccines as to scientists trying to reassure people that the vaccines were harmless could well have made people worry that they'd better be on the safe side and not get their children vaccinated with the implicated vaccine, not realising that the illnesses it vaccinates against could sometimes be more serious than they knew.

So even with the best of intentions, a lot of the reporting failed the public, because the assumption was made that the study was important and needed to be taken seriously and publicised quickly. It's easy to criticise the media in hindsight, now we know it was a dud study; and yet there was more they could have done at the time.

It seems that not nearly enough care was taken to find out the views of other medical scientists before the story was aired. And the things about the study that made its results uncertain were not publicised anywhere near as well as they should have been - such as that it was a study of very few people, which tends to make studies less reliable than ones where a large number of people are tested, because the results could simply be due to chance, or to the particular people being studied having genetic or other characteristics that are found in only a small percentage of people.
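The point about small samples and chance can be illustrated with a quick simulation. This is purely an illustrative sketch - the 50% base rate, the sample sizes and the `chance_of_striking_result` function are all invented, and have nothing to do with the actual study's data:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def chance_of_striking_result(sample_size, trials=10_000):
    """Suppose some condition naturally occurs in 50% of children,
    regardless of anything done to them. Count how often a random
    sample happens to show it in at least 75% of them - purely by luck."""
    hits = 0
    for _ in range(trials):
        affected = sum(random.random() < 0.5 for _ in range(sample_size))
        if affected >= 0.75 * sample_size:
            hits += 1
    return hits / trials

print(chance_of_striking_result(12))    # small sample: lopsided results are fairly common
print(chance_of_striking_result(1200))  # large sample: they essentially never happen
```

With only a dozen people, a seemingly striking lopsided result turns up by pure luck several percent of the time; with over a thousand, it essentially never does. That's one reason a study of only a handful of people can't carry much weight on its own.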

Apparently, less than a third of all reports about the study in respectable newspapers in 2002 mentioned that there was a lot of evidence that the MMR vaccine is safe, and not many more than one in ten reported that it's considered safe in the 90 other countries where it's used. The media in general apparently made too little effort to seek quotes from scientific experts, and instead sought out the views of people who might be well-loved and have a good media image, but have no scientific expertise, such as radio and television presenters.

Apparently, journalists were often put on the story who didn't have any background in science reporting, so they would have been less aware of the need to speak to people who could point out the things about the study that made it unsafe to draw major conclusions from it, and who could discuss all the evidence that the vaccine was safe. Also, they would have been less able to decipher the technical jargon in a lot of the scientific studies so as to report their findings themselves.

Instead of reporting what the science had actually found, the media apparently resorted to getting quotes from authority figures on both sides of the argument, who gave conflicting warnings to the public on what they should do. Because what the scientific evidence could tell us was left out of the discussions, to a large extent, the public were given the impression that the scientific evidence for both positions at best held roughly equal weight, and it seems that many thought they'd better be on the safe side by not getting their children vaccinated.

In the next few years, there were studies that looked for the same thing the original study had, but got completely opposite results; yet the media was apparently almost completely silent about them, so the public never found out about them. But when someone claimed to have done a study with the same results as the original one, some newspaper headlines were again warning of the scare - even though the man who'd done it apparently had a reputation for unreliability, the study was never published, and he worked in the same private hospital as the original study author, which suggests a possible wish to back up a friend, or something of that nature. Reporting the scare stories, but keeping silent about studies that had found the opposite, seems to be evidence of media bias, or else of a policy of just reporting sensational-sounding findings. Of course, the media were quick to vilify the original study author when the fact that the medical community had discredited his study became the sensational story of the day.

Meanwhile, some children were going down with the diseases they hadn't been vaccinated against, and some were suffering serious complications that many families probably hadn't realised had always been possible consequences of the illnesses.

All that shows that it can even be dangerous to just accept what even respectable newspapers say if they don't seem to have an informative discussion of the science behind controversial health claims. It's as well to browse websites run by respected organisations to see if there are expert opinions from people who actually have a good knowledge of the subject, before making a decision based on a study.

The Difficulty Even Well-Informed People Have With Making Predictions About the Future

Nervous

But if it's an issue of speculating over the future of something, for any expert at all, predicting the long-term future will be difficult, and even if they sound as if they know what they're talking about, there's a high chance they'll get it wrong, unless it's a subject that genuinely does have predictable outcomes, such as that a particular disease will spread if measures aren't taken to ensure it doesn't.

People should never think that just because someone is an authority figure or a professional, they'll be more likely to be an expert on things in general than anyone else. For instance, not long after AIDS first came on the scene, some in the media apparently started asking sex therapists for their opinions on how much danger the country was in as a whole.

But while some sex therapists might be very good at their particular job, and be able to give good advice on behaviour, they wouldn't necessarily have been any more expert on how quickly the new disease was likely to spread in the general population than anyone else. They might have had a much better knowledge of general sexual behaviour than most people - from what their clients told them about their behaviour and that of people they knew, and from what they themselves had read in the media about the disease - so they could warn people that a lot of them might be at risk. But they wouldn't necessarily have known any more about how quickly it might spread in the coming years, or who would be most at risk, than any other member of the general public. It would have been better to interview the kinds of scientists who spend their lives studying patterns of disease and predicting what kinds of diseases might increase in the future, about how fast the disease might spread, what sections of the population would likely be most affected, and so on.

Parts of the media tend to sensationalise things, to attract attention so they can sell papers and so on. Scientists will tend to be more cautious about any predictions they make; but one who makes an outlandish, dramatic claim will often get media coverage ahead of someone who's done very good lengthy research and is now making cautious predictions that don't sound dramatic. The media tends to go for the dramatic over the cautious, without necessarily weeding out bad stories and rejecting them for publication; they're under pressure to fill space in papers and airtime, and they want people to be intrigued enough to come back for more. So what will attract attention can take precedence over the genuinely best news.

And if a scientist makes a cautious prediction, saying, for example, that between a dozen and three thousand people could die of a certain thing in the next few years, the media will often latch on to the highest most sensational figure, and will likely broadcast that "as many as" three thousand people might die of it, rather than telling readers/listeners that that's just the worst possible outcome, and that things might not be nearly so bad. Or if they do, it might be mentioned a fair bit further down an article than the worst possible news is.

Misinterpreting Study Results

Even good studies can be used to try to prove points that they don't prove at all, even though they might seem to on the surface. So it's as well to be alert to the possibility that study findings don't mean what some people think they mean.

Graduating from university

For instance, some university lecturers apparently asked their university bosses if they could reduce class sizes to make teaching easier. The management said there was no need, and showed them studies that had found that teaching can be just as effective in large classes as in small ones. Though the studies might have been done by reputable organisations, it turned out that they didn't show quite what the management said or thought they did. They did show that teachers could teach large classes well, but only where there was someone lecturing at the front, with all the students quietly taking notes. What the teachers asking for smaller class sizes were finding difficult was class discussions, where they wanted everyone to have a chance to contribute or to show they understood things; and they had a lot more of those in the subjects they taught than there were in subjects mainly taught by lectures.

So it's best not to take someone's word for something, but to examine the evidence they're putting forward for their point of view, in case it isn't as good as they think it is.

Another example: someone might cite an important-sounding study as evidence for a claim they're making that vaccines are made using chemicals that are harmful to children; they might say it warns of a highly toxic substance that's put in them. A person hearing that might be scared to take their child to be vaccinated. But it might turn out that the study itself says the substance is in a much milder form when it's put in vaccines than it is in its original form, and that it's only put in a small minority of them, which a lot of people aren't even given; and further investigation might reveal that in any case, doctors actually stopped using it a few years ago! The person raising the alarm about it might not know that.


Visual images shouldn't be taken as more convincing proof of anything than claims on their own, since the explanation for them might be different or more complicated than it's reported as being. For instance, there's a mountain in Africa with snow and a glacier on top. Pictures of it melting were claimed by some to be proof of global warming, since the melting was found to be speeding up. But in reality, there were several different reasons why that was happening. According to some researchers, one was that a lot of trees had been cut down near the bottom of the mountain. The winds, which had previously picked up a lot of moisture while travelling through the trees on the way up, weren't doing that any more; so they arrived dry, and didn't drop water on the glacier to make up for evaporation as they had before, and the glacier shrank as it evaporated in the hot sunshine. It is, after all, near the equator. Also, there have been several times throughout the centuries when the glacier has shrunk and then grown again.

Also, some ice melting unexpectedly quickly in Antarctica has been claimed to be evidence of global warming, when in fact, ice in another part of it was increasing at the same time.

Naturally, that doesn't mean there's no evidence for global warming, or that if one claim said to be evidence of something is proved mistaken, all the evidence for that thing may as well be ignored. It just means it's best not to be swayed into believing something that might affect the way we live our lives or make decisions by any one claim, or set of one-sided claims, made by anyone who might have an interest in presenting only one side of an argument, or who might have only a superficial understanding of things, no matter how impressive a claim seems. It's best to do some investigation into what other reputable people have found before making a decision one way or the other.


A book declares that people are biased in favour of remembering things that fit their beliefs and expectations about the way things are or are going to be, and forgetting things that might call them into question. That might well be the case. But one thing it says proves this is a study in which farmers were interviewed about their belief in global warming. Those who believed in it were found to be biased in favour of remembering any warmer weather that seemed to prove them right, thinking there had been weather warmer than it really had been; while those who didn't believe in global warming felt sure the weather had been colder at times than it really had been, without balancing what they said by mentioning any weather that had been warmer than they'd expected, as if they'd forgotten any pattern of warmer-than-average weather and just remembered what fitted their beliefs.

But before people draw the conclusion that the study must prove that those farmers were subconsciously biased in favour of their beliefs, and that they just subconsciously forgot evidence that conflicted with them, questions ought to be considered, such as whether, rather than having held their belief or disbelief in global warming first, at least some of them might have come to their beliefs as a result of what they'd noticed most about the weather recently.

And the way the book's phrased, it makes it sound as if every one of the farmers, to a man, remembered only the weather that seemed to confirm them in their beliefs. A question has to be asked as to whether this could really be the case, or whether the study findings were reported inaccurately.

Also, it's as well to wonder how much the answers the farmers gave really did say anything about their biases, and how much they might actually have said something about the way the questions were asked. The farmers may have been asked leading questions that prompted them to answer in certain ways: Leading questions are questions that suggest the answer within the question, so as to put the idea of responding in a certain way into the mind of the person answering.

For instance, if farmers who believed in global warming were asked, "Have you noticed a lot of weather in the past few years that's been warmer than usual?", they would start thinking of any weather that they thought had been warmer, and might well say yes. If those who didn't believe in global warming were asked if they'd noticed the weather getting cooler recently, they too might well say yes, especially if winter was beginning. Both groups of farmers might start discussing the weather they'd started thinking about as a result of the question. Those asking the questions might think they were seeing a result that proved that people who believe in global warming are biased in favour of noticing warmer weather and disregarding the cooler weather, and that those who don't believe in it will likely notice the cooler weather but conveniently forget the warmer-than-average weather.

And yet if the questions had been reversed, they might have even got the opposite results! If the farmers who did believe in global warming were asked, "You have noticed the long spates of cooler weather we've had recently, haven't you?" they might well have started talking about the cooler weather, and the psychologists might have gone away and reported that there were a bunch of barmy farmers who believed in global warming but believed temperatures had actually become cooler recently. And vice versa for those who didn't believe in global warming.

Still, that having been said, it's possible that the study was actually carried out in a far more respectable way than that.

The same goes for a lot of other studies: just from reading about the findings, it's impossible to know whether they really do represent valid information, or whether the results have a lot to do with the way the studies were actually done. So it's best not to accept study findings at face value, unless they're broadly in line with the findings of past studies that reputable organisations confirm as valid. Otherwise, sometimes they'll be good and worthwhile; but sometimes the books or newspapers reporting them will be reporting them inaccurately, or the reporting will be accurate but the studies themselves will have been done poorly, and the conclusions drawn from them will be mistaken, or even fraudulent.

Bias in Ordinary People and in Some Scientists, and Better Scientific Practice

People also have a tendency to accept information they think is good at face value, but look for flaws in information that opposes it. That can mean that although they don't deliberately get a biased viewpoint, they can end up spotting a lot more errors in the information that opposes their views than in the information that supports them, simply because they believe that the information that backs up their beliefs is true, so they don't think to look for flaws in it. So they can end up thinking the information that supports their beliefs is best, even if it isn't really. They can end up even more convinced that what they believe is true than they were before because of the flaws they found in the research that opposes them, when the research that supports them might contain just as many errors.


For instance, if an environmentalist who strongly opposes scientists genetically modifying food comes across a study that says it's a good idea, they might think of it as a challenge and seek to disprove it, so they might look at it closely and find it fails to mention several things believed by some to be possible hazards of genetic engineering. But they might completely fail to spot that the studies they're using to support their ideas exaggerate the possible hazards when compared to more reputable studies, and that they play down the benefits, and don't mention some at all.

So those are things to watch out for, since we might do such things without realising, unless we're on the lookout for times when we might do them.


A book says that some scientists have done similar things, examining findings that oppose their beliefs far more closely than findings that support them, which they take for granted. Quite a few scientists peer-reviewing other scientists' papers for journals, for example, have been found to recommend the publication of quite a few more of the papers that support their beliefs than of ones that oppose them; and the methods the papers' authors used in carrying out their experiments can be criticised more when the findings oppose the reviewers' beliefs than is often the case when they support them. The author says every psychologist he knows who does experiments will repeat one if it contradicts what they believe, to see if the findings come out differently, whereas they don't do that with experiments that support their beliefs.

Scientists have also been known to make more effort to explain away findings they don't like than findings they like. The author gives the example of a French scientist who long ago weighed the brains of a sample of French people and a sample of Germans, to see which country's people had the heavier ones, the idea being that those with more brain matter would be more intelligent. He couldn't accept the finding that the German brains in his sample were on average 100 grams heavier than the French ones, so he explained it away by saying the Germans' body sizes had been bigger, and that was what accounted for it; but he didn't do the same when he measured men's and women's brains and found the men's were bigger, apparently preferring to think of men's bigger brain size as a sign of greater intelligence.


A study once came out claiming that there's a relationship between people being overweight in middle age and getting dementia in old age, suggesting that overweight people are more at risk of it. Perhaps that's really the case. But without looking at the study itself - which might have explained things in more depth - just hearing it reported as a short news story on the radio, it appeared to leave questions unanswered, like, "Just how overweight would one have to be to make the risk significant, and for how long? And how do they know being overweight is the significant thing, rather than something else that often, but not always, causes people to be overweight, such as eating unhealthy food?" Study results often need to be looked into a bit, to find out if they're what they seem to be at first.

Science is saved from a lot of genuinely poor findings becoming thought of as scientific fact, though, because there is often a demand that findings be confirmed again and again as new experiments are done, often by different people, before the scientific community accepts them as fact.

Also, while quite a lot of scientists have made mistaken and fraudulent claims over the years, science is protected from the effects of that nowadays to a large extent, because reputable scientists use ways of reducing the possibility that they'll get incorrect results that they'll think are correct.

For instance, suppose they want to find out whether people tend to gain weight after they stop smoking. Rather than just monitoring a group of ex-smokers for six months to see how much weight they put on, and, if they all put on quite a bit, concluding that people eat more as a substitute for smoking and so tend to put on weight after they quit, they will have what they call a control group: a group of ordinary people who are still smoking or who never smoked, whose weight fluctuations they'll measure over the same six months. It might turn out that the weight of quite a few people in that group goes up quite a bit as well. Then they would conclude that it's quite common for people to put on weight anyway, so giving up smoking doesn't necessarily have as big an effect as they might otherwise have thought.
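The logic of that comparison can be sketched in a few lines of code. This is only a toy illustration in Python; all the numbers are invented for the sake of the example, and don't come from any real study:

```python
import random

random.seed(1)  # fixed seed so the made-up example is repeatable

# Invented weight changes (in kg) over six months:
# one group quit smoking, the control group carried on as before.
ex_smokers = [random.gauss(2.5, 1.5) for _ in range(100)]
controls = [random.gauss(1.8, 1.5) for _ in range(100)]

mean_ex = sum(ex_smokers) / len(ex_smokers)
mean_ctrl = sum(controls) / len(controls)

print(f"Average gain, ex-smokers: {mean_ex:.1f} kg")
print(f"Average gain, controls: {mean_ctrl:.1f} kg")

# The figure of interest isn't the ex-smokers' gain on its own,
# but how much bigger it is than the control group's gain.
print(f"Extra gain associated with quitting: {mean_ex - mean_ctrl:.1f} kg")
```

Without the control group, all of the ex-smokers' weight gain would look like an effect of quitting; with it, only the difference between the two groups does.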


Or if they were doing a study to find out if children whose parents fed them on a diet of unhealthy food didn't do so well at school, they might find a big relationship between not eating healthy food and doing badly at school. But they wouldn't just conclude that eating unhealthy food was to blame. They would try to think of the things a lot of the children who ate unhealthy food had in common, and try to find out how much those things themselves had to do with them doing badly, before announcing any conclusions about unhealthy food not being good for the brain.

For instance, could it be that families who consistently give their children unhealthy food are often poorer than average, and so they can't afford many of the toys and things richer children often have that stimulate their minds, or that they work longer hours, so they're not with their children so much to help them learn? Or could it be that a lot of parents who give their children unhealthy food are of the type who don't care enough about their children's well-being to attend well to other areas of their lives, such as their need to spend time with them or others, learning from them? Could it be that many are less intelligent, or very busy, so again, they don't spend so much time with their children, helping them learn? Or what's the probability that the study finding can just be put down to chance, and that in another sample of children who eat unhealthily, you'd find most of them doing as well as many children who eat healthily?

Reputable scientists will try to take all those factors into account and more. They will, for example, look at the school performance of children from poor backgrounds, where they might also not get so many stimulating toys, or where parents have to work long hours and so have less time available to help with their homework, but who do nevertheless give them healthy food, to see if they do just as badly at school, or better than the other group.
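That kind of like-with-like comparison can also be sketched with a toy example. The records below are entirely invented, purely to show the idea of comparing diets within each income band rather than across the whole sample:

```python
# Hypothetical sketch of controlling for a confounder (family income)
# by comparing like with like. Every record here is invented.
children = [
    {"income": "low", "diet": "unhealthy", "score": 55},
    {"income": "low", "diet": "healthy", "score": 57},
    {"income": "low", "diet": "unhealthy", "score": 52},
    {"income": "low", "diet": "healthy", "score": 54},
    {"income": "high", "diet": "healthy", "score": 70},
    {"income": "high", "diet": "unhealthy", "score": 68},
]

def mean_score(records):
    return sum(r["score"] for r in records) / len(records)

# Compare diets *within* each income band, not across the whole sample,
# so that income differences can't masquerade as a diet effect.
for band in ("low", "high"):
    stratum = [c for c in children if c["income"] == band]
    for diet in ("healthy", "unhealthy"):
        group = [c for c in stratum if c["diet"] == diet]
        if group:
            print(band, diet, round(mean_score(group), 1))
```

If unhealthy eaters score lower even within the same income band, diet looks more plausible as a factor; if the gap vanishes within bands, income was probably doing the work.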

Scientists will be well aware that not all study findings will be accurate, so standard procedure will be that more research is done, and that the findings be repeated, before they're trusted. Making mistakes in the research could lead to inaccurate conclusions with serious consequences, or sometimes just expensive ones. Such things have been known to happen, but there are protections in place to try to prevent it.


For instance, a group of patients with arthritis could all be given a new treatment in a drug trial or a trial of a new operation, and at the end of six months, they could all be significantly better, and it might be concluded that the treatment worked. But it might happen that many changed their diets during the same time, some tried other treatments, some had conditions that have periods when they're not that bad and then they flare up again, and some might have found that certain exercises relieve their pain and started doing them every day, and so on. So they might have got better for a range of reasons that had nothing to do with the treatment that's being tested. If the research does nothing to find out about the conditions of a group of people with arthritis who weren't on the treatment, so as to compare the rates of recovery of the two groups, but just concludes that the treatment must have worked, some very inaccurate research findings indeed might be released to the public.

So good scientific studies will normally compare groups like that. And when trying to find out how good a scientific study is, one thing to investigate is whether it's been done like that.

Even Reputable Organisations Can Do or Report Misleading or Poorly-Done Research


In 2006, the BBC did a series of programmes on alternative medicine. One included a heart operation that was done on someone who'd had acupuncture but wasn't under general anaesthetic. The person felt no pain. The impression was given that acupuncture could be powerful. But it was discovered a while later that the person undergoing the surgery had also been given a combination of three very powerful sedatives, and large amounts of local anaesthetic in the chest area. So the programme either misled viewers, or the makers of the programme themselves were misled.

Another programme in the series reported on a study that was done into acupuncture, and it was concluded that it actually does something to the pain receptors in the brain that dulls pain. But again the programme was criticised. A Times article said:

... Lewith, an expert on the effects of acupuncture, said in an interview yesterday: “The experiment was not groundbreaking; its results were sensationalised. It was oversold and over-interpreted. Proper scientific qualifications that might suggest alternative interpretations of the data appear to have been edited out of the programme.”

In one of the programmes, on the placebo effect, they said it could be quite striking; and one reason they gave for concluding it can be powerful was a study in which some patients with arthritis were given knee surgery, and some were given sham surgery, where nothing aimed at benefiting the patient was done. Both groups recovered equally well. That gave the impression that the sham surgery had some remarkable power because it tricked the patients' minds into believing they were getting better, and that somehow produced changes that did make the patients better.

But it turns out that what happened was rather different. The findings were not that the sham surgery had a remarkable influence on the minds of patients, but something unfortunate - that genuine knee surgery of the type they did was no more effective than the sham surgery was. Other things the group being studied did or experienced over the follow-up period after the surgery might have been what really helped, such as trying different diets, doing special exercises under the guidance of a physiotherapist, having steroid injections and using other pain relieving drugs, just the body's ability to heal, or even the fact of being in a more relaxing environment, being taken care of after the operation, so any tension that had made symptoms worse was reduced.


There was another programme on the BBC, where they were testing whether people felt less pain after taking painkillers with a trusted brand name than they did when they took cheaper painkillers with exactly the same ingredients but with an unrecognised name. They wanted to see how much psychology has to do with pain relief, such as whether the expectation of a well-known painkiller working could actually make people believe it was more effective than an unknown one. They didn't explain why they thought people would have lower expectations of one with a brand-name they didn't recognise. Still, it sounded like an interesting thing to investigate.

But the experiment they did to try to find the answer was flawed, and they didn't seem to realise. They got a group of big strong rugby players, who obviously wouldn't flinch from a bit of pain, given that they went out week after week to play rugby with a high risk of experiencing some, and asked them to take painkillers and then put one arm in water with lots of ice in it until they couldn't stand it any more. They first gave them the brand-name painkillers and timed how long they kept their arms in the icy water; then on a later day they gave them the other ones, asked them to put their arms in the icy water a second time, and timed them again. Most wouldn't keep their arms in there for so long that time; and it was concluded from that that the brand-name painkillers had a psychological effect that made them more effective, because of a confidence in the brand name that the cheaper ones with the unrecognised name didn't produce.

But what if the reason they didn't want their arms in the icy water for so long the second time was because having experienced it once, they knew that the longer they kept their arms in it, the longer it was going to hurt, and it was the anticipation of the pain getting worse that caused the psychological effect that made them take their arms out sooner?

And how do we know that either of the painkillers had any effect at all? Who knows how long the men would have kept their arms in the icy water for if they hadn't taken anything each time! Maybe they would have kept them in the water for about the same time as they did when they took the painkillers for the first time, for all anyone knows, especially if they would each have felt embarrassed to look like wimps in front of their friends and the BBC by taking them out before some of the others did, which might have actually been quite a powerful psychological motivator for them to keep them in there in itself.

The experiment might have shown more if just some of the rugby players had taken the brand-name painkillers the first time, and the other ones the second time, some had taken the other ones first and then the brand-name ones for their second try, and some had taken nothing each time they put their arms in the water. If the ones who'd taken nothing kept their arms in the water for about the same length of time as the others did each time, it would have suggested either that they were brave, or that neither of the painkillers were having much effect. After all, they were probably only mild, over-the-counter ones; and it's unlikely that painkillers are actually designed to allow people to put their arms in icy water for minutes and minutes on end.

If the men who'd taken no painkillers took their arms out of the icy water sooner the second time than they had the first, it would have suggested that the effect was being caused by the anticipation of the pain getting worse, or their stamina decreasing the more they did the experiment, because they were fed up with how bad the pain already was. After all, they may have been brave enough to go out on the rugby field week after week, knowing they might get hurt; but it could be that the adrenaline rush they got from playing caused them to feel the pain less than they would have done if they'd just been sitting around with nothing to do but think about it, and that that, plus the distraction of all the action and the need to respond quickly, caused them to notice the pain of being tackled and so on less than they would notice pain if they had to just sit there, focusing on it for minutes on end.

But if all the men who'd taken no painkillers kept their arms in the water for less time than those taking the painkillers did, it would have made it seem more certain that the painkillers were working.

If the ones who'd taken the brand-name painkiller first kept their arms in the icy water for longer the first time, but the ones who took the cheaper unrecognised version the first time kept their arms in the icy water for longer the second time, when they were taking the brand-name painkiller, then it would have been safer to conclude that the brand-name ones were having some kind of psychological effect because of the confidence in their recognised name that increased their effectiveness over the other ones.
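The fairer design described above, where the order of the two pills is swapped for some players and a no-painkiller group is included, is what researchers call counterbalancing. Here's a toy sketch of the group assignment, with invented player names and group sizes:

```python
import random

random.seed(7)  # fixed seed so the made-up assignment is repeatable

# Hypothetical sketch of a counterbalanced design: players are split
# at random into three arms, so that order effects (the second dunk
# in icy water feeling worse than the first) hit every arm equally.
players = [f"player_{i}" for i in range(1, 31)]
random.shuffle(players)

arms = {
    "brand_then_generic": players[0:10],
    "generic_then_brand": players[10:20],
    "no_painkiller": players[20:30],
}

for arm, group in arms.items():
    print(arm, len(group))
```

Because each ordering appears equally often, any tendency to give up sooner on the second try affects the brand-name and cheaper pills alike, instead of being mistaken for a brand effect; and the no-painkiller arm shows whether either pill is doing anything at all.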

Strange-Sounding Experiments


Another experiment was done, this time by lecturers at an American university, where students were split up into three groups, and told the experimenters were testing the effectiveness of an energy drink. What was really being tested was whether any expectation the students had that a more expensive one would work better than one that had been bought more cheaply would have a psychological effect that made a difference to their concentration levels and achievement, because the experimenters believed that expensive things in general would be valued more and assumed to have a higher quality than cheap things by most people.

One group was given no drink, and asked to take a test where they had to use brain power. Everyone was asked to take the same test. The experimenters thought of the group that was given no drink as the control group, intending to find out whether the students who were given the energy drink did better or worse in the test than that group. Another group was given the drink, and told it cost nearly three dollars. The participants in that group were told they had to sign a form authorising the people doing the experiment to charge the university they came from for it. The third group was told that the university had obtained a discount, which meant they'd been able to pay under a dollar for each drink. Both of the groups who were given the drink were told it had great intelligence-boosting properties, which might have really got their hopes up.

The group that did the best in the test was the one that had been told the drink was the most expensive. The experimenters concluded that thinking the drink was more expensive, and thinking they were having to obtain special permission to get it, had made the students think it must work better, so they got a psychological boost that made them do better in the test, perhaps because they assumed they'd be better at it, so they were more confident, and that made them feel better, so they put more effort into it.

But what if in reality, the reason they did better was that they thought that given the university would pay so much for the drinks, they didn't want to let them down, so they made an extra effort to concentrate?

And one group was simply bound to do better than the others anyway. What if that one would have done best anyway, even if none of the groups had been given the drink? After all, they only did 'slightly' better than the group that had been given no drink.

The group who were told they'd received the drink at a discount actually did worse than the others. That was interpreted as meaning that people don't value cheap things highly, thinking they're likely to be of a lower quality, so the group must have had an expectation that the energy drink wouldn't do them much good, perhaps assuming that it was likely not to have been very expensive anyway if it had been easy to get it at a discount; and that must have made them less confident that their intelligence would be boosted than the ones who'd been told it was more expensive were.

But what if what really happened was that they started out with high hopes, but when they began to have difficulties with the test, they felt discouraged, and then the thought that the drink might not work well, since after all it was only cheap, crossed their minds? Rather than it being a straightforward case of not doing well because they didn't expect a drink that cheap to do much for them, thoughts about how well they could be expected to do given that the drink was cheap, or maybe thoughts that weren't about the money at all but about how it didn't seem to be boosting their intelligence as much as the experimenters had claimed it would, might actually have distracted them from the test, so they weren't concentrating so well, hence their poorer results. The ones who'd been told it was more expensive might have wondered whether it really worked too, but maybe their sense of obligation to repay the university for spending so much on the drinks compelled them to put more effort into the test rather than dwelling on such thoughts. It's a possibility.

It might have said more about how the amount the drink cost affected their performance if there had been another group, who was given the drink without any cost being mentioned at all, but told about its supposed intelligence-boosting qualities, to see how their test results compared with those of the other groups.

In another experiment, discounts were given to some students who bought season tickets to watch ten plays put on by a drama group at their university. One discount was bigger than the other. Some students weren't given a discount at all (without being told that others had been given one). The three groups were monitored, and it was found that people in the group that had paid full price watched more plays than the others, but that there wasn't much difference at all between the group given the small discount and the group given the bigger one. The people who did the study somehow concluded that this must mean that people's actual enjoyment of a thing can be influenced by how much they've paid for it, and that things got at a discount simply aren't valued as highly, to the point where students can actually believe a play they saw wasn't as good as they would have thought it was if they'd paid more for it.

The experimenters did consider the possibility that the reason the students who'd paid full price for the tickets went to more plays than the others was that they thought it would be a waste of money if they didn't attend; but they said something more must be going on, and they were sure it must be that the students who paid full price were actually enjoying the plays more, because they reasoned that if it had just been a case of people wanting to make up for the money they'd spent, the students given the small discount would have gone to more plays than the ones who'd received the bigger discount and so had paid less themselves.

But what if what really happened was that several of the students who'd received a discount forgot exactly how much of a discount they'd been given, and, rather than actually enjoying the plays less because of it, just thought it didn't matter so much if they didn't attend all of them, because the tickets hadn't cost them so much? What if, rather than the discount group devaluing the plays, the students who'd paid full price thought they ought to go even to the ones they didn't like the sound of so much, because otherwise they'd be wasting their money, whereas those who'd received a discount thought they could afford to be pickier, or that they wouldn't be losing so much if they missed a few plays, for reasons such as having essays that needed to be handed in soon, or having been asked out on a date on one of the days a play was on? Perhaps it isn't the case, but it seems the researchers didn't consider possibilities like that.

It's because findings can be misinterpreted that reputable science has measures in place to control for things that could cause inaccurate interpretations of findings. Controlling for things won't always be expertly done, though, so having a control group won't in itself make for a more reliable study. Sometimes, as with the drink study, the researchers might think they're controlling for everything that could otherwise make their results easy to misinterpret, when they're actually not.

Unfortunately, when previous study findings that were well publicised are proved to have been inaccurate, it isn't always reported in the media.


Astonished

Fraud and media misreporting can both leave people believing things that aren't true. One day I heard a story on the radio about how several years ago, a study was done that even a scientific journal reported as showing that listening to Mozart increased a person's intelligence. It was hailed as one of the most interesting studies of the year. The study participants were split into three groups; one listened to Mozart, one listened to a relaxation tape which had a musical drone in the background, and one sat in silence. Then they were asked to cut a folded piece of paper in certain ways, and try to guess how it would look when it was unfolded. Or something similar.

The people who'd listened to Mozart did best. All over the media it was reported that listening to Mozart makes people more intelligent. One man apparently even bought 100,000 Mozart CDs for people - I think he was a boss and he bought them for people in his company. People started thinking they could make babies more intelligent if they played Mozart to them in the womb.

But really, the study didn't show any such thing. The person who'd conducted it didn't claim it did.

Naturally, if you listen to a relaxation tape that slows your responses down, or sit in silence where you've got nothing to do, you're liable to become less alert, so chances are you won't do so well in such a test. So the findings could simply have been showing how well someone with average alertness levels would be likely to do whether they'd been listening to Mozart or not, and how relaxation can dull responses a bit. Apparently no tests were done before the study participants were split into groups to find out how well they did before they listened to anything.

Then more studies were done, and it was discovered that people who preferred and listened to another classical composer did better than people who listened to Mozart. Then studies were done in schools that found out that people who listened to pop music did better than people listening to classical music.

Then studies were done that found that people listening to mere bangs on the table did better than people who didn't.

It was finally concluded that it wasn't the music making a difference; what was making the difference was listening to any kind of sound that would increase alertness levels, and the more it was enjoyed the better.

Well-Publicised Experiments That Weren't Quite What They Seemed

Being Obedient

The BBC did a programme about whether people are automatically obedient to authority, in which they did another flawed experiment; they got someone in uniform to ask people on a train if they could have their seat. Most people got up for them. The programme interpreted that as meaning most people are obedient to people who look authoritative; but what if most people got up because they didn't see a good reason not to? Many might have assumed the person had a good reason for wanting the seat, and didn't think it was worth risking an argument over in any case.

In fact, the BBC did an experiment to see if they'd get the same findings as an earlier experiment that was done by an American psychologist, who supposedly found that power is so corrupting that normal decent people can become monsters within days if they're given too much, in situations where they have to control people. He'd got a group of students, assigned some the role of prison officers, and some the role of prisoners, and put them in a mock-up of a prison. Within days, some of the students who'd taken on the role of prison officers were mistreating the ones who had the role of prisoners so badly that the experiment was stopped early.

The psychologist concluded that anyone can become a monster under the right conditions. The BBC wanted to see if the same thing would happen if they set up a similar prison experiment, giving some people the role of prisoners, and some the role of prison officers. Nothing like it happened at all. If anything, the prisoners gained the upper hand; and the prison officers tried to keep things fairly amicable throughout.

It turns out that the original experiment by the American psychologist wasn't what it was reported as being at first at all. In reality, he had encouraged cruel behaviour, and the people pretending to be guards were play-acting, in accordance with what he'd encouraged them to do.

In prison

The original experiment was carried out in 1971 by psychologist Philip Zimbardo and a team of investigators working for him. The Daily Telegraph and other publications reported on the inaccuracies that were spread about it:

Basically, while the experiment did not prove that people will become sadists on their own initiative, it showed something equally sinister: that given what they believe to be a good justification by an authority figure, some people can become sadistic. But the findings cannot be considered scientifically reliable, since the people playing the role of the guards were doing what they were doing thinking they were contributing to some worthwhile research that would only last a fortnight, and those playing the role of prisoners were willing participants in the experiment. So it's unclear whether those people would have behaved equally cruelly in other circumstances. And by no means all of them behaved sadistically; some stood by, not protesting, but not joining in. It's impossible to know whether they would have protested if they'd thought the other guards' behaviour was genuinely uncontrolled, and not part of an important experiment by a university psychologist that would only go on for two weeks.

Also, there were only 24 people in the study - a sample so small that you could get quite a few more than the normal percentage of sadistic people in society just by chance. The number of such people in the group could actually have been larger than average because of the set-up of the experiment itself: rather than people being approached at random and asked if they'd take part, it was left up to interested people to write in and volunteer, after adverts were put in a student newspaper. It may be that a lot of people who were interested in a study of how isolation would affect prisoners would be the types who fancied a bit of an excuse to let their sadistic fantasies loose.
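To illustrate the sample-size point with made-up numbers (none of these figures come from the study itself): suppose some trait occurs in 5% of the population, so a 24-person group would be expected to contain roughly one such person. A short binomial calculation shows that a group containing three or more - triple the expected rate - still arises fairly often by pure chance.

```python
# Hypothetical illustration: how often would random sampling alone give a
# 24-person group an unusually high share of some trait found in 5% of people?
from math import comb

def prob_at_least(n, p, k):
    # Binomial probability of k or more "trait" members in a sample of n,
    # when each person independently has the trait with probability p.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

chance_of_three_or_more = prob_at_least(24, 0.05, 3)
print(round(chance_of_three_or_more, 2))  # roughly 0.12, about 1 sample in 9
```

So even a striking over-representation in a two-dozen-person sample is weak evidence on its own, quite apart from the self-selection problem with the volunteers.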

In fact, that idea was tested out some time later. Some psychologists put an advert in a newspaper as an experiment, using the original wording of the advert that had invited students to volunteer for the prison experiment; and they put another advert there too, also inviting people to volunteer for an experiment, but not mentioning that it was to do with prisons. The psychologists did personality tests on the people who applied. They reported that those who'd volunteered for the experiment they'd been told was to do with prisons had higher levels of aggression, authoritarian tendencies, beliefs that abuse of power is justified if it results in successfully achieving things, and other such characteristics, and scored lower on caring ones, than the people who responded to the advert where prisons weren't mentioned.

The psychologist who did the prison experiment in 1971 said he'd done a range of tests on the people who applied, asking about their family backgrounds and mental and physical health, to make sure they were just ordinary people, and to eliminate real criminals or drug users. But it's possible that he didn't sufficiently test for things such as aggressive inclinations.

Still, Zimbardo's concerns about how circumstances can turn ordinary people into sadists are backed up, to an extent, by other research, and did lead to him making recommendations that make a great deal of sense, such as that attempts should be made to minimise the amount to which circumstances can needlessly put the kinds of stresses on people who have power over others that will make them more likely to abuse it, such as if they aren't properly trained in effective methods of handling disputes, so they feel as if they have to resort to cruelty to control people.

Write-ups in a lot of textbooks about the experiment he did have apparently exaggerated the extent to which the findings should be worrying, as Zimbardo himself apparently did at first, giving the impression that even simply assigning people the roles of prison guards with power over people who are given the role of prisoners could turn them sadistic.

One man who'd played the role of one of the prison guards later said he thought Zimbardo's interpretations of the findings of the experiment had too much to do with what he had expected to happen from the start - he'd expected to find that even people from good prosperous family backgrounds will turn on each other if one group is given a lot of power over the other and expected to use it in certain ways; and he thought what happened confirmed it, apparently disregarding the things that made his findings unreliable, and that meant the claims were exaggerated.

But apparently, Zimbardo himself later said that it hadn't been meant as an actual experiment, although it was reported as such; it had been meant as a demonstration of what he believed to be true - that it's very easy for people who are put in positions of power over others, such as in prison situations, to become brutal.

Zimbardo may underestimate the amount that personality characteristics have to do with sadism - he seems to attribute more behaviour to circumstances than might often be the case, apparently believing that anyone can become sadistic and abuse their power if put in a situation where they feel justified in doing so, even if they weren't at all prone to behaving abusively before.

Other research backs him up in showing that situation has a lot to do with it. But personal characteristics seem to play a larger role than he thinks. After all, not all the people playing the role of guards in his experiment behaved sadistically. He does devote some of a book he wrote about his experiment and the conclusions he drew from it to recognising and examining the issue of how more people could be brought up to behave more humanely under pressure, since he does recognise that it isn't a given that everyone will behave sadistically in the same situation, saying that some will collaborate up to a point out of fear; and some will be whistleblowers.

Situations That Can Reduce Compassion and Increase Temptation to be Cruel

Shouting

Zimbardo has been quoted as saying that though it's "hard to believe", even he, with his compassion and empathy, had absolutely no concern for the prisoners who were being abused while the experiment was going on, as if to further illustrate his point that people can do cruel things no matter who they are.

But what he says doesn't tell us what he thinks it does. It doesn't seem to make all that much sense. Why should he be surprised that he didn't have any feelings of compassion and empathy while the experiment was going on when, while it might possibly have got nastier than he initially intended, he himself instructed the people playing the role of the guards to do things that would induce fear and frustration and a sense of loss of control in the prisoners? Perhaps he wasn't as compassionate and empathetic as he thought he was to start with. He ordered that the people playing the role of prisoners should be sleep-deprived to some extent from the beginning. He took part in the experiment himself. He told the 'guards' to be cruel. This is not someone who felt compassion for these people before the experiment began but was somehow carried along by it into behaving with a lack of compassion he couldn't possibly have predicted.

It is conceivable that he does normally have a good deal more compassion and empathy than he showed during the experiment, but that it went out of the window before the experiment began, and it was his desire to prove a point he wanted to make that was so powerful it drove it away. There are a number of things that can make people who did have compassion and empathy lose it in some situations, so they're more likely to be abusive, one of which is wanting to prove a point they believe to be important so much they think it over-rides the rights of an individual or group that might be hurt by what they're doing, because they think it will result in a greater good in some way. There are several others, including, but not limited to these:

Certainly, many people can do cruel things because of the pressures of certain situations that they'd refuse to believe they were capable of doing before, and that they afterwards find it difficult to believe they did. So there do need to be checks built into a system to minimise the possibility that that will happen. For instance, at Abu Ghraib prison in Iraq, where American soldiers were highly criticised for abusing prisoners during the Iraq War, a number of things contributed to them doing so, and they ought to serve as a lesson for future planners to learn from, to reduce the possibility of such things happening in future:

So it seems that a combination of personal characteristics and the situation the guards were put in resulted in them becoming abusive.


Part Four
Believing Misleading Claims Made by People Such as Advertisers and People who Claim to be Psychic

Websites Advertising Products and Treatments for Problems

Smiling

The book How We Know What Isn't So gives an example of how the public might be fooled by something that a scientist who was used to testing things wouldn't be fooled by. If a company was claiming, for instance, that they had a great motivational CD they were selling that helped salesmen increase their confidence, and that helped them increase their sales performance, and on their website, they had testimonials from salesmen who had glowing praise for the CD, saying it had increased their confidence and sales ability a lot and they were selling more products than they had before because of it, a lot of people might find that convincing evidence that the CD was a good one and buy it. Reputable scientists, however, wouldn't think any claims of increased confidence were evidence that the CD helped people to sell things. Nor would they immediately assume the claims of the salesmen to have sold more products since hearing the CD were evidence it worked.

That's because after all, surely the number of things they sold would fluctuate naturally anyway; no one would ever sell exactly the same number of things every month. And even if they really had sold more than normal, there might be other reasons for it, such as the economy picking up. And that's assuming the testimonials were all true; there would be a possibility that they were exaggerated or made up, either by the company themselves, or by people they'd paid to say what they said.

Before reaching any conclusion, scientists would want to find out exactly how many products had been sold by the salesmen who'd listened to the CD and written testimonials praising it, and how that compared with the number of products they'd sold each week going back some time into the past. Then they'd want to calculate what kind of influence other factors might have had, such as chance, the economy picking up, and perhaps other things; and only if the salesmen had sold significantly more products than could be expected given those other things would they conclude that the motivational CDs really had had an effect.


Sometimes there are testimonials for very expensive products or services being sold on websites or in other places, that make them sound enticing by heaping glowing praise on them, and are said to be from people they really worked for. Testimonials for things some people feel they really need to give them a decent quality of life, like a medical procedure, can seem even more persuasive than others, because people can really hope that what's being advertised will help them, so they really want to believe they're true, and can think anything's worth a try. But people should be wary of them for a number of reasons:

So it's best to hunt for other information from other people who're critically appraising what's being sold, discussing good and bad points. One thing to beware of though is that sometimes adverts for products are disguised as unbiased information, being misnamed "consumer reviews" and the like.

Psychic Predictions and Claims of Extra-Sensory Perception

Fortune-teller

It's common in human nature to trust that people are telling the truth, so it can be easy to just believe testimonials for things.

And not only that, but it can sometimes take quite a lot to shake a person's trust in something they believe in. If we expect something to be true, and then we find out something that might have called it into question in our minds if we didn't already believe it, it's easy to find reasons to still believe in it, because of our expectations that it will still basically be true, rather than beginning to disbelieve it.

For instance, suppose a psychic predicts that a major politician will die within the next year, and a lot of people who feel sure real psychics exist and make good predictions hear them say it. They might wonder with interest which politician it will be. But then if no politician dies, to their knowledge, they might assume the psychic must have meant a much less well-known one in some other country far away. Or they might think the psychic still probably made a fairly good prediction, but that whatever signals gave them their psychic information weren't quite clear, so what seemed like a message that a politician would die really meant that a politician's wife or secretary would die, or a major businessman who talked to politicians a lot. Or they might think the psychic meant a politician's career would die. Or an injury might count, especially if it was caused by an attempted assassination.

A lot of generosity could be shown in giving the one claiming to be a psychic the benefit of the doubt, especially among people who want to believe in them. So the fact that no major politician died wouldn't shake their belief in psychics and their powers. Unless they decided at the beginning exactly what "death of a major politician" would have to mean for the prediction to count as true, a whole variety of things could end up being accepted as proof that the psychic was real.

Reputable scientists don't work like that. They don't try to interpret the results of an experiment in a way that fits what they expected to happen to start with. They make very specific rules about what they're going to test for, and don't change the rules they had to start with if the test results aren't what they expected.

For instance, suppose they're testing whether telepathy exists by asking a psychic to try to receive, telepathically, messages that a succession of people in another room are trying to transmit with their thoughts, each message having just been given to them by a scientist. Before the test, the scientist specifies a certain number of successes as the threshold above what might be expected to happen by chance. If the psychic then has slightly less success than could be expected by chance, a scientist who was sure they'd calculated the chance level correctly in the first place wouldn't think, "Well, perhaps the number of successes that could be expected by chance is a bit lower than I thought, so the number they achieved is still significant". They would conclude that the person wasn't successful after all, because they were no more successful than they could have been expected to be by chance.
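As a hedged illustration of fixing the threshold in advance (the test design and all the numbers here are invented, not from any study the article mentions): suppose a guessing test has 25 trials with 5 equally likely symbols each time, so pure guessing succeeds with probability 0.2 and averages 5 hits. The sketch below finds the smallest hit count rare enough under pure guessing to count as significant at the 5% level - a number that is computed before anyone is tested, and never revised afterwards to rescue a near-miss.

```python
# Hypothetical guessing test: 25 trials, 5 symbols, so chance p = 0.2.
from math import comb

def tail_prob(n, p, k):
    # Probability of k or more correct guesses out of n by chance alone.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def preset_threshold(n, p, alpha=0.05):
    # Smallest hit count whose chance probability falls below alpha.
    # The point is that this is fixed BEFORE testing, and never lowered
    # afterwards to turn a near-miss into a "success".
    for k in range(n + 1):
        if tail_prob(n, p, k) < alpha:
            return k
    return n + 1

cutoff = preset_threshold(25, 0.2)
```

Anyone scoring below the pre-set cutoff is simply counted as performing at chance level, however close the near-miss.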

Researchers Changing Their Ideas About What Makes for a Significant Study Finding Partway Through One

Child screaming

When studies are done without such rules being put in place, the study findings can end up showing what a researcher expects to find, or they can take things as evidence for it that they wouldn't have at first, which might not actually show what they'd intended to show.

For instance, if a researcher thinks it's likely that putting a child in day care when they're little distresses them because they're away from their parents, and that they'll likely have more behaviour problems and more difficulty fitting in with others later in life as a result, they might set out to try to prove it. But if they don't specify precisely what would constitute a behavioural problem or distress, then because they're testing for things that are actually quite vague, a whole range of things could be taken as proof of it. What exactly should be classed as difficulty fitting in, rather than standing out from the crowd for a good reason, or being naturally shy? What should be classed as a behavioural problem, and how bad does it have to be to count as one, rather than as just normal childhood boisterousness? And so on.

If a scientist starts by looking for things they haven't defined well, then anything they happen to think is a bit odd might be taken as evidence of what they're trying to prove, especially if the children who went to day care aren't matched with a sample of children who didn't, to see how similar or different their behaviour is.

So reputable scientists like to be exact about what they're going to look for at the start, and stick to it. Or if they do find things they hadn't expected to find that they think are worth taking into account, they'll adjust their study plans, and publicise the new ones, so they can scientifically test for the new things they want to test for.

People tend not to use scientific methods in everyday life, so it's easier to believe things that aren't true, and carry on believing them when the evidence against them is stronger than the evidence for them.

Urban Legends and Other Tall Stories

Unhappy

Scary urban legends and myths and other made-up stories are something else people can easily be taken in by, especially since some seem to be designed to tug at the heartstrings, and the more shocked or sympathetic or angry a story makes a person feel, the less they'll be in the mood to look out for clues that it isn't true. But it's worth keeping an open mind to the possibility, and trying to notice things that don't seem right about a story, because so many untrue things are spread, some of which are designed to scam people out of money. For instance, there are the emails claiming to be from important people who are leaving you - little old you of all people - a fortune! The only thing you have to do, it turns out, is pay them some money to enable them to transfer it to your account or something ... and then more money ... and then more money ... and then they disappear with it.

There are websites like Snopes.com that are dedicated to informing people about whether a lot of the made-up stories that circulate on the Internet are true, so it's worth people going there to see if any ones they come across or get sent are listed there, if they're wondering whether to believe them. But also, sometimes there are things in the stories themselves that are so unrealistic as to make the stories unlikely.

For instance, one scary story warned people that any foods or drinks with a certain sugar substitute in them could even kill if too much was consumed. It told of how the sister of the person writing the story had supposedly loved Diet Coke, and been struck down with a terrible illness, getting worse and worse for months, until she was on 18 pills a day, feeling sure she was about to die. But after she was warned about the killer substance in Diet Coke, she stopped drinking it, and within half an hour, she was much better! In only a few days she was down to one pill a day.

Now how likely is that! For one thing, if the illness could get much better within half an hour, it would surely always have shown a pattern of getting much better between drinks, and then worse again when she drank another Diet Coke. Each morning, when she hadn't had one for hours, the illness would be much less severe, so she would have detected the cause of the problem herself pretty quickly. Also, if the substance was just as bad for everyone, a lot more people would be going down with symptoms.

Sometimes, just a little bit of logic can give a real clue that a story's not true, or at least not all of it is.


Part Five
The Importance of Trying to Find Out More Than One Side of a Story

Being Influenced by Opinion Pieces and Stories in the Media

Sometimes, stories appear in the newspapers where the parts that get covered are determined by the writer's point of view, and people can think they're just reading plain fact; or they realise they're reading an opinion, but it seems so logical that it seems it just has to be right. So they come to believe it firmly. But an opposing opinion would throw a completely different light on it.

Picking a fight

For instance, there might be an opinion piece about how terrible the discipline in some schools is, and the reporter might say that things would be so much better if corporal punishment was brought back, proclaiming that discipline in schools was so much better in the good old days when people were regularly caned when they were naughty. The article might then quote pupils who say they'd be much less likely to misbehave if they knew they might be caned for it, and report on how a school that's infamous for bad discipline now had much better discipline forty years ago when they had the cane. Many readers might go away convinced the cane ought to be brought back, sure that all right-thinking people must be bound to think the same way, since it must be irresponsible or ignorant to think otherwise.

But an opposing opinion might throw a completely different light on things; it might well be possible to find examples of schools where the discipline is excellent and yet the cane is never used, because they use other methods that work much better than the cane ever did. Also, it might turn out that there are other reasons why discipline at that school was so much better in the old days: less competent staff might have taken over since; new populations might have moved into the area with a much higher incidence of crime in general, so a much higher percentage of children in the school are being brought up badly, and so on. So things might not be the way they seemed after reading the report that sounded so convincing at first.

There's often more than one side to a story, and it's best to reserve judgment on a lot of things till it's discovered whether there are good alternative opinions.

Newspaper articles sometimes do their best to inform readers of all points of view on a subject; but sometimes they give opinions that only present one side of an issue - the opinion the writer of the article holds. It's important not to mistake opinion for fact, since there might be good arguments that make an alternative point of view more convincing.

Opinions being expressed

For example, one opinion piece might state boldly that immigration should be stopped altogether because immigrants cause this country a lot of problems; for instance, trying to teach schoolchildren becomes a bit of a nightmare when there are several people in the class who can't even speak English, and don't even speak the same language as each other. The piece might say that trying to help them must mean the others in the class don't get so much attention, so they're simply bound not to learn nearly so fast, and will likely go on to achieve less in life, so the country as a whole will become less successful. It might say that also, there simply isn't enough land to build all the houses the immigrants coming in will want, and that it's also unfair to give houses to newcomers that people who've lived here all their lives paying taxes into the economy will struggle to be able to get.

That might seem a convincing and worrying argument, so its readers might go away convinced there should be no immigration. But an article in another paper might state the opinion that it's a good thing there is immigration, and that we should accept as many immigrants as want to come here, because immigrants do a lot of the dirty jobs people here don't want to do, and that since the birth rate's falling here, we're going to find it useful to have immigrants to look after all the old people we'll one day have who don't have big families to care for them like they used to. The piece might go on to say that also, immigrants who work pay taxes and can do good things for our economy, even creating employment for people here sometimes, by starting businesses. It might say that a lot of them have come from places where they've been suffering distress, danger or severe poverty, so it's only fair they get the opportunity of a better life.

Someone who doesn't know much about the subject might come to hold one or other of those opinions, depending on which paper they happen to read. They might not realise there's another way of looking at things.

So when an opinion is presented, it's worth knowing that there is likely another side of the argument, or more than one, and that the truth of the matter can't really be decided on before examining as many sides as can be discovered, and that there might well be a bit of truth in all sides, so the most reasonable opinion might lie midway between two extremes somewhere. Also there might be solutions to the problems that those saying there's only one thing to do just haven't thought of. People sometimes make arguments that stir the emotions and convince people they must be right. But if people hear another side to the argument later, they can realise things aren't that simple.

Sometimes, something can seem very clear, but it's only because a lot of the facts about it aren't being revealed, or haven't been found out by the person talking about them yet.

For instance, someone might declare that the government should be doing more to get people into employment. If the unemployment rate is quite high, a lot of people might instantly assume, "Yes, what a good idea! The government can't be doing enough!" and rally to the support of the one who claimed they should be doing more. But it might turn out that actually, the government are trying to do a lot, but they've encountered a whole host of difficulties that most people won't hear about unless they happen to watch the occasional television or radio documentary about it, or specifically look up information on what they're doing. Suddenly what seemed so simple and obvious can turn out to be a case of "The more you know, the more you realise you don't know!" That's because you can discover there are a whole lot of things to learn on a subject that you didn't even realise existed before, and will take some time to get to know.

Becoming Informed Before Signing Petitions

Promoting an Opinion

A lot of online petitions circulate, and a lot of people who are asked to sign them do so without really thinking, it seems, swayed to outrage or pity by a story they're told about how something's gone wrong, easily convinced that the solution the person who started the petition wants must be a good idea. But the story they're telling might not even be fully true - they might be mistaken about a few things. And there might be far better solutions than the one they're proposing. So a bit of investigation ought to be done by people before they put their names to such things.

For instance, a petition might claim that a certain member of parliament has said that poor people are just lazy and criminal and should have their benefits taken away, and it might call for him to be sacked. People might be outraged, and be quick to sign.

But it might turn out that what he actually said was that he believes poverty sometimes has a lot to do with the people in it. He might have said that he's heard that in certain communities, there's a culture where people just consider themselves entitled to benefits and don't bother working, and that the attitude is being passed down from generation to generation. Perhaps, he might have suggested, it could be changed if some benefits were removed from such people or just cut, since then they'd be more motivated to find work, and might end up much better off for it - as would their communities, because some people who currently hang around the streets getting into mischief, having nothing better to do, would be doing something useful instead. Also, such people are currently in a poverty trap: they're getting so much money in benefits that it doesn't pay for them to go to work, because they'd lose their benefits if they did, and wouldn't at first earn as much working as they had on benefits. But if they never get into work, they'll never have a chance of promotion, so they'll never legally get more money; so motivating them by cutting their benefits might do them good.

That might be a controversial opinion; but expressing it might seem much less of a sackable offence than the misrepresentation of it that was originally passed around made it sound.

Things can often turn out to be more complex than they seem at first, with more to a story, and more than one side to it. So people ought to bear that in mind before rushing to judgment, or rushing to do anything. It's worth trying to find out more about the issues.

Campaigning for Something While Assuming There's Only One Side of a Story Can Have Unfortunate Life-Damaging Consequences

Crying

Sometimes things aren't quite the way they seem at first. Hearing a shocking-sounding fact or two can motivate people to join campaigns; but taking the trouble to find out more from a few different sources - in case one or more of them want to push their own point of view rather than look at things from all sides to find the truth - can be essential in making sure people end up doing the right thing.

Here's an example of how harm has been done by people, many of whom were probably well-intentioned and idealistic, wanting to make a difference in the world, but making the mistake of not first researching the issues they were campaigning about. A man who'd lived in China for years posted a message on an Internet forum about something bad that happened there. (He attributes to ego-tripping what might often better be put down to a misguided but genuine wish to do good; it's impossible to really know people's motives; but his message certainly shows that bad mistakes can be made by people who jump to conclusions and are spurred to action without finding out the facts first.) He said:

... I lump ... "human rights crusaders" into three large groups. The first group are those who are sincerely interested in human rights, and who take the time to understand both the real situation in China, and what impact their actions/policies will have on the Chinese people. Sadly, this group (in my experience) represents a very small minority of the overall whole.

Second are those who are on an ego trip. The "I'm gonna' save the world" crusaders. They want change, and they want it now. They have grown up in the fast-food culture of instant gratification, and want to feel that they're "making a difference", without having to actually risk much or put much time into it.

And third are the worst ones -- the ones who exploit concern for human rights to accomplish goals that have nothing whatsoever to do with human rights [such as by lying about how much better off everyone would be if they adopted certain lifestyles, just to promote the ideologies they believe in].

In the second category, one of the best illustrations dates from 15 years ago. I had a friend, an American, who was made General Manager of a medium-sized American clothing manufacturer that had operations in China. He'd been in China 5 years, married a Chinese woman, and loved the country and the people. He wanted to give something back to the country that had helped his company make a profit, so he contacted his American head office with a proposal. His wife came from a small, impoverished village in the West of China. He proposed that his company open a factory there, providing jobs to the local people. He further proposed that the company also fund a school, to provide teachers and materials to give the children there an opportunity for a decent education (something they did not have).

Now, the wages that they needed to pay in that village were about 1/4 what they'd pay in Beijing, so there were significant theoretical savings. But by the time you factored in transportation of goods (it was quite far from any major port), and the cost of funding a school, the cost was only slightly lower than setting up such an operation in Beijing. When the company finally approved this, it wasn't only because they'd be making a significant profit from it...it was because it was also doing something to really help the local people.

That factory ran for about two years, until some human rights group in the U.S. heard about it. What they saw was a company that was paying abysmally poor wages ("slave wages") and exploiting Chinese people in "sweat shops". They started a huge campaign in the U.S., telling people to boycott this clothing company's products until they agreed to close down their "sweat shop".

The American side didn't need the bad PR, and within six months, had entirely closed down that factory. Of course, the American human rights campaigners trumpeted their "victory" in winning a human rights battle for those poor, abused Chinese.

Not one of those people ever actually visited the village, or the factory in question. Not one of those people ever did research to find out the actual situation in that village. If they'd done so, they would have discovered that although the wages paid were low by American (or even Beijing) standards, they were about three times higher than what the local people made from subsistence farming. They would also have discovered that the company offered comprehensive on-the-job training programs, giving the local people the opportunity to learn skills and knowledge that they would otherwise never have access to...skills and knowledge that entitled them to job promotions, and higher salaries. Skills and knowledge that gave them opportunities to work in larger operations in Beijing. And they would have discovered that they also provided funding for education for that village's children.

The result of shutting down that factory? More than 300 people lost their jobs, and went back to making much less money than they had in the factory. The school, ultimately, lost most of its teachers, because they could no longer afford to pay them.

And yet those Americans were parading their "triumph" in fighting for human rights. ...

The fight for human rights is not about making you feel important, or successful. The fight for human rights is not about you, and your ego. The fight for human rights is not about you changing the world.

Sensationalism Sells More Than Careful Examination of the Facts

Psychic

To give another example of how a one-sided impression of things can be created: it seems the press in general loves stories of the paranormal, because they inspire interest and sell papers - people are intrigued by them. But the stories will often be reported as if something paranormal really did happen, when there's an alternative explanation that sometimes isn't mentioned, perhaps because it's disappointing and boring, since it explains how such a thing could happen through something entirely natural.

And some people who claim to be psychic make claims that others mistakenly believe. For instance, a man who did water divining apparently claimed to have an extremely high success rate, and may have got more business because of it; but it later turned out he was only counting the times when he found water in his calculations of success, saying that the many times when he failed to find water were times when his powers obviously weren't working, so he was just guessing rather than divining.
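The figures in this kind of claim aren't recorded anywhere, but a small sketch with hypothetical numbers shows how dramatically the trick of discarding failures can inflate a success rate:

```python
# Hypothetical figures: out of 100 divining attempts, water was found 20 times.
attempts = 100
successes = 20

# An honest success rate counts every attempt.
honest_rate = successes / attempts  # 20%

# The diviner's trick: discard the failures as times his "powers weren't
# working", so only the successes remain in the calculation.
counted_attempts = successes
claimed_rate = successes / counted_attempts  # 100%

print(f"Honest success rate: {honest_rate:.0%}")
print(f"Claimed success rate: {claimed_rate:.0%}")
```

Any success rate can be pushed as high as you like this way, which is why it's worth asking what was left out of a calculation, not just what was put in.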

Sometimes things are reported fully accurately, but they're chance events or amazing coincidences that people can mistakenly believe are so unlikely there just must be something mysterious about them.

Even Being Told Little Bits of Information Can Influence Opinions of a Person Significantly

One reason to be careful not to say unfair things about people is that even a single word of description can change the way one person thinks of another; they can even interpret the other's behaviour for a while in terms of it. What backbiting, or even more minor things, can do is lower a person's reputation in the minds of others, unfairly and even permanently. An example can illustrate that:

An experiment was done at a university, where one day, economics students waiting for a lecture were told by a college representative who came in that their tutor was away that day, but that a substitute tutor they'd never met had come in to teach them the lesson. They said that since the department was interested in finding out more about how students react to different tutors, they'd be asked to fill in a form at the end, telling them what they thought of him. They were told that first though, they'd be given some information about him. Sheets of paper were handed out with a supposed description of his character on them. What the students didn't know was that there was one word's difference between the ones half of them were given and the ones the other half were given. One half were told, amid the other things, that he was very warm by nature, and the others were told he was rather cold.

The tutor led a class discussion on things they'd recently learned. Afterwards, the students were given forms to fill in about what he was like, and the differences were striking - anyone would think two completely different people were being described! It seemed the students had completely different impressions of him: Most of those who'd been told he was very warm attributed characteristics to him like, "good-natured, considerate of others, informal, sociable, popular, humorous and humane." Most of those who'd been told he was rather cold attributed characteristics to him such as, "self-centred, formal, unsociable, unpopular, irritable, humourless and ruthless."

Still, perhaps one reason the students assessed the tutor's character the way they did was that they didn't want to be seen to contradict the description they'd been given, thinking they might be seen as making trouble, and that it might affect the way their tutors thought of them. Or maybe they would just have felt silly contradicting the description, assuming that the people who gave it knew better than they did, having probably had more experience of him. Or maybe they didn't think about his character at all while the lesson was going on, and didn't really feel they could judge him in the space of one lesson, but thought they'd better put something down, fearing they'd look foolish if they said they didn't know; so they just followed the lead of what they'd been told, and said something they thought the person asking must want to hear. It's impossible to be sure what was really going on in their minds.

But still, they might well have genuinely had those impressions, for a couple of reasons:

Whenever the students who'd been told the tutor was cold heard him say something that seemed to fit with that description, they might have quickly thought something like, "Oh I see what they mean about him being cold"; but whenever he didn't appear cold, because they weren't looking out for him not appearing cold, it probably wouldn't have registered with them, since they'd have been thinking about what they were learning or day-dreaming, and unless he did something that seemed strikingly warm, the behaviours that contradicted the description wouldn't have stood out in their minds, so they wouldn't have made mental notes of them.

Likewise with those who'd been told the tutor was warm - anything he did that could have been interpreted as warm would have been more memorable, because they might have made a mental note of it as confirming the description they'd been given; but since they weren't looking out for things that contradicted it, anything that did would have likely just not been noticed, because they would have had their minds filled with what was being discussed.

But more than that, they could have actually interpreted the same thing in very different ways, according to the idea they already had of what he was like. For instance, if the tutor had talked about economic decline with a smile on his face, those who'd been told he was warm might have interpreted that as evidence of a cheerful nature, while those who'd been told he was cold might have interpreted it as uncaring and even a bit sinister.

Another reason for their different opinions could well have been that they treated the tutor differently according to their expectations of him. Those who'd been told he was very warm might have smiled at him and been friendly, confident he was a nice person, so he reacted to them with warmth; whereas those who thought he was cold were more negative towards him from the start, and that affected his attitude towards them in a negative way.

Also, they probably weren't thinking much about his character as they got engrossed in the lesson; but looking back afterwards, they would have tried to think of examples that confirmed what they'd been told about him; they wouldn't have felt the need to try to bring to mind examples of things that contradicted it, because they would have trusted that it was true.

And when they thought over the examples they brought to mind that confirmed what the description of his character had been, they'd likely have become more firmly fixed in their minds, and they'd remember them better afterwards for thinking more about them. So they might have actually gone away with the impression that the tutor really was the way the description had described him.

Still, perhaps if he'd taught them for a few more lessons, they'd have begun to see things in him that contradicted their expectations, and they'd have begun to form their own opinions of what his character was like. Getting off to a bad start by reacting to him in accordance with what they expected of him wouldn't have helped though, since if they'd shown dislike towards him, it could have soured the relationship for a while, as his unfavourable reaction to their signs of dislike sparked off a bad one in them, which he responded to unfavourably, and so on, in a cycle of bad feeling.

Still, that's one reason people should reserve judgment when told something bad about someone, unless it means they could be dangerous, or harmful in another way. The impression isn't guaranteed to be trustworthy.

How Gossip and Backbiting Can Be Unfair

Feeling superior

It's just as unfair to gossip about people you're not that close to. For instance, someone might hear a rumour at work that a young woman in the office was getting overly friendly with a married man at the office party, and they might pass the rumour on. A new person in the office might hear it, and when they first meet the woman being spoken about, they might think, "Oh, so this is the slutty marriage wrecker!" In reality, what happened might have been that the married man, a bit the worse for drink, was sexually harassing the woman. The person who started the rumour might have been watching from the other side of the room, unable to hear her protests, and perhaps not seeing much of what happened at all. So not only did the woman have to put up with the harassment, but she might later have to put up with bad attitudes from work colleagues; and the new person there might not be enthusiastic about getting to know her, depriving themselves of a friendship they might have enjoyed.

Sometimes, people exaggerate stories about other people or add details, and then others pass those on, assuming they're true, perhaps adding a few exaggerations or details of their own. They can do it because they're telling the story to entertain others and themselves; or they forget details and assume it won't do any harm to make up some to replace the ones they've forgotten, because they value the entertainment the story provides over the reputation and feelings of the person it's being told about - or it doesn't even occur to them to think about that person's reputation and feelings, especially as the person isn't there, so they assume no harm can come to them from the spread of the story; or they might feel silly not being able to remember bits, so they quickly make up others to compensate. But the reputation of the person the story's being told about can be harmed in reality, because what the others hear will shape their attitudes towards them and the way they behave towards them.

In fact, a study was done where students were told to write about any stories they told others over the following few weeks. They were asked to tell the gist of the stories they told, saying who they told them to, and giving the reasons for telling them, such as to entertain, to inform, or whatever. They were asked whether they'd added extra details they'd made up, or left out bits, and whether what they'd done had made things seem different from the way they really were.

It turned out that well over half the stories had been altered so that things weren't described as they really were, and the students admitted to a lot of it, so they were doing it deliberately. Sometimes, though, they didn't admit to altering a story even when their listeners thought they had. That could have meant the listeners were mistaken; or that the students were ashamed of having done it; or that some distortions are so common the students didn't even think to mention them when reporting on their stories, taking it for granted that everyone makes them; or that there were insignificant bits of the stories they'd forgotten, some of which they filled in on the spur of the moment, not being entirely sure whether what they said was true or made up, and not really caring, so they quickly forgot they'd done it. Or it could have meant they forgot some of what they'd said between telling the stories and writing about them.

Or it could even have meant that after a while, they didn't realise they were doing it. It's been found that once a person starts exaggerating - adding details they've made up, for example giving the impression that people were making a fuss about less than they really were - they can start remembering what happened as if the invented details really did happen. So when they tell the story again, they'll genuinely believe they're telling the truth, because what they've said before is as much of an aid to their memory as what actually happened - or even more so as time goes on, when their recollection of what really happened begins to fade and they rely more on their memories of their retellings of the story. Sometimes listeners guess they're not being told the truth, but sometimes they don't.

Then if listeners tell the story to others, they might add bits themselves. As the story's retold by more and more people, more distortions can be made, till it's quite different from what really happened; and yet people will be being told the story as if it's true.

It turns out that a lot of the time, people don't tell a story just because they somehow think the information ought to be passed on, but to have fun, to amuse others so they'll think better of them or to liven up a conversation, to make themselves look clever, and so on, though it's doubtful that many people plan to do that in advance - it's probably more like a habit that they fall into, after only half thinking about it for a moment.

But that shows that people shouldn't be too quick to believe rumours they hear about others that damage their reputations.

Also, when someone tells another person several things about someone they know over time, they're often likely to be the most extreme - and thus most memorable - examples of the person's behaviour, so the listener can get a distorted impression of them.

For instance, a college student might tell their roommate about several of the times their mum shouted at them for not doing their homework at school. When the roommate finally gets to meet the mother, they might be expecting a really bad-tempered person, and be surprised and relieved when she seems quite normal. Her child would have been unlikely to tell their roommate about all the times she wasn't shouting at them, because it wouldn't have made for a worthwhile story. So even if she only shouted at them about two per cent of the time, the roommate might well have got the impression that she was always doing it. They might also have got the impression that she's just a nasty, bad-tempered person, when in reality the times she shouts the most are the times she's feeling stressed and worried, and doesn't know how else to get her children to do what she wants.

Circumstances That Can Seem Incriminating Can Be More Understandable Once More Facts Are Known

Empty pockets

Another issue is that people talking about what other people have done can often just give the details of the deed, but not say much about what led the person to do it, as if they don't think that's so important or interesting; so people can get the impression that others are nastier or better than they really are.

For instance, imagine someone said he'd spent a year living homeless, begging on the streets, making about £50 within a few hours each day but blowing the lot on slot machines. He'd since come to realise that was a horrible waste of money, and he'd changed his ways and didn't gamble anymore; but he'd done it because all the while he was playing the games, he was in another world where his spirits were lifted, he had an adrenaline rush that made him feel alive, and he forgot all about his misery. The dramatic detail - that he'd spent time on the streets making about £50 within a few hours from begging but spending the lot on slot machines every day - might be passed on; but the not-so-exciting details - that the reason he'd done it was to escape into a happier world for a time, that he now regretted blowing all the money, and that he'd realised it wasn't a good use of his time and had changed his ways - might not be. So people would find it easy to judge him as feckless and not to be trusted with money; and if they met him themselves, they might expect to meet a ruffian, and be surprised he wasn't that bad.

That being said, some people are very good at lying to cover up their bad behaviour and make people think they're better than they are, so they can deceive people into putting more confidence in them than is good for them. Still, sometimes reserving judgment about a person is best.


Anxious

Sometimes, people do things that are morally wrong, and yet there are mitigating circumstances that make them more understandable; so while just finding out what a person has done will make people want to instinctively condemn them, finding out why they did what they did can put a different light on it.

For instance, if a husband is worried his wife is cheating on him, he might not know how to find out for sure, feeling certain she'll deny it if he asks her. So he might search through her private letters and emails to try to find evidence. He might find her Facebook password and log in as her, pretending to be her to try and catch a possible cheater out, or to get more information. If the wife were to find out and say to her friends, "My husband's been reading my private letters and emails, and stole my Facebook password and pretended to be me", they'd probably be shocked, and have some angry words to say about him.

When they find out why he did what he did, they might still say it was a bad thing to do and he shouldn't have done it. But while it was technically morally wrong, if it didn't do any harm, and he couldn't think of another way to find out the information though he tried, and he did have genuine reason for suspicion, it becomes so understandable as to be not really worthy of much condemnation after all. People might condemn him more because of their sympathies with his wife than because they would think he genuinely deserved it if they really thought about it. If they heard about the same thing happening between a couple who were strangers to them, and were told all the details, then because their emotions weren't stirred up with sympathy for one side or the other, they might judge things more impartially.

If someone makes an accusation that's quite shocking, it's perhaps even more likely it'll be believed quickly without question, because it'll stir up strong emotions of outrage or something similar, and they'll make people feel like making judgments quickly; but people should still try to remember to reserve judgment till they've heard both sides of the story, so they have more understanding of why things have happened, or they have more of an idea of whether the story's even true.

Criticising

Here are a couple of imagined divorce letters from the Internet, one replying to the other, that illustrate how things can look very different when the other side of the story has been found out. It can be tempting to sympathise with the first person who describes what's been going on, till the other side of the story's told:

Dear Wife,

I’m writing you this letter to tell you that I’m leaving you forever. I’ve been a good man to you for 7 years and I have nothing to show for it. These last 2 weeks have been hell.

Your boss called to tell me that you quit your job today and that was the last straw. Last week, you came home and didn’t even notice I had a new haircut, had cooked your favorite meal and even wore a brand new pair of silk boxers. You ate in 2 minutes, and went straight to sleep after watching all of your soaps. You don’t tell me you love me anymore; you don’t want sex or anything that connects us as husband and wife. Either you’re cheating on me or you don’t love me anymore; whatever the case, I’m gone.

Your EX-Husband

P.S. don’t try to find me. Your SISTER and I are moving away to West Virginia together! Have a great life!


Dear Ex-Husband

Nothing has made my day more than receiving your letter.

It’s true you and I have been married for 7 years, although a good man is a far cry from what you’ve been. I watch my soaps so much because they drown out your constant whining and griping. Too bad that doesn’t work.

I DID notice when you got a haircut last week, but the 1st thing that came to mind was ‘You look just like a girl!’ Since my mother raised me not to say anything if you can’t say something nice, I didn’t comment.

And when you cooked my favorite meal, you must have gotten me confused with MY SISTER, because I stopped eating pork 7 years ago.

About those new silk boxers: I turned away from you because the $49.99 price tag was still on them, and I prayed it was a coincidence that my sister had just borrowed $50 from me that morning.

After all of this, I still loved you and felt we could work it out. So when I hit the lotto for 10 million dollars, I quit my job and bought us 2 tickets to Jamaica. But when I got home you were gone. Everything happens for a reason, I guess.

I hope you have the fulfilling life you always wanted. My lawyer said that the letter you wrote ensures you won’t get a dime from me. So take care.

Signed, Your Ex-Wife, Rich As Hell and Free!

P.S. I don’t know if I ever told you this, but my sister Carla was born Carl. I hope that’s not a problem.


Shopping for clothes

In a more mundane example of how things can seem very different when just one or two more details are known than they were at first, a woman did some work in a charity shop. She said her boss kept rejecting ideas she came up with for how to entice people to buy things. She said one thing she'd wanted to do was to put teddy bears on a piece of furniture to make it look as if they were climbing up it. That sounded like a cute idea, but she said her boss had said no. When asked why, she said the boss had said the piece of furniture was new and looked nice, and it would ruin the look of it. It was tempting to think, "Silly snobbish woman, being so proud of a new bit of furniture she thinks covering it in teddies will mean it can't be admired so much because people can't see it so well!"

But several weeks later, one or two more details of the story were heard, and it turned out that what the woman with the ideas had actually asked to do was to sellotape the teddies to the furniture. That one detail totally changed the way the story was understood; it became obvious why the boss had objected: When the sellotape was eventually taken off, for instance when people wanted to buy the teddy bears, it might have left sticky marks on the furniture that were hard to remove, as well as pulling out some of the fibres in the teddies.


Waiting to find out the other side of the story before drawing conclusions naturally doesn't mean the truth will be found so sides can be taken. Sometimes the truth may never be found out. After all, asking the person being accused of things for their version of events can sometimes leave people thinking both sides are probably as much to blame as each other. Or, since people can often tell lies and make excuses to put what they've done in a better light, and might make even more unfair accusations against the original accuser than that person made themselves, it might be difficult to get to the truth.

Sometimes, it's best not to look for the truth at all, but simply to reserve judgment about a person accusations are being made against, knowing there are bound to be two sides to the story.

Talking angrily

Another thing that should make people cautious about making hasty judgments about others, whether on hearing gossip about them or on first impressions, is that people can't know what was going on in a person's mind when they did something; so accounts of events sometimes won't be able to include what was perhaps some of the most important mitigating information. For instance, someone might shout at a person when all they did was ask for a little favour, and word of it might get around, giving lots of people the impression that the one who shouted is an unreasonable, bad-tempered person. But it might be that no one knows the person had felt irritated for a long time, because the one asking for favours had often done so on the very days they were at their busiest, and they'd never before complained much; and just when they finally thought an understanding had been reached between them, the person asked them for a favour, knowing they were at their busiest again.

All that someone who doesn't know what's been going on - or who does, but doesn't realise how much it's been stressing the person who got angry over time - will know is that a small favour was asked for, and the person being asked got really angry about it, like a mad thing. And if they tell all their friends and colleagues about it, that's what they'll all think too, when it isn't really fair.

So unless others need to be warned about someone because they're dangerous, creepy or could cause a problem in some other way, and it's fairly certain that any bad reports about them are true, it's unfair to say bad things about people behind their backs just for the sake of conversation, and it could lead to future problems. It's also best to reserve judgment about what a person is like until they've behaved the same way enough times for there to be a definite pattern of behaviour, rather than it possibly being a one-off, or a difficult phase they're going through.

Even Scary Stories that Friends Tell Aren't Necessarily Trustworthy

Sometimes, people will talk about events as if they happened to someone they know, rather than more truthfully saying they're just things they've heard about. It can happen when someone wants a story to seem more believable so it'll be more impressive or have more impact; or when they want it to seem more exciting, so they make up a few extra details; or even simply through a poor choice of words - for instance, "My brother knows someone" rather than "My brother knows of someone". Unfortunately, even people others assume they can trust can do that, not meaning to deceive, but just assuming it's harmless. If the story was never true in the first place, people can end up believing incredible things have happened that just haven't.

People can also come to believe some things are much more common than they really are: if several friends say they know someone who had a bad experience, it might be easy to get the impression that the experience is common. But for one thing, they could all happen to know the same person. And for another, some of them might really just be retelling a story they've heard, told to them by someone who said it happened to someone they knew in person, but who had in reality just heard about it, perhaps from someone who made part of it up. So it's best to keep an open mind about whether stories are true - especially dramatic ones where unusual things took place, and particularly if actions might be taken because of them.

For instance, suppose someone tells their friends they know a person who went to the doctor for an injection, only to wake up the next day with some hideous disease, and who found out later they'd been injected with an old unsterilised needle that had been used on someone with that disease. Anyone scared by the story shouldn't immediately decide never to get injections from a doctor again, or anything like that; that could be harmful, naturally. If they're worried about the story, it would be better to look for information - perhaps from reputable websites - about whether such things actually happen.

The same is true for stories that appear in the papers or on television.

Don't Assume Authority Figures are Necessarily Trustworthy or More Expert Than Others

Sometimes people can be misinformed because someone with professional qualifications and an important position will make a claim or give a warning, when they're not actually qualified in a way that makes them any more expert in that particular subject than anyone else.

For instance, apparently a member of a presidential AIDS commission in the 1980s claimed that AIDS was the greatest ever threat to civilisation - more serious than any of the plagues of former centuries. And Oprah Winfrey claimed that research studies had predicted that within three years - by 1990 - one in five heterosexuals could be dead from AIDS. But not all research studies are equal. And though the people making those claims might have been well educated and trusted by important people, it's quite possible they had no training in epidemiology - no expertise in predicting how disease might spread - and we don't know how expert the people they got their information from were. The predictions were very wrong in any case. Even scientists can be unqualified to make predictions and judgments about scientific subjects they haven't been trained in.

To give another example, I know a woman who spent time on a social work course. The first essay the students had to write didn't count towards their grades, so she thought it wouldn't matter if she failed. So she experimented: She'd heard that social work tutors disapprove of anything politically incorrect, like preference for one religion, and was curious to know if it was true. So in her essay, she said she thought Christian principles like abstaining from sex till people were in a committed relationship should be promoted in schools to protect teenagers from unwanted pregnancies and sexually transmitted diseases and being used. Her essay was failed, and a lecturer protested the idea that young teenage girls shouldn't have sex if they wanted, saying that, for instance, trying to stop a 13-year-old in care from having sex with anyone she wanted was violating her rights. The student wondered how much that was actual social work ideology.

The same man said in a lecture that Muslim men who rape girls do it because their parents don't like them having sex at home, so they have to do it secretly, as if he was making excuses for it.

While it will often be safer to rely on the word of a professional such as a social worker, psychologist or nurse than on the version of events given by a client of theirs, or by someone who seems less respectable on the surface, no one should automatically side with one person just because of their superior status. It simply won't always be apparent what strange ideas, personal grudges, or corrupt and dishonest thinking might govern the one with the higher status; even highly respected people have been found to be corrupt, or have got carried away into promoting harmful or unwise ideas. Even some Nobel Prize winners have apparently gone on to promote wacky ideas, or tried to prove mainstream science wrong: a couple of them have denied that global warming has anything to do with humans, and one promoted the idea that since vitamins can make people healthier, taking megadoses of certain ones must surely cure even serious diseases like cancer. He was wrong about that.

So whoever a person is, what they say can't necessarily be relied on as absolute fact, especially if they're speaking about things that aren't in their area of expertise.

Be Aware That the Impression You Get of Things Isn't Necessarily the Way Things Really Are

Things can seem different from the way they normally do in the wake of events that have just happened, so people can get the impression they like something more than they really do. For instance, after a long, tiresomely boring meeting, a song on the radio that they used to consider mediocre might seem pleasant because of the contrast. But if they add it to their record collection, they may later wonder why they thought they liked it enough to do that.

Or if someone's just been through a tense stressful experience, something such as a mindless television programme that might normally seem intolerably boring can instead seem relaxing and pleasant, because it can be soothing to have it just wash over you instead of having to cope with something that takes more effort. Someone who's just undergone something stressful will likely not be in any mood for a fun mental challenge.

Also, a person's thought processes might have been so disturbed by the stressful experience that their thinking powers aren't on top form anyway, so it doesn't take so much to make them feel satisfied with the programme. One reason will be that they're likely to be distracted, their minds often wandering, because the stressful experience they've just had was so dramatic by contrast they can't stop thinking about it. So when their minds do focus on the programme, it's likely to be on the best bits that particularly catch their attention. So even if there are only a few good bits in a programme that's otherwise dull, because those will be the bits that caught their attention, and they didn't really notice much about the other bits because they weren't trying to concentrate on them, they might think the whole programme was probably OK.

Naturally, people who aren't used to watching better-quality things will be more satisfied with it too, because they've got nothing better to contrast it with, so it won't seem poor by comparison.


Part Six
Being Careful Not to Put Too Much Faith in Reported Statistics

It's Easy for Statistics to Give an Incorrect Impression

There are several ways statistics can be misleading if used wrongly. So it's best to keep an open mind to whether statistics the media or researchers or the government and so on report are accurate, or could at least be easily misinterpreted.

It's easy to be worried by information based on statistics when there's no need. For instance, a magazine once reported that a study had found four times as many road deaths in the evening as in the morning, and claimed it must therefore be four times as safe to drive in the morning. But that was wrong: it turned out there were estimated to be four times as many cars on the road in the evening as in the morning, so an individual driver had no greater chance of dying in the evening than in the morning; the percentage killed would be the same.
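The arithmetic behind that correction can be sketched in a few lines of Python. The figures below are invented for illustration; only the four-to-one ratios come from the story:

```python
# Invented illustrative figures: only the four-to-one ratios
# match the magazine's story.
morning_deaths, evening_deaths = 10, 40        # 4x as many evening deaths
morning_cars, evening_cars = 100_000, 400_000  # but 4x as many evening cars

# The fair comparison is deaths per car on the road, not raw death counts.
morning_rate = morning_deaths / morning_cars
evening_rate = evening_deaths / evening_cars

print(morning_rate == evening_rate)  # True: each driver's risk is the same
```

Whenever a headline compares raw counts, it's worth asking what the counts are out of; only the rate tells an individual anything about their own risk.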

Sometimes, statistics are reported on the news that sound alarming, when they're really of far less concern for most people than they seem. Sometimes, it's worth anyone who's worried about them doing more investigation to find out the full story.

For instance, it was reported on the news that a study had found that taking a certain class of drugs to lower blood pressure put people at a 30% increased risk of lung cancer. A health professional said she was annoyed at the way that had been reported, because no mention was made of the fact that for most people, who have only a very low risk of developing lung cancer anyway, even a 30% increase wouldn't make their risk high, so the news report might have been unnecessarily worrying.

And she said that what the study meant in practice was that for every 2500 people taking the drug over a long period of time, one extra person would get cancer. Not good, but nowhere near as bad as the impression the news reports gave. And that was if the study was even accurate.
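As a rough sketch of how a big-sounding relative increase can translate into a small absolute one, here's the arithmetic in Python. The baseline risk below is an invented figure, chosen only so the result lines up with the one-in-2500 figure she quoted:

```python
# Assumed (invented) low baseline risk of the disease for a typical person,
# chosen so the result matches the one-in-2500 figure quoted above.
baseline_risk = 1 / 750
relative_increase = 0.30   # the "30% increased risk" from the headline

new_risk = baseline_risk * (1 + relative_increase)
extra_cases = (new_risk - baseline_risk) * 2500  # extra cases per 2500 people

print(round(extra_cases, 1))  # 1.0: about one extra case per 2500 people
```

The point isn't the particular numbers, but that a relative increase means nothing on its own: 30% more than a tiny risk is still a tiny risk.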

A related issue is that people can sometimes mistakenly assume that one thing has caused another, when in reality it just happened at the same time, or something else caused them both, or the causation actually ran the other way round - so the truth is the opposite of what a lot of people think it is.

One example of confusing the cause of something with something that just happened at the same time is mistaken assumptions about the effect of Britain's membership of the European Union on jobs. One person might complain that membership has been bad for Britain, saying that before Britain joined in the mid-1970s, the employment rate was high - so high that a person who was dismissed from a job, or didn't like it, could walk out and find another within the week, because employers were always on the look-out for new workers; but that since Britain joined the European Union, things have completely changed and unemployment has risen. They might blame it on a lot of jobs having been taken by immigrants, saying that's deprived native people of them.

So they might be blaming unemployment on joining Europe. But they might be completely forgetting that in the 1980s, some big manufacturing industries were deliberately decimated because they were thought to be uneconomical, and because Margaret Thatcher wanted to break the power of the unions; the unemployment rate went above three million - before eventually falling again - partly because of those deliberate policies, under which mines and major factories were closed.

Also, for a long time before Britain joined the European Union, there were phases of high unemployment, alternating with a healthier job market.

Another issue is that robots are probably now used to do more jobs than they ever were before.

And for various other reasons, more work nowadays is part-time work, and people tend to have less job security, in contrast to being able to expect a job for life some decades ago.

Someone who hears that the job market has got a whole lot more precarious since Britain joined the European Union might be convinced that it's to blame, and think it would be best if Britain left it, when in reality, there are several things that have contributed to bringing about the change, some of them nothing at all to do with it.

As for immigrants taking jobs, a lot of them are jobs that are so laborious that not enough local people can be found to do them.

A Joke Demonstrating the Need to be Cautious About Accepting Statistics Without Question

I once found the following joke on the Internet; I can't remember where now.

It demonstrates that one thing doesn't necessarily cause another, even when a statistic might suggest it does:

The Dangers of Bread

IMPORTANT WARNING!

For Those Who Have Been Drawn Unsuspectingly Into The Use Of Bread:

1. More than 98% of the inmates in American prisons are now bread users.

2. Fully HALF of all children who grow up in bread-consuming households score below average on standardised tests.

3. In the 18th century, when virtually all bread was baked in the home, the average life expectancy was less than 50 years; infant mortality rates were unacceptably high; many women died in childbirth; and diseases such as typhoid, yellow fever, and influenza ravaged whole nations.

4. More than 90% of violent crimes in America are now committed within 24 hours of eating bread.

5. Bread is made from a substance called "dough". It has been proven that as little as one pound of dough can be used to suffocate a mouse. The average person eats more bread than that in one month!

6. Primitive tribal societies that have no bread exhibit a low incidence of cancer, Alzheimer's, Parkinson's disease, and osteoporosis.

7. Bread has been proven to be addictive. Subjects deprived of bread and given only water to eat begged for bread after as little as two days.

8. Bread is often a "gateway" food item, leading the user to harder items such as butter, jam, peanut butter etc.

9. Bread has been proven to absorb water. Since the human body is more than 90% water, it follows that eating bread could lead to your body being taken over by this absorptive food product, turning you into a soggy, gooey bread-pudding person.

10. Most bread eaters are utterly unable to distinguish between significant scientific fact and meaningless statistical babbling.

Scary-Sounding Statistics that Aren't as Worrying as they Seem

Newspapers sometimes make statements in their articles that aren't precise enough to be as informative as they should be, risking scaring people unnecessarily, or not being specific enough to warn the people who actually need warning, like:

Scientists say high consumption of sugar puts people at a 30% higher chance of heart disease.

Though the warning might be a very worthwhile one, it does - and should - raise questions: how much sugar counts as high consumption, and 30% higher than what starting risk?

A scary statistic has been reported in a newspaper: A third of all people who have hip fractures die within a year.

It's tempting to wonder what on earth it is about hip fractures that kills so many people. But perhaps a lot of those deaths in reality have nothing to do with the fractures at all: what if it turns out that most people who get hip fractures are over 80 (perhaps because the osteoporosis that comes on with age makes fractures more likely)? At that age, a lot of those people would have died within the year anyway. It's sometimes easy to confuse the cause of something with something that just happens to be going on at the same time. Keeping an open mind about how to interpret a statistic until more information is known is sometimes best.

Giving a False Impression by Skewing Averages

When it's reported that the average person does this or that, it's possible that the average hasn't been calculated well. It's possible that things have been included in the calculations that have distorted them.

There are three ways averages are commonly calculated, and calculations can sometimes be used to mislead, so as to give a more favourable impression of someone's argument than it deserves - for instance if they want to convince the public that people are paid well in their firm; or they can mislead because of errors made when calculating them.

The three kinds of averages are called the mean, the median and the mode. Things can sometimes seem more favourable or unfavourable to people depending on which is used.

The mean is calculated by adding up all the numbers you have, then dividing the total by however many numbers there are. So, for example, if someone was calculating the average salary of everyone in a company, they might add all the salaries up and divide by how many there were (which would naturally equal how many workers there were). If ten people in a company of twenty employees were paid £200 a week, and the other ten £400 a week, then adding up all the salaries would make £6000; dividing that by 20 - the total number of workers - would make £300; so it could be said that the average salary was £300, right in the middle of £200 and £400. (There would be a much easier way of calculating that particular average, but adding and dividing would be the best way if the sums were a lot more complicated because there were a lot more people in the firm, earning a range of different salaries.)

The thing is that if the boss's salary was included in a calculation, and it was much more than those of most people in a company, and especially if there were a few more people at the top with huge salaries, while most people's weren't all that high at all, then adding the high numbers in with the others would make the result of the calculation a higher number than it would be otherwise. So it might look as if the salaries of all the people in the company were higher than most of them really were.

The average salary could be calculated in other ways. The median would be the middle number, if all the numbers were lined up in a row from the smallest to the biggest. So half the numbers would be lower than it, and half higher. Then the fact that a boss's salary might be a whole lot higher than that of most workers wouldn't matter, because it would just be one number among all the rest. The average salary would be said to be the one that came up in the middle of the row of numbers.

The other type of average is called the mode. It's just the most common number. So with that, you'd get what the most common salary was in the company. That might give quite a good picture, although some types of workers' salaries might differ quite a lot from the most common ones. So just calling the most common salary the average wouldn't tell the whole story.
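The three averages can be compared directly with Python's statistics module, using the company from the example above (ten people on £200 and ten on £400), plus a hypothetical boss on £3000 added to show how only the mean gets pulled upwards:

```python
from statistics import mean, median, multimode

salaries = [200] * 10 + [400] * 10   # the company from the example above

print(mean(salaries))    # 300
print(median(salaries))  # 300.0

# Add a hypothetical boss on £3000 a week:
with_boss = salaries + [3000]
print(round(mean(with_boss), 2))  # 428.57: the mean jumps well above most pay
print(median(with_boss))          # 400: the middle value barely moves
print(multimode(with_boss))       # [200, 400]: the most common salaries
```

One very high number is enough to drag the mean well above what most workers actually earn, while the median and mode stay put - which is why it matters to know which kind of average a claim is based on.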

To give another illustration, once someone told me she was doing a survey for a university course she was on, to find out how much alcohol the average person drank in a week. She said most people she'd interviewed for the survey said they drank around five or six pints of beer, but that one person told her he drank 200! She said she'd better not include that, since it would skew the averages a bit!

If she was planning to calculate the average by adding up the number of pints drunk and dividing by the number of people in the survey, so as to get a rough average for each person, then including the person who said they drank 200 would raise the final figure considerably. For instance, if there were 50 people in the survey and the total number of pints drunk was 400, she might calculate that on average people drank eight pints a week, since 400 divided by 50 is eight. But remove the person who claimed to drink 200 pints a week, and the total drops to 200 pints among the remaining 49 people - an average of only about four pints a week, if she was calculating the average using the mean.

In that case, calculating by the median or the mode would get a more accurate average.
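The effect of that one extreme answer can be checked with the same statistics module. The individual figures below are invented; only the totals (200 pints among the other 49 people, plus the one 200-pint claim) match the survey story:

```python
from statistics import mean, median

# 49 respondents drinking 200 pints between them (individual figures
# invented), plus the one person claiming to drink 200 pints a week.
pints = [4] * 47 + [6] * 2 + [200]   # 50 answers totalling 400 pints

print(mean(pints))                 # 8: the outlier doubles the mean
print(median(pints))               # 4.0: the median is unaffected
print(round(mean(pints[:-1]), 1))  # 4.1: the mean once the outlier is dropped
```

A single wildly atypical answer doubles the mean, while the median doesn't move at all, which is why the median or mode would give a fairer picture here.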


Part Seven
The Accuracy of Opinion Polls

There are ways of doing opinion polls fairly accurately. Still, it's worth keeping an open mind about the reliability of any poll on the news, since sometimes there will have been something wrong with the way the information was collected, or there will be a reason why many people's opinions might have changed since the poll was taken. For instance, at the beginning of the week a certain politician might be looked on favourably by a lot of people; but if, a couple of days after an opinion poll asked people what they thought of them, the news broke that they'd been involved in a scandal, a lot of people might not think much of them anymore.

The Importance of Polling a Sample of People Who are Truly Representative of the Group Being Polled

One important thing people collecting opinions for polls need to do to try to ensure they're accurate is to poll a sample of the public that is, as far as they know, representative of the public in general, or of whichever group they're trying to find out the opinions of. It helps if the sample is big, since in a small sample there are more likely to be flukes of chance, such as finding 90% of people who all think the same thing when in reality a far smaller percentage of the public do. Also, a small sample is more likely not to be genuinely representative of the public, because too many of the people in it have things in common. For instance, someone standing on a street asking people who they want to vote for in the next election could, unbeknownst to them, be encountering a lot of people who've just come out of a Conservative club meeting. So they'd think more of the public were intending to vote Conservative than really were.
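A quick simulation illustrates the fluke problem. It assumes (artificially) that exactly half the public hold some view, then repeatedly "polls" small and large random samples to see how often a poll lands more than ten points away from the true 50%:

```python
import random

random.seed(1)  # fixed seed so the simulation is repeatable

def poll(sample_size):
    """Fraction of a random sample agreeing, when the true rate is 50%."""
    agree = sum(random.random() < 0.5 for _ in range(sample_size))
    return agree / sample_size

small = [poll(10) for _ in range(1000)]    # 1000 polls of 10 people each
large = [poll(1000) for _ in range(1000)]  # 1000 polls of 1000 people each

def off_by_ten(results):
    """Fraction of polls landing more than ten points from the true 50%."""
    return sum(abs(p - 0.5) > 0.1 for p in results) / len(results)

print(off_by_ten(small))  # roughly a third of the tiny polls are far off
print(off_by_ten(large))  # the big polls almost never are
```

With only ten people, a poll regularly strays far from the truth by pure chance; with a thousand, it almost never does - though, as the next section shows, a big sample is no protection if it isn't a representative one.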

But big samples of the public won't necessarily give a more accurate picture. The people conducting the poll need to do what they can to make sure they're genuinely polling a random sample of the public. An example of what can happen when they don't comes from 1936, when a magazine in America called the Literary Digest predicted that one presidential candidate would win a decisive victory over the other. Ten million postcards had been sent out to the public, and 2.5 million came back. It was predicted that the Republican candidate, Alf Landon, would win with a 20% majority of the vote: 60% had said they'd vote for him, and 40% for Roosevelt, the Democratic candidate.

Within a week, it was discovered that the magazine had got it badly wrong! Roosevelt won with a big majority.

The reason for such inaccuracy in predicting the result was that it hadn't in reality been a random sample of the public that was polled; the addresses were all taken from directories of people who owned phones and cars, and from the magazine's subscribers. In the 1930s, owning a car or a phone was a luxury. Almost all owners of such things would be prosperous, as would people who had the money to spare to subscribe to a magazine. Well-off people would be more likely to vote Republican. But in 1930s America, there was a lot of financial hardship; unemployment and poverty were very high. Less well-off people would be more likely to vote Democrat. The magazine had made the mistake of leaving them out of the poll, even though they polled millions.

There are a few ways that people doing polls can try to make sure the samples of people they use are representative of the public in general:

One is if they can try to find a completely random sample of the public, so any one kind of person has as much likelihood of being chosen as any other. That's difficult to do, since unforeseen things can bias the sample, such as if a poll was taken of whether people in the neighbourhood thought the noise from people playing music at night was a nuisance, and the sample included a large group of students who were the ones responsible for the loud music, unbeknownst to the people taking the poll. The larger the sample, and the more diverse the places are that it's taken from, the less likely things like that will be to happen and bias it in a major way.

Another way of trying to make sure a sample is representative of the population is dividing the population into all the different interest groups there are thought to be, and trying to get a random sample from each. For instance, before an election, people doing opinion polls might try to get random samples of voters from areas that traditionally vote for left-wing candidates, and also from areas where more right-wing ones tend to be voted for. If the researchers wanted to find out how people in an area as a whole thought, a sample that turned out to have far more people from a part of it dominated by one kind of voter would be biased. It can be best if the number of people in each voter group is estimated, and the sample taken from each group made bigger or smaller depending on how big the group is.

The Way Questions are Phrased and the Kinds of Questions Asked Can Bias a Poll's Results

Polls can sometimes be unreliable when emotive questions are being asked of people who haven't formed strong opinions on the topic being asked about; they're liable to give an answer they haven't really thought through, but have decided on more-or-less on the spur of the moment, and might think better of later, for all anyone knows. Therefore, what answer they give can be influenced by the wording in the question. Exactly the same question worded differently can get different results.

For instance, if a group wanted to find out the public's views on abortion, and they went out and asked people, "Do you believe in protecting the rights of the unborn child?" they might get a high percentage of people saying yes, because after all, most people want to protect little children. And who really wants to say no to a question about whether children should be protected? If another group went out and asked people, "Do you believe in the right of a woman in an awkward situation to have an unwanted pregnancy terminated?" they might get a high percentage of people saying yes to that. After all, most people believe in human rights. And if the term "awkward situation" wasn't defined, a lot of people being asked the question could think it meant something like a situation where continuing the pregnancy would get a young woman thrown out of the family house by her parents, whereas the questioner might have just meant the unwanted pregnancy itself, which, after all, would be awkward.

So it could happen that a group campaigning for access to abortion to be restricted could claim that a lot of the public were in favour of restriction, while a group trying to make access to abortion easier could claim that most of the public were in favour of that. And if the members of the public who weren't polled didn't know what questions were actually asked, they wouldn't be able to judge whether people's responses might have been influenced by the way the questions were phrased; most of them probably wouldn't even be aware that the questions could have been asked in such a way as to make one answer more likely than another.

So an opinion poll carried out by an interest group campaigning for something they want can be manipulated to show that the public are more in favour of it than they really would be if they were given time to think about the questions they're being asked, without most people realising.

Another way opinion polls can show deceptive results, either if they're carried out by interest groups who want to convince politicians or some organisation that the public are on their side, or just because the questions are clumsily worded, is if two questions are asked as one. That could mean a lot of people say yes or no to one, while the other one's easy to ignore, because one stands out more than the other.

One example is: "Do you believe the decision as to whether to have an abortion rests with a consenting patient and that it should be carried out by a trained medical practitioner?"

Well, who's going to say no and make it sound like they want to go back to the days when backstreet abortions were carried out by amateurs and it was often dangerous? And the bit of the question about consent could sound as if it's asking whether women should have the right not to be given abortions they don't want, whereas in reality it's asking whether a woman should have the right to be automatically given an abortion she asks for, rather than access to abortion being restricted by politicians making new laws, or by doctors making the final decision on whether or not to give a woman an abortion.

So again, the public can appear to be much more in favour of something than they really are, because of the way the question in the opinion poll was phrased. Interest groups can manipulate questions especially to get a result like that.

Inaccurate results can also be obtained if the questions use wording that's plain confusing because jargon or long and complex words are used. People might think the question's saying one thing, when it's really saying another.

Online polls, on the other hand, can sometimes be more accurate, because people have more time to think before answering than they would if stopped while busy and asked face-to-face in town. People can also be more honest about answers to personal questions if they're answering anonymously. For instance, in a poll on how many people have voted in an election for an unpopular minority party, such as an extreme right-wing or left-wing one, people might be willing to admit things in the privacy of their homes and the anonymity of a polling website that they wouldn't tell anyone face to face.

On the other hand, online polls can by their nature produce skewed results, simply because the people who'll bother to respond to them will likely be the people who feel most strongly about the issues, rather than them being representative of public opinion as a whole.

Similarly, opinion polls can be unreliable if they're done by asking people such as newspaper readers to write in if they want to answer the questions, rather than by contacting people and asking them directly. It's likely that only the people with strong views will bother to write in. So, for instance, a poll on whether to change something might look as if most people are in favour of the change, when in reality the majority don't want it at all, and will be unhappy if it happens; but they're so contented with the way things are that they aren't motivated to say anything, while those who feel they have something to complain about will have the most to say and most want their voices heard. Unless the contented majority think there's a high likelihood of the change happening, they may well not be motivated to write in, not foreseeing that those who want the change will write in and make up the majority of responses - so the change may well go ahead, because it'll be thought to be what most of the public wants.

Polls of random people approached and asked questions on the streets won't necessarily be a representative sample either, because the people most willing to stop and answer questions will be people with the most time on their hands, rather than people rushing to work or busy supervising children and so on.


Polls have also turned out to be inaccurate because they were carried out by people phoning members of the public up at home during the day and asking for their opinions; but the people most likely to be at home during the day are retired and unemployed people, and mothers of young children; so the opinions of large sections of the population were left out, unbeknownst to the ones carrying out the poll, and a lot of them differed from the majority view obtained by phoning people up at home.

All techniques of collecting information for opinion polls are likely to have their difficulties. Another problem with online polls is that trolls can answer in the most controversial way just for fun.

Also, if people are incentivised to answer a series of questions online by being paid to do so, many can simply answer quickly without really thinking about the questions. So, for instance, if they're given a series of statements and asked whether they agree or disagree with each one, and they agree with the first few, they can assume they'll agree with the rest and just quickly tick them to indicate that they do, when in reality they haven't really thought about them, or even read them properly. That kind of thing can sometimes be caught by including a few statements near the start that it would be impossible for any one person to agree (or disagree) with all of without contradicting themselves.
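The consistency-check idea can be sketched in code. This is only an illustration with made-up statements, not any real polling platform's method: each statement is paired with its logical opposite, and a respondent who gives the same answer to both halves of a pair is flagged as probably not reading properly.

```python
# Minimal sketch of a consistency check for careless survey respondents.
# The statement pairs and respondent answers below are made up.

# A thoughtful respondent shouldn't agree (or disagree) with both
# a statement and its opposite.
STATEMENT_PAIRS = [
    ("Taxes should rise to fund public services",
     "Taxes should be cut even if public services suffer"),
    ("The voting age should be lowered",
     "The voting age should stay the same or be raised"),
]

def is_careless(answers):
    """answers maps each statement to True (agree) or False (disagree).
    Flags the respondent if, for any pair, they gave the same answer
    to a statement and its opposite."""
    return any(answers[a] == answers[b] for a, b in STATEMENT_PAIRS)

careful = {
    "Taxes should rise to fund public services": True,
    "Taxes should be cut even if public services suffer": False,
    "The voting age should be lowered": False,
    "The voting age should stay the same or be raised": True,
}
careless = dict.fromkeys(careful, True)  # ticked "agree" for everything

print(is_careless(careful))   # False: answers are internally consistent
print(is_careless(careless))  # True: agreed with contradictory statements
```

Real survey software uses subtler checks (such as timing how fast pages are completed), but the contradiction-pair idea is the same.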

With polls that are carried out with people in person, the order the questions are put in can also make a difference. If several questions are asked, more personal questions might get more honest answers if they're nearer the end than the beginning, because by then, the person being asked might feel more friendly towards the interviewer, and so more willing to be honest with them, whereas they might be more guarded at the beginning when the person's a total stranger.


Another reason people can give a different answer to a question depending on when it's asked is that if they haven't really thought about the issue before, the first questions might prompt them to, so by the time later questions come along, they have a clearer idea of what they think than they did at the beginning.

Another issue is that a question asked at the beginning of a poll might sound as if people are being asked to say yes or no to something that doesn't sound very nice; but questions can be asked before it that make them realise there are benefits in it after all, while not hinting at any disadvantages that might put them off the idea. Interest groups that want to manipulate people into answering in a particular way can use this technique.


For example, apparently two opinion polls were carried out in America about whether drilling for oil should be done in a part of Alaska that was currently a wildlife reserve, and they had exactly opposite results: one found a majority of 17 % in favour, and the other a majority of 17 % against. The poll finding a majority in favour was carried out by an organisation that wanted the drilling to happen, and the poll finding a majority against was carried out by people who didn't.

What made the difference was that before they asked the question, the pollsters in favour of oil drilling in the wildlife reserve asked a dozen questions about what people thought of the price of oil, and whether they were happy that America was getting so much of it from the Middle East. By the time they asked the question about drilling for oil in Alaska, it had occurred to people that if the supply of oil was greater, the price might go down, and that it would be nice to have American oil instead of having to get it from countries where a lot of anti-American terrorists lived. On the other hand, the poll carried out by the organisation opposed to drilling in Alaska simply asked the one question about whether people were happy for drilling for oil to go on in what was currently a wildlife reserve. Of course a lot of people didn't like the idea of depriving wildlife of their reserve. So without trying to find out the advantages and disadvantages, or without it even occurring to them that there might be advantages, they just said no.

People can also be influenced to answer in certain ways by leading questions, where they're asked to agree or disagree with something that will likely sound like an appealing idea on the spur of the moment. For instance, a poll was done where people were asked if they agreed with the idea that the national health service needed reform more than it needed extra money, and quite a large majority said they did; but another poll asked the opposite - whether people agreed that the NHS needed extra money more than it needed reform, and a large majority said they agreed with that. In reality, it's unlikely that most of the people asked had looked into the matter.

So the policies of governments and other organisations should really be determined by people with an in-depth knowledge of the complex issues involved, rather than by such things as opinion polls carried out to find out the opinions of possible voters and supporters - except in situations where what they're thinking of doing has the potential to do real harm to members of the public, such as getting involved in wars.

Even questions asked in all innocence can result in uninformed answers being given, where people can agree to things they really know nothing about, because something sounds good on the surface, and they haven't looked into it to find out whether it really is, but perhaps just assume it is on the spur of the moment. Some people asking questions for opinion polls might assume that people have more knowledge than a lot of them really have. For instance, if people were asked, "Do you think homeopaths should work alongside doctors in their practices?" a large majority might say yes. They might not know a thing about homeopathy, but they might think it sounds like good medical stuff, and think more good medical practice alongside what there already is can only be a good thing. If they found out there's no evidence that homeopathy works, they might change their opinion immediately.

Also, with some things, a lot of people who don't really know might say yes or no, depending on what sounds best in the moment they're asked, but if they were told there was an option in the poll to say they weren't sure, they'd pick that, and the percentage who said yes and no could end up being quite different.

So the most reliable opinion polls are likely to be carried out by reputable experienced polling organisations who know the pitfalls of doing polls, and can take steps to avoid them, and who aren't trying to push some agenda.

Things to Consider When Deciding Whether an Opinion Poll Can Be Trusted


Some opinion polls are much more reliable and carefully-done than others. If the results of an opinion poll are important to someone or to the government or some other organisation, there are a few things they should investigate when deciding how much faith to put in the results:

Is the organisation that carried out the poll well-known for the accuracy of its results, and impartial, so it's unlikely to be trying to influence public policy or anything else in a direction it favours?

A related question is who asked for the poll to be carried out. If it was a respected media organisation or independent researchers, there's more reason to trust it than if it was a company, political party or pressure group whose findings support something they're doing or want to do - although even if it's like that, it doesn't necessarily mean it's not accurate.

If a polling organisation's reluctant to answer questions about things such as who the poll was done for and why, and what steps they took to make sure the questions were asked in an impartial way and would get informed answers, then they're likely not a reputable one. Reputable ones will have no qualms about providing that information.

It will also matter whether they've tried to use a genuinely representative sample of the public, if they're claiming the poll's of the public as a whole rather than of particular groups within it. If they are, it'll be important to know whether they took care to get a random sample, or whether they could have made mistakes or deliberately manipulated things. For instance, a survey on whether gun control laws should be more or less restrictive might contain a higher percentage of gun owners than there are among the general public, because the poll was carried out by a magazine about guns which, as well as asking random members of the public, also asked its own subscribers.

It's important that the sample's quite big as well, although answers gathered in a scientific way from a smaller sample might be more representative of the views of the public in general than a poll of a larger number of people which had shortcomings, such as a large percentage of respondents being people who feel strongly enough about one side of an issue to write in to a paper about it.
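The point that a large self-selected sample can be worse than a smaller representative one can be illustrated with some made-up numbers. Suppose 40 % of a population supports a change, but opponents feel more strongly and are three times as likely to write in; the write-in poll's result is then badly skewed, however many responses it collects:

```python
# Illustrative made-up numbers: how self-selection skews a write-in poll.
true_support = 0.40          # real share of the population in favour
respond_if_support = 0.10    # supporters rarely bother to write in
respond_if_oppose = 0.30     # opponents feel strongly and write in more

# Share of the write-in responses that are "in favour":
responses_for = true_support * respond_if_support
responses_against = (1 - true_support) * respond_if_oppose
writein_estimate = responses_for / (responses_for + responses_against)

print(f"True support:             {true_support:.0%}")
print(f"Write-in poll's estimate: {writein_estimate:.1%}")
# The write-in poll suggests roughly 18% support - less than half the
# true figure. Collecting more responses doesn't fix it, because the
# bias comes from who responds, not from how many respond.
```

This is why pollsters put so much effort into random sampling and weighting: errors from a biased selection process don't shrink as the sample grows.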

The dates when the poll was carried out, which might have been a while before it was published, should be made known, since it's possible that the results were an almost perfect reflection of the public's opinion then, but since then, opinion has shifted quite a bit, for instance if a poll on people's eating habits was carried out before a major food scare but published afterwards.

If there are several polls on the same topic around, it's best to compare them all. If they all have roughly the same results, there can naturally be more confidence that they're accurate than if all the results differ significantly. If the result of one differs significantly from the results of the others, mistakes might have been made while carrying it out.

Little differences can't be considered significant. It's recognised that there will always be a possibility of a little bit of error, perhaps 3 % in a sample of 1000 people. That means that if a poll gives an election candidate a 2 % lead over another one week, and a 5 % lead over them the next, that shouldn't necessarily be considered significant, since allowances have to be made for the fact that each poll might have been inaccurate by a few percentage points.
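The "roughly 3 % in a sample of 1000" figure comes from the standard margin-of-error formula for an estimated proportion at 95 % confidence, which is worth sketching:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from
    a simple random sample of size n. p=0.5 gives the worst (largest)
    case, which is what pollsters usually quote."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n = 1000: about {margin_of_error(1000):.1%}")  # about 3.1%
print(f"n = 100:  about {margin_of_error(100):.1%}")   # about 9.8%
```

So a 2 % or 3 % swing between two polls of 1000 people each is well within the noise, since each poll could be off by about 3 points in either direction. Note this is only the textbook formula under a simple-random-sampling assumption; real polling organisations also adjust for weighting and survey design, which usually makes the true uncertainty a bit larger.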

Basically, it's best not to be too quick to take anything on trust.

Notes and Links

If after browsing this article you'd like more detail on similar topics, try looking at the related articles on this website.

If there's anything about this article you'd like to comment on, Contact the author.

Follow this link if you'd like to know the main sources used in creating this article.

Before putting any ideas that you might pick up from this article into practice, please read the disclaimer at the bottom of the page.

Since this article's almost certainly too long to read all in one go, if you like the parts of it you do browse, feel free to add it to your favourites and read it bit by bit over the coming days or weeks as you choose, since it's really designed to be taken in as a step-by-step process anyway rather than a one-off. It'll also make it handy to read bits of it again and again, since it's normal for people to forget most of what they read the first time.



The End


Note that if you choose to try out some or all of the recovery techniques described in this article, they may take practice before they begin to work.

Contact Information

If you'd like to email the author of this article to make comments on it, good or bad: Email the author.

If you email us, please use the subject line that's already in the email, since there is a spam filter that will otherwise treat an email as spam and delete it. Sorry for the inconvenience; it was put there as an easy way of weeding out and getting rid of all the spam sent to this address.

You might well not get a response to your email, but be assured that most feedback is very much appreciated.

Feel free to add this article to your favourites or save it to your computer. If you know of anyone you think might benefit from reading any of the self-help articles in this series, whether a friend, family member, work colleague, help group, patient or whoever, please recommend the articles to them or share the file with them; or, especially if they don't have access to the Internet or a computer, feel free to print any of them out for them, or particular sections. You're welcome to distribute as many copies as you like, provided it's for non-commercial purposes.


This includes links to articles on depression, phobias and other anxiety problems, marriage difficulties, addiction, anorexia, looking after someone with dementia, coping with unemployment, school and workplace bullying, and several other things.


Related Articles

Literature and Articles Used in Creating This Article


Disclaimer:
The articles are written in such a way as to convey the impression that they are not written by an expert, so as to make it clear that the advice should not be followed without question.

The author has a qualification endorsed by the Institute of Psychiatry and has led a group for people recovering from anxiety disorders and done other such things; yet she is not an expert on people's problems, and has simply taken information from books and articles that do come from people more expert in the field.

There is no guarantee that the solutions the people in the articles hope will help them will work for everybody, and you should consider yourself the best judge of whether to follow their example in trying them out.


Back to the contents at the beginning.

If after reading the article, you fancy a bit of light relief, visit the pages in our jokes section. Here's a short one as a sample: Amusing Signs.
(Note: At the bottom of the jokes pages there are links to material with Christian content. If you feel this will offend you, give it a miss.)




To the People's Concerns Page which features audio interviews on various life problems. There are also links with the interviews to places where you can find support and information about related issues.