What is Sound Science?...
The Basics...
A good starting point is the Random House Dictionary, which defines science as systematic
knowledge of the physical or material world gained through observation and
experimentation.
Science is systematic in the sense that it is ordered,
organized and methodical. Science, then, is a process, not just a body of knowledge.
Science is based upon observation of the material world. Events must be observable
and measurable. Those events which cannot be observed or measured fall outside the realm
of science.
Finally, sound science is usually based upon experimentation.
Experiments are designed to test whether a particular hypothesis is true or false. Significantly, no single experiment can be used to confirm a theory or hypothesis. Experiments must be
repeatable, and actually repeated, before a new finding can become part of the body of
accepted scientific knowledge.
The ability to disprove a hypothesis is what sets
science apart from pseudo-sciences like astrology. A well-conducted experiment will result
in a true or false condition regarding the hypothesis being tested. No such tests exist in
the pseudo-sciences.
Example: The results of an astrological reading can never be proved or disproved. Astrologers will point out when their predictions are right, but
not when they are wrong. More importantly, they do not (and realistically, cannot) use the
knowledge gained from previous readings to make improvements or advance the prediction
capabilities of their profession. In fact, scientific studies show astrological
predictions to be "correct" no more than would be expected from random guessing.
Many people believe that there is something intrinsically
"unnatural" about science. Nothing could be further from the truth! To quote Lewis Thomas,"The central task of science is to arrive, stage by
stage, at a clearer comprehension of nature." Physics, chemistry, biology, et al
are disciplines that examine and explain nature from specific viewpoints.
Scientific Reasoning
There are two types of thinking usually associated with science:
- inductive reasoning,
where a specific fact or set of facts is used to infer a general theory or hypothesis.
(The swans in the park are white, therefore all swans are white.)
- deductive reasoning,
which works the other way, starting with a general assertion and using experimentation and
observation to deduce a specific fact or set of facts. (All swans are white. This bird is a swan, so it must be white.)
Inductive reasoning was most popular in the Nineteenth
Century (A System of Logic, John Stuart Mill, 1843). It was assumed that science
advanced incrementally and fact by fact, with hypotheses and theories becoming more
general as specific pieces of knowledge became available. However, deductive reasoning has
replaced it as the method of first choice in the Twentieth Century. Such modern luminaries
as Peter Medawar and Karl Popper have argued that "there is no logically rigorous
procedure by which an inductive truth can be proved to be so." (Medawar, 1984)
Einstein, too, was a believer in deductive reasoning. By stating that no amount of research
can prove a theory right, but that one discovery can prove it wrong, he was arguing in
favor of deduction.
As it turns out, both forms of reasoning appear to be
essential to the ongoing process of scientific learning. It takes induction to develop
grand "what if?" ideas and to generate new hypotheses. It then takes deduction
to test these hypotheses. So while induction helps get the ball rolling, deduction is the
true workhorse of the scientific world, forming the basis for what has become known as the
Scientific Method.
Scientific Method
The Scientific Method is a standard for conducting credible science. It is intended to be
an unbiased process by which neutral and objective scientific inquiry can occur.
The cornerstones of the Method are the concepts of reliability
and validity. Results are considered reliable only if they can be replicated by
third parties. Results are deemed valid only if they meet the strict logical criteria that have been established for deductive thinking. (See the module on reasoning
for a detailed discussion of logic.) Both reliability and validity require the Scientific
Method to be an open process subject to continual appraisal, scrutiny, criticism and
revision.
Contrary to popular belief, there is no specific step-by-step procedure that can be called the Official Way. However, the Scientific Method always
encompasses certain steps:
- Identify a Specific Problem
In this case, a problem is not necessarily something negative, but rather a situation
where the result or effect is understood but the cause needs to be identified.
- Develop a Hypothesis
A hypothesis is a tentative, educated explanation of the facts. Development of a
hypothesis requires inductive reasoning, whereby a few specific facts lead to the creation
of a broad explanation.
- Test the Hypothesis
An experiment which tests the hypothesis is designed and conducted. Experimentation is a
science unto itself, as a good design must account for and control all of the variables
that might affect the results. A good experiment ensures that only the variable in
question can cause or create the hypothetical outcome.
- Draw Conclusions
Conclusions generally are of three varieties: the hypothesis is true, false or needs to be
modified to better reflect the newly discovered facts.
- Re-Test the Hypothesis
One test does not a new theory make. Findings must be re-tested and re-analyzed many times
by independent third parties to verify and confirm the results. Studies are often published, in an attempt to both report results and alert others to the need to scrutinize and validate the data. Because nothing can ever be proven with complete and total confidence, many scientists will tell you that a theory is never right -- it just hasn't been proven to be wrong. (We're back to Einstein again!)
Example:
You wish to determine why the wall lamp in your living room doesn't work. Based on prior
knowledge and observation regarding similar situations, you formulate the hypothesis that
the bulb is faulty.
You then devise an experiment to confirm the hypothesis: You
will replace the bulb that doesn't work with another one that has worked very recently,
and is assumed to still work. If the new one works, your hypothesis is confirmed (and
hopefully your problem is solved).
First, you minimize the probability that the problem is due
to anything other than a faulty bulb by doing the following:
- Checking the fuse box to ensure power to the lamp circuit.
- Testing the circuit to ensure that it provides consistent
current of the proper voltage and amperage.
- Confirming that there are no wall switches which can be used
to turn the lamp off and on.
Now for the experiment:
You put a different bulb that has been proven to work into the lamp. If you turn the lamp
on and it lights, your initial bulb must in fact be faulty. Your hypothesis has been
confirmed.
On the other hand, if the new bulb doesn't light, you still
don't know whether the problem is that the old bulb is faulty, the new bulb is also
faulty, the lamp is broken, or all three! You must go back and reformulate the experiment,
and possibly the hypothesis.
This example shows the precision with which experiments must
be developed. As importantly, it illustrates the fact that the scientific method is a
process, the results of which can easily lead to more experiments as well as new or
reformulated hypotheses. Note: Even though the experiment failed to confirm the
hypothesis, significant learning did occur.
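To make the logic of this troubleshooting experiment concrete, here is a minimal sketch in Python; the function name and outcome messages are hypothetical, invented purely for illustration:

```python
# A minimal sketch of the lamp experiment's logic, assuming power,
# circuit and wall switches have already been ruled out as causes.
# The function and outcome names are hypothetical.

def interpret_bulb_swap(lamp_lights_with_new_bulb: bool) -> str:
    if lamp_lights_with_new_bulb:
        # Only the bulb changed, so the old bulb must be the fault.
        return "Hypothesis confirmed: the old bulb is faulty."
    # A negative result is ambiguous: the old bulb, the new bulb,
    # the lamp itself, or several of these could be at fault.
    return "Inconclusive: reformulate the experiment (and perhaps the hypothesis)."

print(interpret_bulb_swap(True))
print(interpret_bulb_swap(False))
```

Note how the two branches are not symmetric: a positive result confirms the hypothesis, while a negative result only sends you back to the drawing board.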
Experimental Design
A well-designed experiment has a well-defined objective, is precise, can estimate error
and can distinguish the strength and presence of various effects. (See our section on statistics.)
Test vs. Control
One of the most important aspects of experimental design is the test versus control
situation, in which two almost identical versions of the experiment are run. The
objective is to eliminate, or hold in check, those variables which could
"confound" the ability to draw confirmatory conclusions. Theoretically, the only
difference that should exist between the two groups is that the control version does not
include the variable in question, or includes a different level or concentration of the
variable than does the test condition.
Example:
To test whether or not fertilizer can make pea plants grow more rapidly, you would start
with a number of identically sized and aged pea seeds. All would be placed into equivalent
pots at a similar depth in similar soil. All would receive the same amount of water and
exposure to light.
Half the pots would form
the test, the other half, the control. The test group would receive fertilizer, the
control would not. After a few weeks, plant growth would be measured and the mean growth
of the two groups would be compared.
Once fertilizer is established as a growth agent, the original test treatment could serve as the control in follow-up work: additional studies could compare
differing levels and types of fertilizer to determine optimal growing conditions for pea
plants.
The use of
well-matched test and control groups is thus critical to the pursuit of sound science. In the case of the peas, a comparison of merely one test versus one control
plant could seriously affect the results and therefore the conclusions of the study: What
if one of the two plants had died of a fungus or bug infestation, or been genetically
defective in some way? Results would thus not be attributable, either solely or in part,
to the presence or absence of fertilizer.
Using test and control groups of specimens
significantly increases the probability of developing statistically reliable data. Group
results, in the form of averages, can be used to ensure that the differences that occurred between the two groups are greater than any differences that occurred within them. Also,
using multiple specimens allows the experimenter to throw out a deviant specimen without
needing to abort the entire experimental procedure.
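As a sketch of how such a group comparison might be carried out, the snippet below applies a standard two-sample t-test to hypothetical pea-plant growth measurements; the numbers are invented for illustration, not taken from any actual study:

```python
# A sketch of a test-vs-control comparison using a two-sample t-test.
# The growth figures (in cm) are hypothetical.
from scipy import stats

test_group    = [12.1, 13.4, 11.8, 14.0, 12.9, 13.6]  # fertilized plants
control_group = [10.2, 11.1,  9.8, 10.7, 11.4, 10.5]  # unfertilized plants

t_stat, p_value = stats.ttest_ind(test_group, control_group)

# A small p-value means the difference *between* the groups is large
# relative to the variation *within* them -- the comparison described above.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```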
Experimental Bias
It is also important to understand that observations and analyses made by those running an experiment
might be affected by the outcomes they expect, while the actions of those participating in
a study could be affected if they pick up on these cues, or if they have their own
expectations regarding the results.
A standard technique for reducing this type of bias is the double-blind procedure. Neither the technicians nor the participants are made aware of the type of group (test or control) in which they are involved; hence both are "blind" concerning the initial situation or the expected results.
Causality
Obviously, a key reason to observe and experiment is to try to gain an understanding of both the presence and strength of cause-and-effect relationships among variables. Yet even with good
scientific methods, the results may turn out to be far from accurate:
Hypothetical Example: A
researcher examines different environments to ascertain the cause of malaria. From an
analysis of outbreaks it is concluded that malaria is caused by an airborne agent that survives only in warm, wet climates. People living in the tropics are advised to wear
masks, and a massive education program is launched to make them aware of the need to do
so.
In reality, malaria is caused by bites from certain tropical
mosquitoes that carry the disease. Because the problem was not studied more closely, huge amounts of time and money would have been expended to eradicate the disease without producing any appreciable reduction in its occurrence.
Good research will attempt to develop and discuss
statistical measures of cause and effect. Known as causal analysis, these methods include
measures of correlation, regression, variance and
covariance.
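The snippet below sketches how these measures might be computed for two hypothetical variables using NumPy; the data points are invented for illustration.

```python
# A sketch of the basic measures of association named above, computed
# on hypothetical data with NumPy.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # e.g., exposure level
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])  # e.g., observed effect

print("variance of x:   ", np.var(x, ddof=1))
print("covariance(x,y): ", np.cov(x, y)[0, 1])
print("correlation(x,y):", np.corrcoef(x, y)[0, 1])

slope, intercept = np.polyfit(x, y, 1)          # least-squares regression line
print(f"regression: y = {slope:.2f}x + {intercept:.2f}")

# Caution: these measures quantify association only; a strong correlation
# by itself does not establish causation (recall the malaria example above).
```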
Because statistics can sometimes be misinterpreted, outside
experts (peers) are brought in to provide fresh and unbiased perspectives. This brings us
to the concept of...
Peer Review
Good research is peer reviewed. A panel of disinterested experts will scrutinize and
analyze a study to a.) determine if the experimental design was sound, and b.) ensure that
the conclusions are technically correct and consistent with the findings. Beware of research that has not been peer reviewed. Also, be suspicious of
review panels "stacked" with people who would normally be sympathetic to the
views of those paying for or conducting the research.
Transparency
Good research hides nothing. Everything about the study should be "transparent",
i.e., readily available for review. This includes all of the raw data developed during the
study -- even the data points that were thrown out. Transparency ensures that the study is fully documented, giving others the opportunity to reproduce the results. Studies where information has been lost or is unavailable
should be viewed with extreme caution.
Junk Science...
The term "junk science" is used to describe
poorly designed or conducted research. In some cases, the motivation is purely benign,
with researchers starting with the most sincere intentions. Nevertheless, poor methodology
results in data and conclusions that are invalid, unreliable and unconfirmable.
In other cases, the intent of junk science is more malicious,
purposely abusing scientific methods in order to support a personally or politically
favorable agenda or ideology. This type of deception knows no political bounds and is
conducted for and by academia, special interest groups, government and industry.
Most people interested in and dedicated to providing the public
with factual information -- policy makers, journalists and teachers, to name a few -- are
not readily equipped to see through the distortions. It is thus very important to find out
why a study has been conducted and reported, who paid for it, and what they have to gain
from the results.
A Prime Target
An area of study that is particularly prone to junk science is epidemiology,
the medical science interested in the factors controlling the presence or absence of a
disease or pathogen. Epidemiology tends to be observational, rather than experimental, because ethics demands that we observe people rather than experiment on them. Studies are
conducted on very large populations and attempt to draw statistically valid associations
between certain diseases or conditions and specific risk factors.
Epidemiological research can be the stuff of front page
headlines, because it discusses risks related to life and death. Conclusions can be
emotionally charged, generating high levels of personal fear and expectations about
immediate action. Taking a calm and distanced approach to analyzing
epidemiological results is therefore crucial, since conclusions can have enormous
political and economic impact.
One of the most important aspects of epidemiology is the idea
that the dose makes the poison. This means that the effects of a substance are
determined by the quantity that is present and/or the length of time it is administered.
Example: An extremely small amount of aspirin is useless from a therapeutic
standpoint.
A moderate dose is
an effective pain killer. A very large dose can cause severe complications and even death.
From a time perspective, one aspirin taken today will do nothing, but one aspirin taken
every day may help reduce the risk of heart attacks.
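To picture how an effect can depend on dose, the sketch below uses a generic logistic dose-response curve. The thresholds, parameters and model are invented for illustration only and are not real aspirin pharmacology.

```python
# An illustrative dose-response sketch: negligible effect at tiny doses,
# benefit in a middle range, and harm at very high doses. The logistic
# model and all numbers are invented for illustration.
import math

def response(dose_mg: float, midpoint: float, steepness: float = 0.02) -> float:
    """Generic logistic curve: fraction of maximum effect at a given dose."""
    return 1.0 / (1.0 + math.exp(-steepness * (dose_mg - midpoint)))

for dose in (10, 100, 325, 650, 5000):
    benefit = response(dose, midpoint=300)   # e.g., pain relief
    harm    = response(dose, midpoint=4000)  # e.g., toxicity
    print(f"{dose:>5} mg: benefit={benefit:.2f}, harm={harm:.2f}")
```

The output shows the pattern described above: both columns stay near zero at tiny doses, benefit rises through the moderate range, and harm dominates only at very large doses.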
It's easy to be fooled and emotionally carried away when
learning of the results of medical research. To find out just how easy, you can review
some information on the dangers of a widely available chemical, dhmo.
Watch out for stories that trumpet a
particular chemical or environmental factor as either beneficial or problematic without
stating the quantity and amount of time needed to create the result in question:
- Special Interest Group A might say that "taken in large
enough doses, food could kill you." True, but virtually no one consumes enough food
to cause death from overeating.
- Or, Company B might argue that its product "kills 80%
more germs" than a similar product from Company C. This too might be true, but will
be of little value if both products reduce germs to levels that are well below accepted
standards for risk. (Please see the CIDM module called Thinking About
Risk.)
Summary...
The use and
understanding of scientific principles is a critical part of the informed decision making
process. Since it is fairly easy to be fooled by questionable science,
no evidence, report or conclusion should be taken at face value. Learn all you can prior
to making a decision.
Here is a list of questions to ask when shown the results of
a "scientific" study:
Sound Science Crib Sheet
- What institution conducted the research? Research
should be conducted by institutions, not by individuals currently or formerly associated
with them. Also, try to establish that the institutions are respected and credible, with
a history of doing sound scientific research.
- For whom was the research conducted? Much of the time,
reputable institutes are given research grants by government, environmental and industry
groups. These groups are hoping that studies will either prove or disprove a particular
perception or point of view.
For example, the
Tobacco Institute may fund a lung cancer study to be performed by Johns Hopkins
University. There is nothing wrong with this situation, as long as
the funder exercises no control over the study's design, execution, results or
conclusions.
- When did the study occur? Make sure that results are
recent. Otherwise, it's possible that the conclusions have been superseded by more recent
studies.
- What are the credentials of the people conducting the
research? Medical research should include PhDs in the specific discipline being
studied. Watch out for studies with "experts" whose
credentials seem to be in fields that are not directly related to the research in
question.
- Were results published in a respected scientific or medical
journal that routinely conducts peer reviews? Look for names like Nature, Science,
The
Journal of the American Medical Association or The Lancet.
- Is the sample size large enough to be projectable?
Studies of small samples are of dubious value.
- Was the sample selected properly? Try to make sure that
bias is reduced through the use of properly matched test and control groups. Check the
reports for sections discussing methodology and any potential problems relating to it.
- Did the study contain other methods to eliminate bias and
confounding variables? Good studies go to great lengths to minimize the potential for
error. They also go to great lengths to explain what bias or errors may still exist.
- Are results consistent with the generally accepted body of
research on the subject? Don't draw conclusions from single studies or ones that
contradict the preponderance of available evidence.
- Are there other possible reasons for the relationship being
discussed? This is a far bigger possibility than you might think! It is also another
reason to not rely solely on the results of a single study.
References On the Net...
American Association for the Advancement of Science
Creation Science Home Page, religion masquerading as science
Epidemiology: The Science of People, Foundation for American Communications
Experimental Design, from the University of Akron
The Junk Science Home Page, a funny and enlightening look at less-than-sound science
Knowledge Base Home Page, from Bill Trochim at Cornell University
National Academy Press Reading Room, a terrific on-line science resource
Scientific Journals, including the proceedings of the National Academy of Sciences
References Off the Net...
(Clicking on the link will take you to the appropriate
catalog page of Amazon.com, where you can learn more about the book and/or order it.)
The Dose Makes the Poison, M. Alice Ottoboni (Van Nostrand Reinhold, 1991).
Late Night Thoughts on Listening to Mahler's Ninth Symphony, Lewis Thomas (Penguin Books, 1980).
The Limits of Science, Peter Medawar (Oxford University Press, 1984). Note: This book is out of print.
The Logic of Scientific Discovery, Karl R. Popper (Routledge, 1980).
Science as Social Knowledge: Values and Objectivity in Scientific Inquiry, Helen Longino (Princeton University Press, 1990).
Science Matters: Achieving Scientific Literacy, Robert M. Hazen and James Trefil (Anchor, 1991).
The Strange Case of the Spotted Mice: And Other Classic Essays on Science, Stephen Jay Gould and Peter Medawar (Oxford University Press, 1996).
The Structure of Science, Ernest Nagel (Hackett, 1996).
Tainted Truth, Cynthia Crossen (Simon & Schuster, 1996).
"What Causes Cancer?", D. Trichopoulos, F.P. Li, D.J. Hunter, Scientific American (September 1996).