The Scientific Method
Welcome to the wikibook about the scientific method. This book is a wiki, and may be freely edited by all users. This book is released under the terms of the GFDL.
|Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License."
Who is this book for?
This book is for any person interested in learning about the scientific method. No background in science or mathematics is required to read and understand this text. Some of the historical chapters will discuss some philosophical topics that might be confusing to people with no familiarity with philosophy, but an attempt will be made to make all sections accessible.
What will this book cover?
This book is going to cover the scientific method, its history, and its applications. This book will not attempt to cover all of science, nor will it provide technical instruction in any particular branch of science. This book is mostly historical and philosophical, not technical.
How is this book organized?
This book will be organized into three primary sections. The first section will discuss the history of the scientific method, and the philosophical and scientific advancements that led to its modern form. The second section will talk about how to apply the method to your inquiries, including a discussion of some common terms and tools. The third and final section will take a look at some specific experiments in various fields of science, to demonstrate how the method has been used to make major breakthroughs.
What are the prerequisites?
There are no specific prerequisites to reading and understanding this book. However, because the subject matter will be focused on history (especially European history) and philosophy, readers may find some benefit to reading books on those subjects first. This is not strictly required, however.
Philosophy of Science
The history of science and scientific thought is long and varied. In these chapters, we will look at the history and the philosophy behind science.
Introduction to Science
Modern science is broken into so many divergent branches that it is almost inconceivable that they are all related. However, despite the varied subject matter, all scientific disciplines are tied together through their use of a common method, the scientific method. The scientific method is mostly a philosophical exercise that is used to refine human knowledge.
Precepts of the Method
Different disciplines may employ the general scientific method in slightly different ways, but the major precepts are the same:
- Any result should be reproducible. Any person (with the proper training and equipment) must be able to reproduce and verify any scientific result.
- Any scientific theory should enable us to make predictions of future events. The precision of these predictions is a measure of the strength of the theory.
- Falsifiability is an important notion in science and the philosophy of science. For an assertion to be falsifiable it must be logically possible to make an observation or do a physical experiment that would show the assertion to be false. It is important to note that "falsifiable" does not mean false. Some philosophers and scientists, most notably Karl Popper, have asserted that no empirical hypothesis, proposition, or theory can be considered scientific if no observation could be made which might contradict it. Note that if an assertion is falsifiable its negation can be unfalsifiable, and vice-versa. For example, "God does exist" is unfalsifiable, while its negation "God doesn't exist" is falsifiable. Any scientific theory must have criteria under which it is deemed invalid. Should predictions and verifications fail completely, the theory must be abandoned.
- Data needs to be analyzed as a whole or as a representative sample. We cannot pick and choose what data to keep and what to discard. Also, we cannot focus our attention on data that proves or disproves a particular hypothesis, we must account for all data even if it invalidates the hypothesis.
Stages of the Method
We will get into more detail in the following chapters, but the basic steps to the scientific method are as follows:
- Observe a natural phenomenon
- Make a hypothesis about the phenomenon
- Test the hypothesis
Once the hypothesis has been tested, if the results support it we can work to find more evidence, or we can look for counter-evidence. If the hypothesis proves false, we create a new hypothesis and try again.
The important thing to note here is that the scientific process is never-ending. No result is ever considered to be perfect, and at no point do we stop looking at evidence.
Example: Newton and Einstein
Isaac Newton, a brilliant physicist, developed a number of laws of motion and mechanics that we still use today. For many years the laws of Newton were considered to be absolute fact. Much later, the physicist Albert Einstein noticed that in certain situations Newton's laws were incorrect, especially in cases where the object under consideration is moving at speeds nearing the speed of light. Einstein helped to create a new theory, the theory of relativity, that corrected those errors. Even though Einstein was a brilliant scientist, modern physicists are developing new theories because there are some small errors in Einstein's theories as well. Each new generation of physicists helps to reduce the errors of the previous generations.
The Complete Method
The complete scientific method, as it is generally known, is:
- Define the question
- Gather data and observations
- Form hypothesis
- Perform experiment and collect data
- Interpret data and draw conclusions
Notice that the first step is to define the question. In other words, we cannot look for an answer if we do not first know what the question is. Once we have the question, we need to observe the situation and gather appropriate data. We need to gather all data, not just selectively acquire data to support a particular hypothesis, or to make analysis simpler.
Once we have our data, we can analyze it to determine a hypothesis. In many cases a hypothesis is a mathematical relationship between the data points. However, it is not necessary to use mathematics at any point with the scientific method. Once we have our hypothesis, we need to test it. Testing is a complicated process, and will be the focus of the second section of this book. We collect data from our tests, and attempt to fit that data to our hypothesis. At this point, we need to ask whether the hypothesis is right or wrong. Or, if it is neither completely wrong nor completely right, we need to ask whether it is better than the previous hypothesis. If the hypothesis is not quite right, we can modify it and perform the tests again.
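As a minimal sketch of what "fitting data to a hypothesis" can mean when the hypothesis is a mathematical relationship, here is an ordinary least-squares line fit; the data points and the assumed linear model are invented purely for illustration:

```python
# Hypothesis (assumed for this example): y depends linearly on x, y = a*x + b.
# We fit the line by ordinary least squares and inspect how well the data fit.

def least_squares_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    a = sxy / sxx               # slope
    b = mean_y - a * mean_x     # intercept
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]   # independent variable (invented)
ys = [1.1, 2.9, 5.2, 7.1, 8.8]   # measured data (invented)

a, b = least_squares_line(xs, ys)
residuals = [y - (a * x + b) for x, y in zip(xs, ys)]
print(f"fitted slope a = {a:.2f}, intercept b = {b:.2f}")
```

If the residuals (the gaps between data and model) are large compared to our measurement error, the hypothesized relationship is a poor fit and should be revised.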
Once we have completed tests and verified our hypothesis, we need to draw conclusions from that. What does our hypothesis mean, in the bigger picture? What kinds of relationships between the data can we find? What further problems does this hypothesis cause? What would it take to prove this hypothesis wrong?
Components of the Method
The laws of nature as we understand them are the basis for all empirical sciences. They are the result of postulates (specific laws) that have passed experimental verification, resulting in principles that are widely accepted and can be re-verified (using observation or experimentation).
World View (Axioms, Postulates)
The word "axiom" comes from the Greek word αξιωμα (axioma), which means that which is deemed worthy or fit or that which is considered self-evident.
To "postulate" is to assume a theory valid due to be based on an a given set of axioms, resulting on the creation of a new axiom, this is so due to be self evident, "axiom," "postulate," and "assumption" are used interchangeably.
Generally speaking, axioms are laws that are generally considered true but largely accepted on faith: they cannot be derived by principles of deduction, nor demonstrated by formal proof, simply because they are starting assumptions. Other examples include personal beliefs, political views, and cultural values. An axiom is the basic precondition underlying a theory.
One should also be aware that some fields of science predate the scientific method. Alchemy, for instance, is now part of chemistry and physics, and mathematics was practiced long before the method was formalized. Particular attention is needed because in some fields the definitions or nomenclature may be outdated, or kept for historical reasons, owing to their use since before the scientific method was defined. Note also that mathematics uses not only the scientific method but also logical deduction, which results in theorems.
Take for instance the use of the word "axiom" in mathematics. This particular field has gone through several changes, especially in the 19th century, but for historical reasons an "axiom" in mathematics has a particular meaning.
Euclid's geometry is based on a system of axioms that look self-evident. In physics, Euclid's geometry was therefore the natural (and only) choice: the complete theory could be derived from the axioms, so the whole geometry was considered true and self-evident. This changed in the early 19th century, when Gauss, János Bolyai, and Lobachevsky, each independently, took a different approach. Beginning to suspect that it was impossible to prove the Parallel Postulate, they set out to develop a self-consistent geometry in which that postulate was false. In this they were successful, thus creating the first non-Euclidean geometry. By 1854, Bernhard Riemann, a student of Gauss, had applied methods of calculus in a ground-breaking study of the intrinsic (self-contained) geometry of all smooth surfaces, and thereby found a different non-Euclidean geometry.
It remained to be proved mathematically that non-Euclidean geometry was just as self-consistent as Euclidean geometry, and this was first accomplished by Beltrami in 1868. With this, non-Euclidean geometry (both Lobachevsky's and Riemann's) was established on an equal mathematical footing with Euclidean geometry. But this raised the question: which geometry is true? And, more fundamentally, does that question even make sense? All three geometries are based on different systems of axioms, and all are consistent.
Physics has helped answer these questions. While Euclid's geometry is used in Newton's mechanics (ordinary distances), Riemann's geometry became fundamental for Einstein's theory of relativity, and Lobachevsky's geometry was later used in quantum mechanics. So the question "Which one of these geometries is correct for our physical space?" was answered in a surprising way: each geometry represents physical space, but at a different scale.
All of this influences how we think about axioms. From the end of the 19th century and the beginning of the 20th, mathematics no longer appealed to the "self-evidence" of its axioms; it claimed the freedom to choose axioms freely. What mathematics says is only that if the axioms are true, then the theory follows from them. Correspondence to the real world must be established separately; the axioms themselves provide no such guarantee.
- Example of conflict of mathematics/theoretical physics and the scientific method
The best example lies with quantum mechanics. Many facets of quantum mechanics are merely mathematical models explaining the behavior and interaction between subatomic particles. One of the major stumbling blocks in quantum mechanics lies within one of its fundamental theories: quantum superposition (all particles exist in all states at the same time). There are many interpretations of this, the standard being the Copenhagen interpretation. This states, basically, that the act of measuring (or observing) the state of a particle collapses the superposition, altering its state to the value defined by the measurement. This shows that the superposition effect, while being one of the most widely accepted and fundamental principles of quantum mechanics, can never actually be directly observed, even if experiments can be devised to corroborate the theory.
Theory
A theory consists of a set of statements or principles devised to provide an explanation for a group of facts or phenomena. Compare, for instance, a mathematical theorem: in mathematics we have to be careful in how we apply this definition, since a theorem may be taken as an axiom in itself, theorems may be accepted as valid until proved false (because of the infinite nature of numbers, it is common to restrict a proposition to a limited set to provide validation), and other theorems may depend on, or be built upon, each other's assumed validity.
Hypothesis (Model)
The hypothesis, or model, is a way for us to make sense of the data. We try to fit the data into some kind of model, and that model is our hypothesis.
Prediction
A key component of the scientific method is the ability to predict. We can make predictions about something, and then test those predictions to see if they are correct. If the predictions hold, it is likely that the hypothesis is correct.
Theorems
Theorems are not part of the scientific method, but may be a cause of some confusion. Most theorems have two components, called the hypotheses and the conclusions. The proof of the theorem is a logical argument demonstrating that the conclusions are a necessary consequence of the hypotheses, in the sense that if the hypotheses are true then the conclusions must also be true, without any further assumptions. The concept of a theorem is therefore fundamentally deductive, in contrast to the notion of a scientific theory, which is empirical.
Verification
Verification is the fundamental step of turning a hypothetical relation into a principle, by validating it with real-world data. Any verified hypothesis becomes a principle.
Observation
An act or instance of viewing or noting a fact or occurrence for some scientific or other special purpose.
Experimentation
Experiments are key to the scientific method. Without experiments, any conclusions are just conjecture. We need to test our observations (to ensure the observations are unbiased and reproducible), we need to test our hypothesis, and then we need to test the predictions we make with our hypothesis.
Setting up a proper experiment is important, and we will discuss it at length in section 2.
History of Scientific Thought
Some of the earliest science was a combination of practical wisdom, basic arithmetic, observations, and mysticism. Many early civilizations used supernaturalism to explain natural phenomena, which helps to explain why these early civilizations had such a broad and varied mythos. Every new phenomenon required the creation of a new god, goddess, spirit, or demon for explanation.
The sun was not a burning sphere in space, but the god Apollo on a burning, flying chariot. Desert mirages were not an optical and psychological occurrence, but tricks played by an evil jinn. Natural disasters represented the wrath of an angered deity.
Greece and Rome
The rise of Greece and Rome created an environment where philosophical minds could consider the natural world more readily and easily than was possible in previous times. The philosophical tradition began with Thales of Miletus, who posited that the world was made of water. While it is obvious to us now that the world is not made of water, the idea gives insight into the conceptual framework of the earliest scientists. It was clear that the world is not immutable, that is, that it changes over time. Water was also known to take multiple forms: ice (solid) and vapor (gas).
After Thales, there was a long line of scientific thinkers: Anaximenes (who held that the world was made of air, not water), Pythagoras, Heraclitus, Democritus and Euclid. Aristotle, one of the last great Greek philosophers, was said to know all of science by the time he died.
The Dark Ages
After the fall of the Roman empire, Europe descended into a period known as the Dark Ages. Religious oppression and oppression by feudal lords led to a sharp decline in scientific thought.
The Renaissance
The renaissance, or the "rebirth", was a period when great thinkers challenged the power of the church, and opened up the way for science to grow once again. Much of science at the beginning of the renaissance was centered around observational science, such as astronomy (with Galileo, Copernicus, and Kepler). However, mathematics was going through a rebirth of its own, and many great thinkers were beginning to formalize mathematical tools and apply them to physical problems. Rene Descartes, a preeminent philosopher, scientist and mathematician, created the first version of the scientific method, and employed it to study various subjects in science and philosophy.
The best known thinker of this period, though certainly not the only great one, was Isaac Newton, who used the new mathematics of calculus to unify the study of physics and astronomy.
The Industrial Revolution
The industrial revolution was a time when machines were used to a much greater extent than they had ever been. Many processes were automated through the use of mechanical machines, and many great inventions, including the steam engine, were created. It was during this time that many great scientists began to study electricity and magnetism, effects that were poorly understood and not explained by the theories of Newton.
The Modern World
While the world of the industrial revolution was about mechanics and machines, the modern world has been dominated by the study and application of electricity. Starting with the equations of Maxwell, many great thinkers such as Heaviside, Edison, Tesla, and Hertz began to invent new technologies that we still use today: telephones, electrical power distribution, and electrical communications. With the advent of the vacuum tube and the silicon transistor, the computer revolution has pushed science and technology further and faster than ever before.
Empiricism and Inductivism
Take the example of the Megalodon in the field of paleontology. From only a handful of teeth and vertebrae, paleontologists "tell" us that the Megalodon was, basically, a 20 m long Great White with similar structure and behavioral patterns. The teeth of a Megalodon are similar in shape to those of a Great White, so it has been assumed that its morphology and behavior are similar (although because the teeth are larger, its prey would be larger). While this may be accurate, it may also be completely wrong (there is strong support for the theory that the Megalodon and great whites are not related, the latter being a descendant of the broad-tooth Mako shark). Since sharks have fully cartilaginous skeletons, the only real clue that we have about its size and behavior is that many bones of large whales have been found with tooth marks almost identical to those of the Megalodon. However, there is no evidence, other than its similarity to the great white's, that the teeth and vertebrae even came from a shark and not some other animal which happened to have similar dentition and spinal structure.
Rene Descartes' Method
Rene Descartes (March 31, 1596 – February 11, 1650) was a highly influential mathematician, scientist and philosopher. Descartes is widely considered to be the 'Father of Modern Philosophy'. His most influential work is Meditations on First Philosophy ('First Philosophy' being metaphysics). Descartes advocates a method of radical doubt, now labeled Cartesian doubt, whereby the reader, or meditator, begins to doubt all external objects of sense perception and focuses only on what the mind 'clearly and distinctly' perceives to be true. Descartes discovers the now well known proposition 'I think, therefore I am' (known as cogito ergo sum). Descartes' unique idea was to start from axiomatic principles that could not be doubted, and proceed to discover truths and certainty from these axioms. He argued that the mind and rational thought, not experience, are the source of all knowledge. This is why Descartes is now seen as a 'Rationalist'. His method is opposed to the more Newtonian or Aristotelian principle of deriving the axioms from the objects of sense experience.
Hans Christian Oersted
Criticisms of the Scientific Method
The scientific method is not without its critics. Science has limitations, but most if not all are human in nature and not a fault of the process itself; if that were the case, the methodology would simply be improved. The biggest fragility of the scientific process is its reliance on consensual acceptance of results. This in itself does not invalidate the process, but it can delay or hinder scientific advance, especially in areas that require complex, costly, or highly controlled verification of data, making replication difficult. Another issue to consider is how scientific knowledge is prone to being adulterated, or even subverted, by the way it is disseminated, validated, and funded.
The concept of the scientific delusion is often proposed as a fault of science. The premise is that science includes a belief that it has a grasp (and the only possible grasp) of reality. While science can validate itself, and so only consider the validity of its own conclusions, the idea that science is the final word in finding universal answers comes not from science itself (since if it had the ultimate word and a conclusive stage, the continuation of the process would be broken), but from those who impart on science some of the characteristics of faith (belief without verifiability).
Since science is done by humans (and humans are intrinsically fallible), it, like any other human process, will incur errors of judgment. This is not a fault of the scientific process but of those who misapply it, or impart on it characteristics that do not apply. Science is not a belief system, nor even a perfect system; it is nevertheless the best process we have constructed to explore "reality".
This criticism addresses the notion that science has partitioned-off areas where it does not function or is not applied. Since science is a process, there may be areas where scientific tests are harder to perform, especially in fields at the border of human knowledge, but that failing is a human limitation and not one of the process itself. Society also has an influence in determining (and even dictating) which areas get to be scientifically investigated and which should be avoided; in this, science itself has no blame. It is humans who bring artificial limitations into it, or who are simply unable to properly devise or accept scientific experiments to validate observations.
Science is driven by experimentation, where hypotheses must be tested and verified. In these chapters, we will look at how to perform a proper scientific experiment.
Determining What to Measure
Independent and Dependent Variables
Relationships between variables
In any experiment, the object is to gather information about some event, in order to increase one's knowledge about it. In order to design an experiment, it is necessary to know or make an educated guess about cause and effect relationships between what you change in the experiment and what you are measuring. In order to do this, scientists use established theories to come up with a hypothesis before experimenting.
A hypothesis is a prediction of how changing one variable affects another, a variable being any aspect, or collection of aspects, open to measurable change. The variable(s) that you alter intentionally as part of the experiment are called independent variables, while the variables that you measure, and that change in response rather than by intended direct action, are called dependent variables.
A hypothesis says something to the effect of:
Changing independent variable X should do something to dependent variable Y.
For example, suppose you wanted to measure the effects of temperature on the solubility of table sugar (sucrose). Knowing that dissolving sugar doesn't release or absorb much heat, it may seem intuitive to guess that the solubility does not depend on the temperature. Therefore our hypothesis may be:
Increasing or decreasing the temperature of a solution of water does not affect the solubility of sugar.
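The sucrose hypothesis above can be sketched in code: temperature is the independent variable we set, solubility the dependent variable we measure, and the hypothesis survives only if the measured values vary no more than our measurement uncertainty. The measurements and the uncertainty below are invented for illustration (real sucrose solubility does in fact rise with temperature):

```python
# Hypothetical measurements of sucrose solubility (g per 100 mL of water)
# at several temperatures. All numbers are invented for this example.
measurements = {
    10: 190.0,   # temperature (°C): measured solubility (g/100 mL)
    20: 204.0,
    40: 238.0,
    60: 287.0,
}
uncertainty = 5.0  # assumed measurement uncertainty, g/100 mL

values = list(measurements.values())
spread = max(values) - min(values)

# The hypothesis "temperature does not affect solubility" predicts that the
# spread across temperatures is no larger than the measurement uncertainty.
hypothesis_survives = spread <= uncertainty
print(hypothesis_survives)  # here the data falsify the hypothesis
```

With this (invented) data the hypothesis is falsified, so we would form a new hypothesis, for example that solubility increases with temperature, and test again.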
Isolation of Effects
When determining what independent variables to change in an experiment, it is very important that you isolate the effects of each independent variable. You do not want to change more than one variable at once, for if you do it becomes more difficult to analyze the effects of each change on the dependent variable.
This is why experiments have to be designed very carefully. For example, performing the above tests on tap water may have different results from performing them on spring water, due to differences in salt content. Also, performing them on different days may cause variation due to pressure differences, or performing them with different brands of sugar may yield different results if different companies use different additives.
It is valid to test the effects of each of these things, if one desires, but if one does not have an infinite amount of money to experiment with all of the things that could go wrong (to see what happens if they do), a better alternative is to design the experiment to avoid potential pitfalls such as these.
Corollary to Isolation of Effects
A corollary to this warning is that when designing the experiment, you should choose a set of conditions that maximizes your power to analyze the effects of changes in variables. For example, if you wanted to measure the effects of temperature and of water volume, you should start with a basis (say, 20 °C and 4 fluid ounces of water) which is easy to replicate, and then, keeping one of the variables constant, change the other. Then do the opposite. You may end up with an experimental scheme like this one:
Test number  Volume Water (fl. oz.)  Temperature (°C)
1            4                       20
2            2                       20
3            8                       20
4            4                       5
5            4                       50
Once the data is gathered, you would analyze tests number 1, 4, and 5 to get an idea of the effect of temperature, and tests number 1, 2, and 3 to get an idea of volume effects. You would not analyze all 5 data points at once.
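The grouping just described can be expressed directly in code. This sketch uses the test numbers and values from the scheme above, selecting the subset of tests in which only one variable changes:

```python
# Each test from the scheme above: (test number, volume in fl. oz., temperature in °C).
tests = [
    (1, 4, 20),
    (2, 2, 20),
    (3, 8, 20),
    (4, 4, 5),
    (5, 4, 50),
]

BASE_VOLUME, BASE_TEMP = 4, 20  # the basis condition (test 1)

# To isolate the effect of temperature, keep volume fixed at the basis value.
temperature_series = [n for n, vol, temp in tests if vol == BASE_VOLUME]
# To isolate the effect of volume, keep temperature fixed at the basis value.
volume_series = [n for n, vol, temp in tests if temp == BASE_TEMP]

print(temperature_series)  # tests 1, 4, 5
print(volume_series)       # tests 1, 2, 3
```

Note that test 1, the basis condition, appears in both series; that is what makes it a useful common point of comparison.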
Control of Experimental Conditions
Controlled Variables: These variables are held constant throughout the experiment to prevent other factors from impacting the experimental data.
Such could potentially include:
- Quantity of solution or compound
- Equipment size, such as 100 mL beakers
- Same source of the material
- Same stirring time
Control of Measurement Errors
Perhaps the most important step in controlling experimental error is to design your experiments to produce as little systematic error as possible. In order to do this, it is important to know something about what you are measuring. As an example, suppose that you desired to measure the weight of the oxygen produced in the decomposition of hydrogen peroxide:
2 H2O2 → 2 H2O + O2
You would need to ask yourself: How would you separate the oxygen from the water and unreacted hydrogen peroxide? How will you prevent the oxygen from leaking? Do you want to measure the weight directly, or by calculating it from other values (such as pressure)?
Get into the habit of asking yourself, "what could go wrong with this experiment?" before you start the experiment. Then if you can, design it so that the things that could go wrong are as minor as possible, and then when performing it be as careful as possible to avoid what is left.
Calibration and Accuracy
All measurement instruments need to be calibrated in some way in order to ensure that the values that are read are near the true value of the property being measured. All rulers are compared to a standard when they are made, so that when an inch is marked on the ruler, it is truly an inch.
Many instruments lose their calibration, and hence their accuracy, over time. Therefore it is necessary to recalibrate them. Instruments are generally re-calibrated by measurement of a standard or several, which have well-defined properties. For example, a scale might be calibrated by weighing a 5g weight and adjusting a dial until the reading is 5.000 g. Follow the instrument manual closely for calibration procedures, so that any bias in measurement due to measurement inaccuracy can be mitigated.
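As an illustration of the idea, and not a procedure from any particular instrument manual, here is a hypothetical two-point linear calibration: readings taken on two standards of known mass are used to derive a correction applied to later readings. All numbers are invented:

```python
# Hypothetical two-point linear calibration of a scale.
# We weigh two standards of known mass and record what the scale reports.
known = (5.000, 50.000)   # true masses of the standards, in grams
read = (5.110, 50.290)    # readings from the (miscalibrated) scale, in grams

# Model the instrument as reading = gain * true + offset,
# then invert the model to correct future readings.
gain = (read[1] - read[0]) / (known[1] - known[0])
offset = read[0] - gain * known[0]

def corrected(reading):
    """Convert a raw instrument reading back to a calibrated value."""
    return (reading - offset) / gain

print(round(corrected(5.110), 3))  # recovers 5.0 for the first standard
```

A single-point calibration (adjusting a dial until a 5 g weight reads 5.000 g) corrects only the offset; using two or more standards also corrects the gain.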
Repeatability and Precision
Measurement instruments never will give you an exact answer. For example, if you are measuring the volume of a liquid in a graduated cylinder, it is necessary for you to estimate which of the hash marks on the instrument is the closest to the true volume (or to interpolate between them based on your eyesight). Most computerized measurement devices, such as many modern scales, take multiple measurements and average them to obtain accurate results, but these also have sensitivity limitations.
Manufacturers often report the precision of their instruments. The repeatability of an instrument is a measure of the precision, which is the similarity of successive measurements of an identical quantity to each other. Reproducibility is essentially the ability to, with all other conditions the same (or as close to the same as possible), achieve the same measurement value in an experiment. For example, you may measure the weight of an object with the same scale multiple times. If the reading is significantly different every time, it is possible that the instrument needs to be recalibrated or re-stabilized (for example, by cleaning out dust from the receiver, or making sure the setup is right). If it has been properly calibrated and set up and measurements still vary more than the precision claimed by the manufacturer, the instrument may be broken.
Another way to control errors in measurement from experiment to experiment is to constantly assess the reproducibility of the measurements. Reproducibility is measured essentially by performing the same measurement multiple times while varying one part of the experiment. For example, if you are measuring the pH of a buffer as part of a process, you may assess the reproducibility of the buffer preparation by preparing the same sample several times, independently of each other, and measuring the pH of each sample. If the variance in the pH measurements is larger than the measurement accuracy (or repeatability) of the instrument, then it is likely that the preparation of the buffer is to blame for this error. Such tests can be performed on many parts of a larger process in order to pinpoint and remedy the largest control difficulties.
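The comparison between preparation variability and instrument repeatability described above can be sketched as follows; the pH readings and the claimed repeatability figure are invented for illustration:

```python
# Hypothetical pH readings from several independently prepared batches of the
# same buffer, each measured once on the same (hypothetical) pH meter.
import statistics

batch_ph = [7.02, 7.11, 6.95, 7.08, 6.99]  # invented readings
instrument_repeatability = 0.01            # claimed by the hypothetical manufacturer, pH units

prep_stdev = statistics.stdev(batch_ph)    # sample standard deviation across batches

# If the batch-to-batch scatter exceeds what the instrument alone can explain,
# the preparation step is the likely source of the variation.
preparation_at_fault = prep_stdev > instrument_repeatability
print(round(prep_stdev, 3), preparation_at_fault)
```

Here the scatter across preparations is several times the instrument's claimed repeatability, so the buffer preparation, not the meter, would be the first place to look for the control difficulty.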
Another possible reproducibility test would be measuring the same sample with different pH meters. It is very important to test the compatibility of different measurement instruments before claiming that the results are comparable, and such reproducibility measurements are critical for determining the relationship between two instruments.
Tests for Experimental Validity
Now that we've discussed the scientific method and its application, we will take a look at several historical examples from all branches of science.
Experiments in Biology
Leeuwenhoek and the Discovery of Cells
Prior to Leeuwenhoek, there had been little or no notion of the idea that microscopic living things could exist. Part of this was due to the fact that most microscopes of the time were not strong enough to see them, though such organisms did exist. However, Leeuwenhoek was skillful enough to make a sufficiently powerful microscope, and observed protozoans in pond water.
Not only did he observe them, but he also was able to deduce that they were living, because they were motile, and only living things have the ability to move by their own power. Leeuwenhoek's deductions, and his creative use of technology to explore new avenues, led to the recognition of cells as the building blocks of life, which would have profound influence on biological study and knowledge.
Pasteur and the Death of Spontaneous Generation
Before Louis Pasteur and other scientists proved them wrong, the mainstream belief in science was that living things arose spontaneously from non-living matter. This was particularly true of the "cells" which Leeuwenhoek discovered: because the cells were comparatively simple, it seemed logical that they arose naturally from their environments. An earlier scientist, Lazzaro Spallanzani, had shown that many bacteria (though not all) are killed by boiling and, if sealed from the air, will not regrow in the container. However, contemporaries refused to abandon spontaneous generation, arguing that since the organisms spontaneously arose from the air, sealing off the container invalidated Spallanzani's experiment.
Pasteur put this question to rest by designing an experiment. He built an apparatus, called a swan-neck flask, in which a sterile liquid was exposed to the air but no bacteria could reach it. Bacteria and the dust they lived on were trapped in the swan neck, while sterile air reached the liquid. Since the liquid did not become contaminated with bacteria, Pasteur proved that they did not spontaneously arise from the air. This led to the death of the theory of spontaneous generation and to further studies about how bacteria do reproduce. These studies are important because they have led to an understanding of how many antibiotics, including penicillin, work.
Robert Koch and his Four Postulates
Before scientists like Pasteur and Koch arrived on the scientific scene to prove them wrong, many scientists held false beliefs about the nature of disease. One of these beliefs was that infectious bacteria spontaneously generated in a human body as a result of disease. While Pasteur proved that spontaneous generation did not occur in the air, Koch devised a scheme by which the cause of diseases in people (and animals) could be experimentally tested.
His scheme revolved around the notion that if one organism can be shown to be common to all cases of a disease, and if no outside factor can be shown to cause the disease, then the organism must indeed cause the disease. To make this experimentally rigorous, Koch needed a control scheme, so he devised a set of "postulates" to prove that the organism in question actually caused the disease:
- The organism had to be present in any case of the illness in question.
- It had to be possible to take the organism out of the patient and purify it so that only one species was present. This prevents the possibility that an organism other than the one being observed actually causes the disease.
- The now-purified organism had to cause the disease when injected into a healthy animal. This rules out the possibility that something other than the organism causes the disease, and debunks the idea that toxins came first and caused the formation of the bacteria.
- As a final check, one must be able to re-isolate the organism from the newly-sick animal.
Koch used this scheme to prove that Bacillus anthracis causes anthrax, and it has since been used to establish the causes of a large number of other bacterial diseases as well.
Fleming and the advent of antibiotics
Mendel and his theory of inheritance
Darwin and his theory of evolution and natural selection
- Madigan M; Martinko J (editors). (2005). Brock Biology of Microorganisms, 11th ed., Prentice Hall. ISBN 0-13-144329-1.
Experiments in Chemistry
Mendeleev and the Periodic Table
The Periodic Table is an example of bringing order to a large amount of information that may seem chaotic. Before it was developed, trends among similar elements were difficult or impossible to visualize. A large number of properties can be roughly predicted from trends in the periodic table, including:
- How the elements react
- What the elements will react with
- How big the atoms are
- How the electrons are organized around the atoms.
Several other properties are also predicted well from the table. Mendeleev was not the first to notice the patterns, but he was the first to present an organizational scheme to the scientific community in a way that won acceptance. In particular, he did several things differently from his predecessors:
- He chose different patterns on which to base his ordering scheme. Previous attempts at organizing the elements had met with failure partly because they were based strictly on increasing atomic weight: when two dissimilar elements fell in the same column of the table, their authors kept the order based on weights rather than on properties.
- Mendeleev not only reordered elements whose weights fell out of sequence so that the properties of elements in the same column were similar, he also left spaces for undiscovered elements when no known element fell in a reasonable weight range. In this way he was forward-thinking, leaving room for later discoveries.
- He was able to convince people of the value of his scheme after several of the elements for which he had left "holes" were discovered.
This example shows that science benefits a great deal from the ability to organize information. Organizing information is necessary in order to generalize what is known and to generate new theories from the observed trends, and is a key step in hypothesis generation and testing.
Boyle and his Law
Boyle, back in the 17th century, helped to prove that gases have weight and that their density depends on how much pressure is applied to them. In particular, he discovered Boyle's Law, which says that if you double the pressure applied to a gas (at constant temperature), its volume will be halved.
He did this by making use of a manometer, a U-shaped instrument that measures pressure by the height of a liquid it displaces. He capped one end and poured mercury into the other, trapping air in the middle. He measured the pressure, then added enough mercury to halve the volume of air in the tube and measured the pressure again. Through many such measurements, he showed that volume and pressure are inversely related.
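The inverse relationship Boyle observed can be checked numerically: at constant temperature, the product of pressure and volume stays the same. The pressure-volume pairs below are illustrative, not Boyle's actual readings:

```python
# Illustrative pressure-volume readings for a fixed quantity of trapped
# air at constant temperature (arbitrary units, not Boyle's actual data).
readings = [(1.0, 48.0), (2.0, 24.0), (3.0, 16.0), (4.0, 12.0)]

# Boyle's Law: P * V is constant, so doubling the pressure halves the volume.
products = [p * v for p, v in readings]

# Every product should agree (in real data, to within measurement error).
assert all(abs(pv - products[0]) < 1e-9 for pv in products)
print("P * V =", products[0], "for every reading")
```

Real measurements would show small deviations from a perfectly constant product, which is why Boyle needed many readings to establish the trend convincingly.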
This example shows that sometimes a scientist must be quite clever to achieve a new discovery. Many groundbreaking experiments, including this one, involved the clever use of fairly new inventions or apparatuses to measure quantities. It is only through measurement that hypotheses can be tested.
Avogadro and the nature of atoms and molecules
Before Avogadro made a keen, unifying contribution to the theory of gases, scientists weren't sure how to usefully define atoms and molecules, and therefore had difficulty measuring the molecular weights of compounds. Avogadro remedied this by hypothesizing that equal volumes of any gas at the same temperature and pressure contain the same number of molecules (not the same number of atoms), and that these molecules could be made of combined elements. Scientists were later able to confirm this by using the theory to measure atomic weights more accurately than had been possible before, and then combining them to yield the weights of known molecular compounds.
Avogadro's theory was important because it reconciled two other theories: Dalton's theory that everything is made of atoms, and Gay-Lussac's observation that the volumes of gases in a gas-phase reaction change in proportion to the amounts of gas consumed or generated. This type of unification is central to the advancement of science.
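One practical consequence of Avogadro's hypothesis: since equal volumes of gas contain equal numbers of molecules, the ratio of two gas densities equals the ratio of their molecular weights. The sketch below uses standard reference densities at 0 °C and 1 atm to estimate the molecular weight of oxygen from that of hydrogen:

```python
# Gas densities at 0 degrees C and 1 atm, in grams per litre
# (standard reference values).
density_h2 = 0.0899   # hydrogen, H2
density_o2 = 1.429    # oxygen, O2

# By Avogadro's hypothesis, equal volumes hold equal numbers of molecules,
# so the density ratio equals the molecular-weight ratio.
mw_h2 = 2.016  # molecular weight of H2, used as the reference
mw_o2_estimate = mw_h2 * (density_o2 / density_h2)

# The estimate comes out close to the accepted value of about 32.
print(f"Estimated molecular weight of O2: {mw_o2_estimate:.1f}")
```

This is essentially how chemists of the period bootstrapped a consistent table of atomic and molecular weights from gas measurements.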
Pasteur and Enantiomers of Tartaric Acid
Among his many accomplishments, Pasteur was one of the first people to discover that certain molecules have a property called chirality. A molecule is considered chiral if its mirror image or enantiomer is different from the original molecule. A typical physical analogue to this is a glove: you cannot put the right glove on your left hand because the thumb is the "wrong way" (unless they're specifically designed to fit both, in which case the glove is achiral).
Pasteur's experiment involved separating the enantiomers of tartaric acid from a mixture containing both. Enantiomers are not, in general, easy to separate from each other because they usually have identical physical and chemical properties, but tartaric acid is unusual: its two enantiomers form crystals whose shapes are visibly mirror images of one another. Pasteur was therefore able to separate the two enantiomers by sight.
Once he had separated them, he drew upon the work of Jean-Baptiste Biot and shined polarized light through each enantiomer. Biot had previously shown that, due to some then-unknown physical phenomenon, some substances rotate light in one direction, some in the other direction, and some not at all. Pasteur hypothesized that this was due to the presence of the asymmetric enantiomers, and when he tested his theory he turned out to be right.
It is now known that many compounds are chiral due to the nature of carbon bonds. In particular, if a carbon atom has four different substituents attached to it, that carbon is chiral. Pasteur's experiment helped both in deducing the structures of chiral compounds and in spurring experiments regarding their biological significance.
- Asimov, Isaac. Asimov's Guide to Science. New York: Basic Books, 1972, 230.
- Chiral molecules: http://www.chemguide.co.uk/basicorg/isomerism/optical.html#top
Experiments in Physics
Experiments in Psychology
Appendices and Licensing
This Timeline of the history of scientific method shows an overview of the cultural inventions that have contributed to the development of the scientific method. For a more detailed account see History of Scientific Thought.
- 2000 BC — First text indexes (various cultures).
- 320 BC — Aristotle, comprehensive documents categorising and subdividing knowledge, dividing knowledge into different areas (physics, poetry, zoology, logic, rhetoric, politics, and biology).
- 200 BC — First cataloged library (at Alexandria).
- 800 AD — Arguably the scientific method in many of its modern forms is developed in some aspects of early Islamic philosophy, theology and law. In particular the methods of citation, peer review and open inquiry leading to development of consensus.
- 1015 — Alhazen used experimental methods to obtain the results in his book Optics. In particular, he combined observations and rational arguments to show that his intromission theory of vision was scientifically correct, and that the emission theory of vision supported by Ptolemy and Euclid was wrong.
- 1327 — Ockham's razor clearly formulated (by William of Ockham)
- 1403 — Yongle Encyclopedia, the first collaborative encyclopedia.
- 1590 — First controlled experiment (Francis Bacon).
- 1600 — First dedicated laboratory.
- 1620 — Novum Organum published (Francis Bacon).
- 1637 — First Scientific method (René Descartes).
- 1650 — Society of experts (the Royal Society).
- 1650 — Experimental evidence established as the arbiter of truth (the Royal Society).
- 1665 — Repeatability established (Robert Boyle).
- 1665 — Scholarly journals established.
- 1675 — Peer review begun.
- 1687 — Hypothesis/prediction (Isaac Newton).
- 1739 — The problem of induction identified by David Hume (A Treatise of Human Nature).
- 1753 — Description of a controlled experiment using two identical populations with only one variable. (James Lind's A Treatise of the Scurvy).
- 1926 — Randomized design (Ronald Fisher).
- 1934 — Falsifiability as a criterion for evaluating new hypotheses (Karl Popper's The Logic of Scientific Discovery).
- 1937 — Controlled placebo trial.
- 1946 — First computer simulation.
- 1950 — Double blind experiment.
- 1962 — Meta study of scientific method (Thomas Kuhn's The Structure of Scientific Revolutions).