# Planet Earth/print version

## Intended Audience of this Textbook

This textbook is written for an audience of introductory college students in a non-science degree program. It is intended to provide a detailed, comprehensive overview of Planet Earth, including basic aspects of physics, chemistry, geology, and biology. As a broad scientific survey of the entirety of Planet Earth, its intention is to present key concepts that will enhance, enrich, and engage the reader's interest in the Earth sciences. It is intended to make any reader, such as yourself, at least a little more knowledgeable about the amazing place in which we all live.

## Purpose of Writing an Open Text and What that Means

All of the text and modules of the Planet Earth course are offered under a Creative Commons Attribution license, which means that you are free to share and redistribute the material in any medium or format, and to adapt, remix, transform, and build upon the material for any purpose, even commercially. Just be sure to attribute the text with the author's name and course name, and indicate where you found the information. The purpose of making this text free to disseminate is that it contains valuable information that you should feel free to share and discuss as widely as possible. Science adapts to new knowledge, and as such this text can be updated and modified as new discoveries are made. An open text also ensures that the knowledge remains affordable to the average student such as yourself. Feel free to pass on the information that you learn in this course, and you are free to make printed copies. The referenced text is available as a Wikibook on the Wikibooks website.

Benjamin J. Burger is a geologist who earned his Master of Science degree in 1999 at Stony Brook University in New York and his doctorate in 2009 at the University of Colorado in Boulder, and spent five years working at the American Museum of Natural History in New York City. He has also worked as a professional geologist in the states of Utah, Colorado, and Wyoming. He joined the Utah State University faculty in 2011 and continues to teach and conduct research as an Associate Professor in the Department of Geoscience at the Uintah Basin – Vernal Campus of Utah State University, located in the northeastern corner of Utah. Many of his course lectures and educational content can be found on YouTube or on his website at www.benjamin-burger.org.

This book was written with the support of a grant offered by Utah State University Libraries, Academic & Instructional Services, and the College of Science to support faculty and instructors at Utah State University's statewide campuses in creating Open Educational Resources for their online courses in the United States of America. These grants are made to reduce barriers to student success, and to encourage faculty and instructors to try new, high-quality, and lower-cost ways to deliver learning materials to students through Open Educational Resources.

The majority of the first edition of the textbook was written between 2019 and 2020, with the intention that the textbook be offered free of charge to all participants in GEO 1360 Planet Earth, an online course offered at Utah State University. As an Open Educational Resource, this textbook is offered for any faculty member, instructor, or teacher to adopt for the courses they teach, and is distributed under a Creative Commons license. If you notice any errors or mistakes, please contact the author.

## Digging Deeper

Hyperlinks are referenced throughout the text to encourage further reading on any particular topic; most of these point toward a Wikipedia article or an original scientific publication. These referenced hyperlinks follow a similar style and format to the popular Wikipedia website, where sources of specific information can be referenced and verified with a simple link. Every attempt was made to ensure that the referenced external links within the modules are verified in print and online sources, including peer-reviewed scientific papers, publications of scientific societies, government organizations, and mainstream news organizations. There is no guarantee that these external links will remain available online, or that they will be archived for future electronic access. Furthermore, there is no guarantee that your university or college will have a subscription to view a given article online. However, most of these external references should be accessible to you if you wish to explore a topic more in depth than provided in the text, especially the many Wikipedia entries. Only information covered within the text of this course will be used on quizzes and exams, as the referenced hyperlinks serve to support statements and data within the main body of this course. You are not responsible for information that exists outside of this course on external webpages.

## Vocabulary and Glossary of Terms

Important scientific terms appear in bold print, and may include a hyperlink to a clear definition of that term. These terms should be defined in your notes, as they will likely be referenced in quiz and exam questions. Flashcards with each term and its definition can be a useful study tool for the exams.

\newpage

# 1a. Science: How Do We Know What We Know?

## The Emergence of Scientific Thought

The term science comes from the Latin word for knowledge, scientia, although the modern definition of science only appeared during the last 200 years. Between 1347 and 1351 a deadly plague swept across the Eurasian continent, resulting in the death of nearly 60% of the population. The years that followed the Black Death, as the plague came to be called, were a unique period of reconstruction which saw the emergence of the field of science for the first time. Science became the pursuit of knowledge and wisdom, and was synonymous with the more widely used term philosophy. It was born in a time when people realized the importance of practical reason and scholarship in curing diseases and ending famines, as well as the importance of rational and experimental thought. The plague resulted in a profound acknowledgement of the importance of knowledge and scholarship in holding a civilization together. An early scientist was indistinguishable from a scholar.

Two of the most well-known scholars to live during this time were Francesco “Petrarch” Petrarca and his good friend Giovanni Boccaccio; both were enthusiastic writers of Latin and early Italian, and enjoyed a wide readership of their works of poetry, songs, travel writing, letters, and philosophy. Petrarch rediscovered the ancient writings of Greek and Roman figures of history and worked to popularize them in modern Latin, in particular re-discovering the writings of the Roman statesman Cicero, who had lived more than a thousand years earlier. This pursuit of knowledge was something new. Both Petrarch and Boccaccio proposed the kernel of a scientific ideal that has transcended into the modern age: that the pursuit of knowledge and learning does not conflict with religious teachings, as the capacity for intellectual and creative freedom is in itself divine. The secular pursuit of knowledge, based on truth, complements religious doctrines, which are based on belief and faith. This idea manifested during the Age of Enlightenment and eventually the American Revolution as an aspiration for a clear separation of church and state. This sense of freedom to pursue knowledge and art, unhindered by religious doctrine, led to the Italian Renaissance of the early 1400s.

Leonardo da Vinci's Vitruvian Man sketch is an example of the careful artistic reflection of reality inherent in science.

The Italian Renaissance was fueled as much by this new freedom to pursue knowledge as by the global economic shift that brought wealth and prosperity to northern Italy, and later to northern Europe and England. This was a result of the fall of the Eastern Byzantine Empire and the rise of a new merchant class in the city-states of northern Italy, which took up the abandoned trade routes throughout the Mediterranean and beyond. The patronage of talented artists and scholars arose during this time, as wealthy individuals financed not only artists but also the pursuit of science and technology. Universities, places of learning outside of monasteries and convents, came into fashion, as wealthy leaders of the city-states of northern Italy sought talented artists and inventors to support within their own courts. Artists like Leonardo da Vinci, Raphael, and Michelangelo received commissions from wealthy patrons, including the church and the city-states, to create realistic artworks from keen observation of the natural world. Science grew out of art, as direct observation of the natural world led to deeper insights into the creation of realistic paintings and sculptures. This idea of the importance of observation found in Renaissance art transcended into the importance of observation in modern science today. In other words, science should reflect reality through the ardent observation of the natural world.

## The Birth of Science Communication

Science and the pursuit of knowledge during the Renaissance were enhanced to a greater extent by the invention of the printing press with moveable type, allowing the widespread distribution of information in the form of printed books. While block and intaglio prints using ink on hand-carved wood or metal blocks predated this period, moveable type allowed the written word to be printed and copied onto pages much more quickly. The Gutenberg Bible was first printed in 1455, with an estimated 180 copies produced. This cheap and efficient way to replicate the written word had a dramatic effect on society, as literacy among the population grew. It was much more affordable to own books and written works than at any previous time in history. With a little wealth, the common individual could pursue knowledge through the acquisition of books and literature. Of importance to science was the new-found ease with which information could be disseminated. The printing press led to the first information age, and greatly influenced scientific thought during the middle Renaissance, in the second half of the 1400s. Many of these early works were published in the mother tongue, the language spoken in the home, rather than the father tongue, the language of civic discourse found in the courts and churches of the time, which was mostly Latin. These books spawned the early classic works of literature we have today in Italian, French, Spanish, English, and the other European languages spoken across Europe and the world.

Figure in De Revolutionibus orbium coelestium showing the Sun (sol) in the center of the Solar System.

One of the key figures of this time was Nicolaus Copernicus, who published his mathematical theory that the Earth orbited around the Sun in 1543. The printed book, entitled De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres) and written in the scholarly father tongue of Latin, ushered in what historians call the Scientific Revolution. The book was influential because it was widely read by fellow astronomers across Europe. Each individual could verify the conclusions made in the book by carrying out observations of their own. The Scientific Revolution was not so much what Nicolaus Copernicus discovered and reported (which will be discussed in depth later), but that the discovery and observations he made could be replicated by others interested in the same question. This single book led to one of the most important principles in modern science: that any idea or proposal must be verified through replication. What makes something scientific is that it can be replicated or verified by any individual interested in the topic of study. Science embodied at its core the reproducibility of observations made by individuals, and ushered in the age of experimentation.

During this period of time, such verification of observations and experiments was a lengthy affair. Printing books was costly, the distribution of that knowledge was very slow, and printed works were often subjected to censorship. This was also the time of the Reformation, first led by Martin Luther, who protested corruption within the Catholic Church, leading to the establishment of the Protestant movement in the early 1500s. This schism of thought and belief, brought about primarily by the printing of new works of religious thought and discourse, led to the Inquisition. The Inquisition was a reactionary system of courts established by the Catholic Church to convict individuals who showed signs of dissent from the established beliefs set forth by doctrine. Printed works that hinted at free thought and inquiry were destroyed, and their authors imprisoned or executed. Science, which had flourished in the century before, suffered during the years of the Inquisition, but this time also brought about one of the most important episodes in the history of science, involving one of the most celebrated scientists of the day, Galileo Galilei.

## The Difference between Legal and Scientific Systems of Inquiry

Frontispiece of Galileo Galilei's famous censored book, arguing that the Sun is the center of the Solar System.

Galileo was a mathematician, physicist, and astronomer who taught at one of the oldest universities in Europe, the University of Padua. Galileo got into an argument with a fellow astronomer named Orazio Grassi, who taught at the Collegio Romano in Rome. Grassi had published a book in 1619 on the nature of three comets he had observed from Rome, entitled De Tribus Cometis Anni MDCXVIII (On Three Comets in the Year 1618). The book angered Galileo, who argued that comets were an effect of the atmosphere, and not real celestial bodies. Although Galileo had invented an early telescope for observing the Moon, planets, and comets, he had not made observations of the three comets observed by Grassi. As a rebuttal, Galileo published his response in a book entitled The Assayer, which with a flourish he dedicated to the Pope in Rome. The dedication was meant as an appeal to authority, in which Galileo hoped that the Pope would take his side in the argument.

Galileo was following a legal protocol for science, in which evidence is presented to a judge or jury, or the Pope in this case, who decides on a verdict based on the evidence presented before them. This appeal to authority was widely in use during the days of the Inquisition, and is still practiced in law today. Galileo in his book The Assayer presented the notion that mathematics is the language of science; or in other words, the numbers don't lie. Despite Galileo being wrong about the comets, the Pope sided with him, which emboldened Galileo to take on a topic he was interested in, but which the church considered highly controversial: the idea, proposed by Copernicus, that the Earth orbited around the Sun. Galileo wanted to prove it using his own mathematics.

Before his position at the university, Galileo had served as a math tutor for the son of Christine de Lorraine, the Grand Duchess of Tuscany. Christine was wealthy and highly educated, and more open to the idea of a heliocentric view of the solar system. In a letter to her, Galileo proposed the rationale for undertaking a forbidden scientific inquiry, invoking the idea that science and religion were separate and that biblical writing was meant to be allegorical. Truth could be found in mathematics, even when it contradicted the religious teachings of the church.

In 1632 Galileo published the book Dialogue Concerning the Two Chief World Systems, written in Italian and dedicated to Christine's grandson, the Grand Duke of Tuscany. The book was covertly written in an attempt to get past the censors of the time, who could ban the work if they found it heretical to the teachings of the church. The book was written as a dialogue between three men (Simplicio, Salviati, and Sagredo), who over the course of four days debate and discuss the two world systems. Simplicio argues that the Sun rotates around the Earth, while Salviati argues that the Earth rotates around the Sun. The third man, Sagredo, is neutral, and listens and responds to the two theories as an independent observer. While the book was initially allowed to be published, it raised alarm among members of the clergy, and charges of heresy were brought against Galileo after its publication. The book was banned, as were all the previous writings of Galileo. The Pope, who had previously supported Galileo, saw himself in the character Simplicio, the simpleton. Furthermore, the letter to Christine was uncovered and brought forth during the trial. Galileo was found guilty of heresy, forced to recant, and placed under house arrest for the rest of his life. Galileo's earlier appeal to authority had clearly backfired as he faced these new charges. The result of Galileo's ordeal was that fellow scientists felt that he had been wrongfully convicted, and that authority, whether religious or governmental, was not the determiner of truth in scientific inquiry.

Galileo’s ordeal established the important principle of the independence of science from authority in the determination of scientific truth. Appealing to authority figures should not be a principle of scientific inquiry. Unlike the practice of law, science is governed neither by judges and juries, who can be fallible and wrong, nor by popular public opinion or even voting.

This led to an existential crisis in scientific thought: how can one define truth, especially if one cannot appeal to authority figures in leadership positions to judge what is true?

## How to Become a Scientific Expert and Scientific Deduction

René Descartes, the French philosopher.

The first answer came from a contemporary of Galileo, René Descartes, a French philosopher who spent much of his life in the Dutch Republic. Descartes coined the motto Ego cogito, ergo sum, "I think, therefore I am," which was taken from his well-known preface entitled Discourse on the Method, published in both French and Latin in 1637. The essay is an exploration of how one can determine truth, and is a very personal exploration of how he himself determined what was true or not. René Descartes argued for the importance of two principles in seeking truth.

First was the idea that seeking truth requires much reading, taking and passing classes, but also exploring the world around you: traveling, learning new cultures, and meeting new people. He recommended joining the army, and living not only in books and university classrooms, but living life in the real world and learning from everything that you do. Truth was based on common sense, but only after careful study and work. What Descartes advocated was that expertise in a subject comes not only from learning and studying that subject over many years, but also from practice in a real-world environment. A medical doctor who had never practiced nor read any books on the subject of medicine would be a poorer doctor than one who had attended many years of classes, kept up to date on the newest discoveries in books and journals, and practiced for many years in a medical office. The expert doctor would be able to discern a medical condition much more readily than a novice. With expertise and learning, one could come closer to knowing the truth.

The second idea was that anyone could obtain this expertise if they worked hard enough. René Descartes stated that he was a normal, average student, but that through his experience and enthusiasm for learning more, he was able over the years to become enough of an expert to discern truth from fiction; hence he could claim, I think, therefore I am.

What René Descartes advocated was that if you have to appeal to authority, seek experts within the field of study of your inquiry. These two principles of science should be a reminder that in today's age of mass communication for everyone (Twitter, Facebook, Instagram), much falsehood is spread unknowingly by novices. To combat these lies and falsehoods, one must be educated and well informed through an exploration of written knowledge, educational institutions, and life experiences in the real world, and if you lack these, then seek experts.

René Descartes' philosophy had a profound effect on science, although even he attributed this idea to "le bon sens," or common sense.

Descartes' philosophy went further, to answer the question of what happens if the experts are wrong. If two equally experienced experts disagree, how do we know who is right if there is no authority we can call upon to decide? How can one uncover truth through one's own inquiry? Descartes' answer was to use deduction. Deduction is where you form an idea and then test that idea with observation and experimentation. An idea is held to be true until it is proven false.

## The Idols of the Mind and Scientific Eliminative Induction

Francis Bacon in 1618.

This idea was flipped on its head by a man so brilliant that rumors persist that he wrote William Shakespeare's plays in his free time; although no evidence exists to prove these rumors true, they illustrate how highly he is regarded even today. The man's name was Francis Bacon, and he advanced the method of scientific inquiry that today we call the Baconian approach.

Francis Bacon studied at Trinity College, Cambridge, in England, and rose up the ranks to become Queen Elizabeth's legal advisor, thus becoming the first Queen's Counsel. This position led Francis Bacon to hear many court cases and take a very active role in interpreting the law on behalf of the Queen's rule. Hence, he had to devise a way to determine truth on a nearly daily basis. In 1620 he published his most influential work, Novum Organum Scientiarum, or the New Instrument of Science. It was a powerful book.

Francis Bacon contrasted his new method of science with the one advocated by René Descartes by stating that even experts could be wrong, and that most ideas were false rather than true. According to Bacon, falsehood among experts comes from four major sources, or in his words, Idols of the Mind.

First was the personal desire to be right, the common notion that you consider yourself smarter than anyone else, which he called idola tribus. It extends to the impression you might have that you are on the right track or have had some brilliant insight, even if you are incorrect in your conclusion. People cling to their own ideas and value them over others', even if they are false. Falsehood could also come from an idea that your mother, father, or grandparent told you was true, and which you held onto more than others because it came from someone you respect.

The second source of falsehood among experts comes from idola specus. Bacon used the metaphor of a cave where you store all that you have learned, but we can use a more modern metaphor: watching YouTube videos or following groups on social media. If you consume only videos, or follow only writers with a certain world view, you will become an expert on something that could be false. If you read only books claiming that the world is flat, then you will come to the false conclusion that the world is flat. Bacon realized that as you consume information about the world around you, you are susceptible to false belief due to the random nature of what you learn and where you learn it from.

The third source of falsehood among experts comes from what he called idola fori. Bacon viewed that falsehood resulted from the misunderstanding of language and terms. He argued that science, if it seeks truth, should clearly define the words that it uses; otherwise even experts will come to false conclusions through their misunderstandings of a topic. Science must be careful to avoid ill-defined jargon, and to define all the terms it uses clearly and explicitly. Words can lie, and when used well they can cloak falsehood as truth.

The final source of falsehood among experts results from the spectacle of idola theatri. Even if an idea makes a great story, it may not be true. Falsehood comes within the spectacle of trending ideas or widely held public opinions, which of course come and go based on fashion or popularity. Just because something is widely viewed, or in the modern sense has gone viral on the internet, does not mean that it is true. Science and truth are not popularity contests, nor do they depend on how many people come to see something in theaters, how fancy the computer graphics are in the science documentary you watched last night, or how persuasive the TED Talk. Science and truth should be unswayed by public perception and spectacle. Journalism is often engulfed within the spectacle of idola theatri, reporting stories that invoke fear and anxiety to increase viewership and outrage, and often these stories are untrue.

These four Idols of the Mind led Bacon to the conclusion that knowing the truth is an impossibility: in science we can get closer to the truth, but we can never truly know what we know. We all fail at achieving "truth." Bacon warned that "truth" is an artificial construct formed by the limitations of our perceptions, and that it is easily cloaked or hidden in falsehood, principally by the Idols of the Mind.

So if we cannot know absolute truth, how can we get closer to the truth? Bacon proposed something philosophers call eliminative induction: start with observations and experiments, use that knowledge to look for patterns, and eliminate ideas which are not supported by those observations. This style of science, which starts with observations and experiments, resulted in a profound shift in scientific thinking.

Bacon viewed science as focused on the exploration and documentation of all natural phenomena: the detailed cataloguing of all things observable, of all experiments undertaken, and the systematic analysis of multitudes of observations and experiments for threads of knowledge that lead to the truth. While previous scientists proposed theories and then sought out confirmation of those theories, Bacon proposed first making observations, and then drawing the theories which best fit the observations that had been made.

Francis Bacon realized that this method was powerful, and proposed the idea that knowledge itself is power. He had seen how North and South American empires, such as the Aztecs, had been crushed by the Spanish during the mid-1500s, and how knowledge of ships, gunpowder, cannons, metallurgy, and warfare had resulted in the fall and collapse of whole civilizations of peoples in the Americas. The Dutch utilized the technology of muskets against North American tribes, focusing on the assassination of their leaders, as well as the wholesale manufacturing of wampum beads, which destroyed North American currencies and the native economies. Science was power because it provided technology that could be used to destroy nations and conquer people.

He foresaw the importance of exploration and scientific discovery if a nation was to remain of importance in a modern world. With Queen Elizabeth's death in 1603, Francis Bacon encouraged her successor, King James, to colonize the Americas, envisioning the ideal of a utopian society in a new world. This utopian society he called Bensalem in his unfinished science fiction book New Atlantis. This utopian society would be devoted to pure scientific inquiry, where researchers could experiment and document their observations in finer detail, and from these observations great patterns and theories could emerge that would lead to new technologies.

Francis Bacon’s utopian ideals took hold within his native England, especially within the Parliament of England, which viewed the authority of the King with less respect than at any previous time in its history. The English Civil War and the execution of the King, Charles I, in 1649 threw England into chaos, and many people fled to the American colonies in Virginia during the rise of Oliver Cromwell's dictatorship.

But with the reestablishment of the monarchy in 1660, the ideas laid out by Francis Bacon came to fruition with the founding of the Royal Society of London for Improving Natural Knowledge, or simply, the Royal Society. It was the first truly modern scientific society, and it still exists today.

## Scientific Societies

A scientific society is dedicated to research and the sharing of discoveries among its members. Scientific societies are considered an “invisible college,” since they are where experts in the fields of science come to learn from each other, demonstrate new discoveries, and publish the results of experiments that they have conducted. As one of the first scientific societies, the Royal Society in England welcomed experiments of grand importance, but also insignificant small-scale observations, at its meetings. The Royal Society received support from its members, but also from the monarch, Charles II, who viewed the society as a useful source of new technologies whose ideas would have important applications in both state warfare and commerce. Its members have included some of England's most famous scientists, among them Isaac Newton, Robert Hooke, Charles Babbage, and even the American colonist Benjamin Franklin. Membership was exclusive to upper-class men with English citizenship who could finance their own research and experimentation.

Most scientific societies today are open to membership by all citizens and genders, and have had a profound influence on the sharing of scientific discoveries and knowledge among their members and the public. In the United States of America, the American Geophysical Union and the Geological Society of America rank as the largest scientific societies dedicated to the study of Earth science, but hundreds of other scientific societies exist in the fields of chemistry, physics, biology, and geology. These societies often hold meetings, where members share new discoveries with fellow scientists through presentations, and societies have their own journals, which publish research and are distributed to libraries and fellow members of the society. These journals are often published as proceedings, which can be read by those who cannot attend meetings in person.

The rise of scientific societies allowed the direct sharing of information and fostered a powerful sense of community among the elite experts in various fields of study. It also put into place an important aspect of science today: the idea of peer review. Before the advent of scientific societies, all sorts of theories and ideas were published in books, and many of these ideas were fictitious, to the point that even courts of law favored verbal rather than written testimony, because they felt that the written word was much farther from the truth than the spoken word. Today we face a similar multitude of false ideas and opinions expressed on the internet: it is easy for anyone to post a webpage or express a thought on a subject; you just need a computer and an internet connection.

## Peer-Review

To combat widely spreading fictitious knowledge, the publications of the scientific societies underwent a review system among their members. Before an idea or observation was placed into print in a society's proceedings, it had to be approved by a committee of fellow members, typically three to five, who agreed that it had merit. This became what we call peer review. A paper or publication that underwent this process was given the stamp of approval of the top experts within that field. Many manuscripts submitted for peer review are never published, as one or more of the expert reviewers may find the work lacking in evidence and reject it. However, readers found peer-reviewed articles to be of much better quality than other printed works, and realized that these works carried more authority than written works that did not go through the process.

Today peer-reviewed articles are an extremely important part of scholarly publication, and you can search exclusively among peer-reviewed articles using many of the popular bibliographic databases and indexes, such as Google Scholar (scholar.google.com); GeoRef, published by the American Geosciences Institute and available through library subscription; and Web of Science, published by the Canadian-based Thomson Reuters Corporation and also available only through library subscription. If you are not a member of a scientific society, online articles are available for purchase, and many are now accessible to non-members for free, depending on the scientific society and the publisher of its proceedings. Most major universities and colleges subscribe to these scholarly journals, though access may require a physical visit to a library to read articles.

While peer-reviewed publications carry more weight among experts than news articles and magazines published by the popular press, the process can be subject to abuse. Revolutionary ideas that push science and discovery beyond what current peers believe to be true are often rejected from publication, because they may prove the reviewers wrong, while ideas that conform to the reviewers' current understanding are more readily approved. As a consequence, peer review favors conservatively held ideas. Peer review can also be stacked in an author’s favor when close friends serve as the reviewers, while a newcomer to a scientific society may have much more trouble getting new ideas published and accepted. The process can be long, with some reviews taking several years before an article is accepted and published. Controversial subjects or ideas can even cause feuds between members of a scientific society. Max Planck, a well-known German physicist, lamented that “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.” In other words, science progresses one funeral at a time.

Another limitation of peer review is that the articles are often not read outside the membership of the society. Most local public libraries do not subscribe to these specialized academic journals, so access to scholarly articles is limited to students at large universities and colleges and to members of the scientific society. In the early centuries of their existence, scientific societies were seen as the exclusive realm of privileged, wealthy, high-ranking men, and the knowledge contained in their articles was locked away from the general public. Public opinion of scientific societies, especially in the late 1600s and early 1700s, viewed them as secretive and often associated them with alchemy, magic and sorcery, with limited public engagement in the experiments and observations made by their members.

This level of secrecy rapidly changed during the Age of Enlightenment in the late 1700s and early 1800s with the rise of widely read newspapers, which reported scientific discoveries to the public. The American, French and Haitian Revolutions were likely brought about as much by a desire for freedom of thought and press as by the opening of scientific knowledge and inquiry into the daily lives of the public. Many of the founders of the United States of America were avocational or professional scientific inquirers; Thomas Jefferson in particular was directly influenced by the scientific philosophy of Francis Bacon.

## Major Paradigm Shifts in Science

In 1788 the Linnean Society of London was formed, becoming the first major society dedicated to the study of biology and life in all its forms. It was named after the Swedish biologist Carl Linnaeus, who laid out the ambitious goal of naming all species of life in a series of updated books first published in 1735. The Linnean Society was for members interested in discovering new forms of life around the world, and the great explorations during the Age of Enlightenment raised the society's status as new reports of strange animals and plants were studied and documented.

The great natural history museums were born during this time of discovery to hold physical examples of these forms of life for comparative study. The Muséum national d'histoire naturelle in Paris was founded in 1793 following the French Revolution. It was the first natural history museum established to store the vast variety of life forms from the planet, and it housed scientists who specialized in the study of life. Similar natural history museums in Britain and America struggled to find financial backing until the mid-1800s, with the establishment of a permanent British Museum of Natural History (now known as the Natural History Museum of London) in the 1830s, and of the American Museum of Natural History and the Smithsonian Institution following the American Civil War in the 1870s.

The vast search for new forms of life resulted in the discovery by Charles Darwin and Alfred Wallace that, through a process of natural selection, life forms can evolve and give rise to new species. Charles Darwin published his famous book, On the Origin of Species by Means of Natural Selection, in 1859, and as with Copernicus before him, science was forever changed. Debate over the acceptance of this new paradigm resulted in a schism among scientists of the time, and in a new informal society of Darwin's supporters dubbed the X Club, led by Thomas Huxley, who became known as Darwin’s Bulldog. Articles supporting Darwin’s theory were systematically rejected by the established scientific journals of the time, so the members of the X Club established the journal Nature, which is today considered one of the most prestigious scientific journals. New major scientific paradigm shifts often result in new scientific societies.

## The Industrialization of Science

Public fascination with natural history and the study of Earth grew greatly in the late 1700s and early 1800s, with the first geological mapping of the countryside and the naming of layers of rock. An ancient age and long history for the Earth were first suggested with the discovery of dinosaurs and other extinct creatures by the mid-1800s.

The study of Earth led to the discovery of natural resources such as coal, petroleum, and valuable minerals, and to advances in the use of fertilizers and agriculture, which helped fuel the Industrial Revolution.

All of this was due to the eliminative induction advocated by Francis Bacon, but that method was beginning to reach its limits. Charles Darwin wrote of the importance of his pure love of natural science, based solely on observation and the collection of facts, coupled with a strong desire to understand or explain whatever is observed. He also had a willingness to give up any hypothesis, no matter how beloved it was to him. Darwin distrusted deductive reasoning, whereby an idea is examined by looking for its confirmation in the world, and strongly recommended that science remain based on unbiased observation of the natural world. Yet he realized that observation without a hypothesis, without a question, was foolish. For example, it would be foolish to measure the orientation of every blade of grass in a meadow just for the sake of observation. The act of making observations assumed that there was a mystery to be solved, but its solution should remain unverified until all possible observations are made.

Darwin was also opposed to vivisection, the cruel practice of experimentation and dissection on live animals or people, leading to their suffering, pain or death. There was a dark side to Francis Bacon’s unbridled observation when it came to experimenting on living people and animals without ethical oversight. Mary Shelley’s Frankenstein, published in 1818, was the first instance of a now-common literary trope: the mad scientist and the unethical pursuit of knowledge through vivisection and the general cruelty of experimentation on people and animals. Yet these experiments advanced knowledge, particularly in medicine, and they remain an ethical issue that science grapples with even today.

Following the American Civil War and into World War I, governments became more involved in the pursuit of science than at any prior time, founding federal agencies for the study of science, including agencies for maintaining the safety of industrially produced food and medicine. The industrialization of the world left citizens dependent on the government for oversight of the safety of food that was purchased rather than grown at home. New medicines that were addictive or poisonous were tested by government scientists before they could be sold. Governments mapped their borders in greater detail with government-funded surveys, and charted trade waters for the safe passage of ships. Science was integrated into warfare and the development of airplanes, tanks and guns. Science was assimilated within the government, which funded its pursuits, as science became instrumental to the political ambitions of nations.

However, freedom of inquiry and the pursuit of science through observation became restricted with the rise of authoritarianism and nationalism. Fascism arose in the 1930s through the dissemination of falsehoods that stoked hatred and fear among the populations of Europe and elsewhere. The rise of propaganda using the new media of radio, and later television, nearly destroyed the world of the 1940s, and the scientific pursuit of pure observation was not enough to counter political propaganda.

## The Modern Scientific Method

Karl Popper in 1990

During the 1930s Karl Popper, who watched the rise of Nazi fascism in his native Austria, set about codifying a new philosophy of science. He was particularly impressed by a famous experiment conducted on Albert Einstein’s theory of General Relativity. In 1915 Albert Einstein proposed, using predictions of the orbits of planets in the solar system, that large masses aren’t just attracted to each other, but that matter and energy curve the very fabric of space. To test the idea of curved space, scientists planned to measure the positions of stars in the sky during a total solar eclipse. If Einstein’s theory was correct, starlight would bend around the sun, resulting in an apparent shift in the positions of stars near the sun; if he was incorrect, the stars would remain in the same positions. In 1919, Arthur Eddington organized expeditions to the island of Príncipe off the coast of West Africa and to Sobral, Brazil, to observe a total solar eclipse, and the telescope observations confirmed that the stars’ positions did change during the eclipse, as General Relativity predicted. Einstein was right! The experiment was in all the newspapers, and Albert Einstein went from an obscure physicist to a name synonymous with genius.

Influenced by this famous experiment, Karl Popper dedicated the rest of his life to the study of scientific methods as a philosopher. Popper codified what made Einstein’s theory and Eddington’s experiment “scientific”: they carried the risk of proving the idea wrong. Popper wrote that, in general, what makes something scientific is the ability to falsify an idea through experimentation. Science is not just the collection of observations, because if you view the world under the lens of a proposed idea you are likely to see confirmation and verification everywhere. Popper wrote that “the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability” (Popper, 1963, Conjectures and Refutations). And as Darwin wrote, a scientist must give up a theory if it is falsified through observation; if a scientist tries to save it with ad hoc exceptions, that destroys the scientific merit of the theory.

Popper developed the modern scientific method that you find in most school textbooks: a formulaic recipe in which you come up with a testable hypothesis, carry out an experiment that either confirms or refutes it, and then report your results. Scientific writing shifted during this time to a very structured format: introduce your hypothesis, describe your experimental methods, report your results, and discuss your conclusions. Popper also developed a hierarchy of scientific ideas, with the lowest being hypotheses, which are unverified testable ideas; above them sit theories, which have been verified through many experiments; and finally principles, which have been verified to such an extent that no exception has ever been observed. This does not mean that principles are truth, but they are supported by all observations and attempts at falsification.

Popper drew a line in the sand to distinguish what he called science from pseudo-science. Science is falsifiable, whereas pseudo-science is unfalsifiable. Once a hypothesis is proven false it should be rejected, though rejecting a hypothesis does not mean the question can never be revisited with new evidence.

For example, a hypothesis might be “Bigfoot exists in the mountains of Utah.” The test might be “Has anyone ever captured a bigfoot?” and, with the result “No,” we conclude “Bigfoot does not exist.” This does not mean that we stop looking for bigfoot, only that the hypothesis is unlikely to be supported. However, if someone continues to defend the idea that Bigfoot exists in the mountains of Utah despite the lack of evidence, the idea moves into the realm of pseudo-science, whereas “Bigfoot does not exist” remains within science: it carries the risk that someone will find a bigfoot and prove it wrong. If you cling to the idea that bigfoot exists without evidence, it is not science but pseudo-science, because the belief has become unfalsifiable.

## How Governments Can Awaken Scientific Discovery

Vannevar Bush in the 1940s.

On August 6th and 9th, 1945, the United States dropped atomic bombs on the cities of Hiroshima and Nagasaki in Japan, ending World War II. It sent a strong message that scientific progress was powerful. Two weeks before the dramatic end of the war, Vannevar Bush wrote to President Harry Truman that “Scientific progress is one essential key to our security as a nation, to our better health, to more jobs, to a higher standard of living, and to our cultural progress.”

Bush proposed that funds should be set aside for pure scientific pursuit to cultivate scientific research within the United States of America, and he drafted his famous report, Science, The Endless Frontier. From the recommendations in that report, five years later in 1950 the United States government created the National Science Foundation for the promotion of science. Unlike agency or military scientists, who were full-time employees, the National Science Foundation offered grants to scientists for the pursuit of scientific questions. It allowed funding for citizens to pursue scientific experiments, travel to collect observations, and carry out scientific investigations of their own.

The hope was that these grants would cultivate scientists, especially in academia, who could be called upon during times of crisis. Funding was determined by the scientific process of peer review rather than the legal process of appeal to authority. However, the National Science Foundation has struggled since its inception, railed against by politicians of a legal persuasion who argue that only Congress or the President should decide which scientific questions deserve funding. In practice, most government science funding supports military applications and is directed by politicians rather than by panels of independent scientists, as the finances of most governments demonstrate.

## How to Think Critically in a Media Saturated World

Carl Sagan in 1994.

From the post-war years to the present, false ideas have been perpetuated not only by those in authority, but also by the meteoric rise of advertising: propaganda designed to sell things.

With the mass media of the late 1900s, and even more so today, the methods of scientific inquiry became more important in combating falsehood, not only among those who practiced science but among the general public. Following modern scientific methods, skepticism became a vital tool not only in science, but in critical thinking and the general pursuit of knowledge. Skepticism assumes that anyone might be lying to you, and people are especially prone to lie when selling you something. The common mid-century phrase “There's a sucker born every minute” exalted the pursuit of tricking people for profit, and to protect yourself from scams and falsehoods you need to become skeptical.

To codify this in a modern scientific framework, Carl Sagan developed his “baloney detection kit,” outlined in his book The Demon-Haunted World: Science as a Candle in the Dark. A popular professor at Cornell University in New York, Sagan was best known for his television show Cosmos, and he had been diagnosed with cancer when he set out to write his final book. Sagan worried that, like a lit candle in the dark, science could be extinguished if not put into practice.

He was aghast to learn how much of the general public believed in witchcraft, magic stones, ghosts, astrology, crystal healing, holistic medicine, UFOs, Bigfoot and the Yeti, and sacred geometry, and how many opposed vaccination and inoculation against preventable diseases. He feared that with a breath of wind, scientific thought would be extinguished by the widespread belief in superstition. To prevent that, before his death in 1996 he left us with his “baloney detection kit,” a method of skeptical thinking to help evaluate ideas.

Step one: Wherever possible there must be independent confirmation of the “facts.”

Step two: Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.

Step three: Arguments from authority carry little weight— “authorities” have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts.

Step four: Spin more than one hypothesis. If there’s something to be explained, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives.

Step five: Try not to get overly attached to a hypothesis just because it’s yours. It’s only a way station in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don’t, others will.

Step six: Quantify. If whatever it is you’re explaining has some measure, some numerical quantity attached to it, you’ll be much better able to discriminate among competing hypotheses. What is vague and qualitative is open to many explanations. Of course there are truths to be sought in the many qualitative issues we are obliged to confront, but finding them is more challenging.

Step seven: If there’s a chain of argument, every link in the chain must work (including the premise) — not just most of them.

Step eight: Occam’s Razor. This convenient rule-of-thumb urges us when faced with two hypotheses that explain the data equally well to choose the simpler.

Step nine: Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable are not worth much. You must be able to check assertions out. Inveterate skeptics must be given the chance to follow your reasoning, to duplicate your experiments and see if they get the same result.

The baloney detection kit is a casual way to evaluate ideas through a skeptical lens. It borrows heavily from the scientific method, but has enjoyed wider adoption outside of science as a method of critical thinking.

Carl Sagan never witnessed the incredible growth of mass communication through the development of the Internet at the turn of the century, nor the rapidity with which information can now be shared globally and instantaneously, a power that serves both science and propaganda.

## Accessing Scientific Information

The newest scientific revolution of the early 2000s concerns access to scientific information and the breaking of barriers to free inquiry. In the years leading up to the Internet, scientific societies relied on traditional publishers to print journal articles. Members of the societies would author new works, and review others' submissions for free on a voluntary basis. The society or publisher would own the copyright to each scientific article, which was sold to libraries and institutions for a profit; members would receive a copy as part of their membership fees. However, low readership and high printing costs for these specialized publications resulted in expensive library subscriptions.

With the advent of the internet in the 1990s, traditional publishers began scanning and archiving their vast libraries of copyrighted content onto the Internet, allowing access through paywalls. University libraries with an institutional subscription would allow students to connect through the library to access articles, while the general public remained locked out of the archival articles behind paywalls.

Academic scientists were locked into the system because tenure and advancement within universities and colleges was dependent on their publication record. Traditional publications carried higher prestige, despite having low readership.

Publishers exerted a huge amount of control over who had access to scientific peer-reviewed articles, and students and aspiring scientists at many universities were often locked out of access to these sources of information. There was a need to revise the traditional peer-review publishing model.

## Open Access and Science

One of the most important originators of a new model for the distribution of scientific knowledge was Aaron Swartz. In 2008, Swartz published a famous essay entitled Guerilla Open Access Manifesto, and he led a life as an activist fighting for free access to scientific information online. Swartz was fascinated with online collaborative publications such as Wikipedia, which is assembled from the contributions of volunteers who write articles on topics; this information is verified and modified by large groups of users who keep the website up to date. Wikipedia grew out of a large user community, much like the scientific societies, but with an easy entry to contributing new information and editing pages, and it quickly became one of the most visited sites on the internet for the retrieval of factual information. Swartz advocated for Open Access: the idea that all scientific knowledge should be accessible to anyone. He promoted Creative Commons licensing, and strongly encouraged scientists to publish their knowledge online without copyrights or restrictions on sharing that information.

Open Access had its adversaries: law enforcement, politicians and governments with nationalist or protectionist tendencies, and private companies with large revenue streams from intellectual property. These adversaries argued that freely shared scientific information could be used to make illicit drugs, build new types of weapons, hack computer networks, encrypt communications, and leak state secrets and private intellectual property. But it was the private companies holding large stores of intellectual property who worried most about the Open Access movement, lobbying politicians to enact stronger laws prohibiting the sharing of copyrighted information online.

## Daisy World

James Lovelock in 2005.

In 1983, after receiving heavy criticism of his Gaia Hypothesis, James Lovelock teamed up with Andrew Watson, an atmospheric scientist and global modeler, to build a simple computer model of how a simplified planet could regulate its surface temperature through a dynamic negative feedback system, adjusting to changes in solar irradiation. This model became known as the Daisy World model. The modeled planet contains only two types of life: black daisies with an albedo of 0 and white daisies with an albedo of 1, growing on a gray ground surface with an albedo of 0.5. Black daisies absorb all the incoming light, while white daisies reflect all the incoming light back into space. There is no atmosphere in the Daisy World, so we don’t have to worry about absorption and reflection of light above the surface of this simple planet.

A short video about the DaisyWorld model and its implications for real world earth science, made by the NASA/Goddard Space Flight Center

As solar irradiation increases, black daisies become more abundant, as they are able to absorb more of the sun’s energy, and quickly they become the prevalent life form on the planet. Since the surface now has a lower albedo, the planet warms, which in turn causes the white daisies to grow in abundance; as they do, the world starts to reflect more sunlight back into space, cooling the planet. Over time, the surface temperature of the planet reaches an equilibrium and stabilizes, so that it does not vary much despite continued increases in solar irradiation. As the sun’s irradiation increases, it is matched by an increased abundance of white daisies over black ones.

Eventually, solar irradiation increases to a point where white daisies are unable to survive on the hot portions of the planet, and they begin to die, revealing more of the gray surface, which absorbs half of the light’s energy. As a result, the planet quickly absorbs more light and heats up, killing off all the daisies and leaving a barren gray planet. The Daisy World illustrates how a planet can reach a dynamic equilibrium in surface temperature, and how there are limits, or tipping points, to such negative feedback systems. Such a simple model is extremely powerful in documenting how a self-regulating system works and the limitations of such regulation. Since the model was introduced in 1983, scientists have greatly expanded the complexity of Daisy World models by adding atmospheres, oceans and differing life forms, but ultimately they all reveal a similar pattern of stabilization followed by a sudden collapse.
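The feedback loop described above can be sketched in a few dozen lines of code. The sketch below is a simplified Daisy World, loosely following the parameter values of Watson and Lovelock's 1983 paper (a solar flux of 917 W/m², daisy albedos of 0.75 and 0.25 rather than the idealized 1 and 0 used in the text, a parabolic growth curve peaking at 22.5 °C, and a fixed death rate); the linearized local-temperature term and the time stepping are illustrative simplifications, not the authors' exact model.

```python
SIGMA = 5.67e-8    # Stefan-Boltzmann constant (W m^-2 K^-4)
FLUX = 917.0       # solar flux used by Watson & Lovelock (W m^-2)
ALB_GROUND = 0.5   # bare gray ground
ALB_WHITE = 0.75   # white daisies (paper value; idealized text value is 1)
ALB_BLACK = 0.25   # black daisies (paper value; idealized text value is 0)
OPT_T = 295.65     # optimal growth temperature, 22.5 C, in kelvin
DEATH = 0.3        # daisy death rate per unit time
Q = 20.0           # how far local temperature departs from the global mean (K)

def growth_rate(t_local):
    """Parabolic growth curve: positive only between roughly 5 and 40 C."""
    return max(0.0, 1.0 - 0.003265 * (OPT_T - t_local) ** 2)

def daisyworld(luminosity, steps=5000, dt=0.01):
    """Integrate daisy cover to a near-steady state for a fixed luminosity,
    returning (planet temperature in K, white cover, black cover)."""
    a_white, a_black = 0.01, 0.01          # small seed populations
    for _ in range(steps):
        bare = 1.0 - a_white - a_black
        albedo = (bare * ALB_GROUND + a_white * ALB_WHITE
                  + a_black * ALB_BLACK)
        # Global mean temperature from the planet's energy balance.
        t_planet = (FLUX * luminosity * (1.0 - albedo) / SIGMA) ** 0.25
        t_white = Q * (albedo - ALB_WHITE) + t_planet   # runs cooler
        t_black = Q * (albedo - ALB_BLACK) + t_planet   # runs warmer
        # Growth into bare ground, minus a fixed death rate.
        a_white += a_white * (bare * growth_rate(t_white) - DEATH) * dt
        a_black += a_black * (bare * growth_rate(t_black) - DEATH) * dt
        a_white = min(max(a_white, 0.01), 1.0)
        a_black = min(max(a_black, 0.01), 1.0)
    return t_planet, a_white, a_black
```

Sweeping `luminosity` upward from about 0.7 and plotting the returned planetary temperature shows, qualitatively, the behavior described in the text: black daisies dominate under a faint sun, white daisies take over as it brightens, the temperature holds near the growth optimum, and regulation collapses once the daisies die off.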

## Water World

A fictional Water World.

The Daisy World invokes some mental gymnastics, as it ascribes life forms to a planet, but we can model an equally simple lifeless planet, one more similar to an early Earth: a water world with a weak atmosphere. Just like the 1995 sci-fi action movie starring Kevin Costner, the Water World is open ocean and contains no land. The surface of the ocean has a low albedo of 0.06, so it absorbs most of the incoming solar irradiation. As the sun’s irradiation increases and the surface of the Water World heats up, the water reaches temperatures high enough that it begins to evaporate, resulting in an atmosphere of water vapor, and with increasing temperatures the atmosphere begins to form white clouds. These clouds have a high albedo of 0.80, meaning more of the solar irradiation is reflected back into space before it can reach the ocean’s surface, and the planet begins to cool. Hence, just like the Daisy World, the Water World can become a self-regulating system with an extended period of equilibrium. However, the tolerance here is very narrow, because if the Water World cools too much, sea ice will form. Ice on the surface of the ocean, with a high albedo of 0.70, introduces a positive feedback: if ice begins to cover the oceans, the Water World cools further, which causes still more ice to form on the surface. In a Water World model, the collapse is toward a planet locked in ice, a Frozen World.
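The albedo values quoted in this passage can be dropped into the standard planetary energy-balance relation, S(1 − α)/4 = σT⁴, to see how strongly surface brightness controls temperature. This is a back-of-the-envelope sketch that ignores any greenhouse effect; the flux value of 1361 W/m² (the modern solar constant at Earth's orbit) is an added assumption, not a number from the text.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
S_SUN = 1361.0    # solar constant at Earth's orbit (W m^-2)

def equilibrium_temp(albedo, flux=S_SUN):
    """Equilibrium surface temperature in kelvin for an airless planet:
    absorbed sunlight, flux * (1 - albedo) / 4, balances emitted
    thermal radiation, SIGMA * T**4."""
    return (flux * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

ocean = equilibrium_temp(0.06)   # dark open water, albedo from the text
ice = equilibrium_temp(0.70)     # bright sea ice, albedo from the text
```

With the open-ocean albedo of 0.06 the equilibrium temperature comes out near 274 K, just above freezing, while an ice-covered surface at 0.70 drops it to roughly 206 K, far too cold for the ice to melt again. That gap of nearly 70 K is the ice-albedo positive feedback that locks a Water World into a Frozen World.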

Europa, a moon of Jupiter, an example of a Frozen World.

There is evidence that early in Earth’s own history, the entire planet turned into a giant snow ball. With ever increasing solar irradiation a Frozen World will remain frozen, until the solar irradiation is high enough to begin to melt the ice to overcome the enhanced albedo of its frozen surface.

At this point the world will quickly and suddenly return to a Water World, although if solar irradiation continues to increase the oceans will eventually evaporate, despite increasing cloud cover and higher albedo, leaving behind dry land under an extremely thick, heavy atmosphere of water clouds. Note that a heavy atmosphere of water clouds will trap more of the outgoing long-wave infrared radiation, resulting in a positive feedback. The Water World will eventually become a hot Cloud World.

Venus, an example of a hot Cloud World.

Examples of both very cold Frozen Worlds and very hot Cloud Worlds exist in the Solar System. Europa, one of the four Galilean moons of Jupiter, is an example of a Frozen World, with a persistently high albedo of 0.67. The surface of Europa is locked under thick ice sheets. The moon orbits the giant planet Jupiter, which pulls and tugs on its ice-covered surface, producing gigantic cracks and fissures in the ice, and it has an estimated average surface temperature of −171.15 °C, or 102 on the Kelvin scale.

Venus, the second planet from the Sun, is an example of a Cloud World, with a thick atmosphere that traps the sun’s irradiation. In fact, the surface of Venus is the hottest planetary surface in the Solar System, with a surface temperature of about 464 °C, or 737 on the Kelvin scale, nearly hot enough to melt rock, and this despite an albedo slightly higher than Europa's, around 0.69 to 0.76.

The Solar System thus contains both end states of Water Worlds, and Earth appears to be balanced in an ideal Energy Cycle. But as these simple computer models predict, Earth is not immune from these changes and can tip into either a cold Frozen World like Europa or an extremely hot Cloud World like Venus. Ultimately, as the sun increases its radiation with its eventual expansion, the more likely scenario for Earth is a Cloud World; you need only look at Venus to imagine the very hot long-term future of planet Earth.

An image of the Earth taken from the VIIRS instrument aboard NASA's Earth-observing research satellite, Suomi NPP, taken from 826 km altitude.

\newpage

# 2e. Other Sources of Energy: Gravity, Tides, and the Geothermal Gradient.

A schematic view of the geothermal gradient of increasing temperature with depth inside the Earth.

The sun may appear to be Earth’s only source of energy, but there are other, much deeper sources of energy hidden inside Earth. In the pursuit of natural resources such as coal, iron, gold and silver during the heights of the industrial revolution, mining engineers and geologists took notice of a unique phenomenon as they dug deeper and deeper into the Earth: the deeper you travel down into an underground mine, the warmer the temperature becomes. Caves and shallow mines near the surface take on the yearly average temperature, making hot summer days feel cool in a cave and cold winter days feel warm, but as one descends deeper underground, ambient temperatures begin to increase. The amount of increase varies depending on your proximity to an active volcano or upwelling magma, but in most regions on land, a descent of 1,000 meters underground will raise ambient temperatures by about 25 to 30 °C. One of the deepest mines in the world is the TauTona Mine in South Africa, which descends to depths of 3,900 meters, with ambient temperatures rising to between 55 °C (131 °F) and 60 °C (140 °F), rivaling or topping the hottest temperatures ever recorded on Earth’s surface. Scientists pondered where this energy, this heat within the Earth, comes from.
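The gradient quoted above can be turned into a rough linear estimate of rock temperature at depth. The 15 °C surface temperature and the 27.5 °C per kilometer default below are illustrative assumptions (the mid-range of the figures in the text), not measured values; real gradients vary widely with local geology.

```python
def temperature_at_depth(depth_m, surface_c=15.0, gradient_c_per_km=27.5):
    """Rough ambient rock temperature in Celsius, assuming a constant
    geothermal gradient; the default values are illustrative, not measured."""
    return surface_c + gradient_c_per_km * (depth_m / 1000.0)
```

At 1,000 m depth this gives 15 + 27.5 = 42.5 °C. Notice that extrapolating the same average gradient to TauTona's 3,900 m would predict well over 100 °C, far above the mine's actual 55 to 60 °C, a reminder that the gradient differs greatly from place to place.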

Scientists of the 1850s viewed the Earth like a giant iron ball, heated to glowing hot temperatures in the blacksmith-like furnace of the sun and slowly cooling down ever since its formation. This view of a hot Earth owed its origins to the industrial iron furnaces that dotted the cityscapes of the 1850s: like poured molten iron, Earth was imagined to have once been molten and to have cooled over its long history. The heat experienced deep underground in mines was thus seen as the cooling remnant of Earth's original heat, from a time in its ancient past when it was forged from the sun. Scientists term this original interior heat left over from Earth's formation accretionary heat.

## Lord Kelvin and the First Scientific Estimate for the Age of Earth

As a teenager, William Thomson pondered the possibility of using this geothermal gradient of heat in Earth's interior as a method to determine the age of the Earth. He imagined the Earth to have cooled into its current solid rock from an original molten liquid state, and that the temperatures on the surface of the Earth had not changed significantly over the course of its history. Under these assumptions, the temperature gradient was directly related to how long the Earth had been cooling. Years before William Thomson was ennobled as Lord Kelvin, he acquired an accurate set of measurements of the Earth's geothermal gradient from reports of miners in 1862, and returned to the question of the age of the Earth.

Lord Kelvin made three initial assumptions: first, that Earth was once a molten hot liquid with a uniform temperature; second, that this initial temperature was about 3,900 °C, hot enough to melt all types of rock; and third, that the temperature at Earth's surface had remained near 0 °C throughout its history. Like a hot potato thrown into an icy freezer, the Earth would retain heat at its core, while its outer edges would cool with time. He devised a simple formula:

${\displaystyle {{\text{Age of Earth}}={\frac {(T/G)^{2}}{\pi k}}}}$

Where T is the initial temperature, 3,900 °C; G is the geothermal gradient, which he estimated at about 36 °C/km from those measurements in mines; and k is the thermal diffusivity, the rate at which a material conducts away heat, measured in meters squared per second. While Lord Kelvin had established estimates for T and G, and used the constant π, he still had to determine k, the thermal diffusivity. In his lab, he experimented with various materials, heating them up and measuring how quickly heat was conducted through them, and found a good value for the Earth of 0.0000012 meters squared per second. During these experiments of heating various materials and measuring how quickly they cooled down, Lord Kelvin was aided by his assistant, a young student named John Perry. It must have been exciting when Lord Kelvin calculated an age of the Earth of around 93 million years, although he gave a broad range in his 1863 paper of between 22 and 400 million years. Lord Kelvin's estimate gave hope to Charles Darwin's budding theory of evolution, which required a long history for various lifeforms to evolve, but ran counter to the notion that Earth had always existed.
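
As a quick check, Kelvin's formula can be evaluated with the values quoted above (T = 3,900 °C, G = 36 °C/km, k = 0.0000012 m²/s); this sketch uses an approximate seconds-to-years conversion:

```python
import math

# Lord Kelvin's assumed values (from the text above)
T = 3900.0           # initial temperature, degrees Celsius
G = 36.0 / 1000.0    # geothermal gradient, degrees C per meter (36 C/km)
k = 0.0000012        # thermal diffusivity, meters squared per second

# Age of Earth = (T/G)^2 / (pi * k), which comes out in seconds
age_seconds = (T / G) ** 2 / (math.pi * k)

# Convert to years (~3.156e7 seconds per year)
age_years = age_seconds / 3.156e7
print(f"{age_years / 1e6:.0f} million years")  # roughly 100 million years
```

The result lands near 100 million years, in the same range as Kelvin's published figure of 93 million years; the small difference comes from rounding in the constants.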

John Perry, who idolized his professor, graduated and moved on to a prestigious teaching position in Tokyo, Japan. It was there in 1894 that he was struck by a flawed assumption they had made in trying to estimate the age of the Earth, and it may have occurred to him after eating some hot soup on the streets of Tokyo. In a boiling pot of soup, heat is not dispersed through conduction, the transfer of heat energy by simple direct contact, but through convection, the transfer of energy with the motion of matter. In the case of the Earth, the interior of the planet may have acted like a pot of boiling soup, the liquid bubbling and churning, bringing up not only heat to the surface but also matter. John Perry realized that if the heat transfer of the interior of the Earth was like boiling soup rather than a cooling iron ball, the geothermal gradient near the surface would be sustained far longer, due to the upwelling of fresh liquid magma from below. In a pot of boiling soup, the upper levels retain higher temperatures because the liquid is mixing and moving as it is heated on the stove.

Convection of heat transfer (boiling water) versus conduction of heat transfer (the hot handle of a pot).

In 1894, John Perry published a paper in Nature pointing out the error in Lord Kelvin's previous estimate for the age of the Earth. Today, we know from radiometric dating that the Earth is 4.6 billion years old, roughly 50 times older than Lord Kelvin's estimate. John Perry's convection explained the discrepancy, but it was another idea that captured Lord Kelvin's attention: the existence of an interior source of energy within the Earth, thermonuclear energy, that could also help keep the Earth's interior hot.

## Earth’s Interior Thermonuclear Energy

Marie Skłodowska Curie, the great scientist.

Unlike the sun, Earth lacks enough mass and gravity to trigger nuclear fusion at its core. However, throughout its interior, the Earth contains a significant number of large atoms (larger than iron) that formed in the giant supernova explosion that preceded the formation of the solar system. Some of these large atoms, such as thorium-232 and uranium-238, are radioactive. These elements have been slowly decaying ever since their creation, around the time of the initial formation of the sun, solar system and Earth. The decay of these large atoms into smaller atoms is called nuclear fission. During the decay, these larger atoms are broken into smaller atoms, some of which can also decay into even smaller atoms, like the gas radon, which decays into lead. The decay of larger atoms into smaller atoms produces radioactivity, a term coined by Marie Skłodowska-Curie. In 1898, she was able to detect electromagnetic radiation emitted from both thorium and uranium, and later she and her husband demonstrated that radioactive substances produce heat. This discovery was confirmed by another female scientist, Fanny Gates, who demonstrated the effects of heat on radioactive materials, while the equally brilliant Harriet Brooks discovered that the radioactive solid substances produced from the decay of thorium and uranium further decay into a radioactive gas, called radon.

The New Zealand scientist Ernest Rutherford, who wrote the classic book on radioactivity.

These scientists worked and corresponded closely with a New Zealander named Ernest Rutherford, who in 1905 published a definitive book on “Radio-activity.” This collection of knowledge began to tear down the assumptions made by Lord Kelvin. It also introduced a major quandary in Earth sciences: how much of Earth's interior heat is a product of accretionary heat, and how much is a product of thermonuclear heat from the decay of thorium and uranium?

A century of technology has resulted in breakthroughs in measuring nuclear decay within the interior of the Earth. Nuclear fusion in the sun causes beta plus (β+) decay, in which a proton is converted to a neutron, and generates a positron and neutrino, as well as electromagnetic radiation. In nuclear fission, in which atoms break apart, beta minus (β−) decay occurs. Beta minus (β−) decay causes a neutron to convert to a proton, and generates an electron and antineutrino as well as electromagnetic radiation. If a positron comes in contact with an electron the two sub-atomic particles annihilate each other. If a neutrino comes in contact with an antineutrino the two sub-atomic particles annihilate each other. Most positrons are annihilated in the upper regions of the sun, which are enriched in electrons, while neutrinos are free to blast across space, zipping unseen through the Earth, and are only annihilated if they come in contact with antineutrinos produced by radioactive beta minus (β−) decay from nuclear fission on Earth.

Any time of day, trillions of neutrinos are zipping through your body, along with a few antineutrinos produced by background radiation. Neither of these subatomic particles causes any health concerns, as they cannot break atomic bonds. However, if one strikes a proton, it can emit a tiny amount of energy, in the form of a nearly instantaneous flash of electromagnetic radiation.

The Kamioka Liquid-scintillator Antineutrino Detector in Japan is a complex experiment designed to detect antineutrinos emitted during radioactive beta minus (β−) decay, caused both by nuclear reactors in energy-generating power plants and by natural background radiation from thorium-232 and uranium-238 inside the Earth.

The detector is buried deep in an old mine and consists of a steel sphere containing a balloon filled with liquid scintillator, buffered by a layer of mineral oil. Light within the steel sphere is detected by highly sensitive phototubes mounted on its inside surface. Inside the pitch-black sphere, any tiny flash of electromagnetic radiation can be detected by the thousands of phototubes that line the surface of the sphere. These phototubes record tiny electrical pulses, which result from antineutrinos striking protons. Depending on the source of the antineutrinos, they will produce differing amounts of energy in the electrical pulses. Antineutrinos produced by nearby nuclear reactors can be detected, as well as natural antineutrinos caused by the fission of thorium-232 and uranium-238. A census of background electrical pulses indicates that Earth's interior thermonuclear energy accounts for about 25% of the total interior energy of the Earth (2011 Nature Geoscience 4:647–651, but see 2013 calculations at https://arxiv.org/abs/1303.4667); the other 75% is accretionary heat, left over from the initial formation of the Earth. Thorium-232 is more abundant near the core of the Earth, while uranium-238 is found closer to the surface. Both elements enhance the geothermal gradient observed in Earth's interior, extending Earth's interior energy beyond that predicted by a model of a cooling Earth with only heat left over from its formation. A few other radioactive elements contribute to Earth's interior heat, such as potassium-40, but the majority of Earth's interior energy is residual heat from its formation.

Comparing the total amount of Earth's interior energy with the amount Earth receives from the Sun reveals a difference of several orders of magnitude. The entire interior energy of Earth accounts for only about 0.03% of Earth's total energy budget; the other 99.97% comes from the sun's energy, as measured above the atmosphere. It is estimated that current human populations utilize about 30 terawatts, or about 0.02% of Earth's total energy. Hence, the interior energy of Earth and the resulting geothermal gradient could support much of the energy demand of large human populations, despite accounting for only a small amount of Earth's total energy budget.
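
The percentages above can be reproduced with rough figures for the energy flows involved. The solar input (~173,000 terawatts) and interior heat flow (~47 terawatts) used below are commonly cited estimates, assumed here rather than taken from this text:

```python
# A rough check of the energy-census percentages above.
solar_input_tw = 173000.0  # terawatts of solar energy intercepted by Earth (assumed)
geothermal_tw = 47.0       # terawatts of heat flowing out of Earth's interior (assumed)
human_use_tw = 30.0        # terawatts, the figure quoted in the text

total_tw = solar_input_tw + geothermal_tw

geo_percent = 100.0 * geothermal_tw / total_tw    # interior heat share
human_percent = 100.0 * human_use_tw / total_tw   # human energy use share
print(f"interior: {geo_percent:.2f}%  human use: {human_percent:.2f}%")
```

The interior share works out to about 0.03% and human use to about 0.02%, matching the figures in the paragraph above.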

## Gravity, Tides and Energy from Earth’s Inertia

While the vast majority of Earth's energy comes from the Sun, and a small amount comes from the interior of the Earth, a complete census of Earth's energy should also include a tiny component derived from Earth's motion and the oscillations of its gravitational interaction with both the Moon and the Sun.

Animation of tides as the Moon goes round the Earth with the Sun on the right.

Ocean and Earth tides are caused by the joint gravitational pull of the Moon and Sun. Tides cycle daily between high and low water, and their strength also varies over a longer two-week cycle. Twice a lunar month, around the new moon and full moon, when a straight line can be drawn through the centers of the Sun, Moon and Earth, a configuration known as a syzygy, the tidal force of the Moon is reinforced by the gravitational force of the Sun, resulting in higher than usual tides called spring tides. When lines drawn from the Sun to the Earth and from the Moon to the Earth form a 90° angle, that is, are perpendicular, the gravitational force of the Sun partially cancels the gravitational force of the Moon, resulting in a weakened tide, called a neap tide. Neap tides occur when the Moon is at first quarter or third quarter in the night sky.

The gravitational pull of the Moon generates a tide-generating force, affecting both liquid water and the solid interior of Earth.

Daily tides are a result of Earth's rotation relative to the position of the Moon. Tides can affect both the solid interior of the Earth (Earth tides) and the liquid ocean waters (ocean tides); the latter are more noticeable, as ocean waters rise and fall along coastlines. Long records of sea level are averaged to indicate the mean sea level along a coastline. The highest and lowest astronomical tides are also recorded, with the lowest astronomical tide serving as the datum on navigational charts. Meteorological conditions (such as hurricanes), as well as tsunamis (caused by earthquakes), can dramatically raise or lower sea level along coasts, well beyond the highest and lowest astronomical tides. It is estimated that tides contribute only 3.7 terawatts of energy (Global Climate and Energy Project, Hermann, 2006 Energy), or about 0.002% of Earth's total energy.

In this census of Earth's energy, we did not include wind or fossil fuels such as coal, oil and natural gas, as these sources of energy are ultimately a result of the input of solar radiation. Wind is a result of thermal and pressure gradients in the atmosphere, which you will learn more about when you read about the atmosphere, while fossil fuels are stored biological energy, the result of the sequestration of organic matter produced by photosynthesis in the form of hydrocarbons, which you will learn more about in a later chapter on life. \newpage

# 3a. Gas, Liquid, Solid (and other states of matter).

## What is stuff made of?

Ancient classifications of Earth's matter were early attempts to determine what makes up the material world we live in. Aristotle, teacher of Alexander the Great in Ancient Greece, proposed around 343 BCE five elements: earth, water, air, fire, and aether. These five elements were likely adapted from older cultures, such as ancient Egyptian teachings. The Chinese Wu Xing system, developed around 200 BCE during the Han dynasty, lists the elements Wood (木), Fire (火), Earth (土), Metal (金), and Water (水). These ideas suggested that all matter was made up of some combination of these elements, but theories of what those elements were appeared arbitrary in early texts. Around 850 CE, the Islamic philosopher Al-Kindi, who had read of Aristotle in his native Baghdad, conducted early experiments in distillation: the process of heating a liquid and collecting the condensed vapor in a separate container. He discovered that distillation could make more potent perfumes and stronger wines. His experiments suggested that there were in fact just three states of matter: solids, liquids and gases.

These early classifications of matter differ significantly from today's modern atomic theory, which forms the basis of the field of chemistry. Modern atomic theory classifies matter into 94 naturally occurring elements, plus an additional 24 elements synthesized by scientists. The atomic theory of matter suggests that all matter is composed of a combination or mixture of these 118 elements. However, all these substances can adopt three basic states of matter as a result of differences in temperature and pressure. Hence any combination of these elements, whatever it is made of, can theoretically exist in a solid, liquid or gas phase, depending on its temperature and pressure.

A good example is ice, water and steam. Ice is the solid form of water, hydrogen atoms bonded to oxygen atoms, symbolized by H2O because it contains twice as many hydrogen (H) as oxygen (O) atoms. H2O is the chemical formula of ice. Ice can be heated to form liquid water: at Earth's surface pressure (1 atmosphere), ice melts into water at 0° Celsius (32° Fahrenheit). Likewise, water freezes at the same temperature, 0° Celsius (32° Fahrenheit). If you continue to heat the water, it will boil at 100° Celsius (212° Fahrenheit). Boiling water produces steam, or water vapor, which is a gas. If water vapor is cooled below 100° Celsius (212° Fahrenheit), it will condense back into water.

One of the most fascinating simple experiments is to observe the temperature in a pot of water as it is heated. The water will rise in temperature until it reaches 100° Celsius (212° Fahrenheit), and it will remain at that temperature until all the water has evaporated into steam (a gas); only then can the temperature rise any higher. A pot of boiling water is precisely at 100° Celsius (212° Fahrenheit), as long as it is pure water at 1 atmosphere of pressure (at sea level).

The amount of pressure affects the temperatures at which phase transitions take place. For example, on top of a 10,000-foot mountain, water will boil at 89.6° Celsius (193.2° Fahrenheit), because there is less atmospheric pressure. This is why you often see adjustments to cooking instructions based on altitude: it takes longer to cook something at higher altitudes. If you place a glass of water in a vacuum chamber and pump the gases out, you can get the water to boil at room temperature; this phase transition happens when the pressure drops below about 1 kilopascal. The three basic states of matter are dependent on both the pressure and temperature of a substance. Scientists can diagram the different states of matter of any substance by charting the observed state at each temperature and pressure. These diagrams are called phase diagrams.
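
The effect of altitude on boiling can be estimated with two standard approximations: a simple exponential barometric formula for air pressure, and the Antoine equation for water's vapor pressure. Both formulas and their coefficients are textbook approximations assumed here, not taken from this text:

```python
import math

def pressure_at_altitude(h_m):
    """Approximate air pressure (Pa) from a simple exponential
    barometric formula with an ~8,400 m scale height (an assumption)."""
    return 101325.0 * math.exp(-h_m / 8400.0)

def boiling_point_c(pressure_pa):
    """Boiling point of water via the Antoine equation
    (coefficients valid roughly 1-100 C; pressure converted to mmHg)."""
    A, B, C = 8.07131, 1730.63, 233.426
    p_mmhg = pressure_pa / 133.322
    return B / (A - math.log10(p_mmhg)) - C

# On a 10,000-foot (3,048 m) mountain:
p = pressure_at_altitude(3048.0)
print(f"{boiling_point_c(p):.1f} C")  # roughly 90 C, near the 89.6 C quoted above
```

At sea-level pressure the same function returns essentially 100 °C, and at 3,048 m it gives roughly 90 °C, close to the 89.6 °C quoted above.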

The phase diagram of water, note that the Y-axis is Pressure while the X-axis is Temperature. Each area of the diagram shows the phase (liquid, solid, gas) at that temperature and pressure.

A phase diagram is read by observing the temperatures and pressures at which a substance changes phase between solid, liquid and gas. If the pressure is held constant, you can follow a horizontal line across the diagram and read off the temperatures at which the substance melts or freezes (solid-liquid) and boils or condenses (liquid-gas). If the temperature is held constant, you can follow a vertical line and read off the pressures at which those same phase changes occur.

On the phase diagram for water, you will notice that the division between solid ice and liquid water is not a perfectly vertical line at 0° Celsius: at high pressures, around 200 to 632 MPa, ice melts at temperatures slightly below 0° Celsius. This is why ice buried deep beneath ice sheets, where the overlying ice raises the pressure, can melt. Another strange phenomenon can happen to water heated to 100° Celsius: if you subject it to increasing pressures, above about 2.1 GPa, the hot water will turn to solid ice and “freeze” at 100° Celsius. Hence, at very high pressures, you can form ice at the bizarrely hot temperature of 100° Celsius; if you were able to touch this ice, you would get burned. Something strange also happens if you subject ice to decreasing pressures in a vacuum: the ice will sublimate, turning from a solid to a gas at temperatures below 0° Celsius. The process of a solid turning to a gas is called sublimation, and the process of a gas turning into a solid is called deposition. One of the most bizarre phenomena happens at the triple point, where the solid, liquid and gas phases can co-exist. For pure water (H2O) this happens at 0.01° Celsius and a pressure of 611.657 Pa. When water, ice or water vapor is brought to this temperature and pressure, you get the weird phenomenon of water both boiling and freezing at the same time!

What phase diagrams demonstrate is that the states of matter are a function of the space between molecules within a substance. As temperature increases, vibrational forces push the molecules of a substance farther apart; likewise, as pressure increases, the molecules are pushed closer together. This balance between temperature and pressure dictates which phase of matter will exist at each discrete temperature and pressure.

More advanced phase diagrams may indicate different arrangements of molecules in the solid state as they are subjected to different temperatures and pressures. These diagrams illustrate changes in the crystal lattice structure of solid matter as it is packed more densely into different crystal arrangements.

Phase diagram of carbon dioxide.

Each substance has a different phase diagram. For example, pure carbon dioxide (CO2), composed of a single carbon atom (C) bonded to two oxygen atoms (O), is mostly a gas at normal temperatures and pressures on the surface of Earth. However, when carbon dioxide is cooled to -78° Celsius, it undergoes deposition and turns from a gas directly into a solid. Dry ice, which is solid carbon dioxide, sublimates at room temperature, turning back into a gas. It is called dry ice because the phase transition between solid and gas at normal pressures does not pass through a liquid phase the way water does. This is why dry ice kept in a cooler will not get your food wet, but will keep your food cold, and in fact much colder than normal frozen ice made of H2O.

Strange things happen when gases are heated and subjected to increasingly high pressures. At some point, these hot gases under increasing compression become classified as a supercritical fluid. Supercritical fluids act both like a gas and a liquid, suggesting an additional fourth state of matter. A supercritical fluid of H2O occurs when water is raised to temperatures above 374° Celsius and subjected to 22.1 MPa or more of pressure; at this point the supercritical water will appear as a cloudy, steamy fluid. A supercritical fluid of CO2 occurs at temperatures above 31.1° Celsius and pressures of 7.39 MPa or more. Because supercritical fluids act like both a liquid and a gas, they can be used as solvents in dry cleaning without getting fabrics wet. Supercritical fluids are also used to decaffeinate coffee beans, as caffeine is absorbed by supercritical carbon dioxide when it is mixed with the beans.

Phase diagrams get more complex when you consider two or more substances mixed together and examine how they interact with each other. These more complex phase diagrams with two different substances are called binary systems, as they compare not only temperatures and pressures but also the ratio of two (and sometimes more) components. Al-Kindi, when developing his distillation processes, utilized the difference in boiling temperature between water (H2O), which boils at 100° Celsius, and alcohol (C2H6O), which boils at 78.37° Celsius. The vapor captured from a mixture of water and alcohol heated to 78.37° Celsius is enriched in alcohol; when this separated vapor is cooled, it condenses into a more concentrated form of alcohol. This is how distillation works.

Example of a simple distillation set up, which uses different phases of matter at different temperatures to separate out different liquid molecules.

Utilizing the knowledge of phase diagrams, the distribution of the 94 naturally occurring elements among different substances can be elucidated, and scientists can determine how substances become enriched or depleted in these naturally occurring elements as a result of changes in temperature and pressure.

## Plasma

Plasma describes matter in which electrons flow freely of their nuclei, as seen in electrical sparks, lightning, and the glowing envelope of the sun. Although often counted as a fourth state of matter, plasma behaves very differently from the three states discussed above, because its free electrons interact strongly with electromagnetic radiation. Electrons also play a vital role in bonding different types of atoms together. In the next module you will be introduced to additional phases of matter at the extreme limits of phase diagrams.

## Density

Different phases of matter have different densities. Density, as you may recall, is a measure of a substance's mass per volume; in other words, it is the amount of matter (mass) within a given space (volume). Specific gravity is a substance's density compared to that of water. A simple test is to see whether an object floats or sinks: a specific gravity of precisely 1 means the object has the same density as water. Substances, whether solid, liquid or gas, with a specific gravity higher than 1 will sink in water, while substances with a specific gravity lower than 1 will float. The specific gravity of liquids is measured using a hydrometer. Otherwise, density is measured by finding an object's mass and dividing it by its measured volume (usually found by displacement of water if the object is an irregular solid).

A column of colored liquids with different densities.

Most substances have a higher density as a solid than as a liquid, and most liquids have a greater density than their gas phase. This is because solids pack more atoms into less space than a liquid, and many more atoms are packed into a solid than into a gas. There are exceptions to this rule: for example, ice, the solid form of water, floats. This is because there is less mass per volume in an ice cube than in liquid water; the crystal lattice of ice (H2O) forms a less dense network of bonds between atoms, spreading out over more space to accommodate the lattice structure. This is why leaving a soda can in the freezer will cause it to expand and burst open. However, most substances are denser in their solid phase than in their liquid phase.

Density is measured in kg/m3, or as specific gravity (in comparison to liquid water). Liquid water has a density of 1,000 kg/m3 at 4° Celsius, and steam (water vapor) has a density of 0.6 kg/m3. Milk has a density of 1,026 kg/m3, slightly more than pure water, and the density of air at sea level is about 1.2 kg/m3. At 100 kilometers above the surface of the Earth (near the edge of outer space), the density of air drops to 0.00000055 kg/m3 (5.5 x 10-7 kg/m3).
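
The sink-or-float rule can be written as a one-line comparison of specific gravities. This sketch uses the milk density quoted above; the density of ice (about 917 kg/m3) is a commonly cited value assumed here, not one given in the text:

```python
WATER_DENSITY = 1000.0  # kg/m^3, liquid water at 4 C

def specific_gravity(density_kg_m3):
    """Specific gravity: density relative to liquid water."""
    return density_kg_m3 / WATER_DENSITY

def floats_in_water(density_kg_m3):
    """True if the substance floats (specific gravity below 1)."""
    return specific_gravity(density_kg_m3) < 1.0

print(floats_in_water(917.0))   # ice (~917 kg/m^3): True, it floats
print(floats_in_water(1026.0))  # milk (1,026 kg/m^3): False, it sinks
```

This is exactly why an ice cube rides at the top of a glass of water while a denser liquid sinks beneath it.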

## Mass

A balancing scale to measure mass.

Remember that the gravitational force exerted on an object depends on both the object's mass and the acceleration of gravity (g), which in turn depends on the mass of the planet. This previously came into our discussion of calculating the density of the Earth, in refuting the hypothesis of a hollow center inside the Earth.

It is important to distinguish an object's Mass from its Weight. Weight is the combined effect of gravity (g) and an object's Mass (M), such that Weight = M x g. This is why objects in space are weightless, and why objects have different weights on other planets: the value of g differs depending on the mass and size of each planet. However, Mass, which reflects the total amount of matter within an object, remains the same no matter which planet you visit.
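
The distinction between mass and weight is easy to see numerically. The surface gravity values below are commonly cited approximations assumed for this sketch (the text does not list them):

```python
def weight_newtons(mass_kg, g):
    """Weight = Mass x acceleration of gravity (W = M x g)."""
    return mass_kg * g

# Approximate surface gravities in m/s^2 (assumed, commonly cited values)
g_earth, g_moon, g_mars = 9.81, 1.62, 3.71

mass = 70.0  # a 70 kg person: the mass is the same everywhere
print(f"Earth: {weight_newtons(mass, g_earth):.0f} N")  # ~687 N
print(f"Moon:  {weight_newtons(mass, g_moon):.0f} N")   # ~113 N
print(f"Mars:  {weight_newtons(mass, g_mars):.0f} N")   # ~260 N
```

The same 70 kg of matter weighs very different amounts on each world, while the mass itself never changes.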

A spring scale to measure weight.

Weight is measured by spring scales, which record the displacement of a spring as the combined effect of mass and gravity pulls an object toward the Earth. Mass is measured by scales that compare an object to standards, as in a balance-type scale, where standards of known mass are balanced against the object. \newpage

# 3b. Atoms: Electrons, Protons and Neutrons.

## Planck’s length, the fabric of the universe, and extreme forms of matter

What would happen to water (H2O) if you subjected it to the absolute zero temperature predicted by Lord Kelvin, 0 Kelvin or -273.15° Celsius, under a complete vacuum of 0 Pascals of pressure? What would happen to water (H2O) if you subjected it to extremely high temperatures and pressures, like those found in the cores of the densest stars in the universe?

The answers to these questions may seem beyond the limits of practical experimentation, but new research is discovering new states of matter at these limits. These additional states of matter exist at the extreme ends of all phase diagrams, at the limits of observable temperature and pressure. It is here, in the corners of phase diagrams, that matter behaves in strange ways. These new forms of matter were predicted nearly a century before they were discovered, through a unique collaboration between two scientists living on different sides of the Earth.

Satyendra Nath Bose.

As the eldest boy in a large family with seven younger sisters, Satyendra Nath Bose grew up in the bustling city of Calcutta, India. His family was well off, as his father was a railway engineer and a member of the upper-class Hindu society of the Bengal Presidency. Bose showed an aptitude for mathematics and rose up the ranks as a teacher, later becoming a professor at the University of Dhaka, where he taught physics. Bose read Albert Einstein's papers, translated his writings from German into English, and started a correspondence with Albert Einstein. While lecturing his class in India on Planck's constant and black body radiators, he stumbled upon a unique realization: a statistical mistake in the accepted description of the interaction between atoms and photons (electromagnetic radiation).

As you might recall, Planck's constant relates to how light or energy striking matter is absorbed or radiated in a perfect black body radiator. In 1900, Max Planck used his constant (${\displaystyle h}$ ) to calculate the minimum possible distance between wavelengths of photons of electromagnetic radiation. The equation is

${\displaystyle {\text{Planck's Length}}={\sqrt {\frac {\hbar G}{c^{3}}}}}$

where ℏ is the reduced Planck's constant, equal to h divided by 2π, or 1.054571817 x 10-34 Joule-seconds; G is the gravitational constant from Henry Cavendish's calculation, G = 6.67408x10−11 Meters3/(Kilograms Seconds2); and c is the speed of light in a vacuum, 299,792,458 Meters per Second.

This length is called Planck's length. It is the theoretical smallest distance between wavelengths of the highest-energy electromagnetic radiation possible. It also relates to the theoretical smallest distance between electrons within an atom. The currently calculated Planck's length is 1.6 x 10-35 meters, which is incredibly small: written out, the first digit sits behind 34 zeros, or 0.000000000000000000000000000000000016 meters.
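
Plugging the constants quoted above into the formula reproduces this value:

```python
import math

# Constants as quoted in the text
h_bar = 1.054571817e-34  # reduced Planck's constant, J*s
G = 6.67408e-11          # gravitational constant, m^3/(kg*s^2)
c = 299792458.0          # speed of light in a vacuum, m/s

# Planck's length = sqrt(h_bar * G / c^3)
planck_length = math.sqrt(h_bar * G / c ** 3)
print(f"{planck_length:.2e} m")  # about 1.6e-35 meters
```

The square root of an already tiny quotient yields the famous 1.6 x 10-35 meters.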

## Bohr's Model of the Atom

Satyendra Nath Bose was also aware of a new model of the atom, proposed by the Danish scientist Niels Bohr, who viewed atoms as arranged much like the solar system, with planets orbiting around a star, but instead of planets, tiny electrons orbiting around the atom's nucleus. Under Bohr's model of the atom, the simplest type of atom (hydrogen) is a single electron orbiting around a nucleus composed of a single proton.

Bohr's model of the simple Hydrogen atom, with 1 proton and 1 electron moving between two energy states, and releasing energy.

Niels Bohr

## Electron Orbital Shells

Experiments in fluorescence demonstrate that when electromagnetic radiation, such as light, is absorbed by atoms, the electrons rise to a higher energy state. They subsequently fall back down to their natural energy state and release energy as photons. This is why materials glow when heated, and why radioactive materials glow when subjected to gamma or x-ray electromagnetic radiation. Scientists can measure the amount of energy released as photons when this occurs, and Niels Bohr suggested that the amount of energy released appeared to be related to the distances between orbital shells, in tiny units measured in Planck's lengths. Niels Bohr developed a model explaining how each orbital shell holds an increasing number of electrons as the number of protons increases.

One way to think of these electron orbital shells is that they are like notches along a ruler. Electrons must encircle each atom's nucleus at one or more of those discrete notches, which are separated by distances measured in Planck lengths, the smallest measurement of distance theoretically possible. To test this idea, scientists excited atoms with high-energy light and measured the amount of electromagnetic radiation that was emitted by the atoms. When electrons absorb light they move up the notches by discrete Planck lengths; they then move back down a notch and release photons, emitting electromagnetic radiation in the process, until they settle on a notch that is supported by an equal number of protons in the nucleus.

Albert Einstein the year he earned his Nobel Prize.

This effect is called the photoelectric effect. Albert Einstein earned his Nobel Prize in 1921 for showing that it is the frequency of the electromagnetic radiation, multiplied by Planck's constant, that determines the energy output.

Such that E = hν, where E is the energy measured in joules, h is Planck's constant, and ν is the frequency of the electromagnetic radiation. We can use ν = c / λ, where c is the speed of light and λ is the wavelength, to determine the frequency ν of light of different wavelengths, finding that the shorter the wavelength, the higher the energy.
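These two relations combine into a short calculation. Here is a sketch in Python; the `photon_energy` helper is ours, and the red and violet wavelengths are illustrative round numbers, not values from the text:

```python
h = 6.62607015e-34  # Planck's constant, in joule-seconds
c = 299_792_458     # speed of light, in m/s

def photon_energy(wavelength):
    """E = h * v, where the frequency v = c / wavelength."""
    v = c / wavelength  # frequency of the light, in hertz
    return h * v        # energy of one photon, in joules

red = photon_energy(700e-9)     # red light, roughly 700 nm
violet = photon_energy(400e-9)  # violet light, roughly 400 nm
# The shorter (violet) wavelength carries more energy per photon.
```

Comparing the two results confirms the rule stated above: shorter wavelength, higher energy.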

As electrons move up the notches away from the nucleus by absorbing more electromagnetic radiation, they can eventually become so excited that they break completely free of the nucleus altogether and become free electrons (electricity). This happens especially with metallic materials, which hold their orbiting electrons loosely, but can theoretically happen with any type of material, given enough electromagnetic radiation. This is what happens to matter when it is heated: the electrons move upward in their energy states, the atoms jiggle and pass energy to the surrounding particles as electromagnetic radiation, and the matter expands. This is why there is an overall trend, with increasing temperature and decreasing pressure, toward matter that is less dense, expanding in volume from a solid to a liquid to a gas; with enough energy the electrons become freed from the nucleus, resulting in a plasma of free-flowing electrons, or electricity.

The Periodic Table of Elements is organized by orbital shells of electrons.

The notches at which electrons encircle the nucleus are focused upon certain orbital shells of stability, such that the number of electrons exactly matches the number of protons within the nucleus and fills the orbital shells in sequential order. These orbital shells of stability form the organization of the Periodic Table of Elements that you see in many classrooms.

One way to think of these orbital shells of stability is as discrete notches on a ruler, each “centimeter” on this ruler representing an orbital distance in the electron shell. There can be smaller units, such as millimeters, with the smallest unit measured in Planck lengths. Scientists were eager to measure these tiny distances within atoms, but found it impossible, because the electrons behave not like planets orbiting a sun, but as oscillating waves forming a probability function around each of those discrete distances of stability. Hence it is impossible to predict the exact location of an electron along these notched distances from the nucleus. This is known as the Heisenberg Uncertainty Principle, which states that the position and the velocity of an electron cannot both be measured exactly at the same time. Electrons, like photons, behave as oscillating waves as they encircle the nucleus, making it impossible to measure a specific position of an electron within its orbit. The study of atomic structures such as this is called quantum physics.

Satyendra Nath Bose had read Einstein’s work on the subject, and noted some mathematical mistakes in Einstein’s calculations of the photoelectric effect. Bose offered a new solution, and asked Einstein to translate the work into German for publication. Einstein generously agreed, and Bose’s paper was published. Einstein and Bose then applied this new solution to the question of what happens to these electron orbitals when atoms are subjected to Lord Kelvin’s extremely low temperature of absolute zero.

Einstein, following Bose, proposed that the electron orbital distances would collapse, moving down to the lowest possible notch on the Planck scale. This tiny remaining distance prevents the atom from collapsing, and is referred to as zero-point energy. What is so strange is that all atoms, no matter how many protons or electrons they contain, will undergo a similar collapse of their electrons down to the lowest notch at these extremely low temperatures.

At this point the atoms become a new state of matter called Bose-Einstein condensate. Bose-Einstein condensate has some weird properties. First, it is a superconductor, because electrons are weakly held to the nucleus; second, all elements except helium become solids; and, strangest of all, atoms in this state will exhibit the same chemical properties, since the electrons are so close to the nucleus and occupy the lowest orbital shell.

Helium, which is a gas at normal room temperatures and pressures, has two protons and two electrons. When it is cooled to absolute zero in a vacuum, it remains in liquid form rather than becoming a denser solid like all other elements, and only when additional pressure is added does helium eventually turn into a solid. It is the only element to do this; all others become solids at absolute-zero temperatures. This is because the zero-point energy in the electron orbitals is enough to keep helium a liquid even at temperatures approaching absolute zero. In 1995 two scientists at the University of Colorado, Eric Cornell and Carl Wieman, supercooled rubidium-87, generating the first evidence of Bose-Einstein condensate in a lab, which earned them a Nobel Prize in 2001. Since then numerous other labs have been experimenting with Bose-Einstein condensate, pushing electrons within a hair's breadth of the nucleus.

What happens to atoms when subjected to intense heat and pressure? Electrons will move up these notches until they are far enough from the nucleus that they leave the atom and become a plasma, a flow of free electrons. Hence the first thing that happens at high pressure and high temperature is the generation of electricity from the free flow of these electrons. If pressure and temperature continue to increase, protons will convert to neutrons, releasing photons as gamma radiation as well as neutrinos. This nuclear fusion is what generates the energy inside the cores of stars, such as the Sun. If neutrons are subjected to even more pressure and temperature, they form black holes, the most mysterious form of matter in the universe.

One of the frontiers of science is the linkage between the extremely small Planck’s length and the observed cosmic expansion of the universe as determined by Hubble’s constant. One way to describe this relationship is to imagine a fabric to matter, which is being stretched apart (expanding) at the individual atomic level resulting in an expanding universe. The study of this aspect of science is called cosmology.

## The Atom

### Electrons

In chemistry, the electrons are often considered the most important aspect of the atom, because they determine how atoms bond together to form molecules. However, electrons can move around between atoms, and even form plasma. Perhaps of more importance in chemistry is the number of protons within the nucleus of the atom.

### Protons

The atomic number, using helium as an example: helium has an atomic number of 2 (# of protons), but an atomic mass of 4 (# of protons + neutrons).

The number of protons within an atom determines the name of the element: all atoms with 1 proton are called hydrogen, atoms with 2 protons are called helium, and atoms with 3 protons are called lithium. The number of protons in an atom is referred to as the atomic number (Z). Each element is classified by its atomic number, which appears in the top corner of a periodic table of elements, along with the chemical symbol of each element. The first 26 elements formed during fusion in the early proto-sun, while the elements with atomic numbers higher than 26 formed during the supernova event, and elements higher than 94 are not found in nature and must be synthesized in labs. Here is a list of elements, giving the atomic number and name of each element as of 2020.

Elements formed in the sun through fusion 1-Hydrogen (H) 2-Helium (He) 3-Lithium (Li) 4-Beryllium (Be) 5-Boron (B) 6-Carbon (C) 7-Nitrogen (N) 8-Oxygen (O)

Elements formed in the larger proto-sun through fusion 9-Fluorine (F) 10-Neon (Ne) 11-Sodium (Na) 12-Magnesium (Mg) 13-Aluminium (Al) 14-Silicon (Si) 15-Phosphorus (P) 16-Sulfur (S) 17-Chlorine (Cl) 18-Argon (Ar) 19-Potassium (K) 20-Calcium (Ca) 21-Scandium (Sc) 22-Titanium (Ti) 23-Vanadium (V) 24-Chromium (Cr) 25-Manganese (Mn) 26-Iron (Fe)

Elements formed from the Supernova Event 27-Cobalt (Co) 28-Nickel (Ni) 29-Copper (Cu) 30-Zinc (Zn) 31-Gallium (Ga) 32-Germanium (Ge) 33-Arsenic (As) 34-Selenium (Se) 35-Bromine (Br) 36-Krypton (Kr) 37-Rubidium (Rb) 38-Strontium (Sr) 39-Yttrium (Y) 40-Zirconium (Zr) 41-Niobium (Nb) 42-Molybdenum (Mo) 43-Technetium (Tc) 44-Ruthenium (Ru) 45-Rhodium (Rh) 46-Palladium (Pd) 47-Silver (Ag) 48-Cadmium (Cd) 49-Indium (In) 50-Tin (Sn) 51-Antimony (Sb) 52-Tellurium (Te) 53-Iodine (I) 54-Xenon (Xe) 55-Caesium (Cs) 56-Barium (Ba) 57-Lanthanum (La) 58-Cerium (Ce) 59-Praseodymium (Pr) 60-Neodymium (Nd) 61-Promethium (Pm) 62-Samarium (Sm) 63-Europium (Eu) 64-Gadolinium (Gd) 65-Terbium (Tb) 66-Dysprosium (Dy) 67-Holmium (Ho) 68-Erbium (Er) 69-Thulium (Tm) 70-Ytterbium (Yb) 71-Lutetium (Lu) 72-Hafnium (Hf) 73-Tantalum (Ta) 74-Tungsten (W) 75-Rhenium (Re) 76-Osmium (Os) 77-Iridium (Ir) 78-Platinum (Pt) 79-Gold (Au) 80-Mercury (Hg) 81-Thallium (Tl) 82-Lead (Pb) 83-Bismuth (Bi) 84-Polonium (Po) 85-Astatine (At) 86-Radon (Rn) 87-Francium (Fr) 88-Radium (Ra) 89-Actinium (Ac) 90-Thorium (Th) 91-Protactinium (Pa) 92-Uranium (U) 93-Neptunium (Np) 94-Plutonium (Pu)

Non-naturally occurring elements, synthesized in labs 95-Americium (Am) 96-Curium (Cm) 97-Berkelium (Bk) 98-Californium (Cf) 99-Einsteinium (Es) 100-Fermium (Fm) 101-Mendelevium (Md) 102-Nobelium (No) 103-Lawrencium (Lr) 104-Rutherfordium (Rf) 105-Dubnium (Db) 106-Seaborgium (Sg) 107-Bohrium (Bh) 108-Hassium (Hs) 109-Meitnerium (Mt) 110-Darmstadtium (Ds) 111-Roentgenium (Rg) 112-Copernicium (Cn) 113-Nihonium (Nh) 114-Flerovium (Fl) 115-Moscovium (Mc) 116-Livermorium (Lv) 117-Tennessine (Ts) 118-Oganesson (Og)

Reading through these names reveals a mix of familiar elements, such as oxygen, helium, iron and gold, and the unusual; it may be the first time you have heard of indium, technetium, terbium or holmium. This is because each element has a different occurrence in nature, with some orders of magnitude more common on Earth than others. For example, the element with the highest atomic number, element 118, oganesson, formally named in 2016, is so rare that only 5 to 6 single atoms have been reported by scientists. These elements are extremely rare because the more protons the nucleus of an atom contains, the more unstable the atom becomes.

Atoms with more than 1 proton need additional neutrons to overcome the repulsion between the two or more protons. Protons are positively charged and attract negatively charged electrons, but these positive charges also push protons away from each other. The addition of neutrons helps stabilize the nucleus, allowing multiple protons to co-exist. In general, the more protons an atom contains, the more unstable the atom becomes, resulting in radioactive decay. This is why elements with large atomic numbers, like 90 for thorium, 92 for uranium and 94 for plutonium, are radioactive. Scientists speculate that even higher atomic numbers beyond 118 might exist where the atoms could be stable, but so far they have not been discovered. Another important fact is that, unlike electrons, protons have atomic mass. This fact will be revisited when you learn how scientists determine which elements are actually within solids, liquids and gases.

### Neutrons

The last component of atoms is the neutron. Neutrons, like protons, have an atomic mass, but lack any charge, and hence are electrically neutral with respect to electrons. Neutrons form in stars by the fusion of protons, but can also appear in the beta decay of atoms during nuclear fission. Unlike protons, which can be free and stable independent of electrons and neutrons (as hydrogen ions), free neutrons on Earth quickly decay to protons within a few minutes. These free neutrons are produced through the beta decay of larger elements. Neutrons are stable, however, within the cores of the densest stars, neutron stars, whose gigantic gravitational accelerations hold them together. On Earth, neutrons almost exclusively exist within atoms alongside protons, adding stability to atoms with more than 1 proton. Protons and neutrons are the only atomic particles within the nucleus, and the only atomic particles with atomic mass. \newpage

# 3c. The Chart of the Nuclides.

## What is an isotope?

Margaret Todd

Twenty-seven years earlier, the chemist Frederick Soddy was attending a dinner party with his wife’s family in Scotland. During the dinner he got into a discussion with a guest named Margaret Todd, a retired medical doctor. Conversation likely turned to the research Soddy was doing on atomic structure and radioactivity. Soddy had recently discovered that atoms could be identical on the outside but have differences on their insides. This difference would not appear on standard periodic tables, which arrange elements by the number of electrons and protons, and he was trying to come up with a different way to arrange these new substances. Margaret Todd suggested the term “isotope” for them: iso- meaning same, and -tope meaning place. Soddy liked the term, and published a paper later that year using the new term isotope to denote atoms that differ only in the number of neutrons in the nucleus, but have the same number of protons.

Protons and neutrons exist only within the center of an atom, in its nucleus, and each particular combination of protons and neutrons is called a nuclide. An arguably better way to organize the different types of atoms is to chart the number of protons (Z) and the number of neutrons (N) inside the nucleus (see https://www.nndc.bnl.gov/ for an interactive chart). Unlike a periodic table of elements, every single type of atom can be plotted on such a chart, including atoms that are not seen in nature or are highly unstable (radioactive). This type of chart is called the chart of the nuclides.

The full Chart of the Nuclides, arranging types of atoms by the number of protons (Z) and neutrons (N). Black atoms are stable; other colors radioactively decay at differing rates.

For example, we can have an atom with 1 proton and 0 neutrons, which is called hydrogen. However, we can also have an atom with 1 proton and 1 neutron, which is called hydrogen as well; the name of the element only indicates the number of protons. In fact, you can theoretically have hydrogen with 1 proton and 13 neutrons. Such atoms don’t appear to exist on Earth, because at Earth’s pressures and temperatures it is nearly impossible to get 13 neutrons to come together with a single proton; however, such atoms might exist in extremely dense stars. A hydrogen with 1 proton and 13 neutrons would act similarly to normal hydrogen, but would have an atomic mass of 14 (1 + 13), making it much heavier than normal hydrogen, which has an atomic mass of only 1. Atomic mass is the total number of protons and neutrons in an atom.

Most charts of the nuclides don’t include atoms that have not been observed; however, hydrogen with 1 proton and 1 neutron has been discovered, and is called an isotope of hydrogen. Isotopes are atoms with the same number of protons but different numbers of neutrons. Isotopes can be stable or unstable (radioactive). For example, hydrogen has two stable isotopes, atoms with 1 proton and 0 neutrons and atoms with 1 proton and 1 neutron, while atoms with 1 proton and 2 neutrons are radioactive. Note that atomic mass differs depending on the isotope, such that we could call a hydrogen isotope with 1 proton and 0 neutrons (atomic mass 1) light, compared to a heavy isotope of hydrogen with 1 proton and 1 neutron (atomic mass 2). Scientists will often refer to isotopes as either light or heavy, or by a superscript prefix, such as 1H and 2H, where the superscript prefix indicates the atomic mass.

Close-up of the base of the chart, showing isotopes from hydrogen to boron. Note that the axes are reversed from the chart above: the number of protons is on the vertical axis (y-axis) and the number of neutrons on the horizontal axis (x-axis), such that all atoms with 1 proton are H, 2 are He, etc.

In 1931, Harold Urey and his colleagues Ferdinand G. Brickwedde and George M. Murphy isolated heavy hydrogen (2H) by distilling liquid hydrogen over and over again, purifying it to contain more of the heavy hydrogen. In discovering heavy hydrogen, Harold Urey named this type of atom deuterium (sometimes abbreviated as D). Only the isotopes of hydrogen have their own names; isotopes of all other elements are known by their atomic mass number, such as 14C (i.e. carbon-14). The number indicates the atomic mass, which is the number of protons plus the number of neutrons, so 14C (carbon-14) has 6 protons and 8 neutrons (6 + 8 = 14).

Hydrogen that contains 1 proton and hydrogen that contains 1 proton and 1 neutron behave similarly in their bonding properties to other atoms and are difficult to tell apart; hydrogen, no matter the number of neutrons, will have 1 electron to match its 1 proton.

They do, however, have slightly different physical properties because of the difference in mass. For example, 1H will release 7.2889 Δ(MeV), while 2H (deuterium) will release 13.1357 Δ(MeV), slightly more energy, when subjected to photons, because the nucleus of the atom contains more mass and the electron orbital shells are pulled a few Planck lengths closer to the nucleus in deuterium than in typical hydrogen. The excited electrons have farther to fall and release more energy. These slight differences in chemical properties allow isotopes to undergo fractionation. Fractionation is the process of changing the abundance or ratio of various isotopes within a substance, by either enriching or depleting those isotopes.

## Heavy Water

Water that contains deuterium, or heavy hydrogen, has a higher boiling temperature of 101.4 degrees Celsius (at 1 atmosphere of pressure), compared to normal water, which boils at 100 degrees Celsius (at 1 atmosphere of pressure). Deuterium is very rare, accounting for only 0.0115% of hydrogen atoms, so isolating deuterium requires boiling away a lot of water and keeping the last remaining drops each time, over and over again, to increase the proportion of deuterium in the water. Heavy water is expensive to make because it requires so much normal water, distilled over and over again. This is a process of fractionation.

Irène Joliot-Curie

In 1939 deuterium was discovered to be important in the production of plutonium-239 (239Pu), a radioactive isotope used to make atomic weapons. In an article published in the peer-reviewed journal Nature in 1939, Marie Curie’s daughter Irène Joliot-Curie and her husband Frédéric Joliot-Curie described how powerful plutonium-239 could be, and how it could be made from uranium using deuterium to moderate free neutrons. The article excited much interest in Nazi Germany, and a campaign was made to produce deuterium. Deuterium bonded to oxygen in water molecules is called heavy water. In 1940 Germany invaded Norway and captured the Vemork power station at the Rjukan waterfall in Telemark, Norway, which had produced deuterium for Leif Tronstad’s lab, but was now capable of producing deuterium for the Germans in the production of plutonium-239 (239Pu).

Leif Tronstad needed to warn the world that Germany would soon have the ability to make plutonium-239 (239Pu) bombs. But the fighting across Norway was going poorly; the city of Trondheim in the north soon surrendered, and Leif Tronstad was now a resistance fighter in a country overrun by Nazi Germany. He sent a coded message to Britain warning them of the increased production of deuterium by the Germans. Unable to verify that the message had been received, he had to escape Norway and warn the world himself. Leif Tronstad left his family’s cabin on skis, made his way over the Norwegian border with Sweden, and found passage to England. Once in England his warning was received with grave concern by Winston Churchill, who would later write: “heavy water – a sinister term, eerie, unnatural, which began to creep into our secret papers. What if the enemy should get the atomic bomb before we did! We could not run the mortal risk of being outstripped in this awful sphere.”

## The Race for the Atomic Bomb

The Vemork Hydroelectric Plant in 1935, the source of heavy water.

## The Hydrogen Bomb

With a knowledge of isotopes and an understanding of how to read the chart of the nuclides, you can understand the frightening nature of atomic power. For example, there is another isotope of hydrogen that contains 1 proton and 2 neutrons, called tritium (3H), which has an atomic mass of 3. Unlike deuterium, which is stable, tritium is very radioactive and will decay within a few years, with a half-life of 12.32 years. Half-life is the length of time for half of the atoms to decay, so in 12.32 years 50% of the atoms will remain, in 24.64 years only 25% of the atoms will remain, and every 12.32 years into the future the percentage of remaining tritium will decrease by one half. As a very radioactive isotope, tritium is made inside hydrogen atomic bombs (H-bombs) through the fission of the stable isotope lithium-6 (6Li) with free neutrons, which acts like a catalyst, increasing the energy released. Tritium does not exist in nature because it decays so quickly, but it is a radioactive component of the nuclear fall-out of the much more powerful H-bombs, or hydrogen bombs, first tested after the war in 1952.
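The halving described above follows one simple rule: the fraction of the parent isotope remaining after a time t is 0.5 raised to the power of t divided by the half-life. A minimal sketch in Python (the function name is ours, chosen for illustration):

```python
TRITIUM_HALF_LIFE = 12.32  # years, as given in the text

def fraction_remaining(years, half_life=TRITIUM_HALF_LIFE):
    """Fraction of the parent isotope left after `years`:
    N / N0 = 0.5 ** (t / half-life)."""
    return 0.5 ** (years / half_life)

print(fraction_remaining(12.32))  # 0.5, i.e. 50% after one half-life
print(fraction_remaining(24.64))  # 0.25, i.e. 25% after two half-lives
```

The same function works for any radioactive isotope; only the half-life changes.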

The dreaded Hydrogen bomb, tested on Bikini Atoll in 1954.

Are there hydrogen atoms with 1 proton and 3 neutrons? No; it appears that atoms with this configuration can’t persist. Hydrogen atoms with 3, 4, 5 or 6 neutrons decay so quickly that it is nearly impossible to detect them. The energy released can be measured when hydrogen atoms are bombarded by neutrons, but these atoms are so unstable they can’t exist for any length of time. In fact, for most proton and neutron combinations there are no existing atoms in nature. The numbers of protons and neutrons are fairly equal, although the larger the atom, the more neutrons are present. For example, plutonium contains 94 protons (the greatest number of protons in a naturally occurring element), but between 145 and 150 neutrons to hold those 94 protons together; even with these neutrons, all isotopes of plutonium are radioactive, with the radioactive 244Pu isotope having the longest half-life, 80 million years. Oganesson (294Og) is the largest isotope ever synthesized, with 118 protons and 176 neutrons (118 + 176 = 294), but it has a half-life of only 0.69 microseconds!

There are 252 isotopes of elements that don’t decay and are stable. The largest stable isotope was long thought to be 209Bi, but it has recently been discovered to decay very, very slowly, with a half-life more than a billion times the age of the universe. The largest stable isotope known is therefore 208Pb (lead), which has 82 protons and 126 neutrons. There are actually three stable isotopes of lead, 206Pb, 207Pb and 208Pb, all of which appear not to decay over time. \newpage

# 3d. Radiometric dating, using chemistry to tell time.

## Radiometric dating to determine how old something is – the hourglass analogy

The radioactive decay of isotopes and the use of excited electron energy states have come to dominate how we tell time, from the quartz crystals in your wristwatch and computer to the atomic clocks onboard satellites in space. Measuring radioactive isotopes and electron energy states is the major way we tell time in the modern age. It also enables scientists to determine the age of an old manuscript a few thousand years old, as well as to uncover the age of the Earth itself: 4.6 billion years old. Radioactive decay of isotopes has revolutionized how we measure time, from milliseconds up to billions of years, but how is this done?

First, imagine an hourglass: two glass spheres connected by a narrow tube, filled with sand. When turned over, sand from the top portion of the hourglass falls to the bottom. This rate of falling is linear, because only the sand positioned near the opening between the glass spheres can fall. Over time the ratio of sand in the top and bottom of the hourglass changes, so that after 1 hour all the sand has fallen to the bottom. Note that an hourglass can’t be used to measure years, nor milliseconds: over years, all the sand will have fallen, and over milliseconds, not enough sand will have fallen in that short length of time. The ratio is measured by determining the amount of sand in the top of the hourglass and the amount in the bottom. In the chemistry of radioactive decay, we call the top sand the parent element and the bottom sand the daughter element of decay.

An "hourglass" where sand drops from the top (parent) to the bottom (daughter), which can be used to measure the passage of time up to 11 minutes. The graph shows the ratio of sand at each moment, with an arrow pointing to the moment when half of the sand is in the top and half in the bottom (this is known as the half-life, here 6 minutes). This is an example of linear decay.

## Radiometric dating to determine how old something is – the microwave popcorn analogy

Radiometric decay does not work like an hourglass, since each atom has the same probability of decaying, whereas in an hourglass only the sand near the opening will fall. So a better analogy than an hourglass is popcorn, in particular microwave popcorn. A bag of popcorn will have a ratio of kernels to popped corn, such that the longer the bag is in the microwave oven, the more popped corn will be in the bag. You can determine how long the bag was cooked by measuring the ratio of kernels to popped corn. If most of the bag is still kernels, the bag was not cooked for long, while if most of the bag is popped corn, it was cooked for a longer time.

Microwave popcorn where kernels (Parent) pop into popcorn (Daughter), which can be used to measure the passage of time up to 11 minutes. The graph shows the ratio of kernels to popcorn at each moment, with an arrow pointing to the moment when half of the kernels have popped (this is known as half-life, which is 2 minutes). This is an example of exponential decay similar to that used in radioactive dating.

The point at which half of the kernels have popped is referred to as the half-life. Half-life is the time it takes for half of the parent atoms to decay to the daughter atoms. After 1 half-life the ratio of parent to daughter will be 0.5, after 2 half-lives 0.25, after 3 half-lives 0.125, and so on: each half-life, the amount of parent atoms is halved. In a bag of popcorn with a half-life of 2 minutes, after 2 minutes you will have half un-popped kernels and half popped popcorn, after 4 minutes the ratio will be 25% kernels and 75% popcorn, and after 6 minutes only 12.5% of the kernels will remain. Every 2 minutes the number of kernels is reduced by one half.
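The key difference from the hourglass is that every kernel, like every atom, has the same chance of popping in each moment, and the halving pattern emerges on its own. This can be seen in a small simulation sketch; the kernel count and time step below are arbitrary choices for illustration, not values from the text:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def simulate_popcorn(kernels=10_000, half_life=2.0, minutes=6.0, step=0.5):
    """Each kernel has the same probability of popping in every time
    step, just as each radioactive atom has the same probability of
    decaying; exponential decay emerges from those equal odds."""
    p = 1 - 0.5 ** (step / half_life)  # chance a kernel pops in one step
    remaining = kernels
    for _ in range(int(minutes / step)):
        remaining -= sum(1 for _ in range(remaining) if random.random() < p)
    return remaining

left = simulate_popcorn()
# After 6 minutes (three 2-minute half-lives), about 12.5% of the
# 10,000 kernels should remain, i.e. roughly 1,250.
```

No kernel "knows" how long the bag has been cooking; the predictable 50%, 25%, 12.5% curve comes purely from equal per-kernel odds, which is exactly why ratios of parent to daughter atoms can serve as a clock.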

You can leave the bag in the microwave longer, but the number of kernels will drop by only half for each additional half-life, and you will likely burn the popcorn while a few kernels are still left un-popped. Radiometric dating works the same way.

## What can you date?

The first thing to consider in dating Earth materials is what precisely you are actually dating. There are four basic moments that can start the clock in measuring the age of Earth materials:

1) A phase transition from a liquid to a solid, such as the moment liquid lava or magma cools into a solid rock or crystal.

2) The death of a biological organism, the moment an organism (plant or animal) stops taking in new carbon atoms from the atmosphere or food sources.

3) The burial of an artifact or rock, and how long it has remained in the ground.

4) The exhumation of an artifact or rock, and how long it has been exposed to sunlight.

## Radiocarbon dating or C-14 dating

There are two stable isotopes of carbon (carbon-12 and carbon-13) and one radioactive isotope (carbon-14, with 6 protons and 8 neutrons). Carbon-14 decays, while carbon-12 and carbon-13 are stable and do not decay. The decay of carbon-14 to nitrogen-14 involves the conversion of a neutron into a proton. For any sample of carbon-14, half of the atoms will decay to nitrogen-14 in 5,730 years. This is the half-life: the time at which half of the atoms in a sample have decayed. This means carbon-14 dating works well with materials that are between 500 and about 25,000 years old.
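The half-life relationship can be inverted to estimate an age from a measured parent fraction. Here is a sketch using the 5,730-year half-life given above; the function is illustrative arithmetic, not a description of an actual lab procedure:

```python
import math

C14_HALF_LIFE = 5730  # years, as given in the text

def radiocarbon_age(fraction_left):
    """Invert N / N0 = 0.5 ** (t / half-life) to solve for time:
    t = half-life * log2(N0 / N)."""
    return C14_HALF_LIFE * math.log2(1 / fraction_left)

print(radiocarbon_age(0.5))   # 5730.0 years: one half-life has passed
print(radiocarbon_age(0.25))  # 11460.0 years: two half-lives have passed
```

A sample retaining half of its original carbon-14 died one half-life ago; a quarter, two half-lives ago, and so on.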

A schematic of a simple mass spectrometer with a sector-type mass analyzer. This one is set up for the measurement of carbon dioxide isotope ratios, to find the ratio of 13C to 12C.

Radiocarbon dating was first developed in the 1940s, pioneered by Willard Libby, who had worked on the Manhattan Project developing the atomic bomb during World War II. After the war, Libby worked at the University of Chicago developing carbon radiometric dating, for which he won the Nobel Prize in Chemistry in 1960. The science of radiocarbon dating has been around for a long time!

Radiocarbon dating measures the amount of time since the death of a biological organism, the moment an organism (plant or animal) stopped taking in new carbon atoms from the atmosphere or food sources. It can only be used to date organic materials that contain carbon, such as wood, plants, un-fossilized bones, charcoal from fire pits, and other material derived from organic matter. Since the half-life of carbon-14 is 5,730 years, this method is great for material that is only a few hundred or thousand years old, with an upper limit of about 100,000 years. Radiocarbon dating is mostly used in archeology, particularly in dating materials from the Holocene Epoch, the last 11,650 years.

The first step is to collect a small piece of organic material to date, being very careful not to contaminate the sample with other organic material, such as the oils on your own hands. The sample is typically wrapped in aluminum foil to prevent contamination. In the early days of radiometric dating, before the 1980s, labs would count the decays in the sample by measuring its radioactivity: the more radioactivity, the younger the material. However, a new class of mass spectrometers developed in the 1980s gave the ability to directly measure the atomic mass of atoms in these samples; the steps are complex, but yield a more precise estimate of age. The steps involve determining the amount of carbon-14, as well as the two stable types, carbon-13 and carbon-12. Since the amounts depend on the amount of material, scientists look at the ratio of carbon-14 to carbon-12, and of carbon-13 to carbon-12. The higher the ratio of carbon-14 to carbon-12, the younger the material is, while the carbon-13 to carbon-12 ratio is used to make sure there is not an excess of carbon-12 in the first measurement, and to provide a correction if there is.

One of the technical problems that needed to be overcome was that traditional mass spectrometers measure only the atomic mass of atoms, and carbon-14 has the same atomic mass as nitrogen-14. Nitrogen-14 is a very common component of the atmosphere and the air that surrounds us, which is a problem for labs. In the 1980s a new method was developed, called Accelerator Mass Spectrometry, which deals with this problem.

The first step of the process is to take your sample and either combust the carbon in a stream of pure oxygen in a special furnace or react the organic carbon with copper oxide, both of which produce carbon dioxide, a gas. The carbon dioxide (which is often cryogenically cleaned) is reacted with hydrogen at 550 to 650 degrees Celsius over a cobalt catalyst, which produces water and pure carbon from the sample in the form of powdered graphite. The graphite is held in a glass vial under vacuum to prevent contamination from the nitrogen-14 in the air, and is then purged with ultra-pure argon gas to remove any lingering nitrogen-14, which would ruin the measurement. This graphite, or pure carbon, is ionized by adding electrons to the carbon, making it negatively charged. Any lingering nitrogen-14 will not become negatively charged in the process, because it has an additional positively charged proton. An accelerator mass spectrometer accelerates the negatively charged atoms, passing them through the machine at high speed as a beam. This beam will contain carbon-14, but also ions of carbon-12 bonded to 2 hydrogens, as well as carbon-13 bonded to 1 hydrogen, all of which have an atomic mass of 14. To get rid of these carbon atoms bonded with hydrogen, the beam of molecules and atoms with atomic mass 14 is passed through a stripper that removes the hydrogen bonds, and then through a second magnet, resulting in a spread of carbon-12, carbon-13 and carbon-14 onto a detector for each mass. The ratio of carbon-14/carbon-12 is calculated, as well as the ratio of carbon-13/carbon-12, and compared to lab standards. The carbon-13/carbon-12 ratio is used to correct the carbon-14/carbon-12 ratio and to see if there is an excess of carbon-12 in the sample due to fractionation. To find the actual age in years, we need to find out the initial amount of carbon-14 that existed at the moment the organism died.

1: Formation of carbon-14 in the atmosphere. 2: Decay of carbon-14 once inside living organisms. 3: The "equal" equation applies to living organisms as they take in atmospheric carbon-14, while the unequal one applies to dead organisms, in which no new carbon-14 is added and the remaining carbon-14 decays.

Carbon-14 is made naturally in the atmosphere from nitrogen-14 in the air. In the stratosphere these atoms of nitrogen-14 are hit by cosmic rays, which bombard the nitrogen-14 with thermal neutrons, producing carbon-14 and an extra proton, or hydrogen atom. This process depends on the Earth's magnetic field and on solar energy, which vary slightly in each hemisphere and when solar anomalies happen, such as solar flares. Using tree-ring carbon-14/carbon-12 ratios, where we know the year of each tree ring, we can calibrate carbon-14/carbon-12 ratios to absolute years for the last 10,000 years.

There are two ways to report the age of materials dated this way: one is to apply these corrections, which gives what is called the radiocarbon calendar age; the other is to report the raw date determined solely from the ratio, called a carbon-14 date. Radiocarbon calendar ages will be more accurate than simple carbon-14 dates, especially for older dates.

There is one fascinating thing about determining the initial carbon-14/carbon-12 ratios for materials from the last hundred years. Because of the detonation of atomic weapons in the 1940s and 1950s, the amount of carbon-14 in the atmosphere increased dramatically after World War II, as seen in tree-ring data and measurements of carbon isotopes in atmospheric carbon dioxide.

This fact was used by neurologists studying brain cells, supporting the medical discovery that new brain cells are not formed after birth: people born before the 1940s have lower levels of carbon-14 in their brain cells in old age than people born after the advent of the nuclear age, whose brain cells have much higher levels of carbon-14. However, over the past few decades, neuroscientists have found two brain regions, the olfactory bulbs (where you get the sense of smell) and the hippocampus (where the storage of memories happens), that do grow new neurons throughout life; still, the majority of your brain is composed of the same cells throughout your life.

Radiocarbon dating works great, but like a stopwatch, it is not going to tell us about things much older than 100,000 years. For dinosaurs and older fossils, or for the rocks themselves, the next method is more widely used.

## Potassium-argon (K-Ar) Dating

Potassium-argon dating is a great method for measuring ages of materials that are millions of years old, but not great if you are looking to measure something only a few thousand years old, since potassium-40 has a very long half-life.

Potassium-argon dating measures the time since a phase transition from a liquid to a solid took place, such as the moment liquid lava or magma cools into a solid rock or crystal. It also requires that the material contain potassium in a crystal lattice structure. The most common minerals sampled for this method are biotite, muscovite, and the potassium feldspar group of minerals, such as orthoclase. These minerals are common in volcanic rocks and ash layers, making this method ideal for measuring the time when volcanic eruptions occurred.

If a volcanic ash containing these minerals is found deposited within or near the occurrence of fossils, a precise date can often be found for the fossils, or a range of dates, depending on how far stratigraphically that ash layer is from the fossils. Potassium-40 is radioactive, with a very long half-life of 1.26 billion years, making it ideal for determining ages in most geologic time ranges measured in millions of years. Potassium-40 decays to both argon-40 and calcium-40; argon-40 is a gas, while calcium-40 is a solid and very common, hence we want to look at the amount of argon-40 trapped in the crystal and compare that amount to the potassium-40 contained in the crystal (both of which are fairly rare).

Ancient volcanic ash layers preserved in sediment can be dated using Potassium-Argon dating.

This requires two steps: first, finding out how much potassium-40 is contained within the crystal, and second, how much argon-40 gas is trapped in the crystal. One of the beautiful things about potassium-argon dating is that the initial amount of argon-40 in the crystal can be assumed to be zero, since it is a gas: argon-40 was not present when the liquid cooled into a solid, so the only argon-40 found within the crystal was formed by radioactive decay of potassium-40 and trapped inside the solid crystal after that point. One of the problems with potassium-argon dating is that you have to run two different lab methods to measure the amount of potassium-40 and the amount of argon-40 within a single crystal, without destroying the crystal in the process of running those two separate tests. Ideally, we want to sample the exact spot on a crystal for both measurements with a single analysis. And while potassium-argon dating came about in the 1950s, it has become less common than another method, which is easier and more precise, and requires only a single test.
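Once both measurements exist, the age calculation itself is short. The sketch below uses the standard potassium-argon age equation with commonly used decay constants; the function name is mine, and it assumes no initial or leaked argon-40:

```python
import math

# Potassium-40 decay constants, per year (conventional values):
# total decay, and the branch that produces argon-40.
LAMBDA_TOTAL = 5.543e-10
LAMBDA_AR = 0.581e-10

def k_ar_age(ar40_atoms, k40_atoms):
    """Age in years from measured argon-40 and potassium-40 in a crystal,
    assuming zero argon-40 at crystallization and no leakage since."""
    ratio = ar40_atoms / k40_atoms
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_AR) * ratio) / LAMBDA_TOTAL
```

The branching factor is needed because only a fraction of potassium-40 decays produce argon-40; the rest become calcium-40, as the text notes.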

## 40Ar/39Ar dating method

This method uses the potassium-argon decay scheme but makes it possible to do a single lab analysis on a single point on a crystal grain, making it much more precise than the older potassium-argon method. A crystal containing potassium is isolated and studied under a microscope, making sure it is not cracked or fractured in any way. The selected crystal is subjected to neutron irradiation, which converts potassium-39 isotopes to argon-39, a gas that will be trapped within the crystal (this is similar to what cosmic rays do to nitrogen-14 to change it to carbon-14). These argon-39 atoms join any radiogenic argon-40 in the crystal as trapped gases, so we just have to measure the ratio of argon-40 to argon-39.
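The age calculation is usually written in terms of an irradiation parameter, conventionally called J, determined from a standard of known age irradiated alongside the sample. A hedged sketch of that two-step arithmetic (function names and example numbers are illustrative):

```python
import math

LAMBDA_K40 = 5.543e-10  # total potassium-40 decay constant, per year

def j_factor(standard_age, standard_ar40_ar39):
    """Irradiation parameter J, from a co-irradiated standard of known age."""
    return math.expm1(LAMBDA_K40 * standard_age) / standard_ar40_ar39

def ar_ar_age(j, ar40_ar39):
    """Age in years of the unknown sample, from its measured
    argon-40/argon-39 ratio and the J factor."""
    return math.log(1.0 + j * ar40_ar39) / LAMBDA_K40
```

Because J is calibrated against a standard, the messy details of the neutron dose cancel out, which is why a single measurement on a single laser spot suffices.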

A single crystal of biotite in a rock that can be used for argon-argon dating.

The argon-39 measurement determines approximately how much potassium was in the crystal. After being subjected to neutron irradiation, the sample crystal is zapped with a laser, which releases both types of argon gas trapped in the crystal. This gas is drawn up within a vacuum into a mass spectrometer to measure atomic masses 40 and 39. Note that argon-39 is radioactive, and decays with a half-life of 269 years, so any argon-39 measured was generated by the irradiation done in the lab. Often this method requires large, unfractured and well-preserved crystals to yield good results. The edges of the crystal, and areas near cracks within the crystal, may have let some of the argon-40 gas leak out, and will yield too young a date. Both potassium-argon and argon-argon dating tend to give minimum ages, so if a sample yields 30 million years with a 1-million-year error, the actual age is more likely to be 31 million years than 29 million years. Often potassium-argon and argon-argon dates are younger than other evidence suggests, and were likely determined from fractured crystals with some leakage of argon-40 gas. Studies will often show the crystal sampled and where the laser points are, and the dates calculated from each point in the crystal. The maximum age is often found near the center, far from any edge or crack within the crystal. Often this analysis is carried out on multiple crystals in a single rock, to get a good range, and the best resulting maximum ages are taken. While potassium-argon and argon-argon dating are widely used, they require nicely preserved crystals of fragile mineral grains such as biotite, which means that the older the rock, the less likely good crystals can be found. They also do not work well with transported volcanic ash layers in sedimentary rocks, because the crystals are damaged in the process. Geologists were eager to use other, more rugged minerals that could last billions of years yet preserve the chemistry of radioactive decay. The mineral that meets those requirements is zircon.

## Zircon Fission track dating

Zircons are tough and rugged minerals, found in many igneous and metamorphic rocks, and are composed of zirconium silicate (ZrSiO4), which forms small diamond-like crystals. Because these crystals are fairly rugged and can survive transport, they are also found in many sandstones. These transported zircons in sedimentary rocks are called detrital zircons. With most zircon dating, you are measuring the time since the phase transition from a liquid to a solid, when magma cooled into a solid zircon crystal.

A small zircon crystal.

Zircon fission track dating more specifically measures the time since the crystal cooled to 230 to 250 °C, which is called the annealing temperature. Between 900 °C and 250 °C the zircons are somewhat mushy. Zircon fission track dating dates the cooler temperature at which the crystal became hard, while another method dates the hotter temperature at which the crystal became a solid. Zircons are composed of a crystal lattice of zirconium bonded to silicate (silica and oxygen tetrahedra). The zirconium is often replaced in the crystal by atoms of similar size and bonding properties, including some of the rare earth elements, but what we are interested in is that zircon crystals contain trace amounts of uranium and thorium. Uranium and thorium are two of the largest naturally occurring atoms on the periodic table: uranium has 92 protons, while thorium has 90. Both elements are radioactive and decay with long half-lives. These atoms of uranium and thorium act like mini-bombs inside the crystal; when one of these high atomic mass atoms decays, it sets off a long chain reaction of decaying atoms, the fission of which causes damage to the internal crystal structure.

The older the zircon crystal is, the more damage it will exhibit. Fission track dating was developed as an independent test of potassium-argon and argon-argon dating, as it does not require an expensive mass spectrometer, but simply involves looking at the crystal under a powerful microscope and measuring the damage caused by the radioactive decay of uranium and thorium. Zircon fission track dating is also used to determine the thermal history of rocks as they rose up through the geothermal gradient, recording the length of time it took them to cool to 250 °C.

Decay chain of Uranium-235 to Lead-207 (note that Radon is a gas)

Decay Chain of Uranium-238 to Lead-206 (note that Radon is a gas).

During the 1940s and 1950s a young scientist named Clair Cameron Patterson was trying to determine the age of the Earth. Rather than look at zircons, he was trying to date meteorites, which contain the stable isotope lead-204, using a type of uranium-lead dating simply called lead-lead dating. Patterson used an isochron, which graphically compares the ratios of the lead isotopes produced through the decay of uranium, lead-206 and lead-207, with stable lead-204 (an isotope not produced by radioactive decay). By plotting these ratios on a graph, the slope of the resulting line indicates the age of the sample; the line is called an isochron, meaning "same age." Using lead isotopes recovered from the Canyon Diablo meteorite of Arizona, Patterson calculated in 1956 that the Earth was between 4.5 and 4.6 billion years old.
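The isochron slope is a known function of age, so finding the age means inverting that function numerically. The sketch below uses commonly cited modern decay constants and a simple bisection solver; both the constants and the solver are illustrative choices, not Patterson's actual procedure:

```python
import math

LAMBDA_U238 = math.log(2) / 4.468e9   # uranium-238 decay constant, per year
LAMBDA_U235 = math.log(2) / 7.038e8   # uranium-235 decay constant, per year
U238_U235_RATIO = 137.88              # present-day uranium-238/uranium-235

def pb_pb_slope(t):
    """Predicted lead-lead isochron slope (207Pb/204Pb vs 206Pb/204Pb)
    for a sample of age t years."""
    return (math.expm1(LAMBDA_U235 * t) / math.expm1(LAMBDA_U238 * t)) / U238_U235_RATIO

def pb_pb_age(slope, lo=1.0, hi=1.0e10):
    """Invert the slope for an age by bisection
    (the slope grows monotonically with age)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if pb_pb_slope(mid) < slope:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A slope near 0.62, like the one measured on meteoritic lead, inverts to an age of roughly 4.5 billion years, consistent with Patterson's result.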

To acquire these ratios of lead, Clair Patterson developed the first chemical clean room, as he quickly discovered abundant lead contamination in the environment around him, traced to the widespread use of lead in gasoline, paints, and water pipes in the 1940s and 1950s. Patterson dedicated much of his later life to fighting corporate lobbying groups and politicians to enact laws prohibiting the use of lead in household products, such as fuel and paints. The year 1956 was also when the uranium-lead problem was solved by the brilliant scientist George Wetherill, who published a solution called the Concordia diagram, sometimes called the Wetherill diagram, which allowed direct dating of zircons. There are two types of uranium isotopes in these zircons: Uranium-238 (the most common), which decays to Lead-206, and Uranium-235 (the next most common), which decays to Lead-207 (with different half-lives).

An example of a concordia diagram which overcomes the problem of missing daughter products lost during the gas (radon) stage of decay to lead.

If you measure these two ratios in a series of zircon crystals and compare the ratios graphically, you can calculate the true ratios the zircons would have if they had not lost any daughter products. Using this set of ratios, you can determine where the two ratios would cross for a given age, and hence where they would be concordant with each other. It was a brilliant solution to the issue of daughter products escaping from zircon crystals. Today geologists can analyze particular points on individual zircon crystals, and hence select the best spot on the crystal, with the minimum amount of daughter-product leakage; the Concordia diagram then allows a correction to the resulting ratios.
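The concordia curve itself is easy to compute: for each age, a closed-system zircon must show a specific pair of daughter/parent ratios. A sketch, with commonly used half-life values (the function names are mine):

```python
import math

LAMBDA_U238 = math.log(2) / 4.468e9  # per year
LAMBDA_U235 = math.log(2) / 7.038e8  # per year

def concordia_point(t):
    """The two daughter/parent ratios a closed-system zircon of age t
    should show; plotting these for many ages traces the concordia curve."""
    pb206_u238 = math.expm1(LAMBDA_U238 * t)
    pb207_u235 = math.expm1(LAMBDA_U235 * t)
    return pb206_u238, pb207_u235

# Points along the curve, every 500 million years up to 4.5 billion:
curve = [concordia_point(i * 5.0e8) for i in range(10)]
```

A measured zircon that plots off this curve has lost daughter products; projecting it back onto the curve is the correction Wetherill's diagram provides.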

Uranium-Lead dating requires you to determine two ratios: Uranium-238 to Lead-206, and Uranium-235 to Lead-207. Uranium-238 has a half-life of 4.47 billion years, while Uranium-235 has a half-life of 704 million years, making the pair great for both million-year and billion-year scales of time. To do Uranium-Lead dating on zircons, rock samples are ground up, and zircons are extracted using heavy liquid separation. These zircons are analyzed under a microscope. Zircons found in sedimentary rock will yield the age when the zircon initially formed from magma, not when it was re-deposited in a sedimentary layer or bed. Zircons found in sedimentary rocks are called detrital zircons, and will yield maximum ages; for example, a detrital zircon that is 80 million years old could be found in sedimentary rock deposited 50 million years ago, the 30-million-year difference being the time during which the zircon was exhumed, eroded from igneous rocks, and transported into the sedimentary rock. A 50-million-year-old zircon will NOT be found in sedimentary rocks that are in fact 80 million years old, so a detrital zircon tells you only that the rock is younger than the zircon's age.

Igneous rocks and volcanic ash layers that contain fresh zircons can yield very reliable dates, particularly when the time between crystallization and deposition is minimal.

The first step of Uranium-Lead dating is finding and isolating zircon crystals from the rock, usually by grinding the rock up and using heavy liquids to separate out the zircon crystals. The zircons are then studied under a microscope to determine how fresh they are. If zircons are found in a sedimentary rock, they are likely detrital, and the damage observed in the crystal will indicate how fresh they are. Detrital zircons are dated by sedimentary geologists in studies to determine the source of sedimentary grains; however, they often lack the resolution for precise dates, unless the zircon crystals were deposited in a volcanic ash and have not been eroded and transported. Once zircons are selected, they are analyzed using laser ablation inductively coupled plasma mass spectrometry, abbreviated LA-ICP-MS, which zaps the crystals with a laser; the ablated material is drawn up and ionized in the mass spectrometer under extremely hot temperatures, creating a plasma that passes the atoms along a tube at high speed while measuring the atomic mass of the atoms scattered along the length of the plasma tube. LA-ICP-MS measures larger atomic mass atoms, such as lead and uranium. It does not require much lab preparation, and zircons can be analyzed quickly, resulting in large sample sizes for distributions of zircons and very precise dates. Zircon Uranium-Lead dating is the most common type of dating seen today in the geological literature, exceeding even the widely used Argon-Argon dating technique. It is also one of the more affordable methods of dating, requiring less lab preparation of samples.

## Dating using electron energy states

One of the things you will note about these dating methods is that they are used either to date organic matter that is less than 100,000 years old, or volcanic and igneous minerals that are much older, between 1 million and 5 billion years old.

That leaves us with a lot of materials that we can't date using those methods, including fossils that are over 100,000 years old, and sedimentary rocks, since detrital zircons will only give you the date when they became solid crystals, rather than the age of the sedimentary rocks they are found in. Also, using these methods, we can't determine the age of stone or clay pottery artifacts, the age of glacial features on the landscape, or fossilized bone directly.

One setting that is notoriously difficult to date is cave deposits that contain early species of humans, which are often older than the limits of radiocarbon dating. This problem is exemplified by the controversial ages surrounding the Homo floresiensis discovery, a remarkably small species of early human found in 2003 in a cave on the island of Flores in Indonesia. Physical anthropologists have argued that the species shares morphological similarities with Homo erectus, which lived in Indonesia from 1.49 million years ago to about 500,000 years ago. Homo erectus was the first early human to migrate out of Africa, and fossils discovered in Indonesia were some of the oldest, as determined from potassium-argon and zircon fission track dating. However, radiocarbon dates from the cave where the tiny Homo floresiensis was found were much younger than expected, yielding ages of 18,700 and 17,400 years, which is old, but not as old as anthropologists had suggested if the species was closely related to Homo erectus. Researchers decided to conduct a second analysis, and they turned to luminescence dating.

## Luminescence (optically and thermally stimulated)

Luminescence dating was first developed to date how long ago pottery was fired in a kiln.

There are two types of luminescence dating, optically stimulated and thermally stimulated. They measure the time since the sediment or material was last exposed to sunlight (optical) or heat (thermal). Luminescence dating was developed in the 1950s and 1960s, initially as a method to date when a piece of pottery was made. The idea was that during the firing of clay in a pottery kiln to harden the pottery, the quartz crystals within the pottery would be subjected to intense heat and energy, and the residuals of this energy would dim slowly long after the pottery had cooled down. Early experiments in the 1940s on heating crystals and observing the light emitted showed that materials could fluoresce (spontaneously glow) and phosphoresce (give off a delayed glow for a longer period of time, long after the material was subjected to the initial light or heat).

A glow-in-the-dark figure of an eagle, which needs to be exposed to light.

If you have ever played with glow-in-the-dark objects, you have seen this: expose the object to light, then turn off the light, and there is a glow to the object for a long while until it dims to the point you can't see it anymore. This effect is called phosphorescence. It was also known that material near radioactive substances would give off fluorescence and long-lasting phosphorescence, so the material does not have to be heated or placed in light; radioactive particles can also excite a material to glow.

What causes this glow is that by exciting electrons in the atom with intense heat, exposure to sunlight (photons), or even radioactivity, the electrons move up in energy levels; these electrons then quickly drop back down in energy levels, and in doing so emit photons as observable light as the object cools or is removed from the light. In some materials these electrons become trapped at the higher energy levels, and slowly and spontaneously pop back down to the lower energy levels over a more extended period of time. When electrons drop down from their excited states they emit photons, prolonging the glow of the material over a longer time period, perhaps even thousands of years.

Scientists wanted to measure the remaining trapped electrons in ancient pottery: the dimmer the glow observed, the older the pottery. Early experiments were successful, and later this tool was expanded to materials exposed to sunlight, rather than heat. The method requires determining two things. The first is the radiation dose rate, which tells you how much radiation the crystal absorbs over time; this is usually done by measuring the amount of radioactive elements in the sample and its surroundings. The second is the total amount of absorbed radiation, which is measured by exposing the material to light or heat and counting the photons it emits. Using these two measurements you can calculate the age since the material was subjected to the initial heat or light. There are three types of luminescence dating.

The first is TL (thermal luminescence) dating, which uses heat to measure the number of photons given off by the material. The second is infrared stimulated (IRSL), and the third is optically stimulated (OSL); these two methods refer to how the photons are measured in the lab, by stimulating them with either infrared light or visible optical light. The technique works well, but there is a limit to how dim the material can be and still give useful information, so it works well for materials that are 100 to 350,000 years old, similar to the range of radiocarbon dating, but it can be carried out on different materials, such as pottery, stone artifacts, and the surfaces of buried buildings and stone work.
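At its core the age calculation is a simple ratio of the two quantities described above. A minimal sketch, assuming the absorbed dose and the dose rate have already been measured (the example numbers are made up for illustration):

```python
def luminescence_age(total_dose_gray, dose_rate_gray_per_year):
    """Years since the grains were last zeroed by heat or sunlight:
    total absorbed radiation dose divided by the annual dose rate."""
    return total_dose_gray / dose_rate_gray_per_year

# e.g. an absorbed dose of 60 gray at 3 milligray per year:
age = luminescence_age(60.0, 0.003)  # ~20,000 years
```

All of the laboratory effort goes into measuring those two numbers well; the division at the end is the easy part.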

In addition to determining the radiocarbon age of Homo floresiensis, researchers used luminescence dating and found a TL maximum date of 38,000 years and an IRSL minimum date of 14,000 years, suggesting that the 18,000-year-old radiocarbon date was correct for the skeletons found in the cave. These ages record when the sediments were last exposed to sunlight, not when they were actually deposited in their current place in the cave, so there is likely a lot of mixing going on inside the cave.

## Uranium series dating

Decay Chain of Uranium-238 to Lead-206 (note that Radon is a gas). In uranium series dating only the first part of this decay chain is examined, that between U-238 and U-234, via Th-234.

As a large atom, uranium decays to lead over a very long half-life, and there are two uranium decay chains: one for uranium-235, which decays to the stable isotope lead-207, and one for uranium-238, which decays to the stable isotope lead-206. Here scientists look at just a segment of that long decay chain, the decay of uranium-238 to uranium-234, which is the first part of uranium-238 decay.

They then measure the amount of uranium-234 decaying to thorium-230. Uranium-238 decays to thorium-234 with a half-life of 4.47 billion years, thorium-234 decays to protactinium-234 with a half-life of about 24 days, protactinium-234 decays to uranium-234 within hours, and finally uranium-234, with a half-life of 245,500 years, decays to thorium-230.

The decay between uranium-234 and thorium-230 can be used to measure things within a few hundred thousand years. There is a problem with this method: scientists don't know the initial amount of uranium within the bone or sediment being measured. There is an unknown amount of uranium-234 starting out in the bone or sediment, since uranium oxide is often carried by groundwater moving into and out of the fossil and the pores between sediment grains.
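To see why the measurement is informative at all, consider the idealized closed-system case, with no initial thorium-230 and the uranium isotopes in equilibrium: the age then follows directly from the measured thorium-230/uranium-234 activity ratio. A sketch of that simplification (the function name is mine; real bone dates layer a uranium-uptake model on top of this):

```python
import math

TH230_HALF_LIFE = 75_584.0  # thorium-230 half-life in years (approximate)
LAMBDA_TH230 = math.log(2) / TH230_HALF_LIFE

def u_series_age(th230_u234_activity_ratio):
    """Closed-system age in years from the thorium-230/uranium-234
    activity ratio, assuming no initial thorium-230: the ratio grows
    from 0 toward 1 as thorium-230 builds in toward equilibrium."""
    return -math.log(1.0 - th230_u234_activity_ratio) / LAMBDA_TH230
```

Because the ratio approaches 1 asymptotically, the method loses sensitivity after several half-lives of thorium-230, which is why its practical range tops out around a few hundred thousand years.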

So unlike other dating methods, where the initial amount of daughter product could be assumed to be zero, or determined experimentally as in carbon-14 dating, we can't make that assumption here. Instead we have to build a diffusion model; the results are often called modeled ages.

The way this is done is that the bone is sectioned, cleaned, and laser ablated at various points across its depth, measuring the ratios between uranium-234 and thorium-230. Because the bone absorbed more uranium-234 over time, the outer layers of the bone will be enriched in uranium-234 compared to the internal part of the bone. Using a gradient of uranium-238, uranium-234 and thorium-230, a diffusion model can be made to determine the amount of uranium-234 likely in the bone when the organism died, and the amount of thorium-230 resulting from the decay of this uranium-234 as additional uranium-234 was added during the fossilization process. Because of this addition of uranium-234, and the fact that uranium-234 is very rare (as it is produced only by the decay of uranium-238), this method is reserved for difficult cases, such as dating fossils deposited in hard-to-date cave deposits, especially near the upper limits of radiocarbon dating, for fossils between 100,000 and 500,000 years old.

Uranium series dating was used to re-examine the age of Homo floresiensis by looking at the actual fossil bone itself. Uranium series dating of the bone of Homo floresiensis resulted in ages between 66,000 and 87,000 years old (link to revised age), older than the radiocarbon dates from the nearby charcoal (17,400-18,700 years old) and the luminescence dates of sediment in the cave (14,000-38,000 years old), but determined from the actual bones themselves. These are modeled ages, since you have to determine the diffusion of uranium into the pores of the bone as it was fossilized in the cave, which can yield somewhat subjective dates.

Uranium series dating was also applied to another problematic early human cave discovery: the age of Homo naledi from the Rising Star cave in South Africa. Fossil teeth were directly dated using uranium series dating, yielding a minimum age of 200,000 years (link to paper on the age of the fossil), for a species that had been predicted to be about 1,000,000 years old.

Uranium series dating tends to have large error bars, however, due to the modeling of the diffusion of uranium into the fossils and rocks, which depends on how quickly and how much uranium-238 and uranium-234 was added to the fossil over time in the cave. Uranium series dating is really only used in special cases where traditional methods such as radiocarbon dating and uranium-lead dating can't be applied.

## Electron Spin Resonance (ESR)

The Chernobyl nuclear power plant meltdown in 1986.

In April of 1986 the Chernobyl Nuclear Power Plant suffered a critical meltdown, resulting in an explosion and fire that released large amounts of radioactive material into the nearby environment. The accident led to the death of 31 people directly from radiation, and 237 suffered acute radiation sickness. Worry spread across Europe over how to measure exposure to radiation from the accident, and electron spin resonance was developed by Soviet scientists to measure radiation exposure by looking at teeth, particularly the baby teeth of children living in the area.

Electron spin resonance measures the number of unpaired electrons within atoms. When exposed to radiation, electrons will break from their typical covalent bonds and become unpaired within the orbitals, resulting in a slight difference in the magnetism of the atom. This radiation damage, the breaking of molecular bonds, is the reason radiation causes cancers and damage to living cells: at the atomic level, radiation can break molecules, resulting in abnormally high errors in DNA and proteins within living cells. Electron spin resonance measures the number of free radical electrons within a material.

Using this measurement, scientists measured the electron spin resonance in teeth from children who lived near the accident to determine the amount of exposure they had to radiation fall-out from Chernobyl. The study worked, which led to the idea of using the same technology on fossilized teeth exposed to naturally occurring radiation in the ground.

Lost baby teeth allowed scientists to measure the radiation exposure of children living near the nuclear plant.

Dating using electron spin resonance requires that we know the amount of uranium and radioactivity in the surrounding material throughout its history, so we can calculate the length of exposure to this radiation. The issue, however, is that you have to model the uptake of uranium within the fossil over time, similar to the model developed for uranium series dating. This is because in both methods you can't assume that the uranium (and hence the radioactivity) in the material remained the same, as fresh uranium was likely taken up over time. Often scientists will focus on the dense crystal lattice structure of enamel, a mineral called hydroxyapatite, as it is less susceptible to the uptake of uranium.

Electron spin resonance is often paired with uranium series dating, since it covers a similar range of ages, from 100 up to 2,000,000 years. Unpaired electrons within atoms are a more permanent state than the electrons at higher energy levels used in luminescence dating, so older fossils can be dated, up to 2 million years old. This dating method can't be used for the vast majority of fossils, which are older than 2 million years, but it can date the length of time a fossil or rock was buried up to that limit. Note that electron spin resonance dating determines the length of time a fossil was buried in sediment whose background radiation can be measured.
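The effect of the uranium-uptake model can be illustrated with a toy calculation: the same measured dose implies different ages depending on when the uranium (and hence the dose rate) arrived. The two end-member models below are simplified for illustration, with made-up numbers:

```python
def esr_age_early_uptake(total_dose, dose_rate_now):
    """If all the uranium arrived right after burial, the dose rate
    was constant, and age is simply dose divided by rate."""
    return total_dose / dose_rate_now

def esr_age_linear_uptake(total_dose, dose_rate_now):
    """If uranium accumulated steadily, the average dose rate was only
    half of today's, so the same dose implies twice the age."""
    return total_dose / (0.5 * dose_rate_now)

# The same tooth (400 gray accumulated, 2 milligray per year today)
# yields two very different modeled ages:
early = esr_age_early_uptake(400.0, 0.002)    # ~200,000 years
linear = esr_age_linear_uptake(400.0, 0.002)  # ~400,000 years
```

This factor-of-two spread between end-member models is one reason ESR and uranium series ages carry large error bars.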

## Surface Exposure Dating or Beryllium-10 dating

This dating method has revolutionized the study of past ice ages over the last 2.5 million years, and the study of the glacial and interglacial cycles of Earth's recent climate. Surface exposure dating can determine the length of time a rock has been exposed to the sun. Ascertaining how long a rock has been exposed to sunlight allows geologists to discover when that rock or boulder was deposited by a melting glacier, and the timing and extent of those glaciers on a local level throughout past ice age events.

The way it works is that when rocks are exposed to sunlight, they are bombarded by cosmic rays, which contain neutrons. These rays cause something called spallation of the atoms in mineral crystals, resulting in the build-up of cosmogenic nuclides.

There are a number of different types of cosmogenic nuclides. For example, we previously discussed potassium-40 being hit with neutrons in a lab setting to produce argon-39 in argon-argon dating. The same thing happens in nature when rocks are left in the sun for a long time, and you could measure the amount of argon-39. Most geologists instead look for atoms that form solids, as these are easier to extract from the rock, including beryllium-10, one of the most widely used cosmogenic nuclides.

Beryllium-10 is not present in the quartz minerals common in rocks and boulders when they form, but accumulates when the oxygen atoms within the crystal lattice are exposed to cosmic rays carrying short-lived free neutrons. The beryllium-10 builds up within the crystals as long as the rock is exposed to the sun. Beryllium-10 is an unstable, radioactive isotope with a half-life of 1.39 million years, making it ideal for most applications of dating during the Pleistocene Epoch. Most rocks studied so far have exposure ages of less than 500,000 years, indicating that most rocks become re-buried within half a million years.

Surface exposure dating is different because we are looking at the amount of beryllium-10 building up within the surface of the rock over time: the more beryllium-10 within the rock, the longer it has been exposed to sunlight. If the rock becomes obscured from the sun, through burial or a tree growing next to it, then the build-up of beryllium-10 is slowed or turned off, and over time it will decay to boron-10, emptying the beryllium-10 out of the rock and resetting the clock for the next time it is exposed to sunlight. Geologists have to be sure that the rock has been well exposed to sunlight and not shaded by any natural feature, like trees, in the recent past.
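The competition described above — beryllium-10 building up at a steady production rate while simultaneously decaying away — can be written as a simple exposure-age equation. The sketch below inverts it to get an age from a measured concentration; the production rate and concentration values are illustrative assumptions.

```python
import math

# Sketch of the cosmogenic-nuclide exposure-age relationship.
# Beryllium-10 accumulates at a production rate P (atoms per gram of
# quartz per year) while decaying with a half-life of 1.39 million
# years, so N(t) = (P / lam) * (1 - exp(-lam * t)).
# The sample values below are illustrative, not real measurements.

HALF_LIFE_YR = 1.39e6
LAM = math.log(2) / HALF_LIFE_YR          # decay constant, per year

def exposure_age(n_atoms_per_g, production_rate):
    """Years of exposure implied by a measured Be-10 concentration."""
    # invert N = (P / lam) * (1 - exp(-lam * t)) for t
    return -math.log(1.0 - n_atoms_per_g * LAM / production_rate) / LAM

# e.g. a production rate of 4 atoms/g/yr and 60,000 atoms/g measured:
print(round(exposure_age(60_000, 4.0)), "years of exposure")
```

For young surfaces the decay correction is tiny, so the age is close to the naive concentration-divided-by-production-rate estimate; for old surfaces the decay term matters more, which is one reason the method works best within the Pleistocene.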

Geologists will select a boulder or rock and carefully record its location, as well as the horizon line surrounding the rock, to account for the length of sunlight exposure on any given day at that location.

Beryllium-10 dating can be used to determine how long this boulder has been exposed to sunlight.

A small explosive charge is drilled into the rock, and rock fragments are collected off the surface edge of the rock. Back in the lab the sample is ground into a powder and digested with hydrofluoric acid to isolate quartz crystals, which are turned into a liquid solution within a very strong acid. This solution is reacted with various chemicals to isolate the beryllium into a white powder, which is then passed through a mass spectrometer to measure the amount of beryllium-10 in the rock. This amount is then compared to a model of how much sunlight the rock was exposed to at that location, given the topography of the surrounding features, to determine the length of time that rock has sat there on the surface of the Earth. It is a pretty cool method which has become highly important in understanding the glacial history of the Earth through time.

## Magnetostratigraphy

Magnetostratigraphy is the study of the magnetic orientations of iron minerals within sedimentary rocks. These orientations record the direction of the magnetic pole when the sedimentary rocks were deposited. Just like a magnetic compass, iron minerals transported in lava, magma, or sediment will orient to Earth's current magnetic field. Earth's magnetic field is not stationary, but moves around. In fact, the orientation of the poles switches randomly every few hundred thousand years or so, such that a compass would point toward the south pole rather than the north pole. This change in the orientation of the iron minerals is recorded in the rock layers formed at that time. Measuring these orientations between normal polarity, when the iron minerals point northward, and reversed polarity, when the iron minerals point southward, gives you events that can be correlated between different rock layers.

Geomagnetic Polarity Reversals for the last 5 million years.

Geomagnetic Polarity since the middle Jurassic Period.

The thickness of these bands of changing polarity can be compared to igneous volcanic rocks as well, which record both an absolute age (using potassium-argon dating, for example) and the polarity of the rocks at that time, allowing correlation between sedimentary and igneous rocks. Magnetostratigraphy is really important because it allows for the dating of sedimentary rocks that contain fossils, even when there are no volcanic ash layers present.

There are a couple of problems with magnetostratigraphy. One is that sedimentary rocks can become demagnetized; for example, a rock struck by lightning will have the orientations of its iron grains scrambled. It can also be difficult to correlate layers of rock if the sedimentation rates vary greatly or there are unconformities you are unaware of. However, it works well for many rock layers, and is a great tool to determine the age of rocks by documenting these reversals in the rock record. It was also one of the key technologies used to demonstrate the motion of Earth's tectonic plates.

Rock samples are collected in the field by recording their exact orientation and carefully extracting the rock so as not to break it. The rock is then taken to a lab, where it is placed in an iron cage to remove the interference of magnetic fields from the surrounding environment. The rock sample is cryogenically cooled to extremely cold temperatures just above absolute zero, where the residual magnetism of the rock is easier to measure, because sedimentary rocks are not very magnetic; more magnetic rocks, like igneous rocks, do not have to be cooled. The rock is slowly demagnetized and the orientation vectors are recorded and spatially plotted.

These data points will fall either more toward the north or the south, depending on the polarity of the Earth at the time of deposition. The transition between polarities is geologically brief, so the majority of the time the polarity sits in either the normal or the reversed state. Sometimes the polarity changes rapidly, while other times it does not change for millions of years. One such long interval occurred during the Cretaceous Period, the age of dinosaurs, when the polarity stayed normal for about 40 million years; this interval is called the Cretaceous Superchron, and geologists do not know why it happened.
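Correlating a measured section to the known sequence of reversals is essentially a pattern-matching exercise: slide the measured string of polarity zones along the reference timescale and find where it fits best. A minimal sketch of that idea, using made-up polarity strings ('N' for normal, 'R' for reversed) rather than the real geomagnetic polarity timescale:

```python
# Sketch of matching a measured polarity sequence against a reference
# polarity timescale. Both strings below are hypothetical examples.

def best_match(reference, sample):
    """Return (number of agreements, offset) where sample fits best."""
    scores = []
    for offset in range(len(reference) - len(sample) + 1):
        window = reference[offset:offset + len(sample)]
        agree = sum(a == b for a, b in zip(window, sample))
        scores.append((agree, offset))
    return max(scores)

reference = "NRNNRRNRNNNRRN"   # hypothetical polarity timescale
sample    = "RRNRN"            # polarity zones measured in a section
print(best_match(reference, sample))  # (5, 4): perfect fit at offset 4
```

Real correlations are harder for exactly the reasons noted above: varying sedimentation rates stretch and squeeze the pattern, and unrecognized unconformities delete parts of it, so matches are made on band thickness and independent age control, not just the polarity string.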

## Overview of Methods

| Method | Range of Dating | Material that can be dated | Process of Decay |
|---|---|---|---|
| Radiocarbon | 1 - 70 thousand years | Organic material such as bones, wood, charcoal, and shells | Radioactive decay of 14C in organic matter after removal from biosphere |
| K-Ar and 40Ar-39Ar dating | 10 thousand - 5 billion years | Potassium-bearing minerals | Radioactive decay of 40K in rocks and minerals |
| Fission track | 1 million - 10 billion years | Uranium-bearing minerals (zircons) | Measurement of damage tracks in glass and minerals from the radioactive decay of 238U |
| Uranium-Lead | 10 thousand - 10 billion years | Uranium-bearing minerals (zircons) | Radioactive decay of uranium to lead via two separate decay chains |
| Uranium series | 1 thousand - 500 thousand years | Uranium-bearing minerals, corals, shells, teeth, CaCO3 | Radioactive decay of 234U to 230Th |
| Luminescence (optically or thermally stimulated) | 1 thousand - 1 million years | Quartz, feldspar, stone tools, pottery | Burial or heating age based on the accumulation of radiation-induced damage to electrons sitting in mineral lattices |
| Electron Spin Resonance (ESR) | 1 thousand - 3 million years | Uranium-bearing materials in which uranium has been absorbed from outside sources | Burial age based on abundance of radiation-induced paramagnetic centers in mineral lattices |
| Cosmogenic Nuclides (Beryllium-10) | 1 thousand - 5 million years | Typically quartz or olivine from volcanic or sedimentary rocks | Radioactive decay of cosmic-ray generated nuclides in surficial environments |
| Magnetostratigraphy | 20 thousand - 1 billion years | Sedimentary and volcanic rocks | Measurement of ancient polarity of the earth's magnetic field recorded in a stratigraphic succession |
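The decay-based methods in the table all rest on the same relationship: after time t, a parent isotope with half-life T retains a fraction (1/2)^(t/T) of its original atoms. Solving that for t gives a generic age equation, sketched below with radiocarbon's half-life as the worked example.

```python
import math

# Sketch of the generic radiometric age equation shared by the
# decay-based methods above: fraction remaining = (1/2)**(t / T),
# so t = T * log2(1 / fraction remaining).

def age_from_fraction(fraction_remaining, half_life_years):
    """Age implied by the surviving fraction of a parent isotope."""
    return half_life_years * math.log2(1.0 / fraction_remaining)

# A sample retaining 25% of its original carbon-14 (half-life 5,730
# years) has gone through exactly two half-lives:
print(age_from_fraction(0.25, 5730))  # 11460.0
```

The practical dating ranges in the table follow from this equation: once only a tiny fraction of the parent remains (many half-lives), or almost all of it remains (a small fraction of one half-life), the surviving fraction can no longer be measured precisely enough to pin down the age.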

\newpage

# 3e. The Periodic Table and Electron Orbitals.

## Electrons: how atoms interact with each other

If it were not for electrons inside atoms, atoms would never bond or interact with each other to form molecules, crystals, and other complex materials. Electrons are extremely important in chemistry because they determine how atoms interact with each other. It is no wonder that the Periodic Table of Elements is displayed in most science classrooms rather than the more cumbersome Chart of the Nuclides, since the Periodic Table of Elements organizes elements by the number of protons and electrons, rather than the number of protons and neutrons.

A simple periodic table which arranges types of atoms (elements) by electron orbitals.

Trends in the reactivity of elements as seen on the modern Periodic Table of Elements.

As discussed previously, electrons are wayward subatomic particles that can increase their energy states and even leave atoms altogether to form plasma. Electricity is the flow of free electrons, which can move near the speed of light across conducting material, like metal wires. In this next section we will look in detail at how electrons are arranged within atoms in orbitals. Remember, however, that in highly excited atoms bombarded with high levels of electromagnetic radiation, such as at increasing temperatures and high pressures, electrons can leave atoms, while at very cold temperatures, near absolute zero kelvin, electrons will be very close to the nucleus of atoms, forming a Bose–Einstein condensate. When we think of temperature (heat), what is really indicated is the energy states of electrons within the atoms of a substance, whether a gas, liquid, or solid. The hotter a substance becomes, the more vibrational energy its electrons will have.

Electrons orbit the nucleus at very fast speeds, not along a discrete orbital path but as an electromagnetic field called an orbital shell. The Heisenberg uncertainty principle describes why measuring these electron orbitals precisely is impossible: anytime a photon is used by a scientist to measure the position of an electron, the electron will move and change its energy level. There is always an uncertainty as to the exact location of an electron within its orbit around the atom's nucleus. As such, electron orbital shells are probability fields where an electron is likely to exist at any moment in time.

Negatively charged electrons are attracted to positively charged protons, such that equal numbers of electrons and protons are observed in most atoms.

Dmitri Mendeleev

Early chemists working in the middle 1800s knew of only a handful of elements, which were placed into three major groups based on how reactive they were with each other: the Halogens, Alkali Metals, and Alkali Earths. By 1860, the atomic masses of many of these elements had been reported, allowing the Russian scientist Dmitri Mendeleev to arrange elements based on their reactive properties and atomic mass.

The early 1871 Periodic Table of Elements

While working on a chemistry textbook, Mendeleev stumbled upon the idea of arranging each set of reactive elements in order of increasing atomic mass, such that a set of Halogens would contain elements of differing mass. Without knowing the underlying reason, Mendeleev had organized the elements by their atomic number (number of protons), which is related to atomic mass, and by the number of electron orbitals, which is related to how readily an element bonds with other elements. While these early Periodic Tables of Elements look nothing like our modern Periodic Table of Elements, they excited chemists to discover more elements. The next major breakthrough came with the discovery and wide acceptance of the Noble gases, which include helium and argon, the least reactive elements known.

## The Periodic Table of Elements

So how does an atom's reactivity relate to its atomic mass? Electrons are attracted to the atomic nucleus in equal number to the number of protons, which is roughly half the atomic mass. The more atomic mass, the more protons, and the more electrons will be attracted. However, electrons prefer to fill electron orbital shells in complete sets, such that an incomplete electron orbital shell will attract other electrons, despite there being an equal number of electrons to protons. If an atom has a complete set of electrons matching its number of protons, it will be non-reactive, while elements that need to lose 1 electron or gain 1 more electron to complete an orbital set are the most reactive types of elements.

| Group → | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Period 1 | 1H |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | 2He |
| Period 2 | 3Li | 4Be |  |  |  |  |  |  |  |  |  |  | 5B | 6C | 7N | 8O | 9F | 10Ne |
| Period 3 | 11Na | 12Mg |  |  |  |  |  |  |  |  |  |  | 13Al | 14Si | 15P | 16S | 17Cl | 18Ar |
| Period 4 | 19K | 20Ca | 21Sc | 22Ti | 23V | 24Cr | 25Mn | 26Fe | 27Co | 28Ni | 29Cu | 30Zn | 31Ga | 32Ge | 33As | 34Se | 35Br | 36Kr |
| Period 5 | 37Rb | 38Sr | 39Y | 40Zr | 41Nb | 42Mo | 43Tc | 44Ru | 45Rh | 46Pd | 47Ag | 48Cd | 49In | 50Sn | 51Sb | 52Te | 53I | 54Xe |
| Period 6 | 55Cs | 56Ba | 57La 58Ce * | 72Hf | 73Ta | 74W | 75Re | 76Os | 77Ir | 78Pt | 79Au | 80Hg | 81Tl | 82Pb | 83Bi | 84Po | 85At | 86Rn |
| Period 7 | 87Fr | 88Ra | 89Ac 90Th ** | 104Rf | 105Db | 106Sg | 107Bh | 108Hs | 109Mt | 110Ds | 111Rg | 112Cn | 113Uut | 114Uuq | 115Uup | 116Uuh | 117Uus | 118Uuo |

\* Lanthanides: 59Pr 60Nd 61Pm 62Sm 63Eu 64Gd 65Tb 66Dy 67Ho 68Er 69Tm 70Yb 71Lu

\*\* Actinides: 91Pa 92U 93Np 94Pu 95Am 96Cm 97Bk 98Cf 99Es 100Fm 101Md 102No 103Lr

1. Actinides and lanthanides are collectively known as "Rare Earth Metals."
2. Alkali metals, alkaline Earth metals, transition metals, actinides, and lanthanides are all collectively known as "Metals."
3. Halogens and noble gases are also non-metals.

State at standard temperature and pressure

• those with atomic number in blue are not known at STP
• those with atomic number in red are gases at standard temperature and pressure (STP)
• those with atomic number in green are liquids at STP
• those with atomic number in black are solid at STP

Natural occurrence

• those with solid borders have isotopes that are older than the Earth (Primordial elements)
• those with dashed borders naturally arise from decay of other chemical elements and have no isotopes older than the earth
• those with dotted borders are made artificially (Synthetic elements)

Unknown properties

• those with a cyan background have unknown chemical properties.

## The First Row of the Periodic Table of Elements (Hydrogen & Helium)

First Row of the Periodic Table of Elements

The first row of the Periodic Table of Elements contains two elements, Hydrogen and Helium.

Hydrogen has 1 proton, and hence it attracts 1 electron. However, the orbital shell would prefer to contain 2 electrons, so hydrogen is very reactive with other elements, for example in the presence of oxygen, it will explode! Hydrogen would prefer to have 2 electrons within its electron orbital shell, but can’t because it has only 1 proton, so it will “steal” or “borrow” other electrons from nearby atoms if possible.

Helium has 2 protons, and hence attracts 2 electrons. Since 2 electrons are the preferred number for the first orbital shell, helium will not react with other elements, in fact it is very difficult (nearly impossible) to bond helium to other elements. Helium is a Noble Gas, which means that it contains the full set of electrons in its orbital shell.

The columns of the Periodic Table of Elements are arranged by the number of electrons within each outer orbital shell, while the rows reflect increasing atomic number (number of protons).

## The Other Rows of the Periodic Table of Elements

The names of the blocks of electron orbitals (s, p, d and f).

The rules for filling electron orbitals.

The first row of the Periodic Table of Elements is when the first 2 electrons fill the first orbital shell, called the 1s orbital shell. The second row is when the next 2 electrons fill the 2s orbital shell, and 6 more fill the 2p orbital shell. The third row is when the next 2 electrons fill the 3s orbital shell, and 6 fill the 3p orbital shell. The fourth row is when the next 2 electrons fill the 4s orbital shell, 10 fill the 3d orbital shell, and 6 fill the 4p orbital shell.
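The filling sequence for the first four rows can be sketched as a simple list walked in order. This is a minimal illustration of the standard aufbau order, not a full treatment (it covers only the subshells through 4p):

```python
# Sketch of the orbital-filling (aufbau) order for the first four rows
# of the Periodic Table. Each entry is (subshell name, capacity).

FILL_ORDER = [("1s", 2), ("2s", 2), ("2p", 6), ("3s", 2), ("3p", 6),
              ("4s", 2), ("3d", 10), ("4p", 6)]

def electron_configuration(n_electrons):
    """Fill subshells in order and report the configuration string."""
    parts = []
    for name, capacity in FILL_ORDER:
        if n_electrons <= 0:
            break
        take = min(n_electrons, capacity)   # fill up to the capacity
        parts.append(f"{name}{take}")
        n_electrons -= take
    return " ".join(parts)

print(electron_configuration(8))   # oxygen: 1s2 2s2 2p4
print(electron_configuration(19))  # potassium: 1s2 2s2 2p6 3s2 3p6 4s1
```

Note that potassium's 19th electron lands in 4s before any 3d orbital fills, which is exactly why the fourth row of the table begins with potassium and calcium before the transition metals.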

## Valence Electrons

A valence electron is an outer-shell electron that is associated with an atom but does not completely fill the outer orbital shell, and as such is involved in bonding between atoms. The valence shell is the outermost shell of an atom. Elements with complete valence shells (noble gases) are the least chemically reactive, while those with only one electron in their valence shells (alkali metals), or just missing one electron from having a complete shell (halogens), are the most reactive. Hydrogen, which has one electron in its valence shell but is also just one electron short of a complete shell, has unique and very reactive properties.

The number of valence electrons of an element can be determined from its periodic table group, the vertical column in the Periodic Table of Elements. With the exception of groups 3–12 (the transition metals and rare earths), the column identifies how many valence electrons are associated with a neutral atom of the element. Each s sub-shell holds at most 2 electrons, while a p sub-shell holds 6, d sub-shells hold 10 electrons, followed by f, which holds 14, and finally g, which holds 18 electrons. Observe the first few rows of the Periodic Table of Elements below to see how this works to determine how many valence electrons ( < ) are in each atom of a specific element.

| Element | # Electrons | 1s | 2s | 2p | 2p | 2p | # Valence Electrons |
|---|---|---|---|---|---|---|---|
| Hydrogen | 1 | < |  |  |  |  | 1 |
| Helium | 2 | <> |  |  |  |  | 0 |
| Lithium | 3 | <> | < |  |  |  | 1 |
| Beryllium | 4 | <> | < | < |  |  | 2 |
| Boron | 5 | <> | < | < | < |  | 3 |
| Carbon | 6 | <> | < | < | < | < | 4 |
| Nitrogen | 7 | <> | <> | < | < | < | 3 |
| Oxygen | 8 | <> | <> | <> | < | < | 2 |
| Fluorine | 9 | <> | <> | <> | <> | < | 1 |
| Neon | 10 | <> | <> | <> | <> | <> | 0 |

Notice that Helium and Neon have 0 valence electrons, which means that they are not reactive and will not bond to other atoms. However, Lithium has 1 valence electron; if this 1 electron were removed, it would have 0 valence electrons, and this makes Lithium highly reactive. Also notice that Fluorine needs just 1 more electron to complete its set of 2s and 2p orbitals, making Fluorine highly reactive as well. Carbon has the highest number of valence electrons in this set of elements, and will attract or give up 4 electrons to complete the set of 2s and 2p orbitals.
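The valence counts in the table above follow a simple rule: the 1s orbital fills first, then the 2s and three 2p orbitals each take one electron before any of them pairs up, and the valence count is the number of unpaired ( < ) electrons left over. A small sketch of that counting scheme (a deliberate simplification matching the table, not a full quantum treatment):

```python
# Sketch reproducing the valence counts in the table above for the
# first ten elements. The 1s orbital fills first; then the 2s and
# three 2p orbitals fill singly before pairing; unpaired electrons
# are counted as valence electrons.

def valence_electrons(n_electrons):
    unpaired = 0
    for orbitals in (1, 4):                # 1s block, then 2s + 2p block
        take = min(n_electrons, 2 * orbitals)
        n_electrons -= take
        # electrons go in singly until every orbital has one, then pair
        unpaired += take if take <= orbitals else 2 * orbitals - take
    return unpaired

# Hydrogen through Neon:
print([valence_electrons(z) for z in range(1, 11)])
# [1, 0, 1, 2, 3, 4, 3, 2, 1, 0]
```

The symmetric rise and fall of the list (peaking at carbon's 4) is exactly the reactivity trend discussed above: the ends of the row are inert or nearly so, and the middle can form the most bonds.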

Understanding the number of valence electrons is extremely important for understanding how atoms form bonds with each other to make molecules. For example, the elements in the first column of the periodic table, containing Lithium, all have 1 valence electron, and will likely bond to elements that need 1 more electron to fill their orbital shell, such as the elements in the fluorine column of the Periodic Table of Elements.

Some columns of the periodic table are given specific historical names. The elements of the first column, containing Lithium, are collectively called the Alkali Metals (hydrogen, a gas, is unique and often not considered one of the Alkali Metals); the elements of the last column, containing Helium, all have 0 valence electrons and are collectively called the Noble Gases. Elements in the Fluorine column require 1 valence electron to fill the orbital shell and are called the Halogens, while elements under Beryllium are called the Alkaline Earth Metals and have 2 valence electrons. Most other columns are not given specific names (the middle columns are sometimes collectively called the Transition Metals), but can still be used to determine the number of valence electrons; for example, Carbon and the elements listed below it will have 4 valence electrons, while all elements listed under Oxygen will have 2 valence electrons. Notice that after the element Barium there is an insert of two rows of elements; these are the Lanthanoids and Actinoids, which contain electrons in the 4s, 4p, 4d, and 4f orbitals, for a possible total of 32 electrons, a little too long to include in a nice table, and hence these elements are often shown at the bottom of the Periodic Table of Elements.

The extended Periodic Table (typically the Lanthanoids and Actinoids are shown as inserts)

A typical college class in chemistry will go into more detail on electron orbital shells, but it is important to understand how electron orbitals work, because the configuration of electrons determines how atoms of each element form bonds in molecules. In the next section, we will examine how atoms come together to form bonds, and group together in different ways to form the matter that you observe on Earth. \newpage

# 3f. Chemical Bonds (Ionic, Covalent, and others means to bring atoms together).

There are three major types of bonds that form between atoms, linking them together into a molecule: Covalent, Ionic, and Metallic. There are also other ways to weakly link atoms together, arising from the attractive properties of the configuration of the molecules themselves, which include Hydrogen bonding.

## Covalent Bonds

Diamonds (like the Hope Diamond) are very hard because they are made up of covalent bonds of carbon atoms.

Covalent bonds are the strongest bonds between atoms found in chemistry. Covalent bonding is where two or more atoms share valence electrons to complete their orbital shells. The simplest example of a covalent bond is found when two hydrogen atoms bond. Remember that each hydrogen atom has 1 proton and 1 electron, but filling the 1s orbital requires 2 electrons. Hydrogen atoms will therefore group into pairs, each contributing an electron to the shared 1s orbital shell. Chemically, hydrogen will be paired, which is depicted by the chemical formula H2. Another common covalent bond can be illustrated by introducing oxygen to hydrogen. Remember that oxygen needs two valence electrons to fill its set of electron orbitals, hence it bonds to 2 hydrogen atoms, each having 1 valence electron to share between the atoms. In H2O, the chemical formula for ice or water, 2 hydrogen atoms, each with an electron, bond with an oxygen atom that needs 2 electrons to fill its 2s and 2p orbitals. In covalent bonds, atoms share electrons to complete the orbital shells, and because the electrons are shared between atoms, covalent bonds are the strongest bonds in chemistry.

Covalent bonds of two hydrogen atoms.

Oxygen, for example, will pair up to share 2 electrons (called a double bond), forming O2. Nitrogen does the same, pairing up to form N2 by sharing 3 electrons (called a triple bond). However, in the presence of nitrogen and hydrogen, the hydrogen will bond with nitrogen, forming NH3 (ammonia), because nitrogen requires 3 electrons, one each from a hydrogen atom, to fill all its orbitals. Carbon, which has 4 valence electrons, most often bonds with hydrogen to form CH4 (methane or natural gas), because it requires 4 electrons, one each from a hydrogen atom. Bonds that form by two atoms sharing 4 or more electrons are very rare.
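The bookkeeping behind these formulas is the octet rule: an atom with v outer-shell electrons bonds to (8 - v) hydrogen atoms, each hydrogen contributing one shared electron. A minimal sketch of that rule for the second-row elements (the helper name and element list are illustrative choices):

```python
# Sketch of the octet-rule counting behind NH3, CH4, etc.: an atom
# with v outer-shell electrons bonds to (8 - v) hydrogen atoms.

OUTER_ELECTRONS = {"C": 4, "N": 5, "O": 6, "F": 7}

def hydride_formula(symbol):
    """Predict the simple hydrogen compound of a second-row element."""
    n_hydrogens = 8 - OUTER_ELECTRONS[symbol]
    return f"{symbol}H{n_hydrogens}" if n_hydrogens > 1 else f"{symbol}H"

print([hydride_formula(s) for s in ("C", "N", "O", "F")])
# ['CH4', 'NH3', 'OH2', 'FH'] -- conventionally written H2O and HF
```

Carbon, needing 4 hydrogens, sits at the peak of this series, which is one reason carbon chemistry is so rich.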

Covalent bond found in methane, where a carbon atom shares 4 valence electrons with 4 hydrogen atoms.

The electrons shared equally between the atoms make these bonds very strong. Covalent bonds can form crystal lattice structures when valence electrons are used to link atoms together. For example, diamonds are composed of linked carbon atoms. Each carbon atom is linked to 4 other carbon atoms, each pair sharing an electron between them. If the linked carbon forms rings rather than a lattice structure, the carbon is in the form of graphite (used in pencil leads). If the linked carbon forms a lattice structure, the crystal form is much harder: a diamond. Hence the only difference between graphite pencil lead and a valuable diamond is how the bonds between the carbon atoms are linked together in covalent bonds.

## Ionic Bonds

Salt is weak and will dissolve in water because of the ionic bonds between its sodium and chloride.

Ionic bonding is where an atom gives an electron to a neighboring atom.

Ionic bonds are a weaker type of bond between atoms found in chemistry. Ionic bonding is where one atom gives a valence electron to complete another atom's orbital shell. For example, lithium has a single valence electron it would like to get rid of, so it will give the electron to an atom of fluorine, which needs an extra valence electron. In this case the electron is NOT shared by the two atoms; when lithium gives away its valence electron, it becomes positively charged, because it has fewer electrons than protons, while fluorine will have more electrons than protons and will be negatively charged. Because of these charges, the atoms will be attracted together. Atoms that have different numbers of protons and electrons are called ions. Ions can be positively charged like lithium, which are called cations, or negatively charged like fluorine, which are called anions.

An excellent example of ionic bonding you have encountered is salt, which is composed of sodium (Na) and chloride (Cl). Sodium has one extra valence electron it would like to give away, and chlorine is looking to pick up an extra electron to fill its orbital; this results in sodium (Na) and chloride (Cl) ionically bonding to form table salt. However, the bonds in salt are easy to break, since they are held together NOT by sharing electrons, but by their different charges. When salt is dropped in water, the pull of the water molecules can break apart the sodium and chloride, resulting in sodium and chloride ions (the salt is dissolved within the water). Often the chemical formulas of ions are expressed as Na+ and Cl- to depict the charge, where the + sign indicates a cation and the - sign indicates an anion. Sometimes an atom will give away or receive two or more electrons; for example, calcium will often give up two electrons, resulting in the cation Ca2+.

The difference between covalent bonds and ionic bonds is that in covalent bonds the electrons are SHARED between atoms, while in ionic bonds the electrons are GIVEN or RECEIVED between atoms. A good analogy to think of is friendship between two kids. If the friends are sharing a ball, by passing it between each other, they are covalently bonded to each other since the ball is shared equally between them. However, if one of the friends has an extra ice cream cone, and gives it to their friend, they are ionically bonded to each other.

Some molecules can have BOTH ionic and covalent bonds. A good example of this is the common molecule calcium carbonate, CaCO3. The carbon atom is covalently bonded to three oxygen atoms, which means that electrons are shared between the carbon and oxygen atoms. Typically carbon covalently bonds to only two oxygen atoms (forming carbon dioxide, CO2), each sharing two electrons, for a total of 4. However, in the case of carbonate, three oxygen atoms are bonded to the carbon, with 2 sharing 1 electron each and one sharing 2 electrons; this leaves 2 extra electrons. Hence CO3-2 has two extra electrons that it would like to give away, and is negatively charged. Calcium atoms have 2 electrons more than a complete shell, and will lose these electrons, resulting in a cation with a positive charge of +2, Ca+2. Hence the ions CO3-2 and Ca+2 have opposite charges that bond together to form CaCO3, calcium carbonate, a common molecule found in limestones and in shelled organisms that live in the ocean. Unlike salt, CaCO3 does not readily dissolve in pure water, as its ionic bonds are fairly strong; however, if the water is slightly acidic, the calcium carbonate will dissolve.
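The charge bookkeeping in these examples follows one rule: ions combine in the smallest whole-number ratio that makes the total charge zero. A minimal sketch of that rule (the function name is an illustrative choice):

```python
from math import gcd

# Sketch of the charge-balance rule: ions combine in the smallest
# whole-number ratio that makes the total charge zero.

def neutral_ratio(cation_charge, anion_charge):
    """Smallest (n_cations, n_anions) giving a neutral compound."""
    # least common multiple of the two charge magnitudes
    m = abs(cation_charge * anion_charge) // gcd(abs(cation_charge),
                                                 abs(anion_charge))
    return m // abs(cation_charge), m // abs(anion_charge)

print(neutral_ratio(1, -1))   # Na+  and Cl-      -> (1, 1): NaCl
print(neutral_ratio(2, -2))   # Ca2+ and CO3(2-)  -> (1, 1): CaCO3
print(neutral_ratio(2, -1))   # Ca2+ and OH-      -> (1, 2): Ca(OH)2
```

The last line anticipates the calcium hydroxide example in the next paragraph: a +2 calcium ion needs two -1 hydroxide ions to balance, hence the subscript in Ca(OH)2.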

A solution is called an acid when it has an abundance of hydrogen ions within the solution. Hydrogen ions lose 1 electron, forming the cation H+. When there is an excess of hydrogen ions in a solution, these will break ionic bonds by bonding to anions. For example, in CaCO3 the hydrogen ions can form bonds with the CO3-2, forming HCO3- ions, called bicarbonate, and dissolving the CaCO3 molecule. Acids break ionic bonds by introducing hydrogen ions, which can dissolve molecules held together by these ionic bonds. Note that a solution with an abundance of anions, such as OH-, can also break ionic bonds; these are called bases. So a basic solution is one with an excess of anions. In this case the calcium will form a bond with the OH- anion, forming Ca(OH)2, calcium hydroxide, which in a solution of water is known as limewater.

The ratio of H+ and OH- ions is measured as pH, such that a solution with a pH of 7 has equal numbers of H+ and OH- ions, acidic solutions have a pH less than 7, with more H+ cations, and basic solutions have a pH greater than 7, with more OH- anions.
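A common way to compute pH is as the negative base-10 logarithm of the hydrogen-ion concentration in moles per liter, which is why each whole-number step on the scale is a tenfold change in H+ abundance. A small sketch (the concentration values are illustrative):

```python
import math

# Sketch of the pH scale: pH = -log10 of the hydrogen-ion
# concentration in moles per liter. Example concentrations below
# are illustrative.

def ph(h_ion_concentration):
    return -math.log10(h_ion_concentration)

def classify(ph_value):
    if ph_value < 7:
        return "acidic"
    if ph_value > 7:
        return "basic"
    return "neutral"

print(ph(1e-7),  classify(ph(1e-7)))   # pure water: pH 7, neutral
print(ph(1e-3),  classify(ph(1e-3)))   # pH 3, acidic
print(ph(1e-10), classify(ph(1e-10)))  # pH 10, basic
```

Because the scale is logarithmic, a solution at pH 3 has ten thousand times the hydrogen-ion concentration of one at pH 7, which is why even "slightly" acidic water can dissolve calcium carbonate over geologic time.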

## Metallic bonding

Metallic bonding is a unique feature of metals, and can be described as a special case of ionic bonding involving the sharing of free electrons among a structure of positively charged ions (cations). Materials composed of metallically bonded atoms exhibit high electrical conductivity, as electrons are free to pass between atoms. This is why electrical wires are composed of metals like copper, gold, and iron: they conduct electricity because electrons are shared evenly among many atomic bonds. Materials held together by metallic bonding also have a metallic luster or shine, and can bend easily (are more ductile) because of the flexibility of these bonds.

Native copper is an example of metallic bonding of Cu (copper) atoms, which is an excellent conductor of electricity, and ductile (bendable).

Metallic bonds are susceptible to oxidation. Oxidation is a type of chemical reaction in which metallically bonded atoms lose electrons to an oxidizing agent (most often atoms of oxygen), resulting in metallic atoms becoming bonded covalently with oxygen. For example, iron (Fe), which can form the cations Fe2+ (iron-II) or Fe3+ (iron-III) by losing electrons, can bond with oxygen (O-2), which takes up two extra electrons in its orbitals, resulting in a series of molecules called iron oxides, such as Fe2O3. This is why metals such as iron rust or corrode and silver tarnishes over time: the metallic bonds react with the surrounding oxygen, which takes electrons from the metal atoms. Oxygen is common in the air, in water, and in acidic (corrosive) solutions, and the only way to prevent oxidation in metals is to limit their exposure to oxygen (and other electron-hungry atoms like fluorine).

When electrons are gained, the reaction is called a reducing reaction, and it is the opposite of oxidation. Collectively these types of chemical reactions are called "redox" reactions, and they form an important aspect of chemistry. Furthermore, the transfer of electrons in oxidation-reduction reactions is a useful way to store excess electrons (electricity) in batteries.

## Hydrogen bonding

Hydrogen bonding in water, caused by the polarization of the molecule (H2O).

Covalent, ionic, and metallic bonding all involve the exchange or sharing of electrons between atoms, and hence are fairly strong bonds, with covalent bonds being the strongest type. However, molecules themselves can become polarized because of the arrangement of their atoms, such that a molecule can have a more positive and a more negative side. This frequently happens with molecules containing hydrogen atoms bonded to larger atoms. The resulting bonds between molecules are very weak and easily broken, but they produce very important aspects of the chemistry of water and of the organic molecules essential for life. Hydrogen bonds form within water and are the reason for the expansion in volume between liquid water and solid ice. Water is composed of oxygen covalently bonded to two hydrogen atoms (H2O), each hydrogen contributing an electron to the 2p orbitals, which require 6 electrons. The two hydrogen atoms are pushed toward each other slightly by the pairs of electrons in the remaining orbitals, forming a "mouse-ear"-like molecule. These two hydrogen atoms give the molecule a slight positive charge on the hydrogen side compared to the opposite side of the oxygen atom, which lacks hydrogen atoms. Hence water molecules orient themselves with weak bonds between the positively charged hydrogen atoms of one molecule and the negatively charged open side of the next. Hydrogen bonds are best considered an electrostatic force of attraction between hydrogen (H) atoms which are covalently bound to more electronegative atoms such as oxygen (O) and nitrogen (N).
Hydrogen bonds are very weak, but provide important bonds in living organisms, such as the bonds between the two strands of the double helix structure of DNA (deoxyribonucleic acid). Hydrogen bonds are also important in the capillary forces that transport water in plant tissue and blood vessels, as well as in hydrophobic (water-repelling) and hydrophilic (water-attracting) organic molecules in cellular membranes.

Hydrogen bonds explain the unique feature of water having a high surface tension, enough to hold up this paperclip.

Hydrogen bonding is often considered a special type of the weak Van der Waals molecular forces, which cause the attraction or repulsion of electrostatic interactions between electrically charged or polarized molecules. These forces are weak, but play a role in making some molecules more “sticky” than others. As you will learn later on, water is a particularly “sticky” molecule because of these hydrogen bonds.

# 3g. Common Inorganic Chemical Molecules of Earth.

## Goldschmidt Classification

Victor Goldschmidt

With 118 elements on the periodic table, there can be a nearly infinite number of molecules formed from various combinations of these elements. However, on Earth some elements are very rare, while others are much more common. The distribution of matter, and of the various types of elements, across Earth’s surface, oceans, atmosphere, and within its inner rocky core is a fascinating topic. If you were to grind up the entire Earth, what percentage would be made of gold? What percentage of oxygen? How could one calculate the abundances of the various elements of Earth? Insights into the distribution of elements on Earth came about during World War II, as scientists developed new tools to determine the chemical makeup of materials. One of the great scientists to lead this investigation was Victor Goldschmidt.

On November 26th, 1942, Victor Goldschmidt stood among the fearful crowd of people assembled on the pier in Oslo, Norway, waiting for the German ship Donau to transport them to Auschwitz. Goldschmidt had a charmed childhood in his home in Switzerland, and when his family immigrated to Norway, Goldschmidt was quickly recognized for his early scientific interests in geology. In 1914 he began teaching at the local university after successfully defending his thesis on the contact metamorphism of the Kristiania Region of Norway. In 1929 he was invited to Germany to become the chair of mineralogy in Göttingen, where he had access to scientific instruments that allowed him to detect trace amounts of elements in rocks and meteorites. He also worked with a large team of fellow scientists whose goal was to determine the elemental make-up of a wide variety of rocks and minerals. However, in the summer of 1935 a large sign was erected on the campus by the German government that read, “Jews not desired.” Goldschmidt protested, as he was Jewish and felt that the sign was discriminatory and racist. The sign was removed, only to reappear later in the summer, and despite his further protests it remained, by order of the Nazi party. Victor Goldschmidt resigned his position in Germany and returned to Norway to continue his research, feeling that any place where people were injured and persecuted only for the sake of their race or religion was not a welcome place to conduct science. Goldschmidt took with him vast amounts of data regarding the chemical make-up of natural materials found on Earth, particularly rocks and minerals. This data allowed Goldschmidt to classify the elements based on the frequency with which they are found on Earth.

Goldschmidt's Classification of the Elements, blacked out elements don't naturally occur on Earth.

## The Atmophile Elements

The first group Goldschmidt called the Atmophile elements, as these elements are gases and tend to be found in the atmosphere of Earth. These include both Hydrogen and Helium (the most abundant elements of the solar system), but also Nitrogen, as well as the heavier Noble Gases: Neon, Argon, Krypton and Xenon. Goldschmidt believed that Hydrogen and Helium, as very light gasses, were mostly stripped from the Earth’s early atmosphere, with naturally occurring Helium on Earth coming from the decay of radioactive materials deep inside Earth and trapped underground, often along with natural gas. Nitrogen is the most common element in the atmosphere, occurring as the paired molecule N2. It might be surprising that Goldschmidt did not classify oxygen within this group; that is because oxygen was found to be more abundant within the rocks and minerals he studied, in a group he called the Lithophile elements.

## The Lithophile Elements

Lithophile elements, or rock-loving elements, are elements common in crustal rocks found on the surface of continents. They include oxygen and silicon (the most common elements found in silicate minerals, like quartz), but also a wide group of alkali and alkaline-earth elements: lithium, sodium, potassium, beryllium, magnesium, calcium and strontium, as well as the reactive halogens (fluorine, chlorine, bromine and iodine), along with some odd-ball middle-of-the-chart elements: aluminum, boron and phosphorus. Lithophile elements also include the Rare Earth elements found within the Lanthanides, which make a rare appearance in many of the minerals and rocks under study.

## The Chalcophile Elements

The next group are the Chalcophile elements, or copper-loving elements. These elements are found in many metal ores, and include sulfur, selenium, copper, zinc, tin, bismuth, silver, mercury, lead, cadmium and arsenic. These elements are often found together in ore veins, concentrated with sulfur.

## The Siderophile Elements

The next group Goldschmidt described were the Siderophile elements, or iron-loving elements, which include iron, as well as cobalt, nickel, manganese, molybdenum, ruthenium, rhodium, palladium, tungsten, rhenium, osmium, iridium, platinum, and gold. These elements were found by Goldschmidt to be more common in meteorites (especially iron meteorites) than in rocks found on the surface of the Earth. Furthermore, when found on Earth’s surface, these elements are common in iron ore and associated with iron-rich rocks. The last group of elements are simply the Synthetic elements, elements that are rarely found in nature, which include the radioactive elements found on the bottom row of the Periodic Table of Elements and produced only in labs.

## Meteorites, the Ingredients to Making Earth

A Pallasite Meteorite

A deeper understanding of the Goldschmidt classification of the elements was likely being discussed at the local police station in Oslo, Norway, on that chilly late November day in 1942. Goldschmidt’s Jewish faith resulted in his imprisonment seven years after his resignation, for Nazi Germany had invaded Norway, and despite his exodus from Germany, the specter of fascism had caught up with him. Jews were to be imprisoned, and most would face death in the concentration camps scattered over Nazi-occupied Europe. Scientific colleagues argued with the authorities that Goldschmidt’s knowledge of the distribution of valuable elements was much needed. The plea worked: Victor Goldschmidt was released, and of the 532 passengers who boarded the Donau, only 9 would live to see the end of the war. With help, Goldschmidt fled Norway instead of boarding the ship, and would spend the last few years of his life in England writing a textbook, the first of its kind on the geochemistry of the Earth.

As a pioneer in understanding the chemical make-up of the Earth, Goldschmidt inspired the next generation of scientists to study not only the chemical make-up of the atmosphere, ocean, and rocks found on Earth, but to compare those values to extra-terrestrial meteorites that have fallen to Earth from space.

A Carbonaceous chondrite meteorite.

Meteorites can be thought of as the raw ingredients of Earth. Mash enough meteorites together, and you have a planet. However, not all meteorites are the same: some are composed mostly of metallic iron, called iron meteorites; others have roughly equal amounts of iron and silicate crystals, called stony-iron meteorites; while the third major group, the stony meteorites, are mostly composed of silicate crystals (SiO2).

An Iron meteorite from Seymchan Russia.

If the Earth formed from the accretion of thousands of meteorites, then the percentages of chemical elements and molecules found in meteorites give scientists a starting point for the average abundance of elements on Earth. Through its history, Earth’s composition has likely changed as elements became enriched or depleted in various places and at various depths inside Earth. Here are the abundances of molecules in meteorites (from Jarosewich, 1990: Meteoritics):

Stony meteorites (% weight)

| Molecule | % weight |
| --- | --- |
| SiO2 | 38.2 |
| MgO | 21.6 |
| FeO | 18.0 |
| CaO | 6.0 |
| FeS | 4.8 |
| Fe(m) | 4.4 |
| Al2O3 | 3.7 |
| H2O+ | 1.8 |
| Na2O | 0.9 |
| Ni | 0.7 |
| Cr2O3 | 0.5 |
| C | 0.5 |
| H2O- | 0.4 |
| MnO | 0.3 |
| NiS | 0.3 |
| NiO | 0.3 |
| SO3 | 0.3 |
| P2O5 | 0.2 |
| TiO | 0.2 |
| K2O | 0.1 |
| CO2 | 0.1 |
| Co | trace |
| CoO | trace |
| CoS | trace |
| CrN | trace |

Iron meteorites (% weight)

| Molecule | % weight |
| --- | --- |
| Fe(m) | 92.6 |
| Ni | 6.6 |
| Co | 0.5 |
| P | 0.3 |
| CrN | trace |

If Earth were a homogeneous planet (one composed of a uniform mix of these elements), the average make-up of Earthly material would have a composition similar to a mix of stony and iron meteorites. We see some indications of this. For example, SiO2 (silicon dioxide), in which each silicon atom is bonded to two oxygen atoms, is the most common molecule in stony meteorites at 38.2%, and silicon and oxygen are likewise the most common elements found in rocks, forming a group of minerals called silicates, which include quartz, a common mineral found on the surface of Earth. The next three molecules, MgO, FeO, and CaO, are also commonly found in rocks on Earth. However, iron (Fe), which is very common in iron meteorites, also makes up a significant portion of stony meteorites, in molecules containing FeO and FeS and as native metal. Yet typical rocks found on the surface of Earth contain very little iron. Where did all this iron go?
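The conversion from an oxide's weight percent to the weight percents of its constituent elements follows directly from atomic masses. A minimal Python sketch (the function name and selection of elements are illustrative, not from the text):

```python
# Sketch: convert an oxide weight percent (e.g. SiO2 at 38.2% of a stony
# meteorite) into weight percents of the constituent elements.
# Atomic masses (g/mol) are standard values.
ATOMIC_MASS = {"Si": 28.086, "O": 15.999, "Mg": 24.305, "Fe": 55.845}

def element_fractions(formula_counts, wt_percent):
    """formula_counts: dict of element -> atoms per formula unit."""
    molar_mass = sum(ATOMIC_MASS[el] * n for el, n in formula_counts.items())
    return {el: wt_percent * ATOMIC_MASS[el] * n / molar_mass
            for el, n in formula_counts.items()}

# SiO2 makes up 38.2% of stony meteorites by weight, so by itself it
# contributes roughly 17.9% silicon and 20.3% oxygen to the bulk rock:
si_o = element_fractions({"Si": 1, "O": 2}, 38.2)
print(si_o)
```

Summing such contributions over every molecule in the table would give the bulk elemental composition of a stony meteorite.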

Goldschmidt suggested that iron (Fe) is a Siderophile element, as are nickel (Ni), manganese (Mn) and cobalt (Co), which sank into the core of the Earth during its molten stage. Hence over time the surface of the Earth became depleted in these elements. A further line of evidence for an iron-rich core is Earth’s magnetic field, observed with a compass, which supports the theory of an iron-rich core at the center of Earth. Hence siderophile elements can be thought of as elements that are more common at the center of the Earth than near its surface. This is why other rare siderophile elements like gold, platinum and palladium are considered precious metals at the surface of Earth.

Goldschmidt also looked at elements common in the atmosphere, in the air that we breathe, which readily form gasses at Earth’s temperatures and pressures. These atmophile elements include hydrogen and helium, yet in meteorites hydrogen is observed only within H2O, with very little isolated helium gas. This is despite the fact that the sun is mostly composed of hydrogen and helium. If you have ever lost a helium balloon, you likely know the reason why there is so little hydrogen and helium on Earth. Both hydrogen and helium are very light elements that can escape into the high atmosphere, and even into space. Much of the solar system’s hydrogen and helium is found in the sun, which has a greater gravitational force, as well as in the larger gas giant planets of the outer solar system, like Jupiter, whose atmosphere is composed of hydrogen and helium. Like the sun, larger planets can hold onto these light elements with their higher gravitational forces. Earth has lost much of its hydrogen and helium, and almost all of Earth’s remaining hydrogen is bonded to other elements, preventing its escape.

Nitrogen is found only in trace amounts in meteorites, as the mineral carlsbergite (CrN), which is likely the source of the nitrogen in Earth’s atmosphere. Another heavier gas is carbon dioxide (CO2), which accounts for about 0.1% of stony meteorites. However, in the current atmosphere it accounts for less than 0.04%, and as a total percentage of the entire Earth much less than that. Comparing Earth to Venus and Mars, carbon dioxide is the most abundant molecule in the atmospheres of Venus and Mars, accounting for 95 to 97% of the atmosphere on those planets, while on Earth it is a rare component of the atmosphere. As a heavier molecule than hydrogen and helium, carbon dioxide can stick to planets in Venus and Earth’s size range. It is likely that early in its history Earth had a similarly high percentage of carbon dioxide as found on Mars and Venus, but over time it was pulled out of the atmosphere. This happened because of Earth’s unusually high percentage of water (H2O). Notice that water is found in stony meteorites; this water was released as a gas during Earth’s warmer molten history, and as the Earth cooled it fell as rain that formed the vast oceans of water on its surface today. There has been a great debate in science as to why Earth has these vast oceans of water and great ice sheets, while Mars and Venus lack oceans or significantly large amounts of ice. Some scientists suggest that Earth was enriched in water (H2O) from impacts with comets early in its history, but others suggest that enough water (H2O) can be accounted for simply from the gasses released by the molten rocks and meteorites that formed the early Earth.

So how did this unusual large amount of water result in a decrease of carbon dioxide in Earth’s atmosphere? Looking at a simple set of chemical reactions between carbon dioxide and water, you can understand why.

CO2 (g) + H2O (l) <=> H2CO3 (aq)

Note that g stands for gas, l for liquid, and aq for an aqueous solution (dissolved in water); also notice that this reaction goes in both directions, as shown by the double arrows. Each carbon atom takes on an additional oxygen atom, which brings two extra electrons, resulting in the ion CO3-2. This ion forms ionic bonds to two hydrogen ions (H+), forming H2CO3. Because these hydrogen ions can break away from the carbon and oxygen, this molecule in solution forms a weak acid called carbonic acid. Carbonic acid is what gives soda drinks their fizz. When water falls from the sky as rain, the carbonic acid it carries reacts further with solid rocks containing calcium. Remember that calcium forms ions of Ca+2, making these ions ideal for reacting with the CO3-2 ions to form calcium carbonate (CaCO3), a solid.

Ca2+(aq) + 2HCO3- (aq) <=> CaCO3 (s) + CO2 (aq) + H2O (l)

Note that there is a 2 before the ion HCO3- so that the amount of each element in the chemical reaction is balanced on each side of the chemical reaction.
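Balancing can be checked mechanically by counting atoms on each side of the reaction. A small Python sketch of that bookkeeping (the helper names are hypothetical):

```python
# Sketch: verify that the weathering reaction
#   Ca2+ + 2 HCO3-  <=>  CaCO3 + CO2 + H2O
# is balanced, by counting atoms on each side. Each species is a dict of
# element -> atom count; coefficients multiply the counts.
from collections import Counter

def side_total(species):
    """species: list of (coefficient, atom-count dict) pairs."""
    total = Counter()
    for coeff, atoms in species:
        for el, n in atoms.items():
            total[el] += coeff * n
    return total

Ca    = {"Ca": 1}
HCO3  = {"H": 1, "C": 1, "O": 3}
CaCO3 = {"Ca": 1, "C": 1, "O": 3}
CO2   = {"C": 1, "O": 2}
H2O   = {"H": 2, "O": 1}

left  = side_total([(1, Ca), (2, HCO3)])
right = side_total([(1, CaCO3), (1, CO2), (1, H2O)])
print(left == right)  # True: the reaction is balanced
```

Dropping the coefficient 2 from HCO3- makes the two sides unequal, which is exactly why the coefficient is needed.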

Over long periods of time the amount of carbon dioxide in the atmosphere will decrease; however, if the Earth is volcanically active and still molten with lava, this carbon dioxide is re-released into the atmosphere as the solid calcium carbonate rock is heated and melted (supplying 178 kJ of energy converts 1 mole of CaCO3 to CaO and CO2).

CaCO3 (s) → CaO (s) + CO2 (g)
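Using the 178 kJ per mole figure quoted above, the energy needed to decompose a given mass of calcium carbonate can be estimated in a few lines of Python (a sketch; the function name is illustrative):

```python
# Sketch: energy needed to decompose calcium carbonate into lime (CaO)
# and CO2, using the ~178 kJ per mole figure quoted above.
MOLAR_MASS_CACO3 = 100.09   # g/mol (Ca 40.08 + C 12.01 + 3 x O 16.00)
DELTA_H = 178.0             # kJ per mole, positive: endothermic

def decomposition_energy_kj(grams_caco3):
    moles = grams_caco3 / MOLAR_MASS_CACO3
    return moles * DELTA_H

# Heating 1 kg of pure CaCO3 requires roughly:
print(round(decomposition_energy_kj(1000.0)))  # ~1778 kJ
```

This is the same reaction that occurs in volcanically heated crust, returning CO2 to the atmosphere.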

This dynamic chemical reaction between carbon dioxide, water and calcium causes parts of the Earth to become enriched or depleted in carbon, but eventually the amount of carbon dioxide in the atmosphere reaches an equilibrium. During the early history of Earth, water scrubbed significant amounts of carbon dioxide out of the atmosphere.

Returning to the bulk composition of meteorites, oxygen is found in numerous molecules, including some of the most abundant (SiO2, MgO, FeO, CaO). One of the reasons Goldschmidt did not include oxygen in the atmophile group of elements was that it is more common in rocks, especially bonded covalently with silicon in silicon dioxide (SiO2). Pure silicon dioxide is the mineral quartz, a very common mineral found on the surface of the Earth. Hence oxygen, along with magnesium, aluminum, and calcium, is a lithophile element. Later we will explore how Earth’s atmosphere became enriched in oxygen, an element much more commonly found within solid crystals and rocks on Earth’s surface.

Isolated carbon (C) is fairly common (0.5%) in meteorites, but carbon bonded to hydrogen, as CH4 (methane) or in chains of carbon and hydrogen (for example C2H6), is extremely rare in meteorites. A few meteorites contain slightly more carbon (1.82%), including the famous Murchison and Banten stony meteorites, which exhibit carbon molecules bonded to hydrogen. Referred to as hydrocarbons, these molecules are important to life, and will play an important role in the origin of life on Earth. But why are hydrocarbons so rare in meteorites?

This likely has to do with an important concept in chemistry called enthalpy. Enthalpy is the amount of energy gained or lost in a chemical reaction at a known temperature and pressure. The change in enthalpy is written ΔH and expressed in joules of energy per mole. A mole is a unit of measurement that relates the number of atoms or molecules to mass: one mole of a substance is its atomic (or molecular) mass expressed in grams. A positive change in enthalpy indicates an endothermic reaction (one requiring heat), while a negative change in enthalpy indicates an exothermic reaction (one producing heat). In the case of a hydrocarbon (like CH4) in the presence of oxygen, there is an exothermic reaction that releases 890.32 kilojoules of energy as heat per mole.

CH4 (g) + 2O2 (g) → 2H2O (l) + CO2 (g)
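Using the 890.32 kJ per mole figure above, the heat released by burning a given mass of methane can be estimated. A minimal Python sketch (the function name is illustrative):

```python
# Sketch: heat released by burning methane, using the 890.32 kJ/mol
# enthalpy change quoted above (an exothermic reaction, so heat is released).
MOLAR_MASS_CH4 = 16.04  # g/mol (C 12.01 + 4 x H 1.008, rounded)

def combustion_heat_kj(grams_methane, kj_per_mol=890.32):
    moles = grams_methane / MOLAR_MASS_CH4
    return moles * kj_per_mol

# Burning 100 g of methane releases roughly:
print(round(combustion_heat_kj(100.0)))  # ~5551 kJ
```

This energy density per gram is what makes natural gas such an effective fuel.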

The release of energy via this chemical reaction makes hydrocarbons such a great source of fuel, since they easily react with oxygen to produce heat. In fact, methane or natural gas (CH4) is used to generate electricity, heat homes and cook food on gas stoves. This is also why hydrocarbons are rarely found closely associated with oxygen. Hydrocarbons are of great importance not only because of their ability to combust with oxygen in these exothermic reactions, but also because they are major components of living organisms. Other elements that are important for living organisms are phosphorus (P), nitrogen (N), oxygen (O), sulfur (S), sodium (Na), magnesium (Mg), calcium (Ca) and iron (Fe). All of these elements are found within life forms near the surface of Earth in complex molecules, collectively called organic molecules, which bond with carbon and hydrogen. The field of chemistry that studies these complex chains of hydrocarbon molecules is called organic chemistry.

Goldschmidt’s classification of the elements is a useful way to simplify the numerous elements found on Earth, and a way to think about where they are likely to be found: in the atmosphere, in the oceans, on the rocky surface, or deep inside Earth’s core.

# 3h. Mass spectrometers, X-Ray Diffraction, Chromatography and Other Methods to Determine Which Elements are in Things.

## The chemical make-up and structure of Earth’s materials

Rosalind Franklin

In 1942, Victor Goldschmidt, having escaped from Norway, arrived in England among a multitude of refugees from Europe. Shortly after arriving in London he was asked to lecture on the occurrence of rare elements in coal to the British Coal Utilisation Research Association, a non-profit group funded by coal utilities to promote research. In the audience was a young woman named Rosalind Franklin. Franklin had recently joined the coal research group, having left graduate school at Cambridge University in 1941, leaving behind a valuable scholarship. Her previous advisor, Ronald Norrish, was a veteran of the Great War who had suffered as a prisoner of war in Germany. The advent of World War II plagued him; he took to drinking and was not supportive of young Franklin’s research interests. In his lab, however, Franklin was exposed to the methods of chemical analysis using photochemistry, the use of light to excite materials to produce photons at differing wavelengths of energy.

Photochemistry experiment using a mercury gas filled glass tube.

After leaving school, her paid research focused on understanding the chemistry of coal, particularly how organic molecules, or hydrocarbons, break down through heat and pressure inside the Earth, leading to decreasing porosity (the amount of space or tiny cavities) within the coal over time. Franklin moved to London for work, and stayed in a boarding house with Adrienne Weill, a French chemist and refugee who was a former student of the famous chemist Marie Curie. Adrienne Weill became her mentor during the war years, while Victor Goldschmidt taught her in the classroom. With the allied victory in 1945, Rosalind Franklin returned to the University of Cambridge and in 1946 defended the research she had conducted on coal. After graduation, Franklin asked Adrienne Weill if she could come to France to continue her work in chemistry. In Paris, Franklin secured a job in 1947 at the Laboratoire Central des Services Chimiques de l'État, one of the leading centers for chemical research in post-war France. It was here that she learned of the many new techniques being developed to determine the chemical make-up and structure of Earth’s materials.

What allows scientists to determine what specific chemical elements are found in materials on Earth? What tools do chemists use to determine the specific elements inside the molecules that make-up Earth materials, whether they be gasses, liquids or solids?

## Using Chemical Reactions

Any material will have a specific density at standard temperature and pressure, and will exhibit phase transitions at set temperatures and pressures, although measuring these for every type of material can be challenging if the material undergoes its phase transitions at extremely high or low temperatures or pressures. More often, scientists use chemical reactions to determine whether a specific element is found in a substance. For example, one test to determine the authenticity of a meteorite is to determine whether it contains the element nickel.

A possible Meteorite found in the desert.

Nickel, as a siderophile element, is rare on the surface of Earth, with much of the planet’s nickel found in the Earth’s core. Hence, meteorites have a higher percentage of nickel than most surface rocks. To test for nickel, a small sample is ground into a powder and added to a solution of HNO3 (nitric acid). If nickel is present, the reaction will leave ions of nickel (Ni+ and Ni2+) in the solution. A solution of NH4OH (ammonium hydroxide) is then added to increase the pH of the solution by introducing OH- ions; this causes any iron ions (Fe2+ and Fe3+) to react with the OH- ions, forming a solid (seen as a rust-colored precipitate at the bottom of the solution). The final step is to pour off the clear liquid, which should now contain the nickel ions, and add dimethylglyoxime (C4H8N2O2), a complex organic molecule that reacts with ions of nickel: Ni + 2 C4H8N2O2 → Ni(C4H8N2O2)2

Nickel bis(dimethylglyoximate) is bright red, and a bright red color indicates the presence of nickel in the powdered sample. This method of detecting nickel was worked out in 1905 by Lev Chugaev, a Russian chemistry professor at the University of St. Petersburg. This type of diagnostic test may read a little like a recipe in a cookbook, but it provides a “spot” test for the presence or absence of an element in a substance. Such tests have been developed for many different types of materials in which someone wishes to know whether a particular element is present in a solid, liquid or even a gas.

Such methodologies can also separate liquids that have been mixed together by utilizing differences in boiling temperatures, such that a mixture of liquids can be separated back into specific liquids based on the temperature at which each liquid boils. Such distillation methods are used in the petroleum refining processes at oil and gas refineries, where heating crude oil separates out different types of hydrocarbon oils and fuels, such as kerosene, octane, propane, benzene, and methane.

## Chromatography

One of the more important innovations in chemical analysis is chromatography, first developed by the Italian-Russian chemist Mikhail Tsvet, whose surname, Цвет, means "color" in Russian. Chromatography is the separation of molecules based on color, and was first developed in the study of the plant pigments found in flowers. The method is basically to dissolve plant pigments in a solution of ethanol and calcium carbonate, which separates them into different colored bands that can be observed in a clear beaker. Complex organic molecules can be separated out using this method, and purified.

Using this principle, gasses can be analyzed in a similar way through gas chromatography, in which gases (often solid or liquid substances that have been combusted or heated in a hot oven until they become a gas) are passed through a column with a carrier gas of pure helium. As the gasses pass by a laser sensor, the color differences are measured at discrete pressures, which are adjusted within the column. Gas chromatography is an effective way to analyze the chemical makeup of complex organic compounds.

Color is an effective way to determine the chemical ingredients of Earthly materials, and it has been widely known that certain elements can exhibit differing colors in materials. Many elements are used in glass making and fireworks to dazzling effect. Pyrolysis–gas chromatography is a specialized type of chromatography in which materials are combusted at high temperatures, and the colors of the produced smaller gas molecules are measured to determine their composition. Often gas chromatography is coupled with a mass spectrometer.

## Mass spectrometry

Many mass spectrometers use a magnet to separate molecules of different atomic mass.

A mass spectrometer measures the differing masses of molecules and ionized elements in a carrier gas (most often inert helium). Mass spectrometry examines a molecule’s or ion’s total atomic mass by passing it through a gas-filled analyzer tube between strong magnets, which deflect its path based on the atomic mass. Lighter molecules and ions deflect more than heavier ones, so they strike the detector at the end of the analyzer tube at different places, producing a pulse of electric current in each detector. The more electric current is recorded, the higher the number of molecules or ions of that specific atomic mass. Mass spectrometry is the only way to measure the various isotopes of elements, since the instrument directly measures atomic mass.
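The deflection can be sketched with the textbook formula for a charged particle moving through a magnetic field, r = mv/(qB): for a fixed speed, charge and field strength, the radius of the ion's arc grows with its mass, so lighter ions bend more sharply. A Python sketch with hypothetical instrument numbers (the speeds and field strength are illustrative, not from the text):

```python
# Sketch: why heavier ions deflect less in a magnetic-sector mass
# spectrometer. A singly charged ion of mass m moving at speed v through a
# field B follows a circular arc of radius r = m*v / (q*B): larger mass,
# larger radius, so different isotopes land at different detector positions.
AMU = 1.6605e-27      # kg per atomic mass unit
Q   = 1.6022e-19      # charge of a singly ionized atom (coulombs)

def arc_radius_m(mass_amu, speed_m_s, field_tesla):
    return mass_amu * AMU * speed_m_s / (Q * field_tesla)

# Hypothetical numbers: carbon-12 vs carbon-13 ions at 1e5 m/s in a 0.5 T field
r12 = arc_radius_m(12.000, 1e5, 0.5)
r13 = arc_radius_m(13.003, 1e5, 0.5)
print(r12 < r13)  # True: the heavier isotope follows a wider arc
```

The tiny difference in arc radius between carbon-12 and carbon-13 is exactly what the detector array resolves when measuring isotope ratios.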

There are two flavors of mass spectrometer. The first is built to examine the isotopic composition of carbon and oxygen, as well as nitrogen, phosphorus and sulfur: elements common in organic compounds, which combust in the presence of oxygen to produce gas molecules of CO2, NO2, PO4 and SO4. These mass spectrometers can be used in special cases to examine the hydrogen and oxygen isotopic composition of H2O. They are also useful for measuring carbon and oxygen isotopes in calcium carbonate (CaCO3), which produces CO2 when reacted with acid. The other flavor of mass spectrometer ionizes elements (removes electrons) at extremely high temperatures, allowing ions of much higher atomic mass to be measured in an ionized plasma beam. These ionizing mass spectrometers can measure isotopes of rare transition metals, such as nickel (Ni), lead (Pb), and uranium (U), for example the ratios used in the radiometric dating of zircons. Modern mass spectrometers can laser-ablate tiny crystals or very small samples using an ion microprobe, capturing a tiny fraction of the material to pass through the mass spectrometer and measuring the isotopic composition of very tiny portions of substances. Such data can be used to compare composition across the surfaces of materials at a microscopic scale.

A robotic mass spectrometer on the surface of Mars (the Curiosity Rover).

Modern mass spectrometers were developed during the 1940s, around the end of World War II, but today they are fairly common in most scientific labs. In fact, the Sample Analysis at Mars (SAM) instrument on the Curiosity Rover contains a gas chromatograph coupled with a mass spectrometer, allowing scientists at NASA to determine the composition of rocks and other materials encountered on the surface of Mars. Rosalind Franklin did not have access to modern mass spectrometers in 1947, and their precursors, gigantic scientific machines called cyclotrons, were not readily available in post-war France. Instead, Rosalind Franklin trained on scientific machines that use electromagnetic radiation to study matter, exploiting specific properties of how light interacts with matter.

## Using Light

The major benefit of using light (and more broadly all types of electromagnetic radiation) to study a substance is that it does not require that the bonds between atoms be broken or changed. These techniques therefore do not require that a substance be destroyed by reacting it with other chemicals, or altered by combustion into a gas, to determine its composition.

A natural ruby, a gemstone that has been cut to reflect more light.

A gold bar exhibits metallic luster, or shine due to metallic bonds.

Early crystallographers, who studied gems and jewels, noticed the unique ways in which materials absorb and reflect light, and to understand the make-up of rocks and crystals, scientists used light properties, or luminosity, to classify these substances. For example, minerals composed of metallic bonds exhibit a metallic luster, or shine, and are opaque, while covalently and ionically bonded minerals are often translucent, allowing light to pass through. The study of light passing through crystals, minerals and rocks to determine their chemical make-up is referred to as petrology. More generally, petrology is the branch of science concerned with the origin, small-scale structure, and composition of rocks, minerals and other Earth materials, often studied under a polarizing light microscope.

Michel-Lévy pioneered the use of birefringence to identify minerals in thin section with a petrographic microscope.

## Refraction and Diffraction of Light

Light, in the form of any type of electromagnetic radiation, interacts with material substances in three fundamental ways: the light is absorbed by the material, bounces off the material, or passes through the material. In opaque materials, like a gold coin, light bounces off, or diffracts from, the surface, while in translucent materials, like a diamond, light passes through the material and exhibits refraction. Refraction is how a beam of light bends within a substance, while diffraction is how a beam of light is reflected off the surface of a substance. They are governed by two important laws in physics.

## Snell's Law of Refraction

Snell’s law is named after the Dutch astronomer Willebrord Snellius, although the relationship was first described by Ibn Sahl, a Persian scientist who published his early thesis on mirrors and lenses in 984 CE. Snell’s law refers to the relationship between the angle of incidence and the angle of refraction resulting from the change in the velocity of light as it passes into a translucent substance. Light slows down as it passes into a denser material. By slowing down, the beam of light bends; the amount that it bends is mathematically related to the angle at which the beam strikes the substance and to the change in velocity.

Light from medium n1, point Q, enters medium n2 at point O, and refraction occurs, to reach point P.

The mathematical expression can be written where ${\displaystyle \phi _{1}}$ is the angle from perpendicular at which the light strikes the outer surface of the substance, and ${\displaystyle \phi _{2}}$ is the angle from perpendicular at which the light bends within the substance. v1 is the velocity of light outside the substance, while v2 is the velocity of light within the substance.

${\displaystyle {\frac {\sin \phi _{1}}{v_{1}}}={\frac {\sin \phi _{2}}{v_{2}}}}$
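As a minimal numeric sketch (not from the text), Snell's law can be rearranged to solve for the refraction angle. Using the refractive index n = c/v, the law above is equivalent to n1 sin φ1 = n2 sin φ2; the index values for air and water below are standard reference figures, not quoted in this chapter.

```python
import math

def refraction_angle(phi1_deg, n1, n2):
    """Solve Snell's law, n1*sin(phi1) = n2*sin(phi2), for phi2 in degrees.

    Because n = c/v, this is the same relation as
    sin(phi1)/v1 = sin(phi2)/v2 given in the text."""
    sin_phi2 = n1 * math.sin(math.radians(phi1_deg)) / n2
    return math.degrees(math.asin(sin_phi2))

# A beam entering water (n ~ 1.33) from air (n ~ 1.00) at 30 degrees from
# perpendicular bends toward the perpendicular: the "broken straw" illusion.
angle_in_water = refraction_angle(30.0, 1.00, 1.33)  # about 22 degrees
```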

Some translucent materials reduce the velocity of light more than others. For example, light keeps a relatively high velocity in quartz (SiO2), so a beam bends only slightly as it enters. This makes materials made of silica, or SiO2, such as eyeglasses and windows, easy to see through.

A crystal of calcite produces a double image because light is bent along two different paths (double refraction) as it passes through the crystal.

However, in calcite (CaCO3), also known as calcium carbonate, light travels at much lower velocities, so a beam bends strongly as it enters. This makes light bend within materials made of calcite. Crystals of calcite are often sold in rock shops or curio shops as “Television” crystals, because the light bends so greatly as to make any print placed below a crystal appear to come from within the crystal itself, like the screen of a television. Liquid water, in which light travels more slowly than in air, also bends light, resulting in the illusion of a drinking straw appearing broken in a glass containing water.

Measuring the angles of refraction in translucent materials allows scientists to determine the chemical make-up of the material, often without having to destroy or alter it. However, obtaining these angles of refraction often requires making a thin section of the material to examine under a specialized polarizing microscope designed to measure these angles.

## Bragg's Law of Diffraction

Bragg’s law is named after Lawrence Bragg and his father William Bragg, who together worked out the crystalline structure of diamonds and were awarded the Nobel Prize in 1915 for their work on diffraction. Unlike Snell’s law, Bragg’s law results from the interference of light waves as they diffract (reflect) off a material’s surface. The wavelength of the light needs to be known, as well as the angle at which the light is reflected from the surface of the substance.

Bragg’s law results from the interference of light waves as they diffract (reflect) off a material’s surface.

The mathematical expression can be written where the light wavelength is λ, ${\displaystyle \phi _{h}}$ is the angle from the horizontal plane of the surface (or glancing angle), and d is the interplanar distance, the spacing between atomic-scale crystal lattice planes in the substance.

${\displaystyle 2d\sin \phi _{h}=n\lambda }$

The distance d can be determined if λ and ${\displaystyle \phi _{h}}$ are known. n is an integer, the “order” of diffraction/reflection of the light wave.

Monochrome light (that is, light of only one wavelength) is shone onto the surface of a material at a specific angle, and the light reflected off the material is measured at every angle. Using this information, the specific distances between atoms can be measured.

Since the distance between atoms is directly related to the number of electrons within each orbital shell, each element on the periodic table will have different values of d, depending on the orientations of the atomic bonds. Furthermore, different types of bonds between the same elements will result in different d distances. For example, both graphite and diamonds are composed of carbon atoms, but are distinguished from each other in how those atoms are bonded together. In graphite, the carbon atoms are arranged into planes separated by d-spacings of 3.35 Å, while in diamonds the atoms are more closely linked by covalent bonds, with d-spacings of 1.075 Å, 1.261 Å, and 2.06 Å.

These d-spacings are very small, requiring light with short wavelengths within the X-ray spectrum, such as a wavelength (λ) of 1.54 Å. Since most atomic bonds are very small, X-ray electromagnetic radiation is typically used in studies of diffraction. The technique used to determine d-spacings within materials is called X-Ray Diffraction (XRD). It is often coupled with tools to measure X-Ray Fluorescence (XRF), which measures the energy states of excited electrons that absorb X-rays and release energy as photons. X-Ray Diffraction measures how light waves reflect off the spacing between atoms, while X-Ray Fluorescence measures how light waves are emitted from atoms that were excited by light striking the atoms themselves. Fluorescence looks at the broad spectrum of light emitted, while diffraction only looks at monochromatic light (light of a single wavelength).
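As a sketch using only the numbers quoted above (the 1.54 Å X-ray wavelength and the graphite and diamond d-spacings), Bragg's law can be solved for the glancing angle at which first-order diffraction occurs:

```python
import math

def bragg_glancing_angle(d_angstrom, wavelength_angstrom=1.54, n=1):
    """Solve Bragg's law, 2*d*sin(phi) = n*lambda, for the glancing
    angle phi in degrees (first-order diffraction by default)."""
    return math.degrees(math.asin(n * wavelength_angstrom / (2.0 * d_angstrom)))

# Graphite's 3.35 A plane spacing diffracts 1.54 A X-rays at a shallow angle,
# while diamond's tighter 2.06 A spacing diffracts at a steeper angle.
angle_graphite = bragg_glancing_angle(3.35)
angle_diamond = bragg_glancing_angle(2.06)
```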

Great advances have been made in both XRD and XRF tools, such that many hand-held analyzers now allow scientists to quickly analyze the chemical make-up of materials outside of the laboratory and without destroying the materials under study. XRD and XRF have revolutionized how materials can be quickly analyzed for various toxic elements, such as lead (Pb) and arsenic (As).

In the late 1940s, X-Ray diffraction was on the cutting edge of science, and Rosalind Franklin was using it to analyze more and more complex organic molecules, molecules containing long chains of carbon bonded with hydrogen and other elements. In 1950, Rosalind Franklin was awarded a 3-year research fellowship from an asbestos mining company to come to London to conduct chemistry research at King’s College. Equipped with a state-of-the-art X-Ray diffraction machine, Rosalind Franklin set to work to decode the chemical bonds that form complex organic compounds found in living tissues.

## The Discovery of the Chemistry of DNA

She was encouraged to study the nature of a molecule called deoxyribonucleic acid, or DNA, found inside living cells, particularly sperm cells. For several months she worked on a project to unravel this unique molecule, when one day an odd little nerdy researcher by the name of Maurice Wilkins arrived at the lab. He was furious with Rosalind Franklin, as he had been away on travel, but before leaving had been working on the very topic Franklin was now pursuing, using the same machine. The chair of the department had not informed either of them that they were working on the same problem and were to share the same equipment and lab space. This, as you can imagine, caused much friction between Franklin and Wilkins.

Despite this setback, Rosalind Franklin made major breakthroughs in the few months she had sole access to the machine, and was able to uncover the chemical bonds found in deoxyribonucleic acid. These newly deciphered helical bonds allowed the molecule to spiral like a spiral staircase. However, Franklin was still uncertain. During 1952, Franklin and Wilkins worked alongside each other on different samples and with various techniques, using the same machine. They also shared a graduate student who worked in the lab between them, Raymond Gosling. Sharing a laboratory with Wilkins continued to be problematic for Rosalind Franklin, and she transferred to Birkbeck College in March of 1953, which had its own X-Ray diffraction lab. Wilkins returned to his research on deoxyribonucleic acid using the X-Ray diffraction machine at King’s College; however, a month later in 1953, two researchers at Cambridge University announced to the world their own solution to the structure of DNA in a journal article published in Nature.

A scale model of the atoms that make up DNA (a double helix)

Their names were Francis Crick and James Watson. These two newcomers published their solution before either Franklin or Wilkins had a chance to. Furthermore, they had only recently begun their quest to understand deoxyribonucleic acid, after being inspired by scientific presentations given by both Franklin and Wilkins. Crick and Watson lacked equipment, so they spent their time building models, linking carbon atoms (represented by balls) together with other elements to form helical towers. Their insight came from a famous photograph taken by Franklin and her student Gosling, which was shown to them by Wilkins in early 1953. Their research became widely celebrated in England, as it appeared that the American scientist Linus Pauling was close to solving the mystery of DNA, and the British scientists had uncovered it first.

In 1962, Wilkins, Crick and Watson shared the Nobel Prize in Physiology or Medicine. Although today Rosalind Franklin is widely recognized for her efforts to decipher the helical nature of the DNA molecule, she died of cancer in 1958, before she could be awarded a Nobel Prize. Today scientists can map out the specific chemistry of DNA, such that each individual molecule within different living cells can be understood well beyond its structure. Understanding complex organic molecules is of vital importance in the study of life on planet Earth.

## Rayleigh and Raman scattering

Chandrasekhara Venkata Raman

There stands an old museum nestled in the bustling city of Chennai along the Bay of Bengal in eastern India, where a magnificent crystal resides on a wooden shelf, accumulating dust in its display. It was this crystal that would excite one of the greatest scientists to investigate the properties of matter and discover a new way to study chemistry from a distance. This scientist was Chandrasekhara Venkata Raman of India, often known simply as C.V. Raman. Raman grew up in eastern India with an inordinate fascination for shiny crystals and gems and the reflective properties of minerals and crystals. He amassed a large collection of rocks, minerals, and crystals from his travels. One day he purchased from a farmer a large quartz crystal which contained trapped inclusions of some type of liquid and gas. The quartz crystal intrigued him, and he wanted to know what type of liquid and gas was inside. If he broke open the crystal to find out, it would ruin its rarity, as such inclusions of liquid and gasses inside a crystal are very rare. Without breaking open the crystal, if Raman used any of the techniques described previously, he could only uncover the chemical nature of the crystal’s outer surface, likely silicon dioxide (quartz).

To determine what the liquid and gas inside were, he set about inventing a new way to uncover the chemical make-up of materials that can only be seen, not touched. His discovery would allow scientists not only to identify the contents of this particular crystal, but to determine the chemical make-up of far distant stars across the universe, tiny atoms on the surfaces of metals, as well as gasses in the atmosphere.

Light has the unique ability to reveal the bonds within atomic structures without having to react or break those bonds. If a material is transparent to light, then it can be studied. Raman was well aware of the research of Baron Rayleigh, a late nineteenth century British scientist who discovered the noble gas argon by distilling air to purify it. Argon is a component of the air that surrounds you, and as an inert gas unable to bond to other atoms, it exists as single atoms in the atmosphere. If a light bulb is filled with only purified argon gas, light within the bulb will pass through the argon gas, producing a bright neon-like purple color. If a light bulb is filled with helium (He) gas, it produces a bright reddish color, but the brightest light is seen with neon (Ne), a bright orange color. These noble gases produced bright neon-colors in light bulbs and soon appeared in the early twentieth century as bright neon window signs in the store fronts of bars and restaurants.

Excited argon gas in a glass tube.

Why did each type of gas produce a different color in these lightbulbs? Rayleigh worked out that the color was caused by the scattering of light waves due to differences in the size of the atoms. In the visual spectrum, light waves are much larger than the diameter of these individual atoms, such that the fraction of light scattered by a group of atoms is related to the number of atoms per unit volume, as well as the cross-sectional area of the individual atoms. By shining light through a material, you could measure the scattering of light and in theory determine the atoms’ cross-sectional area. This Rayleigh scattering could also be used to determine temperature, since the density of a gas changes with temperature.

In addition to Rayleigh scattering, the atoms also absorb light of particular wavelengths, such that a broad range of wavelengths passed through a gas will be absorbed at discrete wavelengths, allowing you to fingerprint certain elements within a substance based on these spectral lines of absorption.

In his lab, Raman used these techniques to determine the chemical make-up of the inclusions within the crystal by shining light through it. This is referred to as spectroscopy, the study of the interaction between matter and electromagnetic radiation (light). Raman did not have access to modern lasers, so he used a mercury light-bulb and photographic plates to record where light was scattered, as thin lines appeared where the photographic plate was exposed to light. It was time-consuming work, but it eventually led to the discovery that some of the scattered light had lost energy and shifted into longer wavelengths.

Most of the observed light waves would bounce off the atoms without any absorption (elastic scattering, or Rayleigh Scattering), while other light waves would be fully absorbed by the atoms. However, Raman found that some light waves would both bounce off the atoms and contribute some vibrational energy to them (inelastic scattering), which became known as Raman Scattering. The amount of light that would scatter and be absorbed was unique to each molecule. Raman had discovered a unique way to determine the chemistry of substances using light, which today is called Raman spectroscopy, a powerful tool to determine the specific chemical make-up of complex materials and molecules by looking at the scattering and absorption of light. In the end, C.V. Raman determined that the mysterious fluid and gas within the quartz crystal were water (H2O) and methane (CH4). Today, the crystal remains intact in a museum in Bangalore, residing in the institute named after Raman, the Raman Research Institute of India.

Scientists today have access to powerful machines to determine the chemical make-up of Earth’s materials, to such an extent that for nearly any type of material, whether solid, liquid, gas, or even plasma, the elements found within the substance under study can be identified. Each technique has the capacity to determine the presence or absence of individual elements, as well as the types of bonds found between atoms. Now that you know a little of how scientists determine the make-up of Earth materials, we will examine in detail Earth’s gases, in particular Earth’s atmosphere.

# 4a. The Air You Breathe.

## Take a Deep Breath

The thin blue line represents Earth's atmosphere as seen from space.

Take a deep breath. The air that you inhale is composed of a unique mix of gasses which form the Earth’s atmosphere. The Earth’s atmosphere is the gas-filled sphere representing the outermost portion of the planet. Understanding the unique mix of gasses within the Earth’s atmosphere is of vital importance to living organisms that require the presence of certain gases for respiration. Air in our atmosphere is a mix of gases with very large distances between individual molecules. Although the atmosphere does vary slightly between various regions of the planet, the atmosphere of Earth is nearly consistent in its composition: mostly Nitrogen (N2), representing about 78.08% of the atmosphere. The second most abundant gas in Earth’s atmosphere is Oxygen (O2), representing 20.95% of the atmosphere. This leaves only 0.97%, of which 0.93% is composed of Argon (Ar). This mix of Nitrogen, Oxygen and Argon is unique within the solar system, especially compared to neighboring planets: Mars has an atmosphere of 95.32% Carbon dioxide (CO2), 2.6% Nitrogen (N2) and 1.9% Argon (Ar), while Venus has an atmosphere of 96.5% Carbon dioxide (CO2), 3.5% Nitrogen (N2) and trace amounts of Sulfur dioxide (SO2). Earth’s atmosphere is strange in its abundance of Oxygen (O2) and very low amounts of Carbon dioxide (CO2). However, evidence exists that Earth began its early history with an atmosphere similar to those of Venus and Mars, an atmosphere rich in carbon dioxide.

Atmospheric composition of Mars, Earth and Venus, with carbon dioxide rich atmospheres on Mars and Venus, and a Nitrogen/Oxygen rich atmosphere on Earth.
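The planetary compositions above can be tabulated and sanity-checked with a few lines of code; this sketch uses only the percentages quoted in the text.

```python
# Atmospheric composition (%) for each planet, as quoted in the text.
atmospheres = {
    "Earth": {"N2": 78.08, "O2": 20.95, "Ar": 0.93},
    "Mars":  {"CO2": 95.32, "N2": 2.6, "Ar": 1.9},
    "Venus": {"CO2": 96.5, "N2": 3.5},
}

# The listed gases account for over 99% of each atmosphere;
# the small remainder is trace gases.
for planet, gases in atmospheres.items():
    assert sum(gases.values()) > 99.0

# Earth is the odd one out: dominated by N2 (and O2) rather than CO2.
dominant_gas = {p: max(g, key=g.get) for p, g in atmospheres.items()}
```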

## Earth’s Earliest Atmosphere

Evidence for Earth’s early atmosphere comes from the careful study of moon rocks brought back to Earth during the Apollo missions, which show that lunar rocks are depleted in carbon, with 0.0021% to 0.0225% of the total weight of the rocks composed of various carbon compounds (Cadogan et al., 1972: Survey of lunar carbon compounds). Analyses of Earth’s igneous rocks show that carbon is more common in the solid Earth, with percentages between 0.032% and 0.220%. If Earth began its history with a rock composition similar to that found on the Moon (during its molten early history), most of the carbon on Earth would have been free as gasses of carbon dioxide and methane in the atmosphere, accounting for an atmosphere that was upwards of 1,000 times denser and composed mostly of carbon dioxide, similar to Venus and Mars. Further evidence from ancient zircon crystals indicates low amounts of carbon in the solid Earth during its first 1 billion years of history, and supports an early atmosphere composed mostly of carbon dioxide.

Today Earth’s rocks and solid matter contain the vast majority of carbon (more than 99% of the Earth’s carbon), and only a small fraction is found in the atmosphere and ocean, whereas during its early history the atmosphere appears to have been the major reservoir of carbon, containing most of the Earth’s total carbon, with only a small fraction locked up in rocks. Over billions of years, in the presence of water vapor, the amount of carbon dioxide in the atmosphere decreased, as carbon was removed from the atmosphere in the form of carbonic acid and deposited as calcium carbonate (CaCO3) into crustal rocks. Such scrubbing of carbon dioxide from the atmosphere did not appear to occur on Venus and Mars, which both lack large amounts of liquid water and water vapor on their planetary surfaces. This also likely resulted in a less dense atmosphere for Earth, which today has a density of 1.217 kg/m3 near sea level. Levels of carbon dioxide in the Earth’s atmosphere dramatically decreased with the advent of photosynthesizing life forms and calcium carbonate skeletons, which further pulled carbon dioxide out of the atmosphere and accelerated the process around 2.5 billion years ago.

## Water in the atmosphere

Water vapor is highest near the Earth's equator and lowest near the poles.

It should be noted that water (H2O) makes up a significant component of the Earth’s atmosphere, as gas evaporated from Earth’s liquid oceans, lakes, and rivers. The amount of water vapor in the atmosphere is measured as relative humidity. Relative humidity is the ratio (often given as a percentage) between the partial pressure of water vapor and the equilibrium pressure of liquid water at a given temperature over a smooth surface. A relative humidity of 100% means that the partial pressure of water vapor is equal to the equilibrium pressure of liquid water, and water will condense to form droplets, either as rain or as dew on a glass window. Note that relative humidity is NOT an absolute measure of atmospheric water vapor content; a measured relative humidity of 100% does NOT mean that the air contains 100% water vapor, nor does 25% relative humidity mean that it contains 25% water vapor. In fact, water vapor (H2O) accounts for only between 0 and 4% of the total composition of the atmosphere, with 4% values found in equatorial tropical regions of the planet, such as rainforests. In most places, water vapor (H2O) represents only trace amounts of the atmosphere and is found mostly close to the surface of the Earth. The amount of water vapor air can hold is related to both its temperature and pressure: the higher the temperature and the lower the pressure, the more water molecules are found in the air. Water molecules are at an equilibrium with Earth’s air; however, if temperatures on Earth’s surface were to rise above 100° Celsius, the boiling point of water, the majority of water on the planet would be converted to gas and make up a significant portion of Earth’s atmosphere as water vapor. Scientists debate when temperatures on Earth’s surface dropped below this high value and when liquid oceans first appeared on the surface of the planet, but by 3.8 billion years ago, Earth appears to have had oceans present.
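The definition above can be sketched in code. The saturation (equilibrium) vapor pressure formula used here is the empirical Tetens approximation, an outside assumption added for illustration; it is not given in the text.

```python
import math

def saturation_vapor_pressure_kpa(temp_c):
    """Equilibrium (saturation) vapor pressure of water in kPa at temp_c
    degrees Celsius, via the empirical Tetens approximation (an assumption
    for illustration, not a formula from the text)."""
    return 0.61078 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def relative_humidity_percent(vapor_pressure_kpa, temp_c):
    """Relative humidity: the ratio of the water vapor partial pressure to
    the equilibrium pressure at that temperature, as a percentage."""
    return 100.0 * vapor_pressure_kpa / saturation_vapor_pressure_kpa(temp_c)

# Warmer air can hold more water vapor: the same 1.2 kPa of water vapor is
# much closer to saturation (100%) at 15 C than at 30 C.
rh_cool = relative_humidity_percent(1.2, 15.0)  # roughly 70%
rh_warm = relative_humidity_percent(1.2, 30.0)  # roughly 28%
```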

Artist reconstruction of the Hadean Eon of Earth's early history, when the Earth was mostly molten.

Before this was a period of a molten Earth called the Hadean (named after Hades, the underworld of Greek mythology). Lasting between 500 and 700 million years, the Hadean was an Earth that resembled the hot surface of Venus, consistently bombarded with meteorite impacts and wracked by massive volcanic eruptions and flowing lava. Few, if any, rocks are known from this period of time, since so much of the Earth was molten and liquid at this point in its history. Temperatures must have dropped, leading to the appearance of liquid water on Earth’s surface and resulting in a less dense atmosphere. This started the lengthy process of cleansing the atmosphere of carbon dioxide.

For the next 1.3 billion years, Earth’s atmosphere was a mix of water vapor, carbon dioxide, nitrogen, and argon, with traces of foul-smelling sulfur dioxide (SO2) and nitrogen dioxide (NO2). It is debated whether there may also have been pockets of hydrogen sulfide (H2S), methane (CH4) and ammonia (NH3), or whether these gas compounds were mostly oxidized in the early atmosphere. However, free oxygen (O2) was rare or absent in Earth’s early atmosphere; free oxygen as a gas would only appear later, and when it did, it would completely alter planet Earth.

# 4b. Oxygen in the Atmosphere.

## How Earth's Atmosphere became enriched in Oxygen

Classified as a lithophile element, the vast majority of oxygen on Earth is found in rocks, particularly in the form of SiO2 and other silicate minerals and carbonate minerals. During the early history of Earth, most oxygen in the atmosphere was bonded to carbon (CO2), sulfur (SO2) or nitrogen (NO2). However, today free oxygen (O2) accounts for 20.95% of the atmosphere. Without oxygen in today’s atmosphere, you would be unable to breathe the air and would die quickly.

The origin of oxygen on Earth is one of the great stories of the interconnection of Earth’s atmosphere with planetary life. Oxygen in the atmosphere arose during a long period called the Archean (4.0 to 2.5 billion years ago), when life first appeared and diversified on the planet.

Early microscopic single-celled lifeforms on Earth utilized the primordial atmospheric gasses for respiration, principally CO2, SO2 and NO2. These primitive lifeforms are called the Archaea, or archaebacteria, from the Greek arkhaios meaning primitive. Scientists refer to an environment lacking free oxygen as anoxic, which literally means without oxygen. Hypoxic means an environment with low levels of oxygen, while euxinic means an environment that is both low in oxygen and high in hydrogen sulfide (H2S). These types of environments were common during the Archean Eon.

Three major types of archaebacteria lifeforms existed during the Archean, and represent different groups of microbial single-celled organisms, all of which still live today in anoxic environments. None of these early archaebacteria had the capacity to photosynthesize, and instead relied on chemosynthesis, the synthesis of organic compounds by living organisms using energy derived from reactions involving inorganic chemicals only, typically in the absence of sunlight.

## Methanogenesis-based life forms

Methanogenesis-based life forms take advantage of carbon dioxide (CO2), using it to produce methane (CH4) through a complex series of chemical reactions in the absence of oxygen. Methanogenesis requires some source of carbohydrates (larger organic molecules containing carbon, oxygen and hydrogen) as well as hydrogen, and these organisms produce methane (CH4) particularly in sediments on the sea floor, in the dark and deep regions of the oceans. Today they are also found in the guts of many animals.
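In its simplest textbook form (standard chemistry, added here for illustration rather than taken from the text), the overall methanogenesis reaction combines carbon dioxide with hydrogen to yield methane and water:

${\displaystyle CO_{2}+4H_{2}\rightarrow CH_{4}+2H_{2}O}$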

## Sulfate-reducing life forms

Sulfate-reducing life forms take advantage of sulfur in the form of sulfur dioxide (SO2), by using it to produce hydrogen sulfide (H2S). Sulfate-reducing life forms require a source of carbon, often in the form of methane (CH4) or other organic molecules, as well as sources of sulfur, typically near volcanic vents.

## Nitrogen-reducing life forms

Nitrogen-reducing life forms take advantage of nitrogen in the form of nitrogen dioxide (NO2), using it to produce ammonia (NH3). Nitrogen-reducing life forms also require a source of carbon, often in the form of methane (CH4) or other organic molecules.

All three types of life-forms exhibit anaerobic respiration, or respiration that does not involve free oxygen. In fact, these organisms produce gasses that combust or burn in the presence of oxygen, and hence oxidize to release energy. Both methane (CH4) and hydrogen sulfide (H2S) are flammable gasses and are abundant in modern anoxic environments rich in organic carbon, such as sewer systems and underground oil and gas reservoirs.

During the Archean, a new group of organisms arose that would dramatically change the planet’s atmosphere: the cyanobacteria. As the first single-celled organisms able to photosynthesize, cyanobacteria convert carbon dioxide (CO2) into free oxygen (O2). This allows microbial organisms to acquire carbon directly from atmospheric air or ocean water. Photosynthesis, however, requires sunlight (photons), which prevents these organisms from living permanently in the dark. They would grow into large “algal” blooms seasonally on the surface of the oceans based on the availability of sunlight. Able to live in both oxygen-rich and anoxic environments, they flourished. The oldest macro-fossils on Earth are fossilized “algal” mats called stromatolites, which are composed of thin layers of calcium carbonate secreted by cyanobacteria growing in shallow ocean waters. These layers of calcium carbonate are preserved as bands in the rocks, as some of Earth’s oldest fossils. Microscopically, cyanobacteria grow in thin threads, encased in calcium carbonate. With burial, cyanobacteria accelerated the removal of carbon dioxide from the atmosphere, as more and more carbon was sequestered into the rock record as limestone, and other organic matter was buried over time.

Large bloom of cyanobacteria in the Baltic Sea, which convert carbon dioxide to oxygen through photosynthesis. The emergence of this type of bacteria had a dramatic effect on Earth's atmosphere.

The first appearance of free oxygen in ocean waters led a fifth group of organisms to evolve, the iron-oxidizing bacteria, which use iron (Fe). Iron-oxidizing bacteria can use either iron-oxide Fe2O3 (in the absence of oxygen) or iron-hydroxide Fe(OH)2 (in the presence of oxygen). In the presence of small amounts of oxygen, these iron-oxidizing bacteria would produce solid iron-oxide molecules, which would accumulate on the ocean floor as red bands of hematite (Fe2O3). Once the limited supply of oxygen was used up by the iron-oxidizing bacteria, cyanobacteria would take over, resulting in the deposition of siderite, an iron-carbonate mineral (FeCO3). Seasonal cycles of “algal” blooms of cyanobacteria followed by iron-oxidizing bacteria resulted in yearly layers (technically called varves or bands) in the rock record, oscillating between hematite and siderite. These oscillations were enhanced by seasonal temperatures: since warm ocean water holds less oxygen than colder ocean water, the hematite bands were deposited during the colder winters, when the ocean was more enriched in oxygen.

Rock sample from a banded iron formation (BIF). Moodies Group, Barberton Greenstone Belt, South Africa, dated at 3.15 billion years old.

These bands of iron minerals are common throughout the Archean, and are called Banded Iron Formations (BIFs). Banded Iron Formations form some of the world’s most valuable iron-ore deposits, particularly in the “rust-belt” of North America (Michigan, Wisconsin, Illinois, and around the Great Lakes). These regions are places where Archean aged rocks predominate, preserving thick layers of these iron-bearing minerals.

## The Great Oxidation Crisis

Stromatolite fossils, which are fossilized layers of algal mats (cyanobacteria), are common during the great oxidation crisis, indicating a dramatic increase in photosynthesis and oxygen levels on Earth.

Around 2.5 to 2.4 billion years ago, cyanobacteria quickly rose to become the most dominant form of life on the planet. The ability to convert carbon dioxide (CO2) into free oxygen (O2) was a major advantage, since carbon dioxide was still plentiful in the atmosphere and dissolved in shallow waters. This also meant that free oxygen (O2) was quickly rising in the Earth’s atmosphere and oceans, quickly outpacing the amount of oxygen used by iron-oxidizing bacteria. With cyanobacteria unchecked, photosynthesis resulted in massive increases in atmospheric free oxygen (O2). This crisis resulted in a profound change of the Earth’s atmosphere toward a modern oxygen-rich atmosphere, and in the loss of many anoxic forms of life that previously flourished on the planet. The Great Oxidation Crisis was the first time a single type of life form would alter the planet in a very dramatic way and cause major climatic changes. The Banded Iron Formations disappeared, and a new period is recognized beginning around 2.4 billion years ago, the Proterozoic Eon.

## The Ozone Layer

The Antarctic ozone hole recorded on September 24, 2006, the protective layer blocks UV light and is produced by excited oxygen gas in the upper atmosphere.

An oxygen-rich atmosphere in the Proterozoic resulted, for the first time, in the formation of the ozone layer in the Earth’s atmosphere. Ozone is a molecule in which three oxygen atoms are bonded together (O3), rather than just two (O2). Two of the oxygen atoms share a double covalent bond, and one of these oxygen atoms shares a coordinate covalent bond with the third oxygen atom. This makes ozone highly reactive and corrosive, as it easily breaks apart to release a single reactive atom of oxygen (O), which quickly bonds to other atoms. Oxygen gas (O2) is much more stable, as it is made up of two oxygen atoms joined by a double covalent bond. Ozone has a pungent smell and is highly toxic because it easily oxidizes both plant and animal tissue. Ozone is one of the most common air pollutants in oil and gas fields, as well as in large cities, and a major factor in air quality indexes.

Most ozone, however, is found high in the Earth’s atmosphere, where it forms the ozone layer between 17 and 50 kilometers above the surface of the Earth, with the highest concentration of ozone at about 25 kilometers in altitude. Ozone is created at these heights in the atmosphere through complex interactions with Ultra-Violet (UV) electromagnetic radiation from the sun. Both oxygen and ozone block Ultra-Violet (UV) light from the sun, acting as a sun-block for the entirety of the planet. Oxygen absorbs ultraviolet rays with wavelengths between 160 and 240 nanometers; this radiation breaks oxygen bonds and results in the formation of ozone. Ozone can further absorb ultraviolet rays with wavelengths between 200 and 315 nanometers, and most radiation shorter than 200 nanometers is absorbed by nitrogen and oxygen, resulting in oxygen and ozone blocking most incoming high-energy UV light.
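The absorption bands quoted above can be expressed as a small lookup, a sketch using only the wavelength ranges given in the text:

```python
def uv_absorbers(wavelength_nm):
    """Return the atmospheric gases that absorb UV light at this wavelength,
    using the bands quoted in the text: O2 absorbs ~160-240 nm, ozone (O3)
    absorbs ~200-315 nm, and N2 helps absorb radiation below 200 nm."""
    absorbers = []
    if wavelength_nm < 200:
        absorbers.append("N2")
    if 160 <= wavelength_nm <= 240:
        absorbers.append("O2")
    if 200 <= wavelength_nm <= 315:
        absorbers.append("O3")
    return absorbers

# A 280 nm UV-B ray is stopped by the ozone layer alone,
# while a 220 nm ray is absorbed by both O2 and O3.
assert uv_absorbers(280) == ["O3"]
assert uv_absorbers(220) == ["O2", "O3"]
```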

With oxygen’s ability to prevent incoming UV sunlight from reaching the surface of the planet, oxygen had a major effect on Earth’s climate. Acting like a planet-wide shield, oxygen blocked high-energy UV light, and as a consequence Earth’s climate began to cool drastically. Colder oceans absorbed more oxygen into their waters, resulting in well-oxygenated oceans during this period in Earth’s history.

A new group of single-celled organisms arose to take advantage of increased oxygen levels by developing aerobic respiration, using oxygen (O2) as well as complex organic compounds of carbon, and respiring carbon dioxide (CO2). These organisms had to consume other organisms in order to find sources of carbon (and other vital elements), allowing them to grow and reproduce. Because oxygen levels likely varied greatly, these single-celled organisms could also use a less efficient method of respiration in the absence of oxygen, called anaerobic respiration. When this happens, waste products such as lactic acid or ethanol are produced in addition to carbon dioxide. Alcohol fermentation uses yeasts, which convert sugars in the absence of oxygen to produce alcoholic beverages containing ethanol and carbon dioxide. Yeasts and other more complex single-celled organisms began to appear on Earth during this time.
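Using glucose as the example fuel (a standard textbook simplification), the two processes described above can be written as overall reactions:

${\displaystyle C_{6}H_{12}O_{6}+6O_{2}\rightarrow 6CO_{2}+6H_{2}O+{\text{energy}}}$ (aerobic respiration)

${\displaystyle C_{6}H_{12}O_{6}\rightarrow 2C_{2}H_{5}OH+2CO_{2}+{\text{energy}}}$ (alcohol fermentation, no oxygen required)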

Single-celled organisms became more complex by incorporating bacteria (Prokaryotes), either as chloroplasts that could photosynthesize within the cell or as mitochondria that could perform aerobic respiration within the cell. These larger, more complex single-celled lifeforms are called the Eukaryotes and would give rise to today’s multicellular plants and animals.

An equilibrium, or balance, between carbon dioxide-consuming/oxygen-producing organisms and oxygen-consuming/carbon dioxide-producing organisms existed for billions of years, but the climate on Earth was becoming cooler than at any previous time in its history. More and more of the carbon dioxide was being used by these organisms, while oxygen was quickly becoming a dominant gas within the Earth’s atmosphere, blocking more of the sun’s high-energy UV light. Carbon was continually being buried, either as organic carbon molecules or as calcium carbonate, as these single-celled organisms died. This resulted in the sequestration, or removal, of carbon from the atmosphere for long periods of time.

## The Cryogenian and the Snowball Earth

If all the carbon dioxide were replaced by oxygen, the Earth would likely become lifeless and frozen.

About 720 million years ago, the amount of carbon dioxide in the atmosphere had dropped to such low levels that ice sheets began to form. Sea ice expanded out of the polar regions toward the equator. This was the beginning of the end of the Proterozoic, as the expanding sea ice, with its much higher albedo, reflected more and more of the sun’s rays back into space. A tipping point was reached in this well-oxygenated world, where ice came to cover more and more of the Earth’s surface. This was a positive feedback: expanding ice cooled the Earth by raising its albedo, resulting in runaway climate change. Eventually, according to the work and research of Paul Hoffman, the entire Earth was covered in ice. An ice-covered world, or snowball Earth, effectively killed off many of the photosynthesizing lifeforms living in shallow ocean waters, as these areas were covered in ice that prevented sunlight from penetrating. Like Europa, the ice-covered moon of Jupiter, Earth was now a frozen ice planet. These great glacial events are known as the Sturtian, Marinoan and Gaskiers glacial events, which lasted between 720 and 580 million years ago. From space, Earth would have appeared uninhabited and covered in snow and ice.
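The tipping-point behavior described above can be illustrated with a toy ice-albedo feedback. This is purely a sketch: every coefficient below is an invented illustrative number, not a calibrated climate parameter.

```python
# Toy ice-albedo feedback: more ice -> higher albedo -> colder planet -> more ice.
# All coefficients are invented for illustration; this is not a climate model.
def run(ice, steps=60):
    for _ in range(steps):
        albedo = 0.3 + 0.5 * ice             # ice cover raises planetary albedo
        temp = 20.0 - 50.0 * (albedo - 0.3)  # higher albedo means a colder planet
        # colder than the (arbitrary) 10-degree balance point -> ice grows
        ice = min(1.0, max(0.0, ice + 0.05 * (10.0 - temp)))
    return ice

print(run(0.3))  # starting below the tipping point, the ice melts away
print(run(0.5))  # starting above it, the freeze runs away to a snowball
```

In this toy model the unstable balance point sits at 40% ice cover: any starting point above it runs away to total glaciation, mimicking the runaway cooling of the snowball Earth hypothesis.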

The oxygen-rich atmosphere was effectively cut off from the lifeforms that would otherwise draw down the oxygen and produce carbon dioxide. Life on Earth might have ended at this point in its history, were it not for the active volcanic eruptions that continue to happen on Earth’s surface, re-releasing buried carbon back into the atmosphere as carbon dioxide. It is startling to note that if carbon dioxide had been completely removed from the atmosphere, photosynthesizing life, including all plants, would be unable to live on Earth, and without the input of gasses from volcanic eruptions, Earth would likely still be a frozen, nearly lifeless planet today.

Volcanic eruptions likely continued to release carbon dioxide gas, which built up over time in the absence of photosynthesis on a frozen planet.

Levels of carbon dioxide (an important greenhouse gas) slowly increased in the atmosphere, and these volcanic eruptions gradually thawed the Earth from its frozen state until the oceans became ice free. Life survived, resulting in the first appearance of multicellular life forms and the first colonies of cells: jellyfish- and sponge-like animals and the first colonial corals found in the Ediacaran, the last interval of the Proterozoic. This was followed by the early diversification of multicellular plants and animals in a new era, the era of multicellular life, the Phanerozoic.

Today, carbon dioxide is a small component of the atmosphere, making up less than 0.04% of the air, but carbon dioxide has risen dramatically in just the last hundred years, to levels above 0.07% in many regions of the world, nearly doubling the amount of carbon dioxide in the Earth’s atmosphere within a single human lifespan. A new climatic crisis faces the world today, one driven by rising global temperatures and rising carbon dioxide in the atmosphere.

# 4c. Carbon Dioxide in the Atmosphere.

## Mysterious Deaths

Her body was found when the vault was opened. Ester Penn lay inside the large locked bank vault at the Depository Trust Building on 55 Water Street in Lower Manhattan, New York. Security cameras revealed that no one had entered or left the bank vault after 9pm. Her body showed no signs of trauma, no forced entry was made into the vault, and nothing was missing. Ester Penn was a healthy 35-year-old single mother of two, who was about to move into a new apartment in Brooklyn that overlooked the Manhattan skyline. Now she was dead.

On August 21st, 1986, the small West African villages near Lake Nyos became a ghastly scene of death, when every creature, including 1,746 people, within the villages died suddenly in the night. The morning brought no sound of insects, no crowing of roosters, no children playing in the streets. Everyone was dead.

Each of these mysterious deaths has been attributed to carbon dioxide toxicity. The human body can tolerate levels up to 5,000 ppm, or 0.5%, carbon dioxide, but levels above 3 to 4% can be fatal. A medical condition called hypercapnia occurs when the lungs fill with air containing elevated carbon dioxide, which causes respiratory acidosis. Normally, the body is able to expel the carbon dioxide produced during metabolism through the lungs, but if there is too much carbon dioxide in the air, the blood becomes enriched in carbonic acid (CO2 + H2O -> H2CO3), resulting in partial pressures of carbon dioxide above 45 mmHg.
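Because the text switches between ppm and percent, a quick conversion makes the thresholds easier to compare (1% = 10,000 ppm; the helper names below are just illustrative):

```python
# Convert between parts-per-million and percent (1% = 10,000 ppm).
def ppm_to_percent(ppm):
    return ppm / 10_000

def percent_to_ppm(percent):
    return percent * 10_000

tolerable = ppm_to_percent(5_000)  # 0.5%, the tolerable limit cited above
fatal_low = percent_to_ppm(3.0)    # 30,000 ppm, lower end of the fatal range
```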

For the villagers around Lake Nyos, carbon dioxide was suddenly released from the lake, where volcanic gasses had enriched the waters with the gas. In the case of Ms. Penn, she released the carbon dioxide herself when she pulled a fire alarm from within the vault, which triggered a spray of carbon dioxide as a fire suppressant. Divers, submarine operators, and astronauts all worry about the effects of too much carbon dioxide in the air they breathe. No more dramatic episode involving carbon dioxide can match the ill-fated Apollo 13 mission to the moon.

## “Houston we have a problem.” – Jack Swigert

On April 14th, 1970, at 3:07 Coordinated Universal Time, 200,000 miles from Earth, three men wedged in the outbound Apollo 13 spacecraft heard an explosion (NASA, 2009). A moment later astronaut Jack Swigert transmitted a message to Earth: “Houston, we've had a problem here.” One of the oxygen tanks on board the Service Module had exploded, which also ripped a hole in a second oxygen tank and cut power to the spacecraft. Realizing the seriousness of the situation, the crew quickly scrambled into the Lunar Module. The spacecraft was too far from Earth to turn around; instead, the crew would have to navigate around the far side of the moon and swing back to Earth if they hoped to return alive. The Lunar Module now served as a life raft strapped to a sinking ship, the Service Module. The improvised life raft was not designed to hold a crew of 3 people for the 4-day journey home. Oxygen was conserved by powering down the spacecraft. Water was conserved by shutting off the cooling system, and drinking water was rationed to just a few ounces a day. There remained an additional worry: the buildup of carbon dioxide in the space capsule. With each exhaled breath, the crew expelled air containing about 5% carbon dioxide. This carbon dioxide would build up in the Lunar Module over the four-day journey and result in death by hypercarbia, the buildup of carbon dioxide in the blood. The crew had to figure out how long the air would remain breathable in the capsule.
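A back-of-the-envelope calculation shows why the buildup was so urgent. Every number below is an illustrative assumption, not a NASA figure:

```python
# Rough sketch of CO2 buildup in a small sealed cabin without scrubbing.
# All numbers are illustrative assumptions, not NASA figures.
CABIN_VOLUME_M3 = 6.7            # assumed free cabin volume
CREW = 3
CO2_PER_PERSON_M3_PER_DAY = 0.5  # roughly 1 kg of exhaled CO2 per person per day, as gas
DANGER_FRACTION = 0.04           # around 4% CO2 becomes life-threatening

# Without scrubbing, the CO2 volume fraction rises roughly linearly with time.
rate_per_day = CREW * CO2_PER_PERSON_M3_PER_DAY / CABIN_VOLUME_M3
hours_to_danger = DANGER_FRACTION / rate_per_day * 24
print(f"{hours_to_danger:.1f} hours")
```

Under these assumed numbers the cabin air turns dangerous within a matter of hours, not days, which is why removing carbon dioxide, rather than supplying oxygen, became the crew’s most pressing problem.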

Apollo 13 spacecraft configuration during its journey to the moon and back.

From Earth, television broadcasters reported the grave seriousness of the situation from Mission Control. The crew of Apollo 13 had to figure out the problem of the rising carbon dioxide in the air of the Lunar Module, if they were going to see Earth alive again.

## The Keeling Curve

Charles "David" Keeling in 2001.

In 1953, Charles “Dave” Keeling arrived at CalTech in Pasadena, California, on a postdoctoral research grant to study the extraction of uranium from rocks. He was assigned to the lab of Harrison Brown, who proved to be a dynamic figure. Brown had played a central role in the development of the nuclear bombs used against Japan. During the war, he had invented a new way to produce plutonium, which allowed upwards of 5 kg (11 lbs) of plutonium to be added to the “Fat Man” bomb dropped on the city of Nagasaki, killing nearly 100,000 people in August 1945. Afterward, Brown was crushed by the personal responsibility he felt for these deaths. He penned a book, Must Destruction Be Our Destiny?, in 1945, and began traveling around the world giving lectures on the dangers of nuclear weapons. Brown had previously advised Claire Patterson, who, while at the University of Chicago, was the first to radiometrically date meteorites using lead isotopes, determining the age of the Earth at 4.5 billion years. In 1951, Harrison Brown divorced his wife, remarried, and took a teaching position at Caltech, and it was here that the new chemistry postdoctoral researcher Charles Keeling arrived in 1953. Initially, Keeling was set to the task of extracting uranium from rocks, but his interests turned to atmospheric science, examining the chemical composition of the air, in particular measuring the amount of carbon dioxide.

Keeling set about building an instrument in the lab to measure the amount of carbon dioxide in air using a tool called a manometer. A manometer is a cumbersome series of glass tubing that measures the pressures of isolated air samples. Air samples were captured using a spherical glass flask that had been evacuated and sealed. Wrapped in canvas so that the fragile glass would not break, the empty flask would be opened outside, and the captured air that flowed in would be taken back to the lab for analysis. The manometer was first developed to measure the amount of carbon dioxide produced in chemistry experiments involving the combustion of hydrocarbons, allowing chemists to know how much carbon was in a material. Keeling used the same technique to determine the amount of carbon dioxide in the atmosphere; his first measured value was 310 ppm, or 0.0310%, found during a series of measurements made at Big Sur near Monterey, California.

Interestingly, Keeling found that concentrations of carbon dioxide increased slightly during the night. One hypothesis was that the gas, as it cooled, sank during the colder portions of the day. Carbon dioxide, with a molar mass of 44.01 g/mol, compared to 32 g/mol for oxygen gas (O2) and 28 g/mol for nitrogen gas (N2), is a significantly heavier gas and will sink into lower altitudes, valleys, and basins. Unless a sample was taken from places where carbon was being combusted, such as power plants, factories, or near highways, repeated experiments showed that carbon dioxide did not vary from place to place and remained near 310 ppm.
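The molar masses quoted above can be checked directly from standard atomic masses (the rounded values below are the usual reference figures):

```python
# Compute molar masses (g/mol) from standard atomic masses.
ATOMIC_MASS = {"C": 12.011, "O": 15.999, "N": 14.007}

co2 = ATOMIC_MASS["C"] + 2 * ATOMIC_MASS["O"]  # carbon dioxide, ~44.01 g/mol
o2 = 2 * ATOMIC_MASS["O"]                      # oxygen gas, ~32.00 g/mol
n2 = 2 * ATOMIC_MASS["N"]                      # nitrogen gas, ~28.01 g/mol
```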

However, this diurnal cycle intrigued him, and he undertook another analysis to measure the isotopic composition of the carbon, to trace where the carbon was coming from. The ratio of Carbon-13 (13C) to Carbon-12 (12C), reported as delta C-13 or ${\displaystyle \delta ^{13}C}$ (the per mil deviation of a sample’s 13C/12C ratio from a standard), is higher in molecules in which carbon is bonded to oxygen, while the ratio is lower in molecules in which carbon is bonded to hydrogen, because of the difference in atomic mass. Changes in this ratio reveal the source of the carbon in the air. If