IB Physics/Measurements and Uncertainties (2016)/Measurements in Physics (2016)
Physics is the study of real, physical phenomena. Anything that exists, or that can be directly or indirectly measured with any measuring instrument or thought experiment, can be thought of as a quantity. In IB physics, you will learn about many of these quantities, how to measure and manipulate them, and how to use the relationships we have found between them; this idea of the world being described by quantities is the basis of physics as an experimental science. Quantities are made up of two components - a number and a unit. These ideas are explained below.
Think back to your early childhood. What were numbers to you? They probably represented some quantity of a real thing - 2 apples, 3 cookies, and so on. "2 apples" is a very real thing, but "2" is merely an abstract idea that happens to describe how numerous those apples are.
Likewise, "5 metres" is a real thing that can be drawn out, pointed to, and visualised in reality; on the other hand, "5" is just another abstract concept that may describe some real things when used in a certain way.
The key difference between those two terms in quotes is the fact that the first term is the physical, real concept of a metre multiplied by 5, whereas "5" is still just a number. These physical, real concepts which are multiplied by numbers in order to represent things in the real world are called units.
Fundamentally speaking, the idea of a physical unit is what allows us to express very real things through the powerful abstract system of mathematics. That's why you should never omit the unit, if present, when finding a quantity in physics: the number by itself is meaningless; it is the act of multiplying it by some physical or abstract concept that gives it meaning.
Fundamental SI Units
There are seven basic units - seven basic physical things or phenomena - that can be combined through multiplication and division to describe every single quantity that is measured in the course of doing physics experiments and solving physics problems.
These are the metre (distance), kilogram (mass), second (time), ampere (flow of charge carriers / electric current), kelvin (absolute temperature, whose scale starts at absolute zero), mole (a unit of amount - a number - that allows conversion between subatomic particle masses and grams), and candela (luminous intensity; not touched in IB physics).
These units are defined and regulated by the BIPM; the acronym is French for the organisation's name, the International Bureau of Weights and Measures. The base SI units are called "SI", another French acronym meaning International System, to indicate how they are in practically universal international use in physics (except, sometimes, in the United States).
These seven units propagate through the various physics equations we have discovered that accurately describe reality to combine into new units. They can be divided by one another - for instance, the metre, m, can be divided by the second, s, to get the unit of velocity - the "metre per second", or m/s - a compound unit for speed or velocity.
Derived SI Units
While it is possible to show the nature of any physical quantity you might want to just in terms of the above 7 basic SI units, this can often become cumbersome when units get more complex. For instance, consider the formula for kinetic energy below.

E_k = (1/2)mv^2

The units of energy are thus the units of mass, multiplied by the units of velocity squared - or, kg m^2 s^-2. Writing this out every time you want to describe some amount of energy would just be tedious. As such, some combinations of the basic SI units have their own names and symbols - in this case, this one is called a joule, with the symbol J.
These derived units are in practically universal use in physics. Throughout the course, you will learn of more derived units and what they represent. All derived SI units can still be expressed in basic SI units, as the equation used to find the quantity with a particular derived unit will describe - however, the derived SI unit and its symbol are used for simplicity.
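The unit bookkeeping described above can be sketched in code. This is a minimal illustration, assuming a home-made representation of a unit as a dictionary mapping base-unit symbols to exponents (nothing here comes from a standard library):

```python
# Represent a unit as a mapping from base SI symbol to its exponent.
# These names and helpers are illustrative, not part of any library.

def multiply(u, v):
    """Combine two units by adding exponents (unit multiplication)."""
    out = dict(u)
    for sym, exp in v.items():
        out[sym] = out.get(sym, 0) + exp
        if out[sym] == 0:
            del out[sym]  # drop cancelled units
    return out

kilogram = {"kg": 1}
metre_per_second = {"m": 1, "s": -1}

# Units of kinetic energy E_k = (1/2)mv^2: kg multiplied by (m/s)^2.
velocity_squared = multiply(metre_per_second, metre_per_second)
joule = multiply(kilogram, velocity_squared)
print(joule)  # {'kg': 1, 'm': 2, 's': -2}, i.e. kg m^2 s^-2
```

Dividing by a unit is just multiplying by the same dictionary with the signs of the exponents flipped.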
Numbers are the other part of a quantity, in addition to units - the multiplier - which describe the magnitude of the physical quantity by scaling the unit up or down. The idea of a number is not as hard to comprehend as the idea of a unit; however, scientists have a wide diversity of conventions, or established practices, that they use in order to make their usage of numbers quick and precise.
Orders of Magnitude
One thing that you ought to know is the idea of orders of magnitude. They are not a natural concept, but a human-created one - essentially, an extra power of 10 is an additional order of magnitude, and taking away a power of 10 is one fewer order of magnitude. Although one might discuss orders of magnitude formally in the sense that a number like 500 is on the second order of magnitude (like other numbers around 10^2), this is never really done in practice.
Instead, the key uses of orders of magnitude in physics are in communication - for instance, of the fact that two quantities are of the same order of magnitude, or that a quantity is several, or a particular number of, orders of magnitude greater or smaller than another.
As an example, I might say that "the gas constant, R, is on the same order of magnitude as the number of fingers I have on my hands". I might say that "the mass of the earth is many orders of magnitude greater than the mass of my apartment building". I might also say, more relevantly to physics, for instance, that "electric fields are many orders of magnitude stronger than gravitational fields".
Orders of magnitude are a qualitative description that is really just used in conversation or informal writing between physicists. Exam questions on this topic may be phrased awkwardly because of this informal use, but will be easy.
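As a rough sketch, the order of magnitude of a number can be computed as the integer exponent it would have in scientific notation. The masses below are round-figure assumptions chosen for illustration, not measured values:

```python
import math

def order_of_magnitude(x):
    """The integer exponent x would carry when written in scientific notation."""
    return math.floor(math.log10(abs(x)))

earth_mass = 5.97e24   # kg, the commonly quoted figure
building_mass = 5e6    # kg, a rough guess for an apartment building

print(order_of_magnitude(earth_mass))     # 24
print(order_of_magnitude(building_mass))  # 6
# The Earth is about 18 orders of magnitude more massive.
```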
Scientific notation is a standard way to write down numbers such that the objectives in communication below are achieved.
- The number of significant figures to which a quantity is precise is clearly indicated.
- The order of magnitude is clearly indicated.
- The number will never have a long run of zeroes, nor will it have any more digits than are needed to convey all of the information about the number that we actually know.
All numbers written in scientific notation are done so in the form a × 10^n. Here, n is any integer, including 0. It is used to define the order of magnitude of the number - essentially, the place value where the number begins - a number in the thousands, hundreds of thousands, trillions, and so on can all be described as such by that power of 10, 10^n. Given this, all the numbers which have a place value beginning at a certain power of 10, and a certain number of significant figures' precision, can be described by the decimal term a multiplied by that power of 10 - a × 10^n. The number a must be greater than or equal to 1 (or else a lower order of magnitude would be appropriate), and must be less than 10 (or else a higher order of magnitude would be appropriate). It is usually a decimal, but not always - and it is written to exactly the number of significant figures that the writer can be sure of.
For instance, if I have measured the height of a skyscraper to be 344 m, but only to a precision of around 10 m, or two significant figures, then I would write this height as 3.4 × 10^2 m.
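In Python, the "e" format specifier writes a number in scientific notation; the skyscraper height below is an invented value used only to show how the format's precision maps onto significant figures:

```python
# The precision in an "e" format is the number of digits after the
# decimal point, so ".1e" gives two significant figures in total.
height = 344.0  # metres; an invented measurement for illustration

print(f"{height:.1e} m")  # 3.4e+02 m, i.e. 3.4 x 10^2 m to 2 s.f.
print(f"{height:.4e} m")  # 3.4400e+02 m, to 5 s.f.
```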
Another tool that scientists use to show the order of magnitude of a quantity, which goes hand in hand with scientific notation, is metric multipliers. These are like little, pre-packaged powers of 10 that can be multiplied onto a particular unit to make it easier to read, pronounce, and talk about quantities. The common examples of these, that you might be expected to know, are in the table below.

Prefix   Symbol   Power of 10
pico     p        10^-12
nano     n        10^-9
micro    μ        10^-6
milli    m        10^-3
centi    c        10^-2
kilo     k        10^3
mega     M        10^6
giga     G        10^9
tera     T        10^12
Although these are technically numbers, they are shunted right next to the units to multiply them, essentially creating a new unit that is a few orders of magnitude away from the original. That's what's unique about them - even though they are technically a number, they make up a part of the unit. In IB physics, you will be expected to have a solid command of these metric multipliers.
It must be noted that while the kilogram (kg) is technically a modified unit because it has the "kilo" metric multiplier (i.e., a kilogram is a unit of mass equivalent to 10^3 grams), it is still a base SI unit. Generally, though, people will speak of "milligrams" and "nanograms" rather than "micro-kilograms" and "pico-kilograms". In practice, metric multipliers are mainly used for derived units, in addition to the metre, ampere and second, and are more often used to make units smaller rather than to make them larger.
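Since a metric multiplier is just a power of 10 attached to the unit, converting a prefixed quantity back to the unprefixed unit is a single multiplication. The dictionary and helper below are illustrative, not from any standard library (here "u" stands in for the micro sign):

```python
# Common SI prefixes as powers of 10.
PREFIXES = {"p": 1e-12, "n": 1e-9, "u": 1e-6, "m": 1e-3,
            "c": 1e-2, "k": 1e3, "M": 1e6, "G": 1e9}

def to_base(value, prefix):
    """Convert a prefixed quantity to the corresponding unprefixed unit."""
    return value * PREFIXES[prefix]

print(to_base(2.5, "k"))  # 2.5 km = 2500.0 m
print(to_base(500, "n"))  # 500 nm, roughly 5e-07 m
```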
The idea of estimation is not something that can be easily taught in a textbook. In a few words, it is "calculating but not really", or making "an educated guess".
One form of estimation centres on doing calculations only with order-of-magnitude figures in one's head, so that an estimate for the order of magnitude of the answer can be worked out. It's a quick way to check that a calculation seems valid. Did you find that a snail might move at 94% of the speed of light? You've probably done something wrong. That's where this form of estimation comes in most often.
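That snail check can itself be done as a one-line order-of-magnitude estimate; the round, one-significant-figure speeds below are assumptions chosen for the sketch:

```python
snail_speed = 1e-3     # m/s, roughly a millimetre per second (assumed)
speed_of_light = 3e8   # m/s, to one significant figure

fraction = snail_speed / speed_of_light
print(fraction)  # around 3e-12: more than eleven orders of magnitude
                 # below 1, so an answer like 0.94 must be a mistake
```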