Measurement standards are needed to quantify “how much” of something exists.
A teacher gives rulers to second-grade children (8 years old) at a primary school in Vaasa, Finland, on their second day of school. The ability to measure, quantitatively, “how much” of something you have lies at the foundation of all quantitative endeavors.
Early distance standards, like “cubits” or “feet,” were based on body parts.
This Ancient Egyptian artifact shows a fragment of a cubit measuring rod. Note the markings at the bottom of the rod showing various fractions of a cubit: forerunners of divisions like inches, centimeters, and millimeters.
A single “pace” was often used: around one yard/meter.
A pace, defined either by a single stride (as shown) or by a “return to the same foot” double stride, provided the original definition of a mile: 1,000 or 2,000 paces. Before the meter was defined by a pendulum’s length, this inconsistent standard was frequently used for similar distances.
The idea of a “standard meter” came from pendulum observations.
A pendulum, so long as its weight is concentrated in the bob at the bottom and air resistance, temperature changes, and large-angle effects can be neglected, will always have the same period when subject to the same gravitational acceleration. The fact that the same pendulum swung at different rates at different locations across Europe and the Americas was a hint toward Newton’s gravitation, and toward the latitude-dependent variation of surface gravity.
A swinging pendulum’s period is determined by two factors: length and gravity.
The front view (left) and side/schematic view (right) of the first pendulum clock ever built, in 1656/7, which was designed by Christiaan Huygens and built by Salomon Coster. The drawings come from Huygens’ 1658 treatise, Horologium. Many subsequent refinements, even prior to Newton’s gravity, were made to this original design; Huygens’ second pendulum clock, built in 1673, was designed to have each half-swing last for precisely one second.
A seconds pendulum, where each half-swing lasts one second, requires a pendulum approximately one meter long.
In general, only two factors determine the period of a pendulum: its length, where longer pendulums take longer to complete one oscillation, and the acceleration due to gravity, where a larger gravitational acceleration results in faster pendulum swings. This is why a pendulum clock is not universal, but must be calibrated to the specific gravitational acceleration at its location.
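For reference, a simple pendulum’s small-angle period depends only on its length and the local gravitational acceleration:

$$ T = 2\pi\sqrt{\frac{L}{g}} $$

For a seconds pendulum, each half-swing lasts one second, so the full period is T = 2 s, which gives

$$ L = \frac{gT^2}{4\pi^2} = \frac{g}{\pi^2} \approx \frac{9.81\ \mathrm{m/s^2}}{9.87} \approx 0.994\ \mathrm{m}, $$

just shy of one meter.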
Because gravity varies by ~0.2% across Earth, any pendulum-based “length” isn’t universal.
The gravitational field on Earth varies not only with latitude, but also with altitude and in other ways, particularly due to crustal thickness and the fact that the Earth’s crust effectively floats atop the mantle. As a result, the gravitational acceleration varies by a few tenths of a percent across Earth’s surface.
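Since the length of a seconds pendulum scales directly with the local value of g,

$$ L = \frac{gT^2}{4\pi^2} \quad\Rightarrow\quad \frac{\Delta L}{L} = \frac{\Delta g}{g} \approx 0.2\%, $$

a pendulum-defined meter would vary by roughly 2 millimeters depending on where on Earth it was calibrated.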
In 1790, the meter was defined as 1/10,000,000th the distance from the North Pole to the equator.
This map shows the entire globe of the Earth in a Mollweide projection, which preserves the areas of the land masses and oceans and keeps the map fully connected, with no gaps. However, the perpendicularity of latitude and longitude is sacrificed at high latitudes and far from the central meridian. A meter was once defined as 1/10,000,000th the distance from the North Pole to the equator, along the meridian passing through Paris, France.
Credit: Strebe/Wikimedia Commons
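That definition fixes the pole-to-equator meridian arc at exactly 10,000,000 meters, which implies a polar circumference of

$$ 4 \times 10^{7}\ \mathrm{m} = 40{,}000\ \mathrm{km}, $$

and it is why Earth’s circumference comes out so close to a round 40,000 km (the modern value along the meridian is about 40,008 km).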
That distance was then cast into a platinum bar.
These two images show two bars that defined the meter: at top, the 100% platinum bar brought to the United States in 1799, and the second meter bar, which followed 1875’s Treaty of the Meter and was received by President Benjamin Harrison in 1890; this bar (No. 27) became the reference standard for all length measurements until 1960.
Despite an early survey error of 0.2 millimeters, these bars became distance standards for decades.
Even though advances in quantum physics enabled superior definitions of the meter starting in 1927, the platinum-iridium bar, with an X-shaped cross-section and with the meter determined by markings along it rather than by the cut ends of the bar itself, remained the global standard until 1960.
Platinum-iridium alloy bars, with X-shaped cross-sections to better resist distortion, replaced those originals.
The idea behind a Michelson interferometer is that a source of light can be split into two by passing it through a device like a beam splitter, which sends half of the original light down each of two perpendicular paths. At the end of each path, a mirror bounces the light back the way it came, and then those two beams are recombined, producing an interference pattern (or a null pattern, if the interference is 100% destructive) on the screen. If the speed of light is different down each path, or if the path lengths are different, then the interference pattern will change in response.
In the 1920s, interferometry based on the wavelength of light emitted by atoms began to supersede the “bar” standard.
This illustration shows how destructive interference (left) and constructive interference (right) determine what sort of interference pattern arises on the screen (at bottom, both panels) in an interferometer setup. By tuning the interferometer’s distance to a specific number of wavelengths of light, a quantity such as a “meter” can be either measured or even defined.
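As a rough sketch of how interferometry ties length to wavelength: each time one mirror moves by half a wavelength, the round-trip path changes by a full wavelength and the pattern shifts by one fringe. Counting fringes therefore converts a displacement into a count of wavelengths,

$$ d = N \cdot \frac{\lambda}{2}, $$

where N is the number of fringes counted and λ is the wavelength of the light used.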
The right number of wavelengths of light defined the 20th century’s meter.
NIST’s William Meggers, shown here in March 1951, demonstrates a measurement of the wavelength of mercury-198, which he proposed could be used to define the meter. By defining the meter as a precise number of wavelengths emitted by a well-known atomic transition, a superior precision could be achieved compared to any “standardized” physical object, such as the classic meter bar.
First cadmium, then mercury, and finally krypton atoms were used to define the meter.
In 1960, after years of experiments with cadmium, krypton, and mercury, the 11th Conférence Générale des Poids et Mesures redefined the meter as the “length equal to 1,650,763.73 wavelengths in vacuum of the radiation corresponding to the transition between the levels 2p10 and 5d5 of the krypton-86 atom,” a definition which stood until 1983.
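That number was not arbitrary: it pins the wavelength of the krypton-86 transition to

$$ \lambda = \frac{1\ \mathrm{m}}{1{,}650{,}763.73} \approx 605.78\ \mathrm{nm}, $$

an orange-red spectral line, with the count chosen so that the new definition matched the old meter bar as closely as possible.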
Finally, in 1983, a new standard was adopted: the distance light travels in 1/299,792,458th of a second.
Although light is an electromagnetic wave with in-phase oscillating electric and magnetic fields perpendicular to the direction of light’s propagation, the speed of light is wavelength-independent: 299,792,458 m/s in a vacuum. If you can measure the distance that light of any wavelength travels in 1/299,792,458th of a second, you can precisely measure and know how long “1 meter” is from anywhere in the Universe.
Credit: And1mu/Wikimedia Commons
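Under this definition the speed of light is no longer something to be measured; it is fixed exactly, and length follows from a time measurement:

$$ 1\ \mathrm{m} = c \times \frac{1}{299{,}792{,}458}\ \mathrm{s}, \qquad c \equiv 299{,}792{,}458\ \mathrm{m/s}. $$

Equivalently, light travels one meter in about 3.34 nanoseconds, so any laboratory with a sufficiently good clock can realize the meter.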
Because the speed of light in a vacuum is always constant, this definition is universal.
The longer a photon’s wavelength is, the lower in energy it is. But all photons, regardless of wavelength or energy, move at the same speed: the speed of light. Surprisingly, this holds irrespective of the motion of the observer relative to the light; the speed of all forms of light is measured to be the same for all observers.
Mostly Mute Monday tells a scientific story in images, visuals, and no more than 200 words.
