Units of Measurement

Units of measurement are necessary to describe physical quantities. They are internationally (previously nationally or even regionally) agreed reference quantities with a fixed value that can be reproduced at any time. Today’s system of units of measurement (the International System of Units – abbreviated in all languages as “SI”) was introduced in 1960 within the scope of the Metre Convention. At first intended for science, technology and education, the system has since gained general acceptance in commerce and all other areas of social life. The SI distinguishes between two categories of units: base units and derived units. The following seven base units have existed since 1971:

Quantity | Unit | Symbol
Time, duration | second | s
Length | meter | m
Mass | kilogram | kg
Electric current | ampere | A
Thermodynamic temperature | kelvin | K
Amount of substance | mole | mol
Luminous intensity | candela | cd

The derived units are formed from the base units by algebraic operations (multiplication and division) based on the laws of nature for the relevant quantities.

In this context it is essential that no proportionality factor other than 1 is involved (coherent system of units). Some derived units have been given special names, e.g. volt, hertz, joule. It should be possible to realise the base units in an adequately equipped laboratory at any time. Consequently, their definitions relate to invariable properties of nature (atomic properties and fundamental constants), with the exception of the unit of mass: only the kilogram is still represented by an international prototype. The disadvantage of such prototypes is that they are exposed to environmental influences and the changes these entail and, moreover, that they are not freely available.

The International System of Units was introduced in Austria with the amendment of the Metrology Act 1973.

The 26th Conférence Générale des Poids et Mesures adopted the following definition of the second in 2018:

The second, symbol s, is the SI unit of time. It is defined by taking the fixed numerical value of the caesium frequency ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s⁻¹.

This is a restatement of the definition that had been valid since the 13th Conférence Générale des Poids et Mesures in 1967:

The second (s) is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom.

Isidor Isaac Rabi (1898 – 1988)

It follows that the hyperfine splitting in the ground state of the caesium-133 atom is exactly 9 192 631 770 Hz. More detailed information on the practical realisation of the second can be found on our page "Time and Frequency".
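
As a plain numerical illustration (not part of any official realisation, and with freely chosen variable names), the following sketch shows how the defined caesium frequency translates into the duration of a single period, 9 192 631 770 of which add up to one second:

```python
# Duration of one period of the caesium hyperfine transition radiation.
delta_nu_cs = 9_192_631_770          # Hz, fixed by the definition of the second
period = 1 / delta_nu_cs             # length of one period in seconds
print(f"One period lasts about {period:.4e} s")                          # ~1.0878e-10 s
print(f"{delta_nu_cs} periods add up to {delta_nu_cs * period:.0f} s")   # 1 s
```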

Since time immemorial, our concept of time has been based on the day – the time the Earth needs for one rotation about its axis. This period is arbitrarily divided into 24 hours of 60 minutes, each of 60 seconds. Incidentally, revolutionary France introduced a decimal time in 1793: each month had three decades of 10 days, and each day had 10 hours of 100 minutes, each of 100 seconds. This reform did not prevail, however, and was withdrawn in 1795. The definition of the second as 1/86 400 (24 · 60 · 60 = 86 400) of a mean solar day was recommended by the 3rd Conférence Générale des Poids et Mesures in 1913.

The rotation of the Earth, subject to various fluctuations (the influence of the tides, etc.), is gradually but irregularly slowing down. In the 1930s, the steadily improving clocks (quartz clocks) even made it possible to demonstrate the influence of the seasons. Because of these fluctuations, defining the second by the length of a day was no longer adequate, since its length would depend on the exact day chosen for the definition. It was therefore decided to define the second as a fraction of one revolution of the Earth around the Sun. The 11th Conférence Générale des Poids et Mesures in 1960 defined the second as the fraction 1/31 556 925.9747 of the tropical year that began on 31 December 1899 at noon. As one can see from this definition, which refers to one specific year, even the solar year is subject to fluctuations, caused by gravitational interactions of the Earth with other celestial bodies. By then, however, the atomic clock had already been invented, and so in 1967 the second was defined via the transition between the two hyperfine levels of the caesium ground state.

Transitions between hyperfine levels are used in atomic clocks because their frequencies can be determined very precisely and can easily be generated and measured, since they lie in the radio or microwave frequency range. For this purpose, the two hyperfine levels of the ground state of the electron in the outermost shell of the caesium isotope 133Cs (which is not radioactive) are used. By definition, microwave radiation with a frequency of 9 192 631 770 Hz is emitted during the transition from one of these energy states to the other. A caesium atomic clock is a high-precision frequency standard in the microwave range that is stabilised by a resonance effect. The foundational work leading to the construction of the atomic clock was carried out by the American physicist I. I. Rabi, who was awarded the Nobel Prize in Physics in 1944.

Since 1967 the second has no longer been defined as a fraction of a day or a year. The atomic and astronomical time scales therefore drift apart, so leap seconds are necessary to keep civil time in step with astronomical conditions.

The 17th Conférence Générale des Poids et Mesures 1983 adopted the following definition of the meter:

The meter (m) is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second.

Pierre-François-André Méchain (1744 – 1804)

This definition of the meter assigns a fixed value to the speed of light – the speed of light in vacuum is exactly 299 792 458 metres per second,

c0 = 299 792 458 m/s.

More detailed information on the practical realisation of the meter can be found on our page "Length".

The original definition and the first realisation of the meter date back to the time of the French Revolution. At that time, the many different local length units seriously hampered trade and even impeded the emerging Industrial Revolution. It was in the spirit of the French Revolution to look for a length unit that was the same for all people.


It was therefore decided to use a certain fraction of an Earth meridian (i.e. the Earth’s circumference from pole to pole) as the length standard. In 1792, P. F. A. Méchain and J. B. J. Delambre began to survey the meridian running through Paris between Dunkirk and Barcelona. The turmoil of the French Revolution delayed this work, so it could only be completed after six years.

The length of the full meridian was deduced from the length of this section, and it was decided that henceforth one meter would correspond to one ten-millionth of a quadrant of the Earth’s meridian (so that the circumference of the Earth is approximately 40 000 km). Incidentally, the evaluation of the survey data behind the meter also led to the development of modern methods for the analysis of measurement data (C. F. Gauß, A. M. Legendre).

Jean-Baptiste Joseph Delambre (1749 – 1822)

In 1799, a platinum bar corresponding to the determined length of the meter became the official French standard (“Mètre des Archives”). At the end of the 19th century (1st Conférence Générale des Poids et Mesures, 1889) a bar made of a platinum-iridium alloy (10 % iridium) became the new prototype of the meter, with the Mètre des Archives serving as the reference. Copies (likewise of platinum-iridium) were produced for the member states of the Metre Convention (among them the former Austria-Hungary) to serve as their national standards.

Shortly afterwards (1892–1893), the first successful experiments were carried out at the BIPM to determine the length of the meter in terms of multiples of the wavelength of monochromatic light. But it was not until 1960 that a definition of the meter based on the wavelength of monochromatic light was adopted.

For this definition, the radiation corresponding to a transition between specified energy levels of the krypton-86 isotope (86Kr) was chosen. The wavelength of this radiation multiplied by 1 650 763.73 gives one meter (i.e. a wavelength of about 605.78 nm).

At about the same time, the laser was invented and new, even more precise methods of length measurement became feasible. This led to the redefinition of the meter, valid since 1983, as the length of the path travelled by light in vacuum during a specific fraction of a second. In the 1970s, painstaking measurements had determined the speed of light in vacuum very precisely, yielding the value fixed above. The second is the base unit that can currently be realised with the highest precision. At present, length measurements rely primarily on determining the frequency of laser radiation, from which the wavelength is derived using the speed of light; stabilised lasers serve as standards for frequency and wavelength. Nevertheless, the original meter prototype of 1889 is still kept at the BIPM in Paris.
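
As a minimal numerical sketch of this procedure (the laser frequency below is only an illustrative value, roughly that of an iodine-stabilised He-Ne laser), the wavelength follows from the measured frequency and the exact speed of light:

```python
# Deriving a wavelength from a measured laser frequency using the exact
# speed of light fixed by the 1983 definition of the meter.
c0 = 299_792_458            # m/s, exact by definition
f_laser = 473.612e12        # Hz, illustrative value (roughly an iodine-stabilised He-Ne laser)
wavelength = c0 / f_laser   # metres
print(f"Wavelength ≈ {wavelength * 1e9:.2f} nm")   # about 633 nm
```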


It should be mentioned that at the time of the French Revolution an alternative approach to defining the meter was discussed: the meter was to be defined as the length of a pendulum beating once per second. This definition would also have been based on the second, but at the time it was rejected in favour of the meridian definition.


The 3rd Conférence Générale des Poids et Mesures 1901 specified the following definition of the kilogram:

The kilogram (kg) is the unit of mass; it is equal to the mass of the international prototype of the kilogram.

National kilogram prototype No. 49

By this definition the mass of the international prototype of the kilogram is always exactly 1 kilogram. To this day the kilogram is the only base unit that cannot be traced back to fundamental constants (such as the speed of light in vacuum). All mass determinations refer to the prototype, which is kept in safe custody at the BIPM in Paris. For more detailed information on mass see our page "Mass and Related Quantities".

As with the meter, the origin of the kilogram lies in the time of the French Revolution. Already during the reign of Louis XVI, attempts were made to replace the many different weight units by a uniform standard. The basis for this standard was the mass of one cubic decimetre of water at the temperature of its maximum density (i.e. at 4 °C). Initially this mass unit was to be named “grave”. After the Revolution it was decided to adopt the gram as the new mass unit instead, mainly because at that time many experiments for mass determination were conducted with masses much smaller than a kilogram. However, since a gram standard would have been impractical due to its small size and difficult to realise, a standard of 1 kilogram (= 1000 grams) was produced instead. It is quite possible that this decision was also politically motivated; in any case, this is why the base unit of mass carries a prefix (kilo = 1000).

After extensive measurements, mainly based on Archimedes’ principle, cylindrical artefacts of platinum were produced that represented the newly defined mass unit kilogram. One of these prototypes was declared the official kilogram standard of France, the “Kilogramme des Archives”, in 1799. The 1st Conférence Générale des Poids et Mesures in 1889 introduced a new international mass standard made of an alloy of platinum with 10 % iridium, with the “Kilogramme des Archives” serving as the reference. Since then, copies of this standard (likewise of platinum-iridium) have been used in the member states of the Metre Convention as national standards; Austria holds artefact No. 49. The definition of the kilogram has remained valid since that time. The new wording of 1901 was intended to distinguish the mass of a body explicitly from its weight.

Currently, different methods are being pursued to find a new definition of the kilogram based on fundamental constants, with the aim of replacing the kilogram prototype. The most promising method appears to be the watt balance. In addition, efforts are being made to redefine the mass unit through a very accurate determination of the Avogadro constant or by ion accumulation.


The 9th Conférence Générale des Poids et Mesures 1948 adopted the following definition of the ampere:

André Marie Ampère (1775 – 1836)

The ampere (A) is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed 1 metre apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newton per metre of length.


The force between two current-carrying conductors was first described by A. M. Ampère in 1820. This definition of the unit of current is purely hypothetical and unfeasible in practice (cf. conductors of infinite length); in effect it only served to fix the permeability of free space (magnetic constant) µ₀ as µ₀ = 4π · 10⁻⁷ N/A². The permeability of vacuum also fixes the permittivity of vacuum (electric constant) ε₀ via the speed of light c, where 1/ε₀ = c² · µ₀. The factor 4π derives from the surface area of a sphere of radius 1. More details on the practical realisation of electrical quantities can be found on our page “Electric quantities”.
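
The relation between the two constants can be evaluated directly; the following minimal sketch (variable names chosen freely) simply computes ε₀ from the fixed µ₀ and the exact speed of light:

```python
import math

# Electric constant from the fixed magnetic constant and the exact speed of
# light, using 1/eps0 = c^2 * mu0 as stated above.
mu0 = 4 * math.pi * 1e-7           # N/A^2, fixed by the classical ampere definition
c0 = 299_792_458                   # m/s, exact
eps0 = 1 / (mu0 * c0 ** 2)         # F/m
print(f"eps0 ≈ {eps0:.6e} F/m")    # about 8.854188e-12 F/m
```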

The most customary electrical units are the ampere (A) for electric current, the volt (V) for electric voltage (electrical potential difference) and the ohm (Ω) for electric resistance. At the end of the 19th century a practical system for these units was defined for the first time, based on the cgs system (c = centimetre, g = gram, s = second) proposed by Maxwell and Thomson (later Lord Kelvin). At that time the definition was based on a specified solution of silver nitrate in water: the so-called “international ampere” was defined as the constant current that deposits 0.001 118 000 grams of silver per second from a solution of silver nitrate in water onto a platinum cathode.

The big disadvantage of the cgs system was that it distinguished between electrostatic and electromagnetic units for the same quantity. In 1921 the 6th Conférence Générale des Poids et Mesures decided to extend the BIPM’s activities to the field of electrical quantities. As a consequence, a definition was sought that combined the existing mechanical units (meter, kilogram, second) with the electrical units, thus avoiding the problems of the cgs system and eventually leading to today’s ampere, which corresponds to approximately 1.000 15 international ampere.

It is sufficient to define a single electrical unit, since the volt and the ohm can be derived from the ampere through the unit of power, the watt (Ω = W/A²; V = W/A). Since the associated definition of the constant µ₀ affects the form of the fundamental equations of electrodynamics (Maxwell’s equations), the units of the Gaussian cgs system (not to be confused with the practical cgs units mentioned above) are still consistently used for microscopic and relativistic problems, alongside the SI units for electrical quantities, which are particularly suitable for practical technical applications.
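
To illustrate how the volt and the ohm follow from the watt and the ampere, here is a small sketch with purely illustrative numbers (a hypothetical 60 W load carrying 0.26 A):

```python
# Volt and ohm derived from watt and ampere:
#   P = V * I    ->  V = P / I       (volt = watt per ampere)
#   P = I^2 * R  ->  R = P / I**2    (ohm  = watt per ampere squared)
P = 60.0         # W, illustrative power
I = 0.26         # A, illustrative current
V = P / I        # voltage in volts
R = P / I ** 2   # resistance in ohms
print(f"V ≈ {V:.1f} V, R ≈ {R:.0f} Ω")   # ≈ 230.8 V and ≈ 888 Ω
```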

Today the units volt and ohm can be realised with high precision using quantum-mechanical effects (Josephson effect, quantum Hall effect). Intensive research is currently under way to realise the unit of electric current in a similar way.


The following definition of the kelvin was given by the 13th Conférence Générale des Poids et Mesures in 1967:

William Thomson, usually known as Lord Kelvin (1824 – 1907)

The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water.


Thus the temperature of the triple point of water is defined as exactly 273.16 K. More details on the practical realisation can be found on the page of the temperature division.

The first usable thermometers were already developed in the 17th century (Galileo thermometer). The term temperature is ascribed to the Greek physician Galen (ca. 170 AD), who distinguished eight degrees of “temperamentum” in his patients to characterise the effects of his medicines. In the 18th century several different temperature scales were developed, of which the Celsius and Fahrenheit scales are the best known. All these relative temperature scales are based on the definition of two fixed temperature points. For the zero point of his mercury thermometer, Fahrenheit used the temperature of a mixture of sal ammoniac and snow, corresponding roughly to the lowest temperatures in Danzig in the winter of 1708/1709; the numerical value 96 was assigned to the temperature of the human body. Much more progressive was the definition of Celsius, who used the freezing and boiling points of water. Although Celsius assigned 0° to the boiling point of water and 100° to the ice point, this was soon inverted into today’s form.

Furthermore, all these definitions of temperature depend on particular material properties that vary over a wide temperature range (cf. the thermal expansion coefficient of mercury). In the middle of the 19th century William Thomson (later Lord Kelvin) recognised that, according to the second law of thermodynamics, a universal temperature scale exists that is independent of the properties of particular thermometers. The temperature on this scale can, for example, be measured with a gas thermometer. The thermodynamic temperature defined in this way is always a positive quantity, and its zero point is likewise determined by the second law of thermodynamics. Because of this absolute zero point, only a single fixed point is necessary to define a temperature scale. It was not until 1948, however, that the 9th Conférence Générale des Poids et Mesures defined an absolute temperature scale for the first time.

Suitable fixed temperature points are the temperatures of the equilibrium states between the states of aggregation of pure substances (e.g. the freezing or melting temperatures of pure metals at a specified pressure).


The triple point is the state in which all three states of aggregation (solid, liquid and vapour phase) coexist in thermodynamic equilibrium. This state is reached only at one specific pressure and temperature and is therefore particularly suitable as a fixed point. The factor 1/273.16 guarantees that a temperature difference of 1 K corresponds to a temperature difference of 1 °C.

0 °C corresponds to 273.15 K. Today the practical realisation of the temperature scale is carried out using different fixed points in the range from 0.65 K to 1400 K, where the realisation of the fixed points at the lower temperatures is extremely time-consuming and laborious.
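
The relation between the Celsius and Kelvin scales can be written as a one-line conversion; the following sketch (function name chosen freely) illustrates the two fixed points mentioned above:

```python
# Celsius temperature t and thermodynamic temperature T: T/K = t/°C + 273.15
def celsius_to_kelvin(t_celsius: float) -> float:
    return t_celsius + 273.15

print(f"{celsius_to_kelvin(0.0):.2f} K")    # 273.15 K, the ice point
print(f"{celsius_to_kelvin(0.01):.2f} K")   # 273.16 K, the triple point of water
```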

The essential difference between the currently valid definition of the kelvin from 1967 and the earlier definition of 1948 is that the designation °K was dropped. Today, 100 K is read as “hundred kelvins” and not “hundred degrees kelvin”, in contrast to 100 °C, “hundred degrees Celsius”.


The 14th Conférence Générale des Poids et Mesures 1971 adopted the following definition of the mole:

The mole (mol) is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12; its symbol is “mol”.

Lorenzo Romano Amedeo Carlo Avogadro (1776 – 1856)

When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles.

It follows that the molar mass of carbon 12 is exactly 12 grams per mole,
M(12C) = 12 g/mol.

The first part of this definition refers to unbound atoms of carbon 12, at rest and in their ground state.

As a result of the different binding energy per atom, 0.012 kg of graphite at ambient temperature contains, for example, slightly more atoms than 0.012 kg of diamond (equivalence of energy and mass).

A mole of a substance contains the number of particles whose total mass in grams corresponds to the atomic (or molecular) mass of the substance. It thus defines a huge number (the Avogadro constant, formerly Loschmidt’s number), since the number of particles per mole is always the same. According to the present definition, a mole of a substance corresponds to roughly 6.022 × 10²³ particles. The numerical value was first estimated by the Austrian chemist Johann Josef Loschmidt in 1865 using the kinetic theory of gases.

Even before that, Amedeo Avogadro (1776–1856) had conjectured that equal volumes of gases (at the same temperature and pressure) contain the same number of particles.

Johann Josef Loschmidt (1821 – 1895)

The reciprocal of the numerical value of the Avogadro constant, expressed in grams, is 1/12 of the mass of an atom of the carbon isotope 12C. Since the nucleus of the 12C atom contains 12 nucleons (6 protons and 6 neutrons), this quantity defines the atomic mass unit (u).

The periodic table of the elements indicates the atomic masses as multiples of this atomic mass unit. At the same time, these relative atomic masses give the mass of one mole (the molar mass) of the substance in grams. If 12C were the only carbon isotope, the relative atomic mass of carbon would be exactly 12. In fact, the relative atomic mass indicated in the periodic table is slightly higher (12.011), because natural carbon always contains traces of the isotopes 13C and 14C; hence a mole of natural carbon weighs more than a mole of 12C. Although the atomic mass unit (u) is not an SI unit, its use together with SI units is accepted by the Conférence Générale des Poids et Mesures.
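
As a numerical illustration of the relation between the Avogadro constant, the molar mass of 12C and the atomic mass unit (the value of the Avogadro constant below is a measured value, quoted only for illustration):

```python
# Atomic mass unit u as 1/12 of the mass of a single 12C atom:
#   u = M(12C) / (12 * N_A)
N_A = 6.022_140_76e23         # 1/mol, measured value (illustrative)
M_C12 = 0.012                 # kg/mol, exact under the 1971 definition of the mole
u = M_C12 / (12 * N_A)        # kg
print(f"1 u ≈ {u:.4e} kg")    # about 1.6605e-27 kg
```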

The 16th Conférence Générale des Poids et Mesures 1979 adopted the following definition of the candela:

Relative spectral luminous efficiency of the human eye

The candela (cd) is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10¹² hertz and that has a radiant intensity in that direction of 1/683 watt per steradian.

It follows that a source emitting monochromatic radiation of this frequency with a radiant intensity of 1/683 watt per steradian in a given direction has a luminous intensity of exactly one candela in that direction.

More information on the practical realisation of photometric quantities can be found on our page "Photometry".

The system of photometric (light-measurement) terms essentially goes back to Johann Heinrich Lambert (1728–1777). Light is radiation in the visible range of the electromagnetic spectrum (i.e. wavelengths between 380 nm and 780 nm). Luminous intensity is the radiant power per solid angle in a given direction, weighted according to the impression of brightness perceived by the human eye. The unit of luminous intensity is the candela (cd). In 1924 the Commission Internationale de l’Eclairage (CIE) defined and tabulated the spectral luminous efficiency of the human eye for daylight vision.

From the middle of the 19th century many countries used different units of luminous intensity, of which the international candle (France, England and the USA) and the Hefner candle (Germany, Austria and Scandinavia) should be mentioned. The Hefner candle was realised with an oil lamp of the same name, the height of whose flame could be adjusted with the aid of a sight. The 9th Conférence Générale des Poids et Mesures in 1948 adopted the candela as the unit of luminous intensity for the first time: 1 cd is 1/60 of the luminous intensity emitted per square centimetre of a blackbody radiating at the temperature of freezing platinum (approx. 2042.5 K), measured perpendicular to the surface.

This definition was very complicated to realise and had several disadvantages. It required a cavity with a small inlet hole that absorbs any radiation entering it (a blackbody radiator). At the temperature of freezing platinum this cavity emits the radiation required by the definition. The development of silicon photodiodes as radiation detectors eventually led to the detector-based definition of the candela, which has been valid since 1979. A frequency of 540 × 10¹² hertz corresponds to a wavelength of 555 nm in standard air; at this wavelength the spectral sensitivity of the human eye for daylight vision (green light) is at its maximum. The advantage is that this definition links the photometric unit candela to the radiometric unit of radiant intensity (W/sr). In practice, photometric units are nothing but radiometric units that also take the properties of the human eye into account: the eye perceives green or yellow light as brighter than red or blue light of the same radiant intensity. In principle the factor 1/683 is arbitrary, but it was chosen so that the modern definition of the candela is close to the old one. Since one Hefner candle is roughly one candela, the modern unit of luminous intensity also corresponds approximately to the luminous intensity of a wax candle (candela: Latin for candle, stressed on the second syllable).
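
The weighting described above can be summarised in a short sketch; the function below is purely illustrative and takes the value of the spectral luminous efficiency V(λ) as a given number:

```python
# Photometric quantity = radiometric quantity weighted by the eye's spectral
# luminous efficiency V(lambda), with V = 1 at 555 nm (540 THz):
#   I_v [cd] = 683 lm/W * V(lambda) * I_e [W/sr]
def luminous_intensity(radiant_intensity: float, v_lambda: float) -> float:
    """radiant_intensity in W/sr, v_lambda dimensionless (0..1); result in cd."""
    return 683.0 * v_lambda * radiant_intensity

print(f"{luminous_intensity(1 / 683, 1.0):.3f} cd")  # 1.000 cd: the defining case
print(f"{luminous_intensity(1 / 683, 0.1):.3f} cd")  # 0.100 cd: same power, eye less sensitive
```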