Atomism — An Outline of Discoveries and Theories

by Gopi Krishna Vijaya

This article is sourced from:
https://www.natureinstitute.org/gopi-vijaya/atomism-an-outline-of-discoveries-and-theories

Download the full paper here:
https://www.natureinstitute.org/s/GKVijaya-Atomism-an-outline.pdf

Abstract: In this paper, Gopi traces the development of the ideas about the atom from the 17th century until the current day. He considers the changing models of the atom and the phenomena they are intended to explain. He shows how what one theoretically called “the atom” increased in complexity as scientists implemented different methods and made new discoveries about the properties of matter. This overview of the phenomena and the theories paves the way for a phenomenological understanding of this topic.

“The history of science is science itself.”
— Johann Wolfgang von Goethe (1749 – 1832)

“It often happens that the mind of a person who is learning a new science has to pass through all the
phases which the science itself has exhibited in its historical evolution.”
— Stanislao Cannizzaro (1826 – 1910)

A. THE MECHANICAL ATOM

In the 5th century BCE the Greek philosopher Democritus proposed that all matter consists of atoms. This idea was variously modified and challenged for centuries. Leaving the early development of atomism aside, I will begin this overview with the 17th and 18th centuries, when atomism experienced a resurgence and became more of a serious physical and material hypothesis. This hypothesis underwent several
transformations over the years, and what follows is an attempt to describe them without assuming any model or attempting to reduce the observations to one theory or another. The intention is to clarify the observations and the theories from a third person’s perspective — as if we were to take a walk inside the great minds who struggled with this problem, in the hope that it would lead to an improved objective
understanding of the search for the atom. The essential idea in the early days of atomic theory was to imagine all matter as consisting of “ultimate indivisible particles” or atoms, which were looked upon as being hard material particles moving in a void, or nothingness.
The descriptions from various scientists point to the image they had in their minds:

Pierre Gassendi (1592-1655): “These ultimate particles [of matter] can be called atoms or indivisibles — not that they are completely deprived of parts — but in the sense that there exists no force of nature that is capable of reducing them. The atoms are solid corpuscles and they comprise little bulk. When many atoms adjoin one another, there may form a body which has a bulk, or let us say, a mass of a greater magnitude.” 1

1 [Gassendi, Pierre, Opera Omnia Vol. I (1658), 130a-132b, quoted in: Fisher, Saul, Pierre Gassendi’s Philosophy and Science, (2005), p. 219.]

Robert Boyle (1627-1691): “Now, given that a single particle of matter can be diversified in so many ways simply through its shape and motion, think what a vast number of variations could be produced by the compositions and decompositions of myriads of single invisible corpuscles that may be contained and organised in one small body!” 2

Isaac Newton (1642-1727): “the least parts of bodies to be — all extended, and hard and impenetrable, and moveable, and endowed with their proper inertia.” 3

And as for qualities other than mass and motion:

Walter Charleton (1619-1707): “Colour, Sound, Odour, Sapor (taste), Heat, Cold, Humidity, Siccity (dryness), Asperity (roughness), Smoothness, Hardness, Softness, &c. are really nothing else but various MODIFICATIONS of the insensible particles of the First Matter, relative to the various Organs of the Senses.” 4

In other words, when we interact with the world around us, several properties are sensed by us. However, most of these properties are now attributed to “hard” or “impenetrable particles”, which are presumed to be in motion. So it is probably fair to represent the imagination of the atoms during this time in the following way:

Here the arrows represent movement, and the spheres represent miniature solid balls. In time, the process of interaction was replaced by the notion of a “force” being exerted, such that the capacity to exert a force became included in the properties that the atoms were supposed to have. The two types of forces, which describe two ways of interacting with the world, were attraction and repulsion. These attractive and repulsive forces were introduced into atomism by Newton’s admirer, the Jesuit priest Roger Boscovich:

Roger Boscovich (1711-1787): “that matter is unchangeable, and consists of points that are perfectly simple, indivisible, of no extent, & separated from one another; that each of these points has a property of inertia, & if the distance is diminished indefinitely, the force is repulsive, & in fact also increases indefinitely; whilst if the distance is increased, the force will be diminished, vanish, be changed to an
attractive force that first of all increases, then decreases, vanishes, is again turned into a repulsive force, & so on many times over; until at greater distances it finally becomes an attractive force that decreases approximately in the inverse ratio of the squares of the distances. This connection between the forces & distances, & their passing from positive to negative, or from repulsive to attractive, & conversely, I illustrate by the force with which the two ends of a spring strive to approach towards, or recede from, one another, according as they are pulled apart, or drawn together, by more than the natural amount.”5

2 [Boyle, Robert, The Grounds for and Excellence of the Corpuscular or Mechanical Philosophy (1674), Tr. Jonathan Bennett (2017), p. 3.]
3 [Newton, Isaac, The Mathematical Principles of Natural Philosophy (1687), Book 3, Rule III.]
4 [Charleton, Walter, Physiologia Epicuro-Gassendo-Charltoniana (1654), London: Printed by Tho. Newcomb for Thomas Heath, p. 191.]

Boscovich therefore suggests that matter is made of “points of no extent” that nevertheless “have inertia”! He also emphasized atoms as “centers of force” rather than as “hard balls.” As the atoms come together they repel, and as they move apart they attract, in accordance with Newton’s famous law of gravity. This element of “springiness” was added to the hard atom of Newton, adding another property to the atomic model. Boscovich’s ideas were very influential, and were carefully studied by many later philosophers, physicists and chemists, such as Michael Faraday, James Maxwell, Lord Kelvin, Joseph Priestley, Ampère, and even Friedrich Nietzsche. Boscovich has therefore been called “the creator of fundamental atomic physics as we understand it.” 6 In fact, this picture of atoms has been sustained to this day.
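Boscovich’s verbal description of the force lends itself to a toy numerical sketch. The following is purely illustrative: Boscovich supplied a curve, not a formula, and the functional form, crossover distance, and oscillation count below are invented for this illustration.

```python
import math

# Toy version of Boscovich's force-distance curve as described above:
# repulsion diverging as the distance shrinks to zero, alternating
# repulsion and attraction at intermediate distances, and inverse-square
# (Newtonian) attraction far away. The formula is invented for this
# illustration; Boscovich gave no such formula.

def boscovich_force(r, oscillations=3, crossover=5.0):
    """Toy force at distance r: positive = repulsive, negative = attractive."""
    if r <= 0:
        raise ValueError("distance must be positive")
    if r < crossover:
        # sign alternates with distance; magnitude diverges as r -> 0
        return math.cos(math.pi * oscillations * r / crossover) / r**2
    # Newtonian inverse-square attraction at large distances
    return -1.0 / r**2

assert boscovich_force(0.1) > 0    # strongly repulsive close in
assert boscovich_force(100.0) < 0  # weakly attractive far away
```

The qualitative shape, not the particular function, is the point: one continuous force law that is repulsive on contact, “springy” in between, and gravitational at a distance.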

5 [Boscovich, Roger Joseph, A Theory of Natural Philosophy (Venice: 1763) trans. J. M. Child (Chicago and London: Open Court Publishing Co., 1922), part I.]
6 [By Lancelot Law Whyte. https://sciencemeetsfaith.wordpress.com/2019/05/18/roger-joseph-boscovich-sjpolymath-philosopher-and-priest/]

B. THE CHEMICAL ATOM

John Dalton (1766-1844): “Matter, though divisible in an extreme degree, is nevertheless not infinitely divisible. That is, there must be some point beyond which we cannot go in the division of matter. The existence of these ultimate particles of matter can scarcely be doubted, though they are probably much too small ever to be exhibited by microscopic improvements. I have chosen the word atom to signify these
ultimate particles… Chemical analysis and synthesis go no farther than to the separation of particles one from another, and to their reunion. No creation or destruction of matter is within reach of chemical agency.” 7

As the nineteenth century approached, chemistry took on a greater importance, and atomism also underwent a revolution: it gave rise to chemical atomism. In other words, the chemical combining capacity (or the “chemical bond”) was added to the atoms in addition to the mechanical push and pull that had been emphasized until then. This change was brought about by John Dalton, who is called the “father of modern atomic theory.” The background experimental work was carried out by chemists like Antoine Lavoisier and Joseph Proust.

7 [Dalton’s Manuscript Notes, Royal Institution Lecture 18 (30 Jan 1810). In Ida Freund, The Study of Chemical Composition: An Account of its Method and Historical Development (1910), p. 288.]

To begin with, Lavoisier observed that the total weight of reacting substances equals the total weight of the resulting substances. In other words, the weight of the substances is conserved in chemical reactions. Before Proust, it was thought that a certain weight of a substance, say iron, could combine with any other weight of another substance, say sulphur, when heated together. So it was assumed that if you had a lot of
iron, the resulting compound would be iron-rich, and if you had a lot of sulphur, the resultant would be sulphur-rich. However, Proust showed that iron could combine with only certain weights of sulphur and oxygen, and not arbitrary weights. For example, 56 grams of iron could, when heated in air, result in an iron compound that had either 16 grams of oxygen (black powder) or 24 grams of oxygen (red powder), but nothing in between.
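Proust’s observation can be stated as simple arithmetic. The sketch below uses only the weights quoted above (56 grams of iron with either 16 or 24 grams of oxygen); the variable names are illustrative.

```python
# Proust's fixed proportions with the weights quoted in the text: 56 g
# of iron takes up either 16 g or 24 g of oxygen, nothing in between.

IRON_G = 56
OXYGEN_G = [16, 24]  # black powder, red powder

# Oxygen per gram of iron is fixed for each compound, not arbitrary:
proportions = [o / IRON_G for o in OXYGEN_G]

# The two permitted oxygen weights stand in a small whole-number ratio,
# 24:16 = 3:2 -- the pattern Dalton's atoms would later explain.
ratio = OXYGEN_G[1] / OXYGEN_G[0]
print(ratio)  # 1.5
```

The small whole-number ratio between the two permitted oxygen weights is exactly the kind of regularity that an atomic hypothesis explains effortlessly.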

Similarly, 1 gram of hydrogen would always combine with 8 grams of oxygen to give 9 grams of water. This form of ratio, 1:8 or in general m:n, was seen to hold true for all observable chemical reactions. Dalton declared that these different weight ratios arose because of atoms, i.e. a hydrogen atom weighed 1 unit while an oxygen atom weighed 8 units, so the combination of H and O resulted in the combined atom
(later to be called a molecule) of HO that weighed 9 units. In other words, he transposed the weights that were combining chemically onto the atoms that he imagined made up the specific substance. As a result, the types of atoms were multiplied to match the several distinct substances or elements, i.e. each element had its own unique type of atom. All chemical combinations resulted from this process, according to him.8
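Dalton’s transposition of combining weights onto atoms amounts to simple bookkeeping, which a few lines of Python can make explicit. The weight table below follows the text (H = 1, O = 8), not modern atomic weights, and the helper names are illustrative.

```python
# Dalton-style bookkeeping: the observed combining weights (1 g hydrogen
# to 8 g oxygen) are transposed onto the atoms themselves.

atomic_weight = {"H": 1, "O": 8}  # Dalton-era relative weights, per the text

def compound_weight(atoms):
    """Weight of a compound 'atom' given as a list of element symbols."""
    return sum(atomic_weight[symbol] for symbol in atoms)

water_dalton = compound_weight(["H", "O"])  # Dalton's formula HO for water
print(water_dalton)  # 9 -- matching 1 g H + 8 g O -> 9 g water
```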

Dalton represented atoms, and their combinations, through different symbols. He attributed a unique symbol to common substances like hydrogen, oxygen, carbon (carbone), and nitrogen (azote). A typical example is his element Number 21, which has one hydrogen atom and one oxygen atom combining to form one water molecule. (Dalton attributed the weight 7 to oxygen instead of the actual 8, due to the low measurement accuracy of the time.)

Dalton’s abstract symbols are hence transformed into a spatial arrangement as soon as “combined atoms” are involved. The abstract symbols still have a relation to different substances — in that sense a “chemical thinking” is prevalent. But once the chemical reactions have taken place and the resultant of, say, “21” combining with “22” is observed, the notation shifts this “chemical thinking” into a more or less mere “spatial thinking”: the combination of spatially extended objects. Chemistry is expressed through spatial symbols, and the symbols are rearranged spatially to reflect combinations; this entire arrangement can be seen in several of the examples included in the excerpt below from his book. 9 :

8 [An interesting side-note is that Dalton was also color-blind, and he was one of the first to take a lot of scientific interest in color-blindness.]
9 [Dalton, John, A New System of Chemical Philosophy (1808), London: Bickerstaff, Strand. p. 218.]

Based on Dalton’s approach, we can represent the atoms or their combination as shown above. Since every atom is no longer of the same substance, we can show them through different colors instead of abstract symbols. These atoms of the chemical elements and combinations of atoms (compounds) are now seen as the elementary constituents of matter. There is another significant change in the concept of the
atom: there are now multiple types of atoms, each one distinguished by their different weights and differing chemical affinities with other atoms. According to Dalton, all atoms of a given substance, whether simple or compound, are alike in shape, weight and any other particular feature.

The addition of chemical affinity to the shape, weight, movement and springiness of atoms had some consequences. For one thing, Michael Faraday had shown that chemical change was closely tied to electricity. He subjected a piece of filter paper dipped in different solutions to an electrical current and observed that the chemicals deposited in the paper were identical to those that occur in a chemical reaction. For example, potassium iodide paper develops a discoloration due to iodine when current is passed through it. The reverse also held true: chemical changes were closely tied to the generation of electricity. In a battery, the chemical changes that occurred in the metal plates were directly responsible for the production of electricity. Thus, chemical change and electricity were shown to be closely linked. More importantly, the electrical decomposition of matter (electrolysis) itself was the method selected to separate out the various chemical elements. When a substance could no longer be decomposed, it was called a chemical element. Each of these chemical elements had its own distinct atoms (or molecules), according to Dalton.

Faraday’s mentor, Humphry Davy, was famous for electrolyzing hundreds of substances and discovering seven new elements. The connection between chemical affinity and electricity meant that the polarity that occurred in chemical reactions, viz. acidic and basic natures, had to now be transferred over to the atoms via positive and negative charges. More importantly, it was not possible to clearly fit in “electric charge” as simply a movement of the atom. Hence, it had to be added on as a new property, with two poles. The attraction of positive and negative charges was made the basis of atomic combinations:

Jöns Jacob Berzelius (1779-1848): “… every chemical combination is wholly and solely dependent on two opposing forces, positive and negative electricity, and every chemical compound must be composed of two parts combined by the agency of their electrochemical reaction, since there is no third force. Hence it follows that every compound body, whatever the number of its constituents, can be divided into two
parts, one of which is positively and the other negatively electrical.” 10

This was Berzelius’ theory of “dualism.” His confident assertion was that not only was a chemical substance made up of positive and negative electricity, but there was nothing else in it. He was to be proved wrong later on when neutral components were attributed to the atom.
During the electrolysis of water, when water subjected to electricity gave rise to hydrogen and oxygen, an interesting fact was observed regarding the volumes of these gases as they were collected, shown in the figure below.

Joseph-Louis Gay-Lussac (1778-1850): “Suspecting, from the exact (volume) ratio of 100 of oxygen to 200 of hydrogen, which M. Humboldt and I had determined for the proportions of water, that other gases might also combine in simple ratios…”11

10 [Berzelius, Jöns Jacob, Essai sur la théorie des proportions chimiques (1819), 98. Quoted by Henry M. Leicester in article on Berzelius in Charles Coulston Gillespie (editor), Dictionary of Scientific Biography (1981), Vol. 2, p. 94.]
11 [Gay-Lussac, Joseph Louis, Memoir on the Combination of Gaseous Substances with Each Other, Mémoires de la Société d’Arcueil, 2, 207 (1809). From Henry A. Boorse and Lloyd Motz, eds., The World of the Atom, vol. 1 (New York: Basic Books, 1966) (translation: Alembic Club Reprint No. 4).]

The volumes of hydrogen and oxygen were measured in their gaseous states. In order to have an adequate comparison, the corresponding volume of liquid water that gave rise to those gas volumes was heated to obtain the volume of the water vapor, and this was seen to be the same volume as that of the hydrogen:

Amedeo Avogadro (1776-1856): “For instance, the volume of water in the gaseous state is, as M. Gay-Lussac has shown, twice as great as the volume of oxygen which enters into it, or, what comes to the same thing, equal to that of the hydrogen instead of being equal to that of the oxygen.”12

Dalton had considered hydrogen and oxygen atoms to combine to give HO. His focus was entirely on the weights of the atoms — and therefore, the ratio of weights was 1:8 (current accuracy) while the ratio of atoms, according to him, was 1:1. Gay-Lussac found that 2 volumes of hydrogen combined with 1 volume of oxygen to give 2 volumes of water vapor. The idea that hydrogen and oxygen would mix as 1:1 became doubtful, since the combining volumes seemed to show a ratio of 2:1. What would happen if, just as Dalton attributed the weight ratio to the weight of the atoms, we were to attribute the volume ratio to the number of atoms? This would require a ratio of 2:1 in atoms of hydrogen and oxygen combining to give water. But this complicates the whole situation — which is right, 1:1 or 2:1? The way the weights of
gases combined seemed to be very different from the way their respective volumes combined.

To see why this is tricky, let us start with the situation that Dalton expected in the atomic combination:

Theoretically, everything seems reasonable so far.

What happens if we now expect the respective volumes of hydrogen and oxygen to combine? To start with, every volume of H or O should be replaced by an atom of H or O. And therefore 2 volumes reacting with 1 volume ought to become 2 atoms reacting with 1 atom. This leads us to this:

However, there is something wrong in the resulting single molecule of water in the second line (“Atomic combination 1”). The number of H and O atoms in the constituent gases and the resulting water should be

12 [Avogadro, Amedeo, Essay on a Manner of Determining the Relative Masses of the Elementary Molecules of Bodies, and the Proportions in Which They Enter into These Compounds, Journal de Physique 73, 58-76 (1811) [Alembic Club Reprint No. 4]]

balanced, so we have no option but to obtain one molecule of H2O by combining 2H and 1O. Experimentally, however, if two volumes of hydrogen combine with one volume of oxygen, this should give rise to two volumes of water vapor according to Gay-Lussac, and not one volume of H2O as shown in the “atomic combination 1” formula. This discrepancy was what Avogadro pointed out. Dalton vehemently opposed Gay-Lussac’s proposed relation between volumes and atoms, and maintained that two volumes combining with one volume had nothing to do with two atoms combining with one: Dalton believed that a certain number of atoms was not connected to a certain volume. In either case, the conundrum is still not resolved at this point, since the experimental volume ratios are 2:1:2 while the atomic ratios are 2:1:1.

One other option possible to balance the atoms according to volumes was to try something like this:

This would preserve the 2:1:2 ratio of both atoms and volumes. But this requires the oxygen to split into two parts, and for each half to combine with a volume of hydrogen. This was not permissible by the very definition of atomism (a + tomos = indivisible).

Another option was to double the atoms in oxygen and hydrogen to make them into diatomic molecules, so that the above equation would look something like this:

There are no more half-atoms and the ratio 2:1:2 is obtained once more. In other words, two volumes combining with one volume giving two volumes of the resultant can be made consistent with two molecules (diatomic compound atom H2) combining with one molecule (diatomic compound atom O2) giving two molecules (H2O water). This would demand that the O2 combination split, and combine with two volumes of hydrogen to give two volumes of water vapor, as desired. This was how the H2O formula for water was born.
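The resolution just described can be checked mechanically: with diatomic H2 and O2, the reaction 2 H2 + O2 → 2 H2O balances the atom counts and reproduces Gay-Lussac’s observed 2:1:2 volume ratio, since under Avogadro’s hypothesis equal volumes mean equal molecule counts. A minimal sketch, with illustrative data structures:

```python
# Checking the diatomic resolution: 2 H2 + O2 -> 2 H2O balances atoms
# and gives the observed 2:1:2 volume ratio (volumes correspond to
# molecule counts under Avogadro's hypothesis).

from collections import Counter

def atom_count(species):
    """Total atoms of each element in a list of (formula, molecule_count)."""
    total = Counter()
    for formula, n in species:
        for element, subscript in formula.items():
            total[element] += subscript * n
    return total

reactants = [({"H": 2}, 2), ({"O": 2}, 1)]  # 2 H2 + 1 O2
products = [({"H": 2, "O": 1}, 2)]          # 2 H2O

assert atom_count(reactants) == atom_count(products)  # 4 H and 2 O each side

volume_ratio = [n for _, n in reactants] + [n for _, n in products]
print(volume_ratio)  # [2, 1, 2]
```

Note that neither Dalton’s HO scheme nor the half-atom scheme passes both checks at once; only the diatomic molecules satisfy atom balance and the 2:1:2 volumes together.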

This direct relation between volume and molecules was the method preferred by Avogadro and his countryman, Stanislao Cannizzaro. A direct relation between volume and molecules in a gas meant that a specific volume of any gas at a specific pressure and temperature theoretically should contain the same number of molecules. If 1 volume contained N molecules of gas, then 2 volumes must contain 2N
molecules of gas, and so on. Physicists and chemists decided to standardize this number by fixing a standard volume. The number N of molecules expected in this volume is called Avogadro’s number. A gas was visualized very literally as many small atoms or molecules moving around and impinging on one another, and all gases were considered to have the same number of atoms or molecules if they occupied the same volume under standardized conditions. Observationally as well, gases did have remarkable homogeneity, which seemed to support this idea. Hence, the entire explanation for the decoupling of mass and volume of gases in their interactions was expressed through the rearrangements of symbolic formulae and the introduction of a constant number of molecules for a gas under similar conditions.
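Avogadro’s hypothesis can be illustrated numerically. The sketch below uses the modern value of Avogadro’s number and the standard molar volume of an ideal gas (roughly 22.4 litres at 0 °C and 1 atm), neither of which appears in the original passage; the point is only that the molecule count scales with volume, independent of which gas it is.

```python
# Equal volumes, equal counts: the molecule number depends on volume
# alone, not on the identity of the gas. Constants are modern values,
# added here purely for illustration.

AVOGADRO = 6.022e23        # molecules per mole
MOLAR_VOLUME_STP = 22.4    # litres per mole of ideal gas at 0 C, 1 atm

def molecules_in(volume_litres):
    """Molecules in a gas volume at standard conditions, for any gas."""
    return AVOGADRO * volume_litres / MOLAR_VOLUME_STP

n1 = molecules_in(1.0)
n2 = molecules_in(2.0)
print(n2 / n1)  # 2.0 -- doubling the volume doubles the count
```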

However, diatomic molecules led to trouble according to Berzelius, since there was no way that two like charges could combine to give a diatomic compound molecule like O2! According to his dual theory, only electropositive and electronegative substances could result in a compound. However, here we have two identical oxygen atoms combining to give a diatomic oxygen molecule. Would it make sense to say that two identical atoms can have opposite charges? Or is it possible that a neutral charge is also able to create the compound molecule like O2 and H2? What is to be the “charge” on the oxygen atom? These were open questions.

Regardless, this diatomic system was accepted, and Dalton and Berzelius were superseded by Avogadro and Cannizzaro. The problem of how like atoms bonded in the diatomic system was left aside until the 20th century. Beyond that, there was also the problem of multiple combinations of the elements. For example, three different types of combinations of nitrogen and oxygen existed, with volume ratios 2:1,
1:1 and 1:2. How they could be combined in such different ways (called valency) was another mystery, especially since the charge affinities were expected to be fixed for each element. A frustrated scientist remarked:

Victor Meyer (1848-1897): “The sheer volume of work and the large number of advantages gained have never been able to suppress the awareness that we are currently completely unclear about the basic principle of our current views and the nature of what we call valence or affinity.” 13

Valency would not be addressed again until the electrical nature of matter had led to another revolution in the picture of the atom. Our summary table can be updated in this way:

13 [Meyer, Viktor, Zur Valenz und Verbindungsfähigkeit des Kohlenstoffs, Justus Liebigs Ann. Chem. 1876, v. 180, pp. 192–206.]

C. THE ELECTRIC ATOM

In the second half of the nineteenth century, researchers were evacuating glass tubes and creating a high electric potential between their extreme ends via metal electrodes inserted into the tube. At first there was an arc between the two electrodes; as more and more gas was evacuated, the entire tube would glow like a neon light, and then gradually grow dark from the cathode (negative electrode) onwards until the entire tube interior was darkened. An eerie green glow appeared exactly opposite the cathode, which was seen to be caused by a beam of something radiating straight out of the cathode (left to right):

Further studies of “cathode rays” were carried out:

Joseph John Thomson (1856-1940): “As the cathode rays carry a charge of negative electricity, are deflected by an electrostatic force as if they were negatively electrified, and are acted on by a magnetic force in just the way in which this force would act on a negatively electrified body moving along the path of these rays, I can see no escape from the conclusion that they are charges of negative electricity carried
by particles of matter.”14

14 [Thomson, J. J., Cathode Rays, Philosophical Magazine, 44, 293 (1897).]

Based on the amount of deviation that a magnetic field would cause this radiation, a charge-to-mass ratio was calculated for these rays, suggesting that they not only had a charge, but also mass. This is why Thomson speaks of “electricity carried by particles of matter”, or “charge carriers” as they are more conventionally known. Not only did these rays appear to be negatively charged, they also appeared to be
independent of the substance used as the gas or as the metal. In other words, this negatively charged
electricity was seen to be a general property of matter and therefore, of atoms:

J. J. Thomson: “Atoms are not indivisible, for negatively electrified particles can be torn from them by the action of electrical forces”. … “Thus on this view we have in the cathode rays matter in a new state, a state in which the subdivision of matter is carried very much further than in the ordinary gaseous state: a state in which all matter — that is, matter derived from different sources such as hydrogen, oxygen, &c.
— is of one and the same kind; this matter being the substance from which all the chemical elements are built up.” (emphasis GKV).15
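The charge-to-mass determination mentioned above can be sketched from the standard crossed-field argument: tune an electric field E against a magnetic field B until the beam is undeflected, giving the beam speed v = E/B; then let B alone bend the beam into a circle of radius r, giving e/m = E/(B²·r). The numerical values below are invented for illustration and are not Thomson’s data.

```python
# Hedged sketch of the crossed-field method for the charge-to-mass
# ratio of cathode rays. E, B and r are illustrative numbers only.

def charge_to_mass(E, B, r):
    """e/m from field strengths E (V/m), B (T) and bending radius r (m)."""
    v = E / B           # speed from the balanced-force condition qE = qvB
    return v / (B * r)  # e/m from the magnetic bending radius r = mv/(qB)

# Values chosen so the result lands near the electron's e/m (~1.76e11 C/kg):
em = charge_to_mass(E=2.0e4, B=1.0e-3, r=0.114)
print(em)  # ~1.75e11
```

The striking experimental point, as the text notes, was that this ratio came out the same regardless of the gas or electrode metal used.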

This brought the whole atomic model into question once again. An atom that was divisible — a clear contradiction in terms — was taken seriously and allowed to stand, unlike at the time of Dalton. In addition, when chemical affinities were being attributed to the different atoms, some of them were deemed electropositive and some electronegative. A clear charge polarity was always created between two different elements, with positive and negative being symmetrical and equally valid. The discovery of the negative electrical nature of matter and Thomson’s proposed structure tipped that equilibrium to one side, by making all substances and atoms consist of negative charges in some form or another. But experience shows that we do not get an electric shock from every substance, as regular matter is electrically neutral as a whole. How was charge neutrality to be maintained if every atom was supposed to contain some negative charge?

By assuming there is positive charge in every atom as well, of course. In the words of J. J. Thomson, who was at the center of this development: “the atoms of the chemical elements [which] are built up of large numbers of negatively electrified corpuscles revolving around the centre of a sphere filled with uniform positive electrification.”16

The image of the atom, divisible for the first time, thereby underwent this transformation:

It is important to pause here to consider how the atomic model has crossed an important boundary at this point. Prior to this structure, all the atomic models had properties that were exact equivalents of our daily experience. We encounter heavy, hard, springy, moving objects easily. We encounter charged or chemically reactive substances that are detected by touch or even by smell and taste. In addition, based
on the concepts of attraction and repulsion, such as in the case of magnets or charged pieces of cloth, we can detect opposite poles of both electricity and magnetism. In the case of magnets, the two poles occur together, with north and south poles detectable “back-to-back” even when we break a magnet into small pieces. In the case of electricity, a positive charge and a negative charge do not coexist passively in the same location; they annihilate each other unless the two poles are kept separated. For example, when we rub a balloon with a cloth, the balloon and the cloth are electrified and are said to pick up opposite charges, and when brought back into contact they lose the charge (“discharge”) fairly quickly. Hence, unlike magnetic poles, which always occur together in the same substance, positive and negative electricity
appear on distinct substances, and any attempt to bring them into close proximity results in the charges annihilating each other, or “discharging.”

15 [Thomson, J. J., Recollections and Reflections (1936), p. 338.]
16 [Thomson, J. J., The magnetic properties of systems of corpuscles describing circular orbits, Phil. Mag. 36, (1903), p. 673.]

Yet, in the model shown above, we have a picture that we never come across in our experience at all. Negative charges cannot remain embedded in a positive charge, nor in such close proximity to a large positive charge, as they ought to neutralize immediately. However, the observations showed that negative charges could be extracted from different kinds of matter:

J. J. Thomson: “The [electric] corpuscle appears to form a part of all kinds of matter under the most diverse conditions; it seems natural therefore to regard it as one of the bricks of which atoms are built up.”17

Therefore, even if the presence of negative charge in close proximity to a positive charge was unphysical, it was nevertheless retained as an essential “brick.” This promoted a form of thinking that visualized the charge as being contained inside the atom. In summary, we can state that the atom became divisible and gained both kinds of charges as constituent “parts”, and our summary table now reads:

D. THE NUCLEAR ATOM

The picture changed again in a few years. The phenomenon of radioactivity suggested a lot more inner structure to matter, or by extension, to the atoms of matter. In radioactivity, elements like radium, thorium and uranium disintegrated, or decayed, while emitting various forms of radiation. This property seemed very different from the known behaviour of matter:

Marie Skłodowska Curie (1867-1934): “I was struck by the fact that the activity of uranium and thorium compounds appears to be an atomic property of the element uranium and of the element thorium. The activity is not destroyed by either physical changes of state or chemical transformation.”18

17 [Thomson, J. J., Carriers of negative electricity, Nobel Lecture in Physics, December 11, 1906.]
18 [Curie, M. S., Radium and the New Concepts in Chemistry, Nobel Lecture in Chemistry, December 11, 1911.]

In order to differentiate these radiations, they were passed through an electric field. This led to the discovery of positively charged, negatively charged, and neutral radiation components, and these were later called α, β, and γ radiations respectively.

From the attractive and repulsive electrical interactions of these charged radiations, it was determined that α radiation was positive, β radiation was negative, while γ radiation was uncharged. β radiation was later definitively identified as being identical to cathode rays i.e. negatively charged electrons. It was also possible to “sort” them by placing the emanations in a perpendicular magnetic field, and observing the deviations:

By a study of the reduced amount of deflection as well as the charge connected with it, positive radiation (α) was calculated to be charged twice as much as β radiation, while being much heavier (about 7000 times) than β. γ radiation was, to all appearance, massless, just like light. These radiations also had different levels of penetrating power: α radiation was stopped by a sheet of paper, β radiation penetrated
paper but was stopped by aluminum foil. γ radiation, on the other hand, took several inches of lead to stop.
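The rough figures quoted above (“charged twice as much”, “about 7000 times” heavier) can be cross-checked against modern constants. The values below are modern measurements, added here only as a hedged sanity check of the article’s round numbers.

```python
# Cross-check of the article's rough alpha-versus-beta comparison
# using modern constants (these numbers are not part of the original text).

ELECTRON_MASS = 9.109e-31   # kg, the beta particle (electron)
ALPHA_MASS = 6.645e-27      # kg, the alpha particle (helium nucleus)
E_CHARGE = 1.602e-19        # C, the elementary charge

alpha_to_beta_mass = ALPHA_MASS / ELECTRON_MASS
alpha_charge = 2 * E_CHARGE   # alpha carries +2e; beta carries -e

# The mass ratio comes out near 7300, consistent with the article's
# "about 7000 times" heavier.
print(round(alpha_to_beta_mass))
```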

Now the connection of mass and charge was used in two ways: as a conceptualization of the atom itself and as a tool to investigate what was now seen as the “constituents” of atoms. In order to determine the structure of the atom, experiments were done in which α-emanations (the heaviest) were directed in a beam towards a thin gold foil, about a tenth of the thickness of a human hair. The experimenters had expected the gold
foil to disperse the incoming radiation in different directions. However, the results were quite astonishing. On the one hand, the way the heavy α radiation got reflected was completely unexpected, as noticed by J. J. Thomson’s favorite student, Rutherford:

Ernest Rutherford (1871-1937): “It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you.”19

On the other hand, this “bouncing” happened for only a very small fraction of the beam of positively charged α-radiation — more than 99% of the beam moved through the gold foil as if there was nothing there. In the apparatus, a zinc sulphide screen picked up bright flashes that helped count these numbers:
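A rough back-of-the-envelope estimate shows why only a tiny fraction bounced back: the chance of a close encounter with a nucleus is roughly the nuclear cross-section per atom times the number of atoms per unit area of foil. The foil thickness and effective nuclear radius below are assumed illustrative values, not Geiger and Marsden's actual data:

```python
# Order-of-magnitude sketch: fraction of an alpha beam that passes close
# enough to a gold nucleus to be deflected through a large angle.
import math

N_DENSITY = 5.9e28        # gold atoms per cubic meter (modern value)
FOIL_THICKNESS = 4e-7     # ~0.4 micrometer foil (assumed typical value)
R_NUCLEUS = 3e-14         # assumed "close approach" radius, m

sigma = math.pi * R_NUCLEUS**2                 # target area per nucleus
fraction = N_DENSITY * FOIL_THICKNESS * sigma  # fraction strongly deflected

print(f"fraction scattered through large angles: ~1 in {1 / fraction:,.0f}")
# Rutherford's group reported very small back-scattering rates; a crude
# geometric estimate like this lands within an order of magnitude of them.
```

The point of the estimate is that almost the entire foil is, for an α particle, empty: only an area of about one part in ten thousand is “hard.”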

Since the gold foil repelled the heavier and positively charged α radiation, and since like charges repel, it encouraged the division of the atom into a heavy concentrated positive core and a lighter portion of electrons. This led to the following rearrangement of the picture of the atom:

The issue of positive and negative charges annihilating when in close proximity was ignored. The relative size of the central positive part was cause for continued astonishment. In the words of Rutherford, the size of the central positive part in relation to the rest of the volume was like that of a “fly in a cathedral.” The negative charges around the positive core were imagined like mini-atoms — these “electrons” fly around
the center, and have a mass. Mass and charge were hence combined in the electron. The positive core was called the “nucleus”, giving rise to the next level of complexity in the atomic model: the nuclear model.

From Rutherford’s notebook: Positive core surrounded by electrons (dots) 20

19 [Quoted in Abraham Pais, Inward Bound (1986), p. 189, from E. N. da C. Andrade, Rutherford and the nature of the atom, (1964) p. 111.]

When all the positive charge of the atom is assumed to be concentrated in the center, we have yet another phenomenon that contradicts everyday experience. Normally, like charges always repel; in fact, that was how the nucleus was determined to be positively charged in the first place, since positively charged α radiation was repelled by a small portion of the gold foil. But inside the model of the nuclear atom, not only did the positive charges not repel each other, they concentrated themselves extremely tightly in a minuscule region. So two of the fundamental observations regarding electric charges were turned on their heads: positive and negative charges were made to stay apart (electrons hovering around the nucleus), while like charges, such as the positive charges in the nucleus, stayed together. The latter reversal was mentioned, almost as an aside, by Rutherford:

“Practically the whole charge and mass of the atom are concentrated at the centre, and are probably confined within a sphere of radius not greater than 10⁻¹² cm. No doubt the positively charged centre of the atom is a complicated system in movement, consisting in part of charged helium and hydrogen atoms. It would appear as if the positively charged atoms of matter attract one another at very small distances for otherwise it is difficult to see how the component parts at the centre are held together.” (italics by GKV)
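Rutherford's italicized aside can be made quantitative with a small sketch: by Coulomb's law alone, two protons held a nuclear distance apart repel with an energy millions of times larger than chemical bond energies. The separation used below is an assumed illustrative value:

```python
# Coulomb repulsion energy between two protons at nuclear separation,
# using modern constants; the distance is an illustrative assumption.
K_COULOMB = 8.988e9   # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19  # elementary charge, C
JOULES_PER_EV = 1.602e-19

r = 2e-15  # assumed separation of two protons in a nucleus, m
energy_MeV = K_COULOMB * E_CHARGE**2 / r / JOULES_PER_EV / 1e6
print(f"Coulomb repulsion at {r} m: ~{energy_MeV:.2f} MeV")
# ~0.7 MeV: compare this with typical chemical bond energies of a few eV.
```

Something has to overpower this repulsion at short range, which is exactly the short-distance attraction Rutherford found himself forced to postulate.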

It was not radioactivity alone that was used to detect the positive charges in matter, since the gas discharge tubes also showed evidence of them. Soon after the detection of negatively charged cathode rays (later called electrons) going from cathode towards the anode, positively charged anode rays or canal rays were discovered with the same setup by making small holes (or “canals”) in the cathode such that radiation could escape in the direction opposite to cathode rays:

While handling the two oppositely directed beams, which looked like beams of radiation or light, the following suggestion was offered:

Wilhelm Wien (1864-1928): “Since positive and negative particles appear in both positive and negative light [beams], it should be advisable to give up the names cathode rays, canal rays and positive light [beams] and only speak of positive and negative particles.”21

Thus, the notion of positive and negative “particles” behaving like “mini-atoms” within the structure of the atom was becoming well-embedded in the minds of researchers. The positively charged beams, unlike the negatively charged ones, were different for different materials. By bombarding light gases like hydrogen with α radiation, Rutherford was able to determine that the resulting lightest positive beam had the same mass as the hydrogen atom; he called it the “H-particle.” Since he used α radiation to bombard B, F, Na, Al, P, and N and saw H-particles released from all of them, the conclusion was that, just like the electron, the H-particles or protons must be constituents of all elements.

20 [Shown in https://arxiv.org/pdf/1202.0954 (Helge Kragh, Rutherford, Radioactivity, and the Atomic Nucleus, 2012)]
21 [Wien, W., Untersuchungen über die elektrische Entladung in verdünnten Gasen, Ann. Physik 65, (1898) p. 440.]

But even as radioactivity supplied this support for atomic structure, it simultaneously pulled the rug out from under another pillar of the theoretical system, viz. the constancy of atoms. Radioactive processes showed one element arising from another, in effect undergoing transmutation. To the modern scientist, who did not believe in the alchemists’ conversion of elements, this was naturally shocking. When thorium was converted to argon in a radioactive decay, this is what Rutherford told his colleague Frederick Soddy:

“For Mike’s sake, Soddy, don’t call it transmutation. They’ll have our heads off as alchemists!”22

It was not only the constancy of atoms that radioactivity challenged. One of the key pillars of the atomic model since the time of Dalton — namely the constancy of atomic weight — also tottered with further study of radioactivity. Every element was supposed to have a specific weight unique to it, but at the beginning of the 20th century it was found that there existed elements which were chemically identical and yet had different weights! This fact flatly contradicted Dalton’s original premise of each atom having a specific weight. Scientists had to come up with a method of accounting for the weight discrepancy in a way that did not alter the element’s chemical activity. This was the conclusion for the element lead:

Frederick Soddy (1877-1956): “Finally, it may be predicted that all the end-products, probably six in number of the three series, with calculated atomic weights varying from 210 to 206, should be nonseparable from lead; that is, should be ‘lead,’ the element that appears in the International List with the atomic weight 207.1.”23

Such substances, chemically identical but with different weights, were called isotopes. In terms of the atomic model, neither the number of electrons nor the number of positively charged “mini-atoms” (protons) could be altered. So scientists were forced to add some neutral weights to the atomic model as a whole. Even though Rutherford had already suggested this in 1920, a neutral uncharged beam was elicited from different substances like H, He, N, Ar, Li, Be and C only in 1932, because:

James Chadwick (1891-1974): “The neutron was difficult to catch. Other particles can be seen and their actions watched, but the neutron we could not see and it left no traces of its passage.”24

22 [Howorth, Muriel, The Life Story of Frederick Soddy, New World Publications, (1958), p. 83.]
23 [Soddy, Frederick, The Radio-elements and the Periodic Law, Chemical News 107, (1913), pp. 97-99.]
24 [Kuhn Jr., Ferdinand, Chadwick calls Neutron ‘Difficult Catch’; His Find Hailed as Aid in Study of Atom, New York Times (29 Feb 1932).]

The way Chadwick inferred the presence of neutrons was as follows: following the work of other researchers, he used polonium as a source of α radiation. When this radiation fell on a thin sheet of beryllium, it ejected a mysterious neutral radiation. This neutral radiation in turn ejected protons when it hit a sheet of paraffin wax:

He argued that something neutral, yet heavy enough to eject protons, could not be anything other than another “mini-atom” of approximately the same weight: the neutron. In any case, Rutherford was vindicated, and his part in unravelling the structure of the atom showed in his confident words: “I have broken the machine (the atom) and touched the ghost of matter.” 25

Radioactive decay also pointed to the possibility of matter disappearing almost into nothingness, with an output of streams of radiation. The hard and fast atom which was the bedrock of earlier atomic theories was found to be impermanent, disintegrating conceptually as well. At this stage, the picture of the atom thus consisted of three kinds of mini-atoms: protons, neutrons and electrons, charged positive, neutral and negative respectively. The indivisibility of the atom had been abandoned in stages, based on electrical, magnetic and mechanical interactions. Finally, the atom itself began to transform into radiation. Another field of study had opened up in the meantime which was thought to shed additional light on the structure of the atom, particularly on the arrangement of its “constituents.” This field was spectroscopy.

E. THE SPECTROSCOPIC ATOM

25 [https://www.nzedge.com/legends/ernest-rutherford/]

The 19th century opened up the subject of spectroscopy. Flames of heated substances, which often gave brilliant colors in the case of metallic salts, showed a wealth of lines and bands when observed through a prism or a prism-like object. This pattern, called the spectrum of the substance, turned out to be unique for each element. One experimental fact was thought to hint at the reason for the existence of this pattern: when light was shone on certain metals or other substances, negative electricity was found to be released. It was hence concluded that the regularity in these lines was due to the activity of the newly coined “mini-atoms” or electrons. Instead of being spread around the positive core of the atom like a swarm of bees, they were arranged in regular orbits, mimicking the planets around the Sun in the Copernican solar system model (left image), and so the familiar spectroscopic atom model was born (right image). It was also called the Bohr atom, after the physicist Niels Bohr, who played a major role in developing it.

This regularity in electron arrangement was also called the system of “energy levels” or “electronic shells”, and the lines in the spectra were attributed to electrons moving from one “shell” to another while emitting a particular radiation. Like the specific ratios of weights and volumes obtained in the chemical combinations of substances, there are specific numerical values for the positions of the dark and light bands in the interaction of a substance with light. While the former were used to assign a specific number of electrons, protons and neutrons to the atom, the latter were used to assign a specific number of “energy levels” to the electron. And while it was surprising to see this regularity in weight, it was even more surprising to see it in the orbiting electron, because movement had always been seen as something continuous. It was as if the electron could rotate smoothly along its orbit, but could only jump in a discontinuous fashion when moving from one orbit to another. This discontinuous “jump” is the so-called “quantum jump” that led to another revision of the model of the atom. The term “jump” shows that the electron was still seen as a mere particle, and that scientists predominantly related to their spectroscopic research in a mechanical manner.
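The numerical regularity of the spectral lines is captured by the Rydberg formula, which the Bohr model was built to reproduce. The sketch below uses the modern value of the Rydberg constant and computes hydrogen's visible (Balmer) lines, i.e. jumps down to the second level:

```python
# Wavelengths of hydrogen's Balmer lines from the Rydberg formula,
# using the modern value of the Rydberg constant.
RYDBERG = 1.0973731e7  # Rydberg constant, 1/m

def balmer_wavelength_nm(n):
    """Wavelength of the jump n -> 2 in the Bohr picture, in nanometers."""
    inv_wavelength = RYDBERG * (1 / 2**2 - 1 / n**2)
    return 1e9 / inv_wavelength

for n in (3, 4, 5):
    print(f"n={n} -> 2: {balmer_wavelength_nm(n):.1f} nm")
# n=3 gives the red H-alpha line near 656 nm and n=4 the blue-green
# H-beta line near 486 nm, matching the observed hydrogen spectrum.
```

The same simple whole-number recipe generates the entire visible hydrogen spectrum, which is the “pure number” regularity that so impressed Bohr.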

F. THE QUANTUM ATOM

It is important to see how the various properties of the atom changed over the course of time. To begin with, the atom was the fundamental constituent of all matter, with a specific shape, size and movement that was supposed to explain all the other properties of any particular substance. Later on, weights and chemical affinities were differentiated, such that each element was attributed its own specific atom with its own chemical affinity. Springiness, or force, was added after Newton’s theories of force, and after further experiments with electricity and chemical affinity, electric charges were added to atoms as well. Upon experimenting further with radioactivity and electrified gases, a negative charge carrier, the electron, was added to the atom, and later the positive and neutral charge carriers, the proton and the neutron. Even though charged particles were added, they behaved in the opposite way to charges in the observable world, with like charges clumping together and unlike charges staying apart. The constancy of an elemental atom was removed, as one atom could change into another via radioactive decay or bombardment. The constancy of weight of a particular element was removed: a single element could have many different weights. The atom of the 1910s was therefore completely different from that of the previous century, with each experimental observation of matter adding another modification to the proposed structure of the atom.

The electrons that were supposed to be rotating around the central nucleus were placed in particular energy levels, and the numerical regularity of such a process was highly valued:

Niels Bohr (1885-1962): “This interpretation of the atomic number [as the number of orbital electrons] may be said to signify an important step toward the solution of the boldest dreams of natural science, namely to build up an understanding of the regularities of nature upon the consideration of pure number.”26

This regularity was used to justify the regular structure of the periodic table of elements. However, the problem in understanding the movement of electrons between these levels — the quantum jump — was this:

Gilbert Lewis (1875-1946): “As far as we are aware, the electron cannot exist except in one of a series of levels, and whether the idea of motion of an electron from one level to another has any meaning is somewhat doubtful. As far as we can see, it disappears from one level and appears at another.”27

In addition, unlike the usual behaviour of negative charges (Coulomb’s law) where they intensely repel one another:

Gilbert Lewis (contd.): “Unless we are willing, under the onslaught of quantum theories, to throw overboard all of the basic principles of physical science, we must conclude that the electron in the Bohr atom not only ceases to obey Coulomb’s law, but exerts no influence whatever upon another charged particle at any distance. Yet it is on the basis of Coulomb’s law that the equations of Bohr were derived.”28

In other words, the electrons both did and did not follow the laws for charges. Hence, several inconsistent rules were being used in order to create this model. Just as regularity was detected in the electronic levels, other regularities cropped up in a magnetic sense as well. When a magnetic field was applied to some substances as the light was being passed through them, the resulting spectra showed a splitting of the
spectral lines.

26 [Bohr, N., Atomic Theory and the Description of Nature (1934), p. 103-104.]
27 [Lewis, G. N., Valence and the Structure of Atoms and Molecules, New York (1923), p. 163.]
28 [Lewis, G. N., The Static Atom, Science, Vol. 46, Issue 1187, (1917) p. 297-302.]

Zeeman Effect in Sodium spectrum lines

Just as theory caught up with the appearance of these lines by attributing them to the number of electrons in a particular “orbit”, the lines seen in the spectrum were split further into more lines, which did not fit the theory. The appearance of these unexpected lines was called the “anomalous Zeeman effect”, and there was no available mechanism to account for it. It caused physicists considerable consternation:

Wolfgang Pauli (1900-1958): “The anomalous type of [magnetic] splitting… was hardly understandable, since very general assumptions concerning the electron, using classical theory as well as quantum theory, always led to the same triplet. … A colleague who met me strolling rather aimlessly in the beautiful streets of Copenhagen said to me in a friendly manner, ‘You look very unhappy,’ whereupon I answered
fiercely, ‘How can one look happy when he is thinking of the anomalous Zeeman effect?’” 29

Not only was there an extra multiplicity of lines in the presence of a magnetic field, but it appeared that the electric charge itself had other intrinsic magnetic properties. Let us remember how cathode rays, β radiation, or electrons were deviated by a magnetic field. Usually the beam of electrons moved in a slight curve to meet the screen at a single spot. Instead of a beam of negatively charged electrons, Otto Stern and Walther Gerlach decided to use a beam of neutral silver atoms; according to theory, a silver atom had one electron in its outermost “orbit.” They boiled silver up to 1300 ºC and collected the vapors in a box.

A beam of silver shot out through a hole in the box, and was passed through a magnetic field, where, instead of giving a single blob of intensity on the sensitive plate, it split neatly into two! Upon noticing this, the experimenter declared:

Otto Stern (1888-1969): “I was unable to understand anything about the outcome of the experiment, the two discrete beams. It was totally incomprehensible. It is obvious [today] that [in order to comprehend the experiment] one needs not only the new quantum theory but also a magnetic electron. These are the two things which were still missing at the time. I was fully confused and did not know what to do with such a result.” 30

He was confused because, in essence, the electron, the quintessential carrier of electric charge in the atom, had now been given an intrinsic magnetic property (also called “spin”) as well. It became a sort of “electro-magnetic mini-atom.” What about the protons and neutrons? They also seemed to have a similar magnetic property, since, just like silver, beams of simple elements like hydrogen (with one proton) and of deuterium (an isotope of hydrogen whose nucleus contains one proton and one neutron) also split into a multitude of beams when passed through a magnetic field. This meant that even protons and neutrons had internal “structure”, and did not qualify as ultimate units of matter.

The study of the “nucleus” has proceeded to detect further differentiations, predominantly by subjecting a beam of the vapour of a substance to magnetic field oscillations. When, as a result, the substance emits radiation, the nature of that radiation is analyzed just as if it had come out of a prism, a process called nuclear spectroscopy. Nuclear spectra showed the same kinds of lines and bands that were earlier obtained in regular light spectra. Therefore, by analogy with the electronic shells, the “nuclear shell” model was proposed to account for them, even though it is hard to imagine how the tightly packed protons and neutrons could ever rotate freely the way the electrons are supposed to:

29 [Pauli, W., Remarks on the History of the Exclusion Principle, Science, Vol. 103, Issue 2669, (1946), p. 213-215.]
30 [ETH-Bibliothek Zürich, Archive, http://www.sr.ethbib.ethz.ch/, Otto Stern tape-recording Folder “ST-Misc.”, 1961 at E.T.H. Zürich by Res Jost]

Maria Goeppert Mayer (1906-1972): “Indeed one might try to copy the essential features of the atomic structure for nuclear structure… The assumption of the occurrence of clear individual orbits of neutrons and protons in the nucleus is open to grave doubts… We shall pursue the description of the nucleus by the independent orbit model. It still remains surprising that the model works so well.” 31

It worked mathematically, and so another physical conundrum was set aside. In the earlier picture, the electrons had orbited around a positively charged nucleus at the center. Now, the nucleus itself was imagined in the same way the electrons had been, but with nothing at the center:

Nucleus with protons (red) and neutrons (blue)

The question of “What do the nucleons orbit around?” was also left aside. As the picture of the atom was disintegrating into ever tinier “parts”, the most critical change in its structure came with the discovery of the wave-like nature of matter, where the tiny billiard-ball model of any part of the atom, be it proton, electron, neutron, or any other “-on”, was deemed inadequate.

When beams of electrons (cathode rays) were directed at a crystal of nickel, in the same way that α radiation was beamed onto a thin gold foil by Rutherford, the resulting reflection from the crystal showed a periodic pattern. In a regular reflection, the electrons should have bounced back along the direction they arrived from; because the reflection is wave-like, electrons also show up at different angles:

31 [Mayer, M. G., The Shell Model, Nobel Lectures in Physics, December 12, 1963.]

Similarly, helium and hydrogen beams were directed onto a salt (lithium fluoride or sodium chloride) crystal. 32 They showed a similar wave-like pattern in the reflection.
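The wavelength that governs these diffraction patterns is given by the de Broglie relation λ = h/p, which assigns a wavelength to any beam of matter; peaks appear when λ is comparable to the crystal spacing (~0.2 nm). The sketch below uses modern constants, and the beam energies are illustrative assumptions close to, but not exactly, the historical values:

```python
# De Broglie wavelengths for electron and helium beams.
import math

H_PLANCK = 6.626e-34    # Planck constant, J*s
M_ELECTRON = 9.109e-31  # kg
M_HELIUM = 6.646e-27    # kg
JOULES_PER_EV = 1.602e-19

def de_broglie_nm(mass_kg, kinetic_energy_eV):
    """Wavelength in nm from the non-relativistic momentum p = sqrt(2mE)."""
    p = math.sqrt(2 * mass_kg * kinetic_energy_eV * JOULES_PER_EV)
    return H_PLANCK / p * 1e9

# Electrons at 54 eV, roughly the Davisson-Germer beam energy:
print(f"electron, 54 eV:  {de_broglie_nm(M_ELECTRON, 54):.3f} nm")
# Helium atoms are ~7000x heavier, so thermal (slow) beams are needed
# to reach a comparable wavelength:
print(f"helium, 0.05 eV:  {de_broglie_nm(M_HELIUM, 0.05):.3f} nm")
```

Both come out near the crystal lattice spacing, which is why crystals act as natural diffraction gratings for matter beams.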

In the schematic, the skimmer is a small hole, and the manometer measures the intensity of the reflected helium. The graph shows the resulting intensity of the reflected beam of helium from a NaCl crystal as a function of angle. The main peak is the one in the center (0º), while the two adjacent ones (approx. ±10º) are what reveal the wave-like character of the beam’s interaction with the crystal at 300 K. Such wave-like behaviour extended to all matter, be it beams of hydrogen or even heavy carbon-60 (fullerene) molecules. What is even more surprising is that this behavior mimicked the behavior of light: light generates such wave-like patterns as well when passed through a narrow slit:

32 [Estermann, I., and O. Stern, Beugung von Molekularstrahlen, Z. Phys. 61, 95 (1930).]

Thus, matter and light behaved very similarly to one another in these circumstances. However, it is important to note that neither of the analogies, a solid particle or a dissipating wave in a liquid, suffices to fully describe the behavior of light, electrons or even atoms. Let us take the case of X-rays. X-rays are generated when a beam of cathode rays strikes a metallic surface. However, the X-rays have the capacity to trigger secondary electricity in turn in the surfaces they strike. If X-rays are taken to be waves dissipating from a source, the analogy would be something like this:

Sir William Bragg (1862-1942): “Let me take an analogy. I drop a log of wood into the sea from a height, let us say, of 100 feet. A wave radiates away from where it falls. Here is the corpuscular radiation producing a wave. The wave spreads, its energy is more and more widely distributed, the ripples get less and less in height. At a short distance, a few hundred yards perhaps, the effect will apparently have
disappeared. If the water were perfectly free from viscosity and there were no other causes to fritter away the energy of the waves, they would travel, let us say, 1,000 miles. By which time the height of the ripples would be, as we can readily imagine, extremely small.

“Then, at some one point on its circumference, the ripple encounters a wooden ship. It may have encountered thousands of ships before that and nothing has happened, but in this one particular case the unexpected happens. One of the ship’s timbers suddenly flies up in the air to exactly 100 feet, that is to say, if it got clear away from the ship without having to crash through parts of the rigging or something else of the structure. The problem is, where did the energy come from that shot this plank into the air, and why was its velocity so exactly related to that of the plank which was dropped into the water 1,000 miles away? It is this problem that leaves us guessing.”

And he finally gives up completely:

“No known theory can be distorted so as to provide even an approximate explanation. There must be some fact of which we are entirely ignorant and whose discovery may revolutionize our views of the relations between waves and ether and matter. For the present we have to work on both theories. On Mondays, Wednesdays, and Fridays we use the wave theory; on Tuesdays, Thursdays, and Saturdays we
think in streams of flying energy quanta or corpuscles!”33

While this was said in the Robert Boyle Lecture at Oxford University in the year 1921, the situation is similar today, a century later. The “wave-particle” duality has become a paradox of modern science, and the dichotomy has not been adequately resolved to this day. The search for the material basis of matter itself, where one could say that the smallest parts of the atom contained something in the nature of a solid ball, was now appearing less “solid.” As another scientist of the day declared:

“The difficulties that emerge ever more clearly in atomic physics appear to me to arise less from an exaggerated application of the quantum theory and much more from a perhaps exaggerated belief in the reality of concepts of models.”34

33 [Bragg, W. H., Electrons and Ether Waves, The Robert Boyle Lecture at Oxford University for the year 1921.]
34 [Sommerfeld, A., Grundlagen der Quantentheorie und des Bohr’schen Atommodelles, Die Naturwissenschaften, 12 (1924), p. 1048.]

G. ATOMISM TODAY

The complexity of the spectra of the nucleons, i.e. protons and neutrons, gave rise to theoretical ideas about further constituents of these as well, called “quarks.” In order to delve deeper into the constituents of the nucleons, experimentation in the 1950s used the “bombardment” technique inaugurated by Rutherford and his gold foil, and obtained an enormous variety of “particles” by intersecting subatomic beams (like electrons and protons) with other beams or with a stationary target (we will refer to these as particles even though they are technically “wave-particles” or something more complex). This bombardment was accomplished through large accelerators such as those at CERN, Fermilab, Oak Ridge, etc.

An accelerator contains circular evacuated tubes through which these beams are accelerated, as in a sling. Just as we rotate a rock in a sling repeatedly in a circle to build up enough speed to throw it at a target, particle accelerators circulate beams of electrons or protons to direct them at a target. Instead of a flick of the wrist, as with a slingshot, the beams are accelerated using bursts of radio waves. And instead of a cloth or a string holding the rotating object in place, powerful magnets guide the particle beams in a circle, thousands of times a second. All of this requires a great deal of infrastructure, and as a result these projects have become enormous multi-national collaborations. Interestingly enough, just a bottle of hydrogen can serve as a proton source to run a large center like CERN for a couple of years!
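The sling analogy can be made concrete: a magnetic field B bends a particle of momentum p and charge q into a circle of radius r = p/(qB). The momentum and field strength in the sketch below are illustrative assumptions of the scale used at CERN's largest ring, not a description of any specific machine's parameters:

```python
# Bending radius of a charged particle in a magnetic field, r = p/(q*B).
E_CHARGE = 1.602e-19   # elementary charge, C
LIGHT_SPEED = 2.998e8  # m/s

def bending_radius_m(momentum_GeV_per_c, field_T, charge=E_CHARGE):
    # Convert momentum from GeV/c to SI units (kg*m/s):
    p_SI = momentum_GeV_per_c * 1e9 * E_CHARGE / LIGHT_SPEED
    return p_SI / (charge * field_T)

# A proton with 7000 GeV/c of momentum in an 8.3 T field (assumed,
# LHC-scale numbers):
r = bending_radius_m(7000, 8.3)
print(f"bending radius: ~{r / 1000:.1f} km")
```

A bending radius of a few kilometers is why ring circumferences are measured in tens of kilometers, and why very strong superconducting magnets are needed to play the role of the sling's string.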

Through these accelerator collisions, a plethora of new “particles” were observed that were collectively called the “particle zoo.” These observations were not always welcome:

Willis Lamb, Jr. (1913-2008): “When the Nobel Prizes were first awarded in 1901, physicists knew something of just two objects which are now called ‘elementary particles’: the electron and the proton. A deluge of other ‘elementary’ particles appeared after 1930; neutron, neutrino, μ-meson, π-meson, heavier mesons, and various hyperons. I have heard it said that ‘the finder of a new elementary particle used to be rewarded by a Nobel Prize, but such a discovery now ought to be punished by a $10,000 fine’.”35 

Isidor Isaac Rabi (1898-1988): “Who ordered that?” [in response to the detection of the muon particle]36

One can already see that the attempt to get to the simple fundamental unit of matter has led increasingly into a maze. These experiments also lacked the elegance of the older efforts:

John Ashworth Ratcliffe (1902-1987): “There was, I think, a feeling that the best science was that done in the simplest way. In experimental work, as in mathematics, there was “style” and a result obtained with simple equipment was more elegant than one obtained with complicated apparatus, just as a mathematical proof derived neatly was better than one involving laborious calculations. Rutherford’s first disintegration experiment, and Chadwick’s discovery of the neutron had a “style” that is different from that of experiments made with giant accelerators.”37

35 [Lamb, W. Jr., Fine structure of the hydrogen atom, Nobel Lecture in Physics, December 12, 1955.]
36 [https://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/muon-discovery]
37 [Ratcliffe, J. A., Physics in a University Laboratory Before and After World War II, Proceedings of the Royal Society of London, Series A, 342, (1975), p. 463.]

On the other hand, the experimental picture of atomic structure in the latter half of the twentieth century seemed to benefit from advances in technology, such as nanometer-scale microscopy, once focussed electron beams and extremely sharp tips were developed. Images of atoms are now created through high-resolution microscopy, where either a focussed electron beam is passed through or over a surface and the resulting voltage or current changes are measured, or a thin needle is run over the surface of a solid substance, like a gramophone needle, and the resulting movements are measured. The entire apparatus is usually kept in a vacuum chamber at very low temperature, such that most of the substances being examined are frozen. The measured voltages or movements are converted into images by computer, by converting the voltage values or the needle-deflection values into colors and brightness. This is how the modern “images” of the atomic lattice are generated. When changes in electric voltage or current are detected, the instrument is called a scanning (SEM) or transmission (TEM) electron microscope, and when changes in the position of the needle-like cantilever are detected, it is called an atomic force microscope (AFM).
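The last step, converting numbers into a picture, can be sketched in a few lines. The scan grid below is made-up data, and real instruments use far more elaborate processing; the point is only that each measured value is rescaled to a brightness level:

```python
# Minimal sketch of how scan values become an "image of atoms":
# each measured value is linearly rescaled to a grayscale level.
def to_grayscale(values, levels=256):
    lo = min(min(row) for row in values)
    hi = max(max(row) for row in values)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat scan
    return [[round((v - lo) / span * (levels - 1)) for v in row]
            for row in values]

# A made-up 3x3 scan with a "bump" (higher reading) in the middle:
scan = [[0.10, 0.12, 0.10],
        [0.12, 0.30, 0.12],
        [0.10, 0.12, 0.10]]
print(to_grayscale(scan))  # the central bump maps to the brightest level
```

The resulting “image” is thus a rendering of a table of numbers through a chosen color scale, not a photograph of the surface.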

In atomic force microscopy (AFM), the vibration of the cantilever tip is detected by a laser beam that is reflected off the tip, like this:

Sometimes, the vibration frequency of the tip itself is monitored, and the changes in this frequency are translated, through a theoretical model programmed into the computer, into “bumps” on the surface. Both SEM and AFM give rise to a series of images such as these:

1 nm means one nanometer (10⁻⁹ m). Note how no lattice structure is visible for aluminum in the liquid state, while it is clearly visible for the other, solid-state structures. This lack of granularity is especially important, since it is usually assumed that liquids and gases are composed of atomic “particles” moving around. The experimental evidence so far, however, shows the crystalline lattice structure, and the relevant atomic periodicity, only in solids.

It is even possible to obtain images of molecular structures of organic compounds using these techniques:

38 [http://dx.doi.org/10.1002/anie.201405690]
39 [A facetted nano-void in diamond. Image: Iain Godfrey (SuperSTEM Laboratory, University of Manchester)]
40 [https://doi.org/10.1016/j.actamat.2010.10.069]
41 [https://science.sciencemag.org/content/342/6158/611.abstract]

This might give rise to the impression that a chemical reaction which adds an element (such as hydrogen) behaves like the typical ball-and-stick model, where an atom of the corresponding element (H) is simply attached mechanically. However, observation of an actual reaction shows something different:42

In these AFM images of a hydrocarbon on copper, the frequency of vibration of the cantilever tip is being monitored to detect where it changes the most as it scans the surface. It can be seen that in step (b) where hydrogen is supposed to be “attached”, the entire molecular structure is thrown into chaos, before a reconstitution that occurs in steps (c) and (d). This suggests that while the ball-and-stick approach might be easy to handle spatially, just like the “particle” approach, it is probably not the best fit for a chemical reaction.

These technological successes, subatomic collisions on the one hand and microscopy on the other, have for the most part shifted attention away from structural questions about the atom. Nowadays, we rarely dwell on questions like: “What is the physical status of the atom? What is the most fundamental unit of the physical world? Does the behaviour of atoms, or sub-atoms, ‘explain’ the observed phenomena? Or do we use the observed phenomena to continue modifying our ‘model’ of matter?” The philosophical debates that characterized the beginning of the 20th century have lost their prominence at the turn of the 21st, while experimental techniques have opened the doors to a host of new phenomena, occurring every day, for which theoreticians are usually hard-pressed to provide an explanation.

The emphasis has instead moved to the manipulation of atomic structures, which can now be moved around using these microscopic probes. One of the first examples was the 1989 image of frozen xenon on nickel (at a very low temperature of about –265 °C) that spelled out the name of the company that made it 43:

42 https://www.nature.com/articles/ncomms12711
43 https://www.nytimes.com/1990/04/05/us/2-researchers-spell-ibm-atom-by-atom.html

This was carried further in the 2013 IBM “movie” A Boy And His Atom, made by rearranging carbon monoxide molecules on copper at a temperature of nearly –260 °C, with the manipulation done by passing a tiny current through a microscopic needle to change their positions 44:

A closer inspection shows that there are wave-like disturbances surrounding each bright spot, which indicate that even though the spots look like independent, isolated units, they are nevertheless still connected with the background through their wave-nature.
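To put the cryogenic temperatures quoted above on the absolute scale, the conversion K = °C + 273.15 can be applied — showing that these manipulations take place within roughly ten degrees of absolute zero:

```python
# Convert the quoted Celsius temperatures to kelvin (K = C + 273.15).
def c_to_k(celsius):
    return celsius + 273.15

xenon_image = c_to_k(-265)   # 1989 xenon-on-nickel image: ~8 K
atom_movie  = c_to_k(-260)   # 2013 "A Boy And His Atom": ~13 K
```

Such extreme cooling is needed to keep the adsorbed atoms and molecules from wandering off their lattice sites during imaging and manipulation.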

SUMMARY

Atomism has had an interesting history, which unfolded in three overall phases. In the first phase, macroscopic properties such as weight, movement, chemical affinity, magnetic behaviour, and interaction with light were transposed onto atoms, one by one, so that the atom became a “microscopic version” of a macroscopic object. The thinking was that atoms were fundamental units of matter that would explain the ordinary behaviour of matter. Any numerical regularities, such as proportional weights or volumes in chemical reactions, were attributed to the number and type of atoms along with their interactions. It was a journey of “finding the atom.”

In the second phase, the seemingly indivisible atom was itself disintegrated into innumerable aspects, as various experiments revealed new phenomena in the behaviour of matter. Electrons, protons, neutrons, “shells,” and spin were added to its structure, with their corresponding numerical regularities, while several known physical laws regarding charges, movements, and so on were disregarded, or even inverted, to suit the experimental results. The second phase reached its culmination in the observation of wave-particle duality, a behaviour shared with light, which has not been theoretically resolved. The “hardness” of the atom was no longer a secure concept, and the other concepts that had been used to describe the atom also came apart one by one. Instead, a picture of immensely detailed complexity, with hundreds of new “particles,” faced the researcher. This was a journey of “losing the atom.”

Today, in the third phase, technology has mostly taken over, as microscopy has opened the door to visualizations of atomic systems, using focussed electron beams or extremely sharp needle-cantilevers as probes of the sample. These interact with electromagnetic changes or mechanical changes on the surface, respectively. In this sense, atoms appear as part of a lattice of relatively static centers of electromagnetic or mechanical forces, in the solid state and mostly at low temperatures. Rather than focusing on what an atom is made of, the focus is now on how an instrument interacts with what we had originally called the atom. We could call this a journey of “engaging with the new atom,” and it is a journey that we are on to this day.

44 https://www.research.ibm.com/articles/madewithatoms.shtml

Citation: Vijaya, Gopi Krishna. 2021. Atomism — An Outline of Discoveries and Theories. The Nature Institute. https://natureinstitute.org/gopi-krishna-vijaya/atomism-an-outline-of-discoveries-and-theories

Email: vgopik@gmail.com

Copyright 2021 The Nature Institute