Although I have stopped publishing new essays for this site since I completed my Self-Doctorate, I was recently asked to write an essay for a physics & philosophy conference, and afterwards, I thought it would be nice to include that essay in this collection. The following is the history-of-the-modern-atom essay I wrote for the conference… It is very long (43 pages in the style I used), and it basically just sums up and brings together in one train of thought many of the points I made in my other essays (written over three years) concerning atomic physics…
— — —
The reaction of the scientific community against the concept of the Force-conveying Ether in the late 1800s and early 1900s caused atomic physicists to careen too far in the opposite direction, reducing in importance the idea of Energy (or Lines Of Force) propagating through Space, and instead, becoming obsessed with assigning to all dynamic systems identity-maintaining, matter-based Force-carriers.
In the science of today, “Things” have come to dominate “Events.”
The modern tendency to view the world as Things occupying Space is not unique to the mindset of physicists. There seems to be a biological or socio-historical bias toward overemphasizing Position and Quantity at the expense of Process and Quality.
When we modern humans look at a tree, for instance, we tend to conceptualize it as something semi-rigid and near-monolithic, as ONE thing. What we don’t tend to think about is the other, equally valid view of the tree’s reality… that of an ever-changing stream of Energy, matter, and signals flowing through the world. If our consciousness were of a different sort– dare I say, of a higher sort– we would see the tree not as a Thing but as an Event, and beyond that, as a series of interconnected, inter-influencing Events which could be called a System.
In consequence of our current Thing-prejudiced view of the world, physicists of the 20th century claimed to see an atom composed of Force-carrying particles– starting with the poor, overworked Electron, which– according to currently accepted theories– serves as the Force-carrying particle for a variety of Forces, including: electricity, molecular bonding, and magnetism. Eventually there were also Protons (carriers of the supposed “positive Charge”), Gluons (carriers of the supposed “Strong Force”), W & Z Bosons (carriers for the supposed “Weak Force”), and some number of others, depending on whom you talk to.
Furthermore, since these Force-carrying particles are typically assumed by physicists to retain their identities within the atom, this has caused all sorts of headaches for atomic theorists, as the emerging Suitcase Atom is pictured as carrying around a sackful of Forces, some of which can interfere with one another, and all of which are supposed to be on the move, attached as they are to their Force-carriers and, to a removed degree, to the particles which emit the Force-carriers.
I will be explaining in this paper how we arrived at this illogical, perplexing, and truth-fogging state-of-affairs. I will also be offering an alternative version of atomic physics, a version in which Process and Quality are brought back in balance with Things and Quantity. I will show that the atomic world, instead of being a potpourri of particles, is actually a dynamic domain of circulating Energy, and that an atom is not so much a Thing, as an Event.
Everyone knows how difficult it is to think of the melody of a song that you’re trying to remember while another song is booming in your ears. The same goes for ideas. It is almost impossible to entertain a different idea when you are constantly being bombarded by old ones. New ideas come from putting together experiences in new ways– in new orders-of-progression, in new orientations, in new rhythms and in new sizes, shapes, and shades. New ideas do not come from tooling around in well-worn ruts or from blind-acceptance of handed-down proclamations.
The modern scientist who wishes to move closer to the deeper truths of our world must set himself apart. He must distance himself from the herd and achieve some semblance of isolation in order to achieve perspective upon what the herd is actually doing– what they are stampeding toward, what they are running away from.
If surrounded continuously and intimately by others, our own personal magnets, spiritually speaking, can’t help but line-up with those of others. Any attempt to turn against the current will be overwhelmed and pushed back into alignment with the general stream.
When wave after wave of herdthink is washing constantly over a person, there is not space or time enough for the formation of idiosyncratic thoughts. And it is idiosyncratic thoughts which seed the clouds of cognition and midwife the birth of those brilliant flashes of insight which drive human wisdom and achievement ever onward and upward.
When data and received opinions are constantly flooding our minds, we lose the ability to question their validity. One cannot see one’s reflection if constantly submerged in a body of water that is forever being agitated. Reflection requires still waters.
To think outside the box, one must actually, physically sometimes BE outside the box. Without time spent alone, away from the herd, one’s ability to think creatively atrophies.
The dominance of a particular mindset throughout a profession or society is a sort of Babylonian Captivity– an imprisonment inside the dominant culture. It requires bravery, effort, resolution, and an open mind to break such mental bonds.
There are three types of knowledge-acquirers… the Specialist, the Technician, and the Generalist. Trouble emerges in a field if one or two of these types become too dominant over the remainder. We need all the types, all productive styles of inquiry.
My fear for today’s scientific culture is that we are losing the informed Generalist, and so missing out on the contributions he or she could make to the ever-expanding sphere of human knowledge.
For instance, it would be all but impossible for any self-respecting physicist today (the successful ones all being Specialists in some very particular field) to question the theoretical tenets of the subatomic world laid down by the demi-gods of the Quantum revolution of the early 20th century. Not only would a second-guessing physicist not wish to invite upon himself the opprobrium, ridicule, and career-risk of mounting such a challenge– he, being a Specialist, would (frankly) not have the skillset to do so.
Because so much of physics is dominated today by Specialists, no one much seems to notice or to care that while some physicists work diligently in their own areas under certain theories and particular interpretations of Nature, other physicists –just as intelligent and just as educated– go about their own day-to-day tasks using conflicting theories and interpretations appropriate for work in their own specialized fields. The most glaring example of this phenomenon is in atomic physics– with some theorists operating under the paradigm of Electron clouds while others make their calculations under the assumption of Electron orbits and spins.
Another problem in modern physics, besides the cubicle-ization of the field, is that some best-guesses made by the creative physicists of a hundred years ago have become petrified into unquestioned dogma. This is why it is important that we encourage a re-dedication to the big-picture study of the history of physics. Scientists must continuously seek an understanding of Nature which is not filtered exclusively through the merely modern perspective, but one that includes the views of the great minds of physical science throughout the centuries. We must not overly discount those paths not taken. We stunt our own growth when we too readily dismiss the old theories without a serious and sympathetic attempt to understand them and, where necessary, to refute them with our own knowledge and powers of reasoning.
Ernst Mach once warned his fellow thinkers not to limit their views to those of their own age, for if they did so, they would naturally be led to overestimate the significance of “the momentary trend.”
The more-than-adequate scientist must therefore travel back through history and revisit the intellectual forks in the road, re-examine the places where footprints and viewpoints became cemented, and where philosophies diverged into the living and the dead. We owe it to ourselves to travel a-ways down the paths not taken, to see what small or large treasures we may find left undiscovered along the untrodden ways.
It is all too easy to accept the dogma spoonfed us by the world’s authorities, to fall into the snugly inviting trap of accepting the handed-down principles without re-examination in light of new data or improved understandings. Without such re-evaluations, the principles of the past risk becoming, as Mach put it, “a system of half-understood prejudices.”
The baby bird who never learns to snatch his own worms will spend his life swallowing regurgitations.
It is often said that “Quantum theory is the most successful scientific theory in history.” But this statement is misleading.
What has proven “successful” is the modern physicist’s ability to use the statistical analysis of past atomic or molecular events to make statistical predictions about future events. As to all the talk of Quarks and Quantum Leaps and the like, that is all mere conjecture offered as interpretations of observed statistical outcomes. Much of Quantum Physics’ speculation as to the particles and Forces and activities of the subatomic world is not directly verifiable. In fact, Quantum Physics presupposes certain rules, such as Wave-Particle Duality and the Uncertainty Principle, which make some of its tenets basically untestable. We are asked to accept them, like the tenets of any good religion, on faith alone.
When one looks at the history of Quantum Physics, one discovers that the reason its predictions so perfectly fit experimental outcomes is that the experimental outcomes came first, and then Quantum Physics adapted itself to fit the facts. The history of Quantum Physics is basically a history of patchwork, with new experiments blowing holes in the developing theories, only to have those blown-apart theories re-emerge zombie-like, stitched back together and reanimated by quick fixes designed to overcome each theory-threatening fact.
Over time, the grab-bag of ad hoc fixes in modern physics has been sewn together water-tight, and the Quantum Religion, like any formidable and enduring religion, has been able to pretzel itself into a system mostly internally consistent and highly resistant to breaches made by unbelievers from the outside world.
However, such insular thinking is troubling. What if the unquestioned acceptance of the mythic explanations of Quantum Theory has actually resulted in an unrecognized, counterproductive conservatism of thought? Unquestioned faith, in any religion, can hold a people back from seeing the real truths staring them in the face.
We must be careful in science not to over-extrapolate from the evidence in order to provide a more satisfying explanation of events than the data actually provides. There is often a giant leap to be made between the “What?” and the “Why?” of a situation. A true scientist does not make up fairy tales he wishes were true or that would help him fill in the blanks of his picture of the world.
Of what we do not know, we should not speak. We must hold up as heroic example Isaac Newton’s proclamation, “Hypotheses non fingo” (“I frame no hypotheses”); Newton showed a brave willingness to admit his own ignorance and partial impotency in the face of a mysterious and overwhelming Universe. May we prove half so strong.
Part of the problem underlying the lack of fundamentally new thinking in modern physics is that the TYPES of experiments being funded today already assume a certain mindset and a pre-established set of assumptions. When we spend billions of dollars and numerous years of effort to create atom-smashers to find new particles, then, by God, we’re going to find new particles. There are simply too many dollars, hours, and egos invested NOT to.
It is inevitable, said Mach, that our scientific experiments and ideas “single out, more or less arbitrarily, now this aspect, now that aspect of the same facts,” and our thoughts about the physical world “never reproduce the facts in full, but only that side of them which is important to us.” By limiting the investigations of physics to narrow and unbending paths, we exclude from the outset a vast array of other approaches and ideas.
The situation is similar to the debate over who was the true inventor of Calculus — Newton or Leibniz? In reality, neither man was the “inventor” of Calculus; instead, the mathematical methods utilized and grouped under the subject-heading “Calculus” were developed gradually, over hundreds of years, and by a host of contributors. But by asking whether Newton or Leibniz was the inventor of Calculus, we make at least two, limiting assumptions: 1) that a single person did indeed invent Calculus, 2) that this person was either Newton or Leibniz, and could not have been someone else.
The power to frame the question, in any field, is the power to limit the range of possible answers. In physics, to ask… What new subatomic particles are we going to find? …or… How long until we find this-or-that particle?… skews the whole science away from any findings irrelevant to the particular questions posed.
If we think of what Quantum Physics has actually constructed for us over the last one hundred years, it has not been so much a final interpretation of the Universe, as an exquisite machine for processing scenarios and likely outcomes. But as Mach warned us long ago, “We must beware lest the intellectual machinery employed in the representation of the world” […] “be regarded as the basis of the real world.” A map of the world is not the world. As scientists, we must steer clear of the “undue and fantastical exaggerations of an incomplete perception.”
The purported physical explanations of the statistical outcomes of Quantum Physics are largely just so much myth-making. When you need an explanation for a phenomenon that you don’t understand, a Quark will do as well as a God, and a Strong Force as well as a Prophet’s Staff.
But are the events of our Universe really as random and non-deterministic as believers in Quantum Theory (or some other free-wheelin’ deity) would have us believe?
During the first part of his life, and operating under the strong influence of Franz Exner, Erwin Schrodinger was quite willing to believe that determinism broke down at the atomic level. In the realm of the very, very small, Schrodinger conjectured that what we perceive as ironclad “laws” of Nature at the macro-level are, at the micro-level, more like predilections. Climbing the ladder from the nano to the macro, the predilections of the very small eventually become, due to overwhelming statistical bias, the “laws” of the very large. For example, the Law Of Conservation Of Energy holds for the entire Universe, and in every event observable by the human eye. Nevertheless– or so thought young Schrodinger– at the atomic level, it is quite possible that Energy need NOT be conserved in each particular event.
Schrodinger was still thinking this way when he offered an explanation as to why atoms are so small and we humans are, relatively, so big. According to the Schrodinger of this period, if the Laws Of Nature are merely statistical, then they may only apply consistently to large sample sizes. An organism which is too small for the statistical averages to work themselves to the forefront of outcomes would thus be subject to random fluctuations, and would therefore, quite literally, fall to pieces.
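Schrodinger’s reasoning here is statistical, and it can be illustrated numerically. The sketch below (my own toy illustration, not anything from Schrodinger or Exner) sums N random two-way events and shows that the relative fluctuation of the total shrinks roughly as 1/√N: small collections wobble wildly, while large ones settle into near-lawful regularity.

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def relative_fluctuation(n, trials=400):
    """Relative spread (std / mean) of the sum of n random two-way events."""
    sums = [sum(random.random() < 0.5 for _ in range(n)) for _ in range(trials)]
    mean = sum(sums) / trials
    var = sum((s - mean) ** 2 for s in sums) / trials
    return math.sqrt(var) / mean

# The spread shrinks roughly as 1/sqrt(n): "law" emerges from "predilection."
results = {n: relative_fluctuation(n) for n in (16, 256, 4096)}
for n, rf in results.items():
    print(n, round(rf, 4))
```

An organism built from only a handful of such events would, as Schrodinger argued, be at the mercy of the large fluctuations seen at small N.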
Schrodinger at this point in his life felt, like Exner, that, if what we experience at the macro-level is actually the aggregate result of an uncountable number of statistically-biased outcomes at the nano-level, then to postulate that an ABSOLUTE law of Nature exists beneath such statistical, macro outcomes would take us “beyond the bounds of experience.” In other words, the young Schrodinger believed that individual atomic events could very well be random.
Logically speaking, however, even if we grant that there truly exists a nano-world which does not obey the laws of the macro-world– it need not be a world of randomness. To assume that chaos reigns in the realm of the very small requires a leap beyond the dictates of logic. It may indeed be the case that some macro-laws do not apply to the nano-level, but there is no overbearing reason to assume that NO laws apply there and that all is uncertainty from the ground up.
Schrodinger, by the way, later changed his mind about all this and, along with Albert Einstein, sided against the Uncertainty Principle of Quantum Theory. The older Schrodinger believed that even the Universe’s tiniest events are not completely undetermined.
Einstein once complained to Schrodinger that most of their fellow physicists “do not look from fact to the theory, but from the theory to the facts.” To Einstein, the danger of possessing a blind loyalty to a pre-established worldview is that physicists so enthralled “cannot extricate themselves from a once accepted conceptual net, but only flop about in it in a grotesque way.” If a highly trained mathematician really wished to prove that an earthworm is the most efficient, and thus highest, creation of an all-powerful deity, a justifying and ennobling set of complex and impressive mathematical equations would not be long in coming.
In fact, it is no secret that mathematics has largely taken over physics, a case of the handmaiden becoming the mistress. Mach once remarked that the mirroring of physical phenomena with mathematical equations is a mysterious and happy coincidence, requiring merely that someone notice that “the relations between the quantities investigated [are] similar to certain relations obtaining between familiar mathematical functions.” The key word here is familiar. As Mach observed, “natural phenomena whose relations are NOT similar to those of familiar functions are at present very difficult to reconstruct.” In other words, we are far more likely to recognize mathematical patterns in Nature for which we already have mathematical equations than those which abide by no recognizable mathematical relationship.
Math most certainly should occupy a place of power and honor in physics. For when we can combine experiment with mathematical logic, we can deduce new facts which are not directly observable, or facts which we did not even think to look for.
For example, some of the most fruitful types of equations in physics are those functions involving maximums and minimums. Leonhard Euler stated that, when we find a process producing something at a maximum, this can enlighten us as to the purpose of the process being studied– a sort of reverse engineering approach to physics. Also, in a Universe which appears to want to be as lazy as possible and to follow the path of least resistance, finding where values for things such as Work or Distance or Energy-Expense occur at a minimum can give us a good heads-up as to which path Nature is likely to travel– that path being the one requiring the minimum effort or Energy.
Furthermore, when testing a new hypothesis, the most illuminating information often can be gleaned from studying phenomena at their extremes… at or near highest speeds or lowest speeds, at highest temps or lowest temps, et cetera.
However, there are risks to a predominantly mathematical approach to physics. One danger is that sometimes equations can be found to fit the data by a kind of coincidence –or by a sort of neat trick. But such equations may tell us nothing about the real situation, and indeed, can sometimes mislead us.
To give one example of dangerous mathematics… Consider a situation in which the equations describing a particle’s trajectory return an undefined value (or infinity!) for certain locations. The physical interpretation offered may then be that the particle “disappears” at that location, only to reappear later at some other location before continuing on. Such, for instance, is the math underlying the infamous “Quantum Leap” of Quantum Theory’s Electron.
To cite another example of misleading math… During the early days of spectral analysis, when it was discovered that different materials produce different spectral patterns when irradiated, some mathy individuals immediately went to work finding equations which would produce the same patterns. When someone produced a mathematical equation which could mirror the experimental data stream, that individual would be highly praised by the physics community, as if some fundamental law of Nature had been “discovered” by the finding of a fitting equation.
However, due to steadily advancing technology and methodology, better and more finely tuned spectral analysis was accomplished– and suddenly the mathematicians’ nifty equations no longer matched the data being gathered. Thus, instead of serving as a mathematical backdoor to the secrets of Nature, the equations which had previously mirrored the original data were seen to have fit merely by a sort of happy coincidence; they had never actually represented the true reality of the situation. Worse still, such equations had misled scientists, causing them to spend time and resources attempting to make sense of what the equations were telling them about Nature.
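The trap can be reproduced in miniature. In this sketch (toy numbers of my own invention, not real spectral data), a polynomial is made to pass exactly through four coarse “measurements” of an underlying curve; the fit is perfect on the data that inspired it, yet one finer measurement between the old points exposes the match as coincidence.

```python
import math

def exact_fit(points):
    """Lagrange polynomial passing exactly through the given (x, y) points."""
    def poly(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return poly

def true_law(x):
    """The 'true law' of Nature, unknown to the fitter (purely illustrative)."""
    return math.exp(-x) * math.cos(3 * x)

coarse = [(k / 3, true_law(k / 3)) for k in range(4)]  # four early measurements
model = exact_fit(coarse)

max_err_old = max(abs(model(x) - y) for x, y in coarse)  # essentially zero
err_new = abs(model(0.5) - true_law(0.5))                # a later, finer measurement
print(f"error on old data: {max_err_old:.2e}, on new data: {err_new:.2e}")
```

The equation “discovers” nothing; it merely memorizes the data it was built from.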
Mach once cited, as an example of scientists allowing themselves to be carried away by speculation and by mathematical abstraction, the idea of supposed extra dimensions (those beyond length, width, height… and for some, Time). The case for 5th, 6th, and even higher dimensions stems mainly from mathematical models.
“As mathematical aids,” says Mach, […] “spaces of more than three dimensions may be used” […] “but it is not necessary to regard them on this account as anything more than mental artifices.” Yes, other dimensions may exist– who can prove that they don’t?– but, says Mach, perhaps with a forbearing smile… “I will not be ashamed of being the last to believe in them.”
Perhaps the most egregious drawback to an overly mathematical approach to science is that, once physicists have grown attached to a mathematical description of a situation and have begun building explanatory theories around and on top of the math, they are far from inclined to abandon the math even after it has proven to be an inadequate description of the situation in light of new data. Instead of starting over, they will more likely attempt to tweak the pre-accepted equations until they can be made to adequately incorporate the new data. Over time– as physicists attempt to keep the old equations relevant to the ever-expanding, ever-changing data– what started out as a simple mathematical description can become extremely complex.
The reason for any relevance of Math to the physical world at all is that Nature is full of repeating patterns. Mathematics helps us to recognize, talk about, and explore such patterns, and is thus, obviously vital to physics. But one should keep in mind that mathematics never explains; it merely describes.
Of course, physicists of the modern era, including such early luminaries as Max Born, felt that description was enough. “The statistical interpretation of physics is the final one,” declared Born. Others, like Einstein, felt that the most fundamental purpose of science was to discover and understand the causes behind the effects. For Einstein, predicting outcomes is not enough; scientists must strive to answer the question, Why?
Thinkers such as Einstein and Mach felt that the general goal of Science is the gradual replacement of what appear to be isolated and myriad experiences with unifying and underlying principles which can be passed down to future generations as Natural law.
The simpler the laws or patterns we discern in Nature, the more satisfied we are with our explanations of events. “We regard a phenomenon as explained,” wrote Mach, “when we discover in it known, simpler phenomena.” Such is basically what Science does, or at least attempts. It is also why there can never be an end to Science. Every situation holds out the possibility that simpler situations exist behind it.
In Born’s day as in our own, physicists learn to predict likely outcomes only after repeating an experiment many times. They then can retroactively create and assign probability-equations describing the situational outcomes within some acceptable margin-of-error. The problem with this method, for thinkers such as Einstein, is that the arrived-at equations are sometimes impossible to interpret as actual, physical behavior.
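The procedure described here, repeated trials yielding an observed frequency plus a margin-of-error, can be sketched in a few lines (an illustrative simulation with a made-up “experiment,” not any particular physical setup):

```python
import math
import random

random.seed(1)  # fixed seed so the run is reproducible

def estimate_probability(trials, p_true=0.3):
    """Repeat a two-outcome 'experiment' many times; report the observed
    frequency and a 95% margin of error (normal approximation)."""
    hits = sum(random.random() < p_true for _ in range(trials))
    p_hat = hits / trials
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / trials)
    return p_hat, margin

p_hat, margin = estimate_probability(10_000)
print(f"estimated p = {p_hat:.3f} ± {margin:.3f}")
```

Note that the resulting number predicts frequencies admirably while saying nothing at all about the mechanism producing each individual outcome, which is precisely Einstein’s complaint.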
Of course, in much of the day-to-day work of physics, the rational, physical explanation of an event is irrelevant– what matters is that the situation is predictable and manipulatable. What a particular Uranium atom is precisely up to during nuclear fission is, in a practical sense, irrelevant, as long as the nuclear power plant keeps running as predicted.
The twentieth century’s predilection for complex mathematics, along with its Quantity-based worldview, culminated in that century’s fascination with the trajectories and wild lifestyles of purported Force-carrying mini-particles.
Views on the nature of Force hardened into the following three principles during the 1900s:
1) Force does not arrive instantaneously, but travels at a finite speed
2) Force does NOT require the presence of the Ether to propagate
3) Force propagates, or is conveyed, by tiny Force-carrying particles
In spite of claiming that the Ether has become a superfluous concept, many physicists today cannot completely accept that Space is a mere void. Several generally accepted theories depend upon either the “curvature” of Space or on the ability of Space to act as the fecund soil from which different species from the Quantum Zoo pop into existence (or at least, “virtual” existence).
Many would credit Einstein with finally killing off the idea of the Ether– and yet he proposed that Space, itself, has properties. For example, Einstein claimed that Space bends toward Matter– with the bending becoming especially noticeable around large objects like planets.
But the question seems naturally enough to arise: How is a matter-affecting Space-with-properties so different from an Ether?
Also, whether they realize it or not, the physicists who scoff at Newton’s supposedly naive view of instantaneously acting Forces are sometimes the same physicists claiming to believe in the spooky “action at a distance” which some “entangled” Quantum particles are supposed to exhibit– a Force-propagation requiring no elapsed time at all. And neither Quantum Entanglement nor the so-called Quantum Leaps of Electrons possess the continuity and duration required by a time-occupying, Force-propagating event.
The simple fact of the matter is that Space, contrary to statements made otherwise, is not actually treated as a void by modern physicists.
The way forward in any situation is to first acknowledge the true state of affairs– not how it used to be, nor how you wish it would be, but as it is. In the case of Space, we should acknowledge that we cannot state with certainty that Space is a void. In fact, we treat Space, not as a Nothing, but as a Something possessing properties. In other words, by the back door, the Ether has returned. The invention of Force-carrying particles to replace the Ether is thus no longer a necessity.
The idea of Force seems like such a given to most people today– including almost all scientists– that they forget, or perhaps were never taught, that the concept of Force has not always been assumed.
The physics of Rene Descartes, for instance, did not need any mysterious “Forces.” Descartes considered Space to be filled completely, without gaps or voids, with a mixture of Ethereal and material bodies. All bodies in Descartes’ Universe are connected on all sides with other bodies. Both types of bodies, Ethereal and material, transfer movement (propagate Force) in a simple mechanical game of “pass-it-on,” as if all of Space were filled with dominoes.
Neither did Heinrich Hertz‘s physics rely on the concept of “Force.” Hertz was convinced that masses are only affected by other masses. Whenever we see or otherwise experience what appears to us as some “Force” acting upon some object, what we are actually and always truly witnessing is the contact of mass with mass. Where we do not discern the affecting masses, they are simply hidden to our perception.
Both Einstein and Gustav Kirchhoff thought that the idea of “Force” should be expunged from physics. Einstein, as pointed out previously, felt that objects are caused to move toward each other, not by the Force we call “Gravity,” but by the distortion of Space caused by the masses of the objects. Also, he believed the Force we call “Light” travels as Photons, not as waves of Force.
Of course the majority of physicists do believe in Force, and almost all of those believe that Matter is required to act as the originator of Force and/or as the carrier of Force.
Perhaps surprisingly to some, Isaac Newton did NOT believe that Matter was required for the propagation of Force. He felt that some Forces, such as Gravity, could act instantaneously between objects even over vast distances, no Force-carrying particles required.
Michael Faraday contended that Matter, itself, was created where Lines Of Force crossed. For Faraday, Force-Fields did not emanate from Matter, Matter was created from Force-Fields.
Faraday also claimed that what we perceive as electricity is the movement through a conducting body of such Lines Of Force. He rejected the notion that there exists both a positive and a negative version of electricity. Instead, the so-called positive and negative sides of a current are actually just different ends of the same Lines Of Force; one end could be said to be moving inward and the other outward.
Most men who had thought about Electricity before Faraday had considered it to be composed of two subtle fluids, one positive and one negative. During the 1700s, scientific thinkers such as Charles-Augustin de Coulomb imagined that electric Charge was an Energy which built up when the two different types of electric fluid got out of balance in a vicinity. This excess of a particular fluid could produce either a positive or a negative Charge, depending upon which electrical fluid was in excess.
Faraday instead believed that a stress can be built up in a material if that material is not particularly good at conducting electrical Lines Of Force. This stress is what we call “Charge.” Faraday imagined that the molecules in a not-particularly-conductive material can become polarized by an electric current attempting to pass through, with part of each molecule becoming relatively positive and part relatively negative.
One problem with Faraday’s theory is that his talk of polarized molecules brings us right back to the question of what MAKES something positive or negative. Be that as it may, Faraday said that the greater the molecular polarization, the greater the strain on the material becomes. The molecules will exist in this state of tension until the pent-up strain, or “Charge,” can be released (dis-charged). For Faraday, the produced current or spark is the breakdown of this built-up stress.
Because Faraday rejected the two-fluid theory of electricity, he also rejected the idea that there could be two different types of Charge. Instead, he asserted that there exists only one type of Charge. This is why charges always come in pairs. There is no such thing, said Faraday, as an isolated, or “absolute,” positive or negative Charge. What some claim to be two different sorts of Charge are actually just differences of degree, not of kind.
Maxwell held a very similar view to Faraday’s, only Maxwell did not think the build-up of stress was due to polarization. He thought the molecules in a material under electric stress become physically distorted when the electrical current flowing into a material is greater than the current flowing out. These distorted molecules press against each other, creating an actual physical pressure or strain. The worse the distortion grows, the greater the “Charge” becomes. Eventually, when the distortion grows too much to bear, the situation snaps back to a less physically stressful one for the vicinity, and the release is viewed as a burst of electric current.
Hendrik Lorentz gave us the basis of our modern view of electricity. Lorentz went back to the view that there do indeed exist two different types of electricity (namely, positive and negative), and that both types are contained and carried-about by extremely tiny particles. He claimed that Charge is due not to excessive Subtle Fluid build-up, nor to polarizational strain, nor to induced physical distortions– but to the presence of too many particles of the same type (positive or negative) of electricity in a vicinity.
No one seems to have noticed that Lorentz’s tiny electricity-carrying particle is in some ways actually a setback to scientific understanding, stranding us farther away from understanding what, fundamentally, “Charge” is. At least with Faraday and Maxwell’s notions of polarization or physical strain, Charge can be put in context of physical conditions with which we are, at least in kind, familiar. But to simply inform us that Charge is an agglomeration of Force-carrying particles tells us nothing about the nature of Charge. Is it some kind of Ethereal fire? Is it a certain constellation of super-tiny grains of matter? Or is Charge simply a special type of subatomic mobility? It’s shocking how complacent we remain in our ignorance of Charge.
Nevertheless, Lorentz’s view of Charge came to dominate the theories of Electricity; indeed, no alternative theory is even seriously considered today.
In terms of worldview, Lorentz’s victorious, particle-centered interpretation of the Force we call electricity has had the powerful and lasting effect of shifting physics away from the approach of thinkers like Faraday and Maxwell, whose worldview put considerations of quality (such as polarization and strain) front-and-center, to the 20th century approach, which focused on measurements of quantity. This paradigm shift in scientific thinking, which passed unnoticed, resulted in a century of science overly focused on quantity– of very tiny particles, in particular– and not enough concerned with changes in quality. When the dynamical way of thinking was lost, the isolated Thing grew in importance, and the integrated System, or Ensemble, took a backseat.
Physicists, operating under a Thing-centered worldview, became practically obsessed with finding Force-carrying particles during the 1900s. As the century progressed, new information about the atomic world caused physicists to have to rework and greatly complicate their particle-based theories in order to salvage them. The Suitcase Atom became more and more stuffed with Things, and less and less full of good sense.
Physicists were particularly led astray by the notion, seemingly taken for granted, that the extremely tiny Force-carrying particles which they were inventing had to make their homes inside atoms, and furthermore, that even while “inside” atoms, particles maintained their integrity and specific characteristics. From these assumed– though quite mistaken– conjectures, a myriad of ridiculous, complex, and convoluted theories have emerged in the world of theoretical atomic physics.
About the time scientists were beginning to agree that, yeah, maybe atoms really do exist, experimenters discovered that they could cause atoms to release tiny somethings which could be deflected away from the negatively charged plate of an electric field. Physicist J.J. Thomson is given credit for being the first to use a vacuum and some very strong magnetic and electric fields to discover this phenomenon. He and his supporters considered this something a “particle.”
Still operating under the delusion that electricity comes in the flavors of “positive” and “negative,” Thomson and other scientists assumed that the bias of the purported “particle” against the negative plate of the electric field was proof that they had discovered the negative version of Lorentz’s two types of electricity-carrying particles.
Everyone would eventually agree to call this supposed negative particle the “Electron.” Its partner, the particle assumed to exist and carry-forth positive-electricity, was never discovered, although there are many people today who still claim that positively charged counterparts to Electrons exist, and they call these critters “Positrons.”
To complicate matters, many physicists and engineers operate under the working assumption that the lack of an Electron in a certain vicinity can also act the same as the presence of a positive Charge. Logically, this real-world emendation to theory serves to support the belief that there is only one type of Charge, and what is currently viewed as a “negative” or “positive” Charge is actually the relative presence or absence of the Energy which we sometimes call “electricity.”
No one seems to have noticed the fact that the interpretation of an Electron “Hole” as a positive Charge goes against the grain of all modern physics. Under orthodox modern theory, every Force rides upon its Force-carrier. This theory is undermined terribly if one must add an exception for something that has all the characteristics of a Force and yet exists without a Force-carrier.
Besides encouraging the belief in two different types of electricity, another problematic development with the Electron was the way Thomson arrived at the value of its Charge, his method not being what one might call stringently scientific. Thomson assumed that the particle he had found was the smallest carrier of Charge in the Universe (under what pretense, I cannot say); he then looked-up the smallest amount of Charge ever discovered by humankind, and from there simply assigned this amount of Charge to his particle.
Thomson suggested that his negatively charged particle was part of a population of charged particles existing INSIDE an atom, some of which (like the particle he believed he had found) were negatively charged, while others were positively charged. As physicists were still operating under the assumption (as they do today) that like Charges repel each other, Thomson’s suggestion that opposite charges were also present inside the atom seemed to neatly explain how an atom could stay together in spite of the repulsion of all the negative Charges he had decided to put there. The presence of positive charges would serve to hold the negatively charged Electrons to the atom.
Thomson’s placing of not one but two particles inside the atom was perhaps the crucial turning point in the worldview of 20th century physics. Henceforward, any Force necessitated to make a novel theory of atomic physics work could be assigned to a Force-carrying particle and inserted inside the atom. The developing Suitcase Atom of the twentieth century would carry around with it everything it needed for its exotic travels through the zany world being created by Quantum Theorists.
In his book, Representing Electrons, Theodoros Arabatzis wrote that, at the beginning of the twentieth century, “more and more experimental situations came to be interpreted as the observable manifestations of the presence and action of Electrons.” These observable manifestations included, among other things: cathode rays, Beta-rays, the Zeeman effect, the photo-electric effect, and cloud chamber tracks. However, adds Arabatzis astutely, “the measurement of the properties associated with an unobservable entity does not imply that this entity exists.”
For example, in the late 1700s many scientists were convinced that they had discovered the supposed Force-carrying particle for Heat. Turns out, no such particle exists (or at least that’s the current theory), but that did not stop some of the sharpest scientific minds of the day from “measuring” and “weighing” what they were convinced was Caloric (the name they gave to Heat’s Force-carrier).
Not so dissimilarly, scientists in the late 1800s/early 1900s were convinced that they were “measuring” and “weighing” Electrons. Truth is, different experimenters were coming up with very different values for Electron-mass, as well as for Electron-charge. Scientists proved more than willing to chalk up the huge differences between their measurements to “margin of error” in order to maintain their belief in the Electron. Experimenters were “finding” big Electrons, small Electrons, powerful Electrons, weak Electrons.
Assuredly, Experimenter A was willing to cut Experimenter B a lot of slack if it meant he could interpret Experimenter B’s work as supporting his own findings. So acceptable margins-of-error were ballooned to quite generous proportions.
Nevertheless, with no direct observations of Electrons being made, the “Electron” which began to settle-out of this mixture of results was no more than a “statistical mean”– a thing with no more real existence, as Arabatzis put it, than the “average man” can be said to have a real existence.
Eventually, the conflicting data concerning the purported Electron could no longer be discounted, smoothed-over, or ignored. Based on the acceptance of results gathered from experiments utilizing evermore sophisticated technologies, new scientific theories about the atom and the Electron developed, theories which would all but abrogate the “discovery” of the Electron as it was “measured” and “weighed” in the early days. In fact, our conception of the little mythical electric beastie has changed so much over the last hundred years that it is hardly the same entity at all as J.J. Thomson thought he found in the late 1800s. As Arabatzis stated frankly, “there would be no legitimate sense in which all the scientists who used the term ‘Electron’ between, say, 1895 and 1925 were talking about the same thing.”
One area of constant upheaval in Electron theory has been Electron-mass. The entire gamut of possible representations of Electron-mass has been run-through over the decades, from the representation of Electrons as actual, tiny particles with real, matter-derived mass– to matterless Electrons existing merely as points of statistical construct within clouds of Charge. About the only thing one won’t hear about the Electron is someone admitting that we actually don’t know what it is– or if it is (in the sense of being a real particle of some relatively lasting duration of existence).
In 1881, Thomson introduced the notion that, at high enough velocities, the Mass of an Electron is entirely Electromagnetically derived. Thomson’s idea as to what causes the phenomenon of Mass was just one in a long line of proposals…
Some thinkers, such as Descartes and Kirchhoff, contemplated the possibility that pieces of Matter are vortices in the Ether. What we refer to as “Mass” could be the result of such Ether vortices “sucking” other vortices toward themselves.
Newton had a very interesting early interpretation of Mass. In Newton’s view, wherever Matter is present, it partially displaces the field of Ether permeating Space. Thus, the Ether is less dense inside Matter than outside of it. Due to this pressure differential (and we assume here some sort of Ether “pressure”), Ether is continually flowing from the area of greater density (outside the Matter) toward the area of lesser density (inside the Matter). This flow of Ether influences other objects in the Ether field to move toward the Matter, drawn by the Ether-currents. This flow we call “Gravity,” and its intensity determines the Mass we assign to an object.
It is held by many that we’ve moved past the old Newtonian Gravity and now have Relativistic Gravity. However, there is nothing whatsoever in Newton’s view of Gravity that is at variance with the outcomes of Relativity. Einstein conjectured that Space, itself, bends toward Matter; Newton said it’s the Ether moving toward Matter. Either way, objects are influenced one toward another.
One thing some people will point-to when they want to dismiss Newtonian Gravity is that Newton contended that Light moving past a heavy object will bend away from that object, whereas Relativity predicts the Light will move toward the object– and of course, Relativity appears to have turned out to be right (they’ll mention measurements made during a solar eclipse sometime in the late nineteen-teens).
In truth, Newton was wrong in the interpretation of his own theory, as his theory of Gravity has no problem whatsoever supporting the view that Light would bend toward a heavy object like the Sun on its way past, for Light could be affected by the Ether similar to how Matter is.
Many of us may think (or by not thinking, assume) that Gravity is some intrinsic quality of Matter, but many theoretical physicists throughout history would disagree with this view. Newton wrote in a letter to an acquaintance that Gravity is not at all “innate, inherent, and essential to Matter” (apparently he considered Matter’s cause of Ether differentials as a secondary effect). For Newton, Matter innately or absolutely possessed only: Inertia (aka Mass), Hardness (this possibly could be viewed as Density), and Extension (the three Dimensions). Others, such as Descartes and Spinoza, thought the only quality that Matter absolutely possessed was Extension (and Descartes barely thought this, for as we know, the “atoms” of his “Matter” were actually only vortices in the Ether).
Ernst Mach agreed with Newton and Descartes that Gravity is not an intrinsic property of Matter. But Mach also contended that what we label “Mass” simply does not exist for one object in isolation. Mass is only created by the “mutual relationship” between two or more objects. To speak of “Mass” is merely to speak of the changes in velocity that each object generates in the other.
Said Mach: “It is possible to give only one rational definition of Mass, and that is a purely dynamical definition.” Sadly, such dynamic interpretations of the Universe have been on the wane since the late 19th century, when focus shifted away from Quality measurements of relationships toward Quantity measurements of isolated Things.
Other thinkers have imagined a different sort of Ether behind Gravity. In this version, the Ether is not simply “displaced” by Matter, but there occurs a mutual repulsion between the two types of Being (Matter and Ether). The net effect is that the Ether is constantly pushing Matter away from itself and toward other Matter.
On the other hand, some have contended that the Ether is constantly repelling itself. And it is only as a sort of side-effect that Matter winds-up being pushed together under the influence of the constant pressure of Ether’s self-repulsion.
I find especially interesting Georges-Louis Le Sage‘s theory of Gravity. In his view, we can imagine the Ether as streaming equally through the Universe in every direction. To some extent, the Ether will even stream through Matter– but not without some obstruction. If there were only one object around, the streaming Ether would impact the object equally from all sides, and it would not move since no direction would be favored. However, a second introduced object will act as a shield to the first, partially shielding it from one direction of Ether streams. Simultaneously, the first object will be acting as a shield to the second.
Thus, the two objects will be impacted less by Ether from the sides facing each other. This unequal bombardment of Ether will then cause the objects to be pushed toward each other by the Ether streaming into each object from the other directions.
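Le Sage’s shielding picture even reproduces the familiar inverse-square behavior, at least for bodies small compared to their separation. Here is a minimal numeric sketch (Python is my own choice of illustration, not anything from Le Sage), which treats the blocked Ether flux as proportional to the solid angle the shielding body subtends:

```python
import math

def shadow_force(r, a=1.0):
    """Fraction of omnidirectional Ether flux blocked by a sphere of
    radius a seen from distance r -- proportional, in Le Sage's picture,
    to the net push toward the shielding body."""
    # Solid angle subtended by a sphere of radius a at distance r,
    # as a fraction of the full sphere (4*pi steradians).
    return (1 - math.sqrt(1 - (a / r) ** 2)) / 2

# Doubling the separation should roughly quarter the push (inverse square),
# so long as the bodies are small compared to the distance between them.
f_near = shadow_force(10.0)
f_far = shadow_force(20.0)
print(round(f_far / f_near, 3))   # → 0.25
```

That the shadow model falls out as inverse-square is, of course, one reason the theory attracted serious attention in its day.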
Wilhelm Wien proposed that a moving charged particle would have greater Mass due to the self-induction of electromagnetic Lines Of Force as it moves. Wien based his contention on the accepted notion that moving Charges create electric current, and that this Charge-created electric current in turn creates a magnetic field; this electric-current-created magnetic field then induces another electric current, which induces another magnetic field– and so on… Wien asserted that such electromagnetic fields created by the moving Charge, itself, would slow the speed of the particle carrying the Charge. This phenomenon works-out to be the same, practically speaking, as if the particle just got heavier… its effective mass would increase.
Experiments conducted by Thomson proved to his satisfaction that, at a high enough velocity, all of what we might call the “regular” mass of the Electron disappears, and the only Mass left is the resistance to movement (inertia) created by the tiny charged particle’s self-induction of Electromagnetic effects.
Both Joseph Larmor and Max Abraham took such thinking even farther, contending that ALL Mass ultimately stems from the inertia caused by Charge. This belief is based on the assumption that Charge is a necessary constituent of all Matter. Thus, Matter does not exist apart from Electricity, and you won’t find Electricity without Matter (which, to me, hints at some profound secret about the Nature of the Universe… just sayin’).
Walter Kaufmann pointed-out that, if the Electron’s Mass varies according to its velocity but its Charge remains fixed (remaining that Charge which is the smallest unit of Charge in the Universe), then the Charge-To-Mass ratio of the Electron cannot be claimed to remain constant.
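Under the standard relativistic account (which, to be clear, is the orthodox view, not necessarily mine), the effective mass grows with the Lorentz factor while the Charge stays fixed, so the measured Charge-To-Mass ratio should fall off with velocity– just the sort of variation Kaufmann’s deflection experiments turned up. A rough sketch, using the modern accepted constants:

```python
import math

E_CHARGE = 1.602176634e-19   # Coulombs; modern accepted elementary Charge
M_REST   = 9.1093837015e-31  # kg; modern accepted Electron rest mass

def charge_to_mass(beta):
    """Measured Charge-to-Mass ratio (C/kg) for an Electron moving at
    velocity beta = v/c, assuming the orthodox relativistic picture in
    which effective mass grows by the Lorentz factor gamma."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return E_CHARGE / (gamma * M_REST)

# The "constant" e/m shrinks noticeably as the Electron speeds up:
for beta in (0.1, 0.5, 0.9):
    print(f"v = {beta:.1f}c  ->  e/m = {charge_to_mass(beta):.3e} C/kg")
```

Which is precisely Kaufmann’s point: a ratio sold as a fixed property of the particle turns out to depend on how fast the particle happens to be moving.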
Besides the difficulty pertaining to the assignment of Mass to the Electron, there is also the problem of Electron-Charge, and the gargantuan ramifications of placing this Force inside the Suitcase Atom…
Not long after Thomson proposed his model of an atom comprised of negative and positive Charges, Ernest Rutherford performed some experiments which he interpreted as indicating that all the imagined “positive” Charge of the atom was concentrated at the atom’s center, not distributed throughout the atom in such a way as to balance the negative Charges of the supposed Electrons.
Rutherford had been able to cause atoms to shoot out what he called “Alpha Rays.” These Alpha Rays were found to act as if they contained a positive Charge when they passed through a magnetic field. Additionally, when these Alpha Rays were aimed at other Matter, some were found to bounce back– which greatly surprised Rutherford (although, I personally find the level of his surprise a bit of an overreaction).
As is typically the case in Science, new technology and methodology– and not new theories– are what really drive progress; the theories come later (with exceptions, of course, to prove the rule). Faced with his results and his interpretations of results, Rutherford had now to come up with a new atomic theory to fit the data derived from his more advanced experiments.
Rutherford came to the conclusion that his Alpha Rays were composed of tiny particles of positive Charge. He called these new Force-carrying particles, “Protons.” In his model of the atom, he located the positively charged Protons at the atom’s center. To explain why the Electrons retained mobility and did not just clump to the center, drawn thither by the supposed opposite charge of the Protons, Rutherford put the Electrons in orbit around the Protons. This gave the atom a decidedly solar system look. It also served as the beginning of an exponential complication of the Suitcase Atom due to the need for Electrons to keep in constant motion in some sort of stable orbiting pattern.
One early problem of revolving Electrons involved a pre-existing theory asserting that a revolving charged particle should steadily radiate Energy as it moves, and that the larger the orbit, the more “Energy” the particle is supposed to require to maintain its orbit. Physicists during the early 20th century were therefore perplexed as to why the Electron did not spiral into the center of an atom as it lost the Energy needed to maintain its orbit. An atom suffering from collapsing Electron-orbits would not be long for this world– which in turn would mean that this world would not be long for the atom.
As an aside, I think it worth mentioning here that I believe it to be a not-quite correct notion of physicists that an object in orbit is in a constant state of “acceleration” even if the object never changes speed. This problem is one of definitions. The definition we have officially given “velocity” informs us that if an object’s direction changes, its velocity changes. And, also by definition, any change in velocity is viewed as acceleration.
It would be better if we used a different term when speaking only of a change in direction, with no change in speed. Maybe “transdirection?”
Who knows? Could be that when we start being less messy with our words, we will discover new facets of old processes that we thought we knew.
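For what it’s worth, the definitional point is easy to make concrete. In uniform circular motion the speed never changes, yet the velocity vector does– and that alone is what physicists count as “acceleration.” A small sketch (the function names are mine, purely illustrative):

```python
import math

def velocity(t, r=1.0, omega=1.0):
    """Velocity vector of a point moving in a circle of radius r
    at constant angular rate omega (its speed is r*omega, always)."""
    return (-r * omega * math.sin(omega * t),
             r * omega * math.cos(omega * t))

def speed(t):
    vx, vy = velocity(t)
    return math.hypot(vx, vy)

# Speed is identical at every instant...
print(round(speed(0.0), 6), round(speed(2.0), 6))   # → 1.0 1.0
# ...but the velocity *vector* keeps changing direction, which is
# why, by the official definitions, the motion counts as accelerated.
print(velocity(0.0) == velocity(2.0))               # → False
```

The orbiting object is “accelerating” only in the definitional sense that its direction of travel changes– which is exactly the distinction a word like “transdirection” would make explicit.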
The demise of the Universe due to collapsing atoms was not a problem for everyone. Niels Bohr was one of those early Quantum physicists who refused to give up on the emerging Suitcase Atom just because it was merely impossible. Bohr continued working on the Quantum model of the atom, gleefully tossing aside any established theories that got in his way. Ultimately, his model would be, as Arabatzis frankly put it, “without any mechanical foundation.” In his physics, Bohr justified the abandonment of– well, physics– by declaring that there are many properties of bodies impossible to explain if one assumes normal mechanistic behavior within the atom. In other words, Bohr was not going to give-in to the Universe– the Universe would have to give-in to Bohr.
Bohr even went so far as to introduce a non-mechanical “Force” which would govern the interactions inside atoms. He called this (magical?) Force, “Zwang.” It would take a couple of years, and the suggestion of Electron “Spin,” before he would let go of this outlandish idea.
Interestingly, this was just the beginning of new “Forces” which physicists would invent ad hoc during the twentieth century to support their theories; come to think of it– Bohr doesn’t get the credit he deserves for this invention… Before there was Strong Force, before there was Weak Force, even before there was THE Force– there was Zwang Force!
To get around the problem of deterioriating Electron orbits, Bohr postulated that Electrons did not collapse inward because they are NOT steadily radiating Energy. He asserted that Energy comes in packets which can only be so small. The smallest packet of Energy, he called a “quantum.” If an Electron did not have a quantum’s worth of Energy to release, it could not radiate Energy, and thus, could not begin its downward spiral through the lower-Energy orbits.
Also, according to the thinking at the time, there were only certain, very specifically allowed orbits which an Electron could have. An Electron could not occupy a space in between these orbits even for a millisecond. Thus, to get from one orbit to another, an Electron would have to “jump” instantaneously. The Electron would be in one orbit and then—wham!—it would (rather magically) appear in another. This is the infamous “Quantum Leap.”
One may wonder… if Electrons are not losing Energy as they revolve, why must they change orbits at all?
Experimenters discovered that an atom bombarded with Energy will often radiate Energy back out. So physicists had to come up with a theory to explain this. The consensus opinion that emerged was that the Energy coming into the atom was being put to use by the atom to increase the size of one or more Electron orbits.
An Electron was said to “absorb” an input of Energy and to use it to obtain the quanta necessary to upsize orbits. If, however, an Electron moved up an orbit and that new orbit proved “unstable,” the Electron would drop back to a lower-Energy orbit, releasing its now-excess Energy as a quantum packet. This released quantum of Energy would then be picked up by the instruments of observers as radiation emitted from the bombarded atom.
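The orthodox bookkeeping here can be sketched with Bohr’s own formula for hydrogen (taking the standard values on faith for illustration’s sake): each allowed orbit carries an Energy of −13.6 eV divided by the orbit number squared, and the quantum released in a drop between orbits shows up at a definite wavelength:

```python
RYDBERG_EV = 13.605693  # eV; hydrogen ground-state binding energy
EV_NM      = 1239.842   # wavelength (nm) of a 1 eV photon

def orbit_energy(n):
    """Energy of the n-th allowed Bohr orbit of hydrogen, in eV."""
    return -RYDBERG_EV / n ** 2

def emitted_wavelength(n_hi, n_lo):
    """Wavelength (nm) of the quantum released when an Electron
    drops from orbit n_hi down to orbit n_lo."""
    photon_ev = orbit_energy(n_hi) - orbit_energy(n_lo)
    return EV_NM / photon_ev

# The 3 -> 2 drop lands on hydrogen's well-known red spectral line:
print(round(emitted_wavelength(3, 2), 1))   # → 656.1
```

It was exactly this sort of match between calculated drops and observed spectral lines that made Bohr’s otherwise mechanically unfounded orbits so persuasive to his contemporaries.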
Suspiciously conveniently, the magnitude of the smallest unit of Rutherford’s purported positive Charges turned out to be an exact match for the magnitude of the smallest negative Charge (that said to be carried by the Electron). Furthermore, researchers discovered that this smallest unit of positive Charge was always associated with a consistent amount of mass. One might have assumed that the mass associated with the smallest positive Charge would be about the same as the mass associated with the smallest negative Charge (thought to be carried by the Electron).
But as it turned out, the mass found to be associated with the smallest positive Charge (to be called the Proton) was nearly two thousand times heavier than the mass of the Electron. And this was not the only surprise thrown at theorists. As heavy as the Proton was, it was nevertheless found to be too light to account for the weight of the atom (it being maintained that Electrons are so lightweight that their contribution to atomic mass is near-negligible). In fact, the mass of the Proton appeared to be only about half the weight of the atom.
Furthermore, it was immediately decided that for every Proton which exists inside an atom (the continued, individualized existence of Protons inside atoms was not questioned), there must exist exactly one Electron within the same atom, since atoms are normally Charge-neutral (no one seems to have suggested that maybe there were just zero active Charges inside a typically functioning atom). This meant that the typical atom could not contain any more Protons than Electrons, nor any more Electrons than Protons. This assertion also ruled out the possibility of adding a few thousand Electrons to the atom to make up for the Proton’s missing mass. Plus, physicists, using information derived from another field of science, thought they were getting a pretty good idea how many Electrons were carried around inside each Suitcase Atom…
During the time the Electron was being “invented,” there was great interplay between the experimental results being collected by physicists and the struggle of chemists to complete and understand the (still relatively new) periodic table. The valency of elements (how strongly they tended to react with other substances) was already a well-established concept… The character of the chemist’s valency was simply translated by the physicist into talk of the behavior and number of Electrons (specifically, “Outer” Electrons.) Also, studies of Atomic Spectra suggested justifications for assigning exact Electron quantities to atoms– again, especially the Outer Electrons.
In the early 20th century, when Niels Bohr was coming-up with one of his successive schemes of atomic structure, he used as his starting-point Rutherford’s solar system model. Bohr figured it sure would be neat if the number of Electrons stuffed in a Suitcase Atom had something to do with an atom’s chemical properties.
Without this sort of tie-in, the idea of Electron-orbits really had little connection to the real world and to the chemistry of the day. But by tying Electron-orbits to chemical properties, Bohr could make these conjectured orbits appear more realistic and, thus, more acceptable.
So, Bohr arbitrarily matched the number of Electrons in each atom’s outermost shell to each atom’s valency-level. Because chemists had found that elements exhibited one of eight possible reactivity levels, Bohr hit upon the scheme that the outermost shell of Electrons possessed by nearly all atoms could only hold between one and eight Electrons. These outermost-shell Electrons were to be the sole determinants of an element’s chemical properties. For example, an element in the fourth category of valencies was said to have four Electrons in the outershell of its atoms.
These outermost-shell Electrons came to be called Valence Electrons, and they became equated with the chemical characteristics of an element… no other part of the atom much mattering anymore, chemically speaking.
But the truth is, as Arabatzis astutely observes, Bohr’s electron-configuration scheme “could not be obtained solely from the principles of his theory.” For instance, if chemists had come-up with ten different valency-levels instead of eight, Bohr would have placed up to ten Valency Electrons in the outershells of atoms. If five valency categories had been created, Bohr’s atoms would have held between one and five Valency Electrons. There was nothing in his theory that determined the number. It was a matter of convenience and wishful thinking.
This arbitrariness is also why it is more than suspect that, in the current orthodox theory of the Suitcase Atom, only the outer shell of Electrons has any relevance to the rest of the Universe at all.
Once it was thought that the balancing positive Charge in the atom was provided by particles accounting for only about half an atom’s mass, it didn’t take a genius (actually, it may have been, incidentally, a genius) to propose that the other half of the atom’s mass was made-up of neutral material. This proposition tied a nice bow on things. Theorists decided, and I think also somewhat arbitrarily at the time, that this neutral part of the atom must occur in little pieces which would just happen to be the exact same mass as the Proton, and that there would be exactly one of these present in an atom for every Proton. These Protons-without-charge would be called “Neutrons.”
At this point, if all the rest is taken as granted, packing the Suitcase Atom with particles becomes relatively simple. If the atom is assumed to be composed of three types of materials (the Electron, the Proton, and the Neutron), which all balance-out each other perfectly either by Charge (Proton and Electron) or by mass (Proton and Neutron), then if we know the number of one of them, we can know the number of the others.
There is still, of course, the minor problem that all atoms of the supposed same element actually do NOT weigh the same. How can this be?
Any mass irregularities could only come from the addition or subtraction of one or more of the three particles assigned to the Suitcase Atom (albeit, the number of particle-types placed inside the atom will grow substantially over the years). Of course, a change in either Protons or Electrons would throw off the Charge-balance of the atom; furthermore, it was thought that a change in Electrons, in particular, would muck with the atom’s chemical properties and periodic table position.
The only particle left to account for differences in mass between different atoms of the same element was obviously the supposed Neutron!
And so it went… whenever atoms were found with weights differing from the assigned ideal, the mass differential was attributed to Neutron naughtiness. Atoms with more or fewer Neutrons than they are supposed to have (and it does usually tend to be MORE, I think, which is probably a clue to some better theory) are called Isotopes.
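The bookkeeping really is that simple. Here is a toy sketch (the function name is mine, purely illustrative) of the orthodox three-particle accounting scheme, in which knowing the atomic number and the mass number fixes the count of everything in the Suitcase:

```python
def suitcase_contents(atomic_number, mass_number):
    """Standard particle bookkeeping for an atom: Protons equal the
    atomic number, Electrons match the Protons (Charge neutrality),
    and Neutrons absorb whatever mass is left over."""
    protons   = atomic_number
    electrons = atomic_number          # one per Proton, by assumption
    neutrons  = mass_number - atomic_number
    return protons, neutrons, electrons

# Ordinary carbon-12: Protons supply about half the mass, as noted above.
print(suitcase_contents(6, 12))   # → (6, 6, 6)
# Carbon-14, an Isotope: the two extra mass units get charged to Neutrons.
print(suitcase_contents(6, 14))   # → (6, 8, 6)
```

Every mass discrepancy, in other words, gets booked to the one particle whose count is allowed to vary.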
All this sounds so very, very nice, but there is nevertheless a quite compelling reason to suspect that Protons cannot possibly be grouped together in the Nucleus… Protons, if they are of the same Charge, should actually repel each other, not bundle together at the center of the atom.
To get around this little hiccup, theorists came up with, to my mind, one of the earliest, most ridiculous, and most harmful theories of atomic physics… the theory of the “Strong Force.” The Strong Force is supposed to be some (literally otherworldly) power so strong that it overrides the mutual repulsion of the Protons said to exist at the heart of every atom. Because the Strong Force is such a ludicrous idea, theorists had no choice but to keep it sealed-off and away from the rest of the Universe; it would and could exist only inside the holy-of-holies, the Nucleus of the atom. The Strong Force was not allowed to have any effects on anything outside the nucleus, even within the same atom (hence, why I term it “otherworldly”).
Besides the problem of Proton-Proton repulsion occurring at the center of an atom, there is another built-in antagonism in the Suitcase Atom… There is what the atom is supposed to crave energetically, and what the atom is supposed to crave electrically.
Electrically speaking, the typical atom is perfectly balanced, with the number of Electrons and the number of Protons being equal. It does not matter, electrically, how many Electrons reside in an atom’s outershell, as long as the total number of Electrons in all shells is equal to the total number of Protons in the nucleus.
However, energetically, theorists tell us that an atom with an unfilled outershell (containing fewer than eight Electrons, typically) will– in spite of being perfectly electrically balanced– crave more Electrons because it doesn’t “like” having an unfilled outershell. Due to this Energy-craving, an atom with fewer than eight Valency Electrons will attempt to combine with other atoms also possessing less-than-full outershells. An atom cannot simply snag more isolated Electrons to fill its outershell, because it does not contain enough Protons in its Nucleus to hold them.
Thus, there exists an everlasting antagonism between many atoms’ electrical needs and their Energy-needs. No wonder the world is so insane if such schizophrenia exists at the Universe’s near-fundamental level!
Physicists have never adequately explained WHY atoms crave full shells. It is just assumed that they do because it is “Energetically favorable”– a handwaving maneuver at best.
Furthermore, it is difficult to believe that atoms –veritable sackfuls of moving, antagonistic, and sometimes leaping Charges– would actually be drawn together to form relatively stable configurations such as molecules, even given dire Energy-needs.
For one thing, the joining point between atoms making-up a molecule would occur at precisely the location where each atom’s fast-moving cloud of negative Charges would collide with each other, full of Energy and mutual repulsion, like some sort of horrible family reunion.
Furthermore, the positively charged Protons supposedly at the heart of each atom can’t be all that happy to be in such close proximity to the Protons of other atoms– especially since the (ridiculous) Proton-glue known as the Strong Force has been forbidden to act between atoms. The excuse offered to overcome the problem of the mutual repulsions of Protons belonging to different atoms is that the Electrons of the system are claimed to shield each Nucleus from the positive vibes sent-out from other Nuclei.
However, the idea is something less than convincing that negative Charges –which aren’t even strong enough to repel each other– would act as that great of a shield for the positive Charges behind them. The assumption that opposite Charges actually interfere with each other is also suspect. It is one thing to say that positive attracts negative, but quite another to say that one charge can cancel-out another. Where does the Energy of the charges then go?
Even after determining exactly how many Protons, Neutrons, and Electrons to pack into each Suitcase Atom, and how and why atoms would be drawn to come together as molecules, Quantum theorists were still a long way from being done contemplating the inner-workings of the atom.
Spectral lines (the Energy pattern emitted by an irradiated atom) also played a huge, huge role in the development of Quantum Physics. Time and again, discoveries about the spectral lines emitted by elements would blow holes in old theories and force theorists to come up with new theories in order to salvage something of the old ones.
Experimenters found that spectral lines remain consistent for each element and that no two elements exhibit the same pattern of spectral lines. Thus, spectral lines can serve the role of an elemental “fingerprint”– identifying each element based upon the pattern of its emitted spectral lines.
Physicists surmised that atoms emitted spectral lines when their Electrons switched orbits, proposing that the photons of arriving irradiation give their Energy to some of the receiving atom’s Electrons, disturbing them, and shifting several of them from one orbit-size to another. During these orbit-shifts, the Electrons would simultaneously expel Energy, also in the form of photons– thus forming the Light detected by physicists during the spectral analysis of irradiated atoms. Different orbit-changes would produce different patterns and colors of Light.
Bohr’s own work with spectral-line analysis led him to the conclusion that there must be precise gaps between the “orbits” (or alternatively, “Energy levels”) of Electrons.
Particularly, Bohr believed that only the outer, or Valence, Electrons absorb the incoming Energy from irradiation, and so it is that only Valence Electrons are caused to shift orbits, and therefore only Valence Electrons emit the photons causing the spectral lines viewed by experimenters.
After analyzing the spectra of irradiated atoms, Bohr decided that an Electron could not possibly gradually move between orbits– but had to “jump” instantaneously from one orbit to another. He came to this conclusion due to the observed sharpness of the spectral lines radiated by the Hydrogen atom. If, instead of jumping, the Electron were to gradually change position, then the frequencies it would radiate would also gradually change. What researchers actually found was that spectral emission-patterns shifted suddenly, leaving dark bands of zero Light-emission between certain frequencies. For each sudden shift in spectral emissions from one frequency to another (a “dark” band separating them), the Electron was assumed to have “leaped” over the gap between Electron-orbits as it moved from one Energy level to another.
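For reference, the formula the spectral analyzers were matching is the Rydberg formula, which hands out one discrete wavelength per pair of “Energy levels”– and nothing at all in between, which is precisely the gap-toothed pattern that prompted the “leap” interpretation. A quick sketch for Hydrogen’s visible (Balmer) lines, using the standard value of the Rydberg constant:

```python
# Rydberg formula for Hydrogen: 1/wavelength = R * (1/n_low^2 - 1/n_high^2).
# Only discrete (n_low, n_high) pairs are allowed, so only discrete
# wavelengths appear -- the dark bands are everything in between.
R = 1.0973731e7  # Rydberg constant, 1/m (standard value)

def hydrogen_line_nm(n_low, n_high):
    """Wavelength (nm) of the line for a drop from n_high down to n_low."""
    inv_wavelength = R * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1e9 / inv_wavelength

# Balmer series: drops down to n = 2 give the visible lines.
for n_high in range(3, 7):
    print(f"n={n_high} -> n=2: {hydrogen_line_nm(2, n_high):.1f} nm")
```

The first line printed (about 656 nm) is the famous red Hydrogen-alpha line; the series marches toward the violet with nothing emitted between entries.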
The gaps between allowable Electron orbits also meant that some wavelengths can NOT be radiated by a particular atom, as there would be no Electron there to release its excess energy as it dropped orbits.
The problem– the recurring problem– for theorists was that each time they thought they had determined all the possible states of an atom’s leaping Electrons, researchers would hand them the latest batch of spectrographic data– which would prove incompatible with the current model.
As Pieter Zeeman and other experimenters became better and better at reading the spectral lines emitted by atoms, they kept discovering NEW spectral lines. The young turks of Quantum Physics, being too impatient to sit on their ideas, jumped in with explanations immediately after each discovery of some new spectral manifestation. This led to what may be the longest, fastest, most desperate run of ad hoc solutions in the history of Physics. Every time some new spectral lines appeared, Quantum Physicists would have to invent new Electron energy levels to explain them, becoming more and more creative with each new invention.
I believe that if the full “fine spectra” of atoms had been measured from the beginning– or if the idea of Electron orbits had not already captured the imaginations of the day’s physicists– there is a good chance we might have chosen an alternative explanation for them. As it was, the Electron-orbit theory continued to lumber forward, accruing each new “fix” to address each new batch of incoming data.
Eventually, Max Planck’s theory –which had supported the early spectra interpretations by positing that there is a connection between the “frequency” (orbits-per-time) of Electron revolution and the “frequency” (wave-peaks per time) of atomic radiation– had to be abandoned by theorists.
After learning that William Edward Curtis had discovered that the formulas which were supposed to predict atomic spectral patterns were actually not correct, Bohr decided that the missing piece was that the then-current formulas were not including the effects of the new theory of Relativity. According to Einstein’s Relativity theory, a fast-moving object (such as the Electron, for instance) undergoes a contraction in the direction of its motion.
Since Electrons were by then assumed to be travelling in elliptical orbits (contrary to Bohr’s original suggestion of circular rings), this meant that their velocities and masses would vary –one way or another– depending on where they were in their orbits; when the orbit is closer to the atomic “Core” (Nucleus plus inner-Electrons) an Electron was assumed to pick-up speed, just as planets do when orbiting the Sun, (see: Kepler’s Second Law).
By replacing the values for the Energy and Momentum of the orbiting Electron with values which took into account Relativistic ramifications (though apparently not electromagnetic mass considerations), Bohr was able to get the answers he wanted from the spectral-analysis formulas.
As Arabatzis explains it: “by taking into account the Relativistic change of the Electron’s mass when it moves within the atom, orbits which had been, from the point of view of classical mechanics, indistinguishable in Energy were now assigned slightly different Energies. The consequence of that was a multiplication of the Energy levels of the Electron within the atom, and therefore of its possible quantum transitions between levels.”
Once Arnold Sommerfeld got hold of Bohr’s Relativity-adjustments idea, he started doing the heavy-duty math and came up with a proliferation of possible Electron Energy-levels. Now, any observed spectral pattern could be explained with a little Relativity jiggering. The problem henceforth would become one of excess. In spectral lines, there were now far more possibilities than actualities– too many Energy-level options compared to the actual number of patterns being generated.
Sommerfeld then tried to implement “selection rules” which would limit the transition-possibilities of Electrons from one orbit to another, but these were never very satisfactory. Of course, he was only doing what physicists had been doing throughout the Quantum revolution– coming up with ad hoc patches to fit the most current experimental data and save the Suitcase Atom from the final luggage carousel of history.
As the picture of the emitted spectral lines grew more detailed, there appeared to be Electrons moving in orbits which should be interfering with each other if they existed in reality. To remedy this situation, physicists decided that different Electrons could revolve at roughly the same distance from the nucleus– as long as their orbits were of different elliptical shapes. Orbit size and orbit shape became known as the First and Second Quantum Numbers.
When the addition of different orbital shapes still failed to provide enough flexibility to fit the data, a third Quantum Number was introduced. Sommerfeld, its introducer, said that the new variable represented some “hidden rotation” in the atom, and this hidden rotation possessed some “geometric significance” about which “we are quite ignorant.”
The third Quantum Number was called for a time the “Inner Quantum Number.” Alfred Lande interpreted it as the Total Angular Momentum of the atom.
Eventually, the Lande view would be rejected, and physicists would come to think of the third Quantum Number as representing the angle, or orientation, of Electron orbits. In other words, orbits of the same distance from the center of the atom and of the same shape could be differentiated further by the plane of their orbits; some orbits might be vertical, some horizontal, and some in between. This idea of Electrons being freed from a single plane of orbit is why we often speak now NOT of Electron “orbits” but of Electron “shells”—groups of orbits.
Wolfgang Pauli gets credit for pointing-out that one and only one Electron could occupy each specific orbit-type around a nucleus. This is called the Pauli Exclusion Principle.
After all this, it was discovered there were STILL not enough spaces for Electrons in the shells to match the spectral data; in fact, the theoretical atoms had only half the number of Electrons that they needed to match the experimental evidence.
But where would this additional complexity come from? Physicists had already tweaked Electron orbits in every possible way… They had used orbit size, shape, and orientation. What was left?
Pauli suggested there were four (not three, as had been previously thought) components to describing each Electron orbit-type. There was still, however, some confusion as to precisely what Pauli’s fourth Quantum Number represented physically. Some thought that the Fourth Quantum Number was some collective property of the entire atom.
Werner Heisenberg once suggested that the oscillation of the Core (Nucleus plus internal Electrons) could serve as the fourth determinant of emitted spectral patterns. Specifically, he felt that his theory could explain one particularly perplexing type of spectral pattern– the “doublet.”
Heisenberg conjectured that the doublet pattern was caused by the oscillation of the atom’s Core within the internal magnetic field created by the atom’s circulating Outer Electrons (a magnetic field was, and is, thought by the orthodox to be created by the moving Charge of the revolving Electron). When the Core oscillated closer to one side of the Electron’s orbit, then one of the doublet lines would be generated, and when it moved closer to the other side, the other line would appear.
Pauli disagreed, contending that the only parts of the atom which should be involved in the production of spectral emissions are the Valence, or outer, Electrons. He was convinced that the rest of the atom, forming the Core, should remain a stable, cohesive unit. Therefore, Quantum Number Four must be a characteristic specific to the Valence Electrons. Furthermore, according to the latest evidence (and mathematical equations), this Fourth Number could only have one of two values.
Since it seemed that all real-world orbit options for the Valency Electrons had already been utilized, Pauli’s suggestion further confused many already-confused physicists. Bohr, for instance, thought Pauli was introducing a fourth dimension inside the atom for the outer Electrons to move through– and of course, Bohr being Bohr, he was all for it.
Then in 1926, theorists attempting to imagine what this fourth Quantum Number could be, began to reinterpret it, not as an orbital property, but as a physical property of the Force-carrying Electron-particle, itself– namely, the rotation of the Electron about its own axis… called from the start its “Spin.”
If Electrons are assumed to rotate in one of two directions, then the latest spectral emissions data could be perfectly explained by the addition of Spin as the Fourth Quantum Number.
When they first came up with Spin, theorists really did imagine the Electrons as spinning on their little axes. It was maintained that Electrons could spin clockwise or spin counterclockwise. By allowing two Electrons of opposite “Spins” to occupy the exact same orbits, the number of Electrons allowed in the shells was thus doubled.
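The net bookkeeping result of all four Quantum Numbers, with Spin doubling every slot, is the familiar capacity of 2n² Electrons per shell. A minimal sketch of the orthodox counting (size n, shape, orientation, and the two Spin values):

```python
# Orthodox electron-counting: for shell n, the "shape" number l runs
# 0..n-1, the "orientation" number m runs -l..+l, and Spin doubles
# every (n, l, m) slot -- the Pauli Exclusion Principle allows exactly
# one Electron per full four-number combination.
def shell_capacity(n):
    slots = 0
    for l in range(n):              # shape (2nd quantum number)
        for m in range(-l, l + 1):  # orientation (3rd quantum number)
            slots += 2              # two Spin values (4th quantum number)
    return slots

for n in range(1, 5):
    print(f"shell {n}: {shell_capacity(n)} electrons")  # 2, 8, 18, 32
```

Without the Spin doubling, every capacity above would be halved– exactly the “only half the number of Electrons” shortfall described earlier.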
Before Spin, the Electron could be thought of as a “point particle.” It could move up-or-down, side-to-side, or back-and-forth– three degrees of freedom. However, by suggesting that the Electron could Spin about its own axis, theorists bestowed upon the Electron a quite corporeal existence.
One tiny problem with the addition of Spin into the physicists’ evermore complicated and exotic Wonderland of the atom is that, well, it doesn’t work. You see, physicists like Bohr, desperately seeking patches to cover the gaping holes in their frantic, ad hoc theory-making, had already accepted the validity of the theory of Relativity and inserted it into the world of the atom. Once Relativity was accepted as applicable to the sub-atomic world, everything (supposedly) in the atom now had to come to terms with it– everything must be interpreted or reinterpreted in light of Relativistic effects (actually, it’s not uncommon to find physicists ignoring Relativity when dealing with simplified models– a good reason, in itself, to question Relativity’s honest applicability).
Spin came late to the atomic story-telling game and had to fit into the formulas and equations already accepted by the atom-imagining clique. When physicists attempted to drop a tincture of Spin into their cauldron of guesses, they found that the runes of their complicated equations predicted that, if an Electron actually rotated in reality, then its surface would have to move faster than the speed of Light. Yikes!– such is not allowed by the theory of Relativity already invited by theorists to come behind the Looking Glass and into their Suitcase Atom.
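The back-of-the-envelope version of that objection is easy to reproduce. Model the Electron as a literal spinning ball of the so-called classical Electron radius, carrying the ħ/2 of angular momentum that Spin assigns it, and the equator comes out moving at well over a hundred times the speed of Light. A sketch using the standard constants (the solid-sphere model is, of course, the very assumption under attack):

```python
# If the Electron were a literal spinning sphere of the "classical
# electron radius" carrying angular momentum hbar/2, how fast would
# its equator move?  For a solid sphere, L = I * omega with
# I = (2/5) * m * r**2, so the equatorial speed is
# v = omega * r = 5 * hbar / (4 * m * r).
hbar = 1.0546e-34   # J*s, reduced Planck constant
m_e  = 9.109e-31    # kg, electron mass
r_e  = 2.818e-15    # m, classical electron radius
c    = 2.998e8      # m/s, speed of light

v = 5 * hbar / (4 * m_e * r_e)
print(f"equatorial speed: {v:.2e} m/s ({v / c:.0f} times light-speed)")
```

Roughly 170 times light-speed– hence the “Yikes.”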
Also, when Samuel Goudsmit and George Uhlenbeck (who get credit for the idea of Spin) sought a critique of their theory from the venerated Lorentz, the old sage patted them on the foreheads (figuratively) and informed them that the magnetic energy created by a rotating Electron would be so large that the Electron would have to have an (electromagnetically derived) mass larger than the Proton. This was a bit of a problem since one of the playing-cards in the towering atomic house built by Quantum physicists states that the Electron is hundreds and hundreds of times lighter-weight than the Proton.
Arabatzis tells us that these problems were “solved” when physicists later DENIED “that Spin represented a literal rotation of the Electron.”
Backtracking physicists claimed that, well, “Spin” is just a term for something that is not fully understood. What is really important about Spin is that it declares that Electrons can exist in one of two states, thus doubling the complexity supplied by the first three Quantum Numbers and fitting incoming data nicely.
One situation in which the split-personality of the spinning/not-spinning Electron is in evidence is the work of Heisenberg. Heisenberg had found, in his calculations of Spin effects, that the incorporation of Spin into the atomic model predicted spectral line-splitting which was perfect– except for the fact that the Spin-model produced values twice as large as the spectral line-splitting actually being observed in laboratories.
To put a patch over this latest problem, both Ralph Kronig and Albert Einstein, independently it appears, suggested that, in the frame of reference of the rotating Electron, the atomic Core appeared to be moving from the Electron’s point of view– as the Sun appears to us to move around the rotating Earth. And since moving Charges create a magnetic field, this would give the Core lodged in the center of the atom (but moving, Relativistically speaking, around each Electron!) a magnetic field. This magnetic field would interplay with the magnetic field generated by the spinning Electron, and, according to L.H. Thomas, all this magnetic interaction between the revolving-but-not-revolving Core and a spinning-but-not-spinning Electron would create a “Relativist precession” of the Electron’s orbit. And –obviously!(?)– “the conjunction of this Relativistic effect with the Spin-field interaction” will eliminate “the unwanted factor of two” by which the Spin-math was off.
But even as spectral analyzers hardened in their belief in Spin, the rest of the physics community was moving away from the belief in the individuality of Electrons. The solar system model of the atom, in fact, was outdated almost as soon as it achieved widespread notice. Physicists in other fields were beginning to think of Electrons as “waves” or as “clouds”– or even as “probabilities.”
Erwin Schrodinger had offered, in his wave theory of the Electron, what appeared to be a better physical picture of what was going on inside an atom than the suggestions –biased still by the solar-system model of the atom– offered by Heisenberg, Bohr, and the other Quantum Physicists. To compete with Schrodinger’s ideas, Quantumites had to go on the defensive– for basically the last time in history. This attempt to save their theories led to a deeper explanation of their proposed “quantum leap,” and to the birth of the Uncertainty Principle— which ultimately included the bizarre proposition that, actually, there is no particle until we find it.
I’ve always had my suspicions that physicists of today were misinterpreting the original assertions of Heisenberg’s Uncertainty Principle, that they were overstating just how nonsensical the world of the atom really is. After all, they have incentive; no matter how wild their reputation-making and grant-justifying claims, theories, and supposed discoveries, they can always just shrug their shoulders at the incredulous and say, “Well, that’s the wacky atom for you. ANYTHING is possible.”
The history of the Uncertainty Principle begins with a kid scientist (twenty-four years old) and skilled mathematician named Werner Heisenberg. Heisenberg comes up with a complex mathematical way to describe the inside of an atom. It’s pure genius, and his mathematical formulations fit like a glove the perplexing experimental results of the day.
Of course, his math didn’t make any sense.
But there were ways around that.
Heisenberg’s math was non-commutative (meaning that, in his math, (A x B) did NOT necessarily equal (B x A)). Someone else (Max Born) had to explain to him that this was okay because this is how one multiplies “matrices” (arrays of numbers arranged in a specific way). Matrix Math is non-commutative, said Born. That must be what you used. Oh yeah, said Heisenberg, that was it. Only problem is, admitted Heisenberg, “I don’t even know what a matrix is!”
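Matrix non-commutativity is easy to demonstrate for yourself; the product genuinely depends on the order. A minimal sketch with two 2x2 arrays (plain Python lists, no special library):

```python
# Matrix multiplication is non-commutative: A x B need not equal B x A.
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1],
     [0, 0]]
B = [[0, 0],
     [1, 0]]

print(matmul(A, B))  # [[1, 0], [0, 0]]
print(matmul(B, A))  # [[0, 0], [0, 1]]  -- a different matrix entirely
```

The order of the factors changes the answer– which is exactly the property Born recognized in Heisenberg’s formulas.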
The math was one thing, but the physical interpretation was quite another. Heisenberg’s equations were not solvable for some values of an assumed Electron trajectory. According to the math, the Electron disappeared and reappeared along its journey through the atom. In other words, Heisenberg’s Matrix Math introduced DISCONTINUITY into the world of the atom. This is the mathematical foundation behind the infamous “quantum leap”—the Electron just jumps from one place to another, occupying no place in between. Or so the math said.
Also, Heisenberg’s Matrix Math was not solvable simultaneously for the Time, Position, AND Momentum of a particle. It could tell you WHERE an Electron is, or how FAST it is going, but not both at the same time. Basically, as author Manjit Kumar explains it, the more accurately one value was determined, the less accurately the other could be determined. Keep in mind… this is just the mathematical failure of the model… the logical justifications for the failure will come later (for instance, that measuring an occurrence interferes with that occurrence and thus keeps one from knowing all its characteristics at once).
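In its modern textbook statement (which came after, and as justification of, the mathematical failure just described), the trade-off is quantified as Δx · Δp ≥ ħ/2. A sketch of what that number means at atomic scale, using the standard constants:

```python
# Heisenberg trade-off in its textbook form: dx * dp >= hbar / 2.
# Pin an Electron's position down to atomic size (~1e-10 m) and see
# what the minimum momentum (and velocity) spread comes out to.
hbar = 1.0546e-34  # J*s, reduced Planck constant
m_e  = 9.109e-31   # kg, electron mass

dx = 1e-10                  # position known to about one atom-width
dp_min = hbar / (2 * dx)    # minimum momentum uncertainty
v_min = dp_min / m_e        # expressed as a velocity spread

print(f"dp >= {dp_min:.2e} kg*m/s, i.e. velocity spread >= {v_min:.2e} m/s")
```

A velocity spread of over half a million meters per second, just for knowing where the Electron is to within one atom-width.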
But then comes along Schrodinger with his Wave Theory. Schrodinger’s wave-math did NOT leave gaps where solutions were undefined, as did Heisenberg’s Matrix Math explication of the Particle-Electron, so there was no need to invent and maintain the “Quantum Leap.” Schrodinger contended, instead, that electricity moves inside an atom not as particles, but as waves.
Under this view, there is no longer any need for messy discontinuities such as quantum leaps. And to top it off, Schrodinger’s math was much simpler than Heisenberg’s Matrix Math (relatively speaking, for us mere mortals). Since both Matrix Math and the Wave Theory proved mathematically equivalent, physicists naturally began to migrate toward Schrodinger’s work.
Schrodinger had used as his starting point de Broglie’s theory that Electrons were something more wavelike in nature than had been previously imagined. What sort of waves? That was not certain, unless one counts waves of Energy as a phrase possessing real meaning. De Broglie said that Electron-waves travel through space, pushing along their associated particles as they go.
Schrodinger took this basic wave idea, but said that the waves for Electrons were actually “standing waves”– that is, waves which maintain their general position, like the vibrating string of a plucked guitar. In another departure from de Broglie’s theory, Schrodinger said that Electron Waves do not push along Matter but, in essence, ARE the Matter.
Instead of single points of negative charge, Schrodinger imagined Standing Waves filling the entire Electron path at once, and even extending far beyond it. “No special meaning is to be attached to the Electron path itself,” he said, “and still less to the position of an Electron in its path.”
Schrodinger, wrote Walter Moore (author of Schrodinger: Life & Thought), imagined the atom “to be vibrating with a potpourri of very high frequencies.” It is in between these vibrational frequencies that atoms release or absorb energy.
Thus, instead of “Quantum Leaps,” in which Electrons are said to instantly transport from one atomic orbit to another without actually ever travelling in between the two orbits (as if by magic!), in Schrodinger’s theory, the different frequencies of the Electron are simply different vibrations of the same negative charge surrounding the atom. No discontinuities required.
Schrodinger believed that waves, not particles, constituted “the basic reality of the subatomic world.” He saw particles as being comprised of a group of waves travelling together, which he called “Wave Packets.” Wave Packets consist of a collection of waves which mostly interfere with each other, thereby cancelling one another out– except for their middle peaks– which move around together, appearing to us (or some of us) as a “particle.”
One point not emphasized enough, I think, is that Schrodinger’s complex equations concerning Wave Mechanics may be solvable for the simple Hydrogen atom, but they result in overwhelming complications– even in this computer age– when one attempts to apply the equations to larger atoms which have numerous moving Electrons mutually influencing each other.
Also, Schrodinger was never able to include Relativity successfully in his models. The conundrum of when to use and when not to use Relativity in calculations is a fundamental problem in modern physics. Some physicists take into account Relativity in their theories, and some do not– and somehow we are supposed to be okay with that.
One of Schrodinger’s sharpest critics was Lorentz. Lorentz pointed-out that a packet of waves travelling through space would soon disperse, and thus could not very well maintain the form of a tiny particle.
Specifically, Lorentz said that the area of the Electron Energy around the Hydrogen atom was not large enough to sustain a Wave Packet, since to persist for any length of time a Wave Packet would need a very large range to roam compared to its wavelength– a domain larger than the little Hydrogen atom could provide. Or, said another way, Lorentz objected that the Hydrogen atom cannot offer wavelength vibrations short enough for Wave Packet construction. Moore also wrote in his book that Schrodinger’s wave theory never actually determines the exact frequencies of Light absorption by atoms, but that Schrodinger merely suggests that the precise frequencies would be some combination of the frequencies of the Electronic standing-wave vibrations predicted by his theory.
Schrodinger at first held literally to his Particle-As-Wave-Packet idea, thinking of all matter as wave disturbances in space. But after such objections as those of Lorentz, Moore stated that Schrodinger “soon de-emphasized the Wave-Packet picture.” He began instead to re-interpret his own theory, focusing less on the fundamental nature of matter and more on what his theory said about the density of the “smeared-out” Electron charge in an atom.
Eventually, Heisenberg came out publicly with a competing particle explanation to Schrodinger’s wave theory of Electrons, one that explained WHY the Heisenbergian Matrix Math was not solvable for Time, Position, and Momentum simultaneously.
Heisenberg said that the very act of measuring the position of an Electron makes the simultaneous determination of its momentum impossible due to the fact that, to see an Electron, one needs a photon of Light (I think they actually used gamma rays to find Electrons, but either way, visible light or gamma rays are both composed of photons). This photon impacts the Electron and disturbs it unpredictably. Therefore, though you might find the Electron’s position at the time of the photon impact, you have changed its momentum merely by the act of observation.
“No observation of atomic phenomena is possible without their essential disturbance,” said Bohr, agreeing with Heisenberg. Bohr explained that it was impossible to draw “any sharp distinction between the behavior of atomic objects and the interactions with the measuring instruments.”
That is basically the true origin of the Uncertainty Principle. However, things almost immediately got sticky.
Heisenberg, always trusting math and observation over conjecture, began saying that, because the position and momentum of an Electron cannot be found simultaneously, the Electron at any particular instance does not have position and momentum. In fact, he said, in the absence of an experiment measuring an Electron’s position and momentum, there is no position and momentum. Between measurements, the Electron existed only in the abstract.
Max Born, however, responded to the ideas of de Broglie and Schrodinger in a different way. Born denied that Particle-Waves are truly physical, contending instead that they represent the probability-array of possible states which an Electron could occupy at any given instant.
Born was thus able to co-opt upstart Schrodinger’s ideas, while simultaneously maintaining the Bohr-Heisenberg view of the Electron as a Quantum-Leaping particle. Importantly, he thusly gave other physicists cover to maintain their own belief in the tenets of the Quantum revolution and thereby kept the evolving new orthodoxy of physics from unraveling. Twenty-eight years after Born proposed his Probability-Wave Theory, his statistical interpretation of quantum mechanics had achieved preeminence, and he was finally awarded the Nobel Prize for his ideas.
Meanwhile, in spite of Born’s new cloud-of-probability view of the Electron, the now-outdated notion that an Electron is actually an individualized, physically “spinning” particle never died. The idea of a truly rotating Electron is often the conception used when physicists are trying to explain the phenomenon of magnetism. Theorists of magnetism also rely on the idea that something at least quasi-material is revolving around the nucleus, resulting in a phenomenon known (confusingly) as “atomic Spin.” When physicists talk of Spin, in fact, they might be talking about Electron Spin, atomic Spin, or the combined outcome of all such Spins within an atom.
Meanwhile, of course, other physicists have long ago moved-on from such a solar-system-sounding scenario for the atom, considering the old conception out-dated, quaint, and naive. But it seems not everyone got the memo. Or rather, many scientists simply choose to ignore it.
Physicists and engineers doing actual hands-on work in the realms of the applied sciences — such as, say, semiconductors or magnetic resonance imaging– have found it convenient to continue thinking in terms of the old, erroneous paradigm of orbits and Spins. Even the world’s most thoughtful and successful physicists either ignore, have forgotten, or don’t understand that subatomic orbit and Spin are false mythologies, created by the daring, imaginative, and free-wheelin’ (hell, slightly reckless, really) physicists of the early 1900s.
The one common factor uniting all the different theoretical and applied sciences concerning the atom is that no one wants to admit the inconvenient truth that we do not really understand the physicality of the atom at all, and — what is worse– we think we DO.
So-called “Spin” has been used to “explain” magnetism for decades now. The biggest harm done from this false belief is that it assuredly has kept great minds from exploring the true nature of magnetism along completely different lines– believing as they do that we already know the general principle whereby magnetism is generated. Once we can shake off the 100-year-old, false paradigm, who knows what next-level discoveries we may make!
If 20th century physics has taught us anything, it is that it is almost impossible to re-train the mind to think in truly new ways once the fundamental categories and grooves of thinking have become entrenched.
Scientists describing MRI will speak of “aligning Spins” and “Precessional frequencies,” but these are hardly more than convenient conventions. All we know in a practical sense is that materials respond differently to intense magnetic-field bombardment.
According to current orthodoxy, there are three sources (that are ultimately one source) for Magnetism….
1) a traveling charge, 2) a rotating charge (a charged particle spinning around its own axis), and 3) a revolving charge (a charged particle orbiting around an oppositely charged center).
The first source of magnetism would be represented by the typical electric Current; the second by a “spinning” Electron rotating about its own axis; and the third by imagined Electrons revolving around the purported Nucleus of an Atom. All three sources can be viewed ultimately as one source: a moving charge. Since all three explanations utilize suspect, or even long-ago-rejected, atomic models, all three “explanations” of magnetism ultimately prove inadequate.
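To be fair to the orthodoxy being criticized here, its “one source” claim can at least be stated precisely. The following is a minimal Python sketch of the standard textbook formula for the magnetic field of a slowly moving point charge; the function name and the illustrative numbers are mine, not anything found in the literature of either camp.

```python
# Orthodox-physics sketch (the very picture this essay questions):
# the field of a slowly moving point charge,
#   B = (mu0 / 4*pi) * q * (v x r_hat) / r^2
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, in T*m/A


def cross(a, b):
    """Cross product of two 3-vectors (tuples)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])


def b_field_of_moving_charge(q, v, r):
    """B (tesla) at displacement r from a charge q moving with velocity v (SI units)."""
    r_mag = math.sqrt(sum(c * c for c in r))
    r_hat = tuple(c / r_mag for c in r)
    scale = MU0 / (4 * math.pi) * q / r_mag ** 2
    return tuple(scale * c for c in cross(v, r_hat))


# An electron-scale charge moving along x, observed 1 nanometre away along y:
q = 1.602e-19          # coulombs (elementary charge)
v = (1e6, 0.0, 0.0)    # m/s
r = (0.0, 1e-9, 0.0)   # m
B = b_field_of_moving_charge(q, v, r)
# The resulting field points along +z, perpendicular to both v and r.
```

Whether the charge travels down a wire, “spins,” or “revolves,” the orthodox account feeds the same cross-product into the same formula– which is exactly why all three “sources” collapse into one.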
Of course, in the day-to-day application of technology, it doesn’t much matter that scientists can’t explain WHY moving charges create magnetism or that, truthfully, we don’t even know what this thing called “Charge” actually is. Technicians need only be able to correctly conduct procedures and to productively analyze results. In the same manner, early humans stumbling from the Bronze Age into the Iron Age did not need to understand the physics behind their smelting processes to produce valuable structures, tools, and weapons.
Sure, let’s smelt what we can smelt, and scan what we can scan, but if we don’t truly understand the underlying physics, let us not pretend that we do. Assertions handed down unquestioned from generation to generation shut down fresh lines of inquiry, close eyes to otherwise obvious inconvenient facts, and make mushy even the most magnificent mind.
One may wonder, if magnetism is caused by revolving and/or rotating charged particles like Electrons– shouldn’t EVERYTHING exhibit magnetism? Why is it that, when we look around at the real world, we find that most things do NOT possess noticeable magnetic properties? Scientists have come up with two explanations…
The first one I actually find completely reasonable within the framework of generally accepted beliefs concerning magnetism… Explaining the absence of atomic magnetism stemming from Electron revolution (about the Nucleus), physicists say that the atoms in a substance face every-which-a-way, and that thus the magnetisms generated by Electron orbits cancel each other out (although I’ve always been a little confused as to how these magnetic energies “cancel out” each other in a Universe in which no Energy just disappears). It is rather the exception than the rule when a particular substance possesses Electrons that happen to line up in the same direction.
Sometimes an object can possess numerous “domains” of similarly oriented atoms, but then each of these domains tends to face a different direction, and thus they too cancel each other out, with the same net result: non-magnetism in most substances. Of course, when the domains DO line up, you have something which WILL exhibit magnetic properties. And this magnetism can even be permanent in those cases in which, once the domains are lined up, it would take more energy for them to move out of alignment than to just stay the way they are.
The second excuse for the dearth of magnetism in the world I find less plausible… Explaining the absence of atomic magnetism stemming from Electron rotation or “Spin,” physicists say that most Electrons do their “spinning” next to a buddy-Electron which is spinning in exactly the opposite direction, with the result that their magnetic effects cancel each other out. It is only in the exceptional case of a material possessing several unpaired Electrons that rotational magnetism is exerted to a noticeable degree.
But even in those cases in which there exist unpaired Electrons having the freedom to line up their rotations, their Spins often still do not align, due to “thermal agitation”– atoms are always jostling about, keeping Electron Spins from lining up.
In the real-world medical application known as magnetic resonance imaging (MRI), technicians ignore Electrons completely when it comes to magnetism! What they claim to measure in MRI is Proton Spin. Technicians blast the atoms of the patient’s body with an electromagnetic pulse, and then imagine the body’s reaction as the “flipping” of the positively charged, Proton-heavy Nucleus of the atom. It is claimed that the Protons of the Nucleus flip because they absorb Energy from the electromagnetic blast received.
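The arithmetic behind the MRI technician’s dial, at least, is simple and well attested, whatever one thinks of the “flipping Proton” story beneath it: within the accepted paradigm, the “resonant” frequency claimed for Protons scales linearly with the applied field strength. A sketch, using the standard proton constant of roughly 42.577 MHz per tesla (the function name is mine, invented for illustration):

```python
# Conventional MRI arithmetic, within the paradigm this essay doubts:
# the claimed proton resonance scales linearly with field strength,
#   f = gamma_bar * B0,  with gamma_bar ~ 42.577 MHz per tesla.
GAMMA_BAR_MHZ_PER_T = 42.577  # proton gyromagnetic ratio divided by 2*pi


def larmor_frequency_mhz(b0_tesla):
    """RF frequency (MHz) at which protons are said to resonate in a field of b0_tesla."""
    return GAMMA_BAR_MHZ_PER_T * b0_tesla


# Common clinical and research field strengths:
for b0 in (1.5, 3.0, 7.0):
    print(f"{b0:.1f} T scanner -> {larmor_frequency_mhz(b0):.1f} MHz")
```

Note that this relation is purely empirical calibration– nothing in it requires believing that a literal particle is literally flipping.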
Notice here… scientists like to think they have crawled away from superstitious ideas, but in the end, we often still resort to that great, movement-initiating Spirit or God that today goes by the name “Energy.” To credit developments to a Thing absorbing a Force is not so different from the superstitious claim that a Body can become possessed by a Spirit.
Scientists have also snuck into magnetic theory, by the back door as it were, a second creator of magnetism in addition to Spin– Precession. Precession is supposed to be the revolution of one end of a Proton around a hypothetical line drawn through space– specifically, a Line Of Force running in the direction of the oncoming waves of electromagnetic Energy. The concept of Precession allows scientists to continue operating within their same old paradigm, since a “Precession” around a Line Of Force is applied not all that differently from a “Spin” around the axis of a Proton.
My guess is that Precession was added to the mix because scientists needed a way to conceptualize the discovered fact that a second applied magnetic field could affect material simultaneously with the first applied field. If there were only aligned Spins to play with, then once one magnetic field had claimed the allegiance of the parallel Spins, it would become problematic to explain why the material is behaving magnetically in another field-direction at the same time.
Additionally, with the idea of Precession added to the mix, it becomes easier to explain how a THIRD applied magnetic field is effectual simultaneously on the material. The explanation is that two fields affecting a Proton’s Precession simultaneously will cause the Precessional orbit to switch from a circular to a spiral shape.
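The circular-versus-spiral claim can be illustrated without endorsing it. In the constant-field case, the orthodox picture is simply a moment-vector rotating about the field axis at a fixed rate; the spiral (“nutating”) case would add a second, rotating field. A minimal Python sketch of the constant-field rotation (the function and scenario are my own illustration of the textbook geometry):

```python
# Sketch of the textbook precession picture: a magnetic moment M, sitting in a
# static field B0 along z, sweeps out a cone, rotating about z at the Larmor
# rate.  For a constant field this is an exact rotation about the z axis.
import math


def precess(m, omega, t):
    """Rotate vector m about the z axis by angle omega*t (constant-field case)."""
    c, s = math.cos(omega * t), math.sin(omega * t)
    mx, my, mz = m
    return (c * mx + s * my, -s * mx + c * my, mz)


m0 = (1.0, 0.0, 0.5)               # a moment tipped away from the z axis
omega = 2 * math.pi                # one full revolution per unit time
quarter = precess(m0, omega, 0.25)  # after a quarter turn, x has rotated to -y
full = precess(m0, omega, 1.0)      # after a full turn, back to the start
```

The tip of the vector traces a circle; only when a second field joins in does the orthodox story need the circle to open out into a spiral.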
All we actually know is that high doses of the great Energy-Spirit will cause fairly predictable responses in the substances attacked.
The great fault with Quantum Theory, in fact– whether it be in its interpretation of magnetism or of electricity or of molecular bonding– is that theorists extend their “explanations” farther than the evidence warrants. Quantum Theory has moved forward throughout the last century by taking two crooked steps forward for every apparent step back. When particles were found to cross Energy barriers which they had no business crossing, Quantum physicists patched up this rip in their theory with the idea of “Quantum Tunneling”– the notion that, when a particle does not have enough Energy to surmount an Energy barrier, it may still pass through by going “under” the barrier (whatever that means). And I won’t even get into the numerous particles and “virtual particles” Quantumites pretend exist, like so many angels dancing on a pinhead. Each particle is basically a mathy patch that, if it did not exist, would destroy the otherwise successful equations of the Quantumites.
This, in broad outline, is how the growing, muddy snowball that is Quantum Theory rolled down through the twentieth century. The theory’s history is one of one patch placed upon another, until, at last, all that is left is patches. Granted, the predictions of Quantum Mechanics work adequately, but this is due much more to the retrofitting of equations to fit the found probability of experimental results than to the awe-inspiring myth-making abilities of the early Quantum theorists. Quantum mechanics is derived from the facts of the real world. The Religion Of The Quantum is merely an interpretation of those facts.
The Quantum interpretation of the world is certainly not the only one possible. The same situation can be viewed from several perspectives, with each perspective more or less valid depending upon the facet of the situation being concentrated upon. As Mach contended, “the motions of the Universe are the same whether we adopt the Ptolemaic or the Copernican mode of view” of the Solar System, and “both views are, indeed, equally correct.”
The patchwork Quantum Theory is one interpretation of our Universe, and it was valid enough for its day and age. But the time approaches when a newer, more fundamentally correct theory will be necessary to describe the world as it is experienced by a new generation of thinkers.
The old 20th century thinking was too tied-down to its bias in favor of Force-carrying particles to come up with any physical theory much different than the Suitcase Atom theory arrived at by Quantumites. Whatever the new 21st century theory turns out to be, it will almost certainly reflect the new thinking of new philosophers willing to follow the evidence where it leads them, instead of twisting facts to fit theory.
The beginning of the path forward has actually already been shown us by, not so surprisingly, Albert Einstein. Einstein took Born’s statistical interpretation of Schrodinger’s Wave Theory and reinterpreted its meaning to apply, not to a single particle (the Electron), but to the entire atom as a system. Einstein viewed the atom as an ensemble of actors and events. As he wrote Born… If one regards the Wave-Function “as the description of an ENSEMBLE, it furnishes statements– as far as we can judge– which correspond satisfactorily to those of classical mechanics.”
Einstein never went into great detail about his Ensemble Atom idea, but the gist of his thought was that the probabilities of atomic events reveal information about the personality of the WHOLE atom, not just whatever single particle-type is under consideration. This is somewhat similar to the case of a gas cloud– we can speak statistically about the very high probability that a gas cloud will behave a certain way under certain conditions– but we find it practically impossible to predict the exact future path of any PARTICULAR gas molecule. Concerning the atom, scientists are indeed qualified to speak of the behavior of the total atom, of the Ensemble; but what precisely is going on inside the atom, we simply do not know. And as has been said by others before me… of what we do not know, we should not speak.
Using Einstein’s Ensemble Atom idea as a springboard for further contemplation, I’ve come to believe that we are wrong to talk about the behavior of individual particles within the atom, because we are wrong to assume that particles viewed outside the environment of the atom retain their individuality inside the atom.
The plain fact we must admit is that Dewey B. Larson was correct back in the 1960s when he stated that the Nucleus IS the atom. Other physicists– including Bohr, Heisenberg, and Pauli– have been saying the same thing for a long time without realizing, or at least admitting, it, for they were convinced that the “Core” of the atom remains a distinct unit from the Energy circulating at the atomic surface. Furthermore, even the most die-hard Quantum enthusiast admits that the Protons and Neutrons comprising the “Nucleus” account for 99.9% of the mass of the whole atom. What is thought today to be a group of nearly massless charges (Electrons) hovering around the atomic mass is in reality mere visiting Energy.
Because scientists have insisted that a unit of Charge maintains its individualized character within the atom, many knots have been tied in atomic and molecular theories.
With no Electrons to explain, keep, or balance, the existence of Protons and Neutrons becomes superfluous. And without need of Protons and Neutrons, the other chimeras claimed to live within the Quantum Zoo are also no longer needed– nor those strange inner-atom Forces, nor their Force-carrying mini-particles.
The obvious realization after once considering the atom in this new light is that the Electron, if it can really be said to exist at all, ceases to be within the atom, and its Charge vanishes. What appears as a Charge is only the Energy which circulates over the surface of things.
In reality, all these particles or supposed particles ejected from atoms-under-duress are only birthed at the time of their emergence from the atom. Within the atom, they lose their individual characters, including mass, size, and charge. In a very real sense, physicists create the particles they wish to find.
In fact, the term “Charge” is so misleading at this point in scientific history that we should probably abandon the term and use another word for the Energy of an atom; for now, I’m just calling it “Energy”– a nebulous and deficient word, to be sure, but at least one without the connotation of something existing as two opposing types.
If there is an excess of “Energy” on an atom (relative to the optimal Energy distribution for that atom in that place and time), the atom will exhibit what we today call “negative” characteristics. If there is a deficit of this Energy, the atom will exhibit what we call “positive” characteristics. I do not pretend to know WHY there is this Energy circulating at the surfaces of atoms, only that– well, there it is!
If I am correct, experiments should show that negative ions of elements will be wider (very slightly) than positive ions [for present purposes, I’m accepting the existence of the 100 or so “elements” as science has enumerated them]. And pleasantly enough, I have found that this is indeed the case… “The negative ion is significantly larger than the atom from which it forms” …and… “Positive ions should be smaller than the atoms from which they are formed.” [source: http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch7/]
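The claim is easy to check against standard reference values. A small sketch with typical textbook radii in picometres (the values are approximate and vary somewhat between sources and between metallic, covalent, and crystal-radius conventions):

```python
# Typical textbook radii in picometres (approximate; values vary by source):
# each negative ion is larger, and each positive ion smaller, than its
# parent atom.
RADII_PM = {
    # species pair: (atomic radius, ionic radius)
    "Na/Na+": (186, 102),
    "K/K+":   (227, 138),
    "Cl/Cl-": (99, 181),
    "O/O2-":  (66, 140),
}

for pair, (atom_r, ion_r) in RADII_PM.items():
    change = "shrinks" if ion_r < atom_r else "grows"
    print(f"{pair}: {atom_r} pm -> {ion_r} pm ({change})")
```

Both camps predict this pattern, of course; the point here is only that the direction of the size change is consistent with an Energy-coating picture as well as with the Electron-transfer one.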
An atom does not release its extra Energy immediately, but holds on to it until the excess becomes so great that the atom must jettison it. In other words, there’s a GAP between an atom’s being overcharged with Energy and its being able to jettison this excess Energy. This is comparable to a glass being slightly overfull of water and yet not overflowing its bounds– until it is jarred, or more water is added, or some other inducement occurs.
As this surface Energy is neither “negative” nor “positive,” there need be no core of an opposite Charge holding the atom together. That is why no Protons or Neutrons need be packed into the atom.
Furthermore, as there is no Electron maintaining its individualized existence within the atom’s environment, there need be no inner shell of useless Electrons, nor any of the complications such purported shells give rise to.
Atoms are attracted to each other depending solely upon their Energy needs, not their Electric ones. When molecules are formed from the agglomeration of atoms, those atoms have combined because it turns out that the optimum amount of Energy needed by ALL the combining atoms can be best acquired under the conjoined configuration, their surface Energies flowing over ALL of the atoms in the molecule. Like all of Nature, atoms and their Energies follow the path of least resistance.
If a material is too thickly coated with this Energy, it will try to unload some of it. If it is too thinly coated, it will try to acquire more. Typically, this Energy will not move off the surface of the atom and into space. Many times it remains energetically favorable for an atom to maintain a slight overabundance of Energy rather than to use up Energy jettisoning the superabundance (although I shame myself by using the “energetically favorable” handwaving, the fallback argument of us physicists who can come up with no better explanation than this catch-all excuse). Energy will continue circulating around the surface of an atom until the atom finds conditions energetically favorable for a transfer of its excess Energy from its surface to the surface of some other material, or sometimes, off into space. Two or more atoms, working together, can bridge the Energy gap occurring between excess-Energy conditions and unloading-of-Energy conditions, and can then more readily transfer, receive, or share Energy.
Sometimes, however, other atoms are not necessary to help an atom bridge the Energy gap, and an atom CAN be made to jettison some of its Energy straight into space… such as, say, when an atom is bombarded with little bundles of Energy called “photons.”
The smallest sphere possible for the maintenance of this Energy appears to be what is now commonly termed an “Electron.”
Any radiated Energy which cannot bundle up into a sphere and thus retain its integrity will be radiated as heat or light.
The reason Energy ejected from an atom appears as a particle to us is that it has no reason to stretch out in any particular way; it acts out the same in all directions (as long as the surrounding environment is acting upon it roughly equally from all sides). Thus, cast-off Energy will form something spherical– however fleetingly… much as a water droplet forms a sphere in space, or the Energy of a star forms a roughly spherical shape. Again… Nature follows the path of least resistance.
Under certain conditions, the Energy of a multi-atom configuration can be made to circulate in a specific direction– such as along a copper wire. In such a case, the entire copper wire takes on the characteristic, electronically speaking, of ONE material. The Energy will circulate as locally as is energetically favorable– perhaps around a single atom, perhaps around a single molecule, perhaps along the length of the entire wire.
Some of this Energy-on-the-move will find a home along the way in atoms of the wire or in those atoms of the air which are willing to accept the Energy-boost. Thus, electricity running down a wire dissipates. Furthermore, when some of the circulating Energy is jettisoned outside the reach of the chain of atoms in the configuration, it may– if it cannot ball up and retain its integrity long enough to find a new atomic home– be converted into heat or light.
The Ensemble Atom with its circulating Energy is a far simpler model of atomic physicality than the Quantumites’ Suitcase Atom and its plethora of Force-carrying particles.
Mach once stated, echoing William of Ockham, that when two contending theories are capable of explaining the same event, we should favor the simpler conception. He agreed with Hertz, who said that our theories concerning Nature should have the least possible superfluous features.
True, I have not in the least explained what this circulating Energy is– no more than the Quantum Theorists have ever explained what “Charge” is. But it is far more conducive to progress and to good science to admit what we do not yet know than to make up mythological beasties to fill in the knowledge-gaps.
Science advances farthest and most sure-footedly when we have the courage to admit what we DON’T know.
H. Shield’s The Birth, Life, & Ultimate Demise Of The Suitcase Atom