SORCE THEORY & SPINBITZ
Download SpinbitZ Books HERE
SpinbitZ Diagrams:
https://spinbitz.wordpress.com/gallery_szi/
https://spinbitz.wordpress.com/gallery_szii/
JOEL MORRISON LINKS
https://spinbitz.wordpress.com/
https://spinbitz.wordpress.com/utb/
https://www.integralworld.net/morrison2.html
https://www.yumpu.com/en/document/view/3593094/the-orb-spinbitz#
https://web.archive.org/web/20110809132839fw_/http://spinbitz.net/anpheon.org/html/Articles.htm
https://web.archive.org/web/20110809133123fw_/http://spinbitz.net/anpheon.org/html/Links.htm
Introduction
https://web.archive.org/web/20111026091038/http://www.spinbitz.net/#intro
In 1965 a theory was published which explained the nature of matter and energy using the motions, refractions and reflections of pressure waves in a continuous, compressible, frictionless, fluid, material medium. The theory used well-known principles of fluid dynamics and wave mechanics to explain the physical mechanisms of all basic matter and energy phenomena, including: the quantum and the “wave-nature” of matter; the mechanisms of gravity, inertia, electricity, magnetism, and the nuclear forces; the formation and structure of the atom; the physical mechanisms of all observed Relativistic effects; and the physical explanation of Einstein’s E=mc². It thus simultaneously unified and explained the mechanisms of all of the disparate “fundamental forces” of nature through the actions of a simple, fluid net pressure called “sorce”. Sorce, being the source of all our energies, can also be thought of mnemonically as the primary single force enacting and giving rise to all the other secondary forces.
Through the years the theory has evolved in scope and depth, and along the way the concepts, figures and predictions it made were confirmed. For instance: in 1965 the theory predicted the existence of slight deviations in the rate of change of the strength of all gravitational fields. In the 1980s, confirmation was stumbled upon experimentally during tests of the earth’s local gravitational field strength. The deviations were quickly explained away by adding two new forces, the (still highly controversial) fifth and sixth forces, to the collection of already abstract and isolated forces, thus furthering the distance to grand unification and adding more empty mathematical complexity to the overly abstract standard model. Sorce Theory, however, expected the anomalies as a direct consequence of its theoretical constructions.
In Sorce Theory, through familiar fluid-dynamic principles such as Bernoulli’s Principle and the laws of refraction and reflection, the slight theoretical gravitational variations naturally take the shape of highly complex concentric material shell layers surrounding the core of an object such as the earth or an atom. The fluid-dynamic, wave-resonance mechanisms form a pattern of repeating, concentric, square-of-the-distance shells. This pattern shows up in phenomena on many scales of universal organization, such as: the electron shell spacings of the atom, the regular spacings of the atmospheric shells of the earth, the ring and moon spacings of the planets, and the planetary orbit distances of the solar system. This orderly pattern, repeated on so many scales, cannot be attributed to the chance actions of a gravitational orbit system in the case of the solar system, in which any orbit is just as probable as any other, nor can it be explained by the laws of quantum mechanics in the case of the electron shell spacings. Clearly, then, the fluid-dynamic mechanisms behind this ever-present pattern are an important part of the structuring of the universe; nevertheless, there is currently no accepted physical explanation for this phenomenon. The pattern is merely represented, in the standard model, as a limited, abstract, mathematical algorithm from the 18th century known as Bode’s Law.
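Bode’s Law itself is just a two-constant rule. As a minimal sketch of the relation the text refers to (the coefficients are the conventional Titius-Bode values in astronomical units, not figures taken from Sorce Theory):

```python
# Titius-Bode relation: a_n = 0.4 + 0.3 * 2**n (AU), for n = 0, 1, 2, ...
# The innermost orbit (Mercury) is conventionally the bare 0.4 AU term,
# corresponding to the limit n -> -infinity; None encodes that case here.

def titius_bode(n):
    """Predicted orbital radius in AU for index n (None = innermost orbit)."""
    if n is None:
        return 0.4
    return 0.4 + 0.3 * 2 ** n

# Indices paired with the classical bodies (Ceres fills the n = 3 slot).
names_and_indices = [
    ("Mercury", None), ("Venus", 0), ("Earth", 1), ("Mars", 2),
    ("Ceres", 3), ("Jupiter", 4), ("Saturn", 5),
]
for name, n in names_and_indices:
    print(f"{name:8s} {titius_bode(n):5.1f} AU")
# Mercury 0.4, Venus 0.7, Earth 1.0, Mars 1.6, Ceres 2.8, Jupiter 5.2, Saturn 10.0
```

The geometric doubling is what gives the “repeating, square-of-the-distance” flavor of the spacing pattern; whether any physical mechanism underlies it is exactly the point at issue in the text.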
This theory uses no premises or constructions which contradict basic causal experience: no backward time propagation, no spooky “action at a distance”, no unexplainable dualities or paradoxes and no empty mathematical probabilities miraculously rendered “physically real”. The result is a completely causal explanation of ALL of physics with no internal contradictions. If you are sufficiently dissatisfied with the abstract, semantically void accounting system of modern physics, and if you are looking to be able to understand nature not just quantitatively but QUALITATIVELY as well, and if you believe that at the very heart of nature resides humanly-understandable causality, integrity and unity, NOT contradiction, non-causality and disjointed ad-hoc multiplicity, then you will certainly find these concepts intriguing and perhaps extremely enlightening.

Scientific Revolution
https://web.archive.org/web/20111208191742/http://spinbitz.net/anpheon.org/html/AnpheonIntro2003.htm
An Imperfect Reaction to the Accumulating Errors of Science
“We of the present generation are too impatient to wait for anything. Within thirty years of Michelson’s failure to detect the expected motion of the earth with respect to the ether we have wiped out the slate, made a postulate that by no means whatever can the thing be done, and constructed a non Newtonian mechanics to fit the postulate. The success which has been attained is a marvelous tribute to our intellectual activity and our ingenuity, but I am not so sure with respect to our judgement.”
– Max Born, “Einstein’s Theory of Relativity” – 1962
Science progresses linearly. The outcome of each step is pre-conditioned by the result of the previous step, and each step in turn conditions the outcome of the next step. As science progresses, it accumulates vast amounts of knowledge in a step-by-step fashion. It acquires a ‘linear history’: a ‘time-line’. Science cannot see what lies ahead on this ‘time-line’. It cannot know the facts that will eventually be discovered which could, in principle, alter the context of the reception and understanding of the new facts that it is attempting to understand in the present. As we shall see, this ‘temporal scope’ is a crucial limitation inherent in the linear accumulation process of science. It is a constant and continual source of error. A “Scientific Revolution” [1], as will be shown in this introduction, is a reaction to correct these intrinsic errors.
Depending on the type of revolution taking place, in response to specific definable criteria, the direction science moves during one of these ‘revolutions’ can be either towards its ultimate goal of understanding or temporarily away from that goal. If it is the latter, then it becomes a compounded error, an erroneous reaction to an erroneous assumption. This compounded error is due, once again, to the limited ‘temporal scope’ of the progression of science. Science looks for immediate answers, but sometimes the important clues to those answers are not immediately available.
To better understand this evolutionary, accumulative process we can view science as an attempt to build a rational and functional puzzle from a relatively small percentage of the total number of pieces critical to a comprehensive theoretical construction. It is the monumental task of science to take this incomplete yet vast puzzle-piece collection and form a coherent and accurate picture of observed reality. To do this, science takes the raw pieces of observational evidence (our puzzle pieces), produces rigid quantitative models of those separate pieces, and creatively integrates them into an interpretive, qualitative framework. This interpretive framework is what gives human meaning to the collection of raw facts and the disconnected quantitative models. It is the integrated form of our collective ‘understanding’ of Nature, and sometimes, unfortunately, when this framework consists of a retreat from causative interpretation, our ‘understanding’ temporarily takes the form of confusion.
Interpretation is a creative product of the imagination of mankind; as such, it is inherently arbitrary. It could, in principle, take many varied forms, as seen in the fantastical interpretations populating the vast historical continuum of scientific, philosophical and mythological history, such as: the cosmogonies and mythologies of the ancient world; the four elements (fire, earth, water and air); Ptolemy’s Earth-centric model of the solar system; the Copenhagen Interpretation of Quantum Mechanics; Many Worlds Theory; the Big Bang Theory; String Theory; etc.
At every step of the way, science tends to assume that the interpretive framework of the puzzle is fairly complete and accurate, because if it isn’t, then science has failed at its job. A tremendous cultural pressure is therefore placed upon science to give an authoritative ‘stamp’ of finality to its constantly evolving theories. This ‘stamp’ superficially solidifies science by shifting the cultural focus to the popularly selected theories while damping cultural interest in the competing alternatives; thus it plays a key role in determining what is considered ‘acceptable science’ by the scientific peer-review community.
The ‘solidified’ puzzle evolves continuously for long stretches of time as each new-found piece is simply incorporated into the puzzle framework in the easiest way possible so as not to disturb the functional order of the established construction. If the current framework is insufficient for the correct integration, or if a specific piece is still missing which is crucial for making sense of and integrating the newly discovered pieces, then the integration of the new pieces into the puzzle framework will necessarily contain a crucial error. Decades, or even centuries later, when the crucial ‘missing’ piece is finally discovered, it must then be integrated into an already established and stabilized structure which necessarily contains the original critical error due to the non-optimal order in which the pieces were initially integrated into the puzzle framework. The puzzle-picture at any point in time is therefore largely an accident of history: a result of the ‘random’, linear, step-by-step accumulation of puzzle-pieces (facts) and of the idiosyncrasies of the solutions (interpretations) for the integration of those pieces as they are discovered along the way.
Even within a limited, specialized domain such as Physics, the puzzle is so vast in its scope and complex in its inter-dependencies that rarely can any single scientist successfully disassemble a large number of the intricately integrated pieces, and reconstruct them into the proper hierarchical order required to fix the linearly accumulated errors. As a result, the error-prone accumulative process generally continues unchallenged until the structure encounters a puzzle-piece whose integration with the current framework is fundamentally impossible. The puzzle then becomes culturally ‘unsolidified’ and unstabilized as the majority of the scientists finally realize that the puzzle framework contains a crucial error. At this point an intensive investigation is initiated in order to fix the instability at all costs. A “Scientific Revolution” has begun. The end result of this ‘revolutionary’ process is an emergent product of at least three discernible factors:
Historical Continuity: There is a strong cultural pressure on science to maintain a superficial historical continuity in the evolution of its theories, if at all possible. This results in a tendency towards the limitation of the depth of the scientific reconstruction, and a tendency to select superficial ‘patches’ or ‘bug-fixes’ instead of a needed root-level ‘overhaul’. An excellent example of this is the Copernican Revolution where the addition of epicycle after epicycle to the Ptolemaic earth-centric model was, for many hundreds of years, ‘preferable’ to a root-level helio-centric reconstruction. When the qualitative reconstruction was finally accepted it greatly simplified both the qualitative and the quantitative aspects of the planetary model.
Cultural ‘Mood’ or Receptivity: This is a complicated factor as it depends on the current ‘state of mind’ of collective humanity. Current trends influenced by multiple factors such as: wars, the economy, the common opinions of science and religion, education etc. all play a part in determining which theories are ‘acceptable’ and ‘resonant’ with society.
Reconstruction Resources: There are specific and obvious limitations on the critical resources available to science. These limited resources include:
Knowledge: the available observational ‘facts’ (our current stock-pile of “puzzle-pieces”).
Intelligence: the mental capabilities and strategies of the scientists to integrate the facts into the framework of science.
The scientists can only work with the knowledge and intelligence at their disposal, therefore the limitations on both of these resources play a crucial role in determining the resultant structure of the “revolutionary” theoretical framework.
History Revisited
Let’s take a look at an important, more modern example of scientific history in light of our new focus on the linear accumulation of scientific errors.
It is commonly ‘understood’ that the Michelson and Morley (M&M) interferometry experiment in 1887 proved, once and for all, the non-existence of an ‘all-pervading’ and ‘luminiferous’ substance called the ‘ether’. This article is not an attempt to discredit or even to challenge the physical results of these important and revolutionary experiments. Such challenges have already been taken up by Dayton Miller, et al. The following is taken from “Dayton Miller’s Ether-Drift Experiments: A Fresh Look” by James DeMeo, Ph.D.; see http://www.orgonelab.org/miller.htm (also see https://www.aetherforce.energy/ether-drift-resource-guide-by-james-demeo/ and https://www.aetherforce.energy/an-orgonomic-perspective-on-ether-drift-by-james-demeo/ ).
“Should the positive result be confirmed, then the special theory of relativity and with it the general theory of relativity, in its current form, would be invalid. Experimentum summus judex. Only the equivalence of inertia and gravitation would remain, however, they would have to lead to a significantly different theory.”
— Albert Einstein, in a letter to Edwin E. Slosson, July 1925
“I believe that I have really found the relationship between gravitation and electricity, assuming that the Miller experiments are based on a fundamental error. Otherwise, the whole relativity theory collapses like a house of cards.”
— Albert Einstein, in a letter to Robert Millikan, June 1921 (in Clark 1971, p.328)[…]
Dayton Miller’s 1933 paper in Reviews of Modern Physics details the positive results from over 20 years of experimental research into the question of ether-drift, and remains the most definitive body of work on the subject of light-beam interferometry.
[…]
Miller’s work, which ran from 1906 through the mid-1930s, most strongly supports the idea of an ether-drift, of the Earth moving through a cosmological medium, with calculations made of the actual direction and magnitude of drift. By 1933, Miller concluded that the Earth was drifting at a speed of 208 km/sec. towards an apex in the Southern Celestial Hemisphere, towards Dorado, the swordfish, right ascension 4 hrs 54 min., declination of -70° 33′, in the middle of the Great Magellanic Cloud and 7° from the southern pole of the ecliptic. (Miller 1933, p.234) This is based upon a measured displacement of around 10 km/sec. at the interferometer, and assuming the Earth was pushing through a stationary, but Earth-entrained ether in that particular direction, which lowered the velocity of the ether from around 200 to 10 km/sec. at the Earth’s surface. Today, however, Miller’s work is hardly known or mentioned, as is the case with nearly all the experiments which produced positive results for an ether in space. Modern physics today points instead to the much earlier and less significant 1887 work of Michelson-Morley, as having “proved the ether did not exist”.
The results were certainly not the “null results” upon which Einstein had built his Theory of Relativity, without which it “collapses like a house of cards”, but they were definitely not what the scientists were expecting either, as will be shown below. The following is from Caroline Thompson’s “Forgotten History” at http://freespace.virgin.net/ch.thompson1/History/forgotten.htm
“Did the Michelson-Morley experiments prove there was no “aether wind”?
“Probably not! They have been accepted by almost everyone as giving a “null” result, but in point of fact they showed a very interesting periodic variation indicating something. If it was the presence of an aether wind, then it was not behaving in the way they expected, but it was definitely something that needed further investigation, and Dayton Miller, working at first with Morley, undertook the task. The variations proved to be reproducible and to show systematic changes with time of year and some other factors. He also showed, incidentally, that the effect disappeared if you put the apparatus in a thick-walled enclosure, which nullifies several of the more recent tests.”
Objections to the Michelson-Morley experiments aside, the immediate goal of this article is to understand just what the results really claim to have proven or disproven. It is therefore important to take into account the theoretical context within which the “null results” of the M&M experiments were so shocking and paradigm-shattering in their implications.
What was the theoretical context of the Michelson-Morley experiment? More to the point, what was the ‘ether’ that Michelson, Morley and many others had “failed” to detect?
A Solid Ether?
The ‘ether’ at that point in time was conceived of as an isometric solid. This crucial premise of the solid ether was the core conceptual groundwork of the entire structure of knowledge about electromagnetic fields and waves. It was also the context and motivation behind the M&M experiment. The premise of the solid ether follows directly, as we shall soon demonstrate, from the historical error-prone process of the linear accumulation of scientific data and is thus a demonstration, in historical fact, of the process outlined above.
In an address titled “Ether and the Theory of Relativity” delivered on May 5th, 1920 at the University of Leyden, Einstein said,
“When in the first half of the nineteenth century the far-reaching similarity was revealed which subsists between the properties of light and those of elastic waves in ponderable bodies, the ether hypothesis found fresh support. It appeared beyond question that light must be interpreted as a vibratory process in an elastic, inert medium filling up universal space. It also seemed to be a necessary consequence of the fact that light is capable of polarisation that this medium, the ether, must be of the nature of a solid body, because transverse waves are not possible in a fluid, but only in a solid. Thus the physicists were bound to arrive at the theory of the “quasi-rigid” luminiferous ether, the parts of which can carry out no movements relatively to one another except the small movements of deformation which correspond to light-waves.” [2]
In the early 1800s the existence of the phenomenon of polarized light was quite well established. In 1816-1817, as a result of investigations by Fresnel and others on the interference of polarized light, an interpretation of this phenomenon was given by Thomas Young in which it was concluded that light waves are transverse (shear waves) and not, as had been previously thought, longitudinal (pressure waves). In 1865, Maxwell formulated his electric and magnetic field equations using his technique of analogy, in which he likened magnetic lines of force to incompressible fluid flow. The waves in his electromagnetic field theory, however, are transverse, as postulated by Young. In 1887 Lord Kelvin demonstrated that a vortex-saturated region of a fluid is capable of sustaining transverse waves, and even though there was scientific support from Kelvin and others for a fluid ether, it was still the common ‘understanding’ at this critical point in history [this “common understanding” is the crucial error!!] that transverse waves could not travel through a body of liquid or gas. These types of waves were generally thought to propagate only through solids or, at best, on the surfaces of fluids such as water. Therefore, the common reasoning went, the ether cannot be a fluid because the observed transverse waves of polarized light would not be able to pass through it. The ether must therefore (somehow) be a solid.
It was assumed that this solid ether must have a shear modulus of elasticity no less than that of solid steel to account for the observed properties of the electromagnetic waves. It was also necessarily assumed that objects, such as an atom, a molecule, the human body, a laboratory, or the earth, somehow moved through this solid steel-like ether and that the ether passed through solid objects as if neither were solid at all, as if they were not really even there, like ghosts walking through walls. Light, however, was understood as being a disturbance propagating within the solid ether. The ether was thus said to be ‘luminiferous’.
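The scale of that “steel-like” assumption is easy to check with the standard elasticity formula for shear waves, v = √(G/ρ). A minimal sketch, using round textbook values for steel; since the ether’s density was of course unknown, the second computation simply uses steel’s density as an illustrative stand-in:

```python
import math

c = 2.998e8          # speed of light, m/s
G_steel = 7.9e10     # shear modulus of steel, Pa (approximate)
rho_steel = 7850.0   # density of steel, kg/m^3

# Shear-wave (transverse) speed in an elastic solid: v = sqrt(G / rho)
v_steel = math.sqrt(G_steel / rho_steel)
print(f"shear-wave speed in steel: {v_steel:.0f} m/s")  # ~3,200 m/s, far below c

# Required shear modulus for a solid of steel's density to carry
# transverse waves at the speed of light: G = rho * c**2
# (an illustrative what-if, not a historical 19th-century estimate)
G_required = rho_steel * c ** 2
print(f"required modulus: {G_required:.2e} Pa "
      f"({G_required / G_steel:.1e} times stiffer than steel)")
```

The point of the sketch is the mismatch: an elastic-solid ether carrying light-speed transverse waves must be stupendously rigid, which is what made the “ghost-like solid” picture so counter-intuitive in the first place.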
Since the earth and the laboratory of the M&M experiment were thought to be moving independently of, and freely through, the solid ghost-like luminiferous ether, and since light was thought to be a disturbance of, and thus moving with respect to, this absolute etheric frame of reference, the motion of the earth relative to the ether should have been detectable as an “ether wind” altering the relative speed of light-waves depending on their direction of travel with respect to the moving system of measurement. The M&M experiment attempted to detect just such a relative motion of the Earth through the ether using the interference of light waves.
The M&M experiment produced a “null result”, meaning that it failed to detect any relative motion whatsoever, thus proving conclusively that the theoretical context of the M&M experiment was false: there simply was no solid, etheric frame of reference as postulated by classical science. This new and entirely unexpected null-result puzzle-piece simply did not fit the current framework of the puzzle. The task was now up to the scientists of that time to reconstruct a new theoretical context in which the null results of the M&M experiment made sense. The revolution had begun!
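For concreteness, the magnitude of the effect the experimenters were looking for can be reproduced from the standard classical analysis: rotating the interferometer by 90° should shift the interference pattern by roughly 2Lv²/(λc²) fringes. A minimal sketch using round values close to the commonly quoted 1887 parameters (assumed here for illustration):

```python
# Expected Michelson-Morley fringe shift under classical ether theory:
#   delta_N ~ 2 * L * v**2 / (wavelength * c**2)
# for a 90-degree rotation of the apparatus.

L_arm = 11.0         # effective optical path length, m (via multiple reflections)
v = 3.0e4            # Earth's orbital speed, m/s (~30 km/s)
c = 3.0e8            # speed of light, m/s
wavelength = 5.5e-7  # yellow light, m

delta_N = 2 * L_arm * v ** 2 / (wavelength * c ** 2)
print(f"expected fringe shift: {delta_N:.2f}")  # 0.40 fringe
```

The experiment was sensitive to shifts well below this 0.4-fringe expectation, which is why the result read as “null” against the classical prediction.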
Einstein took up this challenge and formulated an answer with the experimental data available at that historical moment (this data-set was critically limited as we will soon show). The answer he came up with was to throw out the concept of the ether altogether and to assume that the speed of light was absolute with respect to ALL frames of reference whether in motion or not, thus mathematically satisfying the null result while simultaneously dissatisfying the human attempt to understand the nature of reality, especially the electromagnetic waves permeating all space. Much later, however, Einstein explained that the concept of the ether was absolutely essential for an understanding of what his abstract notion of “curved-space” physically represented. He suggested in “Ether and the Theory of Relativity” that the M&M experiments proved not that the ether did not exist, but merely that the ether was somehow (confusedly) dynamic. He stated that it was not “immobile” yet paradoxically he also claimed that it was not mobile either.
“It may be added that the whole change in the conception of the ether which the special theory of relativity brought about, consisted in taking away from the ether its last mechanical quality, namely, its immobility [3].
[…]
“What is fundamentally new in the ether of the general theory of relativity as opposed to the ether of Lorentz consists in this, that the state of the former is at every place determined by connections with the matter and the state of the ether in neighbouring places, which are amenable to law in the form of differential equations”
[…]
“According to the general theory of relativity space without ether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this ether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time.
“The idea of motion may not be applied to it.” By this time in history, a time of extreme scientific, political and social turmoil, Einstein’s theory “brought a vision of the Universe as a whole, a vision that appeared as a solace to a tormented society”, says Eric J. Lerner in “The Big Bang Never Happened”. It was too late now for Einstein’s “mature reflection” to find a favorable reception and to undo his original and “revolutionary!” denial of the ether. Only Einstein’s vague and confused notion of its oxymoronic “not-immobile yet not-mobile” dynamics could take root as it was applied to the abstraction of “curved space”. By the time of this “mature reflection” the physics community, and the scientific and popular culture in general, had largely abandoned the concept of the ether altogether, even though a material medium was (and still is) essential and fundamental to an understanding of the waves and fields ubiquitous to all regions of matter and space, “for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time … nor therefore any space-time intervals in the physical sense”, said Einstein.
The Hidden Error
A few questions from our outlined premises are in order. What errors can we find hidden in the linear accumulation of scientific data demonstrated in this historical episode? Can we demonstrate a “non-optimal sequence” of the accumulation and integration of facts which has given rise to an error at the foundation of the “Revolution of Modern Physics”? More precisely, can we find a crucial piece of the puzzle that Einstein and others were missing in order to properly integrate the new-found puzzle-piece of the M&M null result into the puzzle framework of that pivotal moment in time? Or perhaps we could probe even further back and finally ask: what was the historical error which necessitated the faulty theoretical context within which the M&M experiment was conducted in the first place?
To answer these questions, we must get a wider and more comprehensive scope of the time-line of scientific discovery in order to incorporate the newly discovered pieces (and perhaps some missing critical ones?) into the full puzzle framework.
Consider this scientific finding from 1999 as reported on Science Daily at http://www.sciencedaily.com/releases/1999/07/990730072958.htm
Superfluid Is Shown To Have Property Of A Solid
EVANSTON, Ill. — Northwestern University physicists have for the first time shown that superfluid helium-3 — the lighter isotope of helium, which is a liquid that has lost all internal friction, allowing it to flow without resistance and ooze through tiny spaces that normal liquids cannot penetrate — actually behaves like a solid in its ability to conduct sound waves. The finding, reported in the July 29 issue of the journal Nature, is the first demonstration in a liquid of the ‘acoustic Faraday effect,’ a response of sound waves to a magnetic field that is exactly analogous to the response of light waves to a magnetic field first observed in 1845 by British scientist Michael Faraday. The acoustic effect provides conclusive proof of the existence of transverse sound waves — which are characteristic of solids but not of liquids — in superfluid helium-3.
“I wouldn’t say that our discovery is of that magnitude [says William Halperin (if only he knew!)], but it is significant as the first observation of a previously unknown mode of wave propagation in a liquid — one that is of the type you would expect to see in a solid.” [my emphasis]
Remember that Einstein said in his “Ether and the Theory of Relativity” speech that a fluid body could not transmit transverse waves?
Einstein again,
“It also seemed to be a necessary consequence of the fact that light is capable of polarisation that this medium, the ether, must be of the nature of a solid body, because transverse waves are not possible in a fluid, but only in a solid.”
This tacit assumption of erroneous “fact” was the main reason behind the acceptance of the counter-intuitive hypothesis that the ether must be a solid with the elastic properties of steel, through which all objects somehow moved with zero friction (note that “zero friction” is another property of a superfluid). Imagine if it had been known in the 1800s that fluids could transmit transverse waves. Had the scientists at the time still concluded initially (for some reason) that the ether was a solid, then when the M&M null result showed this reasoning to be false, would they have taken the easier road instead (much easier, conceptually, than abandoning the medium of the light-waves and electromagnetic fields themselves) and simply reformulated the ether as an inhomogeneous fluid moving with the earth, instead of an isotropic, solid-steel, ghost-like, absolute frame of reference moving through and relative to the earth?
It is hard to know exactly what would have happened in our little what-if story had the collection of facts happened in an ideal sequence, but there would have been nothing to stop the simple and straightforward conclusion of the fluid ether from being reached. In such a scenario, the ether would not have been abandoned, which would have satisfied the null result of the M&M experiments and left intact the conceptual underpinnings of the theory of electromagnetism and light. Physics would not have needed to abandon causality and adopt mathematical abstractions such as randomness and probability in its place as the “medium” of the wave-nature of all matter and space. Of course we can’t know the details of the Physics that we would now have, had the course of events happened in the optimal sequence, but it is readily apparent that the difference could have been great indeed.
It should be clear now how the sequence of the discovery of facts can drastically influence the flow of scientific ‘progress’, and how perhaps this non-optimal, linear sequence has actually generated an historical scientific error against which the “Revolution” known as “Modern Physics” was merely an erroneous reaction. It should be clear that this error was due in part to the lack of a critical piece of information demonstrating the propagation of transverse waves through a fluid, which would have enabled the fluid model of the ether to model the transverse waves of polarized light. Of course we can’t change history itself, but we can surely overcome the inevitable errors of its linear flow! In retrospect, with a more complete collection of the critical pieces of the puzzle now in hand, the answer to the classical dilemma culminating in the Michelson and Morley experiment is quite simple. If hindsight is 20/20, then let us use this neglected heightening of historical scientific vision and declare right now a revision of the errors of history…
…”The ether is a dynamic fluid!”
The Evidence for the Fluid Nature of Fundamental Physical Reality
It is becoming more and more apparent that even in the darkness of the abandonment of causal understanding, the “Standard Model of Physics” appears to be steadily groping its way unconsciously toward the fluid-dynamic nature of fundamental physical reality: the dynamic “ether” vaguely and confusedly intuited by Albert Einstein. Despite coming from a faulty conceptual paradigm which it must eventually abandon altogether, Physics is slowly and blindly modeling its path, by experiment and equation, toward the alternate fluid-dynamic route that it did not have the initial framework to sufficiently formulate or accept at the crucial historical bifurcation point of the Michelson and Morley experiment. Physics is undergoing a slow oscillation back towards the distant beginnings of the ungrasped thread of understanding that it had lost sight of with the revolution of “Modern Physics”: the ungrasped concept of the fluid ether as the physical medium of the wave nature of all matter and “space”. As Sorce Theory will demonstrate, however, the actual “thread of error” goes much deeper than the simple error exposed in Part I of this introduction and briefly mentioned above. This thread “permeates all the branches of the existing tree of knowledge”. [4] It goes right down to the ancient Greek foundations of science, to the very coalescence of the fundamental framework of the standard paradigm of physical reductionism itself: straight to the core kinetic-atomic foundation and the never-ending ‘quest for the fundamental particle of matter’, the a-tom existing and acting in the always-hypothetical ‘void’. [5] This thread of error manifests itself as a wide-spread and self-limiting set of incorrect and artificial categories and concepts that render the most qualitatively simple of subjects not only impossible to truly understand, but also extremely difficult to discuss and theorize about. Take for instance this quote from G.E. Volovik in “The Universe in a Helium Droplet” [6].
“According to the modern view the elementary particles (electrons, neutrinos, quarks, etc.) are excitations of some more fundamental medium called the quantum vacuum. This is the new ether of the 21st century. The electromagnetic and gravitational fields, as well as the fields transferring the weak and the strong interactions, all represent different types of collective motion of the quantum vacuum.
“Among the existing condensed matter systems, the particular quantum liquid-superfluid 3He-A-most closely resembles the quantum vacuum of the Standard Model. This is the collection of 3He atoms condensed into the liquid state like water. But as distinct from water, the behavior of this liquid is determined by the quantum mechanical zero-point motion of atoms. Due to the large amplitude of this motion the liquid does not solidify even at zero temperature.” [my emphasis]
In the entire first paragraph, we can see the recent trend of the Standard Model of Physics toward the conception of the ‘quantum vacuum’ as a ‘zero-energy superfluid’ or ‘quantum liquid’. Apart from the erroneous conceptual structure of the Standard Model and the superficial, oxymoronic denial of the material substance that the ‘quantum liquid’ is composed of (the substance which the fluid equations actually quantify), this basic quantitative conception of the fundamental level of physical reality as a ‘quantum liquid’ or ‘superfluid’ is not really so far off from the basic foundational conception of the fluid-dynamic continuum of matter proposed in Sorce Theory. The simple conceptual differences that do exist at this foundational level, however, are CRUCIAL to a coherent understanding of Nature.
Further down, in the second paragraph, the meaning of the explanation gets obscured and highly distorted by the esoteric theoretical ‘baggage’, the erroneous and artificial categories and knowledge partitions inherent in the Standard Model. The entry point to the crucial error is exposed in the last two sentences of the quoted passage: “…the behavior of this liquid is determined by the quantum mechanical zero-point motion of atoms” [my emphasis]. So as the liquid cools down, according to the kinetic-atomic theory of heat, the atoms or molecules will slow down their billiard-ball-like collisions until, at the point of absolute zero, they cease motion altogether. This is what is meant by the phrase “zero-point motion” and it is called the “zero-momentum ground state”. The next sentence goes on to say, “Due to the large amplitude of this motion the liquid does not solidify even at zero temperature” [my emphasis!]. It should be obvious, at this point, that a “large amplitude” of “zero-point motion” is an absurdity! How can the lack of oscillatory motion possess a large amplitude? How can physicists routinely get away with such nonsense? Of course, physicists know the simple answer to that apparently naive question: it is through an appeal to quantum uncertainty, of course!
Heisenberg’s Uncertainty Principle states that as the knowledge of the momentum of a quantum-scale object gets more and more precise, the knowledge of its position gets less and less precise. It is a directly inverse mathematical relation. So as the momentum of each individual atom decreases, the amplitude of our uncertainty (whatever this physically means) of its actual position steadily increases! In effect, our knowledge of the positions of the atoms gets fuzzier and fuzzier simply because we know that they are slowing down! There is the obvious (discarded) “common-sense” recognition that the amplitude of our state of knowledge (?!) of the motion of the individual atoms should have nothing to do with the actual functioning of the quantum level (or any level) of reality. There is also the fact that we haven’t even measured the motions of any of the individual atoms, and thus we don’t really know that their individual motions have actually slowed down or ceased at all, except perhaps through recourse to our interpretive theoretical kinetic-atomic model of heat, which states that liquids at that temperature should freeze solid and not become a superfluid. Despite all these rather important theoretical problems, the fact is that the Uncertainty Principle tells us absolutely nothing of the PHYSICAL mechanisms which should explain how the lack of liquid-defining inter-atomic collisions does not instantly render the super-cooled liquid helium into a frozen solid crystal, a helium popsicle. After all, a decrease in inter-atomic collisions is the defining property of a solid, according to the kinetic-atomic theory, and this is why the discovery of superfluid helium-4 back in 1937 was a complete and total surprise to the experimentalists, and is still considered “counter-intuitive” based on the kinetic-atomic model of heat and its relation to the states of matter.
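For reference, the inverse relation invoked here is, in standard quantum mechanics, the Heisenberg inequality (quoted in its textbook form for clarity, not as an endorsement of its interpretation):

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

As the momentum spread \(\Delta p\) of an atom approaches zero, \(\Delta x\) must grow without bound; it is precisely this divergence that the Standard Model reads as the “large amplitude” of zero-point motion.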
There are constant surprises in the field of condensed matter physics because the Standard Model cannot fully account for the appearance and properties of superfluidity even with recourse to the codified uncertainties and mathematical probabilities of quantum mechanics.
What does Heisenberg’s Uncertainty Principle really have to do with the understanding of superfluidity? As Sorce Theory demonstrates, the Uncertainty Relations are a consequence of the standard lack of understanding of what an atom really is, what it is made of, and what those ‘constituents’ are made of as well. [7] This ignorance begins at the core level of physical reality, which lies beneath the probabilities and uncertainties of quantum mechanics, and permeates into the very nature of our understanding of the ‘fundamental’ forces, energy, the quantum, thermodynamics, the states of matter, and much else. That is why an appeal to uncertainty must be made by the Standard Model in order to reconcile the surprise appearance of superfluidity with the absence of fluid-defining “kinetic-atomic motion”. The scientists really are uncertain as to what physically causes the fluid phenomena at the quantum level, and this uncertainty propagates its way pervasively into the ‘understanding’ of macroscale phenomena.
Fluidity in the Fundamental Equations
Despite all of the various manifestations of the deep qualitative, interpretive errors of Modern Physics, the equations which have been custom-fit to model the results of our experimental contact with physical reality actually tell a quite different story. The equations directly model the fundamental level as a frictionless fluid, yet the Standard Model consistently denies that this fluid physically exists. The claim is that fundamental reality consists merely of probabilistic wave-equations defining the likely positions of its fundamental, extension-less “point-particles”, which paradoxically exhibit a “wave-nature”. To admit that the fluid nature of the quantum level physically exists would be anathema to the dogma of the denial of the ether initiated by the patron saint of Physics himself, Albert Einstein, who, unknown to most people, later said that the ether must exist and that it must be dynamic (in Einstein’s peculiar, confused and ill-informed sort of “dynamics”).
In “The Big Bang Never Happened” [8] , Eric J. Lerner writes,
“… since the nineteenth century it’s been recognized that the equations of electromagnetism are almost identical with the equations of hydrodynamics, the equations governing fluid flow. Even more curious, Schrödinger’s equation, the basic equation of quantum mechanics, is also closely related to equations of fluid flow. Since 1954 many scientists have shown that a particle moving under the influence of random impacts from irregularities in a fluid will obey Schrödinger’s equation.
“More recently, in the late seventies, researchers found another curious correspondence while developing mathematical laws that govern the motion of line vortices, the hydrodynamic analogs of the plasma filaments…. The governing equation turns out to be a modified form of Schrödinger’s equation, called the nonlinear Schrödinger equation. [This equation is a central part of the study of ‘quantum liquids’ as well. The interesting coincidence is that it is a modified form of the equation describing the shell structure of an atom. How this fluid-dynamic medium gets “quantized” into the shell structure of the known electronic “orbits” is a key concept illustrated in Sorce Theory.]
“Generally in science when two different phenomena obey the same or very similar mathematical laws, it means that in all probability they are somehow related. Thus it seems likely that both electromagnetism and quantum phenomena generally may be connected to some sort of hydrodynamics on a microscopic level. But this clue, vague as it is, leaves entirely open the key question of what the nuclear particles are. And what keeps them together? How can fluids generate particles? [Sorce Theory fills in these crucial gaps as well.]
“But the idea of particles formed from vortices in some fluid is certainly worth investigating. (This is a real return to Ionian ideas: the idea of reality being formed out of vortices was first raised by Anaxagoras 2,500 years ago!) … However, I think there are additional clues, some developed from my own work, which indicate that plasma processes and quantum mechanical processes are in some way related.
“First and foremost are Krisch’s experimental results on spin-aligned protons. [9] Qualitatively, the results clearly imply that protons are actually some form of vortex, like a plasmoid. [10] Such vortices interact far more strongly when they are spinning in the same direction, which is certainly the behavior Krisch observed in proton collisions. Because vortex behavior would become evident only in near-collisions, the effects should be more pronounced at higher energies and in more head-on interactions, again in accordance with Krisch’s results.
“A second clue lies in particle asymmetry… Particles act as if they have a “handedness,” and the simplest dynamic process or object that exhibits an inherent orientation is a vortex. Moreover, right- and left-handed vortices annihilate each other, just as particles and antiparticles do.”
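For reference, the nonlinear Schrödinger equation that Lerner mentions, in the Gross-Pitaevskii form standardly used for quantum liquids and line vortices (a textbook equation, quoted here for comparison; it is not a Sorce Theory equation), reads:

```latex
i\hbar\,\frac{\partial \psi}{\partial t}
  \;=\; -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi \;+\; V(\mathbf{r})\,\psi \;+\; g\,|\psi|^{2}\,\psi
```

Setting the interaction strength \(g\) to zero recovers the linear Schrödinger equation of atomic shell structure; the cubic \(|\psi|^{2}\psi\) term supplies the fluid self-interaction that admits quantized vortex-line solutions.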
Collisionless Dynamics
The modern mathematical conception of the ‘quantum vacuum’ as a ‘zero-energy superfluid’, is virtually identical to the ‘quantum liquid’, superfluid 3He-A. What accounts for this extreme similarity? What is the modern mathematical relationship between a ‘quantum liquid’, and the ‘quantum vacuum’? Furthermore, what is the relationship between a quantum liquid, a classical liquid and heat? Why is it that a reduction of heat is sufficient to convert a molecular liquid into a ‘quantum liquid’? And finally, why on earth doesn’t it freeze solid in the absence of kinetic-atomic motion?
If the ‘quantum vacuum’ is defined as a “zero-energy superfluid”, and E=mc2 says that the mass of an atom is a measure of its energy, then the crucial difference between the superfluid ‘quantum vacuum’ and the superfluid 3He-A appears to be simply the presence of the energy-containing He atoms. The ‘quantum vacuum’ itself already possesses the superfluidity exhibited, and slightly modified, by the presence of energy-containing atoms. With the reduction of the kinetic-atomic collisions known as heat, the inertia-containing He atoms no longer play a significant role in defining the ‘classical’ properties of the liquid. The atoms are essentially “just going along for the ride”, embedded in the frictionless dynamics of the “zero-energy superfluid quantum vacuum”, whose critical properties and emergent mechanisms are still unknown to the physicists.
This ‘counter-intuitive’ type of fluid dynamics goes by the title “Collisionless Dynamics”, as it is devoid of the classical kinetic-atomic collisions which are still the only method the Standard Model employs for understanding fluid motion. This is why quantum phenomena in such a superfluid manifest what is called “counter-intuitive behavior” at the macro-scale. The Standard Model has no conception of how a liquid can exist in the absence of kinetic-atomic motion. Yet there it is: the manifestation of the fluid-dynamic nature of the ‘quantum vacuum’ right before the eyes of the physicists. It is the resonating and reflecting structure of the He atoms that simply enables us to actually see the frictionless, fluid nature of the ‘quantum vacuum’ in action.
What role does heat play in the transition from a superfluid to a molecular fluid? When the He atoms are agitated by the presence of properly resonating heat waves, the atoms begin to vibrate and to collide with each other, imparting a transfer of momentum as defined by the kinetic-atomic theory. This inertial transfer of momentum is the very source of friction itself. The mass-containing atoms leave the “zero-momentum ground state” as they acquire velocity. This new inertial, collision-based dynamics begins to interfere with the collisionless dynamics and fluid flow of the frictionless quantum liquid in which the atoms were previously quietly embedded as a ‘superfluid’. Now the classical fluid-dynamic equations begin to take over in modeling the transition between the frictionless superfluid and the frictional classical fluid.
There are two sets of equations used in modeling such a system. One is classical, based on the ideal inertial transfer of momentum from atom to atom; the other is defined by the frictionless, fluid-dynamic wave nature exhibited in quantum mechanics and the quantum equations. To model the change in state, one simply transitions between these two equations as the effects of one type of dynamics are damped or reinforced by the changing environmental conditions which give rise to the other type.
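This two-equation picture resembles, at least formally, Landau’s standard two-fluid model of superfluid helium, in which the liquid’s density splits into a viscous ‘normal’ component and a frictionless ‘superfluid’ component whose proportions shift with temperature (a textbook summary offered as a point of comparison, not as part of Sorce Theory’s own formalism):

```latex
\rho = \rho_n(T) + \rho_s(T), \qquad
\mathbf{j} = \rho_n \mathbf{v}_n + \rho_s \mathbf{v}_s
```

Here \(\rho_s \to \rho\) as \(T \to 0\) (pure superfluid) and \(\rho_s \to 0\) at the transition temperature, so warming the liquid smoothly hands the dynamics over from the frictionless equations to the frictional classical ones.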
Superfluidity and the Ether
As noted previously, there are many similarities between the superfluids seen in the laboratory and the modern conception of the ‘quantum vacuum’. Consider this quote from further on in Volovik’s “The Universe in a Helium Droplet”:
“The reason for this similarity between the two systems is a common momentum space topology. This momentum space topology (Chapter 8) is instrumental for classifying of universality classes of fermionic vacua in terms of their fermionic and bosonic zero modes.
“This similarity based on common momentum space topology allows us to provide analogies between many phenomena in quantum liquids and in the quantum vacuum of the Standard Model. … However, in the low-energy corner they are described by the same equations if written in a covariant and gauge invariant form… Our ultimate goal is to reveal the still unknown structure of the ether (the quantum vacuum) using our experience with quantum liquids… The realization of a quantum liquid with the completely covariant effective theory at low energy requires some effort. We need such a ‘perfect’ quantum liquid, where in the low-energy corner the symmetries become ‘exact’ to a very high precision, as we observe today in our Universe.”
Again the crucial distinction between superfluidity and the ‘quantum vacuum’ appears to be the symmetry-breaking effects caused by the presence of energy-containing atoms.
For much more empirical detail on the Fluid Nature of Space, see the following article:
aethernitatis.net: The Case For Fluidic Space
[1] See “The Structure of Scientific Revolutions” by Thomas Kuhn, University of Chicago Press, 1962
[2] “Ether and the Theory of Relativity” by Albert Einstein, an address delivered on May 5th, 1920, University of Leyden
[3] Note that by taking away the “immobility” of any substance one is actually granting a “mobility” which is a required quality of any mechanism-producing substance. Thus taking away its immobility does not take away “its last mechanical quality” but actually gives it back! This shows a glimpse of the ill-conceived a-causal conception that Einstein had cooked up.
[4] From the Preface to “The Orb”.
[5] There is a basic tenet of logic that says that you can never prove a negative. The absolute ‘void’ is always hypothetical because you cannot prove the non-existence of some type of matter, which you are unable to detect, that might be filling the ‘void’. There are deeper metaphysical reasons for rejecting the concept of the ‘void’, however. To put it simply, the ‘void’ and the ‘a-tom’ cannot exist, because the concepts are not complex enough to contain the property of causality, i.e. ‘spatio-temporal’ structure. See also Spinoza’s logical proofs (The Ethics) for the existence of an infinite and continuous substance and the self-negation of the concept of the ‘void’.
[6] The Universe in a Helium Droplet (The International Series of Monographs on Physics, 117) by Grigory E. Volovik, Oxford University Press; (June 2003)
[7] This error is centered on the notion of the electron as a ‘point-particle’ with a fuzzy (indeterminate) mathematical boundary, instead of viewing it as a real, physical gradient with a real, physical shape and size. Of course, this erroneous assumption was a result of the ancient Greek reductionist paradigm of Democritus’ a-tom and void, and the resultant search for the ‘fundamental particles of Nature’, which don’t appear to be fundamental at all, as they contain an internal and external structure known quantitatively as a ‘probabilistic wave-nature’.
[8] “The Big Bang Never Happened”, by Eric J. Lerner, Vintage Books, 1991.
[9] Krisch, Alan D., “Collisions between Spinning Protons,” Scientific American, vol. 257, n. 2 (Aug. 1987), pp. 42-50. This experiment demonstrates a violation of the basic assumption of QCD that quarks act independently within a proton.
[10] This hints at the tight correspondence between Plasma Physics, Plasma Cosmology and Sorce Theory. This fascinating correspondence will be explored in a later work, see www.anpheon.org for details and availability. See www.electric-cosmos.org for an introduction to Plasma Cosmology.
Excerpts from: In the Beginning There was God
Introduction
A “paradigm” is the set of premises upon which a discipline builds its conclusions. In ancient times the premises were reached by dialectics, i.e. logical argument. Thales was one of the earliest Greek philosophers to record the results of prior such arguments. Noting the obvious fact that material things exist, he stipulated that a material substance, the “Ylem”, exists everywhere in space. He asserted that all material things are made of this single substance. Ensuing Greek philosophers then began their arguments, whose conclusions set up the paradigm upon which present Theoretical Physics rests.
Though Physics subjects its conclusions to the test of physical experiment rather than pure argument, the conclusions so tested are consequences of the underlying premises, not the basic premises themselves. The paradigm of the moment is called into question only when there is an unresolvable discrepancy between a given consequence and data provided by experimental measurements of events in the physical world. A “scientific revolution” occurred whenever any such underlying premise had to be revised accordingly. It is now thought that the most basic Greek premise of all, “matter per se exists and is the stuff of which all things are made”, has been cancelled. It is said that matter is a form of energy. The first law of classical physics, “the conservation of matter”, has now become “the conservation of energy”.
That conclusion, however, stems from a complex series of prior events. First, the Greek philosophers had not fully set forth the basic items of which the universe is made. Second, Newton defined his terms and set up his three laws of motion in accord with the paradigm reached by those philosophers. Third, the present definitions of the basic elements of Physics are derived from the equations set forth by Classical Physics as an expression of Newton’s laws of motion. But the equations do not accurately express Newton’s concepts nor, therefore, his laws. Fourth, Newton’s concepts were based on false premises; and the present terms and definitions, though they are not quite Newton’s, are just as false. Fifth, a series of hidden mathematical errors permeates the revolutionary papers that sired the present metaphysical foundations of Science.
Far beneath the scene lay the real cause of what went wrong. The paradigm of modern Physics is a covering theory built on the paradigm of Classical Physics, which was incorrect all along. The real frontier of Theoretical Physics is not at the perimeters of its specialties; it is at square one: The basic metaphysical premise beneath its present paradigms is false.
A premise is an assumption, a hypothesis, a concept to be accepted as valid without proof. It is the “this” in “if this, then that”; where “this” is always to be taken as true and “that” is its logical consequence. An unrecognized premise exists; accepted as the fundamental “this is true” foundation of the paradigms of modern Science.
Some twenty-seven centuries ago Thales set forth his thesis that the universe is filled with a primary material substance, out of which all discrete material objects are made. Concerning the nature of the physical world, the ancient Greek Philosophers later set forth the three main themes that have vied with each other for acceptance ever since. Noting the perfect regularity of certain heavenly patterns, Plato hypothesized that pure mathematical form is the essence of reality; wherefore the world revealed by our senses is an illusion. Noting how plants grow from tiny seeds, and an animal from a minute embryo, Aristotle postulated the Theory of Becoming, in which the basic items of nature change their properties and thereby themselves evolve. Others set forth the “Kinetic Atomic Theory”: that matter is made of ultimate unchanging basic particles moving about in empty space. The latter became Newton’s doctrine and has been carried by Modern Physics to its illogical conclusion that the ultimate particles, quarks, are “extensionless points”. (Since an extensionless point is an imaginary mathematical construction, this brings us back, as Heisenberg once announced, to the Platonic doctrine that pure form is the essence of reality and sense evidence is an illusion.)
In debating the consequences of Thales’ opening theme, the Greek Philosophers’ dialectic omitted one “yes or no” question. Without asking it, evidently they assumed the answer is No; and proceeded on that assumption. There is therefore an unstated postulate, a hidden premise, at the heart of every theory leading to and included in those of today. To discover that question, thus the secret premise that still persists, we will glance at “Science and First Principles”, by F. S. Northrop; The MacMillan Co, N.Y. 1931; pg 8,
“It was an event of no mean significance when Thales and Heraclitos observed the two extensive facts of stuff and change, and Parmenides noted that the fact of stuff involves the principle that the real is physical.
“Once this was recognized, Parmenides had no difficulty in proving that the two facts of stuff and change contradict each other, if nothing more is assumed. The proof is absolutely sound; and so brilliant in character as to be almost humorous. Change, he said, must be due to generation or motion. It cannot be due to generation for that means that the real changes its properties, and is incompatible with the principle of being which stuff entails. But neither can it be due to motion, if stuff is conceived as nothing but one physical substance or many microscopic particles. For motion requires that a thing moves from where it is to where it is not. If nature is nothing but the stuff which moves, there is no ‘where-it-is-not’, and hence motion is impossible. The difficulty is not escaped by regarding stuff as many, rather than one. For the motion of many particles involves a ‘where-it-is-not’ as much as the motion of one; a difficulty is not met by multiplying it many times. Moreover, there cannot be many particles if nothing but the stuff of the moving particles is supposed to exist. For manyness requires something to enable one to distinguish between one atom of stuff and another, and this is impossible if nothing but the stuff of the atoms exists. The essential point in the latter argument is not so much the need for an intervening space, as the necessity of something to designate the difference between one particle and another.”
The kinetic-atomic theory, that matter (or energy) is made of discrete ultimate separate particles, is the primary plank in the scientific paradigm of today. Taken together with the portions I italicized above, it is supported by the belief summarized in Northrop’s sentence, “The proof is absolutely sound; and so brilliant in character as to be almost humorous.” But that proof rests on the unstated answer to the unasked question:
Is matter compressible?
It is a very simple question, with a yes or no answer. Without asking the question or perhaps even knowing it exists, present Theory rests on the answer: No. The single basic premise beneath the paradigms of modern science was introduced by the No assumed by the Greek philosophers thousands of years ago.
The “brilliant” argument is valid if and only if matter itself, the Ylem, is basically incompressible. Indeed, the entire ultimate-particle theory of matter rests on the very same assumption. At the far end of every consequence based on that opening premise lies total mystery. As of now the mystery is blamed on the way God made the world, rather than its real cause: The primary premise is false.
God is very proud of His creation. The product, Mankind, is miraculous in very many ways. Our “operating programs”, however, are made by Man himself. Some of them, called “instinct”, are imprinted in the materials forming our genes (which are analogous to tape recordings rather than blank chemical tapes), and some of them we learn from each other.
Our interactions are ultimately governed by our conception of what the world physically is. That conception is based on what Science tells us about the basic structure of our universe. Science says it is a disembodied, totally impersonal place filled with separate charges of energy acting in automatic response to the probabilities specified by differential equations. No other causes and no values or deeper meanings exist in this world-view, upon which our philosophers and therefore, whether we know it or not, we ourselves decide how we should lead our lives.
The world-view of present Science is false. The mental operating programs by which we live, written by ourselves based on that world-view, are therefore defective.
Because Newton’s unspoken hypothesis was false and because it has never been adequately corrected, there is a chaotic lack of precision in Physics’ definitions; which fit neither Newton, our equations, nor physical reality. [That comment is not a criticism. It is offered in defense of both Newton and Physics. A man builds from where he starts, and he starts with what he knows, and what he knows at the start is what he was taught. If that was false, that’s not his fault.] The immediate goal of this book is to help cure that defect by revising and completing all aspects of modern Physics.
_______________
My analysis is powered by an entirely different metaphysical foundation than that of either Aristotle, Plato, Newton or present Science. Its basic premise was heard in the center of my head, spoken by a different voice and presumably a different entity than me. (That, in 1952, was the only time I ever heard such a voice.) It said, “Jerry! Let matter be compressible.”
To me, that meant the space-filling material I was contemplating, made of separate incompressible almost infinitesimally small particles touching one another and moving about everywhere, does not necessarily consist of particles. It meant that a bodily compressible material could fill space without being particulate. My primary premise thus rests on the answer, “Yes!”, to the ancient unasked question.
The basic premise of this book is: A compressible material substance permeates all space everywhere. A compressible material can move within, around, upon and through itself merely by deforming and changing the volume bits of it occupy during such motions. Example: Consider a solid glass globe, say as large as our solar system, with a spherical red portion 1 inch wide embedded in it. Letting glass be incompressible, nothing could enter that globe and the red sphere could not move at all. Now consider a hollow glass globe filled with a compressible material – say air – with a spherical red marble 1 inch wide embedded in it. Letting the globe have a portal into it, anything could enter or leave and the red marble could move all over the place. We could, for instance, insert many more red marbles into the globe merely by compressing the air that already completely filled it.
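As a rough numerical sketch of the globe example (assuming ideal-gas, isothermal behavior for the enclosed air, an assumption the text itself does not state, and using arbitrary illustrative numbers), Boyle's law shows how compressing the air frees volume for additional marbles:

```python
def compressed_volume(v0, p0, p1):
    """Boyle's law for an isothermal ideal gas: p0 * v0 = p1 * v1."""
    return v0 * p0 / p1

# Hypothetical numbers: a globe of 1000 volume-units, initially full of air at 1 atm.
globe = 1000.0
air_after = compressed_volume(globe, 1.0, 2.0)  # double the pressure on the air
freed = globe - air_after                       # volume now available for inserted marbles
print(freed)
```

Doubling the pressure halves the air's volume, so half the globe becomes available for new contents without anything leaving: the compressibility of the filling material is what makes internal motion and insertion possible at all.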
A compressible material can conduct pressure changes merely by bodily deforming in the act of conducting such changes through itself. An easily movable compressible material can form into all the many-patterned things that constitute particulate matter.
Aware that modern Physics denies the existence of a space-filling material medium I extensively analyzed the reasons for such a denial; and found conceptual and mathematical errors everywhere. The denial is conceptually and mathematically invalid. (This has been written up in a dozen or so uncirculated prior books, wherefore I will not argue these matters in here.) Those who are familiar with the accepted theories and would automatically reject a contrary view are advised to study some of those prior books; the last of which was written a year ago to open the way for this one; which is presented in two Volumes.
Volume One is entitled “Physics”. It will analyze the basic terms of Physics and the Newtonian concepts on which the definitions of those terms are based. These terms, including “atoms, mass, force, space and time” and others, will then be redefined. A major difficulty in that endeavor is to separate the useful conclusions due to Newton’s impeccable logic from the implicit conclusions due to his defective metaphysics. Though Newton tried very hard to do exactly that, one’s metaphysical premises are the only prior knowledge beneath one’s step-by-step logic; and cannot be avoided no matter how hard one tries.
My analysis and redefinitions of the basic elements of Physics began only a few years ago. Because most of it is new, much of it developing during the very act of writing this book, Volume One will include some of the arguments that led to and justify the revisions we will reach. Because some of the revisions and redefinitions were finalized as they were being written, the arguments may be redundant. Since the analyses and revisions are based on a new and different metaphysics, relevant portions of this metaphysics will be explained as needed. Volume Two is entitled “Metaphysics”. It is independent of the Physics discussed in Volume One but does continue the metaphysical story begun there.
Other than that it will help explain what God physically is, this work is not about religion nor psychic phenomena. The subjects are Metaphysics, Physics and Science. My musician friend said It told him, “God gives strength through prior knowledge.” Metaphysics is the prior knowledge upon which the basic elements of Physics are defined. The new premise is, “Let matter be compressible.” A premise contains within itself all its own consequences. I have been trying to reason them out ever since. This book presents the results.
https://web.archive.org/web/20101224093805fw_/http://spinbitz.net/anpheon.org/html/Books/ITB/ITB_Intro.htm
In the Principia, Newton wrote,
“I heartily beg that what I have done here may be read with forbearance; and that my labors in a subject so difficult may be examined, not so much with the view to censure, as to remedy their defects.
Is. Newton, Cambridge, Trinity College, May 8, 1686”
Although my analyses may necessarily be argumentative and at times appear confrontational, please understand that my overall intent is precisely as Newton requested.
by du Gabriel
Du GABRIEL STARTED SORCE THEORY
Diagrammatic Art – and the Art of Philosophy
http://integral-life-art.s3.amazonaws.com/Morrison/morrison.html
Reality is the sequence of the explosive convulsions modeled in a pulsatile and rotative medium exposed to rhythms. The eye as the agent of memory is a means to simplify. … Without a vision from the eye, any representation stays blind. And the reasoning that follows stays insufficient, impotent. – Roberto Matta (1911-2002), Chilean surrealist-lucid artist (quoted in SpinbitZ, p. 130)
Joel Morrison is both an artist and an integrally-informed philosopher of the interface. Beginning as a trained visual artist, he found his artwork migrating into the exploration of the diagram as a mode of visual art, eventually moving into the domain of philosophy itself. As he reports:
Having started my life as a visual artist, I gradually discovered that my artistic expressions were becoming more and more philosophical as time progressed. I would often notice that in the back of my mind, as I lay thinking, an unconscious and intricate visual form was taking shape in my visual field; line by line, curving and collecting into shape after shape, unconsciously informing and solidifying the conceptual construction. Finally the philosophy began to rise above, transcending-and-including this everpresent and often unconscious visual art-form (SpinbitZ, Vol.1, p. 26).
The philosophy project now includes art as a fold – the art of philosophy and a philosophical mode of art entwined, opening a dual media expression and exploration of the kosmos. And it was the diagram that came to be the figural genre that served best this project. As the artist explains:
The art remains a key factor in the expression. It is an integral part of a symbiosis; an interface which informs and empowers the logic of the philosophical vision. And often it is through explicit catalysis in the creation of visual diagrams—vision-logic interfaces—that the philosophy itself necessarily unfolds. The linear expression of verbal ideas gains a new perspective through the non-linear and highly parallel expression in a visual form. They feed into and rebound off each other. With visualization, the whole mass of concepts can finally be seen simultaneously, nonlinearly, as one vibrant whole, instead of spaghettified by the linearity of language…. (p. 27).
The diagram is something of a “minor” genre or moment within the currents of Western and European art. It emerges with the waning Renaissance of the late sixteenth and early seventeenth centuries, when emblem books and their play of word and image (the core of the emblem tradition) were in some instances further woven with charts and other non-depictive figural elements. Such works exceed the semiosis of basic allegory, with its simpler coding (e.g., justice personified as a female figure holding sword and scales); the expression becomes more enigmatic, metaphoric, and polysemantic: an aesthetic symbol in the Romantic sense of that term.
Robert Fludd, an early seventeenth-century English mystic philosopher, developed presentations that exemplify this complexifying of the emblem through inclusion of visual geometries, charts, and spatial coordinates. In the early nineteenth century, the important German Romantic artist Philipp Otto Runge developed charts of what he called color-spheres, influenced by Goethe’s theories of color and perhaps too by the thought of proto-evolutionist mystic Jakob Böhme. In the early twentieth century, with the flourishing of so many creative modernist art movements and idioms, diagrams and the diagrammatic made their appearance. Kazimir Malevich, the inventor of Suprematism, produced numerous teaching charts – now works of art in their own right – to express the principles of modernist pictorial logics leading up to his own renewal of the art of painting. The brilliant Paul Klee, in the wake of the cubist inclusions of all varieties of signs in the pictorial artwork (script, music notes, etc.), seamlessly integrated arrows into what are otherwise abstractive-depictive images.
During the complex moment of art making of the 1960s and early 1970s, an era still badly neglected in integral circles concerned with the visual arts, so-called Conceptual currents offered diagrammatic presentations as part of art exhibitions. Several projects by Sol LeWitt are exemplary: his diagrams of open cubes, for instance, were exhibited on occasion with three-dimensional executions of such cubes, the 2D and 3D artifacts together constituting the work of art in those exhibition instances.
Morrison’s brilliant and creative forays into the art of the diagram are thus part of a venerable if neglected lineage, a “minor” stream in the Western-European tradition that has yet to be mapped in any scholarly detail or appreciation of which I am aware, and that is only sketched in the most rudimentary manner in this commentary. Of course the “minor” is itself not a straightforward notion: following the philosopher Gilles Deleuze, it is the “minor” which can, within a tradition, turn out to be the major chord advancing that very tradition – to wit, Deleuze’s own approach to philosophy.
What is distinctive about Morrison’s diagrams is that they signify otherwise than traditional verbal logics and their linearity, exceeding both contradiction and dialectical non-contradiction and exemplifying what John Sallis calls the “exorbitant logics of the imagination.” These diagrammatic artworks thus have an ineliminable role in the philosophy itself, as Morrison explains:
The diagrams used in this construction are therefore found throughout this work as they will help the reader to process the abstract linear verbiage through the deeper, nonlinear and vastly parallel sensory functions that all humans possess. It is ultimately through the senses, transcended-and included in higher, more abstract, cognition, that the sense of the text is truly, integrally, embodied.
This is the general goal of SpinbitZ; to make sense of abstract thought through the employment of the human interface of sensation; to empower the conceptual imagination through images. Philosophy as the integrating art of the concept; a philosophy of vision-logic interfaces—and hence an Interface Philosophy (p. 27).
This art-and-philosophy project is to heal the modern (and postmodern) rift between concept and percept: philosophy as the creator of concepts and art as the inventor of percepts – precisely the respective roles of philosophy and art as forwarded and clarified by Deleuze and Guattari in their last jointly authored book, What is Philosophy? With Morrison there is an additional chiasmatic move, philosophy orienting towards aesthetics and embodiment themes through concepts like sensation; and the art of the diagram evincing the complexity of the philosophical concept. In the author’s words:
The inborn capacity to understand through the eyes has been put to sleep and must be reawakened. [Rudolf] Arnheim’s point, contrary to the interpretations of many of his critics, was not that perception, in itself, was the highest level of cognition, but merely (as the evidence clearly shows) that a training, or even a dabbling, in the arts—i.e. a “percept-training” where the senses are more effectively transcended-and-included (integrated) into the higher forms of abstract thinking—greatly enhances the ability to think conceptually. This is because, according to Arnheim, there is no real division between percept and concept. A training in the arts strengthens the very foundation of concepts themselves, the perceptual infrastructure of the imagination (p. 126).
In Morrison’s visual and verbal project the gathering and disclosure of sense is inclusive of logics that are non-contradictory and exorbitant: a richness of presentation that neither philosophy nor art could manage on its own. Such is the import, glory, and wonder of Morrison’s integrally-informed and integrally-evolved diagrammatic art.
SpinbitZ in a Nutshell
This is a quick reply from a forum exchange, giving my view on the core difference introduced by Spinozism. It is essentially a summary, from a new vantage, of much of the foundation laid out in the first half of SpinbitZ I. It created a lot of resonance in the discussion, so …
OK, so Deleuze and Merleau-Ponty say that “Positive Infinity is the secret of Grand Rationalism.” And I show in SZ how Spinoza’s triune infinite can be a powerful key to unlocking his metaphysics (onto-epistemology), as well as all of meta-mathematics, really, into a single onto-epistemic harmony. When this happens the finite and infinite become integrated, meta-mathematically speaking, and this results in the resolution (not dissolution) of the paradoxes of the infinite. In SZII I show this as an embryogenesis of dimensionality, but in SZI I term it “the embryogenesis of the concept”. Anyway, ontologically this integration, really between the absolute and relative, and also consequently between the ontic/epistemic and subject/object polarities (they are orthogonal in an important sense) is what Deleuze calls “univocity,” which he sees in Spinoza as the guiding principle in his opus, “The Ethics”. I show univocity essentially as conceptual or Rational nonduality. All levels when complete and integrated are “nondual reality” in expression of and as itself. When embodied it invokes experiential nonduality and is invoked therefrom recursively, in a sense. Resonance.
This positive infinity is only *implicit* in Descartes and it is not integrated, so the polarities at the cross-roads here (the “axis of Tao”, as Watts calls it) do not properly differentiate, orient, and reintegrate, and the structure remains what Deleuze calls “Representational”, and what I call “transitive”, “transcendent-biased”, “Mythic” or “proto-rational,” and so on. The Cartesian System itself is the image of the “Transitive Axis”, as I call it, which is one of the two “axes” (to over-simplify the polarity here) in conflation at the polar (nondual) level in the embryogenesis of dimension. This conflation is the essence of the paradoxes resolved in meta-dimensional integration in SZ with the culmination of a nondual Rationality.
A key sign of Representation or Transitivity shows up in the nature or sense of its forces, which Deleuze calls “oppositional”. Hence transitivity is dualistic and horizontal, as opposed to the forces opened in univocity, which are “vertical”, or immanent/transcendent, nondual and “intensive”, as Deleuze calls them. And while the intensive forces are present in the Cartesian system, they remain enfolded or implicit and unintegrated. The ghost and the machine, as you know. Spinoza radically opposed that, and this is seen at the heart of the Spinozan system, where Deleuze says “Substance comes to turn on its modes”. This is his way of speaking to this integration between the absolute and relative in Spinoza’s univocity. It is an opening to and integrating with immanence (“emergence” in modern complexity parlance), and what I see as his Rational nonduality. The very foundations of the Spinozan system are rooted in rootlessness, emptiness, the Positive Infinity of which is the fullness of the Spinozan Substance with its infinite attributes.
I define the transition to the Rational largely mathematically, because that perspective is critical (native, proto-ontological) and not taken into account much in orthodoxy. So I show, through the clear lens of mathematics, the embryogenesis of conceptuality: how it unfolds from unity (univocity and nonduality) through polarity (conflated and fused in paradox), through triunity, and so on. The Rational numbers not coincidentally come into play with the integration (mathematically, though not yet meta-mathematically) of the immanent-transcendent axis. The axis is fundamental to mathematics and yet it remains implicit, stuck behind the default Cartesian Grid and the linear/transitive Euclidean axioms of dimension.
