Crab Nebula (M1) — supernova remnant imaged by Herschel and Hubble Space Telescopes

Category: Tutorial Notes


Crab Nebula (M1), supernova remnant · ESA/Herschel/PACS; NASA, ESA & A. Loll/J. Hester (Arizona State Univ.) · NASA Image Library

  • ELECTRON SPIN?: SCHISM OR NIH?


    © Harold Aspden, 1998

    Research Note: 2/98: March 25, 1998


    I am writing this after reading an article by Robert Matthews, Science Correspondent of the British newspaper: The Sunday Telegraph. The article was entitled ‘Take a Spin’. It appeared at pp. 24-28 in the February 28, 1998 issue of the British publication: New Scientist.

    What captured my attention was the statement:

    “The spin property of particles has convinced those searching for the Theory of Everything that there must be a way to bridge the great spin schism and thus unify all particles.”

    How true that is! However, why is it that those who search for that Holy Grail cannot find what they are looking for? Well, the reason is hidden in a few other words that Robert Matthews contrived to work into his article. It is owing to the

    “Not invented here syndrome.”

    Well, I am familiar with the ‘NIH factor’. It was part of our vocabulary in IBM, as long ago as the early 1960s, my field being concerned with protecting inventions arising from IBM’s many different Development Laboratories. What, however, I see as very apt is the use of this ‘NIH’ expression in an article concerned with the theoretical physics of electrons. Are we really in a world where ‘invention’ describes the work of the theoretical physicist? Is that ‘Theory of Everything’ going to be the product of ‘invention’?

    What happened to my understanding that there is a line of demarcation as between ‘invention’ and ‘discovery’? If electrons spin, whatever that might mean, they certainly have been doing that for eons of time past. Invention is a term applied to something new, something not presented to us by Mother Nature, but rather something created by mankind.

    However, what I see in the theoretical treatment of the electron does seem to me to be the product of an ‘inventive’ mind, rather than a discovery founded in experimental research. I would urge Robert Matthews to take stock of the subject he has written about and consider not ‘the great spin schism’ but simply two numbers connected with electron spin, both numbers being a direct consequence of experimental observation. They are the numbers 2 and 1.001159652193. Now, numbers are not ‘inventions’, but a numerical result presented to us by Mother Nature can be ‘discovered’.

    Concerning these numbers, they are combined in an equation for what is known as the ‘g-factor’, the ‘anomalous’ property of the electron exhibited in the ratio of its magnetic moment to its angular momentum. The relationship is:

    g/2 = 1.001 159 652 193

    When I was active on my Ph.D. research into anomalous energy activity in the ferromagnetism of electrical power transformers, I ‘discovered’ that the factor-of-2 anomaly was attributable to the way in which energy is stored in .. dare I say it? .. the aether, that being a word in my vocabulary meaning space devoid of matter. However, the ‘inventive’ mind of the theoretical physicist had conjured up the concept of ‘electron spin’. Well, read what Robert Matthews has to say on that subject and see if you can understand the conventional ‘wisdom’ that guides those in search of that ‘Theory of Everything’.

    Mother Nature says: “If you deposit energy in my energy bank in space I will pool that energy with other such deposits. I will set the ‘wheels in motion’ that make it possible to keep track of how much energy I owe you and they will tap the energy pool to give energy back to you on demand, when, that is, you stop ‘spinning those wheels of yours’ that supplied the original energy input.”

    So ‘spin’ induces a ‘spin’ reaction, but what is that ‘spin’? One cannot just invent ‘spin’. However, I can say that 2 plus 2 is 4 and 2 minus 1 is 1. So if I can find a way of measuring the combined spin of my input action as offset by the ‘field’ reaction, and the net magnetic moment of that combination, I might see a ratio of magnetic moment to angular momentum that is double the ratio I calculate from my knowledge of the charge and mass of the electrons that are involved in my half of this process.

    Well, I do not propose to go into that here. Suffice it to say that the g-factor of 2 was easily explained, once I explored the way energy optimizes in its deployment in that magnetic field reaction. However, when in the latter part of the 1950s I urged attention to my interpretation of that anomaly-of-2 in the g-factor, I was told not to question Paul Dirac’s work and firmly told that there was no aether and that Einstein’s Theory of Relativity was beyond challenge. I was not told how empty space has a mechanism for storing energy, not to mention its way of providing that universal regulating quantum of angular momentum which we denote as h/2π. I did know that this quantum featured in the unit of magnetic moment, the ‘Bohr magneton’, a term having special significance in my study of ferromagnetism. More to the point, however, I was told to go away and read about the wonderful way in which quantum electrodynamics explains the precise value of the g/2 factor, not that factor of 2, but that number 1.001159652193, though its value was not known to that extreme order at the time.

    So, for want of an insight into the way in which space performs on the energy scene, physicists have lost sight of how to ‘discover’ their Holy Grail, the ‘Theory of Everything’. Necessity is the mother of invention and so they have resorted to ‘invention’, only to encounter the ‘NIH’ factor. They are, in fact, running around in circles and are not even seeing that what they call ‘spin’ is really something ‘running around in circles’. That factor-of-2 is, they believe, evidence that ferromagnetism is not produced by electrons describing circular orbits but rather by electrons complying with the spin doctor’s abstract formalism.

    That ‘way to bridge the great spin schism and thus unify all particles’, which Robert Matthews mentioned, is so clear. All one has to do, as I did in my analysis of all this in the 1950s, is to explain that factor-of-2 by the aether reaction attributable to orbital charge motion. As to that g/2 factor, well, that is another problem and I invite you to press the link button below to see where my research findings can take you.

    I conclude by saying that I believe I am the victim of the ‘NDI’ factor, the ‘not discovered here’ factor, because I developed my theory outside the academic world, at least from 1954-1983, and, though back in academia from 1983-1992, I was in a Department of Electrical Engineering and so the ‘NDI’ factor was still at work. Electrical engineers cannot expect to be heeded if they venture into the realm of the theoretical physicist, far less, if they trespass onto the territory of the mathematician who thrives on the philosophical notions of Einstein’s theory.

    However, a ‘Theory for Everything’ has to embrace ‘everything’, be it the mathematician’s challenge of Fermat’s Last Theorem or the engineer’s ‘energy-everywhere-in-space’ challenge. So, if you dare to read on, I suggest you explore either or both of the following links:


    A Quotation

    “There are ordinary geniuses, whose achievements one imagines other people might emulate, with enormous hard work and a bit of luck. Then there are the magicians, whose inventions are so astounding, so counter to all the intuitions of their colleagues, that it is hard to see how any human could have imagined them. Dirac was a magician.”

    The words of:
    Sir Michael Berry, Royal Society Research Professor
    H H Wills Physics Laboratory, University of Bristol.
    Page 40 of the February 1998 issue of Physics World.

    Such is the legacy bequeathed to future generations of physicists, the inventions of a magician! However, that ‘Theory of Everything’ has still to be ‘invented’ by a wave of the magician’s wand, as well as that energy storage medium which functions by magic in empty space but yet has no name other than ‘space-time’. Thankfully, there remains some scope for those of us who are not magicians but, with ‘enormous hard work and a bit of luck’, might be able to emulate a genius!

    Harold Aspden


  • RENEWABLE ENERGY: A TOPIC FOR DEBATE


    © Harold Aspden, 1998

    Research Note: 1/98: February 22, 1998

    The reason I am writing this item in my web pages is the fact that on January 30, 1998 I received in my E-mail a letter which read:

    I am a cross-examination debater at Ellison High School. The debate resolution that we argue this year is “Resolved: That Federal Government should mandate a policy to significantly increase the use of renewable energy in the United States.” As a debater, we have to come up with a proposal that implements a type of renewable energy. I intend to propose the use of free energy/zero point energy and I need information regarding its benefits, potential, and how the government has been reluctant to use it. If you can provide this information, I would greatly appreciate it.

    Thank you for your time,
    Anje Anderson

    Well, I could ignore this communication, but I admire those who are willing to debate something important to mankind, yet lack the in-depth knowledge that the more-informed person uses to argue for keeping the status quo, meaning ‘keep things as they are, rather than making a fool of yourself by venturing into the world of tomorrow’.

    Surely, we will see new forms of energy generation in the 21st century. It would be stupid to take the gloomy view that we now know all there is to know about energy generation.

    As someone well versed by academic and industrial training as an electrical engineer in the power industry, who then embarked on a professional career concerned with protecting inventions in that field, I note that before I entered the patent profession I did ask the question as to how I could be assured that enough inventions would be forthcoming during my lifespan to assure that I could remain active in my chosen profession.

    I need not have worried. So long as there are creative thinkers in the engineering community and those in authority care about technological progress, then there will be inventions.

    However, looking at the record of the last fifty years or so, I fear that the technology of the computer and the field of communication have tended to absorb brainpower which otherwise might have advanced us further into the era of ‘new energy’. Indeed, I am a victim of that syndrome. My salary prospects were better with IBM than with the company in which I trained as an electrical engineer and so it was the scope for invention in the storage of data by magnetic techniques, rather than power generation by magnetic techniques, that captured my attention.

    If I were to debate the issue of ‘Renewable Energy’ today, even with the hindsight of my years, I would not ‘propose the use of free energy/zero point energy’ based on data collected from an open enquiry for ‘information regarding its benefits, potential, and how the government has been reluctant to use it’.

    The government of any country has enough wisdom to take stock of available information on matters of importance. Advice of experts is solicited and action is taken based on that information. Sustaining the energy needs of a nation is of utmost importance, but there are economic factors, trade-offs between short-term and long-term aspects and the issue of global pollution that all need to be evaluated. A politician is likely to ask: “Where is this new energy technology that we are reluctant to use? Let me see a demonstration, some cost figures and the evidence that it can be relied upon before you accuse us of being reluctant to use it.”

    It must be assumed that many major corporations that will be affected by a breakthrough on the new energy front have already mounted a watch to keep an eye on developments. Exceptional though it may be, I am even aware of the interest being shown in this subject by a company in the specialist field of making anchors for oil rigs. You see, once the demand for oil drops owing to the onset of a new energy revolution, there will be a drop in demand for new oil rigs. Forward planning in business implies trying to second-guess how one’s customers might react to what is seen on the future horizon.

    However, the essential issue here, unless you are talking about the modest scope for power derived from the wind or ocean waves, is new science and technology. You cannot trawl for information and canvass for reliable data on matters that are so technical in nature. You might just as well say that the time has come for the government to encourage the use of new inventions in the energy field and send out an enquiry as to whether such inventions exist as documented by granted patents.

    Indeed, you might propose that, in the new energy field, the government should fund the attorney costs and patent fees of all innovators who have something to offer. That would be answered by the statement that all worthwhile inventions come from established industrial enterprises, if not universities, who are well able to fund their R&D and who may in any event benefit from government funding in some way. You are then left with that band of rebel activity involving the maverick inventor, the private individual who some might see as trying to follow in the footsteps of Isaac Newton by discovering a way to make gold. The ‘gold’ in this case is ‘free energy’.

    On this point the government voice could say that those who seek patents for ‘perpetual motion’ inventions waste their time; such inventions are outlawed. They could say that those who fund such research by crackpot individuals are wasting their money and that such activity should be discouraged. Popular support for such inventors, coming from those who elect politicians to power and so could be a deciding factor in the long term, would surely be lacking. Try telling your friends that you have invented a ‘perpetual motion’ machine and see how they react!

    In summary, therefore, we have to face up to the inescapable fact that the ultimate breakthrough on the ‘free energy’ front will not come in an orderly way and be born from a normal ‘pregnancy’. This ‘free energy’ field is in a state of chaos, but, yes, I can see that at this time there is a sign of life in Mother Nature, in that we are passing through a pregnant phase awaiting a ‘free energy’ birth. It may occur in a garden shed, if not in a stable, but it is on the way. Indeed, we can hope for a multiple birth, but whether the arrival will be welcomed and whether the authorities will recognize it and issue Birth Certificates as endorsement remains to be seen.

    Certainly, I would not encourage pointing an accusing finger at government and chiding them for not doing enough on the renewable energy front. Their experts are the ones who need a wake-up call; they may be experts on what is known, but there is no way in which they can deny that there is scope for new energy technology. They cannot predict future invention. They are not experts on the unknown. They have to abide by the one governing law in the energy field, namely the Law of Conservation of Energy, but they lack expert knowledge as to how to regenerate electricity from heat with the 100% efficiency that that law implies.

    I will conclude these remarks here as I may otherwise venture into a field that is too technical for the reader to follow. However, for the reader versed in electrical science the brief footnote below will serve as a guide to those technicalities.

    Footnote

    Virtually all the electricity generated in the world passes through a sequence of large power transformers. Yet it is a fact, which few experts on energy matters even know about, that in every large power transformer there is a process at work by which heat resulting from the electrical currents induced in the steel laminations of the transformer is reconverted into electricity in a way which increases those same currents and so produces additional loss. This is why that loss is, in fact, much greater than can be predicted theoretically. Commercially, in itself, this is not important, because the losses involved are small anyway in proportion to the power rating of the transformer. Technologically, however, by not understanding this phenomenon, electrical researchers have failed to see that Nature does have a way of converting ambient heat directly into electricity. One needs to get that heat to flow through a metal in the presence of a magnetic field and tap off the electrical power in a direction mutually orthogonal to the direction of heat flow and that magnetic field.

    However, I stress that that is not a way of tapping ‘zero-point’ energy. There is no ‘perpetual motion’ feature involved; just compliance with the Law of Conservation of Energy (sometimes termed the First Law of Thermodynamics), but I fear that the power transformer, sadly for some of those who teach thermodynamics, does not oblige by complying with the Second Law of Thermodynamics. Allowing for the ultimate and residual heat dissipation from the transformer, albeit one that is badly designed to accentuate the phenomenon, the regeneration of electricity from heat can be as high as 90% efficient, whereas the temperature differences in a transformer lamination correspond to a very much lower Carnot efficiency.

    All I can say is that here is a clue, a starting point for a young research-minded person to progress from in ‘free energy’ research. I only wish that I had discovered this when, some 48 years ago, I embarked on my own Ph.D. research studying the anomalous loss experimentally. You see, in my educated youth, it never occurred to me to challenge the laws of science. Who was I to say that the Second Law of Thermodynamics can be faulted? Such thoughts were far from my mind.

    I am now older and wiser and I believe we are on track for that ‘free energy’ breakthrough, including accessing the ‘zero-point’ energy of the vacuum, the task ahead being to reeducate those experts! From my present retirement position and circumstances, it is easier for me to ‘educate’ than it is to prove my case by building demonstration rigs, which is why I am writing these Web pages. However, those experts do not want to be reeducated and so they will not heed what I say – but I will soldier on in my efforts.
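    To put a number on the Carnot comparison made above, here is a minimal sketch; the lamination temperatures are assumed for illustration and are not figures given in the text.

```python
# Carnot efficiency limit for a small temperature difference, such as
# might exist across a transformer lamination.  The temperatures below
# are illustrative assumptions, not measured values.
T_hot = 350.0   # K, assumed hot-spot temperature
T_cold = 345.0  # K, assumed temperature of the adjacent metal

eta_carnot = 1.0 - T_cold / T_hot
print(f"Carnot limit for a {T_hot - T_cold:.0f} K difference: {eta_carnot:.1%}")
```

    For a temperature difference of a few kelvin the Carnot limit is of the order of one or two per cent, which is the contrast the footnote draws with the 90% figure claimed for the transformer regeneration process.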

    Harold Aspden

    February 22, 1998


  • MARINOV: A NOTE FOR THE RECORD


    © Harold Aspden, 1997

    Research Note: 15/97: September 12, 1997

    IN MEMORIAM

    Having heard via Internet channels of the untimely decease of Stefan Marinov and later, in the September 1997 issue of New Energy News, read his ‘SCIENTIFIC TESTAMENT’, a declaration written before he committed suicide on July 15, 1997, I feel compelled to put on record a scientific commentary concerning one of the subjects in which he had a particular interest.

    Marinov suffered from frustration brought about by the lack of interest by the scientific community in the field of endeavour to which he was committed, namely the regeneration of physics accompanied by the recovery of energy from the hidden world that the modern physicist cannot begin to envisage.

    Marinov’s final message, as published in New Energy News, reads:

    After having walked so many years on the thorny way of truth, I became tired. My books and papers are my scientific testament.

    I hope that soon the absolute (Newtonian) space-time concepts, which I restored by numerous experiments and by simple mathematical theory, will be accepted by the scientific community as those corresponding to physical reality.

    I hope the perpetual motion machines, of which I constructed many prototypes without closing the energetic circuits, will successfully be built by other people.

    And if my achievements in space-time physics, in electrodynamics and in the domain of the violation of the laws of conservation will be silenced also after my death, by leaving this world, I can only repeat the eternal words: feci quod potui.

    Graz, Austria, 15 July 1997
    Stefan Marinov

    *****

    We owe it to Stefan Marinov to pursue the issues raised by Erwin Schneeberger in his Letter of 12 August, 1997 published on p. 2 of the September, 1997 issue of New Energy News. Just before his death, Marinov asked Erwin Schneeberger, also of Graz, in Austria, to store his stock of books, an action which caused Schneeberger to remark: “But I could not realize his final intention.” This was, indeed, a very sad situation.

    The final paragraph of Schneeberger’s letter reads:

    There have been two disappointments for Stefan, his inertial-force driven vehicle is an artifact, and Ampere’s formula seems to be correct, as he realized from some experiments he made with Dr. Pappas in Greece, about two weeks before his death. I have gotten to know Stefan on my experiments with PAGD-devices of the Correas, of Canada. As my efforts to replicate their system clearly show, that there is no generation of electrical energy, I would ask you if you have any knowledge of a successful verification.

    Sincerely, Erwin Schneeberger
    The above letter was presumably addressed to the Editor of New Energy News.

    In the light of these comments, I call upon Dr. Pappas to disclose what he has discovered concerning Ampere’s law. I have no doubt that we will hear more about the Correa technology in due course.

    However, I wish here to add a few historical observations concerning Stefan Marinov and his research interests.

    [1] I first heard of Marinov many years ago when he discovered my book ‘Physics without Einstein’. He wanted to come to England to visit with ‘my publisher’ with a view to having his work published in the same way. Being the ‘publisher’ myself, ostensibly detached because I had a senior management position in IBM and deemed it prudent to operate that venture under my wife’s maiden name as her business, I had to decline interest in the efforts of an enthusiastic Marinov. He had declared his intention to earn a Nobel Prize for his experimental discovery on speed of light anisotropy tests.

    [2] It was Dr. Pappas who, at about that time, visited me to discuss what I had published in that book. Pappas was still a Research Student at London University, his thesis subject being Einstein’s Theory. After our meeting Dr. Pappas took a strong interest in what I had to say about the law of electrodynamics. He became alienated from Einstein’s theory but saw his Ph.D. efforts through to a successful conclusion, thereafter mounting his own research interest in the electrodynamic force law. Dr. Pappas made positive reference to my law of electrodynamics when he was called upon to write the text for a section on electrodynamics in the Greek (Larousse) version of the Encyclopedia Britannica.

    [3] Later Peter Graneau became interested in this electrodynamic topic and we met on occasion when he visited the University of Southampton and thereafter when I visited his laboratory at M.I.T. and witnessed his exploding wire experiment. Peter Graneau has held tenaciously to his opinion that Ampere’s law holds valid, agreeing with me that the Lorentz force law is inadequate but going expressly counter to my interpretation of the force law, mine being the version that leads directly to the form of the law of gravitation.

    [4] I believe it was Dr. Pappas who may have stimulated Marinov’s interest in the anomalies confronting the law of electrodynamics. I recall that Marinov announced a conference to be held in Bulgaria at a resort on the Black Sea (Varna). It was to be a major event establishing the case refuting Einstein’s theory based on Marinov’s experimental discovery. Ostensibly it looked as if this had academic backing and the official blessing of the Bulgarian state authorities. Such was my interest in the disproof of Einstein’s theory that I decided to attend. Upon enquiry at the London agency concerned with travel to Bulgaria I found that they had no knowledge of such an international conference. That resulted in me changing my plans and it was just as well, because almost on the eve of that occasion it was officially cancelled. However, Dr. Pappas had not heard of that cancellation and he ‘attended’ and, as a result, which included a diversion to Sofia, got to know Stefan Marinov quite well.

    [5] I have, in the light of those events, been interested in what Stefan Marinov was doing and have encountered him at various conferences, one memorable one being in Bologna, Italy, but in more recent times in connection with the Denver, Colorado, Symposia on New Energy.

    [6] Always, Stefan was alive with ideas and, although I knew he had once threatened to kill himself if he did not command the attention of the Editor of the journal Nature, John Maddox, and have something of his published in that journal, I could not have imagined he really would eventually sacrifice his life in such a way.

    [7] We must, out of respect for what Marinov was striving to achieve, go further in our efforts to bring enlightenment and clarification into Marinov’s field of endeavour. There will be enough of us carrying forward in the efforts to probe the prospects for a New Energy Technology, but only a few of us who will strive to clear up the questions concerning the law of electrodynamics. Therefore, it is on this latter theme that I will remount my own efforts to expose the issues governing that controversial subject.

    [8] To advance that research we should be wary about measuring force, per se, and look more to the energy shed by the aether in its inductive interaction with the flow of electric current. Force measurement can involve the aether indirectly as the seat of the inductive back-EMFs which can assert balancing forces and thereby deceive the experimenter into thinking that the force balance arises exclusively in the circuit interaction. Such is the fallacy of Ampere’s law, just as the imbalance is the fallacy of the Lorentz force law. The real question which leads on to the New Energy theme is whether the aether, in getting into the act, can ever release energy over and above that we supply to excite the circuit reactions.

    [9] To summarize some of my own conclusions:

    (i) I do not understand how anyone can prove that Ampere’s law of electrodynamics is correct by performing measurements on electron currents which flow around a closed circuit.

    (ii) I am aware that current flow around a closed circuit, including an electron discharge across an air gap, can involve forces tending to expand the circuit. That arises from the energy of the self-inductance, which, acting in an energy adjustment sense opposite to that of electric potential, tends to increase, meaning that if the circuit or an arc discharge in that circuit can expand, it will, because that increases the self-inductance. Here I have in mind the ingenious tuning fork experiments reported by Thomas E. Phipps in the September/October issue of Galilean Electrodynamics, v. 6, pp. 92-97. On the face of it such experiments can, it seems, disprove the Lorentz force law and show that forces in line with current flow are present, but far more is needed before the Ampere law can be said to be proved. This applies not only to tests using a.c. in which an electrode has freedom of movement, but also to moderately rigid closed circuits subjected to a sudden d.c. high current impulse, where the tug-of-war between the inductive back EMF and the forward EMF can tear the wire conductor into small pieces. This is known as the exploding wire phenomenon but it is not the same scenario as that on which the derivation of the Ampere law of electrodynamics is based, namely steady-state current flow around a specific circuit path. Once change of self-inductance or mutual inductance gets into the act, then there is cause for setting up a force acting along the path of current flow and, even though there is no net magnetic flux change in linking a closed circuit, there can be such forces set up in different segments of that path, that is even though no net EMF is generated around the circuit as a whole. (See the experiment reported at p. 120 by reference to Fig. 9 in my book ‘Modern Aether Science’). 
    Concerning the exploding wire phenomenon, a subject championed by Peter Graneau, I draw attention to two papers of mine, abstracted in the Bibliographic section of these Web pages under references [1985c] and [1987c], namely Physics Letters, v. 107A, pp. 238-240 (1985) and v. 120A, pp. 80-82 (1987), where I explain how inductance effects set up the rupturing forces involved.
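    The energy argument in (ii), that a current-carrying circuit tends to deform so as to increase its self-inductance, follows from the standard constant-current result F = (1/2) I² dL/dx, derived from the inductive energy W = (1/2) L I². The sketch below evaluates this relation for assumed, purely illustrative values of current and inductance gradient.

```python
# Force on a circuit at constant current, from the gradient of its
# self-inductance:  F = (1/2) * I^2 * dL/dx, with W = (1/2) * L * I^2.
# The current and inductance gradient below are illustrative assumptions.
I = 100.0      # A, assumed circuit current
dL_dx = 1e-7   # H/m, assumed rate of change of self-inductance with expansion

F = 0.5 * I**2 * dL_dx   # newtons
print(f"Expansion force: {F * 1e3:.2f} mN")
```

    Since dL/dx is positive for a deformation that increases the inductance, the force acts in the direction of expansion, which is why a circuit, or an arc discharge within it, tends to expand when it can.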

    (iii) I am also aware of attempts to prove something about the action of an isolated current circuit element by replacing it notionally by a different physical form which is more amenable to treatment, whether by theory or experiment. These involve substitution by what amounts to something providing a closed current circuit, as by replacing a current circuit element by a small magnet, for example. Here, the internal currents that develop the north and south poles of the magnet are invariably closed electron loops or circuits and, inevitably, that means that no out-of-balance force action is possible for interactions involving circuit current elements represented in this way. However, the most notorious example here is that of Einstein, who, by transformations based on the Lorentz pattern, attempted to say that a discrete charge in uniform motion could be replaced by what amounted to a current filament of infinite length, which is equivalent to path closure, thereby deriving the Lorentz version of the law of electrodynamics and eliminating from that law the relevant force term which gives action along the line of current flow.

    (iv) Ideally, to test a law of electrodynamics one needs to confine electron flow to an incomplete but well-defined and unchanging circuit path and avoid rate of change of current because that implies inductance effects which are not representative of the steady-state Amperian circuit current element.

    (v) I am aware of only two experiments that satisfy such criteria. One is the famous Trouton-Noble experiment which dates from 1903. Its mis-interpretation by Lorentz in 1904 and by Einstein in 1905 put the evolution of physical science on the ruinous course that has given us our present problems. The other experiment is that of the circuit involving a cold-cathode discharge in which a segment of the circuit through the discharge tube involves current carried by heavy positive ions, rather than merely electrons. That is where the anomalous cathode reaction forces began to show in the early decades of the 20th century, but those anomalies were duly ignored by those more concerned with the mathematics of so-called four-space, thereby leaving open the scope for the debate I initiated on the subject in the 1960s.

    (vi) I should also mention an important experiment performed by Pappas and Vaughan, Physics Essays, v. 3, 211 (1990), which involved an antenna in which the current oscillations should, by the Lorentz force law, have produced deflection about a suspension, whereas none was observed. This indicated that the electrodynamic forces between the arms of the antenna were balanced, a result inconsistent with the Lorentz force law but consistent with the Ampere law or the one which I have advocated for many years. It was not a steady-state current experiment and could not prove the Ampere law but it did disprove the Lorentz force law.

    (vii) I should add here that when I approached the problem of the law of electrodynamics I had in mind two situations, the interaction between two charges in motion, given that the charges had the same mass, and the situation for which those charges had different mass. A system with two or more electrons following each other around a closed path typifies the first situation, whereas an electron circuit interrupted by a segment in which a positive ion captures an electron to move together across that segment in a neutral entity typifies the second situation, the heavy positive ion flow across that segment completing the current circuit. I could not ‘invent’ extra charges to cater for situations where electrodes vibrate, as applies if you do a.c. tests on such circuits keeping a.c. current amplitude constant. There the number of electrons traversing a section of the circuit in one second will be the same for a specified current, but if the circuit path has been allowed to extend then you must have added more such charge carriers to the system as a whole, as by ionization of an air gap. You do not have the scenario where you are dealing with forces between a specific set of charges constituting the system of the circuit under consideration. Ampere was dealing with a d.c. current flow in a closed circuit of definite form and, though he knew that there were forces acting on that circuit at right angles to current flow, he could only make assumptions concerning such forces as might exist along the current flow path. He assumed that action and reaction had to be equal as between any two circuit elements and thereby denied that the aether could assist in assuring that balance of action and reaction in force terms by providing an energy buffer which allowed a two-way transfer of energy between the circuit and the aether.

    (viii) I hold that Ampere’s law has to be wrong for the simple reason that my feet stay beneath me when I walk over the ground instead of treading air as I float off into space! You see, we ‘know’ that there just has to be an explanation of the force of gravity rooted in electrodynamics. Ampere’s law offers no such roots. It does retain action and reaction balance and the central force that goes with such balance, but the force can vary in strength as between two charges separated by the same distance, given that there is no feature built into the law that can bring about order precluding such variation.

    (ix) I hold the opinion that to get energy to transfer from electrons moving in their circuits in electrical machines and deploy into enveloping space, which is what we see with the process of induction, there has to be an out-of-balance force tolerated by a general form of electrodynamic law but one which can be eliminated from that law under certain circumstances, namely those pertaining to the gravity condition. The latter is a condition in which the Neumann potential, a component of the Lorentz force law or of my law of electrodynamics, but not featuring, as such, in the Ampere law, involves current elements flowing mutually parallel. I first wrote about this in 1959, but a convenient reference to a way of deriving my law, showing also how mass plays a role in that action, is Physics Letters, v. 111A, pp. 22-24 (1985).

    (x) I know that if the Neumann potential were to be zero by virtue of current elements being set in a mutually orthogonal configuration, then there would be no electrodynamic force at all acting between those current elements. This is important when one comes to understanding the physical basis of the Exclusion Principle which governs the electronic structure of atoms. In short, since all this is offered by my law of electrodynamics, meaning that we can bridge the secrets of gravitation and the atom and have scope for New Energy technology, all linked by that electrodynamic base, I hold firm in asserting that, whatever merit there was in Stefan Marinov’s efforts, the onward path is now clear. If, as Erwin Schneeberger implies, Marinov became depressed two weeks before his death by having realized that Ampere’s formula was correct owing to experiments jointly performed with Dr. Pappas, then that is, indeed, a sad circumstance. I cannot believe that such an experiment to prove Ampere’s law can have sound foundation and await disclosure of details of that experiment by Dr. Pappas.
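    The contrast drawn in items (iii) to (viii) between the Lorentz force law and Ampere's original law can be illustrated numerically. The sketch below uses the standard textbook forms of the two laws (the Grassmann form of the Lorentz law and Ampere's central-force form), not the author's own proposed law; unit currents and unit element lengths are assumed. For two mutually perpendicular current elements, the Grassmann form gives a force on one element but not the other, while the Ampere form preserves action-reaction balance:

    ```python
    import numpy as np

    MU0_OVER_4PI = 1e-7  # SI prefactor mu_0 / (4*pi)

    def grassmann_force(dl_on, dl_src, r_vec):
        """Force on element dl_on due to dl_src (Grassmann/Lorentz form).
        r_vec points from the source element to the element acted on.
        Unit currents and element lengths assumed."""
        r = np.linalg.norm(r_vec)
        r_hat = r_vec / r
        return MU0_OVER_4PI / r**2 * np.cross(dl_on, np.cross(dl_src, r_hat))

    def ampere_force(dl_on, dl_src, r_vec):
        """Force on dl_on due to dl_src by Ampere's original (central) law."""
        r = np.linalg.norm(r_vec)
        r_hat = r_vec / r
        return -MU0_OVER_4PI / r**2 * r_hat * (
            2 * np.dot(dl_src, dl_on)
            - 3 * np.dot(dl_src, r_hat) * np.dot(dl_on, r_hat)
        )

    # Element 1 along x at the origin; element 2 along y, displaced along x.
    dl1 = np.array([1.0, 0.0, 0.0])
    dl2 = np.array([0.0, 1.0, 0.0])
    r12 = np.array([1.0, 0.0, 0.0])  # from element 1 to element 2

    F2_g = grassmann_force(dl2, dl1, r12)   # Grassmann force on element 2
    F1_g = grassmann_force(dl1, dl2, -r12)  # Grassmann force on element 1
    F2_a = ampere_force(dl2, dl1, r12)      # Ampere force on element 2
    F1_a = ampere_force(dl1, dl2, -r12)     # Ampere force on element 1

    print(F2_g, F1_g)  # Grassmann: zero on 2, non-zero on 1 (action != reaction)
    print(F2_a, F1_a)  # Ampere: zero on both (action-reaction balance preserved)
    ```

    For this perpendicular configuration the Grassmann law leaves an unbalanced force on isolated elements, which is exactly the point at issue in the debate over open-circuit experiments.
    
    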

    Harold Aspden
    September 12, 1997


  • CINCINNATI DISCLOSURE

    CINCINNATI DISCLOSURE

    © Harold Aspden, 1997

    Research Note: 14/97: August 30, 1997

    This Research Note is my response to a communication sent to me on July 16, 1997 by Mike Carrell (mikec@snip.net). He asked for my thoughts on the transmutation of thorium into titanium and copper by the Cincinnati Group, to be reported in depth in the then forthcoming issue of ‘Infinite Energy’. I have awaited receipt of my copy of that periodical and here is my answer.

    INTRODUCTION

    The March-June 1997 Special Double Issue (Nos. 13 and 14) of ‘Infinite Energy’ was published late, in August 1997, and I received my copy just a few days ago. It contains the breath-taking revelation of a Disclosure by ‘The Cincinnati Group’, giving details of transmutation of radioactive thorium into titanium and copper by what amounts to a ‘cold fission’ process. Quoting from p. 16 one reads:

    They claim to accomplish within minutes to hours what Nature requires tens of billions of years to do – at a cost of mere pennies of electrical input. (The half-life of thorium-232 is 14 billion years.) No exotic materials, except zirconium metal electrodes, are required.

    Note that 14 billion years is the age of the universe!

    Then on pages 18-29 of that Special Issue of ‘Infinite Energy’ one reads more and more about this process and becomes assured that it has been confirmed by independent parties. It all seems so impossible, but one finds it difficult to argue with the facts as presented. My interest centred upon the opening paragraph of the June 16, 1997 NEWS RELEASE as presented on p. 16:

    In a stunning upset of the fundamental dogmas of high-energy nuclear physics, a small group of inspired inventors, acting in the tradition of the Wright brothers of nearby Dayton, Ohio, has achieved reliable, multiply-confirmed, replicable-upon-demand, low-energy, bulk-process, high-speed, dirt-cheap, modern alchemy. For example, in less than an hour, one-tenth gram of radioactive thorium has been transmuted into nine-hundredths gram of titanium plus one hundredth gram of copper.

    That says that the thorium is 100% converted into something close to a 10:1 mix of titanium and copper. It implies that the transmutation is so well matched in mass terms that negligible heat energy is released, bearing in mind that this is a nuclear reaction process! It says there is a way of rendering radioactive material harmless and it says that the edifice of the high energy particle physicist is sitting on foundations which are liable to tumble as the shockwave of this disclosure makes itself felt.

    My interest in Aether Science, the theoretical world in which I live, is still rebounding from that shock, but I can offer some thoughts on the subject in this Research Note.

    WHY TITANIUM AND COPPER?

    The $64,000 question one needs to ask is: “What is so special about titanium and copper?” Also one must wonder if titanium and copper will appear as fission products of the decomposition of other radioactive isotopes based on the same processing technique, namely electrolytic adsorption into metal electrodes from a dilute salt solution or some kind of surface action at the interface between that solution and an electrode. The question at issue is whether the transmutation of the radioactive elements at the top end of the Periodic Table is a process distinct from the moderate transmutations that have been reported as occurring between adjacent atomic elements in other, but somewhat similar, processes. The latter may involve ‘proton creation’ within the element, proton creation being a principal theme in this, my Web site – http://www.energyscience.co.uk – and the former may involve a far more exotic concept, but one I feel bold enough to explain here.

    My approach is to say, first, that there has to be something special about the state of the radioactive thorium when associated with a metal cathode through which electric current is conducted. Since ‘cold fission’ is in mind, I will first explore whether something is occurring that actually cools the thorium selectively. In these Web pages, notably by my reference [1989a] in the Bibliographic section – the paper entitled ‘The Supergraviton and its Technological Connection’ – I have argued that ‘cold fusion’ and ‘warm superconductivity’ are linked. The link is the ‘supergraviton’, which I know has a mass of some 95.18 GeV/c² or approximately 102.2 atomic mass units.

    I have, since developing that supergraviton theory, come to realize that, if current is passed through a metal containing atoms that can migrate a little in the body of that metal (as can protons or thorium ions if adsorbed into it), then those migrant ions can build up chains or clusters. The thorium ions need not be adsorbed into metal but may simply group together at the interface between the metal cathode and the thorium nitrate solution from which they are dissociated. This may allow them to group together in units that are so well balanced by a dedicated number of supergravitons that they can absorb electron collision in a way which allows them to shed heat and augment the electron current flow in their recoil. The current flow is that of electrons in passage through the metal adjacent to its surface, a flow necessarily involving impact with thorium atoms adhering to that surface. Note that gravitation comes about by a dynamic balance as between matter and the graviton population of the quantum underworld. (All this is fully described in the Tutorial section of these Web pages – even the precise value of G, the constant of gravitation, is derived by pure theory. See also Lecture No. 6).

    So I ask if thorium can build a well balanced cluster, meaning one whose total mass in a.m.u. is close to an integer multiple of 102.2. I found that 11 units of thorium, given that its atomic weight is very slightly greater than 232, sums to a little above 2552, which is virtually the mass of 25 supergravitons. So, here was my first clue.

    This fixed 11 as the magic number in my mind; there could be 11 units of thorium combining in a ‘suicidal’ disintegration into smaller elements. Then again I asked: “Why titanium?” I noted that the atomic number of thorium is 90. It has a nucleus with a charge of 90 units. Titanium has a nuclear charge of 22. I immediately saw that 11 times 90 is an exact multiple of 22. Here was another clue.
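    The arithmetic behind these two clues can be checked directly. The short sketch below uses only the figures quoted in the text (the thorium atomic weight and the supergraviton mass claimed by the theory):

    ```python
    # Check of the two numerical clues: 11 thorium masses versus
    # 25 supergravitons, and divisibility of the total nuclear charge
    # by titanium's Z = 22.
    TH_MASS = 232.038       # thorium atomic weight, a.m.u.
    SUPERGRAVITON = 102.2   # supergraviton mass in a.m.u., as quoted in the text

    cluster = 11 * TH_MASS
    print(cluster, 25 * SUPERGRAVITON)   # 2552.418 versus 2555.0

    total_charge = 11 * 90               # 11 thorium nuclei, Z = 90 each
    print(total_charge // 22, total_charge % 22)  # 45 titanium charges, exactly
    ```

    The 11-thorium cluster mass does indeed fall just below 25 supergraviton masses, and 11 × 90 = 990 divides by 22 with no remainder.
    
    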

    The question then arises as to how the mass-energy can be deployed without the enormous loss of heat in a nuclear explosion if there is no residual charge to form the other necessary atoms. It was here that my mind had to jump back to something I published 17 years ago in my book ‘Physics Unified’. My attentions had been drawn by a colleague to a paper by M.H. MacGregor in ‘Physical Review’, D10, 850 (1974). He had suggested that the properties of a whole spectrum of fundamental particles could be explained if they were formed from four quarks. These were:

    M⁰ = 70.0 MeV
    M⁺ or M⁻ = 74.6 MeV
    S⁺ or S⁻ = 330.6 MeV
    S⁺⁺ or S⁻⁻ = 336.9 MeV

    Now you will find that I discovered how to deduce each of these values, precisely, from my aether theory and I explained that on pp. 153-155 of ‘Physics Unified’. The whole theory was founded on a proposition concerning the way in which nuclear charge is created! I confess that I had my doubts as to whether anyone would ever pay attention to the MacGregor account and, if that fell by the wayside, then my own theory concerning those four quarks would lose its foundation. I have not had occasion to think again about that subject – until now. We are talking about the creation of electric charge in units that can explain the atomic number of an atom, without relying on some mythical binding between protons mixed with neutrons.

    Even in my earlier book ‘Modern Aether Science’, published in 1972, I had shown very clearly (Chapter 4: ‘The Nuclear Aether’) why atoms build a satellite group of A nucleons (not neutrons) round a core charge of Z units. I quote from p. 141 of that book:

    When ‘Physics without Einstein’ was published the author supposed the nucleons to be formed as a system of neutrons and protons, as is conventional. The later realization of the stable charge system introduced in this chapter, however, has led to a revision of the model. All the nucleons are the same. They are negative particles of mass approximating that of the proton.

    The point of that argument was that the atomic nuclear charge stands alone, its mass being normally far less than the mass of a single proton, but there are, enveloping it and distributed in nearby space, what amounts to A anti-protons, each of which takes up a lattice site in the aether vacated by a quon (the aether lattice charge or aether particle, also referred to in these Web pages as the ‘sub-electron’). This is somewhat similar to the Dirac aether idea by which he suggested that positrons are sites in space vacated by electrons which move into the matter form. That suggests a hole or missing charge in the electrical background of space, the aether, but I prefer to see such ‘holes’ as filled by elements of matter, namely those anti-protons that surround the atomic nuclear charge Ze. They are held stable by the electrostatic balance of the aether itself. When this is all worked out in quantitative terms the results are indeed surprising and, I submit, convincing. However, that is another story and the issue here concerns the creation of nuclear charge.

    Well, the theme followed was simplicity itself. I would soon be asking myself how a quon, that physically-expanded electron, or sub-electron, forming the aether charge unit, would react if it were to be bombarded by virtual muons to create enough electrons and positrons to fill the space it occupied. That was how I ‘created’ the proton or anti-proton in my theory. But even before that I had taken the bold step of asking what would happen if Z electrons could overcome their mutual repulsion and all be forced into a sphere having the normal volume of a single electron. This is a kind of chicken-and-egg argument, because it is more likely that the composite charge form is produced first and it then breaks up into Z electrons. However, developing the argument, I assumed collective formation of a particle of charge Ze and its anti-particle of charge -Ze and that Z had to be an integer.

    I said to myself: “Suppose that a proton keeps its energy but expands to fill the volume of space normally occupied by a single electron, but that its charge can change to have the magnitude Ze, e being the unit of electron charge. Then determine the value of Z.” The energy formula requires the mass-energy to be proportional to Z² and so the value of Z is then found to be the integer nearest to (1836)^(1/2), which is Z=43.
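    This step is a one-line computation: with the mass-energy scaling as Z² and the proton-to-electron mass ratio of about 1836, Z is the nearest integer to the square root of 1836:

    ```python
    import math

    # Mass-energy proportional to Z^2, proton/electron mass ratio ~ 1836,
    # so Z is the integer nearest (1836)^(1/2).
    Z = round(math.sqrt(1836))
    print(math.sqrt(1836), Z)  # ~42.85 -> 43
    ```
    
    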

    Now what is so special about an atomic nucleus with Z=43? Firstly, I assure you that it led me immediately to the derivation of those four MacGregor quarks listed above. However, some years later, I came to realize that the atom technetium, for which Z is 43, is radioactive. It exists in the lower range of the Periodic Table where elements are stable, and yet technetium is missing amongst the natural elements. It and promethium at Z=61 are only seen as fission products of radioactive decay of elements at the top end of the scale.

    I wrote about that under the title ‘The Physics of the Missing Atoms: Technetium and Promethium’, (see [1987a] in the Bibliography). That paper is also reproduced in full in my 1996 book: ‘Aether Science Papers’. There is clearly something special about Z=43 and I am beginning to see this as having bearing upon the creation of atomic charge forms that constitute the nuclei of the atoms created by the ‘cold fission’ decay of thorium.

    I reason as follows. A charge of 43e combining with a charge e will give a net charge of 44e, spread between two nuclei. Taking these to be equal, that gives two atomic nuclear charges for which Z is 22. The atom having Z=22 is titanium. Here is our next clue in solving the thorium fission mystery!

    Now suppose that two charges of 43e combine with a charge e to give a net charge of 87e, spread between three nuclei. Taking these to be equal, that gives three atomic nuclear charges for which Z is 29. The atom having Z=29 is copper. Here was the next clue in solving the thorium fission mystery! We have arrived at the two elements which account 100% for the fission products, but what about their weight ratio in the resulting product?
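    The charge bookkeeping in these two paragraphs can be verified in a few lines, taking the text's Z=43 charge unit as given:

    ```python
    # Charge bookkeeping for the two fission channels proposed in the text.
    # One 43e charge plus one e, split over two equal nuclei:
    z_ti = (43 + 1) // 2
    # Two 43e charges plus one e, split over three equal nuclei:
    z_cu = (2 * 43 + 1) // 3
    print(z_ti, z_cu)  # 22 (titanium) and 29 (copper)
    ```

    Both splits are exact: 44 divides evenly by 2 and 87 by 3, landing precisely on the atomic numbers of titanium and copper.
    
    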

    Given that we have seen a way of finding the balancing charge that has to be created, now we can go back to take a look at the mass-energy balance involved in the nuclear reaction.

    Why is it that there is no explosive fission of the kind that would result in heat and why does the fission process occur at a rate that speeds up from a period equivalent to the age of the universe to one measured in minutes or hours?

    Well, I suppose that the supergraviton resonance is the trigger, giving that magic combination of 11 thorium ions, which would not cluster together in a chain-like configuration under normal circumstances. This sets up the filamentary pathways for supercurrent flow, albeit at the surface of the metal cathode. Then there is the point that if any excessive heat were generated it would destroy the superconductive resonance and arrest the process. Proceeding from that point, what we need to check is whether we can match the mass of the 11 thorium ions with that of an integer number of titanium atoms and copper atoms, as now helped by what has just been deduced. Firstly, if titanium is created by that process just described and there is a counterpart decay of charge from the thorium, the energy of 45 titanium atoms will be absorbed in taking up the 11×90e charge of the thorium nuclei. Secondly, we are left with energy which can be deployed to create a mix of titanium and copper atoms. We now need to know whether we can match that energy with a low number of such atoms to assure the cold fission result.

    The mass-energy of 11 thorium atoms is found from their atomic weight as being that of 11 times 232.038 a.m.u. or 2552.418 and if we subtract the normal mass-energy of 45 titanium atoms, namely 45 times 47.90, this reduces to 396.918. Now, suppose the residual system tries to create one titanium atom and put the rest of the energy into copper-63. It would have enough energy to produce 5.54 such copper atoms but that means that the mass-energy of 34 a.m.u. is set free. This is far more than makes sense in a ‘cold fission’ scenario. Try next the creation of two titanium atoms. We are then left with enough energy to create 4.78 copper atoms. That gives an even larger release of energy and so that cannot occur. Moving on, try creating three additional titanium atoms. This leaves a residual mass-energy of 253.2 a.m.u. Now this is enough to create 4.023 Cu-63 atoms, because the isotope Cu-63 has an atomic mass of 62.93 a.m.u.

    So we find that the energy surplus is 0.023 times 62.93 a.m.u., which is 1.447 a.m.u. Spread amongst the decay of nine sets of 11 thorium atoms, this sums to 13.023 a.m.u., enough to produce 13 added nucleons in the resulting product and leave a small surplus that could be released as heat. In this circumstance cold fission begins to look possible, with a surplus energy of the order of 0.023 a.m.u. from nine times that original 2552.4 a.m.u. of thorium. Based on the conversion of 0.1 gram of thorium, as reported, the heat produced should then have the mass-energy of 10⁻⁷ gm. Multiplying by c², which is 9×10²⁰, this is 9×10¹³ ergs or 9 megajoules. Spread over a one hour period this is a heat generation rate of 2.5 kilowatts.
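    The trial-and-error balance and the closing heat estimate can both be reproduced from the masses quoted in the text (Th 232.038, Ti 47.90, Cu-63 62.93 a.m.u.); the text's rounded surplus of 0.023 a.m.u. per nine groups is taken as given for the heat figure:

    ```python
    # Energy bookkeeping for one 11-thorium group, rounded masses as in the text.
    residual = 11 * 232.038 - 45 * 47.90      # 396.918 a.m.u. after 45 Ti
    for n_ti in (1, 2, 3):
        left = residual - n_ti * 47.90
        print(n_ti, round(left, 3), round(left / 62.93, 3))
    # -> 5.546, 4.785 and 4.024 copper atoms; three extra Ti comes
    #    closest to an integer number of Cu-63 atoms.

    # Heat estimate, using the text's surplus of 0.023 a.m.u. per
    # nine groups of 11 thorium atoms:
    frac = 0.023 / (9 * 11 * 232.038)          # fraction of mass shed as heat
    mass_g = 0.1 * frac                        # from 0.1 g of thorium, ~1e-7 g
    watts = mass_g * 1e-3 * (3e8) ** 2 / 3600  # E = mc^2, spread over one hour
    print(mass_g, watts)                       # ~1e-7 g, ~2.5 kW
    ```
    
    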

    Now, of course, these numbers may not be accurate enough to get a true estimate of the heat generated. However, even 2.5 kilowatts, though insignificant in terms of the nuclear reaction we are discussing, is high in relation to the 300 or so watts needed to operate the Cincinnati Group’s device. There is clearly more to this cold fission process than we thought! Indeed, one really needs to look for the reason, as seems now evident, for all the mass energy shed by the thorium decay to be deployed into actual mass, rather than heat.

    At this point I can but look again to the role of that supergraviton system. It provides the energy in counterbalance with that of matter and it really links the system of thorium atoms in a kind of resonant state. This could well mean that any small residue of energy from the decay of those groups of 11 thorium atoms could be held in an energy pool, as by a quantum-electrodynamic effect involving meson creation and decay, pending deployment into forming titanium and copper atoms as other groups of thorium atoms decay.

    We will therefore go back and start again on this energy analysis, looking now, not for a perfect energy balance, but for some more detailed clue that will tell us about the weight ratio of the titanium to copper atoms created by that decay. Also we will try to understand how the energy is deployed amongst different isotopes.

    Looking now at the isotopic masses of the five isotopes of titanium we see that the dominant isotope Ti-48 has a mass of 47.948 a.m.u. To this level of precision, this value, stepped up by 1 a.m.u., happens to apply also to Ti-49. Copper has two isotopes, Cu-63 with a mass of 62.930 a.m.u. and Cu-65 with a mass of 64.928 a.m.u. The mass-energy of 11 thorium atoms is found from their atomic weight as being that of 11 times 232.038 or 2552.418 a.m.u. and if we subtract the mass-energy of 45 titanium atoms (Ti-48), namely 45 times 47.948, this reduces to 394.758 a.m.u. Now follow the argument used before and create the three extra titanium atoms to reduce the residual energy further to 250.914 a.m.u. This is very nearly enough to create four Cu-63 atoms, but it cannot do that, so we must look to the possibility that there are more than two groups of 11 thorium atoms in this atom building process.

    We seek a further clue in that Z=43 activity. The titanium atoms are produced in pairs, whereas the copper atoms are produced three at a time. We require that Z=43 process to create a multiple of three titanium atoms and a multiple of two copper atoms in order to arrive at that low energy situation. Noting that we need to create in this way three titanium atoms per 11-thorium group, this mix of added atoms comes about if we suppose that the decay of 6 groups of 11 thorium nuclei can share in the action. That involves 18 titanium atoms and 3n copper atoms, these being divisible by 2 and 3, respectively. We need to determine n.

    The residual energy from the single group, before creating the copper atoms is 250.914 a.m.u. However, we now have six such groups sharing the action. Therefore that energy becomes 1505.484 a.m.u. This is enough to create 23 copper atoms, but neither 23 nor 22 has the form 3n, so we can only create 21 copper atoms. That leaves us with a scenario of having created in that 6×11 thorium group decay, some 288 titanium atoms plus 21 copper atoms and we have some energy to spare to build some of these atoms in their higher isotope form.
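    The refined balance with isotopic masses, and the pooling of six groups, checks out numerically; the sketch below follows the text's figures step by step:

    ```python
    # Refined balance with isotopic masses (Ti-48: 47.948, Cu-63: 62.930 a.m.u.)
    residual = 11 * 232.038 - 45 * 47.948   # 394.758 a.m.u.
    after_3ti = residual - 3 * 47.948       # 250.914 a.m.u.
    print(after_3ti, 4 * 62.930)            # 250.914 < 251.72: 4 Cu-63 won't fit

    # Pool six such groups; copper must come in multiples of 3:
    pool = 6 * after_3ti                    # 1505.484 a.m.u.
    n_cu = int(pool // 62.930)              # 23 would fit on energy grounds...
    n_cu -= n_cu % 3                        # ...but only 21 is a multiple of 3
    n_ti = 6 * (45 + 3)                     # 48 Ti per group, 288 in all
    print(n_cu, n_ti)
    ```
    
    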

    First, however, we have enough data now to estimate the weight ratio as between the titanium and copper in the resulting product. The ratio of 288×48 to 21×65 is 10 to 1. I now quote from the ‘Third Party Verification’ account on p. 20 of that issue of ‘Infinite Energy’:

    Comparison of the blank data with the processed test sample data indicated that significant quantities of titanium and copper had been produced. The concentration of titanium in the processed sample was 10 times greater than the copper concentration.

    So now let us see what that residual energy means for the production of isotopes. The 6×11 thorium atoms contribute 15,314.51 a.m.u. The 288 Ti-48 atoms require 13,809.02 a.m.u. 21 Cu-63 atoms require 1321.53 a.m.u. We have a difference of 183.96 a.m.u., enough to add 184 nucleons to augment the isotopic increment that must occur. The energy has to be deployed and I now assume that those 21 copper atoms all become Cu-65 and that 142 of the 288 titanium atoms are of the Ti-49 isotope. In other words, I am saying that I expect, on this theory, to see the copper-65 dominate the copper component of the resulting fission product and very nearly half of the titanium atoms to be Ti-49.
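    The full tally for the 6×11 thorium decay, including the weight ratio quoted against the third-party verification, can be checked as follows (all masses as given in the text):

    ```python
    # Final tally for the 6x11 thorium decay: 288 Ti plus 21 Cu.
    th_in  = 66 * 232.038                   # 15314.508 a.m.u. in
    ti_out = 288 * 47.948                   # 13809.024 a.m.u. (all as Ti-48)
    cu_out = 21 * 62.930                    # 1321.530 a.m.u. (all as Cu-63)
    spare  = th_in - ti_out - cu_out
    print(round(spare, 3))                  # ~183.95, i.e. ~184 extra nucleons

    # Deploy the surplus: Cu-63 -> Cu-65 costs 2 nucleons per atom;
    # the remainder upgrades Ti-48 -> Ti-49 at 1 nucleon per atom.
    cu_upgrade = 21 * 2                     # 42 nucleons
    ti49 = round(spare) - cu_upgrade        # 142 of the 288 Ti become Ti-49
    print(ti49)

    # Weight ratio of titanium to copper in the product:
    print((288 * 48) / (21 * 65))           # ~10.1, matching the reported 10:1
    ```
    
    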

    Happily, the published data on the analysis of the thorium fission product has bearing on this. One cannot rely on the absolute precision of measurements based on scrutiny of a few samples, but at least one can judge if the isotopic masses have increased above the natural norm.

    I now quote again from that p. 20 of the ‘Infinite Energy’ report.

    Copper has two isotopes of mass 63 and mass 65. The natural abundance ratio of mass 65 to mass 63 is 0.45. The ratio observed in the processed sample is 8.2. This represents an 1800 per cent deviation from the natural abundance ratio. Titanium has five isotopes. The isotope of mass 48 is, naturally, the most abundant. Three of the four minor isotopes produced an isotopic mass ratio, with respect to the mass 48 isotope, which was equivalent to the natural abundance ratio. However, the mass 49 isotope produced a mass 49 to 48 ratio of 0.42. The natural abundance ratio is 0.075. This represents a deviation from the natural abundance ratio of 560 per cent.

    The theory developed is therefore supported by that Cu-65 concentration. There is support from the Ti-49 concentration also, but the observations suggest that this is only about half of the amount predicted theoretically. How can we bring the figures into line, bearing in mind that those 288 titanium atoms were deemed only to be either Ti-48 or Ti-49, in the ratio 146 to 142? The normal abundance is in the approximate ratio 3:3:30:2:2 as between the 46, 47, 48, 49, 50 masses. This would need to change, in order to get a distribution fitting that found experimentally, the resulting ratio being 17:17:170:72:12, if it is to represent 288 titanium atoms. This gives 72/170 or 0.42 as the ratio observed for the Ti-49 versus Ti-48 masses. However, this requires about 70 a.m.u. less than is available on the energy analysis. So, unless that Ti-49 component already measured proves to be an underestimate, one must wonder if some other form of atom already present is experiencing an uplift in its isotopic mass, as by Ni-64 being created from Ni-60. About 18 such transmutations could occur in company with the creation of 21 copper atoms, if we apply this to the above data.
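    The fitted titanium distribution and the nucleon shortfall described here are simple to verify against the 17:17:170:72:12 split proposed in the text:

    ```python
    # Fitted titanium isotope distribution for 288 atoms, as proposed
    # in the text: counts for masses 46, 47, 48, 49, 50.
    dist = {46: 17, 47: 17, 48: 170, 49: 72, 50: 12}
    assert sum(dist.values()) == 288
    print(dist[49] / dist[48])     # ~0.424, the observed 49/48 ratio

    # Nucleon shortfall versus the 142 Ti-49 given by the energy balance:
    shortfall = 142 - dist[49]     # 70 a.m.u. unaccounted for
    print(shortfall, shortfall / 4)  # ~18 Ni-60 -> Ni-64 events (4 a.m.u. each)
    ```
    
    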

    Then I invite you to look at the data for Scan 3 as shown on p.25 of that ‘Infinite Energy’ Special Issue. The data show that 92,342 counts for an atom of atomic mass 64 have appeared alongside the 2546 attributed to Ni-64, whereas, compared with the Scan 2 reference on p. 24, the Ni-60 content has diminished by 79,588. Yet, in contrast, the increment at an atomic mass of 65 (presumably Cu-65) is 90,136. It seems quite evident that nickel already present experiences an isotopic transformation in company with the creation of titanium and copper and does so in amounts that lend support to the theory outlined here.

    Suffice it now to say that this theory comes very close to explaining what is observed and particularly why it is that titanium and copper are the fission products. The supergraviton has revealed itself in a new technological context and we must now await further research results in order to test this theory further.

    To conclude, I stress that the role of the supergraviton is important because it can account for the clustering of thorium atoms in a manner conducive to fission. The warm superconductive aspects of the phenomenon involve the formation of filamentary channels for electron flow through those clusters. The bombardment by electrons (and positrons, because electric current is actually a counterflow of the two charge forms) involves penetration into the atom and preservation of current flow by inductance, the energy sustaining that flow being augmented by deployment of kinetic energy from the atoms, which cools the flow path. If those electrons can concentrate enough energy by their escalation of motion as current carriers, then they might well bring about the transmutation of the nuclei inside those atoms. This argument may seem speculative, but I am building, stage-by-stage, upon foundations laid earlier in my published work. One needs to have an open mind in searching for clues such as those discussed here and not react by quick impression to say something is impossible if the results predicted seem to confirm what is discovered experimentally.

    Harold Aspden, August 30, 1997

    P.S. added September 3, 1997. After reading this Research Note, Mike Carrell asked: “Why zirconium?” Zirconium is the metal used for the electrodes in the Cincinnati Group’s cell. I am no expert on such matters but I think it could be because zirconium can withstand corrosion. I read in Shankland’s ‘Atomic and Nuclear Physics’ (published in 1955) that:

    “Materials used in reactors must be very resistant to corrosion; for example, the steam temperature of a power generating reactor is limited by the corrosion of the fuel elements and of the piping and container, produced by the high pressure, high temperature water. Fortunately, zirconium is very resistant to corrosion under these conditions…”


  • E=2Mc²?

    E=2Mc²?

    © Harold Aspden, 1997

    Research Note: 013/97: May 20, 1997

    This Research Note is my response to a communication sent to me by Peter McNeall, by letter mailed on May 16, 1997 from his address in Houston, Texas. Peter was the very first physicist to check the mathematics of my aether theory. Many years ago, shortly after it was published, he drew my attention to a minor error in the numerical derivation of the fine structure constant in my 1966 book ‘The Theory of Gravitation’. Since then I have heard from him from time to time as he has revisited my theory and sought ways of modifying it or even reconciling it with the relativistic physics of our modern age. His message to me on this occasion raises a point which needs clarification. Hence this open response.

    Peter McNeall’s note reads:

    “In the magical world of matter and energy Einstein’s famous equation E=Mc² applies to particles of matter obeying relativistic mechanics. But our ether particles obey quasi-Newtonian mechanics, so E=Mc² no longer necessarily holds true. The energy of a lattice particle is (2/3)e²/b while its mass is only (1/3)e²/bc², so it appears E=2Mc²! The result possibly has support from Dirac’s relativistic wave equation according to which an electron has a magnetic moment of 1 Bohr magneton, while its spin is only (1/2)[h/2π]. It is just as if the inertial mass of the electron in the spin mode is only one half its normal translational mass – so once again we find E=2Mc²!
    There is no doubt Aspden’s theory is a challenge to one’s sanity and self respect, as is quantum mechanics itself.”
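    The arithmetic behind McNeall's E=2Mc² remark is a one-line cancellation: with energy (2/3)e²/b and mass (1/3)e²/bc², the ratio E/Mc² is 2 regardless of the values of e, b and c. The dummy values below are illustrative only:

    ```python
    # Quoted claim: lattice-particle energy E = (2/3) e^2/b while its
    # effective mass is M = (1/3) e^2/(b c^2), so E/(M c^2) = 2.
    # e (charge) and b (charge radius) cancel; arbitrary values used here.
    e, b, c = 4.8e-10, 1e-13, 3e10
    E = (2 / 3) * e**2 / b
    M = (1 / 3) * e**2 / (b * c**2)
    print(E / (M * c**2))   # -> 2 (up to rounding), independent of e, b and c
    ```
    
    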

    So here Peter McNeall is saying that there are inconsistencies in my theory, it being a fact that I have used a mass value of half the normal particle mass in formulating the centrifugal force acting on my aether lattice particles as they are active in their quantum jitter motion. However, the physics underlying this can easily be explained and justified.

    I begin by saying that I certainly do not need ‘support from Dirac’s relativistic wave equation’. Whatever that equation has to offer to the world of physics and whatever inconsistencies it may produce, that is for its advocates to address. My concern is my own theory and it can stand firmly on its own foundations.

    McNeall is wrong in saying that the aether lattice particle of my theory has a mass-energy of half that given by the J. J. Thomson formula relating energy, charge and charge radius. E=Mc2 is deduced from first principles in my writings, dating from that 1966 publication mentioned above, and it holds, validly founded on the principle that a charge possessing electric field energy E will respond in an accelerating field exactly so as to conserve energy and thereby exhibit an inertial mass M given by that formula. However, the response of a charged particle depends upon governing constraints, and these can be different where that particle is not able to move freely. Aether lattice particles form part of a structured system having a microscopic quantum jitter motion which is their primary motion. In contrast, an electron moving freely in a particle accelerator has only a minor quantum jitter component of motion, and that does not affect the derivation of the formula E=Mc2.

    The governing constraint in the case of aether lattice particles is the synchronous phase-lock which holds between all those particles as they move collectively in small quantized orbits so that the whole structure jitters in harmony, there being one aether particle in each lattice site. Now, (see for example pp. 72-73 of ‘Physics Unified’) once E=Mc2 has been derived on a good physical foundation, onward progress from there to deduce the formula for the so-called ‘relativistic mass increase’ is simple algebra plus one assumption. The assumption is that energy is conserved and not radiated as the speed increases. Einstein’s theory is not involved. However, if the particle in question is constrained to conform with phase-locked motion, albeit gaining energy and speed in orbits of increased radius, then the particle mass cannot change. Simple harmonic motion at a fixed frequency is a characteristic of there being a linear restoring force rate governing radial displacement and an invariant mass. The aether particle does not change its mass as the aether absorbs energy! As a result, and even at speeds that are a significant fraction of the speed of light, the mass remains constant and Newtonian mechanics are applicable. Einstein’s theory has no place here.
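    The ‘simple algebra plus one assumption’ referred to here is, in outline, the standard textbook route (not unique to this theory): take E = mc2 as holding at every speed and assume that all work done on the particle is retained as its energy. A sketch:

```latex
% With E = mc^2, F = \frac{d(mv)}{dt}, and dE = F\,dx = v\,d(mv):
c^2\,dm = v\,d(mv) = v^2\,dm + mv\,dv
\;\Longrightarrow\; (c^2 - v^2)\,dm = mv\,dv
\;\Longrightarrow\; \frac{dm}{m} = \frac{v\,dv}{c^2 - v^2}.
% Integrating from the rest mass m_0 at v = 0:
m = \frac{m_0}{\sqrt{1 - v^2/c^2}}.
```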

    However, there is a very important element to all this which needs clarification. In my earlier writings I recognized that the aether lattice particles exhibited centrifugal effects as if they were only of half their normal mass. I justified that by noting that when a spherical particle is immersed in an incompressible fluid of equal mass density then the particle does respond inertially as if its mass is halved. This is a standard fact known in hydrodynamics. That is why I introduced the half-mass factor in working out the formulae governing aether lattice dynamics. It gave perfect answers quantitatively and allowed derivation of the value of the fine structure constant to part-per-million conformity with its measured value.

    My last book presenting the formal analysis on that subject dates from 1980 (‘Physics Unified’) and progress thereafter has been reported in published scientific papers. In the course of these developments I had struggled relentlessly to derive from first principles what is known as the ‘Neumann Potential’. This is the stepping stone to understanding the physical basis of the law of electrodynamics, though I knew the form of the latter from my earlier work based on empirical analysis. In short, I had already (before 1959) discovered the form of electrodynamic law that gave what was needed for unification with the law of gravitation and it had empirical foundation, but my ambition was to derive that law from first principles, starting with Coulomb’s law of force between electric charges. The Neumann potential had been the starting point for deducing the applicable law of electrodynamics.

    It was in my paper [1985g] that I introduced an account of Fechner’s Hypothesis to lead into the derivation of the Neumann potential, but here I was adopting something that can be read in Clerk Maxwell’s treatise. The starting point was a formula that can best be described as a ‘mutual kinetic energy’ term involving the square of the relative velocity as between the two interacting particles. Then, some three years on from there, I completed the link back to Coulomb’s law as the starting point [1988a].

    This progress following my 1980 book has clarified that issue of the mass-halving feature of the centrifugal force of the aether particle, as I now explain. The Fechner hypothesis, as I interpret it in modern terms, requires that what we see as a charge e moving at velocity v is really a charge e moving at velocity v/2 towards a pair of charges e and -e located in the forward field, with the -e charge of that pair moving at velocity -v/2. When the two moving charges meet they annihilate one another and so leave the remaining charge e in a position forward of the original charge e. This action repeats as the energy shed to the aether by the charge pair annihilation regenerates a new charge pair in the field ahead of that charge e and so there is, in effect, a motion of charge e seemingly at the speed v. The reason for this curious state of affairs is apparent when we consider two separate charges moving along in spaced relationship, each at their own speed. To work out the mutual electrodynamic potential as between the two charges one finds that four interactions need to be added. It then works out, basing the analysis on a formula involving the square of relative velocities, that the resulting potential reduces precisely to the empirically-founded Neumann potential.

    So far as concerns the subject raised by McNeall, we can now explain how mass can be different for the centrifugal force of the aether particle. Firstly, concerning the electron advancing through space at speed v with the assistance of electron-positron pair creation and annihilation, there are two masses m moving in opposite directions at speed v/2. If the path they follow is an arc of a certain curvature radius R, their combined centrifugal force is the sum of two components, each m(v/2)2/R. This, however, is only half of the force mv2/R that we know to be applicable to a mass m moving at speed v. It is here that my gravity theory comes to the fore. Every mass m has a counterpart ‘ghost’ mass m linked to it and providing the dynamic balance in that quantum jitter which underlies the motion of all matter. For motion freely through space, that ‘ghost’ mass, which is itself seated in a charged particle system, has to tag along, and so the overall centrifugal force has to be doubled. This results in the standard formula mv2/R.
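    The halving described here is simple arithmetic: two masses m on the same arc at speed v/2 contribute only half the centrifugal force of one mass m at speed v. A quick numerical check, using arbitrary illustrative values:

```python
# Arbitrary illustrative values; any positive numbers give the same ratio.
m, v, R = 1.0, 10.0, 2.0

pair_force = 2 * m * (v / 2) ** 2 / R   # two components, each m(v/2)^2/R
full_force = m * v ** 2 / R             # standard centrifugal formula mv^2/R

print(pair_force / full_force)          # -> 0.5
```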

    Now, had that electron not been in free motion but simply tracking around an orbit of radius r in its quantum jitter motion, that added ‘ghost’ mass would not be moving as if seated with the electron. Instead, it would be positioned on the other side of the orbit, diametrically opposite, because it provides the dynamic balance. It is at the other end of the restoring force that opposes the centrifugal forces set up by both the electron and its ‘ghost’. In other words the centrifugal force formula can, correctly, be written as if the particle has half its true mass.

    This is why the aether lattice particle exhibits a mass that is only half that given by the formula E=Mc2, but only in respect of its centrifugal inertial response in its confined quantum jitter orbit.

    If Peter McNeall regards this as a challenge to one’s sanity, then may I say that a mind that keeps to one track as we advance in physics is likely to get confused when reaching the buffers at the end of the journey. When twos and halves creep into comparisons between theory and experiment, then it needs a little double vision to make sense of what one sees. However, there has to be an answer, if one is on the right track. The routes followed by Einstein and Dirac cannot be right, given that my theory is already well past many of the stations that they could not reach.

    As a footnote, I can but say that delving into the physics of the aether is not an exercise that can be fitted into the tight logical picture that we have formed from observations on the behaviour of normal matter. It may be that physicists can imagine ‘superstrings’ and ‘worm holes’ in a microscopic sub-world of empty space, but that will not give the precise numerical answers that have emerged as a check on my more mundane vision of the aether world. The aether particles formed into a simple cubic structure and held displaced collectively so as not to be at positions of negative potential, thanks to centrifugal action set up by motion in orbits which define the Planck quantum of action, tell us enough, without getting into ‘worm holes’. I will never know what it is that accounts for that charge pair-creation activity in space, but explaining that has not been my challenge. I have sought simply to decipher the coded version of Nature that is locked into the numbers representing the dimensionless physical constants, particularly those involving G, h, e, c and M/m, the proton-electron mass ratio. I had to do this using a physical model of the structure of the aether, because numerology alone leads one nowhere. However, the twos and halves, such as discussed above, though simple numerically have been equally challenging.

    *****

    Some readers, those familiar with the wave properties of electrons, will wonder how what is said above fits in with the diffraction and de Broglie wavelength properties of an electron. Well, it is beyond the scope of this Research Note to get into the problems of the interplay of photons and the electron as the electron travels at speed through the aether. As these Web pages expand I shall be providing a full account of this relationship, building from what is said in Chapter 4 of my book ‘Physics without Einstein’. However, in the meantime a useful reference is my paper in Physics Letters A, ‘A causal theory for neutron diffraction’, which dates from December 1986 (see volume 119, pp. 105-108), listed as abstract [1986k] in these Web pages.

    This shows how four photon spin units involving a 3x3x3 array of aether lattice particles cooperate in setting up standing waves at the de Broglie wavelength. The unit of priming energy associated with the combined spins of these four photon components is the energy involved in the annihilation or creation of an electron. The rate of spin is a function of the speed of an associated electron and the change of spin energy equals the kinetic energy of the electron. However, these photons have a transient existence as the system characterizing the electron alternates between a state in which the electron exists in company with these photon units and states in which it is not moving in the electromagnetic reference frame but has the company of one or more electron-positron pairs, which have the kind of motion described above in this Research Note. The Physics Letters paper just referenced concerns also the way in which electron-positron pairs are involved in setting up the wave properties which account for neutron diffraction. Indeed, just as the electron alternates between states, so the neutron alternates continuously between its four states, as illustrated in Table I of the Hadronic Journal paper referenced as [1986d] in the abstracts included in these Web pages. Although the electron and the neutron have state transitions that differ considerably owing to their entirely different compositions, the photon activities associated with their motion through the aether are identical in character, resulting in their respective de Broglie wavelengths being similar except for the different mass values involved.
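    On the closing point about de Broglie wavelengths: in standard physics the relation is lambda = h/mv, so at a common speed the electron and neutron wavelengths differ only through the mass in the denominator. A small illustration (the common speed chosen is an arbitrary assumption):

```python
# Standard de Broglie relation: lambda = h / (m * v).
h = 6.626e-34    # Planck's constant, J s
m_e = 9.109e-31  # electron mass, kg
m_n = 1.675e-27  # neutron mass, kg
v = 1.0e6        # common speed in m/s (arbitrary illustrative choice)

lam_e = h / (m_e * v)
lam_n = h / (m_n * v)

# The wavelength ratio is just the inverse mass ratio, about 1839.
print(lam_e / lam_n)
```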


  • FLYWHEELS AND ANTI-GRAVITY

    FLYWHEELS AND ANTI-GRAVITY

    © Harold Aspden, 1997

    Research Note: 12/97: May 19, 1997

    I am here responding openly to a letter dated May 16, 1997 from Ron Thompson, the Scottish journalist who revealed to us the pioneer work of Sandy Kidd as presented in Kidd’s book: ‘BEYOND 2001 – the Laws of Physics Revolutionized’. That book (first published in 1990 by Sidgwick & Jackson – ISBN 0 283 99925 X) told the remarkable story of Sandy Kidd’s valiant efforts to get the world interested in his anti-gravity machines. These machines had flywheels which developed anomalous lift forces. The Appendix of that book was a detailed report by an external test laboratory in Melbourne, Australia, based on 20 sets of test results which gave evidence confirming what Kidd had claimed.

    Seven years have passed since this work was published and since I last had contact with Ron Thompson and, now renewing the contact, Ron has asked me if I could spell out the inter-relation between certain scientific principles so far as they might relate to Sandy Kidd’s machine. The question really was about the evident confrontation the machine poses for Newton’s Third Law, Einstein and the Law of Conservation of Energy.

    My answer below relates generally to any machine that uses flywheels in a way which can demonstrate a small, but real, anti-gravity effect.

    One must always keep faith with the need to conserve energy, but that does not mean we have to confine ourselves to the mechanical energy and the heat energy of the machine under study. Imagine we were born in, and have remained in, an enclosed laboratory on a ship, testing a machine driven by the ship’s propeller as the ship slows down, without knowing that the ship is actually moving on a sea we have never seen. Equally, imagine the ship at anchor, with a flow of water under it keeping the engine turning. The flow of water relative to the ship turns the propeller, and energy is transferred because force multiplied by the speed of the flow is the rate at which work is done. If the gyroscopic machine in our Earth laboratory develops an out-of-balance force, then it too must be involved in an energy transfer process. A scientifically minded person inside that enclosure would infer the existence of the sea outside; a perverted mind might contrive some kind of philosophical explanation. Either way, however, energy must remain conserved.

    The Sandy Kidd machine must, therefore, satisfy the Law of Conservation of Energy, because it has a way of latching onto something that connects with that hidden energy sea that we speak of as the ‘aether’.

    Now, where does Einstein fit into this picture? Well, it seems he could not make up his mind as to whether or not there is an aether, but he assumed, as a philosophical exercise, that we could live in happy ignorance and not refer to it at all. All Einstein did was to provide a kind of computer program presenting a picture of virtual reality which satisfied the concerns of some of those secluded in that enclosed part of that ship. He was born in the year when Clerk Maxwell died and, for a while, he was allowed to wander about the upper decks of the ship. Maxwell’s aether was there in the range of sight. It was well charted in 19th century physics, but towards the end of that century attempts to see it in greater detail, by illuminating it with interfering light waves, somehow obscured the expected detail. Einstein must have known in 1905 about the unsuccessful efforts to measure the ship’s speed by using mirrors immersed in water and timing light signals reflected from those mirrors. (This is just an anecdotal way of referring to the well-known experiment by Michelson & Morley.) So he then took up a philosophical stance and decided that physics was governed by rules which in no way depend upon reference to that old vision of the aether sea.

    He invented his own imaginary ocean and he, or his followers, called it ‘space-time’. It was tailored to fit the requirement that light travels at a constant speed relative to the observer, even within that enclosed world inside the ship. Then, as the years moved on, Einstein decided that the ship could not move on a steady course but must follow a curved track and, still without involving reference to that aether sea, he saw that as explaining why there is a gravity force. So, if the Sandy Kidd machine exhibits an out-of-balance force and it has to be explained by Einstein’s theory, someone has to figure out why the ship can go off a rectilinear course in its space-time travel as a function, not of the curvature of ‘space-time’, but of the operating state of the Kidd machine.

    In fact, on aether theory, the simple answer I offer applies to the case where the ship is at anchor and the force arises from the propeller reaction, but where, concerning gravity, that ship’s anchor slips a little occasionally, as I explain below.

    First dealing with the question of Newton’s Third Law of Motion, which requires action and reaction to be equal and opposite, this again holds valid so long as the aether is accepted as a component part of the system. You see, if there is force balance in an overall system which includes the aether and you are foolish enough to take away the aether then you give yourself a problem with Newton’s Third Law. That, however, would be your own invented problem and not one confronting Sandy Kidd’s machine.

    As to spinning flywheels, remember that, once spinning, they are difficult to manipulate by turning the spin axis, owing to their inertial effects. The right way to explain the force of gravity acting on that flywheel is as a force on ‘something’ that is separate from but ‘anchored’ to the mass elements of that wheel. The flywheel exhibits weight. However, there are, in effect, two systems spinning about that spin axis, the wheel proper and that ‘something’. The apparent weight of the flywheel is really the gravity force conveyed through that link with the anchor. The Kidd machine is geared to connect with the varying orientation of that flywheel system. One must then ask what happens if that ‘anchor’ does slip and the flywheel becomes partially disconnected from that ‘something’. The weight property of the wheel is then affected, because that ‘something’ is part of the hidden aether.

    Now, slip it will, because there are two spinning systems and it takes quite a force to displace a flywheel from its plane of spin but the machine is only applying that force to one of those two systems. The complementary aether spin of the other system has to adapt to the change as best it can and, though the aether responds very rapidly in its reactions to changes in our material world, its response can be sluggish if it has also to contend with its own spin action.

    To conclude, whatever those who talk in terms of the Law of Conservation of Energy, Einstein’s theory and Newton’s Third Law have to say about the impossibility of anti-gravity machines, the fact that they neither understand what underlies the force of gravity nor recognize the existence of the aether renders their views irrelevant.

    Obviously, it is not going to be easy to break through technologically in developing anti-gravity machines, but if the Sandy Kidd machine has opened a crack through which we can glimpse the way forward, then his efforts and those of Ron Thompson in supporting him are to be commended.


  • END OF SCIENCE?

    END OF SCIENCE?

    © Harold Aspden, 1997

    Research Note: 11/97: May 5, 1997

    This Research Note is being written on May 5, 1997 after reading two articles on page 14 in today’s issue of THE TIMES (London newspaper). In the dominant article, Nigel Hawkes reports on a new book by John Horgan entitled ‘The End of Science’ which implies that everything worthwhile has already been discovered, even though such a belief was attributed also to the eminent physicist Lord Kelvin at the end of the 19th century. Kelvin is quoted as saying that the future truths of physical science were to be looked for in the ‘sixth place of decimals’ but it was noted that this was ‘just before Einstein transformed the entire understanding of the subject’.

    John Horgan is a senior writer for Scientific American and it seems that his conclusion is that ‘the great era of scientific discovery is over’. Well, Mr Horgan, if that is true where, in Einstein’s theory, can I find the derivation of the value of G, the constant of gravitation? By that, I mean the derivation of G to the ‘sixth place of decimals’ in terms of other fundamental constants of physics which have been measured to that degree of precision.

    Surely, the views of Lord Kelvin deserve our attention. Something determines the universal values of the fundamental physical constants and we would be better employed in our scientific endeavour if we try to decipher the coded information contained in those numbers, before we lose ourselves in a so-called four-space world and its Big Bang origins.

    The other article by Nigel Hawkes, standing alongside the commentary on Horgan’s book, was headed ‘A challenge to Einstein’s theory’. It referred to something published in Physical Review Letters. It said that physicists Dr Borge Nodland of the University of Rochester in New York and Dr John Ralston of the University of Kansas had analysed 160 observations of distant galaxies to find something quite remarkable:

    ‘Radio signals coming in from one direction – the constellation Sextans – appeared minutely different from the ones originating 90 degrees away in the sky. The polarization, or preferred direction of oscillation, of the radio waves differed, depending on which direction the physicists looked.’

    Upon reading this my attention was drawn back to some words that Nigel Hawkes used to introduce this subject:

    “To show that the universe as a whole behaves differently, depending which way you slice it, has momentous implications. For a start, it would overturn Einstein’s theory of relativity, which holds that physical laws are the same everywhere in the universe.”

    What, I wondered, had Nigel Hawkes in mind when he used the words ‘which way you slice it’? Had he at some time read my book ‘Modern Aether Science’, where chapter 16, entitled ‘The Cosmic Aether’, actually provides evidence to show that space is sliced up into discrete domain configurations? The analogy familiar to the physicist is the way in which the state of ferromagnetism in a crystal is sliced up into discrete ‘magnetic domains’ which exist, not as a physical structure comprising atomic matter, but only as a fluid field condition.

    Well, let me tell you why what Dr Borge Nodland and Dr John Ralston have discovered comes as no surprise to me. It was some thirty years ago that I published the second edition of my book ‘The Theory of Gravitation’ and that was five years before my book ‘Modern Aether Science’ was published. [Both of these works can still be supplied to anyone seriously interested, as a few copies remain in print – see the Book/Report section of these Web pages].

    At the time I worked for IBM and visited U.S.A. quite regularly on company business. I mentioned my interest in Aether Science to a colleague at IBM’s Corporate Headquarters and he kindly suggested that I should see a Professor Thomas at Columbia University in New York on my next trip. Professor Thomas was an expert on Einstein’s theory, known in connection with the ‘Thomas Precession’, and he had a standing retainer as a scientific consultant to IBM. A meeting was duly arranged and, in advance of my meeting, I mailed to Professor Thomas a copy of my 1966 book ‘The Theory of Gravitation’, then just published. The book showed how one could derive the dimensionless fine-structure constant to that sixth place of decimals and went on to apply the same theory to derive the value of G, all based on a physical interpretation of an aether that had properties akin to those found in the underlying field structure of a ferromagnet. I had, however, at that time not gone so far as to slice space up into domain regions. The thought had not occurred to me, as my primary interest was in showing that the force of gravitation along with the internal forces which cause mutual attraction inside a magnet had a common origin in a synchronous charge motion, though on an entirely different scale.

    So, I met with Professor Thomas, only to find that he had not read my book at all and had no questions based on any preliminary review. He merely listened to what I had to say about my theory. It was not possible to cover the full ground in the hour or so that we spent together. Nevertheless the meeting was helpful because, after listening to how I justified the electrostatic aspects of my interpretation of the aether, he asserted that my calculation would not hold up unless I could prove there was no problem posed by boundary conditions. Curiously, physicists are not at all impressed by calculations which give the right numerical answers. They need to see the physical formulations as rooted in current theoretical activity and that, of course, means Einstein’s territory. In no way, at least in 1966, was there any patience with the idea that the aether should be revived and Einstein’s theory brushed aside!

    Anyway, I had a point of criticism to address and my mind was then focused on that boundary problem. To explain this in simple terms I need first to outline the aether model I was using. We know from the success of Maxwell’s theory that there must be electric charges in the vacuum medium, as otherwise there would be nothing to sustain displacement currents. There is no sense in saying that, because the vacuum is electrically neutral, it must contain particles of positive and negative charge in equal numbers. They would coalesce and form matter. Therefore I took the view that all those particles that contributed to Maxwell’s displacement action must have the same polarity and it seemed appropriate to say that they were all identical in form. To provide the electrical neutralization I made the assumption that there was a uniform background, a charge continuum of opposite polarity, without speculating at all on its true nature. Those particles would then each seek to occupy a neutral position to which they are attracted by electrostatic action, but they would repel one another to take up positions in a structured array. I realized that this array would be a simple cubic form, familiar as I was with the different circumstances applicable where particles have an attractive affinity, as in material crystals where the electron shells of adjacent atoms overlap slightly. Structures that are of face-centred cubic, body-centred cubic, or other such forms are then found in ferromagnets, for example. However, so far as the aether is concerned, the structure is simple cubic!

    So far, this is mere hypothesis, but I was then able to consider energy deployment. If all those aether particles were at their positions of least electric potential then the energy of charge interaction would be negative overall. I could not believe that the aether is governed by a state of negative energy. If it were, then everything would be at rest and be frozen in place. There could be no action and no motion. My flash of genius, if such it was, was that of assuming that each and every one of those aether lattice charges was at a position of least possible energy potential, so long as that potential was of positive value. I excluded negative energy. If that expression has meaning at all it can only apply as a relative expression and the basic aether has to be the absolute bedrock foundation for reference. Note that I am talking about energy and not the speed of light, as most physicists assume when the aether is mentioned.

    On this basis I explained to Professor Thomas that my calculations of the necessary concerted displacement of those particles, meaning the whole aether lattice, fixed a calculable distance in terms of the particle spacing in the aether lattice and this defined a radius about which there had to be universal synchronous motion of all those particles. Here was where I discovered the link with Planck’s constant and the Bohr magneton which I knew featured in the theory of ferromagnetism. However, I had used in my analysis a restoring force rate that invoked my knowledge of electrostatics and gave the right answer when the energy density of an electric field is calculated in terms of that Maxwell displacement. Professor Thomas questioned how I could eliminate boundary considerations as they applied to my aether model.

    He did not enlarge on this point, but when I later sat down to struggle with this issue and imagined a lattice charge system to be displaced within a spherically-bound aether I got an energy density result for field energy storage that was only one third of the true value. To get the right answer the boundaries of the electrical charge system of the aether had to be set by mutually parallel planes. The aether had to be ‘sliced’, to use the above words of Nigel Hawkes.

    This was a rather absurd picture, because common sense tells us that the vastness of space should not involve us in trying to picture the shape of the ultimate boundaries, given that it is far too remote to have any influence. Mathematical principles nevertheless oblige us to envisage a ‘sliced’ space medium, where there are local boundaries that are planar. This is no problem, because this is exactly what we see in the magnetic domain structure inside the crystals which form a ferromagnetic material. The shape of the ultimate boundaries, those of the body of the crystal, does not affect those local domain boundaries. Instead, the latter are orientated according to the local atomic lattice structure of the crystal.

    What, then, constitutes a planar space boundary? The answer I have adopted is that, on one side of a boundary, the aether particles have positive polarity and occupy a negative charge continuum, whereas on the other side of the boundary the aether particles have negative polarity and occupy a positive charge continuum. Note that this assumption seemed to be the only option. It made sense because it compensated for the asymmetry that the aether would otherwise need in its overall perspective. I had been forced into this interpretation by that critical observation of Professor Thomas. But I was not deterred from belief in the theory I had evolved. On the contrary, all its results, qualitative and quantitative, remained intact, but there was some spin-off that led to something new.

    I asked myself what would happen if a body such as Earth, in moving through space, actually passes through such a boundary and for a few moments of transit at its cosmic speed of several hundred kilometres per second it occupies two regions of aether divided by that boundary. Well, the answer I could give without enlarging on my theory as it stood was that the Earth’s magnetic field would reverse as a result of that transit and, in the astride-the-boundary position, there would be a reversal of the gravity force as between the two regions. In other words, during the passage, the Earth would experience enormous upheavals, earthquakes on a scale we cannot imagine.
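    The ‘few moments of transit’ is easy to put a rough number on. Taking the Earth’s mean diameter and an assumed cosmic speed of 400 km/s (one value within the ‘several hundred’ range mentioned above):

```python
# Rough order-of-magnitude arithmetic only; the 400 km/s figure is an
# assumed value within the range quoted in the text.
earth_diameter_km = 12742   # mean diameter of the Earth
speed_km_s = 400            # assumed cosmic speed

transit_s = earth_diameter_km / speed_km_s
print(transit_s)            # roughly 32 seconds
```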

    Consequently, I went in search of evidence to see if there were indications of any correlation between geological events and the reversal of the geomagnetic field. I found that evidence in August 1970, and it was in March 1972 that I published ‘Modern Aether Science’; but see also chapter 8, entitled ‘The Cosmic World’, in my 1980 book ‘Physics Unified’. Yes, there was such evidence, very strong evidence, but there was a surprise as well. The timing of the reversals, which occur in a seemingly irregular pattern, can be deciphered in a way which suggests that the space boundaries exist in a three-dimensional sense. It is as if space is sliced by three mutually orthogonal sets of parallel boundaries!

    There had to be some other way in which a planar boundary system could divide regions of space which would account for the antigravity transitions and the magnetic field reversals. Was this possible, without affecting that electrostatic restoring force rate which dictated the need for the set of planar boundaries separating regions of alternating charge polarity? Well, I have stated that the aether particles describe tiny orbits about the charge centres to which they are attracted. They also move in synchronism and, as seen in the planes of their orbits, they can all describe those orbits in either a clockwise or an anticlockwise sense. Here, then, was the clue needed to interpret the other boundary forms. Not only does the aether medium preserve a kind of symmetry in its electric charge system when viewed overall, notwithstanding the particle-continuum asymmetry of the local space domains, but it also assures an overall balance of angular momentum, notwithstanding the
    concerted orbital patterns of motion in its local space domains.

    The analogy of the ferromagnet was, indeed, quite relevant, because, looking inside a ferromagnetic crystal and in a direction parallel with its overall natural polarization, there are domains separated by parallel boundaries and the spins accounting for ferromagnetism have opposite directions in adjacent domains.

    If you now ask how an electromagnetic wave might be affected in its passage across a space domain boundary, you will see that the electric field oscillations can be orthogonal to the orbits of the aether particles or in the plane of those orbits, depending upon their direction of motion through space. In the former situation they will pass through unaffected. However, in the latter situation they will have an energy interaction which affects their intensity on entry through a domain boundary and restores it to its original condition on its exit through a domain boundary.

    This is my immediate reaction to the May 5th account I read in today’s issue of THE TIMES. The findings there reported do seem consistent with space being ‘sliced’ by those planar boundaries. However, the point is that we do not have to wait the 100,000 years or so before the next Earth transit through such a space domain boundary, because we can, it now seems, interpret the electromagnetic radiation from distant stars to learn something about the
    orientation of those space domains.

    We will not see ‘The End of Science’ predicted by author John Horgan until we come to understand those space domain boundaries. We already have the solutions to the puzzles
    posed by gravitation, the proton-electron mass ratio and the fine-structure constant, without the need for superstring theory and the like, and with them we have met Lord Kelvin’s ‘sixth place of decimals’ standard of confirmation. However, that assumes that the reader is interested in delving into Aether Science as presented in these Web pages.


  • COSMOLOGICAL DILEMMA?

    COSMOLOGICAL DILEMMA?

    © Harold Aspden, 1997

    Research Note: 010/97: April 27, 1997

    Have you ever wondered about the creation of stars and planets? You surely have, but you well know that astrophysicists have expert knowledge on this subject, and so all you can do is be attentive to what they have to tell us.

    This research note is being written on Sunday, April 27, 1997. Yesterday Nigel Hawkes, the Science Editor of The Times (London, U.K.) reported on page 4 of that newspaper a news item headed ‘Discovery of giant planet suggests other Earths exist’. That, of itself, is hardly news. There are so many stars like the Sun in the universe and it is plain common sense to assume that there are countless other planets similar to body Earth in the vast arena of
    space.

    No, the news item that I found of interest was summarized in the last paragraph of that report. It followed the comment that there is now evidence that some stars have giant planets that can be closer to the star than Earth is to the Sun. Hawkes wrote:

    ‘The unsolved mystery is why such massive planets should form so close to their parent stars. Current theories of the birth of the solar system suggest that large planets could form only a long way from stars.’

    He ended by quoting Dr. Robert Noyes of the Smithsonian Institution Astrophysical Observatory in Cambridge, Massachusetts as saying:

    ‘The whole picture of solar-system formation needs to be looked at afresh in the light of these new planet discoveries.’

    Now, first, before one starts reading the history books on science to see what good ideas are already of record, let us take stock of a statement made in this newspaper report. It said that the newly-discovered giant planet was slightly more massive than Jupiter and was in orbit around the star Rho Coronae Borealis, which is 50 light years away. It further quoted Dr. Timothy Brown of NCAR (National Center for Atmospheric Research in Boulder, Colorado) as saying:

    “All the giant planets found so far orbit Sun-like stars. Rho Coronae Borealis is another one of these, but it appears to be ten billion years old – twice as old as the Sun.”

    Let us here apply a little logic. If, when a new solar system is formed, the large planets can only come into being ‘a long way from the star’, then it follows that, with the passage of time,
    they will either wander further and further away from the star or progressively get closer to that star. It is all a question of how energy and angular momentum adjust with time, but the same assumptions must be applied to the planets in orbit around the Sun and those in orbit around that more-aged star. So we find that when the star is twice as old as the Sun the giant planet is seen to be very much closer to the star than its counterpart in our solar system. I do not find that at all surprising, if the assumption is made that the planets will eventually die by falling into their local star. What would be surprising would be the discovery that planets can escape from their orbits with the passage of time and wander off into outer space, but that cannot be if the older star has managed to pull them in closer and closer from the orbit in which they were created.

    That, however, goes against the theme of that ‘unsolved mystery’, because its authors seem to ‘know’ that the planet was ‘born’ in the precise orbital location where it is today, or rather 50 years ago, allowing for the observational delay. Who is to say where a planet is born, merely from observation after billions of years?

    We must, of course, look again at the theory of planetary creation, but dare I suggest that we should first decide how the individual stars were formed, before we extrapolate into the realm of planetary creation?

    If you, the reader, are ready to learn something about my published work on that subject, which is quite comprehensive and will tell you how large planets form from their parent star, then Chapter 8 of my book ‘Physics Unified’ is available. However, what you will learn by referring to that work is that the star is born at a time when gravity ‘switches on’, as it were. If that surprises you, then you should be equally surprised by the fact that the phenomenon of
    ferromagnetism which pulls iron into itself ‘switches on’ when the iron cools through its Curie temperature. It is, surely, plausible to assume that cooling played a part in the coalescence of astronomical bodies from dispersed cosmic dust.

    Of course, this means that I am assuming a justifiable analogy between the electrodynamic processes that occur in a ferromagnet and those that occur on a cosmic scale, but that kind of assumption is the driving force urging cosmologists forward in their quest to find a Unified Field Theory. Contrary to what they have to say, the story I tell is one of conquest in that territory, but it has already been told before, as in my book ‘Physics Unified’. All I am doing
    now, as I watch the new discoveries, is saying “I told you so!”

    In that book you will see that, whereas Einstein tried to replace the aether with ‘four-space’, I stuck to the aether picture and first solved the mystery of energy storage by magnetic inductive reaction and then solved the problem raised by the Michelson-Morley experiment, but that is a long story.

    So far as the creation of a star is concerned, what happens is that the primordial star acquires a net charge owing to its protons aggregating together on a priority basis before the full measure of neutralizing electrons comes along. You see, the mutual acceleration of attraction between two protons is greater than that between two electrons by a factor of 1836, the proton-electron mass ratio. Think about it and work it out yourself! My research told me that the radial electric field set up inside a plasma body would promote a spin in the underlying aether and this is why stars were born in a state of spin. As the charge was neutralized by the arrival of the electrons, that spin was enough to cause the planetary matter to be thrown off, but, with the passage of time, as the stellar system settles down, the chances are that those planets so formed will drop back into the star as energy is dissipated.
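    Taking up the invitation to work it out, and reading the ‘mutual acceleration of attraction’ gravitationally (an assumption on my part), the factor of 1836 follows directly: for two equal bodies a distance r apart, each accelerates at Gm/r², so the ratio for proton pairs versus electron pairs is simply the mass ratio.

```python
# Gravitational mutual acceleration of two identical particles a distance
# r apart: F = G*m^2/r^2, so a = F/m = G*m/r^2. The acceleration ratio for
# proton pairs versus electron pairs is then just the mass ratio.
# (Reading the text's "attraction" gravitationally is our assumption.)
G = 6.674e-11            # m^3 kg^-1 s^-2
M_PROTON = 1.6726e-27    # kg
M_ELECTRON = 9.1094e-31  # kg

def mutual_acceleration(m, r):
    """Acceleration of each of two equal masses m separated by r."""
    return G * m / r**2

r = 1.0  # any common separation; it cancels in the ratio
ratio = mutual_acceleration(M_PROTON, r) / mutual_acceleration(M_ELECTRON, r)
print(f"proton/electron acceleration ratio = {ratio:.0f}")  # ≈ 1836
```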

    If you think this is all hypothesis then look at that book ‘Physics Unified’ and see how the numbers work out. By that I mean that you can actually calculate how energy and mass are
    apportioned between star and planets. If you do not want to buy my book but are just curious to see the case I present, then be patient. It is only a question of time and the equally limited resources I have at my disposal. I shall, as the months go by, be adding the analysis to these Web pages, but if you are a student of cosmology and are in a hurry to get the measure of this so as to avoid wasting time studying false doctrine about the Big Bang and such like, then you really need to read up on what I have been writing about over the years. The alternative is to read about the evolving saga of observations in cosmological science that do not fit accepted theory as regularly reported in science journals and then try to
    decipher the developing confusion for yourself.

    By the way, long before Earth gets too close to the Sun for human comfort, it will pass through a space domain boundary and mankind will be unlikely to survive unless space stations can serve as a kind of Noah’s Ark. Happily, the last such crossing was about 12,000
    years ago and another one is not due for … well, again, read my book ‘Physics Unified’ and you will see yourself how you might be able to predict that event!


  • THE POINTLESS ELECTRON

    THE POINTLESS ELECTRON

    © Harold Aspden, 1997

    Research Note: 09/97: April 24, 1997

    About the title: The word ‘pointless’ can be used in two senses. It can be used to say that something has a form which expands somewhat from a point. Thus the extremity of a
    pencil or tool can become blunted and so lose its pointed form. It can also be used as a way of saying that something is meaningless. So far as the electron is concerned, theoretical physicists have tended to believe that it exists at a point and has no body or form. To me, that is a meaningless concept, and so I say that the point electron is pointless, but my ‘point’ in writing this Research Note is that the electron really does have a body form. This play on
    words may seem a ridiculous way of opening a discussion on a genuine proposition in science, but ridicule seems appropriate and that, sadly, is the state of the art at this time.

    On page 7 of the ‘Interface’ section of THE TIMES (London, April 23, 1997) there is an article entitled ‘The heart of the matter’, authored by Chris Partridge. It relates to a new discovery
    concerning the electron. The insert caption reads: ‘This may force a rethink on electrons’ and the text concludes by quoting the scientist Dr. Ken Long, ‘one of the discoverers of the new sub-electronic particles’, as saying “This might solve problems with the electron, such as the fact that it appears to have mass but no volume.” Dr. Long is at the Imperial College, London.

    Now, I am writing this Research Note on April 26th, 1997 and it was on April 11th that someone in France, Jean Chevalier, who had purchased items of my published work sent me a curious message asking for my opinion about a report in New Scientist of March 1, 1997.
    The report was entitled ‘May the fifth force be with us?’ He wrote: “You certainly should know about this discovery. What should we think?”

    I want, by this Web page text to reply more fully to Jean Chevalier, now that I see a connection with the above-mentioned report by Chris Partridge. I also want to share a few thoughts with others who may be struggling to sustain interest in these new particle discoveries, but yet cannot make much sense of the journalistic commentaries which try to keep us informed.

    The ‘Interface’ article is more informative and it attributes to Dr. Long a new discovery involving the electron. What, we may wonder, is that discovery? It says that ever since J.J. Thomson discovered the electron:

    “physicists have puzzled over its exact nature, as it appears to have charge and mass but no volume. But it has always been assumed that it is a fundamental particle, not made up of smaller particles.”

    Well, in fact, it is not true to say that physicists have struggled to connect electron mass with its charge and form. J. J. Thomson lived for 43 years after discovering that electron and, so far as I know, he had not abandoned his own interpretation of the electron as a sphere of charge having a finite radius of some 1.88 millionths of a billionth of a metre, that is 1.88×10^-13 cm. Indeed, though particle physicists who try to understand the electron’s properties cannot decide how to measure the electron’s exact radius, they conceded a relationship of some kind when they adopted the recommended values of physical constants, which include the ‘classical electron radius’ of 2.81794092(38)×10^-13 cm. Here I am quoting from a 1986 listing published in the U.K. by the Royal Society jointly with the Institute of Physics and the Royal Society of Chemistry. Thomson’s electron has a radius that is two-thirds of this arbitrary notional value.

    Now, there is more than one way of justifying Thomson’s electron radius, one being the association of the electron’s kinetic energy with the measure of magnetic field energy added as a function of speed. This was, I believe, Thomson’s own way of calculating the link between electron mass, charge radius and energy. The alternative method which I prefer is simply that of calculating the electric field energy of the electron as seated outside its
    spherical body form and then adding the additional one third which applies if the electric field energy density inside it is uniform and equal to that applicable at its surface boundary. That implies that the body of the electron has a uniform pressure internally.
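    The arithmetic of that calculation can be checked directly: the electric field energy outside a sphere of radius a carrying charge e is e²/8πε₀a, and adding the extra one third for the uniform interior and equating the total to mc² gives a = (2/3)·e²/4πε₀mc², i.e. two-thirds of the classical electron radius. A numerical sketch with standard constants:

```python
import math

# Standard constants (SI)
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
m_e = 9.1093837015e-31   # electron mass, kg
c = 299792458.0          # speed of light, m/s

# Classical electron radius: r_e = e^2 / (4*pi*eps0*m_e*c^2)
r_classical = e**2 / (4 * math.pi * eps0 * m_e * c**2)

# Thomson-style radius: field energy outside radius a is e^2/(8*pi*eps0*a);
# adding one third more for the uniform interior and equating to m_e*c^2
# gives (4/3) * e^2/(8*pi*eps0*a) = m_e*c^2, i.e. a = (2/3)*r_classical.
a_thomson = (2.0 / 3.0) * r_classical

print(f"classical electron radius: {r_classical * 100:.5e} cm")  # ~2.81794e-13
print(f"Thomson radius (2/3 of it): {a_thomson * 100:.3e} cm")   # ~1.879e-13
```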

    Whether or not you, the reader, like this way of explaining something in modern physics, you will find, if you study my writings on the subject, that J. J. Thomson’s electron radius formula has far more to offer in connection with a unified field theory than you can find in the point-electron notions which pervade quantum electrodynamic theory. But now, it seems that the contest as to who is right might be settled by experiment, given that the electron has something to tell us about its charge volume.

    My book ‘Physics Unified’ offers a full and readable account on the subject and shows how the constant of gravitation can be deduced in terms of the volume of electrified space displaced by the gravitons which mimic the form of the electron on a much smaller volume scale. You need little more than a high school or undergraduate level of knowledge of physics to follow the analysis, but if you want to avoid mathematics completely and still understand the electron and its gravitational association then you need to read my book ‘Modern Aether Science’.

    That said, I ask you to consider other statements in the article about Dr. Long. The core of the news item is contained in the words: “Now research by an international team of physicists at the HERA particle accelerator in Hamburg, including many from the U.K., have been firing electrons at protons at high speeds with results that suggest the electron may consist of smaller particles. If they are correct, some of the ideas about the structure of matter may have to be rethought.” The accelerator “propels bunches of electrons in one direction and protons in the other, banks of detectors monitoring what comes out. The recent experiments
    looked at what happens when an electron hits a proton in exactly the right spot to knock one of its three quarks out.” There is, it seems, an anomaly because, says Dr. Long:

    “We have been seeing more particles coming out than we expected…If the effect is real, it could force a rethink on the electron. It has always been thought that the electron is a fundamental particle, but we believe it consists of smaller particles which some people are already calling preons.”

    This is, of course, all very curious. Physicists will go into the wildest of dreams when they cannot get the energy books and particle numbers to balance and then invent notional particles which may have no true existence, unless one believes in ghosts. The classical such
    assumption concerns the neutrino, which has energy but no mass and no electric charge. Now the electron which supposedly has no volume, but has energy and mass as well as charge, has to be given some body form and, for some wild reason connected with more particles being generated by a high energy collision, that involves a new invention, the ‘preon’.

    Where, I wondered as I read the ‘Interface’ article, is the statement that scientists have reason for discounting the prospect that surplus energy from the collision can actually create particles and their anti-particle forms from nothing, as it were, apart from that energy? After all, quantum electrodynamics tells us that electrons and positrons can appear like magic from the vacuum, given the right energy conditions. Or is it not just the question of the excess of
    numbers of particles that are appearing, but their individual particle mass value being found to be much smaller than that of an electron? The article tells us nothing about the estimated mass of those preons.

    So, we must wait and see, but for the impatient who seeks to know more about the prospect of a sub-electron form being a denizen of the aether that fills space I draw attention to my [1987f] paper entitled ‘The Case for the Sub-Electron’, as recently republished in these Web pages. See the Appendix to Lecture No. 1.

    At this point it is appropriate to refer to the other article, the one in New Scientist. Its inset caption reads: ‘If it is true, this discovery could be on a par with that of the electron or DNA’.
    However, the opening words of the article do not refer to anything so small that it could be part of an electron. The text opens by saying:

    ‘An exotic heavy particle may have made its debut at a particle accelerator in Hamburg. Researchers say it could mark the birth of an entirely new physics…’

    That, however, is about all the article has to tell us other than the fact that this has been named a ‘leptoquark’. It goes on then to say that:

    “The leptoquark is a bizarre object that we don’t understand completely.”

    That, in my opinion, is itself a bizarre statement, given that physicists seem not to understand how Nature creates the electron or the proton, even to this day.

    So, given that these two articles seem to relate to the same discovery, brought about by head-on collisions between electrons and protons, what, one may ask, has been discovered? Heavy exotic particles or minute exotic particles? The aether, by the way, can mediate a dual energy-threshold activity, revealing its presence in its response to cosmic radiation.

    To understand this statement, the reader needs to refer to the concluding pages 159-160 of my book ‘Modern Aether Science’. There was a time when physicists studied, indeed discovered or rather caught a glimpse of, the ‘exotic’ particle by examining cosmic radiation. There was a short paragraph on page 160 of ‘Modern Aether Science’ that is very apt:

    “It must be remembered that when we look up into space we are not just looking at the stars, but are also looking into the aether. If we see things which are difficult to explain in terms of the phenomena we associate with ordinary matter then perhaps we should take note of the aether and develop our understanding of aether science.”

    That paragraph followed a quotation from the February 11, 1970 issue of ‘New Scientist and Science Journal’, an earlier transitory name for the ‘New Scientist’ of today. We go back here some 27 years to that quotation, and 25 years to my book ‘Modern Aether Science’ (just a few copies are now left for sale). The quotation reads: “The main stumbling block to progress is the shape of the X-ray spectrum. This has a curious discontinuity at 20-40 keV, usually termed the kink
    or break; it corresponds to a break at 2-5 GeV in the parent electron spectrum, which is itself hard to explain.”

    So now, in 1997, we are told that high energy collisions involving electrons and protons, respectively of rest-mass energies 511 keV and 0.938 GeV, are creating mysterious particles, both below the lower and above the higher of these energies. It was on page 159
    of my book that I stressed the point that my theory said that, in order to explain gravitation and Planck’s action quantum, G and h, both in qualitative physical terms and quantitatively, in perfect accord with measured values, the aether had to include in its composition a sub-electron form of energy 20.9 keV and a graviton form of energy 2.58 GeV. Then came the words:

    “Such particles, as ingredients of the unseen aether, have never been detected directly, but if the aether contains particles of these dimensions what would be their consequence to electromagnetic wave propagation? Might they not affect frequencies
    corresponding to their annihilation or creation?”
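    As a purely arithmetical footnote to that question, the quoted energies can be converted into the photon frequencies and wavelengths they would correspond to, via the standard relation E = hf = hc/λ. The conversion below is textbook physics; the particles themselves are, of course, the hypothesis at issue.

```python
# Convert the quoted energies into equivalent photon frequencies and
# wavelengths via E = h*f = h*c/lambda. Standard constants are used; the
# particle interpretations are the author's hypothesis, not established
# physics.
h = 6.62607015e-34    # Planck constant, J*s
c = 299792458.0       # speed of light, m/s
eV = 1.602176634e-19  # joules per electron-volt

for name, energy_eV in [("sub-electron (20.9 keV)", 20.9e3),
                        ("graviton (2.58 GeV)", 2.58e9)]:
    E = energy_eV * eV   # energy in joules
    f = E / h            # frequency, Hz
    lam = h * c / E      # wavelength, m
    print(f"{name}: f = {f:.3e} Hz, wavelength = {lam:.3e} m")
```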

    J. J. Thomson discovered the electron and gave good theoretical reasons to show that it had a finite form. That was at a time when scientists did not question belief in the aether, the only problem being that of understanding some anomalies concerning how electromagnetic radiation is affected by reflection in moving mirrors. Once Einstein came along and told the world that everything is an optical illusion and Dirac interpreted some equations concerning the electron, working under the influence of a shadow cast by Einstein, then we lost sight of our aether. With no such foundation on which now to stand as we try to interpret Nature’s
    mysteries we (meaning scientists in general) have lost our way. We do not need to ‘rethink’ the electron but there is need to rethink the Einstein-Dirac picture. So, Chris Partridge and Ken Long, please do not tell me that there was need to puzzle over the problem of an
    imaginary electron which has charge and mass but no volume.

    Needless to say, I would be delighted if the eventual measurements come close to those values just stated, but Nature is full of surprises and we must wait and see. In the meantime, as I proceed in writing these Web pages, it seems appropriate to discuss the recognized weaknesses of Quantum Field Theory, which is closely concerned with the properties of the electron. Also, there will be more reference to gravitons and their technological implications
    as more is added soon to these Web pages.

    In conclusion, I record my appreciation to Jean Chevalier, who stirred me into commenting on a subject which gets itself linked to ‘a fifth force’ and a ‘sub-electron’ or, rather, a ‘leptoquark’ and a ‘preon’.

    For information on the availability of my books ‘Modern Aether Science’ and ‘Physics Unified’ see the listing in the ‘Books and Reports’ section of these Web pages.


  • THE NEW ENERGY SPECTRUM

    THE NEW ENERGY SPECTRUM

    © Harold Aspden, 1997

    Research Note: 08/97: April 20, 1997

    This Research Note has also been published in the May, 1997 issue of New Energy News (NEN).

    For several years now, since I woke up to the prospect that one day our world may derive its power needs from the quantum activity of the omnipresent aether, I have tried to correlate information about the energy anomalies that I find particularly relevant to my interpretation of aether physics. Readers of the April 1997 issue of NEN will see mention of my latest Energy Science Report No. 10, which has been my way of reporting on my research interests in recent times. That Report shows that the New Energy Spectrum extends into the biophysical world of the human body, which seems to exhibit, deep in its molecular structure, a form of room temperature superconductivity and even a microscopic motor action in our body cells.

    I have been struggling, however, to keep at my experimental pursuits on magnetism, reluctance motors and what I call ‘vacuum spin’, whilst trying to generate interest in my early theoretical research on the aether topic, and whilst keeping abreast of developments that I hear about from the world at large. I wish here to comment on three topics that I believe contribute to the New Energy Spectrum. I had planned that two of these would be the subjects of my Energy Science Reports Nos 11 and 12. Also I intended to keep writing such Reports until I had exhausted the material I have in my files, particularly on the themes of cold fusion and thermoelectricity. In the event, I will henceforth be completing this program by publishing instead on my Internet web pages. However, NEN readers may like to have some hint concerning my plans for three of these items.

    Firstly, the experimental findings of Dave Gieskieng (Arvada, Colorado) deserve particular mention. Year after year he experimented in transmitting radio waves across deep canyons. He used an antenna designed to send an E wave in quadrature phase with an H wave and compared the results with conventional dipole antenna transmission which forces the E and H waves to propagate in phase. His findings convinced me that normal radio transmission sheds all the wave energy as heat over a short range from the transmitter but a quadrature-phase EM wave (whether formed ab initio or as a residue of the conventional wave) still ripples on, not transporting energy, until intercepted by another antenna, where energy in the local aether is then tapped. Common sense should tell us that energy proper does not travel at the speed of light. Just imagine two waves traveling through one another in opposite directions and think through the physics of the energy deployment without getting too embroiled in mathematical symbols concerning photons! The experimental findings of Gieskieng should not have been ignored!
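    The energy bookkeeping being argued here touches a textbook point: the instantaneous power flow of an electromagnetic wave goes as the product of E and H, and its time average vanishes when the two fields are in quadrature (as in a standing wave), while in-phase fields carry net power. The sketch below illustrates only that standard result; it makes no claim about Gieskieng’s antennas themselves.

```python
import math

def average_power_flow(phase_shift, n=10_000):
    """Time-average of E(t)*H(t) over one cycle for unit-amplitude fields,
    with H lagging E by phase_shift radians (a stand-in for the Poynting
    flux of a plane wave)."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        total += math.cos(t) * math.cos(t - phase_shift)
    return total / n

in_phase = average_power_flow(0.0)            # conventional radiated wave
quadrature = average_power_flow(math.pi / 2)  # E and H 90 degrees apart

print(f"in-phase average flow:   {in_phase:.4f}")   # ~0.5 (net power flows)
print(f"quadrature average flow: {quadrature:.4f}") # ~0.0 (no net power)
```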

    Secondly, there is my brainchild, the ‘supergraviton’ theme, and its relevance to warm superconductivity, cold fusion and permanent magnetism. I will be reporting on this subject soon in a very comprehensive way, drawing attention to the copious data which supports my proposition that the range close to 101 atomic mass units plays a special role in the dynamic resonance of molecular forms in perovskites, organic matter, etc., and atomic groups in metals. This is marginally below the supergraviton mass of 102.18 amu, because the supergravitons lose a little effect in spreading their action over several atomic sites. I believe thermal energy is regenerated as electricity in the truly resonant states that one can then attribute to certain substances. I will, however, be pointing to recorded evidence of the tuning effects of hydrogen absorption by such molecules.

    Thirdly, and to conclude these remarks with something more specific, I will be drawing special attention to the ‘free energy’ implications of a U.S. patent just cited against one of my patent applications. It is U.S. Patent 4,435,663 granted to IBM and dated March 6, 1984. Its title is ‘Thermochemical Magnetic Generator’. What is described, however, is ‘a thermochemical magnetic generator which uses hydrogen as a working gas and magnetic intermetallic compounds which absorb hydrogen as the working magnetic material.’ The description of the invention says that ‘thermomagnetic generators are devices that convert heat into electricity’. The description further shows that hydrogen is not consumed; it is trapped in an enclosure and merely transferred forwards and backwards from one absorbing substance to another cyclically under the regulated control of heat input. The magnetic transitions induce output electricity in a coil wrapped around the chamber housing the working substance. This patent presents experimental data showing that the mere cyclical variation of hydrogen gas pressure resulting from the heat cycle will generate electricity. This is a room temperature device, but the magnetic state of the intermetallic compound transits through the Curie temperature, converting the ferromagnetic state to the non-ferromagnetic state, merely in response to hydrogen pressure, as thermally controlled.

    My interest is aroused by the fact that the chemical composition of the lanthanum pentacobalt working substance varies by absorption of hydrogen and a group of three such molecules, without the hydrogen, has a mass that is an integral multiple of 100.15. The operative cycle used by this IBM device cycles the composition between states where each molecule has 3.5 or 4.5 hydrogen atoms, respectively. This makes the mass transition one between integer multiples of 100.96 amu and 101.19 amu.

    As I see it, this is evidence of the ‘fine-tuning’ of the supergraviton resonance and, indirectly, it does have bearing on the ‘cold fusion’ theme. However, do not rush to procure a copy of that IBM patent in the hope of building an energy generator. The practical potential seems to me to be very limited. What is important, however, is the experimental confirmation of the physical principles which I can see us harnessing in future power generators.
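    The molecular-mass arithmetic behind those figures can be checked with standard atomic masses (La ≈ 138.91, Co ≈ 58.93, H ≈ 1.008 amu). The sketch below is mine, not the patent’s; the exact decimals depend on the atomic-mass values adopted, and the significance of their proximity to the 101-102 amu range is the proposition argued in the text.

```python
# Reproducing the molecular-mass arithmetic for the LaCo5 hydride cycle
# using standard atomic masses. The division of the three-molecule group
# into 13 units, and the weight placed on proximity to ~101-102 amu, are
# the author's propositions; the decimals shift slightly with the
# atomic-mass values adopted.
M_LA, M_CO, M_H = 138.9055, 58.9332, 1.00794  # amu

laco5 = M_LA + 5 * M_CO        # one LaCo5 molecule
group = 3 * laco5              # group of three molecules, no hydrogen
units = round(group / 100.15)  # nearest integral multiple -> 13

for h_per_molecule in (0.0, 3.5, 4.5):
    mass = 3 * (laco5 + h_per_molecule * M_H)
    print(f"{h_per_molecule} H/molecule: group mass {mass:.2f} amu "
          f"= {units} x {mass / units:.2f} amu")
```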

    To conclude I mention that on March 26, 1997, I was granted GB Patent No. 2,278,491 entitled ‘Hydrogen Activated Heat Generation Apparatus’. It has 18 claims and is part of my, albeit theoretical, efforts to contribute something to the cold fusion theme. I also mention that the British Patent Office has notified me that on April 16th the grant of my GB Patent 2,283,361 will be published. This is entitled ‘Refrigeration and Electric Power Generation’. It bears upon the thermoelectric theme, the subject of my Energy Science Report No. 3, but it also exploits the 101-102 amu supergraviton resonance theme by disclosing why oxidized polypropylene is a room temperature superconductor and showing how this can be incorporated in a thermoelectric power converter. A group of seven molecules in the chain structure of oxidized polypropylene [C3H6O]7 has a molecular mass that is 4 times 101.5 amu.
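    The closing figure can likewise be checked with standard atomic masses; the quotient comes out near, though not exactly at, the quoted 101.5 amu, and the weight placed on that proximity is, again, the proposition of the text.

```python
# Checking the [C3H6O]7 molecular-mass figure quoted for oxidized
# polypropylene, using standard atomic masses. The significance attached
# to the ~101-102 amu quotient is the author's claim.
M_C, M_H, M_O = 12.011, 1.00794, 15.9994  # amu

unit = 3 * M_C + 6 * M_H + M_O  # one C3H6O unit
chain = 7 * unit                # the seven-unit group [C3H6O]7
print(f"[C3H6O]7 mass: {chain:.2f} amu = 4 x {chain / 4:.2f} amu")
```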