Crab Nebula (M1) — supernova remnant imaged by Herschel and Hubble Space Telescopes

Category: Lectures

Lecture series on aether physics

Crab Nebula (M1), supernova remnant · ESA/Herschel/PACS; NASA, ESA & A. Loll/J. Hester (Arizona State Univ.) · NASA Image Library

  • LECTURE NO. 17


    THE MAGIC OF MIRRORS

    TEC II

    Copyright © Harold Aspden, 1998

    INTRODUCTION

    Professors will tell you that “In nature heat is never found to flow up a temperature gradient of its own accord”. From this, they and the textbooks on which they rely advance to the statement of a law according to which it is impossible for any machine to abstract heat from the coldest body of its surroundings and convert this into useful work, surplus to that needed to power the machine. The law thus justified is known as “The Second Law of Thermodynamics”.

    If you are a student of physics or engineering you are thereby indoctrinated and become committed to the belief that if someone, in your later life, whether in academia or in industry, comes to you with a bright idea or proposition about designing a machine that does not keep within the bounds of that particular law, then you are justified in giving vent to your scorn and ridiculing that person for being ill-educated.

    So it is that our world, in which we are so anxious to catch a glimpse of a ray of hope that we may one day inhabit a pollution-free environment, is left in darkness, thanks to the ‘good’ education that we physicists and engineers have received in our university years.

    I am now too old to think that I can put right the damage done by all those professors, but I can suggest that any student who listens to such teaching in the future should pay close attention to the argument used. Now read again the opening paragraphs above and ask yourself two questions: “Where in nature does one ever see a ‘machine’, as such?”, and: “What is the point of building a machine, said to be a heat engine, if all it does is to allow heat to flow ‘of its own accord’?” Surely, the very fact that man has intervened by providing a machine which deploys heat energy in some way, is an intrusion upon that territory of something doing something of its own accord!

    Search the whole spectrum of physics and ask yourself whether you see ‘order’ or ‘chaos’ in nature. Surely you can see both. There is ‘order’ in the behaviour of electrons in atoms. There is order in the way atoms fit together in a crystalline structure. There is order provided by the magnetic domain structure inside ferromagnetic materials. All that order is governed by energy finding its optimized state. Seemingly, however, there is disorder in the energy activity we refer to as ‘heat’, and those professors of yours will delight in introducing you to the mysterious word ‘entropy’. They do not know what ‘entropy’ means, other than saying it is the quantity of heat divided by its temperature and that it can only increase. That is because they have the conviction that heat can only ‘go downhill’ and degrade in quality, meaning that its temperature has to fall inexorably as the heat energy passes on into the oblivion of outer space.

    But suppose there is, in the system we call ‘space’, something that can be said to be a ‘machine’. Then your professors will smile at your suggestion and come back to that Second Law of Thermodynamics. The heat can only go ‘downhill’ in temperature and entropy can only increase!

    It is here that I step in and refer to something of mine that was published in the science journal ‘Nature’ (see reference [1990h] in these Web pages). In fact, I have already introduced this subject in the first chapter, TEC I, of the sequence of commentaries I am putting on the Web as my account of ‘Thermodynamic Energy Conversion’. In this Lecture No. 17 I wish to delve into some simple physics, rather than technological detail, just to ease the path for those who are still under the spell of their professorial teachings.

    The Magic of Mirrors

    Imagine that a thin nylon cord supports two metal spheres, each at a different focus of a concave mirror. Now ask yourself: must the temperatures of those two metal spheres be the same?

    To answer this question refresh your memory of what you may have learnt in your physics lessons. I quote from a 1957 textbook on ‘Geometrical and Physical Optics’ by R.S. Longhurst:

    Suppose there is a source of light inside a sphere; then … [by the analysis here presented] … the flux reflected from each part of the sphere is equally distributed over the other part, or the flux received by an element after reflection at other points is everywhere the same. For a given sphere it can therefore only depend upon the total flux radiated by the enclosed source.

    In your lessons on heat radiation you also learn about black body radiation and how uniformity of temperature prevails within a spherical cavity so that inspection through a small aperture in the cavity allows you to perceive the nature of that ‘black body radiation’. Indeed you are taught that the radiation is very similar to that of a perfect black body at the same temperature as prevails within that spherical cavity. The cavity, moreover, need not be of spherical form, given that equilibrium conditions will prevail anyway and the result of all this is enshrined in what is called The Law of Cavity Radiation.
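
    As a point of reference for what follows, the cavity-radiation law ties the radiation inside such an enclosure to its temperature alone, through the standard Stefan-Boltzmann relation M = σT⁴. A minimal Python sketch (the sample temperatures are merely illustrative):

        SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        def blackbody_exitance(temperature_kelvin):
            """Radiant exitance M = sigma * T^4 of an ideal black body, in W/m^2."""
            return SIGMA * temperature_kelvin ** 4

        for T in (300.0, 600.0, 1200.0):
            print(f"T = {T:6.0f} K  ->  M = {blackbody_exitance(T):11.1f} W/m^2")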

    So, when you come to answer the above question your instincts should be to say that both metal spheres must be at the same temperature.

    Now consider the two metal spheres as being at the focal points of an ellipsoidal mirror which constitutes that radiation cavity, as depicted in Fig. 1.

    FIG. 1

    Here, if only by an argument based on symmetry, you can assure yourself that the two metal spheres will tend to remain at the same temperature. Now, however, ask yourself what happens if the sphere at A is cooled by some internal means. Will it then merely absorb heat from the cavity surface, whilst the metal sphere at B retains its equilibrium temperature as that prevailing at the cavity surface?

    The answer to this is known from Pictet’s experiment or that of Count Rumford, which dates from 1800, as you may see mentioned in the ‘Background of the Invention’ section of my U.S. Patent No. 5,101,632. Metal sphere B will cool down to complement the cooling of metal sphere A.

    Now, although I say the answer is known from such experiments, in the details of those experiments I see reference only to the use of a concave mirror, so we need to look now at what is shown in Figs. 2 and 3.

    FIG. 2
    FIG. 3

    Here the mirror is not a complete ellipsoid but just a concave portion of such a form. We have asymmetry in the apparatus and, yes, we know from those experiments that, if one of A or B is at a temperature different from the equilibrium temperature of the environmental surroundings, then the other will adjust to a temperature that is also different from the ambient temperature. However, what I now ask is whether, without any predetermination of the temperature of A or B by some special heating or cooling means, A and B will adopt the same temperature under normal conditions of equilibrium.

    You can answer this in two ways. You can declare that according to the Second Law of Thermodynamics the temperatures of A and B must be the same, owing to it being contrary to experience for ‘heat to flow from a cooler body to a warmer body of its own accord’. It must do that if A and B are to adopt different temperatures, given that heat energy is conserved. Alternatively you could say that you do not know the answer to the question but that you surmise that, since in Fig. 2 the portion of the cavity housing surface radiating heat to sphere B is larger than that in Fig. 1 radiating heat to sphere A, it seems likely that B will get hotter than A. You can further argue that if the two spheres have the same size, then more heat will be radiated from A to B than is radiated from B to A. So B should become hotter than A.

    You do this by discussing the role of the mirror in capturing radiation from B over a small solid angle, and reflecting it to A, whereas, as is evident from Fig. 3, the mirror captures a large solid angle of radiation from A and reflects it to B. Inevitably you will be in a quandary as to whether you can trust that Second Law of Thermodynamics or whether, in fact, if you build such a device and eliminate air convection, you will actually witness A and B developing a temperature difference.
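
    To put numbers to the solid-angle bookkeeping this argument relies on, here is a small Python sketch. It models the mirror as a circular aperture viewed on-axis and compares the fraction of each sphere’s emission it intercepts; the aperture radius and the two distances are illustrative choices, not values taken from the figures:

        import math

        def disc_solid_angle(aperture_radius, distance):
            """Solid angle (steradians) subtended by a circular disc of the
            given radius, seen on-axis from the given distance."""
            return 2.0 * math.pi * (1.0 - distance / math.hypot(distance, aperture_radius))

        FOUR_PI = 4.0 * math.pi          # total solid angle around a point source
        aperture = 0.5                   # mirror aperture radius, metres (illustrative)
        for label, d in (("A (near the mirror)", 0.3), ("B (far from the mirror)", 2.0)):
            fraction = disc_solid_angle(aperture, d) / FOUR_PI
            print(f"sphere {label}: mirror intercepts {fraction:.1%} of its radiation")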

    Think about it! Ask yourself what this means. I know myself that, if I can get heat to flow through a thermoelectric device from one metal heat sink to another, then I can generate electricity. However, if I can generate electricity by merely building a thermoelectric device combined with a mirror and without doing anything to feed in any other form of energy, then I have either worked a miracle, performed a feat of magic or, perhaps, found an alternative to imitating one of Mother Nature’s more subtle energy regenerative processes.

    This process does not in any way breach the First Law of Thermodynamics, namely the need to conserve energy, because all it does is to take heat energy from our ambient surroundings and convert it into electricity. Now, and I say this with emphasis, why do we persist in trying to solve our future energy problems by attempting to replicate the imaginary hot fusion processes occurring in the Sun when here we can see a possible way forward using the simple ‘magic of mirrors’?

    In fact, all this amounts to is the harnessing of Maxwell’s Demon, except that we let the mirror perform the task effortlessly. We do not need a demon sitting at the toll gates where passage of heat energy is allowed or not allowed, according to a selection of the particles that convey heat. Maxwell’s hypothetical demon opens and shuts a gate, to admit and confine the more energetic particles in one heat chamber, whilst obstructing entry by the least energetic particles and allowing them egress from the chamber. By the simple labour of opening and shutting a gate, heat is thereby transferred to that chamber to elevate its temperature. However, we use mirrors instead. Energy seeking entry to that chamber is directed to a mirror focus located at the point of entry through a small aperture, whereas energy coming the other way from within the chamber has to find its own way through the small aperture leading through that passage and so has some difficulty escaping, at least until its temperature rises sufficiently to give it the necessary impetus.

    Practical Considerations

    This sounds interesting, doesn’t it, but is it practical? Well, it helps if the source of radiation is not simply a black body surface at room temperature, but rather one at an elevated temperature. The practical aspect could well be just one of scale and a consideration of economic factors: the weight-to-power and volume-to-power ratios as well as the capital-cost-to-power ratio.

    If you think we might never be able to power an automobile by such a method then I will not argue with you on that point. After all, one hundred years ago there were those who could never see technology developing that could power the flight of a Boeing 747. Nor could they imagine the technology that we now see in the fabrication of electronic microstructures in the computer industry. All I can say is that my calculations, as summarized towards the end of that U.S. Patent No. 5,101,632, indicate the prospect of generating 15 kW in a structure the size of a cubic metre. One can, presumably, contemplate the development of automobiles using this new technology, given one hundred years of onward development!

    In such research so much depends upon how we convert that heat at the elevated temperature into useful work, as by generating electricity. However, I have allowed for that in those calculations and suggested a way forward. There is the question of whether our mirror engine is subject to the Carnot efficiency. In fact, it cannot depend upon the Carnot criterion, because we are not losing any energy. We have no exhaust gases that carry away the degraded heat from which our engine has extracted its power. However, if we use a conventional heat engine, such as a steam engine or hot air engine, to convert the heat our mirrors have produced at an elevated temperature then, sadly, we suffer from the Carnot efficiency limitation. What is also sad about this is the fact that we could well be considering using a reverse heat engine to increase the temperature of the ambient heat intake as a kind of pre-heat stage in our engine. That offers the advantage of the Carnot criteria working as a gain factor in the heat pump process. If only we can convert heat into electricity with an efficiency well in excess of the Carnot efficiency, then we can work the necessary miracle!
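
    For orientation, the conventional Carnot figures invoked here are easily computed; a minimal sketch, with the two temperatures chosen purely for illustration:

        def carnot_efficiency(t_hot, t_cold):
            """Maximum fraction of heat convertible to work between two absolute temperatures."""
            return 1.0 - t_cold / t_hot

        def carnot_heat_pump_cop(t_hot, t_cold):
            """Ideal heating coefficient of performance of a reverse heat engine (heat pump)."""
            return t_hot / (t_hot - t_cold)

        T_AMBIENT, T_ELEVATED = 293.0, 393.0  # kelvin, illustrative values
        print(f"Carnot engine efficiency: {carnot_efficiency(T_ELEVATED, T_AMBIENT):.1%}")
        print(f"Carnot heat-pump COP    : {carnot_heat_pump_cop(T_ELEVATED, T_AMBIENT):.2f}")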

    The Magic of Magnetism

    Physicists will smile at the above suggestion, thinking, as they do, in terms of photons and such like. They believe that the energy carried by radiation of light and heat is transported by those so-called ‘photons’ which travel at the speed of light. Photons, if they exist as something that really does travel at that high speed, really can give physicists a headache, if those physicists avoid being blinded by mathematical symbols and try to make sense of this photon notion. There is, for example, something called the ‘Langevin Paradox’. According to one expert on Einstein’s Theory of Relativity, writing as recently as September 15, 1997 in Physics Letters, vol. 234A, pp. 75-85, to get Einstein’s theory to be consistent with photons as carriers of light energy, one needs to say that the photon has a finite mass. Now, Einstein’s theory is sacrosanct, not to the engineer, but to the physicist, so the world of physics must be losing its grip on reality. Can we afford to wait until physicists put their house in order?

    We all know that a particle travelling at the speed of light will acquire infinite mass; at least that is consistent with experiments on electrically charged particles that can be accelerated to speeds close to that of light. That was all known before Einstein tried to build on the fact as support for his theory. However, there is another fact that needs to be remembered: the fact that the Earth’s rotation can be sensed by optical interference techniques using a rotating system of mirrors (the Sagnac Effect). This fact, known for most of the 20th century, still has to be reconciled with Einstein’s theory, or else that theory, the idea of photons as particles, or both have to be rejected. Take note also that there is evidence that the west-east motion of our laboratories on body Earth can be sensed owing to the fact that it intrudes to upset the precision of measurements of the Michelson-Morley type. The latter is the famous experiment which disproved the notion that the aether had certain properties previously assumed. It did not disprove anything about the ability of the aether to store energy, it being a universal energy ‘bank’ in which we can deposit energy by magnetic induction and recover that energy on demand!

    Surely, therefore, we need to reconsider the foundations on which physicists rely when they express opinions on fundamental energy issues. Mirrors can reflect light, meaning energy if light transports energy at the speed of light, but it may well be that all that is transported is the ripple we associate with an electromagnetic wave, a ripple of the sea of energy that is everywhere in space. Now, I can deflect a moving electron by using a magnetic field, without injecting energy into the magnet producing that field. I cannot deflect an electromagnetic wave by using a magnet, at least not sufficiently for it to have any practical consequence. I decline to comment on whether a magnet can affect the motion of a photon, for the simple reason that I can only see the ‘photon’ as an ‘event’ that marks an energy-cum-momentum transaction between aether and matter, and events can occur at points A and B without requiring all the energy involved in those transactions to make the journey between A and B at the speed of light.

    Our Thermodynamic Energy Conversion project (TEC) involves us, not with photons, but with energy and temperature, and our next task is to examine the physics of converting heat energy into electricity. We are aiming at something close to 100% conversion efficiency, with no Carnot factor to bother us. We cannot talk about photons, because photons do not have a ‘temperature’, even though each is deemed to represent a package of energy equal to Planck’s constant, h, times the frequency assigned to the photon.
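
    The textbook relation in question is E = hf. As a sketch of the magnitudes involved, the following compares the energy of a photon at the Wien-peak wavelength of 300 K black-body radiation with the characteristic thermal energy kT at that temperature (standard constants; the temperature is illustrative):

        H_PLANCK = 6.62607015e-34   # Planck's constant, J s
        K_B = 1.380649e-23          # Boltzmann constant, J/K
        C_LIGHT = 2.99792458e8      # speed of light, m/s
        WIEN_B = 2.897771955e-3     # Wien displacement constant, m K

        T = 300.0                                         # kelvin, illustrative
        wavelength = WIEN_B / T                           # peak wavelength of the 300 K spectrum
        photon_energy = H_PLANCK * C_LIGHT / wavelength   # E = h * f = h * c / wavelength
        print(f"peak wavelength : {wavelength * 1e6:.2f} micrometres")
        print(f"photon energy   : {photon_energy:.3e} J  vs  kT = {K_B * T:.3e} J")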

    The Electron and Maxwell’s Demon

    Imagine Maxwell’s demon sitting patiently at a point inside a block of metal. You are sitting outside. You connect that metal in an electrical circuit and you pass current through it. The demon sees electrons migrating past his viewing station as the current flows. Now, we do not want our demon to exert himself by opening and closing a gate or shutter, so we have provided a magnet for him to sit upon. The magnet produces a field which acts on the electron and, as we all know from our basic physics education, that electron will be deflected sideways. See Fig. 4.

    FIG. 4

    We can now, if we wish, draw current from that metal at right angles to its normal flow path. The stronger the magnet and the greater the magnetic field H, the greater the EMF generated in the lateral direction, the electric field E being proportional to H and also to the velocity v of the electron e.

    This process is known in physics as the Hall Effect. There is no conversion of heat into electricity. The energy you supply in getting the electrons to migrate at that velocity v is all deployed in developing that electric field E and powering the current we might draw from the resulting EMF. The magnet does no work. It merely sits there and forces those electrons to change direction. The Hall Effect and the Carnot criteria of thermodynamic engines have no ground in common, so beware of becoming confused as we now extend our Fig. 4 deliberations into the realm of heat energy conversion.
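
    In SI terms (with flux density B in place of the field H used above), the balance the Hall Effect settles into is simply E = vB; a minimal numerical sketch, with illustrative values:

        E_CHARGE = 1.602176634e-19  # electron charge magnitude, C

        drift_velocity = 1.0e-3     # m/s, a typical electron drift speed (illustrative)
        flux_density = 1.0          # tesla (illustrative)

        hall_field = drift_velocity * flux_density   # transverse field E = v * B, in V/m
        print(f"Hall field E = {hall_field:.2e} V/m")
        print(f"transverse force per electron = {E_CHARGE * hall_field:.2e} N")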

    We are only interested in heat and we want to equip our Maxwell demon with a whole assembly of magnets positioned along that electron flow path, having now in mind the fact that the flow of heat in metal is a flow of electrons! Our demon knows only one temperature, that where he sits, and he pays no attention to what physicists living outside that lump of metal might have to say about Carnot efficiency. That is something that depends upon the specific absolute values of two temperatures, whereas our demon knows that all that matters to him is the flow of heat energy carried past his viewing station and that merely depends upon a temperature gradient at his position.

    So those electrons migrating past him as heat are, as before, deflected to set up that transverse electric field, but this time their energy is that of heat and the magnet puts order into things and takes energy from that heat to feed it into the orderly state of an electric field. In short, we have quite efficient conversion of heat into electricity, because what is not converted moves on to be processed by the demon’s assistants further down the line, namely those other magnets.

    If we take electricity as output in that lateral direction, then we have cooled the metal. That is what we require in our mirror engine system: two metal heat sinks at different temperatures T and T’, powered by mirror magic and linked by a metal path including our Maxwell demon, or rather his magnets. Let us just picture the flow path of electrons that make the lateral detour. This is illustrated in Fig. 5.

    FIG. 5

    When an electron flowing along the metal path is diverted laterally to flow around an external loop circuit which includes a load device (not shown) it returns on the opposite side of the metal to share the heat latent in the metal at that point of return and moves on in the forward direction conveying heat.

    The electron does not need any extra power to re-enter the main path of the metal conductor. Indeed, what has been described is simply the conversion of heat into electricity, an energy conservation process, but one which, thanks to the magnet and the established flow direction of the heat, converts thermal chaos into electrical order. You may ask whether this can really work. More to the point, you may ask what happens to the electrons that reach the end of the main path and what is the source of those that come from the beginning of the main path. Now, there is a mystery! Frankly, I have yet to see this explained in a textbook and I wonder how I have missed it. My textbooks tell me that electrons carry the heat flow but they do not say quite how. One is left to wonder if it is a kind of ‘knock-on’ effect, owing to collisions between faster-moving electrons coming from the left and slower-moving electrons coming from the right. In that case, since at any instant the flow rate of electrons to the left must equal the flow rate to the right, given no external closure circuit between the ends as heat sinks, there can be no net field E generated at all!

    So one can say that heat is still carried by electrons in their ‘knock-on’ effects, thanks to a component of motion laterally directed with respect to the magnetic field, but that would mean no overall heat energy-to-electricity conversion. So, is the phenomenon described real? Well it is, because it is known as the Nernst Effect and those EMFs induced by heat flow, given the presence of a mutually orthogonal magnetic field, have been measured. They are particularly high in nickel. It follows, therefore, that our standard assumptions concerning the interaction of a magnetic field and an electron in motion must be erroneous.
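
    In modern notation the Nernst Effect is usually written E_y = N·B_z·(dT/dx), with N the material’s Nernst coefficient. The sketch below uses an illustrative magnitude for N, not a measured figure for nickel:

        NERNST_N = 1.0e-6   # Nernst coefficient, V K^-1 T^-1 (illustrative magnitude only)
        B_Z = 1.0           # orthogonal magnetic flux density, tesla (illustrative)
        DT_DX = 100.0       # longitudinal temperature gradient, K/m (illustrative)

        transverse_field = NERNST_N * B_Z * DT_DX   # E_y, in V/m
        print(f"transverse Nernst field: {transverse_field:.2e} V/m")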

    Now, I have long suspected this, because I have wondered why it is that the magnetic field of a permanent magnet can penetrate through a block of copper without the numerous free electrons in motion within that copper reacting to screen such fields virtually in their entirety. My answer, one I adopted long ago, is the following:

    When an electron in motion reacts to a magnetic field it is a quantum event, meaning that maybe it will and maybe it won’t, this being determined by whichever affords the optimum response from an energy equilibrium viewpoint.

    To picture what I mean here, note that an applied magnetic field is an ‘action’ and the response of the electron in that field is a ‘reaction’. If the magnetic field increases then there is more reaction opposing that field, but the reaction must allow the field to assume the level at which it has stored the maximum amount of energy density in the reacting electrons. Remember that kinetic energy absorbs potential energy and, as potential energy minimizes, so kinetic energy increases. The energy density in a magnetic field H is proportional to H². So, if we are to store such energy density in a system of reacting charge in motion, then we are referring to the component of motion that acts to set up the opposing field. The energy transferred to those charges, whether electrons or not, is a kind of thermal energy and it is pooled with the thermal state of the absorbing medium. It is dispersed as a result, apart from just that amount of energy that is polarized by the need to sustain the field reaction.

    When I worked all this out, back in the mid 1950s, I discovered that what all this meant was that the magnetic field set up by an electron in orbital motion is really double the strength we have assigned to it in our standard electrical theory, but the optimum field reaction of the charge that retains that energy in readiness for its return when the electron’s motion ceases will always set up a back-field halving that primary action. It all made good sense and it explained what is known as the gyromagnetic ratio anomaly, the factor of two observed in the ratio of magnetic moment to angular momentum when the magnetic polarization of pivotally-mounted ferromagnetic rods is reversed.

    So what I am really saying here is that the Nernst Effect is evidence of the selective or quantized reaction of electrons when subject to a magnetic field. The Lorentz force law, which says how an electron or other charge in motion will react in a magnetic field, is not of universal applicability where the kinetic energy possessed by the reacting charges is so great as to exceed the magnetic energy density of the field in which they are present. There is also, it seems, a kind of pecking order, as between charges of different mass and even between charges of similar mass, such as electrons, if some have more energy than others.
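
    The comparison just invoked is easy to put in numbers. The sketch below contrasts the energy density of a 1 tesla field, B²/2μ₀, with the kinetic-energy density of the conduction electrons in copper, using the standard free-electron figures (n ≈ 8.5×10²⁸ per m³, Fermi energy ≈ 7 eV, mean kinetic energy (3/5)E_F):

        import math

        MU_0 = 4.0e-7 * math.pi     # vacuum permeability, H/m
        EV = 1.602176634e-19        # joules per electron-volt

        B = 1.0                                         # tesla (a strong permanent magnet)
        field_energy_density = B**2 / (2.0 * MU_0)      # about 4e5 J/m^3

        n_electrons = 8.5e28        # conduction electrons per m^3 in copper
        fermi_energy_ev = 7.0       # copper Fermi energy, eV
        electron_ke_density = n_electrons * 0.6 * fermi_energy_ev * EV   # about 6e10 J/m^3

        print(f"magnetic field energy density : {field_energy_density:.2e} J/m^3")
        print(f"electron kinetic-energy (Cu)  : {electron_ke_density:.2e} J/m^3")

    On those standard figures the kinetic-energy density of the electrons exceeds that of the field by some five orders of magnitude, which is the regime contemplated above.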

    Now, I do not want to dwell on this theme here, especially as it is further complicated by the EMF produced by the Nernst Effect having a different polarity for some metals than for others. So, I will hide behind the facts of experiment and say that the Nernst Effect is a real phenomenon, one described in some of the better physics textbooks. I will go further than this as we develop these ‘TEC’ web pages, and will describe two different technological consequences, giving technical details of the performance of the resulting heat-to-electricity conversion.

    My concluding message here is that, if you are a physicist or student of physics and you are satisfied with what you have come to know about quantum electrodynamics and the application of the Lorentz force, sufficiently for you to think you can rely on that knowledge when judging the new energy proposals that I am introducing in these Web pages, then you will surely be missing opportunities for making a useful contribution to the energy technology of the future.

    Otherwise, if you wish to learn more, then I invite you to progress to the next item TEC III.


    Harold Aspden

  • LECTURE NO. 16


    50 YEARS AND ON WE GO!

    Copyright © Harold Aspden, 1998

    INTRODUCTION

    I am, in this Lecture, going to go back more than 50 years to the time when I was a university student and tell you of something that happened in one of our lectures.

    In those days, at least in the British academic system, the title of professor was only bestowed upon someone who was head of a department, so we only had one Professor of Electrical Engineering. By today’s standards, the Senior Lecturer who was addressing us back in that 1947-1948 period would have been a ‘professor’, so I will refer to him by that title.

    The professor was presenting his lecture by writing a very lengthy sequence of mathematical equations on the black board. It was all about electrons and current emission from hot electrodes in vacuum tubes. There were perhaps fifty or so of us students, all trying to make notes, rather frantically, as the professor was rushing through his task. He was either bored with the mundane chore of that effort, perhaps wanting to get back to his research, or he had other personal reasons for his haste, but to be sure I began to feel that none of us were following or understanding what he was scribing on that blackboard.

    Eventually he turned around, looked at us, and I had the impression he was about to pick up his notes and close the lecture early. It was then that I had time to take a longer look at the last equation on that blackboard, the result he set out to prove.

    I then reacted, rather abruptly, to my own surprise, and openly declared that the result could not be correct. The professor was astounded and, consistent with his Germanic background, he reacted. In a rather abrupt and blunt way he began to go through what he had written on that blackboard, line by line, turning around after each recital and, looking directly at me, asking if I agreed with that step.

    I said: “Yes.” Indeed, I said “Yes” time and again until at one point, when the professor, with his back to us, was reading aloud through the next line and for my benefit, he suddenly hesitated. He did not turn around. After a few moments he began altering and correcting the subsequent lines on that blackboard. As he did that, I became conscious of the rising crescendo of stamping feet. My student colleagues were applauding in the time-honoured way. I had, it seems, scored some kind of goal in the great academic game. The professor, after correcting the final formula on that blackboard, did not look up. He said nothing and, with bowed head, picked up his papers and fled, quite evidently in a rare temper.

    I would have expected a professor, faced with such a situation, to contrive to smile, admit the oversight, and then jump on the rest of the students for not staying awake and spotting the flaw earlier, but that was not to be in this instance.

    I did, as did one other colleague in that student assembly, find that, when I graduated a few months later, I was awarded a first-class honours degree, but the whole event was a lesson in itself. The lesson, though unintended by the professor, was to be sure that a derived physical formula has a proper balance of its physical dimensions. If you are counting oranges, you cannot mix them up with apples, and count both as equal. They are both items of fruit, but, as I say, ‘if you are counting oranges…’.

    So often there is, among those who try to crack the secrets of Nature as hidden in the coded messages we receive as numbers (the physical quantities that we measure numerically), a tendency to forget the need to keep a true physical balance. There are certain ‘dimensionless’ physical constants, fundamental constants such as the fine-structure constant, that are pure numbers, that one being approximately 1/137, but … well, all I can say is “Beware and be sure to avoid being faulted by not keeping a proper dimensional balance.”

    That, as you will see, introduces the subject of this Lecture.

    An Example – 50 years on!

    Well, in later life, I did find, once I started developing my own theory of gravitation, that there were those who expressed interest and then duly gave voice to their own brainchild – their theory of gravitation. I was sent so many, and all of these theories suffered from a fundamental flaw that should have made them ‘still-born’.

    Indeed, there was one called ‘The Pushing Theory of Gravitation’, which had many variants. In essence it amounted to saying that there is a neutrino sea, or some such activity, in which all matter is immersed as if in a gas, and these ‘neutrinos’ push matter together. When you draw attention to how matter might screen other matter and so destroy the picture represented by the inverse-square law of force, the response is that the absorption is rather subtle: the neutrinos are only very slightly obstructed in their passage through matter. That, however, is inventing an ‘assumption’ to explain something that is not an assumption.

    There was the case, I well recall, where one kind individual sought to convince me that gravity was attributable to the ongoing expansion of things. He was not talking about the post-Big-Bang scenario, but rather the prospect that we are held to body Earth by gravitation because the Earth expands so that its surface accelerates outwards at the rate we call g, some 32 feet per second per second or 981 cm per second per second. That really does pose a curious picture of things, especially if you then wonder how the Moon will appear to grow or shrink in size, as seen from Earth, because there is a different g applicable to the Moon.

    Then there are those who see the numbers game as the way forward. Now, let me say here that those numbers, such as the 6.67 that precedes an order of magnitude as a representation of G, the Constant of Gravitation, are clues which point the way forward. If your own pet theory of gravitation tells you that G is some other number, then you know you are wrong, but, conversely, if your theory gives this very number, it does not mean you are right.

    So much depends upon the method being inherently self-consistent and consistent with other physical processes as well.

    Now, what causes me to be writing this at this time? Well, I am writing this on March 26, 1998, when I should be busy on other important matters, but it was late yesterday, March 25th, that I received a fax message from M. Zaman Akil of 49-50 Prince Albert Road in London. It was accompanied by a copy of a paper that he had had published in Apeiron, No. 12, Winter 1992. Its title was ‘On the Constant of Gravitation’.

    Akil was obviously concerned that his theory had been ignored. The preamble to the paper, a Note by Jean-Claude Pecker of the College de France in Paris, seemed to support Akil’s case, whilst observing that members of the Academy of Sciences were unwilling to accept Akil’s efforts to ‘equate a dimensionless quantity to a physical quantity’. On scrutiny the paper presents an equation connecting G with the inverse of a product of two expressions, one involving the proton-electron mass ratio, the other involving the muon-electron mass ratio, and both involving the factor 2π, G being in cgs units.

    The value of G provided by this arbitrary formula was, indeed, remarkably close to the measured value of G, but the physical dimensions did not balance. Therefore, there is no way that the formula can have any meaning whatsoever. Yet I can understand that, having discovered such an appealing numerical relationship, it is very hard to let go. Akil argues in favour of a new system of ‘natural units’ which would aim at a force-fit of the result in a proper dimensional balance, but that is a futile pursuit.
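
    The objection can be made mechanical. A toy dimensional-bookkeeping sketch in Python, tracking only the exponents of length, mass and time, shows why no product of mass ratios and factors of 2π can ever equal G:

        from typing import NamedTuple

        class Dim(NamedTuple):
            length: int
            mass: int
            time: int

        G_DIM = Dim(3, -1, -2)        # G has dimensions length^3 mass^-1 time^-2
        DIMENSIONLESS = Dim(0, 0, 0)  # any mass ratio, and any factor of 2*pi

        def product(*dims):
            """Dimensions of a product are the sums of the exponents."""
            return Dim(*(sum(d[i] for d in dims) for i in range(3)))

        rhs = product(DIMENSIONLESS, DIMENSIONLESS)   # a formula built only from pure numbers
        print("formula :", rhs)                       # Dim(length=0, mass=0, time=0)
        print("G       :", G_DIM)                     # Dim(length=3, mass=-1, time=-2)
        print("balanced:", rhs == G_DIM)              # False -- the equation cannot balance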

    I have my own way of explaining G and deriving a value for G that does involve certain mass ratios and, indeed, involves the muon and the proton in developing those ratios, notably for the ratio of the mass of something I call the ‘graviton’ relative to the mass of the electron. To me, the electron, the muon, the tau (or taon) and the graviton are all leptons on a rising mass scale and it was interesting to read Akil’s last sentence in his article:

    A number of investigators had already expected the proton to play such a part (meaning a role in a formula for G): but why the muon and not, say, a taon? This intriguing question certainly merits further investigation.

    Conclusion

    The point I make here is that I am not alone in the quest to present a theory of gravitation which allows G to be formulated and derived as a numerical quantity. Because I am not alone, but one of many, all claiming to have the right answer, and because ‘par for the course’, if this were a game of golf, would involve everyone normally falling into a hole owing to a false assumption, the presumption is that we are all trying to achieve the impossible.

    However, what can one do, other than seek to convince that one does have the right answer? At this time, March 26th, I am about to wave a flag to get the world to pay attention. My ‘flag’ is really a ‘spoof’, a ruse to get people to look at my Web pages, because on April 1, for one day, I will assert that I have discovered the long-sought proof of Fermat’s Last Theorem, achieved virtually by a few notes on the back of an envelope. Fermat, the great French mathematician, has been suspected of leading us astray by saying a simple proof did exist. Hopefully one day a simple proof will be discovered. However, Fermat’s Last Theorem is a problem that is purely numerical and devoid of physical dimensions, so one cannot go wrong on that score. Though I cannot claim a solution to the theorem that the great minds of science could not solve in 360 years, I do claim what is, I believe, a greater achievement: the solution of the problem of gravity. I promise you that it is not just a numbers game; the rules of physics are involved in the play!

    The derivation of my formula for G is to be found in these Web pages at:

    and my ‘spoof’ of Fermat’s Last Theorem, which I shall keep on these Web pages in order to preserve the argument presented here, is accessed by pressing:


    Harold Aspden
  • ENERGY FROM FUSION


    The Opinion of a Patent Attorney

    Copyright © 1998 Harold Aspden

    We are at War! Washington is involved. It is a fight between David and Goliath, Goliath being armed with the power vested in the United States Patent Office. I am one of those who ranks as a mere ‘David’, my weapon being my experience as a European Patent Attorney having scientific and technical qualifications. I seek to report here, in a somewhat anecdotal style, the story of the war as seen from my perspective. How the war will end, I do not know, but the outcome is important. The global problems on the energy front affect us all. Here is a war that can end by bringing happiness, prosperity and security on the energy front or end by leaving us to the mercy of the forces of pollution. At this time the United States administration is waging battle on the wrong side of this contest. It is important that this situation should be brought to the attention of the world at large.

    I note here that, in writing these Web pages, I have no personal ‘axe to grind’. I am 70 years of age and fully retired. All I seek is the satisfaction of knowing that I have tried to help the world by casting light on a few of the mysteries that pervade the sector of science with which I am familiar. I will be rewarded if I see that others come to understand the rudiments, if not the detail, of what I am saying, because, in the long term, the battle must be won even though it will take an army of like-minded souls. Once it is won the world will see at the edge of the battleground the path leading to an understanding of Creation, meaning how the building blocks of all matter in the universe emerge from the energy field in which we are immersed.


    Introduction

    It was in 1950 that my post-graduate training in a U.K. major engineering company took me, at my request, for a period into that company’s Patent Department. I had an aptitude for research and wanted to learn something about the protection of inventions. To become professionally qualified in the patent field and also venture on a three-year research project to earn a Cambridge Ph.D. degree would take six years. I decided to work for both and later decide which career path, patents or research, I would follow. In the event I earned my income as a patent professional, pursued my research as a theoretical physicist as a hobby, and eventually took early retirement to concentrate wholly on my academic research, my interest being in anomalous energy phenomena.

    I became IBM’s Director of European Patent Operations. I represented the International Chamber of Commerce at international meetings concerned with intellectual property matters and was, for two years, President of the Trademarks, Patents and Designs Federation in U.K. In short, I seek by this introduction to explain why I feel competent enough to express an opinion on the ‘war’ picture I now see when looking in the direction of the Examining Group of Art Unit No. 2204 in the United States Patent Office.

    I well recall that when, many years ago, the British Patent Office decided to adopt a new posture concerning what was or was not patentable in the computer field, they circulated a notification, stating their intentions and inviting comment. I am not aware that the U.S. Patent Office has done that in respect of their decision to obstruct the grant of any patents that purport to depend upon ‘cold fusion’.

    It is generally understood in patent practice that novel ideas relating, for example, to methods of accountancy are not patentable and, as computer software developed, especially in connection with banking and commerce, we were content to accept that computer programs, as such, were not patentable. It was British Petroleum in U.K. who rocked the boat one day by urging the British Patent Office to grant them a patent on linear programming. In the analysis of data gained from surveying for oil and gas reserves they had to compute optimum solutions to a plurality of mathematical equations. Could a computer program for solving mathematical equations be patented or not? The issue was important. IBM had not filed for patents on computer programs.

    This, as I saw it, was not so much a matter of corporate policy, but one of practicality. Yes, we could write patent specifications on computer programs but there is a reverse side to that coin. A patent ends with a set of claims which define a monopoly. It would be the task of the patent attorneys who reported to me to scan all such patents that might be granted and determine whether or not any of our computer programs came within the scope of any of those claims. Now, with an engineered product or a chemical process or a duly formulated chemical composition, there is a clear ‘something’ one can grasp mentally to classify and search. How could one be expected to check each line of code in a computer program, especially one subjected to repeated updates, and compare its content with the purported cover of the numerous claims that would emerge in patents if computer programs were patentable? In the end the task would itself need to be computerized and that would only be feasible if what one was seeking was proof of copying. In a court action concerning patents, the evidence is not concerned exclusively with copying, but with whether what the alleged infringer has done comes within the strict terms of the patent claims and whether those claims are in fact valid. Computers cannot make such judgements. If they could then one could eliminate the need for judge and jury when engaging in litigation, even in cases concerned with more general subject matter, and we are still some way off seeing that in prospect.

    It was for that reason that we urged revision of copyright law to extend to computer programs and argued against the formal recognition that patents could be granted for computer programs, as such. IBM made an offer to the British Patent Office to supply them with a copy of every IBM program for use in their patent searching, well knowing that the offer was unlikely to be accepted. Our posture was: ‘If you grant patents on something, you must first search the prior art.’ In the event, the offer was declined and the ongoing revisions of Patent law as well as the eventual European Patent Law and international arrangements took care of the problem.

    To the lay person the issue can be expressed in a simple way. If you write a novel with an ingenious plot, you get copyright protection for your work. Whether you get protection for the plot alone is also a question of interpreting the extent to which it is copied, meaning it is a matter of copyright law. You would not dream of filing a patent to cover the plot of your book, nor would the patent system grant you such protection. However, where there is technology involved and an industrial process or manufactured product, then patents are there to serve the inventor and those who sponsor the inventor.

    An invention that presents us with a new way of generating power by deploying energy in some special way is, beyond any doubt, proper subject matter for a patent. So, if the United States Patent Office has decided on a policy of refusing the grant of such patents, where they conflict with the commercial interests of those trying to get ‘hot fusion’ inventions to work, then the administration concerned should review their international treaty obligations besides inviting public comment. Indeed, if I were still to hold the voluntary position I once held as Rapporteur in the Intellectual Property section of the International Chamber of Commerce, I would be recommending overtures to the United Nations arm, the World Intellectual Property Organization, to get the United States to explain its action. The United States Patent Office should not be examining and processing patent applications through the Geneva service of the Patent Cooperation Treaty and then declaring to the applicant who seeks a U.S. patent on that same application that the invention is in a category not deemed patentable! The tactics used lack integrity in that they fall short of making such a ‘declaration’ and are those of a ‘war of attrition’ aimed at driving the applicant to despair – but I will come to tell the story about that later in these Web pages.

    A Wartime Secret?

    Above I mentioned that in 1950 I spent a period in London in a corporate patent department. Indeed, I kept my contacts with that department through my Ph.D. research years at Cambridge and earned a little money working for them during some of my vacations. In this way I ‘clocked-up’ several months of tutored training which made it possible to take my eventual patent examinations earlier than I could otherwise. It saved me a precious year.

    Reminiscing a little about the previous 10 years, much of which I had spent at school, I found that my ‘tutor’ had been called up for service in World War II and, upon being asked about his aptitudes and inclinations, he had not stressed his expert knowledge at assessing and dealing with inventions. Instead, he had merely revealed his interest in his ‘hobby’, the mechanics of the motor vehicle. He was accordingly assigned to the army as a driver of a truck. When later on leave at Christmas time, by tradition, he took his wife to a dinner dance attended by many of his colleagues in the patent profession to find, to his wife’s dismay, that those of his colleagues who had joined the armed services had officer rank. His wife was adamant and took it upon herself to complain to officialdom that her husband’s skills were being wasted as a truck driver. The result was that he was transferred to a Whitehall government office and given the task of evaluating some of the wartime invention proposals that were coming through the system. So it was, as he told me about his experiences, that one submission he received proffered advice on how to win the war against the U-boat menace. The idea was quite simple. Drop something in the sea close to a U-boat, something that can boil the water surrounding it and so kill the crew inside. The question of what to drop in the ocean was left for the scientific experts to work out. In retrospect one can see that nothing short of an atom bomb would suffice in such a venture, but the point is made. There are crazy inventions, but not all seemingly crazy inventions that involve heat and water can be dismissed so easily. You see, we really have to face up to a similar situation where the invention is genuine but where we confront the ‘war in peace time’ issue of ‘cold fusion’.

    After World War II my ‘tutor’ in patent practice, whose initials were W.A.R., returned to English Electric Company and, four or so years later, I joined him in their Patent Department. I eventually became fully qualified in the patent profession and had my Ph.D., and then one day I had a briefing from a senior engineer who told me about the company’s interest in a project aimed at generating energy from fusion. There was reference to the ZETA project and how it had proved difficult to stabilize the pulsed electric current discharge that was supposed to pinch electrodynamically – enough to set up temperatures that could promote nuclear fusion.

    My Ph.D. concerned magnetism, particularly electromagnetic induction, so I was especially interested in what I heard. The message was that the whole project had been kept under secrecy but that all efforts to trigger fusion had failed and those researching the subject had run out of ideas. They were releasing information about ZETA hoping that the scientific community at large might have something to add by way of inspiration. Here, then, was a verdict, as long ago as 1958, on the fate of ‘hot fusion’. It was not a viable pursuit and those closely involved in the project were on the verge of surrender.

    A little common sense is enough to assure us that, if it takes a temperature of 100,000,000 degrees or so to trigger the fusion reaction, then the chances of containing something transiently developing such temperatures in a commercial environment are beyond contemplation. A fleeting explosion and a momentary reaction are a long way away from a ‘hot fusion’ reactor technology. Even so, I devised my own scheme for an apparatus which should have contained that discharge, had the laws of physics been obeyed by the ions involved in the discharge. English Electric Company did secure a patent on my invention, but the specific form of apparatus I proposed was never, so far as I know, put to the test.

    Undoubtedly, however, the standard laws of physics fail where heavy ions, such as protons or deuterons, constitute the discharge, and those who examine patent applications tend only to allow the grant of patents for inventions which are not in breach of the well-established laws of physics. Given that so many scientists would say that the laws of electrodynamics are ‘well-established’, whereas I can point to clear experimental evidence that does not comply with those laws, I have not hesitated to debate the issue where appropriate, and with success. However, I had not realised that one day I would be destined to encounter a U.S. patent examiner who could dictate his own laws as to whether or not something in science is or is not possible. By his dictates, ‘hot fusion’ is possible, ‘cold fusion’ is not possible. The sun is a ‘hot fusion’ source of energy. The hydrogen bomb is a fusion device. The endeavour to build a fusion reactor has his blessing, although that technology is getting to be prehistoric so far as any patent relevance is concerned. Those who fund ‘hot fusion’, government organizations, do not need patents anyway.

    My story will develop and come to that U.S. patent examiner topic in due course. In the meantime let us think back in time and suppose that nuclear fission power had not proved itself to be a contender on the energy scene, and suppose that the prospect of ‘cold fusion’ as a source of heat had become an option. Surely it would have been researched with vigour, backed by government funding, and surely the question of its operability as a viable source of power would have been settled. The funds involved would have been minuscule compared with what was to be wasted on ‘hot fusion’ research.

    Of course, you will say that ‘cold fusion’ was not on the table as a proposal, so it was not an option one could think about. Well, be that as it may, I remember, all those years ago, the efforts of a man named Bruce, who was with ERA (Electrical Research Association) in U.K. He was concerned with electrical discharge phenomena and he argued, from his knowledge of radiation from arc discharges in comparison with the available evidence from solar radiation, that the sun’s temperature was attributable to the same phenomena that accompany electrical discharges. The scientific question became one of understanding how the electric potentials are set up at the solar surface and not one of just assuming ‘hot fusion’. We are talking about something that occurs at, say, 6,000 K and not 100,000,000 degrees of temperature. We are talking about ‘cold fusion’, if we use the word ‘cold’ as a relative term, ‘hot’ meaning millions of degrees.

    If I remember correctly, Bruce pointed to the fact that sun spots, which are blemishes inside the outer surface of the sun, are cooler than that outer region, the latter being a region of continuous electric discharge – perpetual lightning!

    If Bruce was right, then the source of the sun’s heat could be something other than attributable to the fusion of hydrogen nuclei or their derivatives. So, I ask, can we be 100% sure that the centre of the sun is at 100,000,000 or so degrees? After all, we can only see its surface and 6,000 degrees is what we see!

    I have referred to Bruce in my book ‘Modern Aether Science’. On page 13 of that work I quote a commentary on that subject by Sir Basil Schonland taken from his 1964 book ‘The Flight of the Thunderbolts’. A short extract from that quotation is the statement:

    Many hot stars, including our sun, emit radio waves of high frequency which penetrate our ionosphere; their sources are hot plasma in stellar magnetic fields and hardly qualifying for description as thunderstorms. But whether any of the dying stars have relatively cold atmospheres in which thunderstorms could be created is an interesting speculation. Bruce has developed ingenious theories. It is too early to form judgment on his many remarkable proposals.

    Here is one of those examples of an ‘expert’ making assumptions, albeit popular assumptions, and telling another ‘expert’ that his unorthodox opinion is wrong. Of course there is no mechanism in the solar atmosphere that can replicate that involved in creating thunderclouds and thunderstorms, but lightning is really nothing other than an electrical discharge set up by an electric field. The radiation from Sun to Earth is absorbed by electrons in the Earth’s surface and atmosphere and that initiates those electric fields here on Earth. The radiation reaction can do the same at the solar surface and, in addition to that, the hot solar plasma, acted on by gravity, can assert a preferential pull on the proton ions in relation to the electrons. There will inevitably be a concentration of positive charge within the body of the sun, enough to set up those electric fields which sustain those ‘lightning’ discharges that Bruce mentions.

    That core charge will prevent gravity from compressing the plasma inside the sun beyond the point where more than a very small proportion of the hydrogen atoms present become non-ionized. That will keep the solar core at temperatures commensurate with those observed at the solar surface. Therefore, ‘hot fusion’ as the source of solar heat is not a viable proposition. It defies the standard and well-accepted principles that I have come to accept from my physics education. I believe Bruce, the expert on lightning phenomena, was guiding us in the right direction. Certainly, if it was ‘too early to form a judgment’ thirty and more years ago, as Schonland then declared, then when can we expect that day of judgment? Why not now? How can the solar plasma ever allow the mutual gravitation of free proton ions to overwhelm the positive charge thereby induced? Keep in mind that two protons attract gravitationally with 1836 times the force per unit of mass acted upon when compared with the gravitational action between two electrons. To get balance in a true hydrogen plasma, meaning a ‘soup’ comprising only free protons and free electrons, there has to be a preponderance of protons, enough to set up a positive electric potential exactly balancing the negative gravitational potential. The sun must have a positive core charge! It can sustain continuous activity in the form of electrical discharges of the kind we associate with lightning. If it involves a nuclear fusion process, then ‘cold fusion’, albeit in the 6,000 K temperature region, is the fusion process we should be contemplating – not hot fusion!

    A Question for a Nuclear Expert

    Imagine you are an expert, a noted authority on nuclear physics. Imagine that you are called upon to advise the President of the United States. The subject is ‘cold fusion’. Now imagine that the President is not just a politician, but that he has a fair knowledge of physics, dating from his years at college and university. You begin by briefing the President on your own special background and then outline briefly the situation with respect to cold fusion. Then you declare your overall opinion that whatever accounts for the excess heat, if any is really generated, cannot be due to nuclear fusion – but it must be investigated.

    You think that is the final conclusion to what you need to say in proffering your advice. You instill bias into the picture, backing the venture as a loser, but cover your risk by saying that the claim should be investigated.

    However, the President asks a question, the kind of question that any reasonable person having a moderate knowledge of physics might put. The question is: “If you take a bottle full of pure H2O, meaning that its natural deuterium oxide content has been substantially depleted, and leave it on the shelf for a period of time, will it, ever so gradually, convert in part to deuterium oxide so as to recover the natural relative abundance state?”

    You see, one can buy, from chemical suppliers, deuterium-depleted water, typically having only 1% of the normal content of heavy water. I see from my 1967 edition of ‘Handbook of Physics’ edited by Condon and Odishaw (publishers McGraw-Hill) that the natural abundance ratio as between atoms of H2 and H1 is 1492 to 10 million, so I assume that there is a way of measuring that ratio. If one buys that depleted water it will arrive in your hands with about 1492 molecules of heavy water amongst every billion such molecules of water, light or heavy.

    So the President wants to know whether that bottle of water will witness a gradual adjustment of that abundance ratio as it creeps up in increments to increase one hundred-fold and become normal water. A simple question deserving a simple answer. A special commodity is involved, but it is only water! It is on the market. Does it have a ‘shelf life’? And, if so, what is the measure of its half-life?
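
    The numbers quoted above are at least self-consistent, as a quick check shows (figures taken from the text; the per-molecule versus per-atom distinction is glossed over here, as it is in the text):

    ```python
    # Quick consistency check of the quoted abundance figures.
    natural = 1492 / 10_000_000      # D per H, the Condon & Odishaw ratio
    depleted = 0.01 * natural        # 1% of the normal heavy-water content
    print(depleted * 1_000_000_000)  # ~1492 per billion, as stated above
    ```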

    Now I have cast you, the reader, as an expert and assume that you would not have read this far if you could not understand the significance of my question. The President is looking at you. What is your answer? You are an expert on nuclear physics and here is the most basic of all practical questions that could be put to you!

    You play safe. Maybe you do not have the answer. You cannot admit you do not know. That would be tantamount to saying that the question of ‘cold fusion’ is a wide open question on which you, as an expert, are supposed to have an opinion. If you took that stance the President would move in for the kill and throw you out. So you have only one course of action. You declare that the water in the bottle will never undergo change to become normal water. You know you are an expert, but can you back that statement up by saying that experiments by Dr. X on water monitored for a period of Y years showed not the slightest trace of any increase in deuterium content, at least within specific limits of measurement error?

    You have given an ‘opinion’ that there is nothing in the ‘cold fusion’ claim, but the President has his doubts. He has the sense to realise that it would not cost much to fund a test involving monitoring a tank of deuterium-depleted water to see if it shows any sign of adjusting to the normal state of normal water. He wonders how water created from protons ever became contaminated with heavy water involving deuterons. However, he has other more pressing concerns and he sends you on your way, duly thanking you for your advice.

    Now you begin to think. After all, you are an ‘expert’ on nuclear fusion. You reason that if two protons are somehow fusing to create a deuteron then there will be heat generated. Therefore the rate of heat generation will be a measure of the rate of transmutation involved. We do not need to monitor the composition, because there should be a temperature differential. That water in the bottle must be hotter than its environment if nuclear fusion is occurring, and it must occur if H2O changes to HDO or D2O, D being the symbol for deuterium, otherwise represented as H2. Add a satellite electron and you still get what, chemically, is a hydrogen atom, but the atom has a deuteron as nucleus instead of a proton. The proton and the deuteron each have the same unitary charge, both being positive and either being able to neutralize the negative charge of the electron. That is basic physics, of the kind also involved in physical chemistry. However, our ‘expert’ needs to estimate that temperature to see if we can expect it to be measurable.

    So he now sets about convincing himself that there simply can be no fusion process to worry about. If there is conversion from light water to heavy water by natural processes, that conversion must be very slow, otherwise the oceans would be mainly composed of heavy water, and the oceans have been around for quite a long time. A little mental calculation then suggests that the rate of heat generated by fusion of protons to form deuterons would be so minute as to have no commercial significance. The advice to the President was therefore quite sound. Our expert can sleep in contentment.
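
    That mental calculation can be sketched numerically. Suppose, loosely, that each proton-proton fusion deposits of the order of 1 MeV of heat locally, and that a litre of depleted water recovers its natural deuterium content over some assumed timescale; both figures are illustrative assumptions, not measured values:

    ```python
    # Order-of-magnitude sketch of the expert's "mental calculation".
    AVOGADRO = 6.022e23
    MEV = 1.602e-13                 # joules per MeV
    YEAR = 3.156e7                  # seconds

    h_atoms = (1000 / 18) * 2 * AVOGADRO    # hydrogen atoms in 1 litre of water
    deficit = 0.99 * 1492 / 10_000_000      # deuterium fraction to be restored
    energy = h_atoms * deficit * 1.0 * MEV  # ~1.6e9 J at ~1 MeV per event

    for years in (10, 1_000, 1_000_000):
        print(years, energy / (years * YEAR), "W")  # implied power falls as 1/T
    ```

    On these assumptions the implied power scales inversely with the assumed recovery time: a ten-year ‘shelf life’ would mean watts per litre, easily measurable, whereas a million-year timescale would be far below detection.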

    However, the President, meanwhile, has found a new way of enhancing his slumbers. Instead of counting sheep he has found that all he has to do is to picture hydrogen atoms trying to combine to form deuterons. Eventually, his mind wanders onto the thought of how two positively charged protons can fuse to create a deuteron having a single positive charge. Eureka! There has to be an inflow of negative electrical charge to permit such a fusion reaction. No input of electricity means no cold fusion. He has answered his own question. Do you remember the question?:

    “If you take a bottle full of pure H2O, meaning that its natural deuterium oxide content has been substantially depleted, and leave it on the shelf for a period of time, will it, ever so gradually, convert in part to deuterium oxide so as to recover the natural relative abundance state?”

    The conversion can only occur if electricity can get into the bottle and that needs some wires and an electrode system unless the bottle is electrically conductive and the atmosphere is well stocked with ions. ‘Cold fusion’, if that means expedited fusion of the hydrogen isotopes in water, therefore goes hand in hand with wires and electrodes. That seems to make some sense of what the ‘crank’ researchers are claiming! However, the ‘expert’ opinion said there could be no fusion involved in the laboratory-temperature experiments involving water. ‘Expert’ opinion must be respected and, after all, if ‘cold fusion’ were to be a viable technology it would have been discovered in the 19th century and the oceans are not seas of heavy water! Indeed, how would that electricity, essential to feed the proton fusion process, get into the sea? Ever heard of lightning, Mr. President?

    Now cast yourself as ‘Expert No. 2’. You see things a little differently. Water comprises molecules. Molecules involve atoms. Atoms have nuclei surrounded by electrons. The hydrogen atom, whether nucleated by a proton or a deuteron, has a satellite electron. It is a bodyguard running around the nucleus keeping all intruders at bay. As long as it is there no two atomic nuclei can ever come together and, even if they could, they have positive charges which repel one another, so there is no chance that water can be involved in a nuclear fusion process. To get fusion one has to remove those electrons and bring the temperature up to the point where the protons and deuterons are dashing around at such enormous speeds that they can collide with one another in spite of the braking action of those repulsive forces.

    It may be true that the electrons can be stripped off at room temperature if the hydrogen atoms are absorbed into the body of a metal electrode, something that is a feature of the alleged ‘cold fusion’ experiments, but surely those protons and deuterons inside that host metal electrode cannot move at anything like the speed needed to collide and fuse. Obviously ‘cold fusion’ is a ridiculous proposition!

    And so it is that ‘experts’ will agree that the claims concerning ‘cold fusion’ are false. The President has been well advised.

    Gremlins in the White House?

    I am not at all sure what is meant by the word ‘gremlin’. It is not in my Concise Oxford Dictionary, but that bears the date 1939. Anyway, as far as I am concerned, it is as good a word as any for describing something that appears sporadically, enough to ‘rock the boat’, as it were, but never for long enough for us to grab hold of it. Now ‘expert’ scientists will tell you that gremlins do not exist, and there may be the odd ‘crank’ pseudo-scientist who might picture them as some kind of ghost-like alien creature having eyes, ears and even the odd antenna. The gremlin I have in mind is something that appears momentarily as a ‘proton’ and then vanishes immediately, with the result that our genuine scientists miss seeing it.

    You cannot say I am talking nonsense, because, when it comes to protons, you, however expert you are, have no idea where protons came from. You can pretend you know by saying they are formed from quarks, but where do quarks come from? Pretend again and think of gremlins. Say that the gremlin is a ‘virtual proton’ or a ‘virtual anti-proton’, just as you can imagine ‘virtual electrons’ and ‘virtual positrons’. They are all members of the gremlin family. They appear and vanish wherever energy abounds, even in what we think of as empty space, because they are the life’s blood of quantum electrodynamics or, concerning protons, quantum chromodynamics. Scientists do have a way of inventing words to describe and classify what they cannot understand. Let us just use that word ‘gremlin’.

    Now those ‘experts’ who advise on matters scientific do not have nightmares, because nightmares involve imagination and fantasies that can involve ghostly phenomena. Experts deal in facts. They have no time to waste and no tolerance when they hear reports on strange phenomena which trespass into their territory but are not an accepted feature of the stock of factual knowledge that they share with their peers. However, the President of the United States surely is entitled to have the odd nightmare, given the weight of his responsibilities and his problem in balancing fact and uncertainty in making his administrative decisions. So, given that gremlins are everywhere, they must have a presence in the White House.

    So our fictitious image of the President allows us to imagine that his slumbers are interrupted by a dream in which those gremlin protons appear in that bottle of deuterium-depleted water. His nightmare dream assures him that all protons are clones of one another. So if God ordains that a proton should appear at point A and promptly vanish from the scene, but is satisfied if an existing clone very close to A vanishes instead, then we can have our gremlins being rather peevish and moving the property inside the White House around. This does not take the form of a rearrangement of the furniture, but a slight repositioning of the protons in the cells or molecules that constitute the substance or fabric of that furniture or the body tissue of the human form. In short, if there are gremlins at work, then there is scope for a proton at A to find itself in the arms of a gremlin proton that has appeared as if from nowhere and, with God demanding retribution, an isolated proton very close by at B is sacrificed and swept away to that background underworld sea of energy that pervades space. Is this really a dream? It could be a ‘cool reality’ and we would only see a trace of what has happened if the protons in the water molecule change into deuterons. In short, if light water converts into heavy water by some natural process, we can blame it on those gremlins.

    ‘Nonsense’, you say, ‘It is all a dream’. So let us now look for those gremlins in the sea. If they can, so to speak, rearrange the furniture in the sea and create heavy water from light water, then why can they not move the furniture around some more and convert heavy water into light water? Have we done any experiment, watching heavy water, to see if, slowly but surely, it converts into light water? Would you think it absurd if I suggested that the process can work both ways? All it means is that a bottle of deuterium-depleted water will slowly become a bottle of normal water and a bottle of heavy water will slowly become a bottle of normal water, whereas in normal water or in sea water there is equilibrium between the two transmutation processes.

    Clearly those wires and electrodes, if present and energized electrically, will speed up the transformations by supplying or extracting the needed electrical charge, but no such provision is needed for the equilibrium condition.
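
    To make that two-way picture concrete, here is a minimal two-state kinetics sketch; the rate constants are purely hypothetical and are chosen only so that their ratio reproduces the natural abundance figure quoted earlier. Any starting mixture, depleted or pure heavy water, relaxes to the same equilibrium:

    ```python
    # Minimal forward/back conversion model with hypothetical rate constants.
    def relax(d_fraction, k_f, k_r, dt, steps):
        """Evolve the heavy-water fraction under two-way conversion."""
        for _ in range(steps):
            d_fraction += (k_f * (1.0 - d_fraction) - k_r * d_fraction) * dt
        return d_fraction

    # Both extremes drift to the same equilibrium, k_f / (k_f + k_r) ~ 1.49e-4.
    print(relax(0.0, k_f=1e-4, k_r=0.67, dt=0.1, steps=2000))
    print(relax(1.0, k_f=1e-4, k_r=0.67, dt=0.1, steps=2000))
    ```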

    Now, by this point you will be telling yourself that anyone can ramble on like this without saying anything that is meaningful or helpful. Therefore, I face the task of convincing you. Well, where would you like me to begin? Suppose I can show how protons are actually created and prove it by deducing the precise proton-electron mass ratio, meaning its value to the measured accuracy of a part or so in ten million. Suppose, further, that I can show you how to work out the equilibrium ratio in normal water, as between H2O and D2O, and show that it is the same as that measured. I can do both and, indeed, in these Web pages I will do both, but will you then believe in those gremlins?

    Don’t you want to understand how the universe was created? All I have said is that our environment contains a sea of energy which is trying to materialize and does so very transiently by creating gremlins. There is, it seems, an equilibrium limit on how many of those gremlins can stay with us as the matter form – as protons and electrons. I admit that I have not seen a way of determining what governs that state of equilibrium but do appreciate that, in science, we can only proceed step by step. The next step that I hope to see is the acceptance of ‘cold fusion’ as a new technological source of energy. That will mean that we have got those gremlins working for us just as Clerk Maxwell said his ‘demon’ might serve us in generating heat by astute manipulation of a kind of valve separating two gas-filled compartments. It is all about interfering with the state of equilibrium as between two states of a physical system.

    Option Time

    At this juncture you, the reader, have a choice. You may, of course, transfer your interest away from these Web pages of mine or you may stay with my discourse. In the latter case, you might like to digress with me as I tell you something interesting about the Maxwell demon, something you cannot read elsewhere. Or you might like to delve deeply into that proton creation theme I have just mentioned. Instead, you might wish to go directly to the section of these Web pages where I show how Nature determines that ratio or relative proportion of light and heavy water. If cold fusion is your primary concern in an academic sense – meaning theoretical physics – then that is the track to follow. It will lead you first into an interesting scenario but the onward journey gets a little demanding and involves a few formulae. However, halfway along that journey you will have another chance to break away and divert your attentions, always, however, being guided to the ultimate destination. If cold fusion is your primary concern in a general or technological and commercial sense, then you may wish to be briefed on the tactics of battle, as being fought by the U.S. Patent Office.

    In the end, however, this Web page sequence of presentation will converge in technological terms on the ultimate source of energy that will provide power for future generations, and with it will emerge a wholesome picture of Creation that avoids the warped notions of those living in a virtual image of multi-dimensioned space-time. Do remember that those gremlins I mentioned constitute more than 99.9% of your body mass. You are composed of virtually nothing but protons put together in a myriad of ways and held in place by the activity and the intervention of electrons. If your teaching has led you to believe that nearly half of your body weight is constituted by neutrons, then you need to worry a little as you try to figure how you can survive, given that neutrons have a decay lifetime that is little more than a quarter of an hour. You can escape that worry if you believe in gremlins!

    On the other hand, if you want to know the real truth about neutrons, take the first option listed below. Another option is to go back to the main Index page and make your choice. My advice? Explore all options, but be a little patient. I am getting this material onto the Web as fast as I know how. It will be loaded progressively and quite soon.

    The choices are:

    Protons, Deuterons and Neutrons
    Cold Fusion: My Story: Part I
    The Maxwell Demon: A 21st Century Prospect
    Cold Fusion Index


    Post Script: I do emphasize that there is no way whatsoever for energy science to advance within the framework of our immediate environment unless we come to terms with that mysterious something that sits with us here and now and shares our environment. That something is the ‘hammering we are taking’ as the field energy in space tries to materialize in and around us. We are not alone, but we are in a state of equilibrium, though one could say that human decay and old age owes something to the transmutations produced by that ‘hammering’. We march towards our ultimate destiny living in that sea of energy. All I am saying is: “Recognize that it exists and seek to take advantage of it, whether directly, as in a ‘cold fusion’ reactor or, indirectly, by more subtle techniques exploiting its quantum characteristics.” The latter are revealed in certain types of electric plasma discharges and ferromagnetic processes. If you prefer to march on, simply accepting the laws of physics, which do not recognize that energy underworld and are devoid of hope, then you are depriving future generations of the benefit they can derive from today’s research. If you do not believe that there is scope for such a source of alternative energy, then stand back from the field of battle and do not obstruct the fight you will witness from the volunteer forces who carry the flag. For one side, it is a fight to the death, because to win is to die an early death by energy pollution, a death to be shared by all who just stand and watch. Just hope that the contender side where my flag can be seen will prove to be the winner!

    Harold Aspden
    March 15, 1998


  • Cold Fusion: A War Story

    Cold Fusion: A War Story

    My Personal Encounter

    Copyright © Harold Aspden, 1998

    Introduction

    I will begin this sorry tale by reminding you that in March 1989 two professors, Martin Fleischmann of the University of Southampton in England and Stanley Pons of the University of Utah, stunned the nuclear world by revealing experimental data which indicated that what seemed to be a nuclear fusion reaction involving hydrogen isotopes could generate otherwise unaccountable amounts of heat by an electrochemical process operating at room temperature.

    Nearly nine years later one has now to face the bewildering conflict of two basic facts:

    Fact No. 1: From April 19-24, 1998 a conference named ‘ICCF-7’ is to be held in Vancouver. It is the seventh in a series of well-attended international conferences, billed as “The Best Ever”, and it will bring together hundreds of scientists whose experiments provide supporting evidence to show that cold fusion is a technological reality that the rival ‘hot fusion’ factions cannot ignore.

    Fact No. 2: As one can read in the October 1997 issue of New Energy News:

    TERMINATION OF THE ORIGINAL PONS-FLEISCHMANN LICENSE
    ENECO, Salt Lake City – ENECO has now completely re-directed its business plan and internal development activities around its own “second generation” non-electric technology. In late May, we terminated our exclusive license agreement with the University of Utah regarding the original 1989 Pons-Fleischmann electrolytic patent applications. The timing of the decision to cancel the license was driven by the U.S. Patent Office’s final rejection of the licensed applications. The only remaining recourse for ENECO, as the licensee, was to pursue an expensive appellate process at an estimated cost in excess of $1 million over the next two or three years. ENECO terminated the license, with its impending high legal costs and other technical difficulties, to devote full corporate resources towards its own proprietary technology that is believed to have a quicker route to commercial success.

    These two facts tell their own story. Here was an invention which was recognized as patentable in Europe but which, for some ‘mysterious’ reason has been deemed unpatentable by the authorities administering the United States Patent Office. This, together with the hostile attitude of U.S. research funding agencies, has obstructed the development of a new technology which affords a prospective plentiful supply of heat energy without involving pollution. It is evident that those who benefit from research funding for ‘hot fusion’ projects have won the day by asserting their oppressive influence through political channels to break faith with the international patent treaty posture on what is, and what is not, patentable.

    An invention that is new, not obvious, potentially useful and is in the industrial category we associate with manufacture and industrial processes, is deemed universally to be proper subject matter for patent grant. If the United States Patent Office is unwilling to grant patents relevant to cold fusion technology then the Commissioner of Patents and Trademarks should have declared that as a policy position, to stem the wastage of time and money that has resulted since 1989. There should have been appropriate overtures made through the auspices of the World Intellectual Property Organization in Geneva to bring the heads of other National Patent Offices on board with the reasons for that decision. Inventions in the energy field are important to mankind generally and it is unfair for U.S. corporations to be able to secure patents in other countries if the United States of America is not prepared to reciprocate.

    There is no reason why I should venture as a spokesman for the University of Utah in respect of their rights to U.S. patent grant on the Fleischmann and Pons inventions or for the many hundreds of U.S. patent applicants hurt by this situation. I am a British citizen and all I can do is to tell my own story based on my efforts to secure grant of a U.S. patent on cold fusion. From what I have heard, my experience is typical of that encountered by others who dare to seek such patents in the U.S.A. Now retired, I happen to have a scientific interest in the subject but, careerwise, I was a European Patent Attorney in the employ of IBM. I was Director of IBM’s European Patent Operations. Indeed, many years ago, when the draft of the Patent Cooperation Treaty (P.C.T.) was being discussed in Geneva, I recall being called upon to represent N.A.M., the National Association of Manufacturers, at such a meeting. I could never have imagined that one day I would witness the situation that has now occurred in the examining division of the U.S. Patent Office that deals specifically with cold fusion patent applications.

    However, what I have to say below I say as an applicant-inventor but I preface that with the general observation that, in the practical world of the corporate patent environment, when a patent has been on the books for 8 to 10 years, if it is not by then earning its keep by being licensed or by protecting something to be seen on the production line, then its days before it is allowed to lapse by non-payment of renewal fees are numbered. I am not surprised therefore to hear that ENECO in Utah has decided to cut their losses and withdraw from their contest with the U.S. Patent Office. Given then that the files of record in the Patent Office are not available for inspection by the public until a patent issues, it may be that the histories of the file record on cold fusion applications will remain a secret. However, I can here record a little part of that history by making this disclosure as a patent applicant.

    It began in 1989

    In March 1989 I was in my sixth year after retirement and in my sixth year as a Visiting Senior Research Fellow at the University of Southampton. I was in the Department of Electrical Engineering, housed in the Faraday building close to the Department of Electrochemistry to which Martin Fleischmann belonged. There had been no reason for contact with Professor Fleischmann, even though I had indulged in ideas concerned with protons, deuterons, and neutrons.

    Indeed, some three years previously, in 1986, I had been very pleased to secure publication of a sequence of three papers, Papers 1, 2 and 3 in the Part 2 section of my 1996 book ‘Aether Science Papers’, and the first of these was entitled ‘The Theoretical Nature of the Deuteron and the Neutron’. This paper explained how one could determine the proton/electron mass ratio, as being 1836.152, precisely in accord with its measured value. It further gave a similarly precise account of the nature of the deuteron and the neutron. The theory told me that there is no neutron in the deuteron! (Keep in mind here that the Fleischmann and Pons experimental findings were later criticised because there was no evidence of neutron emission from what was ostensibly a deuteron fusion reaction.)

    Now I mention this because I did, at the time reprints of these papers were available, send copies of all three to the relevant professor in the Department of Physics at the university. I explained that, though I was in the Department of Electrical Engineering, the fact that these three papers concerned theoretical physics might lead to enquiries directed to his function and, therefore, it was appropriate that he should see what I had written. I did know that he and his research staff were involved in a funded project involving analysis of particle data, and it was quantum chromodynamic theory that was supposed to account for proton creation. My simple derivation of the proton/electron mass ratio, to within a fraction of a part per million of its measured value, should have impressed him.

    On the contrary, the professor was, dare I say, quite livid in his reaction, which came back in writing, with a copy to the head of the Department of Electrical Engineering. The letter expressed his outrage in the form of a declaration that these were the sort of ideas that he had been trying to stop his research students from voicing. He was not happy. I read this as meaning that my independent unfunded contribution to the understanding of the nature of the proton, deuteron and neutron could conceivably impact his chances of securing continued funding for the particle research on which his group thrived. The head of my department expressed to me his dismay at the attitude his physics counterpart had taken. For my part, I wondered what was happening in the academic world if research students were not allowed some freedom to develop ideas of their own.

    Was research funding so important that it had to be expended on using ‘research students’, virtually as slaves who sift through particle data at the crack of a whip from their professor, but could not be let loose as free-thinking individuals, albeit targeting their attentions at the common objective? Could it be that that professor was concerned in case his students actually saw my way of deriving the proton-electron mass ratio and he feared their revolt against the theoretical route he was advocating? If so, I hope one of those students will see these Web pages of mine and make his or her own judgement. Would it not have been prudent for that professor to show my papers to his students, given that he knew that they had an inclination towards such ideas themselves, and then explain to them in clear terms why what I was advocating had to be in error?

    Well, that is water under the bridge, as they say, but it will, I hope, show why I was particularly interested when the University of Utah staged the Fleischmann and Pons announcement.

    It certainly caused me to exercise my mind, because here was a claim that an excess of heat was being produced in an electrolytic cell in which the deuteron content of heavy water had been adsorbed into a palladium cathode. As I saw it, if the process involved really was a nuclear fusion process, then that could confirm views I had held for many years concerning the fallacy that the sun’s power was dependent upon hot fusion.

    I had discussed this subject in a book I wrote in 1972: ‘Modern Aether Science’. On page 68 of that work I suggested that the energy released by stars was really coming from the aether. The aether is full of energy, primarily in the form of what are called ‘heavy electrons’ or mu-mesons or, to use another name, muons, but physicists see these as an enigma and wonder where they fit into Nature’s pattern. Those muons create protons and one can begin to see how the aether can shed energy when such mysteries of Nature are deciphered in terms of an acceptance that the aether does exist.

    My curiosity about the excess heat source of the Fleischmann and Pons cell had been aroused. I even wondered if it could be connected with something I had explored experimentally myself, when I tried to set up electrodynamic forces between electric currents in two adjacent electrolytic cells containing a concentrated salt solution involving normal water.

    To formulate a law of electrodynamics which conforms to the form of the law of gravity but complies with the basic experimental observations, one is driven into a rather special situation. If two electric charges in general motion interact electrodynamically, but with their electrostatic interactions compensated, then there is a form of law which prescribes that, so long as their motions relative to the electromagnetic reference frame are mutually parallel or antiparallel, there will be an inverse square law of force acting between them directed between their centres of charge. However, should those motions not be of that form, there will be an imbalance of action and reaction. This means that the aether itself, or whatever it is that determines that electromagnetic frame of reference, will shed energy as it tries to assure that balance of action and reaction.
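
    For comparison with that point about action and reaction, here is a minimal sketch of the orthodox low-velocity (Biot-Savart plus Lorentz) interaction between two point charges; this is the textbook approximation, not Aspden’s own law. Even in this form the mutual magnetic forces balance for parallel motions but fail to balance otherwise, the difference conventionally being assigned to field momentum:

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi  # permeability of free space

    def magnetic_force(q1, v1, r1, q2, v2, r2):
        """Force on charge 1 from charge 2: F = q1 v1 x B2, Biot-Savart B field."""
        r = r1 - r2
        d = np.linalg.norm(r)
        B = MU0 * q2 * np.cross(v2, r / d) / (4 * np.pi * d**2)
        return np.cross(q1 * v1, B)

    q = 1.602e-19
    r1, r2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
    v_y = np.array([0.0, 1e5, 0.0])   # motion transverse to the joining line
    v_x = np.array([1e5, 0.0, 0.0])   # motion along the joining line

    # Parallel motions: the two forces sum to zero (action balances reaction).
    print(magnetic_force(q, v_y, r1, q, v_y, r2) + magnetic_force(q, v_y, r2, q, v_y, r1))
    # Non-parallel motions: the sum is non-zero.
    print(magnetic_force(q, v_x, r1, q, v_y, r2) + magnetic_force(q, v_y, r2, q, v_x, r1))
    ```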

    My interest had focused on the electrodynamic interaction of electrons and heavy ions. In the cold fusion experiments I could see that here was a situation where heavy ions (deuterons) could move freely inside the metal body of a cathode in which there were electrical currents conveyed by electrons. The electrons had a short mean free path, but the deuterons might flow by meandering to avoid the positively charged base structure of the metal crystal and this I saw as a possible recipe for electrodynamic action leading to an anomalous acceleration of those deuterons powered by energy supplied by the aether. That would generate anomalous power in the form of heat.

    I saw, on the other hand, that here might be a true cold fusion process if those deuterons were able to fuse by being driven into one another by those anomalous electrodynamic forces, which my theory indicated would be thousands of times greater than could be expected from an electron-electron interaction.
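
    The ‘thousands of times’ factor mentioned here appears, from the discussion later in these pages, to track the mass ratio of the current carriers; the following is simply that ratio from standard masses, not an evaluation of the anomalous force law itself:

    ```python
    # Deuteron-to-electron mass ratio, the scale factor invoked in the text.
    m_e = 9.109e-31    # electron mass, kg
    m_d = 3.3436e-27   # deuteron mass, kg
    print(m_d / m_e)   # ~3670
    ```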

    Either way, it made sense, in my opinion, to contemplate having an apparatus in which the cathode was part of a closed circuit path of very low resistance carrying a high current, meaning that this would involve very little heat injection by normal means. This, I reasoned, would enhance the activity in which those adsorbed deuterons were involved. One could have 100 amps circulating in a cathode and just a few amps traversing the electrolyte from anode to cathode. Inject heat to enhance the heat gain and do it by making the closed cathode circuit a secondary winding on a transformer fed with very little power input to its primary winding. That was my ‘invention’ as conceived a few days after I heard of the Fleischmann and Pons announcement.
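
    As a rough numerical sketch of that drive arrangement (the 100 amps is from the text; the loop resistance is an assumed illustrative figure), the conventional ohmic heat injected into such a low-resistance closed cathode circuit is indeed small:

    ```python
    # Back-of-envelope ohmic heating in the closed cathode loop.
    I_loop = 100.0   # amps circulating in the cathode loop (from the text)
    R_loop = 1e-4    # ohms, assumed for a short, thick metal loop
    print(I_loop**2 * R_loop, "W of conventional heat input")  # ~1 W here
    ```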

    I filed my patent application on April 15, 1989 at the British Patent Office.

    So by this Lecture on my Web pages I introduce my own story of the saga I encountered once I took an interest in ‘cold fusion’.

    Harold Aspden
    March 14, 1998

    To choose a ‘cold fusion topic’, press the link button:
    Cold Fusion Index


  • LECTURE NO. 13

    LECTURE NO. 13

    COLD FUSION: EINSTEIN’S QUESTION

    Copyright © Harold Aspden, 1998

    WHAT IS NUCLEAR FUSION?

    ‘Nuclear fusion’ is nothing other than the creation of heavier forms of matter as we know it from its constituent components. Physicists use the word ‘nucleon’ and, whereas the hydrogen atom has a nucleon count of 1, there being one unit of proton mass in its nucleus, gold, for example, comprises 197 nucleons, cobalt 59, and aluminium 27.

    Uranium has an even larger number of nucleons, its prime form having 238, but it occurs naturally with a 0.7% presence of a form containing 235 nucleons and, if that latter form is isolated in sufficient quantity, its nuclei break apart and one has fission, the opposite of fusion.

    Now, going back in history, it was the aim of the alchemist to transmute base metals into gold. Lead has more nucleons than gold so to form gold from lead those alchemists would have had to trigger a fission reaction, whereas to form gold from copper or silver, which have fewer nucleons, the transmutation process would have involved fusion. Isaac Newton, a notorious alchemist, was deeply interested in understanding Nature’s processes of creation and so pursued such research, no doubt having an eye on the initial financial rewards before the eventual devaluation of gold when the secret of the transmutation process became public knowledge.

    If you wish to read about Newton’s efforts on that front, then a book published in England in 1997 tells the story. It is entitled: ‘Isaac Newton: The Last Sorcerer’. Its author is Michael White and its reference is ISBN 1-85702-416-8.

    On page 183 of that book, after a statement in the main text: ‘If the experiments do not support the proposal then it will not be promoted from a mere hypothesis’, there follows a footnote, reading:

    A modern example is the case of cold fusion. In 1989, Professors Fleischmann and Pons, announced that they had achieved the process of nuclear fusion at around room temperature. However, before this staggering claim could be accepted, teams of scientists around the world tried to repeat the experiments. When these attempts failed completely, the idea was considered to be almost certainly false and tests eventually pinpointed the fault with the original experiments. Although Professor Fleischmann is still researching cold fusion at a centre financed by a collection of Japanese corporations, the cold-fusion hypothesis has been widely discarded by orthodox science.

    So, the historian who writes about the alchemist’s dream based on the life and times of Isaac Newton has already taken note that the ongoing fiasco concerning ‘cold fusion’ is destined to become history repeating itself, alchemy and cold fusion being different names for the same kind of pursuit, where the word ‘energy’ substitutes for the word ‘gold’.

    Now, when I read the book about Isaac Newton’s efforts to transmute one element into another, I had no difficulty in appreciating why Newton had that intuitive belief that Nature must have its own way of causing elements to undergo transmutation. Surely, just because we now think that these transmutations occurred once and for all in a Big Bang event or in a still ongoing activity at the centres of stars, that should not affect our intuitive feeling that there might be other ways of promoting the transmutations here on Earth at normal temperatures.

    If you say that such ‘intuition’ is unwarranted then please give me an answer to Einstein’s question. Einstein was not an alchemist. He merely promoted his hypotheses, and remember that it should take more than a hypothesis to establish a belief which can be sustained in the world of orthodox science. I have never accepted the doctrines of Einstein’s theory, simply because they stand in the way of onward scientific development and technology based on there being an energy resource locked into the aether that fills space. I see that aether as the energy source from which those nucleons are created and, on from there, when we talk of alchemy and cold fusion, we are only concerned with how those nucleons can combine to form a variety of atoms. I am attentive to what I hear about such reactions occurring at modest temperatures and that brings me back to Einstein’s question.

    EINSTEIN’S QUESTION

    You may not have heard about this ‘question’, so this may come to you as an item of news. The question is, in Einstein’s own words:

    Why does there still exist uranium, despite its comparatively rapid decomposition, and despite the fact that no possibility for the creation of uranium is recognizable?

    This question was raised in the ‘Appendix for the Second Edition’ of Einstein’s book ‘The Meaning of Relativity’, Princeton University Press. Einstein did not even attempt an answer. The paragraph in which the question was raised ended with the sentence: ‘But it would lead too far to go into questions of this type.’

    So, ‘Hail, Einstein’, the scientist who did not want to extend himself by probing questions which even Isaac Newton was willing to explore by his alchemical pursuits, and which Professors Fleischmann and Pons later probed by contemplating transmutation of elements at normal laboratory temperatures. Yes, it is much easier to ‘discover’ what is happening in the far depths of space-time than it is to discover how things are as they are on body Earth.

    Then there is my own question, with Isaac Newton in mind, one I would add to that of Einstein: “Why is it that gold appeared on Earth in the form of gold nuggets, rather than as atoms sprinkled around and mixed with atoms of other elements over the whole of the Earth’s crust?” You see, if those atoms of gold were formed in hot fusion (or hot fission) reactions in a Big Bang event or deep in the hot core of stars, how did they contrive to come together as a solid lump of gold to form that nugget on planet Earth? It is just as important, indeed probably far more important, to discover how they got together on Earth as it is to know, or rather imagine, how they were formed as atoms some billions of years ago.

    Physicists, it seems, those of our modern academic community, only raise questions that they can answer. So Einstein deserves credit for putting that question I mention above. What, then, is the answer? Does ‘orthodox science’ offer such an answer?

    THE GOLD NUGGET THEORY

    We know from the folklore of modern physics that the heavier elements, such as gold, were created in the nuclear activity within stars. However, physicists who subscribe to that belief yet advocate exclusive reliance on facts they can verify by observation have not taken account of a fact known to every prospector who has searched for gold. This is that gold is a rare commodity found only where gold exists in concentrated form.

    By this statement I intend to stress the fact that atoms are not dispersed in a random distribution in a general mixture with the myriad of other atomic species which constitute matter. We find gold where there is gold, just as in the Wild West of America one found wild horses where there were other wild horses. The ‘herding instinct’ is at work in the latter case but, more to the point, there is the ‘creation and survival factor’. People are ‘created’ where there are people and it does not need a physicist to tell you why. There are two principal factors which govern this circumstance, the emergence of the embryonic form and its survival after birth. Survival depends upon community, collective association and parental care.

    In the particle world of the physicist the close association of two particles of opposite electrical polarity, a parent particle and a ‘seeding’ particle, given energy input, can create a parent and offspring, which separate to build a family of mixed polarity but like particle forms. That is something you can read about elsewhere in these Web pages. See [1978b]. However, following the creation of such particles, they are subject to processes of decay much as applies to us humans. They have interplay of energy amongst themselves and the rest of the particle world. They are bombarded at random by energy activity from the background environment, but their chance of survival is enhanced by a mutual proximity which allows them to nurture one another by pooling their energy. So, their mean lifetimes increase if they cluster or herd together. Again that is something that can be explained by basic physical principles. See, for example, [1981b] and [1982d]. Here I ask you just to accept the proposition that the creation of an atom is followed by its inevitable decay, but that the chance of decay depends upon two factors: (a) whether the atom is over-weight or of a specially awkward form and (b) whether the atom keeps close company with a family of atoms of identical form.

    You can recognize radioactivity as being the symptom of the first category. There are two ‘mid-weight’ atoms in the periodic table of atomic elements which are awkward in the sense just described, these being promethium and technetium. They are social mis-fits in the quantum world and suffer early demise. I show why in one of my published scientific papers. See [1987a]. Then, concerning the lifetime factor, the family proximity that bears on mutual stability involves energy exchange of a locally conservative kind, one requiring at least three identical particles to keep close company. In the case of the atom as a whole, the nuclear core charge is the ‘particle’ in question. I do not mean the atomic nucleus as a whole, but rather the Ze charge form, Z being the atomic number of the atom.

    My conception of an atom is one having a positive core charge Ze surrounded by a local ‘cloud’ of anti-protons which have displaced negative aether charges from adjacent lattice sites. Remember that Nobel Laureate Dirac theorized that the aether contained electrons sitting as latent occupants of positive holes, sites in the aether compartments where a negative charge could sit and exhibit only a neutral effect. According to Dirac an electron is ‘created’ when such a ‘hole’ is vacated and this leaves a positive charge behind which is interpreted as a ‘positron’.

    Well, I have my own way of describing the aether, but it amounts to something similar. My vision is that anti-protons can sit in those positive holes and appear as electrically neutral nucleons. I do not use the word ‘neutron’ in this description of the atom. A neutron, as such, is something only seen in free flight, well clear of the atom from which it is created as a decay product, which itself decays with a half-life of a few minutes. So, overall, an atom comprises the Ze charge unit, a series of latent, neutralized anti-protons clustered around that Ze charge, and the familiar satellite system of Z electrons which make the atom electrically neutral as a whole.

    The stability and equilibrium factor depends upon how those Ze charges of adjacent atoms get on with each other. They will be stable, mutually stable, if Z is the same for all members of the group.

    To sum up, if Nature creates a gold atom somewhere by a kind of random process, then it will decay if it is not in the near presence of other gold atoms. It will decay anyway eventually, as do all atoms, but the overall result, what we see, is the outcome of an ongoing contest, governed by chance and probabilities of creation and decay as between the various atomic species, tempered by events which juggle the atoms about in geological or cosmic turmoil.

    In short, there is nothing in this story that says that the spectrum of atomic species must originate exclusively within the inferno of a star or a Big Bang event. My answer to Einstein’s question is that uranium is still being created here and now on body Earth, as are gold and deuterium and even hydrogen.

    So, if Nature has a way of determining the natural abundance as between the respective elements and we, for example, move all the heavy water component, D2O, deuterium oxide, into isolation in, say, the Dead Sea and leave the oceans in their purified form as H2O, we might expect Nature to get to work and transmute by fission or fusion until the proper balance of heavy and light water in the seas and oceans is restored. That would involve ‘cold fusion’ such as we have seen in the accelerated process reported by Fleischmann and Pons.

    Einstein’s question might well, therefore, find its answer in the work of Isaac Newton and the efforts of those now active on the cold fusion scene.

    COLD FUSION: MY PERSONAL OPINION

    As someone trained professionally as a Patent Attorney, having to understand inventions, often in greater depth than does the inventor, I have learned to be attentive when I hear of something new and important in the technological fields in which I have specialized. The technology of electrical power systems, and computers during their years of evolution from the 1950s, even the electronic aspects of control systems for ballistic missiles were all part of my spectrum of activity. So one does not set up a hostile barrier upon hearing of a new development. Rather one must seek to understand, in the knowledge that, when the inventor wanders off to get on with his pursuits, one has to document a full description of the invention and stand ready to argue with a Patent Examiner its operation and its merits in relation to prior art.

    I had retired from my position with IBM and was engaged in my own research at university when I heard of the ‘cold fusion’ breakthrough by Fleischmann and Pons. I imagined the situation if that invention had come my way for patent protection in the days when I was handling such work myself. My guess was that the invention would be seen more as a chemical process for generating heat than as a heat-generating apparatus. I knew that if the invention really was to be the energy breakthrough that was promised, there would be numerous onward developments all patentable and that competition to get patent cover in this new field would be enormous. It is characteristic of all major technological developments.

    What struck me as particularly relevant to my interests was the fact that it seemed that no neutrons were emitted as deuterium atoms fused to shed heat. My published scientific work had established to my satisfaction that there is no neutron in the deuteron and so here was something that I thought could confirm my theory. My published scientific work had also revealed my interest in anomalous electrodynamic forces set up when an electric current is transported, not by electrons, but by heavy atomic particles such as protons (or deuterons), these having masses some thousands of times larger than the electron mass. Those anomalous forces, reported from research dating back half a century and more, had never been explained. They were – well, I will not digress. I have written about this elsewhere (see [1969a] and [1977a]), so suffice it to say here that I wondered if the deuterons absorbed into the cathode of the Fleischmann-Pons electrolytic cells were being accelerated anomalously so as to collide with one another in the metal of the cathode and so trigger fusion. I also knew of other energy anomalies occurring in metal when carrying electric current and so my interest was aroused as well as my imagination.

    I thought about the effects of introducing a strong electric current flow, with very low energy input to a closed circuit including the cathode in a modified cell, based on what Fleischmann-Pons claimed to have discovered. I knew this would add heat and the real object was to generate heat as output rather than supply it as input, but it makes sense to add a little of something, a strong electric current stimulus, to get more action from the cell.

    So it was that, on April 15, 1989, a matter of days after the Fleischmann-Pons discovery became news, I filed a patent application at the British Patent Office with apparatus claims directed at that proposition. It was eventually granted. However, the corresponding U.S. Patent Application was destined to become part of a saga that I could never have imagined during my years of experience in the patent profession.

    I intend to tell that story in these Web pages, but that will take us to another Lecture and on from there. My object in this Lecture is to defend the spirit of scientific enquiry that motivated those of long ago who sought to transmute the elements and to point out that Einstein, the champion of orthodox science, as he has become, never did answer his own ‘cold fusion’ question concerning the presence of uranium on body Earth.

  • LECTURE NO. 12

    LECTURE NO. 12

    WHY HAWKING IS WRONG!

    Copyright Harold Aspden, 1998

    Introduction

    The time has come when the wild imagination of those who indulge in cosmological theory based on the Big Bang theme needs to be tamed by reminding its proponents of something they should have learned in their student days. I am writing these words on February 25th, 1998 and have in mind a brief article by Nick Nuttall which has appeared here in the U.K. in THE TIMES of Monday February 23rd, two days ago. Under the heading ‘The Universe will expand for ever, says Hawking’, it reads:

    The Universe began as a tiny particle, Stephen Hawking, the scientist and bestselling author has concluded. The Lucasian Professor of Mathematics at Cambridge University has turned his attention to what may have happened in the fraction of a second before Big Bang. Professor Hawking and Neil Turok, also of Cambridge, believe that not only was there a microscopic particle but that it was expanding in a process known as inflation just before Big Bang …… Their theory, based on one put forward by Professor Hawking in 1983 and Einstein’s theory of gravity, also concludes that the universe will expand for ever. …. The theory pointed to the Universe being cone-shaped, having started out as a dot in space and time and expanding like an ice-cream cornet over the 12 billion years. Professor Hawking will outline the theory at a meeting next month at the California Institute of Technology.

    Now, it is extremely easy, and involves quite elementary physics, to prove that the universe, governed as it is by the standard physics of electric fields, is subject to the space constraints imposed by planar boundaries. Were this not so, the energy density stored by an electric field would not conform with the established empirical formulations of fundamental physics. This precludes the evolution of a universe, one which necessarily embraces the field medium associated with its electric fields, as expanding from a point, even in a symmetric spherical expansion. Furthermore, one must challenge the very belief that the universe expands, given that it is based on the assumption that light waves once emitted by a radiating atom never ever suffer a loss of frequency in their billions of years of passage through space. I have discussed this latter point in the previous lecture, Lecture No. 11, but I will seek in this Lecture No. 12 to clarify what I have just said about the planar space boundary.

    A LETTER TO STEPHEN HAWKING

    I have, in Lectures 10 and 11, referred to the findings of Dave Gieskieng from his canyon experiments in Colorado. Well, looking through my piles of papers and concerning Gieskieng, I see that I was in correspondence with him from a period beginning in 1983. I find amongst those papers a copy of a letter which Dave Gieskieng sent to Stephen Hawking, bearing the date 31 March, 1984. I will quote it below in full, because I wish to make the point that Gieskieng was urging Hawking to consider the implications of his antenna research some 14 years ago.

    S. W. Hawking
    University of Cambridge
    Department of Applied Mathematics and Theoretical Physics
    Silver Street
    Cambridge CB3 9EW
    England

    Dear Mr. Hawking:

    Thank you for the copy of your paper, THE LIMITS OF SPACE AND TIME. One of the central themes appears to be gravity, and I thought that you might be interested in a paper I wrote some time ago, DOES AETHER CAUSE GRAVITY? It would seem to be the only medium universal enough to accommodate both radio and gravitation.

    Harold Aspden of Southampton was kind enough to review my paper that you have, and suggested that I might write a brief paper, concentrating upon one of its graphs. This has just been done with rather startling perceptions of the aether and electric fields emerging. It is two pages, and titled: A CHARACTERIZATION OF THE AETHER.

    In my previous efforts on this I have not been able to get the radio magazines interested in the antenna and its results. Since you have had some success with the more scientific publication, Nature, I am wondering if you would care to perhaps join in critiquing it for submittal to them. Would you have any other suggestions?

    Yours sincerely,
    (signed)
    D. H. Gieskieng
    9653 Renselaer Dr.
    Arvada, Colorado 80004
    U.S.A.

    Now, let me say at once that I am not in the least surprised that this letter made no impact on Professor Hawking. Hawking is wedded to Einstein’s theory and the Big Bang. He is hardly likely to be interested in anything which implies interest in the ‘aether’ and he is a theoretical physicist, with a cosmological speciality, so experiments involving radio antennas are unlikely to evoke his interest. Nor, indeed, are such letters from non-academic sources likely to be given more than a cursory glance. That said, however, the point is made that the experimental evidence which I see as a killer, so far as Big Bang theory is concerned, is there in those Gieskieng experiments. The aether, the non-expanding aether, with its energy attributes and its planar boundaries governing how electromagnetic waves shed and draw upon aether energy, is playing its part in those experiments by Dave Gieskieng.

    A LESSON ON ELECTRICAL SCIENCE

    The most fundamental property of space, meaning what we regard as the vacuum, is that it can store energy. We are introduced to this in our early lessons in physics by learning about parallel plate capacitors. Whether there is air between the capacitor plates or a vacuum, it has very little effect on the amount of energy one can store between two parallel metal plates by setting up a voltage between the plates.

    We know that most of that energy is stored in that something that we admit occupies the so-called vacuum. Teachers avoid reference to it as ‘the aether’. They prefer to assign a dielectric constant to the vacuum and then tell us that there is really nothing there inside that vacuum. If we define the vacuum as ‘space devoid of matter’ then that does not preclude the presence of that ‘something’, which I say is the ‘aether’. The teacher avoids the word and, if pressed, hides behind a screen of mathematics representing Maxwell’s equations, but a discerning student knows that there is an aether, whatever the textbooks may say. However, I am sure that some students are so disillusioned by the lack of physical reality and the reliance on mere mathematical formulations, all arising from the aether being ‘outlawed’, that some, at least, must turn away from physics and, as a result, wander into other fields of learning.

    There is an aether! To store energy in that aether, in accordance with the facts observed by experiment, it must have planar boundaries. A parallel plate capacitor has such boundaries, but take away the metal plates and, somehow, somewhere in the space we inhabit, there exist those boundaries as a property of space itself. Otherwise, the energy density stored in the vacuum medium by an electric field would not conform with the formulae that we know are valid.
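
    For reference, the standard parallel-plate relations that this argument leans on can be stated compactly (ordinary textbook electrostatics, not the planar-boundary proof itself):

    ```latex
    C = \frac{\varepsilon_0 A}{d}, \qquad
    W = \tfrac{1}{2} C V^2, \qquad
    E = \frac{V}{d}, \qquad
    u = \frac{W}{A d} = \tfrac{1}{2} \varepsilon_0 E^2 .
    ```

    It is this empirically anchored energy density u that, on the lecture’s argument, any model of the vacuum medium has to reproduce.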

    There is really no sense in imagining that the material universe can begin as a point and then spread to infinity, if one admits there is an aether present that already extends to infinity. What is the sense in trying to create a universe from ‘nothing’ – a mere point in space – when common sense tells us that if there is an aether present throughout space, that aether is the likely source of the energy which feeds the creation of the universe? The universe should be associated with matter which is created throughout space and merely nucleates to form stars everywhere in space. So, to talk about expansion of the universe from a point one needs to explain how energy drawn from the aether ever converged on that point or bring the aether itself into the expansion scheme.

    I wonder if Professor Hawking has even thought of this, because it is rather basic and it bears heavily upon his conclusions concerning the expanding form of the universe. The chances are that for ‘aether’ he substitutes Einstein’s mathematical equations and it is then rather difficult to separate an argument about the ‘universe’ from whatever it is that those equations represent.

    I will try not to comment further in this style and will, instead, concentrate on merely presenting some additional background information and then the formal elementary proof of the planar space boundary proposition.

    THE UNIVERSE PRE-HAWKING

    It was in December 1954 that Albert Einstein signed off on the Preface Note of the Fifth Edition of his book The Meaning of Relativity. Contemporary with that the University of Cambridge issued Abstracts of Dissertations, the ones approved for research degrees 1953-1954. Amongst those dissertation abstracts, which included the one relating to my Ph.D. degree, there was one entitled ‘On the Origin of Inertia’ by Dennis Sciama. Sciama in later years provided the supervision for the Ph.D. research of Stephen Hawking.

    On page 107 of that book, writing about the meaning of his General Theory, Einstein presents a set of conclusions which read:

    Thus we may present the following arguments against the conception of a space-infinite, and for the conception of a space-bounded, or closed, universe:-
    1. From the standpoint of the theory of relativity, to postulate a closed universe is very much simpler than to postulate the corresponding boundary condition at infinity of the quasi-Euclidean structure of the universe.
    2. The idea that Mach expressed, that inertia depends upon the mutual action of bodies, is contained, to a first approximation, in the equations of the theory of relativity; it follows from those equations that inertia depends, at least in part, upon mutual actions between masses. But this idea of Mach’s corresponds only to a finite universe, bounded in space, and not to a quasi-Euclidean, infinite universe. From the standpoint of epistemology it is more satisfying to have the mechanical properties of space completely determined by matter, and this is the case only in a closed universe.
    3. An infinite universe is possible only if the mean density of matter in the universe vanishes. Although such an assumption is logically possible, it is less probable than the assumption that there is a finite mean density of matter in the universe.

    So this tells us that we need to understand inertia before we really can say much about the limits of space and our universe, which brings us to Sciama’s Ph.D. dissertation, entitled ‘On the Origin of Inertia’. The abstract tells us that:

    As Einstein himself was the first to point out, general relativity does not fully account for inertia. Thus a new theory of gravitation is needed.

    The abstract goes on to refer to:

    a precise theory, according to which local observations of inertia suffice to specify the large-scale structure of the universe.

    So here we see that the scale of the universe, its shape and its boundary form, as well as its evolution over time, are matters intimately interwoven with the theory of relativity, but ever dependent upon our understanding of what it is that endows a particle with its inertial property or mass. It is not sufficient to declare that E = Mc^2; one must know why it is that E is equal to Mc^2!

    The Mach principle was deemed to govern inertia back in 1954.

    Well, a decade later, I had evolved my own theory of gravitation based on a structured aether and the numbers, meaning those that related to the relationship between G, the constant of gravitation, electron mass and charge etc., came out right only if I assumed that E = Mc^2 was attributable to the non-radiation of energy by the accelerated charge. Here I was saying that if you try to accelerate an electric charge by putting it in an electric field, i.e. by causing other charges to act upon it, it will react to resist dissipation of its energy. It will seek to preserve itself, a normal reaction one can surely expect, and that means a reluctance to move spontaneously as it prefers to move in just such a way as to keep its energy in proper order. Here was the property of inertia and that E = Mc^2 formula was derived on that basis, namely that energy is conserved and not shed by radiation. In one single step I had circumnavigated Einstein’s theory and solved the mystery of inertia, but I had no need to even think about the far-off deployment of mass at the outer bounds of the universe and its gravitational interaction with the particle in question.
    I visited Cambridge from time to time and one day called in to chat about this with someone in the Department of Applied Mathematics and Theoretical Physics in Silver Street. That ‘someone’ who gave attention to my request was Sciama. He invited me to join him over his tea break and we talked about my theory. Sciama was courteous but I had scored no success. I left with my ears recollecting his words of advice: “if you write about these matters you should not use the word ‘aether’. Yes, there is an ‘aether’, but we call it ‘space-time’”.

    It was another ten years before the E = Mc^2 proposition of mine, though published in my 1966 book The Theory of Gravitation, was finally accepted as a formal peer-reviewed paper by the International Journal of Theoretical Physics, 1976b.

    So far as the above Einstein conclusions are concerned, by eliminating the Mach principle as an explanation of inertia I had no need to be concerned about whether the Universe was closed or infinite, but I do wonder how Stephen Hawking can theorize on these matters without a proper understanding of the nature of inertia.

    I will come now to my own theme concerning the boundary conditions governing the medium that fills space.

    PLANAR SPACE BOUNDARY THEORY

    I have already, in Essay No. 2 in these Web pages, presented the case for establishing that space is ‘sliced’, as it were, into segments which have planar boundaries. However, I will develop the argument here from a slightly different standpoint.

    Firstly, imagine that someone tells you that the surface of the United States of America is expanding steadily. That either means that it is growing at the edges where it interfaces with the oceans or it is literally stretching outwards everywhere as if it is supported on an elastic base which is expanding. The question one might then ask is how this might affect the division of the U.S.A. into its several states. Would there be need to add new states in the areas formed at the boundaries or would all the existing states get progressively larger in the process?

    Now, that is not something one needs to worry about, but let me now come to my second point. Imagine instead a block of steel which is progressively extended by welding on more and more steel. As physicists already know and as cosmologists should know, that body of steel comprises minute ‘states’ called ‘magnetic domains’, each one populated by many trillions of iron atoms. Those domains, which have planar boundaries and measure of the order of 100 microns between adjacent planar boundaries, do not expand in proportion to the growing size of that steel block. In fact, new domains are added as the block grows, because those domains are governed by macroscopic physical conditions not deriving from the individual component atoms within the domain. Those conditions are linked to the collective properties of a number of those atoms, determined in conjunction with the electrical properties of the underlying medium, space or aether, however you might choose to term it, but really the capacity of that medium to store field energy.

    In short, there is, in every piece of steel, indeed in every ferromagnetic substance, a latent presence of domains which exist in their own right, governed solely by the way in which energy present chooses to deploy itself to suit its optimum lifestyle.

    Now, what I am coming to by this argument is the proposition that, on a very large scale, extending throughout outer space, the presence of matter which we see as the ‘universe’ exists in an all-pervading system of space domains, each of which has planar boundaries. This is not really a hypothesis, because if you do not accept what I am suggesting then you have grounded yourself as a physicist until you wrestle with and resolve the issue of how it is that the energy density of an electric field in the vacuum medium can possibly have the value as formulated in your textbooks. Yes, you can say that empirical data based on observations here on body Earth are sufficient for your needs, but if you are a theoretical physicist, especially a cosmologist, you cannot build your notions with confidence unless you can deduce that energy density formulation by theory which takes full account of boundary conditions applicable in space, even in outer space!

    At this point it may comfort the reader if I quote two paragraphs from a paper on this subject published by the Italian Institute of Physics in 1983. See: 1983k in these Web pages.

    Modern physical theory is tending to regard the vacuum medium as having structure somewhat analogous to that of crystalline materials. Thus we see WEISSKOPF [1] discussing quantum electroweak dynamics and asserting that the Higgs field implies that the vacuum has a certain fixed direction in isospace, namely that of the spinor associated with the Higgs field. WEISSKOPF states that the situation is like that of a ferromagnet, in which the direction in real space is determined as long as the energy transfers are smaller than the Curie energy.

    This, of course, implies an ordered structure of the vacuum medium, a feature discussed at some length by REBBI [2] in an article entitled The Lattice Theory of Quark Confinement. REBBI refers to a 1974 proposal by Wilson that QCD (Quantum Chromodynamics) should be formulated on a cubic lattice, an array that divides space and time into discrete points, but is essentially an approximation to real space-time. The advantage is that this allows calculations to be made that would otherwise be impossible.

    The references just introduced are:

    [1] V. F. Weisskopf: Physics Today, 69 (1981)
    [2] C. Rebbi: Scientific American, v. 248, 36 (1983)

    You see! Physicists have a very complicated way of communicating their agony at not being able to fathom what is really going on in space. Given that admission that there is the glimmer of a connection with ferromagnetism, let us move on by keeping our physics in proper perspective.

    The planar domains in steel should be your clue, if you venture to theorize about the expansion of the universe and wonder about its outer bounds. Indeed, you should be very careful about how you define your ‘universe’, because there is that something in space that provides the storehouse for the energy which comes with your material universe and you cannot just ignore its extent and the simple question of whether it also ‘expands’ with your universe. If you believe in Einstein’s theory then you really have to weigh how that four-space system shares that expansion.

    Where Stephen Hawking’s ‘cone-shaped’ universe, expanding from a point and destined to go on a never-ending trek all the way to infinity, fits into the picture of reality, you must judge for yourself. My case is that space contains planar boundaries and I do not see them as part of a universal expansion. Do not be deceived by the conical sections I shall introduce below to develop the mathematical analysis. I am merely sectionalizing space into segments defined by solid angles to perform elemental analysis before merging the results to apply to space as a whole.

    Note that the task is to explain how energy is stored in space. To store energy in a linear system you need to do work against a force arising from the distortion of whatever it is that fills that space. By linear system I mean one with a linear force rate. Twice the force implies four times the energy stored. This is the essential ingredient of simple harmonic motion, and related oscillations, as well as the property of electric and magnetic fields. So, space must contain something that has two parts which it tries to keep together but which we separate slightly by storing energy.
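    As a one-line check, assuming a linear force rate k: the force at displacement x is F = kx and the stored energy is W = kx^2/2 = F^2/2k, so doubling the force does indeed quadruple the energy stored.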

    The obvious assumption then is that we are talking here about electric charge, which comes in positive and negative forms that attract one another, but which can, at least in a macroscopic sense, compensate one another unless displaced in the process of storing energy. The question then is how boundaries can form within such a system to create something analogous to those domains. To answer this we need to introduce an asymmetrical property. There is something special about negative charge compared with positive charge in what we see as our local space, but that feature inverts as between the two charge polarities in an adjacent region of space, thereby giving scope for defining a boundary between two types of vacuum medium, shall we say ‘space’ and ‘anti-space’?

    Speculate if you will about how this can be and be guided by a clue I now offer. Maybe positive and negative charge are really only states of an oscillating system. All positive charges share the same oscillation phase and all negative charges share the same phase of oscillation, but the latter are in anti-phase with the former. Yes, that involves instantaneous action at a distance, but we are talking here about the fields of electric charges and not propagating disturbances characterized by electric and magnetic field effects. Remember that quantum theory requires action at a distance to be effective in the Coulomb gauge.

    What then is that asymmetry? I will illustrate what I have in mind. Referring to the following figure, suppose that in one domain of space there are negative charges which are depicted in white and that these are set in a continuum of positive charge which is depicted by a black background. The amount of charge of the two opposite polarities is the same in any domain. It is just that one form is concentrated in a kind of particle form, whereas the other is spread thinly as a uniformly dispersed charge form. Then, in an adjacent space domain, these charge forms are reversed, the negative charge being the background continuum and the positive charge having the particle-like form.

    There will be a planar boundary between the two space domains. Now you may ask why this is important and the answer to that emerges once we consider the effects of actions which displace those charged particles from their positions of equilibrium. This is an action which stores energy, the physical basis for electric field energy as stored in the vacuum.

    We shall suppose that all those charged particles in a given region of space are displaced in unison when an electric field is present and this means that they retain their relative positions. The force we are considering is the force on each such displaced charge that arises from the electric interaction with that background continuum in the local space domain.

    We will define an area A subtended by a cone, the apex of which is at point P and which has a solid angle S. See the following figure.

    Our object is to determine the force acting on a charge at P and attributable to a planar slice of the background charge continuum distant y from P. That planar slice is parallel with the space domain boundaries of the domain region in which P is located. See the next figure, where the boundaries are those separating the local (green) domain and the adjacent (black) domains.

    Let q denote the charge density of the background continuum and assign the charge at P a unit value. Then the charge in the section denoted by the area A is qA(dy), where dy is the elemental thickness of the planar slice of the continuum as shown. The electrostatic force acting on the charge at P will be an attractive force pulling the charge towards the planar slice. By the inverse square force law it will be of value qA(dy)/x^2, where x is the distance from P to the area A, directed at an angle B to the line drawn at right angles to the planar form. Accordingly, the component force pulling the charge in the normal direction will be given by:

    (qA/x^2)cosB(dy) …… (1)

    Now, AcosB is x^2 times the solid angle S. It follows therefore that the force expression in equation (1) can be written simply as:

    qS(dy) ……..(2)

    which means that, owing to that segment of continuum charge of area A, there is an attractive force component pulling the charge at P towards the planar boundary and that force component is given by the expression (2). The total force attributable to a ‘slice’ of continuum charge is simply q(dy) times the whole of the solid angle subtended by the area of the slice, namely half of 4π, which is 2π.

    To work out the overall effect of displacing the charge at P towards the planar boundary, we simply note that we are displacing it from a position of equilibrium and so a step through a distance y will reduce the action of a slice of continuum of thickness y in front of the charge and enhance the action of a corresponding slice of continuum behind the charge. The former involves a reduction of the attractive force by 2πqy and the latter contributes a restoring force of 2πqy. Together these amount to a restoring force on the unit charge of 4πq per unit distance of displacement.
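    For those who like to check such integrations numerically, the following Python sketch, my own illustration for this Web page and not part of any published analysis, sums the on-axis field of a large but finite disk-shaped slab of continuum charge and confirms the 4πq restoring force rate just derived; the disk radius a is simply chosen large enough to approximate the infinite planar slice:

        import math

        q = 1.0            # continuum charge density, arbitrary units
        a = 1000.0         # disk radius, made very large compared with the displacement y

        def slab_pull(y1, y2, steps=20000):
            # Axial force on a unit charge at the origin from continuum lying between
            # distances y1 and y2, summed from on-axis disk fields of strength
            # 2*pi*q*(1 - y/sqrt(y^2 + a^2)) per unit slab thickness.
            h = (y2 - y1) / steps
            total = 0.0
            for i in range(steps):
                y = y1 + (i + 0.5) * h
                total += 2 * math.pi * q * (1 - y / math.sqrt(y * y + a * a)) * h
            return total

        # Displacing the charge by y removes a slab of thickness y ahead of it and adds
        # one behind, each contributing 2*pi*q*y, so the net restoring force is 4*pi*q*y.
        y = 0.01
        print(2 * slab_pull(0.0, y), 4 * math.pi * q * y)   # agree to the finite-disk correction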

    You can then work out why it is that, in cgs units as used in the early days of field theory in physics, the energy stored in unit volume of space by electrical displacement within the vacuum state is 1/8π times the square of the electric field strength. The force is subject to a linear rate and half the resulting force times the distance through which it is displaced gives the energy involved.
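    As a numerical illustration of that statement, here is a minimal Python sketch of my own devising for this Web page, assuming arbitrary values for the continuum charge density q and a lattice spacing d, with the neutrality condition e = qd^3 that is derived in the Appendix below; it confirms that the 4πq restoring force rate delivers a stored energy density of exactly E^2/8π:

        import math

        # A minimal check, in Gaussian (cgs) units, of the planar-boundary displacement model.
        # The values of q, d and E are arbitrary illustrative choices, not physical constants.
        q = 2.0                        # continuum charge density, arbitrary
        d = 0.5                        # lattice spacing, arbitrary
        e = q * d**3                   # lattice charge fixed by electrical neutrality

        E = 3.0                        # applied electric field, arbitrary
        k = 4 * math.pi * q * e        # restoring force rate on charge e, from the 4*pi*q result
        y = E * e / k                  # equilibrium displacement, where E*e = k*y
        W = 0.5 * k * y**2 / d**3      # stored energy per unit volume, one charge per cell d^3

        print(W, E**2 / (8 * math.pi))   # the two printed values agree exactly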

    DISCUSSION

    Now, of course, we need to keep all this in proper context. We know the energy stored in a unit volume of vacuum when there is an electric field of a given strength present. We know that because experiments prove it and because electrical technology builds on the basis of that knowledge with success.

    We therefore know, or should know, that there is something in space that is affected by that electric field condition so as to be able to store energy. To comply with what is observed I am saying that there are two opposite kinds of electric charge that are pulled apart just a little in storing that field energy. Now, I could say that, given that the applied electric field is uniform over the region where we are interested in energy stored, that small displacement of charge must, in effect, amount to the development of a planar slice of electric charge. In that case, it seems irrelevant to worry about the ultimate boundary conditions governing space; everything that is affected is local to the test region. Indeed, when I first developed my theory that was my way of looking at things. However, then, sometime around 1966-1967, my employer IBM suggested that I visit Professor L. H. Thomas at Columbia University in New York to see what he thought of my aether theory and its gravitational aspects. He urged me to consider and take full account of boundary effects, as these could affect my conclusions.

    Now, for my part, since I believe in the reality of the aether as containing a uniform continuum of one charge polarity populated by a lattice-like array of identical discrete charges, which, being crystal-like itself, can nucleate its structure on material crystals, I regard the planar boundary state as an implicit property of the aether. If scientists are unwilling to accept the real aether of this form then they need to find their own way of giving physical form to the field energy storage process. My theory, however, implies conditions governing the gravitational action, conditions which demand harmonious motion such as stem from that linear restoring force rate deduced above.

    In order to derive the law of gravitation from electrodynamic analysis and go on from there to derive the value of G, the constant of gravitation, neither of which is possible on the basis of Einstein’s theory, one has to accept that harmony of motion by charge displaced in unison in that non-emptiness of outer space. The seat of the gravitational action is the system that is in dynamic balance with that charge motion. This is all explained elsewhere in these Web pages by reference to my published work of academic record. However, the planar boundary problem that we are discussing is part of the story.

    Possibly Professor Thomas had Mach’s Principle in mind when he stressed the boundary factor. That principle suggests that the mass of a particle here on Earth depends upon its gravitational interaction with all matter extending to the bounds of the universe, but I do not subscribe to that belief. I do believe that the rhythm of the aether charge motion has to be preserved within each and every space domain in order for gravitation to be a universal force within that domain, but I suspect that mass on one side of a planar domain boundary will not interact in the normal gravitational sense with mass in the adjacent domain.

    For that reason I submit that one needs to study the major geological upheavals and reversals of the Earth’s magnetism that occur from time to time, because these signal an event lasting a matter of seconds when body Earth, moving at 400 km/s relative to the cosmic background, is carried through a space domain wall. We measure that motion by assessing the thermal condition of the space through which we are travelling at cosmic speed. Though that background is isotropic in its own frame, the anisotropy, as we see it, tells us how fast we are moving. The temperature measured tells us the strength of the gravitational potential of matter confined within our local space domain. You see, the energy released by the gravitational interaction between matter and those aether charges in space has nowhere to go other than into a state of thermal agitation of the aether charge, a motion superimposed upon its rhythmic oscillations. So we can relate the individual mass of each of those aether charges with its temperature and the gravitational potential at the point in space where it sits. The 2.7 K temperature background of space can be calculated by such theory, if the gravitational action of the solar system approximates that of the local gravitational potential. See my paper ‘The Determination of Absolute Gravitational Potential’, 1983d.

    Such evidence compels me to believe that there are those planar boundaries in space, even though they exist with a separation distance that can be 100 or so light years, meaning that body Earth experiences those major upheavals at intervals that can be as long as a few hundred thousand years. See the ‘Feedback’ comment elsewhere in these Web pages: Feedback Note 02.

    Now, before I leave this subject and leave you to choose whether to believe Stephen Hawking or believe what I am saying here, I will invite review of the Appendix added below. It is quoted from the published paper: 1983k. I just wish to emphasize the point that if the boundaries of space were spherical in form, then the harmonious oscillation property would not have the form needed to explain the known energy density property of electric field theory, and if the boundaries of the universe were conical, as Hawking believes, I doubt if one could ever derive a theoretical value for the constant of gravitation. There seems no point in working out the ultimate shape of the universe, if one cannot explain the existing property of gravitation here on Earth!

    APPENDIX

    If the elements of this vacuum structure are in a state of stable equilibrium and comprise electric charge, then, in order to satisfy Earnshaw’s law, they must pervade a charged electrical continuum of opposite polarity. The latter is necessary to assure stability by providing a restoring action upon displacement. Accordingly, the only feasible model for a vacuum state having structure is one for which the discrete charges e of the same polarity interact to form a lattice within a continuum of opposite charge density q. It seems logical that the charges e are of equal magnitude and that q is uniform over a local region of space.

    Then, by simple analysis, one may show that if a lattice parameter d is written to satisfy:

    e = qd^3 …. (1)

    the electrically-neutral state of the vacuum implies that each charge e takes up a space volume d^3. With a uniform electric field of intensity E applied we find that the charges e will all be displaced in unison to satisfy:

    Ee = ky ….. (2)

    where k is a constant restoring force rate and y is displacement. In effect, the whole lattice is displaced relative to the background continuum.

    The energy density stored by this displacement is:

    W = ky^2/2d^3 …… (3)

    or, from (2):

    W = (Ee)^2/2kd^3 …. (4)

    and, as this is E^2/8π, for a vacuum of unit permittivity, we find that k is given by:

    k = 4πe^2/d^3 = 4πqe ….. (5)

    from (1), thereby justifying the statement that it is constant.

    The use of this restoring force rate is fundamental to classical theory and it might seem somewhat elementary to have derived it from the electrically neutral vacuum model under discussion. However, the problem emerges upon analysis of radial displacement of a charge e within an arbitrarily bounded spherical system. At a distance R from the centre of a sphere of continuum charge the total continuum charge acting on e is 4πqR^3/3. The electric field is 4πqR/3, where R is now a vector.

    For any displacement x, as shown in the figure below, a charge e at P will be subject to a restoring force which is the expression 4πq/3 times the vector difference between two radius vectors R and R’. This is simply 4πqx/3. It follows that if e is part of a rigid lattice which is displaced as a whole by the distance x within the bounding spherical continuum, then the lattice is subject to restoring forces which are only one third of those expected from equation (5).
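    (To put a number on that contrast, the following small Python sketch, my own illustration and not part of the 1983 paper, repeats the energy-density check with the spherical-boundary restoring rate of one third the planar value; the stored energy density then comes out at three times E^2/8π, at odds with the observed formula, which is the whole point of insisting on planar boundaries.)

        import math

        # Compare the planar and spherical restoring rates, arbitrary cgs values.
        q, d = 2.0, 0.5                 # continuum density and lattice spacing, arbitrary
        e = q * d**3                    # neutrality condition, equation (1)
        E = 3.0                         # arbitrary applied field

        for label, k in (("planar", 4 * math.pi * q * e),
                         ("spherical", 4 * math.pi * q * e / 3)):
            y = E * e / k               # displacement, from equation (2)
            W = 0.5 * k * y**2 / d**3   # stored energy density, equation (3)
            print(label, W / (E**2 / (8 * math.pi)))   # planar: 1.0, spherical: 3.0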


  • LECTURE NO. 11

    LECTURE NO. 11

    THE HUBBLE CONSTANT: ITS ‘FREE ENERGY’ ROLE

    Copyright, Harold Aspden, 1998

    INTRODUCTION

    In Lecture No. 6 I declared that I would tell the story about deducing the Hubble constant in a later Lecture. This is that Lecture. It is about my discovery that there is something amiss in Maxwell’s Equations as applied to free space, something which gives us insight into why light waves emanating from distant stars lose frequency. This avoids the need to interpret that shift of frequency towards the red end of the visible spectrum as an indication of an expanding universe having, as a start point, what cosmologists have named the ‘Big Bang’.

    The story I tell is not all theoretical, as will be seen when we come to the details of Gieskieng’s Canyon Experiments. It is based on the loss of energy by propagating waves, coupled with their loss of frequency. It is based also on the role played by the aether in energy recovery, meaning our ability to extract energy from the aether locally in response to the absorption of those propagating waves by matter in their path.

    As proof of my case, apart from the experimental findings of Gieskieng, I will show how to formulate a value for the Hubble constant H. That should wake cosmologists up from their slumbers and help to put a stop to their dreams about the creation of the universe as a Big Bang phenomenon.

    To conclude this Introduction I will first present the equation connecting H with the other fundamental constants of physics that we have measured in the confinement of laboratories on body Earth, doing so in a form which can be compared with the six ‘Governing Equations’ that I listed in Lecture No. 6. Then I will quote a passage from a book by Brian W. Petley of the National Physical Laboratory in the U.K. to set the scene for the onward discourse.

    The Hubble Equation, according to my theory, is:

    H^-1 = (2mμ/me)^9 (N^3) [72π]^3 (e^2/mec^2) [6/πc]

    where c is the speed of light and e^2/mec^2 is the ‘classical radius of the electron’, a recognized fundamental physical constant having no particular physical meaning, but one tabulated with very high precision in the standard physical data tables, it being listed as 2.81794092(38)×10^-13 cm. N is an integer characteristic of the galactic domain region through which the electromagnetic wave travels, its value being normally 1843, as we have seen from what has been said in the Tutorial Notes in these Web pages and particularly in Lecture No. 6. The expression 2mμ/me is the ratio of the mass of a pair of virtual muons to the mass of the electron, it being the mass-energy of such a pair of muons that accounts for virtually all of the energy in one unit cubic cell of the aether, the lattice dimension of that cell being 72π times that classical electron radius or 108π times the Thomson radius of the electron.

    We shall be deriving the above equation as we proceed, but, from the data just given and the knowledge from Lecture No. 6 that 2mμ/me is twice 206.3329, you can calculate H^-1, the Hubble time, to find it is 4.5×10^17 seconds or about 14.3 billion years.
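    The reader who cares to verify this need only key the quoted numbers into a few lines of Python; the following sketch, mine for this Web page, uses the cgs values stated above and reproduces the result:

        import math

        mu_ratio = 2 * 206.3329          # 2m(mu)/me, twice the virtual muon/electron mass ratio
        N = 1843                         # the aether cell integer
        r_e = 2.81794092e-13             # classical electron radius e^2/mec^2, in cm
        c = 2.99792458e10                # speed of light, cm/s

        H_inv = mu_ratio**9 * N**3 * (72 * math.pi)**3 * r_e * (6 / (math.pi * c))
        print(H_inv)                     # ~4.5e17 seconds, the Hubble time
        print(H_inv / (3.156e7 * 1e9))   # ~14.3 billion years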

    What this means is that a light wave needs to travel for 1.43 million years through space devoid of matter in order to suffer a loss of frequency of 1 part in 10,000. Such are the numbers on which modern ideas concerning our universe are based.

    Quoting now from Petley’s book: The Fundamental Constants and the Frontier of Measurement, as already referenced in the Discussion section of my Keynote Address in these Web pages, one reads on pp. 41-42:

    The question of time formed the crux of Dirac’s argument. The largest time that we encounter is the age of the universe, ~10^17 s, and the smallest is about the time it takes for light to travel a distance equal to the classical electron radius, the tempon, or chronon, which for many scientists represents a natural unit of time, ~0.49×10^-23 s. Taking the ratio of these again yields about the same number, ~10^40.

    The essential point made by Dirac was that the dimensionless ratios all came to this value, not as the result of an accidental coincidence, but because they depended on the age of the universe in chronons. This therefore led to his theory for the change of these ratios with time. Thus the force ratio led to a prediction that the gravitational constant would vary with time.

    There have, of course, been a number of theories proposing time variations of certain combinations of the constants since that of Dirac. These constants are as follows:
    (1) the fine structure constant,
    (2) …… ,
    (3) …… ,
    (4) the quantities δ and ε

    δ = Hh/mec^2 ~ 10^42
    ε = Gdo/H^2 ~ 2×10^-3

    for

    do = 7×10^-28 kg m^-3

    Both δ and ε involve the Hubble constant H, which characterizes the variation of the red shift of stellar light with distance and is roughly the reciprocal of the age of the universe. The different cosmologies predict different variations of the parameters with time, and the fact that none of them has been entirely successful is an indication that the subject is still an open one.

    Item (2) is a constant of Dirac’s theory that we shall not address, it relating to Fermi’s theory of beta decay. Item (3), the ratio of Coulomb force to gravitational force, will be addressed separately in the theory of record in these webpages.

    Note that δ is dimensionless in that H has the dimension of the reciprocal of time, whereas h/mec is distance, it being the Compton wavelength of the electron (a recognized fundamental constant of 2.42631058(22)×10^-10 cm) and so c, which is a speed, renders the expression dimensionless. The quantity do is a density, it being representative of the order of the average mass density of matter in the universe.

    Whatever you, the reader, make of this interest which Nobel Laureate Paul Dirac took in the Large Number Hypothesis, I hope you will come to see it for what it is, merely a play on numbers. It is not true physics to infer that because two ratios, which man can contrive to relate to physical constants, are both very large and of the same order, there has to be a physical relationship based on those ratios being the same. That is mere wishful thinking and it should not be allowed to substitute for the methodical explanation of each of those two ratios in terms of a genuine physical process. It is most unlikely that the ratios would prove to be equal in the real world as governed by Nature. Our task ahead is to show what determines the Hubble constant and so deduce the true physical value for that quantity δ. We have already, in Lecture No. 6, given the physically-based formula for G in terms of the charge/mass ratio of the electron and that yields one of Dirac’s ‘Large Numbers’. We find, of course, that we cannot associate that number with the one which will be derived below from our study of the Hubble situation.

    THE STEADY-STATE FREE ELECTRON POPULATION OF FREE SPACE

    This was the title of a paper of mine that was published in the English language periodical Lettere al Nuovo Cimento of the Italian Institute of Physics, a periodical noted for its rapid publication of scientific papers. It appeared at pp. 252-256 in vol. 41 and was dated October, 1984.

    My object in writing these Web pages is to (a) present my aether theory to the world, (b) arouse interest in exploring the scope for extracting useful energy from the aether and (c) make it very clear that the main body of those who see themselves as true scientists has chosen to ignore the truths evident from the research findings I have reported. I will in this section be quoting extensively from that 1984 paper, interjecting a few comments in italics.

    Note that the argument is developed in two parts. The first part concerns how it is that in what we see as empty space there is a uniform ongoing activity that creates the transient presence of a kind of pseudo-matter. The aether attempts to create protons but does not succeed because there is no energy surplus to the equilibrium requirements of the aether. Yet those attempts intrude into the pathways of propagating light waves and cause the aether to absorb energy from those waves. The second part explains why the resulting attenuation of those waves with distance, attenuation over and above the normal inverse of distance degradation, will involve frequency attenuation as well, the latter having been misinterpreted as a Doppler effect attributable to a cosmic expansion seated in a Big Bang.

    In referring to that 1984 paper, I will here be incorporating a few corrections, entering those in italic text in the quoted extracts. My notes, as such, will be put in brackets, as will the reference data amended to refer to the Bibliography section of these Web pages. The only other preliminary point I wish to make concerns the ‘Thomson scattering cross-section’.

    This formula:

    A = [8π/3](e^2/mec^2)^2

    gives the area deemed to obstruct an electromagnetic wave owing to the presence of a single electron of mass me and charge e. It is a formula derived by J. J. Thomson and is based on the assumption that an electromagnetic wave will promote oscillatory electron acceleration f so as to dissipate energy at a rate given by the Larmor formula 2e^2f^2/3c^3. The force of the electric field of the wave acting on the electron is equated to mef to get this result and it is assumed that the energy radiated as electric field energy is doubled to account for magnetic field energy accompanying the radiated wave.
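    As a numerical aside, and using only the tabulated classical electron radius, the Thomson formula is easily evaluated; this short Python sketch of mine gives the roughly 0.665 barn figure quoted later in this Lecture, together with the quarter value which I shall argue is the true absorbing cross-section:

        import math

        r_e = 2.81794092e-15             # classical electron radius, m
        A = (8 * math.pi / 3) * r_e**2   # Thomson scattering cross-section
        print(A)                         # ~6.65e-29 m^2, about 0.665 barn
        print(A / 4)                     # the quarter value argued for below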

    Now, I wish to make it abundantly clear that this formula for energy scattering by a single electron is incorrect, because the acceleration of an electron does not involve energy radiation. Indeed, the inertial response of an electron in its efforts to make sure that its intrinsic electrical energy is not dissipated is just such as will endow it with mass according to the formula E = Mc^2. That is a vital feature of my theory and one which makes my method independent of the Einstein assumptions. What Larmor had overlooked was that, so far as the single electron reaction is concerned, the accelerating electric field that produces the acceleration interacts with the electron’s own field and the cross-products of that field interaction cancel any energy radiation from the body of the electron. This does not govern the collective interaction of electrons and so the Larmor radiation formula does have physical meaning and some practical use. It can be used to explain radio propagation from an antenna, where billions of electrons all accelerate in harmony to set up the electromagnetic waves. As to the single electron, you can see that it cannot radiate energy, as otherwise every atom would collapse with its electrons coming to rest in the nucleus. Avoiding that scenario led to Bohr quantizing electron motion in the atom and declaring, quite arbitrarily, that the atom could only radiate if an electron, for some reason, jumped from one of its quantized orbits to another. The point I make about deriving E = Mc^2 was the subject of a paper of mine in the ‘International Journal of Theoretical Physics’. [1976b]

    However, in the free space situation we shall be considering, even that collective action does not result in any energy absorption, because the electrons that are transiently present are so far separated that they do not share the same accelerated motion, meaning one that is in-phase with that of adjacent electrons.

    So we need to look again at the action of the electromagnetic wave upon the electron. The wave has two energy components, the electric component and the dynamic component, the latter being regarded as the ‘magnetic’ feature of the wave. Only the electric component accounts for the force accelerating the electron and all of the energy absorbed goes into kinetic energy. This energy, which is half that we might otherwise have presumed to be captured according to the Thomson scattering cross-section, acts to cause that electron to become a wave-transmitting antenna itself and it produces a wave that is 90 degrees out of phase with the intercepted wave. This sets up a secondary wave. However, as this secondary wave propagates it loses half of its energy as it settles to a condition in which the in-phase electric and magnetic fields adapt to a more natural mode of oscillation in which they are in phase-quadrature. This latter situation corresponds to energy oscillations in what is a standing wave system, as between the electric and dynamic action of the aether charge conveying the wave. The waves do not convey energy through space; they merely ripple the aether energy which exists in a sea of uniformity and equilibrium spread throughout all space. However, this secondary wave process involves a loss of half of that energy absorbed by the electron. It is shed to the quantum underworld as entropy pending its eventual deployment in those ongoing efforts to create matter in the form of protons.

    Overall, this means that we can use the formula for the scattering cross-section of the electron, provided we recognize that the relevant absorbing cross-section is one quarter of that cross-section as formulated by Thomson.

    Quoting now from the paper:

    In an earlier letter [H. Aspden and D. M. Eagles: Phys. Lett. A, v. 41, 423 (1972)] it was suggested that space may have properties associated with a characteristic cubic cell of lattice dimension d = 72πe^2/mec^2, a characteristic frequency f = mec^2/h and a characteristic threshold energy quantum which analysis gave as the combined energy of 1843 electrons. This led to a value of α^-1 of:

    108π(8/1843)^(1/6) = 137.035915 …….. (1)

    Above, e is the electron charge, me the electron mass, c the speed of light in vacuo and h is Planck’s constant. The symbol α is the fine structure constant.
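    (As a quick numerical check of expression (1), not part of the quoted paper, two lines of Python suffice:)

        import math
        print(108 * math.pi * (8 / 1843)**(1 / 6))   # 137.0359..., matching the value in (1)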

    The theory also indicated that space may well be populated by virtual energy quanta, equivalent to having a muon pair in each cell. In 1975 this model was applied to the exact derivation of the proton/electron mass ratio ([1975a] in the Bibliography of these Web pages), the dual muon energy constituting the nucleus on which the proton form was synthesized. Recently, by regarding these muon constituents as point charges migrating at random at the frequency fo (the Compton electron frequency), the model has found further application in explaining and evaluating the muon lifetime (see my book ‘Physics Unified’, pp. 145-146), the neutron lifetime [1981b] and the pion lifetime [1982d]. Furthermore, the critical energy threshold set by the 1843mec^2 quantum was crucial to the neutron lifetime determination and is of significance, on stability criteria, to the creation of the proton.

    Physically, this quantum arises because each cubic cell has a lattice charge element q set in a uniform background continuum of opposite charge density and the condition under which q can change in form without displacing the continuum is that it absorbs energy to create N electrons and positrons occupying the same volume. Thus:

    q = N(e+, e-) ………….. (2)

    The argument is that q has to be at zero or near-zero potential in relation to all other q charge and the continuum charge. This fixes the cubic structure and the position of q in relation to the centre of each cell. The dynamics of the space model are linked to the properties of the electron and the physical size of the electron charge in relation to that of q. The analysis shows [1972a] that a true zero potential condition would correspond to a non-integral value of N lying between 1844 and 1845. Since the potential cannot be negative for a true vacuum state, N has to be lower than this and it must be odd to cater for electron-positron pair creation and q converting to an electron or positron. Thus N is 1843.

    (I have quoted this from that 1984 paper to show that enough concerning the detail of my aether theory is of record in university libraries for scientific academia to have taken my research seriously and so realize that this account of the Hubble constant kills the hypothetical notion of an expanding universe!)

    The pair of virtual muons in each cell were identified as such because they assure energy equilibrium by giving the cell the same energy density as the q elements. Analysis indicated that their mass was very slightly less than the mass of the real muon. The same analysis (‘Physics Unified’, pp. 103 and 108), applying the Thomson formula relating charge radius and energy, allowed the volume of q to be determined as (1/N)^(1/2)(me/mmu)d^3, where mmu is the mass of the virtual muon.
    The advance now to be presented in this paper is based on the simple realization that in free space the transition indicated by equation (2) will occur naturally but with a very low probability. It takes the energy of nine virtual muons to exceed the energy threshold set by Nmec^2, with N = 1843. The virtual muon mass is a little in excess of 206me. Therefore, we look to the event when four muon pairs plus one muon of charge opposite to q all combine within the volume of q in the same cycle of migration. The muon pairs have a random freedom of movement and are not confined to a particular cell. (See comment below) The chance of one muon entering the q volume is (1/N)^(1/3)(me/2mmu). Therefore, the chance of nine muons entering the same cell volume at the same time is this factor raised to the power 9.

    (The way I visualize this process is that the entry of a muon, or a muon pair, into the body of the q charge traps the energy for one period of the aether rhythm. This allows the cell occupied by q to be replenished by energy inflow from surrounding aether in readiness for the next muon strike. If this occurs in the following cycle, then the energy of the muons remains trapped, otherwise that first muon (or muon pair) will decay during that cycle and displace any energy that has entered the cell in its absence inside q. This scenario therefore permits a chain of events building up the energy inside q, but with a diminished probability factor for each successive step in the chain. I am now saying that, by ‘simultaneous’ as used in the paper, I mean the sequence of events that occur within a charge q without rupturing the timed energy chain.)

    The logic of this supposes that each muon arrives independently and simultaneously and that the chance of four negative muons appearing is the factor raised to the power 4, whereas the chance of five positive muons appearing is the factor raised to the power 5, the total chance being the product of the two. We find that the overall effect is that at any time the chance of a q element converting according to eq. (2) is (1/N)^3(me/2mμ)^9.

    (By this it is meant that at any instant the chance of the q charge in each cell of space being excited to the threshold state is that specified by the above expression. With N=1843 this means that momentarily the q charge has become 921 electron-positron pairs plus a residual electron, assuming that q is the same as the electron charge.)

    The electron-positron pairs will not obstruct the passage of electromagnetic waves because they have a mutual inertial balance and are collectively neutral in their response to electric fields. This leaves the electrons as presenting a scattering cross-section to radiation.

    (Here I introduced the ‘Thomson scattering cross-section’ as providing the obstruction to onward propagation of an electromagnetic wave. However, I ought really to have termed this the ‘absorbing cross-section’, for the reasons already stated above where I indicated I would be using a factor 4 in what follows in order to correct an error in that 1984 paper.)

    The formula given in the introductory paragraph can be used to evaluate d as 6.37×10^-11 cm, meaning that there are 3.87×10^36 cells in a cubic metre of space. With N = 1843 and mmu/me = 207 (or, to be precise, 206.3329, an adjustment now incorporated in the onward text) it is evident that one cell in 2.17×10^33 is subject to the transition just discussed. There are, therefore, approximately 1,780 excited electron cells in each cubic metre of free space.

    The Thomson scattering cross-section of the electron is well established as 0.666 barns or 6.66×10^-29 m^2 and, accordingly, our theory tells us that the vacuum should present a cross-section of 1780 times 0.666 barns or 1.185×10^-25 m^2 per cubic metre. Here, however, we must divide by 4 for the reasons already stated. This reduces that cross-section to 2.96×10^-26 m^2 per cubic metre. On average, therefore, a photon would have to travel at the speed of light 3×10^8 m/s for 1.125×10^17 seconds before being wholly absorbed.
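    (For the reader who cares to check the arithmetic of the last two paragraphs, here is a short Python sketch of mine, in SI units, chaining the quoted figures together; the factor of 4 applied at the end anticipates the frequency result derived below.)

        import math

        r_e = 2.81794092e-15                    # classical electron radius, m
        d = 72 * math.pi * r_e                  # cell dimension, ~6.37e-13 m (6.37e-11 cm)
        cells_per_m3 = 1 / d**3                 # ~3.87e36 cells per cubic metre
        chance = (1 / 1843)**3 * (1 / (2 * 206.3329))**9   # ~4.6e-34, one cell in ~2.17e33
        excited = cells_per_m3 * chance         # ~1780 excited electrons per cubic metre
        A = (8 * math.pi / 3) * r_e**2          # Thomson cross-section, ~6.66e-29 m^2
        sigma = excited * A / 4                 # absorbing cross-section, ~2.96e-26 per metre
        t_energy = 1 / (sigma * 2.99792458e8)   # energy decay time, ~1.125e17 s
        print(excited, t_energy, 4 * t_energy)  # 4 x t_energy is the ~4.5e17 s Hubble time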

    It is noted that this period really characterizes a rate of exponential decay, but it does mean that we are here contemplating a period measured in billions of years. We are now ready to move on to the task of explaining why the frequency of waves reduces in their passage through free space. First, note that the paper contained a reference to ‘missing matter’.

    The mass density of the electron population causing this obstruction of radiation is as low as 1.6×10^-27 kg/m^3, which is curiously of the order of the mean mass density seen in the galaxies and attributed to the so-called missing mass in cosmological theory.

    There is purpose in examining whether the scattering process has some bearing upon the cosmological red shift. Universal expansion by which the red shift becomes a cosmological Doppler effect is the accepted hypothesis. The alternative provided by the ‘tired light’ hypothesis, which requires the degeneration of frequency in transit, is discounted by Misner, Thorne and Wheeler (‘Gravitation’ (Freeman, San Francisco), p. 775), who quote Zel’dovich (Sov. Phys. Uspekhi, v. 6, p. 475; 1964). He stressed that the statistical picture of photon interception by particles in the interstellar space would require some photons to lose more energy than others, resulting in a spectral line broadening that is not observed. Yet space is so tenuous that one may well question how one can be sure that a statistical interception process applies when the particles involved are about 10 cm apart.

    (Here I would have liked, when I first wrote the paper, to have expressed my view that photons really do not convey energy at the speed of light. They are events which occur in space when waves are generated or absorbed. The wave is the only feature we need consider. Neither photons nor waves transport energy across interstellar space at the speed of light. It was only the belief that there is no aether that led physicists into the syndrome which drove them down that blind alley where all they could ‘see’ as explaining a cosmological red shift was the so-called ‘Big Bang’.)

    There is something very special about the true vacuum that is never mentioned in this context. It has the ability to transmit waves without frequency dispersion, the very property which Zel’dovich sees as missing when matter is present. The author (H. Aspden: ‘Wireless World’, v. 88, p. 37; 1982) has recently discussed this zero dispersion vacuum property and argues that space itself must adapt to the local wave disturbance so as always to be in tune locally with the signal in transit. Furthermore, the frequency property must somehow be codified at each point in space-time without regard to whatever happens at adjacent points and without involving the propagation speed c.

    The author has shown that this state of affairs applies if one accepts that the electric field vector E is a composite of two electric field vector components E1 and E2 having separate physical significance. This allows us to write two equations:

    E = E1 + E2 ……………… (3)
    dE/dt = (E1 - E2)F(E1/E2) ………….. (4)

    where t is time and F is a function of the ratio of E1 and E2. The rate of change of the amplitude of an electromagnetic wave can be codified in this way in terms of the strengths of two electric field components at the point in question. It need not be determined by the speed at which the wave progresses to adjacent points.

    The function F is governed by the condition that there is zero frequency dispersion, at least up to the threshold frequency at which electrons and positrons are created. One can infer that at this limit E2 is zero, whereas at frequencies in the radio and optical spectrum E1 and E2 are approximately equal, though their actual ratio is a crucial indicator of frequency.

    With such a feature electromagnetic theory admits the possibility that the presence of matter could attenuate E1 and E2 unequally, that is, not in linear proportion. In this case the ratio E1/E2 can change and the frequency might vary in transit. In the quantum situation, where collective action of intercepting matter co-operates in a photon reaction, the change is substantial and the frequency is reduced in a quantum step, but in a very rarified interstellar medium, the frequency will reduce progressively as each element of matter is intercepted to scatter energy.

    The analysis involves analogy with a simple harmonic oscillator for which the linear restoring force rate is a variable giving a resonant frequency fr, where:

    (fr)^2 = kfo^2 ………… (5)

    The variable k is equal to E/E1, this being 1 - E2/E1. It gives E2 = 0 when fr is fo, the Compton electron frequency.

    For a sinusoidal planar wave the amplitude of dE/dt is 2πfrE or:

    2πfok^(1/2)(E1-E2),

    which is of the form given by equation (4) because k is a function of E1/E2. Also, E1 and E2, though approximately equal in magnitude over much of the frequency spectrum, are associated with relatively very different charge densities and inversely so related to very different physical displacements. This causes one of them to be the seat of almost all wave energy loss so that E1 is effectively constant in a planar wave and E2 is the main variable.

    The energy density W of such a wave is proportional to E^2 or (kE1)^2, which means that (1/W)dW can be written as (2/k)dk. Also, from (5), with fo constant, we find that (2/fr)dfr is (1/k)dk. Taken together, these relationships allow us to write:

    (1/fr)dfr/dx = (1/4W)dW/dx ………. (6)

    where x is the distance travelled by the wave.
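    (The step from (5) to (6) can be verified symbolically; the sketch below, my own note and not part of the paper, uses the Python sympy library under the stated assumption that E1 is constant, so that only k varies along the path.)

        import sympy as sp

        k, E1, fo = sp.symbols('k E1 f_o', positive=True)
        W = (k * E1)**2                # wave energy density, proportional to (kE1)^2
        fr = sp.sqrt(k) * fo           # resonant frequency, from (5): fr^2 = k*fo^2

        lhs = sp.diff(fr, k) / fr      # fractional change of frequency per unit change of k
        rhs = sp.diff(W, k) / (4 * W)  # one quarter of the fractional change of energy
        print(sp.simplify(lhs - rhs))  # prints 0: frequency falls at one quarter the energy rate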

    We arrive, therefore, at the remarkable proposition that the dual component electric displacement, needed to explain the zero dispersion property of the vacuum, gives it the property of attenuating the frequency of waves in transit at one quarter of the rate at which the wave energy is absorbed, subject to overriding quantum effects associated with matter when present.

    In the true vacuum where only the transient electron induction process discussed in this paper causes any attenuation, there should be a progressive reduction of frequency with a time constant of 4.5×10^17 seconds, that is four times the period calculated for energy attenuation. This is 14.3 billion years, a quantity comparable with the 12 billion years estimated as the average age of the galaxies as judged from their spectral character (Narlikar: The Structure of the Universe, Oxford University Press, 1977, p. 228).

    The author sees the contribution in this paper as a major advance in a theory of the structured vacuum which has been evolving for many years. It is extremely gratifying that a theory which has proved to be so fruitful in determining fundamental constants with high precision should so easily lead to the theoretical derivation of the most relevant constant in cosmology. The interpretation of Hubble’s constant as a phenomenon linked to dual displacement in the field medium should now encourage experimental enquiry into the detection of this property of radiation, one avenue being the study of anomalous antenna properties reported by Gieskieng (D.H. Gieskieng: The Mines Magazine (January, 1981), p. 29).


    As already noted, the above account concerning the theoretical evaluation of the Hubble constant was published in 1984 in the Italian Institute of Physics periodical, Lettere al Nuovo Cimento, v. 41, pp. 252-256. That is a publication in the English language which offered rapid publication of scientific contributions which could survive peer review by referee scrutiny. It was shortly after that that the experimental research findings of Dave Gieskieng came to my attention. They confirmed to me that the aether does have the self-tuning, dual displacement property, which sustains natural oscillations at the signal frequency of electromagnetic waves. Nature’s ongoing attempts to convert aether energy into protons put that something, a kind of ghost-like, quasi-state of matter, into space and, as explained above, it can weaken those waves and progressively reduce their frequency, according to distance travelled. That led to efforts to publish what Gieskieng had discovered. The story on that is now told in these Web pages as the Appendix to the previous Lecture No. 10. I deferred entering this Lecture No. 11 until I was able to record that account immediately following Lecture No. 10.

    Harold Aspden
    February 25, 1998.


  • APPENDIX TO LECTURE NO. 10

    APPENDIX TO LECTURE NO. 10

    AN ANTENNA WITH ANOMALOUS RADIATION PROPERTIES

    Copyright Harold Aspden, 1987, 1998

    This paper dates from the 1986 period when it was offered for publication to the journal Radio Science. Its original form was much greater in length and contained a more extensive account, mainly in the experimental section. The referee opinion was that it was important and warranted publication, subject to it being contracted in length. This resulted in the paper having the form presented here. However, it was rejected upon submission by the Editor, without further comment. The paper is exactly as it stood at that time, giving the following addresses of the authors then applicable, though Dave Gieskieng still resides at the address stated. An Abstract of the paper was published in the No. 1 issue of volume 10 of Speculations in Science and Technology (1987), coupled with an offer to supply copies of the paper to readers upon request. Only one or two such requests were received. In view of this Web page presentation that offer is now withdrawn. The Abstract presented in the above periodical is more informative than that heading the paper below. Its full text is to be found elsewhere in these Web pages in the Bibliographic section under reference: [1987n]


    D. H. GIESKIENG, WOFK,
    9653 Rensselaer Drive, Arvada, Colorado 80004
    and

    H. ASPDEN, FIEE,
    Department of Electrical Engineering,
    University of Southampton, Southampton SO9 5NH, England.



    Antennas designed to radiate electric and magnetic fields in quadrature time-phase are found to have anomalous radiation properties relative to the in-phase propagation properties of the conventional dipole. It is shown that there is a marked advantage in wave survival efficiency over the dipole, increasingly evident beyond a range of one mile. This is attributed to the excitation of a natural wave propagation mode by the new antenna, rather than the dipole’s forced wave propagation mode, and to the degeneration of the latter over the short range into a natural wave with some energy dissipation.

    INTRODUCTION

    This paper is a joint contribution by one author (D.H.G.) who has performed the extensive experimental investigations reported and another author (H.A.) who seeks to show that the experimental findings have important fundamental significance to electromagnetic theory. From a practical viewpoint, the discovery of what will here be termed the Gieskieng antenna is significant because it appears that by accepting a little degradation of signal strength over an initial short range of transmission there is a pay-off for longer range transmission which shows that the antenna is more efficient than conventional dipole antennas. This most unusual and somewhat anomalous propagation property is difficult to explain and appears not to be a consequence of ground or atmospheric reflection. It has, therefore, been examined in the context of hitherto unsuspected properties of the propagating medium.

    In summary, the Gieskieng antenna differs from conventional antennas in that it is expressly designed so that the electric field energy and the magnetic field energy are not forced into the field medium in time-phase with each other, as applies for a conventional half-wave dipole. Instead, the antenna is so structured that the electric and magnetic fields are set up in quadrature phase. For the same excitation this appears to produce a signal of half strength initially, but it has, it seems, the property of being less subject to dissipation, attenuating more in accord with theoretical prediction, whereas conventionally produced signals attenuate more rapidly.

    It will be argued that this phenomenon is attributable to the field medium having a natural propagation mode which is directly excited by the Gieskieng antenna, whereas the excitation by the dipole antenna develops a forced propagation mode with which we are familiar, the forced propagation degenerating into natural mode propagation with energy dissipation over an initial range of transmission.

    Gieskieng [1] disclosed the features of his new antenna in a short note published in The Mines Magazine in 1981 and interpreted its unusual properties as due to a separation of the truly electromagnetic radiation from that associated with the electric field. Such argument was tentative, as may be the theoretical argument now offered below. The essential point is the appreciation that there is an anomaly involved in the antenna radiation properties and that an answer must be found in the interests of communication technology. The experimental findings reported here are, therefore, the more important element of this paper. However, since measurements of the kind reported are notoriously difficult, and it is therefore easy to doubt their validity, some elaboration of the theoretical background relating to wave propagation seems in order. In this way it is hoped that the reader may see the results in context and favour further research into what does appear to be a quite fundamental avenue for development.

    MAXWELL’S EQUATIONS

    It is generally accepted that a full and sufficient basis for our understanding of electromagnetic wave propagation is contained within the framework of Maxwell’s equations and that Maxwell’s equations are on a very firm foundation. Within the field of radio science it is not usual to become concerned with the quantum properties which govern energy transfer processes, a duality between wave and photon actions being tolerated as something we have to live with. However, we must be ever conscious that there are aspects of the field medium and its propagation properties which are not embraced by present knowledge. For example, we do not understand why the field medium, when excited by waves at the frequency we associate with the Compton wavelength 2.426×10⁻¹² m, can suddenly absorb the wave energy and create electrons and positrons.

    Maxwell’s equations are silent on the subject of such threshold frequency conditions, but it is clear that something endows the vacuum medium with a resonant property at the Compton electron frequency 1.24×10²⁰ Hz.
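
    That threshold figure follows directly from the Compton wavelength; a one-line check (my own addition) using the standard value of the speed of light:

        # Frequency corresponding to the Compton wavelength of the electron.
        c = 2.998e8              # speed of light, m/s
        wavelength = 2.426e-12   # Compton wavelength, m
        print(c / wavelength)    # about 1.24e20 Hz, as quoted above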

    Historically, Maxwell’s equations had a physical basis. They relied upon the so-called displacement current and belief in the ether as a tangible medium filling space. This was before Compton’s discovery of the wave interaction with the electron. The measured isotropy of light speed led to the ether going out of fashion. Displacement currents were retained only within the mathematical framework of Maxwell’s equations, which, however, gave no physical picture from which to assess threshold conditions. The ether has been revived by quantum techniques. The energy fluctuations known to exist in the vacuum and the quantum interactions which form the basis of quantum-electrodynamics and electron-positron pair creation and annihilation have made the ether respectable again, but in a new adaptable form. Graham and Lahoz [2] have gone further in demonstrating that the vacuum can be the seat of electrically-induced inertial reactions and have affirmed from their experiments that Maxwell’s displacement currents are real properties. Maxwell’s equations warrant rescrutiny in these circumstances.

    There are, of historic record, proposals for generalizing Maxwell’s equations in order to accommodate the motion of the ether relative to an observer. Hertz [3] postulated the need for total time derivative formulations, a theme followed recently by Tombe [4], who suggests that the partial derivative form of Maxwell’s equations prevents them from being Galilean invariant and proposes corrections modifying the equations. Tombe refers to independent proposals along the same lines by Phipps [5] and Kosowski [6]. Also of record, in connection with efforts to adjust the equations to conform with speed anisotropy observations, are the modifications proposed by Trimmer and Baierlein [7], which retain the partial derivative form but introduce small anisotropy constants connected with a preferred direction in space.

    Such modifications of fundamental equations which have hitherto proved quite well founded must be seen as speculative, pending experimental evidence stronger than that now available. Here we do have some experimental evidence, but it strikes at something more basic than the equations themselves, namely the energy transfer issue, which is not really embraced by the field equations as formulated.

    The points at issue, theoretically, are the energy transfer process, the threshold frequency condition and a rather unusual question concerning the nature of a magnetic field in a wave propagating through the free vacuum. If the field arises from electric displacement currents, there is the question of what is actually displaced and relative to what it is displaced. We avoid these questions, and perhaps miss technological opportunities, if we look only at Maxwell’s equations.

    Considering the threshold frequency condition and the prospect, from the above-mentioned experiments by Graham and Lahoz, that a mass property must be associated with the vacuum medium, Aspden [8] has shown that the displacement is a relative displacement. There are two constituents in the field medium which move in opposition when an electromagnetic wave is in transit in a natural propagation mode. It is only by this means that the medium can have an inertial property which is hidden from us under normal circumstances and does not cause frequency dispersion in long range wave propagation. The same principles have recently been shown to give a basis for calculating the small but progressive reduction of frequency associated with the red shift of modern cosmology. The Hubble constant was found to be in full accord with the observed value, without recourse to the expanding universe philosophy [9]. This endorses the proposition that we should look to the foundations of Maxwell’s equations for new properties affecting electromagnetic wave transfer.

    The energy transfer question is crucial, and here we find that there is, of record, a quite challenging account in the work of Professor G. H. Livens, Fellow of Jesus College, Cambridge, England. In his book ‘The Theory of Electricity’, 2nd ed., published by Cambridge University Press in 1926, under the heading ‘On the flux of energy in radiation fields’, he wrote:

    ‘The usual procedure is to base the whole of the discussion on Poynting’s form of the theory, which appears to provide the simplest view of the phenomena, and to ignore the possibility of alternatives. We must not however forget that our viewpoint may be coloured by a long use of the particular form of the theory as the sole possibility, so that its apparent suitability may be misleading. It is therefore essential that we bear in mind that Poynting’s theory is not the only one which is consistent with the rest of the electromagnetic scheme.’

    Livens then demonstrated that all the empirical evidence was also consistent with the field oscillations being established by energy stored as kinetic energy of the field in the proximity of the antenna and sustained in propagation without energy transfer at the wave velocity. When waves are intercepted, the same energy is absorbed as one calculates from conventional theory based on Poynting’s assumptions. In a sense, therefore, we have here, in 1926, proposals which could well be compatible with the waves acting as catalysts in promoting quantum energy transfer from a symmetrical energy-primed background, a theme later developed in quantum physics. Livens’ work is extensively discussed in a 1972 book by Aspden [10].

    FORCED AND NATURAL WAVE PROPAGATION

    Given the above basis for taking a more open-minded approach to the wave propagation problem, we now address the mechanism which appears to underlie the experimental work to be reported.

    When a standard half-wave dipole is excited, the electric field oscillation is set up as shown in Fig. 1 and propagates at the speed c in the direction shown by the arrow. The field medium has a response time of the order of 10⁻²⁰ s and will follow these field oscillations with negligible dispersion. The related dipole current sets up a magnetic field oscillation in time phase with the electric field but in space quadrature. Energy is, therefore, forced into the waves, shared equally between the electric and magnetic fields, and must travel at speed c. Maxwell’s equations assure us that the electric and magnetic waves are mutually sustained. In energy terms, we expect that energy from the electric field transfers forward to the magnetic field and vice versa.

    Fig. 1

    The above is conventional. Now we add our constraints. First, if the field medium has a natural threshold frequency and so a related inertial property, then such a wave must eventually suffer frequency dispersion. Yet, in the true vacuum that we associate with outer space, Warner and Nather [11] have found that the group velocity of light is constant to better than 5 parts in 10¹⁷. We must, therefore, at least have a suspicion that the accepted form of wave oscillation shown in Fig. 1 is not of the type which really does penetrate the vast distances of interstellar space. In theory, if we admit that a real vacuum medium is involved and are looking for technological consequences, we are forced to the view that such a wave must be degenerate.

    Now we ask if a wave can be produced without forcing the magnetic field to be in phase with the electric field, a condition that will be a deliberate design constraint of the antenna to be described. Note here that a magnetic field requires charge to move relative to the electromagnetic reference frame. According to modern relativistic principles, the latter is the observer’s reference frame, but for waves in outer space this requirement stretches the imagination. On conventional old fashioned theory, this frame is set by the ether itself. So, either way, one wonders how, in true vacuum, there can be such a thing as a magnetic field. The thermodynamic characteristics of a magnetic field suggest reaction effects (see Aspden [12]) within a primary medium. Therefore, in the absence of matter, the basic symmetry of the primary vacuum medium to which it owes its transparency must be broken to create field substance sustaining the magnetic energy condition. This is a forced and unnatural state ultimately incurring dissipation of energy, but one which can no doubt exist, in view of the conventional type of wave depicted in Fig. 1.

    In any event, there is good reason for accepting that the magnetic property is a special state of the field medium in which the kinetic energy of charge in a motion reacting to the inducing action appears to constitute what is, in a thermodynamic sense, the magnetic field energy.

    The Gieskieng antenna to be described produces the electric field wave with a 90° time-phase advance compared with the magnetic field wave, because, in relation to the direction of wave propagation, the electric dipoles are set one quarter wavelength ahead of the section producing the magnetic effect. This wave system is shown in Fig. 2.

    Fig. 2

    The question at issue, however, is whether this is done in breach of Maxwell’s equations, thereby implying their need for modification to more general form, or whether the physical processes truly involve a reacting magnetic field in the conventional sense. If the field medium had a natural oscillation frequency equal to that at which the antenna is excited, then the energy deployment could involve electric displacement with energy transferring at each point in space between the electric field and the kinetic energy of the displacement rate. This is the condition contemplated by Livens, as already discussed. Electric energy need not then be fed into the reacting magnetic field system and the oscillation of the electric field is sustained by its own displacement motion. In fact, there need not be a true magnetic field in this case. The electric field does all the work, a condition in evidence, incidentally, in optical phenomena.

    The magnetic field H, shown in Fig. 2, may, therefore, be a notional field signifying dynamic energy in the field medium. On such an interpretation, Maxwell’s field equations change from the form:

    (1/c) ∂E/∂t = curl H   (1)
    -(1/c) ∂H/∂t = curl E   (2)

    to:

    -(j/c) ∂E/∂t = curl H   (3)
    -(j/c) ∂H/∂t = curl E   (4)

    where j is the familiar operator signifying a phase advance of 90° and the electric field strength E and magnetic field strength H vary sinusoidally with time t.

    The proposition is that it is possible for the field medium to be set in a natural propagation mode which conforms with equations (3) and (4), so far as the analogy with Maxwell’s equations is concerned. These new equations are symmetrical, a favourable contrast with the conventional equations (1) and (2), if they are to represent conditions in the vacuum medium not governed by the forced constraints of action involving matter.
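
    As a minimal symbolic check (my own sketch, not part of the original paper), one can verify that a plane wave in which E and H have equal amplitude and are in phase quadrature satisfies equations (3) and (4) exactly when the wave travels at speed c. The phasor convention exp(j(kz − ωt)) is assumed:

        # Symbolic check that a quadrature plane wave satisfies equations
        # (3) and (4) above, assuming the phasor convention exp(j(kz - wt))
        # with w = c*k and equal field amplitudes.
        import sympy as sp

        z, t = sp.symbols('z t', real=True)
        c, k, E0 = sp.symbols('c k E0', positive=True)
        j = sp.I                             # 90-degree phase-advance operator

        E = E0 * sp.exp(j * (k*z - c*k*t))   # E_x, transverse electric field
        H = -j * E                           # H_y, in quadrature, equal amplitude

        # For fields varying only with z: (curl H)_x = -dH/dz, (curl E)_y = dE/dz
        eq3 = -(j/c) * sp.diff(E, t) + sp.diff(H, z)   # equation (3), LHS - RHS
        eq4 = -(j/c) * sp.diff(H, t) - sp.diff(E, z)   # equation (4), LHS - RHS
        print(sp.simplify(eq3), sp.simplify(eq4))      # prints: 0 0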

    The physical process involved in generating the Fig. 2 wave is that as energy is supplied into the electric field we do not drive energy into the magnetic field at the same rate but rather leave it to the electric field displacement to do all the work. The inductive effect of changing magnetic field does not then develop the electromotive forces which oppose and so contain the displacement. The result is that, for the same electric field, the Fig. 2 wave involves a much larger physical displacement of the field medium than does the Fig. 1 wave. The kinetic energy represented by the magnetic field is then primarily energy of the motion of the electric displacement, a much slower process than the one involved in the Fig. 1 wave. Indeed, there is a counter-displacement which offsets the electric field condition and adapts to the frequency of the signal in transit so as always to be in resonance at a level set by the energy density of the wave. This resonance is not excited in the forced wave mode of Fig. 1.

    This process is justified elsewhere [8, 9], but the outline above will serve to show that we may expect unusual properties to be displayed by an antenna excited in the special mode just discussed. In particular, the Fig. 2 wave propagation mode should be subject to no dispersion and so should have better long range characteristics.

    Two other points should be mentioned at this stage. The first is that a wave generated in the Fig. 1 mode will tend to degenerate into the Fig. 2 mode as the electric field oscillations progressively need to transfer energy into the kinetic form because the magnetic energy is slowly dissipated and cannot sustain the back EMFs which absorb energy from the electric field. Secondly, it is of interest to note that at zero frequency the counter-displacement state mentioned above is wholly balanced against any forward displacement. This is the state for which the vacuum medium has adapted to the local frame of the notional Earth-bound observer and moves with him through the cosmic space. However, in radio propagation we are concerned with the lateral displacements and these always have a partial, though nearly equal, electrical counterbalance displacement. Only at ultra high frequencies does this counter displacement become small, being zero at the threshold at which electrons and positrons are created.

    PRINCIPLE OF EXPERIMENT

    The principle of the experiment involves direct comparison of the propagation properties of the special antenna and a half-wave dipole antenna. We compare their relative signal strengths under conditions not unduly contaminated by reflections, by making measurements at numerous positions over a wide range. The detector used was a conventional dipole. It is designed to detect waves emitted by a standard dipole antenna, and the comparison is dependent upon this factor. In retrospect, the measurements should also have been made using a receiving antenna of special design to complement the test antenna. However, in the event, it appears that the dipole receiver was 50% efficient in absorbing energy from the natural wave produced by the special antenna, taking a 100% reference for detection of the forced wave produced by the dipole antenna. Thus, at close and long range, taking the natural wave mode of the special antenna as reference, this 50% signal loss has to be kept in mind. The dipole transmission was at 100% signal strength close to the transmitter, on this basis, but, as the results will show, it degenerated rapidly in relation to the natural propagation from the special antenna and settled 50% below the latter within a few miles. This is interpreted as indicating that the dipole transmission degenerates from its forced mode to the natural mode by losing half of its initial energy through additional dissipation compared with the special antenna transmission. In the natural mode, its wave energy is then subject to the 50% factor at the detecting dipole. So, in effect, it appears that the dipole has lost three quarters of its signal compared with the special antenna. In fact, it has lost half, but it needs a special antenna to extract the optimum signal from waves propagating in the natural mode.
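
    The 50% and 75% bookkeeping above translates into the decibel figures used in the test reports; a short numeric illustration (mine, not from the original text):

        # Decibel equivalents of the power fractions discussed above.
        from math import log10

        def db(power_ratio):
            return 10 * log10(power_ratio)

        print(db(0.5))    # half power: about -3.0 db
        print(db(0.25))   # one quarter power: about -6.0 db

    The roughly -3 db level at which the dipole settles relative to the special antenna, reported later in Fig. 6, corresponds to the 50% figure.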

    The results are deemed to confirm the theory presented above, but it is emphasized that the data provided by these experiments was of record before the above theory was related to the specific findings. It is, therefore, quite rewarding to find that the extensive work involved in collecting the data has given results which stand up well against a theoretical background.

    Further experiment will, no doubt, give additional confirmation. As noted in the earlier publication (Gieskieng [1]), experiments using the special antenna as a receiver have shown it to have significant advantages, as if it is particularly adapted to the natural wave mode into which, according to the above proposition, all radio signals degenerate. Experiments have already been performed to set up interference tests between two transmitting antennas, firstly dipole versus the special antenna and secondly special versus special antenna. With the dipole interference, the data show a progressive and predictable change of wavelength between signal peaks close to the dipole (about ten wavelengths distant). However, with the interference between the two special antennas, the peak-to-peak distance does not change and so is not affected at this range by the direct wave components we associate with the dipole. This, in itself, shows that we are dealing with a special wave propagation mode. These interference experiments are the subject of continuing research and will be reported on separately in due course.

    The new antenna was termed the ‘Maxwell Antenna’ by Gieskieng in his prior work, partly because of the emphasis placed upon its pure electromagnetic wave propagation properties and the magnetic, as opposed to electric, content of Gieskieng’s tentative interpretation of its operation. In view of the new emphasis in this paper upon its operation as an electro-kinetic wave propagator, it seems preferable to label it with the name of its creator. Hence the use of the term ‘Gieskieng Antenna’ in the onward description.

    THE GIESKIENG ANTENNA

    This antenna is illustrated in Fig. 3.

    FIG. 3

    It is a transmission-line stub antenna having two legs of length B and a shorting bar of length A connecting adjacent ends of the two legs. In the form used in the tests described it was fabricated from tubing of diameter C. At the free ends there are tuning sleeves and there is a balun feed adjacent to the shorting bar. The design requires the overall length 2B+A to be about a half wavelength of the signal to which the antenna is tuned. Suggested dimensions to cover the following amateur bands are as tabulated:

    Table I

        Band      A     B       C
        20 mtr    48    180     4
        15 mtr    38    133     3
        10 mtr    30    84      3
        6 mtr     18    45      2
        2 mtr     7     17.25   2

    The lengths A, B and C are given in inches.
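
    A short script (my addition; the band centre frequencies are my assumed values) confirms that the overall conductor length 2B + A of each design is close to a free-space half wavelength, the tuning sleeves presumably taking up the residual difference:

        # Check that the overall length 2B + A of each Table I antenna is
        # close to a half wavelength (dimensions in inches; the band
        # frequencies below are assumed nominal values, not from the paper).
        INCH = 0.0254                # metres per inch
        C_LIGHT = 2.998e8            # speed of light, m/s

        bands = {                    # band: (freq_MHz, A, B)
            '20 mtr': (14.2, 48, 180),
            '15 mtr': (21.2, 38, 133),
            '10 mtr': (28.5, 30, 84),
            '6 mtr':  (51.0, 18, 45),
            '2 mtr':  (145.0, 7, 17.25),
        }
        for band, (f_mhz, A, B) in bands.items():
            length = (2*B + A) * INCH                  # overall length, m
            half_wave = C_LIGHT / (f_mhz * 1e6) / 2    # half wavelength, m
            print(band, round(length, 2), 'm vs', round(half_wave, 2), 'm')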

    THE TEST GROUPS

    The test results reported relate to antennas operating at 2 meters and 20 meters nominally, with frequencies of 145 MHz and 14 MHz, respectively. The antenna can be mounted with its plane vertical or horizontal. In choosing the different sites for the many tests performed, care was taken to exploit the natural features of the terrain to minimize ground reflections which might affect the comparison of the performance of the antenna relative to the reference antenna.

    The 145 MHz comparisons were made at a height of several wavelengths and on the edges of large cliffs, where both antennas could radiate with a great deal of certainty that the forward radiation would have negligible involvement with the immediate ground. When the downward portion of the wave did finally reflect, it was prevented from reaching the monitoring antenna by placing the latter some 300 feet back from the edge of the opposite cliff.

    It was found impractical to make similar cliff-side tests at 14 MHz, and the new antenna was therefore tested against existing beams, using a mountain-top monitor station.

    The following is a synopsis of some of the tests which were performed over a period of four years with the help of many colleagues (see later acknowledgement).

    Test Group 1

    This involved many 14 MHz tests from three significantly different sites in the Arvada, Colorado area, transmitting to the top of Squaw Mountain, which is on average some 5,000 feet higher and 24 miles distant to the West. The monitor station included a horizontal dipole and a vertical dipole, crossing near their midpoints, closely cut to resonance and fed separately into the receiver building. The test antenna used for transmission was constructed of 4 inch diameter aluminium pipe with a line-portion spacing of 24 inches. As indicated above, it was resonant by virtue of its total component length being a half wavelength. It was tested at numerous rotations, tilts and heights, ranging from the shorting bar section touching the ground to a position in which it was at a height of 50 feet. Well over one thousand data readings were taken in assessing the consistency of the system by repeated test.

    In this case, to provide the standard of reference, six existing commercially made triband Yagi beams and one monoband Yagi beam in the Golden-Arvada-Denver area were also tested to Squaw Mountain, with several checks in their favoured direction to assure obtaining their peak values. To provide a common basis of comparison, their signal readings were adjusted to a range of 24 miles and further adjusted for lobe angle so as to provide a common reference. The latter correction was necessary because the monitor station on the top of Squaw Mountain subtended vertical angles ranging only from 2.0° to 3.33° from the test sites, whereas the lobes of the beams used peak at vertical angles ranging from 12° to 37°.
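
    The quoted vertical angles are consistent with the stated geometry; a rough check (using the nominal figures given above, since the actual site elevations varied):

        # Vertical angle subtended by a station about 5,000 ft higher
        # at a range of 24 miles (nominal test group 1 geometry).
        from math import atan, degrees
        print(degrees(atan(5000 / (24 * 5280))))   # about 2.3 degrees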

    Both Yagi and Gieskieng antenna results are summarized in Fig. 4, where the average of the polarization components is shown. The curves relate the different Yagi beams according to their antenna height above ground, and the Gieskieng antenna, in both vertical and horizontal positions, also for different heights. This result is best interpreted in the light of the further tests reported below.

    Fig. 4. Power averages of the vertical and horizontal monitor antennas for the Yagi beam stations (*) and for the Gieskieng antenna in horizontal mode (broken line) and vertical mode (full line), all related to antenna height above ground.

    Test Group 2

    Two double element quad beams were found beyond the area suitable for inclusion in the above tests but having a reasonable angle with respect to Squaw Mountain. Both were on 80 ft. towers and less than one mile apart. This test group involved the opportunity to install the 20 meter Gieskieng antenna on one of these towers, which was temporarily not being used. Through the years the other quad had always obtained world-wide reports identical to the quad originally on this tower. This made it an established comparison for the Gieskieng antenna.

    First, the Gieskieng antenna was rotated to face Squaw Mountain to test its directional properties. It was found to be omnidirectional within about 1 db, confirming a result found in the test group 1 data, and it was then left in a fixed position. The owners of the two stations then undertook a closely-monitored comparison between the Gieskieng antenna and the two element quad (both being horizontally polarized). An analysis of the log of 17 stateside and worldwide mutual contacts, including reports on a Yagi that joined in on 9 of the contacts, gave 26 long range comparisons of antenna performance.

    The contacts reported results for the quad and Yagi beams that were on average 4.5 db over the Gieskieng antenna performance. However, half of the 26 contacts reported precisely equal performance for the Gieskieng antenna, indicating that, in these instances, the latter has, approximately, a 3 db advantage over a dipole in long range wave survival efficiency.

    The reason for this is understood if we compare performance as adjusted for isotropic radiation. The dipole lobe is 2.2 db over this reference level and the calculation of lobe off-centre power levels for the highest (80 ft) Yagi beam in the Fig. 4 data indicated that the beam lobe was only 0.75 db over the dipole. The Yagi beam should, on this basis, have about a 3 db advantage over the omnidirectional Gieskieng antenna. Yet, as Fig. 4 shows, the Yagi beams gave about the same results as the Gieskieng antenna, showing that the omnidirectional power of the latter could match the lobe power of the Yagi beam and had a 3 db advantage in wave survival efficiency. The same result was confirmed for half the 26 contacts of test group 2.
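
    The gain bookkeeping of this paragraph can be set out explicitly (a restatement of the argument using only the figures quoted above; nothing here is new data):

        # Expected Yagi lobe advantage over an omnidirectional radiator,
        # built up from the gains quoted in the text (db over isotropic).
        dipole_lobe = 2.2    # dipole lobe relative to isotropic, db
        yagi_extra = 0.75    # 80 ft Yagi lobe relative to the dipole, db
        print(dipole_lobe + yagi_extra)   # about 3 db expected advantage
        # Observed: roughly equal signals, so the Gieskieng antenna makes
        # up the ~3 db through better wave survival, as argued above.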

    Test Group 3

    Whilst the rotation and inclination tests of the 20 meter antenna suggested that it had a spherical radiation pattern, it was desired to be certain that there was no null in its overhead pattern, so as to be sure of the above conclusion. The first test utilized a manned balloon flyover. A short steerable receiving dipole was carried by the balloon and kept broadside to the transmitting Gieskieng antenna on the ground. It revealed sphericity through the maximum 60° vertical angle of its passage, but unfortunately a wind shift prevented a directly overhead flight.

    Subsequently, experiments at 2 meters made it possible to explore higher vertical angles by using a Gieskieng antenna to monitor the OSCAR satellite beacon. Passages above 80° still failed to indicate an overhead null. Thus the pattern of the antenna is regarded as being very nearly spherical (see later discussion).

    Test Group 4

    Direct comparison of dipole and Gieskieng antenna properties was made at 145 MHz. The data obtained were of the form shown in Table II and as plotted in Fig. 5. The transmitting antennas were horizontally polarized and the receiving antenna was arranged so that it was always broadside to the transmitting antenna, but could be rotated in a vertical plane to side tilts of 0° (horizontal), 22.5°, 45°, 67.5° and 90° (vertical). The voltage drop across the receiver meter was fed into a strip chart recorder, which recorded the signal strength received from the transmitting antenna through its 0° to 360° horizontal plane rotations.

    Dipoles constructed of 12 gauge wire, 1.25 inch and 2.5 inch tubing were tested in the transmitting mode with a complete horizontal plane revolution for each of the receiving antenna side inclinations. The test group 1 work had shown that it was imperative to procure the average of at least the vertical and horizontal energy components to obtain consistent figures of merit and obtaining three additional intermediate components was a further refinement.

    Immediately following the dipole tests, Gieskieng antennas constructed of 5/8 inch, 1.25 inch and 2 inch diameter conductors were similarly tested. It should be noted that the 2 inch and 1.25 inch Gieskieng antennas were fed with untuned ferrite baluns, which provide symmetry but occasion some loss at 145 MHz. This loss was determined to be about 1 db and was allowed for in the data presented for these two antennas. Subsequent tests using a ‘bazooka’ feed, to eliminate the cable contribution and avoid the toroid loss, verified this adjustment to within 0.2 db.

    The test site was the Golden, Colorado area which has two large equally high basalt-capped plateaus, North and South Table Mountains, which are nearly one mile apart, have precipitous cliffs and are joined by a flat valley some 600 ft. below. This terrain provided an ideal opportunity to let the 2 meter transmitting antenna located near the edge of one of these cliffs radiate freely, in the knowledge that the earth reflections would be effectively blocked from reaching the receiving antenna, set back some 300 ft. from the edge of the other Table Mountain. The transmitting antenna was on a 16 ft. pole and the receiving antenna on a 20 ft. pole.

    Table II

        Antenna       Horiz   22.5°   45°     67.5°   90°    Mean    Feed
        G (2 in)      18.99   17.73   14.90   10.04   2.44   14.53   (1)
        G (1.25 in)   18.08   17.92   16.12   10.43   3.14   14.62   (1)
        G (5/8 in)    18.37   17.99   15.35   10.38   2.97   14.58   (2)
        Mean          18.48   17.88   15.46   10.28   2.85   14.58
        D (2.5 in)    16.90   16.60   13.75    8.76   3.09   13.18   (3)
        D (1.25 in)   16.76           13.27           0.91   12.46   (3)
        D (12 ga)     16.91   15.26   12.66    6.88   2.38   12.30   (3)
        D (12 ga)     16.62   15.89   13.37    8.29   0.29   12.59   (4)
        Mean          16.81   15.92   13.29    8.05   2.08   12.71
        G - D          1.67    1.96    2.17    2.23   0.77    1.87

    The table shows signal strength in db for the different Gieskieng (G) and dipole (D) antennas. It shows how a reading for a set position of the transmitting antennas is obtained. The average for the antennas in this particular position is 14.58 db for the Gs and 12.71 db for the Ds, giving a mean advantage of the Gieskieng antenna over the dipole antenna of 1.87 db at the one mile range. (The two blank entries for the 1.25 inch dipole correspond to readings absent from the original record.)

    Fig. 5. Comparison of signal strengths from Gieskieng antennas and dipole antennas for receiving antenna side tilts of 0° (horizontal) (dotted line points), 22.5° (crosses on broken lines) and 45° (circles on lines).

    The feeds used in Table II are (1) untuned balun, (2) untuned gamma, (3) tuned gamma and (4) double tuned balun.

    The effect of rotating the transmitting antenna is shown in Fig. 5. This confirms the omnidirectional properties of the Gieskieng antenna in the horizontal plane and shows its power advantage over the equally energized dipole antenna.

    Test Group 5

    A year after the above tests were performed, another 2 meter test was made over the same one mile range using a 2 inch diameter Gieskieng antenna and a standard dipole. Both had bazooka feed. The power difference was 1.66 db, within 0.21 db of the foregoing test.

    Test Group 6

    North Table Mountain has a number of long extending fingers with cliffs, and a range of 0.36 mile was utilized to repeat tests on Gieskieng and dipole antennas similar to the test group 4 tests. The results gave a five-angle integrated similarity within 0.1 db; that is, their wave energies were essentially equal at this distance.

    Test Group 7

    A short non-resonant dipole was used to monitor 145 MHz Gieskieng and dipole rotations from a distance of 24 ft., without the benefit of the intervening chasm. This was as close as it was possible to get and still obtain coherent recordings. The dipole had an integrated advantage of 3 db.

    Test Group 8

    Another 145 MHz cliff side test was made over a distance of 5.1 miles and resulted in a 2.2 db integrated advantage of the Gieskieng antenna over the dipole.

    The data obtained from these various tests are summarized in Fig. 6. The curve shows the measured power of the dipole antenna signal relative to that of the Gieskieng antenna, the five data points marked by dots being the comparisons at 145 MHz and corresponding to test groups 7, 6, 5, 4 and 8, respectively. The + data point was for the 14 MHz test of test group 1 and the * data point is the one inferred by the method of test group 2.

    Fig. 6. Comparison of the signal strength of the dipole transmission relative to that of the equally powered Gieskieng antenna, taken as the 0 db ordinate reference.
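
    For reference, the dot-marked 145 MHz comparisons can be tabulated from the test group reports (my collation of the figures given above; positive values mean the dipole was ahead):

        # Dipole power relative to the Gieskieng antenna (db) versus range,
        # from test groups 7, 6, 4, 5 and 8 as reported above.
        points = [
            (24 / 5280, +3.0),   # test group 7: 24 ft
            (0.36,       0.0),   # test group 6: energies essentially equal
            (1.0,       -1.87),  # test group 4: one mile
            (1.0,       -1.66),  # test group 5: repeat a year later
            (5.1,       -2.2),   # test group 8: 5.1 miles
        ]
        for miles, level in points:
            print(f"{miles:6.3f} mi  {level:+5.2f} db")
        # The trend approaches about -3 db (half power), consistent with
        # the forced wave shedding half its energy, as argued above.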

    DISCUSSION

    Other tests have been conducted with a view to developing beaming properties in a system using the Gieskieng antenna and, as stated earlier, experiments on interference between antennas are giving interesting results bearing upon the unusual properties of this new antenna. However, these are outside our present scope and will be reported separately.

    The omnidirectional properties of the Gieskieng antenna are of special significance. If, as implied by the theoretical introduction, this antenna produces a displacement oscillation in the field medium which is a true natural oscillation in a direction parallel to the free ends of the antenna legs, then it is logical that the wave radiating from it should be symmetrical about this axis. Radiation should therefore be isotropic in a transverse plane. However, radiation perpendicular to this plane will also occur owing to the current in the two legs. This current is in anti-phase in these legs and, owing to their separation by the distance A in Fig. 3, a wave representative of the spacing will be propagated as a forced wave in the antenna plane and in a direction parallel with the shorting bar A. Since the legs are longer than the shorting bar, the fact that A is much smaller than a half wavelength will tend to be balanced, and a somewhat spherically-symmetrical wave propagation can be expected, at least over a short range.

    It is submitted that the data in Fig. 6 bear out the proposition that, whereas the Gieskieng antenna radiates what is essentially a natural, non-dispersive electromagnetic wave oscillation, the dipole antenna radiation is dissipated over a range of a few miles as half of its power is lost in adjusting to the natural propagation mode. The receiving dipoles used in the tests would then respond at 50% efficiency to the natural mode, that is, at the longer ranges for the dipole transmission and at both near and longer ranges for the Gieskieng antenna transmission, thereby explaining the curve in Fig. 6.

    Further research is needed to verify that, if a Gieskieng antenna had been used as the receiving monitor in the tests, the transmitting Gieskieng antenna would have shown a four-to-one advantage over the dipole.

    It is recognized that tests comparing the theoretical propagation properties of antennas with their actual propagation properties are very difficult and hardly practical. See, for example, the work of Dolle and Cory [13], who compared radiation from a dipole antenna and a loop antenna at various frequencies and various close ranges. Above 100 MHz they found that the field attenuation measurements were erratic because they were influenced by nearby objects. However, though they did find that the dipole field attenuated more rapidly than theory predicted, nothing of fundamental significance can be attached to the discrepancy beyond some useful empirical data.

    It is submitted, therefore, that the tests reported here are rather special in the way in which the ground reflection problems are overcome and in the way an entirely new type of antenna forms part of the comparison.

    If one considers whether tests could be performed on wave propagation under more controlled conditions, the question is then raised as to whether a pure sinusoidal signal propagated along a coaxial cable would degenerate in transit from a forced wave mode to a natural wave mode. On the theory presented, it seems probable that the phase of the electric wave field will alter by one eighth of a wavelength over a range of adjustment (of the order of a few miles) as the in-phase electric and magnetic oscillations (Fig. 1) adjust to a quadrature phase relationship (Fig. 2). The question then arises, firstly, whether this in fact occurs and, secondly, if it does, what determines the rate of degradation of the signal? Such an experiment seems viable but, happily, it seems that this first possibility has already been tested and answered in the affirmative. Torr and Kolen [14] have reported some very perplexing results in an experiment sending a 5 MHz signal along a 500 meter coaxial line. They sought to measure the one-way speed of propagation, or rather its variation, by sending the signal between two atomic clocks and keeping track of its phase change. Assuming that the phase change measured was solely due to variations in propagation speed, they inferred that the speed of light could vary in a one-way measurement by as much as 1%. Phase differences of 8 nanoseconds, or 0.04 wavelengths, were found, and they had a spurious dependence upon the time of day, an implication that there might be some sensitivity to atmospheric conditions. However, a change of 0.04 wavelengths in 500 meters would correspond with the phase shift accompanying the attenuation predicted by this paper and would apply over a range commensurate with the measurements reported in Fig. 6.
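
    The 0.04 wavelength figure follows directly from the numbers quoted (a one-line check, my addition):

        # An 8 ns phase difference on a 5 MHz signal, in wavelengths.
        period_ns = 1e9 / 5e6    # one cycle of 5 MHz is 200 ns
        print(8 / period_ns)     # 0.04 of a wavelength, as quoted above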

    It follows, therefore, that the antenna data reported here may have independent support and relevance to basic research on coaxial cable transmissions. Torr and Kolen admitted being perplexed by their findings and drew attention to their dilemma by saying:


    ‘Since there is no theory available which can account for these variations, we believe that it is essential to repeat the experiment with different clocks...’

    A theory has been provided in this work, but it remains to be seen whether it will stand the test of time. Meanwhile, however, there is purpose in exploiting the new antenna proposed and investigating the reasons for the degeneration of the forced Maxwell wave by studying wave interference phenomena.

    ACKNOWLEDGEMENT

    Particular thanks are expressed to ‘Bob’ Swanlund, WOWYX, who, as live-in owner-occupier of the Squaw Mountain repeater station, gave some 500 reports in the evolution of the horizontal and vertical monitor array and subsequently made over one thousand readings in the actual tests. Since his retirement he has moved to Golden, Colorado and has participated in most of the 2 meter tests around the Table Mountains. As a practical radioman, his continued interest over a 4 year span was a special inspiration, as was that of my (D.H.G.) brother, originally 9BDF, 1924.

    Over 25 others came, in some cases repeatedly and from considerable distances, to help in manning the field stations, in loaning equipment and in making available their towers and stations for the quad comparison tests, as well as in providing photographs of several field tests. This assistance also included the provision and manning of a balloon (Don Ida) for the flyover tests, the provision of a computer and considerable time in integrating the Table Mountain tests, and expert advice in antenna performance evaluations.

    REFERENCES

    (1) D. H. Gieskieng, The Mines Magazine, p. 29 (January 1981).
    (2) G. M. Graham & D. G. Lahoz, Nature, 285, 154 (1980).
    (3) H. Hertz, ‘Electric Waves’, English translation by D. E. Jones, Macmillan, London, 1893; Dover, New York, 1962.
    (4) F. D. Tombe, The Toth-Maatian Review, 2, 839 (1984).
    (5) T. E. Phipps Jr., Journal of Classical Physics, 2, 1 (1983).
    (6) S. Kosowski, unpublished manuscript.
    (7) W. S. N. Trimmer & R. F. Baierlein, Physical Review D, 8, 3326 (1973).
    (8) H. Aspden, Wireless World, 88, 37 (1982).
    (9) H. Aspden, Lett. Nuovo Cimento, 41, 252 (1984).
    (10) H. Aspden, ‘Modern Aether Science’, Sabberton, Southampton, 1972, p. 133 et seq.
    (11) B. Warner & R. E. Nather, Nature, 222, 158 (1969).
    (12) H. Aspden, Lett. Nuovo Cimento, 39, 247 (1984).
    (13) W. C. Dolle & W. E. Cory, IEEE Trans. Electromagnetic Compatibility, EMC-10, 313 (1968).
    (14) D. G. Torr & P. Kolen, in ‘Precision Measurement and Fundamental Constants II’, B. N. Taylor and W. D. Phillips, Eds., Natl. Bur. Stand. (U.S.), Spec. Publ. 617, p. 675 (1984).
    


    There has been some feedback from Dave Gieskieng on the content of this Web page. See Feedback Note 3/98. See also the Message from Dave Gieskieng.

    Readers may appreciate that what has been discussed in this paper, and in its introduction as provided by Lecture No. 10, has very far-reaching implications. The issue concerns the latent energy condition of the aether, and it offers scope for reinterpretation of cosmological redshift data so as to avoid the absurdity of the belief that the universe was born at a single point in space and has been expanding ever since. It gives scope for a new interpretation of the source of the heat which powers the sun. Indeed, at the end of the day, a day sometime in the 21st century, we will see that ‘cold fusion’ in its broadest sense will feature in the recognized forces of creation, the materialization of matter from the energy residing in the aether. These Web pages have much to report on this author’s contribution to this subject.

    *******
  • LECTURE NO. 10

    LECTURE NO. 10

    THE OCEAN OF SPACE

    Copyright Harold Aspden, 1998

    Introduction

    This lecture topic was conceived as I relaxed on an ocean crossing. I was on the cruise liner ‘ORIANA’, headed from the port of Southampton in England to Auckland, New Zealand, by way of the Panama Canal, San Francisco, Hawaii and Fiji. I am writing this on February 22nd, 1998, just a few days after returning from New Zealand by air with a refuelling stop at Los Angeles. I had been reflecting on a conversation over dinner with a table companion who had boarded the ship at San Francisco. His background was technical, that of a senior executive in the steel industry. Our conversation touched upon our past experiences and present pursuits, mine, as my wife duly explained, being a scientific interest in proving that I was right and that Einstein was wrong. Our dinner companion expressed his curiosity; he had never really understood what Einstein’s theory was about. I was now in deep water, metaphorically speaking. There was the Pacific Ocean beneath us, but I ask, how can one begin to explain the intricacies of Einstein’s theory, far less my own efforts, as part of one’s social intercourse over dinner without sharing the motion of the ship and floundering a little? So, I compile this lecture from my secure foundation back at home and will use the gist of it as my fall-back position if I am asked the question again on a future ocean voyage.

    Making Waves

    From one’s viewpoint on a ship in mid-ocean one sees the sea, its horizon and the sky above. The ocean is vast. The sky above, meaning the atmosphere that we can see, extends upwards over a few kilometers, but beyond that there is that mysterious medium we cannot see. We call it space and, so far as we can judge, it extends to infinity.

    In the 19th century that space was deemed to be a subtle medium which was referred to by the name ‘aether’ otherwise spelt as ‘ether’. The aether was seen as a real medium which regulated the motion of waves, meaning the kind of wave we associate with radio communication or light radiation. The 19th century scientist assumed that those waves, which he knew travelled at the high speed of light (some 300,000 km/s), were moving at that constant speed by reference to the aether.

    Towards the end of the 19th century, scientists developed a way by which to attempt to measure the speed at which body Earth was moving through space by setting up multiple reflections of light between mirrors. They expected that, relative to that apparatus, the light would travel between the mirrors one way faster than it would in the opposite direction. That assumes that the light travels parallel with the direction of the Earth’s motion. For light making such a round trip at right angles to the motion but over the same distance there should be no such difference and so, comparing the two conditions, there should be a small detectable effect which would indicate the Earth’s speed through the aether. To their horror and dismay, however, their scientific tests gave a null result. This suggested that the Earth was at rest in the aether, even though we know it travels around the sun at some 30 km/s. What had gone wrong?
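
    The size of the effect those scientists were looking for is easily estimated (a standard order-of-magnitude figure, added here for scale):

        # Expected fractional effect in the Michelson-Morley experiment
        # for the Earth's orbital speed of about 30 km/s.
        v = 30e3     # Earth's orbital speed, m/s
        c = 3e8      # speed of light, m/s
        print((v / c) ** 2)   # about 1e-8; small, but within reach of the
                              # interferometer, and yet no effect was seen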

    Now, I want you to imagine that you are the captain of an ocean liner travelling through the waves on the ocean. You can, if you wish, measure the speed of the ship by dropping something that floats overboard and timing its passage over a measured distance along the side of the ship. You would have reason to be most concerned if the test proved that the ship was at rest even though you know it is moving at speed. But there is a difference here between this test and that based on motion through space. Those 19th century scientists did not drop something into the aether which then stayed put in that aether as we moved away from it, nor could they measure distance between two fixed points in the aether. Instead, they made assumptions about waves and looked at how the crests of two separate waves stood in relationship to one another. So, let us ask our captain to perform an experiment analogous to that of the aether tests.

    To perform this test, the captain has to go to a swimming pool on board the ship and jump in. (In fact, I witnessed the captain of the ‘Oriana’ being duly ducked in the pool on the occasion of the equatorial transit – 1998 World Cruise – but that is by the way. He was not performing the test I now describe.)

    The pool has to be square in form and the object is to see how the wavelength of a surface ripple depends upon the frequency of the wave, for ripples set up in the direction of the main axis of the ship and for ripples set up in the transverse direction. If there is a difference then a scientist can deduce the speed of the ship, at least according to the same logic as was used in devising that aether test based on light waves.

    The captain has to go into the pool so as to be an observer immersed in the medium he is observing, without being distracted by having sight of the surrounding sea, as otherwise he would be able to judge the speed of the ship without relying on a test analogous to the aether experiment.

    Now of course this test is not feasible owing to ship’s vibration, resonances and numerous other factors, but it helps to put the problem in context. The water in the swimming pool is not part of the body of water in the ocean and so the speed of waves in the pool will hardly be affected by the speed of the ship. Indeed, common sense, backed if necessary by some modest knowledge of Newtonian mechanics, assures us that what happens to water in the ship’s pool is referenced on the pool and not the external ocean.

    Note that the water is trapped in the pool and so moves with the ship. In the analogous case of the aether, surely one can imagine that there is something in the aether that can move bodily with an optical apparatus bounded by mirrors.

    The experiment in question is known to physicists as the Michelson-Morley Experiment but they seem not to have heeded the statement by N. R. Campbell in his 1913 book ‘Modern Electrical Theory’, published by Cambridge University Press. He challenged the meaning of the word ‘aether’. On page 388 one reads:

    This is the simple way out of the difficulties raised by the Michelson-Morley experiment. If from the beginning we had used a plural instead of a singular word to denote the system in which radiant energy is localised (or even a word which, like ‘sheep’, might be either singular or plural), those difficulties would never have appeared. There has never been a better example of the danger of being deceived by arbitrary choice of terminology. However, physicists, not recognising the gratuitous assumption made in the use of the words ‘the aether’, adopted the second alternative; they introduced new assumptions.

    In short, physicists failed to see the null test of that experiment as telling us something about the properties of the aether. They just slept on the problem, only to be aroused from their slumbers when Albert Einstein came into the picture by, in effect, saying there was no problem anyway, because, as our ship’s captain could have explained, what happens on and inside a ship takes its reference from the ship itself. It is a question of relativity, but not the kind of ‘relativity’ that Einstein was to present. Yes, there was no problem, because the aether has a way of adapting to wave energy when that energy is confined by apparatus in motion, but Einstein’s ‘no problem’ stipulation was to say that physical processes are all referenced on the observer witnessing those processes; non-accelerated motion is said not to affect what is observed. In other words, Einstein said: “Let us adopt a new philosophy that says there is no problem and adjust our physics to fit what we say – the aether can be ignored if we think along those lines.”

    Well, either the aether exists or it does not and I am not one for ignoring it, given that it is the only source of energy that offers promise for our future salvation. There is much to learn about the aether and how it creates matter and only a fool would be content to follow the Einstein flag, given that it leads only to a fool’s paradise.

    Our ship’s captain, according to Einstein, cannot see the ocean through which his ship moves. Instead, he has to be content with what he sees occurring in his swimming pool and his observation of ships on the distant horizon. If those ships happen to be moving at the same velocity, they will appear to be at rest; there is no relative motion. However, the captain knows that without reference to Einstein’s theory.

    So, my reflections amount to defending the need for an aether and urging enquiry into its form, not so much because I care how the speed of light is affected by it, but because I care about its energy properties. It is a storehouse for energy, what is known as field energy, the energy we associate with electric and magnetic fields. One cannot declare that it does not exist, simply because it seems not to live up to one’s imaginary expectations!

    Einstein has obstructed our progress in understanding every aspect of the aether, especially its role in the creation of matter and its role in regulating the quantized motion of electrons in atoms.

    Making More Waves

    Still on the theme of waves on water, suppose you throw a bucket of water into a pool having an unruffled surface. You will set up waves. There will be a travelling wave, a kind of surge spreading from the point at which the water entered the pool and this will subside to leave the pool agitated by the up and down motion of water and the attendant waves, before the pool eventually reverts to having once again a flat surface. The surge was a ‘forced’ effect, what I could refer to as a ‘forced wave’, whereas the up and down movement of water can be said to involve ‘natural waves’.

    The question of interest is whether such a distinction between forced waves and natural waves exists in the case of electromagnetic waves propagated through space. If it does, then we know for sure that Einstein’s theory has had its day; it offers no feature that can explain the two forms of wave. The real aether offers such a feature, and so aether theory must replace relativity.

    The proof that forced waves and natural waves do exist for radio communication is to be found in the canyon experiments reported by Dave Gieskieng.

    In summary, a natural wave involving that up-and-down motion, is one in which the potential energy (electric field energy) is exchanged with kinetic energy (magnetic field energy) without obliging energy to move at the wave propagation speed. This means that the energy sustaining this exchange process is conserved locally in space; it is energy that exists ab initio, just as there is water in the ocean before it is rippled by surface waves. The forced wave arises where energy is forced into the space medium, as by a radio antenna, with the kinetic energy (magnetic field energy) and the potential energy (electric field energy) being fed together so as to be driven forward, keeping in phase, with both having their wave crests at the same instant along the propagation path.

    As might be expected, with the passage of time, the forced wave subsides into the natural wave form and the experiments need to be able to detect the transition. Obviously, if you accept Einstein’s theory, which says that light waves travel at a constant speed, you will not expect there to be any transition to a natural wave. Indeed, you would face a scenario where energy must travel from sun to Earth at the speed of light, rather than one where the wave oscillations merely release energy from the aether where the waves are intercepted, leaving the aether to find its own equilibrium, as does the ocean, if a bucket of water is taken from it.

    You are deep into the need for an aether once you face the facts concerning forced and natural waves. However, if you accept Einstein’s theory you will not be one to seek funding to perform the necessary experiments. You would rather sleep on the dilemma as to how the energy transported by waves is deployed when two waves crash into each other from opposite directions. Somehow, when you wake up, you will need to reconcile the fact that when two light waves pass through one another their amplitudes are unaffected and if the two waves have the same amplitude you must explain how they acquire extra energy at their instant of interception. Bear in mind that energy is a function of the square of wave amplitude, and two squared implies four units of energy, but only two such units are available. Also bear in mind that, if you say that energy travels at the speed of light, you confront some interesting issues, particularly for waves of long wavelength. Your energy will need to dash around hither and thither in a motion superimposed upon the speed of light, but yet the energy cannot move at any speed other than the speed of light!
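
    The arithmetic of that dilemma is easily set down (a numeric restatement of the paragraph above):

        # Two equal waves, each of unit amplitude: energy goes as the
        # square of amplitude, so 1 + 1 = 2 units are available. At an
        # instant of constructive overlap the combined amplitude is 2,
        # implying 2**2 = 4 units -- the bookkeeping puzzle posed above.
        a = 1.0
        print(a**2 + a**2)   # energy available: 2.0
        print((a + a)**2)    # energy implied by the summed amplitude: 4.0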

    Maybe before you wrestle with that problem it is better for you to go back to sleep and dream about Einstein. Or maybe you will be content to rely on Maxwell’s equations. They are built empirically on observation of the action of forced waves, so they are silent on the question of natural waves and they lack symmetry for that very reason!

    Putting all this into practical terms and coming to the experiments performed by Dave Gieskieng, the issue faced is whether an efficient radio antenna is one which puts out the greatest amount of power by literally forcing energy into the radiating field, or one which is designed to exploit the natural wave properties of the aether by setting the energy latent in that aether in motion at the point of transmission. Note that the energy forced into the aether is always lost and dispersed as heat in its passage from the antenna. It is the onward natural wave oscillation that is the true workhorse covering the mileage to a distant receiver. If the receiving antenna is designed specially to pick up natural waves then it will do better than one, such as a simple dipole, which might seem to be optimum for forced waves.

    Well, I will close this introductory theme here by saying that the Gieskieng experiments prove my case and prove the need for an aether able to sustain natural waves. By testing different combinations of the two types of antenna for transmission or reception, and using a radiation path traversing a deep canyon in Colorado, Dave Gieskieng has discovered something that simply tells its own tale. However, here is another example where those in authority simply do not want to know the truth, because they have treasured beliefs instilled in them by their academic training. For my part I could not stand by and turn a blind eye to what Dave Gieskieng had to say, and so I joined forces with him in documenting the report ‘An Antenna with Anomalous Radiation Properties’. It presents the facts of experiment. It has not been published hitherto because the scientific community, those who referee scientific papers, fear the consequences of challenging the equations of Clerk Maxwell and the doctrines of Albert Einstein. Also, the experimental research involved was performed by an individual acting on his own initiative, not as part of an institutional academic or governmental research team.

    The above-mentioned paper is reproduced in full in the continuation pages of this Lecture.


  • LECTURE NO. 9

    LECTURE NO. 9

    SUPERGRAVITONS AND COLD FUSION

    Copyright © Harold Aspden, 1997

    This Lecture is an article that appeared on pp. 112-116 of the July-August, 1997 issue of ‘Infinite Energy’.

    Introduction

    There are millions of patents and millions of scientific papers and many, many books. Indeed, as I see from reading a 1997 issue of CAM, the alumni magazine of Cambridge University in England, over 8 million copies of Stephen Hawking’s scientific book ‘A Brief History of Time’ have been sold.

    All of those copies of that book say the same thing in discussing whether time can run backwards and how time began, as well as whether the universe is infinite or enclosed in boundaries. There is no way of proving what is said on these matters, but 8 million people in the world are avidly interested in such scientific issues. Whether they can understand what the book discloses about the topics just mentioned is questionable, but they can rest assured (by the scientific community) that the content of the book is backed by ‘reputable evidence’, meaning the goodwill of Hawking’s fellow researchers who spend their lives delving into the secrets of space-time.

    I use the expression ‘reputable evidence’ because I find that this is a criterion applied by the U.S. Patent Office in examining ‘cold fusion’ patent applications. There is, it would seem, no ‘reputable evidence’ supportive of the ‘cold fusion’ phenomenon.

    On the one hand there are 8 million books all explaining to the world something that they can never ever understand, far less verify or see as a benefit to their existence, and on the other hand there are millions of patents and millions of scientific papers all saying something different and all being understandable because they have been scrupulously refereed by patent examiners or peer scientists. That is, if you are fortunate enough to have the backing of ‘reputable evidence’, as by being employed in the research laboratories of a major corporation. Yet we are not destined to see amongst the U.S. collection of patents any which are based on the discovery of ‘cold fusion’, simply because the evidence in support is said to be of no ‘repute’! It was not ‘of repute’ because it went against the vested interests of those researching ‘hot fusion’; even though the attempts to prove hot fusion’s feasibility are of more than forty years’ vintage, that research has the proper ‘repute’.

    Why is it that ‘cold fusion’ does not have the backing of ‘reputable evidence’? The reason is the startling nature of the scientific phenomenon involved. It is basic to the issues involved in the creation of the universe, but ‘cold fusion’ has not developed from the efforts of those who work with big particle accelerators or those who interpret four-space and worry about ‘Black Holes’ and such like. The orchestrated collaboration which keeps universities in funds to study the grand issues of the universe, by computing what happened in the first few milliseconds after the beginning of time, has assured a peer activity in creating the type of ‘reputable evidence’ that Examiner Behrend of the U.S. Patent Office would find acceptable.

    If the Wall Street Journal says ‘cold fusion’ is not viable, then Examiner Behrend takes that as reason to reject an application for an invention, the structure of which is new and which has never been considered by those who brief the journalists of the Wall Street Journal.

    Lack of 2020 Vision at U.S. Patent Office

    Now, apart from these opening remarks, I have something more constructive to say and I am going to exercise a little 2020 vision in declaring that, by the year 2020, we will have commercially viable materials that are superconductive at temperatures well above room temperature and which also exhibit ferromagnetic properties. I go further here and ask you to join me in conceiving a new invention by a process of grinding that material down into small particles which are closely moulded inside an electrically insulating substance. That might seem a rather futile thing to do, to take a superconductor and then mould it into a non-conductive block form, but remember that invention ought to have an element of surprise, as otherwise it would be obvious and so non-patentable.

    So, in 2020 we find we can make something that ostensibly lacks utility. That is another criterion which Patent Examiner Behrend keeps in mind in judging patentability. But then I say that by doing something obvious we can endow that non-conductive block we have fabricated with ‘utility’. It must be useful. Even with our old knowledge of physics we know that a strong magnetic field can destroy the superconductive state, but also that, once a magnetic field penetrates the superconductor, it can become locked in place at a level of field strength below a critical threshold. Our invention is therefore to take that block we have moulded and apply a very powerful magnetic field which we progressively reduce to zero. I then suggest, with my 2020 vision, that we are then left with what is, in effect, a permanent magnet.
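
    The rule being invoked here can be reduced to a one-line toy model. It is a caricature of flux pinning, not a design calculation, and the field values fed to it below are purely illustrative.

        def trapped_field(b_applied_max, b_critical):
            # Toy rule: flux that has penetrated the superconductor is pinned
            # in place up to the critical threshold; reducing the external
            # field to zero then leaves that pinned flux behind.
            return min(b_applied_max, b_critical)

        # Apply 2.0 T to a block whose pinning threshold is taken as 1.2 T:
        print(trapped_field(2.0, 1.2))   # the block retains 1.2 T: a 'permanent magnet'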

    If what I say is true, all we need, therefore, is to be patient and wait until we have on the market a material that exhibits superconductive properties at temperatures of the order of the boiling point of water and, hey presto, we can fabricate something new in permanent magnets.

    Now, if I were to file a patent application today based on the invention just outlined, the U.S. Patent Office would not grant me a patent unless I could identify a substance suitable for fabricating those magnets. The U.S. patent system does not allow one to speculate, however ingenious the speculation. As a result, the system favours those who confine their efforts to building and testing devices that can be demonstrated with consistent results. The greater issue, such as the prospect of ‘cold fusion’, which may need exhaustive research to unravel all its mysteries, cannot be patented in U.S.A. because Patent Examiner Behrend has to be sure the invention really is a ‘fusion’ device. The idea that a patent can be granted for something meritorious that may need funding and experiment to verify fully its operability is not contemplated. To someone like myself who is familiar with patent practice elsewhere in the world this simply means that, owing to the trend now set by the cold fusion saga, there will be inventions galore that will never see the light of day in a commercial sense if they originate in U.S.A. and rely on U.S. backing for their development.

    Readers of ‘Infinite Energy’ will know that Martin Fleischmann estimates the value of the ultimate breakthrough in ‘cold fusion’ as some 300 trillion dollars. We are, after all, talking about a new source of energy, one which can rival that of the so-called Big Bang creation of the universe, said to be somehow connected with hot fusion. That means temperatures of 100 million degrees rather than the temperatures I contemplate for those magnets fabricated from a new superconductive material.

    Coming back to that theme, suppose I think some more about invention and put those grains of superconductor in a tube with no insulating bonding, but with spaces through which hydrogen gas can flow over the surface of those grains of superconductor. Suppose I then ask the question, “Might the superconductivity property be enhanced or weakened by the pressure of that hydrogen gas?” Would not that mean that, if I applied a magnetizing field, then a ferromagnetic condition exhibited by the system would respond by developing, as a function of the pressure of that hydrogen, a magnetic polarization coupled with the enormously powerful magnetic fields we associate with permanent magnets? Would not this involve possible hydride formation, and heating and cooling as a function of the hydrogen gas pressure, and would not that superconductive feature be an important factor in this type of research?

    Now, I am going to develop this argument, basing my case on a granted U.S. Patent that has been cited against a ‘cold fusion’ application of mine currently in its examination phase in the U.S. Patent Office.

    The subject of that granted patent is the conversion of room temperature heat into electricity. The technique involves the control of hydrogen gas pressure in a cyclic way to produce a related variation of the magnetic state of the material. As that magnetic state switches between a high and a low value in unison with the change of pressure, the magnetic field, which is enormous, changes too, and it can deliver electricity as output if a coil having many turns is wound around the device.
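
    The induction bookkeeping behind that statement is simply Faraday’s law. The sketch below uses made-up coil and field figures, not data from the patent, just to show how a pressure-driven swing of the magnetic state translates into volts at the coil terminals.

        import numpy as np

        N = 1000       # turns on the pick-up coil (assumed figure)
        area = 1.0e-4  # core cross-section in square metres (assumed figure)
        f = 1.0        # pressure-cycling frequency in Hz (assumed figure)
        dB = 0.5       # swing of flux density in tesla (assumed figure)

        t = np.linspace(0.0, 2.0, 2001)
        flux = dB * area * np.sin(2.0 * np.pi * f * t)  # flux tracks the pressure cycle
        emf = -N * np.gradient(flux, t)                 # Faraday's law: emf = -N dPhi/dt
        print(round(emf.max(), 3))                      # peak output, about 0.314 volts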

    A little reasoning tells us that the hydride formation is necessarily a surface phenomenon, owing to the fairly rapid cycle rate used in the reported experiments. It is governed by the gas pressure and involves cyclic heating and cooling, this being a reversible process, but if we can take power off as the magnetic field goes up and down then we must either be tapping that heat and so cooling the device or… well, you, the reader, can tell me where the energy comes from. There is doubt, because the device has to be cooled to get its hydride cycle to run faster. Can it be that the heat cycling is akin to an electrical switch mediating in an energy supply system, one which only controls the action and yet can get a little warm in the process?

    Patent Examiner Behrend would expect some ‘reputable evidence’ before granting a U.S. patent having claims to such an invention.

    I will identify the U.S. Patent as we proceed, but let me first digress to discuss the ‘supergraviton’.

    The Supergraviton

    The supergraviton is my own brainchild. It stems from my years of research into the true nature of the force of gravity. It comes into prominence in connection with the neutral Z boson of the high energy particle physicist. But technologically it comes into play in connection with the warm superconductivity phenomenon and I suspect it also may play a role in the cold fusion scenario.

    In this article it is inappropriate to present research ideas that are wholly new. The reader has a right to expect what is described to have matured a little with time and so I will delve into a little history. You see, the years pass by and the research findings of those of us who trespass and express views not in accord with Establishment belief get brushed aside. They are, so to speak, swept under the carpet, but there can come a time when we need to look under that carpet and point the finger of recrimination at those ‘sweepers’.

    Like almost everyone in the scientific world, I had not heard of, or contemplated the possibility of, ‘warm superconductivity’ until its discovery, in breaking through the liquid nitrogen barrier, was announced two or so years before we heard of cold fusion. Alex Muller and Georg Bednorz of IBM’s Zurich Research Laboratory were awarded the Nobel Prize in 1987 for their discovery.

    I spent my main working career with IBM, 19 years of it as their European Patent Operations Director, and visited that Zurich laboratory regularly during those years, years during which my theory of gravitation was evolving as a private venture, and I had no reason to know that one day my gravitational interest might have a bearing on that warm superconductivity phenomenon. I retired from IBM in 1983 and, with their support and initial funding, became in my retirement a Visiting Senior Research Fellow at the University of Southampton in England, close to my home. My object was to develop my theory by showing it could have technological consequences, the effort being on the electrodynamic issues which I knew were at work in the phenomenon of gravitation. A far cry from superconductivity and cold fusion, you might think, but gravitation is an all-pervading force and there is much to learn.

    Having now moved into the latter stages of my retirement I have begun to scan through my published work and it is appropriate to highlight some that is lost in the dark corners. It is apt here to reproduce the text of a note I wrote on ‘The Theory of Anti-Gravity’ for BASRA, the Journal of the British-American Scientific Research Association, as published in volume XII of the March 1989 issue at pages 2-5.

    It was written at a time when those of us interested in these matters sensed a feeling of euphoria, because it was understood that advanced project management in British Aerospace was willing to fund an event at Edinburgh University in Scotland under the direction of Professor Salter, an expert on gyroscopic systems, expressly staged to assist inventors in demonstrating their mechanical anti-gravity devices. This was to be an activity conducted behind closed doors until the machines were performing and had been tested, after which there would be a public disclosure at which the press would be present.

    In the event, however, as seems to be an inevitable circumstance, the show did not take place. Whether this was due to the definition of the test protocol being too stringent, or due to intervention by the [Establishment] powers that be, or simply due to the lack of readiness by the inventors of these wonderful machines, I cannot say. What I can say is that, having been with Scott Strachan when he demonstrated his antigravity machine to an audience of two hundred or so scientists and engineers in Canada in 1988, and knowing that his machine was kept in Edinburgh in a state of readiness for demonstration at the time of the intended event, it is difficult to see why the show did not proceed. Strachan lives about two or three miles from the University laboratory at which Professor Salter was located. The machine, as demonstrated in Canada, developed an out-of-balance force sufficient to lift an apple, a tribute to the proverbial discernment of Isaac Newton.

    Curiously, some years after this event, Professor Eric Laithwaite and Alex Jones, in the South of England, both of whom had something to demonstrate, were featured in a television program on the subject. Professor Laithwaite stood on a weighing platform and manually forced his spinning flywheel into its abnormal precessing mode. The professor with his flywheel lost weight, as clearly shown by the measurements conducted in a university laboratory. I can but wonder why this demonstration was not made years earlier at the Edinburgh site, but given Laithwaite’s professorial status at the Imperial College of Science, and the fact that he had regularly demonstrated his gyroscopic anomalies at that location, the trek to Scotland to perform for another professor might not have had appeal.

    Be that as it may, when I wrote the following short paper on ‘The Theory of Anti-Gravity’ for the BASRA Quarterly Journal, I was reacting to the events at the time. The reader will need, therefore, to keep in mind my above comments about the situation as it stood during a winter period in 1988/1989.

    The BASRA paper reads:

    The recent confirmation that a spinning body can lose weight under certain circumstances presents a challenge to those who theorize on the nature of gravitation. The author here explains how his theory of long-standing can cope with this problem.

    There are many experimenters who have discovered loss of weight anomalies in spinning bodies. They have been ignored by those established in the system which governs science funding and regulates the teaching of future generations of orthodox scientists. However, the recent confirmation by a specially commissioned commercial testing laboratory that one such device does lose weight has begun to cause rumbles which might well upset the complacent posture of the relativists who monopolize the field.

    This is a reference to the machine built by Scotsman Sandy Kidd and tested in Australia, but, as readers know, there are many others who have demonstrated such effects, for example Bruce de Palma in USA, Eric Laithwaite in England, and, as was noted in a recent BASRA article [1], Scott Strachan who demonstrated a machine in Canada in 1988.

    A Scottish newspaper, the Dundee Courier of 28th December 1988, reported a projected event at Edinburgh University planned just after Easter 1989 at which as many as 12 such machines might be tested before responsible adjudicators with the object of settling this question once and for all.

    Now, in the likely event that the phenomenon is verified on this occasion, what will the theorist do to cope with this very troublesome problem? We can guess that those interested in commercial exploitation will be in no such dilemma. Einstein’s followers look like being left behind in the advance of technology because their theory is then destroyed for the reasons already of record [1].

    The answer for the non-relativistic theorist is, in this author’s opinion, to be found in the theory of gravitation which is of record [2] and which has the following basic features.

    Firstly, one needs to admit that space is full of something that has energy but is invisible and very elusive. It is a sea of something having equilibrium to such a degree of perfection that it reveals itself only as a carrier of energy at light speed and, even then, it finds a way of confusing us when we try to detect the reference frame that it provides. We can be sure, however, that it is the seat of an inertial reference frame since only empty space provides the universal metric in which rotation is measured.

    Secondly, one needs to admit that matter, as we know it, exists as a disturbance or misfit in this background sea of energy. Matter exists in some measure related to past happenings in this background energy sea, but we need not speculate on that in our quest to understand gravity.

    Thirdly, all elements of matter suffer a jitter motion according to Heisenberg’s Uncertainty Principle. Matter has mass and is in a state of jitter in the inertial frame. Accordingly, guided by our own experience of how we get things to jitter, we must look for something that provides a counterbalancing action. It is here that the author intuitively made the presumption that some energy is displaced from the local background medium to form concentrated mass quanta (called gravitons) which move in inertial juxtaposition with the jittering matter to keep things in balance.

    In summary, therefore, the theory regards a mass M of matter as moving in a pattern of motion constituting what we may term an E frame, which jitters about the inertial frame in which an energy deficit (−Mc²) has occurred accompanying the creation of a graviton mass M in balancing motion in what we may term a G frame.

    So far as we can see, the matter mass M stands alone, but jitters. In reality, however, there is a graviton mass M coupled with the matter mass M but neutralized by a mass deficit effect in the sea of empty space.
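
    In an obvious notation, and only as a paraphrase of the balance just described, the bookkeeping reads:

        M c^{2}\big|_{\text{matter}} + M c^{2}\big|_{\text{graviton}} - M c^{2}\big|_{\text{deficit}} = M c^{2},
        \qquad M\,\mathbf{v}_{E} + M\,\mathbf{v}_{G} = \mathbf{0},

    so that the graviton moves in mirror fashion to the matter it balances, and the net energy of the whole system remains just the rest energy of the matter.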

    How does this help us to understand gravity? Well, firstly, we stand a chance of quantifying gravitational action in terms of a standard graviton unit, especially if we say that the real action of gravity is not between matter mass but between graviton mass. Secondly, we can use electrical theory in an interesting way by saying that the electromagnetic reference frame (the E frame) is somehow determined by the collective presence of matter mass. This explains why no electromagnetic gravity action is seated in the matter mass. However, because the gravitons move relative to this E frame they are able to assert mutual electromagnetic actions and so give scope for interpreting gravitation as an electromagnetic effect, assuming that the gravitons are electrically charged, pervade a tenuous uniform sea of charge, and come in equal numbers as positive or negative.

    Indeed, this theory has developed over the years and the author can now show that the tau lepton is the primary graviton form, whereas the muon lepton forms the background space medium defining the inertial frame. The remaining charged lepton form, the electron, is, of course, a feature of the matter frame or E frame.

    What about that demonstrated loss of weight? Well, how can matter mass lose weight if its rest-mass energy is conserved and gravitates, meaning it has weight? It cannot. So the fact that matter mass ‘appears’ to lose weight is clear verification that matter mass has no weight in the first place. It merely can be coupled, and normally is, to graviton mass which does have the right properties. This is what the author’s theory is all about. Evidently, by suitable manipulation of flywheels in the reported experiments, that close coupling is severed transiently and sufficiently for the gravitons to begin to fall under gravity when freed from the connection with matter.

    In fact, being leptons present in charge pairs, the gravitons die and are recreated constantly by a pair annihilation process. Therefore, as they fall within the coextensive body of the flywheel, they move from positions of higher gravitational potential where they are created closely coupled to matter mass to positions of lower gravitational potential where they decay. This involves energy exchange not sustainable by the vacuum energy equilibrium state if the body of matter itself alters its position relative to the gravitational potential. The reason is that we must have overall energy conservation and if, periodically, the matter mass has its coupling with the gravitons restored, equilibrium demands that any gain of gravitational potential energy in the real matter world must come from somewhere. The laws of mechanics governing the precessing flywheel keep energy conserved so far as concerns motion of the flywheel about axes other than its spin axis. So, unless we can draw on energy of disordered motion, heat energy, the flywheel spin has to yield kinetic energy in some way to meet the demands of gravity, even if this breaches Newton’s Third Law of Motion [3].

    The loss of weight by the force-precessed offset gyroscopic machines must, therefore, be accompanied by a slowing down of the flywheels in a levitating system. Conversely, we should expect the flywheels to speed up in a descending situation. Such are the issues now facing researchers in this field. The author’s theory of gravitation looks like being able to cope with the gravity challenge now before us.
    [1] H. Aspden, BASRA J., pp. 2-4, Dec. 1988.
    [2] H. Aspden, ‘Physics Unified’, (Sabberton), 1980.
    [3] H. Aspden, ‘Anti-Gravity Electronics’, Electronics and Wireless World, vol. 95, pp. 29-31, January 1989.

    In the above paper it was stated that only empty space provides the universal metric in which rotation is measured. By this is meant space devoid of matter. It is an experimental fact that a vacuum state enclosed within an evacuated cavity can be the seat of propagating electromagnetic waves which, by their interference, give a measure of the speed of rotation of the enclosing cavity. There has to be something in the vacuum that endows it with a non-rotating frame of reference, something real rather than a notion in a mathematician’s mind, and I can but say that this has to be an aether medium.

    In referring to Heisenberg’s Uncertainty Principle it may seem that I too am building theory on the notions of a mathematician, Heisenberg. I am not, because I see the graviton system and its dynamic balance with matter as the cause of a universal jitter which accounts for that uncertainty relationship. When something jitters in a circular motion and you see it from a distance it might appear to be at rest, yet at all times its position is uncertain in a measure represented by the radius of that orbit, and its momentum is uncertain because it reverses direction constantly; yet the product of the two is a constant which we relate to Planck’s action quantum.
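
    As an order-of-magnitude paraphrase of that argument, for a mass m carried in a circular jitter of radius r at angular frequency \Omega:

        \Delta x \sim r, \qquad \Delta p \sim m\,\Omega\,r, \qquad
        \Delta x\,\Delta p \sim m\,\Omega\,r^{2} = \text{constant} \;(\sim \hbar).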

    Concerning the mention of heat at the end of the paper, I am not really expecting researchers to find that a gyroscope cools when exhibiting anti-gravity. Only experimental research can resolve this question, but if there is an anomalous gain in energy then I have no problem in accepting that it is the aether that cools upon shedding energy. Indeed, the aether heats owing to the gain in entropy and by that jitter motion it puts order into this thermal chaos. When it gets too much energy it sheds it by creating protons and electrons, but maybe the anti-gravity process allows some to be intercepted before reaching the proton stage. The research aimed at ‘free energy’ is all focused on this same point. We must convert heat into useful power by drawing on the ‘heat’ of our ambient environment or on the ‘heat’ of the underworld of the aether!

    Now, this is where the supergraviton enters onto the stage in playing a role as part of the process leading to ‘warm superconductivity’.

    How Particle Discovery reveals a Source of Infinite Energy

    In spite of the resources deployed by high energy particle physicists, they still do not understand why Nature creates mesons such as the mu-meson, otherwise known as the heavy electron or muon. Nor do they recognize that elusive ghost, the signature of the graviton, which occurs at 2.587 GeV. This compares with the electron rest-mass energy of 0.000511 GeV, the muon rest-mass energy of 0.106 GeV or the proton rest-mass energy of 0.938 GeV.

    However, in 1964, the aether theory I had then worked on for ten years revealed the secrets of the mu-meson and, shortly thereafter, in 1965, the 2.587 GeV graviton emerged. It was then a simple matter to evaluate theoretically the precise value of G, the constant of gravitation, expressed in terms of the electron charge-mass ratio and based on energy perturbations of that graviton form.

    At pp. 81-82 of the 1966 edition of my book ‘The Theory of Gravitation’ I show how three well-known mesons were all unstable spin-off products of a decay involving the 2.587 GeV graviton. I also show, from pure theory based on my interpretation of the structured form of the dynamic aether, how its 5063 ratio of mass to that of the electron emerged from the theoretical analysis. By 1969, when I published ‘Physics without Einstein’, I was able to point to the relevance of the discovery, as later reported by Krisch et al [Physical Review Letters, 16, 709 (1966)], of the ‘largest elementary particle to be discovered’. They write: “We believe that this is firm evidence for the existence of a nucleon resonance with mass 3,245 +/- 10 MeV … It seems remarkable that such a massive particle should be so stable.” This nucleon resonance occurred when protons were fed into a high energy environment in which pi-mesons (pions) were being produced. I immediately saw this as a particle resonance in which that graviton ghost had combined with the proton to shed pions and leave its transient signature as the energy quantum discovered by Krisch et al. Here was proton decay brought about by that graviton ghost! Note that 2.587 GeV plus 0.938 GeV, less 0.279 GeV, the rest-mass of two pions, leaves 3.246 GeV.
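
    The figures quoted here are easily checked; in the lines below the charged-pion rest-mass of 0.1395 GeV is the standard value, the other numbers being those given in the text.

        # Rest-mass energies in GeV, as quoted in the text:
        graviton, proton, electron, pion = 2.587, 0.938, 0.000511, 0.1395
        print(round(graviton / electron))               # 5063: the quoted mass ratio
        print(round(graviton + proton - 2 * pion, 3))   # 3.246 GeV, vs 3,245 +/- 10 MeV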

    Those were the days when particle physicists were probing the scope for creating exotic particles in the energy region we associate with the mass regime of protons, deuterons, and tritons, but nowadays they have gone to the very high energy region where they seek to decipher Nature by discovering particle resonances at mass values akin to those of atoms seated at the middle of the periodic table. This is the region where the supergraviton develops in response to the need for a more effective dynamic balance in that quantum jitter condition of the aether.

    My onward research into that territory led me to discover that, if the gravitational balance were to be a joint effort shared by a group of ghost particles, where the mass and charge displacement properties were pooled, then the ratio of these quantities which preserved the G-value would demand a unique supergraviton form as well as a super-heavy electron form (identified as the tau-particle or taon). The supergraviton is the cluster of such a group, but a degenerate form involving the mutual annihilation of a particle pair from this cluster leaves a residual neutral particle resonance in the region of 91-92 GeV, evidently the so-called neutral Z-boson which preoccupies much of the attention of theoretical particle physicists at this time.

    The scientific paper disclosing this theory was published in Speculations in Science and Technology, 12, 179-186 (1989). The paper is entitled: ‘The Supergraviton and its Technological Connection’. The supergraviton cluster has a rest-mass of 95.18 GeV, corresponding to 102.18 atomic mass units.
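
    The GeV-to-amu conversion is readily verified; the figure of 0.931494 GeV per atomic mass unit is the standard value.

        amu_in_gev = 0.931494                  # one atomic mass unit in GeV
        print(round(95.18 / amu_in_gev, 2))    # 102.18 amu, as quoted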

    As explained in that paper, the technological spin-off had implications for cold fusion, but the key technological contribution was the account of the warm superconductive properties of the perovskite materials Sr2CuO4 and La2CuO4. These involve a dynamic balance tuned to the near-resonant conditions of interaction with three and four supergraviton masses, respectively. The lanthanum composition has a molecular mass of 405 or 407 according to the Cu isotope present. The strontium composition has a molecular mass of 303 or 305 amu according to the Cu isotope present. This implies an effective supergraviton mass of value between 101 and 102 amu. Warm superconductivity arises because the electron collisions with atoms involve energy transfer from the thermal motion of the atom to the electron, owing to the impact being absorbed through the centre of mass of the dynamic system, whilst field energy stored in magnetic induction sustains the current by its regeneration effect.
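
    Using the round-number atomic masses that the quoted molecular masses imply (La 139, Sr 88, O 16, Cu 63 or 65 amu), the arithmetic checks as follows:

        La, Sr, O = 139, 88, 16
        for Cu in (63, 65):
            la2cuo4 = 2 * La + Cu + 4 * O    # 405 or 407 amu
            sr2cuo4 = 2 * Sr + Cu + 4 * O    # 303 or 305 amu
            print(la2cuo4 / 4, sr2cuo4 / 3)  # each quotient near 101-102 amu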

    This then is some of the background leading to the supergraviton. I have, as I have reported under the title ‘Extracting Energy from a Magnet’ in New Energy News, August 1995, come to realise that the supergraviton is at work in magnetic materials, particularly those exhibiting the very high coercive force needed by a permanent magnet. Indeed, in the latter part of my latest communication to New Energy News, entitled ‘The New Energy Spectrum’, I drew special attention to the ‘free energy’ implications of a U.S. patent just cited against one of my patent applications. It is U.S. Patent 4,435,663 granted to IBM and dated March 6, 1984. Its title is ‘Thermochemical Magnetic Generator’. What is described is ‘an apparatus which uses hydrogen as a working gas and magnetic intermetallic compounds which absorb hydrogen as the working magnetic material’. The description of the invention says that ‘thermomagnetic generators are devices that convert heat into electricity’. The description further shows that hydrogen is not consumed; it is trapped in an enclosure and merely transferred forwards and backwards from one absorbing substance to another cyclically under the regulated control of heat input. The magnetic transitions induce output electricity in a coil wrapped around the chamber housing the working substance. This patent presents experimental data showing that the mere variation of hydrogen gas pressure resulting from the heat cycle will generate electricity. This is a room temperature device, but the magnetic state of the intermetallic compound transits through the Curie temperature reversibly, converting the ferromagnetic state to the non-ferromagnetic state and vice versa, merely in response to hydrogen pressure, as thermally controlled.

    My interest is aroused by the fact that the chemical composition of the lanthanum pentacobalt working substance varies by absorption of hydrogen, and a group of seven or eight such molecules, without the hydrogen, has a mass that is an integral multiple of 101 or 102 amu. The addition of hydrogen in changing LaCo5 to LaCo5H4 can affect the resonant tuning in the supergraviton coupling.
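
    Taking standard atomic masses for lanthanum and cobalt, the near-integral multiples mentioned can be checked as below; the divisors 30 and 34 are simply the nearest whole numbers.

        La, Co = 138.91, 58.93
        laco5 = La + 5 * Co               # about 433.6 amu per LaCo5 unit
        print(round(7 * laco5 / 30, 1))   # ~101.2: seven units near 30 x 101 amu
        print(round(8 * laco5 / 34, 1))   # ~102.0: eight units near 34 x 102 amu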

    In my New Energy News communication I also mentioned that on March 26, 1997, I was granted GB Patent No. 2,278,491 entitled ‘Hydrogen Activated Heat Generation Apparatus’. It has 18 claims and is part of my, albeit theoretical, efforts to contribute something to the cold fusion theme. I also mentioned that the British Patent Office has notified me that on April 16th the grant of my GB Patent 2,283,361 will be published. This is entitled ‘Refrigeration and Electric Power Generation’. It bears upon the thermoelectric theme, the subject of my Energy Science Report No. 3, but it also exploits the 101-102 amu supergraviton resonance theme by disclosing why oxidised polypropylene is a room temperature superconductor and showing how this can be incorporated in a thermoelectric power converter. A group of seven molecules in the chain structure of oxidised polypropylene [C3H6O]7 has a molecular mass that is 4 times 101.5 amu.
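
    The polypropylene figure checks the same way with standard atomic masses:

        C, H, O = 12.011, 1.008, 15.999
        unit = 3 * C + 6 * H + O               # C3H6O: about 58.08 amu
        print(round(7 * unit, 1), 4 * 101.5)   # about 406.6 versus 406.0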

    The IBM patent is, of course, the one which I had in mind in referring to 2020 vision. If you have followed my logic in suggesting invention in using room temperature superconductor materials to fabricate a permanent magnet, then the converse implication is that, with the right treatment, the substances used today in fabricating permanent magnets might prove to be viable room temperature superconductors. The substance lanthanum pentacobalt warrants attention with that thought in mind.

    I shall write more extensively on this subject and point to other evidence of anomalous effects arising from hydride (and deuteride) composition resonances as I expand my website presence on the Internet.

    Footnote

    Readers who are curious to know where the 2.587 GeV graviton is mentioned in an easily accessible scientific periodical shelved in a university library may look up the review paper by D. M. Eagles at pp. 265-270 in International Journal of Theoretical Physics, 15 (1976). It is entitled: ‘A Comparison of Results of Various Theories for Four Fundamental Constants of Physics’.